Redis distributed caching in practice: keys and the five data types (String, Hash, List, Set, Zset), plus a walkthrough of the redis.conf configuration file

    Tech · 2024-10-19

    1. Redis keys (key)

    1.1 Key commands

    keys *                  list the keys in the current DB
    exists <key>            check whether a key exists
    move <key> <db>         move a key to another DB (it is removed from the current one)
    expire <key> <seconds>  set a time-to-live on a key
    ttl <key>               remaining seconds; -1 means never expires, -2 means already expired
    type <key>              show the key's type

    [cevent@hadoop213 ~]$ redis-cli -p 6379    (open the client)
    127.0.0.1:6379> set k1 value1
    OK
    127.0.0.1:6379> set k2 value2
    OK
    127.0.0.1:6379> set k3 value3
    OK
    127.0.0.1:6379> exists k1          (1 = exists, 0 = not found)
    (integer) 1
    127.0.0.1:6379> exists k10
    (integer) 0
    127.0.0.1:6379> move k3 2          (move k3 to DB 2)
    (integer) 1
    127.0.0.1:6379> keys *
    1) "k2"
    2) "k1"
    127.0.0.1:6379> select 2           (switch to DB 2)
    OK
    127.0.0.1:6379[2]> get k3
    "value3"
    127.0.0.1:6379[2]> select 0        (back to DB 0)
    OK
    127.0.0.1:6379> ttl k1             (-1 = never expires)
    (integer) -1
    127.0.0.1:6379> expire k2 10       (k2 expires in 10 seconds)
    (integer) 1
    127.0.0.1:6379> get k2
    "value2"
    127.0.0.1:6379> ttl k2
    (integer) 4
    127.0.0.1:6379> ttl k2             (-2 = already expired)
    (integer) -2
    127.0.0.1:6379> keys *             (expired keys are removed from memory)
    1) "k1"
    127.0.0.1:6379> get k2
    (nil)
    127.0.0.1:6379> set k1 cevent      (set overwrites the old value)
    OK
    127.0.0.1:6379> get k1
    "cevent"
    127.0.0.1:6379> lpush ceventlist 1 2 3 4 5   (create a list)
    (integer) 5
    127.0.0.1:6379> lrange ceventlist 0 -1       (query a range)
    1) "5"
    2) "4"
    3) "3"
    4) "2"
    5) "1"
    127.0.0.1:6379> type ceventlist              (check the key's type)
    list
    127.0.0.1:6379> keys *
    1) "ceventlist"
    2) "k1"
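    The expire/ttl conventions in the transcript above (-1 for a key with no expiry, -2 for a missing or expired key) can be modeled with a small in-memory sketch. MiniKV is a made-up toy class for illustration, not a Redis client:

    ```python
    import time

    class MiniKV:
        """Toy in-memory store mirroring Redis TTL conventions:
        ttl() returns -1 for a key with no expiry, -2 for a missing/expired key."""

        def __init__(self):
            self._data = {}     # key -> value
            self._expiry = {}   # key -> absolute expiry timestamp

        def set(self, key, value):
            self._data[key] = value
            self._expiry.pop(key, None)   # SET clears any previous TTL

        def get(self, key):
            self._purge(key)
            return self._data.get(key)    # None plays the role of (nil)

        def expire(self, key, seconds):
            if key in self._data:
                self._expiry[key] = time.time() + seconds
                return 1
            return 0

        def ttl(self, key):
            self._purge(key)
            if key not in self._data:
                return -2                 # missing or already expired
            if key not in self._expiry:
                return -1                 # exists, never expires
            return int(self._expiry[key] - time.time())

        def _purge(self, key):
            exp = self._expiry.get(key)
            if exp is not None and time.time() >= exp:
                del self._data[key]
                del self._expiry[key]
    ```

    Real Redis purges expired keys both lazily (on access, as here) and with a background sweep.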

    2. Redis strings (String)

    127.0.0.1:6379> del ceventlist            (delete the list)
    (integer) 1
    127.0.0.1:6379> keys *
    1) "k1"
    127.0.0.1:6379> get k1
    "cevent"
    127.0.0.1:6379> append k1 619             (append to k1's value)
    (integer) 9
    127.0.0.1:6379> get k1
    "cevent619"
    127.0.0.1:6379> strlen k1                 (length of k1's value)
    (integer) 9
    127.0.0.1:6379> set k2 6
    OK
    127.0.0.1:6379> get k2
    "6"
    127.0.0.1:6379> incr k2                   (increment by 1)
    (integer) 7
    127.0.0.1:6379> incr k2
    (integer) 8
    127.0.0.1:6379> incr k2
    (integer) 9
    127.0.0.1:6379> get k2
    "9"
    127.0.0.1:6379> decr k2                   (decrement by 1)
    (integer) 8
    127.0.0.1:6379> get k2
    "8"
    127.0.0.1:6379> incrby k2 5               (increment by a given integer)
    (integer) 13
    127.0.0.1:6379> decrby k2 2               (decrement by a given integer)
    (integer) 11
    127.0.0.1:6379> decrby k2 2
    (integer) 9
    127.0.0.1:6379> set k3 value3
    OK
    127.0.0.1:6379> incr k3                   (arithmetic only works on numeric values)
    (error) ERR value is not an integer or out of range
    127.0.0.1:6379> get k1
    "cevent619"
    127.0.0.1:6379> getrange k1 0 -1          (get a range; 0 to -1 means the whole string)
    "cevent619"
    127.0.0.1:6379> getrange k1 0 2           (from index 0 through index 2)
    "cev"
    127.0.0.1:6379> setrange k1 0 eee         (overwrite starting at the given index)
    (integer) 9
    127.0.0.1:6379> get k1
    "eeeent619"
    127.0.0.1:6379> setex k4 10 v4            (setex = set with expire: create k4 with a 10-second TTL)
    OK
    127.0.0.1:6379> ttl k4
    (integer) 6
    127.0.0.1:6379> get k4                    (still readable before expiry)
    "v4"
    127.0.0.1:6379> ttl k4                    (-2 = expired)
    (integer) -2
    127.0.0.1:6379> get k4
    (nil)
    127.0.0.1:6379> setnx k1 val              (setnx only sets the key if it does not already exist)
    (integer) 0
    127.0.0.1:6379> get k1
    "eeeent619"
    127.0.0.1:6379> setnx k11 val11           (k11 did not exist, so it is created)
    (integer) 1
    127.0.0.1:6379> mset k1 value1 k2 v2 k3 v3   (set several keys at once)
    OK
    127.0.0.1:6379> mget k1 k2 k3             (get several keys at once)
    1) "value1"
    2) "v2"
    3) "v3"
    127.0.0.1:6379> keys *
    1) "k11"
    2) "k2"
    3) "k1"
    4) "k3"
    127.0.0.1:6379> get k1                    (mset overwrote k1)
    "value1"
    127.0.0.1:6379> msetnx k3 v3 k4 v4        (all-or-nothing: k3 already exists, so nothing is set and k4 is not created)
    (integer) 0
    127.0.0.1:6379> keys *
    1) "k11"
    2) "k2"
    3) "k1"
    4) "k3"
    127.0.0.1:6379> get k3
    "v3"
    127.0.0.1:6379> mset k4 v4 k5 v5          (k4 and k5 did not exist; both are created)
    OK
    127.0.0.1:6379> keys *
    1) "k4"
    2) "k5"
    3) "k11"
    4) "k2"
    5) "k1"
    6) "k3"
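    The incr/decr behavior shown above (values stored as strings, arithmetic only on integer-parsable ones) can be sketched in a few lines of Python. incr_by is a hypothetical helper for illustration, not part of any Redis library:

    ```python
    def incr_by(store, key, amount=1):
        """Sketch of INCR/INCRBY/DECRBY semantics: Redis stores every value
        as a string, so arithmetic first parses it as an integer and fails
        otherwise. A negative amount behaves like DECRBY."""
        value = store.get(key, "0")   # a missing key counts as 0
        try:
            number = int(value)
        except ValueError:
            # the same failure mode as:
            # (error) ERR value is not an integer or out of range
            raise ValueError("value is not an integer or out of range")
        store[key] = str(number + amount)
        return number + amount
    ```

    For example, with store = {"k2": "6"}, incr_by(store, "k2") returns 7, while incr_by on a key holding "value3" raises the error.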

    3. Redis lists (List)

    [cevent@hadoop213 redis-3.0.4]$ ll
    total 152
    -rw-rw-r--.  1 cevent cevent 31391 Sep  8  2015 00-RELEASENOTES
    -rw-rw-r--.  1 cevent cevent    53 Sep  8  2015 BUGS
    -rw-rw-r--.  1 cevent cevent  1439 Sep  8  2015 CONTRIBUTING
    -rw-rw-r--.  1 cevent cevent  1487 Sep  8  2015 COPYING
    drwxrwxr-x.  6 cevent cevent  4096 Jul  1 17:51 deps
    -rw-rw-r--.  1 cevent cevent    83 Jul  2 18:01 dump.rdb
    -rw-rw-r--.  1 cevent cevent    11 Sep  8  2015 INSTALL
    -rw-rw-r--.  1 cevent cevent   151 Sep  8  2015 Makefile
    -rw-rw-r--.  1 cevent cevent  4223 Sep  8  2015 MANIFESTO
    -rw-rw-r--.  1 cevent cevent  5201 Sep  8  2015 README
    -rw-rw-r--.  1 cevent cevent 41404 Jul  1 21:20 redis.conf
    -rwxrwxr-x.  1 cevent cevent   271 Sep  8  2015 runtest
    -rwxrwxr-x.  1 cevent cevent   280 Sep  8  2015 runtest-cluster
    -rwxrwxr-x.  1 cevent cevent   281 Sep  8  2015 runtest-sentinel
    -rw-rw-r--.  1 cevent cevent  7109 Sep  8  2015 sentinel.conf
    drwxrwxr-x.  2 cevent cevent  4096 Jul  1 17:52 src
    drwxrwxr-x. 10 cevent cevent  4096 Sep  8  2015 tests
    drwxrwxr-x.  5 cevent cevent  4096 Sep  8  2015 utils
    [cevent@hadoop213 redis-3.0.4]$ redis-server redis.conf    (start the Redis server)
    [cevent@hadoop213 redis-3.0.4]$ redis-cli -p 6379          (start the client)
    127.0.0.1:6379> keys *
    1) "k1"
    2) "k3"
    3) "k5"
    4) "k11"
    5) "k4"
    6) "k2"
    127.0.0.1:6379> lpush list01 1 2 3 4 5    (l = left: each value is pushed onto the head, so they come back out in reverse order)
    (integer) 5
    127.0.0.1:6379> lrange list01 0 -1
    1) "5"
    2) "4"
    3) "3"
    4) "2"
    5) "1"
    127.0.0.1:6379> rpush list02 1 2 3 4 5    (r = right: values are appended at the tail, so they come back out in insertion order)
    (integer) 5
    127.0.0.1:6379> lrange list02 0 -1
    1) "1"
    2) "2"
    3) "3"
    4) "4"
    5) "5"
    127.0.0.1:6379> lpop list01               (pop from the head; the last lpush-ed value, 5, sits there)
    "5"
    127.0.0.1:6379> rpop list01               (pop from the tail)
    "1"
    127.0.0.1:6379> lpop list02
    "1"
    127.0.0.1:6379> rpop list02
    "5"
    127.0.0.1:6379> lrange list01 0 -1
    1) "4"
    2) "3"
    3) "2"
    127.0.0.1:6379> lindex list01 0           (l = left: fetch by index)
    "4"
    127.0.0.1:6379> lindex list02 0
    "2"
    127.0.0.1:6379> llen list01               (length of list01)
    (integer) 3
    127.0.0.1:6379> rpush list03 1 1 1 2 2 2 2 4 4 4 4 5 5 5 5 6
    (integer) 16
    127.0.0.1:6379> lrem list03 2 4           (rem = remove: delete two occurrences of 4)
    (integer) 2
    127.0.0.1:6379> lrange list03 0 -1
    1) "1"
    2) "1"
    3) "1"
    4) "2"
    5) "2"
    6) "2"
    7) "2"
    8) "4"
    9) "4"
    10) "5"
    11) "5"
    12) "5"
    13) "5"
    14) "6"
    127.0.0.1:6379> lpush list04 1 2 3 4 5 6 7 8 8 9
    (integer) 10
    127.0.0.1:6379> lrange list04 0 -1
    1) "9"
    2) "8"
    3) "8"
    4) "7"
    5) "6"
    6) "5"
    7) "4"
    8) "3"
    9) "2"
    10) "1"
    127.0.0.1:6379> ltrim list04 0 5          (keep only indices 0 through 5, inclusive)
    OK
    127.0.0.1:6379> lrange list04
    (error) ERR wrong number of arguments for 'lrange' command
    127.0.0.1:6379> lrange list04 0 -1
    1) "9"
    2) "8"
    3) "8"
    4) "7"
    5) "6"
    6) "5"
    127.0.0.1:6379> lrange list02 0 -1
    1) "2"
    2) "3"
    3) "4"
    127.0.0.1:6379> rpoplpush list04 list02   (rpop "5" off the tail of list04, then lpush it onto the head of list02)
    "5"
    127.0.0.1:6379> lrange list04 0 -1        (5 has been removed)
    1) "9"
    2) "8"
    3) "8"
    4) "7"
    5) "6"
    127.0.0.1:6379> lrange list02 0 -1        (5 now sits at index 0)
    1) "5"
    2) "2"
    3) "2"
    4) "3"
    5) "4"
    127.0.0.1:6379> lset list04 1 cevent      (overwrite the value at a given index)
    OK
    127.0.0.1:6379> lrange list04 0 -1        (index 1 is now cevent)
    1) "9"
    2) "cevent"
    3) "8"
    4) "7"
    5) "6"
    127.0.0.1:6379> linsert list04 before cevent hadoop   (insert before/after a given value)
    (integer) 6
    127.0.0.1:6379> linsert list04 after cevent azkaban
    (integer) 7
    127.0.0.1:6379> lrange list04 0 -1
    1) "9"
    2) "hadoop"
    3) "cevent"
    4) "azkaban"
    5) "8"
    6) "7"
    7) "6"
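    The head-versus-tail behavior of lpush/rpush/lpop/rpop maps directly onto Python's collections.deque, which makes the orderings above easy to verify:

    ```python
    from collections import deque

    lst01 = deque()
    for v in ["1", "2", "3", "4", "5"]:
        lst01.appendleft(v)          # LPUSH: each new value goes to the head
    print(list(lst01))               # ['5', '4', '3', '2', '1'], like LRANGE 0 -1

    lst02 = deque()
    for v in ["1", "2", "3", "4", "5"]:
        lst02.append(v)              # RPUSH: each new value goes to the tail
    print(list(lst02))               # ['1', '2', '3', '4', '5']

    head = lst01.popleft()           # LPOP takes from the head
    tail = lst01.pop()               # RPOP takes from the tail
    print(head, tail)                # 5 1
    ```

    The same mental model explains rpoplpush: pop from one list's tail, push onto the other list's head.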

    4. Redis sets (Set): a single key holding multiple unique values

    4.1 sadd/smembers/sismember

    127.0.0.1:6379> sadd set01 1 1 2 2 3 3    (sets hold unique values; duplicates are ignored)
    (integer) 3
    127.0.0.1:6379> smembers set01            (list the set's members)
    1) "1"
    2) "2"
    3) "3"
    127.0.0.1:6379> sismember set01 1         (s is member: is 1 a member? 1 = yes)
    (integer) 1
    127.0.0.1:6379> smembers set01
    1) "1"
    2) "2"
    3) "3"
    127.0.0.1:6379> sismember set01 ce        (0 = not a member)
    (integer) 0

    4.2 scard: get the number of elements in the set

    127.0.0.1:6379> scard set01               (number of members in the set)
    (integer) 3

    4.3 srem key value: remove an element from the set

    127.0.0.1:6379> srem set01 3              (s remove: delete the given member)
    (integer) 1
    127.0.0.1:6379> smembers set01
    1) "1"
    2) "2"

    4.4 srandmember key <count>: return some random members

    srandmember returns the requested number of random members from the set.

    If the count exceeds the set's size, the whole set is returned.

    If the count is negative, for example -3, three members are returned but duplicates are possible.

    127.0.0.1:6379> sadd set02 1 2 3 4 5 6 7 8   (create the set)
    (integer) 8
    127.0.0.1:6379> srandmember set02 3       (three random members)
    1) "4"
    2) "8"
    3) "2"
    127.0.0.1:6379> srandmember set02 3
    1) "1"
    2) "4"
    3) "5"

    4.5 spop key: pop a random member

    127.0.0.1:6379> spop set02                (remove and return a random member)
    "3"
    127.0.0.1:6379> spop set02
    "2"
    127.0.0.1:6379> smembers set02            (inspect the set)
    1) "1"
    2) "4"
    3) "5"
    4) "6"
    5) "7"
    6) "8"

    4.6 smove key1 key2 <value>: move the given value from key1 into key2

    127.0.0.1:6379> smove set03 set02 cevent  (move "cevent" from set03 into set02)
    (integer) 1
    127.0.0.1:6379> smembers set03
    1) "echo"
    127.0.0.1:6379> smembers set02
    1) "1"
    2) "4"
    3) "8"
    4) "7"
    5) "6"
    6) "cevent"
    7) "5"
    8) "2"
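    smove's semantics (return 1 and move the value only if it exists in the source set) can be sketched with plain Python sets; smove here is a hypothetical helper, not a client call:

    ```python
    def smove(src, dst, value):
        """Sketch of SMOVE: move value from src to dst, returning 1 on
        success and 0 if the value was not in the source set."""
        if value in src:
            src.discard(value)
            dst.add(value)
            return 1
        return 0
    ```

    In Redis the move is atomic: no other client can ever observe the value missing from both sets.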

    5. Mathematical set operations

    5.1 Difference: sdiff, the members of the first set that are not in any of the following sets

    127.0.0.1:6379> sadd set04 1 2 3 4 5 6
    (integer) 6
    127.0.0.1:6379> sadd set05 1 4 ce vent
    (integer) 4
    127.0.0.1:6379> sdiff set04 set05         (the first set is the base: its members that also appear in the second are excluded, and members unique to the second set are not returned)
    1) "2"
    2) "3"
    3) "5"
    4) "6"

    5.2 Intersection: sinter

    127.0.0.1:6379> sinter set04 set05        (members common to both sets)
    1) "1"
    2) "4"

    5.3 Union: sunion

    127.0.0.1:6379> sunion set04 set05        (all members of both sets, duplicates removed)
    1) "1"
    2) "4"
    3) "vent"
    4) "ce"
    5) "6"
    6) "3"
    7) "5"
    8) "2"
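    sdiff, sinter, and sunion correspond one-to-one with Python's set operators, which reproduces the results above (ordering aside, since sets are unordered in both systems):

    ```python
    set04 = {"1", "2", "3", "4", "5", "6"}
    set05 = {"1", "4", "ce", "vent"}

    diff = set04 - set05      # SDIFF set04 set05
    inter = set04 & set05     # SINTER set04 set05
    union = set04 | set05     # SUNION set04 set05

    print(sorted(diff))       # ['2', '3', '5', '6']
    print(sorted(inter))      # ['1', '4']
    print(sorted(union))      # ['1', '2', '3', '4', '5', '6', 'ce', 'vent']
    ```

    Redis also offers sdiffstore/sinterstore/sunionstore variants that write the result into a destination key instead of returning it.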

    6. Redis hashes (Hash): still a KV model, but the value is itself a collection of field-value pairs

    6.1 hset/hget/hmset/hmget/hgetall/hdel

    127.0.0.1:6379> hset user id 619          (key: user; field-value pair: id -> 619)
    (integer) 1
    127.0.0.1:6379> hget user id              (reads also take the field name)
    "619"
    127.0.0.1:6379> hset user name cevent
    (integer) 1
    127.0.0.1:6379> hget user name
    "cevent"
    127.0.0.1:6379> hmset customer id 66 name cevn age 30   (set several fields at once)
    OK
    127.0.0.1:6379> hmget customer id name age              (get several fields at once)
    1) "66"
    2) "cevn"
    3) "30"
    127.0.0.1:6379> hgetall customer          (all fields and their values)
    1) "id"
    2) "66"
    3) "name"
    4) "cevn"
    5) "age"
    6) "30"
    127.0.0.1:6379> hdel user name            (delete a field)
    (integer) 1
    127.0.0.1:6379> hgetall user
    1) "id"
    2) "619"

    6.2 hlen

    127.0.0.1:6379> hlen user                 (number of fields in the hash)
    (integer) 1
    127.0.0.1:6379> hlen customer
    (integer) 3

    6.3 hexists key <field>: does the hash contain this field

    127.0.0.1:6379> hexists customer id       (does the hash contain this field? 1 = yes, 0 = no)
    (integer) 1
    127.0.0.1:6379> hexists customer email
    (integer) 0

    6.4 hkeys/hvals

    127.0.0.1:6379> hkeys customer            (all field names in the hash)
    1) "id"
    2) "name"
    3) "age"
    127.0.0.1:6379> hvals customer            (all field values)
    1) "66"
    2) "cevn"
    3) "30"

    6.5 hincrby/hincrbyfloat

    127.0.0.1:6379> hincrby customer age 2    (hash increment by: add 2 to an integer field)
    (integer) 32
    127.0.0.1:6379> hincrby customer age 4
    (integer) 36
    127.0.0.1:6379> hset customer score 92.8  (a field holding a decimal)
    (integer) 1
    127.0.0.1:6379> hincrbyfloat customer score 0.5   (hash increment by float)
    "93.3"

    6.6 hsetnx: set a field only if it does not exist; no effect if it does

    127.0.0.1:6379> hsetnx customer age 26    (the field already exists, so nothing is set)
    (integer) 0
    127.0.0.1:6379> hsetnx customer email 1540001771@qq.com   (the field does not exist, so it is added)
    (integer) 1
    127.0.0.1:6379> hgetall customer
    1) "id"
    2) "66"
    3) "name"
    4) "cevn"
    5) "age"
    6) "36"
    7) "score"
    8) "93.3"
    9) "email"
    10) "1540001771@qq.com"
    127.0.0.1:6379> hvals customer
    1) "66"
    2) "cevn"
    3) "36"
    4) "93.3"
    5) "1540001771@qq.com"
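    A Redis hash behaves much like a dict whose values are strings. The hincrby/hincrbyfloat arithmetic above can be sketched as follows; hincrby and hincrbyfloat here are hypothetical helpers, and Decimal is used so that 92.8 + 0.5 comes out as exactly "93.3", matching the transcript rather than binary-float artifacts:

    ```python
    from decimal import Decimal

    customer = {"id": "66", "name": "cevn", "age": "30"}   # like HMSET customer id 66 name cevn age 30

    def hincrby(h, field, n):
        """Sketch of HINCRBY: fields are strings, integer arithmetic only."""
        h[field] = str(int(h.get(field, "0")) + n)
        return int(h[field])

    def hincrbyfloat(h, field, n):
        """Sketch of HINCRBYFLOAT: decimal string arithmetic."""
        h[field] = str(Decimal(h.get(field, "0")) + Decimal(n))
        return h[field]

    print(hincrby(customer, "age", 2))             # 32
    customer["score"] = "92.8"                     # like HSET customer score 92.8
    print(hincrbyfloat(customer, "score", "0.5"))  # 93.3
    ```

    Redis similarly formats HINCRBYFLOAT results without trailing float noise, which is why the transcript shows "93.3".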

    7. Redis sorted sets: Zset (sorted set)

    7.1 zadd/zrange, withscores

    127.0.0.1:6379> zadd zset01 60 v1 70 v2 80 v3 90 v4 100 v5   (create a sorted set: zadd key score member ...)
    (integer) 5
    127.0.0.1:6379> zrange zset01 0 -1        (members only, in ascending score order)
    1) "v1"
    2) "v2"
    3) "v3"
    4) "v4"
    5) "v5"
    127.0.0.1:6379> zrange zset01 0 -1 withscores   (members together with their scores)
    1) "v1"
    2) "60"
    3) "v2"
    4) "70"
    5) "v3"
    6) "80"
    7) "v4"
    8) "90"
    9) "v5"
    10) "100"

    7.2 zrangebyscore key <start score> <end score>: a leading "(" makes a bound exclusive; limit <offset> <count> restricts how many results are returned

    127.0.0.1:6379> zrangebyscore zset01 60 90    (members with scores from 60 to 90, both bounds inclusive)
    1) "v1"
    2) "v2"
    3) "v3"
    4) "v4"
    127.0.0.1:6379> zrangebyscore zset01 60 (90   (a leading "(" makes the bound exclusive: score < 90)
    1) "v1"
    2) "v2"
    3) "v3"
    127.0.0.1:6379> zrangebyscore zset01 (60 (90  (both bounds exclusive: 60 < score < 90)
    1) "v2"
    2) "v3"
    127.0.0.1:6379> zrangebyscore zset01 60 90 limit 2 2   (limit <offset> <count>: skip 2 results, return at most 2)
    1) "v3"
    2) "v4"

    7.3 zrem key <member>: delete elements

    zrem removes one or more members from the sorted set: zrem <key> <member> [<member> ...]; each member's score entry is deleted along with it.

    127.0.0.1:6379> zrem zset01 v5            (remove the member; its score goes with it)
    (integer) 1
    127.0.0.1:6379> zrange zset01 0 -1 withscores   (inspect the set)
    1) "v1"
    2) "60"
    3) "v2"
    4) "70"
    5) "v3"
    6) "80"
    7) "v4"
    8) "90"

    7.4 zcard / zcount key <score range> / zrank key <member> (get the index) / zscore key <member> (get the score)

    zcard: the number of elements in the set.

    zcount: the number of elements whose score falls within a range: zcount key <start score> <end score>.

    zrank: a member's index (rank) within the zset, in ascending score order.

    zscore: the score associated with a given member.

    127.0.0.1:6379> zcard zset01              (number of members in the sorted set)
    (integer) 4
    127.0.0.1:6379> zcount zset01 60 80       (how many members have scores between 60 and 80)
    (integer) 3
    127.0.0.1:6379> zrank zset01 v4           (v4's index in ascending order: 0 1 2 3)
    (integer) 3
    127.0.0.1:6379> zscore zset01 v4          (v4's score)
    "90"

    7.5 zrevrank key <member>: get the index in reverse (descending) order

    127.0.0.1:6379> zrevrank zset01 v4        (z reverse rank: the member's index counting from the highest score)
    (integer) 0
    127.0.0.1:6379> zrange zset01 0 -1        (v4 is last in ascending order, hence first (index 0) in reverse)
    1) "v1"
    2) "v2"
    3) "v3"
    4) "v4"

    7.6 zrevrange

    127.0.0.1:6379> zrevrange zset01 0 -1     (the whole set in descending score order)
    1) "v4"
    2) "v3"
    3) "v2"
    4) "v1"

    7.7 zrevrangebyscore key <end score> <start score>

    127.0.0.1:6379> zrevrangebyscore zset01 90 60   (fetch by score in reverse; the max bound, 90, comes first)
    1) "v4"
    2) "v3"
    3) "v2"
    4) "v1"
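    A sorted set can be modeled as a list of (score, member) pairs kept in score order. The sketch below (all helper names are made up for illustration) reproduces zrange, zrangebyscore, and zrevrank from the transcripts above:

    ```python
    import bisect

    zset = []
    for score, member in [(60, "v1"), (70, "v2"), (80, "v3"), (90, "v4"), (100, "v5")]:
        bisect.insort(zset, (score, member))   # like ZADD zset01 60 v1 ... 100 v5

    def zrange(z, start, stop):
        """ZRANGE sketch: members in ascending score order; stop is inclusive, -1 = end."""
        members = [m for _, m in z]
        end = len(members) if stop == -1 else stop + 1
        return members[start:end]

    def zrangebyscore(z, lo, hi):
        """ZRANGEBYSCORE sketch with inclusive bounds."""
        return [m for s, m in z if lo <= s <= hi]

    def zrevrank(z, member):
        """ZREVRANK sketch: index counting down from the highest score."""
        return [m for _, m in reversed(z)].index(member)
    ```

    Real Redis uses a skip list plus a hash table so that both score-range queries and per-member lookups stay fast; this list model is only for understanding the command semantics.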

    1. Units

    (1) Size units: the top of the file defines some basic units of measure; only bytes are supported, not bits.

    (2) Units are case-insensitive.

    In the vim editor:

    :set nu       show line numbers

    G (Shift+g)   jump to the last line

    gg            jump to the first line

    redis.conf (excerpted below, with the file's own line numbers)

    1 # Redis configuration file example
    2
    3 # Note on units: when memory size is needed, it is possible to specify
    4 # it in the usual form of 1k 5GB 4M and so forth:
    5 #
    6 # 1k => 1000 bytes
    7 # 1kb => 1024 bytes              (note: 1k and 1kb differ)
    8 # 1m => 1000000 bytes
    9 # 1mb => 1024*1024 bytes
    10 # 1g => 1000000000 bytes
    11 # 1gb => 1024*1024*1024 bytes
    12 #
    13 # units are case insensitive so 1GB 1Gb 1gB are all the same.

    ...

    924 # By default "hz" is set to 10. Raising the value will use more CPU when
    925 # Redis is idle, but at the same time will make Redis more responsive when
    926 # there are many keys expiring at the same time, and timeouts may be
    927 # handled with more precision.
    928 #
    929 # The range is between 1 and 500, however a value over 100 is usually not
    930 # a good idea. Most users should use the default of 10 and raise this up to
    931 # 100 only in environments where very low latency is required.
    932 hz 10
    933
    934 # When a child rewrites the AOF file, if the following option is enabled
    935 # the file will be fsync-ed every 32 MB of data generated. This is useful
    936 # in order to commit the file to the disk more incrementally and avoid
    937 # big latency spikes.
    938 aof-rewrite-incremental-fsync yes
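    The 1k-versus-1kb distinction is easy to get wrong; a small parser sketch makes the rules concrete (parse_size is a hypothetical helper written for this article, not Redis code):

    ```python
    def parse_size(s):
        """Parse redis.conf memory sizes: 1k = 1000 bytes, 1kb = 1024 bytes,
        and units are case-insensitive, matching the comment block above."""
        units = {"b": 1,
                 "k": 1000, "kb": 1024,
                 "m": 1000 ** 2, "mb": 1024 ** 2,
                 "g": 1000 ** 3, "gb": 1024 ** 3}
        s = s.strip().lower()
        number = s.rstrip("kmgb")     # strip the unit suffix, keep the digits
        suffix = s[len(number):] or "b"
        return int(number) * units[suffix]
    ```

    So parse_size("1GB") and parse_size("1gb") agree (case-insensitivity), while parse_size("1k") and parse_size("1kb") differ by 24 bytes.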

    2. INCLUDES: similar to a Struts2 configuration file, redis.conf can serve as the master file and pull in other config files via include.

    15 ################################## INCLUDES ###################################
    16
    17 # Include one or more other config files here. This is useful if you
    18 # have a standard template that goes to all Redis servers but also need
    19 # to customize a few per-server settings. Include files can include
    20 # other files, so use this wisely.
    21 #
    22 # Notice option "include" won't be rewritten by command "CONFIG REWRITE"
    23 # from admin or Redis Sentinel. Since Redis always uses the last processed
    24 # line as value of a configuration directive, you'd better put includes
    25 # at the beginning of this file to avoid overwriting config change at runtime.
    26 #
    27 # If instead you are interested in using includes to override configuration
    28 # options, it is better to use include as the last line.
    29 #
    30 # include /path/to/local.conf
    31 # include /path/to/other.conf

    3. GENERAL

    33 ################################ GENERAL #####################################
    34
    35 # By default Redis does not run as a daemon. Use 'yes' if you need it.
    36 # Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
    37 daemonize yes                   (run as a daemon)
    38
    39 # When running daemonized, Redis writes a pid file in /var/run/redis.pid by
    40 # default. You can specify a custom pid file location here.
    41 pidfile /var/run/redis.pid      (pid file; this default path is used when none is specified)
    42
    43 # Accept connections on the specified port, default is 6379.
    44 # If port 0 is specified Redis will not listen on a TCP socket.
    45 port 6379
    46
    47 # TCP listen() backlog.
    48 #
    49 # In high requests-per-second environments you need an high backlog in order
    50 # to avoid slow clients connections issues. Note that the Linux kernel
    51 # will silently truncate it to the value of /proc/sys/net/core/somaxconn so
    52 # make sure to raise both the value of somaxconn and tcp_max_syn_backlog
    53 # in order to get the desired effect.
    54 tcp-backlog 511                 (size of the pending-connection backlog queue)
    55
    56 # By default Redis listens for connections from all the network interfaces
    57 # available on the server. It is possible to listen to just one or multiple
    58 # interfaces using the "bind" configuration directive, followed by one or
    59 # more IP addresses.
    60 #
    61 # Examples:
    62 #
    63 # bind 192.168.1.100 10.0.0.1   (bind to specific addresses)
    64 # bind 127.0.0.1
    65
    66 # Specify the path for the Unix socket that will be used to listen for
    67 # incoming connections. There is no default, so Redis will not listen
    68 # on a unix socket when not specified.
    69 #
    70 # unixsocket /tmp/redis.sock
    71 # unixsocketperm 700
    72
    73 # Close the connection after a client is idle for N seconds (0 to disable)
    74 timeout 0                       (0 = idle client connections are never closed)
    75
    76 # TCP keepalive.
    77 #
    78 # If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
    79 # of communication. This is useful for two reasons:
    80 #
    81 # 1) Detect dead peers.
    82 # 2) Take the connection alive from the point of view of network
    83 #    equipment in the middle.
    84 #
    85 # On Linux, the specified value (in seconds) is the period used to send ACKs.
    86 # Note that to close the connection the double of the time is needed.
    87 # On other kernels the period depends on the kernel configuration.
    88 #
    89 # A reasonable value for this option is 60 seconds.
    90 tcp-keepalive 0                 (keepalive probing disabled)
    91
    92 # Specify the server verbosity level.
    93 # This can be one of:
    94 # debug (a lot of information, useful for development/testing)
    95 # verbose (many rarely useful info, but not a mess like the debug level)
    96 # notice (moderately verbose, what you want in production probably)
    97 # warning (only very important / critical messages are logged)
    98 loglevel notice
    99
    100 # Specify the log file name. Also the empty string can be used to force
    101 # Redis to log on the standard output. Note that if you use standard
    102 # output for logging but daemonize, logs will be sent to /dev/null
    103 logfile ""                     (empty string = log to standard output)
    104
    105 # To enable logging to the system logger, just set 'syslog-enabled' to yes,
    106 # and optionally update the other syslog parameters to suit your needs.
    107 # syslog-enabled no
    108
    109 # Specify the syslog identity.
    110 # syslog-ident redis           (syslog identity)
    111
    112 # Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
    113 # syslog-facility local0       (defaults to local0)
    114
    115 # Set the number of databases. The default database is DB 0, you can select
    116 # a different one on a per-connection basis using SELECT <dbid> where
    117 # dbid is a number between 0 and 'databases'-1
    118 databases 16
    119

    4. SNAPSHOTTING

    120 ################################ SNAPSHOTTING  ################################
    121 #
    122 # Save the DB on disk:          (snapshot settings: when to dump the DB to disk)
    123 #
    124 # save <seconds> <changes>
    125 #
    126 # Will save the DB if both the given number of seconds and the given
    127 # number of write operations against the DB occurred.
    128 #
    129 # In the example below the behaviour will be to save:
    130 # after 900 sec (15 min) if at least 1 key changed        (a dump.rdb snapshot is written)
    131 # after 300 sec (5 min) if at least 10 keys changed
    132 # after 60 sec if at least 10000 keys changed
    133 #
    134 # Note: you can disable saving completely by commenting out all "save" lines.
    135 #
    136 # It is also possible to remove all the previously configured save
    137 # points by adding a save directive with a single empty string argument
    138 # like in the following example:
    139 #
    140 # save ""                       (disables snapshotting)
    141
    142 save 900 1                      (the defaults)
    143 save 300 10
    144 save 60 10000
    145
    146 # By default Redis will stop accepting writes if RDB snapshots are enabled
    147 # (at least one save point) and the latest background save failed.
    148 # This will make the user aware (in a hard way) that data is not persisting
    149 # on disk properly, otherwise chances are that no one will notice and some
    150 # disaster will happen.
    151 #
    152 # If the background saving process will start working again Redis will
    153 # automatically allow writes again.
    154 #
    155 # However if you have setup your proper monitoring of the Redis server
    156 # and persistence, you may want to disable this feature so that Redis will
    157 # continue to work as usual even if there are problems with disk,
    158 # permissions, and so forth.
    159 stop-writes-on-bgsave-error yes (stop accepting writes when a background save fails)
    160
    161 # Compress string objects using LZF when dump .rdb databases?   (enable compression?)
    162 # For default that's set to 'yes' as it's almost always a win.
    163 # If you want to save some CPU in the saving child set it to 'no' but
    164 # the dataset will likely be bigger if you have compressible values or keys.
    165 rdbcompression yes
    166
    167 # Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
    168 # This makes the format more resistant to corruption but there is a performance
    169 # hit to pay (around 10%) when saving and loading RDB files, so you can disable it
    170 # for maximum performances.
    171 #
    172 # RDB files created with checksum disabled have a checksum of zero that will
    173 # tell the loading code to skip the check.
    174 rdbchecksum yes
    175
    176 # The filename where to dump the DB
    177 dbfilename dump.rdb             (name of the RDB dump file)
    178
    179 # The working directory.
    180 #
    181 # The DB will be written inside this directory, with the filename specified
    182 # above using the 'dbfilename' configuration directive.
    183 #
    184 # The Append Only File will also be created inside this directory.
    185 #
    186 # Note that you must specify a directory here, not a file name.
    187 dir ./                          (default working directory)
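    A save <seconds> <changes> rule fires when both thresholds of any configured pair are met. A sketch of that check (should_save is a made-up helper, not Redis internals):

    ```python
    def should_save(save_points, elapsed_seconds, changed_keys):
        """Sketch of the RDB save-point check: a snapshot triggers when any
        configured (seconds, changes) pair has both thresholds met."""
        return any(elapsed_seconds >= seconds and changed_keys >= changes
                   for seconds, changes in save_points)

    # the defaults from the config above: save 900 1 / save 300 10 / save 60 10000
    DEFAULT_SAVE_POINTS = [(900, 1), (300, 10), (60, 10000)]
    ```

    So a single changed key triggers a snapshot only after 15 minutes, while a burst of 10000 changes triggers one within a minute.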

    5. SECURITY: viewing, setting, and clearing the access password

    [cevent@hadoop213 redis-3.0.4]$ redis-server redis.conf
    [cevent@hadoop213 redis-3.0.4]$ redis-cli -p 6379
    127.0.0.1:6379> ping
    PONG
    127.0.0.1:6379> config get requirepass    (read the current password; empty string by default)
    1) "requirepass"
    2) ""
    127.0.0.1:6379> config get dir            (read the working directory)
    1) "dir"
    2) "/opt/module/redis-3.0.4"
    127.0.0.1:6379> config set requirepass "cevent"   (set a password)
    OK
    127.0.0.1:6379> ping                      (commands are rejected until authenticated)
    (error) NOAUTH Authentication required.
    127.0.0.1:6379> auth cevent               (supply the password)
    OK
    127.0.0.1:6379> ping
    PONG
    127.0.0.1:6379> config set requirepass "" (clear the password again)
    OK
    127.0.0.1:6379> ping
    PONG

    378 ################################## SECURITY ###################################
    379
    380 # Require clients to issue AUTH <PASSWORD> before processing any other
    381 # commands. This might be useful in environments in which you do not trust
    382 # others with access to the host running redis-server.
    383 #
    384 # This should stay commented out for backward compatibility and because most
    385 # people do not need auth (e.g. they run their own servers).
    386 #
    387 # Warning: since Redis is pretty fast an outside user can try up to
    388 # 150k passwords per second against a good box. This means that you should
    389 # use a very strong password otherwise it will be very easy to break.
    390 #
    391 # requirepass foobared
    392
    393 # Command renaming.
    394 #
    395 # It is possible to change the name of dangerous commands in a shared
    396 # environment. For instance the CONFIG command may be renamed into something
    397 # hard to guess so that it will still be available for internal-use tools
    398 # but not available for general clients.
    399 #
    400 # Example:
    401 #
    402 # rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
    403 #
    404 # It is also possible to completely kill a command by renaming it into
    405 # an empty string:
    406 #
    407 # rename-command CONFIG ""
    408 #
    409 # Please note that changing the name of commands that are logged into the
    410 # AOF file or transmitted to slaves may cause problems.
    411

    6. LIMITS: connection limits, maxmemory, and cache eviction policies

    412 ################################### LIMITS ####################################
    413
    414 # Set the max number of connected clients at the same time. By default
    415 # this limit is set to 10000 clients, however if the Redis server is not
    416 # able to configure the process file limit to allow for the specified limit
    417 # the max number of allowed clients is set to the current file limit
    418 # minus 32 (as Redis reserves a few file descriptors for internal uses).
    419 #
    420 # Once the limit is reached Redis will close all the new connections sending
    421 # an error 'max number of clients reached'.
    422 #
    423 # maxclients 10000              (default maximum of 10000 clients)
    424
    425 # Don't use more memory than the specified amount of bytes.
    426 # When the memory limit is reached Redis will try to remove keys
    427 # according to the eviction policy selected (see maxmemory-policy).
    428 #
    429 # If Redis can't remove keys according to the policy, or if the policy is
    430 # set to 'noeviction', Redis will start to reply with errors to commands
    431 # that would use more memory, like SET, LPUSH, and so on, and will continue
    432 # to reply to read-only commands like GET.
    433 #
    434 # This option is usually useful when using Redis as an LRU cache, or to set
    435 # a hard memory limit for an instance (using the 'noeviction' policy).
    436 #
    437 # WARNING: If you have slaves attached to an instance with maxmemory on,
    438 # the size of the output buffers needed to feed the slaves are subtracted
    439 # from the used memory count, so that network problems / resyncs will
    440 # not trigger a loop where keys are evicted, and in turn the output
    441 # buffer of slaves is full with DELs of keys evicted triggering the deletion
    442 # of more keys, and so forth until the database is completely emptied.
    443 #
    444 # In short... if you have slaves attached it is suggested that you set a lower
    445 # limit for maxmemory so that there is some free RAM on the system for slave
    446 # output buffers (but this is not needed if the policy is 'noeviction').
    447 #
    448 # maxmemory <bytes>             (the memory ceiling that triggers eviction)
    449
    450 # MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
    451 # is reached. You can select among five behaviors:   (eviction policies applied when the limit is hit)
    452 #
    453 # volatile-lru -> remove the key with an expire set using an LRU algorithm
    454 # allkeys-lru -> remove any key according to the LRU algorithm   (LRU = Least Recently Used: evict the key unused for the longest time)
    455 # volatile-random -> remove a random key with an expire set      (random eviction among keys with a TTL)
    456 # allkeys-random -> remove a random key, any key
    457 # volatile-ttl -> remove the key with the nearest expire time (minor TTL)   (TTL = time to live)
    458 # noeviction -> don't expire at all, just return an error on write operations   (never evict)
    459 #
    460 # Note: with any of the above policies, Redis will return an error on write
    461 # operations, when there are no suitable keys for eviction.
    462 #
    463 # At the date of writing these commands are: set setnx setex append
    464 # incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
    465 # sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
    466 # zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
    467 # getset mset msetnx exec sort
    468 #
    469 # The default is:
    470 #
    471 # maxmemory-policy noeviction   (default: never evict; real deployments usually pick a different policy)
    472
    473 # LRU and minimal TTL algorithms are not precise algorithms but approximated
    474 # algorithms (in order to save memory), so you can tune it for speed or
    475 # accuracy. For default Redis will check five keys and pick the one that was
    476 # used less recently, you can change the sample size using the following
    477 # configuration directive.
    478 #
    479 # The default of 5 produces good enough results. 10 Approximates very closely
    480 # true LRU but costs a bit more CPU. 3 is very fast but not very accurate.
    481 #
    482 # maxmemory-samples 5           (default sample size of 5 keys)
    483
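    The allkeys-lru idea, evicting whichever key has gone longest without being touched, can be sketched with an OrderedDict. Here a fixed entry count stands in for maxmemory; this is only an illustration of exact LRU, not Redis's approximate sampling algorithm:

    ```python
    from collections import OrderedDict

    class LRUCache:
        """Sketch of the allkeys-lru policy: when capacity (standing in for
        maxmemory) is exceeded, evict the least recently used key."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.data = OrderedDict()   # insertion order doubles as recency order

        def get(self, key):
            if key not in self.data:
                return None
            self.data.move_to_end(key)  # touching a key makes it "recent"
            return self.data[key]

        def set(self, key, value):
            if key in self.data:
                self.data.move_to_end(key)
            self.data[key] = value
            if len(self.data) > self.capacity:
                self.data.popitem(last=False)   # evict the oldest entry
    ```

    For example, in a 2-entry cache holding "a" and "b", reading "a" and then inserting "c" evicts "b", the key touched least recently.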

    7. Common redis.conf settings
