Redis bulk operations
㈠ Is there a good way to iterate over all the keys in Redis?
How to batch-operate on Redis keys under Linux:
1. Count: the number of keys whose names contain OMP_OFFLINE:
src/redis-cli keys "*OMP_OFFLINE*" | wc -l
2. Batch delete: delete all keys in database 0 whose names contain OMP_OFFLINE:
src/redis-cli -n 0 keys "*OMP_OFFLINE*" | xargs src/redis-cli -n 0 del
The Redis client itself does not support batch deletion, which is why the shell pipeline is needed.
㈡ Is there a good way to batch-delete specified keys in Redis?
1. From the terminal
Get all keys: redis-cli keys '*'
Get keys with a given prefix: redis-cli KEYS "e:*"
To export the keys: redis-cli keys '*' > /data/redis_key.txt
Delete keys with a given prefix: redis-cli KEYS "e:*" | xargs redis-cli DEL
㈢ How to batch-delete keys matching a specific pattern in Redis
How to batch-operate on Redis keys under Linux:
1. Count:
the number of keys whose names contain OMP_OFFLINE:
src/redis-cli keys "*OMP_OFFLINE*" | wc -l
2. Batch delete
Delete all keys in database 0 whose names contain OMP_OFFLINE:
src/redis-cli -n 0 keys "*OMP_OFFLINE*" | xargs src/redis-cli -n 0 del
The Redis client itself does not support batch deletion.
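The pattern passed to KEYS is a glob, matched server-side against every key. As a rough illustration of what that match does (Python's fnmatchcase only approximates Redis's glob rules, so this is a demonstration, not Redis's actual matcher):

```python
from fnmatch import fnmatchcase

def match_keys(keys, pattern):
    # Approximate the server-side glob match performed by KEYS;
    # fnmatchcase and Redis agree on the basics of *, ?, and [...].
    return [k for k in keys if fnmatchcase(k, pattern)]

keys = ["OMP_OFFLINE:1", "foo_OMP_OFFLINE", "other"]
print(match_keys(keys, "*OMP_OFFLINE*"))  # ['OMP_OFFLINE:1', 'foo_OMP_OFFLINE']
```

This also makes clear why KEYS is expensive on large databases: every key is tested against the pattern.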
㈣ How to fetch and load Redis data in bulk
Redis Mass Insertion
Sometimes Redis instances need to be loaded with a big amount of preexisting or user-generated data in a short amount of time, so that millions of keys are created as fast as possible.
This is called a mass insertion, and the goal of this document is to provide information about how to feed Redis with data as fast as possible.
Use the protocol, Luke
Using a normal Redis client to perform mass insertion is not a good idea for a few reasons: the naive approach of sending one command after the other is slow because you have to pay for the round trip time for every command. It is possible to use pipelining, but for mass insertion of many records you need to write new commands while you read replies at the same time to make sure you are inserting as fast as possible.
Only a small percentage of clients support non-blocking I/O, and not all the clients are able to parse the replies in an efficient way in order to maximize throughput. For all these reasons the preferred way to mass import data into Redis is to generate a text file containing the Redis protocol, in raw format, in order to call the commands needed to insert the required data.
For instance, if I need to generate a large data set where there are billions of keys in the form `keyN -> ValueN`, I will create a file containing the following commands in the Redis protocol format:
SET Key0 Value0
SET Key1 Value1
...
SET KeyN ValueN
Once this file is created, the remaining action is to feed it to Redis as fast as possible. In the past the way to do this was to use netcat with the following command:
(cat data.txt; sleep 10) | nc localhost 6379 > /dev/null
However this is not a very reliable way to perform mass import because netcat does not really know when all the data was transferred and can't check for errors. In the unstable branch of Redis at github the redis-cli utility supports a new mode called pipe mode that was designed in order to perform mass insertion. (This feature will be available in a few days in Redis 2.6-RC4 and in Redis 2.4.14).
Using the pipe mode the command to run looks like the following:
cat data.txt | redis-cli --pipe
That will produce an output similar to this:
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 1000000
The redis-cli utility will also make sure to only redirect errors received from the Redis instance to the standard output.
㈤ Syncing MySQL to Redis with Python: there is a lot of data, and writing it to Redis one row at a time is too slow. Is there a way to batch the operations?
import redis
import time

r = redis.Redis(host='localhost', port=6379, db=0)

s_time = time.time()
with r.pipeline() as pipe:
    pipe.multi()
    for index, item in enumerate(qset):  # qset is the result set of your MySQL query
        key = item['id']
        value = item['name']
        pipe.sadd(key, value)
        if index % 1000 == 0:
            print("Now cnt: %d" % (index + 1))
            pipe.execute()
            pipe.multi()
    print("Execute...")
    pipe.execute()
e_time = time.time()
The MySQL query code is omitted above, and the data is described as key-value pairs.
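The code above flushes the pipeline every 1000 commands so that each network round trip carries a whole batch. The batching logic itself can be factored into a small helper (a sketch; the chunk size of 1000 is simply the value used above):

```python
def batches(items, size=1000):
    """Split a result set into fixed-size chunks, one pipeline.execute() per chunk."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# e.g. each chunk of rows would become one pipeline round trip:
#   for chunk in batches(qset):
#       with r.pipeline() as pipe: ...
print([len(b) for b in batches(list(range(2500)), 1000)])  # [1000, 1000, 500]
```

Flushing per chunk keeps the client's reply buffer bounded instead of holding all replies for millions of commands in memory at once.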
㈥ How can Java retrieve large amounts of content stored in Redis?
First, large volumes of data should not be kept in JVM memory.
Second, if you need to cache many DTOs or dynamic data (also called process data), Redis is the usual choice; for static data, such as large configurations loaded at system startup, consider Ehcache instead.
Third, since Redis uses physical memory rather than JVM memory, putting tens of millions of records into Redis normally has little impact on performance.
㈦ How to write large amounts of data to Redis efficiently
The concrete steps are as follows:
1. Create a text file containing Redis commands
SET Key0 Value0
SET Key1 Value1
...
SET KeyN ValueN
If you already have the raw data, building this file is not hard; shell or Python both work.
2. Convert these commands to the Redis Protocol.
The Redis pipe mode consumes the Redis Protocol, not plain Redis commands.
See the script below for how to convert.
3. Insert via the pipe
cat data.txt | redis-cli --pipe
Shell vs. Redis pipe
The test below compares the efficiency of bulk import via a shell loop against Redis pipe.
Test plan: insert 100,000 entries of the same value via a shell script and via Redis pipe, and measure how long each takes.
Shell
The script is as follows:
#!/bin/bash
for ((i=0;i<100000;i++))
do
echo -en "helloworld" | redis-cli -x set name$i >>redis.log
done
Every insert writes the value helloworld, but with a different key: name0, name1 ... name99999.
Redis pipe
Redis pipe is slightly more involved.
1> First, build the text file of Redis commands.
Here I used Python:
#!/usr/bin/python
for i in range(100000):
    print('set name' + str(i), 'helloworld')
# python 1.py > redis_commands.txt
# head -2 redis_commands.txt
set name0 helloworld
set name1 helloworld
2> Convert these commands to the Redis Protocol.
Here I used a shell script from GitHub:
#!/bin/bash
while read CMD; do
# each command begins with *{number arguments in command}\r\n
XS=($CMD); printf "*${#XS[@]}\r\n"
# for each argument, we append ${length}\r\n{argument}\r\n
for X in $CMD; do printf "\$${#X}\r\n$X\r\n"; done
done < redis_commands.txt
# sh 20.sh > redis_data.txt
# head -7 redis_data.txt
*3
$3
set
$5
name0
$10
helloworld
At this point, the data file is ready.
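The shell converter above can also be sketched in Python. This version emits the same RESP encoding (`*<argc>`, then a `$<len>`/argument pair per argument, all CRLF-terminated); note it assumes ASCII data, where character length equals the byte length that RESP actually requires:

```python
def to_resp(*args):
    # RESP encoding: *<number of args>\r\n, then $<length>\r\n<arg>\r\n per arg.
    # len() counts characters; for non-ASCII values you would need the
    # byte length of the UTF-8 encoding instead.
    out = ["*%d\r\n" % len(args)]
    for arg in args:
        s = str(arg)
        out.append("$%d\r\n%s\r\n" % (len(s), s))
    return "".join(out)

print(repr(to_resp("set", "name0", "helloworld")))
# '*3\r\n$3\r\nset\r\n$5\r\nname0\r\n$10\r\nhelloworld\r\n'
```

The output matches the head -7 sample above: 3 arguments, then set (3 bytes), name0 (5 bytes), helloworld (10 bytes).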
Test results
㈧ How to import large amounts of data into Redis
The answer is the same procedure as in ㈦ above: create a text file of Redis commands (easy to generate from the raw data with shell or Python), convert it to the Redis Protocol, and load it with cat data.txt | redis-cli --pipe.
㈨ How to get all the contents of Redis
1. Connect to the remote instance and search it, for example with redis-cli keys '*' to list every key.
㈩ How to batch-delete Redis keys on Windows
How to batch-operate on Redis keys under Linux:
1. Count:
the number of keys whose names contain OMP_OFFLINE:
src/redis-cli keys "*OMP_OFFLINE*" | wc -l
2. Batch delete
Delete all keys in database 0 whose names contain OMP_OFFLINE:
src/redis-cli -n 0 keys "*OMP_OFFLINE*" | xargs src/redis-cli -n 0 del
The Redis client itself does not support batch deletion.