
Decentralized distributed lock

Publish: 2021-04-20 12:57:33
1. I have been in touch with Qian'an for a long time, and in terms of capability it is very strong. Its advantage is that it is the largest digital asset exchange in the world; its peak 24-hour trading volume has exceeded US $10 billion. Achieving such performance in this industry is remarkable.
2. Mods creatively proposes a grid chain architecture. The grid chain not only retains the core features of blockchain, such as decentralization, disintermediation, trustlessness, openness, transparency, traceability and tamper resistance, but also integrates pipeline acceleration technology, time-slice dynamic token task scheduling, a division-of-labor and cooperation mechanism, a multi-layer consensus mechanism, auxiliary chain technology, and so on. This greatly improves transaction processing speed, greatly reduces energy consumption, and gives the Internet of Things high-speed computing capacity.
3. The difference between redis and memcached

Problems encountered by the traditional MySQL + memcached architecture

MySQL is suitable for mass data storage, and hot data is loaded into the cache through memcached to speed up access. Many companies have used this architecture, but as business data volume and access volume keep growing, we have run into a number of problems:

1. MySQL needs constant splitting of databases and tables, and memcached also needs continuous expansion, which takes up a lot of development time

2. Data consistency between memcached and MySQL

3. When the memcached hit rate is low or a cache machine goes down, a large amount of traffic penetrates directly to the DB, which MySQL cannot support

4. Cross-machine cache synchronization
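
To make the architecture above concrete, here is a minimal, illustrative sketch of the cache-aside pattern it implies, written in Python. The names cache, query_mysql and execute_mysql, the users table and the 300-second TTL are hypothetical placeholders, not a specific client API.

# Illustrative cache-aside read/write path for the MySQL + memcached setup.
# `cache`, `query_mysql` and `execute_mysql` are hypothetical stand-ins for
# a real memcached client and a real database access layer.

CACHE_TTL = 300  # seconds; arbitrary example value


def get_user(cache, query_mysql, user_id):
    key = f"user:{user_id}"
    row = cache.get(key)                # 1. try the cache first
    if row is not None:
        return row                      # cache hit: MySQL is not touched
    row = query_mysql(
        "SELECT * FROM users WHERE id = %s", (user_id,)
    )                                   # 2. cache miss: read from MySQL
    if row is not None:
        cache.set(key, row, CACHE_TTL)  # 3. populate the cache for later reads
    return row


def update_user(cache, execute_mysql, user_id, name):
    # Write path: update MySQL first, then invalidate the cached copy.
    # The window between these two steps is one source of the
    # memcached/MySQL consistency problem listed above.
    execute_mysql("UPDATE users SET name = %s WHERE id = %s", (name, user_id))
    cache.delete(f"user:{user_id}")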

Many NoSQL products are in full bloom; how do we choose among them?

In recent years, many kinds of NoSQL products have sprung up in the industry. How to use these products correctly and make the most of their strengths is a problem we need to study and think about deeply. In the final analysis, the most important thing is to understand the positioning of each product. In general, these NoSQL products are mainly used to solve the following problems:

1. Small amounts of data with high-speed read/write access. This kind of product keeps all data in memory to ensure high-speed access and provides a way to land data on disk. This is in fact the main application scenario of redis

2. Massive data storage, distributed system support, data consistency guarantees, and convenient addition/removal of cluster nodes

3. Dynamo and BigTable are the most representative papers in this field. The former is a completely decentralized design, in which cluster information is propagated between nodes via the gossip protocol to ensure eventual consistency of data. The latter is a centralized design that guarantees strong consistency through a distributed lock service (a minimal redis-based lock sketch follows this list). Data is written to memory and a redo log first and then periodically compacted to disk, turning random writes into sequential writes and improving write performance

4. Schema-free, auto-sharding, etc. For example, some common document databases are schema-free, store JSON-format data directly, and support auto-sharding and other features; mongodb is one such product

In the face of these different types of NoSQL products, we need to choose the most appropriate product according to our business scenarios.
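
Since this article's topic is distributed locks, here is a minimal sketch of how a simple lock can be built on top of redis using the common SET NX + expiry pattern and the redis-py client. It is only an illustration under those assumptions (a single redis instance, illustrative key names and timeouts), not the lock service BigTable relies on, and a single-instance lock does not by itself give strong consistency.

import uuid
import redis

r = redis.Redis(host="localhost", port=6379, db=0)  # illustrative connection

# Release only if we still own the lock: the compare-and-delete is done
# atomically inside redis via a small Lua script.
RELEASE_SCRIPT = """
if redis.call('get', KEYS[1]) == ARGV[1] then
    return redis.call('del', KEYS[1])
else
    return 0
end
"""

def acquire_lock(name, ttl_ms=30000):
    token = str(uuid.uuid4())
    # SET key value NX PX ttl: only succeeds if the key does not exist,
    # and the expiry ensures the lock is freed even if the holder dies.
    if r.set(name, token, nx=True, px=ttl_ms):
        return token
    return None

def release_lock(name, token):
    return r.eval(RELEASE_SCRIPT, 1, name, token) == 1

token = acquire_lock("lock:order:42")
if token:
    try:
        pass  # critical section
    finally:
        release_lock("lock:order:42", token)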

Which scenarios redis is suitable for, and how to use it correctly

We analyzed earlier that redis is most suitable for scenarios where all data fits in memory. Although redis also provides a persistence function, it is really more of a disk-backed feature, which is quite different from persistence in the traditional sense. So redis looks more like an enhanced version of memcached, and the question becomes: when should memcached be used and when should redis?

If you simply compare redis and memcached, most people will reach the following conclusions:

1. redis not only supports simple key/value data, but also provides list, set, zset, hash and other data structures (see the sketch after this list)

2. redis supports data backup, i.e. backup of data in master-slave mode

3. redis supports data persistence: it can keep in-memory data on disk and load it back after a restart
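
As a quick illustration of point 1 above, here is a short redis-py sketch touching each of these structures; the key names and values are arbitrary examples and a local redis instance is assumed.

import redis

r = redis.Redis()  # assumes a local redis on the default port

# Plain key/value, as in memcached
r.set("page:home:hits", 1)

# List: e.g. a recent-activity feed
r.lpush("events", "login", "purchase")
print(r.lrange("events", 0, -1))

# Set: unique members only
r.sadd("tags:post:7", "redis", "cache", "redis")
print(r.smembers("tags:post:7"))

# Sorted set (zset): members ordered by score, e.g. a leaderboard
r.zadd("leaderboard", {"alice": 120, "bob": 95})
print(r.zrange("leaderboard", 0, -1, withscores=True))

# Hash: field/value pairs under one key, e.g. an object
r.hset("user:1", "name", "alice")
r.hset("user:1", "plan", "pro")
print(r.hgetall("user:1"))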

Beyond these points, we can look into the internal structure of redis to observe more essential differences and understand its design.

In redis, not all data is always stored in memory; this is the biggest difference from memcached. Redis only guarantees that all key information is kept in memory. If redis finds that memory usage exceeds a certain threshold, it triggers a swap operation: using the counter swappability = age * log(size_in_memory), it calculates which keys' values should be swapped to disk, then persists those values to disk and clears them from memory. This feature lets redis hold more data than fits in its own memory; of course, the machine's memory must still be able to hold all the keys, since keys are never swapped out. At the same time, when redis swaps in-memory data to disk, the main thread serving requests and the child thread performing the swap share that part of memory, so if data that is being swapped is updated, redis blocks the operation until the child thread completes the swap.
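
To make the swap heuristic concrete, here is a tiny worked example of the swappability formula; the sample ages and value sizes are made up, while the real (and long since removed) redis VM code measured them internally.

import math

def swappability(age_seconds, size_in_memory_bytes):
    # swappability = age * log(size_in_memory): old, rarely accessed keys
    # with large values are the best candidates to swap out to disk.
    return age_seconds * math.log(size_in_memory_bytes)

# Hypothetical keys: (seconds since last access, value size in bytes)
samples = {
    "session:123": (5, 512),          # hot and small -> keep in memory
    "report:2020": (86400, 1 << 20),  # old and large -> swap candidate
    "counter:x":   (3600, 64),
}

for key, (age, size) in sorted(
    samples.items(), key=lambda kv: swappability(*kv[1]), reverse=True
):
    print(f"{key:14s} swappability={swappability(age, size):12.1f}")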

Comparison of memory usage before and after enabling the redis virtual memory (VM) model:
VM off: 300k keys, 4096-byte values: 1.3 GB used
VM on: 300k keys, 4096-byte values: 73 MB used
VM off: 1 million keys, 256-byte values: 430.12 MB used
VM on: 1 million keys, 256-byte values: 160.09 MB used
VM on: 1 million keys, values as large as you want, still: 160.09 MB used

When reading data from redis, if the value corresponding to the requested key is not in memory, redis needs to load the corresponding data from the swap file and then return it to the requester. Here an I/O thread pool comes into play: by default redis blocks, i.e. it does not respond until all the required data has been loaded from the swap file. This strategy is fine for batch operations with a small number of clients, but if redis is used in a large website application it clearly cannot handle high concurrency. In that case, we set the size of the I/O thread pool when running redis, so that read requests that must load data from the swap file can be served concurrently, reducing the blocking time.
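
For reference, the knobs described above lived in redis.conf back when redis still shipped the virtual memory feature (it was removed in later versions). The values below are illustrative only, and the exact set of directives depended on the redis version.

# Legacy redis.conf virtual-memory settings (illustrative sketch; the VM
# feature only existed in old redis versions and was later removed entirely).
vm-enabled yes
vm-swap-file /tmp/redis.swap   # where swapped-out values are written
vm-max-memory 0                # memory threshold before values are swapped
                               # (0 = swap as aggressively as possible)
vm-page-size 32                # bytes per swap-file page
vm-pages 134217728             # total pages in the swap file
vm-max-threads 4               # size of the I/O thread pool discussed above;
                               # 0 makes swap-file access fully blocking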

If you want to use redis with massive amounts of data, understanding redis's memory design and blocking behavior is indispensable.

Supplementary knowledge points:

Comparison of memcached and redis

1. Network IO model

Memcached uses a multi-threaded, non-blocking IO multiplexing network model, divided into a listening main thread and worker sub-threads. The listening thread accepts network connections and passes the connection descriptor to a worker thread through a pipe, and the worker thread performs the read/write IO. The network layer uses the event library encapsulated by libevent. The multi-threaded model can take advantage of multiple cores, but it introduces cache coherence and locking problems. For example, stats is the most commonly used command in memcached, and to support it every memcached operation must lock and update global counters, which costs performance.

(memcached network IO model)

Redis uses a single-threaded IO multiplexing model and wraps a simple ae event-handling framework, which mainly implements epoll, kqueue and select. For simple IO-only operations, a single thread can maximize the speed advantage, but redis also provides some simple computing functions such as sorting and aggregation. For those operations, the single-threaded model seriously hurts overall throughput, because the entire IO scheduling is blocked while the CPU is computing.

2. Memory management

Memcached uses a pre-allocated memory pool and manages memory with slabs and chunks of different sizes; each item is placed in an appropriately sized chunk according to its size. The internal memory pool saves the cost of allocating and freeing memory and reduces memory fragmentation, but it also brings a certain amount of wasted space. In addition, new data may be evicted even while plenty of memory is still free; for the reasons, see timyang's article: http://timyang.net/data/Memcached-lru-evictions/
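
As an illustration of how slab/chunk sizing trades fragmentation for internal waste, the sketch below computes a hypothetical chunk-size ladder; the base size and growth factor are made-up example values, not memcached's actual defaults (its -f growth-factor option plays a similar role).

def chunk_sizes(base=96, growth=1.25, max_size=8192):
    # Hypothetical slab-class ladder: each class holds chunks `growth`
    # times larger than the previous one. Exact defaults vary by version.
    size = base
    while size <= max_size:
        yield int(size)
        size *= growth

sizes = list(chunk_sizes())
item = 700  # bytes needed for one item (key + value + overhead)
chunk = next(s for s in sizes if s >= item)
print(f"classes: {sizes[:8]} ...")
print(f"a {item}-byte item lands in a {chunk}-byte chunk, wasting {chunk - item} bytes")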

Redis allocates memory on demand to store data and rarely uses free lists to optimize memory allocation, which leads to some memory fragmentation. Based on the parameters of the storage command, redis stores data that has an expiration time separately and calls it temporary data; non-temporary data is never deleted. Even when physical memory runs short and swapping kicks in, redis will not delete any non-temporary data (though it will try to evict some temporary data). In this respect, redis is positioned more as storage than as a cache.
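
A quick redis-py illustration of the temporary vs. non-temporary distinction described above; key names and TTLs are arbitrary and a local redis instance is assumed.

import redis

r = redis.Redis()

# Temporary data: written with an expiration time, eligible for removal.
r.set("session:abc", "logged-in", ex=1800)   # expires after 30 minutes
print(r.ttl("session:abc"))                  # -> remaining seconds

# Non-temporary data: no expiration; redis treats it as durable storage
# and will not silently drop it when memory gets tight.
r.set("user:1:email", "alice@example.com")
print(r.ttl("user:1:email"))                 # -> -1, meaning no expiry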

3. Data consistency

Memcached provides a CAS command, which can keep the same data item consistent under concurrent access. Redis does not provide a CAS command and cannot guarantee this; however, redis provides a transaction function that guarantees the atomicity of a series of commands, which will not be interrupted by any other operation.
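
To show how redis can approximate memcached's CAS with optimistic locking, here is a minimal WATCH/MULTI/EXEC sketch using redis-py; the stock counter key is an arbitrary example.

import redis

r = redis.Redis()
r.set("stock:42", 10)

def decrement_stock(key):
    # Optimistic check-and-set: WATCH the key, read it, then queue the write
    # in MULTI/EXEC. If another client modifies the key in between, EXEC
    # raises WatchError and we retry, much like a CAS loop.
    while True:
        with r.pipeline() as pipe:
            try:
                pipe.watch(key)
                current = int(pipe.get(key))
                if current <= 0:
                    pipe.unwatch()
                    return False
                pipe.multi()
                pipe.set(key, current - 1)
                pipe.execute()
                return True
            except redis.WatchError:
                continue  # someone else changed the key; try again

print(decrement_stock("stock:42"))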

4. Storage mode and other aspects

Memcached only supports simple key-value storage and does not support enumeration, persistence, replication or other such functions

In addition to key/value, redis supports many data structures such as list, set, sorted set and hash, and it provides the keys command for enumeration, but that command cannot be used online. If you need to enumerate data online, redis provides a tool that can scan its dump file directly and enumerate all the data. Redis also provides persistence, replication and other functions.

5. Client support in different languages

Both memcached and redis have plenty of third-party clients to choose from. However, because memcached has been around longer, many of its clients are more mature and stable. The redis protocol is itself more complex than memcached's, and the author keeps adding new features, so third-party clients may not keep pace, and sometimes you may need to modify a third-party client to make the best use of redis.

According to the comparison above, when we do not want data to be evicted, or we need more data types than plain key/value, or we need data persisted to disk, redis is more suitable than memcached.

About some peripheral functions of redis

In addition to being used as storage, redis provides some other functions, such as aggregate computation, Pub/Sub and scripting. For such functions, we need to understand their implementation principles and limitations before we can use them correctly. For example, the Pub/Sub function has no persistence support at all: any messages published while a consumer's connection is broken or being re-established are lost. Likewise, functions such as aggregate computation and scripting are limited by the redis single-threaded model and cannot reach very high throughput, so they need to be used with caution.
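
A short redis-py Pub/Sub sketch of the caveat above: only subscribers connected at publish time receive a message, and nothing is stored for later delivery. The channel and payload names are arbitrary.

import redis

r = redis.Redis()

# Subscriber side: must be connected *before* the publish below,
# otherwise the message is simply lost (there is no persistence).
p = r.pubsub()
p.subscribe("orders")
p.get_message(timeout=1.0)   # consume the subscribe confirmation

# Publisher side: returns the number of subscribers that received it.
receivers = r.publish("orders", "order:42:created")
print("delivered to", receivers, "subscriber(s)")

print(p.get_message(timeout=1.0))   # the published message, since we were subscribed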

Generally speaking, the redis author is a very diligent developer, and we can often see him trying a variety of new ideas. We need to understand these functions deeply before using them.

Summary:

1. The best way to use redis is with all data in memory

2. Redis is more often used as a substitute for memcached

3. When more data types than key/value are needed, redis is more suitable

4. When the stored data must not be evicted, redis is more suitable.