
Cloud computing power convergence

Publish: 2021-04-14 19:29:51
1. Cloud computing power (cloud hashrate) means that a mining farm divides its mining machines' hashrate into smaller computing-power contracts with different terms of service. Users buy one of these contracts and receive the mining income it produces. Whether a cloud-hashrate platform is reliable depends on whether real mining machines actually sit behind it, which can be verified on site.
2. Cloud computing power means that you rent a fixed amount of hashrate for a period of time and receive the mining income it generates during that period. If you mine Bitcoin, it is roughly like buying coins at a discount, but if the coin's price falls sharply you can still lose heavily. In effect the mining machines are sold off piecemeal, while ownership of the hardware stays with the company selling the cloud hashrate.
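The payoff of such a contract can be sketched with simple arithmetic. All figures below (prices, yields, term length) are hypothetical, made up purely for illustration; real contracts differ in fees, difficulty adjustments, and payout rules.

```python
# A minimal sketch of estimating a fixed-term cloud-hashrate contract's
# payoff. Every number here is a made-up assumption, not real market data.

def contract_profit(hashrate_ths, days, price_per_ths_day,
                    btc_per_ths_day, btc_price):
    """Return (cost, revenue, profit) in USD for a fixed-term contract."""
    cost = hashrate_ths * days * price_per_ths_day       # what you pay up front
    mined_btc = hashrate_ths * days * btc_per_ths_day    # coins the hashrate yields
    revenue = mined_btc * btc_price                      # value at the assumed price
    return cost, revenue, revenue - cost

# Hypothetical example: 100 TH/s for 180 days at $0.08 per TH/s per day,
# yielding 0.0000025 BTC per TH/s per day, with BTC at $40,000.
cost, revenue, profit = contract_profit(100, 180, 0.08, 0.0000025, 40000)
print(cost, revenue, profit)
```

Note how the profit flips sign if the assumed coin price drops far enough, which is the risk described above.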
3. Behind cloud computing power are the server rooms (mining farms) we already know. More about hashrate can also be learned from the Tenth Power platform.
5. It can be reliable: cloud-hashrate mining has lower costs and steadier income, and the platform bears the mining risk. Choosing a trustworthy platform is especially important; in my view, established Chinese and international cloud-mining platforms backed by large mining farms are good options.
6. Here is an article I found:

Let's put it this way: "Deep Blue," which beat chess world champion Garry Kasparov in 1997, was a supercomputer, while AlphaGo, which is about to play Go against Lee Sedol, is an artificial-intelligence program developed by DeepMind, a Google subsidiary. Asking which of the two is "stronger" makes so little sense that we might as well talk about world peace instead. But AlphaGo, being a program, still has to run on computers to compete with a human. So let's rephrase the question: "How many times more powerful is the computer that will play Go against a human than Deep Blue?"
We can still give an approximate answer with a simple calculation. After all, there is a fairly unified standard for measuring computer performance: the number of floating-point operations per second, which for convenience we will call "FLOPS" below.
Don't be scared off by the term "floating-point operation." Put plainly, a floating-point operation is just arithmetic on numbers with decimals. For example, 1.2 plus 2.1 is a typical floating-point operation. Unless your primary-school math teacher was American, you have probably already worked out in your head that the answer is 3.3. For a computer, though, it is not that easy.
We know that computers operate on binary numbers made of 0s and 1s: in binary, 1 is 1, 2 becomes 10, 3 is 11, 4 is 100, and so on. This scheme lets us build stable, efficient computing machines out of the simplest circuit components, but it also creates a problem: the numbers a computer can handle natively are integers. As for expressing 0.1 with nothing but 0s and 1s and no further mathematical tricks... let's really talk about world peace instead.
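The binary encodings mentioned above, and the trouble with 0.1, can both be checked directly in Python:

```python
# The binary encodings from the text: 1 -> 1, 2 -> 10, 3 -> 11, 4 -> 100.
for n in [1, 2, 3, 4]:
    print(n, format(n, 'b'))

# 0.1 has no finite binary expansion, so the stored double-precision
# value is only an approximation of it:
print(f"{0.1:.20f}")  # not exactly 0.1
```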
The solution to this problem is simple: 0.1 can be treated as the result of 1 divided by 10. To make the computer handle a number with a decimal point, we just need to tell the CPU how many zeros follow the 1 in the divisor. But this adds several extra steps whenever the computer deals with decimals, which is why floating-point speed became the standard yardstick of computer performance.
Take Deep Blue, which beat humanity at chess: its computing power was 11.38 GFLOPS, meaning it could perform 11.38 billion additions, subtractions, multiplications, and divisions on decimals per second. ENIAC, the first general-purpose computer, which helped the United States design and build atomic bombs during World War II, managed only about 300 FLOPS.
How does Deep Blue's performance look today? In a word: feeble. Among PC processors, Intel's first-generation Core 2 had already comfortably surpassed Deep Blue as early as 2006, and that is before counting the boost from GPUs: the most ordinary integrated graphics today exceed 700 GFLOPS. In a raw performance contest, even a whole team of last-century supercomputers like Deep Blue might not beat the laptop sitting in front of you.
And what level have today's supercomputers reached? Tianhe-2 is the fastest supercomputer in the world, with a floating-point capability of 33.86 PFLOPS. In other words, Deep Blue would need to grow to nearly three million times its own performance to be comparable to Tianhe-2.
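The ratio quoted above is a straightforward division of the two FLOPS figures:

```python
# Quick check of the Tianhe-2 vs. Deep Blue performance ratio.
deep_blue = 11.38e9   # 11.38 GFLOPS, in FLOPS
tianhe_2 = 33.86e15   # 33.86 PFLOPS, in FLOPS
ratio = tianhe_2 / deep_blue
print(round(ratio))   # nearly 3 million
```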
For Deep Blue, though, such a comparison is unfair, because even in its own day Deep Blue was not the fastest supercomputer. A better gauge of how astonishingly computers have developed over the past 20 years is the machines Google uses to run AlphaGo.
According to the paper the Google team published in Nature, AlphaGo was initially "trained" to play Go on a single Google computer. By the paper's account, this machine brought AlphaGo's Go level close to that of European champion Fan Hui. But beyond noting that the computer had 48 CPUs and 8 GPUs, the paper says nothing about its performance. Fortunately, AlphaGo runs on a cloud-computing platform, so we can get a rough idea by comparing the published specifications of competing platforms.
For example, last December, Alibaba Cloud opened its high-performance-computing service to the public. By Alibaba Cloud's description, a single machine delivers 11 TFLOPS of floating-point performance and can likewise be used to train self-learning AI. If Google's machine is comparable to Alibaba Cloud's, the hardware driving AlphaGo is at least 1,000 times as powerful as Deep Blue.
But the story doesn't end there. AlphaGo is not only a "single-machine" version: to reach higher computing power, Google also built a distributed AlphaGo networked across 1,202 CPUs. Networking increased AlphaGo's computing power roughly 24-fold, lifting its playing strength from about a professional 2-dan level in the single-machine version to about professional 5-dan.
So how many times more powerful is AlphaGo's hardware than Deep Blue? You may have worked it out already: about 25,000 times. This also shows how complex Go is: after defeating humans at chess, computers needed another 20 years of performance gains before they could sit across from top human players at the Go board. In the final analysis, though, AlphaGo's most important achievement is not running on high-performance computers but making a program think, learn, and improve in a human-like way for the first time. So in the coming days, whoever wins or loses, we are witnessing the beginning of a new era.
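The "about 25,000 times" figure combines the two steps above, namely the roughly 1,000x hardware estimate and the further 24x from the distributed version:

```python
# Combining the two estimates from the article.
deep_blue = 11.38e9            # 11.38 GFLOPS, in FLOPS
single_alphago = 11e12         # ~11 TFLOPS, per the Alibaba Cloud comparison
distributed = single_alphago * 24  # 24x gain from networking 1,202 CPUs
ratio = distributed / deep_blue
print(round(ratio))            # roughly 23,000, i.e. on the order of 25,000
```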
Of course, don't forget to follow Sina Technology. We will be on the front lines, welcoming the first dawn of this new era with you.