GPU computing power unit: 1T (1 TH/s)
Graphics cards can no longer mine Bitcoin profitably. The hash rate you quoted is an Ethereum hash rate, and the calculation method is also wrong.
For reference, here are the prices and hash rates of some graphics cards common in the Internet-cafe market, along with their expected returns at current rates:
Card 1:
power consumption: 243 W
hash rate: 22.4 MH/s
card price: 1999 yuan
ETH mined per 24 hours: 0.015
revenue per 24 hours: 24.48 yuan
expected payback time: 81.66 days

Card 2:
power consumption: 159 W
hash rate: 24.3 MH/s
card price: 1599 yuan
ETH mined per 24 hours: 0.017
revenue per 24 hours: 27.9 yuan
expected payback time: 57.31 days

Card 3:
power consumption: 171 W
hash rate: 24.4 MH/s
card price: 1999 yuan
ETH mined per 24 hours: 0.017
revenue per 24 hours: 27.87 yuan
expected payback time: 71.73 days
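The payback figures above are just the card price divided by daily revenue. A minimal sketch of that arithmetic, using the quoted market numbers (real prices and ETH yields fluctuate daily):

```python
# Payback-period arithmetic behind the table above.
# The prices and daily revenues are the quoted figures, not live data.

def payback_days(card_price_yuan: float, daily_revenue_yuan: float) -> float:
    """Days needed for mining revenue to cover the card's purchase price."""
    return card_price_yuan / daily_revenue_yuan

cards = [
    ("Card 1", 1999, 24.48),
    ("Card 2", 1599, 27.90),
    ("Card 3", 1999, 27.87),
]

for name, price, daily in cards:
    print(f"{name}: {payback_days(price, daily):.2f} days")
# Card 1: 81.66 days
# Card 2: 57.31 days
# Card 3: 71.73 days
```

Note this ignores electricity cost; subtracting power draw times the local tariff from daily revenue would lengthen every payback period.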
A video card (graphics card), formally called a display interface card or display adapter, is one of the most basic and most important components of a computer. As part of the computer host, the graphics card performs digital-to-analog signal conversion and is responsible for outputting and displaying graphics.
The graphics card plugs into the computer's motherboard and converts the computer's digital signals into analog signals for the display. The card also has image-processing capability of its own, which offloads work from the CPU and improves overall speed. For people doing professional graphic design, the graphics card is especially important. The main consumer graphics chip suppliers are AMD (Advanced Micro Devices) and NVIDIA. Many of today's TOP500 supercomputers include GPUs as computing cores. In scientific computing, the graphics card is sometimes called a display accelerator.
Under the current difficulty, a 1 TH/s mining machine can mine about 0.0268 BTC per day, roughly one bitcoin every 37 days (1/0.0268 ≈ 37), which is worth about 100 yuan in RMB per day. How much bitcoin you can mine in a day depends on your share of the network, i.e. your computing power. The yield also floats: when the network's overall hash rate is low you earn more, otherwise less. You can check the specifics on Babbitt's website when you have time.
First of all, can a CPU drop its cache the way a GPU does? No. Two key factors let the GPU get by without a large cache: the nature of its data (highly aligned, processed as a pipeline, not fitting the locality assumption, rarely written back) and a high-speed memory bus. The latter is fixable in principle: the CPU is held back by older data-bus standards, which could be changed. The former is very hard to solve in principle, because the CPU must stay general-purpose and cannot restrict the kinds of data it processes. That is why the GPU can never replace the CPU.
Secondly, can a CPU add as many cores as a GPU? No. First, the cache takes up die area. Second, the CPU must make each core more complex in order to keep the caches coherent. In addition, to use the cache well and to handle data that is unaligned or needs a lot of write-back, the CPU requires complex optimizations (branch prediction, out-of-order execution, plus the vector instructions and long pipelines that imitate the GPU). As a result, a CPU core is far more complex than a GPU core and costs more (not because etching it costs more, but because the complexity reduces the yield rate, which drives up the final cost). So the CPU cannot add cores the way a GPU does.
As for control flow, the GPU is currently worse at it than the CPU, but that is not an essential limitation. Control flow such as recursion simply does not suit highly aligned, pipelined data, so it is fundamentally a data problem.
In addition, current CPU designs are also borrowing from the GPU, adding parallel floating-point units without much control logic around them. For example, Intel's SSE instruction set can perform four single-precision floating-point operations at the same time, and many registers were added for it. If you want to learn GPU computing, you can download the CUDA SDK, which comes with very detailed documentation.
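The 4-wide SIMD idea behind SSE can be sketched with NumPy standing in for the intrinsics (an analogy, not actual SSE code; in C this would be `_mm_add_ps` on `__m128` registers):

```python
import numpy as np

# SSE processes four packed single-precision floats with one instruction.
# NumPy's vectorized add over a 4-element float32 array models the same
# "one operation, four lanes" behaviour.

a = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
b = np.array([10.0, 20.0, 30.0, 40.0], dtype=np.float32)

c = a + b  # conceptually a single packed add across all four lanes
print(c.tolist())  # [11.0, 22.0, 33.0, 44.0]
```

The same vectorized style is exactly what GPU shaders and CUDA warps scale up to thousands of lanes.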
Ordinary users do not need to care about a graphics card's compute capability; only GPU programmers care about it, when they write CUDA programs. As long as you know your graphics card's model, you can look up the corresponding compute capability at https://developer.nvidia.com/cuda-gpus .
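For illustration, here is a tiny lookup table with a few sample values of the kind listed on that NVIDIA page (a small, possibly outdated subset chosen by me; check the page for your exact model):

```python
# Sample compute-capability values by GPU architecture generation
# (illustrative subset of https://developer.nvidia.com/cuda-gpus,
#  not the authoritative table).

COMPUTE_CAPABILITY = {
    "GeForce GTX 980":  5.2,  # Maxwell
    "GeForce GTX 1080": 6.1,  # Pascal
    "GeForce RTX 2080": 7.5,  # Turing
    "GeForce RTX 3080": 8.6,  # Ampere
}

def lookup(model: str) -> float:
    """Return the compute capability for a known card model."""
    return COMPUTE_CAPABILITY[model]

print(lookup("GeForce GTX 1080"))  # 6.1
```

Compute capability determines which CUDA features a kernel may use, so toolchains compile for a minimum capability (e.g. `nvcc -arch=sm_61`).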
Yes. In NVIDIA's lineup, the "Ti" model generally performs better than the non-Ti version of the same card, i.e. its GPU has more processing power. In detailed comparisons the non-Ti card occasionally wins on an individual metric such as base or boost clock, but the Ti's overall performance is still clearly better.

The role of the GPU
The GPU is the "heart" of the graphics card, playing a role equivalent to the CPU's in the computer. It determines the grade and most of the performance of the card, and it is the basis of the difference between 2D and 3D graphics cards. 2D display chips rely mainly on the CPU's processing power when handling 3D images and effects, which is called "soft acceleration". 3D display chips integrate 3D image and effect processing into the chip itself, the so-called "hardware acceleration". The display chip is usually the largest chip (the one with the most pins) on the card, and most graphics cards on the market now use NVIDIA or ATI processing chips. NVIDIA first proposed the concept of the GPU when it released the GeForce 256 graphics processing chip in 1999. The GPU makes the graphics card less dependent on the CPU and takes over part of the CPU's original work, especially in 3D graphics processing. The GPU's core technologies include hardware T&L, cube environment mapping and vertex blending, texture compression and bump mapping, and a dual-texture four-pixel 256-bit rendering engine.
Simply put, a GPU is a display chip that supports T&L (Transform and Lighting: polygon transformation and lighting) in hardware, because T&L is an important part of 3D rendering. It computes the 3D positions of polygons and processes dynamic lighting effects, and is also known as "geometry processing". A good T&L unit can provide detailed 3D objects and advanced lighting effects. In most PCs, most T&L operations are handled by the CPU (so-called software T&L). Because the CPU has many other tasks besides T&L -- memory management, input handling, and other non-3D work -- its performance drops greatly in practice, and the graphics card often sits waiting for CPU data. Its computing speed falls far behind the requirements of today's complex 3D games. Even a CPU running above 1 GHz does not help much, because the problem comes from the PC's own design and has nothing to do with CPU speed.
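The "transform" half of T&L is just a 4x4 matrix applied to each vertex in homogeneous coordinates, and the simplest "lighting" term is a clamped dot product. A minimal sketch (my own illustrative functions, not any real driver API):

```python
import numpy as np

def transform(matrix: np.ndarray, vertex: np.ndarray) -> np.ndarray:
    """Apply a 4x4 transform to a vertex in homogeneous coordinates."""
    return matrix @ vertex

def lambert(normal: np.ndarray, light_dir: np.ndarray) -> float:
    """Diffuse intensity: clamped dot product of unit normal and light dir."""
    return max(0.0, float(np.dot(normal, light_dir)))

# A translation by (1, 2, 3) -- hardware T&L runs this per vertex in silicon.
M = np.array([[1.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 2.0],
              [0.0, 0.0, 1.0, 3.0],
              [0.0, 0.0, 0.0, 1.0]])
v = np.array([1.0, 1.0, 1.0, 1.0])

print(transform(M, v).tolist())  # [2.0, 3.0, 4.0, 1.0]
print(lambert(np.array([0.0, 0.0, 1.0]),
              np.array([0.0, 0.0, 1.0])))  # 1.0 (light head-on)
```

A scene repeats this for every vertex of every polygon, which is why offloading it from the CPU made such a difference.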
The difference between GPU and DSP
The GPU differs from a DSP architecture in several respects. All of its computation uses floating point; there are currently no bit or integer operation instructions. Because the GPU is designed specifically for image processing, its storage system is actually a two-dimensional segmented space: a segment number (which image to read from) plus a two-dimensional address (x, y coordinates within the image). There are also no indirect write instructions; the output write address is determined by the raster processor and cannot be changed by the program, which is a great challenge for algorithms that need scattered writes to memory. Finally, no communication is allowed between the processing of different fragments: the fragment processor is in effect a SIMD data-parallel execution unit, running the same code independently on every fragment.
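That SIMD fragment model can be sketched as one vectorized operation over a "framebuffer" (a NumPy analogy of my own, not shader code): every output depends only on its own input, never on a neighbour.

```python
import numpy as np

def shade(framebuffer: np.ndarray) -> np.ndarray:
    """Run the same 'shader' on every fragment at once.

    Each output element is a pure function of the matching input element,
    mirroring the GPU rule that fragments cannot communicate.
    """
    return np.clip(framebuffer * 0.5 + 0.1, 0.0, 1.0)

fb = np.array([0.0, 0.4, 1.0, 2.0])
print(shade(fb))  # values scaled, offset, and clamped to [0, 1]
```

Anything that needs one fragment to read another's result (e.g. a running sum across pixels) breaks this model and must be restructured, which is exactly the constraint the paragraph above describes.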
Despite these constraints, the GPU can efficiently perform a wide variety of computations, from linear algebra and signal processing to numerical simulation. Although the concept is simple, new users are often confused, because programming the GPU requires specialized graphics knowledge. Some software tools help here. Cg and HLSL are two high-level shading languages that let users write C-like code and compile it to fragment assembly. Brook is a high-level language designed specifically for GPU computing that requires no graphics knowledge, so it is a good starting point for first-time GPU developers.

Brook extends C with a simple data-parallel programming model that maps directly onto the GPU. Data stored and operated on by the GPU is described as a "stream", similar to an array in standard C. A kernel is a function that operates on streams; calling a kernel on a set of input streams implies a loop over the stream elements, invoking the kernel body on each one. Brook also provides reduction mechanisms, such as computing the sum, maximum, or product of all elements in a stream. Brook completely hides the details of the graphics API and virtualizes the parts of the GPU unfamiliar to most users, such as the two-dimensional memory system. Applications written in Brook include linear algebra subroutines, fast Fourier transforms, ray tracing, and image processing. On ATI's X800XT and NVIDIA's GeForce 6800 Ultra, many such applications run as much as 7 times faster than on a Pentium 4 with equivalent cache and SSE assembly optimization.
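Brook's stream/kernel/reduction model can be paraphrased in Python terms (an analogy I am drawing, not Brook syntax): a stream is an array, a kernel is a function implicitly mapped over every element, and a reduction folds a stream to one value.

```python
def kernel_map(kernel, stream):
    """Calling a kernel on a stream = an implicit loop over its elements."""
    return [kernel(x) for x in stream]

def reduce_sum(stream):
    """Brook-style reduction: combine all elements of a stream into one value."""
    total = 0.0
    for x in stream:
        total += x
    return total

stream = [1.0, 2.0, 3.0, 4.0]
squared = kernel_map(lambda x: x * x, stream)  # [1.0, 4.0, 9.0, 16.0]
print(reduce_sum(squared))                     # 30.0
```

On a GPU the map runs one element per fragment in parallel, and the reduction is done in log-depth passes rather than a serial loop; the Python loop only shows the semantics.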
In the past, users interested in GPU computing had to map their algorithms onto the basic elements of graphics. The advent of high-level languages like Brook makes it easy for novices to tap the GPU's performance advantages. This easier access to the GPU's computing functions also keeps its evolution going: no longer just a rendering engine, but a main computing engine of the personal computer.
