Computers provide computing power
The types of chips that provide computing power for AI include the GPU, the FPGA, and the ASIC.
A GPU is a microprocessor specialized in image processing on personal computers, workstations, game consoles, and some mobile devices (such as tablets and smartphones). It is similar to a CPU, except that the GPU is designed to perform the complex mathematical and geometric calculations necessary for graphics rendering.
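To make that concrete, here is a minimal sketch of the kind of data-parallel geometry math a GPU is built for: one rotation matrix applied to many vertices at once. It uses numpy on the CPU purely as an illustration; the array size and the angle are arbitrary.

```python
import numpy as np

# One 3x3 rotation matrix applied to 100,000 vertices in a single
# operation -- the data-parallel pattern a GPU executes across thousands
# of cores at once (numpy here runs it on the CPU, as an illustration).
theta = np.pi / 4
rotation_z = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])

vertices = np.random.rand(100_000, 3)  # vertices of some 3-D model
rotated = vertices @ rotation_z.T      # one matrix multiply transforms them all

print(rotated.shape)  # (100000, 3)
```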
An FPGA can implement the function of any digital device; even a high-performance CPU can be implemented with an FPGA. In 2015, Intel acquired Altera, the FPGA market leader, for US $16.7 billion, partly in order to focus on developing the FPGA's specialized computing power for the field of artificial intelligence.
An ASIC refers to an integrated circuit designed and manufactured to the requirements of a specific user or the needs of a specific electronic system. Strictly speaking, an ASIC is a special-purpose chip, different from a traditional general-purpose chip: it is designed for one particular need. The TPU that Google recently revealed for AI deep-learning computation is also an ASIC.
Extended information:
Chips are also called integrated circuits. By function they can be divided into many kinds, including those responsible for power-supply voltage regulation, audio and video processing, and complex computation. An algorithm can only run with the help of a chip, and because each chip has different computing power in different scenarios, the processing speed and energy consumption of the algorithm also differ. Today, with the rapid development of the artificial-intelligence market, people are looking for chips that can run deep-learning algorithms faster and with lower energy consumption.
Per-core peak performance in the Sunway TaihuLight system:
- MPE (management processing element): 8 GFLOPS
- CPE (computing processing element): 11 GFLOPS
The peak computing power of the whole Sunway TaihuLight system reaches 100 PFLOPS.
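As a rough sanity check on that figure, the sketch below combines the per-core numbers above with the commonly cited configuration of Sunway TaihuLight (40,960 SW26010 processors, each with 4 MPEs and 256 CPEs); treat the node and core counts as assumptions, not figures from this text.

```python
# Estimated peak of Sunway TaihuLight from per-core numbers.
# ASSUMED configuration: 40,960 SW26010 processors, 4 MPEs + 256 CPEs each.
NODES = 40_960
MPES_PER_NODE, CPES_PER_NODE = 4, 256
MPE_GFLOPS, CPE_GFLOPS = 8, 11  # per-core figures from the text

per_node_gflops = MPES_PER_NODE * MPE_GFLOPS + CPES_PER_NODE * CPE_GFLOPS
total_pflops = NODES * per_node_gflops / 1e6  # GFLOPS -> PFLOPS

print(f"{total_pflops:.1f} PFLOPS")  # ~116.7 PFLOPS, on the order of 100 PFLOPS
```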
It is necessary to mention here that floating-point computing power refers to a computer's capacity for floating-point calculation, and computers have a dedicated floating-point unit (FPU) for this purpose.
The 2 GHz or 4 GHz of a home computer refers to its clock frequency. At a clock frequency of 4 GHz, the floating-point processing power of the computer is roughly 4 GFLOPS. However, clock frequency is not the same as floating-point processing power.
Clock frequency is the number of clock cycles a computer completes per second; the more it processes per second, the more powerful the computer.
Still, the clock frequency of a CPU does not by itself represent its processing power; the instruction pipeline also influences how much work the CPU completes.
A clock cycle is the basic unit of CPU operation, and a single floating-point calculation may take anywhere from a few to dozens of clock cycles. So the relationship between clock frequency and floating-point processing power is clear: frequency sets the pace, but the work completed per cycle matters just as much.
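A minimal sketch of that relationship, with hypothetical FLOPs-per-cycle values:

```python
# Peak floating-point throughput = clock frequency x floating-point
# operations completed per cycle. The FLOPs-per-cycle values below are
# hypothetical; real CPUs differ widely with pipeline depth and SIMD width.
def peak_gflops(clock_ghz: float, flops_per_cycle: float) -> float:
    return clock_ghz * flops_per_cycle

print(peak_gflops(4.0, 1))    # 4.0  -- a 4 GHz core retiring 1 FLOP per cycle
print(peak_gflops(4.0, 8))    # 32.0 -- same clock, a wider pipeline/SIMD unit
print(peak_gflops(4.0, 0.1))  # 0.4  -- an operation needing ~10 cycles
```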
Every task in your operating system has a process, and each process is composed of many threads. If you have ever taken an operating-systems class, you will have met multithreaded programming: you let different threads cooperate to complete the same task, which is what parallel computing and distributed processing mean.
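For instance, here is a minimal Python sketch of threads cooperating on one task (summing squares over a split range). The chunk sizes and worker count are arbitrary, and note that CPython's GIL makes this concurrency rather than true multi-core parallelism; multiprocessing would use separate cores.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(start: int, stop: int) -> int:
    # Each thread computes the sum of squares over its own slice.
    return sum(i * i for i in range(start, stop))

# Split 0..1,000,000 into four chunks, one per worker thread.
chunks = [(i * 250_000, (i + 1) * 250_000) for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda c: partial_sum(*c), chunks))

print(sum(results))  # identical to the single-threaded answer
```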
You can see that the projects in those parallel-computing classes also run their algorithms on ordinary computers; the key point is that you simulate the environment. Just as people who run network algorithms use sockets to simulate things, they use one PC to simulate a network of N workstations.
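A minimal sketch of that kind of simulation, with threads standing in for workstations and an arbitrary localhost port:

```python
import socket
import threading

# Three "workstations" simulated on one PC: each is just a thread that
# opens a socket to a local server, the way coursework simulates a
# network of N machines. HOST and PORT are arbitrary choices.
HOST, PORT = "127.0.0.1", 50007

def workstation(node_id: int) -> None:
    with socket.create_connection((HOST, PORT)) as s:
        s.sendall(f"hello from node {node_id}".encode())

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen()

threads = [threading.Thread(target=workstation, args=(i,)) for i in range(3)]
for t in threads:
    t.start()

for _ in threads:
    conn, _addr = server.accept()
    with conn:
        print(conn.recv(1024).decode())  # e.g. "hello from node 0"

for t in threads:
    t.join()
server.close()
```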
As for a multi-core CPU: first of all, you don't have access to the firmware of an ordinary motherboard, so you would have to buy a general-purpose programmable board. But if you want to buy a multi-core one, you can't find it; only a laboratory would have it. And even if you found one, I think that board would cost several times the tuition it takes to go to graduate school and get into such a laboratory.
In a word, if it is implemented in software, today's PC should have no problem; as for how to start its parallel-computing mode, you need to read the board's datasheet.
I don't agree with the claim above that there must be two cores. A person with two brains could work in parallel, but the left and right hemispheres of one brain are also parallel, and while one neuron fires, another completely independent neuron is also working in parallel. Parallelism is a way of solving problems, an idea. Parallel computing was first proposed in 1912; I can't remember when the multi-core processor appeared, but it was after 2000 at least.
To avoid the appearance of advertising, I won't post the website; you can search the Internet directly, and there are instructions on the official site.
Work: W = -Mg(z2 - z1) = -100 × 9.8 × (2 - 8) = 5880 (J)
If you play 360 games on a PC through an emulator, you are just going around in circles; remember the story of the tortoise and the hare.
At the programming level, first of all, 1, 2, and 3 are binary integer variables.
(why binary?)
My understanding is that binary arithmetic is more convenient: there is no 9×9 multiplication table to memorize.
High and low levels (voltages) are easy to represent and hard to confuse, but dividing a signal into a series of many different voltage levels is a thankless job demanding high cost and hard work. In short, it burns money, the result would be very expensive, and who would buy it?
For computers, binary numbers are great things, because:
- They are simple to work with: no big addition or multiplication tables to learn, just the same few operations done over and over, very fast.
- They use only two values of voltage, magnetism, or another signal, which makes the hardware easier to design and more noise resistant.

In essence, 1, 2, and 3 are all binary numbers. Suppose they are represented with 8 bits (with 16 bits, i.e. these 8 bits plus 8 leading zeros, a larger range can be represented without overflow):
This is 1: 00000001 (= 0 + 1) in the computer's eyes
This is 2: 00000010 (= 2 + 0) in the computer's eyes
This is 3: 00000011 (= 2 + 1) in the computer's eyes
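A short Python check reproduces exactly this 8-bit view:

```python
# Print 1, 2, 3 as the 8-bit binary patterns the computer stores.
for n in (1, 2, 3):
    print(n, "->", format(n, "08b"))
# 1 -> 00000001
# 2 -> 00000010
# 3 -> 00000011
```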
Binary bit-by-bit addition:
Case 1: 0 + 0 = 0
Case 2: 0 + 1 = 1 + 0 = 1
Case 3: 1 + 1 = 0, with a carry
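Those three cases are all a binary adder needs. Here is a minimal Python sketch (the helper name is my own) that follows them bit by bit, carry included:

```python
def add_binary(a: str, b: str) -> str:
    # Bit-by-bit addition over two binary strings, least significant bit
    # first, applying the three cases: 0+0=0, 0+1=1+0=1, 1+1=0 with carry.
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, bits = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        total = int(x) + int(y) + carry
        bits.append(str(total % 2))  # the bit written in this position
        carry = total // 2           # case 3 produces a carry of 1
    if carry:
        bits.append("1")
    return "".join(reversed(bits))

print(add_binary("00000001", "00000010"))  # 00000011, i.e. 1 + 2 = 3
```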

