
Decentralization of machine learning

1. Compared with the traditional centralized financial system, a decentralized financial platform has three advantages:
A. individuals with asset-management needs do not need to trust any intermediary; trust is instead rebuilt on machines and code
B. everyone has access, and no one has central control
C. all protocols are open source, so anyone can build new financial products on top of them and accelerate financial innovation under the network effect
DeFi is a fairly broad concept that includes currency issuance, currency trading, lending, asset trading, investment and financing, and so on
we regard the birth of BTC and other cryptocurrencies as the first stage of decentralized finance. However, decentralizing currency issuance and storage only provides a peer-to-peer settlement solution, which is not enough to support rich financial business. The rapid development of decentralized lending protocols over the past two years has the opportunity to further open up the financial system of the blockchain world and bring decentralized finance into its second stage

2. At present, mainstream machine learning is divided into supervised learning, unsupervised learning, and reinforcement learning
supervised learning:
supervised learning can be divided into "regression" and "classification"
in a regression problem we predict a continuous value; that is, we try to map the input variables to the output with a continuous function. In a classification problem we predict a discrete value, trying to map the input variables to discrete categories
each data point is labeled, with either a category label or a value label. An example of a category label: classifying an image as "apple" or "orange". An example of a value label: predicting the price of a used house. The purpose of supervised learning is to learn from many labeled samples and then make predictions for new data, for example accurately identifying the fruit in a new photo (classification) or predicting the price of a used house (regression)
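a minimal sketch of both kinds of task, using scikit-learn and made-up toy data (the library choice and the numbers are illustrative assumptions, not from the original text):

    # Supervised learning sketch: a classifier and a regressor on toy data.
    from sklearn.linear_model import LinearRegression, LogisticRegression

    # Classification: feature vectors [weight_g, redness]; labels 0 = apple, 1 = orange
    X_cls = [[150, 0.9], [170, 0.8], [140, 0.2], [130, 0.3]]
    y_cls = [0, 0, 1, 1]
    clf = LogisticRegression().fit(X_cls, y_cls)
    print(clf.predict([[160, 0.85]]))   # predicts a discrete category label

    # Regression: feature vectors [area_m2, age_years]; target is a house price
    X_reg = [[60, 10], [80, 5], [100, 2], [120, 1]]
    y_reg = [1.2, 2.0, 3.1, 4.0]
    reg = LinearRegression().fit(X_reg, y_reg)
    print(reg.predict([[90, 3]]))       # predicts a continuous value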
unsupervised learning:
in unsupervised learning, we basically do not know in advance what the result will look like, but we can extract structure from the data, for example by clustering
the data given in unsupervised learning differ from those in supervised learning: the data points have no associated labels. Instead, the goal of an unsupervised learning algorithm is to organize the data in some way and discover its internal structure. This includes clustering the data, or finding a simpler representation of complex data so that it looks simpler
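as a sketch, clustering with scikit-learn's KMeans (an assumed choice; the points are invented) groups unlabeled data purely by its internal structure:

    # Unsupervised learning sketch: k-means finds groups without any labels.
    import numpy as np
    from sklearn.cluster import KMeans

    X = np.array([[1.0, 1.1], [1.2, 0.9], [0.9, 1.0],    # one natural group
                  [8.0, 8.2], [8.1, 7.9], [7.9, 8.0]])   # another natural group
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
    print(labels)   # e.g. [0 0 0 1 1 1]: structure discovered, no labels given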
reinforcement learning:
AlphaGo uses reinforcement learning. Reinforcement learning is a learning model that does not directly give you the solution; you have to find the solution through trial and error
reinforcement learning does not need labels. The better the moves you choose, the more positive feedback you get. So you can learn to play Go by performing actions and seeing whether you win or lose, without ever being told what is good and what is bad
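to make the trial-and-error idea concrete, here is a minimal tabular Q-learning sketch on a toy corridor environment (the environment and all parameters are invented for illustration):

    # Q-learning sketch: the agent is never told the right move; it learns from
    # reward feedback alone, by trying actions and seeing what happens.
    import random

    N_STATES, GOAL = 5, 4                        # states 0..4; reward only at state 4
    Q = [[0.0, 0.0] for _ in range(N_STATES)]    # Q[state][action]; 0 = left, 1 = right
    alpha, gamma, eps = 0.5, 0.9, 0.1

    for _ in range(200):                         # episodes of trial and error
        s = 0
        while s != GOAL:
            if random.random() < eps or Q[s][0] == Q[s][1]:
                a = random.randrange(2)          # explore (and break ties randomly)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
            r = 1.0 if s2 == GOAL else 0.0       # feedback: win (1) or nothing (0)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2

    print([0 if Q[s][0] > Q[s][1] else 1 for s in range(GOAL)])  # learned policy: all 1 (go right)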
what influenced me most was an offline class at Chaos University given by Michael I. Jordan, a heavyweight of AI. One of the videos showed a simulated figure learning to stand up and run using a reinforcement learning algorithm, and the actual code was less than 100 lines, a single page of PPT
summary:
at present, supervised learning and unsupervised learning are the most commonly used, especially supervised learning, because most of its application scenarios create direct value for a company, so it deserves more attention if you are looking for a job
but reinforcement learning is the future, because its ability to learn is not limited by labeled data.
3. In 1956, several computer scientists gathered at the Dartmouth Conference and put forward the concept of "artificial intelligence". Since then, artificial intelligence has lingered in people's minds and slowly incubated in research laboratories. In the following decades, artificial intelligence has swung between extremes: hailed as the herald of a brilliant future for human civilization, or thrown in the trash as the fantasy of technological maniacs. To be honest, until 2012 both voices existed at the same time

in the past few years, especially since 2015, artificial intelligence has begun to explode. A large part of this is due to the wide application of GPUs, which make parallel computing faster, cheaper, and more efficient. Of course, the combination of practically unlimited storage and a sudden flood of data of every kind (big data) also plays a role: image data, text data, transaction data, and mapping data have all burst out

let's sort out how computer scientists developed artificial intelligence from its earliest glimmers into applications used by hundreds of millions of users every day

Artificial Intelligence - giving machines human intelligence

King me: a program that plays checkers was a typical application of early artificial intelligence, and it set off a wave in the 1950s. (Translator's note: when a checkers piece reaches the back rank, it becomes a king, and a king piece can also move backward.) As early as the Dartmouth meeting in the summer of 1956, the pioneers of artificial intelligence dreamed of using the computers that had just appeared to construct complex machines with the same essential characteristics as human intelligence. This is what we call "strong artificial intelligence" (general AI): an omnipotent machine that has all our senses (even more than we have), all our reason, and can think as we do

people keep seeing such machines in movies: friendly ones, like C-3PO in Star Wars; evil ones, like the Terminator. Strong AI exists only in movies and science fiction for now, and the reason is not hard to understand: we cannot yet realize it, at least not yet

what we can achieve at present is generally called "narrow AI" (weak artificial intelligence): technology that can perform a specific task as well as, or better than, humans. For example, image classification on Pinterest, or face recognition on Facebook

these are examples of weak AI in practice. These technologies realize specific aspects of human intelligence. But how do they work? Where does this intelligence come from? That brings us to the inner circle of the concentric circles: machine learning

Machine learning -- a way to achieve artificial intelligence

Spam-free diet: machine learning can help you filter (most of) the spam in your email. (Translator's note: the English word "spam" comes from Spam, a canned lunch-meat brand that the United States supplied in large quantities as aid during World War II. Until the 1960s, British agriculture had not recovered from its wartime losses, so Britain imported large quantities of cheap canned meat products from the United States. It is said not to have been very tasty, and it flooded the market.)

the most basic approach of machine learning is to use algorithms to analyze data, learn from it, and then make decisions and predictions about events in the real world. Unlike traditional hard-coded software programs, machine learning "trains" on large amounts of data, using various algorithms to learn from the data how to complete the task

machine learning comes directly from the early field of artificial intelligence. Traditional algorithms include decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks. As we all know, we have not yet achieved strong AI, and early machine learning methods could not even achieve weak artificial intelligence

the most successful application area of machine learning is computer vision, although it still needed a great deal of hand coding to get the job done. People had to write classifiers and edge-detection filters by hand so that a program could recognize where an object starts and ends; write shape-detection code to judge whether a detected object has eight edges; and write a classifier to recognize the letters "S-T-O-P". From these hand-written classifiers, people could finally develop algorithms to analyze an image and decide whether it is a stop sign
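as an illustration of what that hand-coding looked like, here is a manually written Sobel edge-detection filter (NumPy only; the synthetic image is an assumption for the example):

    # Hand-coded feature engineering sketch: a Sobel filter marks where an
    # object's edges start and end, with no learning involved.
    import numpy as np

    def sobel_edges(img):
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T
        h, w = img.shape
        out = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                patch = img[i:i + 3, j:j + 3]
                gx, gy = (patch * kx).sum(), (patch * ky).sum()
                out[i, j] = np.hypot(gx, gy)   # gradient magnitude: large at edges
        return out

    img = np.zeros((8, 8)); img[:, 4:] = 1.0   # synthetic image with a vertical edge
    print(sobel_edges(img).round(1))           # strong responses along the boundary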

the result was not bad, but not the kind of success that inspires people. Especially on a cloudy day, when the sign is less visible, or partly blocked by a tree, the algorithm struggled. That is why, until fairly recently, the performance of computer vision could not come close to human ability: it was too rigid and too easily disturbed by environmental conditions

with the advance of time, the development of learning algorithms changed everything

Deep learning -- a technique for implementing machine learning

herding cats: searching for cat images in YouTube videos was the first outstanding demonstration of deep learning. (Translator's note: "herding cats" is an English idiom describing a chaotic situation in which the task is nearly impossible to complete.)

artificial neural networks (ANNs) were an important algorithm in early machine learning and have been through decades of ups and downs. The principle of neural networks is inspired by the physiology of our brain: interconnected neurons. But unlike the brain, where a neuron can connect to any other neuron within a certain distance, an artificial neural network has discrete layers, connections, and directions of data propagation

for example, we can cut an image into blocks and feed them to the first layer of the neural network. Each neuron in the first layer passes data on to the second layer; the neurons of the second layer do the same, passing data to the third layer, and so on, until the last layer generates the result

each neuron assigns a weight to its input according to how relevant it is to the task being performed. The final output is determined by the total of those weighted inputs

take the stop sign as an example again. All the elements of a stop-sign image are broken down and "examined" by the neurons: the octagonal shape, the fire-engine red color, the prominent letters, the typical size of a traffic sign, the fact that it does not move, and so on. The task of the neural network is to conclude whether this is a stop sign or not. Based on all the weights, the neural network gives a carefully considered guess: a "probability vector"

in this example, the system might give a result like: 86% probability it is a stop sign; 7% probability a speed limit sign; 5% probability a kite hanging in a tree; and so on. The network is then told whether its conclusion is correct
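the mechanics can be sketched in a few lines of NumPy (layer sizes and weights are random placeholders, so the printed probabilities are meaningless; the point is the weighted sums and the final probability vector):

    # Forward pass sketch: weighted sums layer by layer, then a softmax that
    # turns the class scores into a probability vector like the one above.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.random(16)                        # a flattened image patch (toy input)
    W1 = rng.normal(size=(8, 16))             # layer-1 weights (random placeholder)
    W2 = rng.normal(size=(3, 8))              # layer-2 weights (random placeholder)

    h = np.maximum(0, W1 @ x)                 # layer 1: weighted sums + ReLU
    scores = W2 @ h                           # layer 2: weighted sums -> class scores
    probs = np.exp(scores) / np.exp(scores).sum()   # softmax: the probability vector

    for name, p in zip(["stop sign", "speed limit", "other"], probs):
        print(f"{name}: {p:.0%}")             # one probability per candidate class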

even this much was quite advanced at the time. Until recently, neural networks had been largely neglected by the AI community. They existed in the earliest days of artificial intelligence, but they had contributed very little to "intelligence". The main problem was that even the most basic neural networks demanded heavy computation, and the hardware of the time could not meet the computational requirements of neural-network algorithms

however, a few dedicated research teams, represented by Geoffrey Hinton of the University of Toronto, persisted, running the algorithms in parallel on supercomputers to prove the concept. But it was not until GPUs came into wide use that these efforts bore fruit

let's go back to the stop-sign recognition example. The neural network has to be tuned and trained, and while immature it makes mistakes from time to time. What it needs most is training: hundreds of thousands, even millions of images, until the input weights of the neurons are tuned so precisely that it gets the right answer every time, whether it is foggy, sunny, or rainy

only at this point can we say that the neural network has successfully learned what a stop sign looks like; or, in Facebook's app, learned to recognize your mother's face; or, as Professor Andrew Ng did at Google in 2012, learned what a cat looks like, and so on

Professor Ng's breakthrough was to take these neural networks and scale them up substantially: many layers, many neurons, and massive amounts of data fed into the system to train the network. In his case, the data were images from 10 million YouTube videos. Professor Ng put the "deep" in deep learning: the "depth" refers to the many layers in the neural network

now, image recognition by machines trained via deep learning can even outperform humans in some scenarios: from recognizing cats, to identifying early indicators of cancer in blood, to identifying tumors in MRI scans. Google's AlphaGo first learned how to play Go, then trained against itself: the way it trained its neural network was to play against itself over and over, never stopping

Deep learning gives AI a bright future

deep learning has enabled many practical applications of machine learning and expanded the overall field of AI. Deep learning breaks tasks down in ways that make all kinds of machine assistance possible. Driverless cars, preventive health care, and even better movie recommendations are all around the corner or on the way

AI is the present, and the future. With deep learning, AI may even reach the science-fiction state we have long imagined. I'll take the C-3PO; you can keep your Terminator.
4.

In short, machine learning is a method for realizing artificial intelligence, and deep learning is a technique for realizing machine learning. In realizing artificial intelligence, machine learning needs manual assistance (it is semi-automatic), while deep learning makes the process fully automatic.

the relationship among the three:

for example, to identify whether a fruit is an orange or an apple with a machine learning algorithm, the characteristic data of the fruit must be fed in manually to generate an algorithmic model that can then accurately predict the type of fruit with those characteristics; deep learning, by contrast, can discover the characteristics automatically and then make the judgment

5.

classification based on learning strategies

learning strategies refer to the reasoning strategies adopted by the system during learning. A learning system is always composed of a learning part and an environment. The environment (such as books or teachers) provides information, while the learning part transforms that information, remembers it in a comprehensible form, and extracts useful knowledge from it. During learning, the less reasoning the student (the learning part) uses, the more it depends on the teacher (the environment), and the heavier the teacher's burden. Learning strategies are classified by how much reasoning the student must do, and how difficult that reasoning is, to achieve the information conversion. In order from simple to complex, they can be divided into the following six basic types:

1) rote learning

learners directly absorb the information provided by the environment without any reasoning or other knowledge conversion, as in Samuel's checkers program and Newell and Simon's LT (Logic Theorist) system. This kind of learning system mainly considers how to index stored knowledge and make use of it. The system learns directly through pre-programmed, pre-constructed programs; the learner does no work, simply receiving established facts and data without performing any inference on the input information
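a rote learner can be sketched as nothing more than a lookup table (a toy illustration; the keys and values are made up):

    # Rote learning sketch: store facts verbatim, retrieve by index, infer nothing.
    memory = {}

    def learn(situation, result):
        memory[situation] = result        # memorize the fact exactly as given

    def recall(situation):
        return memory.get(situation)      # direct lookup; no generalization at all

    learn(("board_state_A", "move_7"), "win")
    print(recall(("board_state_A", "move_7")))   # -> "win"
    print(recall(("board_state_B", "move_2")))   # -> None: unseen cases stay unknown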

2) learning from instruction or learning by being told

students obtain information from the environment (teachers or other information sources such as textbooks), transform the knowledge into an internally usable form, and organically combine the new knowledge with existing knowledge. Students are therefore required to have a certain degree of reasoning ability, but the environment still does most of the work. Teachers present and organize knowledge in some form so that students' knowledge grows continuously. This method resembles classroom teaching in human society; the task is to build a system that can accept instruction and suggestions and effectively store and apply the learned knowledge. Many expert systems use this method to acquire knowledge when building their knowledge bases. A typical application of learning from instruction is the FOO program

3) learning by deduction

students use deductive reasoning: reasoning starts from axioms and deduces conclusions through logical transformation. This reasoning is a "truth-preserving" process of transformation and specialization, through which students acquire useful knowledge. This learning method includes macro-operation learning, knowledge editing, and chunking techniques. The reverse process of deductive reasoning is inductive reasoning

4) learning by analogy

using the similarity of knowledge in two different domains (a source domain and a target domain), we can derive the corresponding knowledge of the target domain from the knowledge of the source domain (including similar features and other properties) through analogy, thereby realizing learning. An analogical learning system can transfer an existing computer application system to a new field to perform similar functions it was not originally designed for

analogical learning requires more reasoning than the three learning methods above. It generally involves retrieving usable knowledge from the knowledge source (the source domain), transforming it into a new form, and applying it in the new situation (the target domain). Analogical learning has played an important role in the history of science and technology; many scientific discoveries were made by analogy. For example, the famous Rutherford analogy revealed the mystery of atomic structure by comparing the atom (the target domain) with the solar system (the source domain)

5) explanation based learning (EBL)

given a goal concept provided by the teacher, an example of the concept, a domain theory, and operationality criteria, the student first constructs an explanation of why the example satisfies the goal concept, and then generalizes that explanation into a sufficient condition for the goal concept that satisfies the operationality criteria. EBL has been widely used in knowledge-base refinement and system performance improvement

famous EBL systems include GENESIS by G. DeJong, LEX-2 and LEAP by T. Mitchell, and PRODIGY by S. Minton

6) learning from induction

in inductive learning, the teacher or the environment provides examples or counterexamples of a concept, and the student obtains a general description of the concept through inductive reasoning. The reasoning workload of this kind of learning is much greater than that of learning from instruction or learning by deduction, because the environment does not provide a general description of the concept (such as axioms). To some extent, inductive learning also requires more reasoning than analogical learning, because there is no similar concept that can be taken over directly as a "source concept". Inductive learning is the most basic and most mature learning method, and it has been widely studied and applied in the field of artificial intelligence

classification based on the representation of acquired knowledge

the knowledge acquired by a learning system may include behavior rules, descriptions of physical objects, problem-solving strategies, various classifications, and other types of knowledge used to perform tasks

for the knowledge acquired in learning, there are mainly the following forms:

1) algebraic expression parameters

the goal of learning is to adjust the algebraic expression parameters or coefficients of a fixed function form to achieve an ideal performance

2) decision tree

a decision tree is used to classify objects. Each internal node of the tree corresponds to an object attribute, each branch corresponds to an optional value of that attribute, and the leaf nodes of the tree correspond to the basic classes of objects
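a minimal sketch with scikit-learn's DecisionTreeClassifier (an assumed library choice; the fruit data are invented):

    # Decision tree sketch: internal nodes test attributes, branches carry
    # attribute values, leaves hold the object classes.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy data: [weight_g, skin_smoothness]; classes 0 = apple, 1 = orange
    X = [[150, 0.9], [160, 0.8], [170, 0.3], [180, 0.2]]
    y = [0, 0, 1, 1]

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree, feature_names=["weight_g", "smoothness"]))
    print(tree.predict([[165, 0.25]]))   # walks the tree from root to a leaf class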

3) formal grammar

in the learning of identifying a specific language, a series of expressions of the language are summarized to form the formal grammar of the language

4) production rules

production rules are expressed as condition-action pairs and have been widely used. The main learning operations in a learning system are the generation, generalization, specialization, and synthesis of production rules

5) formal logic expressions

the basic components of formal logic expressions are propositions, predicates, variables, statements restricting the range of variables, and embedded logic expressions

6) graphs and networks

some systems use graph matching and graph transformation schemes to compare and index knowledge efficiently

7) frames and schemas

each frame contains a set of slots that describe all aspects of a thing (a concept or an individual)

8) procedures and process code

computer programs and other procedural encodings acquire this form of knowledge in order to gain the ability to carry out a specific process, not to infer the internal structure of the process

9) neural networks

these are mainly used in connectionist learning; the knowledge acquired through learning is ultimately summarized as a neural network

10) combinations of multiple representations

sometimes the knowledge acquired by a learning system requires a combination of several of the above representation forms

by granularity, knowledge representations can be divided into two categories: coarse-grained symbolic representations with a high degree of generalization, and fine-grained sub-symbolic representations with a low degree of generalization. For example, decision trees, formal grammars, production rules, formal logic expressions, and frames and schemas belong to the symbolic class, while algebraic expression parameters, graphs and networks, and neural networks belong to the sub-symbolic class

classified by application fields

the main application fields are: expert systems, cognitive simulation, planning and problem solving, data mining, network information services, image recognition, fault diagnosis, natural language understanding, robotics, and games

from the perspective of the task types performed by the execution part of machine learning, most applied research falls into two basic categories: classification and problem solving

(1) classification tasks require the system to analyze an unknown input pattern (a description of the pattern) using known classification knowledge, so as to determine the category of the input pattern; the corresponding learning goal is to learn the classification criteria (such as classification rules)

(2) problem-solving tasks require the system, for a given target state, to find a sequence of actions that transforms the current state into the target state; most machine learning research in this area focuses on acquiring, through learning, knowledge that improves problem-solving efficiency (such as search control knowledge and heuristic knowledge)

comprehensive classification

considering together the historical origins of the various learning methods, their knowledge representations, reasoning strategies, similarity of result evaluation, the relative concentration of researcher communities, application fields, and other factors, machine learning methods [1] can be divided into the following six categories:

1) empirical inctive learning

empirical inductive learning uses data-intensive empirical methods (such as the version space method, the ID3 method, and the law discovery method) for inductive learning. Its examples and learning results are generally represented with symbols such as attributes, predicates, and relations. It corresponds to inductive learning in the classification by learning strategy, minus the parts covered by connection learning, genetic algorithms, and reinforcement learning

2) analytical learning

the analytical learning method starts from one or a few examples and uses domain knowledge for analysis. Its main characteristics are:

· the inference strategy is mainly deductive rather than inductive

· use past problem solving experience (examples) to guide new problem solving, or generate search control rules that can use domain knowledge more effectively

the goal of analytical learning is to improve the performance of the system, not to describe new concepts. Analytical learning includes explanation-based learning, deductive learning, multi-level structural chunking, and macro-operation learning

3) analogical learning

this corresponds to analogical learning in the classification by learning strategy. Within this type, the more attractive line of research is learning by comparison with specific past cases, which is called case-based learning, or case learning for short

4) genetic algorithm

genetic algorithms simulate the mutation and crossover of biological reproduction and Darwinian natural selection (survival of the fittest in each ecological environment). They encode a possible solution to the problem as a vector, called an individual, each element of which is called a gene. The algorithm evaluates each individual in the population (the set of individuals) with an objective function (corresponding to the criterion of natural selection) and, according to the evaluation value (fitness), performs genetic operations such as selection, crossover, and mutation to obtain a new population. Genetic algorithms suit complex, difficult environments, for example ones with much noise and irrelevant data, where things constantly change, where the problem objective cannot be clearly and precisely defined, and where the value of a current action can only be determined through a long execution process. Like neural networks, research on genetic algorithms has developed into an independent branch of artificial intelligence, whose representative figure is J. H. Holland
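the loop described above can be sketched in a few lines (every parameter here is an invented illustration, with a deliberately simple fitness function):

    # Genetic algorithm sketch: individuals are bit vectors (genes), fitness is
    # the objective function, and selection + crossover + mutation evolve them.
    import random

    def fitness(ind):                        # toy objective: maximize the number of 1s
        return sum(ind)

    POP, LEN, GENS = 20, 16, 40
    pop = [[random.randint(0, 1) for _ in range(LEN)] for _ in range(POP)]

    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP // 2]           # selection: the fittest half survives
        children = []
        while len(children) < POP - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, LEN)
            child = a[:cut] + b[cut:]        # crossover: exchange gene segments
            if random.random() < 0.1:        # mutation: occasionally flip a gene
                child[random.randrange(LEN)] ^= 1
            children.append(child)
        pop = survivors + children

    print(max(fitness(ind) for ind in pop))  # approaches LEN as evolution proceeds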

5) connection learning

a typical connectionist model is the artificial neural network, which is composed of simple computing units called neurons and weighted connections between the units

6) reinforcement learning

reinforcement learning is characterized by learning through trial and error: the system is given no labels or ready-made solutions, and adjusts its behavior according to the feedback (reward) it receives from the environment after each action

6. Machine learning
machine learning is the study of how a computer simulates or realizes human learning behavior in order to acquire new knowledge or skills and reorganize existing knowledge structures so as to continually improve its own performance. It is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied in every field of artificial intelligence, and it mainly uses induction and synthesis rather than deduction

learning ability is a very important feature of intelligent behavior, but the mechanism of learning is still unclear, and people have given various definitions of machine learning. According to H. A. Simon, learning is an adaptive change made by a system that allows it to complete the same or a similar task more effectively the next time. According to R. S. Michalski, learning is the construction or modification of representations of what has been experienced. People engaged in developing expert systems hold that learning is the acquisition of knowledge. The first definition emphasizes the external behavioral effect of learning, the second emphasizes the internal process of learning, and the third focuses on the practical concerns of knowledge engineering

machine learning plays an important role in artificial intelligence research. An intelligent system without the ability to learn can hardly be called truly intelligent, yet earlier intelligent systems generally lacked this ability: they could not correct themselves when they made errors, could not improve their performance through experience, and could not automatically acquire and discover the knowledge they needed. Their reasoning was limited to deduction, lacking induction, so at most they could prove existing facts and theorems but could not discover new theorems, laws, or rules. As artificial intelligence developed, these limitations became more and more prominent. It is in this context that machine learning gradually became one of the cores of artificial intelligence research. It is now widely applied across the branches of artificial intelligence, such as expert systems, automated reasoning, natural language understanding, pattern recognition, computer vision, and intelligent robotics. A most typical case is the knowledge-acquisition bottleneck in expert systems, which people have long tried to overcome with machine learning

machine learning research is based on the understanding of human learning mechanisms in physiology, cognitive science, and related fields; it aims to build computational or cognitive models of the human learning process, to develop various learning theories and methods, to study general learning algorithms and their theoretical analysis, and to build task-oriented learning systems for specific applications. These research goals influence and promote one another

since the first machine learning symposium was held at Carnegie Mellon University in 1980, research on machine learning has developed rapidly and become one of the central topics of the field. At present, research in machine learning mainly focuses on the following three aspects:

(1) task-oriented research: studying and analyzing learning systems that improve performance on a predetermined set of tasks

(2) cognitive modeling: studying the human learning process and simulating it on a computer

(3) theoretical analysis: exploring, in theory, all possible learning methods and algorithms independent of any application domain

machine learning is another important application area of artificial intelligence after expert systems, and one of the core research topics of artificial intelligence and neural computing. Existing computer systems and artificial-intelligence systems have little or very limited learning ability, and thus cannot meet the new demands of science, technology, and production. This chapter first introduces the definition, significance, and brief history of machine learning, then discusses its main strategies and basic structure, and finally examines the various methods and techniques of machine learning one by one, including rote learning, explanation-based learning, case-based learning, concept learning, analogical learning, and learning based on training neural networks. The study of machine learning and the progress of machine learning research will promote the further development of artificial intelligence and of science and technology as a whole

First, the definition and research significance of machine learning

learning is an important intelligent behavior of human beings, but what learning is has long been debated; sociologists, logicians, and psychologists all have different views. According to Simon, a master of artificial intelligence, learning is the enhancement or improvement of a system's own capability in repeated work, so that the next time the system performs the same or a similar task, it does so better or more efficiently than before. Simon's definition of learning itself points to its important role

can machines have learning ability like human beings? In 1959, Samuel in the United States designed a checkers program with the ability to learn: it could improve its skill through continued play. Four years later, the program beat its designer; three years after that, it defeated a champion in the United States who had held his title for eight years. This program showed people the power of machine learning and raised many social and philosophical questions

whether a machine's ability can surpass its designer's is a key argument of many skeptics: a machine is man-made, and its performance and actions are entirely determined by its designer, so its ability can never exceed the designer's own. This opinion holds for machines that cannot learn, but it is questionable for machines that can, because such a machine's ability keeps improving in use; after a while, the designer himself no longer knows what level it has reached

what is machine learning? So far there is no unified definition of "machine learning", and it is difficult to give one that is accurate and universally accepted. Yet to discuss and assess the progress of the discipline, a definition is needed, even an incomplete and inadequate one. As the name suggests, machine learning is the discipline that studies how to use machines to simulate human learning activities. A slightly stricter formulation: machine learning is the study of machines acquiring new knowledge and new skills and recognizing existing knowledge. The "machine" here means the computer: today the electronic computer, and in the future perhaps the neutron computer, the photon computer, or the neural computer

Second, the development history of machine learning

machine learning is a relatively young branch of artificial intelligence research, and its development process can be divided into four periods
the first stage, from the mid-1950s to the mid-1960s, was a period of enthusiasm
the second stage, from the mid-1960s to the mid-1970s, is known as the calm period of machine learning
the third stage, from the mid-1970s to the mid-1980s, is called the renaissance

the latest stage of machine learning began in 1986

(1) machine learning has become a new interdisciplinary subject and formed courses in colleges and universities. It integrates psychology, biology, neurophysiology, mathematics, automation, and computer science to form the theoretical basis of machine learning

(2) integrated learning systems of various forms are emerging, combining different learning methods that complement one another. In particular, the coupling of connectionist learning and symbolic learning can better solve the acquisition and refinement of knowledge and skills in continuous signal processing

(3) a unified view of machine learning and artificial intelligence is forming. For example, the idea of combining learning with problem solving in a knowledge representation that is easy to learn produced the chunking learning of the general intelligent system SOAR, and case-based learning, which combines analogical learning with problem solving, has become an important direction of empirical learning

(4) the application scope of the various learning methods keeps expanding, and some have become commercial products. Knowledge-acquisition tools based on inductive learning are widely used in diagnosis and classification expert systems; connectionist learning dominates in speech and image recognition; analytical learning has been used to design comprehensive expert systems; genetic algorithms and reinforcement learning have good application prospects in engineering control; and connectionist learning coupled with symbolic systems will play an important role in intelligent management and intelligent robot motion planning

(5) academic activities related to machine learning are unprecedentedly active. In addition to the annual machine learning workshops, there are also the conference on computational learning theory and the genetic algorithm conference

Thirdly, the main strategies of machine learning

learning is a complex intelligent activity, and the learning process is closely related to the reasoning process. By the amount of reasoning used in learning, the strategies of machine learning can be divided into four types: rote learning, learning from instruction, learning by analogy, and learning from examples. The more reasoning used in learning, the more capable the system

Fourth, the basic structure of a machine learning system

the environment provides information to the learning part of the system; the learning part uses this information to modify the knowledge base so as to improve the efficiency with which the execution part completes its tasks; the execution part performs tasks according to the knowledge base and feeds the resulting information back to the learning part. In a concrete application, the environment, the knowledge base, and the execution part determine the specific work content, and the problems the learning part must solve are entirely determined by these three parts. Below we describe how these three parts affect the design of a learning system

the most important factor affecting the design of a learning system is the information provided by the environment, or more precisely, the quality of that information. The knowledge base holds general principles that guide the actions of the execution part, but the information the environment provides to the learning system comes in many forms. If the information is of high quality and differs little from the general principles, the learning part's job is easier. If the learning system is given disorderly, specific information that guides specific actions, it must delete unnecessary details after gathering enough data, then summarize and generalize it into general principles for guiding action and place them in the knowledge base. In that case the learning part's task is heavy and the design is harder

because the information a learning system obtains is often incomplete, its reasoning is not completely reliable, and the rules it summarizes may or may not be correct. This must be tested by the effect of execution: correct rules, which improve the efficiency of the system, should be retained; incorrect rules should be revised or deleted from the knowledge base

the knowledge base is the second factor affecting the design of a learning system. Knowledge can be represented in many forms, such as feature vectors, first-order logic statements, production rules, semantic networks, and frames. Each representation has its own characteristics, and when choosing one, the following four aspects should be considered:

(1) strong expressive ability; (2) ease of reasoning
7.

Michael Jordan is known as the father of machine learning

Professor Michael I. Jordan, currently teaching at the University of California, Berkeley, is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences; he is said to be the only scientist in the field of artificial intelligence to have achieved this, and he is recognized as one of the pioneers of machine learning

at the Global Mobile Internet Conference, Yann LeCun, chief AI scientist of Facebook's AI team, Michael I. Jordan of the University of California, Berkeley, and Kai-Fu Lee, chairman and CEO of Sinovation Ventures, discussed the present and future of AI

on May 27, 2017, Jordan signed with Ant Financial. At the Ant Technology Day held that day, Michael I. Jordan, a world-class leader in the field of artificial intelligence, accepted the letter of appointment from Jing Xiandong, CEO of Ant Financial, and officially became chairman of Ant Financial's newly established scientific think tank

Extended materials:

Michael I. Jordan has had many notable students, such as Yoshua Bengio, Zoubin Ghahramani, and Andrew Ng (Wu Enda), the former chief scientist of Baidu

Michael I. Jordan attended the ML Summit 2018 global machine learning technology conference hosted by Boolan at the Crowne Plaza Shanghai on September 22-23, 2018, where he delivered a keynote speech on the frontiers of machine learning, elaborating on the latest developments in the field and the latest research results of his machine learning team

8.

Error = bias² + variance (for squared error, plus an irreducible noise term)

the error reflects the accuracy of the whole model; the bias reflects the gap between the model's output and the true value, that is, the accuracy of the model itself; and the variance reflects the spread of the model's outputs around their expected value, that is, the stability of the model. Take target shooting as an example: the goal is to hit the 10 ring, but in fact only the 7 ring is hit, so the error here is 3. Analyzing why we got 7, there are two causes: one is an aiming problem (bias), for example we actually aimed at the 9 ring rather than the 10 ring; the other is the stability of the gun itself (variance): although we aimed at the 9 ring, we hit only the 7

so the more shots we take and average, the closer we get to the point of aim. Looking more closely at why a model's predictions deviate greatly from their expectation: with the model fixed, the cause lies in the data, for example outliers. In the most extreme case, suppose only one point is abnormal; if only one model is trained on all the data, this one point affects the whole model and can make the learned model very different

however, if k-fold cross-validation is used for training, only one of the models is affected by this abnormal data point, while the remaining k-1 models are normal; on average, the impact of the abnormal data is greatly reduced. Bias, in contrast, can be addressed directly in modeling: to keep the training error small, all the data should be trained together to reach the model's optimal solution. The objective of k-fold cross-validation breaks this condition, so the bias of the models is bound to increase
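a minimal sketch of the k-fold procedure with scikit-learn (assumed library; the data, including the single injected outlier, are synthetic):

    # k-fold cross-validation sketch: the outlier lands in only some folds, so
    # averaging the k scores damps its influence on the estimate.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(0)
    X = rng.random((50, 1))
    y = 3 * X[:, 0] + rng.normal(0, 0.1, 50)
    y[0] += 10                               # one abnormal data point (outlier)

    scores = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        model = LinearRegression().fit(X[train_idx], y[train_idx])
        scores.append(model.score(X[test_idx], y[test_idx]))
    print(np.mean(scores))                   # averaged estimate across the k folds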

9. What is "integrated information"?
the goal of machine learning is to learn regularities from data, embodied in a mathematical model. Whether the learned prediction function is the "integrated information" you have in mind is open to discussion
10. Every time virtualization or cloud computing comes up, administrators voice the same complaint: "the data center network can't keep up with the development of cloud computing." The computing and storage capacity of data centers has improved greatly over the past decade, while the network still uses the old architecture and has not kept pace with this evolution. With the rapid development of cloud computing and the mobile Internet, enterprises' demand to greatly improve data center capacity grows ever stronger
why we need network virtualization

the traditional three-tier architecture is collapsing in the new world of big data and cloud computing, and large layer-2 technology is becoming more and more popular. After the hardware devices in a data center are virtualized, they can be further pooled logically, and the logical resource pool can span multiple data centers, providing virtual data centers on top of it for users. In this way, multiple discrete, hierarchical, heterogeneous data centers are connected into a new cloud data center. From this point of view, virtualization of the network becomes absolutely necessary: it provides an engine for elastic, scalable workloads instead of managing the connections between discrete physical elements separately
in essence, network virtualization is the natural and necessary evolution of server virtualization. It allows an entire data center to be managed as a pool of computing and storage resources that can meet the workload requirements of dynamic applications
what kind of network virtualization is suitable for the future cloud computing data center
SDN offers one way to attack the problem. However, SDN solves only part of the problems of current networks, not all of them:
Problem 1, flexible extension of functions: to realize software-defined network functions, the device infrastructure must be flexible and programmable, and flexible function extension requires an open, flexible controller platform architecture
Problem 2, smooth evolution: no customer can completely abandon the existing network and build a new one. The next-generation network must be deployable directly in the current network and allow a smooth transition in order to survive. This requires the controller to have open northbound and southbound interfaces so that it can adapt to the traditional network
for the future cloud computing data center, a network virtualization solution must ride the wave of computing and storage virtualization, quickly provision cloud services, and meet the needs of dynamic application workloads; at the same time, it must help administrators manage the physical and virtual networks more simply and make the network visible
openness is also a measure of a complete network virtualization solution. Only by providing rich northbound and southbound interfaces and open APIs, and by meeting the interconnection requirements of the industry's mainstream cloud platforms, can it keep up with the rapid development of cloud computing; openness also means that different plug-ins can be developed to adapt to the existing network and achieve smooth network evolution
how does the Huawei Agile Controller build a future-oriented network virtualization solution
the agile network is Huawei's next-generation network solution for the enterprise market. Based on SDN ideas and three major architectural innovations, it lets the network serve the business quickly and flexibly, enables enterprises to innovate four times faster, and helps them seize the initiative in fierce competition
the Agile Controller, the intelligent brain of the data center
Huawei's Agile Controller aims to build a simple, efficient, and open cloud data center network for customers, integrating the cloud and the network, supporting the rapid growth of enterprise cloud business, and making the data center network serve cloud services more agilely
First, efficient service: automatic, rapid provisioning of network resources
in cloud computing, storage and virtual machines are already provisioned automatically on demand, and the Huawei Agile Controller brings the same automation to network resources. Applying for network resources becomes as convenient and efficient as applying for a virtual machine, so cloud services go online faster and the time to launch a service is greatly reduced
Second, simple operation and maintenance: coordinated management of virtual and physical networks
the Huawei Agile Controller manages and controls physical and virtual networks in a coordinated way, supporting unified management of physical and virtual resources (physical networks, virtual machines, virtual switches, distributed virtual switches, and so on); network visualization makes management simpler and greatly reduces the administrator's burden

in the data center network, another important problem is virtual machine migration. The Agile Controller automatically adapts network policies to high-speed migration. Unlike other approaches in the industry, the Agile Controller issues network policies through a high-speed RADIUS interface, which greatly improves the speed of network policy deployment, reportedly 10 to 20 times the industry norm, and can meet the sudden migration needs of masses of virtual machines in the data center; combined with Huawei's rich layer-2 network solutions (TRILL / EVN, etc.), VMware virtual machines can migrate freely within and across data centers, making cloud business deployment more flexible
Third, openness: interconnection with mainstream cloud platforms
the agile network is a programmable system that moves from hardware-defined to software-defined networking. The Huawei Agile Controller provides rich northbound and southbound interfaces and open APIs, makes both the forwarding plane and the control plane programmable, and connects with customers' existing equipment and business systems, improving end-to-end operation and maintenance efficiency, speeding up the launch of new services, and creating an environment for rapid innovation in enterprises. There are many cloud platforms in the industry, such as Huawei FusionSphere, VMware vCAC, and OpenStack; the Huawei Agile Controller supports interconnection with them and is committed to building a flexible, open platform that integrates best practices from various fields, so users can define the network flexibly according to business needs and get it on demand
we have no doubt that data centers are growing fast. How should the infrastructure support this growth? Virtualization is just one part of it, and future networks may need many more features. Among so many characteristics, how should one choose in building one's own network? Huawei's agile network has absorbed the essence of SDN while also providing for the smooth evolution of the existing network, and the simplicity, efficiency, and openness of the Agile Controller lay a solid foundation for successfully building the network of the future
cloud computing makes network applications wonderful and application innovation easier; the network is the cornerstone of cloud computing, and without the network there is no cloud computing. The development of cloud computing puts higher demands on the network

the Agile Controller has emerged as the times require, reducing for customers the difficulty of operating a cloud platform. With the Agile Controller, automated middleware helps users manage devices. At the same time, Huawei's Agile Controller is an open platform: it opens its northbound and southbound interfaces, giving industry customers room to customize and building agile business practice together with partners, so they can focus more on business change and transformation and realize integrated ICT cloud management. It greatly improves the efficiency of cloud computing deployment and management, and makes the physical network, like computing and storage resources, a part of the cloud; network and computing work together and are visible to each other, making cloud computing simple.