
Enjoy dream blockchain

Publish: 2021-04-25 10:58:38
1. I have written many blockchain project analyses, from the early Bitcoin, Ethereum, and EOS pieces to various application-oriented projects such as exchanges, currencies, U Like, and other content-oriented projects, including drafts commissioned by project teams. Over time I settled on a fixed pattern, and anyone who has read my articles will recognize it. I analyze a project white paper against the framework "project introduction, launch background, project innovation, project team, development trends, token and market, project risk, competitive product analysis, suggestions and summary", matching the white paper's content to that framework: some sections get filled in, others are left out. Next I gather supplementary material as evidence, organize my thinking, and then write the analysis in my own words, so that it truly is my own. The process comes down to three steps: first, collect information (white papers, search engines, official articles, etc.); second, apply the framework template; third, trim the content down into your own understanding.
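The three-step workflow above can be sketched as a small script. This is only an illustration of the described process; the template list mirrors the section names in the article, while the function and variable names are made up.

```python
# Hypothetical sketch of the three-step project-analysis workflow:
# collect notes, apply the fixed section template, drop what has no material.

ANALYSIS_TEMPLATE = [
    "project introduction",
    "launch background",
    "project innovation",
    "project team",
    "development trends",
    "token and market",
    "project risk",
    "competitive product analysis",
    "suggestions and summary",
]

def analyze_project(collected_notes):
    """Steps 2-3: fill the fixed template from collected notes, leaving out empty sections."""
    report = {}
    for section in ANALYSIS_TEMPLATE:
        content = collected_notes.get(section)
        if content:  # sections without supporting material are simply not written out
            report[section] = content
    return report

notes = {"project introduction": "A payment-focused chain.",
         "project risk": "Unclear roadmap."}
print(analyze_project(notes))
```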
2. I mean the price. Price tracks computing power: the higher a miner's hash rate, the higher the probability of earning the reward. The current valuation looks too low, and many miners are moving their computing power in. I was optimistic a while ago, so this is not surprising. Mining can stay viable for a while, and a rising price means even more computing power will come in.
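The proportionality claim above is simple expected-value arithmetic. A minimal sketch, with entirely made-up numbers (the function name and parameters are illustrative, not from any real tool):

```python
# Expected mining reward is proportional to your share of total network hash rate.

def expected_daily_reward(my_hashrate, network_hashrate, blocks_per_day, reward_per_block):
    share = my_hashrate / network_hashrate  # probability of winning any given block
    return share * blocks_per_day * reward_per_block

# e.g. 1% of the network hash rate, 144 blocks per day, 6.25 coins per block
print(expected_daily_reward(1.0, 100.0, 144, 6.25))  # → 9.0 coins/day
```

Doubling `my_hashrate` doubles the expected reward, which is why rising prices pull more computing power into the network.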
3. At present, blockchain has not been widely accepted. I suggest you not be so impulsive!
4. MapReduce - MapReduce is a programming model for processing large data sets with cluster-parallel, distributed algorithms. Apache MapReduce derives from Google's paper "MapReduce: Simplified Data Processing on Large Clusters". The current version of Apache MapReduce is built on the Apache YARN framework (YARN = "Yet Another Resource Negotiator"). YARN can also run non-MapReduce applications; it is Apache Hadoop's attempt to move beyond MapReduce's data-processing limits.
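The MapReduce model described above can be illustrated with the classic word-count example. This is a minimal in-memory sketch of the three phases (map, shuffle, reduce) in plain Python, not real Hadoop code:

```python
# Word count in the MapReduce style: a map phase emits (key, value) pairs,
# a shuffle groups them by key, and a reduce phase aggregates each group.

from collections import defaultdict

def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

data = ["big data big cluster", "data flows"]
print(reduce_phase(shuffle(map_phase(data))))
# → {'big': 2, 'data': 2, 'cluster': 1, 'flows': 1}
```

On a real cluster, each phase runs in parallel across many machines; the model stays the same.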

HDFS - the Hadoop Distributed File System (HDFS) stores large files across multiple machines. Hadoop and HDFS derive from the Google File System (GFS). Before Hadoop 2.0.0, the NameNode was a single point of failure (SPOF) in an HDFS cluster. The high-availability features of ZooKeeper and HDFS solve this by letting two redundant NameNodes run in the same cluster in an active/passive configuration.
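The active/passive NameNode pattern above amounts to a failover between two nodes. A toy in-memory illustration (real HDFS HA relies on ZooKeeper and fencing for this coordination; the class and method names here are invented):

```python
# Two NameNodes, one active: if the active one fails, the standby takes over,
# so the NameNode is no longer a single point of failure.

class NameNodePair:
    def __init__(self, first, second):
        self.active, self.standby = first, second

    def fail_active(self):
        # Failover: promote the standby, demote the failed node.
        self.active, self.standby = self.standby, self.active

pair = NameNodePair("nn1", "nn2")
print(pair.active)   # → nn1
pair.fail_active()
print(pair.active)   # → nn2
```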
HBase - inspired by Google Bigtable, HBase is an open-source implementation of it, and the parallels are direct: Bigtable uses GFS as its file storage system, HBase uses Hadoop HDFS; Google runs MapReduce to process the massive data in Bigtable, HBase uses Hadoop MapReduce; Bigtable uses Chubby as its coordination service, HBase uses ZooKeeper as its counterpart.
Hive - data-warehouse infrastructure developed by Facebook for data aggregation, query, and analysis. Hive provides an SQL-like language (not SQL-92 compatible) called HiveQL.
Pig - Pig provides an engine for executing data flows in parallel on Hadoop, using HDFS for storage and MapReduce for processing. It includes a language, Pig Latin, for expressing those data flows. Pig Latin covers a large set of traditional data operations (join, sort, filter, etc.) and lets users write their own functions to read, process, and write data. Pig compiles a Pig Latin script into a series of one or more MapReduce jobs and executes them. Pig Latin does not look like most programming languages: it has no if statements or for loops.
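The dataflow operations named above (filter, join) are declarative steps rather than control flow. A rough Python analogue of a two-step FILTER-then-JOIN flow, with made-up record layouts (Pig Latin itself cannot run outside Hadoop, so this only mirrors the shape of the pipeline):

```python
# A Pig-style dataflow simulated in plain Python: each step consumes one
# relation and produces another, with no if statements or for-loop control flow
# in the logical plan itself.

users  = [("alice", "US"), ("bob", "DE"), ("carol", "US")]
visits = [("alice", "/home"), ("carol", "/docs"), ("dave", "/home")]

# FILTER users BY country == 'US';
us_users = [u for u in users if u[1] == "US"]

# JOIN us_users BY name, visits BY name;
joined = [(name, country, page)
          for (name, country) in us_users
          for (vname, page) in visits
          if name == vname]

print(joined)  # → [('alice', 'US', '/home'), ('carol', 'US', '/docs')]
```

In Pig, each of these steps would become part of one or more MapReduce jobs executed on the cluster.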
ZooKeeper - ZooKeeper is a formal Hadoop subproject: a reliable coordination system for large distributed systems. Its functions include configuration maintenance, naming services, distributed synchronization, and group services. ZooKeeper's goal is to wrap complex, error-prone coordination primitives behind a simple, easy-to-use interface backed by a high-performance, stable system. It is an open-source implementation of Google's Chubby, a highly available and reliable coordination service. In a distributed environment, ZooKeeper can be used for leader election, maintaining configuration information, electing a master instance, storing shared configuration, and guaranteeing the consistency of writes.
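The leader election mentioned above is commonly built on ZooKeeper's sequential ephemeral nodes: each client creates a numbered node, and the client holding the lowest number is the leader. The following is an in-memory simulation of that idea, not the real ZooKeeper API (the class and method names are invented):

```python
# Simulated ZooKeeper-style leader election: clients register sequential nodes;
# the lowest sequence number is the leader. When a client's session ends, its
# "ephemeral" node disappears and leadership passes to the next-lowest number.

class ElectionZnode:
    def __init__(self):
        self.counter = 0
        self.nodes = {}  # sequence number -> client id

    def register(self, client_id):
        seq = self.counter
        self.counter += 1
        self.nodes[seq] = client_id
        return seq

    def leader(self):
        return self.nodes[min(self.nodes)]

    def disconnect(self, seq):
        # An ephemeral node vanishes when its owning session ends.
        del self.nodes[seq]

election = ElectionZnode()
a = election.register("client-a")
b = election.register("client-b")
print(election.leader())    # → client-a
election.disconnect(a)
print(election.leader())    # → client-b
```

Real ZooKeeper adds watches so each client observes only the node just below its own, avoiding a thundering herd on failover.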
Mahout - a machine-learning and mathematics library built on MapReduce.
5. Master Lu's readings are wrong; switch to other software, such as CPU-Z or AIDA64.