
Docker open 2375 mining

Publish: 2021-05-02 18:59:55
1. The command itself is fine; check your nginx, then use docker ps to see the actual port mapping, then enter the container with docker exec and try curl localhost from inside
2. Configure Docker's networking so the container can serve external traffic (bridge mode and port mapping)

Preface:
after Docker starts a container, how does it serve external traffic? I hope this article helps you with that
if Docker's networking is not yet clear to you, read on
a container created by Docker is generally assigned an IP address in the same subnet as docker0
with the ip a command we can see the IP and subnet of docker0. You will notice that besides docker0 there is a veth* virtual NIC; this is the container's end of the virtual cable attached to the bridge

suppose we create a container and expose port 22. Exposing port 22 means the container's port 22 is reachable from the outside; the system assigns it a host port from the 49000-49900 range
docker run can specify ports in two ways. One is -P (uppercase), which publishes the ports declared by EXPOSE in the Dockerfile to random host ports. The other is -p (lowercase), which is explicit: for example, -p 6379 exposes container port 6379 on a random host port, while -p 6379:6379 maps host port 6379 to container port 6379
root@dev-ops:~# docker run -d -p 22 --name="redis_test" rastasheep/ubuntu-sshd

root@dev-ops:~#
root@dev-ops:~# docker ps -a
CONTAINER ID   IMAGE                           COMMAND             CREATED         STATUS         PORTS                   NAMES
ed7887b93aa4   rastasheep/ubuntu-sshd:latest   /usr/sbin/sshd -D   7 seconds ago   Up 7 seconds   0.0.0.0:49153->22/tcp   redis_test
root@dev-ops:~#
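As a quick illustration, the allocated host port can be recovered from that PORTS field with plain shell string handling; a minimal sketch, using the sample value from the output above:

```shell
#!/bin/sh
# Sketch: extract the host port Docker allocated from a `docker ps` PORTS field.
# The sample string below is copied from the output above.
ports="0.0.0.0:49153->22/tcp"
hostside="${ports%%->*}"     # drop everything from "->" on: 0.0.0.0:49153
hostport="${hostside##*:}"   # keep what follows the last ":": 49153
echo "$hostport"
```

In a live session the same string would come from `docker ps` or `docker port <container>` instead of a hardcoded variable.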

originally I thought Docker had written its own socket proxy to do the port mapping. After reading the documentation I learned it simply installs iptables port-mapping rules:
iptables -t nat -L

Chain PREROUTING (policy ACCEPT)
target      prot opt source              destination
DOCKER      all  --  anywhere            anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target      prot opt source              destination

Chain OUTPUT (policy ACCEPT)
target      prot opt source              destination
DOCKER      all  --  anywhere            !127.0.0.0/8         ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target      prot opt source              destination
MASQUERADE  all  --  172.17.0.0/16       !172.17.0.0/16

Chain DOCKER (2 references)
target      prot opt source              destination
DNAT        tcp  --  anywhere            anywhere             tcp dpt:49153 to:172.17.0.2:22
root@dev-ops:~#

with the host's IP and the container's IP in hand, there is nothing to fear: map whatever you want yourself
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to 172.31.0.23:80
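To make that rule reusable, here is a hedged sketch that only composes the iptables command for a given interface, port and target (make_dnat is a hypothetical helper; applying the rule needs root, so the sketch echoes it instead of executing it):

```shell
#!/bin/sh
# Sketch: compose, without executing, the DNAT rule shown above.
# make_dnat is a hypothetical helper; applying its output needs root.
make_dnat() {
    # $1 = inbound interface, $2 = tcp dport, $3 = target ip:port
    echo "iptables -t nat -A PREROUTING -i $1 -p tcp --dport $2 -j DNAT --to $3"
}
rule=$(make_dnat eth0 80 172.31.0.23:80)
echo "$rule"
```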

containers are interoperable by default, that is, two containers can communicate with each other. If you want to restrict inter-container communication, use the Docker daemon's --icc=false option

OK, let's talk about bridging. If you find it troublesome to set up or modify the port mapping every time, consider the bridged-NIC mode. Some forums seem to advise against bridging, probably for safety's sake: NAT is safer toward the outside world, since only the service's port is exposed, whereas bridging exposes the container's IP.

stop the docker service
sudo service docker stop
use the ip command to bring the docker0 interface down
sudo ip link set dev docker0 down
delete the bridge
sudo brctl delbr docker0
create a bridge named bridge0
sudo brctl addbr bridge0
assign an IP address and subnet
sudo ip addr add 192.168.5.1/24 dev bridge0
bring the bridge up
sudo ip link set dev bridge0 up
write the configuration
echo 'DOCKER_OPTS="-b=bridge0"' >> /etc/default/docker
sudo service docker start
this bridging method has a problem: the IP is assigned by Docker's own probing, and it does not seem to use DHCP to pick a free address for the container. When I tested this again yesterday, one IP was already occupied, but Docker still handed that occupied address to a container, causing a conflict. If the subnet overlaps with your company's or production network, it may cause IP address conflicts. That I hit an address conflict after bridging is my personal conclusion; it may also have been caused by my environment!
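Given that conflict risk, one defensive step is to check the routing table before choosing the bridge subnet. A minimal sketch, assuming `ip route` is available (the candidate /24 ranges are arbitrary choices, not anything Docker mandates):

```shell
#!/bin/sh
# Sketch: pick a private /24 for bridge0 that is absent from the local routing
# table, to avoid the address conflicts described above. Candidates are arbitrary.
routes=$(ip route 2>/dev/null || true)
subnet=""
for net in 192.168.5 10.200.0 172.30.0; do
    case "$routes" in
        *"$net."*) continue ;;              # this range is already routed, skip it
        *)         subnet="$net.1/24"; break ;;
    esac
done
echo "chosen: $subnet"
```

The chosen value would then be the one assigned to bridge0 with ip addr add.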
3. The vulnerability was discovered because a student using Docker Swarm noticed that each managed Docker node opens TCP port 2375, bound to 0.0.0.0
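To see whether a node is affected, the exposed API can be probed directly; a minimal sketch, where the target host is an assumption and /version is a standard Docker Engine API endpoint:

```shell
#!/bin/sh
# Sketch: probe a host for an unauthenticated Docker Engine API on TCP 2375.
# HOST defaults to localhost here; pass the node's address as the first argument.
HOST="${1:-127.0.0.1}"
if curl -fsS --max-time 3 "http://$HOST:2375/version" >/dev/null 2>&1; then
    status="open"
else
    status="closed-or-filtered"
fi
echo "$HOST:2375 is $status"
```

An "open" result means anyone who can reach the port can control the daemon, which is exactly what the miners abused.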
4. We can run most GUI programs in a Docker container without trouble. Docker is an open-source project that provides a lightweight container for packaging, distributing and running arbitrary programs. It imposes no restrictions on language, framework or packaging system and can run anywhere, any time, from small home computers to high-end servers. It lets people package network applications, databases and back-end services for deployment and scaling without depending on a particular stack or provider

here are the simple steps for running a GUI program in a Docker container. In this tutorial we'll use Firefox as the example

1. Install docker

before we start, we must make sure Docker is installed on the Linux host. Here I am running a CentOS 7 host, so we will install Docker with the yum package manager using the following commands

# yum install docker

# systemctl restart docker.service

2. Create dockerfile

now that the docker daemon is running, we are ready to create our own Firefox
docker container. We need to create a Dockerfile in which we enter the configuration required to build a working Firefox container. To run the docker
image we use the latest CentOS. To create the docker image, open a file named Dockerfile with a text editor

# nano Dockerfile

next, add the following lines to the Dockerfile and save it. Note that the user-setup commands are chained in a single RUN so that the exported uid/gid variables stay in scope:

FROM centos:7
RUN yum install -y firefox
# Replace the 0s below with your own uid/gid
RUN export uid=0 gid=0 && \
    mkdir -p /home/developer && \
    echo "developer:x:${uid}:${gid}:Developer,,,:/home/developer:/bin/bash" >> /etc/passwd && \
    echo "developer:x:${uid}:" >> /etc/group && \
    echo "developer ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
    chmod 0440 /etc/sudoers && \
    chown ${uid}:${gid} -R /home/developer
USER developer
ENV HOME /home/developer
CMD /usr/bin/firefox

note: in the RUN export line of the configuration, replace 0 with your own user and group ID. We can get the uid and gid in a shell or terminal with the following command

# id $USER
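For scripted builds, the two values can be captured directly; a small sketch:

```shell
#!/bin/sh
# Sketch: capture the current user's uid/gid, to substitute into the
# Dockerfile's "RUN export uid=... gid=..." line.
uid=$(id -u)
gid=$(id -g)
echo "uid=$uid gid=$gid"
```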

3. Build the docker container

next, we build an image from the Dockerfile above. It installs the Firefox browser and the packages it needs, then sets up the user permissions to make it work. Here the image is named firefox; you can name it as you wish.

# docker build --rm -t firefox .

4. Run the docker container

now, if all went well, we can run our GUI program, the Firefox browser, inside a docker container based on the CentOS 7 image

# docker run -ti --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix firefox
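Since that run command is easy to mistype, here is a dry-run sketch that only composes it (the ":0" fallback for DISPLAY is an assumption for hosts where DISPLAY is unset):

```shell
#!/bin/sh
# Sketch: compose, without executing, the X11-forwarding docker run command above.
# The ":0" fallback for DISPLAY is an assumption.
display="${DISPLAY:-:0}"
cmd="docker run -ti --rm -e DISPLAY=$display -v /tmp/.X11-unix:/tmp/.X11-unix firefox"
echo "$cmd"
```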

summary
running GUI programs in Docker containers is a great experience, and it does no harm to your host filesystem; everything stays inside the docker container.
5.

Qingyun qingcloud recently announced the launch of its Docker image registry service. The service includes a public Docker image registry and a Harbor private image registry; users can choose the appropriate registry scheme according to their needs. The launch of the Docker image registry marks a further step in the completion of the QingCloud container platform, which covers a series of container applications and services including Kubernetes container orchestration and management, the Harbor private image registry, the public Docker image registry, the etcd key-value store service and the SDN passthrough network service, together with partners in container fields such as Rancher and Xiyun. It helps users develop, deploy and upgrade container-related applications quickly, and greatly reduces the threshold for developing and managing container applications

the public Docker image registry launched this time provides users, free of charge, with a secure, reliable, easy-to-use, compatible and open centralized storage and distribution service for Docker images; it supports creating multiple Docker namespaces and multiple Docker users, for flexible management of users' images. The public registry is built on QingStor object storage underneath, providing users with massive image storage. In addition, QingCloud also provides the Harbor private image registry, making it convenient for users to deploy a highly available, secure and high-performance private image registry with one click

the QingCloud container platform is a complete container deployment and management platform delivered through the QingCloud AppCenter. It supports multiple container deployment methods and provides container management functions such as image registries, scheduling and orchestration, service discovery and cross-platform management. The platform fully integrates the high-performance network and storage capacity of the QingCloud IaaS platform, providing strong performance guarantees for the container platform and allowing enterprise users to deploy a highly available, reliable and high-performance container platform with one click

Qingyun qingcloud's complete enterprise-level container service platform has the following highlights:

deep integration with the cloud platform: deeply integrated with the QingCloud platform, fully leveraging the underlying SDS (software-defined storage) and SDN (software-defined networking) capabilities of QingCloud IaaS, providing SDN network passthrough and storage persistence solutions, and giving the container runtime environment top network and storage performance

one-click deployment, light operation and maintenance: applications are delivered through the QingCloud AppCenter framework, with one-click deployment and continuous upgrades, providing application lifecycle management functions such as creation, scaling, health monitoring and user management, along with complete service monitoring and logging, which is the best way to practice DevOps

compatibility and openness: the QingCloud Kubernetes container service is fully compatible with the native API syntax, minimizing users' learning and migration costs. Native applications developed on Kubernetes can also be migrated seamlessly to the QingCloud platform

unified architecture: QingCloud IaaS manages and operates virtual hosts, physical hosts (bare-metal service) and containers under the same technical architecture, enabling seamless interworking and resource sharing under a unified network and storage environment and avoiding system silos

According to Gan Quan, CTO of QingCloud, combining QingCloud's technical strengths in IaaS and AppCenter, its container platform can provide enterprise users with one-click deployment, elastic scaling and high-performance container services, and supplies the technical platform for users to easily build Docker services, DevOps platforms and microservice architectures. Going forward, QingCloud will participate deeply in container open-source projects and join more partners in the container field to provide users with one-stop container platform services

6. Create a redis docker container

first, let's create a dockerfile

FROM ubuntu:12.10
RUN apt-get update
RUN apt-get -y install redis-server
EXPOSE 6379
ENTRYPOINT ["/usr/bin/redis-server"]
now you need to create an image from the Dockerfile; replace <your-name> with your own name:

sudo docker build -t <your-name>/redis .
run the service

use the redis image we just created

use -d to run in detached mode, letting the container run in the background

it is important that we do not publish the container's port; instead, we will connect another container to the redis container's database

sudo docker run --name redis -d <your-name>/redis
create your web application container

now we can create our application container. We use the --link parameter to link to the redis container under the alias db. This creates a secure tunnel between the application container and the redis instance in the redis container

sudo docker run --link redis:db -i -t ubuntu:12.10 /bin/bash
inside the container we just created, we need to install Redis's redis-cli binary to test the connection

apt-get update
apt-get -y install redis-server
service redis-server stop
now we can test the connection. First we need to check the environment variables of the web application container; we can use them to get the IP and port for connecting to the redis container

env
.
DB_NAME=/violet_wolf/db
DB_PORT_6379_TCP_PORT=6379
DB_PORT=tcp://172.17.0.33:6379
DB_PORT_6379_TCP=tcp://172.17.0.33:6379
DB_PORT_6379_TCP_ADDR=172.17.0.33
DB_PORT_6379_TCP_PROTO=tcp
we can see a list of environment variables prefixed with DB; the DB prefix comes from the alias we gave the linked container. Let's use the DB_PORT_6379_TCP_ADDR variable to connect to the redis container:

redis-cli -h $DB_PORT_6379_TCP_ADDR
redis 172.17.0.33:6379> set docker awesome
OK
redis 172.17.0.33:6379> get docker
"awesome"
redis 172.17.0.33:6379> exit
we can just as easily use this or the other environment variables in our web application container to connect to the redis container
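The same variables can be split with plain shell expansion when a tool needs the host and port separately; a minimal sketch using the sample value from the env listing above:

```shell
#!/bin/sh
# Sketch: split the --link-injected DB_PORT_6379_TCP value into host and port.
# The sample value is copied from the env listing above.
DB_PORT_6379_TCP="tcp://172.17.0.33:6379"
stripped="${DB_PORT_6379_TCP#tcp://}"   # 172.17.0.33:6379
db_host="${stripped%:*}"                # 172.17.0.33
db_port="${stripped##*:}"               # 6379
echo "redis-cli -h $db_host -p $db_port"
```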
7. Sometimes, looking at the system monitor, you will notice a strange phenomenon: swap space is being used even though physical memory utilization is below 50%. Using swap is obviously slower than using physical memory. How do we change this?
in Ubuntu, the swappiness value controls how eagerly the swap partition is used
swappiness=0 means maximize use of physical memory before touching swap; swappiness=100 means use the swap partition aggressively, moving data from memory into swap promptly. Between these extremes, Ubuntu's default is 60, and changing it to 10 is the usual suggestion. To do this:
1. Check the swappiness on your system: enter cat /proc/sys/vm/swappiness in a terminal. The result should be 60
2. Change the swappiness value to 10: enter sudo gedit /etc/sysctl.conf in a terminal, add vm.swappiness=10 as the last line, and save
3. Restart the computer for the setting to take effect
this way, Ubuntu makes maximum use of physical memory!
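The current value can be read safely before changing anything; a small sketch (the proc file exists only on Linux, so it falls back to a placeholder elsewhere):

```shell
#!/bin/sh
# Sketch: read the current swappiness if /proc is available (Linux only).
f=/proc/sys/vm/swappiness
if [ -r "$f" ]; then
    current=$(cat "$f")
else
    current="unavailable"
fi
echo "swappiness: $current"
```

On a stock Ubuntu this should print 60 before the change and 10 after a reboot with the new sysctl setting.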
8. Check the node's Docker configuration: 1. Open the Docker configuration file (CentOS 7 for example): vim /etc/sysconfig/docker 2. Add -H tcp://0.0.0.0:2375 to OPTIONS: OPTIONS='-g /cutome-path/docker -H tcp://0.0.0.0:2375' 3. CentOS 6.6 also needs -H unix:///var/run/docker.sock added
10. Step 1 - create dockerfile

the following Dockerfile meets the above requirements:

FROM golang:1.6

# Install beego and the bee dev tool
RUN go get github.com/astaxie/beego && go get github.com/beego/bee

# Expose the application on port 8080
EXPOSE 8080

# Set the entry point of the container to the bee command that runs the
# application and watches for changes
CMD ["bee", "run"]

the first line,
FROM golang:1.6

takes the official Go image as the base image. The image comes with Go 1.6 preinstalled and has $GOPATH set to /go, so all packages installed in /go/src are accessible to the go command

the second line,
RUN go get github.com/astaxie/beego && go get github.com/beego/bee

installs the beego package and the bee tool. The beego package is used in the application; the bee tool live-reloads our code during development

the third line,
EXPOSE 8080

opens port 8080 on the container for the application on the development host

the last line,
CMD ["bee", "run"]

uses the bee command to start live reloading of the application. Once the Dockerfile is created, run the following command to create an image:
docker build -t ma-image .

executing the above command creates an image named ma-image. The image can now be used by anyone working on this application, ensuring the team shares a unified development environment

to view the list of images on your system, run the following command:
docker images

this will output something similar to the following:
REPOSITORY          TAG       IMAGE ID        CREATED          SIZE
ma-image            latest    8d53aa0dd0cb    31 seconds ago   784.7 MB
golang              1.6       22a6ecf1f7cc    5 days ago       743.9 MB

Note that the exact IDs and sizes may differ, but you should at least see golang and ma-image in the list

Step 3 - run the container

once ma-image is built, you can start a container with the following command:
docker run -it --rm --name ma-instance -p 8080:8080 \
    -v /app/MathApp:/go/src/MathApp -w /go/src/MathApp ma-image

let's analyze the above command to see what it does.

The docker run command is used to start a container from an image

-it starts the container in interactive mode

--rm cleans up the container after it exits

--name ma-instance names the container ma-instance

-p 8080:8080 makes the container reachable through port 8080

-v /app/MathApp:/go/src/MathApp is more involved: it maps the host's /app/MathApp to the container's /go/src/MathApp, making the development files accessible both inside and outside the container.

The ma-image part declares the image used for the container
executing the above command starts the Docker container. The container exposes port 8080 for the application, and whenever you make a change it automatically rebuilds the application. You will see output like the following on the console:
bee: 1.4.1
beego: 1.6.1
go: go version go1.6 linux/amd64

2016/04/10 13:04:15 [INFO] Uses 'MathApp' as 'appname'
2016/04/10 13:04:15 [INFO] Initializing watcher...
2016/04/10 13:04:15 [TRAC] Directory(/go/src/MathApp)
2016/04/10 13:04:15 [INFO] Start building...
2016/04/10 13:04:18 [SUCC] Build was successful
2016/04/10 13:04:18 [INFO] Restarting MathApp ...
2016/04/10 13:04:18 [INFO] ./MathApp is running...
2016/04/10 13:04:18 [asm_amd64.s:1998][I] http server Running on :8080