
Installing RabbitMQ on CentOS 7 with yum

Method 1
1. Download Erlang

wget http://www.rabbitmq.com/releases/erlang/erlang-19.0.4-1.el7.centos.x86_64.rpm
2. Install Erlang

rpm -ivh erlang-19.0.4-1.el7.centos.x86_64.rpm
Check that the installation succeeded:

[root@centos7 src]# erl
Erlang/OTP 19 [erts-8.0.3] [source] [64-bit] [async-threads:10]…………

Eshell V8.0.3 (abort with ^G)
1>
3. Download RabbitMQ

wget http://www.rabbitmq.com/releases/rabbitmq-server/v3.6.6/rabbitmq-server-3.6.6-1.el7.noarch.rpm
4. Install RabbitMQ

rpm -ivh rabbitmq-server-3.6.6-1.el7.noarch.rpm

# If you see an error like the following:
warning: rabbitmq-server-3.6.6-1.el7.noarch.rpm: Header V4 RSA/SHA1 Signature,
key ID 6026dfca: NOKEY
error: Failed dependencies:
socat is needed by rabbitmq-server-3.6.6-1.el7.noarch

# Install socat, then re-run the rpm -ivh command above
yum install socat -y
Method 2
1. Set up the EPEL repository

yum install epel-release
2. Install

yum install rabbitmq-server
5. Start the service

# Start
systemctl start rabbitmq-server

# Check status
systemctl status rabbitmq-server

# Stop
systemctl stop rabbitmq-server
6. Enable the web management plugin (the web UI listens on port 15672)

# Enable the plugin
rabbitmq-plugins enable rabbitmq_management
7. Configure the firewall

# Open the port
vi /etc/sysconfig/iptables
-A INPUT -p tcp -m state --state NEW -m tcp --dport 15672 -j ACCEPT

# Restart iptables to apply
systemctl restart iptables
8. Create a user, password, and permissions

# Add user mq with password mq123
rabbitmqctl add_user mq mq123

# Grant permissions
rabbitmqctl set_permissions -p / mq ".*" ".*" ".*"

# Set the user's role
rabbitmqctl set_user_tags mq administrator

# Other operations
# Delete a user
rabbitmqctl delete_user Username

# Change a user's password
rabbitmqctl change_password Username Newpassword

# List users
rabbitmqctl list_users
9. Log in

http://192.168.1.72:15672
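To sanity-check the setup from a script, the management plugin's HTTP API can be queried with curl. A minimal sketch, assuming the mq/mq123 user created above and a broker running locally:

```shell
# check_mgmt USER PASS - returns 0 if the RabbitMQ management API on
# localhost:15672 answers with the given credentials
check_mgmt() {
    curl -fsS -u "$1:$2" http://localhost:15672/api/overview >/dev/null 2>&1
}

# usage:
if check_mgmt mq mq123; then
    echo "management API reachable"
else
    echo "management API not reachable - is rabbitmq-server running?"
fi
```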

 

Error loading MySQLdb module: 'Did you install mysqlclient or MySQL-python?'


Faced the same problem after migrating to Python 3. Apparently, MySQL-python is incompatible with it, so, as per the official Django docs, I installed mysqlclient using pip install mysqlclient on Mac. Note that there are some OS-specific issues mentioned in the docs.

Quoting from docs:

Prerequisites

You may need to install the Python and MySQL development headers and libraries like so:

sudo apt-get install python-dev default-libmysqlclient-dev # Debian / Ubuntu

sudo yum install python-devel mysql-devel # Red Hat / CentOS

brew install mysql-connector-c # macOS (Homebrew) (Currently, it has bug. See below)

On Windows, there are binary wheels you can install without MySQLConnector/C or MSVC.

Note on Python 3 : if you are using python3 then you need to install python3-dev using the following command :

sudo apt-get install python3-dev # debian / Ubuntu

sudo yum install python3-devel # Red Hat / CentOS

Note about bug of MySQL Connector/C on macOS

See also: https://bugs.mysql.com/bug.php?id=86971

Versions of MySQL Connector/C may have incorrect default configuration options that cause compilation errors when mysqlclient-python is installed. (As of November 2017, this is known to be true for homebrew’s mysql-connector-c and official package)
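As an alternative to compiling mysqlclient, the pure-Python PyMySQL driver can masquerade as MySQLdb – a common workaround for Django setups. A hedged sketch (not part of the quoted docs; assumes `pip install pymysql`):

```python
import importlib.util

def mysql_driver_available():
    """Return the name of the first importable MySQL driver, or None."""
    for name in ("MySQLdb", "pymysql"):
        if importlib.util.find_spec(name) is not None:
            return name
    return None

# If mysqlclient is missing but PyMySQL is installed, let PyMySQL stand in
# for MySQLdb (typically placed in Django's manage.py or settings module):
if mysql_driver_available() == "pymysql":
    import pymysql
    pymysql.install_as_MySQLdb()
```

PyMySQL is slower than the C-based mysqlclient but needs no development headers.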

 

Installing SaltStack on CentOS 8

  1. Run the following commands to install the SaltStack repository and key:
    sudo rpm --import https://repo.saltproject.io/py3/redhat/8/x86_64/latest/SALTSTACK-GPG-KEY.pub
    curl -fsSL https://repo.saltproject.io/py3/redhat/8/x86_64/latest.repo | sudo tee /etc/yum.repos.d/salt.repo
  2. Run sudo yum clean expire-cache
  3. Install the salt-minion, salt-master, or other Salt components:
    • sudo yum install salt-master
    • sudo yum install salt-minion
    • sudo yum install salt-ssh
    • sudo yum install salt-syndic
    • sudo yum install salt-cloud
    • sudo yum install salt-api
  4. Enable and start service for salt-minion, salt-master, or other Salt components:
    • sudo systemctl enable salt-master && sudo systemctl start salt-master
    • sudo systemctl enable salt-minion && sudo systemctl start salt-minion
    • sudo systemctl enable salt-syndic && sudo systemctl start salt-syndic
    • sudo systemctl enable salt-api && sudo systemctl start salt-api
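After installing salt-minion, it has to be pointed at the master before the services above will connect. A minimal sketch of /etc/salt/minion; the address and id are illustrative:

```yaml
# /etc/salt/minion
master: 192.168.1.10   # illustrative master address
id: web01              # optional; defaults to the hostname
```

After editing, restart with sudo systemctl restart salt-minion, then accept the minion's key on the master with sudo salt-key -A.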

Offline Nginx installation and configuration on CentOS 7

Download the offline package:

Open the following address in a browser and pick the version you want: http://nginx.org/packages/centos/7/x86_64/RPMS/

I downloaded version 1.16.1.

Run the install:

Upload the downloaded rpm package to the server, then run the install:

sudo yum install -y nginx-1.16.1-1.el7.ngx.x86_64.rpm

Start Nginx and enable it at boot

sudo service nginx start
# or
sudo systemctl start nginx.service
sudo systemctl enable nginx.service

Check the Nginx version

nginx -v

Check whether Nginx is running

sudo service nginx status
sudo systemctl status nginx.service

If it is running, the status output will show active (running).

Stop the Nginx service

sudo service nginx stop
# or
sudo systemctl stop nginx.service

Find where Nginx is located:

whereis nginx
sudo whereis nginx

With a default install, the configuration file is usually /etc/nginx/nginx.conf.
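To serve your own site, a new server block can be dropped next to the default config. A minimal illustrative sketch – the filename, domain, and paths are placeholders:

```nginx
# /etc/nginx/conf.d/example.conf
server {
    listen      80;
    server_name example.com;

    location / {
        root  /usr/share/nginx/html;
        index index.html;
    }
}
```

After editing, validate with nginx -t and apply with nginx -s reload.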

Started as root, nginx listens on port 80 by default.

If your browser still cannot reach the nginx welcome page, the server firewall is probably blocking port 80. Either open port 80 (for example: sudo firewall-cmd --permanent --add-port=80/tcp && sudo firewall-cmd --reload) or simply disable the firewall:

sudo systemctl stop firewalld.service
sudo systemctl disable firewalld.service

Then visit the page again; it should load normally.

Uninstall Nginx

yum remove nginx

Check that the configuration file is valid

nginx -t -c /usr/nginx/conf/nginx.conf 
# or
/usr/nginx/sbin/nginx -t

 

Reload Nginx (re-reads the configuration without a full restart)

nginx -s reload
# or
/usr/nginx/sbin/nginx -s reload

 

After installation, an nginx user is normally created automatically; you can verify with id nginx.

MySQL: auto-generating creation/update timestamps and auto-updating the update time

Example:

create table `user_info` (
    `id` bigint unsigned not null auto_increment comment '自增ID',
    `name` varchar(45) not null default '' comment '用户名',
    `created_at` datetime(3) NOT NULL DEFAULT CURRENT_TIMESTAMP(3) COMMENT '创建时间',
    `updated_at` datetime(3) NOT NULL DEFAULT CURRENT_TIMESTAMP(3) ON UPDATE CURRENT_TIMESTAMP(3) COMMENT '修改时间',
    primary key (`id`)
) engine = InnoDB character set = utf8mb4;
mysql> insert into user_info (name) values('letianbiji.com');
mysql> select * from user_info;
+----+----------------+-------------------------+-------------------------+
| id | name           | created_at              | updated_at              |
+----+----------------+-------------------------+-------------------------+
|  1 | letianbiji.com | 2020-06-06 22:29:38.930 | 2020-06-06 22:29:38.930 |
+----+----------------+-------------------------+-------------------------+

As you can see, created_at and updated_at were generated automatically.

mysql> update user_info set name = 'letian' where id=1;
mysql> select * from user_info;
+----+--------+-------------------------+-------------------------+
| id | name   | created_at              | updated_at              |
+----+--------+-------------------------+-------------------------+
|  1 | letian | 2020-06-06 22:29:38.930 | 2020-06-06 22:31:26.345 |
+----+--------+-------------------------+-------------------------+

As you can see, updated_at was updated automatically.

This auto-update mechanism supports only CURRENT_TIMESTAMP; no other function is allowed. For example:

-- The following DDL fails with an error
create table `user_info_2` (
    `id` bigint unsigned not null auto_increment comment '自增ID',
    `name` varchar(45) not null default '' comment '用户名',
    `created_at` int NOT NULL DEFAULT unix_timestamp() COMMENT '创建时间',
    `updated_at` bigint NOT NULL DEFAULT unix_timestamp() ON UPDATE unix_timestamp() COMMENT '修改时间',
    primary key (`id`)
) engine = InnoDB character set = utf8mb4;
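For completeness: since MySQL 8.0.13, an arbitrary expression is allowed as a column default as long as it is wrapped in parentheses, so a Unix-timestamp default is possible; ON UPDATE, however, still accepts only CURRENT_TIMESTAMP. A sketch – verify against your server version:

```sql
-- Requires MySQL 8.0.13+; note the parentheses around the expression
create table `user_info_3` (
    `id` bigint unsigned not null auto_increment comment 'auto-increment ID',
    `name` varchar(45) not null default '' comment 'user name',
    `created_at` bigint NOT NULL DEFAULT (unix_timestamp()) COMMENT 'creation time',
    primary key (`id`)
) engine = InnoDB character set = utf8mb4;
```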

9 reasons why terraform is a pain, and 1 why you should still care

Working with Terraform can be difficult and cumbersome, but it’s still worth it.

 

Background story

Back in 2015, when I first found out about Terraform, it looked like a Valhalla to me. Terraform was about to solve the issue of provisioning complicated infrastructure – bringing together worlds of multiple cloud providers – ranging from multi-purpose giants like AWS to one-solution providers like Logentries.

Together with my team, we decided that we needed something to deal with the infrastructure complexity we had. For a platform based on Heroku and AWS, scaled horizontally to four clones, Terraform seemed like a perfect solution. We wanted something that would let us realize the idea of Infrastructure as Code – a must for a DevOps-enabled team. Advanced and feature-full as Terraform is, it doesn't come for free – there are a couple of issues that you should be aware of.

I will enumerate the ones that hurt us the most and show you how we dealt with them. In the end, I will try to convince you that even with those challenges, there is still a lot of room for Terraform in the tooling space.

The pains

1. The evil state

The first thing you will complain about when it comes to Terraform is the fact that it's stateful, and the implications that brings. I personally see two issues:

  • the state has to be in sync with the infrastructure all the time – that also means that you have to go all-in when it comes to provisioning – i.e. no stack modifications can be made outside of the provisioning tool
  • you have to keep the state somewhere – and this has to be a secure location as state has to carry secrets

But there is a reason why the state was introduced into Terraform. It’s there to maintain the mapping between the resources represented in your definition files and the actual resources created within cloud providers. Having that, Terraform can give you a couple of advantages:

  • reading the state from providers (state syncing, also called refreshing) can be quite time-consuming. If we could be 100% sure that the state is accurate, we could skip refreshing entirely and apply the change right away
  • by tracking the resources that have already been created, we can more easily apply renames and restructuring modifications – simply put, infrastructure refactoring
  • when it comes to state, Terraform requires it to be locked before applying changes. That means we can be sure that no one else is applying changes at the same time.

I think when considering a provisioning tool you should weigh up the above arguments and decide whether your stack is more of a clean-sheet kind of thing that can be recreated every time you change something, or rather a living organism that requires modifications while it's still running.

2. Hard to start with the existing stack

Back in the early days of Terraform, its issue tracker was full of complaints from people not being able to use Terraform with an existing stack. The reason was that Terraform was not able to incorporate existing resources into the state (to my amazement, while looking for a sign of this, I found my old PR that tried to address the issue back then 😉 ). Fortunately, the import command was introduced, and this problem has been solved (at least at the system level).

But here comes another issue that is tightly connected to this – if your stack is large, you are doomed to run terraform import once for every resource that is already there. Without some nifty automation/scripting, it can be really time-consuming and frustrating. When you think about it, it would be nice to import such things in a smarter way. That, however, would require Terraform to treat resources not as a flat list, but as a tree. In some cases it makes perfect sense – have a look at heroku_app vs heroku_domain or heroku_drain. There is certainly a lot of room for improvement in that space.

3. Complicated state modifications

There is one additional thing that is a bit problematic when dealing with the state. While constantly refactoring your infrastructure definition, you may end up renaming resources (changing their identifiers) or moving them deeper into modules. Such changes are unfortunately hard for Terraform to follow, and leave it in a state where it doesn't know that certain resources are simply misplaced. If you run apply again, you will end up with resource recreation, which is probably not something you want. The good news is that there is a terraform state mv command that allows you to move a logical resource around the state. The bad news is that in most cases you will need a lot of those.

4. Tricky conditional logic

There are some people around the web who don't like the fact that Terraform is not an actual imperative programming language. To be perfectly honest, I don't share that opinion – I think the provisioning definition of a stack should be as declarative as it can be – that leaves a lot less space for deviations in the definitions. On the other hand, the conditional logic provided by Terraform is a bit tricky. For example, to define a resource that is conditionally provisioned, you make the resource a list and use the count parameter to control it:
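(The original snippet did not survive; this is an illustrative reconstruction of the pattern, with made-up resource and variable names.)

```hcl
# Provision the log drain only when the flag is set
resource "heroku_drain" "papertrail" {
  count = var.papertrail_enabled ? 1 : 0
  app   = heroku_app.default.name
  url   = var.papertrail_url
}
```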

That's rather specific, and you don't really want to know what if/else looks like. OK, you should:
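(An illustrative reconstruction – if/else becomes a pair of complementary counts; the names are made up.)

```hcl
# "if" branch
resource "heroku_drain" "logentries" {
  count = var.use_logentries ? 1 : 0
  app   = heroku_app.default.name
  url   = var.logentries_url
}

# "else" branch
resource "heroku_drain" "self_hosted" {
  count = var.use_logentries ? 0 : 1
  app   = heroku_app.default.name
  url   = var.self_hosted_url
}
```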

So there is a point in saying that you should stay away from constructs like this as far as you can. Of course, that doesn't mean you should give up on Terraform because of it, but be warned. There is a nice article from Gruntwork about all of the things you can and can't do with count – really worth reading.

In some upcoming release this problem should be simplified with resource for_each. Let's keep our fingers crossed :).

5. One can’t simply iterate over modules

The actual idea of modules is awesome – a module lets you enclose a set of resources in a reusable artifact. Let's have a look at a simplified example:

app/app.tf
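(The original listing was lost; an illustrative reconstruction of such a module might look like this.)

```hcl
# app/app.tf - a reusable "app": the Heroku app itself plus the
# resources that repeat for every service
variable "name" {}
variable "logdrain_url" {}

resource "heroku_app" "app" {
  name   = var.name
  region = "eu"
}

resource "heroku_drain" "logdrain" {
  app = heroku_app.app.name
  url = var.logdrain_url
}

output "app_name" {
  value = heroku_app.app.name
}
```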

which then can be used in service declaration:

stack/some_service.tf
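(An illustrative reconstruction; names are made up.)

```hcl
# stack/some_service.tf
module "some_service" {
  source       = "../app"
  name         = "some-service"
  logdrain_url = var.logdrain_url
}
```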

and that was a game changer for us, because we had a lot of repeating resources attached to each app – monitoring, logdrains, deployhooks (as above), to name a few. But there is one really painful issue that comes with them – for some reason, modules are not treated the same as actual resources. Specifically, they don't support the count parameter, which is critical for the conditional logic described above, or in our case – for iterating over services per clone. To be exact, instead of doing:

stack/services.tf (this is not real)
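(An illustrative reconstruction of the wished-for code – count on a module block was not supported at the time.)

```hcl
# stack/services.tf (this does NOT work - count on modules is unsupported)
module "some_service" {
  source       = "../app"
  count        = length(var.clones)
  name         = "some-service-${count.index}"
  logdrain_url = var.logdrain_url
}
```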

we have to repeat the definition per each clone:

stack/services.tf
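(An illustrative reconstruction.)

```hcl
# stack/services.tf - one module block per clone
module "some_service_clone1" {
  source       = "../app"
  name         = "some-service-clone1"
  logdrain_url = var.logdrain_url
}

module "some_service_clone2" {
  source       = "../app"
  name         = "some-service-clone2"
  logdrain_url = var.logdrain_url
}
```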

This issue is also promised to be sorted out in the foreseeable future.

6. Flickering resources

Being a 0.x, feature-rich piece of software, Terraform carries a huge luggage of tiny errors that you might stumble upon. One of those itchy things was the fact that some resources don't want to stay in a stable state. For us it was always an SNS topic subscription policy – whatever we did around a service that had a queue subscribed to SNS, Terraform would modify that policy (even though it didn't make much sense). This can lead to a lot of confusion – especially when someone touches Terraform for the first time. While this issue is provider-local and will most probably be fixed over time, you have to keep it at the back of your mind all the time.

7. Those tiny details

Another tiny issue we had was the inability to use a count value that relies on something that is yet to be computed (in modules). Even something like:
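(An illustrative reconstruction; the resource and output names are made up.)

```hcl
# count depends on a module output that is only known after apply
resource "aws_sqs_queue" "queue" {
  count = length(module.some_service.queue_names)
  name  = module.some_service.queue_names[count.index]
}
```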

when the above thing is defined in a module, you get a sweet message saying: value of 'count' cannot be computed… It's really annoying – especially when you read Hashicorp's explanation saying that you can always use the -target switch to initialize resources one after another :(.

8. How to deal with secrets?

One of the reasons why Terraform files are so hard to keep around is the question of where to keep secrets. There are a couple of ways of dealing with that:

  • The Hashicorp’s blessed way of doing the thing is to use their Vault – while this could be the way to go, it complicates the whole setup even more and feels a little bit like an overkill
  • Similar to Vault you can use KMS from AWS to store secrets – but it carries the same complexity luggage
  • Use a private git repository, and pretend that everything is okay, as long as no one’s computer is stolen 😉
  • There is also a way of keeping secrets in the env vars. That kinda makes sense when you run the thing from CD/CI server – though in a sufficiently complicated system, this could be really hard to maintain
  • You could keep them somewhere local, have some special machine that would be exclusively for provisioning, but let’s face it – for a reasonably sized team that’s a ‘nogo’.
  • The way we dealt with this issue was to keep all secret.tfvars files alongside the .tf files, encrypted with git-secret. The scripts that run terraform plan and terraform apply for us first do git secret reveal, and git secret hide right after. While this is not a perfect solution, it is at least simple enough to reduce the churn of running Terraform from local machines.

9. One hosting offering

Initially, neither Hashicorp nor any other company provided hosted Terraform. Being quite a complicated piece of software to run (esp. the secrets-holding part), there was a niche to be filled, and finally it was – by Terraform Enterprise. Unfortunately, I have no experience with it, so I can't tell for sure how it looks. But – assuming it offers the same feel, stripped of the rather problematic issues of dealing with state and sensitive data – I hope for the best. What might be considered an issue is that using the Enterprise offering leaves your provisioning a bit vendor locked-in.

So what should I (You) do?

As is quite visible, Terraform carries some issues that have to be taken into account when choosing a provisioning solution. Some of those will eventually be sorted out; others are just architectural choices Hashicorp had to make (most probably lesser-evil decisions). As promised in the title, I should give one major reason why you should still consider Terraform – in my opinion, there are cases where you simply have no other options. It's really hard to find a solution covering so many cloud providers. Additionally, if your case is a living system with a lot of infrastructure repetition that undergoes minimal changes every day – Terraform is definitely worth a look.

The mtr command

MTR is a very handy network diagnostic tool on Linux that combines the functionality of traceroute, ping, and nslookup; it is very useful for diagnosing network problems. A brief introduction follows.

1. Installation

# yum install mtr            # CentOS

# sudo apt-get install mtr   # Debian/Ubuntu

 

2. Basic usage

# mtr <IP or domain>

Output columns:

Column 1 (Host): IP address or hostname; press n to toggle between them
Column 2 (Loss%): packet loss rate
Column 3 (Snt): number of packets sent (in report mode the default is 10, settable with -c)
Column 4 (Last): latency of the most recent probe
Columns 5-7 (Avg, Best, Wrst): average, best, and worst latency
Column 8 (StDev): standard deviation

 

3. Other options

# mtr -h          # show help

# mtr -v          # show version information

# mtr -r          # display in report mode

# mtr -s          # set the size of the probe packets

# mtr --no-dns    # do not resolve IP addresses to hostnames

# mtr -a          # set the source address of outgoing packets; useful on hosts with multiple IPs

# mtr -i          # set the interval between ICMP probes; the default is 1 second

# mtr -4          # IPv4 only

# mtr -6          # IPv6 only