1. Introduction
The OpenStack networking service, Neutron, is responsible for network connectivity: creating tenant networks and subnets, configuring NIC ports, managing access IP addresses, and providing advanced network services such as load balancing (LBaaS) and firewalling (FWaaS). These functions are implemented through different plugins and agents that virtualize networking at layer 2, layer 3, and layers 4-7. Common agents include the L3 agent (layer-3 routing), the DHCP agent (dynamic host IP addressing), and the per-plugin agents. The pluggable design accommodates different network devices and software, giving OpenStack flexibility in architecture and deployment.
Neutron consists of the following components:
- neutron-server: accepts API requests and routes them to the appropriate OpenStack networking plugin for action;
- OpenStack networking plugins and agents: plug and unplug ports, create networks and subnets, and provide IP addressing. The plugins and agents differ by vendor and technology; OpenStack networking ships plugins and agents that bridge to Cisco virtual and physical switches, NEC OpenFlow products, Open vSwitch, Linux bridging, and VMware NSX products.
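As a concrete illustration of those functions, once the service is up a tenant network, subnet, and port can be created from the CLI. This is a minimal sketch; the names demo-net/demo-subnet/demo-port and the CIDR are made up for the example:

```sh
# Illustrative only: create a tenant network, a subnet, and a port on it.
. /root/keystonerc_admin
neutron net-create demo-net
neutron subnet-create demo-net 192.168.100.0/24 --name demo-subnet --dns-nameserver 8.8.8.8
neutron port-create demo-net --name demo-port
neutron net-list
```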
2. Deployment Scripts
2.1 Standalone Network Node Deployment
To lighten the network load, the networking services are deployed on dedicated nodes (three network nodes). For unified management and invocation of the Neutron API, however, neutron-server is deployed on the controller nodes: the controllers carry the neutron management configuration and the ML2 plugin configuration, while the network nodes run the neutron-openvswitch-agent, neutron-l3-agent, neutron-dhcp-agent, and neutron-metadata-agent services and carry the br-ex external bridge configuration. On the compute nodes, the neutron-openvswitch-agent and openvswitch services are installed and configured. The overall service architecture is similar to the figure below:
In addition, because this deployment uses an HA architecture, the pacemaker cluster manager is installed on the three network nodes (just like the management cluster on the three controller nodes) to provide high availability for the neutron-openvswitch-agent, neutron-l3-agent, neutron-dhcp-agent, and neutron-metadata-agent services.
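Before touching the scripts, a quick spot-check of that layout can be useful. The loop below assumes the hostnames used throughout this series (controller01..03, network01..03); note that once pacemaker manages the agents, pcs status on a network node is the authoritative view:

```sh
# Where each role's services are expected to run (hostnames assumed from this series):
for h in controller01 controller02 controller03; do
    echo "== $h =="
    ssh root@$h systemctl is-active neutron-server
done
for h in network01 network02 network03; do
    echo "== $h =="
    ssh root@$h systemctl is-active neutron-openvswitch-agent neutron-l3-agent \
        neutron-dhcp-agent neutron-metadata-agent
done
```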
First, set the network node configuration on a controller node via 0-set-config.sh:
```sh
declare -A networker_map=(["network01"]="192.168.2.14" ["network02"]="192.168.2.15" ["network03"]="192.168.2.16");
### Default node hosting the network HA cluster (a node with this hostname must exist)
network_host=network01
### Whether the network nodes are deployed standalone: set to yes if so, no otherwise
networker_split=yes
```
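The install scripts consume this map together with a parallel networker_name array; a minimal sketch of that pattern (here networker_name is assumed to be derived from the map in 0-set-config.sh):

```sh
# Sketch of how the scripts iterate the map (networker_name assumed from 0-set-config.sh):
networker_name=(network01 network02 network03)
for ((i=0; i<${#networker_map[@]}; i+=1)); do
    name=${networker_name[$i]}
    ip=${networker_map[$name]}
    echo "would configure $name at $ip"
done
```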
Then run the install script install-configure-neutron.sh, which performs the complete neutron installation and configuration. After finishing the controller-side configuration, it calls install-configure-pacemaker-networkers.sh to deploy the HA cluster on the network nodes, and then install-configure-neutron-networkers.sh to configure the services on the network nodes and to create and configure the br-ex external bridge.
PS: Deploying the network nodes standalone not only reduces the network load from VM traffic, it also isolates the OpenStack control cluster from the network cluster. If the control cluster fails, the network cluster is unaffected: already-created VMs remain reachable and the workloads running in them keep working. This greatly reduces the scope of a failure and makes recovery easier.
2.2 Converged Controller and Network Node Deployment
In this scenario there is no separate network-node HA cluster to build: all networking services are installed directly on the controller nodes and added as resources to the controllers' pacemaker cluster.
First, set the network node configuration on a controller node via 0-set-config.sh: map the network nodes onto the controller nodes, and set the standalone-deployment option to no:
```sh
declare -A networker_map=(["controller01"]="192.168.2.11" ["controller02"]="192.168.2.12" ["controller03"]="192.168.2.13");
### Default node hosting the network HA cluster (a node with this hostname must exist)
network_host=controller01
### Whether the network nodes are deployed standalone: set to yes if so, no otherwise
networker_split=no
```
Then simply run install-configure-neutron.sh. During the actual installation the install-configure-pacemaker-networkers.sh script is skipped, so no network-node HA cluster is created; since the network nodes coincide with the controllers, all networking services are added to the controllers' pacemaker resources.
2.3 Script Details
One-click neutron installation script install-configure-neutron.sh:
```sh
#!/bin/sh
. ../0-set-config.sh
./style/print-split.sh "Neutron Installation"

### [All controller nodes] Update /etc/haproxy/haproxy.cfg
. ./1-gen-haproxy-cfg.sh neutron

### [All controller nodes] Install and configure
./pssh-exe C "yum install -y openstack-neutron openstack-neutron-ml2 python-neutronclient"
for ((i=0; i<${#controller_map[@]}; i+=1));
do
    name=${controller_name[$i]};
    ip=${controller_map[$name]};
    . style/print-info.sh "Openstack configure in $name"
    ssh root@$ip /bin/bash << EOF
openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host $ip
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:$password@$virtual_ip/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_durable_queues true
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password $password
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://$virtual_ip:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://$virtual_ip:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password $password
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://$virtual_ip:35357
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password $password
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks external
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver iptables_hybrid
openstack-config --set /etc/nova/nova.conf neutron url http://$virtual_ip:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://$virtual_ip:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password $password
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret $password
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service
EOF
done;

### [Any node] Create the database
mysql -uroot -p$password_galera_root -h $virtual_ip -e "CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '"$password"';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller01' IDENTIFIED BY '"$password"';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '"$password"';
FLUSH PRIVILEGES;"

### [Any node] Create the user, service, and endpoints
. /root/keystonerc_admin
openstack user create --domain default --password $password neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://$virtual_ip:9696
openstack endpoint create --region RegionOne network internal http://$virtual_ip:9696
openstack endpoint create --region RegionOne network admin http://$virtual_ip:9696

### [Any node] Populate the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

### [Any node] Add pacemaker resources
pcs resource create neutron-server systemd:neutron-server op start timeout=300 --clone interleave=true
pcs constraint order start openstack-keystone-clone then neutron-server-clone
pcs resource op add neutron-server start timeout=300
pcs resource op add neutron-server stop timeout=300

### [Any node] Test
. restart-pcs-cluster.sh
. /root/keystonerc_admin
neutron ext-list

### [Network nodes] Install the network HA cluster
if [[ "$networker_split" = "yes" ]];then
    echo "Need to configure pcs cluster in networkers!"
    . install-configure-pacemaker-networkers.sh
fi
. install-configure-neutron-networkers.sh

### [Any node] Test
. /root/keystonerc_admin
neutron agent-list
```
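Once the script has run, a few optional spot-checks (not part of the script itself) can confirm the values landed; openstack-config --get reads back a key the same way --set writes it:

```sh
# Optional verification on a controller node:
openstack-config --get /etc/neutron/neutron.conf DEFAULT core_plugin               # expect: ml2
openstack-config --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types  # expect: vxlan
. /root/keystonerc_admin
openstack endpoint list --service network   # public/internal/admin, all on port 9696
pcs status | grep neutron-server            # the clone should be started on all controllers
```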
The network-node pcs HA cluster deployment script install-configure-pacemaker-networkers.sh:
```sh
#!/bin/sh
. ../0-set-config.sh
./style/print-split.sh "Networkers Pacemaker Installation"

### [All network nodes] Install packages
./pssh-exe N "yum install -y pcs pacemaker corosync fence-agents-all resource-agents"

### [All network nodes] Configure the service
./pssh-exe N "systemctl enable pcsd && systemctl start pcsd"

### [All network nodes] Set the hacluster user's password
./pssh-exe N "echo $password_ha_user | passwd --stdin hacluster"

### [network01] Authenticate against the cluster nodes
ssh root@$network_host pcs cluster auth ${networker_name[@]} -u hacluster -p $password_ha_user --force

### [network01] Create and start the cluster
ssh root@$network_host pcs cluster setup --force --name openstack-cluster ${networker_name[@]}
ssh root@$network_host pcs cluster start --all
ssh root@$network_host sleep 5

### [network01] Set cluster properties
ssh root@$network_host pcs property set pe-warn-series-max=1000 pe-input-series-max=1000 pe-error-series-max=1000 cluster-recheck-interval=5min

### [network01] Temporarily disable STONITH, otherwise resources cannot start
ssh root@$network_host pcs property set stonith-enabled=false

### [network01] Check the cluster status
ssh root@$network_host pcs cluster status
```
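After the cluster comes up, a few optional sanity checks on any network node (standard pcs/corosync commands, not part of the script):

```sh
# Optional cluster sanity checks:
pcs status                  # all three network nodes online, no failed actions
pcs property list           # should show stonith-enabled=false and the recheck interval
corosync-cfgtool -s         # local corosync ring status
```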
The network-node neutron configuration script install-configure-neutron-networkers.sh:
```sh
#!/bin/sh
### [All network nodes] Install neutron
./pssh-exe N "yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch"

### [All network nodes] Configure neutron
for ((i=0; i<${#networker_map[@]}; i+=1));
do
    name=${networker_name[$i]};
    ip=${networker_map[$name]};
    . style/print-info.sh "Openstack configure in $name"
    data_ip=$(ssh $ip cat /etc/sysconfig/network-scripts/ifcfg-$data_nic |grep IPADDR=|egrep -v "#IPADDR"|awk -F "=" '{print $2}')
    ssh root@$ip /bin/bash << EOF
openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host $ip
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:$password@$virtual_ip/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_durable_queues true
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password $password
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://$virtual_ip:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://$virtual_ip:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password $password
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://$virtual_ip:35357
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password $password
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks external
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver iptables_hybrid
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_ipset True
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver iptables_hybrid
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip $data_ip
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings external:br-ex
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2_population True
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip $virtual_ip
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret $password
openstack-config --set /etc/neutron/neutron.conf DEFAULT l3_ha True
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_automatic_l3agent_failover True
openstack-config --set /etc/neutron/neutron.conf DEFAULT max_l3_agents_per_router 3
openstack-config --set /etc/neutron/neutron.conf DEFAULT min_l3_agents_per_router 2
openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 3
systemctl enable openvswitch.service
systemctl start openvswitch.service
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
EOF
done;

### [Any network node] Add pacemaker resources
ssh root@$network_host /bin/bash << EOF
pcs resource create neutron-scale ocf:neutron:NeutronScale --clone globally-unique=true clone-max=3 interleave=true
pcs resource create neutron-ovs-cleanup ocf:neutron:OVSCleanup --clone interleave=true
pcs resource create neutron-netns-cleanup ocf:neutron:NetnsCleanup --clone interleave=true
pcs resource create neutron-openvswitch-agent systemd:neutron-openvswitch-agent --clone interleave=true
pcs resource create neutron-dhcp-agent systemd:neutron-dhcp-agent --clone interleave=true
pcs resource create neutron-l3-agent systemd:neutron-l3-agent --clone interleave=true
pcs resource create neutron-metadata-agent systemd:neutron-metadata-agent --clone interleave=true
pcs constraint order start neutron-scale-clone then neutron-ovs-cleanup-clone
pcs constraint colocation add neutron-ovs-cleanup-clone with neutron-scale-clone
pcs constraint order start neutron-ovs-cleanup-clone then neutron-netns-cleanup-clone
pcs constraint colocation add neutron-netns-cleanup-clone with neutron-ovs-cleanup-clone
pcs constraint order start neutron-netns-cleanup-clone then neutron-openvswitch-agent-clone
pcs constraint colocation add neutron-openvswitch-agent-clone with neutron-netns-cleanup-clone
pcs constraint order start neutron-openvswitch-agent-clone then neutron-dhcp-agent-clone
pcs constraint colocation add neutron-dhcp-agent-clone with neutron-openvswitch-agent-clone
pcs constraint order start neutron-dhcp-agent-clone then neutron-l3-agent-clone
pcs constraint colocation add neutron-l3-agent-clone with neutron-dhcp-agent-clone
pcs constraint order start neutron-l3-agent-clone then neutron-metadata-agent-clone
pcs constraint colocation add neutron-metadata-agent-clone with neutron-l3-agent-clone
pcs resource op add neutron-scale start timeout=300
pcs resource op add neutron-scale stop timeout=300
pcs resource op add neutron-ovs-cleanup start timeout=300
pcs resource op add neutron-ovs-cleanup stop timeout=300
pcs resource op add neutron-netns-cleanup start timeout=300
pcs resource op add neutron-netns-cleanup stop timeout=300
pcs resource op add neutron-openvswitch-agent start timeout=300
pcs resource op add neutron-openvswitch-agent stop timeout=300
pcs resource op add neutron-dhcp-agent start timeout=300
pcs resource op add neutron-dhcp-agent stop timeout=300
pcs resource op add neutron-l3-agent start timeout=300
pcs resource op add neutron-l3-agent stop timeout=300
pcs resource op add neutron-metadata-agent start timeout=300
pcs resource op add neutron-metadata-agent stop timeout=300
EOF

### [All network nodes] OVS operations
echo "ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex "$local_nic
tag=`eval date +%Y%m%d%H%M%S`
for ((i=0; i<${#networker_map[@]}; i+=1));
do
    name=${networker_name[$i]};
    ip=${networker_map[$name]};
    echo "-------------$name------------"
    \cp ../conf/ifcfg-br-ex.template ../conf/ifcfg-br-ex
    \cp ../conf/ifcfg-local_nic.template ../conf/ifcfg-local_nic
    ssh $ip cp /etc/sysconfig/network-scripts/ifcfg-$local_nic /etc/sysconfig/network-scripts/bak-ifcfg-$local_nic-$tag
    ssh $ip cat /etc/sysconfig/network-scripts/ifcfg-$local_nic |grep NETMASK >> ../conf/ifcfg-br-ex
    ssh $ip cat /etc/sysconfig/network-scripts/ifcfg-$local_nic |grep PREFIX >> ../conf/ifcfg-br-ex
    ssh $ip cat /etc/sysconfig/network-scripts/ifcfg-$local_nic |grep DNS1 >> ../conf/ifcfg-br-ex
    ssh $ip cat /etc/sysconfig/network-scripts/ifcfg-$local_nic |grep GATEWAY >> ../conf/ifcfg-br-ex
    sed -i -e 's#IPADDR=#IPADDR='"$ip"'#g' ../conf/ifcfg-br-ex
    sed -i -e 's#NAME=#NAME='"$local_nic"'#g' ../conf/ifcfg-local_nic
    sed -i -e 's#DEVICE=#DEVICE='"$local_nic"'#g' ../conf/ifcfg-local_nic
    scp ../conf/ifcfg-local_nic root@$ip:/etc/sysconfig/network-scripts/ifcfg-$local_nic
    scp ../conf/ifcfg-br-ex root@$ip:/etc/sysconfig/network-scripts/ifcfg-br-ex
    ssh root@$ip /bin/bash << EOF
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex $local_nic
systemctl restart network.service
EOF
done;

### [Any network node] Test
. restart-pcs-cluster-networkers.sh
. /root/keystonerc_admin
./pssh-exe N "ovs-vsctl show"
neutron agent-list
```
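And once this script finishes, the bridge and agents can be checked on a network node with something like the following (optional, not part of the script):

```sh
# Optional verification on a network node:
ovs-vsctl br-exists br-ex && echo "br-ex exists"
ovs-vsctl list-ports br-ex      # should list the external NIC ($local_nic)
ip addr show br-ex              # the node's external IP should now sit on the bridge
pcs status                      # all neutron agent clones should be Started
```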
Note: the neutron-openvswitch-agent and openvswitch services on the compute nodes are configured together with the compute setup and are not covered here.
3. References
https://docs.openstack.org/mitaka/networking-guide/
https://docs.openstack.org/ha-guide/networking-ha.html
https://docs.openstack.org/kilo/networking-guide/scenario_legacy_ovs.html
https://www.rdoproject.org/networking/neutron-with-existing-external-network/
4. Source Code
Script source: https://github.com/zjmeixinyanzhi/Openstack-HA-Install-Shells
5. Series Articles
OpenStack Cloud Platform Scripted Deployment: Galera HA Cluster Configuration (Part 2)
OpenStack Cloud Platform Scripted Deployment: RabbitMQ HA Cluster Deployment (Part 3)
OpenStack Cloud Platform Scripted Deployment: Memcached Configuration (Part 5)
OpenStack Cloud Platform Scripted Deployment: Keystone Identity Service Configuration (Part 6)
OpenStack Cloud Platform Scripted Deployment: Glance Image Service Configuration (Part 7)
OpenStack Cloud Platform Scripted Deployment: Nova Compute Service Configuration (Part 8)
OpenStack Cloud Platform Scripted Deployment: Neutron Network Service Configuration (Part 9)
OpenStack Cloud Platform Scripted Deployment: Dashboard Configuration (Part 10)
OpenStack Cloud Platform Scripted Deployment: Cinder Block Storage Service Configuration (Part 11)
OpenStack Cloud Platform Scripted Deployment: Ceilometer Data Collection Service Configuration (Part 12)
OpenStack Cloud Platform Scripted Deployment: Aodh Alarm Service Configuration (Part 13)
I'm not familiar with pacemaker, and following the document I ran into the following question.
The first spot in this chapter: ### [Any node] Add pacemaker resources
Executed as follows:
```
[root@controller2 ~]# pcs resource create neutron-server systemd:neutron-server op start timeout=300 --clone interleave=true
[root@controller2 ~]# pcs constraint order start openstack-keystone-clone then neutron-server-clone
Adding openstack-keystone-clone neutron-server-clone (kind: Mandatory) (Options: first-action=start then-action=start)
[root@controller2 ~]# pcs resource op add neutron-server start timeout=300
Error: operation start with interval 0s already specified for neutron-server:
start interval=0s timeout=300 (neutron-server-start-interval-0s)
```
Thanks for your work!
For Pacemaker cluster management, see the official documentation: http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Clusters_from_Scratch/index.html. Here Pacemaker HA creates the neutron-server resource on the three controller nodes, orders its startup after keystone, and sets the resource start timeout.
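As for the error itself: the start timeout was already defined at resource-creation time, so adding it again is rejected. With the pcs 0.9 series used here, updating the existing operation instead should work; a hedged sketch:

```sh
# Update the existing operations rather than adding duplicates:
pcs resource update neutron-server op start timeout=300 op stop timeout=300
# Then inspect the configured operations:
pcs resource show neutron-server
```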
OK, understood now.
Here to bother you again:
The follow-up pcs timeout configuration goes like this:
```
[root@controller3 ~]# pcs resource op add neutron-scale start timeout=300
Error: operation start with interval 0s already specified for neutron-scale:
start interval=0s timeout=40 (neutron-scale-start-interval-0s)
[root@controller3 ~]# pcs resource op add neutron-scale start timeout=30
Error: operation start with interval 0s already specified for neutron-scale:
start interval=0s timeout=40 (neutron-scale-start-interval-0s)
[root@controller3 ~]# pcs resource op add neutron-scale start timeout=300
Error: operation start with interval 0s already specified for neutron-scale:
start interval=0s timeout=40 (neutron-scale-start-interval-0s)
[root@controller3 ~]# pcs resource op add neutron-scale stop timeout=300
Error: operation stop with interval 0s already specified for neutron-scale:
stop interval=0s timeout=300 (neutron-scale-stop-interval-0s)
[root@controller3 ~]# pcs resource op add neutron-linuxbridge-cleanup start timeout=300
[root@controller3 ~]# pcs resource op add neutron-linuxbridge-cleanup stop timeout=300
[root@controller3 ~]# pcs resource op add neutron-netns-cleanup start timeout=300
Error: operation start with interval 0s already specified for neutron-netns-cleanup:
start interval=0s timeout=40 (neutron-netns-cleanup-start-interval-0s)
[root@controller3 ~]# pcs resource op add neutron-netns-cleanup stop timeout=300
Error: operation stop with interval 0s already specified for neutron-netns-cleanup:
stop interval=0s timeout=300 (neutron-netns-cleanup-stop-interval-0s)
[root@controller3 ~]# pcs resource op add neutron-linuxbridge-agent start timeout=300
[root@controller3 ~]# pcs resource op add neutron-linuxbridge-agent stop timeout=300
```
These are two different resource standards; does ocf come with default timeout settings, and is that why the subsequent add fails?
Yes, that problem exists. The start/stop timeouts appended for those resources never actually take effect; the errors do not affect the deployment, and those lines can simply be removed.
This is a historical leftover in the script. The option was appended at the time because restarting pcs took especially long; it was added separately at first and later merged into the script, where it turned out to have no effect...
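If you want to see which operations a resource already has (including defaults inherited from the OCF agent metadata, like the timeout=40 in your error), something like this should work on the pcs 0.9 series:

```sh
# Show the resource's configured operations:
pcs resource show neutron-scale
# The defaults come from the agent's metadata, which can be dumped with:
crm_resource --show-metadata ocf:neutron:NeutronScale
```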
I've felt that: restarting the pcs cluster really does take a very long time...
If I just use pcs cluster kill, will that have any bad effects on the cluster?
So far we haven't seen any particularly serious impact from kill-style shutdowns. At worst, if some services were not stopped cleanly by pacemaker and you run pcs cluster kill directly, then after the pcs cluster restarts those services may be left in a stopped state and no longer properly managed by pacemaker; this happens fairly often with some of the nova services. In that case, stop the abnormal services manually and restart the pcs cluster. When rebooting the physical cluster we also use pcs cluster kill regularly; otherwise waiting for the pcs cluster to shut down takes so long that the servers cannot power off quickly.
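For reference, the two shutdown paths compare roughly like this (same caveat as above: after a kill, check for half-stopped services before restarting):

```sh
pcs cluster stop --all    # graceful: stops resources in constraint order; can take a very long time
pcs cluster kill          # forceful: kills pacemaker/corosync on the local node immediately
pcs cluster start --all   # afterwards, verify no half-stopped services remain (e.g. with systemctl)
```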
Another question:
I explicitly set host = controller1 in neutron.conf (the other nodes are set to their local hostnames as well); why, after a restart, does the host in neutron.conf become something like neutron-n-2? See below:
```
+--------------------------------------+--------------------+-------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host        | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------------+-------------------+-------+----------------+---------------------------+
| 175727b1-901a-4bc4-af68-055e4976d795 | Metadata agent     | neutron-n-0 |                   | :-)   | True           | neutron-metadata-agent    |
| 2c39242a-89e4-4acc-ae06-28b7d50a3e56 | L3 agent           | neutron-n-0 | nova              | :-)   | True           | neutron-l3-agent          |
| 4313e764-382f-4d6f-9fde-03555d0d08f4 | DHCP agent         | neutron-n-1 | nova              | :-)   | True           | neutron-dhcp-agent        |
| 4f137fad-dccf-47f3-9f0f-526dafbb3bb9 | DHCP agent         | neutron-n-0 | nova              | :-)   | True           | neutron-dhcp-agent        |
| 56209933-ea0b-435b-9b1a-7cf1cbf55e41 | DHCP agent         | neutron-n-2 | nova              | :-)   | True           | neutron-dhcp-agent        |
| 5cc96a3b-beb5-4ee3-bee1-c5d8792e880d | Linux bridge agent | neutron-n-0 |                   | :-)   | True           | neutron-linuxbridge-agent |
| 6380e3ab-aa3d-45a4-b6d5-62455f1f230b | Linux bridge agent | neutron-n-2 |                   | :-)   | True           | neutron-linuxbridge-agent |
| 8d2eb485-e5bf-4f47-90aa-c875566909db | L3 agent           | neutron-n-1 | nova              | :-)   | True           | neutron-l3-agent          |
| 9cec1a68-b157-48e3-9d4e-a43c3702cb48 | Metadata agent     | neutron-n-2 |                   | :-)   | True           | neutron-metadata-agent    |
| b05c7673-a7de-4e1f-b9c8-5d6cc7b2645e | Linux bridge agent | neutron-n-1 |                   | :-)   | True           | neutron-linuxbridge-agent |
| bb1aa470-08bc-4f91-9493-04a4083db13a | Metadata agent     | neutron-n-1 |                   | :-)   | True           | neutron-metadata-agent    |
| c0cacebf-2205-42cb-9863-c832adc2c558 | L3 agent           | neutron-n-2 | nova              | :-)   | True           | neutron-l3-agent          |
+--------------------------------------+--------------------+-------------+-------------------+-------+----------------+---------------------------+
```
That is indeed what happens, but it doesn't matter: with this configuration the verification steps give the same result.
The hostname display is now explained. The command
```
pcs resource create neutron-scale ocf:neutron:NeutronScale --clone globally-unique=true clone-max=3 interleave=true
```
configures the NeutronScale service, and its resource agent script contains this section:
```sh
neutron_scale_start() {
    hostid=${OCF_RESKEY_hostbasename}-${OCF_RESKEY_CRM_meta_clone}
    cleanup_old_config_files
    config_file="/etc/neutron/$neutronconfigfile"
    if [ -f "$config_file" ]; then
        openstack-config --set $config_file DEFAULT host $hostid
        if [ $? != 0 ]; then
            ocf_log err "neutron-scale: unable to set host info to $hostid for $config_file"
            return OCF_ERR_GENERIC
        else
            ocf_log info "neutron-scale: host $hostid set for $config_file"
        fi
    else
        ocf_log err "/etc/neutron/$config_file not installed."
        return OCF_ERR_GENERIC
    fi
    touch ${HA_RSCTMP}/neutron-scale-${OCF_RESKEY_CRM_meta_clone}
    return $OCF_SUCCESS
}
```
which pinpoints the command that rewrites host: openstack-config --set $config_file DEFAULT host $hostid
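So on any network node you can inspect, or even remove, the override that NeutronScale wrote; note it will be set again the next time the clone starts:

```sh
# Inspect the value NeutronScale wrote:
openstack-config --get /etc/neutron/neutron.conf DEFAULT host    # e.g. neutron-n-2
# Deleting the key makes neutron fall back to the node's hostname,
# but the resource agent re-sets it whenever the clone starts:
openstack-config --del /etc/neutron/neutron.conf DEFAULT host
```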
So, case closed.
Impressive, thumbs up!
Hello, your articles are excellent. Could I add you as a friend? Thanks for sharing!