Simple usage of when in playbooks

2025-05-14

Background

While writing Ansible playbooks, we ran into this situation: when installing a service such as a FastDFS distributed storage cluster, some machines need to run both the tracker and storage services, while others only need to run one of them, and each case needs a different configuration. We have to branch on the host to decide which configuration files to distribute. This is exactly what `when` is for. On top of that, we use Jinja2 loop and conditional statements in the templates, and we set the required variables in the Ansible inventory file.

A simple check

Check whether a file or directory exists: if it does, skip the task; otherwise create it.

```yaml
- name: Check if fdfs_dl_dir already exists
  stat:
    path: "{{ fdfs_dl_dir }}"   # a quoted variable, defined in the vars directory
  register: fdfs_dl

- name: Create download dir
  file:
    path: "{{ fdfs_dl_dir }}"
    state: directory
    mode: 0755
  when: fdfs_dl.stat.exists == False
  become: true
```

In day-to-day deployments, this pattern noticeably improves playbook execution efficiency.

We can also make a per-host judgment: if the condition is met, the task runs; otherwise it is skipped.

```yaml
- name: Copy tracker init file
  template:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    mode: 0755
  with_items:
    - src: fdfs_trackerd.j2
      dest: /etc/init.d/fdfs_trackerd
    - src: fdfs_systemd.j2
      dest: /etc/init.d/fdfs_systemd
  when: fdfs_role == 'tracker'
  become: true
```

Here we define a custom variable, fdfs_role, in the inventory file:

```ini
[fdfs]
10.0.3.115
10.0.3.116
10.0.3.150

[tracker]
10.0.3.115 tracker_host=tracker1.backend.com
10.0.3.116 tracker_host=tracker2.backend.com

[storage]
10.0.3.115 group_id=1
10.0.3.116 group_id=1
10.0.3.150 group_id=2

[tracker:vars]
fdfs_role=tracker

[storage:vars]
fdfs_role=storage
```
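The host variables above (tracker_host, group_id) are typically consumed inside the Jinja2 templates mentioned in the background. As a hypothetical sketch (the file name, port, and option names are assumptions, not taken from the original role), a storage config template could combine a Jinja2 loop and conditional like this:

```jinja2
{# hypothetical storage.conf.j2 fragment -- file name, port, and options are assumptions #}
{# emit one tracker_server line per host in the [tracker] group #}
{% for host in groups['tracker'] %}
tracker_server = {{ hostvars[host]['tracker_host'] }}:22122
{% endfor %}
{# storage-only setting, guarded by the same fdfs_role variable used in when: #}
{% if fdfs_role == 'storage' %}
group_name = group{{ group_id }}
{% endif %}
```

This is the template-side counterpart of `when`: the task-level condition decides whether a file is distributed at all, while the Jinja2 conditions decide what goes inside it.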

Here we put the hosts into different groups, and the `when` clause checks whether fdfs_role equals tracker: only then does the task run. In short, the task executes only when the condition is satisfied. This is extremely useful in practice. For example, when deploying a MySQL cluster we need to run grant operations on the databases. The ordinary grants have to run on both the master and the slaves, but the replication grant only needs to run on the master, so we need a judgment similar to the FastDFS configuration above:
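Before running role tasks that branch on a group variable like fdfs_role, it can help to confirm that every host actually picked up the expected value. A minimal sketch using Ansible's debug module:

```yaml
# print the group-level variable on every host; a host missing the
# variable will report it as undefined, exposing inventory mistakes early
- name: Show fdfs_role for each host
  debug:
    var: fdfs_role
```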

```yaml
- name: Grant replication to slave hosts
  mysql_user:
    login_user: "{{ mysql_root_user }}"
    login_port: "{{ mysql_port }}"
    config_file: "{{ ansible_env.HOME }}/my.cnf"
    name: "{{ mysql_dbuser }}"
    host: "{{ mysql_app_dbuser_host }}"
    password: "{{ mysql_dbpwd }}"
    append_privs: yes
    priv: '*.*:REPLICATION SLAVE'
    state: present
  environment:
    PATH: "{{ ansible_env.PATH }}:{{ mysql_install_dir }}/bin"
  when: mysql_db_role == 'master'

- name: Stop replication
  mysql_replication:
    login_user: "{{ mysql_root_user }}"
    login_port: "{{ mysql_port }}"
    config_file: "{{ ansible_env.HOME }}/my.cnf"
    mode: stopslave
  environment:
    PATH: "{{ ansible_env.PATH }}:{{ mysql_install_dir }}/bin"
  become: true
  when: mysql_db_role == 'slave'

- name: Reset replication
  mysql_replication:
    login_user: "{{ mysql_root_user }}"
    login_port: "{{ mysql_port }}"
    config_file: "{{ ansible_env.HOME }}/my.cnf"
    mode: resetslave
  environment:
    PATH: "{{ ansible_env.PATH }}:{{ mysql_install_dir }}/bin"
  become: true
  when: mysql_db_role == 'slave'

- name: Get the current master status
  mysql_replication:
    login_user: "{{ mysql_root_user }}"
    login_port: "{{ mysql_port }}"
    config_file: "{{ ansible_env.HOME }}/my.cnf"
    mode: getmaster
  delegate_to: "{{ mysql_host_master }}"
  register: master_info
  environment:
    PATH: "{{ ansible_env.PATH }}:{{ mysql_install_dir }}/bin"
  when: mysql_db_role == 'slave'

- name: Change the master on slave to start the replication
  mysql_replication:
    login_user: "{{ mysql_root_user }}"
    login_port: "{{ mysql_port }}"
    config_file: "{{ ansible_env.HOME }}/my.cnf"
    mode: changemaster
    master_host: "{{ mysql_host_master }}"
    master_port: "{{ mysql_port }}"
    master_log_file: "{{ master_info.File }}"
    master_log_pos: "{{ master_info.Position }}"
    master_user: "{{ mysql_dbuser }}"
    master_password: "{{ mysql_dbpwd }}"
  environment:
    PATH: "{{ ansible_env.PATH }}:{{ mysql_install_dir }}/bin"
  when: mysql_db_role == 'slave'

- name: Start slave
  mysql_replication:
    login_user: "{{ mysql_root_user }}"
    login_port: "{{ mysql_port }}"
    config_file: "{{ ansible_env.HOME }}/my.cnf"
    mode: startslave
  environment:
    PATH: "{{ ansible_env.PATH }}:{{ mysql_install_dir }}/bin"
  when: mysql_db_role == 'slave'
```

It looks long, but the logic is very simple: first grant the replication privilege, then run stop slave followed by reset slave (in either order) to clear out any state left by a previously failed run, then fetch the master's binlog file name and position, and finally establish the master-slave relationship on the slave.
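As a hedged follow-up, it can be worth verifying that the replication threads actually came up after `Start slave`. This sketch reuses the same legacy mysql_replication mode names as the tasks above; the returned keys (Slave_IO_Running, Slave_SQL_Running) mirror SHOW SLAVE STATUS output and are an assumption about the module version in use:

```yaml
# hypothetical sanity check, not part of the original role
- name: Get slave status
  mysql_replication:
    login_user: "{{ mysql_root_user }}"
    login_port: "{{ mysql_port }}"
    config_file: "{{ ansible_env.HOME }}/my.cnf"
    mode: getslave
  register: slave_status
  when: mysql_db_role == 'slave'

- name: Assert replication threads are running
  assert:
    that:
      - slave_status.Slave_IO_Running == 'Yes'
      - slave_status.Slave_SQL_Running == 'Yes'
  when: mysql_db_role == 'slave'
```

Failing fast here is cheaper than discovering a broken replica later through data drift.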

The corresponding inventory file:

```ini
[mysql]
10.0.3.150
10.0.3.115
10.0.3.116

[slave]
10.0.3.116 server_id=116
10.0.3.115 server_id=115

[master]
10.0.3.150

[master:vars]
mysql_db_role=master

[slave:vars]
mysql_db_role=slave
```

If you found this useful, feel free to follow my personal WeChat public account.
