I mentioned it some time ago: besides Terraform, it is possible to cook in OCI with Ansible's help. Today is the right time to show it here. As with Terraform, you need to do some preliminary work, mostly described in the OCI docs (here is a link). Your journey with Ansible and OCI will push you toward the OCI Python SDK. That shouldn't be a surprise, as Ansible itself is written in Python. On top of this SDK, you will install the Ansible OCI modules, available for free in a GitHub repo. I have assumed you already have Ansible itself, but maybe you need to install that tool as well. Here is a link where you will find instructions on how to do it.
OK, let's assume you are ready and all the software is installed as required. What next? First, clone my repo from GitHub. The repo is a simple example of how to build OCI infrastructure (a VCN and its network subcomponents) and then provision a simple VM within that VCN. My example uses 3 Ansible playbooks:
- playbooks/create_oci_infra.yml – this one creates the VCN and the compute/VM.
- playbooks/install_webserver_on_oci_compute.yml – this one installs the HTTPD server on the compute instance created before.
- playbooks/teardown_oci_compute_and_infra.yml – this one destroys the compute instance and all of the OCI network infrastructure.
Let's examine the create_oci_infra.yml file step by step. This is part 1; the next two blog posts will cover the second and third playbooks:
As you can see above, my playbook is executed locally. Locally in my case means the ansible-server machine (localhost), but of course it could be your laptop as well.
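The play header could look like the following sketch (names are illustrative, not copied from the repo):

```yaml
---
# Sketch of the play header. "hosts: localhost" with a local connection
# means Ansible talks to the OCI API from the control node itself
# instead of SSH-ing to any managed host.
- name: Create OCI infrastructure and a compute instance
  hosts: localhost
  connection: local
  tasks:
    # ... tasks described below ...
```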
In the tasks list, the first task includes the variables.yml file located in the variables subdirectory. This variables.yml file defines all variables and should be modified by you as an initial step before running Ansible (the positions in yellow rectangles should be customized by you):
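That first task can be sketched as follows (the variable names shown are illustrative assumptions, not taken from the repo):

```yaml
# Sketch only: include the user-editable variables file.
- name: Include user-defined variables
  include_vars:
    file: variables/variables.yml

# variables/variables.yml could then hold entries such as:
# compartment_ocid: "ocid1.compartment.oc1..xxxx"   # customize this
# vcn_name: "demo-vcn"
# vcn_cidr_block: "10.0.0.0/16"
```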
OK, now let's return to the first playbook. What do we have as the second task?
OCI_VCN Ansible module
First, we need to create a VCN within the compartment defined in the variables.yml file. For that purpose, we use the oci_vcn Ansible module. We provide some input to this module – the VCN name, CIDR block, and DNS label. The task executes the module and registers the module's results in the "result" variable. In the next step, the automation sets this result as an Ansible fact (vcn_id). This gives us a reference to this particular VCN, which will be necessary for the rest of the OCI resources:
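A hedged sketch of this pair of tasks, assuming the module naming of the legacy oracle/oci-ansible-modules repo (variable names are illustrative):

```yaml
# Sketch: create the VCN and remember its OCID as a fact.
- name: Create a VCN
  oci_vcn:
    compartment_id: "{{ compartment_ocid }}"
    display_name: "{{ vcn_name }}"
    cidr_block: "{{ vcn_cidr_block }}"
    dns_label: "{{ vcn_dns_label }}"
  register: result

- name: Set the VCN id as a fact for later references
  set_fact:
    vcn_id: "{{ result.vcn.id }}"
```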
OCI_INTERNET_GATEWAY Ansible module
Let's examine the next task; I hope everything will be clear then. Here we have another Ansible module, oci_internet_gateway, embodied as yet another task. This task creates an Internet Gateway nested in the VCN. As you can see, the vcn_id variable (set as a fact before) is used to define the relationship between the IG and the VCN (highlighted in red). The Internet Gateway id, in turn, is also stored as a fact/variable for future use (to relate resources to each other). One more thing: please notice the state parameter, which is set to "present", as opposed to the other possible value, "absent". The current code means the module creates the IG resource; setting the value to "absent" would instruct the module to tear the resource down:
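In sketch form (again assuming the legacy module naming; variable names are illustrative):

```yaml
# Sketch: create an Internet Gateway inside the VCN created above.
- name: Create an Internet Gateway
  oci_internet_gateway:
    compartment_id: "{{ compartment_ocid }}"
    vcn_id: "{{ vcn_id }}"       # reference to the VCN fact set earlier
    name: "{{ ig_name }}"
    is_enabled: yes
    state: present               # "absent" would tear the IG down instead
  register: result

- name: Store the Internet Gateway id as a fact
  set_fact:
    ig_id: "{{ result.internet_gateway.id }}"
```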
OCI_ROUTE_TABLE Ansible module
OK, let's move forward and continue decomposing the playbook. The next task is related to the route table; here we use the oci_route_table Ansible module. Again we reference the VCN (vcn_id) as well as the Internet Gateway (ig_id). In turn, we store the route table id in the "rt_id" fact/variable right after the task executes:
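A sketch of the route-table task, with a default route sending all traffic through the Internet Gateway (variable names are illustrative assumptions):

```yaml
# Sketch: a route table with a single default route via the IG.
- name: Create a route table for the VCN
  oci_route_table:
    compartment_id: "{{ compartment_ocid }}"
    vcn_id: "{{ vcn_id }}"
    name: "{{ route_table_name }}"
    route_rules:
      - cidr_block: "0.0.0.0/0"
        network_entity_id: "{{ ig_id }}"   # traffic leaves via the IG
  register: result

- name: Store the route table id as a fact
  set_fact:
    rt_id: "{{ result.route_table.id }}"
```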
OCI_SECURITY_LIST Ansible module
The next step is more complex. We use Jinja2 (j2) templates stored in the templates subdirectory. They help us create the ingress and egress rules for the OCI Security List. In practice, this means the template files are loaded and all variables in them are resolved to their proper values. The rendered files are then stored as YAML files, printed out, and loaded as dynamic variables (loaded_ingress & loaded_egress). As the last step, they are passed to yet another Ansible module, oci_security_list. For this module we also reference the VCN (vcn_id). As a result, we get an additional variable, instance_security_list_ocid (as an Ansible fact):
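The template-driven flow could be sketched like this (file names and variable names are illustrative; the egress rules would be rendered and loaded the same way as the ingress rules, omitted here for brevity):

```yaml
# Sketch: render rules from a Jinja2 template, load them back, use them.
- name: Render the ingress rules from a Jinja2 template
  template:
    src: templates/ingress_security_rules.yml.j2
    dest: /tmp/ingress_security_rules.yml

- name: Load the rendered ingress rules as variables
  include_vars:
    file: /tmp/ingress_security_rules.yml
    name: loaded_ingress

# (egress rules rendered and loaded analogously into loaded_egress)

- name: Create the security list
  oci_security_list:
    name: "{{ security_list_name }}"
    compartment_id: "{{ compartment_ocid }}"
    vcn_id: "{{ vcn_id }}"
    ingress_security_rules: "{{ loaded_ingress.ingress_security_rules }}"
    egress_security_rules: "{{ loaded_egress.egress_security_rules }}"
  register: result

- name: Store the security list OCID as a fact
  set_fact:
    instance_security_list_ocid: "{{ result.security_list.id }}"
```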
OCI_SUBNET Ansible module
It looks like we have all of the infrastructure components needed to construct the subnet. For subnet provisioning, we use the oci_subnet Ansible module as follows. In the task body, we use the vcn_id, instance_security_list_ocid, and rt_id variables – the dynamic variables created before. The rest of the module options are populated, as always, from the variables.yml file included at the very beginning of this playbook. As a result of the execution, we can expect registered output, stored in yet another variable called instance_subnet_id:
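In sketch form, tying together the facts collected so far (variable names are illustrative):

```yaml
# Sketch: the subnet references the VCN, route table, and security list.
- name: Create a subnet in the VCN
  oci_subnet:
    compartment_id: "{{ compartment_ocid }}"
    vcn_id: "{{ vcn_id }}"
    cidr_block: "{{ subnet_cidr_block }}"
    display_name: "{{ subnet_name }}"
    dns_label: "{{ subnet_dns_label }}"
    route_table_id: "{{ rt_id }}"
    security_list_ids: [ "{{ instance_security_list_ocid }}" ]
  register: result

- name: Store the subnet id as a fact
  set_fact:
    instance_subnet_id: "{{ result.subnet.id }}"
```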
OCI_INSTANCE Ansible module
It looks like our foundation is ready and we can cook the VM itself. For that purpose, the next task uses the oci_instance Ansible module. Again we use a dynamic variable, this time instance_subnet_id (set as a fact in the previous step). Keep in mind that this instance is tagged with a freeform tag (tier=webserver). It will be used for Ansible dynamic inventory (covered in part 2 of this blog).
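A hedged sketch of the instance task (variable names, shape, and image are illustrative assumptions; note the freeform tag, which the dynamic inventory in part 2 relies on):

```yaml
# Sketch: launch the VM into the subnet created above.
- name: Launch a compute instance
  oci_instance:
    availability_domain: "{{ instance_ad }}"
    compartment_id: "{{ compartment_ocid }}"
    name: "{{ instance_name }}"
    image_id: "{{ instance_image_ocid }}"
    shape: "{{ instance_shape }}"
    create_vnic_details:
      subnet_id: "{{ instance_subnet_id }}"
      assign_public_ip: yes
    metadata:
      ssh_authorized_keys: "{{ lookup('file', ssh_public_key_path) }}"
    freeform_tags:
      tier: webserver          # used by the dynamic inventory in part 2
  register: result
```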
OCI_VNIC_ATTACHMENT_FACTS and OCI_VNIC_FACTS Ansible modules
The VM spin-up usually takes 2 or 3 minutes in OCI, and then we need to wait another 1-2 minutes for the SSH server to start. Therefore our playbook code needs to obtain the public IP of this brand new machine and then wait until the machine is really ready. We need the next two Ansible modules – the oci_vnic_attachment_facts module for the VNIC attachment and then, in cascade, the oci_vnic_facts module for the VNIC itself and the public IP address, which is a VNIC attribute.
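The cascade of the two facts modules could be sketched like this (variable names are illustrative):

```yaml
# Sketch: walk from the instance to its VNIC to the public IP.
- name: Get the VNIC attachment(s) of the new instance
  oci_vnic_attachment_facts:
    compartment_id: "{{ compartment_ocid }}"
    instance_id: "{{ result.instance.id }}"
  register: vnic_attachment_result

- name: Get details of the first VNIC
  oci_vnic_facts:
    id: "{{ vnic_attachment_result.vnic_attachments[0].vnic_id }}"
  register: vnic_result

- name: Store the public IP as a fact
  set_fact:
    instance_public_ip: "{{ vnic_result.vnic.public_ip }}"
```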
We can consider the playbook execution successful only if we can access the machine over the SSH protocol and then execute some simple command remotely. Let's say it will be the "uptime" Linux command, OK? All of that you can see below, in the last part of the playbook:
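That final check might look like this sketch (the user name opc and the timeouts are assumptions, not taken from the repo):

```yaml
# Sketch: wait for sshd, then prove a remote command actually runs.
- name: Wait until port 22 answers on the new instance
  wait_for:
    host: "{{ instance_public_ip }}"
    port: 22
    delay: 10
    timeout: 300

- name: Run a simple remote command over SSH
  command: >
    ssh -o StrictHostKeyChecking=no
    opc@{{ instance_public_ip }} uptime
  register: uptime_output

- name: Print the result
  debug:
    msg: "{{ uptime_output.stdout }}"
```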
OK. It looks like we know exactly what this playbook is for, so let's execute it. To do so, we can use the shell script in the root directory; it encapsulates the ansible-playbook utility with all the necessary parameters:
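The wrapper could be as small as this sketch (the real script in the repo may pass additional parameters):

```shell
#!/bin/bash
# Sketch of the wrapper: run the first playbook with verbose output.
ansible-playbook playbooks/create_oci_infra.yml -v
```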
OK. Now it is time for execution. Let's start Ansible. After 5-6 minutes we should see this outcome in the Linux terminal:
Meanwhile, in the OCI UI we should see the result as follows:
And that is the end of part 1. Next time I will explain how to run the second and third playbooks from the repo.
Martin, The Cook 🙂