Cook in OCI with Ansible modules – part 1

I mentioned it some time ago: besides Terraform, it is possible to cook in OCI with Ansible's help. Today is the right time to show it here. Like with Terraform, you need to do some preliminary work, mostly defined in the OCI docs (here is a link). Your journey with Ansible and OCI will push you toward the OCI Python SDK. That shouldn't feel strange, as Ansible itself is written in Python. On top of this SDK, you will install the Ansible OCI Modules, available for free in a GitHub repo. I have assumed you already have Ansible itself, but maybe you need to install that tool as well; here is a link with instructions on how to do it.

Ok, let's assume you are ready and all the software is installed as required. What to do next? First, you should clone my repo from GitHub. This repo is just a simple example of how to build OCI infra (a VCN and its network subcomponents) and then provision a simple VM within that VCN. My example uses three Ansible playbooks:

  • playbooks/create_oci_infra.yml – creates the VCN and the compute instance (VM).
  • playbooks/install_webserver_on_oci_compute.yml – installs the HTTPD server on the compute instance created before.
  • playbooks/teardown_oci_compute_and_infra.yml – destroys the compute instance and all the OCI network infrastructure.

Let's examine the create_oci_infra.yml file step by step. This will be part 1; the next two blog posts will cover the second and third playbooks:

create_oci_infra_playbook_including_variables_

As you can see above, my playbook will be executed locally. Locally in my case means the ansible-server (localhost), but of course, it could just as well be your laptop.

In the tasks list, the first task includes the variables.yml file located in the variables subdirectory. This variables.yml file defines all the variables and should be modified by you as an initial step before the Ansible execution (the positions in yellow rectangles should be customized by you):

variables_yml

Ok, now let's return to the first playbook. What do we have as the second task?

OCI_VCN Ansible module

First, we need to create a VCN within the compartment defined in the variables.yml file. For that purpose, we use the oci_vcn Ansible module. We provide some input to this module – the VCN name, CIDR block, and DNS label. The task executes the module and registers the results in the "result" variable. In the next step, the automation sets this result as an Ansible fact (vcn_id). This gives us a reference to this particular VCN, which will be necessary for the rest of the OCI resources:

create_oci_infra_playbook_creating_VCN
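The screenshot above can be sketched roughly like this – a hedged sketch based on the legacy oracle/oci-ansible-modules repo; the exact module options and variable names (vcn_name, vcn_cidr_block, vcn_dns_label) are my assumptions and may differ from the repo:

```yaml
- name: Create a VCN
  oci_vcn:
    compartment_id: "{{ compartment_id }}"
    display_name: "{{ vcn_name }}"
    cidr_block: "{{ vcn_cidr_block }}"
    dns_label: "{{ vcn_dns_label }}"
    state: present
  register: result

- name: Set the VCN id as a fact
  set_fact:
    vcn_id: "{{ result.vcn.id }}"
```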

OCI_INTERNET_GATEWAY Ansible module

Let's examine the next task; I hope everything will be clear by then. Here we have another Ansible module, oci_internet_gateway, embodied as yet another task. This task creates an InternetGateway nested in the VCN. As you can see, the vcn_id variable (set as a fact before) is used to define the relation between the IG and the VCN (highlighted in red). The InternetGateway id will also be stored as a fact/variable for future use (to build relationships between resources). One more thing... please notice the state option, which is set to "present", as opposed to the other possible value, "absent". With "present" the module creates the IG resource; setting the value to "absent" would instruct the module to tear the resource down:

create_oci_infra_playbook_creating_IG
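The task shown above could be sketched as follows – again a hedged sketch against the legacy oci_internet_gateway module; the ig_name variable is my assumption:

```yaml
- name: Create an Internet Gateway in the VCN
  oci_internet_gateway:
    compartment_id: "{{ compartment_id }}"
    vcn_id: "{{ vcn_id }}"      # fact set right after the VCN task
    name: "{{ ig_name }}"
    enabled: yes
    state: present
  register: result

- name: Set the Internet Gateway id as a fact
  set_fact:
    ig_id: "{{ result.internet_gateway.id }}"
```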

OCI_ROUTE_TABLE Ansible module

Ok, let's move forward and continue decomposing the playbook. The next task is related to the route table; here we use the oci_route_table Ansible module. Again we reference the VCN (vcn_id) as well as the Internet Gateway (ig_id), and we store the route table id in the "rt_id" fact/variable just after the task executes:

create_oci_infra_playbook_creating_RT
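In sketch form, with a single default route via the Internet Gateway (the rt_name variable and the 0.0.0.0/0 rule are my assumptions, though a default route is the usual pattern here):

```yaml
- name: Create a route table routing 0.0.0.0/0 via the Internet Gateway
  oci_route_table:
    compartment_id: "{{ compartment_id }}"
    vcn_id: "{{ vcn_id }}"
    name: "{{ rt_name }}"
    route_rules:
      - cidr_block: "0.0.0.0/0"
        network_entity_id: "{{ ig_id }}"   # fact set after the IG task
    state: present
  register: result

- name: Set the route table id as a fact
  set_fact:
    rt_id: "{{ result.route_table.id }}"
```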

OCI_SECURITY_LIST Ansible module

The next step is more complex. We will use Jinja2 (j2) templates stored in the templates subdirectory. They help us create ingress and egress rules for the OCI Security List. In practice, the template files are loaded and all variables are substituted with their proper values. The rendered files are then stored as YAML files, printed out, and loaded as dynamic variables (loaded_ingress & loaded_egress). As the last step, they are used by yet another Ansible module called oci_security_list. This module also references the VCN (vcn_id). As a result, we get an additional variable, instance_security_list_ocid (as an Ansible fact):

create_oci_infra_playbook_creating_SecList_with_Templates
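The template-render-and-load flow described above might look like this – a hedged sketch; the template file names, the /tmp destination paths, and the keys inside the rendered YAML are my assumptions (the egress side is rendered analogously):

```yaml
- name: Render the ingress rules from the Jinja2 template
  template:
    src: templates/ingress_security_rules.yaml.j2
    dest: /tmp/ingress_security_rules.yaml

- name: Load the rendered rules as a dynamic variable
  include_vars:
    file: /tmp/ingress_security_rules.yaml
    name: loaded_ingress

# ...same two steps for the egress template, loaded as loaded_egress...

- name: Create the security list from the loaded rules
  oci_security_list:
    compartment_id: "{{ compartment_id }}"
    vcn_id: "{{ vcn_id }}"
    name: "{{ seclist_name }}"
    ingress_security_rules: "{{ loaded_ingress.ingress_security_rules }}"
    egress_security_rules: "{{ loaded_egress.egress_security_rules }}"
    state: present
  register: result

- name: Set the security list OCID as a fact
  set_fact:
    instance_security_list_ocid: "{{ result.security_list.id }}"
```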

OCI_SUBNET Ansible module

It looks like we have all of the infra components needed for the subnet to be constructed. For subnet provisioning, we use the oci_subnet Ansible module as follows. In the code body we use the vcn_id, instance_security_list_ocid, and rt_id variables created before. The rest of the module options are populated, as always, from the variables.yml file included at the very beginning of the playbook. As a result of the execution we expect registered output, stored in yet another variable called instance_subnet_id:

create_oci_infra_playbook_creating_subnet
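Pulling the three earlier facts together, the subnet task could be sketched as below (availability_domain, subnet_cidr_block, and subnet_dns_label are assumed variable names from variables.yml):

```yaml
- name: Create a subnet referencing the route table and security list
  oci_subnet:
    availability_domain: "{{ availability_domain }}"
    compartment_id: "{{ compartment_id }}"
    vcn_id: "{{ vcn_id }}"
    cidr_block: "{{ subnet_cidr_block }}"
    dns_label: "{{ subnet_dns_label }}"
    route_table_id: "{{ rt_id }}"
    security_list_ids:
      - "{{ instance_security_list_ocid }}"
    state: present
  register: result

- name: Set the subnet id as a fact
  set_fact:
    instance_subnet_id: "{{ result.subnet.id }}"
```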

OCI_INSTANCE Ansible module

It looks like our foundation is ready and we can cook the VM itself. For that purpose, the next task uses the oci_instance Ansible module. Again we use a dynamic variable, this time instance_subnet_id (set as a fact in the previous step). Keep in mind this instance will be tagged with a freeform tag (tier=webserver). The tag will be used for Ansible dynamic inventory (covered in part 2 of this blog).

launch_instance
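The launch task could be sketched like this – hedged again; instance_name, instance_image_id, instance_shape, and ssh_public_key_path are assumed variable names, while the tier=webserver freeform tag comes straight from the text above:

```yaml
- name: Launch the compute instance in the new subnet
  oci_instance:
    availability_domain: "{{ availability_domain }}"
    compartment_id: "{{ compartment_id }}"
    name: "{{ instance_name }}"
    image_id: "{{ instance_image_id }}"
    shape: "{{ instance_shape }}"
    vnic:
      subnet_id: "{{ instance_subnet_id }}"   # fact set after the subnet task
      assign_public_ip: yes
    metadata:
      ssh_authorized_keys: "{{ lookup('file', ssh_public_key_path) }}"
    freeform_tags:
      tier: webserver      # consumed later by the dynamic inventory (part 2)
    state: present
  register: result
```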

OCI_VNIC_ATTACHMENT_FACTS and OCI_VNIC_FACTS Ansible modules

The VM spin-up usually takes 2-3 minutes in OCI, and then we need to wait another 1-2 minutes for the SSH server to start. So our playbook needs to obtain the public IP of this brand new machine and then wait until the machine is really ready. We will need the next two Ansible modules – oci_vnic_attachment_facts for the VNIC attachment and then, in cascade, oci_vnic_facts for the VNIC itself and its public IP address attribute.
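The cascade of the two facts modules could be sketched as follows, assuming the launch task registered its output in "result"; the fact name instance_public_ip is my own label:

```yaml
- name: Get the VNIC attachment of the new instance
  oci_vnic_attachment_facts:
    compartment_id: "{{ compartment_id }}"
    instance_id: "{{ result.instance.id }}"
  register: vnic_attachments

- name: Get the VNIC details, including the public IP
  oci_vnic_facts:
    id: "{{ vnic_attachments.vnic_attachments[0].vnic_id }}"
  register: vnic_facts

- name: Remember the public IP for the SSH check
  set_fact:
    instance_public_ip: "{{ vnic_facts.vnic.public_ip }}"
```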

We can consider the playbook execution successful only if it is possible to access the machine over SSH and execute some simple command remotely. Let's say it will be the "uptime" Linux command, ok? All of that you can see below, in the last part of the playbook:

acessing_vm_for_uname
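A sketch of this last part – assuming the public IP was stored earlier in an instance_public_ip fact, and assuming the default "opc" user of Oracle Linux images; the timeouts are illustrative:

```yaml
- name: Wait until SSH is reachable on the new instance
  wait_for:
    host: "{{ instance_public_ip }}"
    port: 22
    delay: 30
    timeout: 600

- name: Run a simple remote command to verify access
  command: ssh -o StrictHostKeyChecking=no opc@{{ instance_public_ip }} uptime
  register: uptime_output

- name: Show the uptime result
  debug:
    msg: "{{ uptime_output.stdout }}"
```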

Ok. It looks like we know exactly what this playbook is for. Let's now execute it. To do that we can use the shell script in the root directory, which wraps the ansible-playbook utility with all the necessary parameters:

STEP1_create_oci_infra.jpg

Ok. Now it is time for execution. Let's start Ansible. After 5-6 minutes we should have this outcome in the Linux terminal:

succesful_launch_of_VM

On the other hand, in the OCI UI we should see the result as follows:

Cloud_UI_VCNCloud_UI_VM

And that is the end of part 1. Next time I will explain how to run the second and third playbooks from the repo.

Bon Appetit,

Martin, The Cook 🙂

OCI’s ATP as a second dish

What is your first association with the term ATP? For me? Hmmm... Adenosine triphosphate, the energy-rich organic chemical which fuels up biological forms of life...? That is something I learned in secondary school a long, long time ago. Well... ok... But here I will be talking about something different! I will try to familiarize you with the brand new Oracle Cloud Infrastructure service called Autonomous Transaction Processing (ATP). This new PaaS offering means you can create a database with OLTP capabilities within a couple of minutes. Most of the tasks are fully automated. Oracle defines it as self-driving, autonomous, which means it requires minimal effort to maintain. As a developer, you can focus on your tables and the data itself. I like to describe ATP as a database container. Additionally, it is possible to provision that autonomous database automatically with Terraform.

For that purpose, I have downloaded the code from the Terraform OCI Provider examples, published as part of the Terraform OCI Provider GitHub repo. Of course, I have modified it a little bit, and here is my small GitHub repo with the examples. The first ATP database is here:

MacBook-Pro-Martin:FoggyKitchenATP martinlinxfeld$ more atp.tf
(...)
resource "oci_database_autonomous_database" "autonomous_database" {
  #Required
  admin_password           = "${random_string.autonomous_database_admin_password.result}"
  compartment_id           = "${var.compartment_ocid}"
  cpu_core_count           = "${var.autonomous_database_cpu_core_count}"
  data_storage_size_in_tbs = "${var.autonomous_database_data_storage_size_in_tbs}"
  db_name                  = "${var.autonomous_database_db_name}"

  #Optional
  display_name  = "${var.autonomous_database_display_name}"
  freeform_tags = "${var.autonomous_database_freeform_tags}"
  license_model = "${var.autonomous_database_license_model}"
}
(...)
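The elided part of atp.tf must also define the random_string resources referenced in admin_password above (and wallet_password, used for the wallet later). A minimal sketch using the Terraform random provider – the lengths and character rules here are my assumptions, not necessarily what the repo uses:

```hcl
resource "random_string" "autonomous_database_admin_password" {
  length      = 16
  special     = false
  min_upper   = 1
  min_lower   = 1
  min_numeric = 1
}

resource "random_string" "wallet_password" {
  length  = 16
  special = false
}
```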

For accessing ATP from outside, for example from SQLDeveloper, I need to have a wallet. This wallet will be stored locally on your computer as a file:

MacBook-Pro-Martin:FoggyKitchenATP martinlinxfeld$ more atp_wallet.tf
(...)

data "oci_database_autonomous_database_wallet" "autonomous_database_wallet" {
  autonomous_database_id = "${oci_database_autonomous_database.autonomous_database.id}"
  password               = "${random_string.wallet_password.result}"
}

resource "local_file" "autonomous_database_wallet_file" {
  content  = "${data.oci_database_autonomous_database_wallet.autonomous_database_wallet.content}"
  filename = "${path.module}/foggykitchen_atp_wallet.zip"
}
(...)

Next, I need to create a backup for ATP. Keep in mind it will be a manual backup (automated backup is already included). You can run this manual backup before a major data change just to have additional protection, and then manually restore from it if something goes wrong. For the purpose of that backup, you need to prepare an object storage bucket named backup_<database_name>:

MacBook-Pro-Martin:FoggyKitchenATP martinlinxfeld$ more atp_backup.tf
data "oci_objectstorage_namespace" "autonomous_database_backup_namespace" {
}

resource "oci_objectstorage_bucket" "autonomous_database_backup_bucket" {
  compartment_id = "${var.compartment_ocid}"
  name           = "${var.autonomous_database_backup_display_name}"
  namespace      = "${data.oci_objectstorage_namespace.autonomous_database_backup_namespace.namespace}"
}

resource "oci_database_autonomous_database_backup" "autonomous_database_backup" {
  autonomous_database_id = "${oci_database_autonomous_database.autonomous_database.id}"
  display_name           = "${var.autonomous_database_backup_display_name}"
}

Of course, I have a variables.tf as well, but you will find it in the GitHub repo for this small project. Ok, everything seems to be ready, so I can run the "terraform apply" command. This is the output…

ATP_under_provisioning_Terraform+CloudUI

And after 4-5 minutes we have the database up and running, but there is a small problem with the initial manual backup. It has failed for some reason:

ATP_failed_backup

So let's connect to ATP with SQLDeveloper, just to fix the manual backup configuration according to the documentation section linked here. Here is a SQLDeveloper screenshot with the proper sequence:

SQLDeveloper_backup_config

Ok, so now the manual backup should work fine. You can see that in the screenshot below – I have created another manual backup and it was successful:

ATP_with_successful_backup

Looks like the second dish is ready. I have added some flavoring, but the dish seems to be delicious. 🙂 Go ahead and cook for yourself! 🙂

Best,

MasterChef Martin.

My first soup in OCI infra with Terraform

After my last post, one of my mates asked me for something more basic – fundamental, let's say. He argued that he cannot use a NAT Gateway or a Bastion Host when basic concepts such as the VCN are not well explained. That is fair when we talk to OCI newbies – and there is nothing wrong with being an OCI newbie; some time ago I was that kind of newbie myself. Unfortunately, as an expert, I tend to forget that every aspect of knowledge should be taught incrementally, from basic and simple up to advanced and complex, and IT knowledge is no exception. So let's do something more basic, for my friend and other newbies in Infrastructure as Code (IaC) and Oracle Cloud Infrastructure (OCI). When I say IaC, I expect you will not think only about Terraform. You have to be aware there is a real choice now. Terraform is great and I cannot imagine working without the terraform plan capabilities, but the truth is you can use Ansible as well, with the OCI Ansible modules available here in the GitHub repo. But, as my friend said – keep it simple. So let me write about the OCI Ansible modules next time, ok? Let's focus on Terraform and OCI, ok? 🙂

As previously, let’s start with a simple topology diagram as follows:

FK_Simple_topo_diagram

Someone could say it is not simple at all, but let me explain it step by step; you will soon see it is complex only at first glance. 🙂 Ok, so this is the first topology diagram, right? As you can see, I will build everything in one region called eu-frankfurt-1. Within that region, I can utilize availability domains (AD1, AD2, ...). An AD can be treated as a separate data center with independent cooling, power, and network. A failure of AD1 means AD2 (and further ADs) will still function and can provide the resources. And now I will build a VCN with the CIDR 10.0.0.0/16:

resource "oci_core_virtual_network" "FoggyKitchenVCN" {
  cidr_block = "${var.VCN-CIDR}"
  dns_label = "FoggyKitchenVCN"
  compartment_id = "${oci_identity_compartment.FoggyKitchenCompartment.id}"
  display_name = "FoggyKitchenVCN"
}

BTW: You have probably noticed I am not hard-coding the CIDR address. Instead I am using the ${var.VCN-CIDR} variable, which can be found in the variables.tf file:

(...)
variable "VCN-CIDR" {
  default = "10.0.0.0/16"
}
(...)

Ok, but let's return to the VCN itself and the further infra resources. The VCN will be connected to the Internet with an InternetGateway:

resource "oci_core_internet_gateway" "FoggyKitchenInternetGateway" {
    compartment_id = "${oci_identity_compartment.FoggyKitchenCompartment.id}"
    display_name = "FoggyKitchenInternetGateway"
    vcn_id = "${oci_core_virtual_network.FoggyKitchenVCN.id}"
}

Is it clear so far? I hope so 🙂

Next, within that VCN, I will create my first public subnet located in AD1:

resource "oci_core_subnet" "FoggyKitchenSubnet1" {
  availability_domain = "${var.ADs[0]}"
  cidr_block = "10.0.1.0/24"
  display_name = "FoggyKitchen Subnet1"
  dns_label = "FoggyKitchenN1"
  compartment_id = "${oci_identity_compartment.FoggyKitchenCompartment.id}"
  vcn_id = "${oci_core_virtual_network.FoggyKitchenVCN.id}"
  route_table_id = "${oci_core_route_table.FoggyKitchenRouteTable1.id}"
  dhcp_options_id = "${oci_core_dhcp_options.FoggyKitchenDhcpOptions1.id}"
  security_list_ids = ["${oci_core_security_list.FoggyKitchenSSHSecurityList.id}","${oci_core_security_list.FoggyKitchenHTTPSecurityList.id}","${oci_core_security_list.FoggyKitchenHTTPSSecurityList.id}"]
}

Like previously with the CIDR for the VCN, here the availability domain is derived from a 3-element array defined in variables.tf. In this case, AD1 is element number 0 of the array:

variable "ADs" {
  default = ["pnkC:EU-FRANKFURT-1-AD-1", "pnkC:EU-FRANKFURT-1-AD-2", "pnkC:EU-FRANKFURT-1-AD-3"]
}

Let’s move forward. Now I will define my first VM:

resource "oci_core_instance" "FoggyKitchenWebserver1" {
  availability_domain = "${var.ADs[0]}"
  compartment_id = "${oci_identity_compartment.FoggyKitchenCompartment.id}"
  display_name = "FoggyKitchenWebServer1"
  shape = "${var.Shapes[0]}"
  subnet_id = "${oci_core_subnet.FoggyKitchenSubnet1.id}"
  source_details {
    source_type = "image"
    source_id   = "${var.Images[0]}"
  }
  metadata {
      ssh_authorized_keys = "${file("${var.public_key_oci}")}"
  }
  create_vnic_details {
     subnet_id = "${oci_core_subnet.FoggyKitchenSubnet1.id}"
     assign_public_ip = true
  }
}
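The var.Shapes and var.Images arrays referenced in the instance resource also live in variables.tf (not shown above). A sketch with assumed values – the shape name is just a common choice, and the image OCID is a placeholder only; substitute a current Oracle Linux image OCID for your region:

```hcl
variable "Shapes" {
  default = ["VM.Standard2.1"]
}

variable "Images" {
  # Placeholder only – look up a real Oracle Linux image OCID for eu-frankfurt-1
  default = ["ocid1.image.oc1.eu-frankfurt-1.aaaaexampleimageocid"]
}
```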

Ok, but what about the second box at the bottom of the topology diagram? Here I have FoggyKitchenRouteTable1, which defines the route from inside the subnet to the outside via the InternetGateway entity:

resource "oci_core_route_table" "FoggyKitchenRouteTable1" {
    compartment_id = "${oci_identity_compartment.FoggyKitchenCompartment.id}"
    vcn_id = "${oci_core_virtual_network.FoggyKitchenVCN.id}"
    display_name = "FoggyKitchenRouteTable1"
    route_rules {
        destination = "0.0.0.0/0"
        destination_type  = "CIDR_BLOCK"
        network_entity_id = "${oci_core_internet_gateway.FoggyKitchenInternetGateway.id}"
    }
}

For security reasons I have also created some Security Lists which enable traffic for the SSH, HTTP, and HTTPS protocols, so your web server can be accessed from the Internet on ports 22, 80, and 443. A security list is a firewall with stateful inspection which is applied on top of the VCN. Here is the example for the HTTP protocol:

resource "oci_core_security_list" "FoggyKitchenHTTPSecurityList" {
   compartment_id = "${oci_identity_compartment.FoggyKitchenCompartment.id}"
   display_name = "FoggyKitchenHTTPSecurity List"
   vcn_id = "${oci_core_virtual_network.FoggyKitchenVCN.id}"
   egress_security_rules = [{
      protocol = "6"
      destination = "0.0.0.0/0"
   }]
   ingress_security_rules = [{
      tcp_options {
        "max" = 80
        "min" = 80
      }
      protocol = "6"
      source = "0.0.0.0/0"
     },
    {
      protocol = "6"
      source = "${var.VCN-CIDR}"
     }]
  }

In FoggyKitchenDhcpOptions1 I define how DHCP/DNS will be resolved. I will not spend too much time on that topic now, but I hope to explain it in future posts if needed:

resource "oci_core_dhcp_options" "FoggyKitchenDhcpOptions1" {
  compartment_id = "${oci_identity_compartment.FoggyKitchenCompartment.id}"
  vcn_id = "${oci_core_virtual_network.FoggyKitchenVCN.id}"
  display_name = "FoggyKitchenDHCPOptions1"

  // required
  options {
    type = "DomainNameServer"
    server_type = "VcnLocalPlusInternet"
  }

  // optional
  options {
    type = "SearchDomain"
    search_domain_names = [ "foggykitchen.com" ]
  }
}

Looks like we are ready for execution. Let's cook this soup! terraform plan & terraform apply commands to be executed 🙂 Hope it will be tasty 🙂

MasterChef Martin.

PS. My code is here in small GitHub repo.

Bastion host and remote access to your private hosts in OCI

Last time I wrote a post about the NAT Gateway, a new feature in OCI which enables access to public Internet resources from private subnets. Today I would like to enhance this topology a little bit by adding a bastion host. The truth is you need to access your private machines from outside, right? Before going into the Terraform automation, let's examine the topology diagram:

Bastion+NAT_Gateway

As you can see in the picture above, we have a VCN which has been split into two subnets. One subnet is public, which means machines located there have public IP addresses. The other is a private subnet, separated from the outside world. Of course, there is a way to initiate traffic to the Internet via the NAT Gateway, but there is no way to connect to the machines in the private subnet from the Internet. Really? Of course there is – we can set up a bastion host and enable the traffic with the proper route tables and security lists. All of that has been documented here.
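In Terraform, the core of that setup is a security list on the private subnet that admits SSH only from the public (bastion) subnet. A hedged sketch in the same Terraform 0.11 style as my other posts – the resource name and the 10.0.1.0/24 public subnet CIDR are my assumptions, not necessarily what the repo uses:

```hcl
resource "oci_core_security_list" "FoggyKitchenBastionSSHSecurityList" {
  compartment_id = "${var.compartment_ocid}"
  display_name = "FoggyKitchen Bastion SSH Security List"
  vcn_id = "${oci_core_virtual_network.FoggyKitchenVCN.id}"
  egress_security_rules = [{
    protocol = "6"
    destination = "0.0.0.0/0"
  }]
  ingress_security_rules = [{
    tcp_options {
      "max" = 22
      "min" = 22
    }
    protocol = "6"
    # Admit SSH only from the public subnet where the bastion host lives
    source = "10.0.1.0/24"
  }]
}
```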

Ok, but let's say we would like to automate the provisioning process, right? Here is my small GitHub repo with all the necessary Terraform files. They include the NAT Gateway config, discussed here, in my previous blog post.

Now you can cook it for yourself 🙂 Bon Appetit! 🙂

Your MasterChef Martin 🙂

 

Let’s cook NAT Gateway in OCI!

Let's cook something today, ok? I am not sure if you have had a chance to read this blog post with a tutorial for automating NAT instance deployment in OCI with the Terraform utility. The post is just great, but it seems to be a little bit outdated, and the same can be said about this whitepaper. Why? Because now the Terraform OCI Provider lets you use a NAT Gateway without a dedicated VM for NAT purposes. Now in OCI you can build a NAT Gateway – let's call it an attribute of your VCN. Here is the Terraform code snippet for this purpose:

resource "oci_core_nat_gateway" "FoggyKitchenNatGateway" {
  compartment_id = "${var.compartment_ocid}"
  display_name = "Foggy Kitchen NAT Gateway" 
  vcn_id = "${oci_core_virtual_network.FoggyKitchenVCN.id}"
}

Of course, that is not enough. You have your VMs in a private subnet, and this private subnet requires a route table which embraces the brand new NAT Gateway:

resource "oci_core_route_table" "FoggyKitchenPrivateRouteTable1" {
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.FoggyKitchenVCN.id}"
  display_name = "Foggy Kitchen Private Route Table"
  route_rules {
    destination = "0.0.0.0/0"
    destination_type  = "CIDR_BLOCK"
    network_entity_id = "${oci_core_nat_gateway.FoggyKitchenNatGateway.id}"
  }
}

And here is private subnet definition:

resource "oci_core_subnet" "FoggyKitchenPrivateSubnet1" {
  availability_domain = "${var.ADs[0]}"
  cidr_block = "10.0.1.0/24"
  display_name = "Foggy Kitchen Private Subnet1"
  dns_label = "fkprivsub1"
  compartment_id = "${var.compartment_ocid}"
  vcn_id = "${oci_core_virtual_network.FoggyKitchenVCN.id}"
  route_table_id = "${oci_core_route_table.FoggyKitchenPrivateRouteTable1.id}"
  dhcp_options_id = "${oci_core_dhcp_options.FoggyKitchenDhcpOptions1.id}"
  security_list_ids = ["${oci_core_security_list.FoggyKitchenPrivateSSHSecurityList.id}"]
  prohibit_public_ip_on_vnic = "true"
}

Hope you will find it interesting, my dear MasterChef! For the other features of the NAT Gateway, please take a look at the OCI docs here. If you need to know more about the Terraform OCI Provider and its NAT Gateway support, I suggest this link.

Bon Appetit!

Martin.

PS. Here are screenshots from Cloud UI after my Terraform code execution:

Pic1. NAT Gateway

My_NAT_Gateway

Pic2. Route table supporting NAT Gateway traffic

Route_Table_for_NAT_Gateway