I am not sure if you are aware of it, but you can now interconnect more than one VCN with your on-premises location, and it doesn’t matter whether you are using a site-to-site VPN or FastConnect. This architecture has been clearly explained here as Transit VCN, an advanced network architecture in OCI. Of course, there is a significant question about why you should split your cloud network into different VCNs, right? But let’s imagine you need different IP ranges for your departments, and that some of your applications will be central while others will be regional. On the other hand, you would like to avoid giving each VCN direct access to on-premises (avoiding a “mesh” scenario). Instead, you would like to create some kind of central hub to which all of the departmental VCNs are routed, especially for the on-premises interconnection. You can see it below on the simplified topology diagram:
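To make the hub idea a bit more concrete, here is a minimal Terraform sketch of the core pieces: a hub VCN holding the DRG (the single door towards on-premises) and one spoke VCN. All names, variables and CIDR ranges below are hypothetical illustrations, not taken from the actual repo:

```hcl
# Hub VCN – the central point every spoke will be routed through.
resource "oci_core_vcn" "hub" {
  compartment_id = var.compartment_ocid
  cidr_block     = "10.0.0.0/16"
  display_name   = "HUBVCN"
}

# One of the departmental (spoke) VCNs with its own IP range.
resource "oci_core_vcn" "spoke1" {
  compartment_id = var.compartment_ocid
  cidr_block     = "10.1.0.0/16"
  display_name   = "VCN1"
}

# The DRG lives only in the hub; the site-to-site VPN or FastConnect
# terminates here, so no spoke needs its own on-premises connection.
resource "oci_core_drg" "hub_drg" {
  compartment_id = var.compartment_ocid
  display_name   = "hub-drg"
}

resource "oci_core_drg_attachment" "hub_drg_attachment" {
  drg_id = oci_core_drg.hub_drg.id
  vcn_id = oci_core_vcn.hub.id
}
```

The point of this layout is that only the hub carries a DRG, which is exactly what lets us avoid the “mesh” scenario described above.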
OK, so let’s examine what we have here, how all of this will work, and how it could be utilized. First of all, be aware that you can easily connect from VCN3 to VCN1, or from VCN2 to VCN1, without direct peering between them. It means our spoke VCNs will use HUBVCN as an intermediary. In practice, we will be able to ping vcn1server01 from vcn3server01 and vice versa, despite the fact that their subnets and VCNs are completely separate. It is worth adding that in this post (part 1) I will omit the on-premises side, hoping you will then read part 2, where our use case will be extended with the home router. After implementing the code from part 2, we will be able to ping on-premises private IPs from any node in a spoke VCN. But let’s focus on the code for part 1. Here is a repo where you can find the Terraform code.
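The spoke-to-spoke traffic works because each spoke is peered with the hub through Local Peering Gateways (LPGs), and route rules send non-local traffic through the hub. A hedged sketch of what that can look like, again with hypothetical names and CIDRs rather than the exact code from the repo:

```hcl
# Hub side of the peering with VCN1. Setting peer_id on one side
# is what establishes the local peering connection.
resource "oci_core_local_peering_gateway" "hub_to_spoke1" {
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.hub.id
}

resource "oci_core_local_peering_gateway" "spoke1_to_hub" {
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.spoke1.id
  peer_id        = oci_core_local_peering_gateway.hub_to_spoke1.id
}

# In VCN1's route table, traffic for VCN3's range goes to the hub LPG,
# not to any direct peering – there is none.
resource "oci_core_route_table" "spoke1_rt" {
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.spoke1.id
  route_rules {
    destination       = "10.3.0.0/16" # VCN3, reachable only via the hub
    destination_type  = "CIDR_BLOCK"
    network_entity_id = oci_core_local_peering_gateway.spoke1_to_hub.id
  }
}
```

On the hub side, a route table attached to each hub LPG (transit routing) forwards the traffic onwards to the other spoke’s LPG, which is what makes pinging vcn1server01 from vcn3server01 possible without any direct VCN1–VCN3 peering.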
I hope you will cook it successfully. Let me know by email if something does not turn out as tasty as expected 🙂
Martin The Cook.