Closed Bug 1005374 Opened 11 years ago Closed 11 years ago
Routing metadata requests to the OpenStack metadata API endpoint
Categories
(Infrastructure & Operations Graveyard :: NetOps, task)
Tracking
(Not tracked)
RESOLVED
WONTFIX
People
(Reporter: dividehex, Assigned: adam)
References
Details
On OpenStack instances, HTTP requests to 169.254.169.254 port 80 are routed to an API endpoint which returns metadata about the instance based on the source IP. This works easily for Nova compute nodes that set up VMs and use Open vSwitch to route traffic internally, but it doesn't fit well into the baremetal provisioning model, which will use existing routing hardware.
We need to figure out the best way to route requests to 169.254.169.254 to the OpenStack metadata API endpoints from the baremetal node VLANs (try, core, loaner). To make this even more complicated, traffic will need to be routed to different API endpoints based on the source VLAN, since we will have multiple OpenStack instances (e.g. staging and production).
Comment 1•11 years ago
So there will be three servers, all with the same IP address of 169.254.169.254, right? And hosts on each VLAN (try, core, loaner) will need to exchange packets with an associated 169.254.169.254 server, i.e. each VLAN will have a 169.254.169.254 server associated with it. Is that correct?
Question 1: Will these 169.254.169.254 servers have a secondary interface on them which can be used for management?
Question 2: Will these 169.254.169.254 servers require access to the Internet?
Thanks.
Flags: needinfo?(jwatkins)
Comment 2•11 years ago
To your questions, 169.254.169.254 is an IP address, not a server, so I'm not sure what you mean.
The fundamental operation here is this: a brand-new cloud instance starts up, gets configured by DHCP, and then runs
curl http://169.254.169.254
to get a text file with some other, non-DHCP information about its configuration (instance name, user who created it, etc. etc.). We need to make sure that those requests get to a host which can answer them (hosts running the metadata API).
The "multiple instances" jake is talking about refers to staging and production, as discussed in yesterday's email. So we will have admin, try, core, and loaner VLANs for staging, and admin, try, core, and loaner VMs for production. In staging VLANs, 169.254.169.254 needs to be forwarded to the staging metadata api, and in production VLANs, the same IP would need to be forwarded to the production metadata api.
I believe that OpenStack does this internally using DNAT, so perhaps we should consider that as a solution. That would mean that incoming packets to the default gateway from the try/core/loaner VLANs with a destination IP of 169.254.169.254 would be DNAT'd to a virtual IP in the admin VLAN's netblock (a normal IP in 10.26/16). So let's draw some packets, using hypothetical IPs:
new VM instance = 10.26.98.98
core VLAN default gateway = 10.26.98.1
metadata VIP = 10.26.110.110
metadata API servers' primary IP = 10.26.110.111, 10.26.110.112
VM sends
10.26.98.98 -> 169.254.169.254 SYN
fw1 translates to
10.26.98.98 -> 10.26.110.110 SYN
metadata API receives packet, replies with
10.26.110.110 -> 10.26.98.98 SYN/ACK
fw1 translates to
169.254.169.254 -> 10.26.98.98 SYN/ACK
and traffic continues with those translations. The second translation is the tricky bit: if the VM receives a packet with source/dest "10.26.110.110 -> 10.26.98.98", it will reject it. Can you configure an SRX to do this? Can you configure it to do so *twice*, with different NAT targets for different source VLANs (production and staging)?
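For comparison, this is roughly what OpenStack sets up internally on a network node with iptables (a sketch only, not SRX syntax; the VIP and port here are illustrative):
# DNAT metadata requests to the metadata service (illustrative address/port)
iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
  -j DNAT --to-destination 10.26.110.110:8775
# Because the NAT is stateful (conntrack), replies on an established
# connection are un-translated automatically, so the return-path rewrite
# back to 169.254.169.254 doesn't need a separate rule.
A stateful destination NAT on the SRX should handle the reverse translation the same way, but that's worth confirming.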
Flags: needinfo?(jwatkins)
Comment 3•11 years ago
I'm using the word "server" and you're using the word "host".
I think we're talking about the same thing.
Adam suggested doing DNAT already, and that sounds like the most straightforward way of solving this problem.
Assignee
Updated•11 years ago
Assignee: network-operations → adam
Reporter
Comment 4•11 years ago
So there is a better way of doing this: configure Neutron and OVS to bridge to the proper VLAN, in conjunction with setting up a corresponding 'provider' network within Neutron. From there, DHCP and metadata will be available from the Neutron agents. In fact, looking at the source, metadata requests need to go through Neutron's metadata proxy service, which sets HTTP headers based on the tenant ID and a shared key before passing them on to the Nova metadata API. This prevents metadata from leaking to anyone who requests it.
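For the record, a provider network of that sort is created in Neutron along these lines (network name, VLAN ID, physical network label, and subnet range are all illustrative; this is a sketch of the approach, not our final config):
neutron net-create try-net --shared \
  --provider:network_type vlan \
  --provider:physical_network physnet1 \
  --provider:segmentation_id 248
neutron subnet-create try-net 10.26.98.0/24 --name try-subnet --enable-dhcp
The DHCP and metadata agents then serve instances on that VLAN, with the metadata agent's shared secret (metadata_proxy_shared_secret) matching the one configured on the Nova side.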
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → WONTFIX
Updated•3 years ago
Product: Infrastructure & Operations → Infrastructure & Operations Graveyard