#midonet

      • tfukushima has quit
      • baker joined the channel
      • apuimedo has quit
      • tfukushima joined the channel
      • tfukushima has quit
      • tfukushima joined the channel
      • tfukushima has quit
      • dmd17 joined the channel
      • dmd17 has left the channel
      • tfukushima joined the channel
      • baker has quit
      • baker joined the channel
      • ryu_mido joined the channel
      • naoto joined the channel
      • Midokura-Mike has quit
      • reichlichbechern has quit
      • reichlichbechern joined the channel
      • baker has quit
      • bberg1
        hhuang: Looks to me like it's failing to find the Midonet IP address group with ID 92117947-ed69-4cb0-b146-438151653022. This should correspond to a Neutron Security Group with the same ID. Does such a security group exist in your Neutron deployment?
      • hhuang: Also, that looks like a partial stack trace, do you have the full stack trace, to provide a bit more context?
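The check bberg1 is asking for can be sketched with the neutron CLI of that era (the commands are illustrative, not from the log; they assume admin credentials are sourced):

```shell
# Verify that the security group the stack trace refers to actually
# exists in Neutron -- the MidoNet IP address group should share its ID.
neutron security-group-show 92117947-ed69-4cb0-b146-438151653022

# If it is missing, list what Neutron does have:
neutron security-group-list
```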
      • hhuang
        hi bberg1, thanks a lot for your reply. I put the full log in this etherpad: https://etherpad.mozilla.org/4sc8eUaFy0
      • niku
        [ MoPad: 4sc8eUaFy0 ] - etherpad.mozilla.org
      • jjbigs joined the channel
      • bberg1
        hhuang: I don't see any way user error could have led to this situation, so it might be a bug. Do you have steps that can consistently reproduce this?
      • hhuang
        hi bberg1, I put up some env details and steps we did before launching the instance which triggered this error. you can find it here: https://etherpad.mozilla.org/2bPe1XUqJC
      • niku
        [ MoPad: 2bPe1XUqJC ] - etherpad.mozilla.org
      • tfukushima has quit
      • hhuang
        bberg1, one thing I would like to add here. We had ml2+ovs for neutron in the same openstack setup before switching to midonet. the ml2 plugin and ovs agent are still installed in the system but they are disabled.
      • ryu_mido
        hhuang: thanks for that info. unfortunately, we do not currently support the case where a user wants to migrate from ML2 to MidoNet with data already stored in DB. We will support this soon but definitely not in the version you are using.
      • it's a big inconvenience, but is it possible that you start clean with midonet?
      • if not, perhaps one thing we could look into is coming up with a script to sync up the data in both sides
      • but it won't be a trivial task
      • tfukushima joined the channel
      • tfukushima has quit
      • hhuang
        hi ryu_mido, thanks for your notes. I'm not sure if there would be any data remaining from ml2/ovs that could cause this issue. I did delete all network objects and stop the ml2/ovs services before installing midonet.
      • tfukushima joined the channel
      • ryu_mido
        hhuang, interesting. The error above sounds like it came from security groups, which probably means that there might have been a default security group that was created in ML2 which was never deleted.
      • hhuang
        oh, shall I remove ml2 and reinstall NSDN again?
      • s/NSDN/NSDB
      • niku
        hhuang meant to say: oh, shall I remove ml2 and reinstall NSDB again?
      • ryu_mido
        are you running this with midostack?
      • if so, you could just run ./midonet_unstack.sh (cleans up the NSDB) and then re-run ./midonet_stack.sh after confirming that the neutron plugin is set to midonet. That should re-create neutron DB and NSDB
      • if you are not running midostack and instead installed all the components manually, then to clean up the NSDB you can just delete the ZK data (rm -rf /var/lib/zookeeper/*) and restart zookeeper. Then you should drop/create the Neutron DB in mysql and run neutron-db-manage upgrade (after making sure that you have configured midonet as the neutron plugin)
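The manual path ryu_mido describes can be sketched as below; service names, paths, and database names are typical defaults and may differ in your deployment:

```shell
# 1. Wipe the ZooKeeper data that backs the NSDB, then restart ZooKeeper.
sudo service zookeeper stop
sudo rm -rf /var/lib/zookeeper/*
sudo service zookeeper start

# 2. Drop and re-create the Neutron database in MySQL.
mysql -u root -p -e "DROP DATABASE neutron; CREATE DATABASE neutron;"

# 3. With midonet configured as the neutron plugin, rebuild the schema.
neutron-db-manage --config-file /etc/neutron/neutron.conf upgrade head
```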
      • hhuang
        I deleted zookeeper data, but haven't dropped and recreated the Neutron DB yet. going to do it now
      • will have to see how it goes. thanks a lot ryu_mido!
      • tomoe_ joined the channel
      • ryu_mido
        no problem, i'll be here in case you need more assistance!
      • hhuang
        hi ryu_mido, thank you very much. your instructions worked! now I'm able to create an instance without complaints from zookeeper
      • just would like to mention that zookeeper's configuration needs to be re-done after its data is deleted, before starting the service again. otherwise catalina will report connection errors to the other nodes.
      • the tomcat7 service also needs a restart
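The extra steps hhuang mentions likely stem from ZooKeeper's myid file living in the wiped data directory. A sketch, assuming a tomcat7-hosted midonet-api and an example node id of 1 (check your zoo.cfg for the real id):

```shell
# Restore the per-node id that rm -rf /var/lib/zookeeper/* deleted;
# it must match this server's entry in zoo.cfg.
echo 1 | sudo tee /var/lib/zookeeper/myid

# Restart ZooKeeper, then the tomcat7 instance serving midonet-api,
# so catalina can reconnect to the other nodes.
sudo service zookeeper restart
sudo service tomcat7 restart
```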
      • ryu_mido
        good to know, thanks! glad to hear that it worked
      • hhuang
        the commands I used can be found here: https://etherpad.mozilla.org/DnTQSd5Qdv
      • niku
        [ MoPad: DnTQSd5Qdv ] - etherpad.mozilla.org
      • ryu_mido
        this is great!
      • do you mind if I post this in one of our wiki pages?
      • hhuang
      • I also restarted all neutron services
      • I don't mind at all
      • let me know if you'd like me to provide more details about this migration
      • ryu_mido
        thanks, let me see where would be appropriate...
      • ok will do, thanks for trying this out!
      • dklepikov joined the channel
      • tfukushima has quit
      • tfukushima joined the channel
      • luqas joined the channel
      • hhuang: I simplified it a bit but copied over your etherpad notes to this page: http://wiki.midonet.org/ml2tomidonet
      • niku
        [ ml2tomidonet - MidoNet Wiki ] - wiki.midonet.org
      • ryu_mido
        let me know if you think something is missing
      • and thanks again for providing this!
      • hhuang
        that's great. good to know it was helpful. but all the neutron network objects like nets, subnets, and routers will have to be re-created after migration though.
      • ryu_mido
        hhuang, right, which is not good at all. I am going to propose something soon that allows us to re-create the NSDB from neutron at any time, so that the migration becomes easier.
      • the idea behind it is that the communication between neutron and NSDB should be durable. This allows NSDB to process all the tasks requested in Neutron API asynchronously.
      • and we can go further and add features that will let us 'flush' and 'import' data from neutron to NSDB (via API or specialized CLI)
      • hhuang
        that would be a good feature for those who use ml2 and would like to migrate to midonet
      • ryu_mido
        hhuang, yeah and it will also help with upgrades that involve data schema changes. handling it in zookeeper is not fun
      • i'll definitely add you to the reviewers list when I come up with the outline of how this tool works
      • well i'll just throw it out into the mailing list (midonet-dev)
      • hhuang
        last year's user survey shows ml2 was the most popular network driver apart from the openvswitch plugin, nova-network, and linux bridge: http://superuser.openstack.org/articles/opensta...
      • niku
        [ OpenStack User Survey Insights: November 2014 - Superuser ] - superuser.openstack.org
      • hhuang
        this will be an important feature to encourage people to switch to midonet
      • from ml2
      • ryu_mido
        yeah exactly, i'm very excited about it.
      • agabert joined the channel
      • agabert
        hhuang: hi, are you there?
      • hhuang: can you do midonet-cli> cleart
      • and then midonet-cli> router list
      • tfukushima has quit
      • hhuang
        got this output
      • router router0 name demo-router state up infilter chain0 outfilter chain1
      • router router1 name MidoNet Provider Router state up
      • devvesa joined the channel
      • agabert
        port port0 device router0 state up mac ac:ca:ba:38:d0:01 address 115.146.82.129 net 115.146.82.128/25 peer bridge0:port0
      • this is confusing me
      • it says that your fip range is 115.146.82.128/25
      • do you want to bind this port to the physical host?
      • tfukushima joined the channel
      • normally it goes like this:
      • 192.168.6.1 is a default gateway (we can use BGP routers also but i explain it with one default gw to keep it simple)
      • 192.168.6.103 is a port on the midonet provider router
      • i bind this port to a machine with the midonet agent running, the machine is called gw001
      • the machine has two nics, one is 192.168.5.103 for tunnel traffic and 192.168.6.103 will be the ip of the midonet provider router interface bound to the second nic of this machine
      • now i route 0.0.0.0/0 via next-port 192.168.6.103 via 192.168.6.1
      • the other machine 192.168.6.1 must have a static route to use 192.168.6.103 for reaching 200.200.200.0/24 (our fip range we use for internal testbeds)
      • what the midonet provider router will then do is route traffic from the fip network (in your case 115.146.82.128/25) to the default gateway sitting on 192.168.6.1 and back
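The L3 model just described can be recapped roughly as follows. The midonet-cli syntax is a best-effort sketch for the CLI of that era (names match the session pasted later: host2 = gw001, interface p3p1, uplink router0:port2) and should be checked against your version:

```shell
# Bind the provider router's uplink port to the gateway host's nic:
midonet-cli -e 'host host2 add binding interface p3p1 port router0:port2'

# Default route on the provider router, out through that port to the
# upstream gateway 192.168.6.1:
midonet-cli -e 'router router0 add route type normal weight 100 src 0.0.0.0/0 dst 0.0.0.0/0 gw 192.168.6.1 port router0:port2'

# On the upstream gateway (192.168.6.1), a static route sends the fip
# range (200.200.200.0/24 in agabert's testbed) back via the provider
# router's port:
ip route add 200.200.200.0/24 via 192.168.6.103
```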
      • in theory it is possible to also bridge this fip network
      • but i would have to check with the people who know more about midonet provider router port bridging to get you there
      • do you really want to bridge your floating ip network to the same physical network?
      • hhuang: you still there?
      • hhuang
        yes I'm still here
      • apuimedo joined the channel
      • agabert
        root@midonet_cli_os001:~# midonet-cli
      • midonet> cleart
      • tenant_id: None
      • midonet> router list
      • router router0 name MidoNet Provider Router state up
      • router router1 name ext-to-int state up infilter chain0 outfilter chain1
      • midonet> router router0 port list
      • port port0 device router0 state up mac ac:ca:ba:7f:44:b9 address 200.200.200.1 net 200.200.200.0/24 peer bridge0:port0
      • port port1 device router0 state up mac ac:ca:ba:15:3a:71 address 169.254.255.1 net 169.254.255.0/30 peer router1:port0
      • port port2 device router0 state up mac ac:ca:ba:47:d7:8c address 192.168.6.103 net 192.168.6.0/24
      • midonet> host list
      • host host0 name os004 alive true
      • host host1 name gw002 alive true
      • host host2 name gw001 alive true
      • host host3 name os003 alive true
      • host host4 name openstack_neutron_os001 alive true
      • host host5 name gw003 alive true
      • host host6 name os005 alive true
      • host host7 name os002 alive true
      • midonet> host host2 binding list
      • host host2 interface p3p1 port router0:port2
      • midonet> router router0 route list
      • route route0 type nor
      • oh
      • i use pastebin, hold on
      • this is our standard L3 networking model for the provider router
      • if you want me to work out a L2 solution for you please tell me and i will get the necessary steps to do it
      • JustinJose joined the channel
      • hhuang
        okay, in our case the machine has 2 nics: one is 172.26.11.135 for the instance tunnel network; the other doesn't have any IP address assigned to it, but all the traffic through this nic is supposed to use the subnet 115.146.82.128/25 with its network gateway 115.146.82.129
      • agabert
        ic
      • what is your floating ip range that you want to route?
      • the subnet you are describing to me is what i would call a border network
      • normally this is used to connect bgp routers to our provider router network interfaces
      • what 192.168.6.1 is in my design is 115.146.82.129 in your design
      • hhuang
        the floating ip range would be from 115.146.82.132 to 115.146.82.200
      • agabert
        so we are on par
      • good
      • set this up in neutron please
      • i will help you with the routing and port setup in midonet-cli
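The neutron-side setup agabert asks for can be sketched with the neutron CLI of that era, using hhuang's subnet, gateway, and floating ip range from the log; the network/subnet names are placeholders:

```shell
# External network backed by the MidoNet provider router:
neutron net-create ext-net --router:external True

# Subnet carrying the floating ip range 115.146.82.132-200, with the
# physical gateway 115.146.82.129 and no DHCP:
neutron subnet-create ext-net 115.146.82.128/25 \
    --name ext-subnet \
    --gateway 115.146.82.129 \
    --disable-dhcp \
    --allocation-pool start=115.146.82.132,end=115.146.82.200
```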
      • eart joined the channel