#magnetodb


      • openstackgerrit has quit
      • openstackgerrit joined the channel
      • ikhudoshyn__ joined the channel
      • ikhudoshyn_ has quit
      • -- BotBot disconnected, possible missing messages --
      • [o__o] joined the channel
      • ominakov joined the channel
      • jeromatron has quit
      • jeromatron joined the channel
      • miarmak has quit
      • miarmak joined the channel
      • ikhudoshyn_ has quit
      • denis_makogon joined the channel
      • achudnovets joined the channel
      • isviridov
        Hello everybody
      • What is going on?
      • aostapenko, I see good progress with negative error-case test coverage. Feel free to ask if any Qs.
      • achudnovets
        hi all! How are we going to integrate with keystone? Could we run into some kind of trouble using keystone to check the token if we have many requests per second?
      • aostapenko
        isviridov: Thank you, now I'm working on changing error handling architecture
      • isviridov
        aostapenko, in the future we will move our functional tests to the tempest repo; just note that to avoid extra work later, it is better to follow tempest standards now
      • achudnovets, we have ec2 authorization already https://blueprints.launchpad.net/magnetodb/+spe...
      • achudnovets, currently it works, but for the thousands of requests we are expecting, we need some caching. Otherwise we will kill keystone.
      • probably cache the token on the magnetodb side. What do you think?
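A token cache on the magnetodb side could be as simple as an in-memory map with a TTL. This is a hypothetical sketch (the class and field names are invented, not keystone middleware code):

```python
import time

class TokenCache:
    """Hypothetical in-memory token cache with a TTL.

    A hit returns the cached auth info without a round trip to
    keystone; an expired or missing entry forces re-validation,
    which also bounds how stale a revoked token can get.
    """

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._entries = {}  # token -> (auth_info, expires_at)

    def get(self, token):
        entry = self._entries.get(token)
        if entry is None:
            return None
        auth_info, expires_at = entry
        if time.time() >= expires_at:
            del self._entries[token]  # expired: caller must re-validate
            return None
        return auth_info

    def put(self, token, auth_info):
        self._entries[token] = (auth_info, time.time() + self.ttl)
```

The TTL is what limits the staleness risk raised later in the discussion: a token revoked in keystone lives at most `ttl_seconds` in the cache.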
      • achudnovets, BTW any progress with the Magneto API draft?
      • achuprin, around?
      • achudnovets
        yes, actually I have a question about it. What do you think is the best approach for attribute naming?
      • something like this: this_is_my_key -- keystone v3
      • or like this: userId -- nova
      • personally, I like keystone naming
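For illustration, here is the same hypothetical resource under the two conventions (the field names are invented, not taken from any draft):

```python
# keystone v3 style: snake_case attribute names
keystone_style = {"table_name": "users", "created_at": "2014-03-05"}

# nova style: camelCase attribute names
nova_style = {"tableName": "users", "createdAt": "2014-03-05"}
```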
      • isviridov
        Yeah, it is always a holy war. I prefer underscores. But is there any draft of the document to have a look at and discuss?
      • achudnovets
        I'll publish the draft in a couple of hours.
      • isviridov
        WOW. That is awesome; we need to have the first draft before the weekly meeting.
      • achudnovets
        isviridov, about token caching -- there is some kind of caching in keystone: http://docs.openstack.org/developer/keystone/co.... I think we can look into this if we have any troubles with keystone performance.
      • isviridov
        achudnovets, you mean caching on keystone side, right?
      • achudnovets
        yep, it's correct
      • I think it's better to cache any auth info on the keystone side, if we can
      • isviridov
        I don't think that helps. If we make an HTTP request to keystone for each MagnetoDB call, we will kill keystone even with caching. It will also increase response time because of the keystone interaction.
      • ikhudoshyn, around?
      • achudnovets
        I agree, but if we cache info on the magnetodb side we can face security issues.
      • isviridov
        What kind of issue?
      • achudnovets
        If some account info is changed in keystone, magnetodb can be unaware of it. It's just a theory, but...
      • isviridov
        How do other OS components handle it? As far as I know a token has an expiration time; is there any mechanism to invalidate a token, and do any of the OS services use it?
      • achudnovets
        As far as I know, other OS components check the token in keystone on every request.
      • If keystone is down, user can't do anything.
      • dukhlov
        achudnovets: agree
      • achudnovets
        Still, I think it's not an urgent question. Just a theoretical risk. But we should keep it in mind, I think.
      • isviridov, dukhlov: thanks for your answers!
      • isviridov
        achudnovets, probably you are right. Currently the OS services call keystone for each request. We have investigated it. I really expect that it is not the case for high-load services like Swift and Ceilometer.
      • achudnovets
        agree, thanks isviridov!
      • jeromatron has quit
      • isviridov
        achudnovets, actually ikhudoshyn worked on that part and he has more technical context right now
      • achudnovets
        good, thanks for the info. Another question: what is the best way to publish the API draft, and to make it easy for everyone to leave comments?
      • isviridov
        OpenStack wiki please
      • We will discuss it in the ML and here. We have to involve the OS community in that process.
      • achudnovets
        ok, thanks isviridov. I'll try it.
      • achuprin
        Hi MagnetoDB!
      • about CI
      • isviridov
        achuprin, hi
      • achuprin
        Now we are working on deploying a third-party CI environment in our lab for the MagnetoDB project.
      • isviridov
        achuprin, you are reading my thoughts somehow.
      • achuprin
        This environment will be integrated with Infra.
      • At the moment, we are waiting for our IT department to create a Service Account for integration with Infra and to open public access for the community.
      • isviridov
        Have you received compute resources already?
      • achuprin
        this environment will be used for running tests that depend on a Cassandra cluster.
      • yes, we have all the needed compute resources
      • isviridov
        achuprin, actually that is the current case: integration testing with C*. But in the future we will move all special checks and testing to our external CI
      • achuprin, could you please describe the whole picture and process in OpenStack wiki?
      • achuprin, also put the links there
      • ikhudoshyn joined the channel
      • ikhudoshyn
        hi guys, as for the possible slowdown because of keystone: when we switch to our own API we could just use PKI tokens
      • these do not require asking keystone to validate the token each time
      • achuprin
        isviridov, now we are planning the CI/CD architecture
      • isviridov
        achuprin, yeah. Let us share our views
      • #topic CI/CD
      • The main rule: make everything public and available to the community.
      • achuprin
        yes, of course
      • isviridov
        Sure, we have special needs like integration with C* or HBase or whatever, so we are introducing third party CI on Mirantis hardware. But publicly accessible.
      • Currently we have 2 such cases.
      • 1. running cassandra integration tests
      • 2. running tempest tests on an installation close to production. I mean a C* cluster, not just one node, and several API instances to cover concurrent-access cases
      • Later we will introduce performance and load testing phases, I think
      • Let us discuss each of them.
      • Case 1 is just a testtools-driven test, but with a dependency on C*; how to build it looks clear now.
      • We just have to describe it on the wiki
      • Should we proceed with case 2?
      • Case 2 consists of two steps
      • 2.1 run the magnetodb devstack integration tests, which depend on https://blueprints.launchpad.net/magnetodb/+spe...
      • 2.2 run functional tests on 3 nodes of C* and 2 nodes of API with
      • I'll try to describe it on the wiki. But we can always discuss it here
      • isviridov @ launch
      • SpyRay joined the channel
      • SpyRay
        hi everybody!
      • jeromatron joined the channel
      • dukhlov
        SpyRay: hi
      • ikhudoshyn
        guys, what's the proper way to install cassandra=2.0.2 on ubuntu?
      • found it
      • dukhlov
        ikhudoshyn: And what is it?
      • jeromatron
        there should be debian packages for each release fwiw...
      • ikhudoshyn
        yea, i found it already
      • isviridov
        SpyRay, hello
      • Nice nickname, Alexey
      • Any news about devstack integration?
      • miarmak
        hmm, this patch was uploaded at 3:50 PM (Kyiv), but it is not shown here... https://review.openstack.org/#/c/78184
      • iamaleksey
        by the way, anybody from Kyiv around here? doing nothing this evening, and maybe willing to talk Cassandra during some beers?
      • (C* committer here, leaving tomorrow)
      • isviridov
        iamaleksey, the whole team is in Kharkiv. Maybe next time. BTW what is your usual location?
      • iamaleksey
        not Ukraine (: But I will be back in 20 days or so, for a day ¯\_(ツ)_/¯
      • isviridov
        iamaleksey, Mirantis' headquarters is in Mountain View, CA. How far from there are you?
      • iamaleksey, somebody usually works on-site in the US. One of our cores flies to MV next week.
      • iamaleksey
        isviridov: about 6 months from here to me being back to CA, sorry
      • not US based
      • SpyRay
        isviridov, everything is fine! I want to ask: can a single node of cassandra be enough?
      • isviridov
        iamaleksey, sure. I believe we will meet each other some day
      • SpyRay, as a first stage, just for a development env, we can start with a single node because of resources. But as I know, there are scripts to run a cassandra cluster on a single VM.
      • From my experience, C* requires at least 2 GB of memory to work stably.
      • iamaleksey, as you are here: can you guide us on how to run a 3-node C* cluster with limited memory? The heap size setting helps it start, but after some time running it throws OOM
      • iamaleksey, I suppose such tuning is not expected, but if you have any hints
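One common knob here, for what it's worth: cap the JVM heap per node in conf/cassandra-env.sh. The values below are illustrative for a memory-constrained dev cluster, not recommendations:

```shell
# conf/cassandra-env.sh -- both variables must be set together,
# otherwise cassandra-env.sh rejects the override
MAX_HEAP_SIZE="256M"   # total JVM heap for this node
HEAP_NEWSIZE="64M"     # young generation, commonly ~1/4 of the heap
```

A small heap trades throughput for footprint; a node under real write load can still OOM, so this is only a dev/test setting.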
      • iamaleksey
        there is a blog post somewhere
      • people have run C* on an r pi, ffs, so it's certainly possible
      • isviridov
        Thx, that is a good start.
      • iamaleksey
        can't find it, sorry
      • I assume you are aware of https://github.com/pcmanus/ccm as well
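ccm drives exactly this kind of local multi-node setup; a typical session (the cluster name is arbitrary) might look like:

```shell
# create and start a local 3-node Cassandra 2.0.2 cluster
ccm create magnetodb_test -v 2.0.2 -n 3 -s
ccm status          # show each node and its state
ccm node1 ring      # inspect the token ring via node1
ccm remove          # stop and delete the cluster when done
```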
      • isviridov
        Yeah, we had a look at it but have not tried it yet. Thank you
      • iamaleksey
        that's what most C* committers use
      • dukhlov
        iamaleksey: I've asked one more question in #cassandra-dev.
      • I will duplicate it here:)
      • [16:07] <dukhlov> and one more question: the CQL3 docs say that a batch update executes atomically for each row. So, as far as I understood, if I have a batch with 2 insert statements with different PK values but the same hashed PK part (meaning the same partition id), then these 2 DML queries will be performed atomically (meaning the 2 insert queries run as a single atomic wide-row mutation)
      • [16:07] <dukhlov> Is it correct?
      • iamaleksey
        man, get your terminology in order
      • anyway, if both statements share the same partition key, they will be part of one single Mutation, and thus applied atomically and in isolation
      • *share the same partition key and the same column family
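iamaleksey's rule can be modeled in a few lines: statements batch into one Mutation per (column family, partition key) pair. This is an illustrative model of the grouping only, not driver code:

```python
from collections import defaultdict

def group_into_mutations(statements):
    """Model Cassandra's batching rule: statements that share a
    column family AND a partition key land in one Mutation, which
    is applied atomically and in isolation.

    Each statement is a (column_family, partition_key, row) tuple.
    """
    mutations = defaultdict(list)
    for column_family, partition_key, row in statements:
        mutations[(column_family, partition_key)].append(row)
    return dict(mutations)

batch = [
    ("t1", "pk1", {"id": "a"}),  # same partition key and CF ...
    ("t1", "pk1", {"id": "b"}),  # ... so: one atomic mutation
    ("t1", "pk2", {"id": "c"}),  # different partition key: separate mutation
]
```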
      • ikhudoshyn has quit
      • what are you using to access C*?
      • ikhudoshyn joined the channel
      • dukhlov
        hm, sorry I meant that PK is "primary key", not partition key
      • we use python-driver
      • with small patching
      • for gevent support
      • iamaleksey
        why not the java-driver?
      • dukhlov
        because our goal is to become a part of openstack
      • and openstack is written in python
      • iamaleksey
        ah. didn't know it was all in python
      • makes sense then
      • isviridov
        yeah, it is mostly python, and MagnetoDB will have a lot of integration with OpenStack