#django

      • nkuttler
        i'd bet youtube has pro accounts and an api
      • hylje
        eh the hard part of video processing websites is coming up with the $$$ to keep the lights on
      • nkuttler
        that being said, aws also offers video transcoding and such
      • hylje
        you can probably do the heavy lifting with a straightforward django celery setup
      • jessamyn_ has quit
      • nkuttler
        processing is just one part though. you also have to keep on top of file formats, device compatibilities, etc
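      • The "straightforward django celery setup" hylje mentions could be sketched like this; the task name, model-free signature, and ffmpeg flags are all illustrative assumptions, not anything from the channel:

        ```python
        # Hypothetical Celery task that shells out to ffmpeg for transcoding.
        # Falls back to a no-op decorator so the sketch runs without Celery installed.
        import subprocess

        try:
            from celery import shared_task
        except ImportError:
            def shared_task(fn):
                return fn

        def ffmpeg_command(source_path, target_path):
            # Build the argv separately so it can be inspected without running ffmpeg.
            return ["ffmpeg", "-i", source_path, "-c:v", "libx264",
                    "-preset", "fast", target_path]

        @shared_task
        def transcode_video(source_path, target_path):
            # Runs on a Celery worker, keeping the Django web processes responsive
            # while the heavy lifting happens elsewhere.
            subprocess.run(ffmpeg_command(source_path, target_path), check=True)
            return target_path
        ```

        A view would enqueue the work with `transcode_video.delay(src, dst)` and report progress to the user separately — which is where nkuttler's point about formats and device compatibility comes in.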
      • treats
        Okay, well thanks for the brainstorming. I have a lot of homework to do for this. Leaning toward hosting solution though.
      • mattt
        hi all, i'm using the sites framework to build a saas app of sorts, with an extra model for storing site-specific details (site admin email, etc.). one of the fields will store an API key for an external service, and i'm a bit on the fence about how that should be stored (whether it should be in the database at all, etc.)
      • does anyone have any suggestions?
      • Koterpillar
        mattt: think about the possible attacks on this _if the bad person already has access to your database_
      • mattt: it is possible that they can do worse things without the API key
      • mattt
        Koterpillar: not really, damage to my customers' external services would be the worst thing that could happen
      • Koterpillar
        can the customers see the API key in the interface?
      • mattt
        Koterpillar: well, i probably will mask it, but it will be manually entered by them at some point
      • Koterpillar
        entered != seen
      • mattt: an option is to encrypt just that field with the key somewhere else
      • hylje
        there's not much sense in fencing off parts of a public-facing server system into a higher-security zone
      • just make the entire thing secure
      • mattt
        obviously i could pass in keys via environment vars, but that becomes hard to manage as customers scale and the number of additional vars increases over time
      • Koterpillar: yeah, i did consider this ... for the sake of backups not containing readable API keys
      • adamchainz
        mattt: encrypted in the database with an external key sounds most sensible to me, just be ready with a re-encryption strategy for when you rotate keys. `cryptography` has a `MultiFernet` class that can use multiple keys for this purpose
      • mattt
        starting to doubt this architecture, maybe i should have had a model where i deploy a single instance for a customer, rather than a single instance using sites framework that hosts all customers :P
      • adamchainz: i never thought about the re-encryption strategy, but see how critical that will be, thank you for the info
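      • adamchainz's suggestion, sketched (the stored value is made up): `MultiFernet` decrypts with any key in its list but always encrypts with the first, which is exactly the rotation path described above — add the new key at the front, then re-encrypt stored tokens.

        ```python
        # Sketch: encrypting the per-site API key with keys kept outside the DB.
        from cryptography.fernet import Fernet, MultiFernet

        old_key = Fernet.generate_key()
        new_key = Fernet.generate_key()

        # Originally everything is encrypted under the old key.
        f_old = MultiFernet([Fernet(old_key)])
        token = f_old.encrypt(b"customer-api-key")

        # Rotation: new key goes first; old tokens still decrypt.
        f = MultiFernet([Fernet(new_key), Fernet(old_key)])
        assert f.decrypt(token) == b"customer-api-key"

        # rotate() re-encrypts a token under the new primary key,
        # so the old key can eventually be retired.
        fresh = f.rotate(token)
        assert f.decrypt(fresh) == b"customer-api-key"
        ```

        With this, database backups only ever contain ciphertext, and the keys live wherever the settings do.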
      • infinitesum
        Is there any performance benefit from gzipping DRF JSON responses, which are all 20kb max? The Chrome network console reports the download takes less than 1ms, which if true would obviously mean no, but that also seems vaguely impossible
      • In this case we're using JWT auth so breach attacks shouldn't be an issue
      • jo_
        infinitesum: Bigger responses might take more time on congested networks?
      • 20kb/s can happen if you're on a cellular network.
      • I'd say unless there's a reason you have to _not_ gzip the output (like supporting really old clients), it can't hurt.
      • infinitesum
        yeah, I'm just wondering what the right balance is, because obviously gzipping the response also takes a non-zero amount of time. So I'm wondering if there's a best practice like: if the response size is less than X, don't gzip
      • jo_
        Honestly, I'd wager that you'd spend more time sending the extra bits than you'd spend gzipping.
        If we consider the performance of the network layer, it's probably quicker to gzip 500 bytes of data than it is to send an extra 500 bytes of data.
      • It's not just network time, it's also the time spent waiting on the PCI bus and time waiting on write.
      • So anything we can do to make the application more CPU bound and less IO bound is good.
      • But I don't have the data to back that up. :V Just anecdotes.
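      • jo_'s hunch is easy to spot-check with the stdlib; the payload shape below is made up to land near infinitesum's ~20kb, and the numbers will vary by machine. (For what it's worth, Django's own `GZipMiddleware` skips responses shorter than 200 bytes.)

        ```python
        # Measure what gzip costs and saves for a ~20 kB JSON payload.
        import gzip
        import json
        import time

        payload = json.dumps(
            [{"id": i, "name": f"item-{i}", "active": True} for i in range(400)]
        ).encode()

        start = time.perf_counter()
        compressed = gzip.compress(payload)
        elapsed = time.perf_counter() - start

        # Repetitive JSON compresses very well, typically in well under a millisecond.
        print(f"{len(payload)} -> {len(compressed)} bytes in {elapsed * 1e3:.2f} ms")
        ```

        The savings of tens of kilobytes per response dwarfs the CPU cost on anything but a saturated server — exactly the trade jo_ describes.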
      • infinitesum
        sounds good, I'll try it on staging and see what happens :-)
      • adamchainz
        infinitesum: https://moz.com/learn/seo/page-speed says gzip anything >150 bytes which pretty much means everything
      • infinitesum
        adamchainz: Nice!
      • adamchainz
        idk how authoritative they are though
      • infinitesum
        Rand knows his stuff, I'd consider anything from SEO Moz to be pretty trustworthy, especially something like this where it's not going to be out of date
      • but that said I'll still measure what happens
      • wokopo
        Hi everyone, I am trying to build an n-level menu with django.
      • I already have the data in mongodb but I'm not sure how to render it
      • Koterpillar
        wokopo: recursion
      • wokopo
        Koterpillar but the idea is, using recursion send an object with all the data to the view?
      • Koterpillar
        wokopo: menu.html: {{ item.label }} {% for child in item.children %}{% include "menu.html" with item=child %}{% endfor %}
      • yes
      • wokopo
      • or go back and forward between the view and the backend?
      • oh you can use recursion like that? :O
      • wow
      • haha that's awesome
      • let me try that, thanks Koterpillar!!
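      • Koterpillar's template trick, restated in plain Python to show the idea (the dict shape with `label`/`children` is an assumption about wokopo's mongodb data): rendering a node means rendering its label, then "including" the same function for each child.

        ```python
        # Recursive menu rendering: the function calls itself for every child,
        # just as the template includes itself for every child item.
        def render_menu(item):
            html = f"<li>{item['label']}"
            children = item.get("children", [])
            if children:
                html += "<ul>" + "".join(render_menu(c) for c in children) + "</ul>"
            return html + "</li>"

        menu = {"label": "Home", "children": [
            {"label": "Docs", "children": [{"label": "API"}]},
            {"label": "About"},
        ]}

        print(render_menu(menu))
        ```

        The recursion bottoms out at nodes with no children, so arbitrary nesting depth works in a single render pass — no round trips between view and backend needed.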
      • morenoh149
        given model.created_at how do I change the created_at value and then model.save() ?
      • just the hour though
      • gopar has quit
      • best I got is model.created_at = model.created_at.replace(hour=18); model.save()
      • Koterpillar
        morenoh149: that's how, beware that that isn't always a valid thing to do
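      • Koterpillar's caveat is worth spelling out: `datetime.replace()` swaps fields with no timezone conversion, so with `USE_TZ = True` the stored value is UTC and `hour=18` means 18:00 UTC, not local time — and if the field were declared with `auto_now=True`, `save()` would overwrite the change anyway. A minimal stdlib illustration (the example timestamp is made up):

        ```python
        # replace() changes only the named fields; tzinfo is kept as-is,
        # with no conversion between zones.
        from datetime import datetime, timezone

        created_at = datetime(2017, 3, 1, 9, 30, tzinfo=timezone.utc)
        updated = created_at.replace(hour=18)

        assert updated.hour == 18
        assert updated.tzinfo is timezone.utc       # still UTC, not local time
        assert updated.minute == created_at.minute  # other fields untouched
        ```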