vlt: I'm storing data in variables because valid.capacity didn't work. Better than putting them in context?
vlt: *better than simply passing them
vlt: as individual variables
mattmcc
__marco: Probably want to put it in quotes.
vlt
kartik: That’s totally up to you.
marco-silva joined the channel
__marco
mattmcc, sure
shredding joined the channel
aossama has quit
__dj__ joined the channel
enttree joined the channel
Cyph0n has quit
aossama joined the channel
enttree has quit
jeanba joined the channel
__dj__ has quit
shredding has quit
mclee joined the channel
dean joined the channel
__marco has quit
aossama has quit
kartik has quit
aossama joined the channel
bkxd has quit
dean
Hey is there a way to use the striptags template filter but keep line breaks?
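[editor's note: Django's built-in `striptags` filter drops all markup, line-break tags included. One common workaround — a sketch, not a built-in — is to convert break-like tags to newlines before stripping; the tag list and filter name here are assumptions:]

```python
import re

# Sketch of a custom template filter: turn break-like tags into newlines
# first, then strip everything else. Which tags count as "breaks" is an
# assumption; extend the first pattern as needed.
def strip_tags_keep_breaks(html):
    # Replace <br>, </p>, </div> variants with a newline
    html = re.sub(r'(?i)<br\s*/?>|</p>|</div>', '\n', html)
    # Drop every remaining tag
    text = re.sub(r'<[^>]+>', '', html)
    # Collapse runs of blank lines left behind by nested markup
    return re.sub(r'\n{2,}', '\n\n', text).strip()

print(strip_tags_keep_breaks('<p>line one<br>line two</p><p>line three</p>'))
```

[in a real project you would register this with `@register.filter` in a templatetags module and use it in place of `striptags`.]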
bkxd joined the channel
aossama has quit
Ariel_Calzada joined the channel
aossama joined the channel
EyePulp joined the channel
Ergo joined the channel
Biwaa has quit
pytony joined the channel
Bafflermeal has quit
iiie has quit
iiie joined the channel
maks25
Any reason why my logger doesn’t catch errors inside of a celery delay method? If I make the method synchronous it catches it, but not when it’s a @shared_task
eperzhand joined the channel
ytsejam joined the channel
doismellburning
maks25: celery and logging -> fun ;)
maks25: check out CELERY_HIJACK_ROOT_LOGGER or similar
iirc there's all kinds of faff involved
hylje
yeah celery default logging is fucked
you can unfuck it but it's a little bit of magic incantations
celery does its own logging config that may not be what you want
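[editor's note: the usual "unfucking" is to own the logging config yourself and keep Celery from replacing it. `CELERY_HIJACK_ROOT_LOGGER` and the `setup_logging` signal are real Celery settings/hooks; the config below is a plain-stdlib sketch, with the logger name and file path as assumptions:]

```python
import logging
import logging.config

# Sketch: configure logging yourself and stop Celery from overriding it.
# In a Celery project you'd also set CELERY_HIJACK_ROOT_LOGGER = False
# (or connect a function like this to Celery's `setup_logging` signal)
# so the worker reuses this config instead of installing its own.
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "verbose": {"format": "%(asctime)s %(levelname)s %(name)s %(message)s"},
    },
    "handlers": {
        "tasks_file": {
            "class": "logging.FileHandler",
            "filename": "celery_tasks.log",   # assumed path
            "level": "WARNING",               # warnings and above only
            "formatter": "verbose",
        },
    },
    "loggers": {
        # the logger your @shared_task functions would use, e.g.
        # logger = logging.getLogger("myproject.tasks")
        "myproject.tasks": {"handlers": ["tasks_file"], "level": "WARNING"},
    },
}

logging.config.dictConfig(LOGGING)
log = logging.getLogger("myproject.tasks")
try:
    1 / 0
except ZeroDivisionError:
    log.exception("task failed")  # logs the full stack trace at ERROR level
```

[`log.exception(...)` inside an `except` block is what writes the traceback; a bare `log.warning(...)` does not include one unless you pass `exc_info=True`.]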
superB_ joined the channel
themsay joined the channel
kyheo joined the channel
kyheo has quit
mstenta joined the channel
maks25
hylje: I’m getting nowhere… all I want is to write the stack trace for level warning and above for celery tasks.
write to log file*
Landverus has quit
bkxd has quit
eperzhand has quit
dal220 has quit
raijin joined the channel
__marco joined the channel
bed666 joined the channel
enttree joined the channel
adwin_ joined the channel
enttree has quit
ycon_
Hi all, if I want to alter my DRF result with a function (eg using GroupBy)- do I do it in views or serializers? And do I have to use viewSets or something else?
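[editor's note: reshaping a response is usually done in the view (after serialization), not the serializer, and any view type works — a `ViewSet` is not required. A plain-Python sketch of the grouping step, assuming you'd call it from an overridden `list()` before returning the `Response`:]

```python
from itertools import groupby
from operator import itemgetter

# Sketch: group already-serialized rows by a field. In DRF you'd typically
# serialize as usual inside list(), then reshape the payload like this
# before returning Response(grouped). Field names here are assumptions.
def group_by_field(rows, field):
    rows = sorted(rows, key=itemgetter(field))  # groupby needs sorted input
    return {key: list(items) for key, items in groupby(rows, key=itemgetter(field))}

rows = [
    {"category": "a", "id": 1},
    {"category": "b", "id": 2},
    {"category": "a", "id": 3},
]
grouped = group_by_field(rows, "category")
```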
Biwaa joined the channel
superB_ has quit
sima joined the channel
superflat joined the channel
sima
hi all! I'm trying to upgrade django to 1.9 and hit a problem. It stems from django's system check framework and the fact that it imports the urlconf on startup. That happens during (or even before) the app registry being populated with models, and in our views we import models heavily
right now I'm not sure how to proceed: either disable all url-related checks, or move the model imports out of views to ... - there's no obvious place to move them, and I'm very much against doing all imports lazily, whether inside views or deferred to actual usage time
Biwaa has quit
maks25
omg I’m such an idiot, no wonder it wasn’t logging. Celery wasn’t running on my dev machine
superflat has quit
superflat joined the channel
rodolfojcj joined the channel
dal220 joined the channel
superflat has quit
Xard has quit
unit923 joined the channel
Xard joined the channel
unit923
hi all
can anyone tell me
if I've got a Foo.objects.all() in memory and I'm doing something with it (e.g. paginating over records etc.)
and then another process somewhere adds another record to that Model...
what happens to the collection i've got going...?
what seems to be happening is that it...gets reset or something?
Ryanar joined the channel
does anyone have any experience with this?
it's the only thing i can think of which explains my scenario
and i can't find anything online about it
dean has quit
raijin has quit
marco-silva
what do you mean it gets reset?
sima
unit923, it depends on where that new object falls in the result set. if the queryset has already put the results into its cache, or if ordering puts the new object somewhere near the beginning of your results - you won't see the new object
unit923
hmm ok, not seeing the new object is totally fine...
wait.. do you know what would happen if one of the ones in that set (which i'm paginating over) is *updated*?
sima
querysets are lazy. the first time you access results, the whole set is cached in memory (or fetched one-by-one if you use `.iterator()`). once the cache is filled, no query to the db happens on subsequent accesses
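[editor's note: a toy model of that caching contract — an illustration, not Django's actual implementation — showing why a row added by another process never appears in an already-evaluated queryset:]

```python
# Toy stand-in for a QuerySet's result cache: the "query" runs once, on
# first full iteration, and later iterations reuse the cached rows even
# if the underlying data changes.
class LazyCachedQuery:
    def __init__(self, fetch):
        self._fetch = fetch        # callable standing in for the DB query
        self._result_cache = None

    def __iter__(self):
        if self._result_cache is None:
            self._result_cache = list(self._fetch())  # hit the "DB" once
        return iter(self._result_cache)

    def iterator(self):
        # Like QuerySet.iterator(): streams fresh results, fills no cache
        return iter(self._fetch())

db_rows = [1, 2, 3]
qs = LazyCachedQuery(lambda: db_rows)
first = list(qs)             # evaluates and caches
db_rows.append(4)            # another process inserts a row
second = list(qs)            # cache reused: new row invisible
fresh = list(qs.iterator())  # bypasses the cache: new row visible
```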
marco-silva
what would happen to what?
unit923
to the whole set
sima
unit923, nothing would happen if the object is already in memory
unit923
so i'm doing for foo in Paginator(Foo.objects.all(), batch_size)
in a management command
sol1x joined the channel
and quite regularly it just completely stops processing the loop, with no error given...
however, the website part of the project is regularly adding/updating records to the model which is being paginated over
so i was wondering if somehow this was the cause of the Pagination just..."stopping"...
sima
unit923, maybe the operation exhausted all memory? and it's just starved for resources, which you observe as it "stops processing the loop"
unit923
but it continues to the rest of the management command, it doesn't crash or throw memory exceptions
sima
aaah, then there are just no results
unit923
and they're in small batches of 20 records
ahh ok..
how is that possible though
because it seems to fail in the middle of the batch
sima
I've faced a similar problem recently. But I was doing a filter, and changing data in the database while the filtering ran. that resulted in leaps in processing
try adding a stable ordering, like `order_by('id')`, and verify you don't change the data you're filtering against
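[editor's note: the robust version of that advice is keyset pagination — iterate in a stable id order and track the last id seen, so concurrent inserts can't shift rows between pages. A plain-Python sketch over a list of dicts; with the ORM this would be roughly `Foo.objects.filter(id__gt=last_id).order_by('id')[:batch_size]`:]

```python
# Keyset pagination sketch: each "query" asks for rows after the last id
# already processed, in id order, so a concurrent insert mid-run is picked
# up cleanly instead of shifting offset-based pages underneath the loop.
def fetch_batch(rows, last_id, batch_size):
    page = sorted((r for r in rows if r["id"] > last_id), key=lambda r: r["id"])
    return page[:batch_size]

rows = [{"id": i} for i in range(1, 8)]
seen, last_id = [], 0
while True:
    batch = fetch_batch(rows, last_id, batch_size=3)
    if not batch:
        break
    seen.extend(r["id"] for r in batch)
    last_id = batch[-1]["id"]
    if last_id == 3:
        rows.append({"id": 8})  # simulate a concurrent insert mid-run
```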