-
bmonty
_smf_: that won’t work, look a little further down at the call to vm
-
ultimatt: that patch is the same as what I added
-
but the load will still fail with only that change
-
msimerson’s change to the plugin paths is better
-
ultimatt
actually, ./node_modules should be checked *after* it doesn't exist in ./plugins or env.HARAKA/plugins
-
ultimatt updates PR
-
bmonty
is there a way to create the vm without feeding it code read from a file?
-
_smf_
ultimatt: isn't loading the code from node_modules first a bit wrong? Shouldn't it try to load from plugins/ first?
-
bmonty
if not, then read-package-json would be required
-
ultimatt
-
_smf_, ordering fixed
-
bmonty
ultimatt == msimerson ?
-
ultimatt
bmonty, yes
-
_smf_
Isn't process.env.HARAKA always set?
-
e.g. if you run node haraka.js - it will get set to the directory of haraka.js and if haraka -i is called, then it will get set to that.
-
So some of that looks redundant.
-
bmonty: you could try this:
-
ultimatt
right, but I think the if clauses are for the tests
-
_smf_
Change: var code = '"use strict";' + rf;
-
to
-
var code = '"use strict"\nrequire(\'./' + fp[i] + '\');';
-
Would that work?
-
bmonty
I like that idea…
-
_smf_
Not sure it's going to work though.
-
Ah;
-
You might need:
-
exports = require('./' ....
-
That should work I think.
-
Anyway - I haven't really got time to play with this; so I'll let you both carry on.
-
bmonty
ultimatt: here’s another attempt, I like this one better
-
-
ultimatt
I like that better too.
-
this part isn't quite right though:
-
if (process.env.HARAKA) {
-
paths.unshift(path.join(process.env.HARAKA, 'plugins'));
-
+ paths.unshift(path.join(process.env.HARAKA, 'node_modules'));
-
}
-
I'm working on that though (I'm updating the test for it now)
-
bmonty
I think your patch can cover that
-
ultimatt
yeah, once I get it right.
-
bmonty
you’re saying it’s not correct because of the order, right?
-
ultimatt
correct
-
1. node.env.HARAKA_PLUGIN_PATH (if defined)
-
2. __dir/plugins
-
3. node.env.HARAKA/plugins
-
4. node.env.HARAKA/node_modules
-
bmonty
cool
-
ultimatt
that's the order we want
-
_smf_
Still
-
rf = fs.readFileSync(fp[i]);
-
ultimatt
and we can add the check for package.json earlier, rather than waiting for an exception
-
_smf_
could be rf = require(fp[i]) and all that logic could disappear entirely?
-
bmonty
yeah, I see that
-
_smf_
As that section is only determining which path actually works.
-
And the require() will throw if it doesn't find anything.
-
And it will correctly follow package.json etc.
-
bmonty
_smf_: I don’t think that will work because the call to vm.runInNewContext expects javascript code, not an object
-
_smf_
Sure;
-
bmonty
In this way vm.runInThisContext is much like an indirect eval call, e.g. (0,eval)('code').
-
_smf_
but if that code = "exports = require(path_we_determined_above)"
-
bmonty
^ from the node API docs
-
_smf_
Then you reduce a lot of that complexity.
-
As that for loop is merely determining which path we're going to use.
-
bmonty
also, the require call you’re suggesting would load the plugin in haraka’s main context
-
I think there is an intent to do some separation of the plugins from the main context
-
_smf_
Yeah - that is true.
-
The plugin could do something nasty to global or a builtin
-
bmonty
so separating the plugins is what is creating the complexity
-
_smf_
What about doing that try {} catch {} of the file paths inside the 'code' that we pass to vm.runInNewContext?
-
e.g. determine what to load inside that
-
bmonty
I think that would be better
-
since existsSync is going to be deprecated
-
_smf_
As we could pass 'fp' to the sandbox.
-
bmonty
I think we could get rid of custom_require also
-
I dunno
-
finding the right path to use is the hard part
-
_smf_
e.g. code = '"use strict";\nfor (var i=0; i<fp.length; i++) {\ntry { exports = require(fp[i]); }\ncatch (err) { ... }\n}\n// throw if we couldn't load anything...';
-
I don't think you can get rid of the custom_require
-
That might cause some issue with the require of fp[i] though. We might need to sanitize the paths to what custom_require expects.
-
Actually - I'm being a wally. Move the fp[i] logic inside custom_require instead.
-
bmonty
that might work
-
_smf_
Then just do the require inside the vm 'code' and let it throw if it can't find anything.
-
ultimatt
-
updated
-
with updated tests
-
and I added a check for package.json
-
bmonty
I think you still need to update _load_and_compile_plugin
-
ultimatt
certainly
-
but now it can happen based on a filename match, instead of a try {} catch
-
right?
-
bmonty
yes, I think so
-
you’re just reading in package.json
-
ultimatt
that's all I was trying to do.
-
My only goal for that PR is to add node_modules to the list of dirs, in the correct order.
-
bmonty
I could add a check for package.json and do the same as I did in the catch block
-
ultimatt
to make your upcoming PR simpler
-
adding in package.json was an afterthought, to let you handle that with a /package.json$/.test(name) check
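A filename-based check along those lines could be as simple as the sketch below (the helper name is hypothetical; the regex is the one mentioned above):

```javascript
// Sketch: decide the load strategy from the resolved filename up front,
// instead of waiting for an exception. A hit on /package.json$/ means an
// npm-style module; anything else is a plain plugin file.
function plugin_load_strategy(file_name) {
    return /package\.json$/.test(file_name) ? 'package' : 'file';
}
```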
-
bmonty
yeah, I see where you’re going
-
ultimatt
does that help you then?
-
bmonty
I think so
-
let me hack on it for a bit
-
ultimatt
k, let me know.
-
technick joined the channel
-
bmonty
one issue with your patch, the last path pushed in uses __dirname instead of process.env.HARAKA
-
technick
howdy all.. was curious if anyone has seen segfault errors on 2.6.1 like this
-
kernel: node[18137]: segfault at 0 ip 0000000000a3897f sp 00007fff144b9600 error 4 in node[400000+c30000]
-
ultimatt
right
-
b/c process.env.HARAKA is the haraka config directory
-
not the install directory
-
and the node_modules dir is in the install dir
-
bmonty
ok, I’m misunderstanding then
-
I want to load from the node_modules in the install dir
-
I thought that was #4?
-
ultimatt
hmmm, maybe it should be both...
-
but then, that could introduce accidental behavior.
-
I think you're right
-
bmonty
in your PR, you could change “node.env.HARAKA” to “process.env.HARAKA”
-
and I think your patch implements that once you change the last line to use process.env.HARAKA instead of __dirname
-
ultimatt
where do you find node.env?
-
_smf_
technick: no; did you compile node yourself?
-
technick
yes
-
bmonty
-
ultimatt
oh, that's only in the comment
-
it's correct in the code
-
I'll fix the comment though
-
_smf_
technick: which version of node?
-
technick
built node from source v0.12.5
-
_smf_
Ok - were you running a different version before that?
-
technick
nope, fresh install
-
_smf_
no idea then
-
technick
i'm not a js guy, so i'm kinda lost. Any suggestions on how to diagnose this?
-
_smf_
Well - a segfault isn't a 'js' thing.
-
You're hitting a bug in node that is causing the segfault.
-
As you compiled node yourself, you could probably do whatever you need to your OS to enable core dumps
-
Get it to crash again
-
technick
alright, i'll enable coredumps =)
-
its running on a centos 7 machine
-
_smf_
Then run 'gdb /path/to/node /path/to/core' and type 'bt' at the prompt.
-
It will output a load of symbols.
-
You'll have to then raise an issue with the node folks and provide them the backtrace.
-
If the bug is in node core; hopefully they'll fix it.
-
If it's in a node native module; then you'll need to take it up with the module author
-
technick
fun.. thanks for the info, you rock
-
_smf_
np - yw
-
fwiw - I've not seen any core dump with Haraka unless I've upgraded node and forgotten to rebuild the bindings.
-
bmonty
-
_smf_
e.g. cd /usr/lib/node_modules/Haraka && npm install
-
technick
This seemed to start faulting at around 60k messages an hour
-
ultimatt
-
ultimatt looks at bmonty's diff