Issue305

Title: make new-scripts work on Mac OS
Priority: wish
Status: resolved
Superseder:
Nosy List: jendrik, malte, mkatz
Assigned To:
Keywords:
Optional summary:

Created on 2011-11-24.14:45:04 by malte, last changed by jendrik.

Messages
msg2469 Author: jendrik Date: 2013-05-21.21:13:03
I added an issue at 
https://bitbucket.org/jendrikseipp/lab/issue/3/os-x-support. 
Let's continue the discussion there.
msg2468 Author: malte Date: 2013-05-21.19:34:52
I think lab has a separate tracker; Jendrik will know.
msg2467 Author: mkatz Date: 2013-05-21.19:29:26
It would be great then if lab could actually support Mac OS. Should we open 
another issue for that?
msg2466 Author: malte Date: 2013-05-21.18:57:33
Hi Michael,

be warned that the new-scripts are being phased out. I think Jendrik would
rather delete them from the repository today than tomorrow. ;-)

If you still have a use case for them, maybe Jendrik can help you migrate to
another solution.
msg2465 Author: mkatz Date: 2013-05-21.16:42:22
I do use new-scripts for relatively small local experiments, but I am okay with
the workaround.
msg2457 Author: jendrik Date: 2013-05-21.00:21:46
Yes, I think this can be closed. Mac OS support in lab is by no means a priority, 
but if there's enough demand we can think about adding it.
msg2455 Author: malte Date: 2013-05-20.17:10:28
We don't really use the new-scripts any more, so if anything, I guess this
should be a feature request for lab. Should we close this one?
msg1976 Author: malte Date: 2011-11-24.14:45:03
Currently the new-scripts won't work on Mac OS X because the process group code
uses the /proc file system. It's possible to work around this, but at the cost
of less complete tracking of the time and memory limits. A correct solution
should implement the process group functionality properly for Mac OS.
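As a minimal sketch of the kind of /proc-based accounting involved (the helper
below is illustrative, not the actual new-scripts code; the field name VmPeak
comes from Linux's /proc/<pid>/status format):

```python
import os

def peak_memory_kb(pid):
    """Return a process's peak virtual memory (VmPeak) in kB from /proc.

    Linux-only: Mac OS has no /proc file system, so this is exactly the
    part that would need a platform-specific replacement there.
    """
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmPeak:"):
                return int(line.split()[1])  # value is reported in kB
    return None

# For a whole process group, one would sum this over all member PIDs.
print(peak_memory_kb(os.getpid()))
```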

Pasted from an email:
===========================================================================
Ah, the processgroup stuff. That was something I developed for the IPC
whose purpose was to enforce overall time and memory limits also for
planners that spawn off various processes, since we had to make
absolutely sure that we don't end up swapping on the machines of the
Barcelona cluster. (The problem is that you can enforce limits only per
process, so if you set a ulimit of 2 GB, the planner can then fork off
ten processes which together use 20 GB of memory.)
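The parenthetical point can be demonstrated directly: resource limits are
per-process, and every forked child gets its own, separately accounted copy.
A small sketch (the 2 GiB figure mirrors the email and is an example value
only):

```python
import os
import resource

# Example cap: 2 GiB of address space, as in the email.
LIMIT = 2 * 1024 ** 3

soft, hard = resource.getrlimit(resource.RLIMIT_AS)
if hard == resource.RLIM_INFINITY or hard >= LIMIT:
    # Lower only the soft limit; the hard limit stays untouched.
    resource.setrlimit(resource.RLIMIT_AS, (LIMIT, hard))
soft_after = resource.getrlimit(resource.RLIMIT_AS)[0]

pid = os.fork()
if pid == 0:
    # The child inherits the same cap, but its memory is accounted
    # separately: n children may together use n times the limit.
    child_soft = resource.getrlimit(resource.RLIMIT_AS)[0]
    os._exit(0 if child_soft == soft_after else 1)
else:
    _, status = os.waitpid(pid, 0)
    print("child inherited the per-process limit:",
          os.WEXITSTATUS(status) == 0)
    # prints: child inherited the per-process limit: True
```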

Unless you added any features that cause forks, this stuff is basically
unnecessary in Fast Downward. If you drop it, you lose some
memory-usage-tracking info in the driver.log file, but that is not
usually parsed anyway.

Try the following:

In new-scripts/calls/call.py, change the line

    def wait(self):

to

    def this_method_is_unused(self):

so that it will use the wait method of the base class instead. This will
make the enforcement of time and memory limits less strict, but for Fast
Downward it should still work, at least as long as you don't run a
portfolio configuration.

===========================================================================
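The renaming trick in the email works through ordinary Python method
resolution: once the subclass's `wait` is renamed, calls to `wait` fall
through to the base class. A sketch with stand-in class names (these are not
the actual new-scripts classes):

```python
class BaseCall:
    """Stand-in for the base class whose wait() does plain waiting."""
    def wait(self):
        return "base wait: no strict limit enforcement"

class Call(BaseCall):
    """Stand-in for the Call class in new-scripts/calls/call.py."""
    # Renamed from 'wait', so it no longer overrides the base method.
    def this_method_is_unused(self):
        return "derived wait: strict limits via /proc"

print(Call().wait())  # prints: base wait: no strict limit enforcement
```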
History
Date                 User     Action
2013-05-21 21:13:03  jendrik  set status: chatting -> resolved; messages: + msg2469
2013-05-21 19:34:52  malte    set messages: + msg2468
2013-05-21 19:29:26  mkatz    set status: resolved -> chatting; messages: + msg2467
2013-05-21 18:57:33  malte    set status: chatting -> resolved; messages: + msg2466
2013-05-21 16:42:22  mkatz    set status: resolved -> chatting; messages: + msg2465
2013-05-21 00:21:46  jendrik  set status: chatting -> resolved; messages: + msg2457
2013-05-20 17:10:28  malte    set messages: + msg2455
2013-05-15 20:04:59  jendrik  set nosy: + jendrik
2011-11-24 15:07:00  mkatz    set nosy: + mkatz
2011-11-24 14:45:04  malte    create