Hmm, Working Thread Size

Message boards : NFS Discussion : Hmm, Working Thread Size

Pappa

Joined: 22 Oct 09
Posts: 3
Credit: 83,814
RAC: 0
Message 295 - Posted: 21 Dec 2009, 4:31:39 UTC
Last modified: 21 Dec 2009, 4:33:01 UTC

I am finding that, while supporting NFS, growth that is not publicly announced is becoming more than my computer(s) and BOINC can handle. The current working set size is 500 MB, so a dual core needs 1 GB... Does that mean you are attempting to exclude most of the general Internet population? Most do not have the required resources to give up without notice or reason.

Thank you for keeping us abreast of your growth and changes.

I am setting No New Tasks.

Regards
ID: 295
Greg
Project administrator

Joined: 26 Jun 08
Posts: 640
Credit: 437,680,888
RAC: 204,288
Message 296 - Posted: 21 Dec 2009, 5:03:58 UTC - in response to Message 295.  

Yes, this project does require more memory than most. See this sticky thread for more discussion concerning the required memory.
ID: 296
Pappa

Joined: 22 Oct 09
Posts: 3
Credit: 83,814
RAC: 0
Message 336 - Posted: 21 Dec 2009, 16:36:40 UTC - in response to Message 296.  

Thank you, Greg.

I had deselected "lasievef". When I look in init_data.xml for lasievee, it defines <rsc_memory_bound>500000000.000000</rsc_memory_bound>, which automatically blocks out 1 GB of memory on the dual core (XP). After observing for a while (on my three machines), the largest Peak Mem Usage was 389 MB. So if the larger problems fit inside 400 MB, why not set a more realistic working set size?
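
(As an illustrative aside: a short Python sketch, not anything official from the project, that reads rsc_memory_bound out of a BOINC slot's init_data.xml so you can check the figure yourself; the path and the printed value are just examples.)

    # Read the memory bound BOINC was handed for a task from the slot's
    # init_data.xml. Illustrative only; point it at your own slot directory.
    import xml.etree.ElementTree as ET

    def memory_bound_mb(init_data_path):
        """Return rsc_memory_bound from init_data.xml, in megabytes."""
        root = ET.parse(init_data_path).getroot()
        return float(root.findtext(".//rsc_memory_bound")) / 1e6

    print(memory_bound_mb("init_data.xml"))  # 500.0 for the lasievee tasks described above
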
With the current project settings, that matters: a lot of dual-core machines (XP or Vista) came off the showroom floor with 2 GB or less, and with the OS and the crap loaded at the factory, the project effectively cripples them (running in the swap file) on startup or after having finished a WU or two.

Regards
ID: 336
Greg
Project administrator

Joined: 26 Jun 08
Posts: 640
Credit: 437,680,888
RAC: 204,288
Message 337 - Posted: 21 Dec 2009, 19:21:26 UTC - in response to Message 336.  

rsc_memory_bound is merely the amount of memory that BOINC thinks must be free before it will start the workunit. For all running workunits, the sum of the rsc_memory_bounds of those workunits must be no more than n% of the total memory of the computer, where n is set in the client preferences. This memory is not allocated or reserved by BOINC. The application uses only the memory that it actually needs, not the entire 500 megabytes.
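
(A simplified sketch of that rule in Python, just to make it concrete; this is not BOINC's actual scheduler code, and the 2 GB / 50% figures are examples only.)

    # Simplified model of the admission rule described above: a task may start
    # only if the rsc_memory_bound values of all running tasks, plus its own,
    # fit within the allowed fraction of total RAM.
    def can_start_task(new_bound, running_bounds, total_ram, max_used_frac):
        budget = total_ram * max_used_frac
        return sum(running_bounds) + new_bound <= budget

    # Example: 2 GB machine, BOINC allowed 50% of memory, 500 MB bound per task.
    ram, bound = 2_000_000_000, 500_000_000
    print(can_start_task(bound, [], ram, 0.5))           # True:  one task fits
    print(can_start_task(bound, [bound], ram, 0.5))      # True:  two bounds exactly fill the 1 GB budget
    print(can_start_task(bound, [bound] * 2, ram, 0.5))  # False: a third would exceed it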

Some users have set the BOINC client to allow it to use 95% or 100% of the total memory. Because of other running programs, if BOINC actually did this the computer would slow to a crawl. Most projects use little memory so this limit is rarely tested. NFS@Home, however, will use that much memory if allowed. I've therefore set rsc_memory_bound a little high to instruct BOINC to leave a bit of room for other programs in these cases. If there are a lot of other programs running, though, the computer will still significantly slow down. If memory is tight, the proper fix is to change the BOINC client settings to only allow BOINC to use about 75%-80% of memory while the computer is idle and 40%-50% (or less) while the computer is in use.
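
(And a rough illustration of what those percentages mean on a typical 2 GB dual-core machine; the numbers are examples, not additional recommendations.)

    # How many tasks with a 500 MB rsc_memory_bound fit under different memory
    # limits on a 2 GB, 2-core machine. Illustrative arithmetic only.
    total_ram = 2_000_000_000
    bound = 500_000_000
    cores = 2

    for label, frac in [("idle, 80%", 0.80), ("in use, 50%", 0.50), ("in use, 40%", 0.40)]:
        budget = total_ram * frac
        tasks = min(cores, int(budget // bound))
        print(f"{label}: budget {budget / 1e6:.0f} MB -> {tasks} task(s)")
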
ID: 337
Pappa

Joined: 22 Oct 09
Posts: 3
Credit: 83,814
RAC: 0
Message 340 - Posted: 23 Dec 2009, 3:44:47 UTC - in response to Message 337.  
Last modified: 23 Dec 2009, 3:45:07 UTC

Twice now, something has happened that prevented me from typing this message.

The reason I noticed the issue: when I got home, I had to power off this machine to get control back. When I restarted it, I noticed it had been producing ERRORS (NFS does not like running in the swap/page file).

So rather than trashing work for a project that I do not own, it becomes a matter of looking for something I can leave moderately unattended without worrying about wasting everyone's resources and time. In fairness, during the run of the less intense WUs my three multi-project machines had no issues. With the last batch, two machines produced errors. Sorry... I would suggest that lasievee has a memory leak... I am sure that you can figure that one out.

Regards
ID: 340
