Posts by bdodson*

41) Message boards : NFS Discussion : Very long post processing (Message 249)
Posted 26 Nov 2009 by bdodson*
Post:
Greg, who is doing most of the post processing? What type of machines/cluster are being used? Just curious.


Greg's always done most of the post processing. Only the
intensive sparse matrix calculation has been farmed out on
recent numbers. One of my other friends reports having done
one of the recent November matrices, and is waiting for R269
(a number of larger "difficulty", with a matrix that will
take longer). I'd be interested to hear more, as well, but
am not sure how soon Greg will get back online, given the
local holiday(s). -Bruce
42) Message boards : NFS Discussion : 3,538+ factors (Message 218)
Posted 6 Nov 2009 by bdodson*
Post:
The 177-digit number we factored was the last composite factor, the
other known factors all being primes; so now this last piece has been
reduced to primes, and we're done with this number. The one we're
going on to is the next one on the "Status of Numbers" list, ordered by
completion date, 2,853+.

Hope this clarifies matters. -bdodson
43) Message boards : Questions/Problems/Bugs : Project recently started using too much memory? (Message 216)
Posted 6 Nov 2009 by bdodson*
Post:
...select "If no work for selected applications is available, accept work from other applications" so if lasievef does run out of work, then your computer won't be idle.


Done, thanks. Just to be clear, I'm still banning the 16e siever
("lasievef") from Linux machines with 1 GB/core or less of memory, as
well as from the Windows machines. Task Manager reports 356,172K for
15e ("lasievee"), but there's no reason to risk having BOINC get
in the way. -bdodson
44) Message boards : Questions/Problems/Bugs : Project recently started using too much memory? (Message 214)
Posted 6 Nov 2009 by bdodson*
Post:
... BOINC allows you to designate profiles called home, school, and work. You can, say, call the computers without sufficient memory "school" computers by viewing the computers on your account, selecting details for the low memory computers, then changing location near the bottom. Then, in your NFS@Home preferences you can add separate preferences for school and disable lasievef ...


On our Xeon cluster(s) the memory use for lasievef on the new
target R269 is nowhere near 1 GB:
19  708m 330m  724 R 100.1  0.7   4:51.67 lasievef_1.07_x   
19  627m 322m  720 R 100.1  0.7   4:45.84 lasievee_1.07_x    

which shows lowest priority 19, then virtual memory, then
RAM used. I'm switching these from "school" to "work", with
the latter profile _only_ accepting lasievef tasks. If
sufficiently many other users/computers are set like this,
Greg may have to monitor the available tasks (in the server
settings) to make sure that we don't run out of lasievef tasks.
This hasn't been a problem so far, since few people select the
"large" or "huge" setting.

I'm especially interested in the factors of R269, a number
with larger "difficulty" (on the "Status of Numbers" page).
These numbers get more extensive ECM pretests, and keeping
numbers requiring the 16e siever feasible gives us a larger
pool of interesting candidates. -bdodson
45) Message boards : NFS Discussion : 1M credits/day (Message 178)
Posted 24 Oct 2009 by bdodson*
Post:
I mean overall. We all together are at 727k credits per day.


I started from Stats & Leaders, clicked on BOINC Stats, clicked to
get the full list (we're not in the top15; is that a plausible
objective? We'd certainly raise our profile ...). Then I
clicked on the column for "last day". The NFS@Home line is
still saying 1.13M, with a last XML update of 5 hours ago
(8 GMT). Where should I be looking for the 727k? -bdodson

PS -- The top15 in question is by "most active", which
appears to mean the "last day" column. If I'm reading
correctly, we're in 16th, just a bit behind cosmology
and "spinhenge".
46) Message boards : NFS Discussion : 1M credits/day (Message 167)
Posted 24 Oct 2009 by bdodson*
Post:
Can we manage 1M credits per day?

Come on guys, let's rock and roll.

Carlos


You mean like this?

NFS@Home 702 +15 1,962 +40 129 +2 51 0 18,315,333 +1,130,532
02:33:02 old

(looks like 1.13M to me!) -bd
47) Message boards : NFS Discussion : Computer Lockup's (Message 163)
Posted 24 Oct 2009 by bdodson*
Post:
...
PS: Problem is you can't get work for the Low Memory one, probably because that's what most people want.


Our problem is that (1) there are very few Cunningham numbers
for which the low memory siever is the correct choice; and
(2) projects for numbers that size would finish very quickly,
just 2-3 days, which is already difficult to manage for the
medium-sized siever (as illustrated by the 59-digit ECM factor
in the most recent reservations). I could be wrong, but I
don't believe that any of our numbers have used the small-memory
siever.

I've recently been sieving a bunch of numbers with difficulty
240.0-249.99 on our x86-64 clusters (distributed under Condor)
--- they also use the medium-sized siever. The NFS@Home numbers
all have difficulty 250.0-259.99, still in the range where the
medium-sized siever is best. I did one number with the large
siever which had difficulty 269, but I'm not sure where the
crossover is (I'm not sure that we know). I've only done one
number with the low-memory siever, which had difficulty in the 220s.
One of our friends is expert at finding interest in those small-sized
numbers (as well as providing the assembly code settings used for the
large-memory siever, which is new). I'll check to see whether there
are one or two that we could convince Greg would be worth the overhead.
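
To summarize those ranges in one place, here is a toy rule of thumb;
the cut-offs are my own guesses (the medium/large crossover in
particular is unknown, as noted above), and the low-memory siever is
named generically:

def pick_siever(difficulty):
    # Toy mapping from SNFS difficulty to siever choice, using the
    # approximate ranges described in this thread.
    if difficulty < 230:
        return "low-memory siever"       # one number done, in the 220s
    elif difficulty < 260:
        return "lasievee (15e, medium)"  # 240.0-259.99: the usual choice
    else:
        return "lasievef (16e, large)"   # 269 worked; crossover unclear

print(pick_siever(259.0))  # -> lasievee (15e, medium)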

Sorry to hear that the new tasks crashed your machine. We had some
discussions with the leader of yoyo@home, who assured us that
BOINC would be able to steer larger-memory jobs away from machines
on which they'd cause a problem. Sounds like we weren't careful
enough.

Regards, bdodson
48) Message boards : Questions/Problems/Bugs : More info on "Status of numbers" page (Message 154)
Posted 22 Oct 2009 by bdodson*
Post:
Good idea. We are currently getting about one result per week, and the date will help monitor that rate.


Already done! You're fast :)

Thanks for the update.


There still seems to be space on the status page; perhaps we
could have the sizes of the prime factors? One of the things
SNFS factorization is good for is that it checks the performance
of ECM pretesting (it's hardly ever ECM "factoring", in this range).
After too many factorizations where the smallest prime factor was out
of ECM range, it's hard to keep pushing up the effort, although
an "ECM miss" (a 53-digit prime factor, or a 54-digit prime factor,
after sufficient testing to remove 55-digit primes to 80%) is less
expensive here than a miss in GNFS. And yes, Greg's fast. -bdodson
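
For the curious, "to 80%" can be unpacked with the standard heuristic:
each curve at the optimal B1 finds a factor of the target size with
probability about 1/E, where E is the expected curve count, so the
chance of missing after k curves is roughly exp(-k/E). A quick sketch,
where the value of E is my assumption (in the spirit of the GMP-ECM
tables), not a project figure:

import math

# Curves needed for an 80% chance of removing a p55: solve
# exp(-k/E) = 1 - 0.80 for k, i.e. k = E * ln(5) ~ 1.61 * E.
E = 18000  # assumed expected curve count for p55 at its optimal B1
k = math.ceil(-E * math.log(1 - 0.80))
print(k)   # about 29,000 curves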

Cunningham "Champion" SNFS factorizations with Greg:

5714 p128*p140 5,383+ Childers/Dodson
5654 p127*p136 6,392+ Childers/Dodson
49) Message boards : NFS Discussion : p59 factor of 6, 334+ by ecm (Message 141)
Posted 20 Oct 2009 by bdodson*
Post:
The early NFS@Home numbers were all tested extensively
by ECM (the Elliptic Curve Method) for small and medium-sized
prime factors, past 55 digits. The three most
recent numbers were less tested, and of these 6,334+
was hardly tested at all before selection (due to rapid
NFS@Home progress!). We had expected 50-digit prime
factors to have been removed, and planned on removing
prime factors up to 55-digits, with a chance of finding
larger factors, before starting sieving.

So I'm happy to report that the 59-digit prime

p59 =
37597376323754357344197406664995834047249702145969970498293

divides the 239-digit composite factor of 6,334+ we
had intended to factor. Unfortunately, there remains
a composite cofactor of 181 digits; and sieving
by SNFS (with difficulty 259) is still easier than
using GNFS (GNFS on a 181-digit number is perhaps
comparable in difficulty to SNFS on a number with
difficulty 270). I'm not done testing yet,
but there's most likely not another factor in ECM
range (up to 70-digit primes).
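
The 181-digit GNFS versus difficulty-270 SNFS comparison can be
sanity-checked against the usual asymptotic formula, with constants
(64/9)^(1/3) for GNFS and (32/9)^(1/3) for SNFS. A sketch, with the
caveat that the formula drops o(1) terms and is only a rough guide at
these sizes:

import math

def nfs_exponent(digits, c):
    # Heuristic NFS cost is exp(c * (ln N)^(1/3) * (ln ln N)^(2/3));
    # return just the exponent, since only comparisons are meaningful.
    lnN = digits * math.log(10)
    return c * lnN ** (1/3) * math.log(lnN) ** (2/3)

GNFS = (64/9) ** (1/3)  # general number field sieve constant
SNFS = (32/9) ** (1/3)  # special number field sieve constant

print(nfs_exponent(181, GNFS))  # ~47.6: GNFS on a 181-digit number
print(nfs_exponent(270, SNFS))  # ~45.1: SNFS at difficulty 270

By this crude measure the difficulty-270 SNFS job still comes out a
bit cheaper, consistent with sticking to SNFS here.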

This factor was found by GMP-ECM, under the ECMNET
project, as Dodson/ECMNET; GMP-ECM is also the software
used by the BOINC project yoyo@home (for which Beyond of
Team Ars Technica found a recent large prime factor).
The machine used was one of 300+ 32-bit Linux/Xeons,
along with c. 500 PCs (1050 PCs during the 8pm-to-8am
hours) distributed under Condor. The first-step limit
B1 was 260,000,000 (with default B2), optimal for
finding 60-digit primes.
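
For anyone curious what a run at that limit looks like, here's a
sketch driving the GMP-ECM binary from Python; the file name and the
single-curve count are placeholders, since in practice the curves are
spread across many machines:

import subprocess

# One ECM curve at B1 = 260,000,000 with the default B2.  GMP-ECM
# reads the composite from stdin and takes B1 as its argument;
# "cofactor.txt" holding the number to test is a placeholder name.
with open("cofactor.txt") as f:
    result = subprocess.run(
        ["ecm", "-c", "1", "260000000"],
        stdin=f, capture_output=True, text=True)
print(result.stdout)  # any factor found is reported in the output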

This p59 is the 4th largest prime found this year by
ECMNET, just below the 59-digit prime in 3rd place
found by yoyo@home earlier this year. It's nowhere
near the Top 10 on Brent's list, for which the three
smallest primes all have 62 digits. At the risk of
trying the patience of NFS@Home readers, I can report
that four of those Top 10 are Dodson/ECMNET factors,
including the two largest at 67 and 66 digits
(just a bit larger than the 66-digit prime found by
the ECMNET founder, Paul Zimmermann). -bdodson
50) Message boards : NFS Discussion : Reservations? (Message 140)
Posted 19 Oct 2009 by bdodson*
Post:
The three most recent additions to the list under "Status
of Numbers" are the Cunningham numbers of "snfs difficulty" 259
(neglecting two at 258.8 that would have required a quartic
polynomial). The two you mention are at 252.5; so switching
numbers would mean more overhead.

One of the three added is a 2LM (1726L), so 2LMs aren't being
neglected here (two of the 16 on the "Numbers" list). One of
the others just added is the 3rd-from-last to be reserved from
the previous list of "Wanted" numbers (viz. 7,307+; the
other two are above the current NFS@Home range).

From the new "Wanted" lists Greg's post refers to (mailed as hardcopy
with the factors on Sam's "Page 112"; we're still waiting for the
online version), all of the "Most Wanted" are already reserved
(three currently here in "Post Processing"). Not counting the
third of our new "Numbers" (6,334+), there are just ten new
"More Wanted" not already reserved, including the two you've mentioned.
We should be able to clear several of these. -bdodson

(PS - The term "snfs difficulty" refers to the estimate for the
runtime of our factorizations, as measured by the asymptotic
formula.)
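
(For readers new to the jargon: the formula in question is the usual
heuristic exp(c * (ln N)^(1/3) * (ln ln N)^(2/3)), with c = (32/9)^(1/3)
for SNFS, where N is the full algebraic number the polynomial defines;
hence "difficulty" counts the digits of that form rather than of the
remaining cofactor.)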

