Posts by bdodson*
21)
Message boards :
Questions/Problems/Bugs :
Error's while Computing ?
(Message 629)
Posted 8 Nov 2010 by bdodson* Post:
OK, I have

    285696 Nov 8 15:38 cudart.dll
    400930 Nov 8 15:48 primegrid_ppsieve_1.30_i686-pc-linux-gnu__cuda23
       624 Nov 8 15:52 app_info.xml

and get messages like

    Tesla C2050 (driver version unknown, CUDA version 3010)
    Found app_info.xml; using anon platform
    File projects... ppsieve_1.30...
    .... [error] no url for file transfer primegrid_ppsieve_1.30_windows_intelx86

It's happily downloading tasks, but they just say "in progress" ... ah, the task reports its "Status" as downloading ... no compute time. I did wonder about the cuda23, as these Fermis are cuda3-something; and about why a linux app would ship a .dll (a windows dynamic-link library) at all. At least I managed to get connected, at last ... -Bruce
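For comparison, a minimal anonymous-platform app_info.xml for a linux CUDA app generally looks something like the sketch below. The app name, version number and plan class here are placeholders that would have to match what PrimeGrid actually uses, and my guess (unverified) at the error above is that some <name>/<file_name> entry still points at the windows executable (primegrid_ppsieve_1.30_windows_intelx86) rather than the linux binary that's actually on disk. The comments are annotations only; strip them if the client's parser objects.

    <app_info>
      <app>
        <name>pps_sieve</name>  <!-- placeholder: must match the project's internal app name -->
      </app>
      <file_info>
        <name>primegrid_ppsieve_1.30_i686-pc-linux-gnu__cuda23</name>
        <executable/>
      </file_info>
      <app_version>
        <app_name>pps_sieve</app_name>  <!-- same placeholder as above -->
        <version_num>130</version_num>
        <plan_class>cuda23</plan_class>
        <avg_ncpus>0.05</avg_ncpus>
        <coproc>
          <type>CUDA</type>
          <count>1</count>
        </coproc>
        <file_ref>
          <file_name>primegrid_ppsieve_1.30_i686-pc-linux-gnu__cuda23</file_name>
          <main_program/>
        </file_ref>
      </app_version>
    </app_info>

The cudart.dll is presumably left over from a windows package; on linux the CUDA runtime would normally be a libcudart.so, and if the app needs one shipped alongside it, that file gets its own <file_info> and <file_ref> entries.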
22)
Message boards :
Questions/Problems/Bugs :
no progress ???
(Message 625)
Posted 7 Nov 2010 by bdodson* Post:

    On my Athlon X3 (Win Xp 64B) I have 3 WU running on high priority, but those WU are quite stuck:

Let me guess: these are 16e's, lasievef? At least as a temporary measure, why not switch to 15e's (lasievee)? Unrelated, but the main boinc site at Berkeley has been timing out most of the day. I did a refresh of my files on our GPU server, but haven't been able to get an attach_to_project to connect (it exits with a segfault). -Bruce
23)
Message boards :
Questions/Problems/Bugs :
Error's while Computing ?
(Message 617)
Posted 3 Nov 2010 by bdodson* Post:

    I was just pointing out the abnormal amount of errors Bruce, or is that prohibited at this project ??? The credit doesn't mean squat to me, but if there's too many errors then it's just a waste of time to run the wu's. I switched over to the 15e Wu's to see how they run on my boxes ... Thanks.
And thanks for the report; replying as a friend of the admin. Our two nvidia/Tesla C2050's (under linux, on a box with six cores) have finished the precomputation for a 16e project (non-boinc) they were working on. Any suggestion of a boinc project with an application they'd be able to run? The usual GPU suspects I tried either didn't have what they advertised, or had an app whose Wu's just gave errors --- shortly after starting. -Bruce
24)
Message boards :
Questions/Problems/Bugs :
Error's while Computing ?
(Message 612)
Posted 2 Nov 2010 by bdodson* Post:

    I hadn't noticed before today but I'm amazed at the amount of Error While Computing 16e Wu's I have. ...

Thanks for the promotion, friend (and what have you done to Poorboy?). For the record, the admin here at NFS@Home is a singleton; Greg and Greg only. I can verify my divergence from the admin view by reminding everyone that the 15e Wu's have at least as much mathematical interest as the 16e's, with none of the Error issues. I was --- of course, as always --- happy to see you back in the top 10. I find it hard to believe that the difference in the credit adjustment between 16e's and 15e's is enough to make a visible ripple in your daily credit totals.

I can further support my non-admin status with a first report that the 16e number listed as in "postprocessing" on the "Status" page finished last night --- a very remarkable three-prime factorization, with all three primes far away from ECM range (that is, I and the other non-boinc contributors didn't "miss" a small factor in that part of the computation). This was a Champion 290-digit number, with prime factors of 85, 96 and 110 digits; a real achievement for boinc computation. -Bruce

( ... let's wait for the news, for further info ...)
25)
Message boards :
Questions/Problems/Bugs :
Tasks that Error while computing
(Message 584)
Posted 21 Sep 2010 by bdodson* Post:

Sorceress, I'm having a _very_ difficult time understanding why people with memory issues won't switch _and_stay_switched_ to lasievee. I run lasievee on my lower-memory linux machines, even though I'm reasonably sure that they would be able to run lasievef. Perhaps it's a credit issue? If so, then maybe the differential credit ought to be adjusted. I do 100% agree that distributed computing jobs that are intended to make use of idle cycles should not be interfering with other user jobs.

One point that hasn't been emphasized is that the lasievee jobs (referred to as using the "15e siever", rather than the "16e siever", among non-boinc users) are also working on a very interesting project that's different from the large-memory jobs. The latter are heading towards record-sized "snfs" numbers, for a public project, and are expected to make use of top-of-the-line national computing resources (for the postprocessing). But the lower-memory jobs will soon be working on a sequence of "gnfs" numbers, for which the precomputation uses state-of-the-art GPU computing. If one is thinking about breaking RSA keys while running these WUs, that's gnfs, not snfs. Greg's relying upon linux/nvidia Tesla GPU cards to find the so-called "gnfs polynomial" sent with the lasievee jobs; it's found by a massively parallel search and state-of-the-art CUDA programming.

If there's even the slightest possibility of a lasievef job being in the way on your machine, _please_, switch already! And happy computing. -bdodson
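For anyone wondering what that GPU polynomial search is actually looking for: the standard gnfs setup (in outline; the degree and the scoring details below are typical practice, not a description of Greg's actual runs) needs two irreducible polynomials with a common root modulo the number N being factored,

    f(x) = c_d x^d + \dots + c_1 x + c_0, \qquad g(x) = x - m, \qquad f(m) \equiv g(m) \equiv 0 \pmod{N},

with d usually 5 or 6 at these sizes. The siever then hunts for coprime pairs (a,b) for which both homogeneous values b^d f(a/b) and b g(a/b) are smooth. How productive the sieving is depends enormously on the particular f: candidates are scored by coefficient size, skew, and the number of roots modulo small primes (Murphy's E-value is the usual figure of merit), and it's that scoring over a huge space of candidate polynomials that maps so well onto GPUs.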
26)
Message boards :
Questions/Problems/Bugs :
lasievef 1.08 still errors w/ windows
(Message 548)
Posted 1 Sep 2010 by bdodson* Post:
Me too. Up from 41 min to what looks like 5 hrs on our Core i7 server. -bd (Not that this is necessarily a problem, so long as the longer tasks are producing more relations in proportion to the time.)
27)
Message boards :
Chat :
Productivity
(Message 510)
Posted 29 Jul 2010 by bdodson* Post:

    Linear algebra is what I fear. Does the latest msieve solve any of these problems?

I'm not sure that we know the limits of parallel block Lanczos; but all of the recent records have been set with the LA done by block Wiedemann (RSA200, SNFS1024, RSA768). Already the target 5,409- c282 may be in "big iron" range, with a request for substantial time on first-rate hardware (InfiniBand ...). The most recent records have relied on distributing the matrix, with each piece run in parallel, which is a Wiedemann feature. -Bruce
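Roughly why block Wiedemann farms out where Lanczos doesn't (a textbook-level sketch, not a statement about msieve's internals): for the N x N sparse matrix B over GF(2), one picks random blocks of vectors U (m columns) and V (n columns) and computes the matrix sequence

    a_i = U^{\mathsf T} B^i V, \qquad i = 0, 1, \dots, \text{about } N/m + N/n.

Each column v_j of V generates its own Krylov sequence B^i v_j, so the dominant cost, the long chain of sparse matrix-times-vector products, splits into n independent jobs that can run on separate clusters with no communication between them; only the comparatively cheap generating-polynomial step (a block Berlekamp-Massey) and a final evaluation pass need the pieces brought back together. Block Lanczos builds every iterate from the one before it in a single chain, so the whole computation has to stay on one tightly coupled machine or cluster for its entire run.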
28)
Message boards :
NFS Discussion :
Factor this!
(Message 480)
Posted 7 Jun 2010 by bdodson* Post: ... OK, you're officially wasting the time of the people reading your posts. The numbers on each of the 18 Cunningham lists have specific limits on n, given in terms of b. There have been no new numbers added for some years; and the most recent "official" count was 458 on April 10. (I'm kidding on the first official, not the second; I'm not an official on either the Cunningham project or NFS@Home.) ... One of us lacks an elementary education in the algorithms of modern computational number theory.
Again.
False. I'm a coauthor of the paper on the factorization of RSA120; we used QS (cf. MPQS, ppmpqs, ppmpqs with larger large primes). Likewise RSA100 and RSA110. Perhaps the most famous of all record factorizations was the original challenge number of Rivest-Shamir-Adleman, which appeared in a famous math puzzle column in Scientific American alongside the first public description of the RSA cryptographic system: RSA129. Also done by QS (with ppmpqs and larger large primes). So we now know, beyond any doubt, the state of your knowledge of previous records.
Again. I did. As reported in the paper on the factorization of RSA155, Eurocrypt 2000. More than 10 years ago (with the factors found in 1999). That was state-of-the-art at that time, just as RSA768 is the current state-of-the-art. What is the point of this comment? Yes, 512 bits, 155 decimal digits, is easy now. My statement (the one quoted) is a correct description of the current resources of NFS@Home, which are somewhat larger than those of the Batalov+Dodson projects, where we recently did a c180 gnfs.
Again. An undergraduate class in elementary number theory would usually include at least the statement of the prime number theorem (I advised an undergrad honors project in which the student presented the proof; that student is currently pursuing graduate studies at Univ Wisc.). There are far too many primes smaller than 10^70 to search through them: on the order of 10^67 of them, far beyond anything that could ever be enumerated. You're describing a computation with exponential runtime. You've taken calculus? All of QS, ECM and NFS are subexponential methods. ECM has a runtime that does depend on the size of the smallest prime factor; the record-size prime found that way is 68 digits for general numbers, 73 digits for numbers 2^n-1. Not so QS and NFS, each of which would take weeks to find a factor of 3 in a 180-digit composite. This was reported in Time magazine in 1983; it's not even number theory anymore, just basic knowledge for educated people. I'll try again. Your view of the effect of the "difference in size of prime factors in question" is not the view of a talented amateur. You may very well be bright, but your comments do more to confirm that "a little knowledge is a dangerous thing" than to make a positive contribution to our project. Unofficially speaking. -bdodson
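To put rough numbers on the two points above (standard asymptotics, with the o(1) terms dropped, so orders of magnitude only): the prime number theorem gives

    \pi(10^{70}) \approx \frac{10^{70}}{\ln 10^{70}} \approx 6 \times 10^{67},

so "searching the primes below 10^70" is an exponential-time computation in the length of the input, hopeless on any hardware. The methods named above are all subexponential, with heuristic complexities usually written in terms of L_n[a,c] = \exp\big((c+o(1))(\ln n)^a (\ln\ln n)^{1-a}\big):

    QS:   L_n[1/2, 1]
    GNFS: L_n[1/3, (64/9)^{1/3}]
    ECM:  L_p[1/2, \sqrt{2}],  p = smallest prime factor of n.

Only ECM's cost depends on the size of the factor being pulled out (hence the 68- and 73-digit records); QS and NFS cost the same whether the factors are balanced or one of them is 3.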
29)
Message boards :
NFS Discussion :
Factor this!
(Message 473)
Posted 25 May 2010 by bdodson* Post:

    1. Random generation using random primes (Factors = p96 * p145)

There must be some range in between "random numbers, hard to factor" and "numbers of the form b^k-1" (including ones b^{2n}-1 = (b^n-1)(b^n+1)). You have a c240, or maybe a c241 (96+145). Is the intended point that it's no easier to factor than a prospective RSA number p*q, with p and q both 120-digit primes? Since the current record for RSA numbers is RSA768, at 232 decimal digits, factoring the number you've posed isn't much different than asking for a new record factorization of an RSA number with 240 digits. I don't know your background, but have you looked at the resources needed for setting some of the previous records? Numbers of comparable difficulty (size, for gnfs) to the snfs numbers we're working on here might have 180 digits; with 170 digits not-so-difficult, and 190 digits harder than our current resources.

Or perhaps you intend factoring your number to be easier than factoring a 240-digit RSA key, due to the smaller p96 factor? There's already a challenge number of this type, with 1024 bits (c. 310 decimal digits), which is known to be of the form p1*p2*p3*p4, where each pi (i = 1,2,3,4) is a 256-bit prime. Factoring this previous challenge number by a method easier than factoring a 1024-bit RSA key requires a method, easier than sieving (gnfs or snfs), that would be able to make use of having a 77-digit prime factor (c. 256 bits). Perhaps this previous challenge is easier than yours (p77 instead of p96)? The person proposing this earlier challenge heads one of the top factoring research labs, and they have made some progress by finding p73 factors, setting a record for ecm; with the slight issue that the 73-digit primes are quite far from being random. Uhm, guess you're not going to be impressed: their primes all divide numbers 2^n-1. If you want random, the best we can do is 68 digits. -bdodson
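A back-of-the-envelope comparison using the standard gnfs heuristic L_n[1/3, (64/9)^{1/3}], with the o(1) term ignored (a large omission at these sizes, so treat the ratios as rough indications only):

    \ln L(10^{180}) \approx 47.5, \quad \ln L(10^{232}) \approx 53.1, \quad \ln L(10^{240}) \approx 53.9.

On this crude measure a balanced 240-digit gnfs is only a factor of e^{0.8} \approx 2 beyond RSA768, i.e. essentially a request for a slightly-better-than-record general factorization; while a c180 gnfs sits a few hundred times (e^{5.6}) below RSA768, which is why numbers of that size are within reach of small teams.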
30)
Message boards :
Chat :
Productivity; Part 2
(Message 456)
Posted 11 May 2010 by bdodson* Post: ... Many of our users sieve with windows sievers; me in particular, on four different flavors of pc. No Mac version, though. The boinc software is primarily GUI-based, either windows or X11. But there are "early versions", not graphics-based, still available from the main boinc page. No idea whether they run well, and it seems unlikely that coordinating them would be worth Greg's effort. I'm not sure why you needed a consensus on issues for which the solutions are well known (even to me, and I'm not especially a boinc person, as you know). -Bruce
31)
Message boards :
Questions/Problems/Bugs :
Project recently started using too much memory?
(Message 368)
Posted 19 Jan 2010 by bdodson* Post:

    ... Just to be clear, I'm still banning the 16e siever

Lots of other users seem to have taken this view as well, with server status sometimes showing < 4000 "16e results in progress". Recently this has been trending upward, into the 5000's; today I'm seeing 6,217. I know why I'm back to running these on the linux machines with sufficient resources, but I am wondering who is behind such a strong uptick, and why. A possible reason is that we're currently sieving M899 = 2^899-1, one of the Mersenne numbers. When this one finishes, the next 16e target is another "repunit", R271 = (10^271-1)/(10-1) = 111...1 (271 ones), the next one after R269 (10,269- in the Cunningham notation) from back in Dec. Perhaps these are seen as somewhat higher-profile targets than some of the other Cunninghams? Just wondering. -Bruce
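Incidentally, the "special" in snfs for targets like these is that their algebraic form hands you the polynomial pair for free. Illustrative sextic choices (the natural ones, though not necessarily the exact parameters used for these jobs):

    M899:  2(2^{899} - 1) = (2^{150})^6 - 2,            so f(x) = x^6 - 2   with m = 2^{150};
    R271:  9 \cdot R271 = 10^{271} - 1 = 10(10^{45})^6 - 1,  so f(x) = 10x^6 - 1 with m = 10^{45};

in each case paired with the linear g(x) = x - m, and f(m) \equiv 0 modulo the number being factored. Those tiny coefficients are what make the snfs difficulty (about 271 digits for both) so much cheaper than a general number of the same size would be.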
32)
Message boards :
NFS Discussion :
Resources
(Message 365)
Posted 15 Jan 2010 by bdodson* Post:

    Allow me to suggest that in the future you might want to consider ...

Actually, one of the primary interests of the Batalov+Dodson project is to clear the remaining numbers of difficulty below 250, leaving only hard quartics. We have a few more wanted numbers and first-five holes left to clear from our previous reservation (the 3rd round, seven numbers each) before taking the next bunch. Perhaps you might consider reserving a few yourself, in advance. -Bruce

PS1 - We're not as web-page fluent as Greg, but people who are interested can scroll through the recent pages (page 111, page 112, ...) on the Cunningham site. We're perhaps the second most active group, after NFS@Home.

PS2 - Likewise, the numbers in question from the most recent addition to the "Status of Numbers" are 6,338+ and 5,377+. I can divulge that Serge was getting set to fire off a reservation for these, but I was happy to be able to report that they were already being ecm'd for NFS@Home. There are still lots of numbers with difficulty between 250 and 255, a range that's almost untouched by either B+D or NFS@H.

PS3 - Sorry, no Cell apps yet; although our friends at EPFL have a few: https://documents.epfl.ch/users/l/le/lenstra/public/pictures/DSC00942k.JPG
33)
Message boards :
NFS Discussion :
1M credits/day
(Message 354)
Posted 7 Jan 2010 by bdodson* Post: ... Greg has this right (as usual). Tues: 987201; Wed: 997734; Thurs: 1005468, meeting Carlos's challenge. Took a while, though. Next we might look forward to 100 million TotalCredit, where Synergy has us at 85 million. -Bruce
34)
Message boards :
NFS Discussion :
EM43
(Message 353)
Posted 7 Jan 2010 by bdodson* Post:

    In a brief departure from the Cunningham composites, we will be factoring EM43, ... This, however, will be the first use by NFS@Home of the General Number Field Sieve (GNFS) rather than the Special Number Field Sieve (SNFS).

A new record for breaking RSA keys was set on Dec 12, and just reported today (Jan 7). The number had 232 decimal digits and was referred to as RSA-768, a 768-bit key. Sieving (the part done here on NFS@Home) used the method we will be applying to our next (15e) number, EM43. The main difference is the postprocessing, which required solving a sparse bit matrix with 192.79 million rows/columns. -Bruce
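To give a sense of why that last step is the bottleneck (the row count is from the RSA768 announcement; the density is my assumption, roughly 100-150 nonzeros per row, so read the result as an order of magnitude only): storing the matrix as 4-byte column indices already takes about

    1.93 \times 10^8 \text{ rows} \times 125 \text{ entries/row} \times 4 \text{ bytes} \approx 10^{11} \text{ bytes} \approx 100\ \mathrm{GB},

before a single pass of the block-Wiedemann iteration over it. Sieving spreads over thousands of independent hosts; this part doesn't, which is why it runs on tightly coupled clusters rather than on boinc.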
35)
Message boards :
NFS Discussion :
1M credits/day
(Message 352)
Posted 3 Jan 2010 by bdodson* Post:
Something's working: on Thurs the Synergy daily was 913130; today it's 947231 --- getting very close to Carlos's 1M. -bd
36)
Message boards :
NFS Discussion :
1M credits/day
(Message 294)
Posted 19 Dec 2009 by bdodson* Post:
The data at Carlos's link had been constant at 897893 for three days; it finally updated, to 905517. That's the highest I've seen, so perhaps the DC-Vault competition is doing some good? -Bruce
37)
Message boards :
NFS Discussion :
1M credits/day
(Message 275)
Posted 7 Dec 2009 by bdodson* Post: Comparing apples and oranges again. :-) We're on a NFS@Home surge; Synergy has us at 825.6K. -Bruce |
38)
Message boards :
Questions/Problems/Bugs :
assimilator-e
(Message 271)
Posted 6 Dec 2009 by bdodson* Post: I've been watching the backlog of these going up on the server status, and wondered whether you noticed and/or are worried? -Bruce |
39)
Message boards :
NFS Discussion :
Very long post processing
(Message 257)
Posted 3 Dec 2009 by bdodson* Post:
The large number sieved here (entirely) before M941 has just been completed by Greg as c274 = p62*p100*p113. This is a new "Champion" Cunningham factorization, in second place:

    Special number field sieve by SNFS difficulty:
    5501  c307  2,1039-   K.Aoki+J.Franke+T.Kleinjung+A.K.Lenstra+D.A.Osvik
    5787  c274  5,398+    G.Childers+B.Dodson
    5739  c228  12,256+   T.Womack+B.Dodson

At 280 digits, M941 will take over second place when the matrix step finishes, about six weeks from now. -Bruce
40)
Message boards :
NFS Discussion :
Very long post processing
(Message 251)
Posted 26 Nov 2009 by bdodson* Post:

    ..., I ran the old NFSNet project on a number of machines way back years ago. IIRC, the project leaders at the time were a bit more informative with the details of what was happening in the background/post processing.

Good to hear; was that back when they had stats and automated task distribution? Once the stats went down, most of the sieving was either me here or Greg. In either case, a much smaller group, with a different interest/tolerance in hearing the details. Also, despite the huge progress in Wanted numbers, NFS@Home is still quite new. I'm not sure that Greg's set a firm protocol for who's available for that one intensive step, the matrix computation. Still a work-in-progress.
I'm usually only able to run matrices on our newest clusters, often with the best results coming before they're quite open to our users. I ran a bunch on our old compute server with Greg (the one still listed with 32 cores). Not sure how long the new Xeons will stay useful for matrix work; I've been running smaller projects with Batalov. Almost all of our hardware is run exclusively under a UWisc scheduler called Condor; no user logins or job submission. Something in the range of 200+ linux x86-64s, which I use for nfs sieving projects (most recently M941, about half of that computation). Then a grid of pcs, mostly windows machines plus some 32-bit linux, on which I run ecm. The volume and quality of the NFS@Home factorizations seem to me to represent a new era for Cunningham numbers, for all but the most exclusive projects using .com or .gov (or both) resources. Those would include the two record computations, M1039 for snfs and RSA200 for gnfs; still somewhat past our present range. -Bruce