
Posts by bdodson*

21) Message boards : Questions/Problems/Bugs : Error's while Computing ? (Message 629)
Posted 8 Nov 2010 by bdodson*
Post:


I see some "[2] NVIDIA Tesla C1060 (4095MB) driver: 19562" running the PrimeGrid "Proth Prime Search (Sieve) v1.30 (cuda23)" WUs if you're interested, or think you can get your nvidia/Tesla C2050's to run them ... ???

You need the following app_info.xml & the executable and the cudart.dll from HERE ...


OK, I have
 285696 Nov  8 15:38 cudart.dll
 400930 Nov  8 15:48 primegrid_ppsieve_1.30_i686-pc-linux-gnu__cuda23
    624 Nov  8 15:52 app_info.xml

And get messages like
Tesla C2050 (driver version unknown, CUDA version 3010)
Found app_info.xml; using anon platform
File projects... ppsieve_1.30...
....
[error] no url for file transfer primegrid_ppsieve_1.30_windows_intelx86

Happily downloading tasks, but they're just saying "in progress"
... ah, the task reports "Status" as downloading ... no compute
time. I did wonder about the cuda23, as these Fermis are CUDA 3-point-something;
and why a linux app would come with a .dll (a Windows dynamic-link library) file.
Looks like the app_info quoted below points at the Windows executable rather than
the linux binary I have, which would explain the "no url for file transfer" line;
see the adjusted sketch below the quoted app_info.

At least I managed to get connected, at last ... -Bruce







<app_info>
<app>
<name>pps_sr2sieve</name>
<user_friendly_name>Proth Prime Search (Sieve)</user_friendly_name>
</app>
<file_info>
<name>primegrid_ppsieve_1.30_windows_intelx86__cuda23.exe</name>
<executable/>
</file_info>
<app_version>
<app_name>pps_sr2sieve</app_name>
<version_num>130</version_num>
<plan_class>cuda23</plan_class>
<avg_ncpus>0.05</avg_ncpus>
<max_ncpus>1</max_ncpus>
<flops>1.0e11</flops>
<coproc>
<type>CUDA</type>
<count>1</count>
</coproc>
<cmdline></cmdline>
<file_ref>
<file_name>primegrid_ppsieve_1.30_windows_intelx86__cuda23.exe</file_name>
<main_program/>
</file_ref>
</app_version>
</app_info>

You need to put all three files into the primegrid project directory. In my case it is C:\Program Files\BOINC\projects\www.primegrid.com.

You need to completely shut down BOINC (Manager, Client and all the currently running science apps) before you copy the files into the directory. After that you can restart BOINC and should see something like this in the messages:

13 28.10.2010 21:47:02 NVIDIA GPU GeForce GTX 460 (driver version 26063, CUDA version 3020, compute capability 2.1, 993MB, 363 GFLOPS peak)
14 Collatz Conjecture 28.10.2010 21:47:02 Found app_info.xml; using anonymous platform
15 PrimeGrid 28.10.2010 21:47:02 Found app_info.xml; using anonymous platform
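
Comparing that app_info.xml with the error above: it names the Windows executable, while the file I actually downloaded is the linux binary from the listing. A minimal, untested sketch of what a linux-side app_info.xml would presumably look like, just swapping in that filename and leaving everything else as in the quoted version (whether the cuda23 plan class behaves on a Fermi card is a separate question):

<app_info>
<app>
<name>pps_sr2sieve</name>
<user_friendly_name>Proth Prime Search (Sieve)</user_friendly_name>
</app>
<file_info>
<name>primegrid_ppsieve_1.30_i686-pc-linux-gnu__cuda23</name>
<executable/>
</file_info>
<app_version>
<app_name>pps_sr2sieve</app_name>
<version_num>130</version_num>
<plan_class>cuda23</plan_class>
<avg_ncpus>0.05</avg_ncpus>
<max_ncpus>1</max_ncpus>
<flops>1.0e11</flops>
<coproc>
<type>CUDA</type>
<count>1</count>
</coproc>
<cmdline></cmdline>
<file_ref>
<file_name>primegrid_ppsieve_1.30_i686-pc-linux-gnu__cuda23</file_name>
<main_program/>
</file_ref>
</app_version>
</app_info>

On linux the files go into the projects/www.primegrid.com directory under the BOINC data directory rather than C:\Program Files\BOINC\..., with the same shut-everything-down-first caveat as above.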



22) Message boards : Questions/Problems/Bugs : no progress ??? (Message 625)
Posted 7 Nov 2010 by bdodson*
Post:
On my Athlon X3 (Win XP 64-bit) I have 3 WUs running at high priority, but those WUs are quite stuck:

the first after 46 min is 0%
the second after 1 h and 9 min is 6.1 %
the third after 2h 49m is 0.7%

no more progress at all


Let me guess. These are 16e's, lasievef? At least as
a temporary measure, why not switch to 15e's (lasievee)?

Unrelated, but the main boinc site at berkeley has been timing
out most of the day. I did a refresh of my files on our GPU
server, but haven't been able to get an attach_to_project to
connect (exits with a segfault).

-Bruce
23) Message boards : Questions/Problems/Bugs : Error's while Computing ? (Message 617)
Posted 3 Nov 2010 by bdodson*
Post:
I was just pointing out the abnormal amount of errors, Bruce, or is that prohibited at this project ??? The credit doesn't mean squat to me, but if there are too many errors then it's just a waste of time to run the WUs. I switched over to the 15e WUs to see how they run on my boxes ...


Thanks.


For the record, the 16e WUs don't like a reboot or exit from BOINC very much; on a 6-core box I may get 2 or 3 WUs with computation errors if I reboot or even exit from BOINC. I have my preferences set to keep in memory ...


And thanks for the report; replying as a friend of the admin.

Our two nvidia/Tesla C2050's (under linux, on a box with six cores) finished
the precomputation for a 16e project (non-boinc) they were working on. Any
suggestion of a boinc project with an application they'd be able to run? The
usual GPU suspects I tried either didn't have what they advertised, or had an
app with WUs that just gave errors --- shortly after starting.

-Bruce
24) Message boards : Questions/Problems/Bugs : Error's while Computing ? (Message 612)
Posted 2 Nov 2010 by bdodson*
Post:
I hadn't noticed before today but I'm amazed at the amount of Error While Computing 16e WUs I have. ...

Even my stock-running boxes have them, so it's not due to OC'ing. Seems like an awful waste of resource time to get that many, & probably one reason why there's so little participation here other than the Admin themselves.

It could be just the Windows OS machines that are affected ...


Thanks for the promotion, friend (and what have you done to
Poorboy?). For the record, the admin here at NFS@Home is a
singleton; Greg and Greg only. I can verify my divergence from
the admin view by reminding everyone that the 15e WUs have
at least as much math interest as the 16e's,
with none of the Error issues.

I was --- of course, as always --- happy to see you back in
the top10. I find it hard to believe that the difference in the
credit adjustment between 16e's and 15e's is enough to make
a visible ripple in your daily credit totals.

I can further support my non-admin status with a first report: the
16e number listed as in "postprocessing" on the "Status" page
finished last night --- a very remarkable three-prime factorization,
with all three primes far away from ECM range (that is, I and the
other non-boinc contributors didn't "miss" a small factor, for that
part of the computation). This was a Champion 290-digit number,
with prime factors of 85, 96 and 110 digits; a
real achievement for boinc computation.

-Bruce ( ... let's wait for the news, for further info ...)
25) Message boards : Questions/Problems/Bugs : Tasks that Error while computing (Message 584)
Posted 21 Sep 2010 by bdodson*
Post:
Sorceress

I have not lost work from any other project as a result of the NFS lasievef memory usage. When a lasievef WU is downloaded while another project's WU is running and using a lot of memory, the lasievef WU simply starts running immediately, which stops any other WUs (greedy little bugger). The lasievef WU starts off using little memory, then increases memory usage after a couple of minutes and errors out when there is not enough. After that my other WUs start running again from where they left off, without issue. The lasievef WUs should NOT be forcing other WUs to stop like that; they need to wait in line like everybody else. The NFS software needs some work to fix this issue.

Losing 2-3 minutes per errored lasievef WU isn't so bad. It's intolerable if it causes another project's WU to error out. That needs to be fixed, not ignored!


I'm having a _very_ difficult time understanding why people with memory
issues won't switch _and_stay_switched_ to lasievee. I run lasievee on
my lower-memory linux machines, even though I'm reasonably sure that they
would be able to run lasievef. Perhaps it's a credit issue? If so, then
maybe the differential credit ought to be adjusted.

I do 100% agree that distributed computing jobs that are intended to
make use of idle cycles should not be interfering with other user jobs.

One point that hasn't been emphasized is that the lasievee jobs
(referred to as using the "15e siever", rather than the "16e siever",
among non-boinc users) are also working on a very interesting project
that's different from the large-memory jobs. The latter are heading
towards record-sized "snfs" numbers, for a public project, and are
expected to make use of top-of-the-line national computing resources
(for the postprocessing). But the lower-memory jobs will soon be working
on a sequence of "gnfs" numbers, for which the pre-computation uses
state-of-the-art GPU computing. If one is thinking about breaking RSA keys
while running these WUs, that's gnfs, not snfs. Greg's relying upon
linux/nvidia Tesla GPU cards to find the so-called "gnfs polynomial"
sent with the lasievee jobs; found using a massively parallel search,
and state-of-the-art CUDA programming.

If there's even the slightest possibility of a lasievef job being in the
way on your machine, _please_, switch already! And happy computing.
-bdodson
26) Message boards : Questions/Problems/Bugs : lasievef 1.08 still errors w/ windows (Message 548)
Posted 1 Sep 2010 by bdodson*
Post:


Same problem with my system (WinXP32).

Thanks


Me too. Up from 41 min to what looks like 5 hrs on our core i7 server. -bd

(Not that this is necessarily a problem; so long as the longer
tasks are producing more relations in proportion to the time.)
27) Message boards : Chat : Productivity (Message 510)
Posted 29 Jul 2010 by bdodson*
Post:
Linear algebra is what I fear.

.....

Ideas for solving the LA issue?
Does the latest msieve solve any of these problems?


I'm not sure that we know the limits of parallel
Block Lanczos; but all of the recent records have
been set with the LA done by block Wiedemann (RSA200,
SNFS1024, RSA768). Already the target 5,409- C282
may be in "big iron" range, with a request for
substantial time on first-rate hardware (InfiniBand ...).
The most recent records have relied on distributing
the matrix, with each piece run in parallel, which
is a Wiedemann feature. -Bruce
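
For readers wondering why block Wiedemann distributes so much better, here is a rough sketch of the standard idea (textbook-level, not specific to any of the runs above). The expensive first stage computes, for the N x N sparse matrix B over GF(2) and random blocks X and Y of m and n vectors,

\[
a_i = X^{\mathsf T} B^{\,i} Y \in \mathrm{GF}(2)^{m \times n},
\qquad i = 0, 1, \dots, \approx \tfrac{N}{m} + \tfrac{N}{n} .
\]

The columns of Y can be split into groups whose sequences B^i Y_j are computed completely independently, even on clusters at different sites, with only the short a_i sequences combined afterwards in the block Berlekamp-Massey step. Block Lanczos, by contrast, updates a single vector block that every node needs at every iteration, so it scales on one tightly coupled cluster but can't be split up the same way.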
28) Message boards : NFS Discussion : Factor this! (Message 480)
Posted 7 Jun 2010 by bdodson*
Post:
...
The Cunningham project? You could just add to it by contributing a random b^n + 1 or a^n + b^n Ex: 5^300 + 1, or 6^429 + 11^429.
...
I think there are somewhere between 730 and 1200 of those.. That's going to keep you busy for a long time indeed, and that's not even factoring in the new a^n + b^n you contribute to the site each and every day.


OK, you're officially wasting the time of the people reading
your posts. The numbers on each of the 18 Cunningham lists have
specific limits on n, given in terms of b. There have been no
new numbers added for some years; and the most recent "official"
count was 458 on April 10. (I'm kidding on the first official, not
the second; I'm not an official on either the Cunningham project
or NFS@Home.)

...
"You have a c240 or maybe c241 (96+145). Is the intended point that it's
no easier to factor than a prospective RSA number p*q, with p and q both
primes with 120-digits?"

It's c240. And that point would be false: a difference in size between the prime factors makes it easier to factor. That can be determined by common sense alone.


One of us lacks an elementary education in the algorithms of
modern computational number theory.


"Since the current record for RSA numbers is
RSA768, 233-decimal digits; factoring the number you've posed isn't
much different than asking for a new record factorization of an RSA number,
with 240-digits."

But this would not count, as it would be too easy (Because of the difference in size of the prime factors in question)


Again.


"I don't ... know your background, but have you looked at
the resources needed for setting some of the previous records?"

All previous records and RSA-768 were all factored using SNFS or GNFS.


False. I'm a coauthor of the paper on the factorization of RSA120;
we used QS (cf. MPQS, ppmpqs, ppmpqs with larger large primes). Likewise
RSA100 and RSA110. Perhaps the most famous of all record factorizations
was the original challenge number of Rivest-Shamir-Adleman, which appeared
in a famous math puzzle column in Scientific American, the first public
description of the RSA cryptographic system: RSA129. Also done by QS (with
ppmpqs and larger large primes). So we now know, beyond any doubt, the
state of your knowledge of previous records.


" Numbers
of comparable difficulty (size, for gnfs) with the snfs numbers we're
working on here might have 180-digits; with 170-digits not-so-difficult,
and 190-digits harder than our current resources."

You might as well begin trying to break someone else's RSA keys. (155 digits, or a typical 512-bit RSA modulus, would be incredibly easy to do, by that reasoning.)


Again. I did. As reported in the paper on the factorization of RSA155,
Eurocrypt 2000. More than 10 years ago (with the factors found in 1999). That
was state-of-the-art at that time, just as RSA768 is the current
state-of-the-art.

What is the point of this comment? Yes, 512 bits, 155 decimal digits
is easy now. My statement (the one quoted) is a correct description
of the current resources of NFS@Home, which are somewhat larger than
the projects of Batalov+Dodson, where we recently did a c180 gnfs.


"There's already a
challenge number of this type, with 1024-bits (c. 310-decimal), which is
known to be of the form p1*p2*p3*p4, where each pi is a prime (i = 1,2,3,4)
with 256-bits. Factoring this previous challenge number by a method easier
than factoring a 1024-bit RSA-key requires a method, easier than sieving
(gnfs or snfs) that would be able to make use of having a 77-digit prime
factor (c. 256-bits). Perhaps this previous challenge is easier than yours
(p77 instead of p96)?"

Four 77-digit numbers? Wouldn't this be easier by searching for factors in the range of 10^76 or 10^77? And why not use GNFS/SNFS? It's your best method for numbers of that size. (It is unlikely that there will be any methods better than GNFS/SNFS for factoring.)


Again. An undergraduate class in elementary number theory would usually
include at least the statement of the prime number theorem (I advised an
undergrad honors project in which the student presented the proof; currently
pursuing graduate studies at Univ Wisc.). There are far too many primes smaller
than 10^70 to search them one by one; on the order of 10^67 of them, by the
prime number theorem. You're describing a computation with exponential runtime.
You've taken calculus? All of QS, ECM and NFS are subexponential methods. ECM has
a runtime that does depend on the size of the smallest prime factor. The
record-size prime found by ECM is 68 digits for general numbers, 73 digits for 2^n-1.
Not so QS and NFS, each of which would take weeks to find a factor of 3
in a 180-digit composite. This was reported in Time magazine in 1983; it's
not even specialist number theory anymore, just basic knowledge for educated people.
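
To put rough numbers behind that paragraph (standard estimates, nothing specific to this thread's numbers): the prime number theorem bounds how many primes a brute-force search would have to try, and the usual L-notation is how the subexponential methods are compared,

\[
\pi(x) \sim \frac{x}{\ln x}, \qquad
\pi\!\left(10^{70}\right) \approx \frac{10^{70}}{70 \ln 10} \approx 6 \times 10^{67},
\]
\[
L_N[\alpha, c] = \exp\!\Big( (c + o(1)) (\ln N)^{\alpha} (\ln \ln N)^{1-\alpha} \Big),
\]

with QS at L_N[1/2, 1], GNFS at L_N[1/3, (64/9)^{1/3}], SNFS at L_N[1/3, (32/9)^{1/3}], and ECM at L_p[1/2, sqrt(2)] in the smallest prime factor p; the last is why ECM (unlike QS and NFS) gets faster when there happens to be a relatively small factor.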

I'll try again. Your view of the effect of the "difference in size of
prime factors in question" is not the view of a talented amateur. You
may very well be bright, but your comments do more to confirm that
"a little knowledge is a dangerous thing" than to make a positive contribution
to our project. Unofficially speaking.

-bdodson



And as for it being an easier challenge, I would be inclined to agree that it is easier, but it's four 77-digit numbers. Repeating the process to find three of the four factors? That will take you a few months. Factoring p96 * p145 would be somewhat easier given the difference in factor size, and the fact that you only need to find one factor out of the two to complete the factorization. But it is 19 digits larger, so I'm guessing that the two factorizations would take an equal amount of time.

"The person proposing this earlier challenge heads one
of the top factoring research labs, and they have made some progress by finding
p73 factors; setting a record for ecm, with the slight issue that the 73-digit
primes are quite far from being random"

Hmm. Interesting. I think that a p80 might be found by ECM sometime soon. (By sometime soon, I mean 10 ± 5 years.)

And, of course, it was found from 2^1181-1.

"uhm, guess you're not going to be
impressed, their primes all divide numbers 2^n -1. If you want random, the
best we can do is 68-digits."

This is well-known, and.. 68 digits? I can do that in 5 minutes!

29) Message boards : NFS Discussion : Factor this! (Message 473)
Posted 25 May 2010 by bdodson*
Post:
1. Random generation using random primes (Factors = p96 * p145)
2. Wait, wait, wait.. you only factor numbers of the form b^n + 1? ...


There must be some range in between "random numbers, hard to factor" and
"numbers of the form b^k-1" (including ones b^2n -1 = (b^n -1)(b^n +1)).

You have a c240 or maybe c241 (96+145). Is the intended point that it's
no easier to factor than a prospective RSA number p*q, with p and q both
primes with 120-digits? Since the current record for RSA numbers is
RSA768, 232-decimal digits; factoring the number you've posed isn't
much different than asking for a new record factorization of an RSA number,
with 240-digits. I don't know your background, but have you looked at
the resources needed for setting some of the previous records? Numbers
of comparable difficulty (size, for gnfs) to the snfs numbers we're
working on here might have 180-digits; with 170-digits not-so-difficult,
and 190-digits harder than our current resources.

Or perhaps you intend factoring your number to be easier than factoring
a 240-digit RSA-key due to the smaller p96 factor? There's already a
challenge number of this type, with 1024-bits (c. 310-decimal), which is
known to be of the form p1*p2*p3*p4, where each pi is a prime (i = 1,2,3,4)
with 256-bits. Factoring this previous challenge number by a method easier
than factoring a 1024-bit RSA-key requires a method easier than sieving
(gnfs or snfs), one that would be able to make use of having a 77-digit prime
factor (c. 256-bits). Perhaps this previous challenge is easier than yours
(p77 instead of p96)? The person proposing this earlier challenge heads one
of the top factoring research labs, and they have made some progress by finding
p73 factors; setting a record for ecm, with the slight issue that the 73-digit
primes are quite far from being random; uhm, guess you're not going to be
impressed, their primes all divide numbers 2^n -1. If you want random, the
best we can do is 68-digits.

-bdodson
30) Message boards : Chat : Productivity; Part 2 (Message 456)
Posted 11 May 2010 by bdodson*
Post:
...
A consensus has arisen that suggests NFS@Home
could greatly increase its productivity by providing
software that will:

(1) Run under Windows
(2) Allow users to run off-line, and send in results
periodically. ...


Many of our users sieve with windows sievers; me, in
particular, on four different flavors of pc. No Mac
version though. The boinc software is primarily gui
based; either windows or X11. But there are "early
versions" still available from the main boinc page,
not graphics based. No idea whether they run well,
and it seems unlikely that coordination would be worth
Greg's effort.

I'm not sure why you needed a consensus on issues for
which solutions are well known (to me even, not especially
a boinc person, as you know). -Bruce
31) Message boards : Questions/Problems/Bugs : Project recently started using too much memory? (Message 368)
Posted 19 Jan 2010 by bdodson*
Post:
... Just to be clear, I'm still banning the 16e siever
("lasievef") from linux machines with 1Gb/core or less memory, as
well as on the Windows machines. ... there's no reason to risk
having boinc being in the way. -bdodson


Lots of other users seem to have taken this view as well, with
server status sometimes showing < 4000 "16e results in progress".
Recently this has been tending upward, in the 5000's. Today I'm
seeing 6,217. I know why I'm back to running these on the linux
machines with sufficient resources, but am wondering who/why there's
such a strong uptick?

A possible reason is that we're currently sieving M899 = 2^899-1,
one of the Mersenne numbers. When this one finishes, the next 16e
target is another "repunit"

R271 = (10^271-1)/(10-1) = 111...1 (271 ones)

the next one after the number R269 = 10,269- from back in Dec.
Perhaps these are seen as somewhat higher-profile targets than
some of the other Cunninghams? Just wondering. -Bruce
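
For anyone who wants to see the repunit identity concretely, a throwaway check (plain Python, my own snippet, nothing to do with the sievers):

# R271 = (10^271 - 1)/9 is the number written as 271 ones
n = 271
R271 = (10**n - 1) // 9
assert R271 == int("1" * n)
# M899 = 2^899 - 1, the current 16e target, happens to have 271 digits as well
M899 = 2**899 - 1
print(len(str(R271)), len(str(M899)))   # prints: 271 271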
32) Message boards : NFS Discussion : Resources (Message 365)
Posted 15 Jan 2010 by bdodson*
Post:
Allow me to suggest that in the future you might want to consider ...

Otherwise, there will be no numbers left for those
of us with small resources to do.


Actually, one of the primary interests of the Batalov+Dodson
project is to clear the remaining numbers of difficulty below
250, leaving only hard quartics. We have a few more wanted
numbers and first five holes left to clear from our previous
reservation (the 3rd round, seven numbers each) before taking
the next bunch. Perhaps you might consider reserving a few
yourself, in advance. -Bruce

PS1 - We're not as web-page fluent as Greg, but people that
are interested can scroll through recent pages (page 111,
page 112, ...) on the Cunningham site. We're perhaps the
second most active group after NFS@Home.

PS2 - Likewise, the numbers in question, from the most
recent addition to the "Status of Numbers", are 6,338+
and 5,377+. I can divulge that Serge was getting set
to fire off a reservation for these, but I was happy to
be able to report that they were already being ecm'd
for NFS@Home. There are still lots of numbers with
difficulty between 250 and 255, a range that's almost
untouched by either B+D or NFS@H.

PS3 - Sorry, no cell apps yet; although our friends
at epfl have a few:
https://documents.epfl.ch/users/l/le/lenstra/public/pictures/DSC00942k.JPG
33) Message boards : NFS Discussion : 1M credits/day (Message 354)
Posted 7 Jan 2010 by bdodson*
Post:
...
BOINC Synergy and others report Recent Average Credit, which is an average over a few weeks weighted with an exponential decay function with a one week half-life. The project has a RAC of about 728 thousand. BOINCStats and a couple others report the actual number of credits granted each day without any weighting. This has recently surpassed 1 million/day. Assuming it stays a bit over 1 million/day, then it will take 2-3 weeks for the RAC to get to 1 million. ...


Greg has this right (as usual).

Tues: 987201
Wed: 997734
Thurs: 1005468

meeting Carlos's challenge. Took a while, though. Next
we might look forward to 100 million TotalCredit, where
Synergy has us at 85 million. -Bruce
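
For anyone puzzling over the RAC vs. daily-credit distinction Greg describes above, a toy model (my own sketch of an exponentially weighted average with a one-week half-life; BOINC's actual RAC bookkeeping is more involved) shows why the recent-average figure lags the daily totals by a few weeks:

# exponential average with a one-week half-life, updated once per day
decay = 0.5 ** (1.0 / 7.0)      # per-day decay factor
rac = 728_000.0                 # starting RAC, roughly the figure Greg quotes
daily = 1_050_000.0             # assume credits stay "a bit over" 1M per day
for day in range(1, 29):
    rac = decay * rac + (1.0 - decay) * daily
    if day % 7 == 0:
        print(f"week {day // 7}: RAC ~ {rac:,.0f}")
# the RAC crosses 1,000,000 during the third week in this toy model,
# consistent with the 2-3 week estimate quoted above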
34) Message boards : NFS Discussion : EM43 (Message 353)
Posted 7 Jan 2010 by bdodson*
Post:
In a brief departure from the Cunningham composites, we will be factoring EM43, ... This, however, will be the first use by NFS@Home of the General Number Field Sieve (GNFS) rather than the Special Number Field Sieve (SNFS).


A new record for breaking RSA keys was set on Dec 12, and just reported
today (Jan 7). The number had 232 decimal digits, and was referred to
as RSA-768, a 768-bit key. Sieving (the part done here on NFS@Home)
used the method we will be applying to our next (15e) number, EM43. The
main difference is the postprocessing, which required solving a
sparse bit matrix with 192.79 million rows/columns. -Bruce
35) Message boards : NFS Discussion : 1M credits/day (Message 352)
Posted 3 Jan 2010 by bdodson*
Post:

We're on a NFS@Home surge; Synergy has us at 825.6K. -Bruce


The data at Carlos's link has been constant at 897893 for
three days; updated finally, 905517. That's the highest
I've seen, so perhaps the DC-Vault competition is doing
some good? -Bruce


Something's working: on Thurs the synergy daily was
913130; today it's 947231 --- getting very close to
Carlos's 1M. -bd
36) Message boards : NFS Discussion : 1M credits/day (Message 294)
Posted 19 Dec 2009 by bdodson*
Post:

We're on a NFS@Home surge; Synergy has us at 825.6K. -Bruce


The data at Carlos's link has been constant at 897893 for
three days; updated finally, 905517. That's the highest
I've seen, so perhaps the DC-Vault competition is doing
some good? -Bruce
37) Message boards : NFS Discussion : 1M credits/day (Message 275)
Posted 7 Dec 2009 by bdodson*
Post:
Comparing apples and oranges again. :-)

BOINC Synergy and others report Recent Average Credit, which is an average over a few weeks weighted with an exponential decay function with a one week half-life. The project has a RAC of about 728 thousand. BOINCStats and a couple others report the actual number of credits granted each day without any weighting. This has recently surpassed 1 million/day. ...


We're on a NFS@Home surge; Synergy has us at 825.6K. -Bruce
38) Message boards : Questions/Problems/Bugs : assimilator-e (Message 271)
Posted 6 Dec 2009 by bdodson*
Post:
I've been watching the backlog of these going up on
the server status, and wondered whether you noticed
and/or are worried? -Bruce
39) Message boards : NFS Discussion : Very long post processing (Message 257)
Posted 3 Dec 2009 by bdodson*
Post:


Actually, as long as you have been working on the Cunningham Tables and with the hardware you have available, I'm surprised you are not helping in the post processing; then again, maybe you are doing some of your own?


...Almost all of our hardware is exclusively run under a UWisc scheduler
called condor; no user logins or job submission. Something in the
range of 200+ linux x86-64s, which I use for nfs sieving projects (most
recently M941, about half of that computation). ... -Bruce


The large number sieved here (entirely) before M941 has just been
completed by Greg as c274 = p62*p100*p113. This is a new "Champion"
Cunningham factorization, second place:
Special number field sieve by SNFS difficulty:
5501	c307	2,1039-	K.Aoki+J.Franke+T.Kleinjung+A.K.Lenstra+D.A.Osvik
5787	c274	5,398+	G.Childers+B.Dodson
5739	c228	12,256+	T.Womack+B.Dodson 

At 280-digits, M941 will take over second place when the matrix step
finishes, about six weeks from now. -Bruce
40) Message boards : NFS Discussion : Very long post processing (Message 251)
Posted 26 Nov 2009 by bdodson*
Post:
..., I ran the old NFSNet project on a number of machines way back years ago. IIRC, the project leaders at the time were a bit more informative with the details of what was happening in the background/post processing.


Good to hear; was that back when they had stats, and automated task
distribution? Once the stats went down, most of the sieving was either
me here or Greg. In either case, a lot smaller group, with different
interest/tolerance in hearing the details. Also, despite the huge progress
in Wanted numbers, NFS@Home is still quite new. I'm not sure that Greg's
set a firm protocol for who's available for that one intensive step, the
matrix computation. Still a work-in-progress.


Actually, as long as you have been working on the Cunningham Tables and with the hardware you have available, I'm surprised you are not helping in the post processing; then again, maybe you are doing some of your own?


I'm usually only able to run matrices on our newest clusters, often
with best results before they're quite open to our users. I ran a bunch
on our old compute server with Greg (the one still listed with 32 cores).
Not sure how long the new Xeons will stay useful for matrix work; I've
been running smaller projects with Batalov.

Almost all of our hardware is exclusively run under a UWisc scheduler
called condor; no user logins or job submission. Something in the
range of 200+ linux x86-64s, which I use for nfs sieving projects (most
recently M941, about half of that computation). Then there's a pc grid of
mostly windows machines and some 32-bit linux, on which I run ecm.

The volume and quality of the NFS@Home factorizations seem to me to represent
a new era for Cunningham numbers, for all but the most exclusive projects
using .com or .gov (or both) resources. Those would include the two record
computations, M1039 for snfs and RSA200 for gnfs; still somewhat past
our present range. -Bruce



