Posts by Thomas Womack

1)
Message boards :
News :
New 15e number queued
(Message 1555)
Posted 15 Jul 2015 by Thomas Womack:

A new C182 (the XYYXF number with the highest GNFS:SNFS difficulty ratio) is queued up on 15e; thanks to ChristianB for using two weeks of GPU time on his GeForce 750Ti to do the polynomial selection. 2340_742 will start linear algebra tomorrow.

2)
Message boards :
News :
193-digit GNFS coming atcha!
(Message 1528)
Posted 1 May 2015 by Thomas Womack:

After five GPU-weeks of polynomial selection and a thousand CPU-hours of trial sieving, I've queued up a 193-digit GNFS from the aliquot-sequences project.

For this one I chose parameters to run through c5=1..10^7 in five GPU-weeks, which came out as a stage-1 norm of 1e28 and a stage-2 norm of 1e26. The best polynomial came from the first block, with c5=286440. Average yield (over Q=5e7*N .. 5e7*N+1e4 for N=1..8) is just slightly under 1, so I'm sieving 50M..500M to get a good number of relations.

There will be a 190-digit job coming in about another three weeks, when its polynomial selection is finished.
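The trial-sieving arithmetic above can be sketched in a few lines. The per-slice yields below are invented for illustration (the post only gives their average, "just slightly under 1" relation per special-q); the projection step is just yield times the width of the Q range.

```python
# Hypothetical numbers: eight trial-sieve slices at Q = 5e7*N .. 5e7*N + 1e4,
# each reporting relations per special-q.  The individual values are made up;
# only their average ("slightly under 1") comes from the post.
sample_yields = [1.05, 1.02, 0.99, 0.97, 0.95, 0.93, 0.91, 0.89]

avg_yield = sum(sample_yields) / len(sample_yields)

# Projected relation count from sieving the whole 50M..500M special-q range.
q_min, q_max = 50_000_000, 500_000_000
est_relations = avg_yield * (q_max - q_min)

print(f"avg yield {avg_yield:.3f} -> ~{est_relations / 1e6:.0f}M relations")
```

With a yield just under 1, the 450M special-q in 50M..500M project to a little over 430M raw relations, comfortably above the ~360M-420M filtering window discussed in the later posts.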

3)
Message boards :
NFS Discussion :
First batch of experimental GNFS done
(Message 1490)
Posted 24 Feb 2015 by Thomas Womack:

For the C176 from aliquot sequence 2340 step 736 (15e sieving, 32-bit large primes), 320 million relations are enough for filtering to work with target density 120. So I'm targeting 360M relations for the C179 that I've just queued up.

4)
Message boards :
NFS Discussion :
Statistics for C187 polynomial selection
(Message 1489)
Posted 24 Feb 2015 by Thomas Womack:

I've just used another 465 GPU-hours for the C179 from 139^137+137^139. This time I used a stage-1 norm of 2.5e26, searching from 1 to 2.6M to find 99.3M unique stage-1 hits. I was aiming to run 400 GPU-hours on 1M-2M, so I used the rough rule that doubling the stage-1 norm takes 5 times as long and gets 8 times the yield to pick a stage-1 norm that took about two hours to run 1M..1M+10k; but 1M-2M ran faster than 1-1M, so I extended the range on that side.

The rest was routine: 163 CPU-hours to run -nps with a (relatively low) stage-2 norm of 5e23, and 19.5 hours to run -npr on the 1541 survivors. The best polynomial had a score of 1.012e-13 (and a stage-2 norm of 2.39e23); the hundredth-best was 8.87e-14.
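The rule of thumb quoted above (double the stage-1 norm: roughly 5x the time, 8x the yield) is easy to turn into a small calibration helper. This is only a sketch of that scaling law; the reference trial numbers are invented, not taken from the post.

```python
import math

def scale_from_reference(ref_norm, ref_hours, ref_hits, new_norm):
    """Extrapolate GPU time and stage-1 yield from a trial run at ref_norm,
    using the rough rule: each doubling of the norm costs ~5x the time and
    returns ~8x the hits."""
    doublings = math.log2(new_norm / ref_norm)
    return ref_hours * 5 ** doublings, ref_hits * 8 ** doublings

# Invented trial: at norm 1e26 a 10k block of c5 took 0.5 h and gave 1000 hits.
# What should a norm of 2.5e26 cost and return on the same block?
hours, hits = scale_from_reference(1e26, 0.5, 1000, 2.5e26)
print(f"at 2.5e26: ~{hours:.1f} h per block, ~{hits:.0f} hits")
```

Running a couple of 10k trial blocks and inverting this relation is one way to pick a norm whose per-block time lands near a budget (about two hours per 10k block, in the post's case).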

5)
Message boards :
NFS Discussion :
Statistics for C187 polynomial selection
(Message 1483)
Posted 4 Feb 2015 by Thomas Womack:

This took 460 GPU-hours (so about ten days on two GPUs).

First I ran a calibration job to see what stage_1 value was required to get about 20 stage-1 hits per second; the answer turned out to be about 1e27. Then I ran some short ranges (1-2M and 2M-3M), took a 1% sample of the stage-1 hits, ran -nps on the sample with a high stage2_norm, sorted by output score, and reran on the whole set of stage-1 hits with the threshold equal to the tenth-best output score in the sample - that is, I was aiming for about a thousand stage-2 hits. Once I knew how long the short ranges had taken, I ran some longer ranges (3M-17M on one GPU, 17M-30M on the other) and repeated the process. After running -npr on the thousand-or-so stage-2 hits, I sorted by E-value and noted the hundredth-best as a plausible threshold for the next job.

So for 114!+1, difficulty 186.4, the stage-2 threshold I found good for the long ranges is 1.2e+25 (this actually got about 2000 hits), and the hundredth-best E-value on both ranges was about 2.6e-14.
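The sample-and-threshold step can be sketched as follows. The scores here are random placeholders (real ones would come out of -nps); the point is only the selection logic: the tenth-best score in a 1% sample passes about 10 in every 1000 sampled hits, so applied to the full set it should pass about a thousand.

```python
import random

random.seed(1)

# Invented stand-ins: pretend there are 100k stage-1 hits, and a 1% sample
# of them has been scored by -nps under a loose stage2_norm (lower = better).
full_hits = 100_000
sample_scores = [random.uniform(1e23, 1e26) for _ in range(full_hits // 100)]

# The tenth-best score in the sample becomes the stage2_norm threshold for
# the full run: ~10 of every 1000 sampled hits pass it, so the full set
# should yield roughly 100_000 * 10/1000 = 1000 stage-2 hits.
threshold = sorted(sample_scores)[9]
expected_hits = full_hits * 10 / len(sample_scores)

print(f"threshold {threshold:.2e}, expecting ~{expected_hits:.0f} stage-2 hits")
```

The attraction of this design is that the expensive -nps pass over the full hit set runs with a threshold already tuned to produce a manageable number of survivors, rather than guessing the stage-2 norm up front.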

6)
Message boards :
NFS Discussion :
First batch of experimental GNFS done
(Message 1482)
Posted 4 Feb 2015 by Thomas Womack:

Using 32-bit large primes, getting a matrix out of relation filtering with target density 120 required:

Number      Queue   Size         Relations
134.120     14e     165 digits   300M
133.125     14e     170 digits   340M
5748_1537   15e     170 digits   270M

For 133.125, I collected many more relations than were needed; with more than 420M relations the filtering process has to make several passes and ends up with a significantly larger (though less dense) matrix, so 420M should be considered an absolute maximum.
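A crude way to read the two 14e rows above is as a per-digit increment in the relation requirement. This is purely illustrative: real requirements grow faster than linearly with size, so a linear read-off is at best a lower bound, not a sieving target.

```python
# Linear read-off from the two 14e data points in the table above
# (165 digits -> 300M relations, 170 digits -> 340M).  Requirements
# actually grow faster than linearly, so treat this as a floor.
(d0, r0), (d1, r1) = (165, 300e6), (170, 340e6)
slope = (r1 - r0) / (d1 - d0)      # extra relations per extra digit (8M here)

def rough_target(digits):
    """Hypothetical relation estimate for a 32-bit/14e job of a given size."""
    return r0 + slope * (digits - d0)

print(f"~{rough_target(175) / 1e6:.0f}M relations for a 175-digit job")
```

Any such estimate would still need checking by experiment, which is exactly what these posts are doing.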

7)
Message boards :
NFS Discussion :
5748.1537 factors
(Message 1477)
Posted 27 Jan 2015 by Thomas Womack:

314 million relations, 267.8 million of them unique; target density 70 made a 9.7M matrix, which ran in 67 hours on three threads of an i7-2600. The factor is

1286489373198313969761725153955854303258080199965379140551257120431681630387

I will be seeing how few relations are needed to make a matrix, in the hope of reducing sieving times in the future; the 14e jobs currently in the queue are significantly over-sieved.

8)
Message boards :
NFS Discussion :
Welcome from a new adminish person
(Message 1473)
Posted 25 Jan 2015 by Thomas Womack:

Personally I would prefer people to concentrate on 15e: the jobs that 14e is doing could just about feasibly be run by individuals (I have personally run a few 250-digit SNFS and 170-digit GNFS jobs; they take 20,000 thread-hours, which comes to two months on two current quad-core PCs with hyper-threading), whilst the 15e jobs are getting nearer the 100,000 total-CPU-hour mark, and there you're really doing something that an individual would have some difficulty with.

9)
Message boards :
NFS Discussion :
Welcome from a new adminish person
(Message 1469)
Posted 25 Jan 2015 by Thomas Womack:

Good morning, and happy 2015 to all. Though I have spent thousands of core-hours on this project, I've not personally run a single unit of NFS@home; what I do is the setup process (finding polynomials for GNFS) and the completion (running the linear algebra and actually finding the factors).

For the last month I've been trying to work out good parameters for doing larger GNFS jobs on the 14e and 15e queues; I believe it will be possible to run 190-digit GNFS at 15e and 175-digit GNFS at 14e without too much trouble. These will use 32-bit large primes, which makes the linear algebra take a bit longer, but I have fourteen modern cores available for linear algebra (and will likely have more by the end of the year, depending on what the i7-57xx series of processors looks like). The big question is how many relations are needed for 32-bit jobs at this size; I think that can only be determined by experiment, so I'm doing the experiments.