Ah, ok, good to know. That info makes it much easier to get the bug tracked down and fixed. Thanks.

warlockv2 wrote: it was, but now it stopped. I seem to have changed my default directory and it worked; I downloaded a few test articles without any issues.
SSL is slower than non-SSL
Re: SSL is slower than non-SSL
bug fixed. no idea how. hate it when that happens. trying to break it again now. will. not. be. defeated.
Re: SSL is slower than non-SSL
The socket communication and the article decoder parts of the new download engine are multi-threaded in V8. But there is still only a single thread reserved for disk writing because, as you say, sequential writing probably results in better throughput than parallel writing.

robena wrote: I am of course already more than happy with 520 Mb/s instead of 200, but maybe there is a way to cache the output of the 30 threads and have only one concurrent access to the drive? Not everybody has a RAID system that's excellent at concurrent accesses, and using an SSD for high-volume downloading is not recommended!
If you have a reasonably fast computer with 4+ cores, NewsLeecher V8 shouldn't be a bottleneck until your download speed reaches 1-2 Gbps.
Can you try downloading with one or a few other clients to compare the speed against NL? If the speed is slower - or the same - with other clients, the bottleneck is probably at your Usenet service provider or your ISP.
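For anyone wondering what that split looks like in practice, here is a minimal sketch in Python of the pattern described above - many decoder threads handing their output to one dedicated disk-writer thread through a bounded queue. This is only an illustration under my own assumptions (decode_yenc, article.path and article.offset are made-up placeholders), not NewsLeecher's actual code:

```python
import queue
import threading

# Bounded hand-off queue: when the disk can't keep up, decoders block here,
# which is the "waiting to write to disk" state the bots report.
write_queue = queue.Queue(maxsize=256)

def decoder_worker(articles):
    """One of many decoder threads (decode_yenc is a hypothetical placeholder)."""
    for article in articles:
        data = decode_yenc(article)                      # CPU-bound, runs in parallel
        write_queue.put((article.path, article.offset, data))

def disk_writer():
    """The single writer thread: the disk only ever sees one stream of writes."""
    while True:
        path, offset, data = write_queue.get()
        with open(path, "r+b") as f:                     # output file assumed pre-allocated
            f.seek(offset)
            f.write(data)
        write_queue.task_done()

threading.Thread(target=disk_writer, daemon=True).start()
```

The bounded queue also explains why the download throttles instead of using unlimited RAM whenever the disk becomes the bottleneck.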
Re: SSL is slower than non-SSL
Speed is slower with Newsbin; so far, NewsLeecher 8 is the fastest.

Spiril wrote: The socket communication and the article decoder parts of the new download engine are multi-threaded in V8. But there is still only a single thread reserved for disk writing because, as you say, sequential writing probably results in better throughput than parallel writing. If you have a reasonably fast computer with 4+ cores, NewsLeecher V8 shouldn't be a bottleneck until your download speed reaches 1-2 Gbps. Can you try downloading with one or a few other clients to compare the speed against NL? If the speed is slower - or the same - with other clients, the bottleneck is probably at your Usenet service provider or your ISP.
My provider is Newsleecher, so you should know if it's the bottleneck.

My comments about disk writing stemmed from the fact that, with a 250 MB/s (i.e. 2 Gb/s) drive, the speed was cut by more than half compared to using a faster RAID or SSD system. I don't mind for myself, since I do have a RAID, but I'm curious why that happens when a single thread is reserved for disk writing. Maybe it's just the access time that limits throughput when switching from one part to another, even if it's the same thread? If that's the case, there is nothing you can do about it.
Anyway, going from 200 to 520 is quite exceptional, so I'll be happy once V8 becomes functional with scaling.
Robert
Re: SSL is slower than non-SSL
Hi,
I installed NV8 in a different folder than NV7, and set it to use a different data folder.
I can run each version independently, but when one is running, the other does not start.
Is there a way to have both running at the same time? I'd like to continue to use V7 for all the searches, and use it to generate NZBs that will be picked up by V8, until V8 can scale on high DPI monitors.
Thanks.
Robert
Re: SSL is slower than non-SSL
So... new problem: now it is constantly pausing and idling due to 2 issues. And yes, I'm on a fast computer with 6 cores and an SSD drive. It says "waiting to write to disk" very rarely, but now this decoder thing stops the downloads every, say, 2 seconds for 2 seconds. So I went into advanced settings and made it dedicate 6 cores to the decoder, and it makes 0 difference. My speed still works with SSL, it just keeps pausing on me. Again, it's not an issue with my PC - it can't be, other clients have 0 issues and don't pause. Hope this info helps.
Re: SSL is slower than non-SSL
The "Error saving to disk Unable to open temp file." came back, and I was not able to make it stop this time, I had to cancel the download.
Robert
Re: SSL is slower than non-SSL
warlockv2 wrote: So... new problem: now it is constantly pausing and idling due to 2 issues. And yes, I'm on a fast computer with 6 cores and an SSD drive. It says "waiting to write to disk" very rarely, but now this decoder thing stops the downloads every, say, 2 seconds for 2 seconds. So I went into advanced settings and made it dedicate 6 cores to the decoder, and it makes 0 difference. My speed still works with SSL, it just keeps pausing on me. Again, it's not an issue with my PC - it can't be, other clients have 0 issues and don't pause. Hope this info helps.

Oh, and this is happening without SSL on as well... that's odd.
Re: SSL is slower than non-SSL
Although it's much better than version 7 at reaching near max speed (which I previously couldn't), it's still very unstable at maintaining it. The speed now constantly fluctuates between 10 and 24 MB/s every couple of seconds. I tested this by downloading 40 files of 143 MB each to a Crucial M4 SSD while using 30 bots.

Spiril wrote: I have just uploaded an unofficial forum release of NewsLeecher V8 with the new improved download engine in place. If you use NewsLeecher with a very fast internet connection, please let me know if you see speed improvements with this test release.
My connection is 200 Mbit down and 20 Mbit up, with the Newsleecher Ready-to-Go package.
I used https://www.newsleecher.com/nl80001.ufr2.exe on Windows 10 Pro x64.
I took a screenshot of the graph.
Re: SSL is slower than non-SSL
Without (hopefully) being a nag, are you close to solving the "Error saving to disk Unable to open temp file." problem?
It seems that it's the only major problem with release 8 as it is. Once fixed, it should be quite usable.
Robert
Re: SSL is slower than non-SSL
The new V8 is much faster than V7, and it's the first time I could download at around 45 MB/s. But when an extract or par repair is running in the background, I'm still getting "paused while writing to disk" halts, and I'm downloading to an SSD with 500 MB/s write speed.
I think there should be an option to run extract / par repair only when the download is idle (something like the sketch below).
That would also not stress the SSDs/hard drives that much!
Hardware: Intel i7 2600k, 16GB RAM, Samsung SSDs, 400 Mbit cable
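Just to make the suggestion concrete, here's a toy sketch in Python of gating post-processing on download idleness (all names are invented for illustration - nothing to do with NewsLeecher's internals):

```python
import queue
import threading

class PostProcessScheduler:
    """Hypothetical scheduler: par-repair / extract jobs only start while
    no download is active, so they never compete for disk I/O."""

    def __init__(self):
        self.download_idle = threading.Event()
        self.download_idle.set()            # nothing downloading yet
        self.jobs = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def download_started(self):
        self.download_idle.clear()

    def download_finished(self):
        self.download_idle.set()

    def add_job(self, job):
        self.jobs.put(job)                  # job = callable that runs par2 / unrar

    def _worker(self):
        while True:
            job = self.jobs.get()           # wait for a queued job
            self.download_idle.wait()       # ...but hold it until the download is idle
            job()
```

A job that has already started would still run to completion if a new download begins; pausing mid-repair would need extra machinery.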
Re: SSL is slower than non-SSL
So, any ETA on a relatively stable V8 without the temp-file error that prevents it from working?
Robert
Re: SSL is slower than non-SSL
Yes, a new Beta 8 would be fine!
Thanks!
Re: SSL is slower than non-SSL
Hello,
I encounter the same issue with Newsleecher Usenet as well as another provider (my download speed is around ~500 Mbit/s). I downloaded beta 8 ufr3 and the download speeds are almost at maximum; however, the bots often pause with the message "temporarily paused catching up writing to disk". Because of this I lose about a third of the maximum speed again. I am aware that a Usenet download consists of many, many text articles, which probably causes a lot of non-sequential writes; however, it should be possible to cache them in memory and write sequential chunks to the HDD to prevent this (a rough sketch of the idea is below).
The HDD I'm using is a WD40EZRZ (http://hdd.userbenchmark.com/WD-Blue-4TB-2015) with low fragmentation according to Windows, and its disk-benchmark values are also slightly above the average of that UserBenchmark link. This should definitely suffice to write the ~55-60 MB/s required.
I also tried switching my download location to my SSD drive and I still see that message. I guess there's still some room for improvement in how fast download speeds are handled.
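To illustrate what "cache in memory and write sequential chunks" could look like, here's a rough sketch in Python (my own illustration, with made-up names and thresholds, not how NewsLeecher actually buffers):

```python
from collections import defaultdict

FLUSH_THRESHOLD = 32 * 1024 * 1024        # flush a file once ~32 MB is buffered

class WriteCache:
    """Hypothetical write cache: hold decoded segments in RAM per output file
    and write each file in large, offset-ordered (mostly sequential) bursts."""

    def __init__(self):
        self.buffers = defaultdict(list)  # path -> [(offset, data), ...]
        self.sizes = defaultdict(int)

    def add_segment(self, path, offset, data):
        self.buffers[path].append((offset, data))
        self.sizes[path] += len(data)
        if self.sizes[path] >= FLUSH_THRESHOLD:
            self.flush(path)

    def flush(self, path):
        segments = sorted(self.buffers.pop(path, []), key=lambda s: s[0])
        self.sizes.pop(path, None)
        with open(path, "r+b") as f:      # output file assumed pre-allocated
            for offset, data in segments:
                f.seek(offset)
                f.write(data)             # adjacent offsets become sequential I/O
```

At ~60 MB/s a 32 MB buffer per file costs very little RAM, and it turns thousands of small article-sized writes into a handful of big ones, which is what a spinning disk like the WD Blue handles best.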
Re: SSL is slower than non-SSL
SSDs are far more robust than you might think. Look here.

robena wrote: I am of course already more than happy with 520 Mb/s instead of 200, but maybe there is a way to cache the output of the 30 threads and have only one concurrent access to the drive? Not everybody has a RAID system that's excellent at concurrent accesses, and using an SSD for high-volume downloading is not recommended!
On hard drives, yes, but SSDs excel at random, multi-threaded I/O. Would it be possible to augment the download engine with an "SSD mode" switch that enables a configurable number of writer threads, or at least a sensible value (rough sketch of the idea below)?

Spiril wrote: The socket communication and the article decoder parts of the new download engine are multi-threaded in V8. But there is still only a single thread reserved for disk writing because, as you say, sequential writing probably results in better throughput than parallel writing.
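A toy sketch in Python of how such a switch could look (the setting name and value are invented for illustration; the write path mirrors the single-writer sketch earlier in the thread):

```python
import queue
import threading

# Hypothetical setting: 1 keeps today's single sequential writer (best for
# spinning disks); a higher value lets an SSD absorb parallel random writes.
WRITER_THREADS = 4        # e.g. exposed as an "SSD mode" advanced option

write_queue = queue.Queue(maxsize=256)

def disk_writer():
    while True:
        path, offset, data = write_queue.get()
        with open(path, "r+b") as f:      # output file assumed pre-allocated
            f.seek(offset)
            f.write(data)
        write_queue.task_done()

for _ in range(WRITER_THREADS):
    threading.Thread(target=disk_writer, daemon=True).start()
```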