It would be nice to have a delay between feed fetches, especially when updating all of them at once, to prevent a single webserver that hosts multiple feeds from getting flooded. Here's an ugly patch that inserts a 4-second delay:
--- requestfeed.cpp 2018-07-01 15:02:45.000000000 -0400
+++ ./src/requestfeed.cpp 2019-03-02 15:47:47.000000000 -0500
@@ -18,6 +18,7 @@
#include "requestfeed.h"
#include "VersionNo.h"
#include "mainapplication.h"
+#include "common.h"
#include
#ifdef HAVE_QT5
@@ -151,6 +152,7 @@
const QDateTime &date, const int &count)
{
qDebug() << objectName() << "::head:" << getUrl.toEncoded() << "feed:" << feedUrl;
+ Common::sleep(4000);
QNetworkRequest request(getUrl);
QString userAgent = QString("Mozilla/5.0 (Windows NT 6.1) AppleWebKit/%1 (KHTML, like Gecko) QuiteRSS/%2 Safari/%1").
arg(qWebKitVersion()).arg(STRPRODUCTVER);
@@ -176,6 +178,7 @@
const QDateTime &date, const int &count)
{
qDebug() << objectName() << "::get:" << getUrl.toEncoded() << "feed:" << feedUrl;
+ Common::sleep(4000);
QNetworkRequest request(getUrl);
request.setRawHeader("Accept", "application/atom+xml,application/rss+xml;q=0.9,application/xml;q=0.8,text/xml;q=0.7,*/*;q=0.6");
QString userAgent = QString("Mozilla/5.0 (Windows NT 6.1) AppleWebKit/%1 (KHTML, like Gecko) QuiteRSS/%2 Safari/%1").
Hi,
you can define an update frequency for each feed: right-click the feed -> Properties (Ctrl+E).
Kind regards,
Sheldon
Yes, but that doesn't help in the "Update All Feeds Now" use case; it only applies to people who keep their newsreader open, continuously waiting for the next poll. And even then, if many feeds share the same frequency, or frequencies with common multiples, this flooding issue will probably still occur.
The approach comes with limitations, but unfortunately there is no better one so far.
By the way! Such a delay would not really reduce the load.
Why wouldn't it reduce load on the servers, i.e. waiting several seconds before sending another request?
Because the load depends on the servers' responses too.
I'm not sure what you mean. Obviously overall the total amount of data being processed and transferred will be the same, but having it spread out over time is better than requesting everything all at once, surely? (I was referring to the load on the remote servers, btw.)
But you might have overlappings depending on the response time, data volume and server traffic too.
The solution from above is the best we can provide so far.
Well, "probably won't have overlappings" is better than "might have overlappings". Currently QuiteRSS polls all (or most) of my YouTube feeds simultaneously at startup and on "Update All Feeds Now".