[clamav-users] We STILL cannot reliably get virus updates (since new mirrors)

Reindl Harald h.reindl at thelounge.net
Mon Jul 2 21:45:29 EDT 2018

Am 03.07.2018 um 03:37 schrieb Paul Kosinski:
> Any system whereby new versions of files are announced before they are
> actually available to automated downloads is awkward (to say the least).
> If, in addition, a server which doesn't have the announced version is
> blacklisted by the automated downloader, the whole mechanism can grind
> to a halt (as it has for us).
> Even if a server which is out of sync (i.e., behind) is not
> blacklisted, but merely temporarily skipped, it uses extra bandwidth in
> the current scheme. In the case of daily.cvd, the only way freshclam
> detects that the server is out of sync is by downloading the whole file
> (currently about 47 MB) -- the waste of bandwidth is enormous. For
> example, our logs this afternoon show 15 complete downloads of
> daily.cvd over about 1 hour. Of these, all but the last failed due to
> out of sync. This is why we have recently taken to deleting mirrors.dat
> before each freshclam run -- to compensate for the blacklisting -- and
> running freshclam 3 times an hour hoping for sync.
> This behavior is both unreasonable and inefficient.
tell that to the people who think the DNS nonsense gains anything over a
static "daily.version" text file...
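The idea above -- publishing the latest daily.cvd version number in a tiny
static text file so clients can skip the 47 MB download when they are
already current -- can be sketched as follows. The file name "daily.version"
and its layout are hypothetical (taken from the suggestion in this thread,
not from ClamAV itself, which announces versions via a DNS TXT record):

```python
# Sketch of the "daily.version" idea: fetch a few-byte text file holding
# the announced daily.cvd version, compare it against the local copy, and
# only download the full 47 MB daily.cvd when the mirror is actually ahead.
# The announcement format is a hypothetical assumption, not ClamAV's.

def needs_download(announced: str, local_version: int) -> bool:
    """Return True only if the mirror announces a newer daily.cvd.

    `announced` is the raw body of the hypothetical daily.version file,
    e.g. "24690\n"; `local_version` is the version of the local daily.cvd.
    """
    try:
        remote_version = int(announced.strip())
    except ValueError:
        # Malformed announcement: skip this mirror for now rather than
        # blacklist it or pull the whole file to find out.
        return False
    return remote_version > local_version
```

With a check like this, the 15 wasted full downloads described above would
instead be 15 few-byte requests, and only an in-sync mirror would trigger
the real transfer.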

-------- Forwarded Message --------
Subject: Re: [clamav-users] We STILL cannot reliably get virus updates
(since new mirrors)
Date: Mon, 2 Jul 2018 19:10:40 +0100
From: Brian Morrison <bdm at fenrir.org.uk>
Reply-To: ClamAV users ML <clamav-users at lists.clamav.net>
Organisation: The Fool and Bladder Face-Jumping Team
To: ClamAV users ML <clamav-users at lists.clamav.net>

On Mon, 2 Jul 2018 19:50:55 +0200
Reindl Harald wrote:

> > For me freshclam runs roughly every 2 hours, so I think that the
> > load is an order of magnitude higher than you state. I will confess
> > that I don't know about the capability of web servers in this
> > regard, but the point that d.net made was that the DNS server would
> > always be more capable in this regard than a web server
> come on - our main server, running in a virtual machine, spits out 30000
> requests/sec. on our core CMS in the case of cache hits, and even a 7
> year old workstation manages far above 10000/sec - and that is *not*
> static content of a few bytes

How many requests/sec can a DNS server process?

Given that the clamav mirrors seem to be struggling (new system, I know)
I still think that anything that reduces the load they are serving ought
to be a good idea. Not my day job though...
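For reference, the announcement mechanism being debated here is a DNS TXT
record (current.cvd.clamav.net), which freshclam queries before touching a
mirror. The record is a single colon-separated string; the field layout
assumed below is inferred from observed records and may differ between
releases, so treat it as a sketch rather than a specification:

```python
# Sketch of parsing the ClamAV announcement TXT record, e.g.
# "0.100.0:58:24690:1530000000:1:90:49192:333". Field positions are an
# assumption based on observed records: ClamAV release, main.cvd version,
# daily.cvd version; remaining fields are left unparsed here.

def parse_announcement(txt: str) -> dict:
    """Split the colon-separated TXT record into the fields of interest."""
    fields = txt.split(":")
    return {
        "clamav_version": fields[0],      # newest ClamAV release string
        "main_version": int(fields[1]),   # main.cvd version (assumed)
        "daily_version": int(fields[2]),  # daily.cvd version (assumed)
    }
```

A cached DNS lookup like this is a few dozen bytes and is absorbed by
resolver caches along the way, which is presumably why it was argued to
scale better than per-client HTTP requests against the mirrors.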
