Full Disclosure Mailing List Archive
On Fri, 24 Dec 2004 16:00:45 -0600 (CST), Ron DuFresne wrote:
> It might depend upon how the algorithm is implemented -- say, search for
> easy-to-find vulnerable systems with the standard port open until perhaps 10 or 100
> or some given number are found and infected, then go back through the
> non-vulnerable hosts and search those specifically for non-standard ports.
> That ensures spread of the worm and a quick infection rate, and then allows it to
> retarget 'hidden' systems. Seems to me this would merely be a change to
> infection code similar to those worms that had coded into them a
> date and time to attack a site.
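The two-phase strategy described above could be sketched roughly as follows. This is purely an illustration of the scanning logic, not anyone's actual worm code; the quota, fallback ports, and host list are made-up values:

```python
import socket

STANDARD_PORT = 22            # the well-known port for the target service
FALLBACK_PORTS = [2222, 10022]  # hypothetical "hidden" port choices
PHASE_ONE_QUOTA = 10          # switch phases after this many hits

def port_open(host, port, timeout=0.5):
    """Full TCP connect; True if something is listening there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def two_phase_scan(hosts):
    hits, skipped = [], []
    # Phase 1: probe only the standard port until the quota is met.
    for host in hosts:
        if len(hits) >= PHASE_ONE_QUOTA:
            break
        if port_open(host, STANDARD_PORT):
            hits.append(host)
        else:
            skipped.append(host)
    # Phase 2: revisit the non-responders, probing non-standard ports.
    for host in skipped:
        for port in FALLBACK_PORTS:
            if port_open(host, port):
                hits.append(host)
                break
    return hits
```

Note that both phases still pay the cost of a full TCP connect per probe, which bears on the "noisy and slow" point made below.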
One consideration -- sysadmins who are bright enough to configure
services onto non-standard ports are likely also bright enough to
patch their systems and install IDS and HIPS, so such hosts are in
general less likely to be exploitable than default configurations.
I'm not sure that a routine to find hidden, vulnerable services would
add much value to an automated "flash worm". This approach makes
sense for a human attacker trying to penetrate a specific site or
class of target, but for a "flash worm", wouldn't it make more sense
to put the work towards finding more easy targets?
What does it benefit a worm or the worm's author to compromise 99% of
vulnerable systems rather than a mere 85%?
Additionally, port scanning raises the profile of the source, both on
the network and at the target. Whether just blasting out the exploit
code or doing banner scanning, the worm will need to do a full TCP
session to each potential target IP:Port. This is not only slow, but
also very "noisy", causing unusual events to be logged by listening
daemons on the target system.
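A minimal banner grab makes the point concrete: every probe is a complete TCP handshake that the listening daemon sees and may log. This is an illustrative sketch; the 256-byte read and timeouts are arbitrary choices:

```python
import socket

def grab_banner(host, port, timeout=2.0):
    """Complete the TCP handshake and read whatever greeting the
    daemon sends first (e.g. an SSH version string). The connection
    itself is fully visible to the target, loggable, and slow --
    one round trip per handshake plus the wait for the banner."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(256).decode(errors="replace").strip()
        except socket.timeout:
            return ""  # service listens but announces nothing
```

Multiply that handshake-plus-read latency across thousands of IP:port pairs and the scan is both slow and conspicuous.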
The only time I've ever been reprimanded for running (authorized) nmap
scans against non-hardened Solaris systems was when I used the '-sV'
option and freaked out a (non-security-conscious) sysadmin with the
large volume of timeout and protocol errors logged by rpcbind and
other default TCP listeners.
> Seriously, why do folks think sshd should be open for the world to pound
> upon, no matter which port it's assigned to run on?
When you cannot know the source IP in advance, *something* must serve
as the gatekeeper for access to network services.
> It provides an encrypted channel into the network. Any channel in,
> especially an encrypted channel, should be guarded and should allow only
> those that require access to get access.
Many systems have a business need to allow customers to connect in
from arbitrary source addresses -- vendor support for maintenance,
customers uploading content, etc. There is an unavoidable
requirement to have *some* channel into the system. It's tough
enough for web hosting providers to push customers to migrate off of
cleartext password protocols like telnet and FTP; now we need to
convince the customer to use public keys and strong authentication?
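For what it's worth, the server-side half of that migration is only a few directives (standard OpenSSH sshd_config options; the customer-education half is the hard part):

```
# /etc/ssh/sshd_config -- require key-based auth, refuse passwords
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
```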
IPSEC might make sense for employee inbound sessions, but for
"customers" of web hosting and the like, ssh itself is already the
primary gatekeeper -- there isn't any other (easy) check to implement
before letting an unknown source talk to the ssh listener.
Full-Disclosure - We believe in it.