Abstract
Information Access Value Vs.
Risk
The Internet has become the information superhighway. The evolving Internet and its related
technologies have allowed businesses to communicate in new and strategic
ways with various types of people and organizations. Over the years, feature upon feature has been added to Internet connections. As needs have changed, users have required more robust services, faster connections, and more flexibility in what can be
done. In the beginning, services like simple POP3-style email and Web
access were the extent of an Internet connection. Today we have
site-to-site Virtual Private Networks (VPNs), client-side and home-user
VPNs, streaming media, Web-based training, exciting Internet applications,
e-commerce, and business-to-business extranets. Thus the Internet evolves
towards fulfilling various advanced needs of human society.
On
the home front, the ability to connect to the Internet from a home
computer has enriched many households by providing new sources of
information and entertainment. Online news, weather, stock reports, film guides, and reference works replace newspapers, periodicals, radio and television for many users. What few home users realize is that access
to the Internet goes both ways: An unprotected computer or network of
computers sharing an Internet connection can easily be infiltrated unless
appropriate security is in place.
On the enterprise side, as corporate users increasingly reach for information beyond the enterprise intranet, the need for ubiquitous, but secure, access for roaming users across multiple channels becomes increasingly prevalent. What was once a digital evolution within
the confines of an enterprise or organization has transformed into
ubiquitous access over multiple channels – the enterprise network, the
Internet, mobile computing devices and WAP-enabled wireless phones.
Traditionally, securing these layered portals for trusted communications has relied not only on standard security implementations such as VPNs, IDSs and firewalls, but also on more advanced security tools such as PKI and Authorization or Privileged Management Infrastructure (PMI). More recently, organizations have additionally looked to an Authentication Management Infrastructure (AMI) model to provide conclusive user
authentication. Thus a multifaceted security approach, a combination of
security solutions, helps organizations to secure their corporate networks
while not impeding resource access.
Internet Security
Many
organizations have grown their Internet set of features across multiple
devices or possibly multiple connections – a firewall for Web and mail
traffic, a VPN appliance for remote connections, a different firewall for
a business-to-business (B2B) relationship that may exist, or other
possible combinations of lines and devices that can push Internet
vulnerabilities beyond control. These services can even be distributed
across multiple Internet service providers. Regardless of the number of
devices that are on the Internet, each has different services that can be
potentially exploited.
Ever
since the first data exchange took place over the Internet involving a
non-private part of the infrastructure, security has been one of the most
critical design considerations of any Internet application. Hackers
increasingly probe connected computers for weaknesses in their security
and can gain access, wreaking havoc or stealing confidential information
without the user even realizing it. Trojan horse programs and Spyware come
from seemingly innocent sources, such as email attachments and file
downloads. Once the user launches them, they can secretly send data, including any sensitive personal or financial information stored on the PC, back out over the Internet.
As
soon as a computer system is hooked into the Internet, it is exposed to
risks of malicious, or even just curious, visitors accessing the system
and sniffing for information that was not intended to be shared with
anyone. That is, any connection to the Internet is vulnerable to
exploitation. The most basic vulnerability that all connections face is that they could be made unavailable, bringing down mission-critical services with them. The
worst-case scenario could be a complete system failure, not involving just
the host that was serving as the gateway to the Internet, but all other
computers to which any path existed from that gateway machine.
Although
information security has always had an important role as technology has
advanced, it has become one of the hottest topics in the recent past. The Internet’s open design and explosive growth in usage, along with the rapid adoption of internetworking systems, became the prime factors behind the tremendous explosion in demand for security services.
As
the number of potential targets grows, the sophistication of security
threats is increasing.
Traditional security products such as virus scanners and firewalls
do not provide adequate protection against unknown threats and the
thousands of mutations and variations of Spyware and viruses available to
hackers on the Internet. With the Internet being used in so many ways, the
security control of new applications and technologies requires an entirely
new paradigm. Security, in this environment of constantly evolving
threats, can only come from having complete control of the Internet
connection including the ability to specify which applications, known and
unknown, can be trusted to use the Internet.
Software
infrastructure vendors, application developers, device manufacturers,
network operators and various research organizations and labs are working
hard towards addressing the security needs of data and services being
provided by connected computing systems.
Today
we have some intelligent defenses against attacks, such as denial of
service (DOS) attacks, as routers and other devices can be set to verify
source addresses and ignore packets if they are bogus or carry a
suspicious pattern. However, beyond DOS attacks there remain the risks of open ports, easy passwords, unsecured routers, and unknown features that any Internet device may have.
The International Data Corporation (IDC) forecasts that the authentication and authorization industry, spanning two core security components, is poised to grow 28 percent annually to reach more than $7 billion by 2004. This
steady growth has heightened security awareness among organizations
struggling to mitigate risk while providing anytime, anyplace access to
employees, customers, and partners. Security awareness is at an all time
high as companies become increasingly Web-centric while breaches in
security become mainstream news topics. The Computer Security Institute’s
annual security survey revealed that 90 percent of the respondents in
large corporations and government agencies detected security breaches
within the last 12 months.
A
Primer on Information Security
Security
is neither a software application that can be bought off the shelf and
deployed to make a network secure nor a piece of hardware that can guard a
network against attacks. A well-secured system always ensures the following five basic tenets of security:
Authentication –
To address the need to provide trusted access to critical applications,
enterprises require solutions that provide authentication and
authorization capabilities. Authentication is the process of validating
the true identity of another party. Secure systems should incorporate some
form of authentication in order to validate the user who is requesting
interaction with the system.
Organizations
need to be able to conclusively verify the identity of individuals and
entities before providing the authority and access privileges that allow
them to access confidential information or conduct transactions
electronically. If users are not properly identified, and if that
identification is not verified through authentication, an organization has
no assurance that access to resources and services is properly controlled.
A
robust Authentication platform should have the following capabilities.
The
simplest form of authentication is a straightforward user name and a
password. The basic assumption is that only the user knows his or her
password and is trusted with its security. This mode of authentication
works quite well when the other party is not a machine but a human. Often
the username and password get transferred in plain text format over the
public network using the Password Authentication Protocol (PAP). In order
to prevent someone from sniffing the network packets and getting
unauthorized access to the password and later on spoofing the identity of
another user, there are some authentication protocols that use
cryptography technology to encrypt the user name and password information
during their transmission. The most commonly used protocol for this is the
Challenge-Handshake Authentication Protocol (CHAP).
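The difference between sending the password itself and proving knowledge of it can be sketched as follows. This is a simplified toy in the spirit of CHAP, not the actual wire format (real CHAP also mixes in a session identifier):

```python
import hashlib
import os

# Toy challenge/response exchange: the server sends a random challenge, and
# the client proves knowledge of the shared password without transmitting it.
def chap_response(password: str, challenge: bytes) -> str:
    return hashlib.md5(password.encode() + challenge).hexdigest()

challenge = os.urandom(16)                       # server -> client
response = chap_response("s3cret", challenge)    # client -> server

# The server recomputes the expected response from its stored secret.
assert response == chap_response("s3cret", challenge)
assert response != chap_response("wrong-guess", challenge)
```

A sniffer on the wire sees only the random challenge and the hash, neither of which reveals the password or can be replayed against a fresh challenge.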
E-business
systems need to authenticate users for a variety of reasons and at a
variety of levels. While high-security and regulated environments such as
financial services, healthcare and government will adopt advanced forms of
authentication such as tokens and biometrics, the simplicity and
familiarity of user name/password will extend its usage despite the hype
of PKI and biometrics. The decision to move to stronger authentication
techniques needs to be driven by the business impact of fraudulent
identification.
Strong
authentication generally requires at least two of the four types of
authentication to be used in combination. Examples are smart cards plus
PIN or biometric, digital certificate plus passwords, physical location,
that is, access to a server console plus a password, or biometric plus a
PIN.
The
cost of moving from passwords to strong authentication can be significant.
Smart cards and biometrics require reader devices, digital certificates
require PKI and location-based authentication requires the use of GPS or
other location technologies.
Because
a biometric measures a unique physical or behavioral biological
characteristic, an accurate biometric is the strongest possible way of
identifying a user. However, like a clear-text reusable password, physical
biometrics are vulnerable to capture and replay-type attacks via a network
unless augmented with additional security. Biometric techniques can be
broken down into two categories:
Physical: Scans retina, fingerprint, hand
geometry or face
Behavioral:
Analyzes voice or handwriting
Digital
Signatures
Digital signatures are based on a combination of the traditional idea of data hashing with public-key encryption. Most hash functions are similar to encryption functions; in fact, some hash functions are just slightly modified encryption functions. Most operate by grabbing a block of data at a time and repeatedly applying a simple scrambling algorithm to modify the bits. If this scrambling is done repeatedly, then there is no known practical way to predict the outcome. It is almost impossible for someone to modify the original data in any way while ensuring that the same output will emerge from the hash function. These hash-based signature algorithms use a cryptographically secure hash function such as Message Digest 5 (MD5) or the Secure Hash Algorithm (SHA) to produce a hash value from a given piece of data.
The first step is to take the original message and compute a “digest” of the outgoing message using a hashing algorithm. The result is a message digest, which is typically depicted as a long string of hexadecimal digits and manipulated by software as binary data. This digest is then encrypted with the sender’s private key.
The
original message content, together with the encrypted digest, forms a
digitally signed message, which is suitable for delivery to the recipient.
On receipt, the receiver verifies the digital signature using an inverse
set of steps: first the encrypted digest is decrypted using the sender’s
public key. Next, this result is compared to an independent computation of
the message digest value using the hashing algorithm. If the two values
are the same, the message has been successfully verified.
A digital signature provides compelling evidence that only the intended signer could have created the message. Had interlopers changed the original message in transit, the decrypted original message digest would not match the digest recomputed over the changed data, and verification of the digital signature would fail. Similarly, the creation of a bogus signature is impractical because an interloper does not have the appropriate private key.
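The sign-and-verify round trip described above can be sketched with textbook RSA and deliberately tiny demo primes (p=61, q=53); real systems use keys thousands of bits long and padded signature schemes:

```python
import hashlib

# Textbook-RSA signature sketch with tiny demo primes (p=61, q=53).
# Keys this small are trivially breakable; real RSA keys are 2048+ bits.
n, e, d = 3233, 17, 2753   # modulus, public exponent, private exponent

def digest(msg: bytes) -> int:
    # Reduce the SHA-256 digest mod n so it fits the toy key size.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    # "Encrypt" the digest with the private key.
    return pow(digest(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    # Decrypt with the public key and compare to a freshly computed digest.
    return pow(sig, e, n) == digest(msg)

sig = sign(b"pay Bob $10")
assert verify(b"pay Bob $10", sig)
print(verify(b"pay Bob $1000", sig))  # a tampered message almost surely fails
```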
Digital
Certificates
Digital
certificates consist of data that is used for authentication and securing
communications especially on unsecured networks such as the Internet.
Certificates associate a public key to a user or other entity (a Computer
or service) that has the corresponding private key. Certificates are digital IDs issued by certification authorities (CAs), which are trusted entities that vouch for the identity of the user or computer. The CA digitally signs the certificates it issues, using its private key. The certificates are valid only for a specified time period.
Certificate
authorities are trusted third parties, like notaries. Certificates are
files packaged in many different ways containing identifying information
on an individual, such as name, organization, phone number, e-mail address
and most importantly, the individual’s public key and the digital
signature of the issuing certificate authority. Certificate authorities
demand notarized documentation of a person’s identity before issuing a
certificate that guarantees identity.
It
is also critical to authenticate the machine or device through which the
user is interacting with the system. For this purpose, the digital
certificate technique is used. A digital certificate certifies the bearer of the certificate and is issued by an accredited certification agency. On a traditional identity card, the identifier is a photograph along with a signature that should match the holder’s signature. In the case of digital certificates, the identifier is a public key of the subject, which can be used to identify the party and subsequently to encrypt the information sent to the user. The digital certificate is credible as it is issued by a recognized organization known as a Certification Authority (CA) after verifying the validity of the entity applying for the certificate.
Authentication is the most critical component of any security architecture, as almost all other parts of the security mechanism rely on the basic premise that the two parties involved in digital communication have been fully authenticated.
Authentication
Mechanisms.
Securing
a network consists of several crucial steps. The first and foremost one is
devising a successful authentication strategy. One has to be sure the
users trying to access resources actually are who they say they are. There
are many ways to execute this authentication. In addition to basic
structural elements such as credential input devices, demands on user data
input, and layering of data validation, network authentication employs
numerous methods and protocols to issue certificates and pass data packets
back and forth. Here are some common authentication methods and protocols:
Kerberos
Protocol –
This was developed to provide secure authentication for Unix networks.
Microsoft also started to support it with the release of Windows 2000.
Kerberos uses temporary certificates called tickets, which contain
credentials that identify the user to the servers on the network. In the
latest version, the data contained in the tickets including the user’s
password is encrypted. A Key Distribution Center (KDC) is a service that runs on a network server. The KDC issues a ticket called a Ticket Granting Ticket (TGT) to clients that authenticate to it. The client uses this TGT to access the Ticket Granting Service (TGS), which in turn issues a service or session ticket that is used to access a network service or resource.
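The ticket flow above can be sketched as follows. This is a toy illustration only: real Kerberos encrypts tickets under shared secret keys, whereas this sketch merely seals each ticket with an HMAC under the issuer's secret so the next hop can verify it; all key and service names are invented:

```python
import hashlib
import hmac
import json

# Toy model of the Kerberos ticket flow (not the real protocol).
KDC_KEY = b"kdc-secret"        # secret shared by the KDC and the TGS
SERVICE_KEY = b"files-secret"  # secret shared by the TGS and the file service

def seal(payload: dict, key: bytes) -> tuple:
    # Serialize the ticket body and seal it under the issuer's secret.
    body = json.dumps(payload, sort_keys=True)
    tag = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return body, tag

def valid(ticket: tuple, key: bytes) -> bool:
    body, tag = ticket
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

tgt = seal({"user": "alice"}, KDC_KEY)            # 1. KDC issues a TGT
assert valid(tgt, KDC_KEY)                        # 2. TGS verifies the TGT ...
ticket = seal({"user": "alice", "service": "files"}, SERVICE_KEY)
assert valid(ticket, SERVICE_KEY)                 # 3. ... and the service admits
```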
Secure
Socket Layer (SSL) –
The SSL protocol is used to provide secure access to Web sites via a
combination of public key technology and secret key technology. Secret key encryption, also referred to as symmetric encryption, is faster, but asymmetric public key encryption provides better authentication, so SSL, being a hybrid, has been designed to benefit from the advantages of both. SSL is supported in almost all current Web browsers and Web servers. SSL operates above the transport layer of the OSI reference model, which means applications must be written to use it.
SSL
authentication is based on digital certificates that allow Web servers and user agents to verify each other’s identities before they establish a conversation. Thus there are two types of certificates: one for the client and one for the server.
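Python's standard ssl module illustrates the server-certificate side of this: a default client context refuses to establish a conversation unless the server presents a valid certificate for the expected hostname.

```python
import ssl

# A default client-side context in Python's ssl module requires the server to
# present a valid certificate with a matching hostname before any conversation
# is established -- the server-authentication half of SSL/TLS.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate validation is on
print(ctx.check_hostname)                    # hostname must match the cert

# For mutual authentication the client would additionally present its own
# certificate, e.g. ctx.load_cert_chain("client.pem") (file name hypothetical).
```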
Microsoft
NT LAN Manager (NTLM) –
It is used by Windows NT servers to authenticate clients to an NT domain.
Windows 2000 uses Kerberos authentication by default but retains support
for NTLM authentication. Unix machines connecting to Microsoft networks
via an SMB client also use NTLM to authenticate.
NTLM uses a method called challenge/response, which reuses the credentials that the user provided at logon each time the user tries to access a resource. That is, the user’s credentials never get transferred across the network, which enhances security. For this mechanism to work, both the client and server must reside in the same domain, or there must be a trust relationship established between their domains.
Password
Authentication Protocol (PAP) –
PAP is used for authenticating a user over a remote access connection. An important characteristic of PAP is that it sends user passwords across the network to the authenticating server in plain text. Its advantage is that it is compatible with many server types running on different operating systems.
Shiva PAP (SPAP) – It is an improvement over PAP in terms of security, as it uses the encryption method employed by Shiva remote access servers. The client
sends the user name along with the encrypted password, and the remote
server decrypts the password. If the user name and password match the
information in the server’s database, the remote server sends an
Acknowledgement message and allows the connection. If not, a Negative
Acknowledgement (NAK) is sent, and the connection is
refused.
Challenge
Handshake Authentication Protocol (CHAP) - This protocol is used for remote
access security. It uses MD5, a one-way hashing algorithm. CHAP performs a hash operation on the password and transmits the hash result instead of the password itself over the network, and hence the secrecy of the password is maintained.
The
hash algorithm employed in this protocol ensures that the operation cannot
be reverse engineered to obtain the original password from the hash
results. However, CHAP is vulnerable to remote server
impersonation.
MS-CHAP
v2, the Microsoft version of CHAP, uses two-way authentication so that the
identity of the server as well as the client is verified. This protects
against server impersonation. MS-CHAP also increases security by using
separate cryptographic keys for transmitted and received
data.
Extensible
Authentication Protocol (EAP) –
It is a means of authenticating a Point-to-Point Protocol (PPP) connection that
allows the communicating computers to negotiate a specific authentication
scheme called an EAP type. A vital characteristic of EAP is its
extensibility. Plug-in modules can be added at both client and server
sides to support new EAP types. EAP can be used with TLS to provide mutual
authentication via the exchange of user and machine
certificates.
Remote
Authentication Dial-In User Service (RADIUS)
RADIUS
often is used by Internet Service Providers (ISPs) to authenticate and
authorize dial-up or VPN users. A RADIUS server receives user credentials
and connection information from dial-up clients and authenticates them to
the network. RADIUS can also perform accounting services, and EAP messages
can be passed to a RADIUS server for authentication. EAP needs only to be
installed on the RADIUS server and it is not required on the client
machine. Windows 2000 Server includes a RADIUS server service called
Internet Authentication Services (IAS), which implements the RADIUS
standards and allows the use of PAP, CHAP or MS-CHAP as well as EAP.
A
break-in to an enterprise’s network or Web site can have various
levels of impact on the enterprise/e-business and its
clients.
The
Intended Use of Authentication
Authentication is required to achieve some business objective. Below we look at the next critical phase in securing information.
Authorization –
Authentication is only one step, albeit a critical one. Another critical requirement is the need to control users’ access to sensitive resources once they have been strongly authenticated. Thus Authorization is the
process of establishing the rights and privileges of a party during its
interaction with the system. The most common way to establish
authorization is by means of Access Control Lists (ACLs). Most Web and
application servers implement authorization schemes by use of ACLs, which
are essentially text-based property files that follow proprietary formats.
Newer breeds of applications are using directory services and industry
standard protocols like Lightweight Directory Access Protocol (LDAP) to
establish the rights and privileges of the users or entities that interact
with those applications.
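The ACL mechanism described above can be sketched in a few lines; the resource, user, and permission names are invented for illustration:

```python
# Minimal sketch of ACL-based authorization: once a user is authenticated,
# every resource access is checked against an access control list.
ACL = {
    "/reports/finance": {"alice": {"read", "write"}, "bob": {"read"}},
    "/admin":           {"alice": {"read"}},
}

def authorized(user: str, resource: str, action: str) -> bool:
    # Default-deny: unknown resources or users yield an empty permission set.
    return action in ACL.get(resource, {}).get(user, set())

assert authorized("alice", "/reports/finance", "write")
assert not authorized("bob", "/reports/finance", "write")
assert not authorized("carol", "/admin", "read")
```

Production systems store these mappings in a directory service queried over LDAP rather than in text-based property files, but the check at access time has the same default-deny shape.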
An authentication platform enables organizations to deploy personal authentication at the network’s edge and know for certain who is accessing sensitive information, applications and transactions. It enables an enterprise to deploy and manage multiple advanced authentication methods – biometric (fingerprint, voice, face, iris and signature recognition) and non-biometric (token and smart card) – to protect access to any application or resource.
Firewalls
and Intrusion Detection Systems are the appropriate tools for providing
the much-wanted authorization for users. They are discussed separately
below.
Confidentiality
–
It is the process of ensuring that any sensitive data being transmitted
between the communicating parties can be read only by those parties. Often
it is not just enough to authenticate the other party and permit
interaction with the system based on authorization. The data that gets
sent back and forth between two communicating parties can be sensitive as
well. Today most Internet communications happens over public networks.
Passwords sent during an authentication session, credit card information
sent during an e-commerce transaction, account balances sent during a
retail banking transaction, medical records of a patient sent during an
inter-hospital communication and a CEO’s email message sent to the board
of directors are all examples of data that should be readable only by the
party it is intended for and none other. Thus it becomes mandatory to
maintain the confidentiality of sensitive information being transmitted
among parties. The combating strategy is called encryption. It has many
different variations based on the algorithms used for encrypting the
information, as well as the protocols used between the communicating
parties and the types of keys used to encrypt the
data.
Cryptography
–
It is the ancient art and science of encryption or keeping messages
secret. Encryption is the process of transforming information before
communicating it to make it unintelligible to all but the intended
recipient. All cryptography operates according to the same basic
principle. Mathematical formulas called cryptographic algorithms, or ciphers, and numbers called keys are used to scramble, or encipher, information to make it difficult to comprehend without having the appropriate key to unscramble, or decipher, it. There are two kinds of encryption – single (symmetric) key and public (asymmetric) key.
Public
key encryption –
This type enables secure communication between parties without the need to
exchange a secret key. It is
the basis for privacy, authentication, data integrity, and
nonrepudiation. Public key
cryptography uses a complex mathematical formula to generate two separate
but related keys, one open to public view and the other private, known
only to one individual. When a message is encoded with a public key, only the holder of the private key can decode it, assuring privacy for the sender. Conversely, a message encrypted with a private key can be decoded by anyone with the corresponding public key.
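The asymmetry can be demonstrated with textbook RSA and tiny demo primes, which are never usable in practice but make the arithmetic visible:

```python
# Textbook-RSA round trip with tiny demo primes (p=61, q=53, so n=3233):
# anyone may encrypt with the public pair (n, e), but only the private
# exponent d recovers the message. Keys this small are for illustration only.
n, e, d = 3233, 17, 2753

message = 42                             # here a message is an integer < n
ciphertext = pow(message, e, n)          # encode with the public key
recovered = pow(ciphertext, d, n)        # decode with the private key
assert recovered == message
```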
In 1977, Ron Rivest, Adi Shamir, and Leonard Adleman introduced RSA, a public key standard with a name based on the first letters of their surnames. RSA requires significant computing power to generate the public and private keys, and in 1991 Phil Zimmermann released a public key encryption system designed for use on personal computers, called Pretty Good Privacy (PGP).
Modern
Cryptography Systems – A
combination of both single key and public key is used in modern
cryptographic systems. The
reason for this is that public-key encryption schemes are computationally
intensive versus their symmetric key counterparts. Because symmetric key
cryptography is much faster for encrypting bulk data, modern cryptography
systems typically use public-key cryptography to solve the key
distribution problem first, then symmetric key cryptography is used to
encrypt the bulk data.
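A toy sketch of this hybrid pattern follows, with an XOR stream standing in for a real symmetric cipher such as AES, and tiny textbook-RSA demo keys protecting only the short session key:

```python
import os
from itertools import cycle

# Hybrid sketch: a random symmetric session key encrypts the bulk data,
# while public-key cryptography protects only the short key itself.
n, e, d = 3233, 17, 2753   # toy RSA key pair; never this small in practice

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Symmetric toy cipher: applying the same key twice restores the plaintext.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

session_key = os.urandom(2)                    # kept short only for the toy RSA
wrapped = [pow(b, e, n) for b in session_key]  # wrap the key with the public key
ciphertext = xor_cipher(b"bulk payload", session_key)

# Receiver: unwrap the session key with the private key, then decrypt the bulk.
recovered = bytes(pow(c, d, n) for c in wrapped)
assert recovered == session_key
assert xor_cipher(ciphertext, recovered) == b"bulk payload"
```

The expensive public-key operation touches only two bytes here; the fast symmetric cipher handles the payload, which is exactly the division of labor the text describes.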
Data
Integrity –
Though confidentiality ensures that an intervening party can’t read secure
data, even if the data is intercepted, it does nothing to prevent a
malicious intruder from tampering with encrypted data while it is in
transit. Thus message integrity plays as vital a part in secure communication as the other tenets. The most common technique for ensuring the
integrity of a digital message is applying a hashing algorithm on its
content and to communicate that hash value or message digest to the
receiver of the message. The receiver can apply the same hashing algorithm
on the contents of the received message and match the message digest with
the one that was transmitted along with the message. The various
algorithms are Secure Hashing Algorithm-1, Message Digest 2, and Message
Digest 5. As for the transmission of the message digest itself, most common implementations employ the asymmetric encryption method.
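The digest-compare procedure can be sketched with Python's standard hmac module. A keyed digest is used here as one way to protect the digest in transit, alongside the asymmetric method mentioned above; the shared key is assumed to be pre-arranged:

```python
import hashlib
import hmac

# Integrity sketch: the sender transmits the message with its digest and the
# receiver recomputes and compares. A keyed digest (HMAC) is used so an
# interloper cannot simply recompute a fresh digest after tampering.
key = b"shared-secret"                   # assumed pre-shared between parties
msg = b"balance: 1,500.00"

tag = hmac.new(key, msg, hashlib.sha256).hexdigest()        # sender side

def intact(message: bytes, received_tag: str) -> bool:      # receiver side
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(received_tag, expected)

assert intact(msg, tag)
assert not intact(b"balance: 9,500.00", tag)  # tampered message is rejected
```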
Nonrepudiation
- It should not be possible
for a sender to reasonably claim that he or she did not send a secured
communication or did not make an online purchase.
Below we look at the threats and vulnerabilities lurking around communication systems and the information security technologies that we can use to challenge them head on.
Here are some of the most commonly encountered types of intrusions and attacks. They can be classified as follows:
Internal
Attack –
an attack originating from inside the local network. It is easier for
legitimate network users to steal, modify, or destroy data or plant
malicious code on the network because they need not worry about getting
inside. While it is important to protect the network perimeter where our
LAN connects to the Internet, it is equally important to consider internal threats and to put appropriate controls in place to address them.
External
Attack –
an attack originating from outside of the local network. The unknown enemy
who attacks from across town or across the globe presents a more
frightening image. Now that
most company networks are connected to the Internet and more of these
connections become full-time dedicated ones with static IP addresses, the
threat from the Internet has become very real.
Protecting
our Internet-connected network from external intrusions and attacks
requires a good, multilevel, well-thought-out security plan. Our first
line of defense should be a firewall of some sort at the outer perimeter of
the network. We can also create a DMZ or perimeter network, which is a
sort of buffer zone between the external network and our
network.
The
external attack can be further subdivided into two types as intrusion and
non-intrusion attacks. The differentiation is based on whether the data on our network is the target of the attack, or the aim is simply to bring down our network and prevent legitimate users from gaining access.
Another
way to categorize attacks is by the vulnerabilities that different attack types exploit. For example, an attack that exploits the
bugs in a user application is a risk only to those who use that
application. An attack that exploits security holes in an operating system
is likely to put a larger group at risk because most computers run one of
only a few common operating systems. The most universally dangerous is the
attack that uses the characteristics of a networking protocol,
particularly TCP/IP, the protocol run by every computer on the Internet.
Many common attacks are based on creative exploitation of some weaknesses
or characteristics of a member of the TCP/IP protocol
suite.
Modern versions of the
Microsoft Office applications allow us to create macros or use Visual
Basic for Applications to automate functions. This helps hackers to insert
malicious code into Office documents, which can then be sent to a
destination on our network as email attachments.
Also
Microsoft Outlook and other sophisticated email clients as well as
Microsoft’s Internet Information Server (IIS) are vulnerable to this type
of activity. Because these email clients allow us to receive
HTML-formatted email, they are also vulnerable to exploits that embed
malicious Java applets or VBScript into an HTML document. These
applications then run on the destination computer and can introduce a
virus, collect data and send it back to the originator, delete data from
our hard disk, or perform other unwanted actions.
Hackers
can exploit bugs in an operating system to gain access to our system as
well. The Windows 9x operating systems are inherently insecure. OS
components can be subjected to buffer overflows, in which the number of
bytes or characters exceeds the maximum number allowed by the programmer
writing the software. This can cause the system to crash. Often, OS vulnerabilities are largely due to bad configuration, and by changing the default configuration settings we can prevent many of them. For
most of the true security bugs, the OS vendors constantly release hot
fixes, patches, or service packs that fix the problem once it becomes
known. Hackers often count on the fact that many of the network
administrators are not so quick in applying the fixes.
Knowledgeable hackers can exploit commonly used protocols, such as HTTP, DNS, CGI, and FTP, to gain access to our network or damage our data. TCP/IP-related
protocols, such as TCP, UDP, and ICMP are favorite targets and are the
basis of many of the attack types.
Internet Explorer (IE) Security -
Vulnerabilities and Cures
Internet
Explorer can misinterpret some IP addresses as belonging to an intranet
site rather than an Internet site, bypassing normal security protocols that are applied to Web sites but are not enforced against internal HTML documents. This is called IP spoofing.
The
second vulnerability involves a way attackers can cause IE to contact a
Web site, send a command as soon as the connection has been established
and make it appear that this command comes from a third party. This could
allow someone to spoof a user and delete information such as e-mail from
Internet accounts.
The third vulnerability is related to a specific version of Telnet. This flaw in IE allows command-line actions, which should be blocked, to be executed.
E-mail
comprises well over half of the correspondence taking place between
external stakeholders in today’s business world. It is surprising that
only 10 to 15 percent of the emails sent over the Internet are
encrypted. Securing email
is highly recommended, as there are risks in the transmission of
inappropriate messages or the disclosure of privileged information. Part of the problem is that
standard POP3 or IMAP e-mail is carried over an open protocol (SMTP)
that can be easily spoofed or compromised.
There
are several shortcomings in standard e-mail systems.
There are some measures that can be taken to prevent theft, interception, and sabotage of our email. Businesses can protect themselves from sabotage or liability by installing gatekeeper or filtering software on their email servers to keep a raider from intercepting communications. This class of software searches messages for questionable content and alerts system administrators to review any findings. These content filters can also be used to protect an enterprise’s intellectual property. One example is the Echelon Project, developed by the National Security Agency, which has the ability to monitor millions of simultaneous contacts or message packets from anywhere in the world. The FBI uses a system called Carnivore, an intelligent message-packet sniffer that can trace email header information.
Protecting
E-mail
Businesses
can protect themselves from e-mail interception by putting the following
message/system qualifications in place.
There
is no universally accepted industry-wide standard for protecting
electronic messaging. Most of the technologies used today are based on a
set of keys (long character strings) that produce a safeguarded, valid
certificate. This certificate
and its associated keys are then used to encrypt and authenticate a
message. The following list outlines several options available today for
safeguarding messages:
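As a minimal illustration of key-based message authentication, a shared secret key can be used to compute and verify a message authentication code. This is only a sketch using Python's standard library; real messaging systems rely on X.509 certificates and standards such as S/MIME or PGP rather than a bare shared key.

```python
import hashlib
import hmac

def sign_message(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the message with a shared secret key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_message(key: bytes, message: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time to resist timing attacks."""
    return hmac.compare_digest(sign_message(key, message), tag)
```

Any alteration to the message in transit changes the recomputed tag, so the recipient detects tampering.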
We
have seen some of the vulnerabilities uncovered in today’s communication
systems. Hacking and viruses have certainly been in the limelight in
recent years because of their visibility, but many less visible threats
are out there and can be even more sinister. As the value of data sent
over networks increases, it is
becoming more and more important to implement effective information
security measures.
The
main categories of threat are:
Hacking
or Intrusion Attacks:
This category encompasses attacks whereby an intruder gains access to some
area or set of resources that are intended to be off-limits. Classic
examples are a website being vandalized or confidential data being stolen
through illegal access.
Hackers
come in several different flavors and have an assortment of motives and
objectives once they gain access to a vulnerable system. Hackers range
from the inexperienced “script kiddies” to professionals who engage in
targeted industrial espionage.
Script
kiddies
are at the bottom of the hacking food chain. They download tools and hacking
scripts created by serious systems crackers and read about new ways to
break into certain systems from hacker Web sites. Then they scan the
Internet for systems meeting a certain description and attempt to hack
them, without really understanding how the attacks work. They simply run
the programs or scripts and type in the commands listed in the hacking
documentation they downloaded.
Technicians
are
more experienced system crackers who are savvy technicians with
programming skills and a broad knowledge and understanding of how computer
networks operate. Their objectives can vary widely, from a simple learning
experience to using our system as a gateway for attacks on other
systems. These hackers can be the most dangerous because they are skilled
and unpredictable.
Industrial spies
are another type of hacker: very rare but highly dangerous, and they
specifically target our company. These are highly skilled individuals who
use cutting-edge techniques and can spend months analyzing a network
before making an attack. They are usually seeking to steal sensitive
financial data or valuable research and development data. Targets for this
kind of activity are usually banks, large e-commerce sites, multinational
corporations and any industry where intellectual property is valuable.
Large organizations usually have security professionals on the lookout
24/7 for this kind of suspicious activity.
We
can use firewalls and intrusion detection systems
complemented by strong authentication systems to decide who is allowed to
do what, and who is allowed access to what. When these are properly
implemented, a hacker’s main tools become useless. The option of fooling a
firewall into allowing privileged access is eliminated by the use of
strong authentication and the scope of access from a non-authenticated
connection is limited by filtering rules and protocol
analysis.
Denial-of-service
(DoS)
DoS attacks have become the weapon
of choice for cyber-terrorists. Any attack aimed at hampering a service
falls into this category. These attackers try to exploit known
weaknesses in software, networking practices, and operating systems to
crash a system or subsystem.
There are generally four types of DoS attacks:
Programming flaws – These flaws are failures of an application, operating system, or embedded logic chip to handle exceptional conditions, usually the result of a user sending unintended data to the vulnerable component. Attackers will often send strange, non-RFC-compliant packets to a target system to determine whether the network stack will handle the exception or whether it will result in a kernel panic and a complete system crash. For applications that rely on user input, attackers can send large data strings thousands of lines long. If the program uses a fixed-length buffer of, say, 128 bytes, the attacker could create a buffer overflow condition and crash the application.
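The fixed-length-buffer failure described above comes down to a missing length check. Python itself is memory-safe, but the defensive check a vulnerable C program omits can be sketched as follows (the buffer size and function name are illustrative):

```python
BUFFER_SIZE = 128  # the fixed-length buffer from the example above

def handle_input(data: bytes) -> bytes:
    """Reject oversized input instead of copying it blindly into a fixed buffer."""
    if len(data) > BUFFER_SIZE:
        # A vulnerable program skips this check and overruns the buffer
        raise ValueError("input exceeds buffer size; rejecting to avoid overflow")
    return data
```

In C, the same discipline means bounding every copy (e.g., preferring length-checked functions over unbounded string copies).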
Routing and DNS attacks – A routing-based DoS attack involves attackers manipulating routing table entries to deny service to legitimate systems or networks. Most routing protocols such as Routing Information Protocol (RIP) v1 and Border Gateway Protocol (BGP) v4 have no or very weak authentication. This creates a vulnerability cyber-terrorists can exploit by altering legitimate routes.
The
most recent attacks were called distributed denial of service (DDoS)
attacks because they employed a strategy using unprotected network
computers in the attack. In a DDoS attack, a hacker first uses simple
software packages, usually downloaded from the Internet, to identify
network or node computers that are not secure. These computers are usually
university or corporate computers where security is minimal. Once located,
the hacker secretly installs software that will conduct the attack from
these network computers.
Since
these node computers are typically connected directly to the Internet with
a T1 or T3 line, they are capable of transmitting thousands of messages
per second. When the hacker has prepared 50 to 100 node computers, the
hacker initiates the attack. Each individual node computer starts sending
thousands of page requests to a Web site, quickly building up hundreds of
thousands or even millions of requests. In addition, each request includes
a false return address, which makes the targeted Web server use more time
in trying to answer the request. Under these conditions, a server simply
cannot handle the traffic. Also, the node computers being used for the
attack are often heavily affected.
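One common server-side mitigation for the flood described above is per-source rate limiting: tracking how many requests each source sends within a sliding window and flagging sources that exceed a threshold. The limit and window values below are illustrative assumptions, and a real DDoS defense must also handle spoofed source addresses upstream:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Flag sources sending more than `limit` requests per `window` seconds."""

    def __init__(self, limit: int = 100, window: float = 1.0):
        self.limit = limit
        self.window = window
        self.requests = defaultdict(deque)  # source address -> recent timestamps

    def allow(self, source: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        q = self.requests[source]
        while q and now - q[0] > self.window:
            q.popleft()          # drop timestamps outside the sliding window
        q.append(now)
        return len(q) <= self.limit
```

A front-end proxy would consult `allow()` per request and drop or throttle traffic from sources that return False.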
Eavesdropping: An attacker listens to or watches
information in transit. Wire-tapping is a classic eavesdropping
attack.
Using
encryption can thwart this. As long as keying material is shared only
among those who should have access to the data, and as long as the system
is implemented using good security principles, such systems have an
excellent track record.
Active
attacks: Like eavesdropping, these involve
information being viewed in transit. But instead of passively observing
or collecting information, data is intercepted, altered, and retransmitted;
that is, the line of communication is broken and routed via the
attacker.
Such
attacks are completely defeated if an authentication phase is used and
cryptographically tied to the communication session. A classic example is
to use public key certificates to authenticate a Diffie-Hell-man key
exchange; the derived keys are then used to encrypt the remainder of the
session. An active attacker
can not play “person in the middle”, since the attacker can not
authenticate his or her self to the communicating partners. A common error is to not bind the
communication session to the authentication stage. This can lead to active
hijacking attacks in which a session is commandeered mid-way o at least
after the authentication phase.
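The Diffie-Hellman exchange mentioned above can be sketched in a few lines. The prime and generator here are deliberately tiny toy values for illustration; real deployments use standardized primes of 2048 bits or more and, as the text notes, must authenticate the exchanged public values with certificates:

```python
import secrets

# Public parameters (toy values for illustration only; not secure)
p, g = 23, 5

def dh_keypair():
    """Return (private, public) where public = g^private mod p."""
    priv = secrets.randbelow(p - 2) + 1
    return priv, pow(g, priv, p)

def dh_shared(priv: int, other_pub: int) -> int:
    """Derive the shared secret from our private key and the peer's public key."""
    return pow(other_pub, priv, p)
```

Both sides compute the same secret without ever transmitting it, which is why the subsequent session can be encrypted with keys derived from it.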
Spoofing
attacks: In such attacks, a person
or machine impersonates another to gain access to a
resource. Authorization
based on strong authentication will prevent people from spoofing a user
with the desired privileges.
Replay
attacks: These attacks are based on
re-sending packets, or streams of packets, that have already been accepted
by a recipient. For example, if a message is sent to a bank instructing it
to add $100 into a bank account, it might be in the account holder’s
interest to capture and replay that message several
times.
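A common defense against the replayed bank message above is to include a unique nonce in each message and have the recipient reject any nonce it has seen before. This is only a sketch; real protocols also bound nonces with timestamps or sequence numbers so the seen-set does not grow forever:

```python
class ReplayGuard:
    """Accept each message nonce at most once."""

    def __init__(self):
        self.seen = set()

    def accept(self, nonce: str) -> bool:
        if nonce in self.seen:
            return False         # replayed message; reject it
        self.seen.add(nonce)
        return True
```

With this in place, capturing and re-sending the $100 deposit message accomplishes nothing, because its nonce is already recorded.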
Packet
alteration: Instead of spoofing an identity,
an attacker may choose to massage a valid connection to suit his or her
needs. For example, if an
email is sent stating that “Mr. X is no longer allowed access to building
Y”, then altering “building Y” to “building Z” might allow Mr. X illegal
access to building Y.
Network
packets may have their own associated integrity checks which effectively
guard against these types of attacks. For example, the Internet Protocol
Security (IPSec) standard from the Internet Engineering Task Force has a
mode which computes a digest on each packet. The recipient discards any
packet for which the check fails.
Viruses
and Worms:
These work by disguising malicious code and duping an unsuspecting user
into executing it. Once running, viruses can take over a machine, spread,
destroy, or change information, attempt to propagate or even lay dormant
for a certain period.
Breaches
in application security do not get as much publicity as e-mail
viruses such as SirCam or Nimda, or worms such as CodeRed, but they can
cause just as many problems, ranging from theft of merchandise and
information to the complete shutdown of a Web site. The top ten hacking
techniques at the application level are outlined below.
Virus
detection software has been largely successful in eradicating the majority
of known viruses; firewalls are now being equipped to scan for viruses.
Worms take advantage of weaknesses in networking systems. In addition, we
can use digital signatures to authenticate software and provide better
information about the source of a program.
Cookie
poisoning –
by manipulating the information stored in a browser cookie, hackers assume
the user’s identity and have access to that user’s information such as
user id, timestamp, etc. on the client’s machine. Since cookies are not
always cryptographically secure, a hacker can modify them, fooling
the application into changing their values by poisoning the cookie.
Malicious users can gain access to accounts that are not their own and
perform activities on behalf of the real user.
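One standard defense against cookie poisoning is to append a keyed MAC to the cookie value so the server can detect any modification. Below is a standard-library sketch (the secret name and cookie format are illustrative; web frameworks provide equivalents built in):

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # assumption: known only to the server

def make_cookie(value: str) -> str:
    """Append an HMAC-SHA256 signature to the cookie value."""
    sig = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{sig}"

def read_cookie(cookie: str):
    """Return the value if the signature verifies, else None."""
    value, _, sig = cookie.rpartition("|")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(sig, expected) else None
```

A poisoned cookie fails verification, so the forged identity is never accepted.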
Hidden-field
manipulation –
hackers can easily change hidden fields in a page’s source code to
manipulate the price of an item. These fields are often used to save
information about the client’s session, eliminating the need to maintain a
complex database on the server side. Because e-commerce applications use
hidden fields to store the prices of their merchandise, people can view
the site’s source code, find the hidden field, and alter the price. In a
real-world scenario, no one would have discovered the change, and the
company would have shipped the merchandise at the altered prices and may
even have sent a rebate.
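The fix for hidden-field manipulation is to treat any client-supplied price as untrusted and look the real price up on the server. The item IDs and prices below are illustrative data, not from the text:

```python
# Authoritative server-side catalog (illustrative values)
CATALOG = {"sku-100": 49.99, "sku-200": 199.00}

def checkout(item_id: str, client_price: float) -> float:
    """Ignore the client-submitted price; charge only the catalog price."""
    if item_id not in CATALOG:
        raise KeyError(f"unknown item: {item_id}")
    real_price = CATALOG[item_id]
    if client_price != real_price:
        # Possible hidden-field manipulation; a real system would log this
        pass
    return real_price
```

The hidden field can then remain for display purposes, but it no longer decides what the customer pays.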
Parameter
tampering –
this technique involves changing information in a site’s URL parameter.
Because many applications fail to confirm the correctness of common
gateway interface (CGI) parameters embedded inside a hyperlink, parameters
can be easily altered to, for example, allow a credit card with a $100,000
limit, skip a site login screen, and give access to alternate orders and
customer information.
Buffer
Overflow –
by exploiting a flaw in a form to overload a server with excess
information, hackers can often cause the server to crash and shut down the
Web site.
Cross-site
Scripting –
when hackers inject malicious code into a site, the false scripts are
executed in a context that appears to have originated from the targeted
site, giving attackers full access to the retrieved document and possibly
even sending data contained in the page back to the attacker.
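The standard mitigation for cross-site scripting is to escape user-supplied content before embedding it in a page, which Python's standard library supports directly (the surrounding markup here is illustrative):

```python
import html

def render_comment(user_text: str) -> str:
    """Escape <, >, &, and quotes so injected markup renders as inert text."""
    return f"<p>{html.escape(user_text)}</p>"
```

An injected `<script>` tag is emitted as literal text rather than being executed by the victim's browser.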
Backdoor
and debug options –
often programmers will leave in debug options to test the site before it
goes live. Sometimes, in haste, they forget to close the holes, giving
hackers free access to sensitive information.
Forceful
browsing –
by subverting the application flow, hackers access information and parts
of the application that should normally be inaccessible, such as log
files, administration facilities, and application source
code.
Stealth
commanding –
hackers often conceal dangerous commands via a “Trojan horse”, with the
intent to run malicious or unauthorized code that is damaging to the
site.
Third-party
misconfiguration –
since vulnerabilities are posted and patches made available on public Web
sites such as SecurityFocus, hackers are alerted to new vulnerabilities as
they arise. For example, through a configuration error, a hacker could
create a new database that renders the existing one unusable by the
site.
Known
vulnerabilities –
some technologies used in sites have inherent weaknesses that a persistent
hacker can exploit. For example, Microsoft Active Server Pages (ASP)
technology can be exploited to gain the administrator’s passwords and take
control of the entire site.
Here
are some ways hackers can attack our servers by exploiting Windows 9x
clients that are connected to those servers.
Password
Cracking -
There are many different ways of hacking the Windows 9x operating systems.
Stealing passwords is one of the more common and dangerous methods. The
initial password prompt that users see when logging on to Windows 9x can
be easily bypassed by pressing [Esc] or clicking Cancel. The only benefit
to entering a Windows password is that doing so allows us to access a
Windows NT or Windows 2000 domain and allows passwords to be cached.
Because Windows passwords can provide a hacker access to a domain or to
other applications, there is a temptation to steal them. Unfortunately,
Windows 9x was not designed as a secure environment. The Windows password
and all cached passwords are stored in a PWL file, and there are numerous
utilities available on the Internet for cracking PWL files and extracting
the Windows password. Hackers need not be at the workstation to steal it;
by simply copying the PWL file to a disk, they can work on cracking the
password from any computer. There is unfortunately no single mechanism to
protect our PWL files from being copied. Creating a registry key and
setting its value to 1 will, however, make it impossible for Windows to
remember passwords other than the initial Windows password.
Screen
Savers –
Normally users remain logged on while away from their desks, assuming
that their accounts are safe because the PCs are running
password-protected screen savers. It is very simple for hackers to get
around a screen-saver password. Thus, anything important should be saved
on a server where it can be better protected.
File
Sharing –
Windows 9x file sharing is one of the big threats to our network. Sharing
is not itself a problem: if users are sharing files, they probably intend
for people to access them, so unauthorized access to the shared data is
not really the issue, and the good stuff is usually stored on servers
anyway. The real problem is the way that the shares are made available;
it is easy to use a protocol analyzer to intercept a share
password.
SNMP
is an Internet protocol developed in the 1980s when TCP/IP networks were
blossoming and it was recognized that network management was becoming a
major problem. SNMP consists of three components: the SMI, the MIB, and PDUs.
SMI
is a toolkit that identifies data types and develops rules to create the
MIB. PDUs define authorized network management messages. UDP ports 161
(agents) and 162 (managers) are the usual way of implementing SNMP using
IP, but IPX, AppleTalk datagrams, or even HTTP can also support SNMP.
Vendors
write SMI-compliant object definitions for their systems and compile them
using a standardized MIB compiler to build executable code that is used to
manage hubs, routers, network cards, and so forth, which have agents that
recognize MIB objects.
Access
control and data packet authentication rely on passwords (community
strings), but since these are usually included in each SNMP packet, often
in unencrypted form, they can be discovered using any packet
sniffer. This means that network managers need to block the use of any of
the control features of SNMP, such as Set, or any tool that allows packets
to write data at any entity.
SNMP
provides an easy way for administrators to get topology information about
their networks and even provides some management of remote devices and
servers. However, administrators have to be very careful to block SNMP
traffic correctly at the firewall level; otherwise hackers can use it to
gather that valuable network information and exploit
vulnerabilities.
The
main threat from SNMP is that it provides an easy way to collect basic
system configuration information. For example, the SNMP “systeminfoget”
string could be used to report what network adapters are available on the
client during the logon sequence. This is exactly the sort of information
a hacker tries to obtain before beginning penetration attempts. In addition to information
gathering, SNMP can be used to manage devices, for example to shut down a
network interface. This makes it even more dangerous as a tool for
malicious hackers.
SNMP
is inherently insecure because SNMP messages are not encrypted. SNMP is
not vulnerable because of a bug in the code, but it is dangerous because
of how it was originally designed, before the proliferation of networks
connected to the Internet.
The
SNMP agent for Windows NT can disclose lots of useful data to a
hacker. If the services are
running on the NT server,
SNMP can disclose still more data. Moreover, since SNMP is a
cross-platform protocol, its vulnerabilities are definitely not limited
to Windows networks.
Since
management information databases (MIBs), an SNMP component, often include
a TCP connection table listing all passive open ports and active TCP
connections, this information can be accessed remotely. Some vendors, such
as Cisco, automatically hide some of the SNMP information, but even Cisco
software does not hide all of the TCP table data.
The
easiest way to deal with SNMP threats is to set our firewall to block UDP
ports 161 and 162 and any other port the administrator may have
custom-configured for SNMP traffic to the outside world. At a minimum, the admin has to
monitor activity on all ports utilizing SNMP.
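As a small complement to the firewall rules above, an administrator can check locally whether anything is already bound to the standard SNMP ports. This is only a host-level sketch using the standard library; blocking traffic from the outside world still has to happen on the firewall itself, and probing ports below 1024 may require administrative privileges:

```python
import socket

def udp_port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something already holds the given UDP port locally."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.bind((host, port))
        return False          # bind succeeded: nothing was listening
    except OSError:
        return True           # bind failed: the port is already taken
    finally:
        sock.close()
```

Running `udp_port_in_use(161)` and `udp_port_in_use(162)` (with sufficient privileges) reveals whether an SNMP agent or manager is active on the host.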
Most
SNMP security consists of two subsystems:
General
Security Strategies
Preventative
Internet security measures include verifying that equipment and services
are secured in order to prevent unauthorized access. Most of the time when
Internet equipment is compromised and violated, it is due to software
flaws or services that have been incorrectly
configured.
To
overcome hacker attacks, software developers have to realize the
seriousness of the problem and must aggressively address it
internally. There are several
reasons why internal security for machines and employees is so important,
but the number one reason to increase internal security is that the
majority of true hacks come from inside an organization, so security for
system administrators, passwords, and sensitive information needs to be
tight. Here are some security steps a company’s system administrators must
take:
Most
virus attacks cause either the corruption or loss of files. For example,
the I Love You virus deleted every graphic image on the PCs it infected.
This caused significant damage to many companies that then struggled to
recover their Web sites, presentations, art files and other important
collections. Macro viruses account for about 80 percent of all viruses and
are the fastest growing viruses in computer history. These viruses infect
documents created for specific applications such as MS Word or Excel.
The
threat of damaging computer viruses and the need for good antivirus
software are greater than ever. Many organizations have already learned
the painful and costly reality of leaving their networks unprotected
against viruses. Thus there is a strong need to implement a thorough
antivirus strategy. Here are some vital points for accomplishing this
within a company’s infrastructure.
A
network infrastructure can traditionally be divided into four distinct
layers that require virus protection.
Layer 1 - Internet Gateways
Layer 2 - Servers
Layer 3 - Clients
Layer 4 - PDA Devices
Each layer is described briefly below. Almost all viruses are spread via
the Internet, either by email transfer or by Web browsing. If one is able
to secure the first layer, then the security of all the other layers is
almost assured.
The
first layer of protection should really encompass two components,
rules-based policy enforcement and virus scanning. By having rules-based
policy enforcement, one can create rules to block viruses, based on known
content such as I LOVE YOU in the subject line, even before the antivirus
software manufacturers have released a signature. In addition, rules
can be applied to look for old viruses that may perhaps be reclassified
as hoaxes.
By
having virus protection at the first layer, organizations can trap and
block viruses at perhaps one or two gateways for the entire company. Once
a virus has entered, a company must rely on server agents to take over;
it is then forced to scan and cure the virus on many servers rather than
at one gateway. If the virus reaches the client layer, scanning and
curing have to be performed on hundreds or thousands of
nodes.
If
a company does not have its own Internet gateway, it must rely fully on
its Internet service provider (ISP) for virus
protection.
The
second one is the server layer. In any organization, it is natural that at
any given time, there are dozens of computers that have the antivirus
software either disabled, uninstalled, or crippled in some form or
another. This reinforces the fact that every server where people save
files or store email messages has to be under constant inspection; that
is, antivirus software must be configured to provide real-time protection
as well as scheduled scanning protection.
The
third layer is the client layer. The desktop and laptop layer represents
the largest and possibly the most difficult layer to protect. By accident
or intentional disabling, clients have the ability to cripple their
antivirus software. Antivirus
signatures need to be current, real-time monitoring must be enabled, and
scheduled scanning should take place frequently.
The
last layer is the PDA layer. In the recent past, viruses have started
cropping up for these devices, so it becomes imperative to deploy
antivirus software for such wireless devices, including those based on
the Palm and Pocket PC platforms.
But
protecting these important layers is just the first battle in the war
against viruses. To be truly victorious, we must be vigilant in keeping up
with antivirus updates. To accomplish this, an organization must perform
the following four key steps:
Retrieval, Testing, Deployment and
Monitoring.
The
first and most easily neglected step in managing our multilayer antivirus
defense is the timely and consistent retrieval of antivirus signature
updates. Most signature updates are obtained by accessing the FTP
site of the antivirus vendor and pulling down the latest update. This
process has to be automated, as manual handling may sometimes lead to
catastrophic failure. As a precaution, we should also have a backup
mechanism, such as old-fashioned dial-up access, for retrieving updates
when those FTP sites are overly busy.
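The automated retrieval step can be sketched with the standard library's FTP client. The host, directory, and file-naming scheme below are placeholders, not a real vendor site, and the assumption that file names sort chronologically is stated in the code:

```python
from ftplib import FTP

def pick_latest(names):
    """Assumption: vendor file names sort chronologically (e.g. sig-YYYYMMDD.dat)."""
    return sorted(names)[-1]

def fetch_latest_signature(host, remote_dir, dest_path):
    """Download the newest file from a (hypothetical) vendor FTP directory."""
    ftp = FTP(host)
    try:
        ftp.login()                       # anonymous login
        ftp.cwd(remote_dir)
        latest = pick_latest(ftp.nlst())  # choose the most recent update file
        with open(dest_path, "wb") as fh:
            ftp.retrbinary(f"RETR {latest}", fh.write)
        return latest
    finally:
        ftp.quit()
```

Scheduling this function to run nightly, with a dial-up fallback path, covers the retrieval step described above.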
The
second task is to test the acquired signature update before deploying it
in the organization. The
testing phase is for checking whether the signature update has any glaring
problems.
The
third phase is to deploy the tested signature update. Deployment has to
be done automatically by rolling out the software to the server and
client layers, the latter including both PCs and handheld
devices.
The
final phase involves monitoring the antivirus health of our environment.
We have to monitor whether all our connected computing systems have the
latest software, whether they are at the latest signature level, whether
someone has intentionally or inadvertently disabled the real-time
monitoring and so on. There are software tools for monitoring the state of
our antivirus environment. Thus one has to realize that virus protection
is not a one-time affair but an ongoing activity. Failing to take all the
necessary steps to win the war against viruses will eventually cost our
organization dearly in terms of resources and
reputation.
It
often seems that network administrators are obsessed with security. After
all, if an administrator goes easy on security, countless hooligans
are just waiting to break into the network to steal or vandalize data. He
or she has to be smarter than every hacker out there.
As
companies begin to recognize security as a mission-critical
expenditure, preventive measures occupy a major portion of the effort to
combat hackers. Although
all the prevention in the world won’t guarantee against a virus infection,
putting safeguards in place will go a long way toward reducing the risk
and minimizing the impact of a possible infection.
Obviously,
the most effective way of dealing with any virus threat is to completely
prevent it from entering our system. As discussed below, many layers
within our infrastructure need to be fortified in order to build a truly
effective defense against viruses.
Access
Points - First, a network administrator
has to identify all of his organization’s access points, or places where
viruses could be introduced. Potential access points include the SMTP
gateway, Internet gateway, wireless Internet devices, and the CD-ROM
drives and floppy drives on the company’s desktops and laptops. In a
worst-case scenario, the network administrator has to cut off all
externally facing access points to prevent the virus from
spreading.
Server
Vulnerabilities –
It is important for employees to know where all of company’s servers are.
If a problem should arise and the regular IT staff members are out of
their work place, damage may be more widespread if others are not able to
locate company servers.
Preparedness
–
Part of preventing viruses is to be prepared for their inevitability and
to patch every known hole in security. Independent security audits,
ethical hacking, and diligent application of security patches can keep a
company safe from a virus attack. Addressing identified weaknesses and
staying on top of all the security bulletins and patches as they are
released are the two most important prevention strategies, apart from
having a permanent security team to deal with the endless barrage of
security notices, the proper testing of patches, and assistance with the
constant implementation of updates.
Detection
–
We should have an early warning mechanism for detecting virus attacks.
Symptoms vary significantly from virus to virus. Some common signs
include strange email messages sent to many recipients, Web server logs
that contain additional irregular entries, corrupted files, or errors
that appear when starting applications.
Regardless
of the symptoms, sharing information about a virus attack with the help
desk will go a long way toward minimizing the damage. A set of criteria
should be established with a minimum of three escalation levels, based on
proliferation, payload, and likelihood, such
as:
Level 1: minimal spread (that is, according to news services and security
watchers, an obscure vulnerability)
Level 2: medium infection, minor payload, easily exploitable vulnerability
Level 3: significant infection, minimal damage but major annoyance,
widely known vulnerability
In addition, lines of communication need to be documented for conveying
the severity of the virus attack.
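The three escalation levels above can be captured in a simple classification helper. The scoring thresholds here are illustrative assumptions; each organization would tune them to its own criteria:

```python
def escalation_level(spread: str, payload: str, exploitability: str) -> int:
    """Map proliferation, payload, and likelihood ('low'/'medium'/'high')
    to the escalation levels described above (1-3)."""
    score = {"low": 0, "medium": 1, "high": 2}
    total = score[spread] + score[payload] + score[exploitability]
    if total <= 1:
        return 1  # minimal spread, obscure vulnerability
    if total <= 3:
        return 2  # medium infection, easily exploitable vulnerability
    return 3      # significant infection, widely known vulnerability
```

The help desk can then trigger the documented communication lines automatically based on the returned level.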
Informing
the clients –
As a virus makes its way up severity levels, and the number of people
affected increases, the number of clients notified must correspondingly
increase. When the level reaches Level 3, tough decisions regarding
discontinuation of services will need to be made. The variation in time
zones favors those in the eastern part of the world, as they are the
first to learn about a virus attack and can take suitable remedial
measures ahead of their western counterparts.
When
a firm is affected, it is mandatory to gather as much information as
possible about the virus, its symptoms, and the corresponding cures, and
then to document a comprehensive and consistent approach for eradicating
the infection and preventing possible future attacks. It is equally
critical to have an effective communication plan with the clients for
notifying them of possible infection, apprising them of progress, and
informing them when all systems are fully functional.
A
Virus Recovery Plan
To
minimize the effects of virus attacks, companies should build a clear
virus recovery plan. This plan should specify how IT goes into action,
what testing is required, the priority of each end user or group and
internal communications. The plan also should specify regular, proactive
measures such as regular recovery CD generation. Each virus attack calls
for a decision from executive management about whether to disconnect from
the outside world and, if so, when to reconnect. Having a contingency
plan will always help ensure that impending damage is minimal and easily
repaired. For companies in certain categories, if they are not online,
they are not in business.
Every
one knows that virus attacks are a serious threat to any organization
dependent on personal computers. Despite best efforts by anti-virus
software, viruses wreak havoc on a regular basis. The best defense in the
case of new viruses is a strong offence. Each company should have a virus
recovery plan.
There
is a viable technology from Previo that will help to recover very
quickly when an organization’s defenses get breached. Previo’s eSupport
Essentials has the capability of preparing us for the unknown by
efficiently protecting all the content on all our PCs. eSupport Essentials
is focused specifically on recovery, both IT assisted and end-user
self-service. This special focus will help each organization to be always
up and running as quickly as possible.
Unlike
backup systems, eSupport Essentials captures everything, including
applications, documents, customizations, favorites, device settings, and
other personalizations by taking an automatic snapshot of every PC every
day. The snapshot takes only minutes to complete and is done in the
background and often averages 1 MB or less per snapshot on an on-going
basis, eliminating any tangible impact to the business. This content is
then stored in a way that makes it easy to very rapidly restore everything
to any point in time. This tool helps any infected company to begin
recovery immediately after discovery rather than wait for a fix from the
anti-virus software vendor. This is increasingly critical as viruses
nowadays spread faster, mutate into unrecognized patterns and adversely
affect the supply and demand chain of the business. With Previo, the risk
of remaining connected to the Internet is reduced as recovery is in
general much faster than with other mechanisms.
When
files are lost due to a virus or any other reason, the situation can be
remedied in short order. The end user simply rolls the PC back to a time
prior to infection. This eliminates the virus as well as restores the lost
files. If the end user would prefer, the HelpDesk can perform the rollback
via remote assistance. Rollback of this type typically takes less than 30
minutes.
When a network must be connected to the Internet, a firewall is needed to
protect the network resources from hackers, competitors, and the
curious.
Normally, devices attached to our LAN can see and access all of the other
devices. This freedom of access is not usually a problem
because we know everyone who has access to the network. However, things
get complicated when we connect our LAN to the Internet, where millions of
connected devices and people can get at our network resources. While
most of these people will not bother us, there will always be one or two
evildoers, so we have to be well prepared and on the safe side.
Just
like the firewall of a building that prevents flames from spreading from
building to building, a network firewall stops unwanted network traffic
from spreading from the Internet to our network and vice versa. The
unwanted traffic is usually someone outside our network attempting to
access our resources. On the other hand, we also should have firewalls to
prevent our own users from going places on the Internet that they should
not. The firewall places a barrier between our network and the Internet,
and we can use this barrier to ensure the safety of our network.
There
are different types of firewalls for different levels. A firewall can be hardware-based
or software-based. Hardware-based firewalls traditionally take the form of
special types of routers. Routers filter information on the network and
direct packets. Vendors that sell routers configured as firewalls add
extra filtering and management capabilities to their hardware. There are a number
of hardware-based firewalls such as Cisco’s PIX Firewall, Galea’s Avertis
Firewall, and Ascend’s Pipeline Firewall Router.
Software-based
firewalls turn regular PCs into firewalls. These firewalls run on
top of operating system software such as Windows NT or NetWare.
Software-based firewalls are a little slower than hardware-based ones and
somewhat less reliable because of limitations of the underlying
operating system. But software-based firewalls are significantly
cheaper and more flexible than their hardware counterparts. Microsoft
Proxy Server, Novell BorderManager, LanOptic Guardian, IBM eNetwork
Firewall and Checkpoint Firewall 1 are some of the leading software-based
firewalls.
Firewalls
also can be classified into broad levels: network-level firewalls
and application-level firewalls. The levels describe the way the
firewall controls access across its boundaries. Network-level firewalls
control traffic based on the traffic’s source address, destination
address, and TCP/IP port information. Packets travel directly through
network-level firewalls. The firewall filters the information to make sure
that only the proper packets travel through it. As traffic flows directly through
the firewall, it does little other than filter packets, and hence
these firewalls work much faster than application-level firewalls.
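To make the idea concrete, here is a minimal sketch of network-level filtering in Python. The rule set, addresses, and default-deny policy are illustrative assumptions, not any real firewall's configuration:

```python
# A minimal sketch of network-level (packet-filter) firewall logic:
# decisions are based only on address and port information, first match wins.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int

# Ordered rule list: (predicate, action); the first matching rule decides.
RULES = [
    # Block everything aimed at Telnet (port 23), from anywhere.
    (lambda p: p.dst_port == 23, "deny"),
    # Allow web traffic to the public web server (hypothetical address).
    (lambda p: p.dst_ip == "192.0.2.10" and p.dst_port == 80, "allow"),
]
DEFAULT_POLICY = "deny"  # deny anything no rule matches

def filter_packet(packet: Packet) -> str:
    """Return 'allow' or 'deny' using only header information."""
    for predicate, action in RULES:
        if predicate(packet):
            return action
    return DEFAULT_POLICY

print(filter_packet(Packet("203.0.113.5", "192.0.2.10", 80)))  # allow
print(filter_packet(Packet("203.0.113.5", "192.0.2.10", 23)))  # deny
```

Real packet filters implement the same first-match-wins idea in router hardware or the kernel, against far richer rule languages, which is why this style of firewall can run at wire speed.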
But application-level firewalls work differently. Traffic does not
flow through an application-level firewall. Instead, the server running
the firewall software processes the requests internally and translates the
data from one network to another. Resources on either side of the firewall
never actually make contact. Newer application-level firewalls make this
process as transparent as possible. Because the firewall must translate
the information, application-level firewalls are sometimes slower than
network-level firewalls. But application-level firewalls offer more
control because they check more information, giving us a better
opportunity to track and control what crosses the firewall. Proxy
servers are typical examples of application-level
firewalls.
Firewalls can’t protect our network resources from everything and are not
100 percent hacker-proof. Firewalls can only control information flowing
across the network boundary, so they cannot stop attacks that arrive
through dial-up lines. Nor do they protect against security breaches via
floppy disks, printouts, tapes and CD-ROMs that our users can carry with
them.
Also
firewalls can’t protect us from such things as viruses. Normally viruses
come embedded inside of files. While firewalls examine where the data
packets are coming from and going to, they do not actually examine the
contents of the packets.
Firewalls
are basic equipment nowadays. The problem is that they are not
enough. Many organizations
believe that since they have firewalls or other security mechanisms at the
boundaries of their enterprises, they are protected from attacks
originating from the Internet. However this is not the case for a variety
of reasons according to a Gartner report. Firewalls can’t detect attacks
from within the enterprise, which may be launched by disgruntled
employees. Also, firewalls can be easily circumvented.
But
an Intrusion Detection System (IDS) can add a different dimension to the
security infrastructure because it can monitor intrusion attempts from
both inside and outside the company as well as detect anomalous behavior
patterns that could reflect malicious intent. IDSs have garnered much
interest in recent times.
Intrusion
Detection System (IDS) – An Overview
An
IDS is a system that tries to detect and alert on attempted intrusions
into a system or network, where an intrusion is considered to be any
unauthorized or unwanted activity on that system or
network.
An
IDS adds an early warning capability to our defenses alerting us to the
type of suspicious activity that typically occurs before and during an
attack. As intrusion
detection systems are not capable of stopping hackers’ attacks, there
should be a carefully thought out corporate security policy, backed up by
effective security procedures which are carried out by skilled staff using
the necessary tools. Thus IDSs can be viewed as an additional tool in the
continuing battle against hackers and crackers.
Intrusion
detection systems are a combination of early warning and alarm systems and
can be viewed as an additional tool in the continuing job of maintaining
the security of a corporate system. IDSs are complementary to firewalls
and to an effective security policy within an organization. IDSs respond
to threats that other host and network security products are unable to
counter, such as internal attacks and external attacks that use
legitimate routes or exploit rules allowed by firewalls.
Here are a few situations where an IDS becomes an invaluable tool.
A hacker can still pierce through firewalls. Though
vital data seem to be intact, hackers could alter some operating system
files in such a way that every time system administrators log into a
machine, their password is emailed out to the hackers. Also every time
they change their password, as per the security policy, the hackers are
immediately notified.
Hackers also can get inside our network by means of illegal traffic with
nothing recorded in our firewall logs. Thus an IDS can detect intrusions
that firewalls and their logs miss.
There are two main types of IDS, namely network-based and host-based
systems.
Network-based
IDS examine the individual packets flowing through a network. Unlike
firewalls, which typically only look at IP addresses, ports and ICMP
types, network-based intrusion detection systems (NIDS) are able to
understand all the different flags and options that can exist within a
network packet.
A NIDS can therefore detect maliciously crafted packets that are designed
to be overlooked by a firewall’s relatively simplistic filtering rules.
NIDS also can look at the payload within a packet, that is, to see which
particular web server program is being accessed and with what options and
to raise alerts when an attacker tries to exploit a bug in such code.
Thus, in summary, a network-based IDS looks at all the traffic flowing by
on a network. NIDS can
operate in real time. They are placed as sensors on LAN servers. They use
a database of known “attack signatures” or look for patterns. They produce
a lower rate of false alarms, but the database must be updated regularly
and frequently to ensure the IDS will recognize new types of
attacks.
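The signature-database approach can be sketched as a simple pattern match over packet payloads. The signature names and patterns below are illustrative stand-ins for a real, regularly updated database:

```python
# Sketch of signature-based NIDS matching: scan a packet payload against
# a database of known attack patterns. Signatures here are illustrative.
SIGNATURES = {
    "cmd-exe-traversal": b"/winnt/system32/cmd.exe",
    "root-exe-probe": b"/scripts/root.exe",
}

def match_signatures(payload: bytes):
    """Return the names of all signatures found in this payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

# A simulated web request probing for a known vulnerable program.
alerts = match_signatures(b"GET /scripts/root.exe?/c+dir HTTP/1.0")
print(alerts)  # ['root-exe-probe']
```

A production NIDS does the same matching over reassembled streams with thousands of signatures, which is why the database must be updated frequently to catch new attacks.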
Host-based intrusion detection systems, in contrast, are concerned with
what is
happening on each individual computer or host. They are able to detect
such things as repeated failed access attempts or changes to critical
system files. The down side is that host-based IDS do not operate in real
time and are prone to false alarms.
An application-based IDS is a host-based system specific to one type of
service, for example, an IDS built specifically for a web server or mail
server.
A target-based IDS is built to check the integrity of a particular system
and its onboard software, including the operating system. Target-based
systems are often called file-integrity assessments since they use
check-sum based software to determine whether a system has been tampered
with.
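The check-sum idea behind such file-integrity assessments can be sketched in a few lines of Python; the file standing in for a monitored system binary is hypothetical:

```python
# Sketch of check-sum based file-integrity assessment (target-based IDS):
# record a trusted baseline hash of each monitored file, then compare later.
import hashlib
import os
import tempfile
from pathlib import Path

def sha256_of(path: str) -> str:
    """Checksum one file in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def baseline(paths):
    """Record a trusted checksum for each monitored file."""
    return {p: sha256_of(p) for p in paths}

def tampered(paths, base):
    """Return the files whose current checksum differs from the baseline."""
    return [p for p in paths if sha256_of(p) != base[p]]

# Demonstrate with a temporary file standing in for a system binary.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "login")
    Path(target).write_bytes(b"original binary contents")
    base = baseline([target])
    Path(target).write_bytes(b"trojaned binary contents")  # simulated tampering
    print(tampered([target], base) == [target])  # True: tampering detected
```

The baseline itself must be stored off the monitored host, for the same reason the system logs discussed later must be duplicated remotely: an intruder who can alter the binary can alter a locally stored checksum too.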
IDSs
can be further categorized on the basis of detecting misuse and anomalous
use. Misuse detection within a network-based IDS involves checking for
illegal types of network traffic, for example, combinations of options
within a network packet that should never legitimately occur. Misuse
detection by a host-based IDS would include attempts by a user to execute
programs for which they do not have legitimate need.
Detection
of anomalous activity relies on the system knowing what is regular network
traffic and what is not. Anomalous traffic to a host-based IDS might be
interactive accesses outside of normal office hours. An example of
anomalous traffic on a network-based IDS is repeated attempted access by
one remote machine to many diverse services on one or more of our internal
systems, all in quick succession.
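The network example above, one remote machine probing many diverse services in quick succession, can be sketched as a sliding-window count of distinct ports per source. The window length and threshold are illustrative values, not recommendations:

```python
# Sketch of anomaly detection: flag any remote host that touches many
# distinct services on our systems within a short time window.
from collections import defaultdict

WINDOW_SECONDS = 10   # illustrative window
PORT_THRESHOLD = 5    # illustrative "too many services" threshold

def detect_scans(events):
    """events: time-ordered (timestamp, src_ip, dst_port) tuples."""
    seen = defaultdict(list)  # src_ip -> [(timestamp, port), ...]
    alerts = set()
    for ts, src, port in events:
        # Keep only this source's activity inside the sliding window.
        recent = [(t, p) for t, p in seen[src] if ts - t <= WINDOW_SECONDS]
        recent.append((ts, port))
        seen[src] = recent
        if len({p for _, p in recent}) >= PORT_THRESHOLD:
            alerts.add(src)
    return alerts

# One host probing six different ports within six seconds trips the alarm.
probe = [(t, "198.51.100.7", 20 + t) for t in range(6)]
print(detect_scans(probe))  # {'198.51.100.7'}
```

Note that nothing here matches a known signature; the alert comes purely from behavior that deviates from what normal traffic looks like, which is exactly why anomaly detection can catch novel attacks but also produces more false alarms.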
IDSs
can also be classified by their passive or reactive nature. Passive systems simply detect the
potential security breach, log the information and raise an alert. Reactive systems are designed to
respond to the illegal activity, for example, by logging off a user or by
reprogramming the firewall to disallow network traffic from a suspected
hostile source.
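The passive/reactive distinction can be sketched in a few lines. The alert structure and the set standing in for a reprogrammable firewall are illustrative:

```python
# Sketch of passive vs. reactive IDS responses to the same alert.
blocked_sources = set()   # stands in for reprogramming a real firewall
event_log = []

def passive_response(alert):
    """Passive systems only log the event and raise an alert."""
    event_log.append(alert)
    print(f"ALERT: {alert['signature']} from {alert['src']}")

def reactive_response(alert):
    """Reactive systems log the event too, then act on it."""
    passive_response(alert)
    blocked_sources.add(alert["src"])  # e.g. block the suspect source

reactive_response({"src": "203.0.113.9", "signature": "port-scan"})
print(blocked_sources)  # {'203.0.113.9'}
```

The automatic `blocked_sources.add` step is precisely where the danger discussed next lies: an attacker who forges the source address can trick the system into blocking a host we depend on.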
Reactive systems have some serious drawbacks, as they can potentially take
actions that shut down business-critical services. For example, suppose an
attacker crafts rogue network traffic aimed at our Internet mail server,
crafted so that it appears to come from our ISP. The network-based IDS
detects this and reprograms our firewall to block the traffic, and hence
we cannot receive any email through our ISP. Thus arises the necessity of
having skilled staff members as a critical part of any intrusion detection
system: they can identify this faked traffic and liaise with our ISP to
establish the source of the problem.
An IDS is thus not a silver bullet for all our security needs and cannot
replace skilled staff. Rather, intrusion detection systems should be seen
as an important layer in a company’s defense-in-depth security strategy.
Enhancing
Intrusion Detection with a honeypot
Every
network with an Internet connection is vulnerable to intrusion. It is the
price one has to pay for staying wired to the world. Catching elite
intruders and tracking their movements throughout the network is tricky
business. One could use any of several commercial IDSs. However, they only
tell where intruders went and how they got there. Besides, these elite
intruders can often camouflage themselves among the legitimate traffic and
alter the system logs to remove any trace of their
penetration.
The
best intrusion detection methodology here is to track every movement of
those elite intruders in a way that allows one to preserve that data for
law enforcement officials and to provide a target away from our production
servers. There is a specialized tool for accomplishing this.
Honeypots are programs that simulate one or more network services on
designated ports of a computer. An attacker assumes that
we are running vulnerable services that can be used to break into the
machine. A honeypot can be used to log access attempts to those ports
including the attacker’s keystrokes. This could give us advanced warning
of a more concentrated attack.
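A toy version of such a service-simulating honeypot can be sketched with a plain TCP socket. The fake "login:" banner, the captured log format, and the simulated attacker below are all illustrative:

```python
# Minimal honeypot sketch: listen on a port we do not really serve,
# log every connection attempt and the first bytes the attacker sends.
import datetime
import socket
import threading

LOG = []  # (timestamp, source address, captured bytes)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def handle_one():
    conn, addr = srv.accept()
    conn.sendall(b"login: ")          # fake service banner to invite input
    data = conn.recv(1024)            # capture the attacker's keystrokes
    LOG.append((datetime.datetime.now(), addr[0], data))
    conn.close()

t = threading.Thread(target=handle_one)
t.start()

# Simulate an attacker probing the fake service.
c = socket.create_connection(("127.0.0.1", port))
c.recv(64)                # read the banner
c.sendall(b"root\r\n")    # the "attacker" tries a login
c.close()
t.join()
print(LOG[0][1], LOG[0][2])
```

Every entry in `LOG` is evidence: since the honeypot offers no legitimate service, any connection at all is an attempted attack, which is what makes honeypot logs so much easier to interpret than production server logs.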
Honeypots
are most successful when run on known servers, such as Web, mail, or DNS
servers, because these systems advertise their services and are often the
first point of attack. One can construct a system that appears vulnerable
to attack but actually offers:
We
are intentionally putting out bait to be bitten. If this task is performed
well, we can
It
is recommended that the honeypot be isolated from the production
network. Many firewalls allow us to place a network in the demilitarized
zone (DMZ). This is a network added between an internal network and an
external network in order to provide an additional layer of security. The
other option is to place it on a separate, dedicated Internet connection
and all traffic to and from the honeypot should also be routed through its
own dedicated firewall.
The
key to an effective honeypot is its ability to monitor intruders. Skilled
intruders will go to extraordinary lengths to cover their tracks. It is
imperative that we collect as much data from as many sources as
possible. The first place to collect data is the honeypot’s firewall. All
enterprise firewalls are capable of logging all traffic they examine. If a firewall services only our
honeypot, any traffic appearing in the firewall logs is evidence of an
attack.
The
second data collection tool is the honeypot’s system logs. These logs will
be intruders’ first target and are extremely vulnerable to alteration. It
is vital that these logs are automatically duplicated to a remote system.
IDSs or packet sniffers provide the third and final monitoring tool.
These applications monitor traffic passively and do not advertise their
presence. They will provide us a keystroke-by-keystroke view of what the
attacker does and sees.
IP
Virtual Private Networks (VPNs) – An Overview
An IP VPN provides enterprise-scale or dial-up connectivity over a shared
network infrastructure using IP-based
technologies.
A
VPN is seen as virtual because it does not require dedicated lines. It is
private as it uses encryption algorithms, is transparent and it is nearly
tamper-proof. A VPN is considered a network because it reaps the benefits
of a shared IP network.
VPN
– An Introduction
VPNs
are a mature part of the network security market. VPN solutions allow
companies, institutions, and governments to leverage productivity gains
and competitive advantages. IP VPNs compete aggressively against other
WAN options as an effective alternative to private lines, and are second
only to dial-up modems as a way of remotely accessing the corporate LAN.
A VPN is formed by creating a virtual tunnel either on the public
Internet or through a managed IP-based network or both. IP VPNs provide
Internet access to remote clients while the
connection is securely tunneled into a corporate
network.
In
addition to security, a VPN provides quality of service and manageability
of a private network using shared IP networks. An IP VPN can be viewed as
a traditional private network with exclusive use of resources, and it is a
blend of security technologies. Its capability is created through
equipment that provides security features such as tunnel-based encryption
and user authentication mechanisms. Additionally, an IP VPN allows remote
employees to communicate with branch offices and external parties between
two endpoints, such as LANs and WANs.
The
advantages of having IP VPN are found primarily in cost savings and
security. IP VPNs are easy to use and support rapid VPN deployment. IP
VPNs are scalable, that is, they have the capability to handle dramatic
traffic fluctuations and add new users. IP VPN servers provide a hardware
upgrade path to increase capacity and scalability. The placement of a VPN
gateway in relation to firewalls, routers, and extranet/intranet
connections directly affects the level of scalability, manageability and
security it offers.
Performance is improved, but it depends heavily on packet size, encryption
algorithm, number of concurrent connections, packet loss and operating
system. Performance also relies on CPU capacity, as security functions
such as encryption are processor-intensive.
IP
VPN supports centralized, policy-based management from a single point of
administration, which ensures that remote clients and firewalls are
installed and configured properly. Also, client hosts using the VPN
connection to access the corporate network are protected against
attacks.
Other advantages include flexible communications and simplified network
design. An IP VPN
server eliminates long distance charges for dialing directly into the
corporate network and government network allowing low-cost access to
business-critical applications. It provides a secure link between remote
workers and branch offices or external parties over the Internet, allowing
companies to deploy core applications across global networks. The primary
applications of a VPN are remote access, site-to-site connectivity and
extranets.
For
IP VPNs to flourish, there arises a strong need for appropriate tools and
technologies. For example, tools such as bandwidth managers, traffic
shapers, content delivery networks and caching schemes to cope with network
bottlenecks are urgently needed. Cost is by far the biggest reason that
companies are using a service provider’s IP VPN. IP VPN solution providers
are moving to offer additional security solutions such as IPsec, intrusion
detection and key management.
VPN
solutions now have the ability to simplify the corporate network and make
it a more effective part of a company’s business. VPNs today are about
enabling leading business practices and managing policy relationships
between enterprises, their associates, partners and
customers.
Since most IDSs cannot stop an attack in progress, an IDS should not be
considered a replacement for firewalls or other defenses.
Wireless
Security
As the world becomes wireless, wireless communication devices and products
are penetrating everyone’s daily life. Unauthorized users may be
lurking on wireless local area network (WLAN). The enthusiasm for 802.11b
wireless networking has been dampened by reports of vulnerabilities in the
protocol’s WEP algorithm, an algorithm that is supposed to protect
wireless communication from eavesdropping and unauthorized access. There
are a number of potential security problems posed by WLANs such as
eavesdropping, tampering with transmitted messages, defeating access
control measures and denial of service. Though these security threats
loom, wireless systems are becoming a hot commodity among
businesses and consumers.
This propels security experts to think about devising mechanisms
and tools for WLAN security.
A wireless network uses radio waves to transmit data to everyone within
range, so special precautions need to be taken to ensure that those
signals cannot be intercepted in transit. WEP relies on a secret key that
is shared between a mobile station and an access point. The secret key is
used to encrypt packets before they are transmitted, and an integrity
check is used to ensure that packets are not modified in transit.
However, due to potential flaws in WEP, hackers can break into wireless
systems using off-the-shelf equipment positioned within transmitting
range of a WLAN. As a result, the WLAN is susceptible to the following
types of attacks:
Thus
it is wise not to depend solely on WEP and to use other security
mechanisms for enhancing WEP and WLAN security.
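For reference, WEP's per-packet scheme as described above (a shared secret key, RC4 stream encryption, and an integrity check) can be sketched as follows. The key value is a placeholder, and this sketch is for illustration only; the CRC-32 integrity check shown here is one of the very weaknesses that make WEP unsafe to rely on:

```python
# Sketch of WEP-style per-packet encryption: RC4 keyed with IV || shared key,
# plus a CRC-32 integrity check value (ICV) appended to the plaintext.
import os
import zlib

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation (PRGA), XORed with the data
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(shared_key: bytes, plaintext: bytes):
    iv = os.urandom(3)  # 24-bit IV, transmitted in the clear
    icv = zlib.crc32(plaintext).to_bytes(4, "little")
    return iv, rc4(iv + shared_key, plaintext + icv)

def wep_decrypt(shared_key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    plain = rc4(iv + shared_key, ciphertext)  # RC4 is its own inverse
    body, icv = plain[:-4], plain[-4:]
    assert zlib.crc32(body).to_bytes(4, "little") == icv, "integrity check failed"
    return body

iv, ct = wep_encrypt(b"placeholder-key", b"hello station")
print(wep_decrypt(b"placeholder-key", iv, ct))  # b'hello station'
```

The short 24-bit IV means keystreams repeat, and CRC-32 is linear rather than cryptographic, which is why the attacks listed above are practical and why the layered defenses below are needed.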
Here are a couple of security procedures to follow when companies set up a
wireless LAN. The first is that wireless networks should be WEP-enabled:
WEP does provide an encryption system, and deploying wireless networks
without any encryption invites serious repercussions. The second is to
isolate the WLAN and enhance encryption; that is, after enabling WEP, we
should also consider other security measures in order to compensate for
its vulnerabilities.
There
are two other security measures, as follows. One is to place our wireless
network outside of the firewall and treat it just as we would treat the
rest of the Internet. That is, we have to recognize that it can’t be
trusted, that anything could happen on it, and hence that we should
firewall it off from all of our sensitive corporate assets. The second
suggestion is to use a
virtual private network (VPN) for all traffic on the WLAN. The VPN will do
its own end-to-end encryption on top of WEP. We can use such popular VPN
protocols as PPTP and IPSec to accomplish this and finally set up a VPN
server/router that connects the WLAN segment to our LAN
segment.
There is another, cheaper alternative. Normally, a single encryption key
is configured identically for everyone who is supposed to have access to
the wireless network. Usually this key is set up once, when the password
is handed out, and often stays the same for months or even years. The
suggestion is that the wireless system should employ extensions to WEP
that perform dynamic key changes, modifying the wireless encryption key
once every 10 minutes. If a key is ever compromised, we then lose at most
10 minutes of data, and changing the key frequently makes WEP attacks
hard to mount.
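The 10-minute rotation idea can be sketched as time-based key derivation from a shared master secret. The HMAC-based derivation below illustrates the principle only; it is not the actual WEP key-management extension, and the master secret is a placeholder:

```python
# Sketch of dynamic key rotation: derive a fresh session key from a master
# secret every 10-minute interval, so a captured key is useful only briefly.
import hashlib
import hmac

MASTER_SECRET = b"shared-master-secret"  # placeholder, provisioned out of band
INTERVAL = 600  # seconds (10 minutes)

def session_key(now_seconds: int) -> bytes:
    """Derive the key for whichever 10-minute interval 'now' falls in."""
    epoch = now_seconds // INTERVAL  # changes once every 10 minutes
    return hmac.new(MASTER_SECRET, str(epoch).encode(), hashlib.sha256).digest()

# Same interval -> same key; next interval -> a completely different key.
print(session_key(0) == session_key(599))   # True
print(session_key(599) == session_key(600)) # False
```

Both sides can compute the same key independently from synchronized clocks, so no key material ever crosses the air, and an attacker who recovers one session key learns nothing about past or future intervals.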
A
number of new products are attempting to rally support by providing
additional measures of security and control. Microsoft has thrown its
considerable weight behind 802.11b. Microsoft has incorporated a host of
wireless-related features to the Windows XP OS. These include new driver
support and client association tools, but the most significant feature is
the integration of the nascent 802.1x standard, a move toward
user-authenticated network access control. As part of the 802.1x standard,
the Windows XP client natively supports Extensible Authentication Protocol
(EAP), which provides dynamic, session-specific wireless encryption keys,
central user administration via specialized third-party Remote
Authentication Dial-In User Service (RADIUS) servers, and mutual
authentication between client and Access point (AP) and AP to RADIUS
server.
As
802.11b authenticates the hardware, not the user, stolen laptops or forged
media access control (MAC) addresses can be used to infiltrate the
network. With EAP, the RADIUS server will authenticate the user, not just
the hardware, providing a scalable, centrally managed authentication
solution. Also, EAP’s dynamic WEP keys reduce the exposure of the same WEP
key over multiple transmissions, reducing the risk of the latest
cryptographic vulnerabilities.
Cisco
also offers a wireless-ready RADIUS server through its Cisco
Secure Access Control Server. This can be used with Cisco’s proprietary
Lightweight Extensible Authentication Protocol implementation and it
already interoperates with 802.1x.
Additionally, Funk Software is bringing its own wireless-ready
solution, Steel-Belted RADIUS.
Another
hurdle to corporate wireless networking is a lack of centralized
management, making it difficult to implement and update a wireless
security policy across the enterprise. Wavelink Corp. has stepped into the
void by releasing Mobile Manager 5.0, which centralizes the discovery,
monitoring and configuration of access points across the network.
Laptops
nowadays have become handy, highly portable devices for executives
and other important people in enterprises. A stolen laptop can invalidate
an enterprise’s effort to secure its infrastructure from external threats.
Laptop computer theft, and the subsequent loss of sensitive data, has
become the Achilles heel of any enterprise’s efforts to protect its
intellectual property and the privacy of its clients and business
partners. There are methods that allow a single user to encrypt files on
laptops or desktops. However as the encryption methods are often flawed
and the encrypted files cannot be recovered under worst-case scenarios,
there arises a strong need for viable mechanisms that can protect the
confidential and sensitive information stored on laptops.
Any
enterprise with potentially sensitive information on laptop systems should
protect those assets with a solution that prevents access to the operating
system and applications as well as the data created by those applications.
Typically, this solution involves selecting a vendor application that
encrypts the entire hard drive or just files. Thus security managers
tasked with developing and implementing enterprise-level policies and
procedures for laptop protection have to think seriously about the
following questions before choosing a viable vendor and
solution.
There
are a few vendors that have recognized the need for enterprise-level
solutions and their offerings are also increasingly in line with the needs
of security administrators. Thus, enterprises can find competitive,
emerging solutions to mitigate the data risks associated with a stolen
laptop.
PointSec
Mobile Technologies, WinMagic, Ensure Technologies, F-Secure, Aliroo,
CopyTele, Utimaco and Vasco are some of the leading and emerging software
companies with security products for laptops.
Here is a brief explanation of the necessity of securing our PDAs and the
tools
that can fulfill the security requirements of handheld
devices.
Today
all our company’s executives and even the sales force carry around PDAs.
They are incredibly convenient but at the same time the information stored
in those handy little electronic notebooks faces the risk of being stolen
or corrupted. They probably contain PIN numbers, unlisted phone numbers,
credit card and calling card numbers. They even may contain sensitive
client, sales, and pricing information. Increasingly, network passwords
are stored in them as well.
Suppose someone has lost his or her PDA or it has been stolen. The loss of
a PDA has little to do with the hardware, as the cost of PDAs has come
down recently. Also, most users do not have much trouble restoring data,
as they keep it synchronized with their PC on a regular basis. It is thus
necessary to keep a duplicate of the PDA’s data on the PC with which it
is synchronized.
Also, most PDAs offer at least minimal security in the form of password
protection.
Unfortunately passwords can be guessed and cracking PDAs has become a
cottage industry in some places. Fortunately, there are third-party
encryption programs available for PDAs. For all but the most innocuous
data, such as public information simply carried in a PDA for convenience,
encryption protection should definitely be considered. JAWZ Inc. http://www.jawzinc.com/main.asp
supplies a 4096-bit email encryption tool. This is especially useful
because it lets us encrypt single messages, all messages or just those in a
selected category. Ilium
Software http://www.iliumsoft.com/
has come out with a 40-bit RC4 eWallet encryption software for handhelds
as well as Windows and NT PCs. Developer One’s 56-bit CodeWallet is now
available for Windows CE PDAs and desktops.
Summary
Happenings around the world emphasize the criticality of information
security, what it is all about, and the need for security requirements
and security-related tools and technologies.
Some of the general warnings for
system and network administrators are as follows:
When
a potential threat has been identified, standard enterprise security
measures should be complemented by increased firewall analysis, intrusion
detection, and inspection of site usage logs. Penetration testing and
vulnerability scanning should be performed on at least a weekly basis.
Companies also have to recognize that their vulnerability does not end at
their own firewalls; their Internet Service Providers and server hosting
companies must have the necessary technology and processes in place to
quickly detect and react to denial of service attacks.
It
is also essential that companies create and document incident response
procedures and regularly perform the Internet equivalent of fire drills to
ensure that their responses are rapid and effective. Key management personnel should be
assigned to handle press inquiries, and the responsibility and criteria
for making the decision to involve law enforcement should be defined in
advance.
Small
and midsize companies are especially vulnerable to malicious attacks
because they usually cannot afford, or do not attract, personnel who have
extensive security experience. To strengthen network security and reduce
vulnerability to an attack, the following points have been recommended by
the Gartner Group.
To
succeed in the fiercely competitive e-commerce marketplace, business must
become fully aware of Internet security threats, take advantage of the
technology that overcomes them, and win customers’ trust. The solution for
businesses that are serious about e-commerce is to implement a complete
e-commerce trust infrastructure. PKI cryptography and digital signature
technology, applied via Secure Sockets Layer (SSL) digital certificates,
provide the authentication, data integrity, and privacy necessary for
e-commerce.
In
recent months, there have been several high-profile security breaches by
cyber criminals who logged onto e-business Web sites and gained
unauthorized access to confidential data and applications at the back
end.
Internet security issues continually arise and evolve, and those concerned
have to act with foresight and innovation.
Web
References
http://www.cert.org/ - This
site focuses on viruses and software flaws and contains an extensive
collection of security-related articles
http://www.eff.org/ - This
site represents a nonprofit group, which is focusing on protecting privacy
in the computer arena. This is loaded with a number of exciting papers on
Internet security
http://csrc.nist.gov/ - This
Web site contains information about computer security issues, products and
research of concern to federal agencies, industry and
users
http://www.securityportal.com/ - A gateway to useful security-related
resources
http://www.rsasecurity.com/ - RSA
Security Site
http://www.verisign.com/ - Verisign’s
Web site providing security tools and
resources