
Tech News - Cybersecurity

Meet ‘Foreshadow’: The L1 Terminal Fault in Intel’s chips

Melisha Dsouza
16 Aug 2018
5 min read
Intel's chips have been struck with yet another significant flaw, dubbed 'Foreshadow'. The flaw, also known as the L1 Terminal Fault or L1TF, targets Intel's Software Guard Extensions (SGX) within its Core chips. The US government's computer security body warned that an attacker could take advantage of this vulnerability in Intel's chips to obtain sensitive information. The flaw affects processors released since 2015. Thankfully, Intel has released a patch to combat the problem; check the full list of affected hardware on Intel's website. While Intel confirmed that it is not aware of any of these methods being used in real-world exploits, the tech giant is now under scrutiny. That was bound to happen, since Foreshadow completes a hat-trick for Intel, following the two similar attacks, Spectre and Meltdown, discovered in January this year. Intel says future processors will be built in such a way as to not be affected by Foreshadow.

How does Foreshadow affect your data?

The flaw was first brought to Intel's notice by researchers from KU Leuven in Belgium and from the universities of Adelaide and Michigan. Foreshadow exploits flaws in a computing technique known as speculative execution, and can specifically target a lock box within Intel's processors, letting a hacker leak any data desired. To give you the gist: a processor can run more efficiently by guessing the next operation to be performed. A correct prediction saves resources, while work based on an incorrect prediction gets scrapped. However, the process leaves behind clues, such as how long the processor takes to fulfill a certain request. An attacker can use these clues to find weaknesses and ultimately manipulate which path the speculation takes, reading the data that leaks out of a process's cache at opportune moments. Speculative execution flaws are important to guard against because an attacker could use them to access data and system privileges meant to be off-limits. The most intriguing part of the story, as stated by hardware security researcher and Foreshadow contributor Jo Van Bulck, is: "Spectre is focused on one speculation mechanism, Meltdown is another, and Foreshadow is another."

"This is not an attack on a particular user, it's an attack on infrastructure." - Yuval Yarom, University of Adelaide

After the discovery of Spectre and Meltdown, the researchers found it only fitting to look for speculative execution flaws in the SGX enclave. To give you an overview, Software Guard Extensions (SGX) was originally designed to protect code from disclosure or modification. SGX is included in 7th-generation Core chips and above, as well as the corresponding Xeon generation. Code inside SGX remains protected even when the BIOS, VMM, operating system, and drivers are compromised, meaning that an attacker with full execution control over the platform can be kept away. SGX allows programs to establish secure enclaves on Intel processors: regions of the chip restricted to running code that the computer's operating system cannot access or change. This creates a safe space for sensitive data; even if the main computer is compromised by malware, the sensitive data remains safe. That, apparently, isn't entirely the case.
Wired further stresses that the Foreshadow bug could break down the walls between virtual machines, a real concern for cloud companies whose services share space with other theoretically isolated processes. Watch this YouTube video for more clarity on how Foreshadow works: https://www.youtube.com/watch?v=ynB1inl4G3c&feature=youtu.be

The Quick Fix to Foreshadow

Prior to details of the flaw being made public, Intel had created its fix and coordinated its response with the researchers on Tuesday. The fix disables some of the chip features that were vulnerable to the attack. Along with the software mitigations, the bug will also be patched at the hardware level in Cascade Lake, an upcoming Xeon chip, as well as in future Intel processors expected to launch later this year. The mitigation limits the extent to which the same processor can be used simultaneously for multiple tasks, so companies running cloud computing platforms could see a significant hit to their collective computing power. On Tuesday, the cloud services companies Amazon, Google, and Microsoft said they had put a fix for the problem in place. Intel is working with these cloud providers, for whom uptime and performance are key, to "detect L1TF-based exploits during system operation, applying mitigation only when necessary," wrote Leslie Culbertson, executive vice president and general manager of Product Assurance and Security at Intel. Individual computer users are advised, as ever, to download and install any available software updates. The research team confirmed that it was unlikely individuals would see any performance impact; as long as your system is patched, you should be okay. Check out PCWorld's guide on how to protect your PC against Meltdown and Spectre. You can also head over to the Red Hat blog for more on Foreshadow.

NetSpectre attack exploits data from CPU memory
Intel's Spectre variant 4 patch impacts CPU performance
7 Black Hat USA 2018 conference cybersecurity training highlights
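As a practical footnote to the patching advice above: recent Linux kernels report the status of CPU vulnerability mitigations, including L1TF, through sysfs. The sketch below simply reads those files; paths and the exact status strings vary by kernel version, so treat it as an illustrative check rather than an official Intel tool.

```python
# Minimal sketch: read the kernel's own report of CPU vulnerability
# mitigations (available on recent Linux kernels). The status strings
# (e.g. "Mitigation: PTE Inversion") vary by kernel version.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def report() -> None:
    if not VULN_DIR.is_dir():
        print("No vulnerability reporting on this kernel.")
        return
    for entry in sorted(VULN_DIR.iterdir()):
        # Each file (e.g. 'l1tf', 'meltdown', 'spectre_v2') holds one status line.
        print(f"{entry.name}: {entry.read_text().strip()}")

if __name__ == "__main__":
    report()
```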

Homebrew's Github repo got hacked in 30 mins. How can open source projects fight supply chain attacks?

Savia Lobo
14 Aug 2018
5 min read
On 31st July 2018, security researcher Eric Holmes gained access to Homebrew's GitHub repo with surprising ease (he documents his experience in an in-depth Medium post). Homebrew is a free and open-source software package management system, home to well-known packages like node, git, and many more, that simplifies the installation of software on macOS. The credentials he found carried recently elevated scopes, giving him git push access to Homebrew/brew and Homebrew/homebrew-core. He was able to get in and make his first commit to Homebrew's GitHub repo within 30 minutes.

Attack = higher chances of obtaining user credentials

After gaining easy access to Homebrew's GitHub repositories, Eric's prime motive was to uncover user credentials of some of the members of the Homebrew GitHub org. For this, he used gitrob, an OSINT tool by Michael Henriksen that automates the credential search. However, he could not find anything interesting. Next, he explored Homebrew's previously disclosed issues on https://hackerone.com/Homebrew, which led him to the observation that Homebrew runs a Jenkins instance that is (intentionally) publicly exposed at https://jenkins.brew.sh. Digging further, Eric noticed that the builds in the "Homebrew Bottles" project were making authenticated pushes to the BrewTestBot/homebrew-core repo, which led him to an exposed GitHub API token. The token opened commit access to these core Homebrew repos:
Homebrew/brew
Homebrew/homebrew-core
Homebrew/formulae.brew.sh

Eric stated in his post, "If I were a malicious actor, I could have made a small, likely unnoticed change to the openssl formulae, placing a backdoor on any machine that installed it." Via such a backdoor, intruders could have gained access to private company networks that use Homebrew, which could in turn lead to a large-scale data breach. Eric reported the issue to Homebrew developer Mike McQuaid, and the project publicly disclosed it on its blog at https://brew.sh/2018/08/05/security-incident-disclosure/. Within a few hours, the credentials had been revoked, replaced, and sanitised within Jenkins so they would not be revealed in future. Homebrew/brew and Homebrew/homebrew-core were updated so that non-administrators on those repositories cannot push directly to master. The Homebrew team also worked with GitHub to audit the access token and ensure it wasn't used maliciously and didn't make any unexpected commits to the core Homebrew repos.

As an ethical hacker, Eric reported the vulnerabilities he found to the Homebrew team and did no harm to the repo itself. But not all projects may have such happy endings.

How can one safeguard their systems from supply chain attacks?

Eric Holmes acted responsibly and informed the Homebrew developers. However, not every hacker has good intentions, and it is an organization's responsibility to keep a check on all the supply chains it depends on.

Keeping a check on all the libraries

One should not allow random libraries into the supply chain. It is difficult to partition libraries from an organization's custom code, so both run with the same privileges, putting the company's security at risk. Set policies around the code the company is willing to allow: only projects with high popularity, active committers, and evidence of process should be accepted.

Establishing guidelines

Each company should create guidelines for the secure use of the libraries it selects.
For this, a prior definition of what each library is expected to be used for should be made. Developers should also be given details on safely installing, configuring, and using each library within their code, including identifying dangerous methods and how to use them safely.

Thorough vigilance over the inventory

Every organization should keep a check on its inventory to know which open source libraries it is using, and should set up a notification system that keeps it abreast of new vulnerabilities affecting its applications and servers.

Protection during runtime

Organizations should also make use of runtime application self-protection (RASP) to prevent both known and unknown library vulnerabilities from being exploited. If new vulnerabilities are noticed, the RASP infrastructure enables a response within minutes.

The software supply chain is an essential part of creating and deploying applications quickly; hence, one should take complete care to avoid any misuse via this channel. Read the detailed story of Homebrew's attack escape in its blog post, and Eric's firsthand account of how he planned the attack and the motivation behind it in his Medium post.

DCLeaks and Guccifer 2.0: Hackers used social engineering to manipulate the 2016 U.S. elections
Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news
YouTube has a $25 million plan to counter fake news and misinformation
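One small, practical step in incidents like the one above is auditing what a leaked GitHub token could actually do before (and after) revoking it. The sketch below is a minimal illustration that queries GitHub's API and reads the documented X-OAuth-Scopes response header; the environment variable name is hypothetical, and this is only one corner of a proper incident response.

```python
# Minimal sketch: list the scopes attached to a (leaked) GitHub token,
# so you know what an attacker could have done with it before revoking.
import os
import urllib.request

def token_scopes(token: str) -> list:
    req = urllib.request.Request(
        "https://api.github.com/user",
        headers={"Authorization": f"token {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        # GitHub reports the token's scopes in this response header.
        scopes = resp.headers.get("X-OAuth-Scopes", "")
    return [s.strip() for s in scopes.split(",") if s.strip()]

if __name__ == "__main__":
    leaked = os.environ.get("LEAKED_GITHUB_TOKEN")  # hypothetical variable name
    if leaked:
        print("token scopes:", token_scopes(leaked))
    else:
        print("set LEAKED_GITHUB_TOKEN to audit a token's scopes")
```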

IBM’s DeepLocker: The Artificial Intelligence powered sneaky new breed of Malware

Melisha Dsouza
13 Aug 2018
4 min read
In the new-found age of Artificial Intelligence, where everything and everyone uses machine learning to make life easier, the dark side of the same technology is easily left unexplored. Cybersecurity is gaining a lot of attention these days. The most influential organizations have suffered breaches because of undetected malware that managed to evade even the most secure cyber defense mechanisms. The job just got easier for cyber criminals who exploit AI to empower themselves and launch attacks. Imagine combining AI with cyber attacks! At last week's Black Hat USA 2018 conference, IBM researchers presented "DeepLocker", their newly developed, AI-backed malware. Weaponized AI seems here to stay.

Read also: Black Hat USA 2018 conference Highlights for cybersecurity professionals

All you need to know about DeepLocker

Simply put, DeepLocker is a new generation of malware which can fly under the radar and go undetected until it reaches its target. It uses an AI model to identify its target using indicators like facial recognition, geolocation, and voice recognition - all of which are easily available on the web these days! What's interesting is that the malware can hide its malicious payload in carrier applications, like a video conferencing application, and go undetected by most antivirus and malware scanners until it reaches specific victims.

Imagine sitting at your computer performing daily tasks. Given that your profile pictures are available on the internet, your video camera can be used to find a match to your online picture. Once the target (your face) is identified, the malicious payload is unleashed: your face serves as the key that unlocks the virus. This simple "trigger condition" for unlocking the attack is almost impossible to reverse engineer, and the malicious payload will only be unlocked if the intended target is reached. DeepLocker achieves this using a deep neural network (DNN) AI model: the simple "if this, then that" trigger condition is transformed into a deep convolutional network inside the AI model.

DeepLocker – AI-Powered Concealment (Source: SecurityIntelligence)

DeepLocker makes it really difficult for malware analysts to answer the three main questions: What target is the malware after - people's faces or some other visual clue? What specific instance of the target class is the valid trigger condition? And what is the ultimate goal of the attack payload?

Now that's some commendable work by the IBM researchers. IBM has always strived to make a mark in the field of innovation, and DeepLocker comes as no surprise, given that IBM has the highest number of facial recognition patents granted in 2018.

BlackHat USA 2018 sneak preview

The main aims of the IBM researchers - Marc Ph. Stoecklin, Jiyong Jang and Dhilung Kirat - in briefing the crowd at the Black Hat USA 2018 conference were:
To raise awareness that AI-powered threats like DeepLocker can be expected very soon
To demonstrate how attackers have the capability to build stealthy malware that can circumvent commonly deployed defenses
To provide insights into how to reduce risks and deploy adequate countermeasures

To demonstrate DeepLocker's capabilities, they designed and demonstrated a proof of concept: the WannaCry virus was camouflaged in a benign video conferencing application so that it remained undetected by antivirus engines and malware sandboxes.
As a triggering condition, an individual was selected, and the AI was trained to launch the malware when certain conditions, including facial recognition of the target, were met. The experiment was, undoubtedly, a success. DeepLocker is just an experiment by IBM to show how open-source AI tools can be combined with straightforward evasion techniques to build targeted, evasive, and highly effective malware. As the world of cybersecurity constantly evolves, security professionals will now have to up their game to combat hybrid malware attacks. Found this article interesting? Read the Security Intelligence blog to discover more.

7 Black Hat USA 2018 conference cybersecurity training highlights
12 common malware types you should know
Social engineering attacks – things to watch out for while online
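The "trigger attributes act as the key" idea described above can be illustrated without any real malware or face recognition at all. The sketch below is purely conceptual and uses invented names and values: a payload key is derived from a hash of a classifier's output, so the encrypted blob only ever decrypts when the exact expected result is observed. It does not reflect IBM's actual implementation.

```python
# Conceptual sketch only (no real malware, no real face recognition):
# a payload key is derived from a hash of the "trigger" observation,
# so the payload stays opaque unless the exact trigger is seen.
import hashlib

def derive_key(trigger_attributes: bytes) -> bytes:
    # In DeepLocker's description the trigger is a DNN output (e.g. a face
    # embedding); here it is just an opaque, hypothetical byte string.
    return hashlib.sha256(trigger_attributes).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# "Attacker" side: lock a (benign) payload under the expected trigger.
expected_trigger = b"hypothetical-target-embedding"
payload = b"print('hello from the unlocked payload')"
locked = xor_bytes(payload, derive_key(expected_trigger))

# "Runtime" side: only the correct trigger reproduces the key.
for observed in (b"some-other-person", expected_trigger):
    unlocked = xor_bytes(locked, derive_key(observed))
    print(observed, "->", unlocked if unlocked == payload else "still locked")
```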

Introducing TLS 1.3, the first major overhaul of the TLS protocol with improved security and speed

Savia Lobo
13 Aug 2018
3 min read
The Internet Engineering Task Force (IETF), the organization that defines internet protocols, has standardized the latest version of one of its most important security protocols, Transport Layer Security (TLS). Introducing TLS 1.3. The latest version, TLS 1.3, i.e. RFC 8446, was published on August 10, 2018. It is the first major overhaul of the protocol and brings significant security and performance improvements.

https://youtu.be/HFzXrqw-UpI

TLS 1.3 vs TLS 1.2

TLS 1.2 was defined in RFC 5246 and has been in use by the majority of web browsers for eight years. The IETF finalized TLS 1.3 on March 21, 2018. TLS 1.2 can still be deployed securely; however, many high-profile vulnerabilities have exploited certain parts of the 1.2 protocol along with some outdated algorithms. In TLS 1.3, these problems have been resolved and the included algorithms have no known vulnerabilities. In contrast to TLS 1.2, v1.3 adds privacy to data exchanges by encrypting more of the negotiation handshake to protect it from eavesdroppers; this helps protect the identities of the participants and impedes traffic analysis. In short, TLS 1.3 offers faster speed and increased security. Companies such as Cloudflare are already making TLS 1.3 available to their customers.

What's new in TLS v1.3?

Improved security

The outdated and insecure features of TLS 1.2 that are removed in v1.3 include:
SHA-1
RC4
DES
3DES
AES-CBC
MD5
Arbitrary Diffie-Hellman groups — CVE-2016-0701
EXPORT-strength ciphers – responsible for FREAK and LogJam

The cryptographic community has been continuously analyzing, improving, and validating the security of TLS 1.3. The new version also removes all primitives and features that have contributed to weak configurations and enabled common vulnerability exploits like DROWN, Vaudenay, Lucky 13, POODLE, SLOTH, CRIME, and more.

Improved speed

Web performance used to suffer from TLS and other encrypted connections; HTTP/2 helped overcome this problem, and TLS 1.3 speeds up encrypted connections even more with features such as TLS False Start and Zero Round Trip Time (0-RTT). Simply put, TLS 1.2 requires two round-trips to complete the TLS handshake, while v1.3 requires only one, which cuts the encryption latency in half. Another interesting feature of TLS 1.3 is that, for sites a user has visited previously, data can now be sent on the very first message to the server. This is called a "zero round trip" (0-RTT) and results in improved load times.

Browser support for TLS v1.3

Google has started warning users in Search Console that it is moving to TLS version 1.2, as TLS 1.0 is no longer considered safe. TLS 1.3 is enabled in Chrome 63 for outgoing connections; support for TLS 1.3 was first added back in Chrome 56 and is also supported by Chrome for Android.

https://twitter.com/screamingfrog/status/940501282653077505

TLS 1.3 is enabled by default in Firefox 52 and above (including Quantum). Mozilla is retaining an insecure fallback to TLS 1.2 until it knows more about server tolerance and the 1.3 handshake.

TLS 1.3 browser support

Other browsers such as IE, Microsoft Edge, Opera, and Safari do not support TLS 1.3 yet. This will take some time as the protocol is finalized and browsers catch up; most of the remaining implementations are currently in development.
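As a quick way to see which protocol version a given server actually negotiates, here is a minimal sketch using Python's standard ssl module. Requiring TLS 1.3 as the minimum version needs a Python build linked against OpenSSL 1.1.1 or newer, so treat the exact constants as version-dependent; the host name is just an example.

```python
# Minimal sketch: connect to a host and report the negotiated TLS version.
# Requires Python compiled against OpenSSL 1.1.1+ for TLS 1.3 support.
import socket
import ssl

def negotiated_version(host: str, require_tls13: bool = False) -> str:
    ctx = ssl.create_default_context()
    if require_tls13:
        # Refuse to fall back to TLS 1.2 or older.
        ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. 'TLSv1.3' or 'TLSv1.2'

if __name__ == "__main__":
    print(negotiated_version("cloudflare.com"))  # example host
```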
Read about this in more detail on the IETF blog.

Analyzing Transport Layer Protocols
Communication and Network Security
A new WPA/WPA2 security attack in town: Wi-fi routers watch out!
Mozilla's new Firefox DNS security updates spark privacy hue and cry

7 Black Hat USA 2018 conference cybersecurity training highlights: Hardware attacks, IO campaigns, Threat Hunting, Fuzzing, and more

Melisha Dsouza
11 Aug 2018
7 min read
The 21st edition of Black Hat USA has just concluded. It took place from August 4 to August 9, 2018 in Las Vegas, Nevada. It is one of the most anticipated conferences of the year for security practitioners, executives, business developers, and anyone who is a cybersecurity fanatic and wants to expand their horizons into the world of security. Black Hat USA 2018 opened with four days of technical training followed by the two-day main conference featuring Briefings, Arsenal, Business Hall, and more. The conference offered exclusive training modules that provided hands-on offensive and defensive skill-building opportunities for security professionals. The Briefings covered the nitty-gritty of all the latest trends in information security, and the Business Hall hosted a network of more than 17,000 InfoSec professionals who evaluated a range of security products offered by Black Hat sponsors.

Best cybersecurity trainings at the conference

For more than 20 years, Black Hat has been providing its attendees with trainings that stand the test of time and prove to be an asset in penetration testing. The training modules designed exclusively for Black Hat attendees are delivered by industry and subject matter experts from all over the world with the goal of shaping the information security landscape. Here's a look at a few from this year's conference.

#1 Applied Hardware Attacks: Embedded and IoT systems

This hands-on training, led by Josh Datko and Joe Fitzpatrick:
Introduced students to the common interfaces on embedded MIPS and ARM systems
Taught them how to exploit physical access to grant themselves software privilege
Focussed on UART, JTAG, and SPI interfaces
Gave students a brief architectural overview
Was 70% hands-on labs - identifying, observing, interacting with, and eventually exploiting each interface
Also covered basic analysis and manipulation of firmware images

This two-day course was geared toward pen testers, red teamers, exploit developers, and product developers who wished to learn how to take advantage of physical access to systems to assist and enable other attacks. It also aimed to give security researchers and enthusiasts who are unwilling to 'just trust the hardware' deeper insight into how hardware works and can be undermined.

#2 Information Operations: Influence, exploit, and counter

This fast-moving class included hands-on exercises to apply and reinforce the skills learned during the training, along with a best-IO-campaign contest conducted live during the class. Trainers David Raymond and Gregory Conti covered information operations theory and practice in depth. Some of the main topics were IO strategies and tactics, countering information operations, and operations security and counter-intelligence. Attendees learned about online personas and explored the use of bots and AI to scale attacks and defenses. Other topics included understanding performance and assessment metrics, how to respond to an IO incident, and the concepts of deception, counter-deception, and cyber-enabled IO.

#3 Practical Vulnerability Discovery with Fuzzing

Abdul Aziz Hariri and Brian Gorenc trained students on techniques to quickly identify common patterns in specifications that produce vulnerable conditions in the network (a minimal mutation-fuzzing sketch appears at the end of this article). The course covered:
Learning the process of building a successful fuzzer, and highlighting public fuzzing frameworks that produce quality results
"Real world" case studies that demonstrated the fundamentals being introduced
Leveraging existing fuzzing frameworks, developing their own test harnesses, integrating publicly available data generation engines, and automating the analysis of crashing test cases

This class was aimed at individuals wanting to learn the fundamentals of the fuzzing process, develop advanced fuzzing frameworks, and/or improve their bug-finding capabilities.

#4 Active Directory Attacks for Red and Blue Teams

Nikhil Mittal's main aim in conducting the training was to change how you test an Active Directory environment. To secure Active Directory, it is important to understand the different techniques and attacks used by adversaries against it, and AD environments often lack the ability to tackle the latest threats. This training was therefore aimed at attacking a modern AD environment using built-in tools like PowerShell and other trusted OS resources, and was based on real-world penetration tests and Red Team engagements in highly secured environments. Some of the techniques covered were:
Extensive AD enumeration
Active Directory trust mapping and abuse
Privilege escalation (user hunting, delegation issues and more)
Kerberos attacks and defense (Golden ticket, Silver ticket, Kerberoast and more)
Abusing cross-forest trust (lateral movement across forests, PrivEsc and more)
Attacking Azure integration and components
Abusing SQL Server trust in AD (command execution, trust abuse, lateral movement)
Credential replay attacks (Over-PTH, token replay etc.)
Persistence (WMI, GPO, ACLs and more)
Defenses (JEA, PAW, LAPS, deception, app whitelisting, Advanced Threat Analytics etc.)
Bypassing defenses

Attendees also received one month of free access to an Active Directory environment comprising multiple domains and forests, available during and after the training.

#5 Hands-on Power Analysis and Glitching with ChipWhisperer

This course suited anyone dealing with embedded systems who needs to understand the threats that can break even a "perfectly secure" system. Side-channel power analysis can be used to read out an AES-128 key in less than 60 seconds from a standard implementation on a small microcontroller. Colin O'Flynn helped students understand whether their systems are vulnerable to such an attack. The course was loaded with hands-on examples covering attacks and theory, and included a ChipWhisperer-Lite, so students could take home the hardware provided during the lab sessions. Over the two days, topics included: the theory behind side-channel power analysis, measuring power in existing systems, setting up the ChipWhisperer hardware and software, several demonstrated attacks, understanding and demonstrating glitch attacks, and analyzing your own hardware.

#6 Threat Hunting with Attacker TTPs

The main aim of this class was a proper threat hunting program focused on maximizing the effectiveness of scarce network defense resources against a potentially limitless threat. Threat hunting takes a different perspective on network defense, relying on skilled operators to investigate and find the presence of malicious activity. Unlike standard network defense and incident response, which target the flagging of known malware, this training focussed on abnormal behaviors and the use of attacker Tactics, Techniques, and Procedures (TTPs).
Trainers Jared Atkinson, Robby Winchester and Roberto Rodriguez taught students how to create threat hunting hypotheses based on attacker TTPs in order to perform threat hunting operations and detect attacker activity. In addition, they used free and open source data collection and analysis tools (Sysmon, ELK and the Automated Collection and Enrichment Platform) to gather and analyze large amounts of host information and detect malicious activity. They applied these techniques and toolsets to create threat hunting hypotheses and perform threat hunting in a simulated enterprise network undergoing active compromise from various types of threat actors. The class was intended for defenders wanting to learn how to effectively hunt threats in enterprise networks.

#7 Hands-on Hardware Hacking Training

This class, taught by Joe Grand, took students through the process of reverse engineering and defeating the security of electronic devices. The comprehensive training covered product teardown, component identification, circuit board reverse engineering, soldering and desoldering, signal monitoring and analysis, and memory extraction, using a variety of tools including a logic analyzer, multimeter, and device programmer. It concluded with a final challenge in which students identified, reverse engineered, and defeated the security mechanism of a custom embedded system. Anyone interested in hardware hacking, including security researchers, digital forensic investigators, design engineers, and executive management, benefitted from this class.

And that's not all! Other trainings included Software Defined Radio, A Guide to Threat Hunting Utilizing the ELK Stack and Machine Learning, AWS and Azure Exploitation: Making the Cloud Rain Shells, and much more. This is just a brief overview of the Black Hat USA 2018 conference, from which we have handpicked a select few trainings. You can see the full schedule along with the list of selected research papers on the Black Hat website. And if you missed this one, fret not: another conference is happening soon, from 3rd December to 6th December 2018. Check out the official website for details.

Top 5 cybersecurity trends you should be aware of in 2018
Top 5 cybersecurity myths debunked
A new WPA/WPA2 security attack in town: Wi-fi routers watch out!
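To ground the fuzzing fundamentals from training #3, here is the minimal mutation-fuzzing harness sketch referenced above: it randomly flips bytes in a seed input, feeds the result to a target parser, and records inputs that raise unexpected exceptions. The toy target function is a hypothetical stand-in for whatever you actually want to test; real courses and frameworks build far more capable harnesses.

```python
# Minimal mutation-fuzzing sketch: mutate a seed input, run the target,
# and record inputs that crash it. The target below is a toy stand-in.
import random

def target_parser(data: bytes) -> None:
    # Hypothetical parser: "crashes" on a particular malformed header.
    if data[:2] == b"PK" and data[3] > 0xF0:
        raise ValueError("malformed archive header")

def mutate(seed: bytes, max_flips: int = 4) -> bytes:
    buf = bytearray(seed)
    for _ in range(random.randint(1, max_flips)):
        pos = random.randrange(len(buf))
        buf[pos] = random.randrange(256)
    return bytes(buf)

def fuzz(seed: bytes, iterations: int = 10_000) -> list:
    crashes = []
    for _ in range(iterations):
        case = mutate(seed)
        try:
            target_parser(case)
        except Exception:
            crashes.append(case)  # any unexpected exception is a finding
    return crashes

if __name__ == "__main__":
    findings = fuzz(b"PK\x03\x04data")
    print(f"{len(findings)} crashing inputs found")
```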

Stack Overflow revamps its Code of Conduct to explain what ‘Be nice’ means - kindness, collaboration, and mutual respect

Sugandha Lahoti
10 Aug 2018
3 min read
Stack Overflow has expanded its Code of Conduct, which previously focused on just "being nice," to include more virtues around kindness, collaboration, and mutual respect. Recently, there have been many supporters of the idea that Stack Overflow is a "toxic wasteland."

https://twitter.com/aprilwensel/status/974859164747931650

There is also a Reddit thread, from six months ago, where people have shared their woes about Stack Overflow being too toxic. This Code of Conduct is a formal, far less ambiguous, and more informative way for Stack Overflow to regulate belittling language and condescension. It applies to everyone using Stack Overflow and the Stack Exchange network, including the team, moderators, and anyone posting to Q&A sites or chat rooms.

The "Be Nice" policy, since its inception in 2008, was a single guiding principle that everyone was expected to follow. However, just two words turned out to be too little and too ambiguous, and in 2014 a revised version of the policy was released to reflect Stack Exchange as a better community than it was believed to be on the Internet. The revised version also added instructions on how to report rare cases of bad behavior. Still, this was not specific enough to meet the needs of the much larger, dynamic site Stack Overflow was growing to be. This is when they decided to launch a more formal policy, one that covers "Be nice, here's how, here's why, and here's what to do if someone isn't."

The main tenets of the new code are:
If you're here to get help, make it as easy as possible for others to help you.
If you're here to help others, be patient and welcoming. Offer support if you see someone struggling or otherwise in need of help.
Be clear and constructive when giving feedback, and be open when receiving it.
Be kind and friendly. Avoid sarcasm and be careful with jokes, as tone can be hard to decipher online.

The code also denounces subtle put-downs or unfriendly language, name-calling or personal attacks, bigotry, and harassment. (Source: Stack Overflow)

If someone breaks the code of conduct, there are three stages:
Warning: for most first-time misconduct, moderators will remove the offending content and send a warning.
Account suspension: for repeated misconduct, moderators will impose a temporary suspension.
Account expulsion: in very rare cases, moderators will expel people who display a pattern of harmful, destructive behavior towards the community.

The Stack Overflow team plans to assess the CoC by taking feedback every six months from both new and experienced users about their recent experiences on the site. They have also added a code-of-conduct tag which members can use on Meta Stack Exchange to ask questions about or propose changes to the CoC. You can go through the entire Code of Conduct on Stack Overflow.

10 predictable findings from Stack Overflow's 2018 survey
Stack Overflow Developer Survey 2018: A Quick Overview
4 surprising things from Stack Overflow's 2018 survey
96% of developers believe developing soft skills is important
Let's Encrypt SSL/TLS certificates gain the trust of all Major Root Programs

Melisha Dsouza
09 Aug 2018
2 min read
Let's Encrypt is a certificate authority that enables HTTPS on your website. Initially, major browsers and root certificate programs were apprehensive about trusting this CA. The tide has now turned for Let's Encrypt, who, in their announcement yesterday, stated that they are now directly trusted by the major root programs of Microsoft, Google, Apple, Mozilla, Oracle, and Blackberry.

With these big names now associated with Let's Encrypt's SSL certificates, end users are in for a host of advantages. They can obtain a trusted certificate from Let's Encrypt at zero cost. Software running on a web server can not only obtain a certificate, but also securely configure it for use and automatically renew it as and when needed. This certificate authority also ensures that TLS security is taken seriously. It aims to benefit the community by maintaining transparency in issuing and revoking certificates, which are publicly recorded for inspection, and by publishing its approach as an open standard for others to adopt.

Initially, Let's Encrypt started with the trust of many browsers but not of the major root programs, the main reason being that it was a very new certificate authority, launched in early April 2016. To overcome this roadblock, its intermediate "Let's Encrypt Authority X3" is signed by ISRG Root X1, and the intermediate is also cross-signed by another certificate authority, IdenTrust, whose root has long been trusted by all major browsers. This indirect circle of trust has been a game changer for Let's Encrypt. There are still many older operating systems, browsers, and devices that do not directly trust Let's Encrypt; some of these will eventually be updated to trust it directly, and some will not. Until they move out of the trust and security scene, Let's Encrypt plans to keep using a cross signature.

Currently providing certificates for more than 115 million websites, Let's Encrypt is definitely making its presence felt! Head over to the official Let's Encrypt site for more insights on this announcement. You can also check out Black Hills' post for information on why Let's Encrypt is making the rounds on the internet these days.

A new WPA/WPA2 security attack in town: Wi-fi routers watch out!
Top 5 cybersecurity trends you should be aware of in 2018
Mozilla's new Firefox DNS security updates spark privacy hue and cry
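One easy way to see whether a site's certificate chains through Let's Encrypt is to inspect the issuer of the certificate the server presents. The sketch below uses Python's standard ssl module; the host name is only an example, and the issuer fields you see depend on the site you query.

```python
# Minimal sketch: print the issuer of the certificate a host presents,
# e.g. to check whether it was issued by Let's Encrypt.
import socket
import ssl

def issuer_of(host: str) -> dict:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # parsed leaf certificate
    # 'issuer' is a tuple of RDNs; flatten it into a dict for readability.
    return {k: v for rdn in cert["issuer"] for (k, v) in rdn}

if __name__ == "__main__":
    print(issuer_of("letsencrypt.org"))  # example host
```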

DARPA on the hunt to catch deepfakes with its AI forensic tools underway

Natasha Mathur
08 Aug 2018
5 min read
The U.S. Defense Advanced Research Projects Agency (DARPA) has come out with AI-based forensic tools to catch deepfakes, first reported by MIT Technology Review yesterday. According to MIT Technology Review, more tools are currently in development to expose fake images and revenge-porn videos on the web. DARPA's deepfake mission project was announced earlier this year.

(Image: Alec Baldwin on Saturday Night Live, face-swapped with Donald Trump)

As mentioned in the MediFor blog post, "While many manipulations are benign, performed for fun or for artistic value, others are for adversarial purposes, such as propaganda or misinformation campaigns." This is one of the major reasons why DARPA forensics experts are keen on finding methods to detect deepfake videos and images.

How did deepfakes originate?

Back in December 2017, a Reddit user named "DeepFakes" posted extremely real-looking explicit videos of celebrities, using deep learning techniques to insert celebrities' faces into adult movies. With deep learning, one can combine and superimpose existing images and videos onto original images or videos to create realistic-seeming fake videos. As per MIT Technology Review, video forgeries are created using a machine-learning technique, generative modeling, that lets a computer learn from real data before producing fake examples that are statistically similar. Video tampering also uses a pair of neural networks, known as generative adversarial networks, which work in conjunction "to produce ever more convincing fakes."

Why are deepfakes toxic?

An app named FakeApp was released earlier this year which made creating deepfakes quite easy. FakeApp uses neural networking tools developed by Google's AI division, and the app trains itself to perform image-recognition tasks using trial and error. Since its release, the app has been downloaded more than 120,000 times, and there are tutorials online on how to create deepfakes. Apart from this, there are regular requests on deepfake forums asking users for help in creating face-swap porn videos of ex-girlfriends, classmates, politicians, celebrities, and teachers. Deepfakes can even be used to create fake news, such as world leaders declaring war on a country. The toxic potential of this technology has led to growing concern, as deepfakes have become a powerful tool for harassing people. Once deepfakes found their way onto the world wide web, many websites such as Twitter and PornHub banned them from their platforms. Reddit also announced a ban on deepfakes earlier this year, killing the "deepfakes" subreddit, which had more than 90,000 subscribers, entirely.

MediFor: DARPA's AI weapon to counter deepfakes

DARPA's Media Forensics group, also known as MediFor, works along with other researchers on developing AI tools to catch deepfakes. It is currently focusing on four techniques to catch the audiovisual discrepancies present in a forged video: analyzing lip sync, detecting speaker inconsistency, detecting scene inconsistency, and detecting content insertions. One technique comes from a team led by Professor Siwei Lyu of SUNY Albany. Lyu mentioned that they "generated about 50 fake videos and tried a bunch of traditional forensics methods. They worked on and off, but not very well." As deepfakes are created from static images, Lyu noticed that the faces in deepfake videos rarely blink, and that eye movement, when present, is quite unnatural.
An academic paper titled "In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking," by Yuezun Li, Ming-Ching Chang and Siwei Lyu, explains a method to detect forged videos using Long-term Recurrent Convolutional Networks (LRCN). According to the research paper, people on average blink about 17 times a minute, or 0.283 times per second; this rate increases with conversation and decreases while reading. There are many other techniques used for eye-blink detection, such as detecting the eye state by computing the vertical distance between eyelids, measuring the eye aspect ratio (EAR), and using a convolutional neural network (CNN) to detect open and closed eye states. But Li, Chang, and Lyu take a different approach: they rely on an LRCN model. They first perform pre-processing to identify facial features and normalize the video frame orientation, then pass cropped eye images into the LRCN for evaluation. The technique is quite effective and compares favorably with other approaches, with a reported accuracy of 0.99 (LRCN) versus 0.98 (CNN) and 0.79 (EAR).

However, Lyu says that a skilled video editor can fix the non-blinking deepfakes by using source images that show blinking eyes. Lyu's team has another effective technique in the works to counter even that, though he hasn't divulged any details. Others at DARPA are on the lookout for similar cues, such as strange head movements and odd eye color, as these little details are bringing the team ever closer to reliable detection of deepfakes. As mentioned in the MIT Technology Review post, "the arrival of these forensics tools may simply signal the beginning of an AI-powered arms race between video forgers and digital sleuths." MediFor also states that "If successful, the MediFor platform will automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video." Deepfakes need to stop, and DARPA seems all set to fight against them.

Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news
A new WPA/WPA2 security attack in town: Wi-fi routers watch out!
YouTube has a $25 million plan to counter fake news and misinformation
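To make the eye-aspect-ratio (EAR) heuristic mentioned above concrete, here is a minimal numpy sketch of the commonly used EAR formula over six eye landmarks, with a simple "eye closed" threshold. The landmark coordinates and the 0.2 threshold are illustrative values chosen for this sketch, not parameters taken from the paper.

```python
# Minimal sketch of the eye-aspect-ratio (EAR) blink heuristic:
# EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) over six eye landmarks.
# Landmarks and threshold below are illustrative only.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    # eye: array of shape (6, 2), landmarks ordered p1..p6 around the eye.
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def is_blinking(eye: np.ndarray, threshold: float = 0.2) -> bool:
    # A low EAR means the eyelids are close together (eye closed).
    return eye_aspect_ratio(eye) < threshold

if __name__ == "__main__":
    open_eye = np.array([[0, 2], [2, 4], [4, 4], [6, 2], [4, 0], [2, 0]], float)
    closed_eye = np.array([[0, 2], [2, 2.3], [4, 2.3], [6, 2], [4, 1.8], [2, 1.8]], float)
    print(is_blinking(open_eye), is_blinking(closed_eye))  # False True
```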

Facebook open sources Fizz, the new generation TLS 1.3 Library

Melisha Dsouza
08 Aug 2018
3 min read
Facebook has open-sourced Fizz, a new TLS 1.3 library, to secure websites against cyberattacks and to sharpen its focus on safe data traversal across the internet. TLS 1.3 is now taking good shape: Facebook claims that more than 50% of its web traffic is already secured via TLS 1.3 and Fizz. Since the Facebook infrastructure is so widespread, a protocol like TLS is of great importance. Solving SSL's issues of both latency and data exposure, the TLS protocol uses stronger encryption for messages to maintain the privacy of certificates, redesigns the way secret keys are derived, and uses a zero round-trip connection setup to accelerate requests. Thus, TLS overcomes the shortcomings of the previously used SSL protocol.

What problem does Fizz solve for Facebook?

Assisting the Internet Engineering Task Force's efforts to improve the TLS protocol, Fizz now plays its own part. One of the major issues faced by the engineers at Facebook was writing data into huge chunks of memory, which increased resource overhead and reduced server speed. To combat this, Fizz divides the data into smaller chunks and moves it into memory while encrypting it in place. This technique, called "scatter/gather I/O," processes data much more efficiently.

(Image: Scatter/Gather I/O. Source: code.fb.com)

The next big thing Fizz aims to do is replace the previously deployed Zero protocol with TLS 1.3. The Zero protocol enabled Facebook to experiment with 0-RTT secure connections, which reduced the latency of requests and the overhead needed to deploy TLS. Fizz has now taken over from the Zero protocol, providing zero-copy encryption and decryption and tight integration with other parts of the infrastructure while reducing memory and CPU usage. This improves user experience, particularly on app startup when there are no existing connections to reuse. All of this is done at the same speed as the Zero protocol while providing 10 percent higher throughput.

In today's world, servers are scattered everywhere, and they usually need to call services in other locations in the middle of a handshake, so asynchronous I/O becomes very important. Fizz therefore provides a simple asynchronous application programming interface (API): any callback from Fizz can return an asynchronous response without blocking the service from processing other handshakes, and it is easy to add new asynchronous callbacks to Fizz for other use cases. Fizz also provides developers with easy-to-use APIs to send "early data" immediately after the TCP connection is established; early data reduces the latency of requests. Finally, Fizz is built from secure abstractions that help catch bugs at compile time rather than at runtime, preventing mistakes.

This open source contribution from Facebook aims to be better than its SSL predecessor at preventing attacks. It will be interesting to see how the community takes advantage of the library! Head over to the official Facebook documentation to learn more about this robust library.

Facebook is investigating data analytics firm Crimson Hexagon over misuse of data
Facebook plans to use Bloomsbury AI to fight fake news
Time for Facebook, Twitter and other social media to take responsibility or face regulation
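The scatter/gather ("vectored") I/O pattern Fizz applies to encrypted records can be demonstrated outside of Fizz: instead of concatenating buffers into one large allocation, the kernel is handed a list of buffers and writes them in a single call. The sketch below uses Python's os.writev on a POSIX system purely to illustrate the pattern; it is not Fizz's API, and the HTTP-like buffers are made up.

```python
# Minimal sketch of the scatter/gather ("vectored") write pattern:
# hand the kernel several buffers in one call instead of first copying
# them into one contiguous buffer. POSIX-only; illustrative, not Fizz.
import os

header = b"POST /upload HTTP/1.1\r\nContent-Length: 11\r\n\r\n"
body = b"hello world"

r, w = os.pipe()

# Gather write: both buffers go out in a single system call,
# with no intermediate copy into a combined bytes object.
written = os.writev(w, [header, body])
os.close(w)

print(f"wrote {written} bytes in one call")
print(os.read(r, 4096))
os.close(r)
```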

A new WPA/WPA2 security attack in town: Wi-fi routers watch out!

Savia Lobo
07 Aug 2018
3 min read
Jens "atom" Steube, the developer of the popular Hashcat password cracking tool, has developed a new technique to obtain user credentials over WPA/WPA2 security. With it, attackers can easily retrieve the Pairwise Master Key Identifier (PMKID) from a router.

WPA/WPA2, the Wi-Fi security protocols, enable a secure wireless connection between devices using encryption via a PSK (Pre-Shared Key). The WPA2 protocol was considered highly secure against attacks; however, a method known as the KRACK attack, discovered in October 2017, was able, at least in theory, to decrypt the data exchanged between devices. Steube discovered the new method while looking for ways to crack the WPA3 wireless security protocol. According to Steube, this method works against almost all routers utilizing 802.11i/p/q/r networks with roaming enabled.

https://twitter.com/hashcat/status/1025786562666213377

How does this new WPA/WPA2 attack work?

The new attack works by extracting the RSN IE (Robust Security Network Information Element) from a single EAPOL frame. The RSN IE is an optional field containing the PMKID generated by a router when a user tries to authenticate. Previously, to crack user credentials, the attacker had to wait for a user to log in to a wireless network and capture the four-way handshake in order to crack the key. With the new method, an attacker simply has to attempt to authenticate to the wireless network in order to retrieve a single frame and obtain the PMKID, which can then be used to recover the Pre-Shared Key (PSK) of the wireless network.

A boon for attackers?

The new method makes it easier to obtain the hash containing the pre-shared key, but that hash still needs to be cracked, and the process takes a long time depending on the complexity of the password. Most users don't change their wireless password and simply use the PSK generated by their router. Steube, in his post on the Hashcat forum, said, "Cracking PSKs is made easier by some manufacturers creating PSKs that follow an obvious pattern that can be mapped directly to the make of the routers. In addition, the AP mac address and the pattern of the ESSID allows an attacker to know the AP manufacturer without having physical access to it." He also stated that attackers pre-collect the patterns used by manufacturers and create generators for each of them, which can then be fed into Hashcat. Some manufacturers use patterns that are too large to search, but others do not, and the faster one's hardware is, the faster one can search through such a keyspace. A typical manufacturer's PSK of length 10 takes 8 days to crack (on a 4-GPU box).

How can users safeguard their router's passwords?

Create your own key rather than using the one generated by the router. The key should be long and complex, consisting of numbers, lower case letters, upper case letters, and symbols (&%$!). Steube personally uses a password manager and lets it generate truly random passwords of length 20 - 30. You can follow in the researcher's footsteps in safeguarding your router or use the tips mentioned above. Read more about this new Wi-Fi security attack on the Hashcat forum.

NetSpectre attack exploits data from CPU memory
Cisco and Huawei Routers hacked via backdoor attacks and botnets
Finishing the Attack: Report and Withdraw
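The reason a single captured PMKID is enough for an offline attack is that the PMKID is a deterministic function of the pre-shared key, the SSID, and the two MAC addresses. The sketch below reproduces that derivation with Python's standard library, so you can see why a cracker only needs to guess the passphrase; all passphrases, SSIDs, and MAC addresses shown are made up.

```python
# Minimal sketch of the WPA/WPA2 PMKID derivation (IEEE 802.11i):
#   PMK   = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 rounds, 32 bytes)
#   PMKID = HMAC-SHA1(PMK, "PMK Name" || AP_MAC || STA_MAC)[:16]
# All values below are made up; this shows why a captured PMKID can be
# attacked offline simply by guessing passphrases.
import hashlib
import hmac

def pmkid(passphrase: str, ssid: str, ap_mac: bytes, sta_mac: bytes) -> bytes:
    pmk = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
    return hmac.new(pmk, b"PMK Name" + ap_mac + sta_mac, hashlib.sha1).digest()[:16]

if __name__ == "__main__":
    ap, sta = bytes.fromhex("a0b1c2d3e4f5"), bytes.fromhex("112233445566")
    captured = pmkid("correct horse battery", "HomeWiFi", ap, sta)
    # An attacker who sniffed 'captured' simply tries candidate passphrases:
    for guess in ("password123", "letmein", "correct horse battery"):
        if pmkid(guess, "HomeWiFi", ap, sta) == captured:
            print("recovered PSK:", guess)
```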
Mozilla's new Firefox DNS security updates spark privacy hue and cry

Melisha Dsouza
07 Aug 2018
4 min read
Mozilla just upped its security game by introducing two new features to the Firefox browser: "DNS over HTTPS" (DoH) and "Trusted Recursive Resolver" (TRR). According to Mozilla, this is an attempt to enhance security and make one of the oldest parts of the internet architecture, the DNS, more private and safe. This is done by encrypting DNS queries and by testing a service that keeps DNS providers from collecting and sharing users' browsing history. But many internet security geeks are far from agreeing with Mozilla's claim.

DoH and TRR explained

A DNS lookup converts a domain name into an IP address. When you enter the domain of a particular website in your browser, a request is automatically sent to the DNS server you have configured, which looks up the domain name and returns an IP address for your browser to connect to. However, this DNS traffic is unencrypted and shared with multiple parties, leaving the data vulnerable to capture and spying. Enter Mozilla with two new updates to save the day. The DNS over HTTPS (DoH) protocol encrypts DNS requests and responses: requests sent to the DoH cloud server are encrypted, while old-style DNS requests are not protected. The next thing up Mozilla's alley is a default configuration for DoH servers that puts privacy first, also known as the Trusted Recursive Resolver (TRR). With TRR turned on by default, any DNS changes a Firefox user has configured in the network are overridden. Mozilla has partnered with Cloudflare after agreeing on a very strong privacy policy intended to protect users' data.

Why don't security geeks like Mozilla's DNS updates?

Even though Mozilla has made an attempt to transport requests over HTTPS, thus encrypting the data, the main concern is that DNS resolution used to be local, so the parties able to spy on you were, well, also local. Now, while browsing with Firefox, Cloudflare can read everyone's DNS requests, because Mozilla has partnered with Cloudflare and resolves domain names from the application itself via a Cloudflare DNS server based in the United States. This itself poses a threat, since Cloudflare is a third party, and we all know the consequences of having a third party handle our data and network traffic. Despite the assurance that Cloudflare has signed a "pro-user privacy policy" that deletes all personally identifiable data within 24 hours, you can never say what will be done with your data; after the Cambridge Analytica scandal, nothing virtual can be trusted.

Here's a small overview of what can go wrong because of TRR. TRR fully disables anonymity: before Mozilla implemented this change, DNS resolution was local and could only be attacked locally, but with Mozilla's change, all DNS requests are seen by Cloudflare and, in turn, by any government agency that has the legal right to request data from Cloudflare. In short, any (US) government agency can basically trace you down if you have information to spill or that benefits them.

So, to save everyone the trouble, let's explore what you can do about the situation. It's simple: turn TRR off! Hacker News users suggest the following workaround:
Enter about:config in the address bar
Search for network.trr
Set network.trr.mode = 5 to completely disable it

If you want to explore more about mode 5, head over to mozilla.org. You can change network.trr.mode to 2 to enable DoH.
Mode 2 will try to use DoH but will fall back to insecure DNS under some circumstances, such as captive portals (use mode 5 to disable DoH under all circumstances). The other modes are described on usejournal.com. You may be surprised at how such a simple update can fuel so much discussion. It all comes down to the pitfalls of blindly trusting a third-party service versus being your own boss and switching TRR off. Whose side are you on? To know more about this update, head over to Mozilla's blog.

Firefox Nightly browser: Debugging your app is now fun with Mozilla's new 'time travel' feature
Mozilla is building a bridge between Rust and JavaScript
Firefox has made a password manager for your iPhone
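As a footnote to the DoH discussion above, here is a minimal sketch of what a DNS-over-HTTPS lookup looks like in practice, using Cloudflare's public JSON API (the same resolver Firefox's TRR uses by default). The queried name is just an example.

```python
# Minimal sketch: resolve a name over DNS-over-HTTPS (DoH) using
# Cloudflare's JSON API. The DNS query rides inside an ordinary HTTPS
# request, so on-path observers see only a TLS connection to the resolver.
import json
import urllib.request

def doh_lookup(name: str, rtype: str = "A") -> list:
    req = urllib.request.Request(
        f"https://cloudflare-dns.com/dns-query?name={name}&type={rtype}",
        headers={"accept": "application/dns-json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        answer = json.load(resp)
    return [record["data"] for record in answer.get("Answer", [])]

if __name__ == "__main__":
    print(doh_lookup("mozilla.org"))  # example name
```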

Google to launch a censored search engine in China, codenamed Dragonfly

Sugandha Lahoti
03 Aug 2018
3 min read
According to a leaked report obtained by The Intercept, Google is secretly planning to bring its search engine back to China. The project, codenamed Dragonfly, will comply with China's censorship rules and filter out certain topics, including search terms about human rights, democracy, religion, and peaceful protests. According to internal Google documents and people familiar with the plans, the project was initiated in the spring of last year; however, it picked up speed following a December 2017 meeting between Google's CEO Sundar Pichai and the Chinese government. Google has created a custom Android app through which users can access Google's search service. Per The Intercept, the app has already been demonstrated to the Chinese government, and the finalized version may be launched anytime in the next 6 to 9 months.

This custom app will comply with China's strict censorship laws, restricting access to banned content. The Chinese government has censored popular social media sites like Instagram, Facebook, and Twitter, as well as news organizations such as the New York Times and the Wall Street Journal. It has also banned information on the internet about political opponents, free speech, and academic studies. The Intercept says the leaked document states that "the search app will also blacklist sensitive queries so that no results will be shown at all when people enter certain words or phrases."

Back in 2010, Google publicly declared it would withdraw its search engine services from China. The primary reason was that the Chinese government was forcing Google to censor search results; the fact that the Chinese government had hacked Google's servers also played a major role in Google leaving China. Patrick Poon, a Hong Kong-based researcher with the human rights group Amnesty International, told The Intercept that "Google's decision to comply with the censorship would be a big disaster for the information age." The general public has also expressed disdain over Google's decision, calling it a money-minting move. Google is yet to share its views on the Chinese search engine; a spokesperson said the company has "no comment on speculation about future plans." You can read the original story on The Intercept.

Google employees quit over company's continued Artificial Intelligence ties with the Pentagon
Decoding the reasons behind Alphabet's record high earnings in Q2 2018
Time for Facebook, Twitter, and other social media to take responsibility or face regulation
Furthering the Net Neutrality debate, GOP proposes the 21st Century Internet Act
The New AI Cold War Between China and the USA


Microsoft Edge introduces Web Authentication for passwordless web security

Savia Lobo
01 Aug 2018
2 min read
Passwords are a weak foundation for web security: they are hard to memorize, easy to forget, and easily phished or cracked. Microsoft Edge has recently made dealing with passwords a lot easier by introducing support for the Web Authentication specification. The new feature enables a more secure, passwordless experience on the web. Using Web Authentication, Edge users can now sign in with their face, fingerprint, PIN, or portable FIDO2 devices. These methods rely on strong public-key credentials instead of passwords.

Why go passwordless?

Many users may still be skeptical of moving to these methods. Yet we already let most online services (shopping and food-ordering websites, and so on) store our credit card numbers and other sensitive information without a second thought. Those accounts are protected by nothing more than passwords, an outdated security model that is easily compromised. Microsoft's answer is a secure, passwordless experience on the web built on Windows Hello biometrics and Web Authentication, an open standard for passwordless authentication.

How does Web Authentication work?

Windows Hello allows users to authenticate without a password on any Windows 10 device. They can use biometrics like face and fingerprint recognition to log in to websites with a simple glance, or a PIN to sign in. External FIDO2 security keys also work for authentication, combining a removable device with the user's biometrics or PIN. Some websites do not yet offer a fully passwordless model; for those, backward compatibility with FIDO U2F devices provides a strong second factor alongside the password. At the RSA 2018 conference, Microsoft demonstrated how these APIs can be used to approve a payment on the web with one's facial identity. A minimal registration sketch appears at the end of this piece.

To get started with Web Authentication in Microsoft Edge, install Windows Insider Preview build 17723 or higher to try out the feature. Read more about it in the Microsoft Web Authentication guide.

Web Security Update: CASL 2.0 releases!
Amazon Cognito for secure mobile and web user authentication [Tutorial]
Oracle Web Services Manager: Authentication and Authorization
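To make the flow concrete, here is a minimal sketch of how a site might register a passwordless credential through the Web Authentication API (navigator.credentials.create). The relying-party name, user details, and in-memory challenge below are placeholder values for illustration; in a real deployment the challenge and user handle come from the server.

```ts
// A minimal Web Authentication registration sketch. The browser prompts for
// Windows Hello, a PIN, or an external FIDO2 key, and returns a public-key
// credential that the site sends back to its server for verification.
async function registerCredential(): Promise<Credential | null> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // normally issued by the server
    rp: { name: "Example Corp" },                          // relying party (the website)
    user: {
      id: new TextEncoder().encode("user-123"),            // opaque server-side user handle
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // -7 = ES256
    authenticatorSelection: { userVerification: "required" },
    timeout: 60_000,
  };
  return navigator.credentials.create({ publicKey });
}
```

Signing in later uses the companion navigator.credentials.get call, which is where the face, fingerprint, or PIN prompt appears on an enrolled device.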

Facebook must stop discriminatory advertising in the US, declares Washington AG, Ferguson

Sugandha Lahoti
26 Jul 2018
3 min read
Attorney General Bob Ferguson announced on 24th July 2018 that Facebook's advertising platform allowed discriminatory advertising: it gave third-party advertisers the option to exclude ethnic and religious minorities, immigrants, LGBTQ individuals, and other protected groups from seeing their ads. If these groups cannot see the ads at all, they are deprived of the opportunities those advertisements offer.

Source: Office of the Attorney General

Following this finding, Facebook has signed a legally binding agreement to make changes to its advertising platform within 90 days. Under the agreement, Facebook will no longer provide advertisers with options to exclude ethnic groups from advertisements for housing, credit, employment, insurance, and public accommodations. Facebook will also no longer provide advertisers with tools to discriminate based on race, creed, color, national origin, veteran or military status, sexual orientation, and disability status.

The matter was first brought to light by ProPublica in 2016, when its reporters went undercover and bought multiple rental housing ads on Facebook that excluded certain categories of users from seeing them. According to ProPublica, "Every single ad was approved within minutes." The allegations were alarming, and the AG's office decided to investigate. It used the platform to create 20 fake ads that excluded one or more ethnic minorities from receiving the advertising. Despite these exclusions, Facebook's advertising platform approved all 20 ads.

"Facebook's advertising platform allowed unlawful discrimination on the basis of race, sexual orientation, disability, and religion," said Ferguson. "That's wrong, illegal, and unfair." The Attorney General's investigation found the platform's unlawful targeting options to constitute unfair acts and practices, in violation of the state Consumer Protection Act and the Washington Law Against Discrimination.

Read more: 5 reasons the government should regulate technology

This led to a permanent, legally binding agreement that aims to close the loopholes and prevent Facebook from offering discriminatory advertising in any form. However, Peter Romer-Friedman, a lawyer with Outten & Golden LLP, points out that the "agreement does nothing to address age discrimination or gender discrimination on Facebook."

The agreement is legally binding in Washington state, but Facebook has agreed to implement the improved advertising options nationwide. Apart from fixing its advertising platform within 90 days, Facebook is also required to pay the Washington State AG's Office $90,000 in costs and fees.

This is a win for the citizens of Washington state and of the United States, but a very small step for the rest of the world. The ball is in Facebook's court now. We'll have to wait and see if it proactively generalizes these policies on a worldwide scale, or if the public and the law will need to hold Facebook accountable for the power its platform holds over the lives of its more than 2 billion users.

EU slaps Google with $5 billion fine for the Android antitrust case
Furthering the Net Neutrality debate, GOP proposes the 21st Century Internet Act
20 lessons on bias in machine learning systems by Kate Crawford at NIPS 2017


SpectreRSB targets CPU return stack buffer, found on Intel, AMD, and ARM chipsets

Savia Lobo
25 Jul 2018
4 min read
Attacks exploiting operating systems and applications have been on an exponential rise in recent times. One popular class of vulnerability is Spectre, which exploits the speculative execution mechanism employed in modern processor chips and has recently targeted Intel, AMD, and ARM. A new variant of this supposedly contained exploit, SpectreRSB, successfully abuses the return stack buffer (RSB), a common predictor structure in modern CPUs used to predict return addresses.

Spectre, first disclosed in January this year, has remained resilient. Spectre variant 1 is the one Dartmouth claimed to resolve using its ELFbac policy techniques; Spectre variant 2 is the one Google addressed with Retpoline. They were followed by the data-stealing exploits Spectre 1.1 and 1.2, detected just two weeks ago by Vladimir Kiriansky and Carl Waldspurger. The most recent one in the headlines is SpectreRSB.

This Spectre-class exploit was revealed by security researchers from the University of California, Riverside (UCR). They describe the new attack method in a research paper published on arXiv, titled 'Spectre Returns! Speculation Attacks using the Return Stack Buffer'.

What is SpectreRSB?

The SpectreRSB exploit relies on speculative execution, a feature found in most modern CPUs for optimizing performance. Because modern CPUs are far faster than memory, they speculatively execute batches of instructions to keep the pipeline full. While speculating, the CPU does not fully check whether memory accesses served from the cache touch privileged memory, and this window is exactly what such exploits abuse.

According to the UCR researchers, SpectreRSB takes a slight detour from similar attacks such as Meltdown. Rather than exploiting the CPU's branch predictor units or cache components, SpectreRSB exploits the Return Stack Buffer (RSB). Researcher Nael Abu-Ghazaleh wrote, "To launch the attack, the attacker should poison the RSB (a different and arguably easier process than poisoning the branch predictor) and then cause a return instruction without a preceding call instruction in the victim (which is arguably more difficult than finding an indirect branch)."

The paper says SpectreRSB also enables an attack against an Intel SGX (Software Guard Extensions) compartment: a malicious OS pollutes the RSB to cause a mis-speculation that exposes data outside the SGX compartment. This attack bypasses all software and microcode patches on the SGX machine.

How to defend against SpectreRSB?

The researchers stated that they reported SpectreRSB to companies that use RSBs to predict return addresses, including Intel, AMD, and ARM. AMD and ARM did not respond to a request for comment from Threatpost. However, an Intel spokesperson stated via email, "SpectreRSB is related to branch target injection (CVE-2017-5715), and we expect that the exploits described in this paper are mitigated in the same manner." He further stated, "We have already published guidance for developers in the whitepaper, Speculative Execution Side Channel Mitigations. We are thankful for the ongoing work of the research community as we collectively work to help protect customers."

Following this, the UCR researchers noted that the newly found SpectreRSB cannot be prevented using previously known defenses such as Google's Retpoline fix or Intel's microcode patches. However, they did point to an existing mitigation known as RSB stuffing. RSB stuffing, also known as RSB refilling, currently exists on Intel's Core i7 processors starting from the Skylake lineup: every time there is a switch into the kernel, the RSB is intentionally filled with the address of a benign delay gadget to avoid the possibility of mis-speculation. Abu-Ghazaleh told Threatpost, "For some of the more dangerous attacks, the attack starts from the user code, but it's trying to get the OS to return to the poisoned address. Refilling overwrites the entries in the RSB whenever we switch to the kernel (for example, at the same points where the KPTI patch remaps the kernel addresses). So, the user cannot get the kernel to return to its poisoned addresses in the RSB."

Read more about SpectreRSB in the research paper.

Social engineering attacks – things to watch out for while online
Top 5 cybersecurity trends you should be aware of in 2018
Top 5 cybersecurity myths debunked