
Tech News - Cybersecurity


Intel announces 9th Gen Core CPUs with Spectre and Meltdown Hardware Protection amongst other upgrades

Melisha Dsouza
09 Oct 2018
4 min read
On 8th October, at its 'Fall Desktop Launch Event', Intel unveiled the 9th-generation Core i9-9900K, i7-9700K, and i5-9600K processors for desktops. Alongside the aim of delivering 'the best gaming performance' in the world, the processors also come with fixes for the much-discussed Spectre, Meltdown, and L1TF vulnerabilities. The major features of this launch include:

#1 Security fixes for Spectre, Meltdown, and L1TF faults

In March 2018, Intel announced that it would be adding hardware protection to forthcoming CPUs, protecting users against some of the processor's security flaws. These 'protective walls' added in the hardware would keep malicious code in a physically different location from the areas of the CPU where speculative execution takes place. Intel kept its word by announcing hardware mitigations for Spectre/Meltdown in the 9th Gen CPUs. Former Intel CEO Brian Krzanich stated in a press release, "We have redesigned parts of the processor to introduce new levels of protection through partitioning that will protect against both Variants 2 and 3. Think of this partitioning as additional 'protective walls' between applications and user privilege levels to create an obstacle for bad actors."

Intel has not detailed what specific hardware changes were made to add this protection. The software and microcode protections added previously were noted to cause a performance hit on older CPUs; these new CPUs are powerful enough that any performance hit caused by the protections should not be noticeable.

#2 Forgoing HyperThreading

Intel is forgoing HyperThreading on the parts below the Core i9. This will partly help make the product stack more linear. It could also help mitigate one of the side-channel attacks that can occur when HyperThreading is in action: disabling HyperThreading on the volume production chips ensures that threads on those chips are not competing for per-core resources.

#3 Hardware Specifications

Source: AnandTech

Core i9-9900K
The Core i9-9900K processor is designed to deliver 'the best gaming performance in the world'. Users can reach up to 220 FPS in Rainbow Six: Siege, Fortnite, Counter-Strike: Global Offensive, and PlayerUnknown's Battlegrounds. It comes with 8 cores, 16 threads, and a base frequency of 3.6GHz that can be boosted up to 5.0GHz. The processor is aimed at desktop enthusiasts, with dual-channel DDR4 support and up to 40 PCIe lanes, and is built on Intel's 14nm process. HyperThreading is an added bonus on this part.

Core i7-9700K
The i7-9700K comes with 8 cores and 8 threads. With a base clock speed of 3.6 GHz, the processor comes without HyperThreading; it can turbo up to 4.9 GHz on a single core. The i7-9700K is meant to be the direct upgrade over the Core i7-8700K. While both chips share the same Coffee Lake microarchitecture, the 9700K has two more cores and slightly better turbo performance. That said, it has less L3 cache, at only 1.5MB per core.

Core i5-9600K
The i5-9600K is clocked at a base frequency of 3.7 GHz and can be boosted up to 4.6 GHz. With 6 cores and 6 threads, it too comes without HyperThreading. The processor is very similar to the previous-generation Core i5, but with higher frequencies for better performance.

It will be interesting to see how these new processors mitigate the security flaws without impacting performance. For detailed information on each of the processors, you can head over to AnandTech.
You could also check out BleepingComputer for additional insights.

NetSpectre attack exploits data from CPU memory
Intel faces backlash on Microcode Patches after it prohibited Benchmarking or Comparison
Meet 'Foreshadow': The L1 Terminal Fault in Intel's chips
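Incidentally, readers who want to check whether a given Linux machine is still relying on the software and microcode mitigations mentioned above can ask the kernel directly. A minimal sketch, added here for illustration (not from the original article); it requires Linux 4.15+, where the sysfs vulnerabilities directory exists:

```python
# Print the kernel's self-reported mitigation status for each known
# CPU vulnerability (Spectre variants, Meltdown, L1TF, ...).
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

if VULN_DIR.exists():
    for entry in sorted(VULN_DIR.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")
else:
    print("Kernel too old (or not Linux): no vulnerabilities directory")
```

On a pre-9th-gen part this typically reports software mitigations such as "Mitigation: PTI" for Meltdown, rather than a hardware fix.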

Upgrade to Git 2.19.1 to avoid a Git submodule vulnerability that causes arbitrary code execution

Savia Lobo
08 Oct 2018
3 min read
Last week, the Git Project revealed a vulnerability, CVE-2018-17456, which can cause arbitrary code to be executed when a user clones a malicious repository. Git v2.19.1 has been released with a fix for this vulnerability, and backported fixes have been added in v2.14.5, v2.15.3, v2.16.5, v2.17.2, and v2.18.1. Users are advised to update their clients in order to protect themselves. Those who have not yet updated can protect themselves by simply avoiding submodules from untrusted repositories; this means avoiding commands such as git clone --recurse-submodules and git submodule update against such repositories.

The community, in their post, mentions that neither GitHub.com nor GitHub Enterprise is directly affected by the vulnerability. However, as with previously discovered vulnerabilities, GitHub.com will detect malicious repositories and will reject pushes or API requests attempting to create them. Versions of GitHub Enterprise with this detection will ship on October 9th.

About the CVE-2018-17456 vulnerability

This vulnerability is similar to CVE-2017-1000117, as both are option-injection attacks related to submodules. In the previous attack, a malicious repository would ship a .gitmodules file pointing one of its submodules to a remote repository with an SSH host starting with a dash (-). The ssh program spawned by Git would then interpret that as an option. The new attack works in a similar way, except that the option-injection is against the child git clone itself (a sketch of this value shape follows below).

Learning from the previous attack, the researchers audited all of the .gitmodules values and implemented stricter checks as appropriate. These checks should prevent a similar vulnerability in another code path. They also implemented detection of potentially malicious submodules as part of Git's object quality checks, which was made much easier by the infrastructure added during the last submodule-related vulnerability.

Products affected by the CVE-2018-17456 vulnerability

GitHub Desktop
GitHub Desktop versions 1.4.1 and older included an embedded version of Git that was affected by this vulnerability. All GitHub Desktop users are encouraged to update to the newest versions (1.4.2 and 1.4.3-beta0) available today in the Desktop app.

Atom
Atom included the same embedded Git and was also affected. Releases 1.31.2 and 1.32.0-beta3 include the patch. Users should ensure they have the latest Atom release by completing any of the following:
Windows: From the toolbar, click "Help" -> "Check for updates"
MacOS: From the menu bar, click "Atom" -> "Check for Update"
Linux: Update manually by downloading the latest release from atom.io

Git on the command line and other clients
To be protected from the vulnerability, users must update their command-line version of Git and any other application that may include an embedded version of Git, as they are independent of each other.

4 myths about Git and GitHub you should know about
7 tips for using Git and GitHub the right way
GitHub addresses technical debt, now runs on Rails 5.2.1
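To make the option-injection concrete, here is a minimal, hypothetical sketch (not the Git project's actual fix) of the kind of check a defensive tool could run over a repository's .gitmodules before recursing: it flags url or path values that begin with a dash, the shape of value that CVE-2018-17456 abuses.

```python
# Flag .gitmodules entries whose url/path starts with "-", since such a
# value can be parsed as an option by the child "git clone" that
# "git clone --recurse-submodules" spawns.
from pathlib import Path

def suspicious_submodule_values(gitmodules_text: str):
    findings = []
    for raw in gitmodules_text.splitlines():
        line = raw.strip()
        if line.startswith("[") or "=" not in line:
            continue  # skip section headers like [submodule "name"]
        key, _, value = line.partition("=")
        key, value = key.strip().lower(), value.strip()
        if key in ("url", "path") and value.startswith("-"):
            findings.append((key, value))
    return findings

gitmodules = Path(".gitmodules")
if gitmodules.exists():
    for key, value in suspicious_submodule_values(gitmodules.read_text()):
        print(f"suspicious {key}: {value!r}")
```

Upgrading to a fixed Git release remains the real protection; this check merely illustrates the malicious input the patched versions now reject.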

A year later, Google Project Zero still finds Safari vulnerable to DOM fuzzing using publicly available tools to write exploits

Melisha Dsouza
05 Oct 2018
4 min read
It's been a year since the Project Zero team published the results of their research on the resilience of modern browsers against DOM fuzzing, along with Domato, the DOM fuzzing tool that was used to find those bugs. The results of the research were striking: Apple Safari, or more specifically WebKit (its DOM engine), did not fare well in the test. The team decided to revisit the project using exactly the same methodology and exactly the same tools to see whether the browsers have since implemented better security mechanisms.

The Test Setup

In the previous research, the fuzzing was initially done against WebKitGTK+ and then all the crashes were tested against Apple Safari running on a Mac. In this research, WebKitGTK+ version 2.20.2 was used. To improve the fuzzing process, a couple of custom changes were made to WebKitGTK+. For instance:
Building WebKitGTK+ with ASan (Address Sanitizer) is now possible.
The window.alert() implementation was changed to immediately call the garbage collector instead of displaying a message window.
Generally, when a DOM bug causes a crash, due to the multi-process nature of WebKit only the web process would crash while the main process continued running; code was added to crash the main process when the web process crashes.
The team created a custom target binary.

Results Obtained

After running the fuzzer for 100,000,000 iterations, the team discovered 9 unique bugs, which were reported to Apple. The bugs are summarized in the table below. All of them had been fixed by the time the blog post was released.

Project Zero bug ID | CVE | Type | Affects Safari 11.1.2 | Older than 6 months | Older than 1 year
1593 | CVE-2018-4197 | UAF | YES | YES | NO
1594 | CVE-2018-4318 | UAF | NO | NO | NO
1595 | CVE-2018-4317 | UAF | NO | YES | NO
1596 | CVE-2018-4314 | UAF | YES | YES | NO
1602 | CVE-2018-4306 | UAF | YES | YES | NO
1603 | CVE-2018-4312 | UAF | NO | NO | NO
1604 | CVE-2018-4315 | UAF | YES | YES | NO
1609 | CVE-2018-4323 | UAF | YES | YES | NO
1610 | CVE-2018-4328 | OOB read | YES | YES | YES

(UAF = use-after-free; OOB = out-of-bounds.)

Of the 9 bugs found, 6 affected the release version of Apple Safari, directly affecting Safari users. While this is significantly fewer than the 17 bugs found a year ago, it is still a notable number, especially since the fuzzer has been public for a long time now. The team found that most of the bugs had been sitting in the WebKit codebase for longer than 6 months, though only 1 of them was older than 1 year. The team also notes that, throughout the past year, their fuzzing process surfaced 14 bugs in total, but they cannot say for certain whether those have all been resolved or are still live.

The Exploit performed on the bugs

To prove that bugs like this can lead to a browser compromise, an exploit was written for one of them. Out of the 6 issues affecting the release version of Safari, the researchers selected a use-after-free issue to exploit. The details of this issue are well explained in Project Zero's blog post. The exploit was successfully tested on Mac OS 10.13.6 (build version 17G65), and all the details of the exploit can be seen at bugs.chromium.org. An interesting aspect of this exploit is that, on Safari for Mac OS, it could be written in a very 'old-school' way due to the lack of control flow mitigations on the platform. That said, on the latest mobile hardware and in iOS 12, which was published after the exploit was already written, Apple introduced control flow mitigations by using Pointer Authentication Codes (PAC).

The issues were reported to Apple between June 15 and July 2, 2018. On September 17, 2018, Apple published security advisories for iOS 12, tvOS 12, and Safari 12 which fixed all of the issues. Although the bugs were fixed at that time, the corresponding advisories did not initially mention them. The issues described in the blog post were only added to the advisories one week later, on September 24, 2018, when the security advisories for macOS Mojave 10.14 were also published.

The researchers affirm that there were clear improvements in WebKit's DOM when tested with Domato. However, the public fuzzer was still able to find a large number of bugs. This is worrying, because if a public tool can find that many bugs, private tools can be even more effective. To know more about this experiment, head over to Google Project Zero's official blog.

Google Project Zero discovers a cache invalidation bug in Linux memory management, Ubuntu and Debian remain vulnerable
Google, Amazon, AT&T met the U.S Senate Committee to discuss consumer data privacy, yesterday
Ex-googler who quit Google on moral grounds writes to Senate about company's "Unethical" China censorship plan
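For readers unfamiliar with the technique, grammar-based DOM fuzzing of the kind Domato performs boils down to generating large numbers of random but syntactically plausible documents and feeding them to the browser until something crashes. A toy illustration of the idea follows (a deliberately tiny stand-in, not Domato itself):

```python
# Generate a random HTML test case from a miniature "grammar" of DOM
# operations. A real fuzzer like Domato uses a far richer grammar and
# runs each generated case against an instrumented browser build.
import random

TAGS = ["div", "span", "table", "iframe", "svg"]
ACTIONS = [
    "document.body.appendChild(document.createElement('{tag}'));",
    "var e = document.getElementsByTagName('{tag}')[0]; if (e) e.remove();",
    # The researchers rewired window.alert() to trigger garbage collection:
    "window.alert('force-gc');",
]

def generate_case(steps=20, seed=None):
    rng = random.Random(seed)
    lines = ["<html><body><script>"]
    for _ in range(steps):
        lines.append(rng.choice(ACTIONS).format(tag=rng.choice(TAGS)))
    lines.append("</script></body></html>")
    return "\n".join(lines)

print(generate_case(seed=1))  # write to disk and load in the target browser
```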

Facebook finds ‘no evidence that hackers accessed third party Apps via user logins’, from last week’s security breach

Natasha Mathur
04 Oct 2018
3 min read
Facebook revealed last Friday that a major security breach compromised 50 million user accounts on Facebook. The security attack not only affected users' Facebook accounts but also impacted other accounts that were linked to Facebook. The hackers had exploited Facebook's "View As" feature, which lets people see what their own profile looks like to someone else, to steal Facebook access tokens and break into users' accounts. These tokens give hackers full control over a victim's account, including logging into third-party applications that use Facebook Login.

"We wanted to provide an update on the security attack that we announced last week. We fixed the vulnerability and we reset the access tokens for a total of 90 million accounts — 50 million that had access tokens stolen and 40 million that were subject to a "View As" look-up in the last year" wrote Guy Rosen, VP of product management. Resetting the tokens required users to log back in to their Facebook accounts, as well as to any accounts or apps that use Facebook Login.

As for the effects of this attack on the apps that use Facebook, Facebook is yet to find any impact. "We have now analyzed our logs for all third-party apps installed or logged in during the attack we discovered last week. That investigation has so far found no evidence that the attackers accessed any apps using Facebook Login", states the Facebook post.

All developers using the official Facebook SDKs, along with those checking the validity of their users' access tokens, were automatically protected when the access tokens were reset. However, to be extra careful, Facebook is developing a tool that will allow developers to manually identify users of apps affected by the security breach so that they can be logged out. This will also benefit developers who don't use Facebook's SDKs or who don't regularly check whether Facebook access tokens are valid.

Additionally, Facebook recommends that developers always follow Facebook Login security best practices as a guideline. It recommends that developers use Facebook's official SDKs for Android, iOS, and JavaScript, as these automatically check the validity of access tokens and force a fresh login every time the tokens are reset by Facebook, thereby protecting users' accounts. Facebook also wants developers to use the Graph API; this keeps information updated regularly and makes sure that users are logged out of apps whenever a Facebook session is shown as invalid. (A sketch of such a validity check appears below.)

"Security is incredibly important to Facebook. We're sorry that this attack happened — and we'll continue to update people as we find out more" reads the post. For more information, check out the official announcement.

How far will Facebook go to fix what it broke: Democracy, Trust, Reality
Ex-employee on contract sues Facebook for not protecting content moderators from mental trauma
Did you know Facebook shares the data you share with them for 'security' reasons with advertisers?
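As a rough illustration of the token-validity check the post recommends, Facebook's Graph API exposes a debug_token endpoint that reports whether a user access token is still valid. A hedged sketch (the tokens below are placeholders, and error handling is elided):

```python
# Check whether a stored user access token survived Facebook's token
# reset; if not, force the user through Facebook Login again.
import requests

APP_TOKEN = "app-id|app-secret"    # placeholder app access token
USER_TOKEN = "stored-user-token"   # placeholder token to validate

resp = requests.get(
    "https://graph.facebook.com/debug_token",
    params={"input_token": USER_TOKEN, "access_token": APP_TOKEN},
    timeout=10,
)
data = resp.json().get("data", {})

if not data.get("is_valid", False):
    print("token invalid or reset; re-authenticate via Facebook Login")
```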

FireEye reports North Korean state sponsored hacking group, APT38 is targeting financial institutions

Savia Lobo
04 Oct 2018
3 min read
Yesterday, FireEye revealed a new group of hackers named APT38, a financially motivated, North Korean regime-backed group responsible for conducting destructive attacks against financial institutions, as well as for some of the world's largest cyber heists. FireEye Inc. is a cybersecurity firm that provides products and services to protect against advanced persistent threats and spear phishing. Earlier this year, FireEye helped Facebook find suspicious accounts linked to Russia and Iran on its platform, and also alerted Google to election influence operations linked to Iranian groups.

Now FireEye researchers have released a special report titled APT38: Un-usual Suspects, to expose the methods used by the group. In the report, they said, "Based on observed activity, we judge that APT38's primary mission is targeting financial institutions and manipulating inter-bank financial systems to raise large sums of money for the North Korean regime." The researchers also state that the group has attempted to steal more than $1.1 billion and was responsible for some of the more high-profile attacks on financial institutions in the last few years. Some of the publicly reported attempted heists attributable to APT38 include:
Vietnam TP Bank in December 2015
Bangladesh Bank in February 2016
Far Eastern International Bank in Taiwan in October 2017
Bancomext in January 2018
Banco de Chile in May 2018

Sandra Joyce, FireEye's vice president of global intelligence, says, "The hallmark of this group is that it deploys destructive malware after stealing money from an organization, not only to cover its tracks, but [also] in order to distract defenders, complicate the incident response process, and gain time to get out the door."

Some details of the APT38 targeting

Since at least 2014, APT38 has conducted operations against more than 16 organizations in at least 11 countries. The total number of organizations targeted may be even higher, considering the probable low incident-reporting rate among affected organizations. The group is careful, calculated, and has demonstrated a desire to maintain access to a victim environment for as long as necessary to understand the network layout, required permissions, and system technologies needed to achieve its goals. On average, FireEye has observed APT38 remaining within a victim network for approximately 155 days, with the longest time within a compromised environment believed to be almost two years. In the publicly reported heists alone, APT38 has attempted to steal over $1.1 billion from financial institutions.

APT38 Attack Lifecycle

FireEye researchers believe that APT38's financial motivation, unique toolset, and the tactics, techniques, and procedures (TTPs) observed during its carefully executed operations are distinct enough to be tracked separately from other North Korean cyber activity. The group shares characteristics with other operations known as 'Lazarus' and with the actor FireEye calls TEMP.Hermit. On Tuesday, the U.S. government released details on malware it alleges Pyongyang's computer operatives have used to fraudulently withdraw money from ATMs in various countries. The unmasking of APT38 comes weeks after the Justice Department announced charges against Park Jin Hyok, a North Korean computer programmer, in connection with the 2014 hack of Sony Pictures and the 2017 WannaCry ransomware attack.

According to Jacqueline O'Leary, a senior threat intelligence analyst at FireEye, Park has likely contributed to both APT38 and TEMP.Hermit operations. The North Korean government, however, has denied allegations that it sponsors such hacking.

Reddit posts an update to the FireEye's report on suspected Iranian influence operation
Facebook COO, Sandberg's Senate testimony
Google's Protect your Election program

California’s new bot disclosure law bans bots from pretending to be human to sell products or influence elections

Savia Lobo
03 Oct 2018
3 min read
Last week, California's Governor Jerry Brown signed a bill into law that will ban automated accounts, more commonly known as bots, from pretending to be real people in pursuit of selling products or influencing elections. The bill was approved on September 28 and takes effect on July 1, 2019.

As per the California Senate, "This bill would, with certain exceptions, make it unlawful for any person to use a bot to communicate or interact with another person in California online with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivise a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election."

The law is intended to help tackle social media manipulation, including foreign interference. Bots caused major issues during the 2016 U.S. Presidential elections and have since grown into a menace that platforms like Twitter have been trying to combat. The 2016 elections saw Russian-controlled bots playing an active role in manipulating opinions, retweeting Donald Trump's tweets 470,000 times, and Hillary Clinton's fewer than 50,000 times.

The main aim of this effort is to target bots that spread misinformation. Twitter said that it took down 9.9 million potentially spammy or automated accounts per week in May and has placed warnings on suspicious accounts. Twitter has also announced an update on its "election integrity" project ahead of the US mid-term elections in November. This includes updating its rules regarding fake accounts and sharing stolen information; it said it would now take into account stock avatar photos and copied profile bios in determining whether an account is genuine.

Robert Hertzberg, the state senator from California who pushed for the new law forcing bots to disclose their lack of humanity, told The New York Times he was the subject of a bot attack over a bail reform bill. So he decided to fight bots with bots by launching @Bot_Hertzberg in January. As the new law requires, the account discloses its automated nature. "*I AM A BOT.*" states the account's Twitter profile. "Automated accounts like mine are made to misinform & exploit users. But unlike most bots, I'm transparent about being a bot."

To know more about the bill in detail, check out the California Senate's bill.

Sentiment Analysis of the 2017 US elections on Twitter
Facebook, Twitter takes down hundreds of fake accounts with ties to Russia and Iran, suspected to influence the US midterm elections
DCLeaks and Guccifer 2.0: How hackers used social engineering to manipulate the 2016 U.S. elections

The U.S. Justice Department sues to block the new California Net Neutrality law

Natasha Mathur
01 Oct 2018
3 min read
The U.S. Justice Department filed a lawsuit against California yesterday, after California governor Jerry Brown signed the state's Net Neutrality proposal into law. The law restores the open internet protections known as Net Neutrality, which require internet service providers like AT&T, Comcast, and Verizon to treat all web traffic equally in the state.

California's Net Neutrality law is a state-level response to the FCC's decision to revoke the existing federal rules earlier this year. Those rules, set while President Obama was in office, were scrapped after Republicans took over leadership of the FCC in 2017. Considered one of the toughest Net Neutrality laws in the U.S., the California law prevents ISPs from throttling traffic and from charging websites for special access to internet users. It also bans "zero rating" of certain apps (where using those apps would not count against a user's data usage).

The California Net Neutrality bill, Senate Bill No. 822, was approved by the State Assembly and the Senate in August, despite many protests. However, after the governor enacted the proposal as law yesterday, senior Justice Department officials sued the state on the grounds that only the federal government, not state leaders, has the power to regulate Net Neutrality.

Attorney General Jeff Sessions issued the following statement on filing the complaint: "Once again the California legislature has enacted an extreme and illegal state law attempting to frustrate federal policy. The Justice Department should not have to spend valuable time and resources to file this suit today, but we have a duty to defend the prerogatives of the federal government and protect our Constitutional order. We are confident that we will prevail in this case—because the facts are on our side".

FCC Chairman Ajit Pai also issued a statement: "I'm pleased the Department of Justice has filed this suit. Not only is California's Internet regulation law illegal, but it also hurts consumers. The law prohibits many free-data plans, which allow consumers to stream video, music, and the like exempt from any data limits. They have proven enormously popular in the marketplace, especially among lower-income Americans. But notwithstanding the consumer benefits, this state law bans them."

California state senator and author of the bill, Scott Wiener, tweeted his response to the lawsuit, calling it just another attempt by the administration to block the state's initiatives.
https://twitter.com/Scott_Wiener/status/1046585508472602624

Furthering the Net Neutrality debate, GOP proposes the 21st Century Internet Act
California passes the U.S.' first IoT security bill
Like newspapers, Google algorithms are protected by the First amendment making them hard to legally regulate

Google Project Zero discovers a cache invalidation bug in Linux memory management, Ubuntu and Debian remain vulnerable

Melisha Dsouza
01 Oct 2018
4 min read
"Raise your game on merging kernel security fixes, you're leaving users exposed for weeks" -Jann Horn to maintainers of Ubuntu and Debian Jann Horn, the Google Project Zero researcher who discovered the Meltdown and Spectre CPU flaws, is making headlines once again. He has uncovered a cache invalidation bug in the Linux kernel. The kernel bug is a cache invalidation flaw in Linux memory management that has been tagged as CVE-2018-17182. The bug has been already reported to Linux kernel maintainers on September 12. Without any delay, Linux founder, Linus Torvalds fixed this bug in his upstream kernel tree two weeks ago. It was also fixed in the upstream stable kernel releases 4.18.9, 4.14.71, 4.9.128, and 4.4.157 and  3.16.58. Earlier last week, Horn released an "ugly exploit" for Ubuntu 18.04, which "takes about an hour to run before popping a root shell". The Bug discovered by Project Zero The vulnerability is a use-after-free (UAF) attack. It works by exploiting the cache invalidation bug in the Linux memory management system, thus allowing an attacker to obtain root access to the target system. UAF vulnerabilities are a type of ‘memory-based corruption bug’. Once attackers gain access to the system, they can cause system crashes, alter or corrupt data, and gain privileged user access. Whenever a userspace page fault occurs, for instance, when a page has to be paged in on demand, the Linux kernel has to look up the Virtual Memory Area (VMA) that contains the fault address to figure out how to handle the fault. To avoid any performance hit, Linux has a fastpath that can bypass the tree walk if the VMA was recently used. When a VMA is freed, the VMA caches of all threads must be invalidated - otherwise, the next VMA lookup would follow a dangling pointer. However, since a process can have many threads, simply iterating through the VMA caches of all threads would be a performance problem. To solve this, both the struct mm_struct and the per-thread struct vmacache are tagged with sequence numbers. When the VMA lookup fastpath discovers in vmacache_valid() that current->vmacache.seqnum and current->mm->vmacache_seqnum don't match, it wipes the contents of the current thread's VMA cache and updates its sequence number. The sequence numbers of the mm_struct and the VMA cache were only 32 bits wide, meaning that it was possible for them to overflow.  To overcome this, in version 3.16, an optimization was added. However, Horn asserts that this optimization is incorrect because it doesn't take into account what happens if a previously single-threaded process creates a new thread immediately after the mm_struct's sequence number has wrapped around to zero. The bug was fixed by changing the sequence numbers to 64 bits, thereby making an overflow infeasible, and removing the overflow handling logic.   Horn has raised concerns that some Linux distributions are leaving users exposed to potential attacks by not reacting fast enough to frequently updated upstream stable kernel releases. End users of Linux distributions aren't protected until each distribution merges the changes from upstream stable kernels, and then users install that updated release. Between these two points, the issue also gets exposure on public mailing lists, giving both Linux distributions and would-be attackers a chance to take action. As of today, Debian stable and Ubuntu releases 16.04 and 18.04 have not yet fixed the issue, in spite of the latest kernel update occurring around a month earlier. 
This means there's a gap of several weeks between the flaw being publicly disclosed and fixes reaching end users. Canonical, the UK company that maintains Ubuntu, has responded to Horn's blog, and says fixes "should be released" around Monday, October 1. The window of exposure between the time an upstream fix is published and the time the fix actually becomes available to users is concerning. This gap could be utilized by an attacker to write a kernel exploit in the meantime. It is no secret that Linux distributions don’t publish kernel updates regularly. This vulnerability highlights the importance of having a secure kernel configuration. Looks like the team at Linux needs to check and re-check their security patches before it is made available to the public. You can head over to Google Project Zero’s official blog page for more insights on the vulnerability and how it was exploited by Jann Horn. NetSpectre attack exploits data from CPU memory SpectreRSB targets CPU return stack buffer, found on Intel, AMD, and ARM chipsets Meet ‘Foreshadow’: The L1 Terminal Fault in Intel’s chips
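The wrap-around at the heart of the bug is easy to demonstrate in miniature. The sketch below is an illustration only, with an 8-bit counter standing in for the kernel's former 32-bit one: enough invalidations return the sequence number to a previously seen value, so a stale cache entry compares as valid again. The actual fix widened the counters to 64 bits.

```python
SEQ_BITS = 8                       # stand-in for the kernel's former 32 bits
SEQ_MASK = (1 << SEQ_BITS) - 1

mm_seqnum = 0                      # per-address-space counter (mm_struct)
thread_cache = {"seqnum": 0, "vma": "stale VMA pointer"}  # per-thread vmacache

def invalidate_all_caches():
    """Bump the address-space counter instead of touching every thread."""
    global mm_seqnum
    mm_seqnum = (mm_seqnum + 1) & SEQ_MASK

def cache_valid():
    """Mimics vmacache_valid(): equal seqnums mean 'cache still fresh'."""
    return thread_cache["seqnum"] == mm_seqnum

for _ in range(1 << SEQ_BITS):     # 256 invalidations wrap the counter to 0
    invalidate_all_caches()

print(cache_valid())               # True: the stale entry looks fresh again
```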

ESET Scientists reveal Fancy Bear’s first documented use of UEFI rootkit targeting European governments

Melisha Dsouza
28 Sep 2018
3 min read
ESET researchers have found evidence that 'Fancy Bear' (a Russia-backed hacking group) is using the 'LoJax' malware to target certain government organizations in Europe. The research was presented on Thursday at the 2018 Microsoft BlueHat conference, and this is the first recorded case of a UEFI rootkit that is 'active' and still in use. The researchers have not explicitly named the governments that were targeted; they have only stated that the hackers were active in targeting the Balkans and some central and eastern European countries. This targeting of European governments is another of Fancy Bear's tactics after hacking into the Democratic National Committee. The hackers had previously targeted senators, social media sites, and the French presidential elections, and leaked Olympic athletes' confidential medical files, which demonstrates their hacking abilities.

The LoJax UEFI rootkit

LoJax is known for its brutal persistence, which makes it challenging to remove from a system. It embeds itself in the computer's firmware and launches when the OS boots up. Because it sits in a computer's flash memory, removing LoJax takes time, effort, and extreme care, since the memory has to be reflashed with new firmware.

In May 2018, Arbor Networks suggested that the Russian hacker group was utilizing Absolute Software's 'LoJack', a legitimate laptop recovery solution, for unscrupulous means. Hackers tampered with samples of the LoJack software and programmed it to communicate with a command-and-control (C2) server controlled by Fancy Bear, rather than the legitimate Absolute Software server. The modified version was named LoJax to separate it from Absolute Software's legitimate solution.

LoJax is implemented as a UEFI/BIOS module, to resist operating system wipes or hard drive replacement. The UEFI rootkit was found bundled together with a toolset able to patch a victim's system firmware and install malware at the system's deepest level. In at least one recorded case, the hackers behind the malware were able to write a malicious UEFI module into a system's SPI flash memory, leading to the execution of malicious code on disk during the boot process. ESET further added that the malicious UEFI module is being bundled into exploit kits that are able to access and patch UEFI/BIOS settings. Alongside the malware, three other tools were found in Fancy Bear's refreshed kit:
A tool that dumps information related to PC settings into a text file
A tool that saves an image of the system firmware by reading the contents of the SPI flash memory where the UEFI/BIOS is located
A tool that adds the malicious UEFI module to the firmware image and writes it back to the SPI flash memory

The researchers affirm that the UEFI rootkit marks an escalation in the severity of the hacking group's operations. However, there are preventative measures to safeguard your system against this notorious group of hackers. Fancy Bear's rootkit isn't properly signed, so a computer's Secure Boot feature, which verifies every component in the boot process, can prevent the attack. Secure Boot can be switched on in a computer's pre-boot settings. For more insights on this news, head over to ZDNet.

Microsoft claims it halted Russian spearphishing cyberattacks
Russian censorship board threatens to block search giant Yandex due to pirated content
UN meetings ended with US & Russia avoiding formal talks to ban AI enabled killer robots
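Since the recommended defense here is Secure Boot, it may help to know how to verify that it is on. On Linux EFI systems, the firmware exposes a SecureBoot variable through efivarfs; a minimal sketch follows (the GUID is the standard EFI global-variable GUID, and the first four bytes of the file are attribute flags):

```python
# Report whether Secure Boot is enforced on a Linux EFI system.
from pathlib import Path

VAR = Path(
    "/sys/firmware/efi/efivars/"
    "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
)

if VAR.exists():
    data = VAR.read_bytes()
    # Payload byte follows 4 attribute bytes; 1 means enforced.
    enabled = len(data) >= 5 and data[4] == 1
    print("Secure Boot:", "enabled" if enabled else "disabled")
else:
    print("No SecureBoot EFI variable found (legacy BIOS boot?)")
```

On Windows, the equivalent check is the "Secure Boot State" field in msinfo32 (System Information).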

Chrome 69 privacy issues: automatic sign-ins and retained cookies; Chrome 70 to correct these

Prasad Ramesh
27 Sep 2018
4 min read
There are privacy concerns with Chrome 69, the latest release of the popular browser. The concerns revolve around signing into Chrome and the storage of cookies, both of which have changed in the new release.

What are the privacy concerns with Chrome 69?

The Google Chrome 69 update brought a new interface, UI changes, and a feature that automatically signs you into Chrome if you sign into any of Google's services. This was met with heavy criticism from privacy-conscious users. This is not the first time Google has been questioned regarding user privacy and the data it collects; Google changed its privacy policy to circumvent GDPR fines on the scale of billions of dollars.

Previously, users had the option to sign in to Chrome with their Google credentials, but the Chrome 69 update changes that: signing into any Google service automatically signs you into Chrome. Google noted that this does not turn on the sync feature by default.

Another concern with Chrome 69 is that on clearing all browsing history and cookies, everything gets cleared except Google sites. So, on clearing all browsing history and data, you are still left with Google cookies and data on your machine if you use Chrome.

Source: Google Blog

What are people saying?

In a blog, Johns Hopkins professor Matthew Green stated: "Google has transformed the question of consenting to data upload from something affirmative that I actually had to put effort into — entering my Google credentials and signing into Chrome — into something I can now do with a single accidental click. This is a dark pattern."

Christoph Tavan, CTO & Co-Founder of @contentpass, tweeted that cookies from Google sites remain on your machine even after clearing all browser data.
https://twitter.com/ctavan/status/1044282084020441088

John Graham-Cumming, Cloudflare CTO, tweeted that he won't be using Chrome anymore:
https://twitter.com/jgrahamc/status/1044123160243826688

A comment on reddit reads: "This is actually ok. It's not incredibly invasive, and it just creates a chrome user profile when you sign in. They say that it will solve the confusion of the two separate sign ins."

What does Google have to say about this?

Chrome 70, to be released in mid October, will roll back this move. In a blog, Zach Koch, Chrome Product Manager, states: "While we think sign-in consistency will help many of our users, we're adding a control that allows users to turn off linking web-based sign-in with browser-based sign-in—that way users have more control over their experience. For users that disable this feature, signing into a Google website will not sign them into Chrome."

Google Chrome engineer Adrienne Porter Felt replied with an explanation of why automatic sign-in was turned on by default in Chrome 69. Porter Felt stated that the intent is to prevent a 'common' confusion where the login state of the browser ends up being different from the login state of the content area. The reply from a Google engineer is not sufficient, notes Green.

In the Chrome blog post, Google also addressed the concerns about cookies: "We're also going to change the way we handle the clearing of auth cookies. In the current version of Chrome, we keep the Google auth cookies to allow you to stay signed in after cookies are cleared. We will change this behavior so that all cookies are deleted and you will be signed out."

Ending thoughts

It is concerning that signing into any Google product automatically signs you into Chrome. Moreover, syncing is just an accidental click away, and many people wouldn't want their data synced like that. If sync is not turned on by default, why sign users in by default in the first place? It makes sense where multiple accounts are in play, but in any case there should be a prompt that makes users consciously choose to sign in to Chrome. Had the user backlash not happened, the next step might have been auto-sync on login. This design choice has definitely eroded trust and goodwill among many Chrome users, some of whom are now seriously looking for viable alternatives.

Google Chrome's 10th birthday brings in a new Chrome 69
Microsoft Cloud Services get GDPR Enhancements
Google's new Privacy Chief officer proposes a new framework for Security Regulation

Ex-googler who quit Google on moral grounds writes to Senate about company’s “Unethical” China censorship plan

Melisha Dsouza
27 Sep 2018
4 min read
"I am part of a growing movement in the tech industry advocating for more transparency, oversight and accountability for the systems we build." - Jack Poulson, former Google scientist

Project Dragonfly is making the rounds on the internet yet again. Jack Poulson, a former Google scientist who quit Google in September 2018 over its plan to build a censored search engine in China, has written a letter to U.S. senators revealing new details of the project. The letter lists several details of Google's work on the Chinese search engine that had been reported but never officially confirmed by the company, and he affirms that some company employees may have "actively subverted" an internal privacy review of the system.

Poulson was strictly opposed to the idea of Google supporting China's censorship by blacklisting keywords such as human rights, democracy, peaceful protest, and religion in its search engine. In protest against this project, more than 1,000 employees signed an open letter asking the company to be transparent, and many employees, including Poulson, took the drastic step of resigning from the company altogether. Now, fearing Google's role in violating human rights in China, Poulson has sent a letter to members of the Senate Committee on Commerce, Science, and Transportation. The letter states that there has been "a pattern of unethical and unaccountable decision making from company leadership" at Google. He has requested that Keith Enright, Google's chief privacy officer, respond to concerns raised by 14 leading human rights groups, who said in late August that Dragonfly could result in Google "directly contributing to, or [becoming] complicit in, human rights violations."

The letter highlights a major flaw in the process of developing the Chinese search platform. Poulson says there was "a catastrophic failure of the internal privacy review process, which one of the reviewers characterized as [having been] actively subverted." Citing anonymous sources familiar with the project, The Intercept affirms that the "catastrophic failure" Poulson mentioned relates to an internal dispute between the Google employees who work on privacy issues and the engineers who developed the censored search system. The privacy reviewers were led to believe that the code used for developing the engine did not involve user data. After The Intercept exposed the project in early August, the privacy reviewers examined the code and felt that their colleagues working on Dragonfly had seriously and purposely misled them. The engine did involve user data, and it was designed to link users' search queries to their personal phone numbers and to track their internet movements, IP addresses, and information about the devices they use and the links they clicked on.

Poulson told the senators that he could "directly verify" that a prototype of Dragonfly would allow a Chinese partner company to "search for a given user's search queries based on their phone number." The code incorporates an extensive censorship blacklist developed in accordance with the Chinese government; it censors words like the English term "human rights", the Mandarin terms for 'student protest' and 'Nobel prize', and very large numbers of phrases involving 'Xi Jinping' and other members of the CCP. The engine is also explicitly coded to ensure that only Chinese government-approved air quality data is returned in response to Chinese users' searches.

This incident takes us back to August 2018, when, in an open letter to Google CEO Sundar Pichai, US Senator for Florida Marco Rubio, leading a bipartisan group of senators, expressed concerns that the project was "deeply troubling" and risked making Google "complicit in human rights abuses related to China's rigorous censorship regime". If Google does go ahead with this project, other non-democratic nations may follow suit and demand customization of the search engine to fit their own rules, even where those rules violate human rights. Citizens will have to think twice before leaving any internet footprint that could be traced by the government. To gain deeper insights on this news, you can head over to The Intercept.

1k+ Google employees frustrated with continued betrayal, protest against Censored Search engine project for China
Skepticism welcomes Germany's DARPA-like cybersecurity agency – The federal agency tasked with creating cutting-edge defense technology
Google's 'mistakenly deployed experiment' covertly activated battery saving mode on multiple phones today

Ex-employee on contract sues Facebook for not protecting content moderators from mental trauma

Natasha Mathur
27 Sep 2018
5 min read
An ex-employee filed a lawsuit against Facebook last week, alleging that Facebook is not providing enough protection to the content moderators whose jobs involve reviewing disturbing content on the platform.

Why is Selena Scola, a content moderator, suing Facebook?

"Plaintiff Selena Scola seeks to protect herself and all others similarly situated from the dangers of psychological trauma resulting from Facebook's failure to provide a safe workplace for the thousands of contractors who are entrusted to provide the safest environment possible for Facebook users," reads the lawsuit.

Facebook receives millions of videos, images, and broadcast posts depicting child sexual abuse, rape, torture, bestiality, beheadings, suicide, and murder. To make Facebook a safe platform for users, it relies on machine learning augmented by content moderators. This ensures that any image violating the corporation's terms of use is removed from the platform as quickly as possible. "Facebook's content moderators are asked to review more than 10 million potentially rule-breaking posts per week. Facebook aims to do this with an error rate of less than one percent, and seeks to review all user-reported content within 24 hours," says the lawsuit.

Although this safeguard helps maintain safety on the platform, content moderators witness thousands of such extreme images and videos every day. Because of this constant exposure to disturbing graphic content, content moderators suffer significant trauma, with many developing post-traumatic stress disorder (PTSD), the lawsuit highlights.

What does the law say about workplace safety?

Facebook claims to already have workplace safety standards in place, like many other tech giants, to protect content moderators. These reportedly include providing moderators with mandatory counseling and mental health support, altering the resolution and audio of traumatizing images, and training moderators to recognize the physical and psychological symptoms of PTSD. We have, however, found it difficult to locate the said document.

As per the lawsuit, "Facebook ignores the workplace safety standards it helped create. Instead, the multibillion-dollar corporation affirmatively requires its content moderators to work under conditions known to cause and exacerbate psychological trauma". This is against California law, which states, "Every employer shall do every other thing reasonably necessary to protect the life, safety, and health of Employees. This includes establishing, implementing, and maintaining an effective injury prevention program. Employers must provide and use safety devices and safeguards reasonably adequate to render the employment and place of employment safe".

Facebook hires content moderators on a contract basis

Tech giants such as Facebook generally have a two-level workforce in place. The top level comprises Facebook's official employees, such as engineers, designers, and managers, who enjoy the majority of benefits such as high salaries and lavish perks. Employees such as content moderators fall into the lower level. The majority of these workers are not even permanent employees at Facebook; they are employed on a contract basis. Because of this, they are often paid poorly, miss out on the benefits that regular employees get, and have limited access to Facebook management. One such employee, who wished to remain anonymous, told the Guardian last year, "We were underpaid and undervalued". He earned roughly $15 per hour for removing terrorist-related content from Facebook, after a two-week training period. These workers often come from poor financial backgrounds, with many having families to support, so taking such a job seems a better option than being unemployed.

Selena Scola was employed by Pro Unlimited (a contingent labor management company in New York) as a Public Content Contractor from approximately June 19, 2017, until March 1, 2018, at Facebook's offices in Menlo Park and Mountain View, California. During the entirety of this period, Scola was employed solely by Pro Unlimited, an independent contractor of Facebook; she was never directly employed by Facebook in any capacity. Scola is also suing Pro Unlimited. "According to the Technology Coalition, if a company contracts with a third-party vendor to perform duties that may bring vendor employees in contact with graphic content, the company should clearly outline procedures to limit unnecessary exposure and should perform an initial audit of a contractor's wellness procedures for its employees," says the lawsuit.

Scola is not the only one who has complained about the company. Over a hundred conservative Facebook employees formed an online group last month to protest against the company's "intolerant" liberal culture, and the mass exodus of high-profile executives is also indicative of a deeper people and culture problem at Facebook. Additionally, Facebook has faced many controversies regarding user data, fake news, and hate speech. The Department of Housing and Urban Development (HUD) filed a complaint against Facebook last month for selling ads that discriminate against users on the basis of race, religion, and sexuality. Similarly, Facebook was found to have enabled discriminatory advertisements: it provided third-party advertisers with an option to exclude religious minorities, immigrants, LGBTQ individuals, and other protected groups from seeing their ads. Given the increasing number of controversial cases against Facebook, it is high time for the company to take the right measures towards solving these issues.

The case is Scola v Facebook Inc and Pro Unlimited Inc, filed in the Superior Court of the State of California. For more information, read the official lawsuit.

How far will Facebook go to fix what it broke: Democracy, Trust, Reality
Facebook COO, Sandberg's Senate testimony: On combating foreign influence, fake news, and upholding election integrity
Time for Facebook, Twitter and other social media to take responsibility or face regulation

Google’s new Privacy Chief officer proposes a new framework for Security Regulation

Natasha Mathur
25 Sep 2018
4 min read
Google yesterday announced Keith Enright as its new Chief Privacy Officer; Enright has spent a decade leading Google's Privacy Legal team and has long been heavily involved in speaking publicly about Google's privacy and security practices. As Chief Privacy Officer, Enright will be responsible for setting the privacy program at Google, which includes keeping its security tools, policies, and practices user-focused. "My team's goal is to help you enjoy the benefits of technology while remaining in control of your privacy," mentions Enright on the Google outreach page.

Google has already been taking measures on security: last month it launched its "Protect your Election" program, which included security policies to defend against state-sponsored phishing attacks. "This is an important time to take on this new role. There is real momentum to develop baseline rules of the road for data protection. Google welcomes this and supports comprehensive, baseline privacy regulation. People deserve to feel comfortable that all entities that use personal information will be held accountable for protecting it," as per the Google blog.

Lately, many companies have been raising their voices on security-related issues; for instance, YouTube's CBO and the German OpenStreetMap community spoke out against Article 13 of the EU's controversial copyright law. With organizations citing the EU's privacy laws as strict, Enright proposed a new privacy framework that lays out Google's view on the requirements, scope, and enforcement expectations in data protection laws. The framework was established using existing privacy frameworks and the services that depend on personal data, and it is meant to comply with evolving data protection laws around the world. "These principles help us evaluate new legislative proposals and advocate for responsible, interoperable and adaptable data protection regulations. How these principles are put into practice will shape the nature and direction of innovation," says Enright.

The principles in the new framework are based on established privacy regimes and apply to organizations that are responsible for making decisions about the collection and use of personal information. Enright will discuss the principles in the framework and Google's work on privacy and security with the U.S. Senate later this week. The framework states the requirements, scope, and accountability in data protection laws as follows:

Requirements
Collecting and using personal information responsibly.
Maintaining transparency to help individuals stay informed.
Placing reasonable limitations on the means of collecting, using, and disclosing personal information.
Maintaining the quality of personal information.
Making it practical for individuals to control the use of their personal information.
Giving individuals the ability to access, correct, delete, and download personal information about them.
Including the requirements needed to secure personal information.

Scope and Accountability
Holding organizations accountable for compliance.
Focusing on the risk of harm to individuals and communities.
Distinguishing direct consumer services from enterprise services.
Defining personal information flexibly to ensure the proper incentives and handling.
Applying rules to all organizations that process personal information.
Designing regulations to improve the ecosystem and accommodate changes in technology and norms.
Applying a geographic scope that accords with international norms.
Encouraging global interoperability.

"Sound practices combined with strong and balanced regulations can help provide individuals with confidence that they're in control of their personal information," says Enright. For more information, check out the official framework.

EU slaps Google with $5 billion fine for the Android antitrust case
Ex-Google CEO, Eric Schmidt, predicts an internet schism by 2028
Google plans to let the AMP Project have an open governance model, soon!

How Twitter is defending against the Silhouette attack that discovers user identity

Savia Lobo
20 Sep 2018
5 min read
Twitter Inc. disclosed that it is learning to defend against a new cyber attack technique, Silhouette, that discovers the identity of logged-in twitter users. This issue was reported to Twitter first in December 2017 through their vulnerability rewards program by a group of researchers from Waseda University and NTT. The researchers submitted a draft of their paper for the IEEE European Symposium on Security and Privacy in April 2018. Following this, Twitter’s security team prioritized the issue and routed it to several relevant teams and also contacted several other at-risk sites and browser companies to urgently address the problem. The researchers too recognized the significance of the problem and formed a cross-functional squad to address it. The Silhouette attack This attack exploits variability during the time taken by web pages to load. This threat is established by exploiting a function called ‘user blocking’ that is widely adopted in (Social Web Services) SWSs. Here the malicious user can also control the visibility of pages from legitimate users. As a preliminary step, the malicious third party creates personal accounts within the target SWS (referred to below as “signaling accounts”) and uses these accounts to systematically block some users on the same service thereby constructing a combination of non-blocked/blocked users. This pattern can be used as information for uniquely identifying user accounts. At the time of identification execution, that is, when a user visits a website on which a script for identifying account names has been installed, that user will be forced to communicate with pages of each of those signaling accounts. This communication, however, is protected by the Same-Origin Policy*5, so the third party will not be able to directly obtain the content of a response from such a communication. The action taken against Silhouette attack The Waseda University and NTT researchers provided various ideas for mitigating the issue in their research paper. The ideal solution was to use the SameSite attribute for the twitter login cookies. This would mean that requests to Twitter from other sites would not be considered logged-in requests. If the requests aren't logged-in requests, identity can't be detected. However, this feature was an expired draft specification and it had only been implemented by Chrome. Although Chrome is one of biggest browser clients by usage, Twitter needed to cover other browsers as well. Hence, they decided to look into other options to mitigate this issue. Twitter decided to reduce the response size differences by loading a page shell and then loading all content with JavaScript using AJAX. Page-to-page navigation for the website already works this way. However, the server processing differences were still significant for the page shell, because the shell still needed to provide header information and those queries made a noticeable impact on response times. Twitter’s CSRF protection mechanism for POST requests checks if the origin and referer headers of the request are sourced from Twitter. This proved effective in addressing the vulnerability, but it prevented this initial load of the website. Users might load Twitter from a Google search result or by typing the URL into the browser. To address this case, Twitter created a blank page on their site which did nothing but reload itself. Upon reload, the referer would be set to twitter.com, and so it would load correctly. There is no way for non-Twitter sites to follow that reload. 
The blank page is tiny, so although an extra round trip is incurred, it does not noticeably affect load times, and the solution could be applied across Twitter's various high-level web stacks. Twitter also had to account for several other considerations:

- Twitter supports a legacy version of the site (known internally as M2) that operates without JavaScript, so the reloading solution had to work without JavaScript as well.
- The blank reloading page had to follow Twitter's own CSP (Content Security Policy) rules, which can vary from service to service.
- Twitter needed to pass through the original HTTP referrer to make sure metrics still accurately attributed search engine referrals.
- The page could not be cached by the browser, or the blank page would reload itself indefinitely. Twitter therefore used cookies to detect such loops, showing a short friendly message and a manual link if the page appeared to be reloading more than once.

Implementing the SameSite cookie on major browsers

Although Twitter had already shipped its mitigation, it also discussed the SameSite cookie attribute with the other major browser vendors. All major browsers have now implemented SameSite cookie support, including Chrome, Firefox, Edge, Internet Explorer 11, and Safari. Rather than adding the attribute to its existing login cookie, Twitter added two new cookies for SameSite, to reduce the risk of logging users out should a browser or network issue corrupt the cookie when it encounters the SameSite attribute.

Adding the SameSite attribute to a cookie is not time-consuming: one just needs to add "SameSite=lax" to the Set-Cookie HTTP header (a minimal sketch appears at the end of this article). However, Twitter's servers depend on Finagle, a wrapper around Netty, which does not support extensions to the Cookie object. As per a Twitter post, "When investigating, we were surprised to find a feature request from one of our own developers the year before! But because SameSite was not an approved part of the spec, there was no commitment from the Netty team to implement. Ultimately we managed to add an override into our implementation of Finagle to support the new cookie attribute."

Read more about this in detail on Twitter's blog post.
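For reference, the sketch below shows what the SameSite change looks like at the HTTP level, using Python's standard http.cookies module (Python 3.8+, where the "samesite" attribute is supported). The cookie name and value are hypothetical placeholders; this illustrates the header syntax the article describes, not Twitter's actual cookie scheme.

```python
# Illustrative sketch: the cookie name and value are hypothetical.
# Requires Python 3.8+, where Morsel supports the "samesite" attribute.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["auth_token_samesite"] = "opaque-session-token"
cookie["auth_token_samesite"]["secure"] = True      # sent over HTTPS only
cookie["auth_token_samesite"]["httponly"] = True    # not readable from JavaScript
cookie["auth_token_samesite"]["samesite"] = "Lax"   # omitted from cross-site subresource requests

# Emits a header along the lines of:
#   Set-Cookie: auth_token_samesite=opaque-session-token; Secure; HttpOnly; SameSite=Lax
print(cookie.output())
```

Because a Lax cookie is not attached to cross-site subresource requests, a Silhouette-style attack page on another origin only ever receives logged-out responses, and the blocked/non-blocked timing signal disappears.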

‘Peekaboo’ Zero-Day Vulnerability allows hackers to access CCTV cameras, says Tenable Research

Melisha Dsouza
20 Sep 2018
3 min read
Earlier this week, Tenable Inc announced that its research team had discovered a zero-day vulnerability, dubbed 'Peekaboo', in NUUO software. NUUO licenses its software to at least 100 other brands, including Sony, Cisco Systems, D-Link, Panasonic, and many more. The vulnerable device is the NVRMini2, a network-attached storage device and network video recorder.

The vulnerability would allow cybercriminals to view, disable, or otherwise manipulate video footage using administrator privileges. To give a small gist of the situation, hackers could replace the live surveillance feed with a static image of the area, helping criminals enter a premises undetected by the CCTV. Cameras with this bug could be manipulated and taken offline, worldwide. This is also not the first time NUUO devices have been affected by a vulnerability: just last year, NUUO NVR devices were reportedly targeted by the Reaper IoT botnet.

"The Peekaboo flaw is extremely concerning because it exploits the very technology we rely on to keep us safe" - Renaud Deraison, co-founder and chief technology officer, Tenable

Vulnerabilities discovered by Tenable

The two vulnerabilities, CVE-2018-1149 and CVE-2018-1150, are tied to the NUUO NVRMini2 web server software.

#1 CVE-2018-1149: Allows an attacker to sniff out affected gear

Attackers can sniff out affected gear using Shodan and then trigger a buffer overflow that gives them access to the camera's web server Common Gateway Interface (CGI), which acts as a gateway between a remote user and the web server. The attack delivers an oversized cookie to the CGI handler; because the CGI does not properly validate the user's input, the overflow lets the attacker take over the web server portion of the camera (an illustrative probe of this input-validation failure is sketched at the end of this article).

#2 CVE-2018-1150: Takes advantage of backdoor functionality

This bug takes advantage of backdoor functionality in the NUUO NVRMini2 web server. When the backdoor PHP code is enabled, it allows an unauthenticated attacker to change the password of any registered user except the system administrator.

'Peekaboo' affects firmware versions older than 3.9.0. Tenable states that NUUO was notified of the vulnerability in June and was given 105 days to issue a patch before the bugs were publicly disclosed. Tenable's GitHub page provides more details on potential exploits tested against one of NUUO's NVRMini2 devices.

NUUO is planning to issue a security patch. Meanwhile, users are advised to restrict access to their NUUO NVRMini2 deployments. Owners of devices connected directly to the internet are especially at risk; affected end users are urged to disconnect these devices from the internet until a patch is released. For more information on Peekaboo, head over to the Tenable Research Advisory blog post.
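To make the oversized-cookie vector behind CVE-2018-1149 concrete, below is a minimal sketch of how an administrator might probe a device they own for unvalidated cookie input. The host address and CGI path are hypothetical placeholders, not NUUO's real endpoint, and this is a harmless length probe rather than Tenable's exploit: it only observes whether the handler degrades gracefully as the cookie grows.

```python
# Illustrative sketch only: probe YOUR OWN device for unvalidated
# cookie input. The host and CGI path below are hypothetical.
import http.client

HOST = "192.0.2.10"            # placeholder address of a device you own
CGI_PATH = "/cgi-bin/example"  # hypothetical CGI endpoint

def probe(cookie_size: int) -> int:
    """Send an oversized session cookie and return the HTTP status code."""
    conn = http.client.HTTPConnection(HOST, timeout=10)
    headers = {"Cookie": "session=" + "A" * cookie_size}
    conn.request("GET", CGI_PATH, headers=headers)
    status = conn.getresponse().status
    conn.close()
    return status

if __name__ == "__main__":
    for size in (64, 1024, 16384):
        try:
            print(f"{size:>6} bytes -> HTTP {probe(size)}")
        except OSError as exc:
            # A dropped connection on large inputs can indicate a crash
            # in the CGI handler, i.e. unvalidated input.
            print(f"{size:>6} bytes -> connection error: {exc}")
```

Consistent, normal status codes across sizes suggest the input is at least being bounded; abrupt connection resets or timeouts on larger values are a red flag that the device should be isolated until NUUO's patch is applied.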