
Tech News - Cybersecurity

373 Articles
NSA’s EternalBlue leak leads to 459% rise in illicit crypto mining, Cyber Threat Alliance report

Melisha Dsouza
20 Sep 2018
3 min read
"Illicit mining is the 'canary in the coal mine' of cybersecurity threats. If illicit cryptocurrency mining is taking place on your network, then you most likely have worse problems, and we should consider the future of illicit mining as a strategic threat." - Neil Jenkins, Chief Analytic Officer, Cyber Threat Alliance

A leaked software tool from the US National Security Agency has led to a surge in illicit cryptocurrency mining, researchers said on Wednesday. A report released by the Cyber Threat Alliance, an association of cybersecurity firms and experts, states that it detected a 459 percent increase over the past year in illicit crypto mining - a technique hackers use to steal the processing power of computers to create cryptocurrency.

One reason for the sharp rise was last year's leak, by a group of hackers known as the Shadow Brokers, of EternalBlue - software developed by the NSA to exploit vulnerabilities in the Windows operating system. Countless organizations are still being victimized by this exploit, even though a patch for EternalBlue has been available for 18 months.

The rise in hacking coincides with the growing use of virtual currencies such as Bitcoin, Ethereum, and Monero. Hackers have discovered ways to tap the processing power of unsuspecting computer users to illicitly generate currency. Neil Jenkins said in a blog post that the rise in crypto-mining malware highlights "broader cybersecurity threats". Crypto mining, once non-existent, is now on virtually every top firm's threat list.

The report adds that 85 percent of illicit cryptocurrency malware mines Monero, while 8 percent mines Bitcoin. Although Bitcoin is better known, Monero offers more privacy and anonymity, which helps cybercriminals hide both their mining activities and their transactions using the currency. Transaction addresses and values are obscured in Monero by default, making it incredibly difficult for investigators to find the cybercrime footprint.

The blog advises network defenders to make illicit mining harder for cybercriminals by improving cyber hygiene, and to improve both the detection of crypto mining and the incident response plans that go with it. Head over to TechXplore for more insights on this news.

NSA researchers present security improvements for Zephyr and Fuchsia at Linux Security Summit 2018
Top 15 Cryptocurrency Trading Bots
Cryptojacking is a growing cybersecurity threat, report warns
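The report's closing advice on detecting illicit mining can be sketched in a few lines. This is purely illustrative: the pool hostnames and the process/connection tuples below are hypothetical placeholders, not data from the report.

```python
# Illustrative sketch: flag processes with outbound connections to
# known mining-pool hostnames. The pool list is a hypothetical
# placeholder, not an authoritative blocklist.
SUSPECT_POOLS = {"pool.minexmr.example", "xmr.pool.example"}

def flag_mining_connections(connections):
    """connections: iterable of (process_name, remote_host) tuples."""
    return [(proc, host) for proc, host in connections
            if host in SUSPECT_POOLS]

observed = [
    ("chrome", "www.example.com"),
    ("svchost", "pool.minexmr.example"),
]
print(flag_mining_connections(observed))
```

In practice, detection would combine such network indicators with CPU-usage anomalies and endpoint telemetry rather than a static hostname list.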


The much loved reverse chronological Twitter timeline is back as Twitter attempts to break the ‘filter bubble’

Natasha Mathur
19 Sep 2018
3 min read
Twitter CEO Jack Dorsey announced on Monday that Twitter is bringing back the much-loved, original reverse chronological order for the Twitter timeline. You can enable it with a settings change.

https://twitter.com/jack/status/1042038232647647232

Twitter is also working on a way for users to easily toggle between the two views: a timeline of the tweets most relevant to you, and a timeline of all the latest tweets. To switch to the reverse chronological timeline, go to Settings on Twitter, select the privacy option, go to the content section, and uncheck the box that says "Timeline - show the best tweets first". Twitter has also removed the "in case you missed it" section from the settings.

https://twitter.com/TwitterSupport/status/1041838957896450048

Reverse chronological order was Twitter's original presentation style, long before it made the "top tweets" algorithm the default back in 2016. When Twitter announced that it would no longer show tweets in chronological order, many people were unhappy; the new theme was despised enough that the hashtag #RIPTwitter trended at the time. Twitter's 2016 algorithm focused on surfacing the top, most-talked-about tweets, but a majority of users felt differently. People enjoyed the simpler reverse-chron Twitter, where they could get real-time updates from close friends, family, and celebrities, not a Twitter that shows only the most relevant tweets stacked together. Twitter defended the new approach, tweeting yesterday: "We've learned that when showing the best Tweets first, people find Twitter more relevant and useful. However, we've heard feedback from people who at times prefer to see the most recent Tweets".

Twitter has been making many changes recently, after CEO Jack Dorsey testified before the House Energy and Commerce Committee regarding Twitter's algorithms and content monitoring. Twitter says it wants people to have more control over their timeline.

https://twitter.com/TwitterSupport/status/1042155714205016064

Public reaction to the change has been largely positive, with many people criticizing the company's top-tweets timeline.

https://twitter.com/_Case/status/1041841407739260928
https://twitter.com/terryb600/status/1041847173770620929
https://twitter.com/smithant/status/1041884671921930240
https://twitter.com/alliecoyne/status/1041850426159583232
https://twitter.com/fizzixrat/status/1041881429477654528

One common pattern: people brought up Facebook a lot while discussing the change.

https://twitter.com/_Case/status/1042068118997270528
https://twitter.com/Depoetic/status/1041842498459578369
https://twitter.com/schachin/status/1041925075698503680

Twitter seems to have dodged a bullet by giving its users back what they truly want.

Twitter's trying to shed its skin to combat fake news and data scandals, says Jack Dorsey
Facebook, Twitter open up at Senate Intelligence hearing, committee does 'homework' this time
Jack Dorsey to testify explaining Twitter algorithms before the House Energy and Commerce Committee


Google makes Trust Services Root R1, R2, R3, and R4 Inclusion Request

Melisha Dsouza
17 Sep 2018
3 min read
After launching its Certification Authority in August 2017, Google has now filed a request with the Mozilla root store for the inclusion of the Google Trust Services R1, R2, R3, and R4 roots, as documented in the following bug. Google's application states:

"Google is a commercial CA that will provide certificates to customers from around the world. We will offer certificates for server authentication, client authentication, email (both signing and encrypting), and code signing. Customers of the Google PKI are the general public. We will not require that customers have a domain registration with Google, use domain suffixes where Google is the registrant, or have other services from Google."

What are the Google Trust Services roots?

To adopt an independent infrastructure and build the "foundation of a more secure web," Google Trust Services lets the company issue its own TLS/SSL certificates for securing its web traffic via HTTPS, instead of relying on third-party certificates. The main aim of launching GTS was to bring security and authentication certificates up to par with Google's rigorous security standards. This supports moves such as marking the old, insecure HTTP standard as invalid in Chrome, and deprecating Adobe Flash, a web program known to be insecure and a resource hog.

GTS will provide HTTPS certificates for everything from public websites to API servers, and it covers all Alphabet companies, not just Google. Developers who build products that connect to Google's services will have to include the new root certificates. All GTS roots expire in 2036, while GS Root R2 expires in 2021 and GS Root R4 in 2038. Google will also be able to cross-sign its CAs, using GS Root R3 and GeoTrust, to ease potential timing issues while setting up the root CAs. To know more about these trust services, you can visit GlobalSign.

Some notable points in this request:
- Google has supplied a key generation ceremony audit report.
- Other than the disclosed intermediates and required test certificates, no issuance has been detected from these roots.
- Section 1.4.2 of the CPS expressly forbids the use of Google certificates for "man-in-the-middle purposes".
- Appendix C of the current CPS indicates that Google limits the lifetime of server certificates to 365 days.

The following concerns exist with the roots:
- From the transfer on 11 August 2016 through 8 December 2016, it would not have been clear at the time whether any policies applied to these new roots. The applicable CPS (Certification Practice Statement) during that period makes no reference to these roots, although Google does state in its current CPS that the roots were operated according to that CPS.
- From the transfer on 11 August 2016 through the end of Google's audit period on 30 September 2016, these roots were not explicitly covered by either Google's audit or GlobalSign's audit.

The discussion concluded with this policy being added to the main Mozilla Root Store Policy (section 8). With these changes and the filing of the bug, Mozilla plans to take no action against GTS based on what has been discovered and discussed.

To get a complete insight into this request, head over to Google Groups.

Let's Encrypt SSL/TLS certificates gain the trust of all major root programs
Pay your respects to Inbox, Google's email innovation is getting discontinued
Google's prototype Chinese search engine 'Dragonfly' reportedly links searches to phone numbers
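The 365-day server-certificate limit noted in Appendix C of the CPS boils down to a simple date check. A minimal sketch; the function name is ours, purely illustrative:

```python
from datetime import date

# Sketch of the CPS Appendix C constraint: a server certificate's
# validity period must not exceed 365 days.
def within_lifetime_limit(not_before: date, not_after: date,
                          max_days: int = 365) -> bool:
    return (not_after - not_before).days <= max_days

print(within_lifetime_limit(date(2018, 1, 1), date(2018, 12, 31)))  # a 364-day cert
print(within_lifetime_limit(date(2018, 1, 1), date(2019, 6, 1)))    # well over the limit
```

A real validation would of course read the notBefore/notAfter fields from the X.509 certificate itself rather than take dates as arguments.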


Google’s prototype Chinese search engine ‘Dragonfly’ reportedly links searches to phone numbers

Melisha Dsouza
17 Sep 2018
3 min read
Last month, The Intercept reported that Google is building a prototype search engine for China called 'Dragonfly', which led Google employees to pressure the company to abandon the project on ethical grounds. Google then appeased its employees, stating that the project was simply an exploration and nowhere near completion. Now there are fresh reports from The Intercept that Google's custom search engine would link Chinese users' search queries to their personal phone numbers, making it easier for the government to track their searches. This means those who search for banned information could be interrogated or detained if security agencies got hold of Google's search records.

According to The Intercept, Dragonfly will be designed for Android devices and would remove content considered sensitive by China's authoritarian Communist Party regime, including information about freedom of speech, dissidents, peaceful protest, and human rights. Citing anonymous sources familiar with the plan, including a Google whistleblower with "moral and ethical concerns" about Google's role in censorship, The Intercept revealed that "programmers and engineers at Google have created a custom Android app" which has already been demonstrated to the Chinese government. The finalized version could be launched in the next six to nine months, after approval from Chinese officials.

What this means for other nations and for Google

China runs strict cyber surveillance, and the fact that this tech giant is bending to China's demands is a concern for US legislators as well as citizens of other countries. Last week, in an open letter to Google CEO Sundar Pichai, US Senator Marco Rubio of Florida, leading a bipartisan group of senators, called the project "deeply troubling" and said it risks making Google "complicit in human rights abuses related to China's rigorous censorship regime". He also requested answers to several open questions; for instance, what has changed since Google's 2010 withdrawal from China to make the tech giant comfortable cooperating with China's rigorous censorship regime. The project is also drawing attention from users all over the globe.

Google has not yet confirmed the existence of Dragonfly and has publicly declined to comment on reports about the project. The only comment, released to Fox News by a Google spokesperson on Sunday, was that it is just doing 'exploratory' work on a search service in China and that it is 'not close to launching a search product.'

In protest at the project, more than 1,000 employees signed an open letter last month asking the company to be transparent. Now some employees have taken the next step by resigning from the company altogether. This is not the first time Google employees have resigned in protest over one of the company's projects. Earlier this year, Project Maven, a drone initiative for the US government that could weaponize the company's AI research, caused a stir; at least a dozen employees reportedly quit over the initiative.

The scrutiny of Google's stance on privacy continues to intensify. It is about time the company started taking all aspects of a user's internet privacy into consideration. To know more about Project 'Dragonfly', head over to The Intercept.

Google's 'mistakenly deployed experiment' covertly activated battery saving mode on multiple phones today
Did you know your idle Android device sends data to Google 10 times more often than an iOS device does to Apple?
Bloomberg says Google, Mastercard covertly track customers' offline retail habits via a secret million dollar ad deal


IoT botnets Mirai and Gafgyt target vulnerabilities in Apache Struts and SonicWall

Savia Lobo
12 Sep 2018
4 min read
Unit 42 of Palo Alto Networks reported two new variants of the IoT botnets Mirai and Gafgyt last week, on September 7, 2018. The former targets vulnerabilities in Apache Struts; the latter targets older, unsupported versions of SonicWall's Global Management System (GMS).

Researchers at Palo Alto Networks said, "Unit 42 found the domain that is currently hosting these Mirai samples previously resolved to a different IP address during the month of August. During that time this IP was intermittently hosting samples of Gafgyt that incorporated an exploit against CVE-2018-9866, a SonicWall vulnerability affecting older versions of SonicWall Global Management System (GMS). SonicWall has been notified of this development."

Mirai variant botnet exploit in Apache Struts

The Mirai variant targets 16 different vulnerabilities, including the Apache Struts arbitrary command execution vulnerability CVE-2017-5638, exploited via crafted Content-Type, Content-Disposition, or Content-Length HTTP headers. The same Struts bug was associated with the massive Equifax data breach in September 2017. This botnet had previously targeted routers and other IoT devices, as revealed around the end of May 2018; however, this is the first instance of Mirai targeting a vulnerability in Apache Struts. The new variant also targets vulnerabilities such as a Linksys E-series device remote code execution flaw, a D-Link router remote code execution flaw, an OS command injection flaw affecting Zyxel routers, an unauthenticated command injection flaw affecting AVTECH IP devices, and more. The complete list of exploits incorporated in this Mirai variant is in the Unit 42 post.

Gafgyt variant exploit in SonicWall GMS

The Gafgyt variant targets CVE-2018-9866, a security flaw discovered in July that affects old, unsupported versions of SonicWall GMS, that is, versions 8.1 and older. The vulnerability is caused by a lack of sanitization of XML-RPC requests to the set_time_config method. There is currently no fix for the flaw other than for GMS users to upgrade to version 8.2. Researchers noted that these samples first surfaced on August 5, less than a week after the publication of a Metasploit module for the vulnerability. Some of the variant's configured commands include launching the Blacknurse DDoS attack. Unit 42 researchers said, "Blacknurse is a low bandwidth DDoS attack involving ICMP Type 3 Code 3 packets causing high CPU loads first discovered in November 2016. The earliest samples we have seen supporting this DDoS method are from September 2017."

The researchers also mentioned, "The incorporation of exploits targeting Apache Struts and SonicWall by these IoT/Linux botnets could indicate a larger movement from consumer device targets to enterprise targets. These developments suggest these IoT botnets are increasingly targeting enterprise devices with outdated versions."

In an email to us, SonicWall says that "The vulnerability disclosed in this post is not an announcement of a new vulnerability in SonicWall Global Management System (GMS). The issue referenced only affects an older version of the GMS software (version 8.1) which was replaced by version 8.2 in December 2016. Customers and partners running GMS version 8.2 and above are protected against this vulnerability. Customers still using GMS version 8.1 should apply a hotfix supplied by SonicWall in August 2018 and plan for an immediate upgrade, as GMS 8.1 went out of support in February 2018. SonicWall and its threat research team continuously updates its products to provide industry-leading protection against the latest security threats, and it is therefore crucial that customers are using the latest versions of our products. We recommend that customers with older versions of GMS, which are long out of support, should upgrade immediately from www.mysonicwall.com."

To know more about these IoT botnet attacks in detail, visit the Palo Alto Networks Unit 42 blog post.

Build botnet detectors using machine learning algorithms in Python [Tutorial]
Cisco and Huawei Routers hacked via backdoor attacks and botnets
How to protect yourself from a botnet attack
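Exploits for the Struts flaw CVE-2017-5638 arrive in crafted HTTP headers carrying OGNL expressions. A toy detector for such headers, purely illustrative: the marker list here is a deliberate simplification chosen for the example, and real IDS signatures for this CVE are far more thorough.

```python
# Illustrative check for OGNL-injection markers in the HTTP headers
# abused by CVE-2017-5638 exploits. The marker list is a deliberately
# simplified example, not a production signature.
SUSPICIOUS_MARKERS = ("%{", "${")
WATCHED_HEADERS = ("Content-Type", "Content-Disposition", "Content-Length")

def looks_like_struts_exploit(headers: dict) -> bool:
    return any(marker in headers.get(name, "")
               for name in WATCHED_HEADERS
               for marker in SUSPICIOUS_MARKERS)

benign = {"Content-Type": "application/json"}
crafted = {"Content-Type": "%{(#cmd='id')}"}  # simplified shape of a crafted header
print(looks_like_struts_exploit(benign), looks_like_struts_exploit(crafted))
```

Substring matching like this would misfire on legitimate headers in practice; it only shows where in the request the exploit payload travels.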


OpenSSL 1.1.1 released with support for TLS 1.3, improved side channel security

Melisha Dsouza
12 Sep 2018
3 min read
Yesterday (11 September), the OpenSSL team announced the stable release of OpenSSL 1.1.1. With two years of work and more than 500 commits behind it, the release comes with many notable upgrades. The most important new feature in OpenSSL 1.1.1 is TLSv1.3, published last month as RFC 8446 by the Internet Engineering Task Force. Applications working with OpenSSL 1.1.0 can gain the benefits of TLSv1.3 by upgrading to the new OpenSSL version.

TLS 1.3 features

- A reduction in the number of round trips required between client and server, improving connection times
- 0-RTT or "early data": the ability for clients to start sending encrypted data to the server straight away, without any round trips with the server
- Improved security through the removal of various obsolete and insecure cryptographic algorithms and the encryption of more of the connection handshake

For more details on TLS 1.3, read: Introducing TLS 1.3, the first major overhaul of the TLS protocol with improved security and speed

Updates in OpenSSL 1.1.1

A complete rewrite of the OpenSSL random number generator, introducing capabilities such as:

- The default RAND method now utilizes an AES-CTR DRBG according to NIST standard SP 800-90Ar1
- Support for multiple DRBG instances with seed chaining
- A public and a private DRBG instance
- DRBG instances are made fork-safe
- All global DRBG instances are kept on the secure heap if it is enabled
- The public and private DRBG instances are per-thread, for lock-free operation

Support for various new cryptographic algorithms:

- SHA3, SHA512/224 and SHA512/256
- EdDSA (including Ed25519 and Ed448)
- X448 (adding to the existing X25519 support in 1.1.0)
- Multi-prime RSA
- SM2, SM3, SM4
- SipHash
- ARIA (including TLS support)

The upgrade also introduces significant side-channel attack security improvements, support for the maximum fragment length TLS extension, and a new STORE module implementing a uniform, URI-based reader of stores containing keys, certificates, CRLs, and numerous other objects.

OpenSSL 1.0.2 will receive full support only until the end of 2018 and security fixes only until the end of 2019. The team advises users of OpenSSL 1.0.2 to upgrade to OpenSSL 1.1.1 at the earliest. Head over to the OpenSSL blog for further details.

GNU nano 3.0 released with faster file reads, new shortcuts and usability improvements
Haiku, the open source BeOS clone, to release in beta after 17 years of development
Ripgrep 0.10.0 released with PCRE2 and multi-line search support
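Applications can check at runtime whether the OpenSSL build behind them offers TLSv1.3 before depending on it. A sketch using Python's standard ssl module (these attributes require Python 3.7 or later):

```python
import ssl

# Report the linked OpenSSL version and, when the build supports it,
# require TLSv1.3 as the minimum protocol version for new connections.
print(ssl.OPENSSL_VERSION)

ctx = ssl.create_default_context()
if ssl.HAS_TLSv1_3:
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    print("TLSv1.3 supported; contexts can require it as a minimum")
else:
    print("TLSv1.3 not available; upgrade the underlying OpenSSL")
```

Pinning the minimum version is optional; by default the context negotiates the highest version both peers support.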

German OpenStreetMap protest against “Article 13” EU copyright reform making their map unusable

Sugandha Lahoti
10 Sep 2018
3 min read
The European Union's copyright reform bill is currently up for a vote in the European Parliament on September 12. Its Article 13 has been on the receiving end of a backlash, with many organizations protesting against it. Last week it was YouTube's CBO speaking out, and this week German OpenStreetMap has made its map unusable in protest against the EU copyright reform.

According to Article 13, there is an "obligation on information society service providers storing and giving access to large amounts of works and other subject-matter uploaded by their users to take appropriate and proportionate measures to ensure the functioning of agreements concluded with right holders and to prevent the availability on their services of content identified by rightholders in cooperation with the service providers".

Article 13 is a revamped version that the EU has come out with after the older version of the copyright reform bill was rejected by the Parliament back in July. The older version also received heavy criticism from policy experts and digital rights groups on the grounds that it violated the fundamental rights of internet users.

This legislation could change the balance of power between producers of music, news, and film and the dominant websites that host their work. On one side, people say that if passed, this law would mean the end of the free Internet: platforms would have to algorithmically pre-filter all user uploads and block fair-use content, satire, funny memes, and the like. On the other side, supporters of the law say their hard work is being compromised because they are not fairly compensated; these are creators who depend on being paid for the shareable content they create, such as musicians, authors, and filmmakers.

Many people have supported OpenStreetMap's decision. A Hacker News user pointed out, "Good for them. The Internet as we know it is being attacked from multiple angles right now, with the EU filtering proposals, AU/5Eyes anti-encryption proposals, etc." Another called it, "Oh no, more evil political hacking!" You can read more such opinions on Hacker News. You can also find some of the most common questions around the proposed Directive on the EU website.

Mozilla, Internet Society, and Web Foundation want G20 to address "techlash" fuelled by security and privacy concerns
Facebook COO Sandberg's Senate testimony: on combating foreign influence, fake news, and upholding election integrity
Twitter CEO Jack Dorsey's Senate testimony: on Twitter algorithms, platform health, role in elections and more


Snort 3 beta available now!

Melisha Dsouza
10 Sep 2018
2 min read
On 29th August 2018, the Snort team released the fourth alpha of the next-generation Snort IPS, Snort 3, as a beta. Along with all the Snort 2.X features, this version of Snort++ includes new features as well as bug fixes for the base version of Snort.

Here are some key features of Snort++:

- Support for multiple packet-processing threads
- Shared configuration and attribute table
- Simple, scriptable configuration
- Key components are now pluggable
- Autodetection of services for portless configuration
- Support for sticky buffers in rules
- Autogenerated reference documentation
- Better cross-platform support
- Easier component testing
- Support for pipelining of packet processing, hardware offload and data plane integration, and proxy mode

Below is a brief gist of these upgrades.

Easy configuration: LuaJIT is used for configuration, with a consistent and executable syntax.

Better detection of services: The team has worked closely with Cisco Talos to update rules to meet their needs, including a feature they call "sticky buffers". The Hyperscan search engine and regex fast patterns make rules faster and more accurate.

HTTP support: Snort 3 has a stateful HTTP inspector that handles 99 percent of the HTTP Evader cases; the aim is to reach 100 percent coverage soon. The HTTP support also includes new rule options.

Better performance: Deep packet inspection now performs better. Snort 3 supports multiple packet-processing threads and scales linearly, with a much smaller amount of memory required for shared configs.

JSON event logging: This can be used to integrate with tools such as the Elastic Stack. Check out the Snort blog post for more details.

More plugins: Snort 3 was designed to be extensible. It has over 225 plugins of various types, and it is easy for users to add their own codec, inspector, rule action, rule option, or logger.

In addition to all these features, users can also watch out for upcoming upgrades such as the next-generation DAQ, connection events, and search engine acceleration, among others. To know more about the release of Snort 3, head over to Snort's official page.

OpenFaaS releases full support for stateless microservices in OpenFaaS 0.9.0
Mastodon 2.5 released with UI, administration, and deployment changes
GNOME 3.30 released with improved Desktop performance, Screen Sharing, and more
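The "simple, scriptable configuration" mentioned above is plain Lua evaluated by LuaJIT. Below is a minimal, hypothetical snort.lua fragment to illustrate the style; the paths are placeholders, and the exact module and option names should be checked against the Snort 3 reference documentation rather than taken from this sketch.

```lua
-- Hypothetical sketch of a Snort 3 configuration: Lua tables replace
-- the classic snort.conf directives. Paths below are placeholders.
HOME_NET = '192.168.1.0/24'
EXTERNAL_NET = '!' .. HOME_NET

ips =
{
    -- load detection rules from a file (placeholder path)
    rules = [[ include /usr/local/etc/snort/rules/local.rules ]],
    enable_builtin_rules = true,
}

-- JSON event logging, e.g. for shipping alerts to the Elastic Stack
alert_json =
{
    file = true,
    fields = 'timestamp src_addr dst_addr msg',
}
```

Because the configuration is executable Lua, variables, string concatenation, and conditionals work as in any Lua script.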


YouTube’s CBO speaks out against Article 13 of EU’s controversial copyright law

Natasha Mathur
07 Sep 2018
3 min read
Robert Kyncl, YouTube's Chief Business Officer, opened up on YouTube's Creator Blog on Tuesday about "Article 13" of the EU copyright proposal, which is up for a vote in the European Parliament on September 12.

According to Article 13, there is an "obligation on information society service providers storing and giving access to large amounts of works and other subject-matter uploaded by their users to take appropriate and proportionate measures to ensure the functioning of agreements concluded with right holders and to prevent the availability on their services of content identified by rightholders in cooperation with the service providers". In a nutshell, any user-generated content on these platforms that a copyright enforcement algorithm considers copyrighted work would need to be censored by the platforms.

This is a revamped version that the EU has come out with after the older version was rejected by the Parliament back in July. The older version also received heavy criticism from policy experts and digital rights groups on the grounds that it violated the fundamental rights of internet users.

"Article 13 could potentially undermine this creative economy, discouraging or even prohibiting platforms from hosting user-generated content. This outcome would not only stifle your creative freedom, it could have severe, negative consequences for the fans, the communities and the revenue you have all worked so hard to create," mentioned Kyncl.

Kyncl also pointed out how creators and artists on these platforms have built businesses "on the back" of this "openness". YouTube has a strong set of copyright management tools, such as Content ID and the Copyright Match Tool, which are efficient at managing re-uploads of creators' content. "Copyright holders have control over their content: they can use our tools to block or remove their works, or they can keep them on YouTube and earn advertising revenue. In over 90% of cases, they choose to leave the content up. Enabling this new form of creativity and engagement with fans can lead to mass global promotion and even more revenue for the artist," reads the YouTube blog post.

A good example given by Kyncl is that of the pop singer Dua Lipa, whose singing career started with covers of other artists' songs. Likewise, Alan Walker's worldwide famous track "Fade" was heavily used by other users in the YouTube community, as well as in video games, resulting in a massive fanbase for him.

YouTube is not the only one disapproving of the new proposal. Organizations such as European Digital Rights, the Internet Archive, Patreon, WordPress, and Medium have all voiced their disapproval of the EU copyright policy. "This is the new creative economy in action. The Copyright Directive won't just affect creators and artists on YouTube. It will also apply to many forms of user-generated content across the Internet," writes Kyncl.

For more information, check out the official YouTube blog post.

YouTube has a $25 million plan to counter fake news and misinformation
Mozilla, Internet Society, and Web Foundation want G20 to address "techlash" fuelled by security and privacy concerns
Facebook COO Sandberg's Senate testimony: on combating foreign influence, fake news, and upholding election integrity


North Korean hacker charged for WannaCry ransomware and for infiltrating Sony Pictures Entertainment

Melisha Dsouza
07 Sep 2018
2 min read
The US Justice Department has charged a North Korean hacker, Park Jin Hyok, over the devastating cyberattacks that hacked Sony Pictures Entertainment and unleashed the WannaCry ransomware in 2017. The US alleges that Park worked as a computer programmer for Chosun Expo Joint Venture, a wing of the North Korean military. Park is charged with extortion, wire fraud, and various hacking crimes that could potentially carry a prison term of up to 25 years.

The criminal complaint against Park was filed in Los Angeles federal court in June and unsealed this Thursday. It alleges that Park and the Joint Venture sought to "conduct multiple destructive cyber attacks around the world" in support of the North Korean government.

Timeline of cybercrimes committed by Park

In 2017, the WannaCry ransomware attack affected more than 230,000 computers and caused hundreds of millions of dollars in damages around the world. One of the main targets affected was the UK's National Health Service, which was forced to cancel thousands of appointments after its systems were infected. The Justice Department asserts that the North Korean hacking team both developed the ransomware and propagated the attacks.

Park is also charged in connection with an $81 million (£62 million) theft from a bank in Bangladesh in 2016. He is further accused of aiding the 2014 hack of Sony Pictures Entertainment, in which data was destroyed and internal documents were made publicly available online for anyone to download. The attack came shortly after Sony produced "The Interview", a comedy film mocking North Korean leader Kim Jong-un through a plot about an attempted assassination of a character made to look like him.

According to the Justice Department, Park is also charged over "numerous other attacks or intrusions on the entertainment, financial services, defence, technology, and virtual currency industries, academia, and electric utilities".

The charges were filed four days before President Donald Trump's meeting with North Korea's leader, Kim Jong-un, to discuss ending hostility between the two countries. Prosecutors said the complaint was kept sealed to avoid derailing their meeting in Singapore.

Head over to CNET for more insights on this news.

Microsoft claims it halted Russian spearphishing cyberattacks
Bloomberg says Google, Mastercard covertly track customers' offline retail habits via a secret million dollar ad deal
New cybersecurity threats posed by artificial intelligence

Winbox vulnerability in MikroTik routers forwarding traffic to attackers, say researchers at Netlab 360

Savia Lobo
07 Sep 2018
3 min read
Research by China's Netlab 360 revealed that thousands of routers manufactured by the Latvian company MikroTik have been compromised by malware exploiting a vulnerability in Winbox, the Windows GUI application used to administer the routers. The vulnerability allows an attacker to gain access to an unsecured router.

The Winbox vulnerability was revealed in April this year, and MikroTik posted a software update for it. However, researchers found that more than 370,000 MikroTik devices they identified on the Internet were still vulnerable. According to a report by Netlab 360's Genshen Ye, "More than 7,500 of them are actively being spied on by attackers, who are actively forwarding full captures of their network traffic to a number of remote servers. Additionally, 239,000 of the devices have been turned into SOCKS 4 proxies accessible from a single, small Internet address block."

Prior to the MikroTik attack, WikiLeaks had revealed a vulnerability from the CIA's "Vault7" toolkit. According to WikiLeaks, the CIA Vault7 hacking tool Chimay Red involves two exploits, including the Winbox Any Directory File Read (CVE-2018-14847) and a Webfig remote code execution vulnerability.

Attacks discovered on the MikroTik routers

Previously, researchers at Trustwave had also discovered two malware campaigns against MikroTik routers based on an exploit reverse-engineered from a tool in the Vault7 leak.

#1 Attack targeting routers with CoinHive malware

The first attack targeted routers in Brazil with CoinHive malware. The attack injected the CoinHive JavaScript into an error page presented by the routers' web proxy server, and then redirected all web requests from the network to that error page. However, in the routers affected by this type of malware found by the Netlab 360 team, all external web resources, including those from coinhive.com necessary for web mining, are blocked by the proxy ACLs (access control lists) set by the attackers themselves.
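The injected miner leaves a recognizable fingerprint: a script tag pulling the CoinHive library inside the proxy's error page. As an illustrative sketch (not taken from the Netlab 360 or Trustwave write-ups; the function name and regex are our own assumptions), a check for that fingerprint in fetched HTML could look like this:

```python
import re

# Pattern for a script tag pulling the CoinHive miner library; the campaign
# described above injected such a tag into the routers' proxy error page.
COINHIVE_SCRIPT = re.compile(
    r'<script[^>]+src=["\'][^"\']*coinhive[^"\']*["\']', re.IGNORECASE
)

def looks_coinhive_injected(html: str) -> bool:
    """Heuristically flag HTML that carries an injected CoinHive miner."""
    return bool(COINHIVE_SCRIPT.search(html))

# A page rewritten by a compromised proxy would trip the check:
infected = '<html><script src="https://coinhive.com/lib/coinhive.min.js"></script></html>'
print(looks_coinhive_injected(infected))                  # True
print(looks_coinhive_injected("<html><p>ok</p></html>"))  # False
```

Running pages fetched through a suspect router against such a check is a quick, if crude, way to spot this class of injection; it obviously misses miners hosted elsewhere or obfuscated scripts.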
#2 Attack that turns affected routers into a malicious proxy network

The other attack, discovered by the Netlab 360 team, turns affected routers into a malicious proxy network using the SOCKS4 protocol over a very non-standard TCP port (4153). Ye noted, "Very interestingly, the Socks4 proxy config only allows access from one single net-block, 95.154.216.128/25." Most of the traffic is said to be going to 95.154.216.167, an address associated with a hosting service in the United Kingdom. The attack also adds a scheduled task that reports the router's IP address back to the attacker, helping maintain the persistence of the SOCKS proxy if the router is rebooted.

Eavesdropping on routers

Netlab 360 researchers also discovered more than 7,500 victims whose network traffic is being actively eavesdropped on and streamed to remote servers. This includes FTP- and email-focused traffic, as well as some traffic associated with network management. The majority of the streams, 5,164 of them, were being sent to an address associated with an ISP in Belize. Attackers have leveraged MikroTik's built-in packet-sniffing capabilities for eavesdropping on the network: the sniffer uses the TZSP protocol to send a stream of packets to a remote system running Wireshark or other packet capture tools.

To know more about this news in detail, visit the Netlab 360 blog.

Google's Protect your Election program: Security policies to defend against state-sponsored phishing attacks, and influence campaigns
Homebrew's Github repo got hacked in 30 mins. How can open source projects fight supply chain attacks?
Apache Struts faces code execution flaw risking enterprises to attacks
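The two concrete indicators Netlab 360 reported, the single permitted net-block and the SOCKS4 service on TCP 4153, can be checked mechanically. The sketch below is illustrative only (the function names are our own, and the destination address is just an example): it confirms that the UK hosting address falls inside 95.154.216.128/25 and builds a standard SOCKS4 CONNECT frame of the kind one could use to probe a suspect router.

```python
import ipaddress
import struct

def in_allowed_netblock(ip: str, netblock: str = "95.154.216.128/25") -> bool:
    """Is this address inside the single net-block the attackers' proxy ACL permits?"""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(netblock)

def socks4_connect(dest_ip: str, dest_port: int, user_id: bytes = b"") -> bytes:
    """Build a SOCKS4 CONNECT request: version 4, command 1 (CONNECT),
    16-bit destination port, 32-bit IPv4 address, null-terminated user id."""
    return (
        struct.pack("!BBH", 4, 1, dest_port)
        + ipaddress.IPv4Address(dest_ip).packed
        + user_id
        + b"\x00"
    )

# The UK hosting address receiving most of the proxied traffic sits inside
# the one allowed /25:
print(in_allowed_netblock("95.154.216.167"))   # True

# A 9-byte CONNECT frame for an example destination (192.0.2.1:80):
print(socks4_connect("192.0.2.1", 80).hex())   # 04010050c000020100
```

Sending such a frame to TCP 4153 on a compromised router would, per Ye's observation, only be answered when the probe originates from inside that net-block, which is exactly why the ACL made the proxy network hard to stumble upon.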

Facebook, Twitter open up at Senate Intelligence hearing, committee does ‘homework’ this time

Fatema Patrawala
06 Sep 2018
14 min read
Five months after Facebook founder Mark Zuckerberg appeared before Congress, the US government once again invited top tech executives from Facebook, Twitter, and Google to the fourth and final installment of a series of high-profile hearings on social media's role in US democratic proceedings. Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey faced the Senate Select Intelligence Committee to discuss national security issues and foreign interference through social media platforms in US elections.

Google was notably absent from the proceedings, after the firm failed to send a senior executive "at the right level" to Washington. Google submitted a written testimony ahead of the hearing, which the Senate discarded, and in place of a Google representative the committee left an empty chair.

Opening remarks from Senate Chairman Richard Burr and Vice Chairman Mark Warner

Chairman Richard Burr made his opening remarks welcoming Jack Dorsey, CEO of Twitter, and Sheryl Sandberg, COO of Facebook. He started with some words for the recently passed John McCain, whose place at the hearing was marked with a single white rose on a black cloth. "He will be dearly missed," Burr said.

He opened his speech by discussing social media over the last 18 months. He acknowledged its immense potential for good but highlighted how the recent past has shown how vulnerable social media can be to corruption and misuse. He said the committee takes this issue very seriously and appreciates that Facebook and Twitter have taken responsibility with an appropriate measure of seriousness and, unlike their peer Google, have shown up for the hearing with the "appropriate level of corporate representation". He further added that the purpose of the hearing was to discuss the role social media plays in the execution of foreign influence operations.
The Chairman made the point that it is important to be candid, because that is what the significance of this threat demands. He said, "We need to be precise about the foreign actors we are talking about. We need to be precise about the consequences of not acting and we need to be candid about being responsible for solving this problem and where it lies."

Burr said that "business as usual" for these tech firms is not good enough. "We've identified the problem, now we've got to find a solution," he added, along with a jibe at Google for failing to send the "right senior executive". His sentiments were echoed by Vice Chairman Mark Warner, who took over from Burr and said he was "deeply disappointed" in Google for not taking the issues being discussed seriously enough.

Vice Chairman Warner also put forward some thoughts and open questions to Twitter and Facebook on improving their policies and systems:

- Users should have the right to know whether they are interacting with bots or humans on the platform.
- Isn't there a public interest in ensuring there is more anonymised data to help researchers and academics identify potential problems and misuse?
- Why are your terms of service so difficult to find and nearly impossible to read and understand?
- Ideas like data portability, data immunization, or first-party consent should be adopted.
- After numerous instances of misuse, what kind of accountability should be applied to the flawed advertising model?

Sheryl Sandberg's defence

Facebook COO Sheryl Sandberg smoothly projected the impression that the company is always doing something, whether on combating hate speech, hoaxes, and "inauthentic" content, or on identifying and blocking state-level disinformation campaigns, thereby shifting attention off the deeper question of whether Facebook is doing enough.
Many of her answers courteously informed senators that Facebook would "follow up" with answers and/or with some hazily non-specific "collaborative work" at some undated future time, which is the most professional way to deflect awkward questions.

Sandberg started her opening remarks by thanking the committee for the opportunity to speak at the hearing. Referring to her written testimony, which goes into more detail, here are a few points she reiterated in the session:

- Russia used the platform to interfere in the US elections; Facebook was too slow to spot this and too slow to act, and that is on the company, she said.
- She mentioned collaborative efforts with government and law enforcement committees.
- Facebook is investing in long-term security and has doubled the number of people working on safety and security. It can review security reports in 50 languages, 24 hours a day, and uses better machine learning and AI techniques to be more proactive in finding abuse.
- Its first line of defense is finding and taking down fake accounts and Pages, blocking millions of attempts to create fake accounts.
- Facebook is making progress on fake news and limiting its distribution. Articles disputed by third-party fact-checkers are marked, people who share or are about to share them are warned, and related articles with more facts are shown for a more well-rounded opinion.
- Strong steps have been taken to prevent abuse and increase transparency on the advertising platform. For political and issue ads you can now see who paid for the ads, how much they paid, and the demographics of the advertisers. Advertisers are also required to go through a lengthy authorization process to confirm their identity.

Finally, Sandberg concluded by saying these steps won't stop people who are trying to game the system, but they will make it a lot harder. She emphasized working more collaboratively with the government and law enforcement agencies.
She continued that Facebook is more determined than its opponents and is working together with others in a grey area to meet this challenge.

Jack Dorsey's defence

"We weren't expecting any of this when we created Twitter over 12 years ago. We acknowledge the real-world negative consequences of what happened, and we take full responsibility to fix it."

Here's the opening of Jack Dorsey's prepared statement:

"Thank you for the opportunity to appear before the Committee today so I may speak to you and the American people. Twitter's purpose is to serve the public conversation. We are an American company that serves our global audience by focusing on the people who use our service, and we put them first in every step we take. Twitter is used as a global town square, where people from around the world come together in an open and free exchange of ideas. We must be a trusted and healthy place that supports free and open discussion.

Twitter has publicly committed to improving the collective health, openness, and civility of public conversation on our platform. Twitter's health is measured by how we help encourage more healthy debate, conversations, and critical thinking. Conversely, abuse, malicious automation, and manipulation detract from the health of our platform. We are committed to holding ourselves publicly accountable for progress on our health initiative.

Today, I hope my testimony before the Committee will demonstrate the challenges that we are tackling as a global platform. Twitter is approaching these challenges with a simple question: How do we earn more trust from the people using our service? We know the way to earn more trust is to be as transparent as possible in how we make decisions on our platform.
We want to communicate how our platform works in a clear and straightforward way."

Dorsey went on: "Abuse, harassment, troll armies, propaganda through bots and human coordination, misinformation campaigns, and divisive filter bubbles…that's not a healthy public square. Worse, a relatively small number of bad-faith actors were able to game Twitter to have an outsized impact. We weren't expecting any of this when we created Twitter over 12 years ago. We acknowledge the real-world negative consequences of what happened, and we take full responsibility to fix it. We've seen positive results from our work. We're now removing over 200% more accounts for violating our policies. We're identifying and challenging 8-10 million suspicious accounts every week. And we're thwarting over a half million accounts from logging in to Twitter every day. Today we're committing to the people, and this committee, to do that work, and do it openly. We're here to contribute to a healthy public square, not compete to have the only one."

A few questions to the witnesses from the senators on the committee

Senator James E. Risch on hate speech

"Who sets the security standards or the descriptions of authority of manipulative content, and is there any kind of unanimity amongst them, or are there any debates about hate speech in the team?"

Sandberg said that language that leads to violence is not permitted on the platform, and Twitter CEO Dorsey shared the same view. Risch asked whether there was any way for Facebook to distinguish between US citizens and people from other countries. Sandberg responded that Facebook asks people to declare where they are from. People are allowed to talk about any country, but are not allowed to spread hate or to interfere with or influence elections. Facebook is also looking to dive further into transparency reporting. Twitter, for its part, is focusing on behavioural patterns.
It tracks common patterns of behaviour and uses that information to identify inauthentic content, and it has built deep learning and machine learning systems to recognize these patterns quickly and shut them down before they spread to other areas.

Senator Martin Heinrich on threats to elections

"What is it that you have learned from the past elections since 2016, as the platforms have been used throughout the course of a number of elections around the world? And how has that informed your current posture in terms of how you are gaining transparency into this activity?"

Sandberg said that Facebook is getting smarter at detecting and preventing threats to elections but warned that its opponents are getting smarter as well. Dorsey followed by mentioning how Twitter is working with AI tools to recognise patterns of behaviour that allow people to artificially amplify information.

Senator Susan Collins on why Twitter doesn't notify individuals

"Once you've taken down accounts that are linked to Russia, these imposter accounts, what do you do to notify the followers of those accounts that they have been following or engaged with accounts that originated in Russia and are not what they appear to be?"

"We simply haven't done enough… we do believe transparency is a big part of where we need improvement... We need to meet people where they are... We are going to do our best to make sure that we catch everything via external partnership and other channels. We recognise we need to communicate more directly," said Jack Dorsey. He also added, "We are looking to incentivise people not only based on the number of followers they have but also the way they share content online. By what kind of content they share. We are also looking to expand our transparency report and extend the same to the public."

How can Facebook and Twitter clean their systems?

"We have been investing heavily in identifying bad actors in the system.
Most of our takedowns have been on our own, but we have coordinated with external parties to make this successful," said Sandberg. Dorsey had his own response: "There are a number of short-term risks involved, but the only way we'll grow is by building the platform's health, and we have strengthened our partnership with government agencies and law enforcement partners."

The stock prices of Twitter and Facebook did not seem to be holding up to the questioning and kept dropping as the hearing went on.

Sandberg added, "the most important determinant is what people choose to follow. If you don't want to follow someone, we encourage that. We are going to contribute by investing in technology to figure out a solution to battle deep fake news."

"I encourage both of you to work closely with academia… I hope that you will commit to providing data that goes beyond a 3 year window to researchers who are looking into Russian influence on your platforms," concluded Senator Collins.

Senator Harris on business incentive alignment and policy inconsistencies at Facebook

"What metric are you using to calculate the revenue generated associated with those [inorganic] ads? And what is the dollar amount that is associated with that revenue?... What percentage of content on Facebook is inorganic?... You must know."

Sandberg answered, "Ads don't run with inorganic content on our service. So there is no way to firmly ascertain how much ads are attached to how much organic content, and that's not how we work." Harris further asked, "How can you reconcile an incentive to create and increase your user engagement when the content that generates a lot of engagement is often inflammatory in nature?" Sandberg gave a specific example of Facebook's hate speech moderation failure, a financially incentivized policy and moral failure.
She referenced a ProPublica report from June 2017, which revealed the company had told moderators to delete hate speech targeting white men, but not black children, because only the former fell into a protected class. She said that it was a bad policy and that it had been fixed. Harris questioned whether the policy was changed only after the report; Sandberg uncomfortably responded that she would get back to the committee on the specifics of when and what had happened.

Senator Blunt on liability implications and learning from attempts at improving the platforms this year

"In the interest of transparency and public education…, are you willing to archive suspended accounts...?"

Dorsey opened by saying, "As we think about our singular priority of improving the health of public conversations, we are not going to be able to do long-term work unless we look at the incentives that our product is asking people to act on every day." Dorsey agreed that archiving historical data is a great idea but said further understanding of the legal implications of such an action is needed.

"The business implications, the liability implications of what we're asking you to do are pretty grey... what's the challenge here?" asked Blunt. Tighter coordination helps, Sandberg responded. "We'd like a regular cadence of meetings with our law enforcement partners; we'd love to understand the secular trends that they are aware of in our peer companies, other mediums, or more broadly, that would inform us on how to act faster. We'd appreciate consolidating to a single point of contact instead of bouncing between multiple agencies to do our work," added Dorsey.

Senator Lankford on data of suspended accounts

Both Twitter and Facebook keep records of suspended accounts for later analysis and for referrals by law enforcement bodies. Sandberg was also questioned on the number of fake accounts on Facebook.
Senator Manchin on why Facebook and Twitter don't operate in China

Both Facebook and Twitter do not operate in China because the Chinese government hasn't allowed either platform in the country, Sandberg and Dorsey replied in unison.

Senator Cotton on why WikiLeaks is active on Facebook and Twitter

WikiLeaks and Julian Assange remain active on Facebook and Twitter. Sandberg said that these accounts don't violate any of Facebook's terms. Dorsey supported that viewpoint and clarified that Twitter is open to inviting law enforcement to investigate if needed.

Vice Chairman Mark Warner wraps it up

Warner thanked both Dorsey and Sandberg for their presence and urged them to make their platforms safer for users across the US. He also thanked them for taking down bad actors online and for helping fight fake news. Chairman Richard Burr likewise thanked both individuals for being present and addressing the senators' questions.

To watch the full coverage of the hearing, visit the US Senate Select Intelligence official page.

Google's Senate testimony, "Combating disinformation campaigns requires efforts from across the industry."
Twitter's CEO, Jack Dorsey's Senate Testimony: On Twitter algorithms, platform health, role in elections and more
Facebook, Twitter takes down hundreds of fake accounts with ties to Russia and Iran, suspected to influence the US midterm elections

Google’s Senate testimony, “Combating disinformation campaigns requires efforts from across the industry.”

Fatema Patrawala
05 Sep 2018
5 min read
Ahead of today's congressional hearing on social media companies' efforts to thwart election meddling in advance of November's midterm races, Alphabet Inc.'s Google posted a "testimony". The Senate had invited Alphabet Inc. CEO Larry Page and had also extended the invitation to Google CEO Sundar Pichai. However, neither official attended the hearing, and Google instead planned to send its Chief Legal Officer, Kent Walker, to testify before the panel. The Senate Intelligence Committee rejected Walker as a witness, finding him not placed highly enough in the company to testify at Wednesday's hearing. The panel expected to hear testimony from Twitter Inc. Chief Executive Jack Dorsey and Facebook Inc. Chief Operating Officer Sheryl Sandberg on Wednesday as well.

Kent Walker says in his blog post, "I will be in Washington briefing Members of Congress on our work on this and other issues and answering any questions they have, and will be submitting this testimony."

Here are the key highlights of the testimony:

Verification program: A verification program has been rolled out for anyone who wants to purchase a federal election ad on Google in the U.S. Google will require advertisers to provide government-issued identification and other key information to confirm they are a U.S. citizen or lawful permanent resident, or a U.S.-based organization, as per the law.

In-ad disclosures: To help people better understand who is paying for an election ad, Google has incorporated in-ad disclosures, meaning Google will identify by name the advertisers running election-related campaigns on Search, YouTube, Display and Video 360, and the Google Display Network.
Transparency report: Google launched a "Political advertising on Google" Transparency Report for election ads, which provides data about the entities buying election-related ads on the platforms, how much money is spent across states and congressional districts on such ads, and who the top advertisers are overall. The report also shows the keywords advertisers have spent the most money on for ads of political importance during the current U.S. election cycle, from May 31, 2018 onwards.

Searchable election Ad Library: Finally, Google will offer a searchable election Ad Library within its public Transparency Report, showing things like which ads had the highest views, what the latest election ads running on the platform are, and deep dives into specific advertisers' campaigns. The data shows the overall amount spent and the number of ads run by each election advertiser, and whether the advertiser targeted its ad campaigns geographically or by age or gender. It also shows the approximate amount spent on each individual ad, the approximate impressions each ad generated, and the dates each ad ran on the platform.

In addition to the transparency efforts, Google has implemented a number of initiatives to improve the cybersecurity posture of candidates, campaigns, and the election infrastructure. In October 2017, it unveiled the Advanced Protection Program, which it claims provides the strongest account protection that Google offers. Second, in May 2018, Google's Jigsaw project, dedicated to building technology to address significant security challenges, announced the availability of Project Shield to U.S. political organizations (e.g., candidates, campaigns, political action committees) registered with the appropriate electoral authorities. Project Shield is a free service that uses Google technology to prevent distributed denial of service (DDoS) attacks that block access to content.
Lastly, Google continues to issue warnings to users when it suspects state-sponsored efforts to hijack their accounts. But it also acknowledges that combating disinformation campaigns is next to impossible for any single company to shoulder alone.

"We have deployed our most advanced technologies to increase security and fight manipulation, but we realize that no system is going to be 100% perfect. Our algorithms are designed to identify content that many people find relevant and useful. We are constantly looking to find signals that help us identify deceptive content, while promoting content that is authoritative, relevant, and current. We have made substantial progress in preventing and detecting abuse, and are seeing continued success in stopping bad actors attempting to game our systems. And as threats evolve, we will continue to adapt in order to understand and prevent new attempts to misuse our platforms. We certainly can't do this important work alone. Combating disinformation campaigns requires efforts from across the industry. We'll continue to work with other companies to better protect the collective digital ecosystem, and, even as we take our own steps, we are open to working with governments on legislation that promotes electoral transparency."

Walker concluded, "While the nature of our services and the way we run our advertising operations appears to have limited the amount of state-sponsored interference on our platforms, no system is perfect—and we are committed to taking continuing action to address the issue."

Facebook COO, Sandberg's Senate testimony: On combating foreign influence, fake news, and upholding election integrity
Twitter's CEO, Jack Dorsey's Senate Testimony: On Twitter algorithms, platform health, role in elections and more

Facebook COO, Sandberg’s Senate testimony: On combating foreign influence, fake news, and upholding election integrity

Savia Lobo
05 Sep 2018
8 min read
Facebook COO Sheryl Sandberg put forward Facebook's testimony in the US Senate select committee hearing on Wednesday, September 5, 2018. Twitter and Google have testimonies of their own to offer in the hearing.

Facebook has had a tumultuous couple of years centered around the misuse of its platform and the abuse of its users' data and privacy by advertisers, political entities, and foreign bad actors. The Cambridge Analytica scandal is just one example; another is Russians using Facebook to meddle with the 2016 US Presidential elections.

Sheryl Sandberg started her testimony with an apology: "We were too slow to spot this and too slow to act. That's on us. This interference was completely unacceptable. It violated the values of our company and of the country we love."

She also highlighted the efforts taken by Facebook to keep its community safe and its services secure, which include:

- Using artificial intelligence to help find bad content and locate bad actors.
- Shutting down fake accounts and reducing the spread of false news.
- Setting up new ad transparency policies, ad content restrictions, and documentation requirements for political ad buyers.
- Better anticipating risks and working closely with law enforcement and industry peers to share information and make progress together.
- Removing hundreds of Pages and accounts involved in coordinated inauthentic behavior, meaning they misled others about who they were and what they were doing.

Sandberg further touched upon these highlights in detail and presented ways in which Facebook is looking to combat the issues. She said, "At its best, Facebook plays a positive role in our democratic process—and we know we have a responsibility to protect that process on our service. We're investing for the long term because security is never a finished job. Our adversaries are determined, creative, and well-funded.
But we are even more determined—and we will continue to fight back."

Facebook assesses past Russian attempts to influence elections

Sandberg said that before election day in November 2016, Facebook detected and mitigated several threats from actors with ties to Russia, such as the APT28 activity. It also recorded new behaviour, such as the creation of fake accounts linked to a Facebook Page named DCLeaks, which it later removed.

Read more: DCLeaks and Guccifer 2.0: How hackers used social engineering to manipulate the 2016 U.S. elections

Post the 2016 elections, Facebook found that the Internet Research Agency (IRA), a Russian entity located in St. Petersburg, Russia, had used coordinated networks of fake Pages and accounts to interfere in the election. Sandberg stated, "Around 470 fake Pages and accounts associated with the IRA spent approximately $100,000 on about 3,500 Facebook and Instagram ads between June 2015 and August 2017. Our analysis showed that these accounts used these ads to promote roughly 120 Facebook Pages that they had set up, which had posted more than 80,000 pieces of content between January 2015 and August 2017. We shut down the accounts and Pages we identified at the time that were still active. The Instagram accounts we deleted had posted about 120,000 pieces of content."

In April of this year, Facebook took down more than 270 additional Pages and accounts controlled by the IRA, and it continues to monitor its service for abuse and to share information with law enforcement and others in the industry about these threats.

Facebook combats foreign election interference and advances election integrity

Facebook has more than doubled the number of people working on safety and security and now has over 20,000. They review reports in over 50 languages, 24 hours a day. The use of better machine learning technology and artificial intelligence has also enabled highly proactive identification of abuse.
Sheryl mentioned that Facebook focuses on removing fake accounts. She added, "One of the main ways we identify and stop foreign actors is by proactively detecting and removing fake accounts, since they're the source of much of the interference we see." Some important measures Facebook is taking are:

- Using both automated and manual review to detect and deactivate fake accounts. These systems analyze distinctive account characteristics and prioritize signals that are more difficult for bad actors to disguise.
- Blocking millions of attempts to register fake accounts every day.
- Globally disabling 1.27 billion fake accounts from October 2017 to March 2018.
- Using technologies like machine learning, artificial intelligence, and computer vision to proactively detect more bad actors and take action more quickly.

Read more: Four 2018 Facebook patents to battle fake news and improve news feed

Tackling False News. Facebook has partnered with third-party fact-checking organizations to limit the spread of articles they rate as false, and it further disrupts the economic incentives for traffickers of misinformation. It has also invested in news literacy programs and works to inform people by providing more context on the stories they see.

Increasing Ad Transparency. Facebook has taken strong steps to prevent abuse and increase transparency in advertising. All political and issue ads on Facebook and Instagram in the U.S. are clearly labeled with a "Paid for by" disclosure at the top of the ad so people can see who is paying for them. This is especially important when the Page name doesn't match the name of the company or person funding the ad.

Enforcing Compliance with Federal Law. Facebook's compliance team maintains a Political Activities and Lobbying Policy that is available to all employees.
This Policy is covered in Facebook's Code of Conduct training for all employees and includes guidelines to ensure compliance with the Federal Election Campaign Act.

Suspicious Activity Reporting. Facebook has designed processes to identify inauthentic and suspicious activity. It also maintains a sanctions compliance program to screen advertisers, partners, vendors, and others using its payment products. Its payments subsidiaries file Suspicious Activity Reports on developers of certain apps and take other steps as appropriate, including denying such apps access to the Facebook platform.

Facebook defending against targeted hacking

Sheryl Sandberg also highlighted how Facebook is strengthening its defenses against a broader set of threats. Some of those defenses include:

- Building AI systems to detect and stop attempts to send malicious content.
- Providing customizable security and privacy features, including two-factor authentication options, and marketing to encourage people to adopt them.
- Sending notifications to individuals who have been targeted by sophisticated attackers, with custom recommendations depending on the threat model.
- Sending proactive notifications to people who have not yet been targeted but may be at risk, based on the behavior of particular malicious actors.
- Deploying AI systems to monitor login patterns and detect the signs of a successful account-takeover campaign.

Facebook working with government entities, industry, and civil society

Sheryl mentioned in her testimony, "We have worked successfully with the DOJ, the FBI, and other law enforcement agencies to address a wide variety of threats to our platform, and we are actively engaged with DHS and the FBI's new Foreign Influence Task Force focused on election integrity." Facebook has also partnered with cybersecurity firms such as FireEye, which informed it about a network of Pages and accounts originating from Iran that engaged in coordinated inauthentic behavior.
Based on that report, Facebook started an investigation and identified and removed additional accounts and Pages from the network. The Facebook security team also regularly conducts internal reviews to monitor for state-sponsored threats that, for security reasons, are not publicly disclosed. It monitors and assesses thousands of account details, such as location information and connections to others on Facebook.

Sheryl also added, "As part of official investigations, government officials sometimes request data about people who use Facebook. We have an easily accessible online portal and processes in place to handle these government requests, and we disclose account records in accordance with our terms of service and applicable law. We also have law enforcement response teams available around the clock to respond to emergency requests."

Facebook has also participated in discussions with governments around the world at key events such as the Munich Security Conference and CyCon, which is organized by the NATO Cooperative Cyber Defence Centre of Excellence.

Sheryl Sandberg concluded her testimony by saying that the Facebook community is learning from what happened and is improving. She said, "When we find bad actors, we will block them. When we find content that violates our policies, we will take it down. And when our attackers use new techniques, we'll share them to improve our collective defense. We are even more determined than our adversaries, and we will continue to fight back."

Here's the link to Sheryl Sandberg's complete testimony to the US Senate Committee.

Facebook's AI algorithm finds 20 Myanmar Military Officials guilty of spreading hate and misinformation, leads to their ban

A new conservative employee group within Facebook to protest Facebook's "intolerant" liberal policies

Facebook Watch is now available worldwide challenging video-streaming rivals, YouTube, Twitch, and more

Fatema Patrawala
03 Sep 2018
6 min read

Researchers find a way to spy on remote screens through the Webcam mic and machine learning

With a little help from machine learning, you might learn what the people on the other end of a Hangouts session are really looking at on their screens. Based on research presented at the CRYPTO 2018 conference in Santa Barbara last week, your webcam's microphone could give away details of what's on your screen, if the person on the other end is listening the right way. All they need to do is process the audio it picks up.

Daniel Genkin of the University of Michigan, Mihir Pattani of the University of Pennsylvania, Roei Schuster of Cornell Tech and Tel Aviv University, and Eran Tromer of Tel Aviv University and Columbia University investigated a potential new avenue of remote surveillance dubbed "Synesthesia": a side-channel attack that can reveal the contents of a remote screen, providing access to potentially sensitive information based solely on "content-dependent acoustic leakage from LCD screens."

Anyone who remembers working with cathode ray tube (CRT) monitors is familiar with the phenomenon of coil whine. Even though LCD screens consume far less power than the old CRTs, they still generate the same sort of noise, though in a totally different frequency range. Because of the way computer screens render a display, sending signals to each pixel of each line with varying intensity levels for each sub-pixel, the power sent to each pixel fluctuates as the monitor goes through its refresh scans. Variations in the intensity of each pixel create fluctuations in the sound produced by the screen's power supply, leaking information about the image being refreshed; that information can be processed with machine learning algorithms to extract details about what's being displayed.
That audio could be captured and recorded in a number of ways, as the researchers demonstrated:

- Over a device's embedded microphone or an attached webcam microphone during a Skype, Google Hangouts, or other streaming audio chat.
- Through recordings from a nearby device, such as a Google Home or Amazon Echo.
- Over a nearby smartphone, or with a parabolic microphone from distances of up to 10 meters.

Even a reasonably cheap microphone can pick up and record the audio from a display, even though it sits at the very edge of human hearing. And it turns out that audio can be exploited with a little bit of machine-learning black magic.

The researchers began by attempting to recognize simple, repetitive patterns. They "created a simple program that displays patterns of alternating horizontal black and white stripes of equal thickness (in pixels), which shall be referred to as Zebras," the researchers recounted in their paper. These "zebras" each had a different period, measured by the distance in pixels between black stripes. As the program ran, the team recorded the sound emitted by a Soyo DYLM2086 monitor. With each different period of stripes, the frequency of the ultrasonic noise shifted in a predictable manner.

The variations in the audio only provide reliable data about the average intensity of a particular line of pixels, so they can't directly reveal the content of a screen. However, by applying supervised machine learning in three different types of attacks, the researchers demonstrated that it was possible to extract a surprising amount of information about what was on the remote screen. After training, a neural-network-generated classifier was able to reliably identify which of the Alexa top 10 websites was being displayed on a screen, based on audio captured over a Google Hangouts call, with 96.5 percent accuracy.
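The link between stripe period and emitted tone can be sketched numerically. The toy below is not the researchers' code: it assumes a hypothetical per-line scan rate of 64 kHz and models the power draw during one refresh as the average intensity of each scanned line, then finds the dominant frequency with a naive DFT. Halving the stripe period doubles the tone, mirroring the predictable frequency shifts the paper reports for its zebras.

```python
import math

def zebra_power_trace(period_lines, total_lines=128):
    """Per-scanline average intensity for a 'zebra' pattern of
    alternating white/black horizontal stripes `period_lines` thick.
    This stands in for the power-supply load during one refresh scan."""
    return [1.0 if (line // period_lines) % 2 == 0 else 0.0
            for line in range(total_lines)]

def dominant_frequency(trace, line_rate_hz):
    """Naive DFT: return the non-DC frequency bin with the most energy."""
    n = len(trace)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(trace))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(trace))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * line_rate_hz / n

# Hypothetical 64 kHz line rate: an 8-line stripe gives a 16-line square
# wave, and a 4-line stripe gives an 8-line one, so the tone doubles.
print(dominant_frequency(zebra_power_trace(8), 64_000))  # 4000.0 Hz
print(dominant_frequency(zebra_power_trace(4), 64_000))  # 8000.0 Hz
```

In a real attack the trace would come from microphone audio rather than a simulated scan, but the principle is the same: the stripe period is readable straight off the spectrum.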
In a second experiment, the researchers were able to reliably capture on-screen keyboard strokes on a display in portrait mode (the typical tablet and smartphone configuration) with 96.4 percent accuracy, for transition times of one and three seconds between key "taps." On a landscape-mode display, the accuracy of the classifiers was much lower, with a first-guess success rate of only 40.8 percent. However, the correct typed word was among the top three choices 71.9 percent of the time in landscape mode, meaning that further human analysis could still result in accurate data capture. (For the portrait-mode classifier, the correct typed word was among the top three choices 99.6 percent of the time.)

In a third experiment, the researchers used supervised machine learning in an attempt to extract text from displayed content based on the audio, a much more fine-grained sort of data than detecting on-screen keyboard taps. In this case, the experiment focused on a test set of 100 English words and used somewhat ideal display settings for this sort of capture: all the letters were capitalized (in the Fixedsys Excelsior typeface, with a character size 175 pixels wide) and black on an otherwise white screen. The results, as the team reported them, were promising:

"The per-character validation set accuracy (containing 10% of our 10,000 trace collection) ranges from 88% to 98%, except for the last character where the accuracy was 75%. Out of 100 recordings of test words, for two of them preprocessing returned an error. For 56 of them, the most probable word on the list was the correct one. For 72 of them, the correct word appeared in the list of top-five most probable words."

While these tests were all done with a single monitor type, the researchers also demonstrated that a "cross-screen" attack was possible: by using a remote connection to display the same image on a remote screen and recording the audio, it was possible to calibrate a baseline for the targeted screen.
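The fingerprinting step behind these accuracy figures can be caricatured in a few lines. This sketch is not the paper's neural network: it assumes each candidate screen content has a stored spectral "profile" (the vectors below are made-up numbers), and it ranks candidates by distance from the observed trace, which is enough to show how both first-guess and top-k metrics arise from one ranked list.

```python
import math

def distance(a, b):
    """Euclidean distance between two spectral feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rank_profiles(observed, profiles):
    """Rank candidate labels from closest to farthest stored profile --
    a simplified stand-in for a trained classifier's ranked output."""
    return sorted(profiles, key=lambda label: distance(observed, profiles[label]))

# Hypothetical 3-bin spectral profiles for three screen contents.
profiles = {
    "site-a": [0.9, 0.1, 0.4],
    "site-b": [0.2, 0.8, 0.5],
    "site-c": [0.5, 0.5, 0.9],
}

# A noisy observation of site-a still ranks site-a first; checking
# membership in ranking[:3] is the analogue of the paper's
# "correct word in the top three choices" metric.
ranking = rank_profiles([0.85, 0.15, 0.38], profiles)
print(ranking[0])  # site-a
```

The paper's classifiers operate on far richer features and learned weights, but the evaluation logic, first guess versus top-k containment, is exactly this shape.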
It's clear that there are limits to the practicality of acoustic side channels as a means of remote surveillance. But as people move to mobile devices such as smartphones and tablets for more computing tasks, with embedded microphones, limited screen sizes, and a more predictable display environment, the potential for these sorts of attacks could rise. And mitigating the risk would require re-engineering of current screen technology. So, while it remains a small risk, it's certainly one that those working with sensitive data will need to keep in mind, especially if they're spending much time in Google Hangouts with that data on-screen.

Google Titan Security key with secure FIDO two factor authentication is now available for purchase

6 artificial intelligence cybersecurity tools you need to know

Defending Democracy Program: How Microsoft is taking steps to curb increasing cybersecurity threats to democracy