
Tech News - Security

470 Articles

Windows zero-day vulnerability exposed on ALPC interface by a vulnerability researcher with ‘no formal degrees’

Savia Lobo
29 Aug 2018
4 min read
On 27th August, a self-proclaimed 'retired vulnerability researcher' who goes by the name 'SandboxEscaper' tweeted about a local privilege escalation exploit for Windows. The previously unknown zero-day vulnerability in the Windows OS could allow a local or malicious user to obtain SYSTEM privileges on the targeted machine.

Will Dormann, an engineer at CERT/CC, confirmed the vulnerability and issued an official CERT/CC alert the same day. He said that the vulnerability is a privilege escalation issue that resides in the Windows Task Scheduler and stems from errors in the handling of Advanced Local Procedure Call (ALPC) systems. The ALPC interface is a Windows-internal mechanism that works as an inter-process communication system: with ALPC, a client process running within the OS can ask a server process running within the same OS to provide some information or perform some action.

Proof-of-concept (PoC) code to exploit the ALPC interface on GitHub

SandboxEscaper released proof-of-concept (PoC) code on GitHub on 7th May, exploiting the ALPC interface to gain SYSTEM access on a Windows system. The PoC is likely to attract malware authors, as it allows malware to gain admin access on targeted systems. At present, there are no known solutions for this vulnerability, which has been assigned a Common Vulnerability Scoring System (CVSS) score of 6.4-6.8. A CVSS score between 4.0 and 6.9 is rated as medium severity under the Qualitative Severity Rating Scale.

SandboxEscaper did not notify Microsoft about the vulnerability, which leaves all Windows 64-bit users prone to attack. However, Microsoft has acknowledged the 0-day flaw, and we can expect it to be resolved in Microsoft's next security updates scheduled for September 11, the company's next 'Patch Tuesday'.

The person behind the Windows zero-day hack: SandboxEscaper

This vulnerability was discovered by a self-educated blogger named 'SandboxEscaper'. Her previous work can be found at https://sandboxescaper.blogspot.com/p/disclosures_8.html. What is intriguing is that the blogger calls herself a 'retired vulnerability researcher' who now blogs on travel. However, she had just started looking for a job in vulnerability research a week before her now famous Windows 0-day hack. In her post about her current job hunt she says, "I have mainly focused on logic bugs so far. So ideally I would prefer a place that is willing to mentor me, and doesn't just expect me to start breaking all the hard targets and sandboxes by myself. I would also prefer an onsite job in the UK (I'm currently a citizen of Belgium and also living there)."

She also mentions that, being transgender, her transition has been really difficult. Dealing with social pressure and anxiety isn't easy, but this vulnerability researcher is causing heads to turn thanks to this discovery. She has definitely got Microsoft's attention now; it would be interesting to see if Microsoft decides to give her a chance at a job interview. On a related note, this story also underscores the existing toxic culture in tech and highlights why it is important for tech companies to push inclusion and diversity as a key CxO performance metric. A person should be judged on merit and capability, not on their personal lifestyle choices or their traits, whether physical, emotional, sexual, political or otherwise.

Further updates to this story

SandboxEscaper's first tweet caused friction in the flaw disclosure process. She followed up with another tweet stating, "Enjoy the 0day. It will get patched really fast. I guess I had fun today. Now I'm gone for a while, bye." Publicly releasing Windows vulnerabilities before Microsoft has issued a patch is quite rare. Microsoft, like many other companies, offers bug bounties, or rewards, for information on software flaws. However, publicly disclosing a flaw disqualifies the researcher from earning a bug bounty. As per Microsoft's rules, detailed proof-of-concept code like the one SandboxEscaper posted must not be disclosed until 30 days after Microsoft issues a patch. Her GitHub video might therefore have violated Microsoft's terms and conditions for bug rewards. Yesterday, SandboxEscaper tweeted, "I screwed up, not MSFT (they are actually a cool company)." SandboxEscaper has received an overwhelmingly positive response and compliments for her vulnerability discovery from various tech geeks, including the cybersecurity training company Hacker House.

Read more about this 0-day exploit's technical details in Kevin Beaumont's Medium post.

Note: Updated on 30th Aug to include the section 'Further updates to this story'.

Epic Games CEO calls Google "irresponsible" for disclosing the security flaw in Fortnite Android Installer before the patch was ready
Sugar operating system: A new OS to enhance GPU acceleration security in web apps
Meet 'Foreshadow': The L1 Terminal Fault in Intel's chips
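For readers unfamiliar with how a raw CVSS number maps to the qualitative ratings mentioned above, here is a minimal sketch of the CVSS v3.0 Qualitative Severity Rating Scale bands; the helper function is illustrative and not part of any official CVSS tooling.

```python
# Minimal sketch of the CVSS v3.0 Qualitative Severity Rating Scale bands
# referenced above. The helper is illustrative, not an official CVSS tool.

def cvss_severity(score: float) -> str:
    """Map a CVSS base score (0.0-10.0) to its qualitative rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"   # the ALPC flaw's 6.4-6.8 range lands here
    if score <= 8.9:
        return "High"
    return "Critical"

if __name__ == "__main__":
    for s in (6.4, 6.8):
        print(s, cvss_severity(s))  # both print "Medium"
```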

Lerna relicenses to ban major tech giants like Amazon, Microsoft, Palantir from using its software as a protest against ICE

Natasha Mathur
29 Aug 2018
3 min read
The Lerna team has taken a strong stand against the U.S. Immigration and Customs Enforcement (ICE) by modifying its MIT license to ban companies that have collaborated with ICE from using Lerna. Lerna is a tool for managing large-scale JavaScript projects with multiple packages; it lets you add dependencies to multiple packages with a single command. It made monorepos available to everyone, whereas earlier they were very expensive and used only by big companies.

In a comment on GitHub earlier that day, Lerna developer Jamie Kyle stated that he has been deeply disturbed by ICE's behavior toward immigrants in America, especially the way ICE has acted with immigrant children, and wants it to stop. "The actions of ICE have had a lifelong lasting impact on these children, and many of them won't even remember it happening. I have trouble expressing how angry this makes me feel. And the worst part is that I feel helpless to improve the situation. There is one thing I have control over, and that's open source", reads the post. Kyle states that major tech giants such as Facebook, Uber, Google, Amazon, etc., carry out "a lot of shady things behind the scenes. These companies care only about the millions of dollars that ICE is paying them and are willing to ignore all the horrible things that ICE does." Now these companies are also using Lerna, and "it's really hard for me to sit back and ignore what these companies are doing with my code", says Kyle.

Reinforcing Lerna's ethical beliefs, the updated Lerna license bans companies that are known collaborators with US Immigration and Customs Enforcement, such as Microsoft, Palantir, and Amazon, among others, from using Lerna. These companies don't have any licensing rights, and "any use of Lerna will be considered theft". They cannot pay for a license; if they wish to use Lerna, they need to publicly end their contracts with ICE. For everyone else, Lerna will remain MIT licensed.

Public opinion about Lerna's decision against ICE is varied:
https://twitter.com/AdrienDittrick/status/1034716993323184128
https://twitter.com/sarah_federman/status/1034633564065656832
https://twitter.com/_juandjara/status/1034716644667473921
https://twitter.com/stefanpenner/status/1034687675066970112

"Now, it's not news to me that people can use open source for evil. But it's really hard for me to sit back and ignore what these companies are doing with my code. It doesn't feel like there are enough steps in between me and the horrible things ICE is doing", says Kyle.

For more information, check out the official GitHub post.

Facebook's AI algorithm finds 20 Myanmar Military Officials guilty of spreading hate and misinformation, leads to their ban
Intel faces backlash on Microcode Patches after it prohibited Benchmarking or Comparison
Homebrew's Github repo got hacked in 30 mins. How can open source projects fight supply chain attacks?

Facebook’s AI algorithm finds 20 Myanmar Military Officials guilty of spreading hate and misinformation, leads to their ban

Sugandha Lahoti
28 Aug 2018
2 min read
Facebook has banned 20 military officials from Myanmar for spreading hate and misinformation about the ethnic violence in Myanmar. They have also removed a total of 18 Facebook accounts, one Instagram account, and 52 Facebook Pages. This action was a result of a report conducted by the UN Human Rights Council-authorized Fact-Finding Mission on Myanmar. They found evidence of many organizations and individuals committing or assisting in serious human rights abuses in the country. Following this, Facebook banned these individuals to prevent further inflammation of ethnic and religious tensions.

The 20 military officials and organizations removed include Senior General Min Aung Hlaing, commander-in-chief of the armed forces, and the military's Myawady television network. They have removed six pages and six accounts from Facebook and one account from Instagram connected to these individuals and organizations. The rest don't have a Facebook or Instagram presence but are banned nevertheless. Facebook has also removed 46 Pages and 12 accounts for engaging in coordinated inauthentic behavior. These pages used independent news and opinion pages to secretly push the messages of the Myanmar military.

Earlier this year, Facebook created a dedicated team across product, engineering, and policy to work on issues specific to Myanmar. They use sophisticated artificial intelligence to proactively flag posts that break Facebook policies. In the second quarter of 2018, these algorithms identified about 52% of the content that Facebook removed for hate speech in Myanmar. They also updated their credible violence policies to deal with misinformation that may contribute to imminent violence or physical harm. They are also improving Facebook reporting tools and introducing new tools on the Messenger mobile app for people to report conversations that violate Community Standards.

Read the entire report on this decision on the Facebook newsroom.

Facebook takes down hundreds of fake accounts with ties to Russia and Iran
Facebook bans another quiz app and suspends 400 more due to concerns of data misuse
Facebook is reportedly rating users on how trustworthy they are at flagging fake news

VSAP Tally 1.0, a new open source vote-counting system by LA County gets final state approval

Natasha Mathur
27 Aug 2018
3 min read
Election officials in Los Angeles County received final state approval, last Tuesday, for a new system of counting ballots named "Voting Solutions for All People (VSAP) Tally 1.0". The VSAP Tally 1.0 system was created to make the upcoming elections more secure. The new tally system is an open-source platform that runs on technology owned by the county instead of a private vendor, and it is the first publicly-owned, open-source election tally system certified under the California voting system standards.

The certification process for VSAP Tally 1.0 involved rigorous functional and security testing conducted by the Secretary of State's staff as well as a certified voting system test lab. The testing ensured that the new system complies with the California Voting System Standards (CVSS).

According to Secretary of State Alex Padilla, "With security on the minds of elections officials and the public, open-source technology has the potential to further modernize election administration, security, and transparency -- the newly designed VBM is the first step in implementing a new voting experience for LA County voters that is more accessible and convenient." John Sebes, chief technology officer of the Open Source Election Technology Institute, points out that "their intention is to make it freely available to other organizations, which it is not as of now. It's open source in the sense that it was paid for by public funds and the intent is to share it."

The certification of the VSAP Tally 1.0 solution enables Los Angeles County to move forward with its newly redesigned VSAP Vote by Mail (VBM) ballots for the November 6, 2018, General Election. "This is a significant milestone in our efforts to implement a new voting experience for the voters of Los Angeles County. The VSAP Tally System ensures that the new Vote by Mail ballots cast in the upcoming November election will be counted accurately and securely", says Dean C. Logan, County Clerk.

No information on how they plan to verify these votes has been disclosed yet. Also, even though VSAP Tally 1.0 is an open source system, no code has been made available on GitHub so far.

For more information, be sure to check out the official press release.

Facebook, Twitter takes down hundreds of fake accounts with ties to Russia and Iran, suspected to influence the US midterm elections
Jack Dorsey to testify explaining Twitter algorithms before the House Energy and Commerce Committee
DCLeaks and Guccifer 2.0: How hackers used social engineering to manipulate the 2016 U.S. elections

Google’s Protect your Election program: Security policies to defend against state-sponsored phishing attacks, and influence campaigns

Savia Lobo
27 Aug 2018
4 min read
With more and more attacks happening via email, and hackers intruding into presidential elections and influencing ongoing campaigns, Google has recently shared its ongoing work to provide protection against:

- state-sponsored phishing attacks,
- a recently-reported influence campaign from Iran (via technical attribution), and
- malicious activity on Google properties (via detection and termination).

Due to the advanced techniques used by hackers, users are often tricked by an email camouflaged as a legitimate one. As a countermeasure, Google says it has invested in robust systems:

- to detect any phishing or hacking attempts on a user's email network,
- to identify influence operations launched by foreign governments, and
- to protect political campaigns from digital attacks via Google's Protect Your Election program.

Google's Threat Analysis Group is working with its partners at Jigsaw and Google's Trust & Safety team to identify bad actors and disable their accounts. The group will further warn users about these bad actors, and also share intelligence with other companies and law enforcement officials.

State-sponsored phishing attacks

Email phishing is the most common and most popular attack. Google has improved its security policies for Gmail users with automated protections, account security (like security keys), specialized warnings, and so on. Through these efforts, Google plans to significantly decrease the volume of phishing emails that get through to its users. On 20th August 2018, Google issued a series of notifications to Gmail users who were subject to suspicious emails from a wide range of countries. It described the different warnings about government-backed phishing in a blog post and asked users to take immediate action if they came across the attacks or pop-ups mentioned.

FireEye detected suspicious Google accounts linked to Iran

Google has also partnered with the cybersecurity group FireEye, and other top security consultants, to provide them with intelligence. FireEye's recent help to Facebook in detecting suspicious accounts with links to Russia and Iran is worth mentioning. For the last two months, Google and Jigsaw have worked closely with FireEye on the influence operation linked to Iran that FireEye identified last week. FireEye identified some suspicious Google accounts (three email accounts, three YouTube channels, and three Google+ accounts), which were swiftly disabled.

Google Security team suspects the malicious actors are linked to IRIB

In addition to FireEye's intelligence report, Google's team has investigated a broader range of suspicious actors linked to Iran who have engaged in setting up the malicious accounts. Following this, Google has informed U.S. lawmakers and law enforcement agencies about the results of its investigation, including its relation to political content in the United States. Google's technical research team further found evidence that these actors are associated with the IRIB, the Islamic Republic of Iran Broadcasting. Their observations are as follows:

- Technical data associated with these actors is strongly linked to the official IRIB IP address space.
- Domain ownership information about these actors is strongly linked to IRIB account information.
- Account metadata and subscriber information associated with these actors is strongly linked to the corresponding information associated with the IRIB, indicating common ownership and control.

Detecting and terminating activity on Google properties

Google swiftly removes from its services all content from these malicious actors that violates its policies, and terminates the actors' accounts. It also uses several robust methods, including IP blocking, to prevent individuals or entities in Iran from opening advertising accounts. Google identified and terminated a number of accounts linked to the IRIB organization that disguised their connection to this effort, including while sharing English-language political content in the U.S. These include:

- 39 YouTube channels that had 13,466 total US views on relevant videos
- 6 blogs on Blogger
- 13 Google+ accounts

The state-sponsored phishing attacks and the actors associated with the IRIB are not the only state-sponsored actors at work on the Internet. Google had also disclosed information about actors linked to the Internet Research Agency (IRA) in 2017. It detected and removed 42 YouTube channels, which had 58 English-language political videos (these videos had a total of fewer than 1,800 U.S. views).

Read more about Google's plan to protect users against phishing attacks on their Safety & Security blog.

DC Airport nabs the first imposter using its newly deployed facial recognition security system
Intel faces backlash on Microcode Patches after it prohibited Benchmarking or Comparison
Mozilla, Internet Society, and web foundation wants G20 to address "tech-lash" fuelled by security and privacy concerns

Jack Dorsey to testify explaining Twitter algorithms before the House Energy and Commerce Committee

Sugandha Lahoti
27 Aug 2018
2 min read
The House Energy and Commerce Committee announced that Twitter CEO Jack Dorsey will testify before the committee regarding Twitter algorithms and content monitoring. The hearing will take place on the afternoon of Wednesday, September 5, 2018.

https://twitter.com/HouseCommerce/status/1033099291185827841

A few days back, Jack Dorsey announced plans to rethink how Twitter works to combat fake news and data scandals. Last month, Twitter deleted 70 million fake accounts in an attempt to curb fake news and improve Twitter algorithms. It has been constantly suspending fake accounts which are inauthentic, spammy or created via malicious automated bots. Earlier this month, Apple, Facebook, and Spotify took action against Alex Jones. Initially, Twitter allowed Jones to continue using its service, but later imposed a seven-day "timeout" on Jones after he encouraged his followers to get their "battle rifles" ready against critics in the "mainstream media".

"Twitter is an incredibly powerful platform that can change the national conversation in the time it takes a tweet to go viral," said House Energy and Commerce Committee Chairman Greg Walden in a statement. "When decisions about data and content are made using opaque processes, the American people are right to raise concerns." The hearing will focus on Twitter's algorithms and ask tough questions about how Twitter monitors and polices content. The committee expects Twitter to explain its content judgment calls and be transparent about the complex processes behind the social network's algorithms.

On Friday, U.S. President Donald Trump accused social media companies of silencing "millions of people" in an act of censorship, but without offering evidence to support the claim.

https://twitter.com/realDonaldTrump/status/1032954224529817600

House Majority Leader Kevin McCarthy commented on the hearing, saying, "We all agree that transparency is the only way to fully restore Americans' trust in these important public platforms."

https://twitter.com/GOPLeader/status/1033118278728777729

Following Twitter, representatives from Google and Facebook are also scheduled to appear at next month's hearing.

Twitter takes down hundreds of fake accounts with ties to Russia and Iran.
Twitter's disdain for third-party clients gets real.
Time for Facebook, Twitter, and other social media to take responsibility or face regulation.

DC Airport nabs first imposter using its newly deployed facial recognition security system

Melisha Dsouza
27 Aug 2018
3 min read
The initial apprehension about facial recognition technology is beginning to give way to acceptance, and the incident at the D.C. airport stands witness to this fact. Just three days after the technology was implemented at Washington Dulles International Airport, the system identified an imposter attempting to make his way into the US using a fake passport.

On August 23, the US Customs and Border Protection (CBP) released news of a 26-year-old male traveling from Sao Paulo, Brazil, who presented a French passport to the CBP officer during the primary inspection phase. The facial comparison biometric system confirmed that his face did not match the picture in the passport. He was then sent to secondary inspection for a thorough examination. He appeared nervous during the checks, and doubts were confirmed when a search revealed the man's authentic Republic of Congo identification card concealed in his shoe.

NEC has collaborated with a total of 14 airports across the US to use facial recognition technology to screen out people arriving in the US with false documents. This has reduced the average wait time for arriving international passengers by around four minutes. According to International Trade Administration figures that Quartz quoted back in February 2017, about 104,525 people arrive from overseas into the US every day (that number excludes people entering from Mexico and Canada). Scanning such a large number of travelers each day is a daunting task for the CBP, and facial recognition technology will definitely reduce the complexity that comes with traveler identification.

A gist of how the biometric system works

The CBP first constructs a photo gallery of all the travelers on US-bound international aircraft using flight manifests and travelers' documents (mainly passports and visas). When they touch down in America, TSA officers guide travelers to a camera next to a document-checking podium. This camera snaps a picture and compares it to the one on their travel documents to determine if they are indeed who they claim to be. The CBP asserts that the system will not only help in nabbing terrorists and criminals before they can enter the US, but also speed up airport checks, and eventually allow travelers to get through security processes without a boarding pass.

CBP is clearly trying its best to use technology to make its operations more efficient and to detect security breaches at a scale never seen before. It remains to be seen whether the benefits of using facial recognition, such as protecting the American people from external threats, outweigh the dangers of over-reliance on this tech, such as wrongly tagging people or infringing on individual freedom.

You can gain more insights into this story on techspot.com.

Google's new facial recognition patent uses your social network to identify you!
Admiring the many faces of Facial Recognition with Deep Learning
Amazon is selling facial recognition technology to police
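CBP has not published the details of its matching pipeline, but the gallery-versus-live-capture comparison described above follows a standard face-verification pattern: turn each photo into an embedding vector and compare the two against a threshold. The sketch below is a generic, hypothetical illustration of that idea (the embeddings are random stand-ins), not CBP's or NEC's actual system.

```python
# Generic face-verification sketch, assuming some model that turns a photo
# into a fixed-length embedding vector. Illustrates the gallery-vs-live-capture
# comparison described above; it is not CBP's or NEC's system.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_traveler(live_embedding: np.ndarray,
                    document_embedding: np.ndarray,
                    threshold: float = 0.75) -> bool:
    """Return True if the live capture plausibly matches the travel document."""
    return cosine_similarity(live_embedding, document_embedding) >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    passport_photo = rng.normal(size=128)            # embedding from the manifest gallery
    genuine_capture = passport_photo + rng.normal(scale=0.1, size=128)
    imposter_capture = rng.normal(size=128)          # a different face entirely
    print(verify_traveler(genuine_capture, passport_photo))   # True
    print(verify_traveler(imposter_capture, passport_photo))  # False -> secondary inspection
```

In a real deployment the embeddings would come from a trained face-recognition model and the threshold would be tuned to balance false matches against false rejections.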

Facebook, Twitter takes down hundreds of fake accounts with ties to Russia and Iran, suspected to influence the US midterm elections

Melisha Dsouza
24 Aug 2018
4 min read
"Authenticity matters and people need to be able to trust the connections they make on Facebook." -Mark Zuckerberg After Facebook announced last month that it had identified suspicious accounts that were engaged in "coordinated inauthentic behavior," it successfully took down 652 fake accounts and pages that published political content. Facebook had then declined to specify which country or countries may have been leading the campaign, but officials said the campaign was consistent with previous Russian attacks. These pages were suspected to have been intended to influence the US midterm elections set to take place in November this year. The campaigns were first discovered by FireEye, a cybersecurity firm that worked with Facebook on investigating the fake pages and accounts. Earlier this week, Facebook confirmed in a blog post that these campaigns had links to Russia and Iran. The existence of the fake accounts was first reported by The New York Times. Taking down Inauthentic Behaviour The conspiracy started unravelling in July,  when FireEye tipped Facebook off to the existence of a network of pages known as “Liberty Front Press”. The network included 70 accounts, three Facebook groups, and 76 Instagram accounts, which had 155,000 Facebook followers and 48,000 Instagram followers. The network had undisclosed links to Iranian state media, Facebook said, and spent more than $6,000 between 2015 and today. The network also hosted three events. On investigating those pages, it was found that they linked them back to Iranian state media using website registration information and internet protocol addresses. Pages created in 2013, posted political content that was focused on the Middle East, Latin America, Britain and the United States. Other fake pages also had a far more international spread than the earlier batches uncovered. They carried a number of pro-Iranian themes. The aim of the pages also included promoting Palestinians. Some included anti-Trump language and were tied to relations between the United States and Iran, including references to the Iranian nuclear weapons deal. Newer accounts, created in 2016 targeted cybersecurity by spreading malware and stealing passwords. The accounts that originated in Russia focused on activity in Ukraine and Syria. They did not appear to target the United States. But the aim of the latest campaigns can be summed up to be on similar lines as to those of past operations on the social network. Mainly to distribute fake news that might cause confusion among people, as well as to alter people’s thinking to become more biased or pro-government on various issues. Mark Zuckerberg, Facebook’s chief executive, officially made a statement in a conference call late Tuesday saying, “We believe these pages, groups, and accounts were part of two sets of campaigns, One from Iran, with ties to state-owned media. The other came from a set of people the U.S. government and others have linked to Russia.” Closely following suit, Twitter also went ahead and suspended 284 accounts for engaging in coordinated manipulation. Their analysis supports the theory that many of these accounts originated from Iran. Another social media giant, YouTube, deleted a channel called ‘Liberty Front Press’, which was a website linked to some of the fake Iranian accounts on Facebook. This was done because the account violated its community guidelines. 
Facebook has come under heavy audit for how its policies are exploited by third parties for fake news, propaganda, and other malicious activity especially after the debacle of the coordinated election interference from Russia’s IRA before, during, and after the 2016 US election. The criticism has only aggravated as the US heads toward the midterms. Facebook has been making an effort to prepare its products and moderation strategy for any manipulation. Now Facebook has taken a step further and is working with researchers to study social media-based election interference. The social media giant hopes to understand how this interference functions and to find ways to stop it. Read the the new york times post for further analysis of this evolving situation. Facebook and NYU are working together to make MRI scans 10x faster Four 2018 Facebook patents to battle fake news and improve news feed Facebook is investigating data analytics firm Crimson Hexagon over misuse of data  

Mozilla, Internet Society, and web foundation wants G20 to address “techlash” fuelled by security and privacy concerns

Natasha Mathur
24 Aug 2018
4 min read
The Mozilla organization, the Internet Society, and the Web Foundation have spoken out on their blogs about the current "techlash" that is posing a strong risk to the Internet. They want the G20 to address the issues causing the techlash at the ongoing G20 Digital Economy Ministerial Meeting this week. Techlash, a term originally coined by The Economist last year, refers to a strong response against major tech companies due to concerns over power, user privacy, and security.

As mentioned in their (Mozilla, Internet Society, Web Foundation) blog post, "once thought of as the global equalizer, opening doors for communication, work opportunities, commerce and more – the Internet is now increasingly viewed with skepticism and wariness. We are witnessing a trend where people are feeling let down by the technology they use". The Internet is estimated to contribute US$6.6 trillion a year in the G20 countries by 2020. For developing nations, the digital economy is growing at 15 to 25 percent a year. Yet the Internet seems to be at continuous risk, largely due to data breaches, silence around how data is utilized and monetized, cybercrime, surveillance, and other online threats that are causing mistrust among users. The blog reads that "It is the priority of G20 to reinject hope into technological innovation: by putting people, their rights, and needs first". With over 100 organizations calling on the leaders at the G20 Digital Economy Ministerial Meeting this week, the urgency speaks to how the leaders need to start putting people at "the center of the digital future".

The G20 comprises the world's largest advanced and emerging economies. It represents about two-thirds of the world's population, 85% of global gross domestic product and over 75% of global trade. These member nations engage with guest countries and other non-member countries to make sure that the G20 represents a broad range of international opinion. The G20 is known for addressing issues such as connectivity and the future of work and education. But topics such as security and privacy, which are of great importance and concern to people across the globe, haven't featured as prominently in its discussions. According to the blog post, "It must be in the interest of the G20 as a global economic powerhouse to address these issues so that our digital societies can continue to thrive".

With recent incidents such as a 16-year-old hacking Apple's "super secure" customer accounts, idle Android devices sending data to Google, and governments using surveillance tech to watch you, it is quite evident that the need of the hour is to make the Internet a secure place. Other recent data incidents include Homebrew's GitHub repo getting hacked in 30 minutes, TimeHop's data breach, and AG Bob Ferguson asking Facebook to stop discriminatory ads. Companies should be held accountable for invasive advertising techniques, manipulating user data or sharing user data without permission. People should be made aware of the ways their data is being used by governments and the private sector. There are measures being taken by organizations at an individual level to make the Internet safer for users. For instance, DARPA is working on AI forensic tools to catch deepfakes over the web, Twitter deleted 70 million fake accounts to curb fake news, and the EU fined Google $5 billion over the Android antitrust case.

But with the G20 bringing more focus to the issue, it can really help protect the development of the Internet on a global scale. G20 members should aim at protecting the information of all Internet users across the world. The G20 can play a pivotal role by taking into account people's concerns over Internet privacy and security. The techlash is "questioning the benefits of the digital society". Argentine President Mauricio Macri has said that to tackle the challenges of the 21st century we must "put the needs of people first", and it's time for the G20 to do the same.

Check out the official blog post by Mozilla, the Internet Society and the Web Foundation.

1k+ Google employees frustrated with continued betrayal, protest against Censored Search engine project for China
Four 2018 Facebook patents to battle fake news and improve news feed
Time for Facebook, Twitter, and other social media to take responsibility or face regulation

Intel faces backlash on Microcode Patches after it prohibited Benchmarking or Comparison

Melisha Dsouza
24 Aug 2018
4 min read
Intel has introduced microcode updates to mitigate the recently disclosed speculative execution vulnerabilities known as 'Foreshadow', a.k.a. the L1 Terminal Fault (L1TF). These microcode patches were supposed to handle various side-channel and timing attacks. A new license term applied to the new microcode reads as follows:

"You will not, and will not allow any third party to (i) use, copy, distribute, sell or offer to sell the Software or associated documentation; (ii) modify, adapt, enhance, disassemble, decompile, reverse engineer, change or create derivative works from the Software except and only to the extent as specifically required by mandatory applicable laws or any applicable third party license terms accompanying the Software; (iii) use or make the Software available for the use or benefit of third parties; or (iv) use the Software on Your products other than those that include the Intel hardware product(s), platform(s), or software identified in the Software; or (v) publish or provide any Software benchmark or comparison test results."

However, this was not very well received by the public. Let's find out why.

Issues in the security patches

The security fixes apparently slow down Intel processors, and Intel could very well be facing a backlash from the public over this. Imagine companies that run huge server farms or provide cloud services having to face a significant 5-10% speed reduction in their servers: security and reputation would both be at stake. Another dilemma is whether customers should install the fix at all. Many computer users don't allow outside or unprivileged users to run on their CPUs the way a cloud or hosting company does; for them, the slowdown incurred by installing the fix is unnecessary.

Through its license, Intel attempted to gag anyone who would collect and report information about the speed penalties the update incurs. Bad move. In reality, it should have focused on handling the security problems by owning up to the damage and publishing mitigations. This clause of the license merely hides how much performance is lost, and silencing the free speech of those who would simply publish benchmarks is bad ethics.

Intel's decision to include this clause in the license also drew attention from many big names in the tech industry. The Register reported on Tuesday that the Linux distro Debian decided to withhold packages containing the microcode security fix over concerns about its license. After this, open-source pioneer Bruce Perens called out Intel for trying to "gag" netizens. Lucas Holt, MidnightBSD project lead, also weighed in on Twitter.

Terms of the license stand re-written

To clear up the confusion, Intel has backtracked on the license for its latest microcode update after the previous wording outlawed public benchmarking of the chips. The reworked license no longer prohibits benchmarking. In an announcement via Twitter on Thursday, Imad Sousou, corporate VP and general manager of the Intel Open Source Technology Center, said: "We have simplified the Intel license to make it easier to distribute CPU microcode updates and posted the new version here. As an active member of the open source community, we continue to welcome all feedback and thank the community."

While Intel could have faced major trust issues from its dedicated users, it managed to retrace its steps just in time. It's about time Intel started taking responsibility for its own machines. Hopefully, the company thinks twice before introducing any other changes that could lead to a backlash. You can read all about the origins of the discussion on Bruce Perens' blog.

Intel acquires Vertex.ai to join it under their artificial intelligence unit
Defending Democracy Program: How Microsoft is taking steps to curb cybersecurity threats to democracy
Microsoft claims it halted Russian spearphishing cyberattacks
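Since the dispute was about whether users may publish before-and-after benchmarks of the microcode update, here is a minimal sketch of the kind of measurement at stake: timing a syscall-heavy workload and recording results to compare across an update. The workload choice is an illustrative assumption, not Intel's or any benchmark suite's methodology.

```python
# Minimal benchmarking sketch: time a syscall-heavy workload so results can be
# compared before and after a microcode/kernel update. Purely illustrative.
import os
import time
import statistics

def syscall_heavy_workload(iterations: int = 200_000) -> None:
    # os.getpid() crosses the user/kernel boundary, which is where
    # speculative-execution mitigations tend to add overhead.
    for _ in range(iterations):
        os.getpid()

def benchmark(runs: int = 5) -> None:
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        syscall_heavy_workload()
        timings.append(time.perf_counter() - start)
    print(f"median: {statistics.median(timings):.4f}s  "
          f"min: {min(timings):.4f}s  max: {max(timings):.4f}s")

if __name__ == "__main__":
    benchmark()
```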

Facebook bans another quiz app and suspends 400 more due to concerns of data misuse

Sugandha Lahoti
24 Aug 2018
2 min read
Facebook today revealed that it has banned another quiz app, myPersonality, over concerns of data misuse. This step was taken after the app's creators refused to let Facebook audit the app, raising doubts about whether they had shared user information with researchers as well as companies. This is the second quiz app to be banned since Facebook announced a large-scale audit of its platform in March; the first was This Is Your Digital Life, which Facebook banned after it was found to be linked to Cambridge Analytica.

According to Ime Archibong, VP of Product Partnerships at Facebook, "Since launching our investigation in March, we have investigated thousands of apps. And we have suspended more than 400." These apps were banned over concerns around the developers who built them or around the apps misusing the information people chose to share. According to Facebook's App Review policy, no user information will be shared with apps if the user hasn't used them in 90 days.

myPersonality was created by researchers at the Cambridge Psychometrics Centre to source data from Facebook users via personality quizzes. The quiz app gathered data on some four million users while it was operational from 2007 to 2012 and illegally gave it to researchers and companies. In May, Facebook suspended the app, which hadn't been active since 2012, but now it has been completely banned. Facebook will notify people who chose to share their Facebook information with myPersonality. Currently, it has no evidence that the quiz app accessed any friends' information; if such evidence is found, it will notify those people's Facebook friends as well.

Read Facebook's official statement on the Facebook blog.

Facebook is reportedly rating users on how trustworthy they are at flagging fake news.
Four 2018 Facebook patents to battle fake news and improve news feed.
Facebook, Apple, Spotify pull Alex Jones content.

Facebook is reportedly rating users on how trustworthy they are at flagging fake news

Sugandha Lahoti
23 Aug 2018
3 min read
Amidst the allegations surrounding Facebook on fake news, Facebook is now reportedly working on a scale to rate user trustworthiness. According to a report by the Washington Post, the company is giving its users a trustworthiness score ranging from 0 to 1, depending on the reliability of their false-news flagging. This is another of Facebook's attempts to revamp its image after getting unfriended by Wall Street, receiving a complaint from HUD, and being accused of discriminatory advertising. Previously, Facebook has filed several patents to battle fake news and improve the news feed, most recently patenting its news feed filter tool.

How does the fake news scoring system work?

If a user flags something as false news but fact checkers verify it as true, it could hurt their score and reduce the weight of their future Facebook flagging. If users consistently report false news that is indeed proven to be false, their score improves and Facebook will trust their future flagging more. The user-reported fakes are arranged on the basis of user trustworthiness to help make the best use of fact-checker time: the score is used to help the fact-checking team determine which posts to look at first. The idea behind this scoring is to screen out people who have a habit of making false claims about news articles. It will also help thwart users who band together to flag a piece of content from a news publisher they disagree with. Facebook says, "We developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system. The reason we do this is to make sure that our fight against misinformation is as effective as possible."

Facebook's News Feed product manager Tessa Lyons confirmed the scoring system exists and that it was developed sometime over the past year. Lyons said, "There's currently no way to see your own or someone else's trustworthiness score. And other signals are also used to compute the score." Facebook is keeping quiet about how the score is generated, to prevent bad actors from unethically boosting their trustworthiness score.

While it is good to distinguish genuine flagging from the rest so news moderators can focus on fact-checking, what is still missing is an effective mechanism to minimize the reach of fake news in the early hours after a post goes live. This makes us wonder if Facebook or other social media sites could be considering rating users based on their propensity for sharing and propagating fake news via shares and likes.

The entire interview is available on the Washington Post.

Four 2018 Facebook patents to battle fake news and improve news feed.
Facebook patents its news feed filter tool to provide more relevant news to its users.
Facebook plans to use Bloomsbury AI to fight fake news.
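Facebook has not disclosed its formula, but the behavior described above, a 0-to-1 score that rises when a user's flags are confirmed by fact checkers and falls when they are not, can be illustrated with a simple running estimate. Everything below, including the class and field names, is a hypothetical sketch, not Facebook's implementation.

```python
# Hypothetical sketch of a 0-to-1 flagger-reliability score that rises with
# confirmed flags and falls with rejected ones. Not Facebook's actual system.
from dataclasses import dataclass

@dataclass
class FlaggerReputation:
    confirmed: int = 1   # Beta-style prior: every user starts at a neutral 0.5
    rejected: int = 1

    @property
    def score(self) -> float:
        return self.confirmed / (self.confirmed + self.rejected)

    def record_flag(self, fact_checker_agreed: bool) -> None:
        if fact_checker_agreed:
            self.confirmed += 1
        else:
            self.rejected += 1

if __name__ == "__main__":
    user = FlaggerReputation()
    for outcome in (True, True, False, True):   # fact-checker verdicts on this user's flags
        user.record_flag(outcome)
    print(f"trust score: {user.score:.2f}")     # ~0.67
```

A score like this could then be used to sort the fact-checking queue, the prioritisation role the article describes.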

Did you know your idle Android device sends data to Google 10 times more often than an iOS device does to Apple?

Fatema Patrawala
23 Aug 2018
3 min read
New research shared by Digital Content Next reveals that idle Android devices send data to Google roughly 10 times more often than iOS devices send data to Apple. In a paper titled "Google Data Collection", Douglas C. Schmidt, a computer science professor at Vanderbilt University, catalogues how much data Google is collecting about consumers and their most personal habits across all of its products, and how that data is being tied together. More from Schmidt's research findings:

- An idle Android phone with the Chrome web browser active in the background communicated location information to Google 340 times during a 24-hour period.
- An equivalent experiment found that on an iOS device with Safari open but not Chrome, Google could not collect any appreciable data unless a user was interacting with the device.
- Additionally, an idle Android phone running Chrome sends back to Google nearly fifty times as many data requests per hour as an idle iPhone running Safari.
- Overall, an idle Android device was found to communicate with Google nearly 10 times more often than an Apple device communicates with Apple servers.

These data transmission frequencies mean Google can potentially tie together data gathered through passive means with a user's personal information. For example, anonymous advertising identifiers collect activity data from apps and third-party web page visits, and Google can associate such a cookie with a user's Google account when the user accesses a Google app in the same browser in which a third-party web page was accessed. (Source: Digital Content Next)

The research also showed Google tracking location data even after the consumer turned off the relevant settings. Google has clarified its location policies, yet it continues to collect location data through app features. The location data is used for ad targeting purposes, Google's primary business model.

Apple, by contrast, uses differential privacy to gather anonymous usage insights from devices like iPhones, iPads, and Macs. Apple says the data it collects off-device is used to improve services like Siri suggestions, and to help identify problematic websites that use excessive power or too much memory in Safari. When users set up their iOS device, it explicitly asks whether they wish to provide usage information, on an opt-in basis. If a user declines, no data is collected by the device unless they choose to opt in at a later time. The belief of Apple CEO Tim Cook and Apple executives that customers are not the company's product seems to be clearly in action here. The company also has a dedicated privacy website that explains its approach to privacy and government data requests.

Do you want to know what the future holds for privacy? It's got Artificial Intelligence on both sides.
Twitter's trying to shed its skin to combat fake news and data scandals, says Jack Dorsey
Mozilla's new Firefox DNS security updates spark privacy hue and cry
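The report's headline figures come down to counting how often a device phones home over a fixed window. As a rough illustration of that kind of tallying, here is a hypothetical sketch that counts requests per destination per hour from a captured traffic log; the CSV format and file name are assumptions, not the paper's actual data or tooling.

```python
# Hypothetical sketch of the kind of counting behind the figures above:
# given a captured traffic log (timestamp, destination), tally requests per
# hour per destination. The CSV format is an assumption, not the paper's data.
import csv
from collections import Counter
from datetime import datetime

def requests_per_hour(log_path: str) -> Counter:
    counts = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):            # expects columns: timestamp, destination
            ts = datetime.fromisoformat(row["timestamp"])
            counts[(row["destination"], ts.strftime("%Y-%m-%d %H:00"))] += 1
    return counts

if __name__ == "__main__":
    for (dest, hour), n in sorted(requests_per_hour("capture.csv").items()):
        print(f"{hour}  {dest:20s}  {n} requests")
```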

Apache Struts faces code execution flaw risking enterprises to attacks

Fatema Patrawala
23 Aug 2018
2 min read
A bug has been found in the core infrastructure of Apache Struts 2. The issue was found by the cybersecurity firm Semmle on April 10, and code patches were released on June 25. The Apache Software Foundation is facing a security vulnerability, as the bug affects all versions of Apache Struts 2.

Researchers from Semmle uncovered that the security flaw is caused by insufficient validation of untrusted user data in the core Struts framework. Because the bug, CVE-2018-11776, has been discovered in the Struts core, the team says there are multiple attack vectors threat actors could use to exploit the vulnerability. A build is likely vulnerable if the alwaysSelectFullNamespace flag is set to true in the Struts configuration (which is automatically the case when the Struts Convention plugin is in use), or if a user's Struts configuration file contains a tag that does not specify the optional namespace attribute or specifies a wildcard namespace.

"This vulnerability affects commonly-used endpoints of Struts, which are likely to be exposed, opening up an attack vector to malicious hackers. On top of that, the weakness is related to the Struts OGNL language, which hackers are very familiar with, and are known to have been exploited in the past," says Man Yue Mo from the Semmle Security Research Team.

The vulnerability affects all versions of Apache Struts 2. Firms which use the popular open-source framework are urged to update their builds immediately: users of Struts 2.3 are advised to upgrade to 2.3.35, and users of Struts 2.5 need to upgrade to 2.5.17. As the latest releases only contain fixes for the vulnerability, Apache does not expect users to experience any backward compatibility issues. The Semmle team mentioned, "Previous disclosures of similarly critical vulnerabilities have resulted in exploits being published within a day, putting critical infrastructure and customer data at risk. All applications that use Struts are potentially vulnerable, even when no additional plugins have been enabled."

Git-bug: A new distributed bug tracker embedded in git
How to Debug an application using Qt Creator
Debugging Xamarin Application on Visual Studio [Tutorial]
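The vulnerable configurations described above can be spotted mechanically. Below is a hypothetical audit sketch, not an official Apache or Semmle tool, that scans a struts.xml for package or action elements whose namespace attribute is missing or uses a wildcard, the conditions the article flags as likely vulnerable; it does not check the alwaysSelectFullNamespace setting.

```python
# Hypothetical audit helper: flag Struts configuration elements whose
# namespace attribute is missing or set to a wildcard, the conditions the
# article describes as likely vulnerable. Not an official Apache or Semmle tool.
import sys
import xml.etree.ElementTree as ET

def audit_struts_config(path: str) -> list:
    findings = []
    tree = ET.parse(path)
    for elem in tree.iter():
        if elem.tag not in ("package", "action"):
            continue
        ns = elem.get("namespace")
        if ns is None:
            findings.append(f"<{elem.tag} name={elem.get('name')!r}> has no namespace attribute")
        elif "*" in ns:
            findings.append(f"<{elem.tag} name={elem.get('name')!r}> uses wildcard namespace {ns!r}")
    return findings

if __name__ == "__main__":
    for finding in audit_struts_config(sys.argv[1] if len(sys.argv) > 1 else "struts.xml"):
        print("review:", finding)
```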

Defending Democracy Program: How Microsoft is taking steps to curb increasing cybersecurity threats to democracy

Prasad Ramesh
23 Aug 2018
4 min read
With growing cybersecurity threats, Microsoft has taken over six internet domains acting on a court order, and introduced AccountGuard for email. Microsoft AccountGuard extends the company's Defending Democracy Program and applies to both organizational and personal email accounts. Microsoft's Digital Crimes Unit (DCU) executed a court order to take over six internet domains created by a group known as Strontium, alternatively Fancy Bear or APT28. The group is widely associated with the Russian government. The six internet domains, my-iri.org, hudsonorg-my-sharepoint.com, senate.group, adfs-senate.services, adfs-senate.email and office365-onedrive.com, impersonated the real websites. Of late, there have been instances of foreign entities launching cyber strikes to disrupt elections.

What is Microsoft AccountGuard?

Microsoft AccountGuard will provide "state-of-the-art cybersecurity protection" at no additional cost to individuals, campaigns and related political institutions. Brad Smith, President at Microsoft, stated: "To be clear, we currently have no evidence these domains were used in any successful attacks before the DCU transferred control of them, nor do we have evidence to indicate the identity of the ultimate targets of any planned attack involving these domains." The technology is free of charge to candidates, campaigns and related political institutions using Office 365. Microsoft AccountGuard will provide these features:

- Cross-account threat detection and notification: Microsoft's Threat Intelligence Center will enable it to detect and notify of attacks in a unified way across both organizational and personal email. When threats are verified, Microsoft will provide personal and expedited recommendations to affected political campaigns and their staff to secure the concerned systems. The unified notification system will provide a comprehensive view of attacks against the campaign or organization.
- Security guidance and ongoing education: Microsoft will provide guidance to help officials, political campaigns and eligible organizations further secure their networks and email systems. This includes multi-factor authentication and installing the latest security updates to control access to data. AccountGuard will also offer updated briefings and training to address evolving cyber-attack trends.
- Early adopter opportunities: There will be preview releases of new security features of the kind used in large corporate and government accounts. If you are eligible for Microsoft AccountGuard, you can request an invitation to enroll.

A quick look at Microsoft's Defending Democracy Program

The Defending Democracy Program is a global effort, as Microsoft tries to scale its efforts and reach other democratic countries to protect their processes in the coming years. Microsoft has identified 2018 as a critical year for governments and tech companies to work together towards making elections more secure. The Defending Democracy Program consists of steps that include:

- Protecting campaigns from hacks through better account monitoring and increased response measures to attacks.
- Supporting proposals like the Honest Ads Act to increase online political advertising transparency, in addition to adopting self-regulatory measures across Microsoft platforms.
- Exploring technological solutions to protect and preserve electoral processes, and engaging with federal, state, and local officials to identify and fix cyber threats.
- Defending against disinformation, propaganda and fake news by partnering with institutions and think tanks dedicated to countering such activities.

Microsoft will focus on the U.S. midterm elections of November 2018, where it is piloting new cross-industry protections; this will also be done for the 2020 U.S. presidential elections. Tom Burt, Corporate Vice President, Customer Security & Trust, stated: "Expect to hear more from us on what we're doing, both on our own and in partnership with governments and our industry colleagues, to put our cybersecurity expertise to work for the defense of democracy."

Visit the Microsoft Blog for more details on AccountGuard and the Defending Democracy Program.

Google introduces Cloud HSM beta hardware security module for crypto key security
Top 5 cybersecurity trends you should be aware of in 2018
Microsoft Edge introduces Web Authentication for passwordless web security