
Tech News - Security

470 Articles

SBI data leak in India results in information of millions of customers exposed online

Prasad Ramesh
31 Jan 2019
2 min read
The State Bank of India (SBI), the country's largest bank, leaked data belonging to millions of its account holders. Information such as bank balances and recent transactions was visible online as a result of the leak. As per a TechCrunch report, two months of data was stored in a Mumbai-based data center serving an SMS- and call-based system that customers use to query information about their bank accounts. The SBI server was not password protected, allowing anyone with an internet connection to access the data if they knew where to look. It is unclear how long the server was left unprotected, but a security researcher found it and reported it to TechCrunch.

SBI Quick is a service that enables SBI customers to perform various actions on their bank account via SMS, missed calls, and so on. Customers can then receive information such as their balance and recent transactions on their phone, which is especially useful for people who do not use a smartphone. The report says that the back-end SMS system for this service was exposed, leading to the SBI data leak. Since the server was not password protected, information like phone numbers, bank balances, recent transactions, and even partial account numbers was exposed.

Speaking to TechCrunch, security researcher Karan Saini said: “The data available could potentially be used to profile and target individuals that are known to have high account balances.” He added that knowing a phone number “could be used to aid social engineering attacks — which is one the most common attack vector here with regard to financial fraud.” The report also says that the server has since been secured.

GDPR complaint claims Google and IAB leaked ‘highly intimate data’ of web users for behavioral advertising
How to protect your VPN from Data Leaks
A WordPress plugin vulnerability is leaking Twitter account information of users making them vulnerable to compromise


Facebook pays users $20/month to install a ‘Facebook Research’ VPN that spies on their phone and web activities, TechCrunch reports

Savia Lobo
30 Jan 2019
4 min read
TechCrunch, in a recent report, revealed that Facebook has been spying on users' data and internet habits by paying people aged 13 to 35 $20 a month, plus referral fees, to install a ‘Facebook Research’ VPN via beta-testing services such as Applause, BetaBound, and uTest. The VPN gives Facebook visibility into users' web as well as phone activity. The program is similar to Facebook's Onavo Protect app, which was banned by Apple in June 2018 and shut down completely in August. Launched in 2016, the Facebook Research project was renamed Project Atlas in mid-2018 after the backlash against Onavo. One of the companies, uTest, was also running ads for a "paid social media research study" on Instagram and Snapchat, tweeted one of the TechCrunch editors who contributed to the report.
https://twitter.com/JoshConstine/status/1090395755880173569
TechCrunch has since updated the story: “Facebook now tells TechCrunch it will shut down the iOS version of its Research app in the wake of our report.”

According to the TechCrunch report, “Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy so the social network can decrypt and analyze their phone activity.” Guardian Mobile Firewall’s security expert Will Strafach told TechCrunch, “If Facebook makes full use of the level of access they are given by asking users to install the Certificate, they will have the ability to continuously collect the following types of data: private messages in social media apps, chats in instant messaging apps – including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location-tracking apps you may have installed.” As part of the study, users were even asked to provide screenshots of their Amazon purchases.

For underage users, Applause requires parental permission, and Facebook is mentioned in the consent agreement. The agreement also includes this line about the company tracking children: “There are no known risks associated with this project, however, you acknowledge that the inherent nature of the project involves the tracking of personal information via your child’s use of Apps. You will be compensated by Applause for your child’s participation.” As highlighted by TechCrunch, the Facebook Research app sent data to an address affiliated with Onavo.
https://twitter.com/chronic/status/1090397698803621889

A Facebook spokesperson wrote that the program was being misrepresented by TechCrunch and that there was never a lack of transparency surrounding it. In response, Josh Constine, editor at TechCrunch, tweeted, “Here is my rebuttal to Facebook's statement regarding the characterization of our story. We stand by our report, and have a fully updated version here,” and provided a link to the updated report along with a snippet from it.
https://twitter.com/JoshConstine/status/1090519765452353536
https://twitter.com/matthewstoller/status/1090605150673215494

According to Will Strafach, who did the app analysis for TechCrunch, “they didn't even bother to change the function names, the selector names, or even the "ONV" class prefix. it's literally all just Onavo code with a different UI. Also, the Root Certificate they have users install so that they can access any TLS-encrypted traffic they'd like.”

According to a user on Hacker News, “By using a VPN they forced all traffic to go through their servers, and with the root certificate, they are able to monitor and gather data from every single app and website users visit/use. Which would include medical apps, chat apps, Maps/gps apps and even core operating system apps. So for users using Facebook's VPN they are effectively able to mine data which actually belongs to other apps/websites.” Another user writes, “How is this not in violation of most wiretapping laws? Facebook is not the common carrier in these cases. Both parties of conversations with teens are not consenting to the wiretapping, which is not allowed in many US states. I’m not sure teenage consent is considered “consent” and the parents aren’t a party to the conversations Facebook is wiretapping. Facebook is both paying people and recording the electronic communications.” To know more about this news, head over to TechCrunch’s complete report.

Facebook hires top EFF lawyer and Facebook critic as WhatsApp privacy policy manager
Facebook has blocked 3rd party ad monitoring plugin tools from the likes of ProPublica and Mozilla that let users see how they’re being targeted by advertisers
Facebook releases a draft charter introducing a new content review board that would filter what goes online


Outage in Microsoft 365 and Gmail made users unable to log into their accounts

Savia Lobo
30 Jan 2019
2 min read
Yesterday, Microsoft suffered a worldwide issue affecting several of its cloud services, including Office 365, the Azure Portal, Dynamics 365, and LinkedIn, leaving many users unable to log into its cloud-based services. The outage was reported at around 7:15 am Sydney time and is believed to have affected multiple enterprise customers. Microsoft confirmed the problems and said that network issues in Azure Active Directory were to blame. In an update to customers, the Microsoft team said that users “that have their authorization cached are unaffected by this issue, and new authentications are succeeded approximately 50 percent of the time.” They also tweeted, “Engineers are investigating a Microsoft networking issue impacting customers' ability to log in to the Azure Portal.”
https://twitter.com/AzureSupport/status/1090360382466605056
Downdetector, an outage tracking service, showed a heatmap of the problems, which indicated that the east coast of Australia, as well as New Zealand, felt the impact. The Microsoft 365 team tweeted today, “We've identified a third-party network provider issue that is affecting authentication to multiple Microsoft 365 services. We're moving services to an alternate network provider to resolve the issue.”
https://twitter.com/GossiTheDog/status/1090579331502485505
To know more about this in detail, visit Microsoft’s official website.

Gmail also suffers from a global outage
Yesterday, Gmail users all around the globe reported a "404" error when they tried accessing the service around 11 am. Google's service status page listed no issues with Gmail at the time, but users clearly disagreed. According to Outage Report and Down Detector, Gmail was down in nearly all of Europe and in parts of North America, South America, and Asia.
https://twitter.com/zeefu/status/1090204478308012033
However, the issue with Gmail should now be resolved, as per a Google spokesperson.

Internet Outage or Internet Manipulation? New America lists government interference, DDoS attacks as top reasons for Internet Outages across the world
How Dropbox uses automated data center operations to reduce server outage and downtime
Philips Hue’s second ongoing remote connectivity outage infuriates users


San Francisco legislation proposes a citywide ban on government’s use of facial recognition technology

Melisha Dsouza
30 Jan 2019
3 min read
San Francisco will be the first city in the U.S. to ban the government from using facial recognition technology, if legislation tabled yesterday is approved. The ‘Stop Secret Surveillance Ordinance’ is being proposed by Supervisor Aaron Peskin. The legislation seeks to do the following:
City departments will have to seek approval from the Board of Supervisors before using or buying surveillance technology.
Annual audits of surveillance technology will be implemented to ensure that the tools involved are properly used.
A blanket ban will stop departments from purchasing or using facial recognition technology.
The legislation, which would also apply to law enforcement, will be heard in committee next month. It has already obtained support from civil rights groups such as the ACLU of Northern California.
https://twitter.com/Matt_Cagle/status/1090421659754991616
The legislation also makes a strong point that ‘surveillance efforts have historically been used to intimidate and oppress certain communities and groups more than others, including those that are defined by a common race, ethnicity, religion, national origin, income level, sexual orientation, or political perspective. The propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits, and the technology will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring’.

The news comes at a time when facial recognition technology is the subject of debate among privacy and security experts. While the technology has been put to good use by various organizations, we can’t help but notice the negative impact it can have on citizens. Take, for instance, the WEF 2019 talk that pointed out the role of governments and the military in the use (and potential abuse) of today’s technology. Users on Twitter reacted along the same lines, expressing their relief that the government will no longer be able to “invade and micromanage the privacy of others.” Some have also called the government's use of facial recognition ‘dangerous’.
https://twitter.com/jevanhutson/status/1090384142670258176
https://twitter.com/NicoleOzer/status/1090433222272417792
Head over to The Verge for more insights on this news.

Four IBM facial recognition patents in 2018, we found intriguing
Google won’t sell its facial recognition technology until questions around tech and policy are sorted
Australia’s Facial recognition and identity system can have “chilling effect on freedoms of political discussion, the right to protest and the right to dissent”: The Guardian report


Facebook hires top EFF lawyer and Facebook critic as WhatsApp privacy policy manager

Melisha Dsouza
30 Jan 2019
2 min read
Nate Cardozo, former top legal counsel at the Electronic Frontier Foundation (EFF), has been hired to undertake a privacy role at WhatsApp. Cardozo, who until recently worked as Senior Information Security Counsel with the EFF, is also known as a prominent Facebook critic. He has already worked with private companies on privacy policies that protect user rights; now, joining the Facebook team, he says that “the privacy team I’ll be joining knows me well, and knows exactly how I feel about tech policy, privacy, and encrypted messaging. And that’s who they want at managing privacy at WhatsApp. I couldn’t pass up that opportunity.”

In a 2015 blog post, Cardozo questioned Facebook’s ethics, alleging that Facebook’s data model “depends on our collective confusion and apathy about privacy” to carry out its practice of tracking users through their behaviour on the social media site. The hire can also be seen as a strategic move by Facebook, which brought Cardozo on board immediately after the attention drawn by its intention to integrate WhatsApp, Instagram, and Facebook Messenger. Security- and privacy-minded observers had expressed concerns about what else the integration could mean for user security. The EFF has also been critical of Facebook over WhatsApp user privacy and over secretly released Facebook documents that outlined illicit practices at the company.

While hiring such a senior EFF figure may put several privacy-minded people at rest, users on Hacker News are speculating that the move was a calculated one, ‘removing a critic and making him an ally’ and keeping him from ‘constantly stirring controversy.’ Alongside Nate, Facebook has also hired Robyn Greene from the Open Technology Institute to focus on law enforcement access and data protection at Facebook. She announced the news on Twitter yesterday.
https://twitter.com/Robyn_Greene/status/1090343419237617664
You can head over to CNBC for more insights on this news.

GDPR complaint claims Google and IAB leaked ‘highly intimate data’ of web users for behavioral advertising
Biometric Information Privacy Act: It is now illegal for Amazon, Facebook or Apple to collect your biometric data without consent in Illinois
Advocacy groups push FTC to fine Facebook and break it up for repeatedly violating the consent order and unfair business practices


GDPR complaint claims Google and IAB leaked ‘highly intimate data’ of web users for behavioral advertising

Melisha Dsouza
29 Jan 2019
4 min read
Last September, a complaint was filed against Google and other ad auction companies over a data breach that “affects virtually every user on the web”. The complaint was made by a host of privacy activists and browser makers, alleging that tech companies broadcast people’s personal data to dozens of companies, without proper security, through the mechanism of “behavioural ads”. The complaint stated that every time a person visits a website and is shown a “behavioural” ad, intimate personal data describing the visitor and what they are watching online is captured and broadcast to tens or hundreds of companies, in order to solicit potential advertisers’ bids for the attention of the specific individual visiting the website. The complaints were lodged by Jim Killock of the U.K.’s Open Rights Group, tech policy researcher Michael Veale of University College London, and Johnny Ryan of the pro-privacy browser firm Brave. They claimed that Google and other ad-tech firms were breaking the EU’s strict General Data Protection Regulation (GDPR) by unlawfully recording people’s sensitive characteristics.

Now, new evidence has been released by the same organizations that filed last September's complaint, showing that the data broadcast includes information about people’s ethnicity, disabilities, sexual orientation, and more. This sensitive information allows advertisers to specifically target incest or abuse victims, or those with eating disorders. The irony is that yesterday was ‘International Data Protection Day’.

What is behavioral advertising?
Yahoo Finance has explained the concept of behavioral advertising very simply. The online ad industry tracks a person's movements around the internet and builds a profile based on what the individual looks at and the sites they visit. On visiting a webpage that runs behavioral ads, an automated auction takes place between ad agencies, with the winner allowed to show the user an ad that supposedly matches their profile. This ultimately means that for the real-time bidding system to work, personal details about the user have to be broadcast to the advertisers in so-called “bid requests”.

Evidence against Google and IAB
Joining the list of complainants is Poland’s Panoptykon Foundation, another rights group, which has complained to its local data protection authority about organizations including Google and the Interactive Advertising Bureau (IAB), the industry body that sets the rules for ad auctions. The evidence submitted by the complainants comprises category lists from Google and IAB, including topics such as being an incest victim, having cancer, having a substance-abuse problem, being into a certain kind of politics, or adhering to a certain religion or sect. Special-needs kids, endocrine and metabolic diseases, birth control, infertility, diabetes, Islam, Judaism, disabled sports, bankruptcy: these categories serve as supplementary evidence for the two original complaints filed with the UK’s ICO and the Irish DPC last year. A Google spokesperson told TechCrunch that the company has “strict policies that prohibit advertisers on our platforms from targeting individuals on the basis of sensitive categories” and that if it found ads violating those policies, it would take immediate action. The original IAB lists can be downloaded as a spreadsheet. The PDF versions of the IAB lists, with special category and sensitive data highlighted by the complainants, can be viewed here (v1) and here (v2).
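To picture the bid-request mechanism described above, here is a toy, OpenRTB-style request sketched in Python. The field names, values, and segment labels are illustrative assumptions rather than Google's or IAB's actual schemas; the point is simply that a visitor's identifiers and interest categories travel to every participating bidder before a single ad is chosen.

```python
import json

# A simplified, hypothetical bid request, loosely modelled on the OpenRTB
# pattern the complaint describes. Real requests differ in detail, but the
# structure is the point: identifiers and interest categories are broadcast
# to every participating bidder.
bid_request = {
    "id": "auction-7f3a",
    "site": {"page": "https://example-news-site.test/article"},
    "device": {
        "ua": "Mozilla/5.0 ...",          # browser fingerprint material
        "ip": "203.0.113.42",             # truncated or full IP address
        "geo": {"country": "GB", "city": "London"},
    },
    "user": {
        "id": "cookie-sync-id-1234",      # pseudonymous but persistent ID
        # Placeholder category codes standing in for the sensitive interest
        # segments cited in the complaint (health conditions, religion, etc.).
        "data": [{"segment": [{"id": "health/diabetes"},
                              {"id": "religion/islam"}]}],
    },
}

# In real-time bidding this payload goes to tens or hundreds of bidders;
# each of them receives the profile whether or not it wins the auction.
print(json.dumps(bid_request, indent=2))
```

Whether or not a bidder wins the auction, it has already received the profile, which is the gap the complainants highlight.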
You can go ahead and download Google’s original document for more insights on this news.

French data regulator, CNIL imposes a fine of 50M euros against Google for failing to comply with GDPR
European Consumer groups accuse Google of tracking its users’ location, calls it a breach of GDPR
Twitter on the GDPR radar for refusing to provide a user his data due to ‘disproportionate effort’ involved

Cisco announces severe vulnerability that gives improper access controls for URLs in its Small Business routers RV320 and RV325

Savia Lobo
29 Jan 2019
2 min read
Last week, Cisco announced a severe vulnerability in the web-based management interface of its Small Business RV320 and RV325 Dual Gigabit WAN VPN Routers. The vulnerability could allow an unauthenticated, remote attacker to retrieve sensitive information. Cisco's advisory notes that the vulnerability is due to improper access controls for URLs. An attacker could exploit it by connecting to an affected device via HTTP or HTTPS and requesting specific URLs. A successful exploit could allow the attacker to download the router configuration or detailed diagnostic information.

Cisco routers vulnerable to CVE-2019-1653
According to a Bad Packets report, a scan of 15,309 unique IPv4 hosts determined that 9,657 Cisco RV320/RV325 routers are vulnerable to CVE-2019-1653. The report states:
6,247 out of 9,852 Cisco RV320 routers scanned are vulnerable (1,650 are not vulnerable and 1,955 did not respond to our scans)
3,410 out of 5,457 Cisco RV325 routers scanned are vulnerable (1,027 are not vulnerable and 1,020 did not respond to our scans)
Source: Bad Packets report
The vulnerability affects Cisco Small Business RV320 and RV325 Dual Gigabit WAN VPN Routers running Firmware Releases 1.4.2.15 and 1.4.2.17. Cisco has released firmware updates to address the vulnerability; however, the company notes that there are no workarounds. To know about this news in detail, visit Cisco’s official website.
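The advisory essentially describes management URLs that answer without credentials. Below is a hedged Python sketch of how an administrator might verify that their own RV320/RV325 no longer responds to unauthenticated requests; the path shown is a deliberately hypothetical placeholder (the real URLs are listed in Cisco's and Bad Packets' write-ups), and the check should only be run against devices you are authorised to test.

```python
import requests  # third-party: pip install requests

# Hypothetical management path used purely as a placeholder; substitute the
# URL(s) from Cisco's advisory for your own device.
CANDIDATE_PATHS = ["/cgi-bin/example-config-export"]

def check_unauthenticated_access(host: str) -> None:
    """Report whether a device answers a management URL without credentials.

    Only run this against routers you own or are authorised to test.
    """
    for path in CANDIDATE_PATHS:
        url = f"https://{host}{path}"
        try:
            # No credentials are supplied on purpose: a patched device should
            # reject the request or redirect to a login page.
            resp = requests.get(url, verify=False, timeout=5)
        except requests.RequestException as exc:
            print(f"{url}: unreachable ({exc})")
            continue
        if resp.status_code == 200 and resp.content:
            print(f"{url}: responded with data and no authentication -- "
                  f"likely running vulnerable firmware, update it")
        else:
            print(f"{url}: returned HTTP {resp.status_code}, "
                  f"no unauthenticated data exposed")

if __name__ == "__main__":
    check_unauthenticated_access("192.0.2.1")  # replace with your router's IP
```

A patched device should return a login page or an error rather than configuration data.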
Cisco and Huawei Routers hacked via backdoor attacks and botnets
Dropbox purchases workflow and eSignature startup ‘HelloSign’ for $250M
Per the new GDC 2019 report, nearly 50% of game developers think game industry workers should unionize


Biometric Information Privacy Act: It is now illegal for Amazon, Facebook or Apple to collect your biometric data without consent in Illinois

Melisha Dsouza
28 Jan 2019
5 min read
On 25th January, the Illinois Supreme Court issued a unanimous ruling stating that when companies collect biometric data like fingerprints or face prints without informed opt-in consent, they can be sued, even without proof of concrete injuries such as identity fraud or physical harm. The law, known as the Biometric Information Privacy Act (BIPA), requires that companies explicitly inform a person about what biometric data they will collect and how the data will be stored and used. The data includes information such as fingerprints, facial scans, iris scans, or other biological information. The company then has to obtain prior consent from the person before capturing these details. A point to be noted here is that, while other states only allow attorneys general to sue companies, the Illinois BIPA gives individuals the right to sue, after which they can collect damages of $1,000 per violation (rising to $5,000 if the court finds a company deliberately or recklessly flouted the law).

Rosenbach v. Six Flags
According to Fast Company, the decision came in a landmark lawsuit against the theme park Six Flags, which recorded the thumbprint of a 14-year-old boy without notice or written consent while issuing him a season pass in 2014. Six Flags did not notify the boy or his mother, Stacy Rosenbach, that it was obtaining his fingerprints. She sued Six Flags for violating BIPA, and in its defense Six Flags argued that because Rosenbach couldn’t demonstrate that taking the fingerprints had done any “harm” to the boy (for example, a data breach or security problem), the company wasn’t liable for damages. According to the Electronic Frontier Foundation, “EFF, along with ACLU, CDT, the Chicago Alliance Against Sexual Exploitation, PIRG, and Lucy Parsons Labs, filed an amicus curiae brief urging the Illinois Supreme Court to adopt a robust interpretation of BIPA. The Illinois Supreme Court agreed with us and soundly rejected the defendants’ argument that BIPA required a person to show an injury beyond loss of statutory privacy rights”. On Friday the state’s Supreme Court ruled that Six Flags had, indeed, violated the law and would need to pay the boy damages, in spite of no “harm” being shown. The ruling establishes in Illinois that if a company violates a citizen’s privacy without prior notice or consent and the citizen sues, the plaintiff doesn’t need to demonstrate an additional harm for the law to protect them.

BIPA sets an example for similar lawsuits
The Six Flags ruling builds a stronger case for other ongoing lawsuits, including one against Facebook in which consumers claimed that Facebook violated the state privacy law by using facial recognition technology on their uploaded photographs without their consent. Facebook is fighting back by saying consumers should have to show that the lawbreaking practice caused ‘additional harm’ beyond a mere violation. Google faced a similar lawsuit on Thursday, in which two Illinois residents allege that the company “failed to obtain consent from anyone when it introduced its facial recognition technology.” Just last month, Google won the dismissal of a lawsuit it had been facing since 2016 for allegedly scanning and saving the biometric data of a woman captured, without her consent, in 11 photos taken on Android by a Google Photos user. As per Bloomberg, the lawsuit was dismissed by a judge in Chicago, who found that the plaintiff didn’t suffer “concrete injuries”.

Senior staff counsel Rebecca Glenberg of the ACLU of Illinois said in a statement that “Your biometric information belongs to you and should not be left to corporate interests who want to collect detailed information about you for advertising and other commercial purposes.”

What does this mean for the tech industry?
According to Illinois.org, while facial recognition and biometric technology do have the potential to simplify citizens' lives, BIPA may affect technological innovation overall. The litigation has the potential to drive up costs on both ends of the spectrum. Businesses like Apple, which have demonstrated the importance of biometric technology, will have to take extra precautions to comply with BIPA; if the law is violated, there is the expense of class-action lawyers and litigants. Companies may simply decide against hiring people in Illinois because of the expense and hassle. It will be interesting to see companies coming up with frameworks that safeguard them against BIPA violations while reassuring citizens about the way their data is handled and used. You can head over to the Electronic Frontier Foundation’s official post for more insights on this news.

ACLU files lawsuit against 11 federal criminal and immigration enforcement agencies for disclosure of information on government hacking
The district of Columbia files a lawsuit against Facebook for the Cambridge Analytica scandal
IBM faces age discrimination lawsuit after laying off thousands of older workers, Bloomberg reports


OSMF’s (OpenStreetMap Foundation) investigation report on unusual membership signups just before their board elections

Savia Lobo
28 Jan 2019
3 min read
OpenStreetMap Foundation (OSMF), the organisation behind the world’s largest collaborative mapping community, saw an unusual rise in membership signups in November 2018. Guillaume Rischard, a member of the Membership Working Group (MWG) and a then-board candidate, detected that a large group of OSMF membership applications arrived under suspicious circumstances just as the window of eligibility for voting in the 2018 board election was closing. He reported the issue to the board on 20th November 2018. Rischard, along with another OSMF MWG member, Steve Friedl, completed an investigation report on the issue on 26th December 2018; the report was released to other OSMF members exactly a month later, on 26th January 2019.

The OpenStreetMap Foundation is a non-profit company registered in England and Wales, “supporting, but not controlling, the OpenStreetMap Project”. The OSMF holds an online general meeting once a year, at which new members of the board of directors are elected. The board, however, did not pass a circular resolution that would have rejected these signups. The Membership Working Group (MWG), whose duties include the administration of OSMF memberships, undertook an investigation into the circumstances of the matter. According to the report, the observations that were brought to light include:
All were Associate members; there is usually a mix of membership types.
All had @gmail.com addresses.
Almost none provided OSM usernames, which is very unusual.
Many were associated with the IP address of GlobalLogic, an outsourcing firm in India operating in the OSM/mapping world.
All came in at the last minute, in a very concentrated manner.

Rischard, in his email addressing his fellow OSMF members, wrote, “We have uncovered evidence that the company behind the campaign, GlobalLogic, is not being truthful, and that the members did not sign up individually. GlobalLogic has provided versions of the event that are contradictory and not credible. We do not know the motivations for this campaign, but strongly suspect that this was an attempt, luckily unsuccessful, to influence the recent OSMF board election.” The report states that “these new members were not eligible to vote in the 2018 AGM, and unless they renew again next year, probably won’t be able to vote in December 2019.” To know more about this news in detail, read the report, “Investigation into the Unusual Signups”.
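As a rough illustration of the screening the MWG describes, the toy Python sketch below groups signups by source IP and flags bursts that arrive within a short window with no OSM usernames attached. The records and thresholds are invented for illustration; this is not OSMF's actual tooling or data.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Invented sample records for illustration only -- not real OSMF data.
signups = [
    {"ip": "198.51.100.7", "email": "a@gmail.com", "osm_user": None,
     "time": datetime(2018, 11, 15, 23, 40)},
    {"ip": "198.51.100.7", "email": "b@gmail.com", "osm_user": None,
     "time": datetime(2018, 11, 15, 23, 42)},
    {"ip": "203.0.113.9",  "email": "c@example.org", "osm_user": "mapper_c",
     "time": datetime(2018, 11, 2, 10, 5)},
]

def suspicious_clusters(records, window=timedelta(hours=1), min_size=2):
    """Group signups by source IP and flag bursts arriving close together
    with no OSM username -- the pattern the MWG report describes."""
    by_ip = defaultdict(list)
    for rec in records:
        by_ip[rec["ip"]].append(rec)
    flagged = {}
    for ip, recs in by_ip.items():
        recs.sort(key=lambda r: r["time"])
        burst = (len(recs) >= min_size
                 and recs[-1]["time"] - recs[0]["time"] <= window)
        no_usernames = all(r["osm_user"] is None for r in recs)
        if burst and no_usernames:
            flagged[ip] = recs
    return flagged

for ip, recs in suspicious_clusters(signups).items():
    print(f"{ip}: {len(recs)} signups within an hour, none with OSM usernames")
```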
SEC’s EDGAR system hacked; allowing hackers to allegedly make a profit of $4.1 million via insider trading
Hyatt Hotels launches public bug bounty program with HackerOne
Black Hat hackers used IPMI cards to launch JungleSec Ransomware, affects most of the Linux servers


YouTube to reduce recommendations of ‘conspiracy theory’ videos that misinform users in the US

Natasha Mathur
28 Jan 2019
3 min read
YouTube announced an update to its recommendation system last week. As per the update, YouTube aims to reduce recommendations of videos that promote misinformation (for example, conspiracy videos, false claims about historical events, and flat-earth videos) and can affect users in harmful ways, in order to improve the user experience on the platform. YouTube states that the change will be gradual and, for now, will apply to less than 1% of the videos on YouTube. “To be clear, this will only affect recommendations of what videos to watch, not whether a video is available on YouTube. As always, people can still access all videos that comply with our Community Guidelines”, states the YouTube team. YouTube is also working on eliminating the presence of content that “comes close” to violating its community guidelines.

The change uses machine learning along with human evaluators and experts from all over the United States to train the machine learning systems responsible for generating recommendations. Evaluators are trained using public guidelines and offer their input on the quality of a video. Currently, the change applies only to a small set of videos in the US, as the machine learning systems are not yet very accurate. The update will roll out in other countries once the systems become more efficient.

YouTube is continually updating its systems to improve the user experience on its platform. For instance, YouTube has taken steps against clickbait content in the past, and keeps updating its systems to put more focus on viewer satisfaction instead of views while also making sure not to recommend clickbait videos as often. The YouTube team also mentions that YouTube now presents recommendations from a wider set of topics (instead of many similar recommendations), and that hundreds of changes were made to optimize the quality of recommendations for users. “It's just another step in an ongoing process, but it reflects our commitment and sense of responsibility to improve the recommendations experience on YouTube. We think this change strikes a balance between maintaining a platform for free speech and living up to our responsibility to users”, states the YouTube team.

Public reaction to the news is varied, with some calling YouTube’s new move ‘censorship’ while others appreciate it:
https://twitter.com/Purresnol/status/1089022759546601472
https://twitter.com/therealTTG/status/1088826997189591040
https://twitter.com/nomo_BS/status/1089007519706550272
https://twitter.com/Mattlennial/status/1089008644589604866
https://twitter.com/politic_sky/status/1089006646288941056

YouTube bans dangerous pranks and challenges
YouTube’s CBO speaks out against Article 13 of EU’s controversial copyright law
Is YouTube’s AI Algorithm evil?

Mac users affected by ‘Shlayer Trojan’ dropped via a Steganography-based Ad Payload; Confiant and Malwarebytes report

Savia Lobo
25 Jan 2019
2 min read
Recently, Confiant and Malwarebytes analyzed a steganography-based payload used by a malvertiser the two firms have dubbed "VeryMal" to infect Macs. According to the firms, the malicious ad may have been viewed on as many as 5 million Macs. The campaign was active from 11th January 2019 until 13th January 2019, during which Confiant detected and blocked 191,970 impressions across its publisher customers. The firms said that only US visitors were targeted in the campaign.

According to Confiant, Mac users who saw the ad were shown notices that Adobe Flash Player needed to be updated and were prompted to open a file that would attempt to download in their browsers. The download, when accepted and run, infected the user's Mac with the Shlayer trojan. The ad image could be viewed without harm despite containing the payload; it becomes harmful only when code is run against the file and the browser is redirected to a link included in the payload. Eliya Stein, Security Engineering and Research at Confiant, writes, “As malvertizing detection continues to mature, sophisticated attackers are starting to learn that obvious methods of obfuscation are no longer getting the job done. Techniques like steganography are useful for smuggling payloads without relying on hex encoded strings or bulky lookup tables.”

The same malicious actor, VeryMal, had performed a similar attack at the end of December 2018: 437,819 impressions were detected and blocked by Confiant across two December campaigns, with US targeting split between macOS and iOS. The current attack, however, used a method that was harder to detect. Malware “is not only limited to advertising-based attacks, with reports in September noting even some apps in the Mac App Store were performing malicious actions, such as extracting a user's data”, according to an Apple Insider report. To know more about how Confiant and Malwarebytes carried out this analysis, visit Eliya Stein’s blog post on Medium.
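Steganography here simply means hiding one payload inside a carrier file. The Python toy below hides and recovers bytes in the least significant bits of image-style pixel values; it is a generic illustration of the technique, not VeryMal's actual encoding (Confiant's post describes that in detail), but it shows why an ad image can render normally while still smuggling script.

```python
def hide_bytes(pixels, payload):
    """Write the payload's bits into the least-significant bit of each
    pixel channel value. Toy illustration; not VeryMal's actual scheme."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("payload too large for this 'image'")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit   # overwrite only the lowest bit
    return stego

def extract_bytes(pixels, length):
    """Recover `length` hidden bytes from the pixels' least-significant bits."""
    out = bytearray()
    for byte_index in range(length):
        value = 0
        for bit_index in range(8):
            value |= (pixels[byte_index * 8 + bit_index] & 1) << bit_index
        out.append(value)
    return bytes(out)

# A flat list of channel values stands in for decoded image data.
cover = [200] * 1024
secret = b"redirect-to-payload"          # hypothetical smuggled string
stego_pixels = hide_bytes(cover, secret)

# Each carrier value changes by at most 1, so the image looks unchanged,
# which is why steganographic ads pass casual inspection.
assert extract_bytes(stego_pixels, len(secret)) == secret
print(extract_bytes(stego_pixels, len(secret)))
```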
Twitter memes are being used to hide malware
Privilege escalation: Entry point for malware via program errors
Bo Weaver on Cloud security, skills gap, and software development in 2019


Microsoft’s Bing ‘back to normal’ in China

Savia Lobo
25 Jan 2019
2 min read
On Wednesday, Microsoft announced that its search engine, Bing, had been blocked in China, although it was unsure whether this was due to China’s Great Firewall censorship or a technical glitch. The search engine is now back online after being unreachable for two consecutive days. The site may have been blocked by government censors; many users posted on Weibo, one of the most popular social networks in China, commenting that “Bing is back” and “Bing returns to normal.”

ZDNet also pointed out that “The temporary block of Microsoft's Bing comes at a time when tensions between the US and China are running high, with the introduction of a bipartisan Bill in the US earlier this month to ban the sale of tech to Chinese companies Huawei and ZTE, and the US stating on Wednesday its intention to extradite Huawei CFO Meng Wanzhou.”

Though Bing is not widely used in China, it has been one of the few remaining portals to the broader internet as the Chinese government isolates China’s internet from the rest of the world. Bing remained the only US-based search engine available because “Microsoft has worked to follow the government’s censorship practices around political topics”, the New York Times reported. In an interview with Fox Business Network at the World Economic Forum in Davos, Switzerland, Microsoft’s president, Brad Smith, said, “There are times when there are disagreements, there are times when there are difficult negotiations with the Chinese government, and we’re still waiting to find out what this situation is about.”

What the US-China tech and AI arms race means for the world – Frederick Kempe at Davos 2019
Packt helped raise almost $1 million for charity with Humble Bundle in 2018
Sweden is at a crossroads with its nearly cashless society: To Swish or e-krona?


China blocks Microsoft’s Bing search engine

Savia Lobo
24 Jan 2019
2 min read
Yesterday, Microsoft confirmed in a statement that its popular Bing search engine had been blocked in China. This is Microsoft’s second such setback since November 2017, when its Skype internet phone call and messaging service was pulled from the Apple and Android app stores. When users within mainland China tried performing a search on Bing’s China website, cn.bing.com, they were redirected to a page which read that the server cannot be reached. Chinese authorities operate a firewall that blocks most US-based tech platforms, including Facebook and Twitter. However, Microsoft has not said whether the outage was caused by censorship or was simply a technical problem. A Microsoft spokesperson said, “We've confirmed that Bing is currently inaccessible in China and are engaged to determine next steps.”

Microsoft’s Bing was the only major foreign search engine accessible from within China's Great Firewall. Bing’s biggest rival, Google, shut down its search engine in China in 2010 after rows with the authorities over censorship and hacking, and Google CEO Sundar Pichai has said that it has no plans to relaunch a search engine in China. Microsoft, however, has censored search results on sensitive topics in accordance with government policy.

Citing a source, the Financial Times reported yesterday that China Unicom, a major state-owned telecommunications company, had confirmed a government order to block the search engine. The Cyberspace Administration of China (CAC), a government watchdog, did not respond to faxed questions about Bing’s blocked website. The CAC also said that it has deleted more than 7 million pieces of online information and 9,382 mobile apps. “President Xi Jinping has accelerated control of the internet in China since 2016, as the ruling Communist Party seeks to crack down on dissent in the social media landscape”, Reuters reported.

China Telecom misdirected internet traffic, says Oracle report
Bo Weaver on Cloud security, skills gap, and software development in 2019
Microsoft Edge mobile browser now shows warnings against fake news using NewsGuard

USC researchers present identification and mitigation techniques to combat fake news

Natasha Mathur
24 Jan 2019
6 min read
A group of researchers from the University of Southern California published a paper titled “Combating Fake News: A Survey on Identification and Mitigation Techniques” that discusses existing methods and techniques for identifying and mitigating fake news. The paper categorises existing work on fake news detection and mitigation into three types:
fake news identification using content-based methods (classifies news based on the content of the information to be verified)
identification using feedback-based methods (classifies news based on the user responses it receives over social media)
intervention-based solutions (offers computational solutions for identifying the spread of false information along with methods to mitigate the impact of fake news)
These existing methods are further categorized as follows:
Categorization of existing methods
“The scope of the methods discussed in content-based and feedback based identification is limited to classifying news from a snapshot of features extracted from social media. For practical applications, we need techniques that can dynamically interpret and update the choice of actions for combating fake news based on real-time content propagation dynamics”, reads the paper. The techniques that provide such computational methods and algorithms are discussed extensively in the paper. Let’s have a look at some of these strategies.

Mitigation strategies: decontamination, competing cascades, and multi-stage intervention
The paper presents three mitigation strategies aimed at reversing the effect of fake news by introducing true news on social media platforms, so that users are exposed to the truth and the impact of fake news on user opinions is mitigated. The computational methods designed for this purpose first need to consider the information diffusion models widely used in social networks, such as the Independent Cascade (IC) and Linear Threshold (LT) models, as well as point process models such as the Hawkes process.

Decontamination
The paper discusses the strategy introduced by Nam P. Nguyen in the paper “Containment of misinformation spread in online social networks”: decontaminating the users exposed to fake news. It makes use of a diffusion process (which estimates the spread of information over the population) modelled with the Independent Cascade (IC) or Linear Threshold (LT) model. A simple greedy algorithm then selects the best set of users from which to start the diffusion of true news, so that at least a fraction of the selected users can be decontaminated. The algorithm iteratively selects the next best user to include in the set depending on the marginal gain obtained by that user's inclusion (i.e., the number of users activated or reached by the true news, in expectation, if the set additionally included the chosen user).

Competing cascades
The paper also covers an intervention strategy based on competing cascades: introducing a true-news cascade that competes with the fake-news cascade while the fake news is propagating through the network. The paper discusses an “influence blocking maximization objective” by Xinran He as an optimal strategy for spreading true news in the presence of a fake-news cascade. The process strategically selects a set of k users with the objective of minimizing the number of users who are activated by fake news at the end of the diffusion. According to the paper, this model assumes that once a user is activated by either the fake or the true cascade, that user remains activated under that cascade.

Multi-stage intervention
Another strategy discussed in the paper is the “multi-stage intervention strategy” proposed by Mehrdad Farajtabar in the paper “Fake News Mitigation via Point Process Based Intervention”. This strategy allows “external interventions to adapt as necessary to the observed propagation dynamics of fake news”, states the paper. The purpose of the external interventions is to incentivize certain users to share more true news that can counteract the fake-news process over the network. At each intervention step, budget and user-activity constraints are imposed, which allows tracking of the optimal amount of external incentivization needed to achieve the desired objective, i.e., minimizing the difference between fake and true news exposures. The strategy uses a reinforcement learning based policy iteration framework to derive the optimal amount of external incentivization.

Identification strategies: network monitoring and crowd-sourcing
The paper discusses different identification mechanisms that help actively detect and prevent the spread of misinformation due to fake news on social media platforms.

Network monitoring
One strategy is based on network monitoring: intercepting information from a list of suspected fake-news sources using computer-aided social media accounts or real paid user accounts. These accounts help filter the information they receive and block fake news. The strategy uses a “network monitor placement” determined by finding the part of the network with the highest probability of fake-news transmission. Another network monitor placement solution involves a Stackelberg game between leader (attacker) and follower (defender) nodes. The paper also mentions an idea implemented by various network monitoring sites: having multiple human or machine classifiers to improve detection robustness, since something missed by one fact-checker might be captured by another.

Crowd-sourcing
Another identification strategy mentioned in the paper makes use of crowd-sourced user feedback on social media platforms, which lets users report or flag fake news articles. These crowd-sourced signals, used to prioritize the fact-checking of news articles, capture “the trade-off between a collection of evidence v/s the harm caused from more users being exposed to fake news (exposures) to determine when the news needs to be verified”, states the paper. The fact-checking process and events are represented using point process models, which help derive an optimal fact-checking intensity proportional to the rate of exposure to misinformation and the evidence collected as flags. The paper mentions an online learning algorithm to leverage user flags more accurately; this algorithm jointly infers the flagging accuracies of users while also identifying fake news.

“The literature surveyed here has demonstrated significant advances in addressing the identification and mitigation of fake news. Nevertheless, there remain many challenges to overcome in practice,” state the researchers. For more information, check out the official research paper.
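As a minimal sketch of the greedy decontamination idea discussed above, the Python snippet below runs a toy Independent Cascade simulation and greedily picks the true-news seed users with the largest marginal gain in expected reach. The graph, activation probabilities, and Monte Carlo settings are invented for illustration and are not the paper's exact formulation.

```python
import random

# Toy directed graph: node -> list of (neighbour, activation probability).
# Values are invented; the surveyed methods work with learned diffusion
# models (IC/LT) over real social graphs.
GRAPH = {
    0: [(1, 0.4), (2, 0.3)],
    1: [(3, 0.5)],
    2: [(3, 0.2), (4, 0.6)],
    3: [(4, 0.3)],
    4: [],
}

def simulate_ic(graph, seeds, rng):
    """One Monte Carlo run of the Independent Cascade model: each newly
    activated node gets a single chance to activate each neighbour."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for neighbour, prob in graph[node]:
            if neighbour not in active and rng.random() < prob:
                active.add(neighbour)
                frontier.append(neighbour)
    return active

def expected_reach(graph, seeds, runs=2000, seed=0):
    rng = random.Random(seed)
    return sum(len(simulate_ic(graph, seeds, rng)) for _ in range(runs)) / runs

def greedy_seed_selection(graph, k):
    """Greedily add the node with the largest marginal gain in expected
    reach -- the 'best set of users' to seed with true news."""
    chosen = []
    for _ in range(k):
        best, best_gain = None, -1.0
        base = expected_reach(graph, chosen)
        for candidate in graph:
            if candidate in chosen:
                continue
            gain = expected_reach(graph, chosen + [candidate]) - base
            if gain > best_gain:
                best, best_gain = candidate, gain
        chosen.append(best)
    return chosen

print(greedy_seed_selection(GRAPH, k=2))
```

The same greedy, marginal-gain pattern also underlies the competing-cascades and influence-blocking objectives described above, with the objective swapped for the number of users kept away from the fake-news cascade.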
Fake news is a danger to democracy. These researchers are using deep learning to model fake news to understand its impact on the elections
Four 2018 Facebook patents to battle fake news and improve news feed
Facebook patents its news feed filter tool to provide more relevant news to its users


Debian 9.7 released with fix for RCE flaw

Melisha Dsouza
24 Jan 2019
1 min read
On 23rd January, Debian announced the release of Debian 9.7, the seventh update of the stable distribution of Debian 9. The release comes right after a remote code execution vulnerability was discovered in APT, the high-level package manager used by Debian, Ubuntu, and related Linux distributions, that allows an attacker to perform a man-in-the-middle attack. Debian 9.7 includes a security update for the APT vulnerability: the Debian GNU/Linux 9.7 (codename "Stretch") release contains a new version of the APT package manager that is no longer vulnerable to man-in-the-middle attacks. The team states that there is no need to download new ISO images to update existing installations; however, the Debian Project will release live and install-only ISO images for all supported architectures of Debian GNU/Linux 9.7 "Stretch", which will be available for download in a few days. Head over to Debian’s official website for more information on this announcement.

Kali Linux 2018 for testing and maintaining Windows security – Wolf Halton and Bo Weaver [Interview]
Black Hat hackers used IPMI cards to launch JungleSec Ransomware, affects most of the Linux servers
Homebrew 1.9.0 released with periodic brew cleanup, beta support for Linux, Windows and much more!