
Tech News

3709 Articles

Facebook must face privacy class action lawsuit, loses facial recognition appeal, U.S. Court of Appeals rules

Fatema Patrawala
09 Aug 2019
3 min read
The 9th Circuit U.S. Court of Appeals ruled on Thursday that Facebook users in Illinois can sue the company over its face recognition technology. The court rejected Facebook’s arguments to halt a class action lawsuit claiming it illegally collected and stored the biometric data of millions of users. The lawsuit has been working its way through the courts for four years, since Illinois Facebook users sued the company for alleged violations of the state’s Biometric Information Privacy Act by automatically collecting and identifying people in photographs posted to the service. The case is Patel et al v. Facebook Inc, 9th U.S. Circuit Court of Appeals, No. 19-15982. Now, thanks to a unanimous decision from the circuit court, the lawsuit can proceed. The court’s opinion stated, “We conclude that the development of a face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests. Similar conduct is actionable at common law.”

According to the American Civil Liberties Union (ACLU), it is the first decision by a U.S. appellate court to directly address the privacy concerns posed by facial recognition technology. "This decision is a strong recognition of the dangers of unfettered use of face surveillance technology," Nathan Freed Wessler, an attorney with the ACLU Speech, Privacy and Technology Project, said in a statement. "The capability to instantaneously identify and track people based on their faces raises chilling potential for privacy violations at an unprecedented scale."

“This biometric data is so sensitive that if it is compromised, there is simply no recourse,” Shawn Williams, a lawyer for the plaintiffs in the class action, told Reuters. “It’s not like a Social Security card or credit card number where you can change the number. You can’t change your face.”

Facebook is currently facing broad criticism from lawmakers and regulators over its privacy practices. Last month, it agreed to pay a record $5 billion fine to settle a Federal Trade Commission data privacy probe. Facebook said it plans to appeal the ruling. “We have always disclosed our use of face recognition technology and that people can turn it on or off at any time,” the company said, according to a Reuters report.

Illinois users accused Facebook of violating the Biometric Information Privacy Act

Reuters reports that the lawsuit began in 2015, when Illinois users accused Facebook of violating the state’s Biometric Information Privacy Act by collecting biometric data without consent. Facebook allegedly did this through its “Tag Suggestions” feature, which suggested the names of Facebook friends recognized in previously uploaded photos. Writing for the appeals court, Circuit Judge Sandra Ikuta said the Illinois users could sue as a group, rejecting Facebook’s argument that their claims were unique and required individual lawsuits. She also said the 2008 Illinois law was intended to protect individuals’ “concrete interests in privacy,” and that Facebook’s alleged unauthorized use of a face template “invades an individual’s private affairs and concrete interests.” The court returned the case to U.S. District Judge James Donato in San Francisco, who had certified a class action in April 2018, for a possible trial. Illinois’ biometric privacy law provides for damages of $1,000 for each negligent violation and $5,000 for each intentional or reckless violation. Williams, a partner at Robbins Geller Rudman & Dowd, said the class could include 7 million Facebook users.

Facebook fails to fend off a lawsuit over data breach of nearly 30 million users
Facebook fails to block ECJ data security case from proceeding
Facebook fined $2.3 million by Germany for providing incomplete information about hate speech content
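The statutory damages figures above make the potential exposure easy to quantify. As a back-of-envelope illustration (my own arithmetic, not a figure from the filing, and actual liability depends on how violations are counted per class member):

```python
# Back-of-envelope BIPA exposure: statutory damages per violation multiplied
# by the estimated class size cited by plaintiffs' counsel.
NEGLIGENT_PER_VIOLATION = 1_000   # dollars, per negligent violation
RECKLESS_PER_VIOLATION = 5_000    # dollars, per intentional/reckless violation
CLASS_SIZE = 7_000_000            # users, per plaintiffs' counsel

low = CLASS_SIZE * NEGLIGENT_PER_VIOLATION   # one negligent violation each
high = CLASS_SIZE * RECKLESS_PER_VIOLATION   # one reckless violation each
print(f"${low:,} to ${high:,}")              # $7,000,000,000 to $35,000,000,000
```

Even at one negligent violation per class member, the exposure runs into the billions, which helps explain why Facebook fought class certification so hard.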


React 16.9 releases with an asynchronous testing utility, programmatic Profiler, and more

Bhagyashree R
09 Aug 2019
3 min read
Yesterday, the React team announced the release of React 16.9. This release comes with an asynchronous testing utility, a programmatic Profiler, and a few deprecations and bug fixes. Along with the release announcement, the team also shared an updated React roadmap. https://twitter.com/reactjs/status/1159602037232791552

Following are some of the updates in React 16.9:

Asynchronous act() method for testing

React 16.8 introduced a new API method called ‘ReactTestUtils.act()’. This method ensures that tests involving rendering and updating components run closer to how React works in the browser. In React 16.8 it only supported synchronous functions; React 16.9 updates it to accept asynchronous functions as well.

Performance measurements with <React.Profiler>

Introduced in React 16.5, the React Profiler API collects timing information about each rendered component to help find performance bottlenecks in your application. This release adds a new “programmatic way” of gathering measurements called <React.Profiler>. It keeps track of how many times a React application renders and how much each render costs. With these measurements, developers can identify the parts of an application that are slow and need optimization.

Deprecations in React 16.9

Unsafe lifecycle methods are now renamed

The legacy component lifecycle methods have been renamed with an “UNSAFE_” prefix, indicating that code using these methods is more likely to have bugs in future releases of React. The following functions have been renamed:

‘componentWillMount’ to ‘UNSAFE_componentWillMount’
‘componentWillReceiveProps’ to ‘UNSAFE_componentWillReceiveProps’
‘componentWillUpdate’ to ‘UNSAFE_componentWillUpdate’

This is not a breaking change; however, you will see a warning when using the old names. The team has also provided a ‘codemod’ script for automatically renaming these methods.
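The rename is mechanical, which is why a codemod can do it. As a rough illustration of what such a script does (the real codemod is a JavaScript AST transform, not this Python regex sketch):

```python
import re

# Map of legacy lifecycle names to their new "UNSAFE_"-prefixed names.
RENAMES = {
    "componentWillMount": "UNSAFE_componentWillMount",
    "componentWillReceiveProps": "UNSAFE_componentWillReceiveProps",
    "componentWillUpdate": "UNSAFE_componentWillUpdate",
}

def rename_lifecycles(source: str) -> str:
    # The negative lookbehind keeps already-prefixed names untouched,
    # since "UNSAFE_componentWillMount" contains the legacy name.
    pattern = re.compile(r"(?<!\w)(" + "|".join(RENAMES) + r")\b")
    return pattern.sub(lambda m: RENAMES[m.group(1)], source)

print(rename_lifecycles("componentWillMount() {}"))
# UNSAFE_componentWillMount() {}
```

Running the transform twice is safe, which is the property you want from any automated rename.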
javascript: URLs will now give a warning

URLs that start with “javascript:” are deprecated because they can serve as “a dangerous attack surface”. As with the renamed lifecycle methods, these URLs will continue to work but will show a warning. The team recommends that developers use React event handlers instead. “In a future major release, React will throw an error if it encounters a javascript: URL,” the announcement reads.

What does the React roadmap look like?

The React team has made some changes to the roadmap shared in November. React Hooks shipped as planned, but the team underestimated the follow-up work and ended up extending the timeline by a few months. After Hooks shipped in React 16.8, the team focused on Concurrent Mode and Suspense for Data Fetching, which are already used on Facebook’s website. Previously, the team planned to split Concurrent Mode and Suspense for Data Fetching into two releases; now both features will arrive in a single release later this year. “We shipped Hooks on time, but we’re regrouping Concurrent Mode and Suspense for Data Fetching into a single release that we intend to release later this year,” the announcement reads. To know more about React 16.9, you can check out the official announcement.

React 16.8 releases with the stable implementation of Hooks
React 16.x roadmap released with expected timeline for features like “Hooks”, “Suspense”, and “Concurrent Rendering”
React Conf 2018 highlights: Hooks, Concurrent React, and more
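The javascript: URL deprecation described above boils down to a scheme check on the href value. A minimal sketch of that kind of check (my own illustration, not React's actual implementation, which also handles control characters browsers ignore):

```python
# Flag href values whose scheme is "javascript:", tolerating leading
# whitespace and mixed case, the way browsers do when resolving URLs.
def is_javascript_url(href: str) -> bool:
    scheme = href.strip().split(":", 1)[0].lower()
    return scheme == "javascript"

assert is_javascript_url("javascript:alert(1)")
assert is_javascript_url("  JaVaScRiPt:void(0)")
assert not is_javascript_url("https://reactjs.org")
```

The point of the warning is that any user-supplied string reaching an href can smuggle executable code this way, which is why event handlers are the recommended alternative.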


StockX confirms a data breach impacting 6.8 million customers

Sugandha Lahoti
09 Aug 2019
3 min read
StockX, an online marketplace for buying and selling sneakers, suffered a major data breach in May impacting 6.8 million customers. Leaked records included names, email addresses, and hashed passwords. The full scale of the breach came to light after an unnamed seller of the stolen data contacted TechCrunch claiming to have information about the attack. TechCrunch then verified the claims by contacting people from a sample of 1,000 records using information only they would know.

StockX released a statement yesterday acknowledging that a data breach had indeed occurred. StockX says it was made aware of the breach on July 26 and immediately launched a forensic investigation, engaging experienced third-party data experts to assist. On finding evidence suggesting customer data may have been accessed by an unknown third party, it sent customers an email on August 3 to make them aware of the incident. Surprisingly, this email asked customers to reset their passwords citing system updates, but said nothing about the data breach, leaving users confused about what caused the alleged system update and why there was no prior warning. Later the same day, StockX confirmed that it had discovered a data security issue and that an unknown third party had gained access to certain customer data, including customer names, email addresses, shipping addresses, usernames, hashed passwords, and purchase history. The passwords were hashed using salted MD5. According to weleakinfo, this is a very weak hashing algorithm; at least 90% of such hashes can be cracked successfully. Users were infuriated that instead of being honest, StockX simply sent its customers an email asking them to reset their passwords.
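To see why salted MD5 offers so little protection, consider that the salt is stored alongside the hash, so an attacker simply hashes candidate passwords with the known salt at enormous speed. A toy demonstration (the salt, password, and wordlist here are made up for illustration):

```python
import hashlib

def md5_salted(password: str, salt: str) -> str:
    # MD5 over salt + password: fast to compute, which is exactly the problem.
    return hashlib.md5((salt + password).encode()).hexdigest()

salt = "a1b2c3"                           # salts are stored with the hash
leaked_hash = md5_salted("sneakers1", salt)

# A real attacker iterates billions of candidates; four suffice here.
wordlist = ["letmein", "hunter2", "sneakers1", "password"]
cracked = next((w for w in wordlist if md5_salted(w, salt) == leaked_hash), None)
print(cracked)  # sneakers1
```

Salting only prevents precomputed rainbow-table lookups; it does nothing to slow per-guess hashing, which is why deliberately slow functions like bcrypt or Argon2 are the standard recommendation instead.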
https://twitter.com/Asaud_7/status/1157843000170561536 https://twitter.com/kustoo/status/1157735133157314561 https://twitter.com/RunWithChappy/status/1157851839754383360

StockX has since rolled out a system-wide security update: a full reset of all customer passwords with an email alerting customers, high-frequency credential rotation on all servers and devices, and a lockdown of its cloud computing perimeter. However, the company was a little too late in its ‘ongoing investigation’, as it mentions on its blog. TechCrunch revealed that the seller had put the data up for sale for $300 in a dark web listing, and that one person had already bought it. StockX is also subject to the EU’s General Data Protection Regulation, given its global customer base, and could potentially be fined over the incident. https://twitter.com/ComplexSneakers/status/1157754866460221442 Some users also argued that StockX’s handling of the breach falls short of FTC expectations under U.S. data breach laws. https://twitter.com/zruss/status/1157785830200619008

Following Capital One data breach, GitHub gets sued and AWS security questioned by a US Senator
British Airways set to face a record-breaking fine of £183m by the ICO over customer data breach
U.S. Senator introduces a bill that levies jail time and hefty fines for companies violating data breaches


Interstellar is developing Slingshot, a new Rust based blockchain architecture to support zero-knowledge smart contracts, and more

Bhagyashree R
08 Aug 2019
4 min read
In September 2018, Lightyear acquired Chain to form a combined company called Interstellar. The company is working on a new blockchain architecture, named Slingshot, with a focus on privacy, security, and safety. https://twitter.com/go_interstellar/status/1039164551139287040

The Slingshot project encapsulates the following sub-protocols and components:

Zero-knowledge Virtual Machine (ZkVM)

The authors of TxVM, a virtual machine for blockchain transactions, have come up with ZkVM. https://twitter.com/oleganza/status/1126612382728372224 It is a blockchain transaction format with cloaked assets and zero-knowledge smart contracts. Its goal is to make transactions customizable, confidential, highly efficient, and simple. It allows custom contracts via programmable constraints over encrypted data and assets. Slingshot also has an API called Token for issuing assets using ZkVM. ZkVM ensures confidentiality by fully encrypting the quantities and types of assets. It also ensures that the asset flow is hidden at the transaction level, allowing individuals and organizations to safely perform their transactions directly on the shared ledger. Its data model is compact, taking up only a few kilobytes. Transactions can be verified in parallel in 1-2 ms per CPU core, and nodes can bootstrap instantly from a network-verified snapshot.

Spacesuit, a Rust implementation of the Cloak protocol

Slingshot’s Spacesuit is an implementation of the Cloak protocol in Rust. Cloak is a protocol for confidential assets based on the Bulletproofs zero-knowledge circuit proof system. With cloaked transactions, you can exchange values that have different asset types.

Musig, a signature scheme for signing messages

Slingshot’s Musig is a Rust implementation of Simple Schnorr Multi-Signatures. It is a signature scheme for signing single or multiple messages. You can sign a single message with one public key; this public key can be created from the private key of a single party or by aggregating multiple public keys. Multiple messages can be signed with multiple public keys.

Keytree, a key blinding scheme for deriving hierarchies of public keys

Keytree is a ‘key blinding scheme’ with which you can derive hierarchies of public keys for Ristretto-based signatures. It can derive a set of public keys from a single key without using any private keys. This enables a system to generate unique receiving addresses without knowing any details about the private key. For instance, an online merchant can generate invoices with unique keys while keeping only public keys on the server, without compromising the security of the private keys.

Slidechain, a demonstration of a minimal Stellar sidechain

Slingshot includes Slidechain, which allows you to peg funds from the Stellar testnet, import them to a sidechain, and move them back to Stellar if needed. A sidechain is generally used for operations that aren’t possible or permitted on the originating network. The sidechain in Slidechain is based on TxVM, allowing safe, general-purpose smart contracts and token issuance. The pegged funds remain immobilized on the originating network while the imported funds exist on the sidechain.

On a Reddit thread, a user explained, “Looks more like an entire network upgrade to me. An overhaul that offers privacy, more scalability, and sidechains. It would be odd to offer a sidechain that operates as a better version of stellar.” Another user added, “Ever since Chain was acquired, there has been little information about what Interstellar is building for Stellar. Chain offered a blockchain service called Sequence. Sequence allowed you to easily setup a ledger/blockchain and integrate it with your application/business. I believe this repo details an enhanced version of Chain with Stellar integration. Businesses can create their own private network while having full access to the Stellar network to transact with other chain networks. This would function as a second layer solution on top of Stellar. Other networks such as OMG and Cosmos function similarly to this iirc.” To know more about Slingshot, check out its GitHub repository.

Blast through the Blockchain hype with Packt and Humble Bundle
Installing a blockchain network using Hyperledger Fabric and Composer [Tutorial]
Google expands its Blockchain search tools, adds six new cryptocurrencies in BigQuery Public Datasets
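As an aside on the Keytree idea described above: key blinding works because adding a publicly computable tweak to a private key corresponds to a group operation anyone can apply to the matching public key. A toy discrete-log analogue (my own illustration with made-up parameters; real Keytree uses the Ristretto group and a transcript hash, not modular exponentiation, and this sketch is not secure):

```python
import hashlib

# Toy group: pub = g^priv mod p, with p a Mersenne prime.
p, g = 2**127 - 1, 5
priv = 123456789
pub = pow(g, priv, p)

def tweak(pub: int, index: int) -> int:
    # A public, deterministic tweak derived from the parent public key.
    h = hashlib.sha256(f"{pub}:{index}".encode()).digest()
    return int.from_bytes(h, "big") % (p - 1)

# Anyone holding only the public key can derive a child public key...
child_pub = (pub * pow(g, tweak(pub, 0), p)) % p
# ...while only the private-key holder can derive the matching child private key.
child_priv = (priv + tweak(pub, 0)) % (p - 1)
assert pow(g, child_priv, p) == child_pub
```

This is what lets a merchant's server mint fresh receiving keys while storing no private material at all.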


Google’s Project Zero reveals several serious zero-day vulnerabilities in a fully remote attack surface of the iPhone

Sugandha Lahoti
08 Aug 2019
4 min read
Security analysts from Google’s Project Zero investigated the remote attack surface of the iPhone, reviewing SMS, MMS, Visual Voicemail, email, and iMessage. They found several serious zero-day vulnerabilities in the remote, interaction-less attack surface of the iPhone. The majority of the vulnerabilities occurred in iMessage, due to its broad and difficult-to-enumerate attack surface. Visual Voicemail also had a large and unintuitive attack surface, which likely led to the single serious vulnerability reported in it.

Vulnerability in Visual Voicemail

Visual Voicemail (VVM) is a feature of mobile devices that allows voicemail to be read in an email-like format. It informs devices of the location of the IMAP server by sending a specially formatted SMS message containing the URL of the IMAP server. Any device can send a message that causes Visual Voicemail to query an IMAP server specified in the message, so an attacker can force a device to query an IMAP server they control without the user interacting with the device in any way. This exposes an object lifetime issue in the iPhone’s IMAP client, which occurs when a NAMESPACE command response contains a namespace that cannot be parsed correctly: the mailbox separator is freed but not replaced with a valid object, leading to a selector being called on an invalid object. This vulnerability was assigned the ID CVE-2019-8613 and was fixed on Tuesday, May 14.

Vulnerabilities in iMessage

CVE-2019-8624: A bug was found in the Digital Touch extension, which allows users to send messages containing drawings and other visual elements; it led to a crash in SpringBoard requiring no user interaction. This bug was fixed in Apple’s July 24 update.

CVE-2019-8663: This vulnerability was found in deserializing the SGBigUTF8String class, a subclass of NSString. The initWithCoder: implementation of this class deserializes a byte array that is then treated as a UTF-8 string with a null terminator, even if it does not have one. This can lead to the creation of a string that contains out-of-bounds memory.

CVE-2019-8661: This vulnerability is present in [NSURL initWithCoder:] and affects Mac only. It results in a heap overflow in [NSURL initWithCoder:] that can be reached via iMessage and likely other paths, and in a crash in soagent requiring no user interaction. The issue was resolved by removing CarbonCore from the NSURL deserialization path and was fixed on Saturday, August 3, 2019.

CVE-2019-8646: This vulnerability allows the class _NSDataFileBackedFuture to be deserialized even if secure encoding is enabled (classes do not need to be public or exported to be available for deserialization). The issue was fixed in iOS 12.4 by preventing this class from being decoded unless it is explicitly added to the allow list; better filtering of the file URL was also implemented.

CVE-2019-8647: This vulnerability occurs when deserializing the class _PFArray, which extends NSArray and implements [_PFArray initWithObjects:count:], which is called by [NSArray initWithCoder:]. It results in NSArray deserialization invoking a subclass that does not retain references, can be reached remotely via iMessage, and crashes SpringBoard with no user interaction. The issue was fixed in 12.4 by implementing [_PFArray classForKeyedUnarchiver] (and similar methods) to return NSArray.

CVE-2019-8660: This vulnerability involves cycles in serialized objects; there is a memory corruption vulnerability when decoding an object of class NSKnownKeysDictionary1. It was fixed in iOS 12.4 with improved length checking.

The analysts found one more vulnerability, CVE-2019-8641, which they are not yet disclosing because its fix did not fully remediate the issue.
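Several of these bugs stem from overly permissive deserialization, and the iOS 12.4 fix for CVE-2019-8646, decoding a class only if it is on an explicit allow list, has a direct analogue in other languages. A Python sketch of the same idea using pickle's documented find_class hook (an illustration of the pattern, not Apple's code):

```python
import io
import pickle

# Only these (module, name) pairs may be instantiated during deserialization.
ALLOWED = {("builtins", "dict"), ("builtins", "list"), ("builtins", "str")}

class AllowListUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Reject any class reference not on the allow list.
        if (module, name) not in ALLOWED:
            raise pickle.UnpicklingError(f"blocked: {module}.{name}")
        return super().find_class(module, name)

def safe_loads(data: bytes):
    return AllowListUnpickler(io.BytesIO(data)).load()

print(safe_loads(pickle.dumps({"a": 1})))    # permitted: plain containers
# safe_loads(pickle.dumps(complex(1, 2)))    # raises UnpicklingError
```

The design lesson is the same one Project Zero draws: a deserializer that instantiates arbitrary classes by name is an attack surface, and an allow list shrinks it to the types you actually need.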
The analysts concluded that reducing the remote attack surface of the iPhone would likely improve its security. You can read their complete analysis on Project Zero’s blog.

Google Project Zero reveals six “interactionless” bugs that can affect iOS via Apple’s iMessage
Google Project Zero reveals an iMessage bug that bricks iPhone causing repetitive crash and respawn operations
Cloud Next 2019 Tokyo: Google announces new security capabilities for enterprise users


Microsoft reveals Russian hackers “Fancy Bear” are the culprit for IoT network breach in the U.S.

Savia Lobo
07 Aug 2019
3 min read
Two days ago, Microsoft revealed that Russian hackers attempted to compromise IoT devices, including a VoIP phone, a printer, and a video decoder, across multiple locations. The attacks were discovered in April by security researchers in the Microsoft Threat Intelligence Center. According to the Microsoft report, “These devices became points of ingress from which the actor established a presence on the network and continued looking for further access. Once the actor had successfully established access to the network, a simple network scan to look for other insecure devices allowed them to discover and move across the network in search of higher-privileged accounts that would grant access to higher-value data.”

Microsoft officials said, “We attribute the attacks on these customers using three popular IoT devices to an activity group that Microsoft refers to as STRONTIUM,” a Russia-based hacking group also known as Fancy Bear or APT28. “In two of the cases, the passwords for the devices were deployed without changing the default manufacturer’s passwords and in the third instance the latest security update had not been applied to the device,” the officials further added.

“After gaining access to each of the IoT devices, the actor ran tcpdump to sniff network traffic on local subnets. They were also seen enumerating administrative groups to attempt further exploitation,” the officials added. “As the actor moved from one device to another, they would drop a simple shell script to establish persistence on the network which allowed extended access to continue hunting. Analysis of network traffic showed the devices were also communicating with an external command and control (C2) server.” “Microsoft said it identified and blocked these attacks in their early stages, so its investigators weren’t able to determine what Strontium was trying to steal from the compromised networks,” ZDNet reports.
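The two weaknesses Microsoft cites, unchanged factory-default passwords and a missing security update, are exactly the kind of thing an inventory audit can flag. A minimal sketch (the device list and credential set here are hypothetical, for illustration only):

```python
# Flag IoT devices that still use a factory-default credential pair or
# are missing updates -- the two weaknesses Strontium reportedly exploited.
DEFAULT_CREDS = {("admin", "admin"), ("root", "root"), ("admin", "1234")}

devices = [  # hypothetical inventory
    {"name": "voip-01",    "user": "admin", "password": "admin",  "patched": True},
    {"name": "printer-02", "user": "svc",   "password": "x7!k",   "patched": False},
    {"name": "decoder-03", "user": "ops",   "password": "s3cure", "patched": True},
]

def at_risk(dev: dict) -> bool:
    return (dev["user"], dev["password"]) in DEFAULT_CREDS or not dev["patched"]

flagged = [d["name"] for d in devices if at_risk(d)]
print(flagged)  # ['voip-01', 'printer-02']
```

Real deployments would check against vendor-specific default lists and firmware advisories, but the principle is the same: the devices Strontium used as ingress points would both have been flagged.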
Microsoft has notified the makers of the targeted devices so they can explore adding new protections. Microsoft’s report also provides IP addresses and scripts that organizations can use to detect whether they have been targeted or infected. Microsoft plans to reveal more information about the Strontium April 2019 attacks later this week at the Black Hat USA 2019 security conference. To know more about this news in detail, read Microsoft's complete report.

Winnti Malware: Chinese hacker group attacks major German corporations for years, German public media investigation reveals
An IoT worm Silex, developed by a 14 year old resulted in malware attack and taking down 2000 devices
A cybersecurity primer for mid sized businesses

Microsoft introduces public preview of Azure Dedicated Host and updates its licensing terms

Amrata Joshi
07 Aug 2019
3 min read
Last week, Microsoft introduced a preview of Azure Dedicated Host, which provides a physical server hosted on Azure that is not shared with other customers. The company has also made a few licensing changes that will make Microsoft software a bit more expensive for AWS, Google, and Alibaba customers.

Currently, the dedicated host is available in two specifications. Type 1 is based on a 2.3GHz Intel Xeon E5-2673 v4 (Broadwell), has 64 vCPUs (virtual CPUs), and costs $4.055 or $4.492 per hour depending on the RAM (256GB or 448GB). Type 2 is based on the Xeon Platinum 8168 (Skylake), comes with 72 vCPUs and 144GB RAM, and costs $4.039 per hour. These prices don’t include licensing costs, and it is in this area that Microsoft is making changes.

Last week, the Microsoft team announced that it will modify its licensing terms for outsourcing rights and dedicated hosted cloud services on October 1, 2019. The team further stated that this change won’t affect the use of existing software versions under licenses purchased before October 1, 2019. The official post reads, “Currently, our outsourcing terms give on-premises customers the option to deploy Microsoft software on hardware leased from and managed by traditional outsourcers.” The team is updating the outsourcing terms for Microsoft on-premises licenses to clarify the difference between on-premises/traditional outsourcing and cloud services, and it plans to create more consistent licensing terms across multi-tenant and dedicated hosted cloud services. Customers will either have to rent the software via SPLA (Services Provider License Agreement) or purchase a license with Software Assurance, an annual service charge.
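At the hourly rates quoted above, the compute-only cost of running a host around the clock is straightforward to estimate (my own arithmetic at 730 hours per month; licensing, which is the subject of the changes, is extra):

```python
# Approximate monthly compute cost of each preview SKU at the quoted rates.
HOURS_PER_MONTH = 730  # common cloud-billing approximation

rates = {
    "Type 1 (256GB RAM)": 4.055,
    "Type 1 (448GB RAM)": 4.492,
    "Type 2 (144GB RAM)": 4.039,
}

for sku, rate in rates.items():
    print(f"{sku}: ${rate * HOURS_PER_MONTH:,.2f}/month")
# Type 1 (256GB RAM): $2,960.15/month
# Type 1 (448GB RAM): $3,279.16/month
# Type 2 (144GB RAM): $2,948.47/month
```

Roughly $3,000 per month per host before any Windows Server or SQL Server licensing, which is why the licensing terms below matter so much to the total cost.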
From October 1, on-premises licenses purchased without Software Assurance and mobility rights can no longer be deployed on dedicated hosted cloud services offered by public cloud providers such as Microsoft, Alibaba, Amazon (including VMware Cloud on AWS), and Google, which Microsoft will refer to as “Listed Providers.” These changes won’t apply to other providers, to the Services Provider License Agreement (SPLA) program, or to the License Mobility for Software Assurance benefit, except for expanding that benefit to cover dedicated hosted cloud services.

Customers will still be able to license Microsoft products on a dedicated cloud platform

Customers will be able to license Microsoft products on dedicated hosted cloud services from the Listed Providers. Users can continue to deploy and use the software under their existing licenses on Listed Providers’ servers that are dedicated to them, but they will not be able to add workloads under licenses acquired on or after October 1, 2019. After October 1, users will be able to use products through the purchase of cloud services directly from the Listed Provider. If they have licenses with Software Assurance, those can be used with the Listed Providers’ dedicated hosted cloud services under License Mobility or Azure Hybrid Benefit rights. These changes don’t apply to the deployment and use of licenses outside of a Listed Provider’s data center, but they do apply to both first- and third-party offerings on a dedicated hosted cloud service from a Listed Provider. To know more about this news, check out the official post by Microsoft.

CERN plans to replace Microsoft-based programs with an affordable open-source software
Softbank announces a second AI-focused Vision Fund worth $108 billion with Microsoft, Apple as major investors
Why are experts worried about Microsoft’s billion dollar bet in OpenAI’s AGI pipe dream?


AT&T employees were bribed over $1 million for assisting hackers to illegally unlock cellphones, says DOJ

Amrata Joshi
07 Aug 2019
5 min read
Yesterday, the United States Department of Justice (DOJ) stated that Muhammad Fahd, a 34-year-old citizen of Pakistan, had bribed employees at AT&T’s Seattle-area offices and call centers, paying more than $1 million. Fahd bribed the employees to install malware on AT&T’s network so that he could unlock millions of smartphones. Fahd was supported in the conspiracy by Ghulam Jiwani, who is believed to be deceased. They attempted to gain illegal access to 2 million of the company’s phones between 2012 and 2017. At the United States’ request, Fahd was arrested in Hong Kong in February last year and was extradited to the United States last week. Fahd faces serious charges, including intentional damage to a protected computer, wire fraud, and conspiracy to violate the Computer Fraud and Abuse Act and the Travel Act.

AT&T uses proprietary locking software on its phones to prevent them from being used on any wireless network other than AT&T’s until the phones are unlocked. Unlocking a phone disables the locking software, letting the phone work on other carriers’ networks. Under the Wireless Customer Agreement between AT&T and its customers, the company unlocks a customer’s phone once the customer has fulfilled the terms of their service contract or installment plan; unlocked phones can then be resold and used on any other network. When customers’ phones were fraudulently switched to another network, AT&T was deprived of some of the remaining payments under those customers’ installment plans and service contracts. As a result, millions of phones were removed from AT&T service and payment plans, costing the company millions of dollars. Fahd paid tens of thousands of dollars to the AT&T insiders; to one co-conspirator alone he paid $428,500 over the five-year scheme.
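Unlock requests in schemes like this are keyed to handset IMEI numbers. As a side note on that identifier: the final digit of a 15-digit IMEI is a Luhn check digit, so validity is easy to verify (a standard algorithm, nothing specific to this case; the sample number below is a commonly used test IMEI):

```python
def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum, used for the final digit of a 15-digit IMEI."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9      # equivalent to summing the two digits
        total += d
    return total % 10 == 0

print(luhn_valid("490154203237518"))  # True  (well-known sample IMEI)
print(luhn_valid("490154203237519"))  # False (check digit corrupted)
```

The check digit only catches typos; it provides no security, which is why carrier unlock systems gate requests on account state rather than on the IMEI itself.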
The conspiracy started in 2012 and is still under investigation Last year in March, the second superseding indictment was filed that stated how Fahd bribed AT&T employees and used their computer credentials and disabled AT&T’s proprietary locking software.  As per the indictment, between April 2012 to April 2013, they gave instructions to AT&T’s insiders with the help of wires in interstate and foreign commerce. Fahd had also sent the list of cellular IMEI numbers for the phones to the insiders. Between April 2013 to October 2013 the AT&T insiders were bribed to plant malware on the computer systems to get information about the company’s computer network and software applications. This information was then used for creating another malware that interacted with the company’s internal protected computer systems for processing the fraudulent unlock requests. Between November 2014 to September 2017, they again bribed the AT&T insiders for getting access to AT&T’s physical workspace for installing unauthorized hardware devices such as wireless access points to get unauthorized access to the company’s computers.  Fahd used to contact these insiders through telephone, Facebook, anonymous email accounts and other channels. They were instructed to open shell companies and business accounts on the names of these shell companies for receiving payments. The insiders even helped Fahd and Jiwani for developing and installing tools that would help them in unlocking the phones even from a remote location. Till now, three of those co conspirators have pleaded guilty and have admitted that they were paid thousands of dollars for serving Fahd’s fraudulent scheme.  Assistant Attorney General Brian A. 
Benczkowski of the Justice Department’s Criminal Division said, “This arrest illustrates what can be achieved when the victim of a cyber attack partners quickly and closely with law enforcement.” He added, “When companies that fall prey to malware work with the Department of Justice, no cybercriminal—no matter how sophisticated their scheme—is beyond our reach.”

U.S. Attorney Brian T. Moran for the Western District of Washington said, “This defendant thought he could safely run his bribery and hacking scheme from overseas, making millions of dollars while he induced young workers to choose greed over ethical conduct.” Moran added, “Now he will be held accountable for the fraud and the lives he has derailed.” The case is currently being investigated by the U.S. Secret Service Electronic Crimes Task Force.

Community demands strict security measures, as employees were involved too

According to a few users, companies need to take every security threat seriously and implement encryption for user data. A user commented on HackerNews, “Companies need to assume that their network is compromised. Ignoring anything else that means they need to adopt E2E encryption for all user data (except where legally mandated to be insecure, or when the data has a fundamental need to be accessible - e.g. your bank needs to know how much money you have). Anything else, including dumbass politicians demanding magic crypto, makes your user data a valuable and achievable target.”

Others are shocked that AT&T employees were involved.

https://twitter.com/harrymccracken/status/1158745600399159297
https://twitter.com/rstephens/status/1158759723870482437
https://twitter.com/BobbyChesney/status/1158746997790257155

To know more about this news, check out the official page.
Google, Amazon, AT&T met the U.S Senate Committee to discuss consumer data privacy, yesterday
Winnti Malware: Chinese hacker group attacks major German corporations for years, German public media investigation reveals
25 million Android devices infected with ‘Agent Smith’, a new mobile malware
Bhagyashree R
07 Aug 2019
3 min read

Google's $5.5m ‘cookie’ privacy settlement that paid nothing to users is now voided by a U.S. appeals court

Yesterday, the U.S. Court of Appeals for the Third Circuit voided Google's $5.5m ‘cookie’ privacy settlement, which paid nothing to consumers. The settlement was meant to resolve a case accusing Google of violating user privacy by installing cookies in users’ browsers. The decision comes after the settlement was challenged by the Center for Class Action Fairness (CCAF), an institution representing class members against unfair class action procedures and settlements.

What this Google 'cookie' case was about

The class-action case accuses Google of creating a web browser cookie that tracked users’ data. It alleges that the cookie tracked the data of Safari and Internet Explorer users even when they had properly configured their privacy settings. The plaintiffs claim that Google invaded their privacy under the California constitution and the state tort of intrusion upon seclusion.

In February 2017, U.S. District Judge Sue Robinson in Delaware approved a settlement under which Google would stop using cookies for Safari browsers and pay $5.5 million. The settlement covered the fees and costs of class counsel, incentive awards for the named class representatives, and cy pres distributions; it did not include any direct compensation to class members. The six cy pres recipients were data privacy organizations who agreed to use the funds for researching and promoting browser privacy.

Cy pres, which means “as near as possible,” allows a court to distribute money from a class action settlement to charitable organizations when distributing it to class members would be impossible, impracticable, or illegal. Some of the cy pres recipients had pre-existing associations with Google and the class counsel, which raised concern.
“Through the proposed class-action settlement, the purported wrongdoer promises to pay a couple million dollars to class counsel and make a cy pres contribution to organizations it was already donating to otherwise (at least one of which has an affiliation with class counsel),” Circuit Judge Thomas Ambro said. He noted that U.S. Chief Justice John Roberts has previously expressed concerns about cy pres, and that many federal courts are also quite skeptical of cy pres awards, as they can prompt class counsel to put their own interests ahead of their clients’.

Ambro further said that the District Court’s fact-finding was insufficient: “In this context, we believe the District Court’s fact-finding and legal analysis were insufficient for us to review its order certifying the class and approving the fairness, reasonableness, and adequacy of the settlement. We thus vacate and remand for further proceedings in accord with this opinion.”

CCAF’s objection to this settlement was overruled by the U.S. District Court for the District of Delaware on February 2, 2017. Ted Frank, CCAF’s director, who is also a class member in this case, filed a notice of appeal on March 1, 2017. Frank believes the money awarded to the privacy groups should instead have been given to class members. The objection is also supported by 13 state attorneys general. “The state attorneys general agree with CCAF that the feasibility of distributing funds depends on whether it’s impossible to distribute funds to some class members, not whether it’s possible to distribute to all class members,” wrote CCAF.

The case now returns to the Delaware court. You can read more about Google’s Cookie Placement Consumer Privacy Litigation case on the Hamilton Lincoln Law Institute website.

Google discriminates against pregnant women, an employee memo alleges
Google Chrome to simplify URLs by hiding special-case subdomains
Google Project Zero reveals six “interactionless” bugs that can affect iOS via Apple’s iMessage
Vincy Davis
07 Aug 2019
3 min read

FFmpeg 4.2 releases with AV1 decoding support through libdav1d, decoding of HEVC 4:4:4 content and more

Two days ago, the team behind FFmpeg released FFmpeg 4.2, nicknamed “Ada”. This release comes with many new filters, decoders, and demuxers. FFmpeg is an open-source project comprising a suite of libraries and programs for handling multimedia files. It is a cross-platform multimedia framework used by various games and applications to record, convert, and stream audio and video. The previous version, FFmpeg 4.1, was released in November last year. The FFmpeg team has announced on Twitter that the follow-up point release (4.2.1) will arrive in a few weeks.

FFmpeg 4.2 adds AV1 decoding support through libdav1d. It also supports decoding of HEVC 4:4:4 content in nvdec, cuviddec and vdpau. It brings many new filters, including tpad, dedot, freezedetect, the truehd_core bitstream filter, anlmdn, and maskfun.

Read More: Presenting dav1d, a new lightweight AV1 decoder, by VideoLAN and FFmpeg

PCM-DVD has been included as an encoder in FFmpeg 4.2, and the release introduces numerous demuxers such as dhav, vividas, hcom, KUX, and IFV. This version also removes libndi-newtek. The mov muxer now writes tracks with an unspecified language instead of defaulting to English, and clang can now be used to compile CUDA kernels.

Users are happy with the FFmpeg 4.2 release and seem excited to get their hands on the new features in their applications.

https://twitter.com/Poddingue/status/1158840601267322886

A user on Hacker News comments, “Massive "thank you" to FFmpeg for being an amazing tool. My app pivotally depends on FFmpeg extracting screenshots of videos for users to browse through.”

Another comment on Hacker News reads, “Nice to see improved HEVC 4:4:4 support in Linux”

Additionally, users have appreciated the growth of the FFmpeg project in general. A Redditor comments, “Man ffmpeg is so awesome. It's still mind blowing that something so powerful is FOSS.
Outside of programming languages, operating systems, and essential tools like the GNU suite, I think an argument can be made that ffmpeg is one of the most important pieces of software ever created. We live in a digital world and virtually everything from security cameras to social media sites to news stations use ffmpeg on some level”

Check out the FFmpeg page to read about the updates in detail.

Firefox 67 enables AV1 video decoder ‘dav1d’, by default on all desktop platforms
Fabrice Bellard, the creator of FFmpeg and QEMU, introduces a lossless data compressor which uses neural networks
Introducing QuickJS, a small and easily embeddable JavaScript engine
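As a concrete illustration of the headline feature, the new libdav1d AV1 decoder can be selected explicitly when invoking ffmpeg. Below is a minimal Python sketch that builds such a command; the file names are placeholders, and the command only runs if an ffmpeg binary (4.2 or newer for libdav1d) is on the PATH:

```python
import shutil
import subprocess

# Build an ffmpeg invocation that forces FFmpeg 4.2's dav1d-based AV1
# decoder instead of the default, then transcodes to H.264.
# "input_av1.mkv" and "output.mp4" are placeholder file names.
cmd = [
    "ffmpeg", "-hide_banner",
    "-c:v", "libdav1d",      # select the new AV1 decoder explicitly
    "-i", "input_av1.mkv",
    "-c:v", "libx264",
    "output.mp4",
]

if shutil.which("ffmpeg"):
    # check=False: ffmpeg exits non-zero if the placeholder input is missing
    subprocess.run(cmd, check=False)
else:
    print("ffmpeg not found; would run:", " ".join(cmd))
```

The release's new filters follow the same pattern on the command line, e.g. adding `-vf freezedetect` to flag frozen video segments.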
Sugandha Lahoti
07 Aug 2019
5 min read

Apple Card, iPhone’s new payment system, is now available for select users

Update - 20th August 2019: Apple Card is now available to qualified customers in the US with iPhone 6 and later. To apply, customers can update to iOS 12.4 on iPhone, open Wallet and tap +. The Apple Card 3 percent Daily Cash is now extended to new merchants - Uber and Uber Eats.

Update - 23rd August 2019: The article has been updated to include details of how Apple wants users to store and clean the card.

At its March event, Apple announced Apple Card, its new digital credit card, which can be used for purchases and comes with a simpler application process, no fees, lower interest rates, and daily rewards. Apple Card has been available as a “preview rollout” since yesterday and will be broadly available to all iPhone owners in the US later this month.

The Apple Card is created in partnership with Goldman Sachs and Mastercard. It comes in two forms. The first is a digital card, which users access by signing up in the Apple Wallet app on their iPhone. The second is a physical titanium card with no credit card number, CVV, expiration date, or signature; all of the authorization information is stored directly in the Apple Wallet app. The card also provides a virtual number for online merchants that don’t take Apple Pay, and users can deactivate it entirely from the Wallet app with a single tap.

The preview version is available to a random group of users who signed up to be notified about the Apple Card. Signing up requires iOS 12.4 and the user’s address, birthday, income level, and the last four digits of their Social Security number. This information is sent to Goldman Sachs, which approves or declines an application in real time. Users also need to enable two-factor authentication, and they are not allowed to modify or jailbreak their Apple device. The card makes use of machine learning and Apple Maps to label stores, and color-codes purchases by category.
Users can easily track purchases in the Wallet app across categories like “food and drink” or “shopping.” The card also has a rewards program, “Daily Cash,” which adds 2 percent of the daily purchase amount in cash to the user’s Apple Cash account, also within the Wallet app. Purchases made with the physical card, though, earn just 1 percent cash back.

Apple has faced criticism for the card’s variable APR (Annual Percentage Rate), which starts at 12.99 percent and goes up to 24.24 percent. John Gruber of Daring Fireball wrote, ““Variable APRs range from 13.24% to 24.24% based on creditworthiness. Rates as of March 2019.” What a crock of shit this “low-interest rates” line is. Those interest rates are usury, right in line with the rest of the credit card industry. 24% interest ought to be criminal, and 13% is not “low”.”

Apple is quite cautious about the card’s privacy and stores spending, tracking, and other information directly on the device. Jennifer Bailey, VP of Apple Pay, said, “Apple doesn’t know what you bought, where you bought it, and how much you paid for it. Goldman Sachs will never sell your data to third parties for marketing and advertising.”

Store your Apple Card in your wallet without touching another credit card, warns Apple

Although Apple’s new titanium Apple Card boasts a multitude of features, it requires handling, care, and cleaning quite different from how people use and store their normal credit cards. Apple says you should clean your Apple Card with a microfiber cloth and avoid contact with leather and denim. You should place the card in a slot in your wallet or billfold without touching another credit card, and it should not be kept in a pocket or bag that contains loose change, keys, or other potentially abrasive objects. Apple warns that storing the card with other credit cards can scratch and damage it.
Apple has previously been criticized for making products that are aesthetically pleasing but easily damaged in everyday use. People on Twitter agree. https://twitter.com/JasonHirschhorn/status/1164352999122071552 https://twitter.com/alexstamos/status/1164367608038088704 Apple Card also signifies a transition from devices to services. In light of the recent news of Apple’s iPhone sales dwindling, the company is now shifting its focus to other means of revenue growth to keep its consumers occupied in the world of Apple. Apart from Apple card, these include smart home devices, streaming service, and Apple Arcade. You can sign up to be notified of the release of the card on Apple.com Apple’s March Event: Apple changes gears to services, is now your bank, news source, gaming zone, and TV Apple advanced talks with Intel to buy its smartphone modem chip business for $1 billion. OpenID Foundation questions Apple’s Sign-In feature, says it has security and privacy risks
Vincy Davis
06 Aug 2019
3 min read

DeepCode, the AI startup for code review, raises $4M seed funding; will be free for educational use and enterprise teams with 30 developers

Today, DeepCode, a tool that uses artificial intelligence (AI) to help developers write better code, raised $4M in seed funding to expand its machine learning systems for code review. DeepCode plans to expand its list of supported languages (adding C#, PHP, and C/C++), improve the scope of its code recommendations, and grow the team internationally. It has also been revealed that DeepCode is working on its first integrated development environment (IDE) project.

The funding round was led by Earlybird, with participation from 3VC and DeepCode’s existing investor, Btov Partners.

DeepCode has also announced a new pricing structure. Previously, the tool was free only for open source software development projects. Today, the company announced that it will also be free for educational use and for enterprise teams with 30 developers.

https://twitter.com/DeepCodeAI/status/1158666106690838528

Launched in 2016, DeepCode flags bugs, critical vulnerabilities, and style violations in the early stages of software development. Currently, DeepCode supports the Java, JavaScript, and Python languages. When developers link their GitHub or Bitbucket accounts to DeepCode, the DeepCode bot processes millions of commits in the available open source software projects and highlights broken code that can cause compatibility issues. In a statement to VentureBeat, CEO Boris Paskalev said that DeepCode saves 50% of the time developers spend finding bugs.

Read Also: Thanks to DeepCode, AI can help you write cleaner code

Earlybird co-founder and partner Christian Nagel says, “DeepCode provides a platform that enhances the development capabilities of programmers. The team has a deep scientific understanding of code optimization and uses artificial intelligence to deliver the next breakthrough in software development.”

Many open source projects have been getting major investments from tech companies lately.
Last year, Microsoft acquired the open source code hosting platform GitHub for $7.5 billion, and GitLab, another popular platform for distributed version control and source code management, raised $100 million in Series D funding. As the software industry grows, the amount of code being written has increased enormously, requiring ever more testing and debugging. DeepCode’s funding is definitely good news for the developer community.

https://twitter.com/andreas_herzog/status/1158666757588115456
https://twitter.com/evanderburg/status/1158710341963935745

Facebook research suggests chatbots and conversational AI are on the verge of empathizing with humans
Pluribus, an AI bot built by Facebook and CMU researchers, has beaten professionals at six-player no-limit Texas Hold ’Em Poker
Virality of fake news on social media: Are weaponized AI bots to blame, questions Destin Sandlin
Sugandha Lahoti
06 Aug 2019
4 min read

Ten US senators demand Google to automatically convert temp workers to permanent employees after six months

Ten US senators signed an official letter in late July objecting to Google’s misuse of independent contractors and temporary workers. The letter followed a report by The New York Times indicating that the tech giant has more temporary and contract workers than full-time employees: 121,000 temporary and contract workers against 102,000 full-time employees. Among the ten senators were presidential candidates Bernie Sanders, Kamala Harris, and Elizabeth Warren.

The letter highlights differences between permanent and temporary workers in terms of where they work, the number of hours they work, the tasks they perform, and whether or not they should continue to work on Google contracts. The NYT report describes how temporary workers and contractors are typically paid much lower salaries than their full-time counterparts, have fewer opportunities for professional advancement, work overtime, and avoid reporting inappropriate advances by superiors.

https://twitter.com/clairebangasser/status/1158657467573776384

The senators call on Google to take “immediate action” to convert its growing number of contractors to full-time employees after six months of work. The letter is addressed to Google CEO Sundar Pichai and demands a response by Friday this week. It urges Google to commit to taking immediate action to end these anti-worker practices and adopt the following company policies:

Automatic transition from temporary worker to permanent full-time Google employee after six months;
Prohibition of financial disincentives — including “conversion fees” stipulated by staffing agencies in contracts with Google — for transitioning a temporary worker to permanent Google employee;
Wage and benefits parity for independent contractors, temporary workers, and permanent full-time employees;
Disclosure to temporary workers at the start of their work on a Google contract about their status and when they can expect to transition to permanent full-time employee status;
Limitations on the use of independent contractors and temporary workers to temporary or non-core work that is not already performed by full-time employees;
Prohibition of mandatory nondisclosure agreements about the terms and conditions of employment, including in temporary workers’ contracts with their staffing agencies;
Elimination of all non-compete clauses in all employment contracts, including in temporary workers’ contracts with their staffing agencies; and
Google acceptance of liability for any workplace violations that occur with temporary workers or independent contractors.

Google disagrees with the senators’ demands

Eileen Naughton, Google’s VP of People Operations, strongly disputed the arguments raised in the letter. In response to the senators she wrote, “Respectfully, we strongly disagree with any suggestion that Google misuses independent contractors or temporary workers. Being a temporary worker is not intended to be a path to employment at Google. This fact is clearly stated in Google’s written policies and in its training documents, and temporary workers may apply for full-time positions through the same hiring process as everyone else.”

“Temporary workers comprise 3% of our total workforce, and do the job of a full-time Google employee but for a short period of time, working on temporary projects, addressing quick needs in business, incubating special projects, or covering for employees who may be on short-term leave, like parental or sick leave,” Naughton said.
She added, “We care about everyone working at Google or on Google-related projects - employees, vendors, temporary staff and contractors alike - and we’re happy to meet with your staff to discuss these issues further.”

The senators’ demands are a clear indication that even though some of the walkout organizers have been retaliated against and forced to resign, their work has not gone to waste. Google has previously acknowledged only one of the walkout organizers’ original demands: ending forced arbitration for all its full-time employees, though not for its temporary and contract workers. In April, Google confirmed that its contracted and temporary workforce will receive full benefits, including comprehensive health care, paid parental leave, and a $15 minimum wage, The Hill reported. That announcement came after a group of 915 Google workers signed a letter demanding equal treatment for the company’s temporary workers and contractors.

Meredith Whittaker, Google Walkout organizer and now ex-Googler, supported the demands laid out by the senators.

https://twitter.com/mer__edith/status/1158383054840352768

So did other activists and tech worker organizations.

https://twitter.com/teamcoworker/status/1158407450724327425
https://twitter.com/MarkCCrowley/status/1133508309414293504

Google employees ‘Walkout for Real Change’ today. These are their demands.
#GoogleWalkout organizers face backlash at work, tech workers show solidarity
Google discriminates against pregnant women, an employee memo alleges
Fatema Patrawala
06 Aug 2019
3 min read

Mimecast introduced community based tailored threat intelligence tool at Black Hat 2019

Yesterday at Black Hat 2019, Mimecast Limited, a leading email and data security company, introduced Mimecast Threat Intelligence, which offers a deeper understanding of the cyber threats organizations face. The cybersecurity landscape changes daily, and attackers constantly change their techniques to avoid detection. According to Mimecast’s recent State of Email Security Report 2019, 94% of organizations saw phishing attacks in the last 12 months, and 61% said it was likely or inevitable that they would be hit by an email-borne attack.

The new features in Mimecast Threat Intelligence are designed to give organizations access to threat data and analytics specific to their organization, along with a granular view of the attacks blocked by Mimecast. The Threat Intelligence dashboard highlights the users who are most at risk, malware detections, malware origin by geolocation, indicators of compromise (IoCs), and malware forensics based on static and behavioral analysis. The data is consolidated into a user-friendly view and is available for integration into an organization’s security ecosystem through the Threat Feed API. This targeted threat intelligence gives security professionals greater visibility and insight, enabling them to respond to and remediate threats and malicious files more easily.

“As the threat landscape evolves, arming our organization and people with the best possible tools is more important now than ever,” said Thomas Cronkright, CEO at CertifID. “Mimecast’s Threat Intelligence is a unique, incredibly easy to use value-added service that provides an outstanding benefit to organizations in search of a secure ecosystem.”

“The cyber threat landscape is dynamic, complex and driven by a relentless community of adversaries.
IT and security teams need threat intelligence that is easy to digest and actionable, so they can better leverage the information to proactively prevent and defend against cyberattacks,” said Josh Douglas, Vice President of threat intelligence at Mimecast. “Mimecast sees a lot of data, as we process more than 300 million emails every day to help customers block hundreds of thousands of malicious emails. Mimecast Threat Intelligence helps organizations get the deep insights they need to build a more cyber resilient environment.”

Mimecast Threat Intelligence consists of a Threat Dashboard, Threat Remediation, and a Threat Feed with Threat Intelligence APIs. To know more, check out this page on Mimecast Threat Intelligence.

International cybercriminals exploited Citrix internal systems for six months using password spraying technique
A zero-day vulnerability on Mac Zoom Client allows hackers to enable users’ camera, leaving 750k companies exposed
An IoT worm Silex, developed by a 14 year old resulted in malware attack and taking down 2000 devices
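To illustrate how a feed of indicators of compromise (IoCs) like the one exposed through a threat feed API might be consumed, here is a generic sketch. The feed format below is invented for illustration and is not Mimecast’s actual schema:

```python
import json

# Hypothetical IoC feed entries; real feeds (STIX, vendor APIs) differ.
feed = json.loads("""[
    {"type": "domain", "value": "bad.example.net"},
    {"type": "sha256", "value": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}
]""")

# Index the indicators for constant-time lookup.
iocs = {(entry["type"], entry["value"]) for entry in feed}

# Toy local telemetry to screen against the feed.
events = [
    {"type": "domain", "value": "good.example.com"},
    {"type": "domain", "value": "bad.example.net"},
]

# Flag any event whose (type, value) pair matches a known indicator.
hits = [e for e in events if (e["type"], e["value"]) in iocs]
for h in hits:
    print("IoC match:", h["value"])
```

In a real integration, the feed would be fetched periodically over the vendor’s API and matched against mail gateway or endpoint logs rather than a hard-coded list.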
Bhagyashree R
06 Aug 2019
4 min read

BlazingSQL, a GPU-accelerated SQL engine built on top of RAPIDS, is now open source

Yesterday, the BlazingSQL team open-sourced BlazingSQL under the Apache 2.0 license. BlazingSQL is a lightweight, GPU-accelerated SQL engine built on top of the RAPIDS.ai ecosystem. RAPIDS.ai is a suite of software libraries and APIs for end-to-end execution of data science and analytics pipelines entirely on GPUs.

Explaining the vision behind this step, Rodrigo Aramburu, CEO of BlazingSQL, wrote in a Medium blog post, “As RAPIDS adoption continues to explode, open-sourcing BlazingSQL accelerates our development cycle, gets our product in the hands of more users, and aligns our licensing and messaging with the greater RAPIDS.ai ecosystem.”

Aramburu calls RAPIDS “the next-generation analytics ecosystem,” with BlazingSQL serving as its SQL standard. BlazingSQL also serves as a SQL interface for cuDF, a GPU DataFrame (GDF) library for loading, joining, aggregating, and filtering data.

Here’s an overview of how BlazingSQL fits into the RAPIDS.ai ecosystem (Source: BlazingSQL):

Advantages of using BlazingSQL

Cost-effective: Customers often have to cluster thousands of servers to process data at scale, which can be very expensive. BlazingSQL needs only a small fraction of that infrastructure to run at an equivalent scale.
Better performance: BlazingSQL is 20x faster than an Apache Spark cluster at extracting, transforming, and loading data. It generates GPU-accelerated results in seconds, enabling data scientists to iterate quickly over new models.
Easy workload scaling: Usually, workloads are first prototyped at small scale and then rebuilt for distributed systems. With BlazingSQL, you write code once; it can be scaled across distributed systems with minimal changes.
Multiple data sources: It connects to multiple data sources to query files in local and distributed filesystems. Currently, it supports AWS S3 and Apache HDFS, and the team plans to support more in the future.
Federated queries: It lets you query raw data directly into GPU memory in its original format. A federated query joins data from multiple data stores across multiple data formats; BlazingSQL currently supports CSV, Apache Parquet, JSON, and existing GPU DataFrames.

Josh Patterson, GM of data science at NVIDIA, said in the announcement, “NVIDIA and the RAPIDS ecosystem are delighted that BlazingSQL is open-sourcing their SQL engine built on RAPIDS. By leveraging Apache Arrow on GPUs and integrating with Dask, BlazingSQL will extend open-source functionality, and drive the next wave of interoperability in the accelerated data science ecosystem.”

The news sparked a discussion on Hacker News, where Aramburu answered developers’ questions about BlazingSQL. One developer asked why the team chose CUDA instead of an open standard like OpenCL. Aramburu explained, “Early on when we first started playing around with General Processing on GPU's we had Nvidia cards to begin with and I started looking at the APIs that were available to me. The CUDA ones were easier for me to get started, had tons of learning content that Nvidia provided, and were more performant on the cards that I had at the time compared to other options. So we built up lots of expertise in this specific way of coding for GPUS. We also found time and time again that it was faster than OpenCL for what we were trying to do and the hardware available to us on cloud providers was Nvidia GPUs. The second answer to this question is that blazingsql is part of a greater ecosystem. rapids.ai and the largest contributor by far is Nvidia. We are really happy to be working with their developers to grow this ecosystem and that means that the technology will probably be CUDA only unless we somehow program "backends" like they did with thrust but that would be eons away from now.”

People also celebrated the news of BlazingSQL’s open-sourcing.
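The federated-query idea, joining sources in different formats through a single SQL statement, can be imitated on CPU with only Python’s standard library. This is a conceptual stand-in using sqlite3, not BlazingSQL’s actual API (BlazingSQL instead registers files on a BlazingContext and returns cuDF GPU DataFrames); the table names and sample data are invented:

```python
import csv
import io
import json
import sqlite3

# Two "sources" in different formats, as a federated query would see them.
csv_src = "id,city\n1,Lima\n2,Zurich\n"
json_src = '[{"id": 1, "fare": 9.5}, {"id": 2, "fare": 4.0}]'

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trips (id INTEGER, city TEXT)")
con.execute("CREATE TABLE fares (id INTEGER, fare REAL)")

# Load each format into the engine.
con.executemany(
    "INSERT INTO trips VALUES (?, ?)",
    [(int(r["id"]), r["city"]) for r in csv.DictReader(io.StringIO(csv_src))],
)
con.executemany(
    "INSERT INTO fares VALUES (?, ?)",
    [(r["id"], r["fare"]) for r in json.loads(json_src)],
)

# One SQL statement joining both sources, the essence of a federated query.
rows = con.execute(
    "SELECT t.city, f.fare FROM trips t JOIN fares f ON t.id = f.id ORDER BY t.id"
).fetchall()
print(rows)  # [('Lima', 9.5), ('Zurich', 4.0)]
```

BlazingSQL performs the analogous join directly over CSV, Parquet, or JSON files in GPU memory, skipping the explicit load step shown here.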
A comment on Hacker News reads, “This is great. The BlazingDB guys are awesome and now that the project is open source this is another good reason for my teams to experiment with different workloads and compare it against a SparkSQL approach”

BlazingDB announces BlazingSQL, a GPU SQL Engine for NVIDIA’s open source RAPIDS
Amazon introduces PartiQL, a SQL-compatible unifying query language for multi-valued, nested, and schema-less data
Amazon Aurora makes PostgreSQL Serverless generally available