
Tech News

Introducing Netlify Dev for Local Testing and Live Stream Preview Capabilities

Amrata Joshi
10 Apr 2019
2 min read
Yesterday, the team at Netlify announced Netlify Dev, bringing local testing and live stream preview capabilities. Web developers can now locally test serverless functions, API integrations, and CDN logic, and share progress instantly. They get access to the capabilities of the Netlify platform on their laptops, which means they no longer have to wait for staging or production to test and get feedback on their websites and applications. Developers can live-stream their development server to a cloud URL and share updates as the code and content change.

In a statement to Business Wire, Kent C. Dodds, software engineer and educator, said, "Netlify has a knack for simplifying things that are hard so I can focus on building my web application, and Netlify Dev is another example of that. I'm excited about being able to simply develop, test, and debug my Netlify web applications with one simple command."

Netlify has compiled its entire edge redirect engine into WebAssembly so developers can test locally before deploying to production. They can now write and validate AWS Lambda functions in the Netlify CLI using modern JavaScript and deploy them as full API endpoints.

Mathias Biilmann, CEO, said, "Netlify is obsessed with developer productivity for building modern sites on the JAMstack. The new local test and share capabilities of Netlify Dev provide a single, simplified workflow that brings everything together—from the earliest code to production global deployment."

Netlify Dev can automatically detect common tools like Gatsby, Hugo, Jekyll, React Static, Eleventy, and more, and it provides a single development server and workflow. New and existing users can get Netlify Dev by installing or updating the Netlify CLI, which is used for creating new sites, setting up continuous deployment, and pushing new deployments.

The new features of Netlify Dev are tightly coupled with Netlify's git-based workflow for team collaboration. Netlify brings an instant CI/CD pipeline for developers who work in Git, so that every commit and pull request builds the site into a deploy preview. Developers can easily build and collaborate in the full production environment.

To know more about this news, check out Netlify's official page.

Netlify raises $30 million for a new 'Application Delivery Network', aiming to replace servers and infrastructure management
Introducing Gitpod, a one-click IDE for GitHub
IPv6 support to be automatically rolled out for most Netlify Application Delivery Network users

U.S. senators introduce a bipartisan bill that bans social media platforms from using 'dark patterns' to trick their users

Natasha Mathur
10 Apr 2019
4 min read
Two U.S. senators, Mark R. Warner (D-VA) and Deb Fischer (R-NE), introduced a bill yesterday to ban large online platforms (those with over 100 million monthly active users) such as Facebook and Twitter from tricking consumers into handing over their personal data. The bipartisan legislation, named the Deceptive Experiences To Online Users Reduction (DETOUR) Act, aims to prohibit these platforms from using deceptive user interfaces known as "dark patterns".

https://twitter.com/MarkWarner/status/1115660831969153025

The term "dark patterns" refers to online interfaces on websites and apps that are specifically designed to manipulate users into taking actions they wouldn't otherwise take under normal circumstances. The design tactics behind these patterns are informed by extensive behavioral psychology research and mislead users on social media platforms into agreeing to settings and handing over data that are advantageous to the company. By pushing users to give up their personal data (contacts, messages, web activity, location) this way, these social media companies gain an unfair advantage over their competitors.

According to Senator Fischer, a member of the Senate Commerce Committee, these dark patterns weaken privacy protections that rely on consent: "Misleading prompts to just click the 'OK' button can often transfer your contacts, messages, browsing activity, photos, or location information without you even realizing it. Our bipartisan legislation seeks to curb the use of these dishonest interfaces and increase trust online."

https://twitter.com/MarkWarner/status/1115660838692642818
https://twitter.com/MarkWarner/status/1115660840575877120

Other examples of dark patterns include interruptions that repeat in the middle of a task until the user agrees to consent, and privacy settings that push users to 'agree' as the default option. Users looking for more privacy-related options are required to follow a long process that involves clicking through multiple screens, and sometimes an alternative option is not provided at all.

As per the DETOUR Act:

A professional standards body, registered with the Federal Trade Commission (FTC), needs to be created to focus on best practices surrounding user design for large online operators. This association would act as a self-regulatory body and provide updated guidance to social media platforms.

Segmenting consumers for behavioral experiments is prohibited unless carried out with a consumer's informed consent. This includes routine disclosures by large online operators (at least once every 90 days) on any behavioral experiments to the public. The bill also requires large online operators to have an internal Independent Review Board to provide oversight of these practices and safeguard consumer welfare.

User design intended to foster compulsive usage among children under the age of 13 is prohibited.

The FTC must issue rules within one year of the act's enactment covering informed consent, Independent Review Boards, and professional standards bodies.

Senator Warner has been raising concerns about the implications of dark patterns used by social media companies for several years. For instance, in 2014, Sen. Warner asked the FTC to probe Facebook's use of dark patterns in an experiment that involved nearly 700,000 users. The experiment focused on the emotional impact of manipulating information in Facebook's News Feeds.

"We support Senators Warner and Fischer in protecting people from exploitive and deceptive practices online. Their legislation helps to achieve that goal and we look forward to working with them," said Fred Humphries, Corporate VP of U.S. Government Affairs at Microsoft, in a press release sent to us.

Apart from the DETOUR Act, Sen. Warner is planning to introduce further legislation designed to improve transparency, privacy, and accountability on social media.

Public reaction to the news has been largely positive, with people supporting the senators and the new bill:

https://twitter.com/tristanharris/status/1115735945393782785
https://twitter.com/joenatoli/status/1115823934132445186

For more information, check out the official DETOUR Act bill.

US Senators introduce a bill to avoid misuse of facial recognition technology
U.S. Senator introduces a bill that levies jail time and hefty fines for companies violating data breaches
A brief list of draft bills in US legislation for protecting consumer data privacy

China will ban cryptomining to prevent wastage of resources

Savia Lobo
10 Apr 2019
2 min read
Yesterday, China's National Development and Reform Commission (NDRC) published a proposal to ban cryptocurrency mining, on the grounds that crypto mining is a waste of valuable resources. The commission is waiting for public feedback on the proposal, and it indicated that the crypto-mining ban "could take effect as soon as they're formally issued", Bloomberg reports. The proposal also includes a revised list, first published in 2011, of industries it wants to encourage, restrict, or eliminate, according to Reuters. The public can comment on the draft until May 7.

Cryptocurrencies such as Bitcoin are mined on specialized computers that consume a huge amount of energy. According to the South China Morning Post, China's coal-rich regions, Xinjiang and Inner Mongolia, have become popular destinations for crypto-miners looking for cheap electricity. "It's estimated that as much as 74 percent of global crypto mining is occurring in China, a place where it's also the most carbon-intensive", Gizmodo reports. A recent report in Nature Sustainability estimates that crypto mining emits anywhere between 3 million and 15 million tons of carbon dioxide globally.

In 2017, China banned initial coin offerings and also put a halt to virtual currency trading. In 2018, Chinese officials outlined proposals to discourage crypto mining. Jehan Chu, a managing partner at blockchain investment firm Kenetic, said, "The NDRC's move is in line overall with China's desire to control different layers of the rapidly growing crypto industry, and does not yet signal a major shift in policy."

According to Reuters, "The draft for a revised list added cryptocurrency mining, including that of bitcoin, to more than 450 activities the NDRC said should be phased out as they did not adhere to relevant laws and regulations, were unsafe, wasted resources or polluted the environment." An executive who works closely with Chinese mining firms told Wired that "although the ban was widely expected to move forward, miners expect it will take years for the government to fully rein in their operations."

To know more about this news in detail, head over to Bloomberg.

Crypto-cash is missing from the wallet of dead cryptocurrency entrepreneur Gerald Cotten – find it, and you could get $100,000
FOSDEM 2019: Designing better cryptographic mechanisms to avoid pitfalls – Talk by Maximilian Blochberger
Google expands its Blockchain search tools, adds six new cryptocurrencies in BigQuery Public Datasets

Google Chrome will soon support LazyLoad, a solution to lazily load below-the-fold images and iframes

Bhagyashree R
09 Apr 2019
2 min read
Google Chrome will soon support LazyLoad, a feature that allows browsers to delay the loading of out-of-view images and iframes until the user scrolls near them, Chromium developer Scott Little shared yesterday.

Why is LazyLoad being introduced?

Very often, web pages have images and other embedded content, like ads, placed below the fold, and users don't always end up scrolling all the way down. LazyLoad takes advantage of this behavior to load the important content much faster while reducing network data and memory usage. LazyLoad waits to load images and iframes that are out of view until the user scrolls near them. It is up to the browser to decide exactly how "near", but it should typically start loading the out-of-view content some distance before the content comes into view.

Currently, there are a few JavaScript libraries that can be used for lazy loading images or other kinds of content. But natively supporting such a feature in the browser itself will make it easier for websites to take advantage of lazy loading. Additionally, with this feature browsers will be able to automatically find and load content that is suitable for lazy loading.

The LazyLoad solution will be supported on all platforms. Web pages just need to use loading="lazy" on the img and iframe elements. For Android Chrome users who have Data Saver turned on, elements with loading="auto" or with the attribute unset will also be lazily loaded if Chrome finds them to be good candidates based on heuristics. Elements with loading="eager" will not be lazily loaded.

To read more in detail about LazyLoad, check out its GitHub repository.

Google's Cloud Healthcare API is now available in beta
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Google dissolves its Advanced Technology External Advisory Council in a week after repeat criticism on selection of members

Google’s Cloud Healthcare API is now available in beta

Amrata Joshi
09 Apr 2019
3 min read
Last week, Google announced that its Cloud Healthcare API is now available in beta. The API acts as a bridge between on-site healthcare systems and applications that are hosted on Google Cloud. It is HIPAA compliant, ecosystem-ready, and developer-friendly. Google's aim is to give hospitals and other healthcare facilities more analytical power with the help of the Cloud Healthcare API.

The official post reads, "From the beginning, our primary goal with Cloud Healthcare API has been to advance data interoperability by breaking down the data silos that exist within care systems. The API enables healthcare organizations to ingest and manage key data and better understand that data through the application of analytics and machine learning in real time, at scale."

The API offers a managed solution for storing and accessing healthcare data in Google Cloud Platform (GCP). With its help, users can explore new capabilities for data analysis, machine learning, and application development for healthcare solutions. The Cloud Healthcare API also simplifies app development and device integration to speed up the process, and it supports standards-based data formats and protocols of existing healthcare tech. For instance, it allows healthcare organizations to stream data processing with Cloud Dataflow, analyze data at scale with BigQuery, and tap into machine learning with the Cloud Machine Learning Engine.

Features of the Cloud Healthcare API

Compliant and certified: The API is HIPAA compliant and HITRUST CSF certified. Google is also planning ISO 27001, ISO 27017, and ISO 27018 certifications for the Cloud Healthcare API.

Explore your data: The API allows users to explore their healthcare data by incorporating advanced analytics and machine learning solutions such as BigQuery, Cloud AutoML, and Cloud ML Engine.

Managed scalability: The Cloud Healthcare API provides web-native, serverless scaling, optimized by Google's infrastructure. Users can simply activate the API and start sending requests, as no initial capacity configuration is required.

Apigee integration: The API integrates with Apigee, which is recognized by Gartner as a leader in full lifecycle API management, for delivering app and service ecosystems around user data.

Developer-friendly: The API organizes users' healthcare information into datasets, with one or more modality-specific stores per dataset, where each store exposes both a REST and an RPC interface.

Enhanced data liquidity: The API supports bulk import and export of FHIR and DICOM data, which accelerates delivery for applications with dependencies on existing datasets. It further provides a convenient API for moving data between projects.

The official post reads, "While our product and engineering teams are focused on building products to solve challenges across the healthcare and life sciences industries, our core mission embraces close collaboration with our partners and customers." Google will highlight what its partners, including the American Cancer Society, CareCloud, Kaiser Permanente, and iDigital, are doing with the API at the ongoing Google Cloud Next.

To know more about this news, check out Google's official announcement.
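The announcement itself contains no code, but to make the "REST interface per store" idea above concrete, here is a rough Python sketch of querying a FHIR store. The project, location, dataset, and store names are placeholders, and the v1beta1 path layout shown is an assumption based on the beta described here, not text from the announcement.

```python
# Hypothetical sketch: listing Patient resources from a Cloud Healthcare API
# FHIR store. The dataset/store names below are made up for illustration, and
# the v1beta1 path layout is an assumption for the beta release.
import google.auth
from google.auth.transport.requests import AuthorizedSession

# Application Default Credentials; `project` may be None if not configured.
credentials, project = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

base = "https://healthcare.googleapis.com/v1beta1"
fhir_store = (
    f"projects/{project}/locations/us-central1"
    "/datasets/example-dataset/fhirStores/example-store"
)

# Searching a FHIR resource type returns a FHIR Bundle in JSON.
response = session.get(f"{base}/{fhir_store}/fhir/Patient")
response.raise_for_status()
bundle = response.json()
for entry in bundle.get("entry", []):
    patient = entry["resource"]
    print(patient.get("id"), patient.get("birthDate"))
```

The other modality-specific stores the post mentions (DICOM, HL7v2) follow an analogous resource layout under the same dataset; again, the names here are illustrative only.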
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Google dissolves its Advanced Technology External Advisory Council in a week after repeat criticism on selection of members
Google employees filed petition to remove anti-trans, anti-LGBTQ and anti-immigrant Kay Coles James from the AI council

Horovod: an open-source distributed training framework by Uber for TensorFlow, Keras, PyTorch, and MXNet

Natasha Mathur
09 Apr 2019
3 min read
The LF Deep Learning Foundation, a community umbrella project of The Linux Foundation, announced Horovod, started by Uber in 2017, as one of its new projects in December last year. Uber joined the Linux Foundation in November 2018 to support the LF Deep Learning Foundation's open source projects.

Horovod (named after a traditional Russian dance), announced at KubeCon + CloudNativeCon North America 2018, is an open source distributed training framework for TensorFlow, Keras, MXNet, and PyTorch. It helps improve speed, scale, and resource allocation in machine learning training activities. The main goal of Horovod is to make distributed deep learning simple and fast.

Since its release, Horovod has been used for a range of tasks by different companies. For instance, Uber has been using Horovod for self-driving vehicles, fraud detection, and trip forecasting. Other companies using Horovod include Alibaba, Amazon, and NVIDIA, and other contributors to the Horovod project are Amazon, IBM, Intel, and NVIDIA. IBM uses Horovod as part of its open source deep learning solution, FfDL, and in its IBM Watson Studio. Databricks also features Horovod in its deep learning offering. Similarly, NVIDIA announced last November that it is using Uber's Horovod to build an AI computing platform for developers of self-driving vehicles.

Molly Vorwerck, Editorial Program Manager for Uber Engineering, mentioned that "Horovod was a clear choice for NVIDIA. With only a few lines of code, Horovod allowed them to scale from one to eight GPUs, optimizing model training for their self-driving sensing and perception technologies, leading to faster, safer systems".

Horovod makes it easy to take a single-GPU TensorFlow program and train it on many GPUs, and it makes it easier to achieve improved GPU resource usage. It uses advanced algorithms and high-performance networks to give data scientists and other researchers the tooling to easily scale their deep learning models with high performance.

The open source community's response to Horovod has also been very positive. "It was very cool to see my first open source project reach so many people and be adopted so quickly..now, when I go to conferences people actually know of Horovod and they're excited to integrate with it...all these things make me really happy", states Alex Sergeev, Horovod Project Lead.

Horovod also joined the existing Linux Foundation Deep Learning projects, namely Acumos AI (an open source AI framework), Angel (a high-performance distributed machine learning platform), and EDL (Elastic Deep Learning framework). These projects have been designed to help cloud service providers build cluster cloud services using deep learning frameworks.

"Uber built Horovod to make deep learning model training faster and more intuitive for AI researchers across industries. As Horovod continues to mature in its functionalities and applications, this collaboration will enable us to further scale its impact in the open source ecosystem for the advancement of AI," said Sergeev.

For more information, check out the official Horovod blog post.

Uber open-sources Peloton, a unified Resource Scheduler
Uber releases Ludwig, an open source AI toolkit that simplifies training deep learning models
Uber releases AresDB, a new GPU-powered real-time Analytics Engine
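The article quotes NVIDIA scaling from one to eight GPUs with "only a few lines of code". As a rough illustration of what those lines usually look like, here is a minimal sketch of the common Horovod with tf.keras pattern; the model, dataset, and hyperparameters are placeholders rather than anything from Uber's or NVIDIA's actual code, and details may differ between Horovod and TensorFlow versions.

```python
# Minimal sketch of the typical Horovod + tf.keras training pattern.
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # one process per GPU, launched e.g. with horovodrun or mpirun

# Pin this process to a single local GPU.
gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], "GPU")

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Scale the learning rate by the number of workers and wrap the optimizer so
# gradients are averaged across all processes.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
model.compile(optimizer=opt,
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

callbacks = [
    # Ensure every worker starts from the same initial weights.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]
model.fit(x_train, y_train, batch_size=64, epochs=1,
          callbacks=callbacks, verbose=1 if hvd.rank() == 0 else 0)
```

Each copy of the script runs as one process per GPU, and Horovod averages gradients across the processes with its ring-allreduce implementation.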
Microsoft makes the first preview builds of Chromium-based Edge available for testing

Bhagyashree R
09 Apr 2019
4 min read
Yesterday, Microsoft released the first preview builds of its Chromium-powered Edge browser for Windows 10. This comes after Microsoft announced in December last year that it would adopt the Chromium open source project in the development of Microsoft Edge for desktop. You can download the preview builds for testing from the Microsoft Edge Insider site.

The new builds are available through three different Microsoft Edge Insider channels: Beta, Canary, and Developer. Canary builds will receive updates every night. Developer builds are more stable than the Canary builds and will be updated weekly. Beta builds are the most stable of the three and will receive updates every six weeks.

Source: Microsoft

Right now, Microsoft is only opening the Developer and Canary channels. Though the company was not clear about the timeline in the announcement, it does promise that the Beta builds, along with support for Mac and all the other supported versions of Windows, will come in the future. However, there is no mention of whether this new overhauled Microsoft Edge will support Linux.

In these preview builds, the team has mostly focused on the fundamentals, so current users will not see an extensive range of features or language support. The new Chromium-based Microsoft Edge preview builds do look strikingly similar to Google Chrome. Among the similarities are subtle design finishes, a dark mode, and the ability to manage your sign-in profile. In this Chromium-based Edge implementation, Microsoft has removed or replaced about 50 services that are included in Chromium, among them Google Now, Google Cloud Messaging, and Chrome OS-related services. More details regarding the updates will be shared during a BlinkOn 10 keynote today.

These preview builds also bring support for an expanded selection of extensions. Users will no longer have to choose only from the limited set of extensions available on Microsoft's store, as extensions from other third-party stores like the Chrome Web Store are also supported. Since the browser is based on Chromium, it also comes with support for Progressive Web Apps and supports the same developer tools as Chromium.

Microsoft is working closely with the team at Google and hopes to work with the broader Chromium community going forward. Its latest contributions to the Chromium open source project include areas like accessibility, touch, ARM64, and others. In the future, it plans to introduce smooth scrolling, a reading view free of distractions, grammar tools, and Microsoft Translator integration.

Users who have tested these preview builds are finding them, unsurprisingly, very similar to Chrome. One user on Reddit remarks, "To the surprise of no one, its basically chrome. Even my google account came in logged in automatically, same recent sites etc. I wonder if the roadmap will include things like dark mode, I never used the annotations feature so can't vouch much for it. I'm yet to try to make a MS Teams call but looking good so far." The Verge, after testing the preview builds, shared that the Chromium-powered Edge is showing even better performance than Google Chrome.

Many users are also saying that instead of joining hands with Google, Microsoft could have gone with Firefox to make the web fair and accessible. "I wish they've would have gone with Firefox's Quantum, in order to try and at least balance out web market shares. MSFT no longer has any leverage in the web, so trying to keep it fair and accessible (no browser monopolies) should be a priority for them (especially since they have quite a few web platforms like office 365)," adds a redditor.

To read the official announcement, check out the Microsoft blog.

Microsoft's #MeToo reckoning: female employees speak out against workplace harassment and discrimination
Microsoft, Adobe, and SAP share new details about the Open Data Initiative
Microsoft reportedly ditching EdgeHTML for Chromium in the Windows 10 default browser

Apple plans to make notarization a default requirement in all future macOS updates

Sugandha Lahoti
09 Apr 2019
4 min read
In updated developer documentation released yesterday, Apple announced its plans to make notarization a default requirement for all software in the future. Starting with macOS 10.14.5, all new software distributed with a new Developer ID must be notarized in order to run. "Beginning in macOS 10.14.5, all new or updated kernel extensions and all software from developers new to distributing with Developer ID must be notarized in order to run. In a future version of macOS, notarization will be required by default for all software," writes Apple in a blog post.

What is notarization?

First introduced in macOS Mojave for apps distributed outside of the Mac App Store, Apple's notary service is an automated system that scans software for malicious content and checks for code-signing issues. Based on these checks, notarization generates a ticket and publishes it online where Gatekeeper (Apple's flagship security feature) can find it and deem the software notarized. Gatekeeper then places descriptive information in the initial launch dialog to help the user make an informed choice about whether to launch the app.

macOS 10.14.5 requires new developers to notarize

Apple has encouraged Mac app developers to submit their apps to Apple to be notarized, and the Gatekeeper dialog has been streamlined to reassure users that an app is not known malware. For non-Mac App Store developers, Apple provides a Developer ID that is required for Gatekeeper to install non-Mac App Store apps without extra warnings. From macOS 10.14.5 onwards, however, all new software distributed with a new Developer ID will need to go through the notarization process for the apps to work on the Mac.

Apple notes that some preexisting software might not run properly after being successfully notarized; for example, "Gatekeeper might find code signing issues that a relaxed notarization process didn't enforce." It recommends that developers always review the notary log for any warnings and test the software before distribution. Developers will not need to rebuild or re-sign their software before submitting it for notarization, but they must use Xcode 10 to perform the notarization steps. More information on notarization can be found on Apple's developer site.

Some Hacker News users were unsure of what Apple means by "by default":

"kind of makes it sound like all software will have to be notarized, which implies that you have to be an Apple Developer to distribute at all. But saying "by default" makes it seems like there's some kind of option given to the user, so maybe it just means that software that's distributed by a registered Apple Developer but isn't notarized just moves down into the third tier of software that has to be explicitly allowed to run by the user."

"I interpret the "by default" as meaning the exact same thing as "Developer ID is required by default for Mac apps" today. Or in other words, I would assume that getting around a non-notarized app in the future would have the exact same sequence of steps as getting around a non-Developer ID-signed app today."

"I'd read the 'by default' as it being turned on system-wide and up to the user to override on a per case basis. Of course, Apple's ideal model is that they want everything going through them. They're going to enable it 'by default' and if customers don't scream too much, they'll likely make it mandatory a release or two later."

Final release for macOS Mojave is here with new features, security changes and a privacy flaw
macOS gets RPCS3 and Dolphin using Gfx-portability, the Vulkan portability implementation for non-Rust apps
Swift 5 for Xcode 10.2 is here!

Blast through the Blockchain hype with Packt and Humble Bundle

Richard Gall
09 Apr 2019
2 min read
Blockchain and cryptocurrency have received significant attention from the press in the last few years. But it's not always easy to get past the hype and find out what Blockchain means in practice, and how we can make it really matter. April's Humble Bundle from Packt is a small step toward changing that. With 25 books and video courses on all things crypto, all available for as little as $15, it's the perfect starting point for anyone interested in learning what's possible with Blockchain, and actually building things with it. Find the bundle here.

While Humble Bundle and Packt are excited to help a new audience of engineers and crypto-enthusiasts, they're also delighted to be supporting some incredible charities. The bundle's chosen charity is Dress for Success, an organization that supports women worldwide with the resources and tools needed to achieve economic independence.

For anyone new to Humble Bundle, the principle is simple: Pay What You Want across three tiers, with minimums of $1, $8, and $15. Customers can then choose how their money is split between publisher, Humble Bundle, and charities. So, between April 8 and April 22, the world can get its hands on a substantial stash of Blockchain and cryptocurrency resources. But what's included in each tier?

For as little as $1, customers can get their hands on:
Blockchain By Example
Ethereum Cookbook
Learn Bitcoin and Blockchain
Learn JavaScript: Build Your Own Blockchain
Learning Blockchain Application Development

For a minimum of $8, customers will get all of the above as well as:
A Developer's Guide to Blockchain, Bitcoin and Cryptocurrencies
Blockchain for Business 2018: The New Industrial Revolution
Ethereum Projects for Beginners
Foundations of Blockchain
Hands-On Bitcoin Programming with Python
Hands-On IoT Solutions with Blockchain
Hyperledger for Blockchain Applications
Tokenomics

And for a minimum of $15, readers will get all of the products in the tiers above, as well as:
Blockchain across Oracle
Blockchain Real World Projects
Ethereum Projects
Hands-On Blockchain for Python Developers
Hands-On Blockchain with Hyperledger
Hands-On Cybersecurity with Blockchain
Learn Blockchain Programming with JavaScript
Learn Python by Building a Blockchain and Cryptocurrency
Mastering Blockchain
Solidity Programming Essentials
Truffle Quick Start Guide
Blockchain Quick Reference

To learn more, visit Humble Bundle here.

Facebook AI introduces Aroma, a new code recommendation tool for developers

Natasha Mathur
09 Apr 2019
3 min read
The Facebook AI team announced a new tool, called Aroma, last week. Aroma is a code-to-code search and recommendation tool that uses machine learning (ML) to simplify the process of gaining insights from big codebases. Aroma allows engineers to find common coding patterns easily by making a search query, without any need to manually browse through code snippets, which in turn saves time in their development workflow. So, in case a developer has written code but wants to see how others have implemented the same thing, they can run a search query to find similar code in related projects. After the search query is run, results are returned as code "recommendations". Each code recommendation is built from a cluster of similar code snippets found in the repository.

Aroma is more advanced than traditional code search tools. For instance, Aroma performs the search on syntax trees. Instead of looking for string-level or token-level matches, Aroma can find instances that are syntactically similar to the query code, and it can then highlight the matching code by cutting away the unrelated syntax structures. Aroma is fast and creates recommendations within seconds for large codebases. Moreover, Aroma's core algorithm is language-agnostic and can be deployed across codebases in Hack, JavaScript, Python, and Java.

How does Aroma work?

Aroma follows a three-step process to make code recommendations: feature-based search, re-ranking and clustering, and intersecting.

For feature-based search, Aroma indexes the code corpus as a sparse matrix. It parses each method in the corpus and creates its parse tree, then extracts a set of structural features from the parse tree of each method. These features capture information about variable usage, method calls, and control structures. Finally, a sparse vector is created for each method according to its features, and the top 1,000 method bodies whose dot products with the query vector are highest are retrieved as the candidate set for the recommendation.

In re-ranking and clustering, Aroma first re-ranks the candidate methods by their similarity to the query code snippet. Since the sparse vectors contain only abstract information about which features are present, the dot product score is an underestimate of the actual similarity of a code snippet to the query. To deal with this, Aroma applies "pruning" to the method syntax trees, which discards the irrelevant parts of a method body and retains the parts that best match the query snippet. This is how it re-ranks the candidate code snippets by their actual similarities to the query. Aroma then runs an iterative clustering algorithm to find clusters of code snippets that are similar to each other and contain extra statements useful for making code recommendations.

In intersecting, a code snippet is taken as the "base" code, and pruning is applied to it iteratively with respect to every other method in the cluster. The code remaining after this process is the code common to all the methods in the cluster, and that becomes a code recommendation.

"We believe that programming should become a semiautomated task in which humans express higher-level ideas and detailed implementation is done by the computers themselves," states the Facebook AI team.

For more information, check out the official Facebook AI blog.
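Aroma's own code isn't shown in the post, but the featurization-and-ranking step described above can be illustrated with a small toy sketch: parse each method into a syntax tree, turn the tree into a sparse bag of structural features, and rank candidate methods by their dot product with the query's features. The sketch below uses Python's built-in ast module and deliberately simple features; it is an illustration of the general idea, not Aroma's actual algorithm, and it omits the re-ranking, clustering, and intersection steps described above.

```python
# Toy illustration of feature-based code search in the spirit of Aroma:
# extract structural features from each method's syntax tree, build sparse
# vectors, and rank candidates by dot product with the query snippet.
import ast
from collections import Counter

def features(source: str) -> Counter:
    """Bag of structural features: node types plus names of called functions."""
    tree = ast.parse(source)
    feats = Counter()
    for node in ast.walk(tree):
        feats[type(node).__name__] += 1
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            feats["call:" + node.func.id] += 1
    return feats

def score(query: Counter, candidate: Counter) -> int:
    """Dot product of two sparse feature vectors."""
    return sum(count * candidate[f] for f, count in query.items())

corpus = {
    "read_json": "def read_json(p):\n    import json\n    with open(p) as f:\n        return json.load(f)",
    "add": "def add(a, b):\n    return a + b",
}

query = "with open('config.json') as f:\n    data = json.load(f)"
q = features(query)
ranked = sorted(corpus, key=lambda name: score(q, features(corpus[name])), reverse=True)
print(ranked)  # 'read_json' should rank above 'add'
```

Aroma's real featurization is language-agnostic and far richer, but the same sparse-vector ranking idea is what makes its first search stage fast over large corpora.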
How to make machine learning based recommendations using Julia [Tutorial]
Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs
Facebook AI research and NYU school of medicine announces new open-source AI models and MRI dataset
IEEE Spectrum: IBM Watson has a long way to go before it becomes an efficient AI doctor

Natasha Mathur
09 Apr 2019
5 min read
Eliza Strickland, Senior Associate Editor at IEEE Spectrum, a magazine published by the Institute of Electrical and Electronics Engineers, published an article last week. The article discusses how far IBM Watson still has to go before it establishes itself as an efficient AI in the healthcare industry.

IBM Watson, a question-answering computer system capable of answering questions posed in natural language, made headlines back in February 2011 when it defeated two human champions on Jeopardy!, the popular American quiz show. This was also when IBM researchers began exploring the possibility of extending Watson's capabilities to 'revolutionize' health care. IBM decided to apply Watson's outstanding NLP capabilities to medicine and even promised a commercial product.

The first time IBM showed off Watson's potential to transform medicine using AI was back in 2014. For the demo, Watson was fed a bizarre collection of patient symptoms, from which it produced a list of possible diagnoses. Watson's memory bank included information on even the rarest of diseases, and its processors were unbiased in approach, giving it an edge over other AI systems for doctors. "If Watson could bring that instant expertise to hospitals and clinics all around the world, it seemed possible that the AI could reduce diagnosis errors, optimize treatments, and even alleviate doctor shortages—not by replacing doctors but by helping them do their jobs faster and better," writes Strickland.

However, IBM could not follow up on that promise. "In the eight years since, IBM has trumpeted many more high-profile efforts to develop AI-powered medical technology—many of which have fizzled and a few of which have failed spectacularly," writes Strickland. Moreover, the products that have come out of the IBM Watson Health division are more like basic AI assistants capable of performing routine tasks, nowhere close to an AI doctor.

Challenges faced by Watson in the healthcare industry

While IBM was considering Watson's possibilities in the healthcare industry, the most challenging issue was the fact that the bulk of patient data in medicine is unstructured. This includes doctors' notes and hospital discharge summaries, which account for about 80 percent of a typical patient's record and are an amalgamation of jargon, shorthand, and subjective statements.

Another challenge faced by IBM Watson is the diagnosis of cancer. Mark Kris, a lung cancer specialist at Memorial Sloan Kettering Cancer Center in New York City, along with other preeminent physicians, trained an AI system known as Watson for Oncology in 2015. Watson for Oncology would learn by ingesting the vast medical literature on cancer and the health records of real cancer patients, and uncover patterns unknown to humans. Preeminent physicians at the University of Texas MD Anderson Cancer Center in Houston collaborated with IBM to create a tool called Oncology Expert Advisor. Both products, however, faced severe criticism, with reports saying that Watson for Oncology at times provided 'useless' and 'dangerous' recommendations. "A deeper look at these two projects reveals a fundamental mismatch between the promise of machine learning and the reality of medical care—between 'real AI' and the requirements of a functional product for today's doctors," writes Strickland.

Although Watson learned quickly how to scan articles on clinical studies, it was difficult to teach Watson to read the articles the way a doctor would. "The information that physicians extract from an article, that they use to change their care, may not be the major point of the study. Watson's thinking is based on statistics, so all it can do is gather statistics about main outcomes," adds Mark Kris. Researchers further found that Watson was incapable of mining information from patients' electronic health records, and they realized that Watson struggles to compare a new patient with large numbers of other cancer patients to discover hidden patterns. They had hoped that Watson would mimic the abilities of expert oncologists, but they were disappointed.

Despite these challenges, IBM Watson has also seen its share of success stories. Strickland cites the example of Watson for Genomics, developed in partnership with the University of North Carolina, Yale University, and other renowned institutions. The tool helps genetics labs generate reports for practicing oncologists: Watson ingests the list of a patient's genetic mutations and generates a report describing all the relevant drugs and clinical trials in just a few seconds. Moreover, IBM's partners at the University of North Carolina published a paper on the effectiveness of Watson for Genomics in 2017.

Effective or not, IBM Watson still has a long list of hurdles to clear before IBM achieves its dream of making Watson the impeccable 'AI doctor'.

For more information, check out the official IEEE Spectrum article.

IBM CEO, Ginni Rometty, on bringing HR evolution with AI and its predictive attrition AI
IBM sued by former employees on violating age discrimination laws in workplace
IBM announces the launch of Blockchain World Wire, a global blockchain network for cross-border payments

Tech companies in the EU to face strict regulation on terrorist content: one-hour takedown limit, upload filters, and private Terms of Service

Fatema Patrawala
08 Apr 2019
11 min read
Countries around the world are seeking to exert more control over content on the internet, and, by extension, over their citizens. Acts of terrorism now have an online dimension too, with material like that from the recent Christchurch shooting proliferating as supporters upload it to any media platform they can reach. Lawmakers around the world have had enough, and this year they hope to enact new legislation that will hold big tech companies like Facebook and Google more accountable for any terrorist-related content they host.

The Australian parliament passed legislation to crack down on violent videos on social media. Recently, Sen. Elizabeth Warren, a US 2020 presidential hopeful, proposed stronger antitrust laws and breaking up big tech companies like Amazon, Google, Facebook, and Apple, and on April 3 she introduced the Corporate Executive Accountability Act, a new piece of legislation that would make it easier to criminally charge company executives when Americans' personal data is breached. The Washington Post reports that the UK has drafted an aggressive new plan to penalise Facebook, Google, and other tech giants that don't stop the spread of harmful content online. Last year, the German parliament enacted the NetzDG law, requiring large social media sites to remove posts that violate certain provisions of the German code, including broad prohibitions on "defamation of religion," "hate speech," and "insult." The removal obligation is triggered not by a court order but by complaints from users, and companies must remove the posts within 24 hours or seven days, facing steep fines if they fail to do so.

Joining the bandwagon, Europe has drafted the EU Regulation on preventing the dissemination of terrorist content online. The legislation was first proposed by the EU last September as a response to the spread of ISIS propaganda online, which encouraged further attacks. It covers recruiting materials such as displays of a terrorist organization's strength, instructions for how to carry out acts of violence, and anything that glorifies the violence itself. Social media is an important part of terrorists' recruitment strategy, say backers of the legislation. "Whether it was the Nice attacks, whether it was the Bataclan attack in Paris, whether it's Manchester, [...] they have all had a direct link to online extremist content," says Lucinda Creighton, a senior adviser at the Counter Extremism Project (CEP), a campaign group that has helped shape the legislation.

The new laws require platforms to take down any terrorism-related content within an hour of a notice being issued, force them to use a filter to ensure it's not re-uploaded, and, if they fail in either of these duties, allow governments to fine companies up to 4 percent of their global annual revenue. For a company like Facebook, which earned close to $17 billion in revenue last year, that could mean fines of as much as $680 million (around €600 million). Advocates of the legislation say it's a set of common-sense proposals designed to prevent online extremist content from turning into real-world attacks. But critics, including internet freedom think tanks and big tech firms, claim the legislation threatens the principles of a free and open internet and may jeopardize the work being done by anti-terrorist groups.

The proposals are currently working their way through the committees of the European Parliament, so a lot could change before the legislation becomes law. Both sides want to find a balance between allowing freedom of expression and stopping the spread of extremist content online, but they have very different ideas about where this balance lies.

Why is the legislation needed?

Terrorists use social media to promote themselves, just like big brands do. Organizations such as ISIS use online platforms to radicalize people across the globe. Those people may then travel to join the organization's ranks in person or commit terrorist attacks in support of ISIS in their home countries. At its peak, ISIS had a devastatingly effective social media strategy, which both instilled fear in its enemies and recruited new supporters. In 2019, the organization's physical presence in the Middle East has been all but eliminated, but the legislation's supporters argue that this means there's an even greater need for tougher online rules: as the group's physical power has diminished, the online war of ideas is more important than ever.

In the recent attack in New Zealand, the alleged shooter, identified as a 28-year-old Australian man, Brenton Tarrant, announced the attack on the anonymous-troll message board 8chan. He posted images of the weapons days before the attack and made an announcement an hour before the shooting. On 8chan, Facebook, and Twitter, he also posted links to a 74-page manifesto, titled "The Great Replacement," blaming immigration for the displacement of whites in Oceania and elsewhere. The manifesto cites "white genocide" as a motive for the attack and calls for "a future for white children" as its goal. He then live-streamed the attacks on Facebook and YouTube, and posted a link to the stream on 8chan.

"Every attack over the last 18 months or two years or so has got an online dimension. Either inciting or in some cases instructing, providing instruction, or glorifying," Julian King, a British diplomat and European commissioner for the Security Union, told The Guardian when the laws were first proposed. The increasing frequency with which terrorists become "self-radicalized" by online material underlines the importance of the proposed laws.

One-hour takedown limit, upload filters, and private Terms of Service

The one-hour takedown is one of the core obligations for tech firms proposed by the legislation. Under the proposals, each EU member state will designate a so-called "competent authority." It's up to each member state to decide exactly how this body operates, but the legislation says it is responsible for flagging problematic content. This includes videos and images that incite terrorism, that provide instructions for how to carry out an attack, or that otherwise promote involvement with a terrorist group. Once content has been identified, this authority would send a removal order to the platform that's hosting it, which can then delete it or disable access for any users inside the EU. Either way, action needs to be taken within one hour of a notice being issued. It's a tight time limit, but removing content this quickly is important to stop its spread, according to Creighton. This obligation is similar to voluntary rules already in place that encourage tech firms to take down content flagged by law enforcement and other trusted agencies within an hour.

Another part is the addition of a legally mandated upload filter, which would hypothetically stop the same pieces of extremist content from being continuously re-uploaded after being flagged and removed, although these filters have sometimes been easy to bypass in the past. "The frustrating thing is that [extremist content] has been flagged with the tech companies, it's been taken down and it's reappearing a day or two or a week later," Creighton says. "That has to stop and that's what this legislation targets."

The other part concerns the removal of content under platforms' private Terms of Service (TOS) rather than national law, which encourages platforms to take down more material than the law actually requires. This effectively increases the power of authorities in any EU member state to suppress information that is legal elsewhere in the EU. For example, authorities in Hungary and authorities in Sweden may disagree about whether a news organization sharing an interview with a current or former member of a terrorist organization is "promoting" or "glorifying" terrorism. Or they may differ on the legitimacy of a civil society organization's advocacy on complex issues in Chechnya, Israel, or Kurdistan. This regulation gives platforms reason to use their TOS to accommodate whichever authority wants such content taken down, and to apply that decision to users everywhere.

What's the problem with the legislation?

Critics say that the upload filter could be used by governments to censor their citizens, and that aggressively removing extremist content could prevent non-governmental organizations from being able to document events in war-torn parts of the world. One prominent opponent is the Center for Democracy and Technology (CDT), a think tank funded in part by Amazon, Apple, Facebook, Google, and Microsoft. Earlier this year, it published an open letter to the European Parliament, saying the legislation would "drive internet platforms to adopt untested and poorly understood technologies to restrict online expression." The letter was co-signed by 41 campaigners and organizations, including the Electronic Frontier Foundation, Digital Rights Watch, and Open Rights Group. "These filtering technologies are certainly being used by the big platforms, but we don't think it's right for government to force companies to install technology in this way," the CDT's director for European affairs, Jens-Henrik Jeppesen, told The Verge in an interview.

Removing certain content, even if a human moderator has correctly identified it as extremist in nature, could prove disastrous for the human rights groups that rely on it to document attacks. In the case of Syria's civil war, for instance, footage of the conflict is one of the only ways to prove when human rights violations occur. But between 2012 and 2018, Google took down over 100,000 videos of attacks carried out in Syria's civil war, destroying vital evidence of what took place. The Syrian Archive, an organization that aims to verify and preserve footage of the conflict, has been forced to back up footage on its own site to prevent the records from disappearing.

Opponents of the legislation like the CDT also say that the filters could end up acting like YouTube's frequently criticized Content ID system. Content ID allows copyright owners to file takedowns on videos that use their material, but the system sometimes removes videos posted by their original owners, can misidentify original clips as copyrighted, and can be easily circumvented.

Opponents of the legislation also believe that the current voluntary measures are enough to stop the flow of terrorist content online. They claim the majority of terrorist content has already been removed from the major social networks, and that a user would have to go out of their way to find such content on a smaller site. "It is disproportionate to have new legislation to see if you can sanitize the remaining 5 percent of available platforms," Jeppesen says. Human rights organizations need to be able to view this content, no matter how troubling it might be, in order to investigate war crimes. Their independence from governments is what makes their work valuable, but it could also mean they're shut out under the new legislation. Creighton, however, doesn't believe free and public access to this information is the answer. She argues that needing to "analyze and document recruitment to ISIS in East London" isn't a good enough excuse to leave content on the internet if the existence of that content "leads to a terrorist attack in London, or Paris or Dublin."

The legislation is currently working its way through the European Parliament, and its exact wording could yet change. At the time of publication, the legislation's lead committee is due to vote on its report on the draft regulation. After that, it must proceed through the trilogue stage, where the European Commission, the Council of the European Union, and the European Parliament debate the contents of the legislation, before it can finally be voted into law by the European Parliament. Because the bill is so far from being passed, neither its opponents nor its supporters believe a final vote will take place any sooner than the end of 2019. That's because the European Parliament's current term ends next month, and elections must take place before the next term begins in July.

Here's the link to the proposed bill by the European Commission.

How social media enabled and amplified the Christchurch terrorist attack
Tech regulation to an extent of sentence jail: Australia's 'Sharing of Abhorrent Violent Material Bill' to Warren's 'Corporate Executive Accountability Act'
EU's antitrust Commission sends a "Statements of Objections" to Valve and five other video game publishers for "geo-blocking" purchases

Eero’s acquisition by Amazon creates a financial catastrophe for investors and employees

Savia Lobo
08 Apr 2019
4 min read
Last month, Amazon announced that it had acquired the mesh Wi-Fi router company Eero for $97 million. However, the deal, which sounded full of potential, left Eero's investors and employees with a financial catastrophe. Mashable, which first reported on Amazon's acquisition, reported that Eero executives brought home multi-million dollar bonuses of around $30 million and eight-figure salary increases. Others did not fare as well.

According to Mashable, "Investors took major hits, and the Amazon acquisition rendered Eero stock worthless: $0.03 per share, down from a common stock high of $3.54 in July 2017. It typically would have cost around $3 for employees to exercise their stock, meaning they would actually lose money if they tried to cash out. Former and current Eero employees who chose not to exercise those options are now empty-handed. And those who did exercise options, investing their financial faith in the company, have lost money."

Eero's devices, the first mesh Wi-Fi routers, hit the market in 2016, but companies such as Luma and NetGear launched similar products the following year. According to a former Eero employee, another major challenge came when Google launched its own mesh network, Google Wifi, in late 2016 for just $299, whereas Eero's was priced at $500. To stay ahead of the curve, Eero later built a smart home security system named Hive, and Google again produced a similar product, Nest Secure. Eero subsequently abandoned Hive, which led to a period of confusion. "The day they killed [Hive] was the day the company changed," a former employee told Mashable. "After Eero employees returned from the holidays, 20 percent of the staff was cut. Next came massive attrition. An ex-employee described it as a period of 'desperate fear.' Morale was so low that HR disabled group emailing and prohibited employees from sending out goodbye emails to say they were leaving," Mashable reports.

After Eero announced the acquisition last month, neither Eero nor Amazon disclosed the specifics of the deal, which fueled employees' anger. Per Mashable, "Employees tried to guess from news reports and social media what the deal meant for them. When the stock price leaked, some ex-employees breathed a sigh of relief that they didn't exercise their options in the first place. Others were left with worthless stock and disappointment." All employees received a letter dated February 15 which said that they had four days to decide what to do with their Eero shares; some received the letter on or after the deadline.

Source: Mashable

The employees who chose to purchase or exercise their stock received a "phonebook-sized" packet of dense financial information, including the acquisition terms. Nick Weaver, Eero's co-founder, wrote in the introduction, "Unfortunately, the transaction will not result in the financial return we all hoped for." Rob Chandra, a partner at Avid Park Ventures and a lecturer at UC Berkeley's Haas business school, said, "One obvious way you can judge whether it was a great exit or not is if the exit valuation is lower than the amount of capital that was invested in the startup. So it's not a great exit."

"The documents state that after transaction costs and debt, the actual price will be closer to $54.6 million. That means that Amazon is covering around $40 million of the debt that Eero owes. Ex-employees believe the debt to be from hardware manufacturing costs, since they said that Eero took on corporate financing to actually manufacture the products," Mashable reports. Jeff Scheinrock, a professor at the UCLA Anderson School of Management, said, "What this says about it was that Eero was cash strapped. A lot of this money is going to pay off debts. They were having difficulty and probably couldn't raise additional money, so they had to look for an exit."

To know more about this news in detail, head over to Mashable's complete coverage.

SUSE is now an independent company after being acquired by EQT for $2.5 billion
JFrog acquires DevOps startup 'Shippable' for an end-to-end DevOps solution
Amazon buys 'Eero' mesh router startup, adding fuel to its in-house Alexa smart home ecosystem ambitions

Fabrice Bellard, the creator of FFmpeg and QEMU, introduces a lossless data compressor which uses neural networks

Bhagyashree R
08 Apr 2019
3 min read
Last month, Fabrice Bellard and his team published a paper named Lossless Data Compression with Neural Networks. The paper describes lossless data compressors built on pure neural network models based on Long Short-Term Memory (LSTM) and Transformer architectures.

How does this model work?

The compressor uses the traditional predictive approach: at each step, the encoder computes the probability vector of the next symbol value with the help of the neural network model, conditioned on all the preceding symbols. The actual symbol value is then encoded with an arithmetic encoder, and the model is updated with the knowledge of that symbol. Since the decoder works symmetrically, with both the encoder and decoder updating their models identically, there is no need to transmit the model parameters.

To improve the compression ratio and speed, a preprocessing stage was added. For the small LSTM models, the team reused the text preprocessor from CMIX and lstm-compress so that the comparison remains meaningful. The larger models use a subword-based preprocessor in which each symbol represents a sequence of bytes.

The model relies on arithmetic coding, a standard compression technique, and tries to make it adaptive. Here's an example by a Reddit user that explains how this works:

"The rough frequency of `e' in English is about 50%. But if you just saw this partial sentence "I am going to th", the probability/frequency of `e' skyrockets to, say, 98%. In standard arithmetic coding scheme, you would still parametrize you encoder with 50% to encode the next "e" despite it's very likely (~98%) that "e" is the next character (you are using more bits than you need in this case), while with the help of a neural network, the frequency becomes adaptive."

To ensure that the encoder and decoder use exactly the same model, the authors developed a custom C library called LibNC, which implements the various operations needed by the models. It has no dependency on any other library and exposes a C API.

Results of the experiment

The model's performance was evaluated on the enwik8 Hutter Prize benchmark. Decompression is slower: about 1.5x slower for the LSTM model and 3x slower for the Transformer model. However, the models are simple to describe, and their memory consumption is reasonable compared to other compressors that reach a similar compression ratio. As for the compression ratio itself, the models have yet to match CMIX, a lossless data compressor that achieves an optimized compression ratio at the cost of high CPU and memory usage. In all the experiments, the Transformer model performs worse than the LSTM model, even though Transformers give the best results in language modeling benchmarks.

To know more in detail, check out the paper, Lossless Data Compression with Neural Networks.

Microsoft open-sources Project Zipline, its data compression algorithm and hardware for the cloud
Making the Most of Your Hadoop Data Lake, Part 1: Data Compression
Interpretation of Functional APIs in Deep Neural Networks by Rowel Atienza
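To ground the predictive scheme this article describes, here is a minimal sketch of model-driven adaptive coding. It is a toy under stated assumptions: a simple count-based predictor stands in for the LSTM/Transformer model, and instead of producing an actual arithmetic-coded bitstream it sums the ideal code length (-log2 of each symbol's predicted probability) that such a coder would approach.

```python
import math
from collections import defaultdict

class AdaptivePredictor:
    """Stand-in for the neural model: predicts P(next symbol) from counts
    of the symbols seen so far, with Laplace smoothing over a byte alphabet."""
    def __init__(self, alphabet_size=256):
        self.alphabet_size = alphabet_size
        self.counts = defaultdict(int)
        self.total = 0

    def prob(self, symbol):
        # Smoothed probability of `symbol` given everything seen so far.
        return (self.counts[symbol] + 1) / (self.total + self.alphabet_size)

    def update(self, symbol):
        # Encoder and decoder both call this after each symbol, so their
        # models stay identical and no parameters need to be transmitted.
        self.counts[symbol] += 1
        self.total += 1

def ideal_code_length_bits(data: bytes) -> float:
    """Sum of -log2 p(symbol): the size an ideal arithmetic coder driven
    by this predictor would approach, ignoring coder overhead."""
    model = AdaptivePredictor()
    bits = 0.0
    for symbol in data:
        bits += -math.log2(model.prob(symbol))
        model.update(symbol)
    return bits

if __name__ == "__main__":
    text = b"the theory of the thing is that the predictor adapts" * 20
    adaptive_bits = ideal_code_length_bits(text)
    uniform_bits = 8 * len(text)  # cost with a fixed uniform model
    print(f"uniform model : {uniform_bits:.0f} bits")
    print(f"adaptive model: {adaptive_bits:.0f} bits")
```

Because the predictor is only updated after each symbol is coded, a decoder running the same update rule stays in sync, which is exactly why no model parameters need to be transmitted. In terms of the Reddit example above, the cost of coding 'e' drops from -log2(0.5) = 1 bit to -log2(0.98), roughly 0.03 bits, once the context makes it nearly certain.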


Researchers introduce a new generative model, HoloGAN, that learns 3D representation from natural images

Natasha Mathur
08 Apr 2019
4 min read
A group of researchers, namely Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang, published a paper titled 'HoloGAN: Unsupervised learning of 3D representations from natural images' last week. In the paper, the researchers propose a generative adversarial network (GAN), called HoloGAN, for unsupervised learning of 3D representations from natural images. HoloGAN works by adopting strong 'inductive biases' about the 3D world. The paper states that commonly used generative models depend on 2D kernels to produce images and make few assumptions about the 3D world, which is why they tend to create blurry images in tasks that require a strong 3D understanding. HoloGAN, however, learns a 3D representation of the world and renders this representation in a realistic manner. It can be trained on unlabelled 2D images, without requiring pose labels, 3D shapes, or multiple views of the same objects. "Our experiments show that using explicit 3D features enables HoloGAN to disentangle 3D pose and identity, which is further decomposed into shape and appearance, while still being able to generate images with similar or higher visual quality than other generative models", states the paper.

How does HoloGAN work?

HoloGAN first builds a 3D representation, which is then transformed to a target pose, projected to 2D features, and rendered to generate the final image. It learns perspective projection and rendering of 3D features from scratch with the help of a projection unit. Finally, to generate new views of the same scene, 3D rigid-body transformations are applied to the learned 3D features, and the results are visualized using a neural renderer. This produces sharper results than performing 3D transformations in a high-dimensional latent vector space.

To learn 3D representations from 2D images without labels, HoloGAN extends traditional unconditional GANs by introducing a strong inductive bias about the 3D world into the generator network. During training, random poses are sampled from a uniform distribution, and the 3D features are transformed using these poses before being rendered into images. A variety of datasets are used to train HoloGAN, namely Basel Face, CelebA, Cats, Chairs, Cars, and LSUN bedroom. HoloGAN is trained at a resolution of 64×64 pixels for Cats and Chairs, and 128×128 pixels for Basel Face, CelebA, Cars, and LSUN bedroom. HoloGAN generates its 3D representation from a learned constant tensor; the random noise vector is instead treated as a "style" controller and is mapped to affine parameters for adaptive instance normalization (AdaIN) by a multilayer perceptron.

Results and Conclusion

The researchers show that HoloGAN can generate images with visual fidelity comparable to or greater than that of other 2D-based GAN models. HoloGAN can also learn to disentangle challenging factors in an image, such as 3D pose, shape, and appearance. The paper further shows that HoloGAN can learn meaningful 3D representations across datasets of varying complexity. "We are convinced that explicit deep 3D representations are a crucial step forward for both the interpretability and controllability of GAN models, compared to existing explicit or implicit 3D representations", reads the paper.
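To make the generator pipeline above more concrete, here is a minimal PyTorch-style sketch. It is not the authors' implementation: the layer sizes, the single AdaIN block, the azimuth-only rotation helper, and the tiny output resolution are all simplifying assumptions made for brevity; the real model stacks several 3D and 2D blocks and also rotates in elevation.

```python
# A rough sketch of a HoloGAN-style generator (illustrative, not the paper's code).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaIN3d(nn.Module):
    """Adaptive instance norm: the noise vector z is mapped by an MLP to
    per-channel scale/shift, acting as the 'style' controller described above."""
    def __init__(self, z_dim, channels):
        super().__init__()
        self.affine = nn.Linear(z_dim, 2 * channels)

    def forward(self, x, z):
        gamma, beta = self.affine(z).chunk(2, dim=1)
        x = F.instance_norm(x)
        return x * (1 + gamma[..., None, None, None]) + beta[..., None, None, None]

def rotate_volume(x, azimuth):
    """Rigid-body transform of the 3D feature volume (rotation about the
    vertical axis only, for brevity) via a resampling grid."""
    cos, sin = torch.cos(azimuth), torch.sin(azimuth)
    zeros, ones = torch.zeros_like(cos), torch.ones_like(cos)
    theta = torch.stack([                      # one 3x4 affine matrix per sample
        torch.stack([cos, zeros, -sin, zeros], dim=1),
        torch.stack([zeros, ones, zeros, zeros], dim=1),
        torch.stack([sin, zeros, cos, zeros], dim=1),
    ], dim=1)
    grid = F.affine_grid(theta, x.shape, align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

class HoloGANGenerator(nn.Module):
    def __init__(self, z_dim=128, ch=64):
        super().__init__()
        # Learned constant 3D tensor: the starting 3D representation.
        self.const = nn.Parameter(torch.randn(1, ch, 4, 4, 4))
        self.conv3d = nn.Conv3d(ch, ch, 3, padding=1)
        self.adain = AdaIN3d(z_dim, ch)
        # Projection unit: fold the depth axis into channels, then a 1x1 conv.
        self.project = nn.Conv2d(ch * 4, ch, 1)
        self.render = nn.Sequential(           # 2D "neural renderer" to RGB
            nn.Upsample(scale_factor=4), nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z, azimuth):
        n = z.size(0)
        h = self.const.expand(n, -1, -1, -1, -1)
        h = F.leaky_relu(self.adain(self.conv3d(h), z), 0.2)
        h = rotate_volume(h, azimuth)           # transform to the target pose
        h = h.flatten(1, 2)                     # (N, C*D, H, W): project 3D to 2D features
        h = F.leaky_relu(self.project(h), 0.2)
        return self.render(h)

if __name__ == "__main__":
    g = HoloGANGenerator()
    z = torch.randn(2, 128)
    pose = torch.rand(2) * 2 * math.pi          # random azimuths, as during training
    print(g(z, pose).shape)                     # torch.Size([2, 3, 16, 16])
```

The design point this mirrors is that the pose enters only through the rigid-body transform of the 3D feature volume, while the noise vector z enters only through AdaIN, which is what allows pose and identity to be disentangled.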
However, the researchers note that while HoloGAN successfully separates pose from identity, its performance largely depends on the variety and distribution of poses in the training dataset. The paper cites the CelebA and Cats datasets, where the model cannot recover elevation as well as it recovers azimuth, because most face images are taken at eye level and therefore contain limited variation in elevation.

Future work

The researchers would like to explore learning the distribution of poses from the training data in an unsupervised manner, to handle uneven pose distributions. Other directions include further disentanglement of objects' shapes and appearances (texture and illumination). The researchers are also looking into combining HoloGAN with training techniques such as progressive GANs to produce higher-resolution images.

For more information, check out the official research paper.

DeepMind researchers provide theoretical analysis on recommender system, 'echo chamber' and 'filter bubble effect'
Google AI researchers introduce PlaNet, an AI agent that can learn about the world using only images
Facebook researchers show random methods without any training can outperform modern sentence embeddings models for model classification