
Tech News

3709 Articles

IPython 7.0 releases with AsyncIO Integration and new Async libraries

Natasha Mathur
28 Sep 2018
2 min read
The IPython team released version 7.0 of IPython yesterday. IPython is a powerful Python interactive shell with features such as advanced tab completion, syntax coloring, and more. IPython 7.0 introduces features such as AsyncIO integration, support for new async libraries, and async support in notebooks. IPython (Interactive Python) provides a rich toolkit for interactive computing in multiple programming languages. It's the Jupyter kernel for Python, used by millions of users. Let's discuss the key features of the IPython 7.0 release.

AsyncIO Integration

IPython 7.0 comes with integration between IPython and AsyncIO. This means you can run async code at the prompt without having to set up or drive an asyncio event loop yourself (a short sketch follows at the end of this article). AsyncIO is a library that lets you write concurrent code using the async/await syntax. It is used as a foundation for multiple Python asynchronous frameworks that provide high-performance network and web servers, database connection libraries, distributed task queues, and more. Just remember that asyncio won't magically make your code faster, but it will make concurrent code easier to write.

New Async Libraries (Curio and Trio integration)

Python has the keywords async and await, which simplify asynchronous programming and standardize it around asyncio. They also allow experimentation with new paradigms for asynchronous libraries. Two such libraries, Curio and Trio, are now supported in IPython 7.0. Both explore ways to write asynchronous programs, and how to use async, await, and coroutines when starting from a blank slate. Curio is a library for performing concurrent I/O and common system programming tasks; it makes use of Python coroutines and the explicit async/await syntax. Trio is an async/await-native I/O library for Python that lets you write programs that do multiple things at the same time with parallelized I/O.

Async support in Notebooks

For Jupyter users, async code will now work in a notebook when using ipykernel. With IPython 7.0, async works with all frontends that support the Jupyter protocol, including the classic Notebook, JupyterLab, Hydrogen, nteract desktop, and nteract web. By default, code runs in the existing asyncio/tornado loop that runs the kernel.

For more information, check out the official release notes.

Make Your Presentation with IPython
How to connect your Vim editor to IPython
Increase your productivity with IPython
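As promised above, here is a minimal sketch of what the new integration enables; asyncio.sleep simply stands in for real asynchronous work such as a network call:

import asyncio

async def get_data():
    await asyncio.sleep(1)  # stand-in for real async work, e.g. a network call
    return 42

# Outside a coroutine, a top-level `await get_data()` is a SyntaxError in
# plain Python, but at the IPython 7.0 prompt you can type it directly:
#   In [1]: await get_data()
#   Out[1]: 42
print(asyncio.get_event_loop().run_until_complete(get_data()))  # plain-Python equivalent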


Introducing Google's Tangent: A Python library with a difference

Sugandha Lahoti
14 Nov 2017
3 min read
The Google Brain team, in a recent blog post, announced the arrival of Tangent, a free and open source Python library for ahead-of-time automatic differentiation.

Most machine learning algorithms require the calculation of derivatives and gradients. Doing this manually is time-consuming as well as error-prone. Automatic differentiation, or autodiff, is a set of techniques to accurately compute the derivatives of numeric functions expressed as computer programs. Autodiff techniques make it possible to run large-scale machine learning models with high performance and better usability.

Tangent uses source code transformation (SCT) in Python to perform automatic differentiation. It takes Python source code as input and produces new Python functions as output; the new Python functions calculate the gradients of the input. This makes the automatically generated derivative code as readable as the rest of the program.

In contrast, TensorFlow and Theano, the two most popular machine learning frameworks, do not perform autodiff on Python code. They instead use Python as a metaprogramming language to define a data flow graph on which SCT is performed. This is at times confusing to the user, since it involves a separate programming paradigm.

(Image source: https://github.com/google/tangent/blob/master/docs/toolspace.png)

Tangent has a one-function API:

import tangent
df = tangent.grad(f)

For printing out derivatives:

import tangent
df = tangent.grad(f, verbose=1)

Because it uses SCT, Tangent generates a new Python function. This new function follows standard semantics, and its source code can be inspected directly. This makes it easy for users to understand and debug, and it adds no runtime overhead.

Another highlight is that Tangent is easily compatible with TensorFlow and NumPy. It is high-performing and built on Python, which has a large and growing community. For processing arrays of numbers, TensorFlow Eager functions are also supported in Tangent. The library also auto-generates derivatives of code that contains if statements and loops, and it provides easy methods to generate custom gradients. It improves usability by using abstractions for easily inserting logic into the generated gradient code.

Tangent provides forward-mode automatic differentiation. The authors present this as an alternative to backpropagation (reverse mode), which fails in cases where the number of outputs exceeds the number of inputs; forward-mode autodiff runs in time proportional to the number of input variables.

According to the GitHub repository, "Tangent is useful to researchers and students who not only want to write their models in Python but also read and debug automatically-generated derivative code without sacrificing speed and flexibility."

Currently, Tangent does not support classes and closures, although the developers do plan on incorporating classes. This will enable class definitions of neural networks and parameterized functions.

Tangent is still in the experimental stage. In the future, the developers plan to extend it to other numeric libraries and to support more aspects of the Python language, including closures, classes, and more NumPy and TensorFlow functions. They also plan to add more advanced autodiff and compiler functionality.
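As a quick illustration of the one-function API shown above, here is a hedged sketch, assuming the tangent package is installed; the function f below is our own toy example, not one from the announcement:

import tangent

def f(x):
    return x * x + 3.0 * x

df = tangent.grad(f)  # source-to-source: returns a new Python function computing df/dx
print(df(2.0))        # d/dx (x^2 + 3x) = 2x + 3, so this prints 7.0

Because df is an ordinary generated Python function, you can also read its source to see exactly how the derivative is computed.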
To summarize, here's a bullet list of key features of Tangent:

- Auto differentiation capabilities
- Code that is easy to interpret, debug, and modify
- Easy compatibility with TensorFlow and NumPy
- Custom gradients
- Forward-mode autodiff
- High performance and optimization

You can learn more about the project on its official GitHub.


Epic Games CEO calls Google “irresponsible” for disclosing the security flaw in the Fortnite Android Installer before a patch was ready

Natasha Mathur
28 Aug 2018
4 min read
Epic Games CEO Tim Sweeney has accused Google of being “irresponsible” for publicly disclosing a major security flaw in the Fortnite Android Installer before a patch was widely available.

After the Fortnite installer went live, Google security engineers pointed out a security bug. It showed that installing the file (with the .apk extension) shared by Epic Games enabled hackers to push malicious apps that could take over a user's device. To make things worse, the .apk file shared by Epic Games is the first step in installing the Fortnite game. As mentioned in the Google thread, “Any app with the WRITE_EXTERNAL_STORAGE permission can substitute the APK immediately after the download is completed and the fingerprint is verified. This is easily done using a FileObserver. The Fortnite Installer will proceed to install the substituted (fake) APK”.

Epic was quick to respond and took appropriate action to secure newer Android devices against the attack. Additionally, Epic had asked Google for 90 days before making the security issue public, as that would give users enough time to update their installers. However, last Friday, Google released a thread titled “Fortnite Installer downloads are vulnerable to hijacking” that describes the vulnerability in the installer, clearly not granting Epic the requested 90 days. Google proceeded to “unrestrict the issue in line with Google’s standard disclosure practices”.

A Google spokesperson said, “User security is our top priority, and as part of our proactive monitoring for malware we identified a vulnerability in the Fortnite installer. We immediately notified Epic Games and they fixed the issue”.

Epic Games didn't appreciate the move, and its CEO Tim Sweeney released a statement saying that “Epic genuinely appreciated Google’s effort to perform an in-depth security audit of Fortnite immediately following our release on Android, and share the results with Epic so we could speedily issue an update to fix the flaw they discovered. However, it was irresponsible of Google to publicly disclose the technical details of the flaw so quickly, while many installations had not yet been updated and were still vulnerable.”

Sweeney also took to Twitter to express his disapproval of the situation.

https://twitter.com/TimSweeneyEpic/status/1033225118405804032
https://twitter.com/TimSweeneyEpic/status/1034117758332661760

He even went so far as to say that this was Google's attempt to “score cheap PR points” against Epic, since Epic decided to release Fortnite via its own website instead of the Google Play Store. That decision left Google out of the 30% cut it would have received from in-app purchases made on Fortnite for Android.

“Google’s security analysis efforts are appreciated and benefit the Android platform, however a company as powerful as Google should practice more responsible disclosure timing than this, and not endanger users in the course of its counter-PR efforts against Epic’s distribution of Fortnite outside of Google Play”, as mentioned on the Fortnite blog.

https://twitter.com/TimSweeneyEpic/status/1033226094357504000

This is not the first time Google has been criticized for this; Microsoft has also accused it of disclosing vulnerabilities before patches were made widely available. Whether this was really a PR move by Google against Epic cannot be verified.
Epic Games has now rolled out two-factor authentication (2FA) to “help protect user accounts from unauthorized access by requiring them to enter an additional code when they sign in”.

Google’s incognito location tracking scandal could be the first real test of GDPR
1k+ Google employees frustrated with continued betrayal, protest against Censored Search engine project for China
Google gives Artificial Intelligence full control over cooling its data centers


Facebook again caught tracking Stack Overflow user activity and data

Amrata Joshi
14 May 2019
3 min read
Facebook has been in the news repeatedly because of its ethics and data privacy issues. From the Cambridge Analytica scandal to multiple hearings and fines against the company, Facebook has been surrounded by controversies for quite some time now. Lately, the Canadian and British Columbia privacy commissioners decided to take Facebook to Federal Court to seek an order over its privacy practices. And once again, the company makes headlines, this time for tracking users across Stack Overflow.

To explain this better: Stack Overflow directly links to Facebook profile pictures. You might be wondering, many third-party platforms allow such tracking, so what's the big deal in this one? The catch is that this linking unintentionally allows user activity throughout Stack Exchange to be tracked by Facebook, and, surprisingly, it also tracks the topics you are interested in! To explain further, let's take an example from a Stack Overflow user.

(Image source: Stack Overflow)

The user says, “Have a look: when I load a page containing any avatars hot-linked from Facebook, my browser automatically sends a request including a Facebook identifying cookie and the URL of the page I'm viewing on Stack Exchange. They don't just know that I'm visiting the site, they also get to know which topics I'm interested on throughout the network.”

Another user commented on the thread, “Facebook creates 'shadow' accounts for many people who don't have actual accounts (or at least, for people they can't find an actual account for) in order to consistently/reliably track/gather data to sell.”

A few others are complaining about their profile pictures being attributed directly to facebook.com domains. The browser is effectively making a request to Facebook that carries both the user's Facebook session cookie, which identifies the user, and a Referer header, which tells Facebook what page the user was on when the image was fetched (an illustrative sketch follows at the end of this piece).

How to save yourself from such creepy activity by Facebook?

A lot of users have suggested choosing which cookies to accept on each of the sites you visit. Blocking third-party cookies and setting the browser to remove cookies when it closes are also viable options, and manually removing cookies when quitting a browser is advisable. A few others have suggested using an ad blocker, which will keep users away from fishy sites. Enabling Strict Content Blocking in Firefox is suggested for security concerns.

The bigger concern is that other tech companies may also be collecting and manipulating user data, essentially playing around with our privacy. Just a few years ago, Google was trying to patent the collection of user data. It's surprising to see how the world is changing around us: we are forced to live in an era where the tech giants are data-minded.

To know more about this news, check out the Stack Overflow thread.

Facebook bans six toxic extremist accounts and a conspiracy theory organization
Facebook open-sources F14 algorithm for faster and memory-efficient hash tables
Facebook shareholders back a proposal to oust Mark Zuckerberg as the board’s chairperson
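As promised above, here is an illustrative Python sketch of the request a browser effectively makes when a page hot-links an avatar from a Facebook domain. The avatar URL is a hypothetical placeholder, and the cookie values are elided; only the shape of the request matters here:

import requests

# When a Stack Exchange page hot-links an avatar from a facebook.com domain,
# the browser's image request effectively carries both of these headers:
response = requests.get(
    "https://graph.facebook.com/12345/picture",  # hypothetical avatar URL
    headers={
        "Referer": "https://stackoverflow.com/questions/100",  # reveals the page being read
        "Cookie": "c_user=...; xs=...",  # Facebook session cookies identify the user
    },
)
print(response.status_code)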


Now there’s a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?

Vincy Davis
20 May 2019
5 min read
Climate change effects are now visible in countries around the globe. The world is witnessing phenomena like higher temperatures, flooding, melting ice, and much more. Many technologies have been invented in the last decade to help humans understand and adapt to these climatic changes.

Earlier this month, researchers from the Montreal Institute for Learning Algorithms, ConscientAI Labs, and Microsoft Research came up with a project that aims to generate images depicting accurate, vivid, and personalized outcomes of climate change using machine learning (ML) and Cycle-Consistent Adversarial Networks (CycleGANs). This will enable individuals to make more informed choices about their climate future by creating an understanding of the effects of climate change, while maintaining scientific credibility through climate model projections.

The project aims to develop an ML-based tool that will show, in a personalized way, the probable effect climate change will have on a specific location familiar to the viewer. Given an address, the tool will generate an image projecting transformations that are likely to occur there, based on a formal climate model. For the initial version, the generated images consist of houses and buildings specifically after flooding events.

The challenge in generating realistic images using CycleGANs is collecting the training data needed to extract the mapping function. The researchers manually searched open source photo-sharing websites for images of houses from various neighborhoods and settings, such as suburban detached houses, urban townhouses, and apartment buildings. They gathered over 500 images of non-flooded houses and the same number of flooded locations, and resized them to 300x300 pixels. The networks were trained using the publicly available PyTorch implementation. The CycleGAN model was trained on these images for 200 epochs, using the Adam solver with a batch size of 1, training from scratch with a learning rate of 0.0002. As per the CycleGAN training procedure, the learning rate is kept constant for the first 100 epochs and linearly decayed to zero over the next 100 epochs (a rough sketch of this schedule follows below).

Project Output and Future Plan

The trained CycleGAN model was successful in learning an adequate mapping between grass and water, which could be applied to generate fairly realistic images of flooded houses. It works best with single-family, suburban-type houses surrounded by an expanse of grass. Of the 80 images in the test set, about 70% were successfully mapped to realistically flooded houses. This initial version of the CycleGAN model illustrates the feasibility of applying a generative model to create personalized images of an extreme climate event, i.e., flooding, that is expected to increase in frequency based on climate change projections.

Subsequent versions of the model will integrate more varied types of houses and surroundings, as well as different types of climate-change-related extreme events (e.g., droughts, hurricanes, wildfires, air pollution), depending on the expected impacts at a given location, as well as forecast time horizons. There's still scope for improvement with regard to the color scheme of the generated images and the visual artifacts.
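For reference, here is what that constant-then-linear-decay schedule looks like in PyTorch. This is a minimal sketch under stated assumptions: the Linear model is a stand-in for the actual CycleGAN networks, and the training step itself is elided:

import torch
from torch import nn, optim

model = nn.Linear(10, 10)  # stand-in for the actual CycleGAN generator/discriminator
optimizer = optim.Adam(model.parameters(), lr=0.0002)

def lr_lambda(epoch):
    # constant for the first 100 epochs, then linear decay to zero over the next 100
    return 1.0 - max(0, epoch - 100) / 100.0

scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_lambda)

for epoch in range(200):
    # ... one training pass over the ~500 flooded/non-flooded image pairs would go here ...
    optimizer.step()
    scheduler.step()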
Furthermore, to channel the public's emotional response into behavioral change or action, the researchers are planning another improvement to the model called 'choice knobs'. These will enable users to visually see the impact of their personal choices, such as deciding to use more public transportation, as well as the impact of broader policy decisions, such as a carbon tax or increasing renewable portfolio standards. The project's greater aim is to help the general population move toward more visible public support for climate change mitigation steps on a national level, facilitating governmental interventions and helping make the required rapid changes to a global sustainable economy.

The researchers have stated that they need to explore more physical constraints to GAN training in order to incorporate more physical knowledge into these projections. This would enable a GAN model to transform a house to its projected flooded state while also taking into account the forecast simulations of the flooding event, represented by the physical variable outputs and probabilistic scenarios of a climate model for a given location.

Response to the project

A few developers have liked the idea of using technology to produce realistic images depicting the effect of climate change on one's own hometown, which may help people understand its adverse effects.

https://twitter.com/jameskobielus/status/1129392932988096513

Some developers, however, are not sure that showing people a picture of their house submerged in water will make any difference. A user on Hacker News comments, “The threshold for believing the effects of climate change has to change from reading/seeing to actually being there and touching it. Or some far more reliable system of remote verification has to be established”

Another user adds, “Is this a real paper? It's got to be a joke, right? a parody? It's literally a request to develop images to be used for propaganda purposes. And for those who will say that climate change is going to end the world, yeah, but that doesn't mean we should develop propaganda technology that could be used for some other political purpose.”

There are already many studies and much evidence to make people aware of the effects of climate change; depicting a picture of their house submerged in water is not going to move them any further. Climate change is already happening and affecting our day-to-day lives. What we need now are stronger approaches to analysing, mitigating, and adapting to these changes, and more government policies to fight them.

To know more details about the project, head over to the research paper.

Read More

Amazon employees get support from Glass Lewis and ISS on its resolution for Amazon to manage climate change risks
ICLR 2019 Highlights: Algorithmic fairness, AI for social good, climate change, protein structures, GAN magic, adversarial ML and much more
Responsible tech leadership or climate washing? Microsoft hikes its carbon tax and announces new initiatives to tackle climate change


Mozilla removes Avast and AVG extensions from Firefox to secure user data

Fatema Patrawala
05 Dec 2019
4 min read
Yesterday, Wladimir Palant, the creator of Adblock Plus, reported that Mozilla had removed four Firefox extensions made by Avast and its subsidiary AVG. Palant had also found credible reports of the extensions harvesting user data and browsing histories. The four extensions are Avast Online Security, AVG Online Security, Avast SafePrice, and AVG SafePrice. The first two are extensions that show warnings when navigating to known malicious or suspicious sites, while the last two are extensions for online shoppers, showing price comparisons, deals, and available coupons.

Avast and AVG extensions were caught in October

Mozilla removed the four extensions from its add-ons portal after receiving a report from Palant. Palant analyzed the Avast Online Security and AVG Online Security extensions in late October and found that the two were collecting much more data than they needed to work, including detailed user browsing history, a practice prohibited by both Mozilla and Google. He published a blog post on October 28 detailing his findings, but in a blog post dated today, he says he found the same behavior in the Avast and AVG SafePrice extensions as well. After his original blog post, Mozilla did not intervene to take down the extensions. Palant reported them to Mozilla developers again yesterday, and they removed all four add-ons within 24 hours.

“The Avast Online Security extension is a security tool that protects users online, including from infected websites and phishing attacks,” an Avast spokesperson told ZDNet. “It is necessary for this service to collect the URL history to deliver its expected functionality. Avast does this without collecting or storing a user's identification.”

“We have already implemented some of Mozilla's new requirements and will release further updated versions that are fully compliant and transparent per the new requirements,” the Avast spokesperson said. “These will be available as usual on the Mozilla store in the near future.”

Extensions still available on the Chrome browser

The four extensions are still available on the Chrome Web Store, according to Palant. "The only official way to report an extension here is the 'report abuse' link," he writes. "I used that one of course, but previous experience shows that it never has any effect. Extensions have only ever been removed from the Chrome Web Store after considerable news coverage," he added.

On Hacker News, users discussed how Avast extensions creepily trick browsers in order to inspect TLS/SSL packets. One of the users commented, “Avast even does some browser trickery to then be able to inspect tls/ssl packets. Not sure how I noticed that on a windows machine, but the owner was glad to uninstall it. As said on other comments, the built-in windows 10 defender AV is the least evil software to have enabled for somewhat a protected endpoint. The situation is desperate for AV publishers, they treat customers like sheep, the parallel with mafia ain't too far possible to make. It sorts of reminds me 20 years back when it was common discussion to have on how AV publishers first deployed a number of viruses to create a market. The war for a decent form of cyber security and privacy is being lost. It's getting worse every year. More money (billions) is poured into it. To no avail. I think we got to seriously show the example and reject closed source solutions all together, stay away from centralized providers, question everything we consume.
The crowd will eventually follow.”

Mozilla’s sponsored security audit finds a critical vulnerability in the tmux integration feature of iTerm2
Mozilla Thunderbird 78 will include OpenPGP support, expected to be released by Summer 2020
Mozilla introduces Neqo, Rust implementation for QUIC, new http protocol

Google-Landmarks, a novel dataset for instance-level image recognition

Sugandha Lahoti
06 Mar 2018
2 min read
Image retrieval and image recognition are fundamental problems in the machine learning and computer vision world. Image classification technology has shown remarkable progress over the past few years. An obstacle in this research, however, is the unavailability of large annotated datasets. Google has made an attempt to solve this challenge by introducing Google-Landmarks, a worldwide dataset for the recognition of human-made and natural landmarks.

This dataset was made with the intention of solving fine-grained and instance-level recognition problems. Examples include identifying important landmarks in images (the Eiffel Tower, Mount Fuji, the Taj Mahal, etc.), which account for a large portion of what people like to photograph. Landmark recognition can help predict landmark labels directly from image pixels, helping people better understand and organize their photo collections.

The Google-Landmarks dataset contains more than 2 million images depicting 30 thousand unique landmarks from across the world, a number of classes almost 30x larger than what is available in commonly used datasets.

(Figure: geographic distribution of landmarks in the Google-Landmarks dataset)

Google has also open-sourced Deep Local Features (DELF), an attentive local feature descriptor useful for large-scale instance-level image recognition, in order to advance research in this area. DELF detects and describes semantic local features that can be geometrically verified between images showing the same object instance. It is also optimized for landmark recognition.

Google-Landmarks is being released as part of the Landmark Recognition and Landmark Retrieval Kaggle challenges. The Landmark Recognition challenge calls for developers to build models that recognize the correct landmark (if any) in a dataset of challenging test images. In the Retrieval challenge, developers are given query images, and for each query they are expected to retrieve all database images containing the same landmarks (if any). Participants are encouraged to compete in both challenges, as the test set for both problems is the same. Participants may also use the training data from the recognition challenge to train models that could be useful for the retrieval challenge. However, there are no landmarks in common between the training/index sets of the two challenges. This challenge is the focal point of the CVPR’18 Landmarks workshop.

More details of the challenge and the dataset can be found on the Google research blog.


MikroORM 4.1: Let’s talk about performance from DailyJS - Medium

Matthew Emerick
15 Oct 2020
3 min read
I just shipped version 4.1 of MikroORM, the TypeScript ORM for Node.js, and I feel like this particular release deserves a bit more attention than a regular feature release.

In case you don’t know…

If you have never heard of MikroORM, it's a TypeScript data-mapper ORM with Unit of Work and Identity Map. It currently supports MongoDB, MySQL, PostgreSQL, and SQLite drivers. Key features of the ORM are:

- Implicit transactions
- ChangeSet based persistence
- Identity map

You can read the full introductory article here or browse through the docs.

So what changed?

This release had only one clear goal in mind — performance. It all started with an issue pointing out that flushing 10k entities in a single unit of work is very slow. While this kind of use case was never a target for me, I started to see all the possibilities the Unit of Work pattern offers.

Batch inserts, updates and deletes

The biggest performance killer was the number of queries — even if a query is as simple and optimised as possible, firing 10k of them will always be quite slow. For inserts and deletes, it was quite trivial to group all the queries. A bit more challenging were the updates — to batch those, MikroORM now uses case statements (a toy illustration follows at the end of this article). As a result, when you now flush changes made to one entity type, only one query per given operation (create/update/delete) will be executed. This makes a significant difference, as we are now executing a fixed number of queries (in fact, the changes are batched in chunks of 300 items).

JIT compilation

The second important change in 4.1 is JIT compilation. Under the hood, MikroORM now first generates simple functions for comparing and hydrating entities that are tailored to their metadata definitions. The main difference is that those generated functions access object properties directly (e.g. o.name) instead of dynamically (e.g. o[prop.name]), as all the information from the metadata is inlined there. This allows V8 to better understand the code, so it is able to run it faster.

Results

Here are the results for a simple 10k-entities benchmark: on average, inserting 10k entities takes around 70ms with SQLite; updates are a tiny bit slower. You can see results for the other drivers here: https://github.com/mikro-orm/benchmark.

Acknowledgement

Kudos to Marc J. Schmidt, the author of the initial issue, as without his help this would probably never have happened, or at least not in the near future. Thanks a lot!

Like MikroORM? ⭐️ Star it on GitHub and share this article with your friends. If you want to support the project financially, you can do so via GitHub Sponsors.

MikroORM 4.1: Let’s talk about performance was originally published in DailyJS on Medium.
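As promised above, here is a toy illustration of the case-statement batching idea. It is plain Python assembling SQL by hand, purely to show the shape of the query; MikroORM's actual implementation is TypeScript, presumably parameterizes the values, and chunks changes in batches of 300. The author table and name column are made up:

rows = [(1, "Alice"), (2, "Bob"), (3, "Carol")]  # (id, new name) pairs to flush

cases = " ".join(f"WHEN id = {row_id} THEN '{name}'" for row_id, name in rows)
ids = ", ".join(str(row_id) for row_id, _ in rows)
sql = f"UPDATE author SET name = CASE {cases} END WHERE id IN ({ids});"

print(sql)
# UPDATE author SET name = CASE WHEN id = 1 THEN 'Alice' WHEN id = 2 THEN 'Bob'
# WHEN id = 3 THEN 'Carol' END WHERE id IN (1, 2, 3);

One UPDATE statement instead of three, and the saving grows with the number of changed rows.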


Django 2.1 released with new model view permission and more

Sugandha Lahoti
06 Aug 2018
3 min read
Django 2.1 has been released with changes to model view permissions, the database backend API, and additional new features. Django 2.1 supports Python 3.5, 3.6, and 3.7.

Django 2.1 is a time-based release. The schedule followed was:

- May 14, 2018: Django 2.1 alpha; feature freeze.
- June 18: Django 2.1 beta; non-release-blocking bug fix freeze.
- July 16: Django 2.1 RC 1; translation string freeze.
- ~August 1: Django 2.1 final.

Here is the list of all new features:

Model view permission

Django 2.1 adds a view permission to the model Meta.default_permissions. This new permission allows users read-only access to models in the admin, and it is created automatically when running migrate (a short sketch of granting it follows at the end of this piece).

Considerations for the new model view permission

With the new “view” permission, existing custom admin forms may raise errors when a user doesn't have the change permission, because the form might access nonexistent fields. If users have a custom permission with a codename of the form can_view_<modelname>, the new view permission handling in the admin will allow view access to the changelist and detail pages for those models.

Changes to the database backend API

- To adhere to PEP 249, exceptions raised when a database doesn't support a feature are changed from NotImplementedError to django.db.NotSupportedError.
- The allow_sliced_subqueries database feature flag is renamed to allow_sliced_subqueries_with_in.
- DatabaseOperations.distinct_sql() now requires an additional params argument and returns a tuple of SQL and parameters instead of a SQL string.
- DatabaseFeatures.introspected_boolean_field_type is changed from a method to a property.

Dropped support for MySQL 5.5 and PostgreSQL 9.3

Django 2.1 marks the end of upstream support for MySQL 5.5; it now supports MySQL 5.6 and higher. Similarly, it ends support for PostgreSQL 9.3; Django 2.1 supports PostgreSQL 9.4 and higher.

SameSite cookies

The cookies used for django.contrib.sessions, django.contrib.messages, and Django's CSRF protection now set the SameSite flag to Lax by default. Browsers that respect this flag won't send these cookies on cross-origin requests.

Other Features

- BCryptPasswordHasher is removed from the default PASSWORD_HASHERS setting.
- The minimum supported version of mysqlclient is increased from 1.3.3 to 1.3.7.
- Support for SQLite < 3.7.15 is removed.
- The multiple attribute rendered by the SelectMultiple widget now uses HTML5 boolean syntax rather than XHTML's multiple="multiple".
- The local-memory cache backend now uses a least-recently-used (LRU) culling strategy rather than a pseudo-random one.
- The new json_script filter safely outputs a Python object as JSON, wrapped in a <script> tag, ready for use with JavaScript.

These are just a select few of the updates available in Django 2.1. The release notes cover all the new features in detail.

Getting started with Django RESTful Web Services
Getting started with Django and Django REST frameworks to build a RESTful app
Python web development: Django vs Flask in 2018
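As promised above, granting the new read-only permission to a user might look like this. This is a minimal sketch; the Article model and the username are hypothetical, and the codename follows the view_<modelname> convention that migrate generates:

from django.contrib.auth.models import Permission, User

# The "view" permission is auto-created by `migrate` in Django 2.1,
# here for a hypothetical Article model (codename view_article).
user = User.objects.get(username="reviewer")
view_article = Permission.objects.get(codename="view_article")
user.user_permissions.add(view_article)  # read-only access to Article in the admin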


Is the ‘commons clause’ a threat to open source?

Prasad Ramesh
12 Sep 2018
4 min read
Currently, free and open source software means anyone can modify and repurpose it for their needs. This also means that companies can take advantage of such open source software and use it to their commercial advantage. The 'commons clause' aims to change that by forbidding monetization, which mostly means commercial use.

The case in favor of the commons clause

Companies that commercialize open source projects don't give much back, and this is an abuse of those projects. The projects are open source to promote sharing and learning, not necessarily for tech giants to use them commercially and make money from projects that were available freely. This is not illegal, but it can be viewed as an abuse of open source projects: they are used to make money without the creators or community getting anything back.

What is the commons clause?

The Commons Clause website states that the clause was contributed by FOSSA, whose founder and CEO is Kevin Wang. The task of drafting it was handed to open source lawyer Heather Meeker. It is not a license itself but an additional clause that can be added to open source project licenses. It adds a narrow commercial restriction on top of the existing open source license: it restricts the ability to 'sell' the software while keeping all the original license permissions unchanged. This is in the interest of preserving open source projects and helping them thrive.

To avoid any confusion: when the commons clause is added to a project, the project is no longer 'open source' by the formal definition. A project with the commons clause still has many elements of an open source project, like free access and the freedom to modify and redistribute, but not the freedom to sell. Basically, when the commons clause is added to a project, the project can no longer be monetized.

The Commons Clause FAQ states: “The Commons Clause was intended, in practice, to have virtually no effect other than force a negotiation with those who take predatory commercial advantage of open source development. In practice, those are some of the biggest technology businesses in the world, some of whom use open source software but don’t give back to the community. Freedom for others to commercialize your software comes with starting an open source project, and while that freedom is important to uphold, growth and commercial pressures will inevitably force some projects to close. The Commons Clause provides an alternative.”

The case against the commons clause

There are discussions on various forums regarding this clause, with conflicting views, so I will try to give my own. Opponents of the clause believe software becomes proprietary when the commons clause is applied. This means that any service created from the original software remains the intellectual property of the original company to sell. The fear is that this would discourage the community from contributing to open source projects with a commons clause attached, since the new products made will remain with the company, and only the company will be able to monetize them if it chooses to do so.

On the one hand, companies making millions of dollars from open source software without giving anything back is not in line with the ethos of open source. On the other hand, smaller startups and individual contributors get penalized by this clause too. What if small companies contribute to a large open source project and want to use the derived product for their growth?
They can't anymore if the commons clause is applied to the project they contributed to. It is also not right to think that a contributor deserves 50% of the profits if a company makes millions of dollars using their open source project.

What can be done then?

The commons clause doesn't really help the open source community; it only prevents bigger companies from monetizing projects unfairly. I think major tech companies could license open source software strictly for commercial use separately. Perhaps a financial benchmark (say, $100,000 in profit) could be set for paid licensing: if you make that much money from the open source software, you pay for a license for further use. This would keep small companies from running out of money and being forced to close their source.

The commons clause is currently at version 1.0, and there will be future revisions. It was recently adopted by Redis after Amazon used its open source project commercially. For more information, you can visit the Commons Clause website.

Storj Labs’ new Open Source Partner Program: to generate revenue opportunities for open source companies
Home Assistant: an open source Python home automation hub to rule all things smart
NVIDIA open sources its material definition language, MDL SDK

Google researchers propose building service robots with reinforcement learning to help people with mobility impairment

Amrata Joshi
01 Mar 2019
5 min read
Yesterday, Google researchers released three research papers describing their investigations into easy-to-adapt robotic autonomy by combining deep reinforcement learning with long-range planning. The research is aimed at people with mobility impairments that leave them home-bound: the researchers propose to build service robots, trained using reinforcement learning, to improve the independence of people with limited mobility.

The researchers trained local planner agents to perform basic navigation behaviors and traverse short distances safely, without collisions with moving obstacles. These local planners take noisy sensor observations, such as a 1D lidar providing distances to obstacles, and output linear and angular velocities for robot control. The local planner was trained in simulation with AutoRL (Automated Reinforcement Learning), a method that automates the search for RL rewards and neural network architecture. These local planners transfer both to real robots and to new, previously unseen environments, and they work as building blocks for navigation in large spaces. The researchers then built a roadmap, a graph where nodes are locations and edges connect two nodes only if local planners can traverse between them reliably.

Automating Reinforcement Learning (AutoRL)

In the first paper, Learning Navigation Behaviors End-to-End with AutoRL, the researchers trained the local planners in small, static environments. Training is difficult with standard deep RL algorithms, such as Deep Deterministic Policy Gradient (DDPG), so the researchers automated the deep reinforcement learning training. AutoRL is an evolutionary automation layer around deep RL that searches for a reward and neural network architecture with the help of large-scale hyperparameter optimization. It works in two phases: reward search and neural network architecture search. During the reward search, AutoRL concurrently trains a population of DDPG agents, each with a slightly different reward function. At the end of the reward search phase, the reward that most often leads agents to their destination is selected. In the neural network architecture search phase, the process is repeated with the selected reward while tuning the network layers.

This is an iterative process, which means AutoRL is not sample efficient: training one agent takes 5 million samples, while AutoRL training around 10 generations of 100 agents requires 5 billion samples, equivalent to 32 years of training. The advantage is that after AutoRL, the manual training process is automated, and DDPG does not experience catastrophic forgetting. Another advantage is that AutoRL policies are robust to sensor, actuator, and localization noise, and they generalize to new environments.

PRM-RL

In the second paper, PRM-RL: Long-Range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning, the researchers describe sampling-based planners, which tackle long-range navigation by approximating robot motions. Here the researchers combined probabilistic roadmaps (PRMs) with hand-tuned RL-based local planners (without AutoRL) to train robots locally and then adapt them to different environments. For each robot, they trained a local planner policy in a generic simulated training environment, then built a PRM with respect to that policy, called a PRM-RL, over a floor plan for the deployment environment.
To build a PRM-RL, the researchers connected the sampled nodes with the help of Monte Carlo simulation. The resulting roadmap can be tuned to both the abilities and the geometry of a particular robot; roadmaps for robots with the same geometry but different sensors and actuators will have different connectivity. At execution time, the RL agent navigates from roadmap waypoint to waypoint.

Long-Range Indoor Navigation with PRM-RL

In the third paper, the researchers made several improvements to the original PRM-RL. They replaced the hand-tuned DDPG with AutoRL-trained local planners, which improves long-range navigation. They also added Simultaneous Localization and Mapping (SLAM) maps, which robots use at execution time, as a source for building the roadmaps. Because SLAM maps are noisy, this change closes the “sim2real gap”, a phenomenon in which simulation-trained agents significantly underperform when transferred to real robots. Lastly, they added distributed roadmap building to generate very large-scale roadmaps containing up to 700,000 nodes.

The team compared PRM-RL to a variety of different methods over distances of up to 100m, well beyond the local planner range. They found that PRM-RL had 2 to 3 times the success rate of the baseline, because the nodes were connected appropriately for the robot's capabilities.

To conclude, autonomous robot navigation can improve the independence of people with limited mobility. This is made possible by automating the learning of basic, short-range navigation behaviors with AutoRL and using the learned policies together with SLAM maps to build roadmaps. To know more about this news, check out the Google AI blog post.

Google launches Flutter 1.2, its first feature update, at Mobile World Congress 2019
Google released a paper showing how it’s fighting disinformation on its platforms
Google introduces and open-sources Lingvo, a scalable TensorFlow framework for Sequence-to-Sequence Modeling


Golang 1.13 module mirror, index, and checksum database are now production-ready

Savia Lobo
02 Sep 2019
4 min read
Last week, the Golang team announced that the Go module mirror, index, and checksum database are now production-ready, adding reliability and security to the Go ecosystem. For Go 1.13 module users, the go command will use the module mirror and checksum database by default.

Module Mirror

A module mirror is a special kind of module proxy that caches metadata and source code in its own storage system. This allows the mirror to continue serving source code that is no longer available from its original location, speeding up downloads and protecting users from disappearing dependencies. The module mirror is served at proxy.golang.org, which the go command uses by default for module users as of Go 1.13. Users still running an earlier version of the go command can use this service by setting GOPROXY=https://proxy.golang.org in their local environment.

Read Also: The Go team shares new proposals planned to be implemented in Go 1.13 and 1.14

Module Index

The module index is served by index.golang.org. It is a public feed of new module versions that become available through proxy.golang.org. The module index is useful for tool developers who want to keep their own cache of what's available in proxy.golang.org, or to keep up to date on some of the newest modules Go developers use.

Read Also: Implementing Garbage collection algorithms in Golang [Tutorial]

Checksum Database

Modules introduced the go.sum file, a list of SHA-256 hashes of the source code and go.mod files of each dependency as it was first downloaded. The go command can use these hashes to detect misbehavior by an origin server or proxy that serves different code for the same version.

However, the go.sum file has a limitation: it works entirely on trust based on the user's first use. When a user adds a version of a never-before-seen dependency, the go command fetches the code and adds lines to the go.sum file on the fly. The problem is that those go.sum lines aren't checked against anyone else's, so they might differ from the go.sum lines that the go command just generated for someone else.

The checksum database ensures that the go command always adds the same lines to everyone's go.sum file. Whenever the go command receives new source code, it can verify the hash of that code against this global database to make sure the hashes match, ensuring that everyone is using the same code for a given version.

The checksum database is served by sum.golang.org and is built on a transparent log (or “Merkle tree”) of hashes backed by Trillian, a transparent, highly scalable, and cryptographically verifiable data store. The main advantage of a Merkle tree is that it is tamper-evident: its properties don't allow misbehavior to go undetected, which makes it more trustworthy. The go command checks inclusion proofs (that a specific record exists in the log) and “consistency” proofs (that the tree hasn't been tampered with) before adding new go.sum lines to a module's go.sum file. This checksum database allows the go command to safely use an otherwise untrusted proxy: because an auditable security layer sits on top of it, a proxy or origin server can't intentionally, arbitrarily, or accidentally start giving you the wrong code without getting caught.
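Conceptually, the check that a go.sum line enables looks like the following. This is a Python sketch of the idea only; the real go.sum format hashes an entire module file tree via Go's dirhash algorithm, not a single blob:

import base64
import hashlib

def h1(data: bytes) -> str:
    # go.sum-style "h1:" lines are base64-encoded SHA-256 digests
    return "h1:" + base64.b64encode(hashlib.sha256(data).digest()).decode()

pinned = h1(b"module source for example.com/m v1.2.3")  # what everyone's go.sum records

downloaded = b"module source for example.com/m v1.2.3"  # bytes served by a proxy or origin
assert h1(downloaded) == pinned, "checksum mismatch: server returned different code"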
“Even the author of a module can’t move their tags around or otherwise change the bits associated with a specific version from one day to the next without the change being detected,” the blog mentions.

Developers are excited about the launch of the module mirror and checksum database and look forward to checking them out.

https://twitter.com/hasdid/status/1167795923944124416
https://twitter.com/jedisct1/status/1167183027283353601

To know more about this news in detail, read the official blog post.

Other news in Programming

Why Perl 6 is considering a name change?
The Julia team shares its finalized release process with the community
TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers and more


CMake 3.12.0 releases!

Natasha Mathur
18 Jul 2018
3 min read
CMake 3.12.0 is now available for download. The new release includes changes to generators, the command line, variables, and modules, among other updates. Let's have a look at the major changes and new features in CMake 3.12.0.

Generators

The Visual Studio generators for VS 2017 now support a version=14.## option in the CMAKE_GENERATOR_TOOLSET value (e.g. via the cmake(1) -T option). This helps in specifying a toolset version number.

Command Line

The cmake(1) build tool mode (cmake --build) gained --parallel [<jobs>] and -j [<jobs>] options. These specify a parallel build level and map to the corresponding options of the native build tool.

Commands

The add_compile_definitions() command has been added to set preprocessor definitions at directory level; it supersedes add_definitions(). The cmake_minimum_required() and cmake_policy(VERSION) commands can now accept a version range using the form <min>[...<max>]. The list() command gained a SUBLIST sub-command to get a sublist of a list.

Variables

The CMAKE_SUPPRESS_REGENERATION variable is extended to support the Ninja and Makefile generators. The CMAKE_FOLDER variable has been added to initialize the FOLDER property on all targets.

Properties

The HLSL source file properties VS_SHADER_DISABLE_OPTIMIZATIONS and VS_SHADER_ENABLE_DEBUG now support generator expressions. The HLSL source file property VS_SHADER_OBJECT_FILE_NAME has been added to the Visual Studio generators for VS 2010 and above; it specifies the file name of the compiled shader object.

Modules

The FindALSA module is now capable of providing imported targets. The FindMatlab module gained support for the Matlab Runtime Compiler (MCR) for compiling and linking Matlab extensions. The UseSWIG module gained support for CSHARP variant wrapper files.

Generator Expressions

New $<GENEX_EVAL:...> and $<TARGET_GENEX_EVAL:target,...> generator expressions have been added to allow consumption of generator expressions whose evaluation itself yields generator expressions. A new $<TARGET_NAME_IF_EXISTS:...> generator expression has also been added.

Other changes

The Visual Studio 8 2005 generator has been deprecated. Fortran dependency scanning now supports dependencies implied by Fortran submodules. The compile features functionality now makes use of C features in MSVC since VS 2010.

For more information on the latest updates and features, check out the official CMake 3.12.0 release notes.

Qt for Python 5.11 released!
Apache NetBeans 9.0 RC1 released!

OpenAI builds reinforcement learning based system giving robots human-like dexterity

Sugandha Lahoti
31 Jul 2018
3 min read
Researchers at OpenAI have developed a system, trained with reinforcement learning algorithms, that performs dexterous in-hand manipulation. Termed Dactyl, this system learns to solve object orientation tasks entirely in simulation without any human input. After the training phase, the system works on a real robot without any fine-tuning.

Using humanoid hands to manipulate objects has been a long-standing challenge in robotic control, and current techniques remain limited in their ability to manipulate objects in the real world. Although robotic hands have been available for quite some time, they have been largely unable to use complex end-effectors to perform dexterous manipulation tasks. The Shadow Dexterous Hand, for instance, has been available since 2005 with five fingers and 24 degrees of freedom, but it did not see large-scale adoption because of the difficulty of controlling such complex systems.

Now OpenAI researchers have trained control policies that allow a robot hand to perform complex in-hand manipulation. The system shows unprecedented levels of dexterity and discovers hand grasp types found in humans, such as the tripod, prismatic, and tip pinch grasps. It also displays dynamic behaviors such as finger gaiting, multi-finger coordination, controlled use of gravity, and application of translational and torsional forces to the object.

How does the OpenAI system work?

First, the researchers used a large distribution of simulations with randomized parameters to collect data for the control policy and the vision-based pose estimator. The control policy receives observed robot states and rewards from the distributed simulations, and it learns to map observations to actions using an RNN and reinforcement learning. The vision-based pose estimator renders scenes collected from the distributed simulations and learns to predict the pose of the object from images using a CNN, trained separately from the control policy. At run time, the object pose is predicted from three camera feeds with the CNN, the robot fingertip locations are measured using a 3D motion capture system, and both are given to the control policy to produce an action for the robot.

(Image source: OpenAI blog)

You can place a block in the palm of the Shadow Dexterous Hand, and Dactyl can reposition it into different orientations; for example, it can rotate the block to put a new face on top.

(Image source: OpenAI blog)

According to OpenAI, this project completes a full cycle of AI development that OpenAI has been pursuing for the past two years: “We’ve developed a new learning algorithm, scaled it massively to solve hard simulated tasks, and then applied the resulting system to the real world.”

You can read more about Dactyl on the OpenAI blog, and read the research paper for further analysis.

AI beats human again – this time in a team-based strategy game
OpenAI charter puts safety, standards, and transparency first
Introducing Open AI’s Reptile: The latest scalable meta-learning Algorithm on the block


Qt Design Studio 1.0 released with Qt Photoshop Bridge, timeline-based animations and Qt Live Preview

Natasha Mathur
26 Oct 2018
2 min read
The Qt team released Qt Design Studio 1.0 yesterday. Qt Design Studio 1.0 introduces features such as the Qt Photoshop Bridge, timeline-based animations, and Qt Live Preview, among others. Qt Design Studio is a UI design and development environment that allows designers and developers around the world to rapidly prototype and develop complex, scalable UIs. Let's discuss the features of Qt Design Studio 1.0 in detail.

Qt Photoshop Bridge

Qt Design Studio 1.0 comes with the Qt Photoshop Bridge, which allows users to import their graphics designs from Photoshop. Users can also create re-usable components directly via Photoshop, and exporting directly to specific QML types is supported as well. In addition, the Qt Photoshop Bridge comes with an enhanced import dialog and basic merging capabilities.

Timeline-based animations

Timeline-based animations in Qt Design Studio 1.0 come with a timeline-/keyframe-based editor. This editor allows designers to easily create pixel-perfect animations without having to write a single line of code. You can also map and organize the relationship between timelines and states to create smooth transitions from state to state. Selecting multiple keyframes is supported as well.

Qt Live Preview

Qt Live Preview lets you run and preview your application or UI directly on the desktop, Android devices, and Boot2Qt devices. You can see how your changes affect the UI live on your target device. It also includes zoom-in and zoom-out functionality.

Other Features

- You can insert a Qt 3D Studio element and preview it on the end target device with Qt Live Preview.
- Qt Safe Renderer integration lets you use Safe Renderer items and map them in your UI.
- You can use states and timelines to create screen flows and transitions.

Qt Design Studio is free; however, you will need a commercial Qt developer license to distribute the UIs created with it. For more information, check out the official Qt Design Studio blog.

Qt 3D Studio 2.1 released with new sub-presentations, scene preview, and runtime improvements
Qt creator 4.8 beta released, adds language server protocol
Qt Creator 4.7.0 releases!