
Tech News - Programming

573 Articles

TensorFlow 2.0 beta releases with distribution strategy, API freeze, easy model building with Keras and more

Vincy Davis
10 Jun 2019
5 min read
After all the hype and waiting, Google has finally announced the beta version of TensorFlow 2.0. The headline feature is tf.distribute.Strategy, which distributes training across multiple GPUs, multiple machines, or TPUs with minimal code changes. The TensorFlow 2.0 beta also brings a number of major improvements, breaking changes, and multiple bug fixes. Earlier this year, the TensorFlow team updated users on what to expect from TensorFlow 2.0. The 2.0 API is now final, with the symbol renaming and deprecation changes completed, and is available as part of the TensorFlow 1.14 release in the compat.v2 module.

TensorFlow 2.0 support for Keras features

Distribution Strategy for hardware
The tf.distribute.Strategy API supports multiple user segments, including researchers and ML engineers, and provides good performance and easy switching between strategies. Users can distribute training across multiple GPUs, multiple machines, or TPUs, and can distribute their existing models and training code with minimal code changes. tf.distribute.Strategy can be used with TensorFlow's high-level APIs: tf.keras, tf.estimator, and custom training loops.

The TensorFlow 2.0 beta also simplifies the API for custom training loops, which is likewise based on tf.distribute.Strategy. Custom training loops give flexibility and greater control over training, and make it easier to debug both the model and the training loop.

Model Subclassing
Building a fully customizable model by subclassing tf.keras.Model allows users to define their own forward pass. Layers are created in the __init__ method and set as attributes of the class instance, and the forward pass is defined in the call method. Model subclassing is particularly useful when eager execution is enabled, because it allows the forward pass to be written imperatively. It gives greater flexibility when creating models that are not otherwise easily expressible.

Breaking Changes
tf.contrib has been deprecated, and its functionality has been migrated to the core TensorFlow API, to tensorflow/addons, or removed entirely. In the tf.estimator.DNN/Linear/DNNLinearCombined family, the premade estimators have been updated to use tf.keras.optimizers instead of tf.compat.v1.train.Optimizer. A checkpoint converter tool for converting optimizers is also included with this release.

Bug Fixes and Other Changes
This beta version of 2.0 includes many bug fixes and other changes. Some of them are mentioned below:
- In tf.data.Options, the experimental_numa_aware option has been removed, and support for TensorArrays has been added.
- tf.keras.estimator.model_to_estimator now supports exporting to the tf.train.Checkpoint format, which makes saved checkpoints compatible with model.load_weights.
- tf.contrib.estimator.add_metrics has been replaced with tf.estimator.add_metrics.
- A gradient for the SparseToDense op, a GPU implementation of tf.linalg.tridiagonal_solve, and broadcasting support for tf.matmul have been added.
- This beta also exposes a flag that allows the number of threads to vary across Python benchmarks.
- The unused StringViewVariantWrapper and the tf.string_split from the v2 API have been removed.

The TensorFlow team has set up a TF 2.0 Testing User Group where users can report any snags they hit and share feedback.
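Below is a minimal sketch (not taken from the announcement) combining the two Keras-facing features described above: a subclassed tf.keras.Model trained under tf.distribute.MirroredStrategy. The layer sizes and the random training data are illustrative assumptions.

```python
# A minimal sketch, assuming TensorFlow 2.0 beta is installed.
# Combines model subclassing with tf.distribute.MirroredStrategy;
# the layer sizes and random training data are illustrative only.
import tensorflow as tf

class TinyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # Layers are created in __init__ and set as attributes.
        self.dense1 = tf.keras.layers.Dense(16, activation="relu")
        self.dense2 = tf.keras.layers.Dense(1)

    def call(self, inputs):
        # The forward pass is defined imperatively in call().
        return self.dense2(self.dense1(inputs))

# Distribute training across all locally visible GPUs with minimal
# code changes: only model and optimizer construction move into the scope.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = TinyModel()
    model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")

xs = tf.random.normal((256, 8))
ys = tf.random.normal((256, 1))
model.fit(xs, ys, epochs=2, batch_size=32)
```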
General reaction to the release of the TensorFlow 2.0 beta has been positive.

https://twitter.com/markcartertm/status/1137238238748266496
https://twitter.com/tonypeng_Synced/status/1137128559414087680

A user on Reddit comments, "Can't wait to try that out!" However, some users have compared TensorFlow to PyTorch, calling PyTorch the more comprehensive of the two: a more powerful platform for research that is also good for production. A user on Hacker News comments, "Maybe I'll give TF another try, but right now I'm really liking PyTorch. With TensorFlow I always felt like my models were buried deep in the machine and it was very hard to inspect and change them, and if I wanted to do something non-standard it was difficult even with Keras. With PyTorch though, I connect things however how I want, write whatever training logic I want, and I feel like my model is right in my hands. It's great for research and proofs-of-concept. Maybe for production too."

Another user says, "Might give it another try, but my latest incursion in the Tensorflow universe did not end pleasantly. I ended up recording everything in Pytorch, took me less than a day to do the stuff that took me more than a week in TF. One problem is that there are too many ways to do the same thing in TF and it's hard to transition from one to the other."

The TensorFlow team hopes to resolve all the remaining issues before the 2.0 release candidate (RC), including complete Keras model support on Cloud TPUs and TPU pods, and to improve the overall performance of 2.0. The RC release is expected sometime this summer.

Introducing TensorFlow Graphics packed with TensorBoard 3D, object transformations, and much more
Horovod: an open-source distributed training framework by Uber for TensorFlow, Keras, PyTorch, and MXNet
ML.NET 1.0 RC releases with support for TensorFlow models and much more!


GitHub introduces ‘Template repository’ for easy boilerplate code management and distribution

Bhagyashree R
10 Jun 2019
2 min read
Yesterday, GitHub introduced 'Template repository', which lets you share boilerplate code and directory structure across projects easily. It is similar in spirit to tools like 'Boilr' and 'Cookiecutter'.

https://twitter.com/github/status/1136671651540738048

How to create a GitHub template repository?
As its name suggests, 'Template repository' enables developers to mark a repository as a template, which they can use later for creating new repositories containing all of the template repository's files and folders. You can create a new template repository or mark an existing one as a template with admin permissions: just navigate to the Settings page and then click on the 'Template repository' checkbox.

Once the template repository is created, anyone who has access to it will be able to generate a new repository with the same directory structure and files via the 'Use this template' button.

Source: GitHub

All the templates that you own, have access to, or have used in a previous project will also be available to you when creating a new repository, through the 'Choose a template' drop-down. Every template repository also gets a new '/generate' URL endpoint that allows you to distribute your template more efficiently: you just need to link your template users directly to this endpoint, as the example below shows.

Source: GitHub
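As an illustration of the '/generate' endpoint (the owner and repository names here are hypothetical), a template living at https://github.com/octo-org/starter-kit could be shared as https://github.com/octo-org/starter-kit/generate, a link that drops users straight into the new-repository flow with that template preselected.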
Templating is similar to cloning a repository, except that it does not retain the history of the repository, giving users a clean new project with an initial commit. Though the function is still pretty basic, and GitHub will add more functionality in the future, it will be useful for junior developers and beginners, helping them get started.

Here's what a Hacker News user believes we can do with this feature: "This is a part of something which could become a very powerful pattern: community-wide templates which include many best practices in a single commit: - Pre-commit hooks for linting/formatting and unit tests. - Basic CI pipeline configuration with at least build, test and release/deploy phases. - Package installation configuration for the frameworks you want. - Container/VM configuration for the languages you want to enable cross-platform and future-proof development. - Documentation to get started with it all."

Read the official announcement by GitHub for more details.

Github Sponsors: Could corporate strategy eat FOSS culture for dinner?
GitHub Satellite 2019 focuses on community, security, and enterprise
Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack


Nim 0.20.0 (1.0 RC1) released with many new features, library additions, language changes and bug fixes

Vincy Davis
07 Jun 2019
6 min read
Yesterday, the Nim team announced the release of Nim version 0.20.0. Nim is a statically typed, compiled systems programming language that combines concepts from mature languages like Python, Ada, and Modula. This is a massive release, with more than 1,000 commits; Nim version 0.20.0 is effectively Nim 1.0 RC1. The team has mentioned that the stable 1.0 release will either be Nim 0.20.0 promoted to 1.0 status or another release candidate, as there will be no more breaking changes. Version 1.0 will be a long-term supported stable release and will only receive bug fixes and new features in the future, as long as they don't break backwards compatibility.

New Features
- not is always a unary operator
- Stricter compile-time checks for integer and float conversions
- Tuple unpacking for constant and for-loop variables
- Hash sets and tables are initialized by default
- Better error message for case statements
- The length of a table must not change during iteration
- Better error message for index out of bounds

Changelog
Nim version 0.20.0 includes many changes affecting backwards compatibility. One of them is that strutils.editDistance has been deprecated; editdistance.editDistance or editdistance.editDistanceAscii should be used instead. Breaking changes in the standard library include osproc.execProcess now taking a workingDir parameter, and std/sha1.secureHash now accepting openArray[char] rather than string. There are a few breaking changes in the compiler too; one of the main ones is that the compiler now implements the "generic symbol prepass" for when statements in generics.

Library additions
There are many new library additions in this release. Some of them are mentioned below:
- stdlib module std/editdistance as a replacement for the deprecated strutils.editDistance
- stdlib module std/wordwrap as a replacement for the deprecated strutils.wordwrap
- split, splitWhitespace, size, alignLeft, align, strip, and repeat procs and iterators added to unicode.nim
- or for NimNode added in macros
- system.typeof added for more control over how type expressions can be deduced

Library changes
Many changes have been made in the library. Some of them are mentioned below:
- The string output of the macros.lispRepr proc has been tweaked slightly. The dumpLisp macro in this module now outputs an indented proper Lisp, devoid of commas.
- macros.signatureHash has been added; it returns a stable identifier derived from the signature of a symbol.
- In strutils, empty strings no longer match as substrings.
- The Complex type is now a generic object, not a tuple anymore.
- The ospaths module is now deprecated; use os instead. Note that os is available in a NimScript environment, but unsupported operations produce a compile-time error.

Language additions
There have been new additions to the language as well. Some of them are mentioned below:
- VM support for float32<->int32 and float64<->int64 casts
- A new pragma block, noSideEffect, that works like the gcsafe pragma block
- User-defined pragmas are now allowed in pragma blocks
- Pragma blocks are no longer eliminated from the typed AST tree, to preserve pragmas for further analysis by macros

Language changes
- The standard extension for SCF (source code filters) files was changed from .tmpl to .nimf.
- Pragma syntax is now consistent. The previous syntax, where type pragmas did not follow the type name, is now deprecated. A pragma before a generic parameter list is also deprecated, to be consistent with how pragmas are used with a proc.
- Hash sets and tables are initialized by default; the explicit initHashSet, initTable, etc. are not needed anymore.

Tool changes
- jsondoc now includes a moduleDescription field with the module description. jsondoc0 shows comments as its own objects as shown in the documentation.
- nimpretty: --backup now defaults to off instead of on (the flag was undocumented); use git instead of relying on backup files.
- koch now defaults to building the latest stable Nimble version unless you explicitly ask for the latest master version via --latest.

Compiler changes
- The deprecated fmod proc is now unavailable on the VM.
- A new --outdir option has been added.
- The compiled JavaScript file for a project produced by executing nim js will no longer be placed in the nimcache directory.
- --hotCodeReloading has been implemented for the native targets. The compiler also provides a new, more flexible API for handling hot code reloading events in the code.
- The compiler now supports an --expandMacro:macroNameHere switch for easy introspection into what a macro expands into.
- The -d:release switch no longer disables runtime checks. For a release build that also disables runtime checks, use -d:release -d:danger or simply -d:danger.

Nim version 0.20.0 also contains many bug fixes.

Most developers are quite delighted with this release. A user on Hacker News states, "It's impressive. 1000 commits! Great job Nim team!" Another user comments, "I've been full steam on the Nim train for the past year. It really hits a sweet spot between semantic complexity and language power. If you've used any mainstream language and understand types, you already understand 80% of the semantics you need to be productive. But more advanced features (generics, algebraic data types, hygienic macros) are available when needed. Now that the language is approaching 1.0, the only caveat is a small ecosystem and community. Nim has completely replaced Node as my language of choice for side projects and prototyping."

There are also users who still prefer Python; one says, "It seems to me that the benefits of Nim over Python are far smaller than the benefits of Python's library ecosystem. I'm pretty happy with Python though. It seems like Nim's benefits couldn't be that big. I consider Python "great", so the best Nim could be is "great-er", as a core language I mean. I've been rooting for Nim, but haven't actually tried it. And my use case is pretty small, I admit."

These are a select few updates; more information is available on the Nim blog.

Containers and Python are in demand, but Blockchain is all hype, says Skill Up developer survey
Apple releases native SwiftUI framework with declarative syntax, live editing, and support of Xcode 11 beta
Storm 2.0.0 releases with Java enabled architecture, new core and streams API, and more


Square updated its terms of service; community raises concerns about restriction on using AGPL-licensed software in online stores

Amrata Joshi
07 Jun 2019
4 min read
Last month, Square, a financial services and mobile payment company, updated its terms of service, effective this July. Developers are raising concerns over one of the terms, which restricts the use of AGPL-licensed software in online stores.

What is the GNU AGPL (Affero General Public License)?
The GNU Affero General Public License (AGPL) is a free, copyleft license for software and other kinds of works. The AGPL guarantees the freedom to share and change all versions of a program. It protects developers' rights by asserting copyright on the software, and by giving legal permission to copy, distribute, and/or modify it.

What does the developer community think about the AGPL clause?
The Content Restrictions section B-15 under the Online Store reads, "You will not use, under any circumstance, any open source software subject to the GNU Affero General Public License v.3, or greater."

Some developers think that Square has misunderstood the AGPL and that this rule doesn't make sense. A user commented on Hacker News, "This makes absolutely no sense. I'm almost certain that Square lawyers fucked up big time. They looked at the AGPL and completely misunderstood the context. There is no way in hell anyone can interpret AGPL in a way that makes Square responsible for any license violations their customers make selling software."

According to others, the rule means that code licensed under the AGPL can't be used in a website hosted by Square. If AGPL code were used by Square, that code might be sent to browsers along with Square's own proprietary code, which could mean that Square had violated the AGPL. A lot of companies follow the same rule, including Google, which clearly states, "WARNING: Code licensed under the GNU Affero General Public License (AGPL) MAY NOT be used at Google." This, though, can be useful for developers, as it keeps their code safe from big tech companies using it.

Chris DiBona, director of open source at Google, said in a statement to The Register that "Google continues to ban the lightning-rod AGPL open source license within the company because doing so "saves engineering time" and because most AGPL projects are of no use to the company." According to him, the AGPL is designed to close the "application service provider loophole" in the GPL, which lets ASPs use GPL code without distributing their changes back to the open source community. Under the AGPL, one has to open source their code if they use AGPL code in their web service, and why would a company like Google do that? Its core components and the back-end infrastructure that run its online services are not open source. It also seems this is something that needs the involvement of lawyers, and it is a matter of concern for them as well.

https://twitter.com/MarkKriegsman/status/1136589805024923649

Websites using AGPL code might also have to provide the entire source code of their back-end system. So, some think the AGPL is not an efficient license and would like to see a better one that fully embodies the idea of freedom; according to them, such licenses should come from copyleft folks and not from profit-oriented companies. Others argue that it is an efficient license, useful for developers, giving them enough freedom to share while protecting their software from companies.

https://twitter.com/MarkKriegsman/status/1136589799341600769
https://twitter.com/mikeym0p/status/1136392884306010112
https://twitter.com/kjjaeger/status/1136633898526490624
https://twitter.com/fuzzychef/status/1136386203756818433

To know more about this news, check out the post by Square.

AWS announces Open Distro for Elasticsearch licensed under Apache 2.0
Blue Oak Council publishes model license version 1.0.0 to simplify software licensing for everyone
Red Hat drops MongoDB over concerns related to its Server Side Public License (SSPL)


Researchers highlight impact of programming languages on code quality and reveal flaws in the original FSE study

Amrata Joshi
07 Jun 2019
7 min read
Researchers from Northeastern University, the University of Massachusetts Amherst, and the Czech Technical University in Prague have published a paper on the impact of programming languages on code quality, a reproduction of work by Ray et al. published in 2014 at the Foundations of Software Engineering (FSE) conference. That work claimed to reveal an association between 11 programming languages and software defects in projects hosted on GitHub. The original paper by Ray et al. was well regarded in the software engineering community and was nominated as a Communications of the ACM (CACM) research highlight.

The researchers conducted a study to validate the claims of the original work. They first attempted an experimental repetition, which was partially successful and found that the association of 10 programming languages with defects held. They then conducted an independent reanalysis, which revealed a number of flaws in the original study. The final results suggest that only four languages are significantly associated with defects, and that the effect size (the correlation between the two variables) for them is extremely small. Let us take a look at all three stages in brief:

2014 FSE paper: Does programming language choice affect software quality?
The question arising from the study by Ray et al. published at the 2014 FSE conference is: what is the effect of programming language on software quality? The results reported in the FSE paper, and later repeated in follow-up works, are based on an observational study of a corpus of 729 GitHub projects written in 17 programming languages. To measure code quality, the authors identified, annotated, and tallied commits that were supposed to indicate bug fixes, then fitted a Negative Binomial regression (NBR) against the labeled data to answer the following research questions:

RQ1: Are some languages more defect prone than others?
The original paper concluded that "Some languages have a greater association with defects than others, although the effect is small." Haskell, Clojure, TypeScript, Scala, and Ruby were found to be less error-prone, whereas C, JavaScript, C++, Objective-C, PHP, and Python were more error-prone.

RQ2: Which language properties relate to defects?
The original study concluded that "There is a small but significant relationship between language class and defects. Functional languages have a smaller relationship to defects than either procedural or scripting languages." Functional and strongly typed languages showed fewer errors, whereas procedural, unmanaged, and weakly typed languages induced more errors.

RQ3: Does language defect proneness depend on domain?
A mix of automatic and manual methods was used to classify projects into six application domains. The paper concluded that "There is no general relationship between domain and language defect proneness." The variation in defect proneness comes from the languages themselves, which makes domain a less indicative factor.

RQ4: What's the relation between language and bug category?
The study concluded that "Defect types are strongly associated with languages. Some defect type like memory error, concurrency errors also depend on language primitives. Language matters more for specific categories than it does for defects overall." For memory, languages with manual memory management have more errors; Java stands out as the only garbage-collected language associated with more memory errors. For concurrency, Python, JavaScript, etc. have fewer errors than languages with concurrency primitives.

Experimental repetition performed to obtain the results of the original study
The original study used methods of data acquisition, data cleaning, and statistical modeling. The researchers' first objective was to repeat the analyses of the FSE paper and obtain the same results. They used an artifact from the original authors containing 3.45 GB of processed data and 696 lines of R code for loading the data and performing statistical modeling. In a repetition, a script generates results that should match the results in the published paper. The researchers wrote new R scripts to mimic all of the steps from the original manuscript, and found it essential to automate the production of all tables, numbers, and graphs in order to iterate multiple times. They concluded that the repetition was partly successful:

- RQ1 produced small differences but qualitatively similar conclusions. The researchers were mostly able to replicate RQ1 based on the artifact provided by the authors, finding 10 languages with a statistically significant association with errors instead of the eleven reported.
- RQ2 could be repeated, but they noted issues with the language classification.
- RQ3 could not be repeated, as the code was missing and their reverse-engineering attempts failed.
- RQ4 could not be repeated because of irreconcilable differences in the data, and because the artifact didn't contain the code implementing the NBR for bug types.

The reanalysis confirms flaws in the FSE study
The second objective was a reanalysis of RQ1 of the FSE paper: whether some languages are more defect prone than others. The reanalysis differs from a repetition in that it proposes alternative data processing and statistical analyses to address methodological weaknesses of the original work. The researchers again used data processing, data cleaning, and statistical modeling. They found that the p-values for Objective-C, JavaScript, C, TypeScript, PHP, and Python fall outside of the "significant" range of values; thus, 6 of the original 11 claims were discarded at this stage. Controlling the false discovery rate (FDR) increased the p-values slightly but did not invalidate additional claims. The p-value for one additional language, Ruby, then lost its significance, and even Scala fell out of the statistically significant set. (A smaller p-value, at most 0.05, indicates stronger evidence against the null hypothesis, which can then be rejected.) In the table below, grey cells indicate disagreement with the conclusion of the original work: C, Objective-C, JavaScript, TypeScript, PHP, and Python. The reanalysis thus failed to validate most of the claims; the multiple steps of data cleaning and the improved statistical modeling invalidated the significance of 7 out of 11 languages.

Image source: Impact of Programming Languages on Code Quality
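To make the two statistical steps above concrete, here is a small, synthetic sketch in Python using statsmodels: a negative binomial regression of bug-fix counts against hypothetical controls, followed by one common FDR procedure (Benjamini-Hochberg) over per-language p-values. The original analyses were done in R with far more controls; the data and column choices here are illustrative assumptions only.

```python
# A minimal, synthetic sketch of the NBR + FDR-correction workflow;
# the data, covariates, and p-values below are made up for illustration.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n = 500
# Hypothetical design matrix: intercept, a size control, a language dummy.
X = sm.add_constant(np.column_stack([
    rng.normal(10.0, 2.0, n),   # e.g. log(number of commits)
    rng.integers(0, 2, n),      # indicator for one language
]))
y = rng.poisson(5, n)           # stand-in for bug-fix commit counts

# Negative Binomial regression, as in the original FSE study.
nbr = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
print(nbr.summary())

# FDR control across per-language p-values (values illustrative).
pvals = np.array([0.001, 0.02, 0.04, 0.20, 0.60])
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(reject, p_adj)
```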
The researchers conclude that the work by Ray et al. aimed to provide evidence for one of the fundamental assumptions in programming language research: that language design matters. But they identified numerous problems in the FSE study that invalidated its key result. The paper reads, "Our intent is not to blame, performing statistical analysis of programming languages based on large-scale code repositories is hard. We spent over 6 months simply to recreate and validate each step of the original paper." Their contribution provides a thorough analysis and discussion of the pitfalls associated with statistical analysis of large code bases. According to them, statistical analysis combined with large data corpora is a powerful tool that may even answer the hardest research questions, but the possibility of errors is enormous. They further state, "It is only through careful re-validation of such studies that the broader community may gain trust in these results and get better insight into the problems and solutions associated with such studies."

Check out the paper On the Impact of Programming Languages on Code Quality for a more in-depth analysis.

Samsung AI lab researchers present a system that can animate heads with one-shot learning
Researchers from China introduced two novel modules to address challenges in multi-person pose estimation
AI can now help speak your mind: UC researchers introduce a neural decoder that translates brain signals to natural sounding speech


Apple previews macOS Catalina 10.15 beta, featuring Apple Music, TV apps, security, zsh shell, DriverKit, and much more!

Amrata Joshi
04 Jun 2019
6 min read
Yesterday, Apple previewed the next version of macOS, called Catalina, at its ongoing Worldwide Developers Conference (WWDC) 2019. macOS 10.15, or Catalina, comes with new features, apps, and technology for developers. With Catalina, Apple is replacing iTunes with entertainment apps such as Apple Podcasts, Apple Music, and the Apple TV app. macOS Catalina is expected to be released this fall.

Craig Federighi, Apple's senior vice president of Software Engineering, said, "With macOS Catalina, we're bringing fresh new apps to the Mac, starting with new standalone versions of Apple Music, Apple Podcasts and the Apple TV app." He further added, "Users will appreciate how they can expand their workspace with Sidecar, enabling new ways of interacting with Mac apps using iPad and Apple Pencil. And with new developer technologies, users will see more great third-party apps arrive on the Mac this fall."

What's new in macOS Catalina

Sidecar feature
Sidecar is a new feature in macOS 10.15 that helps users extend their Mac desktop by using an iPad as a second display, or as a high-precision input device across creative Mac apps. Users can use their iPad for drawing, sketching, or writing in any Mac app that supports stylus input by pairing it with an Apple Pencil. Sidecar can be used for editing video with Final Cut Pro X, marking up iWork documents, or drawing with the Adobe Illustrator iPad app.

iPad app support
Catalina comes with iPad app support, a new way for developers to port their iPad apps to the Mac. Previously codenamed "Marzipan," the project is now called Catalyst. Developers will be able to use Xcode to target their iPad apps at macOS Catalina. Twitter is planning to port its iOS Twitter app to the Mac, and Atlassian is planning to bring its Jira iPad app to macOS Catalina. Though it is still not clear how many developers will support this porting, Apple is encouraging developers to bring their iPad apps over.

https://twitter.com/Atlassian/status/1135631657204166662
https://twitter.com/TwitterSupport/status/1135642794473558017

Apple Music
Apple Music is a new music app that helps users discover new music, with over 50 million songs, playlists, and music videos. Users will have access to their entire music library, including the songs they have downloaded, purchased, or ripped from a CD.

Apple TV app
The Apple TV app features Apple TV channels, personalized recommendations, and more than 100,000 iTunes movies and TV shows. Users can browse, buy, or rent, and also enjoy 4K HDR and Dolby Atmos-supported movies. It comes with a Watch Now section with an Up Next option, where users can easily keep track of what they are currently watching and resume on any screen. Apple TV+, Apple's original video subscription service, will be available in the Apple TV app this fall.

Apple Podcasts
The Apple Podcasts app features over 700,000 shows in its catalog and comes with an option to be automatically notified of new episodes as soon as they become available. It includes new categories, collections curated by editors around the world, and advanced search tools that help find episodes by host, guest, or even discussion topic. Users can now easily sync their media to their devices using a cable in the new entertainment apps.

Security
In macOS Catalina, Gatekeeper checks all apps for known security issues, and the new data protections require all apps to get permission before accessing user documents. Approve with Apple Watch lets users approve security prompts by tapping the side button on their Apple Watch. With the new Find My app, the location of a lost or stolen Mac can be anonymously relayed back to its owner by other Apple devices, even when offline. Macs will be able to occasionally send a secure Bluetooth signal, which will be used to create a mesh network of other Apple devices to help people track their products; a map will show where the device is located, helping owners track it down. Also, Macs with the T2 Security Chip now support Activation Lock, which will make them less attractive to thieves.

DriverKit
The macOS Catalina 10.15 beta SDK comes with the DriverKit framework, which can be used to create device drivers that the user installs on their Mac. Drivers built with DriverKit run in user space for improved system security and stability. The framework provides C++ classes for IO services, memory descriptors, device matching, and dispatch queues. DriverKit also defines IO-appropriate types for numbers, strings, collections, and other common types. You use these with family-specific driver frameworks like USBDriverKit and HIDDriverKit.

zsh shell on Mac
With the macOS Catalina beta, the Mac uses zsh as the interactive shell and the default login shell; the beta is currently available only to members of the Apple Developer Program. Users can make zsh the default in earlier versions of macOS as well. Currently, bash is the default shell in macOS Mojave and earlier; zsh is compatible with the Bourne shell (sh) and bash. The company is also signalling that developers should start moving to zsh on macOS Mojave or earlier. As bash isn't a modern shell, it seems the company thinks that switching to something less aging makes more sense.

https://twitter.com/film_girl/status/1135738853724000256
https://twitter.com/_sjs/status/1135715757218705409
https://twitter.com/wongmjane/status/1135701324589256704

Additional features
Safari now has an updated start page that uses Siri Suggestions to elevate frequently visited bookmarks, sites, iCloud tabs, reading list selections, and links sent in messages. macOS Catalina comes with an option to block email from a specified sender, mute an overly active thread, and unsubscribe from commercial mailing lists. Reminders has been redesigned with a new user interface that makes it easier to create, organize, and track reminders.

It seems users are excited about the announcements made by the company and are looking forward to exploring the possibilities of the new features.

https://twitter.com/austinnotduncan/status/1135619593165189122
https://twitter.com/Alessio____20/status/1135825600671883265
https://twitter.com/MasoudFirouzi/status/1135699794360438784
https://twitter.com/Allinon85722248/status/1135805025928851457

To know more about this news, check out Apple's post.

Apple proposes a "privacy-focused" ad click attribution model for counting conversions without tracking users
Apple Pay will soon support NFC tags to trigger payments
U.S. Supreme Court ruled 5-4 against Apple on its App Store monopoly case

Apple releases native SwiftUI framework with declarative syntax, live editing, and support of Xcode 11 beta

Vincy Davis
04 Jun 2019
4 min read
Yesterday, at the ongoing Worldwide Developers Conference (WWDC) 2019, Apple announced a new framework called SwiftUI for building user interfaces across all Apple platforms. With an aim to reduce the amount of code required, SwiftUI supports declarative syntax, design tools, and live editing. SwiftUI has incredible native performance and integrates fully with the features and developer experience of Apple's previous technologies. It also automatically supports dynamic type, dark mode, localization, and accessibility. The tools for SwiftUI development are only available when running on macOS 10.15 beta.

Declarative syntax
SwiftUI enables a developer to simply state what the user interface should do. For example, if a developer wants a list of items consisting of text fields, they just describe the alignment, font, and color for each field. This makes the code simpler and easier to read, saving time and maintenance. SwiftUI also makes complex concepts like animation much simpler: developers can add animation to almost any control and choose from a collection of ready-to-use effects with only a few lines of code.

Design tools
During WWDC, the Xcode 11 beta release notes were also published. Xcode 11 beta includes SDKs for iOS 13, macOS 10.15, watchOS 6, and tvOS 13, and supports development with SwiftUI. It supports uploading apps from the Organizer window, and its editors can now be added to any window without needing an Assistant Editor. LaunchServices on macOS now respects the selected Xcode when launching Instruments, Simulator, and other developer tools embedded within Xcode. Using the intuitive new design tools of Xcode 11, SwiftUI can be used to build interfaces with drag and drop, dynamic replacement, and previews.

Drag and drop
A developer can arrange components within the user interface by simply dragging controls on the canvas: open an inspector to select font, color, alignment, and other design options, and easily rearrange controls with the cursor. Many of these visual editors are also available within the code editor. It is also possible to drag controls from the library and drop them on the design canvas or directly on the code.

Dynamic replacement
When working in the design canvas, every edit by the developer is completely in sync with the code in the adjoining editor. Xcode recompiles the changes instantly, so a developer can constantly build an app and run it at the same time, like a 'live app'. With this feature, Xcode can also swap the edited code directly into the live app.

Previews
It is now possible to create one or many previews of any SwiftUI view, supply sample data, and configure almost anything the user can see, such as large fonts, localizations, or dark mode. Code is instantly visible as a preview, and if any change is made in the preview, it immediately appears in the code. Previews can also display a UI on any device and in any orientation.

Native on all Apple platforms
SwiftUI has been created in such a way that all controls and platform-specific experiences are included in the code. It allows an app to directly access features from the previous technologies of each platform, with a small amount of code and an interactive design canvas. It can be used to build user interfaces for any Apple device, including iPhone, iPad, iPod touch, Apple Watch, and Apple TV.

SwiftUI's striking features have made developers very excited to try out the framework.

https://twitter.com/stroughtonsmith/status/1135647926439632899
https://twitter.com/fjeronimo/status/1135626395168563201
https://twitter.com/sascha_p/status/1135626257884782592
https://twitter.com/cocoawithlove/status/1135626052678574080

For more details on the SwiftUI framework, head over to the Apple Developers website.

Apple promotes app store principles & practices as good for developers and consumers following rising antitrust worthy allegations
Apple proposes a "privacy-focused" ad click attribution model for counting conversions without tracking users
Apple Pay will soon support NFC tags to trigger payments


Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available

Amrata Joshi
03 Jun 2019
3 min read
Last week, Amazon Web Services announced the general availability of Amazon Managed Streaming for Apache Kafka (Amazon MSK). Amazon MSK makes it easy for developers to build and run applications based on Apache Kafka without having to manage the underlying infrastructure. It is fully compatible with Apache Kafka, which enables customers to easily migrate their on-premises or Amazon Elastic Compute Cloud (Amazon EC2) clusters to Amazon MSK without code changes.

Customers can use Apache Kafka to capture and analyze real-time data streams from a range of sources, including database logs, IoT devices, financial systems, and website clickstreams. Many customers choose to self-manage their Apache Kafka clusters and end up spending time and money securing, scaling, patching, and ensuring high availability for Apache Kafka and Apache ZooKeeper clusters. Amazon MSK offers the attributes of Apache Kafka combined with the availability, security, and scalability of AWS.

Customers can now create Apache Kafka clusters designed for high availability, spanning multiple Availability Zones (AZs), with a few clicks. Amazon MSK also monitors server health and automatically replaces servers when they fail. Customers can easily scale out cluster storage in the AWS management console to meet changes in demand. Amazon MSK runs the Apache ZooKeeper nodes at no additional cost and provides multiple levels of security for Apache Kafka clusters, including VPC network isolation, AWS Identity and Access Management (IAM), and more. It allows customers to continue to run applications built on Apache Kafka and to use Apache Kafka compatible tools and frameworks.
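For readers who prefer the SDK to the console flow described above, here is a minimal sketch using boto3, the AWS SDK for Python. The cluster name, subnet IDs, and instance size are hypothetical placeholders, not values from the announcement.

```python
# A minimal sketch of creating an MSK cluster with boto3; the cluster
# name, subnets, and instance type below are hypothetical placeholders.
import boto3

kafka = boto3.client("kafka", region_name="us-east-1")

response = kafka.create_cluster(
    ClusterName="demo-msk-cluster",
    KafkaVersion="2.1.0",
    NumberOfBrokerNodes=3,  # one broker per Availability Zone
    BrokerNodeGroupInfo={
        "InstanceType": "kafka.m5.large",
        "ClientSubnets": [
            "subnet-0000aaaa",  # placeholder subnet IDs, one per AZ
            "subnet-1111bbbb",
            "subnet-2222cccc",
        ],
    },
)
print(response["ClusterArn"], response["State"])
```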
Rajesh Sheth, General Manager of Amazon MSK, AWS, wrote to us in an email, "Customers who are running Apache Kafka have told us they want to spend less time managing infrastructure and more time building applications based on real-time streaming data." He further added, "Amazon MSK gives these customers the ability to run Apache Kafka without having to worry about managing the underlying hardware, and it gives them an easy way to integrate their Apache Kafka applications with other AWS services. With Amazon MSK, customers can stand up Apache Kafka clusters in minutes instead of weeks, so they can spend more time focusing on the applications that impact their businesses."

Amazon MSK is currently available in the US East (Ohio), US East (N. Virginia), US West (Oregon), EU (Ireland), EU (Frankfurt), EU (Paris), EU (London), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Sydney) regions, and will expand to additional AWS Regions in the next year.

Amazon rejects all 11 shareholder proposals including the employee-led climate resolution at Annual shareholder meeting
Amazon to roll out automated machines for boxing up orders: Thousands of workers' job at stake
Amazon resists public pressure to re-assess its facial recognition business; "failed to act responsibly", says ACLU


Storm 2.0.0 releases with Java enabled architecture, new core and streams API, and more

Vincy Davis
03 Jun 2019
4 min read
Last week, the Apache Storm PMC announced the release of Storm 2.0.0. The major highlight of this release is that Storm has been re-architected in pure Java; previously, a large part of Storm's core functionality was implemented in Clojure. The release also includes significant improvements in performance, a new Streams API, windowing enhancements, and Kafka integration changes.

New architecture implemented in Java
With this release, Storm has been re-architected with its core functionality implemented in pure Java. The new implementation has improved performance significantly and has made internal APIs more maintainable and extensible. The previous language, Clojure, often posed a barrier to entry for new contributors; Storm's codebase will now be more accessible to developers who don't want to learn Clojure in order to contribute.

New high-performance core
Storm 2.0.0 has a new core featuring a leaner threading model, a blazing fast messaging subsystem, and a lightweight back pressure model. It has been designed to push boundaries on throughput, latency, and energy consumption while maintaining backward compatibility. This makes Storm 2.0 the first streaming engine to break the 1-microsecond latency barrier.

New Streams API
This version has a new typed API for expressing streaming computations more easily, using functional-style operations. It builds on top of Storm's core spout and bolt APIs and automatically fuses multiple operations together, which helps optimize the pipeline.

Windowing enhancements
Storm 2.0.0's windowing API can now save/restore the window state to the configured state backend, enabling larger continuous windows. Window boundaries can now also be accessed via the APIs.

Kafka integration changes

Removal of storm-kafka
Due to Kafka's deprecation of the underlying client library, the storm-kafka module has been removed. Users will have to move to the storm-kafka-client module, which uses Kafka's kafka-clients library for integration.

Move to the KafkaConsumer.assign API
Kafka's own subscription mechanism, which was used in Storm 1.x, has been removed entirely in 2.0.0. The storm-kafka-client subscription interface has also been removed, due to the limited control it offered over subscription behavior. It has been replaced with the TopicFilter and ManualPartitioner interfaces. For custom subscriptions, head over to the storm-kafka-client documentation, which describes how to customize assignment.

Other Kafka highlights
- The KafkaBolt now allows you to specify a callback that will be called when a batch is written to Kafka.
- The FirstPollOffsetStrategy behavior has been made consistent between the non-Trident and Trident spouts.
- storm-kafka-client now has a transactional non-opaque Trident spout.

Users have also been notified that the 1.0.x version line will no longer be maintained, and they are strongly encouraged to upgrade to a more recent release. Java 7 support has been dropped; Storm 2.0.0 requires Java 8.

There has been a mixed reaction from users to the changes in Storm 2.0.0. Some users are not happy with Apache dropping Clojure. As a user on Hacker News comments, "My team has been using Clojure for close to a decade, and we found the opposite to be the case. While the pool of applicants is smaller, so is the noise ratio. Clojure being niche means that you get people who are willing to look outside the mainstream, and are typically genuinely interested in programming. In case of Storm, Apache commons is run by Java devs who have zero interest in learning Clojure. So, it's not surprising they would rewrite Storm in their preferred language."

Some users think this move shows that developers nowadays are unwilling to learn new things. As another user on Hacker News comments, "There is a false cost assigned to learning a language. Developers are too unwilling to even try stepping beyond the boundaries of the first thing they learned. The cost is always lower than they may think, and the benefits far surpassing what they may think. We've got to work at showing developers those benefits early; it's as important to creating software effectively as any other engineer's basic toolkit."

Others are quite happy with Storm moving to Java. A user on Reddit said, "To me, this makes total sense as the project moved to Apache. Obviously, much more people will be able to consider contributing when it's in Java. Apache goal is sustainability and long-term viability, and Java would work better for that."

To download Storm 2.0.0, visit the Storm downloads page.

Walkthrough of Storm UI
Storing Apache Storm data in Elasticsearch
Getting started with Storm Components for Real Time Analytics


Safari Technology Preview release 83 now available for macOS Mojave and macOS High Sierra

Amrata Joshi
03 Jun 2019
2 min read
Last week, the team at WebKit announced that Safari Technology Preview release 83 is now available for macOS Mojave and macOS High Sierra. Safari Technology Preview is a version of Safari for macOS that includes an in-development version of the WebKit browser engine.

What's new in Safari Technology Preview release 83?

Web authentication
This release comes with web authentication enabled by default on macOS. Web authentication has been changed to cancel the pending request when a new request is made, and to return InvalidStateError to sites whenever authenticators return such an error.

Pointer events
The issue with the isPrimary property of pointercancel events has been fixed, as has the issue with calling preventDefault() on pointerdown.

Rendering
The team has implemented backing-sharing in compositing layers, allowing overlapping layers to paint into the backing store of another layer. Rendering of backing-sharing layers with transforms has been fixed, and the issue with layer-related flashing with composited overflow: scroll has been fixed as well.

CSS
In this release, "clearfix" with display: flow-root has been implemented, as have page-break-* and -webkit-column-break-*. The issue with font-optical-sizing applying the wrong variation value has been fixed. The CSS grid support has also been updated.

WebRTC
This release now allows sequential playback of media files. The issue with video streams freezing has also been fixed.

Major bug fixes
The CPU timeline and memory timeline bars have been fixed, as have the colors in the network table waterfall container. The issue with context menu items in the DOM tree has also been fixed.

To know more about this news, check out the release notes.

Chrome, Safari, Opera, and Edge to make hyperlink auditing compulsorily enabled
Safari Technology Preview 71 releases with improvements in Dark Mode, Web Inspector, WebRTC, and more!
Is it time to ditch Chrome? Ad blocking extensions will now only be for enterprise users

Ian Lance Taylor, Golang team member, adds another perspective to Go being Google's language

Sugandha Lahoti
28 May 2019
6 min read
Earlier this month, the Hacker News community got into a heated debate on whether "Go is Google's language, and not the community's". The thread was started by Chris Siebenmann, who works at the Department of Computer Science, University of Toronto. His blog post reads, "Go has community contributions but it is not a community project. It is Google's project." In response to his statements, last Thursday Ian Lance Taylor, a Googler and member of the Golang team, added his own views on a Google Groups mailing list; they don't necessarily contradict Chris's blog post, but they add some nuance.

Ian begins with a disclaimer: "I'm speaking exclusively for myself here, not for Google nor for the Go team." He then reminds us that Go is an open source language, considering that all the source code, including that of all the infrastructure support, is freely available and may be reused and changed by anyone. Go gives all developers the freedom to fork and take an existing project in a new direction. He further explains that there are 59 Googlers and 51 non-Googlers on the committers list, the set of people who can commit changes to the project. He says, "so while Google is the majority, it's not an overwhelming one."

Golang has a small core committee which makes decisions
Contradicting Chris's opinion that Golang is run by a small set of people, which prevents it from becoming the community's language, he says, "All successful languages have a small set of people who make the final decisions. Successful languages pay attention to what people want, but to change the language according to what most people want is, I believe, a recipe for chaos and incoherence. I believe that every successful language must have a coherent vision that is shared by a relatively small group of people." He then adds, "Since Go is a successful language, and hopes to remain successful, it too must be open to community input but must have a small number of people who make final decisions about how the language will change over time."

This makes sense. The core team's full-time job is to take care of the language, rather than making errant decisions based on community backlash. Google will not make or block a change in a way that kills an entire project. But this does not mean they should sit idle, ignoring the community response. Ideally, the more a project genuinely belongs to its community, the more it will reflect what the community wants and needs.

Ian defends Google as a company being a member of the Golang team, saying they are doing more work with Go at a higher level, supporting efforts like the Go Cloud Development Kit and support for Go in Google Cloud projects like Kubernetes. He also assures that executives, and upper management in general, have never made any attempt to affect how the Go language, tools, and standard library are developed: "Google, apart from the core Go team, does not make decisions about the language."

What if Golang is killed?
He is unsure of what will happen if someone on the core Go team decides to leave Google but wants to continue working on Go. He says, "many people who want to work on Go full time wind up being hired by Google, so it would not be particularly surprising if the core Go team continues to be primarily or exclusively Google employees." This reaffirms the original argument about Google having a propensity to kill its own products. While Google's history shows that many of its dead products were actually an important step toward something better and more successful, why and how much of that logic is directly relevant to an open source project is something worth thinking about.

He further adds, "It's also possible that someday it will become appropriate to create some sort of separate Go Foundation to manage the language." But he did not specify what such a foundation would look like, who its members would be, or what its governance model would be. Will Google leave it to the community to figure out the governance model suddenly, by pulling the original authors onto some other exciting new project? Or would they let the authors work on Golang only in their spare time at home or on weekends?

Another common argument concerns what Google has invested to keep Go thriving, and whether the so-called Go Foundation would be able to sustain a healthy development cycle with low monetary investment (although GitHub Sponsors can, maybe, change that). A comment on Hacker News reads, "Like it or not, Google is probably paying around $10 million a year to keep senior full-time developers around that want to work on the language. That could be used as a benchmark to calculate how much of an investment is required to have a healthy development cycle. If a community-maintained fork is created, it would need time and monetary investment similar to what Google is doing just to maintain and develop non-controversial features. Question is: Is this assessment sensible and if so, is the community able or willing to make this kind of investment?"

In general, though, most developers agreed with Ian. Here are a few responses from the same mailing list:

"I just want to thank Ian for taking the time to write this. I've already got the idea that it worked that way, by my own deduction process, but it's good to have a confirmation from inside."

"Thank you for writing your reply Ian. Since it's a rather long post I don't want to go through it point by point, but suffice it to say that I agree with most of what you've written."

Read Ian's post on Google Forums.

Is Golang truly community driven and does it really matter?
Go User Survey 2018 results: Golang goes from strength to strength, as more engineers than ever are using it at work.
GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch


Rust 1.35.0 released

Vincy Davis
24 May 2019
3 min read
Yesterday, the Rust team announced the release of Rust 1.35.0. The headline change is that the Fn* closure traits are now implemented for Box<dyn Fn*>. Other highlights include coercing closures to unsafe fn pointers, calling the dbg! macro without any arguments, and the stabilization of a number of APIs. Key features explained in brief:

Fn* closure traits implemented for Box<dyn Fn*>

In Rust 1.35.0, the FnOnce, FnMut, and Fn traits are now implemented for Box<dyn FnOnce>, Box<dyn FnMut>, and Box<dyn Fn> respectively. This allows users to pass boxed closures in places where a function implementing one of these traits is expected. It is also now possible to directly call Box<dyn FnOnce> objects. (See the first sketch at the end of this article.)

Coercing closures to unsafe function pointers

In earlier versions, closures that do not capture from the environment could only be coerced to safe function pointers. With this release, the coercion has been extended to unsafe function pointers as well. (See the second sketch at the end of this article.)

Calling dbg!() with no argument

The dbg!() macro allows you to quickly inspect the value of an expression along with its context. Now, users can call dbg!() without passing any arguments, which is useful for tracing which branch an application takes. (See the third sketch at the end of this article.)

Library stabilizations

In 1.35.0, a number of APIs have become stable, along with a few new implementations and other changes. Some of them are mentioned below; a combined sketch appears at the end of this article.

Copy the sign of a floating point number onto another
Check whether a Range contains a value
Map and split a borrowed RefCell value in two
Replace the value of a RefCell through a closure
Hash a pointer or reference by address, not value
Copy the contents of an Option<&T>

To know more about the changes in the library, head over to the release notes page.

Changes in Clippy

Clippy is a collection of lints to catch common mistakes and improve Rust code. In this release, a new lint, drop_bounds, has been added. Clippy has also split the lint redundant_closure into redundant_closure and redundant_closure_for_method_calls.

Changes in Cargo

When passing a test filter, such as cargo test foo, users no longer have to build examples (unless they set test = true).
The rustc-cdylib-link-arg key has been added to build scripts to specify linker arguments for cdylib crates (a build.rs sketch appears at the end of this article).
cargo clippy-preview is now a built-in cargo command.
The verification step in cargo package that checks for modified files is now stricter. It uses a hash of the contents instead of checking file system mtimes, and it checks all files in the package.

To know more about the changes in Cargo, head over to the release notes page.

Read more about the Rust 1.35.0 announcement on the official Rust blog.

Read More

Rust’s recent releases 1.34.0 and 1.34.1 affected from a vulnerability that can cause memory unsafety

Rust 1.34 releases with alternative cargo registries, stabilized TryFrom and TryInto, and more

Rust shares roadmap for 2019
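First, a minimal sketch of the Box<dyn Fn*> change. The function and variable names here are our own illustration, not taken from the release notes:

    // Rust 1.35.0: Box<dyn Fn*> now implements the corresponding Fn* trait.
    fn call_twice(f: impl Fn()) {
        f();
        f();
    }

    fn main() {
        // A boxed closure can now be passed where `impl Fn()` is expected,
        // because Box<dyn Fn()> itself implements Fn().
        let greet: Box<dyn Fn()> = Box::new(|| println!("hello"));
        call_twice(greet);

        // Box<dyn FnOnce> objects can now be called directly, consuming the box.
        let once: Box<dyn FnOnce()> = Box::new(|| println!("called exactly once"));
        once();
    }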
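Second, a sketch of the new coercion to unsafe function pointers; apply is a hypothetical helper, not part of any real API:

    // Rust 1.35.0: non-capturing closures now coerce to `unsafe fn` pointers.
    fn apply(f: unsafe fn(i32) -> i32, x: i32) -> i32 {
        // Calling through an `unsafe fn` pointer requires an unsafe block.
        unsafe { f(x) }
    }

    fn main() {
        // Before 1.35.0, non-capturing closures could only be coerced to
        // safe `fn` pointers; now `unsafe fn` pointers work too.
        let double: unsafe fn(i32) -> i32 = |x| x * 2;
        println!("{}", apply(double, 21)); // prints 42
    }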
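Third, a sketch of argument-less dbg!() used for branch tracing; the condition is invented for illustration:

    fn main() {
        let n = 3;
        if n % 2 == 0 {
            dbg!(); // prints the source location (file and line) if this branch runs
            println!("even");
        } else {
            dbg!(); // ...so you can see which branch was actually taken
            println!("odd");
        }
    }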
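And a combined sketch of a few of the newly stabilized library APIs; the values are arbitrary:

    fn main() {
        // Copy the sign of a floating point number onto another (f32::copysign).
        assert_eq!(1.5f32.copysign(-2.0), -1.5);

        // Check whether a Range contains a value (Range::contains).
        assert!((10..20).contains(&15));

        // Copy the contents of an Option<&T> (Option::copied).
        let x = 12;
        let copied: Option<i32> = Some(&x).copied();
        assert_eq!(copied, Some(12));

        println!("all assertions passed");
    }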
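Finally, a hedged build.rs sketch for the new rustc-cdylib-link-arg key; the specific linker flag and library name are arbitrary examples:

    // build.rs: emits a linker argument that Cargo applies only when this
    // crate is built as a cdylib.
    fn main() {
        println!("cargo:rustc-cdylib-link-arg=-Wl,-soname,libexample.so");
    }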


Wolfram Engine is now free for developers

Vincy Davis
22 May 2019
3 min read
Yesterday, in a blog post, Stephen Wolfram announced the launch of a free Wolfram Engine for developers. The Wolfram Engine runs on any standard platform, including Linux, Mac, Windows, and Raspberry Pi. It can be used directly with a script or from a command line, and it has access to the whole Wolfram Knowledgebase through a free basic subscription to the Wolfram Cloud.

“The Wolfram Engine is the heart of all our products,” says Wolfram. The Wolfram Engine implements the full Wolfram Language as a software component and can immediately be plugged into any standard software engineering stack.

The Wolfram Language is a powerful system used for interactive computing as well as for R&D, education, and data science. It is also increasingly used as a key component in building production software systems. The language has over 5,000 functions, covering visualization, machine learning, numerics, image computation, and much more. It carries a great deal of real-world knowledge too, particularly in geographic, medical, cultural, engineering, and scientific domains.

The Wolfram Language has increasingly been used inside large-scale software projects. Wolfram added, “Sometimes the whole project is built in Wolfram Language. Sometimes Wolfram Language is inserted to add some critical computational intelligence, perhaps even just in a corner of the project.”

The free Wolfram Engine for developers will make the Wolfram Language available to any software developer and will help developers build systems that take full advantage of its computational intelligence. Wolfram concludes the blog post stating, “We’ve worked hard to make the Free Wolfram Engine for Developers as easy to use and deploy as possible.”

Many developers have welcomed the free availability of the Wolfram Engine.

https://twitter.com/bc238dev/status/1130868201129107456

A user on Hacker News states, “I'm excited about this change. I wish it had happened sooner so it could have had more of an impact. It certainly put Wolfram Engine back on my radar.”

Another user is planning to take advantage by “using Mathematica (and its GUI) on a Raspberry Pi to explore and figure out how to do what you want to do, but then actually run it in Wolfram Engine on a more powerful computer.”

To know more, head over to Stephen Wolfram’s blog.

Read More

Software developer tops the 100 Best Jobs of 2019 list by U.S. News and World Report

Key trends in software development in 2019: cloud native and the shrinking stack

18 people in tech every programmer and software engineer needs to follow in 2019

Facebook releases Pythia, a deep learning framework for vision and language multimodal research

Amrata Joshi
22 May 2019
2 min read
Yesterday, the team at Facebook released Pythia, a deep learning framework that supports multitasking in vision and language multimodal research. Pythia is built on the open-source PyTorch framework and enables researchers to easily build, reproduce, and benchmark AI models.

https://twitter.com/facebookai/status/1130888764945907712

It is designed for vision and language tasks, such as answering questions about visual data and automatically generating image captions. The framework also incorporates elements of Facebook’s winning entries in recent AI competitions, including the VQA Challenge 2018 and the VizWiz Challenge 2018.

Features of Pythia

Reference implementations: Pythia includes reference implementations that show how previous state-of-the-art models achieved related benchmark results.
Performance gauging: It also helps in gauging the performance of new models against those references.
Multitasking: Pythia supports multitasking and distributed training.
Datasets: It ships with built-in support for various datasets, including VizWiz, VQA, TextVQA, and VisualDialog.
Customization: Pythia supports custom losses, metrics, scheduling, optimizers, and TensorBoard logging, so researchers can adapt it to their needs.
Unopinionated: Pythia is unopinionated about the dataset and model implementations that are built on top of it.

The goal of the team behind Pythia is to accelerate progress on AI models and their results, and to make it easier for the AI community to build on, and benchmark against, successful systems. The team hopes that Pythia will also help researchers develop adaptive AI that synthesizes multiple kinds of understanding into a more context-based, multimodal understanding. The team also plans to continue adding tools, datasets, tasks, and reference models.

To know more about this news, check out the official Facebook announcement.

Facebook tightens rules around live streaming in response to the Christchurch terror attack

Facebook again, caught tracking Stack Overflow user activity and data

Facebook bans six toxic extremist accounts and a conspiracy theory organization


Racket 7.3 releases with improved Racket-on-Chez, refactored IO system, and more

Bhagyashree R
17 May 2019
2 min read
Earlier this week, the team behind Racket announced the release of Racket 7.3. This release comes with an improved Racket-on-Chez, a refactored IO system, a new shear function in the Pict library, and more. The Racket programming language is a general-purpose, multi-paradigm dialect of Lisp and Scheme.

Updates in Racket 7.3

Snapshot builds of Racket-on-Chez are now available

Racket’s core was largely implemented in C, which hampers its portability to different systems, its maintenance, and its performance. Hence, back in 2017, the team decided to make the Racket distribution run on Chez Scheme. With the last release (Racket 7.2), the team shared that the implementation of Racket on Chez Scheme (Racket CS) had reached almost complete status, with all functionality in place. With this release, the team has further improved Racket-on-Chez and made its snapshot builds available on Racket Snapshots. The team also shared that by the next release we can expect Racket-on-Chez to be included as a download option.

Other updates

In addition to the improvements in Racket-on-Chez, the following updates are introduced:

Racket’s IO system has been refactored to provide better performance and a simplified internal design.
The JSON reader is now dramatically faster.
The Racket web library now comes with improved support for 307 redirects.
The Plot library gains color map support for renderers. The Plot library can produce many kinds of plots, including scatter plots, line plots, contour plots, histograms, and 3D surfaces and isosurfaces.
A shear function has been added to the Pict library, one of Racket’s standard functional picture libraries.

Read the full announcement on Racket’s official website.

Racket 7.2, a descendant of Scheme and Lisp, is now out!

Racket v7.0 is out with overhauled internals, updates to DrRacket, TypedRacket among others

Swift is improving the UI of its generics model with the “reverse generics” system