
Tech News


Python in Visual Studio Code released with enhanced Variable Explorer, Data Viewer, and more!

Amrata Joshi
27 Apr 2019
3 min read
This week, the team behind the Python extension for Visual Studio Code announced a new release. It comes with an enhanced Variable Explorer and Data Viewer, simpler debug configuration, and improvements to the Python Language Server.

What's new in Python in Visual Studio Code?

Enhanced Variable Explorer and Data Viewer

This release adds a built-in Variable Explorer along with a Data Viewer, which help users easily view, inspect, and filter the variables in their application, including lists, NumPy arrays, pandas data frames, and more. A section for variables now appears while running code and cells in the Python Interactive window. Expanding it shows a list of the variables in the current Jupyter session; more variables show up automatically as they get used in the code, and the list can be sorted by clicking on each column header.

Users can now double-click on a row, or use the "Show variable in Data Viewer" button, to view the full data of a variable in the newly added Data Viewer and perform a simple search over its values.

Improvements to debug configuration

The process of configuring the debugger has been simplified. If a user starts debugging through the Debug Panel and no debug configuration exists, they are now prompted to create one. Instead of manually editing the launch.json file, users can create a debug configuration through a set of menus.

Improvements to the Python Language Server

This release ships fixes and improvements to the Python Language Server. The team has added back features that were removed in the 0.2 release, including "Rename Symbol", "Go to Definition", and "Find All References". Loading time and memory usage have also been improved when importing scientific libraries such as pandas, Plotly, and PyQt5, especially when running in full Anaconda environments.

Read also: Visualizing data in R and Python using Anaconda [Tutorial]

Major changes

- The debugger's default behavior has been changed to display return values.
- "Unit Test" has been renamed to "Test" or "Testing".
- The debugStdLib setting has been replaced with justMyCode.
- A new setting enables/disables the data science code lens.
- The reliability of test discovery when using pytest has been improved.

Bug fixes

- Issues with cell spacing have been resolved.
- Problems with errors not showing up for imports have been fixed.
- Issues with tabs in the comments section have been fixed.

To know more about this news, check out Microsoft's official blog post.

Read next:
- Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly
- Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript
- Debugging and Profiling Python Scripts [Tutorial]
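To see the Variable Explorer and Data Viewer in action, running a few cells in the Python Interactive window is enough. Here is a minimal sketch (the variable names and sample data are our own, not from the announcement):

```python
# Run these as cells in VS Code's Python Interactive window;
# each variable then appears in the Variables section, and the
# data frame can be opened in the Data Viewer for filtering/search.
import numpy as np
import pandas as pd

scores = [81, 94, 72]              # a plain list
matrix = np.random.rand(3, 3)      # a NumPy array
df = pd.DataFrame({
    "city": ["Paris", "Oslo", "Lima"],
    "population_m": [2.1, 0.7, 9.7],
})                                  # a pandas data frame
```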


Introducing Voila that turns your Jupyter notebooks into standalone web applications

Bhagyashree R
13 Jun 2019
3 min read
Last week, a Jupyter Community Workshop on dashboarding was held in Paris. At the workshop, several contributors came together to build the Voila package, the details of which QuantStack shared yesterday. Voila serves live Jupyter notebooks as standalone web applications, providing a neat way to share your results with colleagues.

Why do we need Voila?

Jupyter notebooks enable "literate programming", in which human-friendly explanations accompany code blocks. This allows scientists, researchers, and other practitioners of scientific computing to present the theory behind their code, including mathematical equations. However, Jupyter notebooks can prove problematic when you plan to communicate your results to non-technical stakeholders: they might be put off by the code blocks, and by the need to run the notebook to see the results. Notebooks also have no mechanism to prevent arbitrary code execution by the end user.

How does Voila work?

Voila addresses these concerns by converting a Jupyter notebook into a standalone web application. After connecting to a notebook URL, Voila launches the kernel for that notebook and runs all the cells. Once execution is complete, it does not shut down the kernel. The notebook is converted to HTML and served to the user; this rendered HTML includes JavaScript that initiates a websocket connection with the Jupyter kernel. (The original post includes a diagram of this flow. Source: Jupyter Blog)

Voila provides the following features:

- Renders Jupyter interactive widgets: it supports Jupyter widget libraries including bqplot, ipyleaflet, ipyvolume, ipympl, ipysheet, plotly, and ipywebrtc.
- Prevents arbitrary code execution: it does not allow arbitrary code execution by consumers of dashboards.
- A language-agnostic dashboarding system: Voila is built upon Jupyter standard protocols and file formats, enabling it to work with any Jupyter kernel (C++, Python, Julia).
- A custom template system for better extensibility: it provides a flexible template system to produce rich application layouts.

Many Twitter users applauded this new way of creating live and interactive dashboards from Jupyter notebooks:
https://twitter.com/philsheard/status/1138745404772818944
https://twitter.com/andfanilo/status/1138835776828071936
https://twitter.com/ToluwaniJohnson/status/1138866411261124608

Some users also compared it with another dashboarding solution called Panel. The main difference between Panel and Voila is that Panel supports Bokeh widgets, whereas Voila is framework- and language-agnostic. "Panel can use a Bokeh server but does not require it; it is equally happy communicating over Bokeh Server's or Jupyter's communication channels. Panel doesn't currently support using ipywidgets, nor does Voila currently support Bokeh plots or widgets, but the maintainers of both Panel and Voila have recently worked out mechanisms for using Panel or Bokeh objects in ipywidgets or using ipywidgets in Panels, which should be ready soon," a Hacker News user commented.

To read more in detail about Voila, check out the official announcement on the Jupyter Blog.

Read next:
- JupyterHub 1.0 releases with named servers, support for TLS encryption and more
- Introducing Jupytext: Jupyter notebooks as Markdown documents, Julia, Python or R scripts
- JupyterLab v0.32.0 releases
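For a flavor of what Voila serves, here is a minimal sketch of a widget-driven notebook cell (our own example, not from the announcement), which Voila would render as a live dashboard with the source code hidden:

```python
# dashboard.ipynb: a single notebook cell using ipywidgets
import ipywidgets as widgets
from IPython.display import display

slider = widgets.IntSlider(value=3, min=1, max=10, description="n")
output = widgets.Output()

def on_change(change):
    # Recompute and redraw whenever the slider moves.
    output.clear_output()
    with output:
        print(f"{slider.value} squared is {slider.value ** 2}")

slider.observe(on_change, names="value")
display(slider, output)
```

Assuming this is saved as dashboard.ipynb, serving it is a single command, `voila dashboard.ipynb`, which runs the cells and exposes the rendered result at a local URL.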


Exploring Forms in Angular – types, benefits and differences

Expert Network
21 Jul 2021
11 min read
While developing a web application, or setting up dynamic pages and meta tags, we need to deal with multiple input elements and value types; handled naively, these can seriously hinder our work in terms of data flow control, data validation, and user experience.

This article is an excerpt from the book ASP.NET Core 5 and Angular, Fourth Edition by Valerio De Sanctis – a revised edition of a bestseller that includes coverage of the Angular routing module, expanded discussion of the Angular CLI, and detailed instructions for deploying apps on Azure, as well as both Windows and Linux.

Sure, we could easily work around most of the issues by implementing some custom methods within our form-based components; we could throw in checks such as isValid(), isNumber(), and so on here and there, and then hook them up to our template syntax and show/hide the validation messages with the help of structural directives such as *ngIf, *ngFor, and the like. However, it would be a horrible way to address our problem; we didn't choose a feature-rich client-side framework such as Angular to work that way.

Luckily enough, we have no reason to do that, since Angular provides us with a couple of alternative strategies to deal with these common form-related scenarios:

- Template-Driven Forms
- Model-Driven Forms, also known as Reactive Forms

Both are highly coupled with the framework and thus extremely viable; they both belong to the @angular/forms library and share a common set of form control classes. However, they also have their own specific sets of features, along with their pros and cons, which could ultimately lead us to choose one of them.

Let's try to quickly summarize these differences.

Template-Driven Forms

If you've come from AngularJS, there's a high chance that the Template-Driven approach will ring a bell or two. As the name implies, Template-Driven Forms host most of the logic in the template code; working with a Template-Driven Form means:

- Building the form in the .html template file
- Binding data to the various input fields using ngModel instances
- Using a dedicated ngForm object related to the whole form and containing all the inputs, with each being accessible through its name

These things need to be done to perform the required validity checks. To understand this, here's what a Template-Driven Form looks like:

```html
<form novalidate autocomplete="off" #form="ngForm"
      (ngSubmit)="onSubmit(form)">
  <input type="text" name="name" value="" required
         placeholder="Insert the city name..."
         [(ngModel)]="city.Name" #name="ngModel" />
  <span *ngIf="(name.touched || name.dirty) && name.errors?.required">
    Name is a required field: please enter a valid city name.
  </span>
  <button type="submit" name="btnSubmit" [disabled]="form.invalid">
    Submit
  </button>
</form>
```

Here, we can access any element, including the form itself, with some convenient aliases – the attributes with the # sign – and check their current states to create our own validation workflow.

These states are provided by the framework and will change in real time, depending on various things: touched, for example, becomes true when the control has been visited at least once; dirty, which is the opposite of pristine, means that the control value has changed, and so on.
We used both touched and dirty in the preceding example because we want our validation message to be shown only if the user moves their focus to the <input name="name"> and then goes away, leaving it blank by either deleting its value or not setting it.

These are Template-Driven Forms in a nutshell; now that we've had an overall look at them, let's try to summarize the pros and cons of this approach.

Here are the main advantages of Template-Driven Forms:

- They are very easy to write. We can recycle most of our HTML knowledge (assuming that we have any). On top of that, if we come from AngularJS, we already know how well we can make them work once we've mastered the technique.
- They are rather easy to read and understand, at least from an HTML point of view; we have a plain, understandable HTML structure containing all the input fields and validators, one after another. Each element will have a name, a two-way binding with the underlying ngModel, and (possibly) Template-Driven logic built upon aliases that have been hooked to other elements that we can also see, or to the form itself.

Here are their weaknesses:

- Template-Driven Forms require a lot of HTML code, which can be rather difficult to maintain and is generally more error-prone than pure TypeScript.
- For the same reason, these forms cannot be unit tested. We have no way to test their validators or to ensure that the logic we implemented will work, other than running an end-to-end test with our browser, which is hardly ideal for complex forms.
- Their readability will quickly drop as we add more and more validators and input tags. Keeping all their logic within the template might be fine for small forms, but it does not scale well when dealing with complex data items.

Ultimately, we can say that Template-Driven Forms might be the way to go when we need to build small forms with simple data validation rules, where we can benefit from their simplicity. On top of that, they are quite similar to the typical HTML code we're already used to (assuming that we do have a plain HTML development background); we just need to learn how to decorate the standard <form> and <input> elements with aliases and throw in some validators handled by structural directives such as the ones we've already seen, and we'll be set in (almost) no time.

For additional information on Template-Driven Forms, we highly recommend that you read the official Angular documentation at: https://angular.io/guide/forms

That being said, the lack of unit testing, the HTML code bloat that they will eventually produce, and the scaling difficulties will eventually lead us toward an alternative approach for any non-trivial form.

Model-Driven/Reactive Forms

The Model-Driven approach was specifically added in Angular 2+ to address the known limitations of Template-Driven Forms. The forms implemented with this alternative method are known as Model-Driven Forms or Reactive Forms – two names for the exact same thing.

The main difference here is that (almost) nothing happens in the template, which acts as a mere reference to a more complex TypeScript object that gets defined, instantiated, and configured programmatically within the component class: the form model.

To understand the overall concept, let's try to rewrite the previous form in a Model-Driven/Reactive way.
The outcome of doing this is as follows:

```html
<form [formGroup]="form" (ngSubmit)="onSubmit()">
  <input formControlName="name" required />
  <span *ngIf="(form.get('name').touched || form.get('name').dirty)
        && form.get('name').errors?.required">
    Name is a required field: please enter a valid city name.
  </span>
  <button type="submit" name="btnSubmit" [disabled]="form.invalid">
    Submit
  </button>
</form>
```

As we can see, the amount of required code is much lower.

Here's the underlying form model that we will define in the component class file:

```typescript
import { OnInit } from '@angular/core';
import { FormGroup, FormControl } from '@angular/forms';

class ModelFormComponent implements OnInit {
  form: FormGroup;

  ngOnInit() {
    this.form = new FormGroup({
      name: new FormControl()
    });
  }
}
```

Let's try to understand what's happening here:

- The form property is an instance of FormGroup and represents the form itself.
- FormGroup, as the name suggests, is a container of form controls sharing the same purpose. As we can see, the form itself acts as a FormGroup, which means that we can nest FormGroup objects inside other FormGroup objects (we didn't do that in our sample, though).
- Each data input element in the form template – in the preceding code, name – is represented by an instance of FormControl.
- Each FormControl instance encapsulates the related control's current state, such as valid, invalid, touched, and dirty, including its actual value.
- Each FormGroup instance encapsulates the state of each child control, meaning that it will only be valid if/when all its children are also valid.

Also, note that we have no way of accessing the FormControls directly, as we did in Template-Driven Forms; we have to retrieve them using the .get() method of the main FormGroup, which is the form itself.

At first glance, the Model-Driven template doesn't seem too different from the Template-Driven one; we still have a <form> element, an <input> element hooked to a <span> validator, and a submit button; on top of that, checking the state of the input elements takes a bigger amount of source code, since they have no aliases we can use. What's the real deal, then?

To help us visualize the difference, let's look at the following diagrams. Here's a schema depicting how Template-Driven Forms work:

Fig 1: Template-Driven Forms schematic

By looking at the arrows, we can easily see that, in Template-Driven Forms, everything happens in the template; the HTML form elements are directly bound to the DataModel component, represented by a property filled with an asynchronous HTML request to the Web Server, much like we did with our cities and country table.

That DataModel will be updated as soon as the user changes something, unless a validator prevents them from doing that. If we think about it, we can easily understand how there isn't a single part of the whole workflow that happens to be under our control; Angular handles everything by itself using the information in the data bindings defined within our template.

This is what Template-Driven actually means: the template is calling the shots.
Now, let's take a look at the Model-Driven Forms (or Reactive Forms) approach:

Fig 2: Model-Driven/Reactive Forms schematic

As we can see, the arrows depicting the Model-Driven Forms workflow tell a whole different story. They show how the data flows between the DataModel component – which we get from the Web Server – and a UI-oriented form model that retains the states and the values of the HTML form (and its children input elements) presented to the user. This means that we'll be able to get in between the data and the form control objects and perform a number of tasks firsthand: push and pull data, detect and react to user changes, implement our own validation logic, perform unit tests, and so on.

Instead of being superseded by a template that's not under our control, we can track and influence the workflow programmatically, since the form model that calls the shots is also a TypeScript class; that's what Model-Driven Forms are about. This also explains why they are also called Reactive Forms – an explicit reference to the Reactive programming style that favors explicit data handling and change management throughout the workflow.

Summary

In this article, we focused on the Angular framework and the two form design models it offers: the Template-Driven approach, mostly inherited from AngularJS, and the Model-Driven or Reactive alternative. We took some valuable time to analyze the pros and cons of both, and then made a detailed comparison of the underlying logic and workflow. At the end of the day, we chose the Reactive way, as it gives the developer more control and enforces a more consistent separation of duties between the Data Model and the Form Model.

About the author

Valerio De Sanctis is a skilled IT professional with 20 years of experience in lead programming, web-based development, and project management using ASP.NET, PHP, Java, and JavaScript-based frameworks. He has held senior positions at a range of financial and insurance companies, most recently serving as Chief Technology and Security Officer at a leading IT service provider for top-tier insurance groups. He is an active member of the Stack Exchange Network, providing advice and tips on the Stack Overflow, ServerFault, and SuperUser communities; he is also a Microsoft Most Valuable Professional (MVP) for Developer Technologies. He's the founder and owner of Ryadel and the author of many best-selling books on back-end and front-end web development.


Introducing krispNet DNN, a deep learning model for real-time noise suppression

Bhagyashree R
15 Nov 2018
3 min read
Last month, 2Hz introduced an app called krisp, which was featured on the Nvidia website. It uses deep learning for noise suppression and is powered by the krispNet Deep Neural Network. krispNet is trained to recognize and reduce background noise from real-time audio, yielding clear human speech. 2Hz is a company that builds AI-powered voice processing technologies to improve voice quality in communications.

What are the limitations of current noise suppression methods?

Many edge devices, from phones and laptops to conferencing systems, come with noise suppression technologies. The latest mobile phones come equipped with multiple microphones to help suppress environmental noise when we talk. Generally, the first mic is placed on the front bottom of the phone to directly capture the user's voice. The second mic is placed as far as possible from the first. After the surrounding sounds are captured by both mics, the software effectively subtracts them from each other and yields an almost clean voice.

The limitations of the multiple-mic design:

- Since it requires a certain form factor, its application is limited to certain use cases, such as phones or headsets with sticky mics.
- These designs make the audio path complicated, requiring more hardware and code.
- Audio processing can only be done on the edge or device side, so the underlying algorithm cannot be very sophisticated due to the low power and compute budget.

Traditional Digital Signal Processing (DSP) algorithms also work well only in certain use cases. Their main drawback is that they do not scale to the variety and variability of noises that exist in our everyday environment.

This is why 2Hz has come up with a deep learning solution that uses a single-microphone design, with all the post-processing handled by software. This allows hardware designs to be simpler and more efficient.

How can deep learning be used in noise suppression?

There are three steps involved in applying deep learning to noise suppression (the original post includes a diagram; source: Nvidia):

- Data collection: Build a dataset to train the network by combining distinct noises and clean voices to produce synthetic noisy speech.
- Training: Feed the synthetic noisy speech dataset to the DNN on the input and the clean speech on the output.
- Inference: Produce a mask which filters out the noise, giving you a clear human voice.

What are the advantages of krispNet DNN?

- krispNet is trained with a very large amount of distinct background noises and clean human voices. It is able to optimize itself to recognize what's background noise and separate it from human speech, leaving only the latter. While inferencing, krispNet acts on real-time audio and removes background noise.
- krispNet DNN can also perform Packet Loss Concealment for audio, filling in missing voice chunks in voice calls and eliminating "chopping".
- krispNet DNN can predict higher frequencies of a human voice and produce much richer voice audio than the original lower-bitrate audio.

Read more in detail about how deep learning can be used in noise suppression on the Nvidia blog.

Read next:
- Samsung opens its AI based Bixby voice assistant to third-party developers
- Voice, natural language, and conversations: Are they the next web UI?
- How Deep Neural Networks can improve Speech Recognition and generation
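The mask-based inference step is easy to picture in code. Below is our own illustrative Python/NumPy sketch (not 2Hz's actual model or API); it assumes a trained `model` object that predicts a per-bin mask from a noisy magnitude spectrogram:

```python
import numpy as np

def denoise(noisy_audio, model, frame=512, hop=128):
    """Illustrative mask-based noise suppression (assumed workflow)."""
    window = np.hanning(frame)
    # 1. Short-time Fourier transform: slice audio into windowed frames.
    frames = [noisy_audio[i:i + frame] * window
              for i in range(0, len(noisy_audio) - frame, hop)]
    spectra = np.fft.rfft(frames, axis=1)

    # 2. The DNN predicts a mask in [0, 1] per time-frequency bin:
    #    ~1 where speech dominates, ~0 where noise dominates.
    #    `model` is an assumed object with a predict() method.
    mask = model.predict(np.abs(spectra))

    # 3. Apply the mask and invert back to audio via overlap-add.
    cleaned = np.fft.irfft(spectra * mask, axis=1)
    out = np.zeros(len(noisy_audio))
    for k, seg in enumerate(cleaned):
        out[k * hop:k * hop + frame] += seg * window
    return out
```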


Microsoft’s .NET Core 2.1 now powers Bing.com

Melisha Dsouza
21 Aug 2018
4 min read
Microsoft is ever striving to make its products run better, and it can add yet another accomplishment to its list: Microsoft's search engine, Bing, is now running fully on .NET Core 2.1, as announced by the .NET engineering team on their blog yesterday.

.NET Core is the slimmed-down, cross-platform version of Microsoft's .NET managed common language runtime. Since Bing runs on thousands of servers spanning many data centers across the globe, .NET Core serves as the perfect platform for it.

Why did Bing migrate to .NET Core 2.1?

Bing has always run on the .NET Framework, but was able to move to .NET Core 2.1 after some recent API additions. Let's take a look at the main reasons for Bing.com's migration.

1. Performance, i.e. serving latency

.NET Core 2.1 has brought performance improvements in virtually all areas of the runtime and libraries. Internal server latency over the last few months shows a striking 34% improvement. (The original post includes a graph. Source: blog.msdn.microsoft.com)

The following changes in .NET Core 2.1 are the reasons why the workload and performance have greatly improved.

#1 Vectorization of string.Equals & string.IndexOf/LastIndexOf

HTML rendering and manipulation are string-heavy workloads. Vectorization of string comparisons and indexing operations (major components of string slicing) is the biggest contributor to the performance improvement. You can find more information on the GitHub page for the vectorization of string.Equals and string.IndexOf/LastIndexOf.

#2 Devirtualization support for EqualityComparer<T>.Default

One of .NET Core's major software components is a heavy user of Dictionary<int/long, V>, which indirectly benefits from the intrinsic recognition work that was done in the JIT to make Dictionary<K, V> amenable to that optimization. Head over to the GitHub page for more clarity on why this feature empowers .NET Core 2.1.

#3 Software Write Watch for Concurrent GC

This led to a reduction in CPU usage. The implementation relies on a JIT Write Barrier, which inherently increases the cost of a reference store, but that cost is amortized and not noticed in the workload.

#4 Methods with calli are now inline-able

ldftn + calli are used in lieu of delegates (which incur an object allocation) in performance-critical pieces of code where there is a need to call a managed method indirectly. This change allowed method bodies with a calli instruction to be eligible for inlining. The GitHub page provides more insight on this subject.

#5 Improved performance of string.IndexOfAny for 2- and 3-character searches

A common operation in a front-end stack is searching for ':', '/', '/' in a string to delimit portions of a URL. Check out this special-casing improvement, which was beneficial throughout the codebase, on the GitHub page.

2. Runtime agility

The ability to have an xcopy version of the runtime inside their application means the team can adopt newer versions of the runtime at a much faster pace. The continuous integration (CI) pipeline runs against .NET Core's daily CI builds, testing functionality and performance all the way through the release.

3. ReadyToRun images

Managed applications often have poor startup performance, as methods first have to be JIT-compiled to machine code. .NET Framework has a precompilation technology, NGEN. On .NET Core, the crossgen tool allows the code to be precompiled as a pre-deployment step, such as in the build lab, so the images deployed to production are Ready To Run. This feature was not supported in the previous .NET implementation.

The .NET Core team is striving to provide Bing.com users fast results, and the latest software and technologies used by its developers ensure that .NET Core will not fail Bing.com! Read the detailed overview on Microsoft's blog.

Read next:
- Say hello to FASTER: a new key-value store for large state management by Microsoft
- Microsoft Azure's new governance DApp: An enterprise blockchain without mining
- .NET Core completes move to the new compiler – RyuJIT


GitHub now allows issue transfer between repositories; a public beta version

Savia Lobo
01 Nov 2018
3 min read
Yesterday, GitHub announced that repository admins can now transfer issues from one repository to another, better-fitting repository, to help those issues find their home. The feature is currently in public beta.

Nat Friedman, CEO of GitHub, said in his tweet, "We've just shipped the ability to transfer an issue from one repo to another. This is one of the most-requested GitHub features. Feels good!"

When a user transfers an issue, the comments, assignees, and issue timeline events are retained. The issue's labels, projects, and milestones are not retained, although users can see past activity in the issue's timeline. People or teams who are mentioned in the issue will receive a notification letting them know that the issue has been transferred to a new repository. The original URL redirects to the new issue's URL. People who don't have read permissions in the new repository will see a banner letting them know that the issue has been transferred to a new repository that they can't access.

Permission levels for issue transfer between repositories

People with owner or team maintainer roles can manage repository access with teams, and each team can have different repository access permissions. There are three types of repository permissions – Read, Write, and Admin – available to people or teams collaborating on repositories that belong to an organization.

To transfer an open issue to another repository, the user needs admin permissions on the repository the issue is in and on the repository the issue is being transferred to. If the issue is being transferred from a repository that's owned by an organization you are a member of, you must transfer it to another repository within your organization. To know more about repository permission levels, visit the GitHub Help post.

Steps to transfer an open issue to another repository:

1. On GitHub, navigate to the main page of the repository.
2. Under your repository name, click Issues.
3. In the list of issues, click the issue you'd like to transfer.
4. In the right sidebar, click "Transfer this issue".
5. In "Choose a repository," select the repository you want to transfer the issue to.
6. Click "Transfer issue".

Read next:
- GitHub Business Cloud is now FedRAMP authorized
- GitHub updates developers and policymakers on EU copyright Directive at Brussels
- GitHub October 21st outage RCA: How prioritizing 'data integrity' launched a series of unfortunate events that led to a day-long outage

macOS gets RPCS3 and Dolphin using Gfx-portability, the Vulkan portability implementation for non-Rust apps

Melisha Dsouza
05 Sep 2018
2 min read
gfx-portability, the Vulkan Portability implementation, allows non-Rust applications that use Vulkan to run with ease. After improving the functionality of gfx-portability's Metal backend by benchmarking Dota 2 and verifying certain functionality through the Vulkan Conformance Test Suite (CTS), the developers planned to expand their testing to other projects that are open source, already use Vulkan for rendering, and lack strong macOS/Metal support. The projects that matched these criteria were RPCS3 and Dolphin. However, the team discovered various issues with both projects.

RPCS3 blockers

RPCS3 satisfies all the above-mentioned criteria. It is an open-source Sony PlayStation 3 emulator and debugger written in C++ for Windows and Linux. RPCS3 has a Vulkan backend, and some attempts were made to support macOS previously. The gfx-rs team added surface and swapchain support to start off the macOS integration. This process identified a number of blockers in both gfx-rs and RPCS3; the RPCS3 developers and the gfx-rs team collaborated to quickly address them. Once the blockers were addressed, gameplay was rendered within RPCS3.

Dolphin support for macOS

Dolphin, the emulator for two recent Nintendo video game consoles, was actively working on adding support for macOS. While testing it with gfx-portability, the teams noticed some further minor bugs in gfx. The issues were addressed, and the teams were able to render real gameplay.

Continuous releases for the masses

The team has already started automatically releasing gfx-portability binaries under the latest GitHub release of the portability repository. Currently the team provides macOS (Metal) and Linux (Vulkan) binaries, and will add Windows (Direct3D 12/11 and Vulkan) binaries soon. These releases ensure that users don't have to build gfx-portability themselves in order to test it with an existing project. The binaries are compatible both with the Vulkan loader on macOS and with linking directly from an application.

The team was able to run RPCS3 and Dolphin on top of gfx-portability's Metal backend, and only had to address some minor issues in the process. Stability and performance will improve as more real-world use cases are tested. You can read more about this on gfx-rs.github.io.

Read next:
- OpenAI Five loses against humans in Dota 2 at The International 2018
- How to use artificial intelligence to create games with rich and interactive environments [Tutorial]
- Best game engines for AI game development


Home Assistant: an open source Python home automation hub to rule all things smart

Prasad Ramesh
25 Aug 2018
2 min read
We have Amazon Alexa, Google Home, and Philips Hue for smart actions in the home, but they are separate products that require separate controls. What if all of your smart devices could work together under a master hub? That is Home Assistant.

Home Assistant is an automation platform that can run on a Raspberry Pi. It acts as a central hub for connecting and automating all your smart devices. It supports services like IFTTT, Pushbullet, Google Cast, and many others; currently, over a thousand components are supported. It tracks the state of all the installed smart devices in your home, and all of them can be controlled from a single, mobile-friendly interface. For security and privacy, all operations via Home Assistant are done locally, meaning no data is stored on the cloud.

The Home Assistant website advertises functions like having lights turn on at sunset, or dimming the lights when you watch a movie on Chromecast (a configuration sketch follows at the end of this piece). There is a virtual image called Hass.io, an all-in-one solution to get started with Home Assistant, and there is a guide for installing Hass.io on a Raspberry Pi.

The requirements for running Home Assistant are:

- Raspberry Pi 3 Model B+ with power supply (at least 2.5A)
- A Class 10 or higher, 32 GB or bigger micro SD card
- An SD card reader
- An Ethernet cable (optional; Hass.io can work with WiFi)
- Optionally, a USB stick for unattended configuration

Home Assistant is a hub; it cannot control anything on its own. Think of it as a master device that passes instructions and communicates with other devices for home automation: it can't do anything if there are no smart devices to work with. Since it is open source, there are dozens of contributions from tinkerers and DIY enthusiasts worldwide. You can check out the automation examples to learn more and use them.

The installation is very simple, and there is a friendly UI to control your automation tasks. There is plenty of information at the Home Assistant website to get you started. They also have a GitHub repository.

Read next:
- Cortana and Alexa become best friends: Microsoft and Amazon release a preview of this integration
- Apple joins the Thread Group, signalling its Smart Home ambitions with HomeKit, Siri and other IoT products
- Amazon Echo vs Google Home: Next-gen IoT war
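As promised above, here is a minimal sketch of the "lights on at sunset" automation as a Home Assistant YAML configuration. This is our own illustrative example, not from the article; the entity name is a placeholder, and the exact schema should be checked against the Home Assistant docs for your version:

```yaml
# configuration.yaml (illustrative sketch, not from the official docs)
automation:
  - alias: "Lights on at sunset"
    trigger:
      platform: sun
      event: sunset
    action:
      service: light.turn_on
      entity_id: light.living_room   # placeholder entity name
```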


Google open sources DeepLab-v3+: A model for Semantic Image Segmentation using TensorFlow

Savia Lobo
13 Mar 2018
2 min read
DeepLab-v3+, Google's latest and best-performing semantic image segmentation model, is now open sourced!

DeepLab is a state-of-the-art deep learning model for semantic image segmentation, with the goal of assigning semantic labels (e.g., person, dog, cat, and so on) to every pixel in the input image. Assigning these semantic labels sets much stricter localization accuracy requirements than other visual entity recognition tasks, such as image-level classification or bounding-box-level detection. Examples of semantic image segmentation tasks include the synthetic shallow depth-of-field effect shipped in the portrait mode of the Pixel 2 and Pixel 2 XL smartphones, and mobile real-time video segmentation.

DeepLab-v3+ is implemented in TensorFlow and has its models built on top of a powerful convolutional neural network (CNN) backbone architecture for the most accurate results, intended for server-side deployment. (The original post includes an illustration. Source: Google Research blog)

Let's have a look at some of the highlights of DeepLab-v3+:

- Google has extended DeepLab-v3 to include a simple yet effective decoder module to refine the segmentation results, especially along object boundaries. In this encoder-decoder structure, one can arbitrarily control the resolution of extracted encoder features by atrous convolution to trade off precision and runtime.
- Google has also shared its TensorFlow model training and evaluation code, along with models already pre-trained on the PASCAL VOC 2012 and Cityscapes benchmark semantic segmentation tasks.
- This version adopts two network backbones: MobileNetv2, a fast network structure designed for mobile devices, and Xception, a powerful network structure intended for server-side deployment.

You can read more about this announcement on the Google Research blog.
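For readers who want to try the released models, the pattern below is a minimal sketch of the general TensorFlow 1.x frozen-graph workflow. The tensor names follow the convention used by the repo's demo notebook, but treat the exact names and the model path as assumptions to verify against the DeepLab repository:

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x API
from PIL import Image

# Load a frozen DeepLab graph (the file path is a placeholder).
graph_def = tf.GraphDef()
with tf.gfile.GFile("deeplabv3_pascal.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

image = np.asarray(Image.open("street.jpg").convert("RGB"))

with tf.Session(graph=graph) as sess:
    # 'ImageTensor:0' and 'SemanticPredictions:0' are the tensor names
    # used by the official demo notebook; verify them for your export.
    seg_map = sess.run(
        "SemanticPredictions:0",
        feed_dict={"ImageTensor:0": np.expand_dims(image, 0)})

print(seg_map.shape)  # per-pixel class labels for the input image
```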


Qml.Net: A new C# library for cross-platform .NET GUI development

Prasad Ramesh
10 Aug 2018
3 min read
Qml.Net is a C# library for cross-platform GUI development with a native dependency. It exposes the required object types to host a QML engine. In Qml.Net, QML and JavaScript together form the UI layer; it can be thought of as the view in MVC.

Qml.Net features

The PInvoke code in this .NET library is hand-crafted by developer Paul Knopf to ensure appropriate memory management and pointer ownership semantics. He is pretty confident about the library, mentioning in his blog, "I'd bet you couldn't generate a segfault, even if you wanted to."

In Qml.Net, C# objects can be registered to be treated as QML components; you can then interoperate with them as you would with regular JavaScript objects. The registered C# objects serve as a portal through which the QML world can interact with your .NET objects. This has the added benefit of keeping your business and UI concerns cleanly separated, and there are no chatty PInvoke calls for rendering. It is a great match.

A pre-compiled portable installation of Qt and the native C wrapper is available for Windows, OSX, and Linux, so developers don't have to bother with C/C++. All you need to know is QML, C#, and JavaScript, and QML is fairly simple. QML can't really be classified as a language in the semantic sense; more appropriately, it can be considered a combination of JSON and JavaScript.

Qml.Net support and working

Qml.Net works with any .NET language, including the popular C# and functional languages like F#. Your libraries reference the pure .NET NuGet package, Qml.Net. The host process (Program.Main) references the native NuGet package for the OS you are on:

- Qml.Net.WindowsBinaries
- Qml.Net.OSXBinaries
- Qml.Net.LinuxBinaries

Paul currently only tests his own models – C# objects registered with the QML engine – which are specific to each control/page.

Since Microsoft's announcement of .NET Core, there hasn't been a clear story for cross-platform GUI development. Although Microsoft plans to support WPF in .NET Core 3.0, it will be limited to Windows machines. With community involvement and support, Qml.Net can be a potential game changer. You can head to the GitHub repository and also view some hosted examples to get a better idea.

Read next:
- Exciting New Features in C# 8.0
- .NET Core completes move to the new compiler – RyuJIT
- Microsoft Azure's new governance DApp: An enterprise blockchain without mining

Big data as a service (BDaaS) solutions: comparing IaaS, PaaS and SaaS

Guest Contributor
28 Aug 2018
8 min read
What is Big Data as a Service (BDaaS)?

Thanks to the increased adoption of cloud infrastructures, processing, storing, and analyzing huge amounts of data has never been easier. The big data revolution may have already happened, but it's Big Data as a Service, or BDaaS, that's making it a reality for many businesses and organizations. Essentially, BDaaS is any service that involves managing or running big data on the cloud.

The advantages of BDaaS

There are many advantages to using a BDaaS solution; it makes many aspects of managing a big data infrastructure yourself so much easier. One of the biggest advantages is that it makes managing large quantities of data possible for medium-sized businesses: doing it in-house is not only technically and physically challenging, it can also be expensive. With BDaaS solutions that run in the cloud, companies don't need to stump up cash up front, and operational expenses on hardware can be kept to a minimum, since infrastructure requirements are fixed at a monthly or annual cost.

However, it's not just about storage and cost. BDaaS solutions sometimes offer built-in solutions for artificial intelligence and analytics, which means you can accomplish some pretty impressive results without having a huge team of data analysts, scientists, and architects around you.

The different models of BDaaS

There are three different BDaaS models. These closely align with the three models of cloud infrastructure: IaaS, PaaS, and SaaS.

- Big Data Infrastructure as a Service (IaaS) – basic data services from a cloud service provider.
- Big Data Platform as a Service (PaaS) – offerings of an all-round big data stack like those provided by Amazon S3, EMR, or Redshift, excluding ETL and BI.
- Big Data Software as a Service (SaaS) – a complete big data stack within a single tool.

How does the Big Data IaaS model work?

A good example of the IaaS model is Amazon's AWS IaaS architecture, which combines S3 and EC2. Here, S3 acts as a data lake that can store infinite amounts of structured as well as unstructured data, while EC2 acts as a compute layer that can be used to implement a data service of your choice and connects to the S3 data.

For the data layer, you have the option of choosing from among:

- Hadoop – the Hadoop ecosystem can be run on an EC2 instance, giving you complete control
- NoSQL databases, such as MongoDB or Cassandra
- Relational databases, such as PostgreSQL or MySQL

For the compute layer, you can choose from among:

- Self-built ETL scripts that run on EC2 instances
- Commercial ETL tools that run on Amazon's infrastructure and use S3
- Open source processing tools that run on AWS instances, like Kafka

How does the Big Data PaaS model work?

A standard Hadoop cloud-based big data infrastructure on Amazon contains the following:

- Data ingestion – log file data from any data source
- Amazon S3 as the data storage layer
- Amazon EMR – a scalable set of instances that run Map/Reduce against the S3 data
- Amazon RDS – a hosted MySQL database that stores the results from Map/Reduce computations
- Analytics and visualization – using an in-house BI tool

A similar setup can be replicated using Microsoft's Azure HDInsight. Data ingestion can be made easier with Azure Data Factory's copy data tool, and Azure offers several storage options, like Data Lake storage and Blob storage, for storing results from the computations.

How does the Big Data SaaS model work?
A fully hosted big data stack that includes everything from data storage to data visualization contains the following:

- Data layer – data needs to be pulled into a basic SQL database; an automated data warehouse does this efficiently
- Integration layer – pulls the data from the SQL database into a flexible modeling layer
- Processing layer – prepares the data based on the custom business requirements and logic provided by the user
- Analytics and BI layer – fully featured BI abilities, including visualizations, dashboards, charts, etc.

Azure Data Warehouse and AWS Redshift are popular SaaS options that offer a complete data warehouse solution in the cloud. Their stacks integrate all four layers and are designed to be highly scalable. Google's BigQuery is another contender that's great for generating meaningful insights at an unmatched price-performance.

Choosing the right BDaaS provider

It sounds obvious, but choosing the right BDaaS provider is ultimately all about finding the solution that best suits your needs. There are a number of important factors to consider – such as workload, performance, and budget requirements – each of which will have varying degrees of importance for you. Here are three ways you might approach a BDaaS solution:

Core BDaaS

Core BDaaS uses a minimal platform like Hadoop with YARN and HDFS, plus other services like Hive. It has gained popularity among companies that use it for irregular workloads or as part of a larger infrastructure, and it might not be as performance-intensive as the other two categories. A prime example is Elastic MapReduce (EMR), provided by AWS, which integrates freely with NoSQL stores, S3 storage, DynamoDB, and similar services (a sketch of driving EMR programmatically follows after these three categories). Given its generic nature, EMR allows a company to combine it with other services, which can result in anything from simple data pipelines to a complete infrastructure.

Performance BDaaS

Performance BDaaS assists businesses that already employ a cluster-computing framework like Hadoop to further optimize their infrastructure as well as cluster performance. It is a good fit for companies that are rapidly expanding and do not wish to be burdened by having to build a data architecture and a SaaS layer. The benefit of outsourcing the infrastructure and platform is that companies can focus on specific processes that add value instead of concentrating on complicated big-data-related infrastructure. For instance, there are many third-party solutions built on top of the Amazon or Azure stack that let you outsource your infrastructure and platform requirements to them.

Feature BDaaS

If your business needs additional features that may not be within the scope of Hadoop, Feature BDaaS may be the way forward. Feature BDaaS focuses on productivity as well as abstraction; it is designed to enable users to be up and running with big data quickly and efficiently. Feature BDaaS combines both the PaaS and SaaS layers, including web/API interfaces and database adapters that offer a layer of abstraction from the underlying details. Businesses don't have to spend resources and manpower setting up cloud infrastructure; instead, they can rely on third-party vendors like Qubole and Altiscale that are designed to get it up and running on AWS, Azure, or the cloud vendor of choice quickly and efficiently.
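As promised above, here is a minimal sketch of what driving a Core-BDaaS-style EMR pipeline programmatically can look like, using boto3. This is our own illustration, not from the article; the cluster name, bucket, job script, roles, and instance settings are placeholders:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Launch a small transient cluster that reads a job script from S3,
# runs it with Spark, and terminates itself when the step finishes.
response = emr.run_job_flow(
    Name="nightly-aggregation",                  # placeholder name
    ReleaseLabel="emr-5.16.0",
    Instances={
        "MasterInstanceType": "m4.large",
        "SlaveInstanceType": "m4.large",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,    # transient cluster
    },
    Steps=[{
        "Name": "aggregate-logs",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://my-bucket/jobs/aggregate.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])  # track the cluster by its ID
```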
Additional tips for choosing a provider

When evaluating a BDaaS provider for your business, cost reduction and scalability are important factors. Here are a few tips that should help you choose the right provider:

- Low or zero startup costs – a number of BDaaS providers offer a free trial period, so, theoretically, you can start seeing results before you even commit a dollar.
- Scalability – growth in scale is in the very nature of a big data project. The solution should be easy and affordable to scale, especially in terms of storage and processing resources.
- Industry footprint – it is a good idea to choose a BDaaS provider that already has experience in your industry. This is doubly important if you are also using them for consultancy and project-planning requirements.
- Real-time analysis and feedback – the most successful big data projects today are those that can provide almost immediate analysis and feedback. This helps businesses take remedial action instantly instead of working off historical data.
- Managed or self-service – most BDaaS providers today offer a mix of managed and self-service models based on the company's needs. It is common to find a host of technical staff working in the background to provide the client with services as needed.

Conclusion

The value of big data is not in the data itself, but in the insights that can be drawn after processing it and running it through robust analytics, which can help to guide and define your decision making for the future. A quick tip with regards to using big data: keep it small at the initial stages. This ensures the data can be checked for accuracy and that the metrics derived from it are right. Once confirmed, you can go ahead with more complex and larger data projects.

About the author

Gilad David Maayan is a technology writer who has worked with over 150 technology companies including SAP, Oracle, Zend, CheckPoint, and Ixia. Gilad is a three-time winner of international technical communication awards, including the STC Trans-European Merit Award and the STC Silicon Valley Award of Excellence. Over the past 7 years, Gilad has headed Agile SEO, which performs strategic search marketing for leading technology brands. Together with his team, Gilad has done market research, developer relations, and content strategy in 39 technology markets, lending him a broad perspective on trends, approaches, and ecosystems across the tech industry.

Read next:
- Common big data design patterns
- Hortonworks partner with Google Cloud to enhance their Big Data strategy
- Top 5 programming languages for crunching Big Data effectively
- Getting to know different Big data Characteristics


Git-bug: A new distributed bug tracker embedded in git

Melisha Dsouza
20 Aug 2018
3 min read
git-bug is a distributed bug tracker embedded in git. Using git's internal storage ensures that no files are added to your project, and you can push your bugs to the same git remote that you are already using to collaborate with other people. The main idea behind implementing a distributed bug tracker in Git was to stop relying on a web service somewhere to deal with bugs, making browsing and editing bug reports offline painless.

While git-bug addresses a pressing need, note that the project is not yet ready for full-fledged use: it is currently a proof of concept, released just three days ago at version 0.2.0. Reddit is abuzz with views on the release, both for and against. (The original post embeds screenshots of Reddit comments. Source: reddit.com)

Now that you want to get your hands on git-bug, let's look at how to get started.

Installing git-bug

To install git-bug, all you need to do is execute the following command:

```
go get github.com/MichaelMure/git-bug
```

If it's not done already, add the golang binary directory to your PATH:

```
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
```

You can also use pre-compiled binaries by following three simple steps:

1. Head over to the release page and download the appropriate binary for your system.
2. Copy the binary anywhere in your PATH.
3. Rename the binary to git-bug (or git-bug.exe on Windows).

The only Linux distribution package so far is for Arch Linux (AUR).

CLI usage

Create a new bug (your favorite editor will open to write a title and a message):

```
git bug new
```

Push your new entry to a remote:

```
git bug push [<remote>]
```

Pull for updates:

```
git bug pull [<remote>]
```

List existing bugs:

```
git bug ls
```

Use commands like show, comment, open, or close to display and modify bugs. For more details about each command, you can run git bug <command> --help or scan the command's documentation.

Features of git-bug

#1 Interactive user interface for the terminal

Use the git bug termui command to browse and edit bugs. (The original post includes a short video demonstrating how easy and interactive it is.)

#2 A rich web UI

Take a look at the web UI obtained with git bug webui. (Screenshots are embedded in the original post. Source: github.com) This web UI is entirely packed inside the same Go binary and serves static content through a localhost HTTP server. It connects to the backend through a GraphQL API; take a look at the schema for more clarity.

Planned additional features include:

- media embedding
- import/export of GitHub issues
- an extendable data model to support arbitrary bug trackers
- an inflatable raptor

Every new release is expected to come with exciting new features, but it is also coupled with a few minor constraints; you can check out some of the minor inconveniences listed on the GitHub page. We can't wait for the release to be in fully working condition. Before that, if you need any additional information on how git-bug works, head over to the GitHub page.

Read next:
- Snapchat source code leaked and posted to GitHub
- GitHub open sources its GitHub Load Balancer (GLB) Director
- Homebrew's Github repo got hacked in 30 mins. How can open source projects fight supply chain attacks?


Tesseract version 4.0 releases with new LSTM-based engine and an updated build system

Natasha Mathur
30 Oct 2018
2 min read
Google released version 4.0 of its OCR engine, Tesseract, yesterday. Tesseract 4.0 comes with a new neural net (LSTM) based OCR engine, an updated build system, other improvements, and bug fixes.

Tesseract is an OCR engine that offers support for Unicode (a specification that supports all character sets) and comes with the ability to recognize more than 100 languages out of the box. It can be trained to recognize other languages, and is used for text detection on mobile devices, in videos, and in Gmail image spam detection.

Let's have a look at what's new in Tesseract 4.0.

New neural net (LSTM) based OCR engine

The new OCR engine uses a neural network system based on LSTMs, with major accuracy gains. It ships with new training tools for the LSTM OCR engine; you can train a new model from scratch or fine-tune an existing model. Trained data, including LSTM models and 123 languages, has been added to the new OCR engine. Optional accelerated code paths have been added for the LSTM recognizer. Moreover, a new parameter, lstm_choice_mode, allows including alternative symbol choices in the hOCR output.

Updated build system

Tesseract 4.0 uses semantic versioning and requires Leptonica 1.74.0 or higher. If you want to build Tesseract from source, a compiler with strong C++11 support is necessary. Unit tests have been added to the main repo, and Tesseract's source tree has been reorganized. A new option lets you compile Tesseract without the code of the legacy OCR engine.

Bug fixes

- Issues in training data rendering have been fixed.
- Damage caused to binary images when processing PDFs has been fixed.
- Issues in the OpenCL code have been fixed. OpenCL now works fine for the legacy Tesseract OCR engine, but performance hasn't improved yet.

Other improvements

- Multi-page TIFF handling is improved.
- Improvements have been made to PDF rendering.
- Version information and improved help texts have been added to the training tools.
- tessedit_pageseg_mode 1 has been removed from the hocr, pdf, and tsv config files; users now have to explicitly use --psm 1 if that is desired.

For more information, check out the official release notes.

Read next:
- Tesla v9 to incorporate neural networks for autopilot
- Neural Network Intelligence: Microsoft's open source automated machine learning toolkit
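If you want to try the new LSTM engine from Python, a common route is the pytesseract wrapper. This is our own illustrative example, assuming Tesseract 4.0 and its language data are installed:

```python
import pytesseract
from PIL import Image

# --oem 1 selects the new LSTM-based OCR engine;
# --psm 1 requests automatic page segmentation (see release notes
# about tessedit_pageseg_mode being removed from config files).
text = pytesseract.image_to_string(
    Image.open("scan.png"),   # placeholder input image
    lang="eng",
    config="--oem 1 --psm 1",
)
print(text)
```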

Git 2.23 released with two new commands ‘git switch’ and ‘git restore’, a new tutorial, and much more!

Amrata Joshi
19 Aug 2019
4 min read
Last week, the team behind Git released Git 2.23, which comes with experimental commands, backward-compatibility updates, and much more. This release received contributions from over 77 contributors, of which 26 were new.

What's new in Git 2.23?

Experimental commands

This release comes with a new pair of experimental commands, git switch and git restore, that provide a better interface than git checkout. "Two new commands "git switch" and "git restore" are introduced to split "checking out a branch to work on advancing its history" and "checking out paths out of the index and/or a tree-ish to work on advancing the current history" out of the single "git checkout" command," the official mail thread reads.

git checkout can be used to change branches with git checkout <branch>; if the user doesn't want to switch branches, git checkout can also be used to change individual files. The new commands aim to separate the responsibilities of git checkout into two narrower categories: operations that change branches and operations that change files. (A short sketch of the new commands follows at the end of this article.)

Backward compatibility

The --base option of format-patch is now compatible with git patch-id --stable.

Git fast-export/import pair

The git fast-export/import pair can now handle commits with log messages in encodings other than UTF-8.

git clone --recurse-submodules

git clone --recurse-submodules has learned to set up the submodules to ignore the commit object names recorded in the superproject gitlink.

git diff/grep

Patterns used by git diff/grep for extracting function names and word boundaries for Rust have been added.

git fetch and git pull

The commands git fetch and git pull now report when a fetch results in non-fast-forward updates, letting the user notice an unusual situation.

git status

With this release, the extra blank lines in git status output have been reduced.

Developer support

This release comes with developer support for emulating unsatisfied prerequisites in tests, ensuring that the remainder of the tests still succeeds when tests with prerequisites are skipped.

A new tutorial for git-core developers

This release comes with a new tutorial that targets aspiring git-core developers. The tutorial demonstrates the end-to-end workflow of creating a change to the Git tree, sending it for review, and making changes based on comments.

Bug fixes in Git 2.23

In earlier versions, git worktree add used to fail when another worktree connected to the same repository was corrupt; this has been corrected in this release. An issue with file descriptor handling has been fixed. Parameter validation has been updated. The code for parsing scaled numbers out of configuration files has been made more robust and easier to follow.

Some users seem happy about the changes. A user commented on Hacker News, "It's nice to hear that there appears to be progress being made in making git's tooling nicer and more consistent. Git's model itself is pretty simple, but the command line tools for working with it aren't and I feel that this fuels most of the "Git is hard" complaints."

Others are still skeptical about the new commands. Another user commented, "On the one hand I'm happy on the new "switch" and "restore" commands. On the other hand, I wonder if they truly add any value other than the semantic distinction of functions otherwise present in checkout."

To know more about this news in detail, read the official blog post on GitHub.

GitHub has blocked an Iranian software developer's account
GitHub services experienced a 41-minute disruption yesterday
iPhone can be hacked via a legit-looking malicious lightning USB cable worth $200, DefCon 27 demo shows
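As referenced above, here is a short sketch of how the new experimental commands map onto familiar git checkout invocations. Branch and file names are placeholders, and since both commands are marked experimental, their behavior may still change:

git switch topic                 # switch to an existing branch (previously: git checkout topic)
git switch -c topic              # create a new branch and switch to it (previously: git checkout -b topic)
git restore README.md            # discard unstaged changes to a file (previously: git checkout -- README.md)
git restore --staged README.md   # unstage a file, keeping working-tree changes (previously: git reset HEAD README.md)

The split makes each command's intent explicit: switch only moves between branches, while restore only touches file contents.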

CraftAssist: An open-source framework to enable interactive bots in Minecraft by Facebook researchers

Vincy Davis
19 Jul 2019
5 min read
Two days ago, researchers from Facebook AI Research published a paper titled "CraftAssist: A Framework for Dialogue-enabled Interactive Agents". The authors of this research are Facebook AI research engineers Jonathan Gray and Kavya Srinet, research scientists C. Lawrence Zitnick and Arthur Szlam, and Yacine Jernite, Haonan Yu, Zhuoyuan Chen, Demi Guo, and Siddharth Goyal.

The paper describes the implementation of an assistant bot called CraftAssist, which appears and interacts like another player in the open sandbox game of Minecraft. The framework enables players to interact with the bot via in-game chat through various implemented tools and platforms, and players can record these interactions through the in-game chat. The main aim of the bot is to be a useful and entertaining assistant for the tasks listed and evaluated by the human players.

(Image source: CraftAssist paper)

To motivate the wider AI research community to use the CraftAssist platform in their own experiments, the Facebook researchers have open-sourced the framework, the baseline assistant, the data, and the models. The released data includes the functions that were used to build the 2,586 houses in Minecraft, the labeling data for the walls, roofs, etc. of the houses, human rephrasings of fixed commands, and the conversion of natural language commands to bot-interpretable logical forms. The technology that allows the recording of human and bot interaction on a Minecraft server has also been released, so researchers will be able to independently collect data. (A hedged sketch of fetching the released code follows at the end of this article.)

Why is the Minecraft protocol used?

Minecraft is a popular multiplayer volumetric-pixel (voxel) 3D game based on building and crafting, which allows multiplayer servers and players to collaborate and build, survive, or compete with each other. It operates through a client and server architecture. The CraftAssist bot acts as a client and communicates with the Minecraft server using the Minecraft network protocol. The Minecraft protocol allows the bot to connect to any Minecraft server without the need for installing server-side mods, which lets the bot easily join a multiplayer server along with human players or other bots. It also lets the bot join an alternative server that implements the server-side component of the Minecraft network protocol. The CraftAssist bot uses the third-party open-source Cuberite server, a fast and extensible game server for Minecraft.

Read More: Introducing Minecraft Earth, Minecraft's AR-based game for Android and iOS users

How does CraftAssist function?

The block diagram below demonstrates how the bot handles incoming in-game chats and reaches the desired target.

(Image source: CraftAssist paper)

Firstly, the incoming text is transformed into a logical form called the action dictionary. The action dictionary is then translated by a dialogue object, which interacts with the memory module of the bot. This produces an action or a chat response to the user. The bot's memory uses a relational database that is structured to recognize the relations between stored items of information. The major advantage of this type of memory is that the semantic parser's output can easily be converted into fully specified tasks.

The bot responds to higher-level actions, called Tasks. Tasks are interruptible processes that follow a clear objective through step-by-step actions. They can adjust to long pauses between steps and can also push other Tasks onto a stack, the way functions can call other functions in a standard programming language. Move, Build, and Destroy are a few of the many basic Tasks assigned to the bot.

The Dialogue Manager checks for illegal or profane words, then queries the semantic parser. The semantic parser takes the chat as input and produces an action dictionary. The action dictionary indicates that the text is a command given by a human and then specifies the high-level action to be performed by the bot. Once a task is created and pushed onto the Task stack, it is the responsibility of a command Task such as Move to compare the bot's current location to the target location; this makes the bot undertake a sequence of low-level step movements to reach the target. The core of the bot's understanding of natural language depends on a neural semantic parser called the Text-to-Action-Dictionary (TTAD) model. This model receives the incoming command/chat and classifies it into an action dictionary, which is then interpreted by the Dialogue Object.

The CraftAssist framework thus enables bots in Minecraft to interact and play with players by understanding human interactions, using the implemented tools. The researchers hope that, since the dataset of CraftAssist is now open-sourced, more developers will be empowered to contribute to this framework by assisting or training the bots, which might lead to the bots learning from human dialogue interactions in the future.

Developers have found the CraftAssist framework interesting.

https://twitter.com/zehavoc/status/1151944917859688448

A user on Hacker News comments, "Wow, this is some amazing stuff! Congratulations!"

Check out the paper CraftAssist: A Framework for Dialogue-enabled Interactive Agents for more details.

Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects
What to expect in Unreal Engine 4.23?
A study confirms that pre-bunk game reduces susceptibility to disinformation and increases resistance to fake news
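For readers who want to explore the open-sourced framework, data, and models mentioned above, a minimal sketch for fetching the code is below. Note that the repository path is our assumption based on Facebook AI Research's usual GitHub organization; the paper contains the canonical link:

git clone https://github.com/facebookresearch/craftassist   # repository path assumed; see the paper for the canonical URL
cd craftassist

From there, consult the project's README for build and run instructions, including how to set up the third-party Cuberite server the bot connects to.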