
Tech News

3709 Articles

low.js, a Node.js port for embedded systems

Prasad Ramesh
17 Sep 2018
3 min read
Node.js is a popular, widely used backend for web development despite some of its flaws. For embedded systems, there is now low.js, a Node.js port with far lower system requirements. With low.js you can program JavaScript applications using the full Node.js API and run them on regular computers as well as on embedded devices based on the $3 ESP32 microcontroller. The JavaScript V8 engine at the center of Node.js is replaced with Duktape, an embeddable ECMAScript E5/E5.1 engine with a compact footprint. Some parts of the Node.js system library are rewritten for a more compact footprint and to use more native code. low.js currently uses under 2 MB of disk space, with a minimum requirement of around 1.5 MB of RAM for the ESP32 version.

low.js features

low.js is good for hobbyists and people interested in electronics. It allows using Node.js scripts on smaller devices such as routers based on Linux or uClinux without using much of their resources, which is great for scripting, especially when these devices communicate over the internet.

The neonious one is a microcontroller board based on low.js for ESP32, which can be programmed in JavaScript ES 6 with the Node API. It includes Wifi, Ethernet, additional flash and an extra I/O controller. The lower system requirements of low.js allow you to run it comfortably on the ESP32-WROVER module. The ESP32-WROVER costs under $3 for large orders and is a very cost-effective solution for IoT devices requiring a microcontroller and Wifi. low.js for ESP32 also adds the benefit of fast software development and maintenance: specialized software developers are not needed for the microcontroller software.

How to install?

The community edition of low.js can be run on POSIX-based systems including Linux, uClinux, and Mac OS X. It is available on GitHub, and there is currently no ./configure step, so you might need some programming skills and knowledge to get low.js up and running on your system. The commands are as follows:

git clone https://github.com/neonious/lowjs
cd lowjs
git submodule update --init --recursive
make

low.js for ESP32 is the same as the community edition, but adapted for the ESP32 microcontroller. This version is not open source and comes pre-flashed on the neonious one. For more information and documentation visit the low.js website.

Deno, an attempt to fix Node.js flaws, is rewritten in Rust
Node.js announces security updates for all their active release lines for August 2018
Deploying Node.js apps on Google App Engine is now easy


Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+, available now at $25

Prasad Ramesh
16 Nov 2018
2 min read
Yesterday, the Raspberry Pi Foundation launched the Raspberry Pi 3 Model A+ board, a smaller and cheaper version of the Raspberry Pi 3B+. In 2014, the first-gen Raspberry Pi 1 Model B+ was followed by a lighter Model A+ with half the RAM and fewer ports, which was able to fit the Hardware Attached on Top (HAT) form factor. Until now there were no such small form factor boards for the Raspberry Pi 2 and 3.

Size is cut down, but not (most of) the features

The Raspberry Pi 3 Model A+ retains most of the features and enhancements of the bigger board in this series. This includes a 1.4GHz 64-bit quad-core ARM Cortex-A53 CPU, 512MB LPDDR2 SDRAM, and dual-band 802.11ac wireless LAN and Bluetooth 4.2/BLE. The enhancements retained are improved USB mass-storage booting and improved thermal management. The entire Raspberry Pi 3 Model A+ board is an FCC-certified radio module, which will significantly reduce the cost of conformance testing for Raspberry Pi–based products. What is shrunk is the price, now down to $25, and the board size of 65x56mm, the size of a HAT.

Source: Raspberry Pi website

Raspberry Pi 3 Model A+ will likely be the last product for now

In March this year, the foundation said that the 3+ platform is the final iteration of the "classic" Raspberry Pi boards. The next steps/released products will be out of necessity and not an evolution, because for an evolution to happen they will need new core silicon, on a new process node, with new memory technology. So this new board, the 3A+, is about closing things out, meaning we won't see any more products in this line in the foreseeable future. This board does answer one of their most frequent customer requests for 'missing products', and clears their pipeline to focus on building the next generation of Raspberry Pi boards. For more details visit the Raspberry Pi website.

Introducing Raspberry Pi TV HAT, a new addon that lets you stream live TV
Tensorflow 1.9 now officially supports Raspberry Pi bringing machine learning to DIY enthusiasts
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?


Introducing Microsoft’s AirSim, an open-source simulator for autonomous vehicles built on Unreal Engine

Bhagyashree R
19 Sep 2019
4 min read
Back in 2017, the Microsoft Research team developed and open-sourced Aerial Informatics and Robotics Simulation (AirSim). On Monday, the team shared how AirSim can be used to solve the current challenges in the development of autonomous systems.

Microsoft AirSim and its features

Microsoft AirSim is an open-source, cross-platform simulation platform for autonomous systems including autonomous cars, wheeled robotics, aerial drones, and even static IoT devices. It works as a plugin for Epic Games' Unreal Engine, and there is also an experimental release for the Unity game engine. Here is an example of drone simulation in AirSim: https://www.youtube.com/watch?v=-WfTr1-OBGQ&feature=youtu.be

AirSim was built to address two main problems developers face during the development of autonomous systems: first, the requirement of large datasets for training and testing the systems, and second, the ability to debug in a simulator. With AirSim, the team aims to equip developers with a platform that offers varied training experiences so that autonomous systems can be exposed to different scenarios before they are deployed in the real world. "Our goal is to develop AirSim as a platform for AI research to experiment with deep learning, computer vision and reinforcement learning algorithms for autonomous vehicles. For this purpose, AirSim also exposes APIs to retrieve data and control vehicles in a platform-independent way," the team writes. (A small illustrative sketch of this client API appears at the end of this article.)

AirSim provides physically and visually realistic simulations by supporting hardware-in-the-loop simulation with popular flight controllers such as PX4, an open-source autopilot system. It can be easily extended to accommodate new types of autonomous vehicles, hardware platforms, and software protocols. Its extensible architecture also allows developers to quickly add custom autonomous system models and new sensors to the simulator.

AirSim for tackling common challenges in autonomous systems development

In April, the Microsoft Research team collaborated with Carnegie Mellon University and Oregon State University, collectively called Team Explorer, to take on the DARPA Subterranean (SubT) Challenge. The challenge was to build robots that can autonomously map, navigate, and search underground environments during time-sensitive combat operations or disaster response scenarios. On Monday, Microsoft's Senior Research Manager, Ashish Kapoor, shared how they used AirSim to solve this challenge.

Team Explorer and Microsoft used AirSim to create an "intricate maze" of man-made tunnels in a virtual world. To create this maze the team used reference material from real-world mines to modularly generate a network of interconnected tunnels. This was a high-definition simulation of man-made tunnels that also included robotic vehicles and a suite of sensors. AirSim also provided a rich platform that Team Explorer could use to test their methods, along with generating training experiences for creating various decision-making components for autonomous agents. Microsoft believes that AirSim can also help accelerate the creation of a real dataset for underground environments. "Microsoft's ability to create near-realistic autonomy pipelines in AirSim means that we can rapidly generate labeled training data for a subterranean environment," Kapoor wrote.

Kapoor also talked about another collaboration, with Air Shepherd and USC, to help counter wildlife poaching using AirSim. In this collaboration, they developed unmanned aerial vehicles (UAVs) equipped with thermal infrared cameras that can fly through national parks to search for poachers and animals. AirSim was used to create a simulation of this use case, in which virtual UAVs flew over virtual environments at an altitude of 200 to 400 feet above ground level. "The simulation took on the difficult task of detecting poachers and wildlife, both during the day and at night, and ultimately ended up increasing the precision in detection through imaging by 35.2%," the post reads.

These were some of the recent use cases where AirSim was used. To explore more and to contribute, you can check out its GitHub repository.

Other news in Data

4 important business intelligence considerations for the rest of 2019
How artificial intelligence and machine learning can help us tackle the climate change emergency
France and Germany reaffirm blocking Facebook's Libra cryptocurrency
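As promised above, here is a minimal, illustrative sketch of the platform-independent client API in Python. It is not taken from Microsoft's post; it assumes the open-source airsim Python package is installed and that a simulator instance with a multirotor vehicle is already running.

import airsim

# Connect to a locally running AirSim instance over RPC.
client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

# Fly a short, scripted trajectory.
client.takeoffAsync().join()
client.moveToPositionAsync(-10, 10, -10, 5).join()  # x, y, z (NED frame), velocity in m/s

# Retrieve a camera image -- the kind of labeled data generation described in the post.
responses = client.simGetImages([airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)])
print("received image bytes:", len(responses[0].image_data_uint8))

# Land and release control.
client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)

A data-collection loop like the SubT example would simply repeat calls such as simGetImages while the vehicle follows scripted trajectories.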


Firewall Ports You Need to Open for Availability Groups from Blog Posts - SQLServerCentral

Anonymous
31 Dec 2020
6 min read
Something that never ceases to amaze me is the frequent request for help on figuring out what ports are needed for Availability Groups in SQL Server to function properly. These requests come for a multitude of reasons, from a new AG implementation to a migration of an existing AG to a different VLAN. Whenever these requests come in, it is a good thing in my opinion. Why? Well, that tells me that the network team is trying to instantiate a more secure operating environment by having segregated VLANs and firewalls between the VLANs. This is always preferable to having firewall rules of ANY/ANY (I correlate that kind of firewall rule to granting "CONTROL" to the public server role in SQL Server).

So What Ports are Needed Anyway?

If you are of the mindset that a firewall rule of ANY/ANY is a good thing, or if your Availability Group is entirely within the same VLAN, then you may not need to read any further. Unless, of course, you have a software firewall (such as Windows Defender / Firewall) running on your servers. If you are in the category where you do need to figure out which ports are necessary, then this article will provide you with a very good starting point.

Windows Server Clustering

TCP/UDP   Port          Description
TCP/UDP   53            User & Computer Authentication [DNS]
TCP/UDP   88            User & Computer Authentication [Kerberos]
UDP       123           Windows Time [NTP]
TCP       135           Cluster DCOM Traffic [RPC, EPM]
UDP       137           User & Computer Authentication [NetLogon, NetBIOS, Cluster Admin, Fileshare Witness]
UDP       138           DFS, Group Policy [DFSN, NetLogon, NetBIOS Datagram Service, Fileshare Witness]
TCP       139           DFS, Group Policy [DFSN, NetLogon, NetBIOS Datagram Service, Fileshare Witness]
UDP       161           SNMP
TCP/UDP   162           SNMP Traps
TCP/UDP   389           User & Computer Authentication [LDAP]
TCP/UDP   445           User & Computer Authentication [SMB, SMB2, CIFS, Fileshare Witness]
TCP/UDP   464           User & Computer Authentication [Kerberos Change/Set Password]
TCP       636           User & Computer Authentication [LDAP SSL]
TCP       3268          Microsoft Global Catalog
TCP       3269          Microsoft Global Catalog [SSL]
TCP/UDP   3343          Cluster Network Communication
TCP       5985          WinRM 2.0 [Remote PowerShell]
TCP       5986          WinRM 2.0 HTTPS [Remote PowerShell SECURE]
TCP/UDP   49152-65535   Dynamic TCP/UDP, RPC and DCOM [Defined by Company/Policy {CAN BE CHANGED}] *

SQL Server

TCP/UDP   Port          Description
TCP       1433          SQL Server/Availability Group Listener [Default Port {CAN BE CHANGED}]
TCP/UDP   1434          SQL Server Browser
UDP       2382          SQL Server Analysis Services Browser
TCP       2383          SQL Server Analysis Services Listener
TCP       5022          SQL Server DBM/AG Endpoint [Default Port {CAN BE CHANGED}]
TCP/UDP   49152-65535   Dynamic TCP/UDP [Defined by Company/Policy {CAN BE CHANGED}] *

* Randomly allocated UDP port number between 49152 and 65535

So I have a List of Ports, what now?

Knowing is half the power, and with great knowledge comes great responsibility – or something like that. In reality, now that we know what is needed, the next step is to go out and validate that the ports are open and working. One of the easier ways to do this is with PowerShell.
$RemoteServers = "Server1","Server2"
$InbndServer = "HomeServer"
$TCPPorts = "53", "88", "135", "139", "162", "389", "445", "464", "636", "3268", "3269", "3343", "5985", "5986", "49152", "65535", "1433", "1434", "2383", "5022"
$UDPPorts = "53", "88", "123", "137", "138", "161", "162", "389", "445", "464", "3343", "49152", "65535", "1434", "2382"

$TCPResults = @()
$TCPResults = Invoke-Command $RemoteServers {param($InbndServer,$TCPPorts)
    $Object = New-Object PSCustomObject
    $Object | Add-Member -MemberType NoteProperty -Name "ServerName" -Value $env:COMPUTERNAME
    $Object | Add-Member -MemberType NoteProperty -Name "Destination" -Value $InbndServer
    Foreach ($P in $TCPPorts){
        $PortCheck = (TNC -Port $p -ComputerName $InbndServer).TcpTestSucceeded
        If($PortCheck -notmatch "True|False"){$PortCheck = "ERROR"}
        $Object | Add-Member Noteproperty "$("Port " + "$p")" -Value "$($PortCheck)"
    }
    $Object
} -ArgumentList $InbndServer,$TCPPorts | select * -ExcludeProperty runspaceid, pscomputername

$TCPResults | Out-GridView -Title "AG and WFC TCP Port Test Results"
$TCPResults | Format-Table * #-AutoSize

$UDPResults = Invoke-Command $RemoteServers {param($InbndServer,$UDPPorts)
    $test = New-Object System.Net.Sockets.UdpClient;
    $Object = New-Object PSCustomObject
    $Object | Add-Member -MemberType NoteProperty -Name "ServerName" -Value $env:COMPUTERNAME
    $Object | Add-Member -MemberType NoteProperty -Name "Destination" -Value $InbndServer
    Foreach ($P in $UDPPorts){
        Try {
            $test.Connect($InbndServer, $P);
            $PortCheck = "TRUE";
            $Object | Add-Member Noteproperty "$("Port " + "$p")" -Value "$($PortCheck)"
        }
        Catch {
            $PortCheck = "ERROR";
            $Object | Add-Member Noteproperty "$("Port " + "$p")" -Value "$($PortCheck)"
        }
    }
    $Object
} -ArgumentList $InbndServer,$UDPPorts | select * -ExcludeProperty runspaceid, pscomputername

$UDPResults | Out-GridView -Title "AG and WFC UDP Port Test Results"
$UDPResults | Format-Table * #-AutoSize

This script will test all of the related TCP and UDP ports that are required to ensure your Windows Failover Cluster and SQL Server Availability Group works flawlessly. If you execute the script, you will see results similar to the following.

Data Driven Results

In the preceding image, I have combined each of the Gridview output windows into a single screenshot. Highlighted in Red is the result set for the TCP tests, and in Blue is the window for the test results for the UDP ports. With this script, I can take definitive results all in one screen shot and share them with the network admin to try and resolve any port deficiencies. This is just a small data driven tool that can help ensure quicker resolution when trying to ensure the appropriate ports are open between servers. A quicker resolution in opening the appropriate ports means a quicker resolution to the project, and all that much quicker you can move on to other tasks to show more value!

Put a bow on it

This article has demonstrated a meaningful and efficient method (along with the valuable documentation) to test and validate the necessary firewall ports for Availability Groups (AG) and Windows Failover Clustering. With the script provided in this article, you can provide quick and value added service to your project along with providing valuable documentation of what is truly needed to ensure proper AG functionality. Interested in learning about some additional deep technical information? Check out these articles! Here is a blast from the past that is interesting and somewhat related to SQL Server ports. Check it out here.
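As a closing aside that is not part of the original post: if PowerShell remoting is unavailable, the TCP half of this check can be approximated from any machine with a short Python script using only the standard library. The host name and port list below are placeholders, and UDP ports cannot be validated this way because UDP is connectionless.

import socket

target = "HomeServer"                      # placeholder: the AG replica you are testing against
tcp_ports = [135, 445, 1433, 5022, 5985]   # placeholder subset of the TCP ports listed above

for port in tcp_ports:
    try:
        # create_connection performs a full TCP handshake, so success means the
        # port is reachable through any firewalls between this host and the target.
        with socket.create_connection((target, port), timeout=3):
            print(f"TCP {port}: open")
    except OSError:
        print(f"TCP {port}: blocked, closed, or filtered")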
This is the sixth article in the 2020 "12 Days of Christmas" series. For the full list of articles, please visit this page. The post Firewall Ports You Need to Open for Availability Groups first appeared on SQL RNNR.

Related Posts:
• Here is an Easy Fix for SQL Service Startup Issues… (December 28, 2020)
• Connect To SQL Server - Back to Basics (March 27, 2019)
• SQL Server Extended Availability Groups (April 1, 2018)
• Single User Mode - Back to Basics (May 31, 2018)
• Lost that SQL Server Access? (May 30, 2018)

The post Firewall Ports You Need to Open for Availability Groups appeared first on SQLServerCentral.


Your Quick Introduction to Extended Events in Analysis Services from Blog Posts - SQLServerCentral

Anonymous
01 Jan 2021
9 min read
The Extended Events (XEvents) feature in SQL Server is a really powerful tool and it is one of my favorites. The tool is so powerful and flexible, it can even be used in SQL Server Analysis Services (SSAS). Furthermore, it is such a cool tool, there is an entire site dedicated to XEvents. Sadly, despite the flexibility and power that comes with XEvents, there isn't terribly much information about what it can do with SSAS. This article intends to help shed some light on XEvents within SSAS from an internals and introductory point of view – with the hope of leading to more in-depth articles on how to use XEvents with SSAS.

Introducing your Heavy Weight Champion of the SQLverse – XEvents

With all of the power, might, strength and flexibility of XEvents, it is used for practically nothing in the realm of SSAS. Much of that is due to three factors: 1) lack of a GUI, 2) addiction to Profiler, and 3) inadequate information about XEvents in SSAS. This last reason can be coupled with a sub-reason of "nobody is pushing XEvents in SSAS". For me, these are all just excuses to remain attached to a bad habit. While it is true that, just like in SQL Server, earlier versions of SSAS did not have a GUI for XEvents, that excuse is no longer valid. As for the inadequate information about the feature, I am hopeful that we can treat that excuse starting with this article. In regard to the Profiler addiction, never fear: there is a GUI, and the Profiler events are accessible via that GUI just the same as the new XEvent events are. How do we know this? Well, the GUI tells us just as much, as shown here.

In the preceding image, I have two sections highlighted in red. The first of note is evidence that this is the GUI for SSAS. Note that the connection box states "Group of Olap servers." The second area of note is the highlight demonstrating the two types of categories in XEvents for SSAS. These two categories, as you can see, are "profiler" and "purexevent" (not to be confused with "Purex® event"). In short, yes Virginia, there is an XEvent GUI, and that GUI incorporates your favorite Profiler events as well.

Let's See the Nuts and Bolts

This article is not about introducing the GUI for XEvents in SSAS. I will get to that in a future article. This article is to introduce you to the stuff behind the scenes. In other words, we want to look at the metadata that helps govern the XEvents feature within the sphere of SSAS. In order to, in my opinion, efficiently explore the underpinnings of XEvents in SSAS, we first need to set up a linked server to make querying the metadata easier.

EXEC master.dbo.sp_addlinkedserver
    @server = N'SSASDIXNEUFLATIN1' --whatever LinkedServer name you desire
    , @srvproduct=N'MSOLAP'
    , @provider=N'MSOLAP'
    , @datasrc=N'SSASServerSSASInstance' --change your data source to an appropriate SSAS instance
    , @catalog=N'DemoDays' --change your default database
go

EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname=N'SSASDIXNEUFLATIN1'
    , @useself=N'False'
    , @locallogin=NULL
    , @rmtuser=NULL
    , @rmtpassword=NULL
GO

Once the linked server is created, you are primed and ready to start exploring SSAS and the XEvent feature metadata. The first thing to do is familiarize yourself with the system views that drive XEvents. You can do this with the following query.
SELECT lq.*
FROM OPENQUERY(SSASDIXNEUFLATIN1, 'SELECT * FROM $system.dbschema_tables') as lq
WHERE CONVERT(VARCHAR(100),lq.TABLE_NAME) LIKE '%XEVENT%'
    OR CONVERT(VARCHAR(100),lq.TABLE_NAME) LIKE '%TRACE%'
ORDER BY CONVERT(VARCHAR(100),lq.TABLE_NAME);

When the preceding query is executed, you will see results similar to the following. In this image you will note that I have two sections highlighted. The first section, in red, is the group of views related to the trace/Profiler functionality. The second section, in blue, is the group of views related to the XEvents feature in SSAS. Unfortunately, this does demonstrate that XEvents in SSAS is a bit less mature than what one may expect, and definitely shows that it is less mature in SSAS than it is in the SQL engine. That shortcoming aside, we will use these views to explore further into the world of XEvents in SSAS.

Exploring Further

Knowing what the group of tables looks like, we have a fair idea of where we need to look next in order to become more familiar with XEvents in SSAS. The tables I would primarily focus on (at least for this article) are: DISCOVER_TRACE_EVENT_CATEGORIES, DISCOVER_XEVENT_OBJECTS, and DISCOVER_XEVENT_PACKAGES. Granted, I will only be using the DISCOVER_XEVENT_PACKAGES view very minimally. From here is where things get to be a little more tricky. I will take advantage of temp tables and some more OPENQUERY trickery to dump the data in order to be able to relate it and use it in an easily consumable format. Before getting into the queries I will use, first a description of the objects I am using:

DISCOVER_TRACE_EVENT_CATEGORIES is stored in XML format and is basically a definition document of the Profiler-style events. In order to consume it, the XML needs to be parsed and formatted in a better format.

DISCOVER_XEVENT_PACKAGES is the object that lets us know what area of SSAS the event is related to and is a very basic attempt at grouping some of the events into common domains.

DISCOVER_XEVENT_OBJECTS is where the majority of the action resides for Extended Events. This object defines the different object types (actions, targets, maps, messages, and events – more on that in a separate article).

Script Fun

Now for the fun in the article!
IF OBJECT_ID('tempdb..#SSASXE') IS NOT NULL
BEGIN
    DROP TABLE #SSASXE;
END;

IF OBJECT_ID('tempdb..#SSASTrace') IS NOT NULL
BEGIN
    DROP TABLE #SSASTrace;
END;

SELECT CONVERT(VARCHAR(MAX), xo.Name) AS EventName
    , xo.description AS EventDescription
    , CASE
        WHEN xp.description LIKE 'SQL%' THEN 'SSAS XEvent'
        WHEN xp.description LIKE 'Ext%' THEN 'DLL XEvents'
        ELSE xp.name
      END AS PackageName
    , xp.description AS CategoryDescription --very generic due to it being the package description
    , NULL AS CategoryType
    , 'XE Category Unknown' AS EventCategory
    , 'PureXEvent' AS EventSource
    , ROW_NUMBER() OVER (ORDER BY CONVERT(VARCHAR(MAX), xo.name)) + 126 AS EventID
INTO #SSASXE
FROM
    ( SELECT * FROM OPENQUERY (SSASDIXNEUFLATIN1, 'select * From $system.Discover_Xevent_Objects') ) xo
    INNER JOIN
    ( SELECT * FROM OPENQUERY (SSASDIXNEUFLATIN1, 'select * FROM $system.DISCOVER_XEVENT_PACKAGES') ) xp
        ON xo.package_id = xp.id
WHERE CONVERT(VARCHAR(MAX), xo.object_type) = 'event'
    AND xp.ID <> 'AE103B7F-8DA0-4C3B-AC64-589E79D4DD0A'
ORDER BY CONVERT(VARCHAR(MAX), xo.[name]);

SELECT ec.x.value('(./NAME)[1]', 'VARCHAR(MAX)') AS EventCategory
    , ec.x.value('(./DESCRIPTION)[1]', 'VARCHAR(MAX)') AS CategoryDescription
    , REPLACE(d.x.value('(./NAME)[1]', 'VARCHAR(MAX)'), ' ', '') AS EventName
    , d.x.value('(./ID)[1]', 'INT') AS EventID
    , d.x.value('(./DESCRIPTION)[1]', 'VARCHAR(MAX)') AS EventDescription
    , CASE ec.x.value('(./TYPE)[1]', 'INT')
        WHEN 0 THEN 'Normal'
        WHEN 1 THEN 'Connection'
        WHEN 2 THEN 'Error'
      END AS CategoryType
    , 'Profiler' AS EventSource
INTO #SSASTrace
FROM
    ( SELECT CONVERT(XML, lq.[Data])
      FROM OPENQUERY (SSASDIXNEUFLATIN1, 'Select * from $system.Discover_trace_event_categories') lq
    ) AS evts(event_data)
    CROSS APPLY event_data.nodes('/EVENTCATEGORY/EVENTLIST/EVENT') AS d(x)
    CROSS APPLY event_data.nodes('/EVENTCATEGORY') AS ec(x)
ORDER BY EventID;

SELECT ISNULL(trace.EventCategory, xe.EventCategory) AS EventCategory
    , ISNULL(trace.CategoryDescription, xe.CategoryDescription) AS CategoryDescription
    , ISNULL(trace.EventName, xe.EventName) AS EventName
    , ISNULL(trace.EventID, xe.EventID) AS EventID
    , ISNULL(trace.EventDescription, xe.EventDescription) AS EventDescription
    , ISNULL(trace.CategoryType, xe.CategoryType) AS CategoryType
    , ISNULL(CONVERT(VARCHAR(20), trace.EventSource), xe.EventSource) AS EventSource
    , xe.PackageName
FROM #SSASTrace trace
    FULL OUTER JOIN #SSASXE xe
        ON trace.EventName = xe.EventName
ORDER BY EventName;

Thanks to the level of maturity with XEvents in SSAS, there is some massaging of the data that has to be done so that we can correlate the trace events to the XEvents events. Little things like missing EventIDs in the XEvents events or missing categories and so forth. That's fine, we are able to work around it and produce results similar to the following. If you compare it to the GUI, you will see that it is somewhat similar and should help bridge the gap between the metadata and the GUI for you.

Put a bow on it

Extended Events is a power tool for many facets of SQL Server. While it may still be rather immature in the world of SSAS, it still has a great deal of benefit and power to offer. Getting to know XEvents in SSAS can be a crucial skill in improving your Data Superpowers and it is well worth the time spent trying to learn such a cool feature. Interested in learning more about the depth and breadth of Extended Events? Check these out or check out the XE website here. Want to learn more about your indexes? Try this index maintenance article or this index size article.
This is the seventh article in the 2020 "12 Days of Christmas" series. For the full list of articles, please visit this page. The post Your Quick Introduction to Extended Events in Analysis Services first appeared on SQL RNNR.

Related Posts:
• Extended Events Gets a New Home (May 18, 2020)
• Profiler for Extended Events: Quick Settings (March 5, 2018)
• How To: XEvents as Profiler (December 25, 2018)
• Easy Open Event Log Files (June 7, 2019)
• Azure Data Studio and XEvents (November 21, 2018)

The post Your Quick Introduction to Extended Events in Analysis Services appeared first on SQLServerCentral.


workers.dev will soon allow users to deploy their Cloudflare Workers to a subdomain of their choice

Melisha Dsouza
20 Feb 2019
2 min read
Cloudflare users will very soon be able to deploy Workers without having a Cloudflare domain. They will be able to deploy their Cloudflare Workers to a subdomain of their choice, with an extension of .workers.dev. According to the Cloudflare blog, this is a step towards making it easy for users to get started with Workers and build a new serverless project from scratch.

Cloudflare Workers' serverless execution environment allows users to create new applications or improve existing ones without configuring or maintaining infrastructure. Cloudflare Workers run on Cloudflare servers, and not in a user's browser, meaning that a user's code will run in a trusted environment where it cannot be bypassed by malicious clients. workers.dev was obtained through Google's TLD launch program. Customers can head over to workers.dev, where they will be able to claim a subdomain (one per user). workers.dev is itself fully served using Cloudflare Workers.

Zack Bloom, the Director of Product for Product Strategy at Cloudflare, says that workers.dev will be especially useful for serverless apps. Without cold starts, users will obtain instant scaling to almost any volume of traffic, making this type of serverless seem faster and cheaper.

Cloudflare Workers have received an amazing response from users all over the internet:

Source: HackerNews

This news has also been received with much enthusiasm: https://twitter.com/MrAhmadAwais/status/1097919710249783297

You can head over to the Cloudflare blog for more information on this news.

Cloudflare's 1.1.1.1 DNS service is now available as a mobile app for iOS and Android
Cloudflare's Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly
Cloudflare Workers KV, a distributed native key-value store for Cloudflare Workers

Google makes Trust Services Root R1, R2, R3, and R4 Inclusion Request

Melisha Dsouza
17 Sep 2018
3 min read
After Google launched its Certification Authority in August 2017, it has now put in a request to the Mozilla certificate store for the inclusion of the Google Trust Services R1, R2, R3, and R4 roots, as documented in the following bug. Google's application states the following:

"Google is a commercial CA that will provide certificates to customers from around the world. We will offer certificates for server authentication, client authentication, email (both signing and encrypting), and code signing. Customers of the Google PKI are the general public. We will not require that customers have a domain registration with Google, use domain suffixes where Google is the registrant, or have other services from Google."

What are Google Trust Services Roots?

To adopt an independent infrastructure and build the "foundation of a more secure web," Google Trust Services allows the company to issue its own TLS/SSL certificates for securing its web traffic via HTTPS, instead of relying on third-party certs. The main aim of launching GTS was to bring security and authentication certificates up to par with Google's rigorous security standards. This means invalidating the old, insecure HTTP standard in Chrome, and deprecating Adobe Flash, a web program known to be insecure and a resource hog. GTS will provide HTTPS certificates from public websites to API servers, and it will be inclusive of all Alphabet companies, not just Google. Developers who build products that connect to Google's services will have to include the new root certificates. All GTS roots expire in 2036, while GS Root R2 expires in 2021 and GS Root R4 in 2038. Google will also be able to cross-sign its CAs, using GS Root R3 and GeoTrust, to ease potential timing issues while setting up the root CAs. To know more about these trust services, you can visit GlobalSign.

Some noticeable points in this request are:
• Google has supplied a key generation ceremony audit report.
• Other than the disclosed intermediates and required test certificates, no issuance has been detected from these roots.
• Section 1.4.2 of the CPS expressly forbids the use of Google certificates for "man-in-the-middle purposes".
• Appendix C of the current CPS indicates that Google limits the lifetime of server certificates to 365 days.

The following concerns exist about the roots:
• From the transfer on 11 August 2016 through 8 December 2016, it would not have been clear at the time if any policies applied to these new roots. The applicable CPS (Certification Practice Statement) during that period makes no reference to these roots. Google does state in their current CPS that these roots were operated according to that CPS.
• From the transfer on 11 August 2016 through the end of Google's audit period on 30 September 2016, these roots were not explicitly covered by either Google's audit or GlobalSign's audit.

The discussion was concluded with adding this policy to the main Mozilla Root Store Policy (section 8). With these changes and the filing of the bug, Mozilla plans to take no action against GTS based on what has been discovered and discussed. Here is what users had to say on this request:

Source: Vue-hn

To get a complete insight into this request, head over to Google Groups.

Let's Encrypt SSL/TLS certificates gain the trust of all Major Root Programs
Pay your respects to Inbox, Google's email innovation is getting discontinued
Google's prototype Chinese search engine 'Dragonfly' reportedly links searches to phone numbers


Deno, an attempt to fix Node.js flaws, is rewritten in Rust

Prasad Ramesh
27 Aug 2018
2 min read
Deno is a runtime by the creator of Node, Ryan Dahl. It aims at fixing some of the problems in Node. Originally written in Go, Deno has now been rewritten in Rust and is at version 0.1.

Node.js was developed nearly a decade ago. It was designed in 2009 to use server-side JavaScript. The implementation solved the problems of 2009, for which Dahl has no regrets. But lately, he did have regrets, elaborated in a talk on 10 things he regrets about Node at JSConf 2018. Some of the regrets included packages, security issues, and the entire build system, among others.

Deno is a secure TypeScript runtime on Chrome V8. It was originally written in Go and has now been rewritten in Rust to avoid potential garbage collector issues. Deno is similar to Node.js but is focused on security. Deno takes full advantage of JavaScript being a secure sandbox. So, unlike Node.js, Deno is sandboxed. Scripts should run without any write access by default. Using untrusted utilities like linters will be optional. There is no package.json in Deno, no npm, and it is not explicitly compatible with Node. An important thing to note is that the build requirement is Python 2, not Python 3, because the Chrome V8 scripts still use Python 2.

There were plans to rewrite Deno in Rust when it was originally released in June this year. Dahl mentioned in a GitHub comment: "The reason for not using Go is that it has a rather complex runtime - including a GC. Although I haven't experienced any problems with that yet, it's not hard to imagine that down the road that might clash badly with V8's very complex runtime."

You can get the binaries here to get started and check out the GitHub repo.

Deploying Node.js apps on Google App Engine is now easy
Creating Macros in Rust [Tutorial]
Rust Language Server, RLS 1.0 releases with code intelligence, syntax highlighting and more


Docker faces public outcry as Docker for Mac and Windows can be downloaded only via Docker Store login

Melisha Dsouza
23 Aug 2018
4 min read
Five years ago, Docker was the talk of the town because it made it possible to get a number of apps running on the same old servers and it also made packaging and shipping programs easy. But the same cannot be said about Docker now, as the company is facing public disapproval over its decision to allow Docker for Mac and Windows to be downloaded only if one is logged into the Docker Store. Their quest for "improving the user experience" is clearly facing major roadblocks.

Two years ago, every bug report and reasonable feature request was "hard" or "something you don't want" and would result in endless back and forth for the users. On 02 June 2016, new repository keys were pushed to the Docker public repository. As a direct consequence, any run of "apt-get update" (or equivalent) on a system configured with the broken repo would fail with the error "Error https://apt.dockerproject.org/ Hash Sum mismatch." The issue affected ALL systems worldwide that were configured with the Docker repository. All Debian and Ubuntu versions, independent of OS and Docker versions, faced the meltdown. It became impossible to run a system update or upgrade on an existing system. This 7-hour interplanetary outage caused by Docker got little tech news coverage; all that was done was a few messages on a GitHub issue.

You would have expected Docker to be a little bit more careful after the above controversy, but lo and behold! Here comes yet another badly managed change implementation.

The current matter in question

On June 20th, 2018, GitHub and Reddit were abuzz with comments from confused Docker users on how they couldn't download Docker for Mac or Windows without logging into the Docker Store. The following URLs were spotted with the problem: Install Docker for Mac and Install Docker for Windows. To this, a Docker spokesperson responded saying that the change was incorporated to improve the Docker for Mac and Windows experience for users moving forward. This led to a string of accusations from dedicated Docker users. Some of their complaints are captured in screenshots on the issue thread (Source: github.com).

The issue is still ongoing, and with no further statements released from the Docker team, users are left in the dark.

In spite of all the hullabaloo, why choose Docker?

A report by DZone indicates that Docker adoption by companies was up 30% in the last year. Its annual revenue is expected to increase by 4x, growing from $749 million in 2016 to more than $3.4 billion by 2021, representing a compound annual growth rate (CAGR) of 35 percent. So what is this company doing differently? It's no secret that Docker containers are easy to deploy in a cloud. Docker can be incorporated into most DevOps toolchains, including Puppet, Chef, Vagrant, and Ansible, which are some of the major tools in configuration management. Specifically for CI/CD, Docker makes it achievable to set up local development environments that are exactly like a live server. It can run multiple development environments from the same host with unique software, operating systems, and configurations. It helps to test projects on new or different servers, and it allows multiple users to work on the same project with the exact same settings, regardless of the local host environment. It also ensures that applications running in containers are completely segregated and isolated from each other, which means you get complete control over traffic flow and management.

So, what's the verdict?

Most users called Docker's move manipulative, since the company is literally asking people to log in with their information so it can target them with ad campaigns and spam emails to make money. However, there were also some in support of this move (Source: github.com). One Reddit user said that while there is no direct solution to this issue, you can use https://github.com/moby/moby/releases as a workaround, or a proper package manager if you're on Linux. Hopefully, Docker takes this as a cue before releasing any more updates that could spark public outcry. It would be interesting to see how many companies still stick around and use Docker irrespective of the rollercoaster ride that the users are put through. You can find further opinions on this matter at reddit.com.

Docker isn't going anywhere
Zeit releases Serverless Docker in beta
What's new in Docker Enterprise Edition 2.0?


Generative Adversarial Networks: Google open sources TensorFlow-GAN (TFGAN)

Abhishek Jha
13 Dec 2017
11 min read
If you have played the game Prince of Persia, you know what it is like defending yourself from the 'shadow' which tries to kill you. It's a conundrum: if you kill the shadow you die; if you don't do anything, you definitely die! For all its merits, the Generative Adversarial Network, or GAN, has faced a similar problem with differentiation. Most deep learning experts who endorse GANs mix their support with a little bit of caution – there is a stability issue! You may call it a holistic convergence problem. Both discriminator and generator are at loggerheads, while still being dependent on each other for efficient training. If one of them fails, the entire system fails. And you have got to ensure they don't explode. The Prince of Persia is an interesting analogy!

To begin with, neural networks were designed to replicate the human brain (albeit, artificially). They have succeeded in recognizing objects and processing natural languages. But to think and act like humans at that neurological level – let us admit it's a far cry still. Which is why Generative Adversarial Networks became a hot topic in machine learning. It's a relatively new architecture, but it has gone on to revolutionize deep learning by accurately modeling real-world data in ways better than any other model has done before. After all, GANs came with a new model for training a neural net, with not one but two independent nets that work separately (and act as adversaries!) as the discriminator and the generator. Such a new architecture for an unsupervised neural network yields far better performance when compared to traditional nets.

But the fact is, we have barely scratched the surface. The challenge is to train GANs from here onwards. Training comes with its own problems, such as failing to differentiate how many of a particular object should occur at a location, failing to adapt to 3D objects (the model doesn't understand the perspectives of front view and back view), not being able to understand real-life holistic structures, etc. Substantial research has been taking place to take care of these problems, and new models have been proposed to give more accurate results than previous techniques.

Now Google intends to make Generative Adversarial Networks easier to experiment with! They have just open sourced TFGAN, a lightweight TensorFlow library designed to make it easy to train and evaluate GANs.

https://www.youtube.com/watch?v=f2GF7TZpuGQ

According to Google, TFGAN provides the infrastructure to easily train a GAN, provides well-tested loss and evaluation metrics, and gives easy-to-use examples that highlight the expressiveness and flexibility of TFGAN. "We've also released a tutorial that includes a high-level API to quickly get a model trained on your data," Google said in its announcement.

Source: research.googleblog.com

The above image demonstrates the effect of an adversarial loss on image compression. The top row shows image patches from the ImageNet dataset. The middle row shows the results of compressing and uncompressing an image through an image compression neural network trained on a traditional loss. The bottom row shows the results from a network trained with a traditional loss and an adversarial loss. The GAN-loss images are sharper and more detailed, even if they are less like the original.

TFGAN offers simple function calls for the majority of GAN use cases (users can run a model in a few lines of code), but it's also built in a modular way that covers sophisticated GAN designs. "You can just use the modules you want — loss, evaluation, features, training, etc. are all independent. TFGAN's lightweight design also means you can use it alongside other frameworks, or with native TensorFlow code," Google says, adding that GAN models written using TFGAN will easily benefit from future infrastructure improvements, and that users can select from a large number of already-implemented losses and features without having to rewrite their own. Most importantly, Google is assuring us that the code is well-tested: "You don't have to worry about numerical or statistical mistakes that are easily made with GAN libraries."

Source: research.googleblog.com

Most neural text-to-speech (TTS) systems produce over-smoothed spectrograms. When applied to the Tacotron TTS system, Google says, a GAN can recreate some of the realistic texture, reducing artifacts in the resulting audio.

And then, there is no harm in reiterating that when Google has open sourced a project, it must be absolutely production ready! "When you use TFGAN, you'll be using the same infrastructure that many Google researchers use, and you'll have access to the cutting-edge improvements that we develop with the library," the tech giant added.

To Start With

import tensorflow as tf
tfgan = tf.contrib.gan

Why TFGAN?

• Easily train generator and discriminator networks with well-tested, flexible library calls. You can mix TFGAN, native TF, and other custom frameworks
• Use already implemented GAN losses and penalties (e.g. Wasserstein loss, gradient penalty, mutual information penalty, etc.)
• Monitor and visualize GAN progress during training, and evaluate them
• Use already-implemented tricks to stabilize and improve training
• Develop based on examples of common GAN setups
• Use the TFGAN-backed GANEstimator to easily train a GAN model
• Improvements in TFGAN infrastructure will automatically benefit your TFGAN project
• Stay up-to-date with research as we add more algorithms

What are the TFGAN components?

TFGAN is composed of several parts which were designed to exist independently. These include the following main pieces (explained in detail below).

• core: provides the main infrastructure needed to train a GAN. Training occurs in four phases, and each phase can be completed by custom code or by using a TFGAN library call.
• features: Many common GAN operations and normalization techniques are implemented for you to use, such as instance normalization and conditioning.
• losses: Easily experiment with already-implemented and well-tested losses and penalties, such as the Wasserstein loss, gradient penalty, mutual information penalty, etc.
• evaluation: Use Inception Score or Frechet Distance with a pretrained Inception network to evaluate your unconditional generative model. You can also use your own pretrained classifier for more specific performance numbers, or use other methods for evaluating conditional generative models.
• examples and tutorial: See examples of how to use TFGAN to make GAN training easier, or use the more complicated examples to jumpstart your own project. These include unconditional and conditional GANs, InfoGANs, adversarial losses on existing networks, and image-to-image translation.

Training a GAN model

Training in TFGAN typically consists of the following steps:

1. Specify the input to your networks.
2. Set up your generator and discriminator using a GANModel.
3. Specify your loss using a GANLoss.
4. Create your train ops using a GANTrainOps.
5. Run your train ops.

There are various types of GAN setups. For instance, you can train a generator to sample unconditionally from a learned distribution, or you can condition on extra information such as a class label. TFGAN is compatible with many setups, and a few are demonstrated below.

Examples

Unconditional MNIST generation

This example trains a generator to produce handwritten MNIST digits. The generator maps random draws from a multivariate normal distribution to MNIST digit images. See 'Generative Adversarial Networks' by Goodfellow et al.

# Set up the input.
images = mnist_data_provider.provide_data(FLAGS.batch_size)
noise = tf.random_normal([FLAGS.batch_size, FLAGS.noise_dims])

# Build the generator and discriminator.
gan_model = tfgan.gan_model(
    generator_fn=mnist.unconditional_generator,  # you define
    discriminator_fn=mnist.unconditional_discriminator,  # you define
    real_data=images,
    generator_inputs=noise)

# Build the GAN loss.
gan_loss = tfgan.gan_loss(
    gan_model,
    generator_loss_fn=tfgan_losses.wasserstein_generator_loss,
    discriminator_loss_fn=tfgan_losses.wasserstein_discriminator_loss)

# Create the train ops, which calculate gradients and apply updates to weights.
train_ops = tfgan.gan_train_ops(
    gan_model,
    gan_loss,
    generator_optimizer=tf.train.AdamOptimizer(gen_lr, 0.5),
    discriminator_optimizer=tf.train.AdamOptimizer(dis_lr, 0.5))

# Run the train ops in the alternating training scheme.
tfgan.gan_train(
    train_ops,
    hooks=[tf.train.StopAtStepHook(num_steps=FLAGS.max_number_of_steps)],
    logdir=FLAGS.train_log_dir)

Conditional MNIST generation

This example trains a generator to generate MNIST images of a given class. The generator maps random draws from a multivariate normal distribution and a one-hot label of the desired digit class to an MNIST digit image. See 'Conditional Generative Adversarial Nets' by Mirza and Osindero.

# Set up the input.
images, one_hot_labels = mnist_data_provider.provide_data(FLAGS.batch_size)
noise = tf.random_normal([FLAGS.batch_size, FLAGS.noise_dims])

# Build the generator and discriminator.
gan_model = tfgan.gan_model(
    generator_fn=mnist.conditional_generator,  # you define
    discriminator_fn=mnist.conditional_discriminator,  # you define
    real_data=images,
    generator_inputs=(noise, one_hot_labels))

# The rest is the same as in the unconditional case.
...

Adversarial loss

This example combines an L1 pixel loss and an adversarial loss to learn to autoencode images. The bottleneck layer can be used to transmit compressed representations of the image. Neural networks with pixel-wise loss only tend to produce blurry results, so the GAN can be used to make the reconstructions more plausible. See 'Full Resolution Image Compression with Recurrent Neural Networks' by Toderici et al for an example of neural networks used for image compression, and 'Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network' by Ledig et al for a more detailed description of how GANs can sharpen image output.

# Set up the input pipeline.
images = image_provider.provide_data(FLAGS.batch_size)

# Build the generator and discriminator.
gan_model = tfgan.gan_model(
    generator_fn=nets.autoencoder,  # you define
    discriminator_fn=nets.discriminator,  # you define
    real_data=images,
    generator_inputs=images)

# Build the GAN loss and standard pixel loss.
gan_loss = tfgan.gan_loss(
    gan_model,
    generator_loss_fn=tfgan_losses.wasserstein_generator_loss,
    discriminator_loss_fn=tfgan_losses.wasserstein_discriminator_loss,
    gradient_penalty=1.0)
l1_pixel_loss = tf.norm(gan_model.real_data - gan_model.generated_data, ord=1)

# Modify the loss tuple to include the pixel loss.
gan_loss = tfgan.losses.combine_adversarial_loss(
    gan_loss, gan_model, l1_pixel_loss, weight_factor=FLAGS.weight_factor)

# The rest is the same as in the unconditional case.
...

Image-to-image translation

This example maps images in one domain to images of the same size in a different domain. For example, it can map segmentation masks to street images, or grayscale images to color. See 'Image-to-Image Translation with Conditional Adversarial Networks' by Isola et al for more details.

# Set up the input pipeline.
input_image, target_image = data_provider.provide_data(FLAGS.batch_size)

# Build the generator and discriminator.
gan_model = tfgan.gan_model(
    generator_fn=nets.generator,  # you define
    discriminator_fn=nets.discriminator,  # you define
    real_data=target_image,
    generator_inputs=input_image)

# Build the GAN loss and standard pixel loss.
gan_loss = tfgan.gan_loss(
    gan_model,
    generator_loss_fn=tfgan_losses.least_squares_generator_loss,
    discriminator_loss_fn=tfgan_losses.least_squares_discriminator_loss)
l1_pixel_loss = tf.norm(gan_model.real_data - gan_model.generated_data, ord=1)

# Modify the loss tuple to include the pixel loss.
gan_loss = tfgan.losses.combine_adversarial_loss(
    gan_loss, gan_model, l1_pixel_loss, weight_factor=FLAGS.weight_factor)

# The rest is the same as in the unconditional case.
...

InfoGAN

Train a generator to generate specific MNIST digit images, and control for digit style without using any labels. See 'InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets' for more details.

# Set up the input pipeline.
images = mnist_data_provider.provide_data(FLAGS.batch_size)

# Build the generator and discriminator.
gan_model = tfgan.infogan_model(
    generator_fn=mnist.infogan_generator,  # you define
    discriminator_fn=mnist.infogan_discriminator,  # you define
    real_data=images,
    unstructured_generator_inputs=unstructured_inputs,  # you define
    structured_generator_inputs=structured_inputs)  # you define

# Build the GAN loss with mutual information penalty.
gan_loss = tfgan.gan_loss(
    gan_model,
    generator_loss_fn=tfgan_losses.wasserstein_generator_loss,
    discriminator_loss_fn=tfgan_losses.wasserstein_discriminator_loss,
    gradient_penalty=1.0,
    mutual_information_penalty_weight=1.0)

# The rest is the same as in the unconditional case.
...

Custom model creation

Train an unconditional GAN to generate MNIST digits, but manually construct the GANModel tuple for more fine-grained control.

# Set up the input pipeline.
images = mnist_data_provider.provide_data(FLAGS.batch_size)
noise = tf.random_normal([FLAGS.batch_size, FLAGS.noise_dims])

# Manually build the generator and discriminator.
with tf.variable_scope('Generator') as gen_scope:
    generated_images = generator_fn(noise)
with tf.variable_scope('Discriminator') as dis_scope:
    discriminator_gen_outputs = discriminator_fn(generated_images)
with variable_scope.variable_scope(dis_scope, reuse=True):
    discriminator_real_outputs = discriminator_fn(images)
generator_variables = variables_lib.get_trainable_variables(gen_scope)
discriminator_variables = variables_lib.get_trainable_variables(dis_scope)

# Depending on what TFGAN features you use, you don't always need to supply
# every `GANModel` field. At a minimum, you need to include the discriminator
# outputs and variables if you want to use TFGAN to construct losses.
gan_model = tfgan.GANModel(
    generator_inputs,
    generated_data,
    generator_variables,
    gen_scope,
    generator_fn,
    real_data,
    discriminator_real_outputs,
    discriminator_gen_outputs,
    discriminator_variables,
    dis_scope,
    discriminator_fn)

# The rest is the same as the unconditional case.
...

Google has allowed anyone to contribute to the GitHub repositories to facilitate code-sharing among machine learning users. For more examples on TFGAN, see tensorflow/models on GitHub.

Drupal 9 will be released in 2020, shares Dries Buytaert, Drupal’s founder

Bhagyashree R
14 Dec 2018
2 min read
At Drupal Europe 2018, Dries Buytaert, the founder and lead developer of the Drupal content management system, announced that Drupal 9 will be released in 2020. Yesterday, he shared a much more detailed timeline for Drupal 9, according to which it is planned for release on June 3, 2020.

One of the biggest dependencies of Drupal 8 is Symfony 3, which is scheduled to reach its end of life by November 2021. This means that no security bugs in Symfony 3 will be fixed after that, and people will have to move to Drupal 9 for better support and security. Going by the plan, site owners will have at least one year to upgrade from Drupal 8 to Drupal 9.

Drupal 9 will not have a separate code base; rather, the team is adding new functionalities in Drupal 8 as backward-compatible code and experimental features. Once they are sure that these features are stable, any old functionalities will be deprecated. One of the most notable updates will be support for Symfony 4 or 5 in Drupal 9. Since Symfony 5 is not yet released, the scope of its changes is not clear to the Drupal team, so they are focusing on running Drupal 8 with Symfony 4. The final goal is to make Drupal 8 work with Symfony 3, 4 or 5, so that any issues encountered can be fixed before they start requiring Symfony 4 or 5 in Drupal 9.

As Drupal 9 is being built in Drupal 8, this will make things much easier for every stakeholder. Drupal core contributors will just have to remove the deprecated functionalities and upgrade the dependencies. For site owners, it will be much easier to upgrade to Drupal 9 than it was to upgrade to Drupal 8. Dries Buytaert wrote in his post, "Drupal 9 will simply be the last version of Drupal 8, with its deprecations removed. This means we will not introduce new, backwards-compatibility breaking APIs or features in Drupal 9 except for our dependency updates. As long as modules and themes stay up-to-date with the latest Drupal 8 APIs, the upgrade to Drupal 9 should be easy. Therefore, we believe that a 12- to 18-month upgrade period should suffice."

You can read the full announcement on Drupal's website.

WordPress 5.0 (Bebo) released with improvements in design, theme and more
5 things to consider when developing an eCommerce website
Introduction to WordPress Plugin


Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows

Packt Editorial Staff
09 May 2019
8 min read
A typical deep learning workflow starts with ideation and research around a problem statement, where the architectural design and model decisions come into play. Following this, the theoretical model is validated through prototypes. This includes trying out different models or techniques, such as skip connections, and deciding what not to try out. PyTorch was started as a research framework by a Facebook intern, and it has now grown into a framework used both for research and prototyping and for writing efficient models with serving modules. The PyTorch deep learning workflow is fairly close to the workflow implemented by almost everyone in the industry, even for highly sophisticated implementations, with slight variations.

In this article, we explain the core of the ideation and planning, and design and experimentation, phases of the PyTorch deep learning workflow. This article is an excerpt from the book PyTorch Deep Learning Hands-On by Sherin Thomas and Sudhanshu Passi. The book attempts to provide an entirely practical introduction to PyTorch, with numerous examples and dynamic AI applications, and demonstrates the simplicity and efficiency of the PyTorch approach to machine intelligence and deep learning.

Ideation and planning

Usually, in an organization, the product team comes up with a problem statement for the engineering team, to find out whether they can solve it or not. This is the start of the ideation phase. However, in academia, this could be the decision phase where candidates have to find a problem for their thesis. In the ideation phase, engineers brainstorm and find the theoretical implementations that could potentially solve the problem. In addition to converting the problem statement to a theoretical solution, the ideation phase is where we decide what the data types are and what dataset we should use to build the proof of concept (POC) of the minimum viable product (MVP). This is also the stage where the team decides which framework to go with by analyzing the behavior of the problem statement, available implementations, available pretrained models, and so on. This stage is very common in the industry, and I have come across numerous examples where a well-planned ideation phase helped the team to roll out a reliable product on time, while an unplanned ideation phase destroyed the whole product effort.

Design and experimentation

The crucial part of design and experimentation lies in the dataset and the preprocessing of the dataset. For any data science project, the major share of time is spent on data cleaning and preprocessing, and deep learning is no exception. Data preprocessing is one of the vital parts of building a deep learning pipeline. Real-world datasets are usually not cleaned or formatted for a neural network to process directly. Conversion to floats or integers, normalization, and so on is required before further processing. Building a data processing pipeline is also a non-trivial task, which consists of writing a lot of boilerplate code. To make this much easier, dataset builders and DataLoader pipeline packages are built into the core of PyTorch.

The dataset and DataLoader classes

Different types of deep learning problems require different types of datasets, and each of them might require different types of preprocessing depending on the neural network architecture we use. This is one of the core problems in deep learning pipeline building.
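Before looking at the dataset class below, here is a minimal, illustrative sketch of the "convert to floats and normalize" step mentioned above; the raw records and the feature layout are invented for this example and are not taken from the book:

import torch

# Hypothetical raw records, e.g. parsed from a CSV file: a mix of ints and floats.
raw_records = [[3, 150.0], [5, 95.0], [15, 30.0], [7, 120.0]]

features = torch.tensor(raw_records, dtype=torch.float32)  # conversion to float tensors
mean, std = features.mean(dim=0), features.std(dim=0)
normalized = (features - mean) / std                       # simple per-feature normalization
print(normalized)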
Although the community has made the datasets for different tasks available for free, writing a preprocessing script is almost always painful. PyTorch solves this problem by giving abstract classes to write custom datasets and data loaders. The example given here is a simple dataset class to load the fizzbuzz dataset, but extending this to handle any type of dataset is fairly straightforward. PyTorch's official documentation uses a similar approach to preprocess an image dataset before passing it to a complex convolutional neural network (CNN) architecture.

A dataset class in PyTorch is a high-level abstraction that handles almost everything required by the data loaders. The custom dataset class defined by the user needs to override the __len__ and __getitem__ functions of the parent class, where __len__ is used by the data loaders to determine the length of the dataset and __getitem__ is used by the data loaders to get the item. The __getitem__ function expects the user to pass the index as an argument and returns the item that resides at that index:

from dataclasses import dataclass
from torch.utils.data import Dataset, DataLoader


@dataclass(eq=False)
class FizBuzDataset(Dataset):
    input_size: int
    start: int = 0
    end: int = 1000

    def encoder(self, num):
        ret = [int(i) for i in '{0:b}'.format(num)]
        return [0] * (self.input_size - len(ret)) + ret

    def __getitem__(self, idx):
        x = self.encoder(idx)
        if idx % 15 == 0:
            y = [1, 0, 0, 0]
        elif idx % 5 == 0:
            y = [0, 1, 0, 0]
        elif idx % 3 == 0:
            y = [0, 0, 1, 0]
        else:
            y = [0, 0, 0, 1]
        return x, y

    def __len__(self):
        return self.end - self.start

The implementation of a custom dataset uses brand new dataclasses from Python 3.7. dataclasses help to eliminate boilerplate code for Python magic functions, such as __init__, using dynamic code generation. This needs the code to be type-hinted, and that's what the first three lines inside the class are for. You can read more about dataclasses in the official documentation of Python (https://docs.python.org/3/library/dataclasses.html).

The __len__ function returns the difference between the end and start values passed to the class. In the fizzbuzz dataset, the data is generated by the program. The implementation of data generation is inside the __getitem__ function, where the class instance generates the data based on the index passed by DataLoader. PyTorch made the class abstraction as generic as possible, such that the user can define what the data loader should return for each id. In this particular case, the class instance returns input and output for each index, where the input, x, is the binary-encoded version of the index itself and the output is a one-hot encoded vector with four states. The four states represent whether the number is a multiple of three (fizz), a multiple of five (buzz), a multiple of both three and five (fizzbuzz), or not a multiple of either three or five.

Note: For Python newbies, the way the dataset works can be understood by looking first at the loop that iterates over the integers, starting from zero up to the length of the dataset (the length is returned by the __len__ function when len(object) is called).
The following snippet shows the simple loop:

dataset = FizBuzDataset(input_size=10)  # input_size is required; 10 bits is an arbitrary choice covering indices up to 1023
for i in range(len(dataset)):
    x, y = dataset[i]

dataloader = DataLoader(dataset, batch_size=10, shuffle=True, num_workers=4)
for batch in dataloader:
    print(batch)

The DataLoader class accepts a dataset class that is inherited from torch.utils.data.Dataset. DataLoader accepts the dataset and does non-trivial operations such as mini-batching, multiprocessing, shuffling, and so on, to fetch the data from the dataset. It accepts a dataset instance from the user and uses the sampler strategy to sample data as mini-batches. The num_workers argument decides how many parallel worker processes should be operating to fetch the data. This helps to avoid a CPU bottleneck so that the CPU can keep up with the GPU's parallel operations. Data loaders allow users to specify whether to use pinned CUDA memory, which copies the data tensors to CUDA's pinned memory before returning them to the user. Using pinned memory is the key to fast data transfers between devices, since the data is loaded into pinned memory by the data loader itself, which is done by multiple cores of the CPU anyway.

Most often, especially while prototyping, custom datasets might not be available, and in such cases developers have to rely on existing open datasets. The good thing about working on open datasets is that most of them are free from licensing burdens, and thousands of people have already tried preprocessing them, so the community will help out. PyTorch came up with utility packages for all three types of datasets, with pretrained models, preprocessed datasets, and utility functions to work with these datasets.

This article is about how to build a basic pipeline for deep learning development. The system we defined here is a very common, general approach that is followed by different sorts of companies, with slight changes. The benefit of starting with a generic workflow like this is that you can build a really complex workflow on top of it as your team or project grows.

Build deep learning workflows and take deep learning models from prototyping to production with PyTorch Deep Learning Hands-On written by Sherin Thomas and Sudhanshu Passi.
F8 PyTorch announcements: PyTorch 1.1 releases with new AI tools, open sourcing BoTorch and Ax, and more
Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs
Top 10 deep learning frameworks
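As a follow-up to the DataLoader discussion above (parallel workers, pinned memory, and the ready-made torchvision datasets), here is a minimal, hedged sketch; the choice of MNIST, the batch size, the normalization constants, and the download path are arbitrary assumptions rather than values from the book:

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# An open dataset whose preprocessing is handled by torchvision transforms.
mnist = datasets.MNIST(
    root='./data', train=True, download=True,
    transform=transforms.Compose([
        transforms.ToTensor(),                      # convert PIL images to float tensors
        transforms.Normalize((0.1307,), (0.3081,))  # commonly quoted MNIST mean/std
    ]))

# num_workers fetches batches in parallel; pin_memory speeds up host-to-GPU copies.
loader = DataLoader(mnist, batch_size=64, shuffle=True,
                    num_workers=4, pin_memory=torch.cuda.is_available())

images, labels = next(iter(loader))
print(images.shape, labels.shape)  # expected: [64, 1, 28, 28] and [64]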


What to expect in Webpack 5?

Bhagyashree R
07 Feb 2019
3 min read
Yesterday, the team behind Webpack shared the updates we will see in its upcoming version, Webpack 5. This version improves build performance with persistent caching, introduces a new named chunk id algorithm, and more. For Webpack 5, the minimum supported Node.js version has been updated from 6 to 8. As this version is a major release, it will come with breaking changes, and users may find that some plugins no longer work.

Expected features in Webpack 5

Removed Webpack 4 deprecated features

All the features that were deprecated in Webpack 4 have been removed in this version. So, when migrating to Webpack 5, ensure that your Webpack build doesn't show any deprecation warnings. Additionally, IgnorePlugin and BannerPlugin must now be passed an options object.

Automatic Node.js polyfills removed

Webpack 4 and earlier versions provided polyfills for most of the Node.js core modules. These were automatically applied once a module used any of the core modules. Using polyfills makes it easy to use modules written for Node.js, but it also increases the bundle size, as large modules get added to the bundle. To stop this, Webpack 5 removes the automatic polyfilling and focuses on frontend-compatible modules.

Algorithm for deterministic chunk and module IDs

Webpack 5 comes with new algorithms for long-term caching. These are enabled by default in production mode with the following configuration lines: chunkIds: "deterministic", moduleIds: "deterministic". These algorithms assign short numeric IDs to modules and chunks in a deterministic way. It is recommended that you use the default values for chunkIds and moduleIds. You can also choose to use the old defaults, chunkIds: "size", moduleIds: "size", which will generate smaller bundles but invalidate them more often for caching.

Named Chunk IDs algorithm

A named chunk id algorithm is introduced, which is enabled by default in development mode. It gives chunks and filenames human-readable names instead of the old numeric names. The algorithm determines the chunk ID from the chunk's content, so users no longer need to use import(/* webpackChunkName: "name" */ "module") for debugging. To opt out of this feature, set chunkIds: "natural".

Compiler idle and close

Starting from Webpack 5, compilers need to be closed after use. Compilers now enter and leave an idle state and have hooks for these states. Once the compiler is closed, all remaining work should be finished as fast as possible, and a callback will then signal that the closing has been completed.

You can read the entire changelog in the Webpack repository.

Nuxt.js 2.0 released with a new scaffolding tool, Webpack 4 upgrade, and more!
How to create a desktop application with Electron [Tutorial]
The Angular 7.2.1 CLI release fixes a webpack-dev-server vulnerability, supports TypeScript 3.2 and Angular 7.2.0-rc.0

How to build a chatbot with Microsoft Bot framework

Kunal Chaudhari
27 Apr 2018
8 min read
The Microsoft Bot Framework is an incredible tool from Microsoft. It makes building chatbots easier and more accessible than ever. That means you can build awesome conversational chatbots for a range of platforms, including Facebook and Slack. In this tutorial, you'll learn how to build an FAQ chatbot using Microsoft Bot Framework and ASP.NET Core. This tutorial has been taken from .NET Core 2.0 By Example. Let's get started.

Your chatbot will be able to respond to simple queries such as:

How are you?
Hello!
Bye!

This should provide a good foundation for you to go further and build more complex chatbots with the Microsoft Bot Framework. The more you train the bot and the more questions you put in its knowledge base, the better it will be. If you're a UK-based public sector organisation, ICS AI offer conversational AI solutions built to your needs. Their Microsoft-based infrastructure runs chatbots augmented with AI to better serve general public enquiries.

Build a basic FAQ chatbot with Microsoft Bot Framework

First of all, we need to create a page that can be accessed anonymously, as this is the frequently asked questions (FAQ) page, and hence the user should not be required to be logged in to the system to access it. To do so, let's create a new controller called FaqController in our LetsChat.csproj. It will be a very simple class with just one action called Index, which will display the FAQ page. The code is as follows:

[AllowAnonymous]
public class FaqController : Controller
{
    // GET: Faq
    public ActionResult Index()
    {
        return this.View();
    }
}

Notice that we have used the [AllowAnonymous] attribute, so that this controller can be accessed even if the user is not logged in. The corresponding .cshtml is also very simple. In the solution explorer, right-click on the Views folder under the LetsChat project, create a folder named Faq, and then add an Index.cshtml file in that folder. The markup of Index.cshtml would look like this:

@{
    ViewData["Title"] = "Let's Chat";
    ViewData["UserName"] = "Guest";
    if (User.Identity.IsAuthenticated)
    {
        ViewData["UserName"] = User.Identity.Name;
    }
}
<h1>
    Hello @ViewData["UserName"]! Welcome to FAQ page of Let's Chat
</h1>
<br />

Nothing much here apart from the welcome message. The message displays the username if the user is authenticated, else it displays Guest. Now, we need to integrate the chatbot functionality on this page. To do so, let's browse http://qnamaker.ai. This is Microsoft's QnA (as in questions and answers) Maker site, which is a free, easy-to-use, REST API and web-based service that trains artificial intelligence (AI) to respond to user questions in a more natural, conversational way. Compatible across development platforms, hosting services, and channels, QnA Maker is the only question and answer service with a graphical user interface, meaning you don't need to be a developer to train, manage, and use it for a wide range of solutions. And that is what makes it incredibly easy to use.

You will need to log in to this site with your Microsoft account (@microsoft/@live/@outlook). If you don't have one, you should create one and log in. On the very first login, the site will display a dialog seeking permission to access your email address and profile information. Click Yes and grant permission. You will then be presented with the service terms. Accept those as well. Then navigate to the Create New Service tab.
A form will appear. The form is easy to fill in and provides the option to extract the question/answer pairs from a site or from .tsv, .docx, .pdf, and .xlsx files. We don't have questions handy, so we will type them in; do not bother about these fields. Just enter the service name and click the Create button. The service should be created successfully and the knowledge base screen should be displayed.

We will enter probable questions and answers in this knowledge base. If the user types a question that resembles a question in the knowledge base, it will respond with the corresponding answer. Hence, the more questions and answers we type, the better it will perform. So, enter all the questions and answers that you wish to enter, test it in the local chatbot setup, and, once you are happy with it, click on Publish. This will publish the knowledge base and share the sample URL for making the HTTP request. Note it down in a notepad; it contains the knowledge base identifier (a GUID), the hostname, and the subscription key. With this, our questions and answers are ready and deployed.

We need to display a chat interface, pass the user-entered text to this service, and display the response from the service to the user in the chat interface. To do so, we will make use of the Microsoft Bot Builder SDK for .NET and follow these steps:

Download the Bot Application project template from http://aka.ms/bf-bc-vstemplate.
Download the Bot Controller item template from http://aka.ms/bf-bc-vscontrollertemplate.
Download the Bot Dialog item template from http://aka.ms/bf-bc-vsdialogtemplate.
Next, identify the project template and item template directories for Visual Studio 2017. The project template directory is located at %USERPROFILE%\Documents\Visual Studio 2017\Templates\ProjectTemplates\Visual C# and the item template directory is located at %USERPROFILE%\Documents\Visual Studio 2017\Templates\ItemTemplates\Visual C#.
Copy the Bot Application project template to the project template directory.
Copy the Bot Controller ZIP and Bot Dialog ZIP to the item template directory.
In the solution explorer of the LetsChat project, right-click on the solution and add a new project. Under Visual C#, we should now see a Bot Application template.
Name the project FaqBot and click OK. A new project will be created in the solution, which looks similar to the MVC project template.
Build the project, so that all the dependencies are resolved and packages are restored. If you run the project, it is already a working bot, which can be tested with the Microsoft Bot Framework emulator.
Download the BotFramework-Emulator setup executable from https://github.com/Microsoft/BotFramework-Emulator/releases/.
Let's run the bot project by hitting F5. It will display a page pointing to the default URL of http://localhost:3979. Now, open the Bot Framework emulator, navigate to the preceding URL with api/messages appended to it (that is, browse to http://localhost:3979/api/messages), and click Connect. On successful connection to the bot, a chat-like interface will be displayed in which you can type a message.

We now have a working bot in place which just returns the text along with its length. We need to modify this bot to pass the user input to our QnA Maker service and display the response returned from the service. To do so, we need to look at the code of MessagesController in the Controllers folder.
We notice that it has just one method called Post, which checks the activity type, does specific processing for the activity type, creates a response, and returns it. The calculation happens in the Dialogs.RootDialog class, which is where we need to make the modification to wire up our QnA service. The modified code is shown here:

private static string knowledgeBaseId = ConfigurationManager.AppSettings["KnowledgeBaseId"]; // Knowledge base id of QnA Service.
private static string qnamakerSubscriptionKey = ConfigurationManager.AppSettings["SubscriptionKey"]; // Subscription key.
private static string hostUrl = ConfigurationManager.AppSettings["HostUrl"];

private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<object> result)
{
    var activity = await result as Activity;

    // Return our reply to the user.
    await context.PostAsync(this.GetAnswerFromService(activity.Text));
    context.Wait(MessageReceivedAsync);
}

private string GetAnswerFromService(string inputText)
{
    // Build the QnA Service URI.
    Uri qnamakerUriBase = new Uri(hostUrl);
    var builder = new UriBuilder($"{qnamakerUriBase}/knowledgebases/{knowledgeBaseId}/generateAnswer");
    var postBody = $"{{\"question\": \"{inputText}\"}}";

    // Add the subscription key header.
    using (WebClient client = new WebClient())
    {
        client.Headers.Add("Ocp-Apim-Subscription-Key", qnamakerSubscriptionKey);
        client.Headers.Add("Content-Type", "application/json");

        try
        {
            var response = client.UploadString(builder.Uri, postBody);
            var json = JsonConvert.DeserializeObject<QnAResult>(response);
            return json?.answers?.FirstOrDefault().answer;
        }
        catch (Exception ex)
        {
            return ex.Message;
        }
    }
}

The code is pretty straightforward. First, we add the QnA Maker service subscription key, host URL, and knowledge base ID in the appSettings section of Web.config. Next, we read these app settings into static variables so that they are always available. We then modify the MessageReceivedAsync method of the dialog to pass the user input to the QnA service and return the service's response back to the user. The QnAResult class can be seen in the source code.

This can be tested in the emulator by typing in any of the questions that we have stored in our knowledge base, and we will get the appropriate response.

Our simple FAQ bot using the Microsoft Bot Framework and ASP.NET Core 2.0 is now ready! Read more about building chatbots:
How to build a basic server side chatbot using Go
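If you want to sanity-check the published QnA Maker endpoint outside of the bot, the same generateAnswer call made by the C# code above can be issued from any HTTP client. Here is a hedged Python sketch; the host URL, knowledge base ID, subscription key, and sample question are placeholders you must replace with the values noted down after publishing:

import requests

# Placeholders: substitute the values shared by the QnA Maker portal after publishing.
host_url = "<host-url-from-the-qna-maker-portal>"
knowledge_base_id = "<your-knowledge-base-id>"
subscription_key = "<your-subscription-key>"

response = requests.post(
    f"{host_url}/knowledgebases/{knowledge_base_id}/generateAnswer",
    headers={
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/json",
    },
    json={"question": "How are you?"},
)
response.raise_for_status()

# The response shape mirrors what the C# code deserializes into QnAResult.
print(response.json()["answers"][0]["answer"])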


Google Podcasts is transcribing full podcast episodes for improving search results

Bhagyashree R
28 Mar 2019
2 min read
On Tuesday, Android Police reported that Google Podcasts is automatically transcribing episodes. It is using these transcripts as metadata to help users find the podcasts they want to listen to, even if they don't know the title or when the episode was published. Though this is only coming to light now, Google had shared its plan to use transcripts to improve search results even before the app was launched. In an interview with Pacific Content, Zack Reneau-Wedeen, Google Podcasts product manager, said that Google could "transcribe the podcast and use that to understand more details about the podcast, including when they are discussing different topics in the episode."

This is not a user-facing feature but instead works in the background. You can see the transcription of these podcasts in the web page source of the Google Podcasts web portal. After getting a hint from a user, Android Police searched for "Corbin dabbing port" instead of Corbin Davenport, a writer for Android Police. Sure enough, the app's search engine showed Episode 312 of the Android Police Podcast, his podcast, as the top result.

Source: Android Police

The transcription is enabled by Google's Cloud Speech-to-Text technology. Using transcriptions of such a huge number of podcasts, Google can do things like include timestamps, index the contents, and make the text easily searchable. This will also allow Google to actually "understand" what is being discussed in a podcast without having to rely solely on the not-so-detailed notes and descriptions given by the podcasters. This could prove quite helpful if users don't remember much about a show other than a quote or an interesting subject, and it could make searching frictionless.

As a user-facing feature, this could be beneficial for both listeners and creators. "It would be great if they would surface this as feature/benefit to both the creator and the listener. It would be amazing to be able to timestamp, tag, clip, collect and share all the amazing moments I've found in podcasts over the years," said a Twitter user.

Read the full story on Android Police.

Google announces the general availability of AMP for email, faces serious backlash from users
European Union fined Google 1.49 billion euros for antitrust violations in online advertising
Google announces Stadia, a cloud-based game streaming service, at GDC 2019