Search icon CANCEL
Subscription
0
Cart icon
Your Cart (0 item)
Close icon
You have no products in your basket yet
Save more on your purchases! discount-offer-chevron-icon
Savings automatically calculated. No voucher code required.
Arrow left icon
All Products
Best Sellers
New Releases
Books
Videos
Audiobooks
Learning Hub
Newsletter Hub
Free Learning
Arrow right icon
timer SALE ENDS IN
0 Days
:
00 Hours
:
00 Minutes
:
00 Seconds

Tech Guides

851 Articles
article-image-react-vs-vue-javascript-framework-wars
Amarabha Banerjee
17 Jul 2018
4 min read
Save for later

React vs. Vue: JavaScript framework wars

Amarabha Banerjee
17 Jul 2018
4 min read
Before I begin, one thing that needs to be established, is we can’t for a second ignore the fact that we are going to compare two different JavaScript frameworks - React and Vue. Both frameworks are clearly different in terms of popularity and usage. While ReactJS is a JavaScript library, great for building huge web applications where data can be updated on a regular basis. Vue.js is a JavaScript framework, fit for creating highly adaptable user interfaces and sophisticated Single-page applications. On the other hand, we all have mostly come across is how React and Vue are similar in their fundamental approach. They both have a virtual DOM based approach; they both have component based structure and are Reactive in their architecture. They both tend to work around a root library with all the other tasks transferred to other libraries. Having said that, as per npm trends report, React stands well ahead in terms of monthly downloads at 2.4 million whereas Vue stands joint second with Angular at around 239k downloads. Now that we have established the popularity of front end web development frameworks, it’s time to talk about what works and what doesn’t in React and Vue.js. Comparing React.js and Vue.js Template vs JSX While everything in React is a JavaScript code written in JSX syntax, Vue depends majorly on its templates which are HTML5 and CSS3 based. Now if you are a front end developer how can this possibly affect you? Well it depends on your choice of working methodology. If you want to write code on your own and control every aspect of your application, then the React way will be much more suitable for you. But if you want to start working on a readymade template and then add features as you go then Vue should be your best choice. React is a better framework if you want to scale up Make no mistake about this. The size and scalability of your application plays a determining role on your choice of framework. The fact that React gives you more control over your application architecture is the single most reason why it is easier in React to scale up your application. But since Vue is so much dependent inherently on the templating structure, it becomes tough when to build an industrial grade application made with Vue as changing the template can be difficult. It's easier to update data with Vue than React Updating data on Vue is much simpler. The middle stage of transpiling is not needed in Vue as it directly renders into the browser and hence the process is faster. Whereas in React, the data is analyzed, then stored, then the Document Object Model (DOM) is invoked and thereafter the change takes place, which is a time consuming process. React has a bigger community than Vue React is backed by Facebook. There are many similar libraries like React such as Preact which render support to React making it a  larger community than Vue. And with larger communities developers can expect faster resolution of issues and regular community support with timely updates. Building for mobile with React and Vue The capabilities of all modern day frameworks are often judged in terms of how they allow developers build for mobile. React has a world class companion in this domain: React Native. React Native is very similar to React in terms of its component structure, and is a fairly short learning curve for anyone already using React. 
The introduction of Vue Native, which offers a way of writing mobile applications with NativeScript using Vue, has made it easier for mobile developers to use Vue mobile development. Chinese tech company Alibaba has also created a cross platform framework called Weex. Weex has support for Vue, and while it doesn't yet have the capabilities of React Native, it could be a mobile framework to watch. Which is better? React or Vue? To summarize, there are different aspects of Vue and React which are useful and developer friendly. However if you intend to judge them on your own, you are better off assessing what your development needs are first? How big an issue scalability is going to be? Would you be needing something for the mobile platform too? Once you have figured out these questions, the choice should be easy. Read Next: What is React.js and how does it work Is React Native really a Native framework Using React Router for Client Side Redirecting
Read more
  • 0
  • 2
  • 7352

article-image-angularjs-nodejs-and-firebase-startup-web-developers-toolkit
Erik Kappelman
09 Mar 2016
7 min read
Save for later

Angular.js, Node.js, and Firebase: the startup web developer's toolkit

Erik Kappelman
09 Mar 2016
7 min read
So, you’ve started a web company. You’ve even attracted one or two solid clients. But now you have to produce, and you have to produce fast. If you’ve been in this situation, then we have something in common. This is where I found myself a few months ago. A caveat: I am a self-taught web developer in an absolute sense. Self-taught or not, in August of 2015 I found myself charged with creating a fully functional blogging app for an author. Needless to say, I was in over my head. I was aware of Node.js, because that had been the backend for the very simple static content site my company had produced first. I was aware of database concepts and I did know a reasonable amount of JavaScript, but I felt ill prepared to pull of these tools together in a cohesive fashion. Luckily for me it was 2015 and not 1998. Today, web developers are blessed with tools that make the development of websites and web apps a breeze. After some research, I decided to use Angular.js to control the frontend behavior of the website, Node.js with Express.js as the backend, and Firebase to hold the data. Let me walk you through the steps I used to get started. First of all, if you aren’t using Express.js on top of Node.js for your backend in development you should start. Node.js was written in C, C++ and JavaScript by Ryan Dahl in 2009. This multiplatform runtime environment for JavaScript is fast, open-source, and easy to learn, because odds are you already know JavaScript. Using Express.js and the Express-Generator in congress with Node.js makes development quite simple. Express.js is Node middleware. In simple terms, Express.js makes your life a whole lot easier by doing most of the work for you. So, let’s build our backend. First, install Node.js and NPM on your system. There are a variety of online resources to complete this step. Then, using NPM, install the Express application generator. $ npm install express-generator -g Once we have Node.js and the Express generator installed, get to your development folder and execute the following commands to build the skeleton of your web app’s backend: $ express app-name –e I use the –e flag to set the middleware to use ejs files instead of the default jade files. I prefer ejs to jade but you might not. This command will produce a subdirectory called app-name in your current directory. If you navigate into this directory and type the commands $ npm install $ npm start and then navigate in a browser to http://localhost:3000 you will see the basic welcome page auto generated by Express. There are thousands of great things about Node.js and Express.js and I will leave them to be discovered by you as you continue to use these tools. Right now, we are going to get Firebase connected to our server. This can serve as general instructions for installing and using Node modules as well. Head over to firebase.com and create a free account. If you end up using Firebase for a commercial app you will probably want to upgrade to a paid account, but for now the starter account should be fine. Once you get your Firebase account setup, create a Firebase instance using their online interface. Once this is done get back to your backend code to connect the Firebase to your server. First install the Firebase Node module. $ npm install firebase --save Make sure to use the --save flag because this puts a new line in your packages.json file located in the root of the web server. 
This means that if you type npm install, as you did earlier, NPM will know that you added firebase to your webserver and install it if it is not already present. Now open the index.js file in the routes folder in the root of your Node app. At the top of this folder, put in the line var Firebase = require(‘firebase’); This pulls the Firebase module you installed into your code. Then to create a connection to the account you just created on Firebase[MA1]  ,put in the following line of code: var FirebaseRef = new Firebase("https://APP-NAME.firebaseio.com/"); Now, to take a snapshot in JSON of your Firebase and store it in an object, include the following lines var FirebaseContent = {}; FirebaseRef.on("value", function(snapshot) { FirebaseContent = snapshot.val(); }, function(errorObject) { console.log("The read failed: " + errorObject.code); }); FirebaseContent is now a JavaScript object containing your complete Firebase data. Now let’s get Angular.js hooked up to the frontend of the website and then it’s time for you to start developing. Head over to angularjs.org and download the source code or get the CDN. We will be using the CDN. Open the file index.ejs in the views directory in your Node app’s root. Modify the <head> tag, adding the CDN. <head> <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.0-rc.1/angular.min.js"></script> </head> This allows you to use the Angular.js tools. Angular.js uses controllers to control your app. Let’s make your angular app and connect a controller. Create a file called myapp.js in your public/javascripts directory. In myapp.js include the following angular.module(“myApp”,[]); This file will grow but for now this is all you need. Now create a file in the same directory called myController.js and put this code into it. Angular.module(“myApp”).controller(‘myController’,['$scope',function($scope){ $scope.jsVariable = 'Controller is working'; }]) Now modify the index.ejs file again. <html ng-app=“myApp”> <head> <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.0-rc.1/angular.min.js"></script> <script src="/javascripts/myApp.js"></script> <script src="/javascripts/myController.js"></script> </head> <body ng-controller= “myController”> <h1> {{jsVariable}} </h1> </body> </html> If you start your app again and go back to http://localhost:3000 you should see that your controller is now controlling the contents of the first heading. This is just a basic setup and there is much more you will learn along the way. Speaking from experience, taking the time to learn these tools and put them into use will make your development faster and easier and your results will be of much higher quality. About the Author Erik was born and raised in Missoula, Montana. He attended the University of Montana and received a degree in economics. Along the way, he discovered the value of technology in our modern world, especially to businesses. He is currently seeking a master's degree in economics from the University of Montana and his research focuses on the economics of gender empowerment in the developing world. During his professional life, he has worn many hats – from cashier, to barista, to community organizer. He feels that Montana offers a unique business environment that is often best understood by local businesses. He started his company, Duplovici, with his friends in an effort to meet the unique needs of Montana businesses, non-profits, and individuals. 
He believes technology is not simply an answer to profit maximization for businesses: by using internet technologies we can unleash human creativity through collective action and the arts, as well as business ventures. He is also the proud father of two girls and has a special place in his heart for getting girls involved with technology and business.
Read more
  • 0
  • 1
  • 7306

article-image-why-metadata-important-iot
Raka Mahesa
24 Jan 2018
4 min read
Save for later

Why Metadata is so important for IoT

Raka Mahesa
24 Jan 2018
4 min read
The Internet of Things is growing all the time. However, as IoT takes over the world, there are more and more aspects of it that needs to be addressed, such as security and standardization. It might not be ideal to live in a wild west where everything is connected but there are no guidelines or rules for how to manage and analyze these networks. A crucial part of all this is metadata – as data grows in size, the way we label, categorize and describe it will become more important than ever. Find our latest and forthcoming IoT eBooks and videos here.  We probably shouldn’t be that surprised – if IoT is all about connecting things that wasn’t previously connected – traffic lights, lamps, car parts – good metadata allows us to make sure those connections remain clear and legible. It helps to ensure that things are working properly. A system without definitions, without words and labels, would, after all, get chaotic pretty quickly. Metadata makes it easier to organize IoT data If metadata is, quite simply, data about data, it’s not hard to see why it might be so important when dealing with the expanse of data that is about to be generated thanks to the internet of things. While IoT will clearly largely run on data – information and messages passing between objects, moving within a given system, metadata is incredibly useful in this new world because it allows us to better understand the systems that we are developing. And what’s more, once we have that level of insight, we can begin to do more to further improve and optimize IoT systems using machine learning and artificial intelligence.  Consider how metadata organizes your media library – it would be a mess without it, practically unusable. When you scale that up, we’ll be able to make much smarter use of IoT. Without it, we might well be lost in a chaotic mess of connections.  Metadata, then, allows us to organize and catalog data.  Metadata solves IoT's interoperability problem Metadata can also help with the biggest problem of IoT: interoperability. Interoperability refers to the ability for one device to communicate and exchange data with another device. And this is really important in the context of the Internet of Things, because having great interoperability means more devices can connect with each other.  How does metadata solve interoperability? Well, by using metadata, a device can quickly identify a new device that tries to connect to it by looking at its model number, device class, and other attributes. Once the new device has been identified, our device can find a suitable communication protocol that's supported by both devices to exchange data. Metadata can also be added on the exchanged data, so both devices can read and process the data correctly, just like adding image format metadata allows any application to display that image. Metadata helps to protect legacy hardware and software There's another aspect that metadata can help with. The Internet of Things is an evolving technology where new products are introduced every day, and bring along with them changes and innovations. But what happens to the old products that have been replaced by new ones? With metadata, we can archive and protect the future accessibility of our devices, making sure that new devices can still communicate with older, legacy devices.  That's why metadata is important for the Internet of Things. There are many benefits that can be gained by having a robust system of metadata in the Internet of Things. 
And as the Internet of Things grows and is used to manage more crucial aspects of our lives, the need for this system will also grow. Raka Mahesa is a game developer at Chocoarts who is interested in digital technology. Outside of work, he enjoys working on his own projects, with Corridoom VR being his latest relesed gme. Raka also regularly tweets @legacy99.
Read more
  • 0
  • 0
  • 7274

article-image-operator-overloading-techniques-in-kotlin-you-need-to-know
Aaron Lazar
21 Jun 2018
6 min read
Save for later

4 operator overloading techniques in Kotlin you need to know

Aaron Lazar
21 Jun 2018
6 min read
Operator overloading is a form of polymorphism. Some operators change behaviors on different types. The classic example is the operator plus (+). On numeric values, plus is a sum operation and on String is a concatenation. Operator overloading is a useful tool to provide your API with a natural surface. Let's say that we're writing a Time and Date library; it'll be natural to have the plus and minus operators defined on time units.  In this article, we'll understand how Operator Overloading works in Kotlin. This article has been extracted from the book, Functional Kotlin, by Mario Arias and Rivu Chakraborty.  Kotlin lets you define the behavior of operators on your own or existing types with functions, normal or extension, marked with the operator modifier: class Wolf(val name:String) { operator fun plus(wolf: Wolf) = Pack(mapOf(name to this, wolf.name to wolf)) } class Pack(val members:Map<String, Wolf>) fun main(args: Array<String>) { val talbot = Wolf("Talbot") val northPack: Pack = talbot + Wolf("Big Bertha") // talbot.plus(Wolf("...")) } The operator function plus returns a Pack value. To invoke it, you can use the infix operator way (Wolf + Wolf) or the normal way (Wolf.plus(Wolf)). Something to be aware of about operator overloading in Kotlin—the operators that you can override in Kotlin are limited; you can't create arbitrary operators. Binary operators Binary operators receive a parameter (there are exceptions to this rule—invoke and indexed access). The Pack.plus extension function receives a Wolf parameter and returns a new Pack. Note that MutableMap also has a plus (+) operator: operator fun Pack.plus(wolf: Wolf) = Pack(this.members.toMutableMap() + (wolf.name to wolf)) val biggerPack = northPack + Wolf("Bad Wolf") The following table will show you all the possible binary operators that can be overloaded: Operator Equivalent Notes x + y x.plus(y) x - y x.minus(y) x * y x.times(y) x / y x.div(y) x % y x.rem(y) From Kotlin 1.1, previously mod. x..y x.rangeTo(y) x in y y.contains(x) x !in y !y.contains(x) x += y x.plussAssign(y) Must return Unit. x -= y x.minusAssign(y) Must return Unit. x *= y x.timesAssign(y) Must return Unit. x /= y x.divAssign(y) Must return Unit. x %= y x.remAssign(y) From Kotlin 1.1, previously modAssign. Must return Unit. x == y x?.equals(y) ?: (y === null) Checks for null. x != y !(x?.equals(y) ?: (y === null)) Checks for null. x < y x.compareTo(y) < 0 Must return Int. x > y x.compareTo(y) > 0 Must return Int. x <= y x.compareTo(y) <= 0 Must return Int. x >= y x.compareTo(y) >= 0 Must return Int. Invoke When we introduce lambda functions, we show the definition of Function1: /** A function that takes 1 argument. */ public interface Function1<in P1, out R> : Function<R> { /** Invokes the function with the specified argument. */ public operator fun invoke(p1: P1): R } The invoke function is an operator, a curious one. The invoke operator can be called without name. The class Wolf has an invoke operator: enum class WolfActions { SLEEP, WALK, BITE } class Wolf(val name:String) { operator fun invoke(action: WolfActions) = when (action) { WolfActions.SLEEP -> "$name is sleeping" WolfActions.WALK -> "$name is walking" WolfActions.BITE -> "$name is biting" } } fun main(args: Array<String>) { val talbot = Wolf("Talbot") talbot(WolfActions.SLEEP) // talbot.invoke(WolfActions.SLEEP) } That's why we can call a lambda function directly with parenthesis; we are, indeed, calling the invoke operator. 
The following table will show you different declarations of invoke with a number of different arguments: Operator Equivalent Notes x() x.invoke() x(y) x.invoke(y) x(y1, y2) x.invoke(y1, y2) x(y1, y2..., yN) x.invoke(y1, y2..., yN) Indexed access The indexed access operator is the array read and write operations with square brackets ([]), that is used on languages with C-like syntax. In Kotlin, we use the get operators for reading and set for writing. With the Pack.get operator, we can use Pack as an array: operator fun Pack.get(name: String) = members[name]!! val badWolf = biggerPack["Bad Wolf"] Most of Kotlin data structures have a definition of the get operator, in this case, the Map<K, V> returns a V?. The following table will show you different declarations of get with a different number of arguments: Operator Equivalent Notes x[y] x.get(y) x[y1, y2] x.get(y1, y2) x[y1, y2..., yN] x.get(y1, y2..., yN) The set operator has similar syntax: enum class WolfRelationships { FRIEND, SIBLING, ENEMY, PARTNER } operator fun Wolf.set(relationship: WolfRelationships, wolf: Wolf) { println("${wolf.name} is my new $relationship") } talbot[WolfRelationships.ENEMY] = badWolf The operators get and set can have any arbitrary code, but it is a very well-known and old convention that indexed access is used for reading and writing. When you write these operators (and by the way, all the other operators too), use the principle of least surprise. Limiting the operators to their natural meaning on a specific domain, makes them easier to use and read in the long run. The following table will show you different declarations of set with a different number of arguments: Operator Equivalent Notes x[y] = z x.set(y, z) Return value is ignored x[y1, y2] = z x.set(y1, y2, z) Return value is ignored x[y1, y2..., yN] = z x.set(y1, y2..., yN, z) Return value is ignored Unary operators Unary operators don't have parameters and act directly in the dispatcher. We can add a not operator to the Wolf class: operator fun Wolf.not() = "$name is angry!!!" !talbot // talbot.not() The following table will show you all the possible unary operators that can be overloaded: Operator Equivalent Notes +x x.unaryPlus() -x x.unaryMinus() !x x.not() x++ x.inc() Postfix, it must be a call on a var, should return a compatible type with the dispatcher type, shouldn't mutate the dispatcher. x-- x.dec() Postfix, it must be a call on a var, should return a compatible type with the dispatcher type, shouldn't mutate the dispatcher. ++x x.inc() Prefix, it must be a call on a var, should return a compatible type with the dispatcher type, shouldn't mutate the dispatcher. --x x.dec() Prefix, it must be a call on a var, should return a compatible type with the dispatcher type, shouldn't mutate the dispatcher. Postfix (increment and decrement) returns the original value and then changes the variable with the operator returned value. Prefix returns the operator's returned value and then changes the variable with that value. Now you know how Operator Overloading works in Kotlin. If you found this article interesting and would like to read more, head on over to get the whole book, Functional Kotlin, by Mario Arias and Rivu Chakraborty. Extension functions in Kotlin: everything you need to know Building RESTful web services with Kotlin Building chat application with Kotlin using Node.js, the powerful Server-side JavaScript platform
Read more
  • 0
  • 0
  • 7270

article-image-are-distributed-networks-decentralized-systems-same
Amarabha Banerjee
31 Jul 2018
3 min read
Save for later

Are distributed networks and decentralized systems the same?

Amarabha Banerjee
31 Jul 2018
3 min read
The emergence of Blockchain has paved way for the implementation of non-centralized network architecture. The seeds of distributed network architecture was sown back in 1964, by Paul Baran in his paper “On Distributed Networks“. Since then, there have been many attempts at implementing this architecture in network systems. The most recent implementation has been aided by the discovery of Blockchain technology, in 2009, by the anonymous Satoshi Nakamoto. Terminologies: Network: A collection of interlinked nodes that exchange information. Node: The most basic part of the network; for example, a user or computer. Link: The connection between two nodes. Server: A node that has connections to a relatively large amount of other nodes The mother of all the network architectures is a centralized Network. Here, the primary decision making control rests with one Node. The message is carried on by Sub Nodes. Distributed networks are in a way, a conglomeration of small centralized networks. It consists of multiple Nodes which themselves are miniature versions of centralized networks. Decentralized networks consist of individual nodes and every one of these Nodes are capable of independent decision making. Hence there is no central control in Decentralized networks. Source: Meshworld A common misconception is that Distributed and Decentralized systems are the same; they just have different nomenclature and slightly different functionalities. In a way this is true, but not completely. A distributed system still has one central control algorithm, that takes the final call over the process and protocol to be followed. As an example, let’s consider distributed digital ledgers. Each of these ledgers are independent network nodes. These ledgers get updated with individual transactions and other details and this information is not passed on to the other Nodes. This particular feature makes this system secure and decentralized. The other Nodes are not aware of the information stored. This is how a decentralized network behaves. Now the same system behaves a bit differently, when the Nodes communicate with each other, (in-case of Ethereum, by the movement of “Ether” between nodes). Although the individual Node’s information stays secure, the information about the state of the Nodes is passed on, finally to a central peer. Then the peer takes on the decision of which state to finally change to and what is the optimum process to change the state. This is decided by the votes of the individual nodes. The Nodes then change to the new state, preserving information about the previous state. This makes the system dynamic, secure and distributed, because although the Nodes get to vote based on their individual states, the final decision is taken by the centralized peer. This is a distributed system. Hence we can clearly state that decentralized systems are a subset of distributed systems, more independent and minus any central controlling authority. This presents a familiar question to us - are the current blockchain based apps purely decentralized, or are they just distributed systems with a central control? Could this be the reason why we have not yet reached the ultimate Cryptocurrency based alternative economy? Put differently, is the invisible central control hindering the evolution of blockchain based systems to a purely decentralized system and economy? Only time and more dedicated research, along with better practical implementation of decentralized applications will answer these questions. 
A brief history of Blockchain Blockchain can solve tech’s trust issues – Imran Bashir Partnership alliances of Kontakt.io and IOTA Foundation for IoT and Blockchain
Read more
  • 0
  • 0
  • 7266

article-image-how-ai-is-going-to-transform-the-data-center
Melisha Dsouza
13 Sep 2018
7 min read
Save for later

How AI is going to transform the Data Center

Melisha Dsouza
13 Sep 2018
7 min read
According to Gartner analyst Dave Cappuccio, 80% of enterprises will have shut down their traditional data centers by 2025, compared to just 10% today. The figures are fitting considering the host of problems faced by traditional data centers. The solution to all these problems lies right in front of us- Incorporating Intelligence in traditional data centers. To support this claim, Gartner also predicts that by 2020, more than 30 percent of data centers that fail to implement AI and Machine Learning will cease to be operationally and economically viable. Across the globe, Data science and AI are influencing the design and development of the modern data centers. With the surge in the amount of data everyday, traditional data centers will eventually get slow and result in an inefficient output. Utilizing AI in ingenious ways, data center operators can drive efficiencies up and costs down. A fitting example of this is the tier-two automated control system implemented at Google to cool its data centers autonomously. The system makes all the cooling-plant tweaks on its own, continuously, in real-time- thus saving up to 30% of the plant’s energy annually. Source: DataCenter Knowledge AI has enabled data center operators to add more workloads on the same physical silicon architecture. They can aggregate and analyze data quickly and generate productive outputs, which is specifically beneficial to companies that deal with immense amounts of data like hospitals, genomic systems, airports, and media companies. How is AI facilitating data centers Let's look at some of the ways that Intelligent data centers will serve as a solution to issues faced by traditionally operated data centers. #1 Energy Efficiency The Delta Airlines data center outage in 2016, that was attributed to electrical-system failure over a three day period, cost the airlines around $150 million, grounding about 2,000 flights. This situation could have been easily averted had the data centers used Machine Learning for their workings. As data centers get ever bigger, more complex and increasingly connected to the cloud, artificial intelligence is becoming an essential tool for keeping things from overheating and saving power at the same time. According to the Energy Department’s U.S. Data Center Energy Usage Report, the power usage of data centers in the United States has grown at about a 4 percent rate annually since 2010 and is expected to hit 73 billion kilowatt-hours by 2020, more than 1.8 percent of the country’s total electricity use. Data centers also contribute about 2 percent of the world’s greenhouse gas emissions, AI techniques can do a lot to make processes more efficient, more secure and less expensive. One of the keys to better efficiency is keeping things cool, a necessity in any area of computing. Google and DeepMind (Alphabet Inc.’s AI division)use of AI to directly control its data center has reduced energy use for cooling by about 30 percent. #2 Server optimization Data centers have to maintain physical servers and storage equipment. AI-based predictive analysis can help data centers distribute workloads across the many servers in the firm. Data center loads can become more predictable and more easily manageable. Latest load balancing tools with built-in AI capabilities are able to learn from past data and run load distribution more efficiently. Companies will be able to better track server performance, disk utilization, and network congestions. 
Optimizing server storage systems, finding possible fault points in the system, improve processing times and reducing risk factors will become faster. These will in turn facilitate maximum possible server optimization. #3 Failure prediction / troubleshooting Unplanned downtime in a datacenter can lead to money loss. Datacenter operators need to quickly identify the root case of the failure, so they can prioritize troubleshooting and get the datacenter up and running before any data loss or business impact take place. Self managing datacenters make use of AI based deep learning (DL) applications to predict failures ahead of time. Using ML based recommendation systems, appropriate fixes can be inferred upon the system in time. Take for instance the HPE artificial intelligence predictive engine that identifies and solves trouble in the data center. Signatures are built to identify other users that might be affected. Rules are then developed to instigate a solution, which can be automated. The AI-machine learning solution, can quickly interject through the entire system and stop others from inheriting the same issue. #4 Intelligent Monitoring and storing of Data Incorporating machine learning, AI can take over the mundane job of monitoring huge amounts of data and make IT professionals more efficient in terms of the quality of tasks they handle. Litbit has developed the first AI-powered, data center operator, Dac. It uses a human-to-machine learning interface that combines existing human employee knowledge with real-time data. Incorporating over 11,000 pieces of innate knowledge, Dac has the potential to hear when a machine is close to failing, feel vibration patterns that are bad for HDD I/O, and spot intruders. Dac is proof of how AI can help monitor networks efficiently. Along with monitoring of data, it is also necessary to be able to store vast amounts of data securely. AI holds the potential to make more intelligent decisions on - storage optimization or tiering. This will help transform storage management by learning IO patterns and data lifecycles, helping storage solutions etc. Mixed views on the future of AI in data centers? Let’s face the truth, the complexity that comes with huge masses of data is often difficult to handle. Humans ar not as scalable as an automated solution to handle data with precision and efficiency. Take Cisco’s M5 Unified Computing or HPE’s InfoSight as examples. They are trying to alleviate the fact that humans are increasingly unable to deal with the complexity of a modern data center. One of the consequences of using automated systems is that there is always a possibility of humans losing their jobs and being replaced by machines at varying degrees depending on the nature of job roles. AI is predicted to open its doors to robots and automated machines that will soon perform repetitive tasks in the datacenters. On the bright side, organizations could allow employees, freed from repetitive and mundane tasks, to invest their time in more productive, and creative aspects of running a data center. In addition to new jobs, the capital involved in setting up and maintaining a data center is huge. Now add AI to the Datacenter and you have to invest double or maybe triple the amount of money to keep everything running smoothly. Managing and storing all of the operational log data for analysis also comes with its own set of issues. The log data that acts as the input to these ML systems becomes a larger data set than the application data itself. 
Hence firms need a proper plan in place to manage all of this data. Embracing AI in data centers would mean greater financial benefits from the outset while attracting more customers. It would be interesting to see the turnout of tech companies following Google’s footsteps and implementing AI in their data centers. Tech companies should definitely watch this space to take their data center operations up a notch. 5 ways artificial intelligence is upgrading software engineering Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018 15 millions jobs in Britain at stake with Artificial Intelligence robots set to replace humans at workforce
Read more
  • 0
  • 0
  • 7249
Unlock access to the largest independent learning library in Tech for FREE!
Get unlimited access to 7500+ expert-authored eBooks and video courses covering every tech area you can think of.
Renews at $15.99/month. Cancel anytime
article-image-amazon-patents-2018-machine-learning-ar-robotics
Natasha Mathur
06 Aug 2018
7 min read
Save for later

Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics

Natasha Mathur
06 Aug 2018
7 min read
"There are two kinds of companies, those that work to try to charge more and those that work to charge less. We will be the second."-- Jeff Bezos, CEO Amazon When Jeff Bezos launched Amazon.com in 1994, it was an online bookselling site. This was during a time when bookstores such as Barnes & Noble, Waldenbooks and Crown Books were the leading front runners in the bookstore industry in the American shopping malls. Today, Amazon’s name has become almost synonymous with online retail for most people and has now spread its wings to cloud computing, electronics, tech gadgets and the entertainment world. With market capitalization worth $897.47B as of August 3rd 2018, it’s hard to believe that there was a time when Amazon sold only books. Amazon is constantly pushing to innovate and as new inventions come to shape, there are “patents” made that helps the company have a competitive advantage over technologies and products in order to attract more customers. [box type="shadow" align="" class="" width=""]According to United States Patent and Trademark Office (USPTO), Patent is an exclusive right to invention and “the right to exclude others from making, using, offering for sale, or selling the invention in the United States or “importing” the invention into the United States”.[/box] As of March 20, 2018, Amazon owned 7,717 US patents filed under two business entities, Amazon Technologies, Inc. (7,679), and Amazon.com, Inc (38). Looking at the chart below, you can tell that Amazon Technologies, Inc., was one among the top 15 companies in terms of number of patents granted in 2017. Top 15 companies, by number of patents granted by USPTO, 2017 Amazon competes closely with the world’s leading tech giants in terms of patenting technologies. The below table only considers US patents. Here, Amazon holds only few US patents than IBM, Microsoft, Google, and Apple.  Number of US Patents Containing Emerging-Technology Keywords in Patent Description Some successfully patented Amazon innovations in 2018 There are thousands of inventions that Amazon is tied up with and for which they have filed for patents. These include employee surveillance AR goggles, a real-time accent translator, robotic arms tossing warehouse items,  one-click buying, drones,etc. Let’s have a look at these remarkable innovations by Amazon. AR goggles for improving human-driven fulfillment (or is it to track employees?) Date of Patent: August 2, 2018 Filed: March 20, 2017 Assignee: Amazon Technologies, Inc.   AR Goggles                                                          Features: Amazon has recently patented a pair of augmented reality goggles that could be used to keep track of its employees.The patent is titled “Augmented Reality User interface facilitating fulfillment.” As per the patent application, the application is a wearable computing device such as augmented reality glasses that are worn on user’s head. The user interface is rendered upon one or more lenses of the augmented reality glasses and it helps to show the workers where to place objects in Amazon's fulfillment centers. There’s also a feature in the AR glasses which provides workers with turn-by-turn directions to the destination within the fulfillment centre. This helps them easily locate the destination as all the related information gets rendered on the lenses.    AR Goggles  steps The patent has received criticism over concerns that this application might hamper the privacy of employees within the warehouses, tracking employees’ every single move. 
However, Amazon has defended the application by saying that it has got nothing to do with “employee surveillance”. As this is a patent, there’s no guarantee if it will actually hit the market. Robotic arms that toss warehouse items Date of Patent: July 17, 2018 Filed: September 29, 2015 Assignee: Amazon Technologies, Inc. Features: Amazon won a patent titled “Robotic tossing of items in inventory system” last month. As per the patent application, “Robotic arms or manipulators can be used to toss inventory items within an inventory system. Tossing strategies for the robotic arms may include information about how a grasped item is to be moved and released by a robotic arm to achieve a trajectory for moving the item to a receiving location”.  Robotic Arms Utilizing a robotic arm to toss an item to a receiving location can help improve throughput through the inventory system. This is possible as the robotic arms will help with reducing the amount of time that may otherwise be spent on placing a grasped item directly onto a surface for receiving the item. “The tossing strategy may be based at least in part upon a database containing information about the item, characteristics of the item, and/or similar items, such as information indicating tossing strategies that have been successful or unsuccessful for such items in the past,” the patent reads.  Robotic Arms Steps Amazon’s aim with this is to eliminate the challenges faced by modern inventory systems like supply chain distribution centers, airport luggage systems, etc, while responding to requests for inventory items. The patent received criticism over the concern that one of the examples in the application was a dwarf figurine and could possibly mock people of short stature. But, according to Amazon, “The intention was simply to illustrate a robotic arm moving products, and it should not be taken out of context.” Real-time accent translator Date of Patent: June 21, 2018 Filed: December 21, 2016 Assignee: Amazon Technologies, Inc. Features: Amazon won a patent for an audio system application, titled “Accent translation” back in June this year, which will help with translating the accent of the speaker to the listener’s accent. The aim with this app is to get rid of the possible communication barriers which may arise due to different accents as they can be difficult to understand at times. Accent translation system The accent translation system collects a number of audio samples from different sources such as phone call, television, movies, broadcasts, etc. Each audio sample will have its association with at least one of the accent sample sets present in its database.  For instance, german accent will be associated with the german accent sample set.   Accent translation system steps In a two-party dialog, acquired audio is analyzed and if it associates with one among a wide range of saved accents then the audio from both the sides is outputted based on the accent of the opposite party. The possibilities with this application are endless. One major use case is the customer care industry where people have to constantly talk to different people with different accents. Drone that uses Human gestures and voice commands Date of Patent: March 20, 2018 Filed: July 18, 2016 Assignee: Amazon Technologies, Inc. Features: Amazon patented for a drone, titled “Human interaction with unmanned aerial vehicles”, earlier this year, that would use human gestures and voice commands for package delivery. 
Amazon Drone makes use of propulsion technology which will help with managing the speed, trajectory, and direction of the drone.   Drones As per the patent application, “an unmanned aerial vehicle is provided which includes propulsion device, sensor device and a management system. The management system is configured to receive human gestures via the sensor device and in response, instruct the propulsion device to affect and adjustment to the behavior of the unnamed aerial vehicle. Human gestures include-- visible gestures, audible gestures, and other gestures capable of recognition by the unmanned vehicle”. Working structure of drones The concept for drones started when Amazon CEO, Jeff Bezos, promised, back in 2013, that the company aims to make 30-minute deliveries, of packages up to 2.25 kgs or 5 pounds. Amazon’s patents are a clear indication of its efforts and determination for inventing cutting-edge technologies for optimizing its operations so that it can pass on the benefits to its customers in the form of competitively priced product offerings. As Amazon has been putting its focus on machine learning, the drones and robotic arms will make the day-to-day tasks of the facility workers easier and more efficient. In fact, Amazon has stepped up its game big time and is incorporating Augmented reality, with its AR glasses to further scale efficiencies. The real-time accent translators help eliminate the communication barriers, making Amazon cover a wide range of areas and perhaps provide a seamless customer care experience in the coming days. Amazon Echo vs Google Home: Next-gen IoT war Amazon is selling facial recognition technology to police  
Read more
  • 0
  • 3
  • 7198

article-image-python-software-foundation-jetbrains-python-developer-survey
Aaron Lazar
24 May 2018
7 min read
Save for later

What the Python Software Foundation & Jetbrains 2017 Python Developer Survey had to reveal

Aaron Lazar
24 May 2018
7 min read
Python Software Foundation together with Jetbrains conduct their developer survey every year and at the end of 2017, over 9,500 developers from all over the world participated in this insightful Python developer survey. The learnings are pretty interesting and so, I thought I’d quickly summarise the most relevant points here. So here we go. TL;DR: Adoption of Python 3 is growing rapidly, but 25% of developers are yet to migrate to Python 3 despite the closely looming deadline (1st of Jan, 2020). There are as many Python developers doing primarily data science as there are those focused on web development. This is quite different from the 2016 findings, where there was a difference of 17% between Web and Data. A majority of Python developers use JavaScript, HTML/CSS and SQL along with Python. Django, Pandas, NumPy and Matplotlib are the most popular python frameworks. Jupyter Notebook and Docker are the most popular technologies used along with Python. Among cloud platforms, AWS is the most popular. Both editions of PyCharm (Community and Professional) are the most popular tools for Python development, followed by SublimeText. Code autocompletion, code refactorings and writing unit tests are the most widely used features in Python. More than half the respondents had a full-time job, working on Python. A majority of respondents held the role of a Developer/Programmer and belonged to the age group 21-39. Most of the developers are located in the US, India and China. The above stats are quite interesting no doubt. This got me thinking about the why behind those numbers. Here’s my perspective on some of those findings. I would love to hear yours in the comments section below. How is Python being used? Starting with the usage of Python, the survey revealed that close to 80% of the respondents used Python as their primary language for development. When asked which languages they generally used it with, the top responses were JavaScript, HTML/CSS and SQL. On the other hand, a lot of Java developers and those using Bash/shell, seem to use Python as their secondary language. This shows that Python is quite interoperable with a number of other languages, making it versatile to use in web, enterprise and server side scripting. Now when it comes to what tasks Python is used for in day to day development, it wasn’t a surprise when respondents mentioned Data Analysis. More than 50% use Python as their primary language for data analysis, however, only 32% claimed that they used it for Machine Learning. On the other hand, 54% mentioned that they used it for web development. 36% responded that they used Python for DevOps and system administration purposes. This isn’t surprising as most developers tend to stick to a particular tool/stack as far as possible. Developers also responded that they used Python the most for Web Development, apart from anything else, with Machine Learning + Data Analysis close on its heels. Most DevOps and Sys admins use Python as their secondary language - that might be because shell/bash are their primary languages. In the 2016 survey, the percentage of web developers was much more than ML/Data Analysts, but the difference has reduced greatly. What roles do these developers hold? When asked what roles these developers hold, the responses were quite interesting! While nearly a quarter were in a combination of Data Analysis and Machine Learning roles, another quarter were in a combination of Data Analysis and Web Development! 
15% claimed to be in Web Development and Machine Learning. This relationship, although quite unlikely, is extremely interesting and worth exploring further. One reason could be that developers are building machine learning solutions that are offered to customers as a web application, rather than as a desktop application. Another reason could also be that a lot of web apps these days are becoming more data driven and require some kind of machine learning components running under the hood. What version of Python are developers rolling with and what tools do they use it with? A very surprising fact that surfaced from the survey was that 25% of developers still haven’t migrated their codebases to Python 3 and are still working with Python 2. This is quite astonishing, since the support for Python 2 will be discontinued in less than two years (from Jan 1, 2020 to be precise). Although, the adoption for Python 3 has been growing steadily over the years, most of the developers who were still using Python 2 turned out to be web developers. This is so because data scientists might have moved into using Python quite recently, as compared to web developers who might have been using Python for a long time and hence, haven’t migrated their legacy code. What are their prefered tool set with Python? When asked about the tools that developers used, the web developers responded that a majority of them used Django(76%), while 53% used Requests and 49% used Flask. When it came to GUI frameworks, 9% of developers used PyQT / PyGTK / wxPython while 6% used TkInter. 29% of these developers mentioned that they used scientific libraries like NumPy / pandas / Matplotlib / scipy. This is quite supportive of the overlap between both the GUI development and Data Science roles. On the other hand, Data Scientists responded that 65% used NumPy / pandas / Matplotlib / scipy. 38% used Keras / Theano / TensorFlow / scikit-learn, while 31% and 27% used Django and Flask respectively. Django was a clear winner in the tools section, with an overall of 41% developers using it. When asked about what tools they used along with Python, the web developers responded that 47% used Docker, 46% used an ORM like SQLAlchemy, PonyORM, etc. and 40% used Redis. 27% of them used Jupyter Notebook. The Data Scientists on the other hand, used Jupyter Notebook a lot (52%). 40% of them used Anaconda and 23% Docker. Of the various cloud platforms, developers chose AWS the most (65%). When it came to Python features that were used the most, Code autocompletion (84%), code refactorings (82%) and writing unit tests (81%), made the top of the list. 75% developers used SQL databases while only 46% used NoSQL. Of the various IDEs and Editors, PyCharm in both its versions, community and professional, was the most popular, closely tailed by Sublime, Vim, IDLE, Atom, and VS Code. While Web Developers preferred PyCharm, data scientists prefer Jupyter Notebook. Developer Profile: Employment, Job Roles and Experience Unsurprisingly, 52% of Python developers claimed that they were in a full-time job. This ties in well with the 2018 StackOverflow Developer survey which labeled Python as the “most wanted” programming language. So developers out there, if you’re well versed with Python, you’re more likely to be hired. Talking about job roles, 73% held the role of a Developer/Programmer, while 19% held the role of a Data Analyst and 18% an Architect. Interestingly, 7% of them held the role of a CIO / CEO / CTO. 
In terms of years of experience, the results were well balanced with almost as many developers having more than 5 years of experience as those with less than 5 years of experience. 67% of the respondents were in the age group of 21-39, meaning that a majority of young folk seem to be using Python. If you’re one of them, and are looking to progress in your career, check out our extensive catalog of Python titles. As for geographic location of the developers, 18% were from the US while 13% were from India and 7% from China. Should you move to Python 3? 7 Python experts’ opinions Python experts talk Python on Twitter: Q&A Recap Introducing Dask: The library that makes scalable analytics in Python easier
Read more
  • 0
  • 2
  • 7186

article-image-7-reasons-to-choose-graphql-apis-over-rest-for-building-your-apis
Sugandha Lahoti
09 Aug 2018
4 min read
Save for later

7 reasons to choose GraphQL APIs over REST for building your APIs

Sugandha Lahoti
09 Aug 2018
4 min read
REST has long been the go-to web service for front-end developers, but recently GraphQL has exploded in popularity. Now there's another great choice for developers for implementing APIs – the Facebook created, open source GraphQL specification. Facebook has been using GraphQL APIs for almost 6 years now in most components of the Facebook and Instagram apps and websites. And since it’s open source announcement in 2015, a large number of industries, from tech giants to lean startups, have also been using this specification for creating web services. Here are 7 reasons why you should also give GraphQL a try for building your APIs. #1. GraphQL is Protocol agnostic Both REST and GraphQL are specifications for building and consuming APIs and can be operated over HTTP. However, GraphQL is protocol agnostic. What this means is that it does not depend on anything HTTP. We don't use HTTP methods or HTTP response codes with GraphQL, except for using it as a channel for GraphQL communication. #2. GraphQL allows Data Fetching GraphQL APIs allow data fetching. This data fetching feature is what makes it better as compared to REST, as you have only one endpoint to access data on a server. Whereas in a typical REST API, you may have to make requests to multiple endpoints to fetch or retrieve data from a server. #3. GraphQL eliminates Overfetching and Underfetching As mentioned earlier, the GraphQL server is a single endpoint that handles all the client requests, and it can give the clients the power to customize those requests at any time. Clients can ask for multiple resources in the same request and they can customize the fields needed from all of them. This way, clients can be in control of the data they fetch and they can easily avoid the problems of over-fetching and under-fetching. With GraphQL, clients and servers are independent which means they can be changed without affecting each other. #4. Openness, Flexibility, and Power GraphQL APIs solves the data loading problem with its three attributes. First, GraphQL is an open specification rather than a software. You can use GraphQL to serve many different needs at once. Secondly, GraphQL is flexible enough not to be tied to any particular programming language, database or hosting environment. Third GraphQL brings in power and performance and reduces code complexity by using declarative queries instead of writing code. #5. Request and response are directly related In RESTful APIs, the language we use for the request is different than the language we use for the response. However, in the case of GraphQL APIs, the language used for the request is directly related to the language used for the response. Since we use a similar language to communicate between clients and servers, debugging problems become easier. With GraphQL APIs queries mirroring the shape of their response, any deviations can be detected, and these deviations would point us to the exact query fields that are not resolving correctly. #6. GraphQL features declarative data communication GraphQL pays major attention towards improving the DI/DX. The developer experience is as important as the user experience, maybe more. When it comes to data communication, we need to give developers a declarative language for communicating an application's data requirements. GraphQL acts as a simple query language that allows developers to ask for the data required by their applications in a simple, natural, and declarative way that mirrors the way they use that data in their applications. 
That's why frontend application developers love GraphQL. #7. Open source ecosystem and a fabulous community GraphQL has evolved in leaps and bounds from when it was open sourced. The only tooling available for developers to use GraphQL was the graphql-js reference implementation, when it came out first. Now, reference implementations of the GraphQL specification are available in various languages with multiple GraphQL clients. In addition, you also have multiple tools such as Prisma, GraphQL Faker, GraphQL Playground, graphql-config etc to build GraphQL APIs. The GraphQL community is growing rapidly. Entire conferences are exclusively dedicated to GraphQL, GraphQL Europe, GraphQL Day and GraphQL Summit to name a few. If you want to learn GraphQL, here a few resources to help you get your feet off the ground quickly. Learning GraphQL and Relay Hands-on GraphQL for Better RESTful Web Services [Video] Learning GraphQL with React and Relay [Video] 5 web development tools will matter in 2018 What RESTful APIs can do for Cloud, IoT, social media and other emerging technologies


A Quick look at ML in algorithmic trading strategies

Natasha Mathur
14 Feb 2019
6 min read
Algorithmic trading relies on computer programs that execute algorithms to automate some, or all, elements of a trading strategy. Algorithms are a sequence of steps or rules to achieve a goal and can take many forms. In the case of machine learning (ML), algorithms pursue the objective of learning other algorithms, namely rules, to achieve a target based on data, such as minimizing a prediction error. Trading algorithms encode the various activities of a portfolio manager, who observes market transactions and analyzes relevant data to decide on placing buy or sell orders. The sequence of orders defines the portfolio holdings that, over time, aim to produce returns attractive to the providers of capital, taking into account their appetite for risk.

In this article, we look at use cases of ML and how it is used in algorithmic trading strategies. The article is an excerpt from the book 'Hands-On Machine Learning for Algorithmic Trading' written by Stefan Jansen, which explores effective trading strategies in real-world markets using NumPy, spaCy, pandas, scikit-learn, and Keras.

Ultimately, the goal of active investment management is to achieve alpha, that is, returns in excess of the benchmark used for evaluation. The fundamental law of active management uses the information ratio (IR) to express the value of active management: the ratio of portfolio returns above the returns of a benchmark, usually an index, to the volatility of those returns. It approximates the information ratio as the product of the information coefficient (IC), which measures the quality of forecasts as their correlation with outcomes, and the breadth of a strategy, expressed as the square root of the number of bets. The use of ML for algorithmic trading aims for more efficient use of conventional and alternative data, with the goal of producing both better and more actionable forecasts, hence improving the value of active management.

Quantitative strategies have evolved and become more sophisticated in three waves:
In the 1980s and 1990s, signals often emerged from academic research and used a single input, or very few inputs, derived from market and fundamental data. These signals are now largely commoditized and available as ETFs, such as basic mean-reversion strategies.
In the 2000s, factor-based investing proliferated. Funds used algorithms to identify assets exposed to risk factors like value or momentum to seek arbitrage opportunities. Redemptions during the early days of the financial crisis triggered the quant quake of August 2007, which cascaded through the factor-based fund industry. These strategies are now also available as long-only smart-beta funds that tilt portfolios according to a given set of risk factors.
The third era is driven by investments in ML capabilities and alternative data to generate profitable signals for repeatable trading strategies. Factor decay is a major challenge: the excess returns from new anomalies have been shown to drop by a quarter from discovery to publication, and by over 50% after publication, due to competition and crowding.
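As a quick numeric illustration of the fundamental law mentioned above, here is a small sketch; the IC and breadth values are made up purely for illustration:

import numpy as np

def information_ratio(ic, breadth):
    # Fundamental law of active management: IR ~ IC * sqrt(breadth)
    return ic * np.sqrt(breadth)

# Modest forecasting skill spread over many independent bets can match
# higher skill applied to fewer bets.
print(information_ratio(ic=0.05, breadth=400))  # 1.0
print(information_ratio(ic=0.10, breadth=100))  # 1.0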
There are several categories of trading strategies that use algorithms to execute trading rules:
Short-term trades that aim to profit from small price movements, for example, due to arbitrage
Behavioral strategies that aim to capitalize on anticipating the behavior of other market participants
Programs that aim to optimize trade execution
A large group of strategies that trade based on predicted pricing

High-frequency trading (HFT) funds most prominently rely on short holding periods to benefit from minor price movements based on bid-ask or statistical arbitrage. Behavioral algorithms usually operate in lower-liquidity environments and aim to anticipate moves by a larger player that are likely to significantly impact the price. The expectation of the price impact is based on sniffing algorithms that generate insights into other market participants' strategies, or on market patterns such as forced trades by ETFs. Trade-execution programs aim to limit the market impact of trades; they range from the simple slicing of trades to match time-weighted average pricing (TWAP) or volume-weighted average pricing (VWAP) to more elaborate schemes. Simple algorithms leverage historical patterns, whereas more sophisticated algorithms take into account transaction costs, implementation shortfall, or predicted price movements. These algorithms can operate at the security or portfolio level, for example, to implement multileg derivative or cross-asset trades.

Let's now have a look at the different applications in trading where ML is of key importance.

Use cases of ML for trading

ML extracts signals from a wide range of market, fundamental, and alternative data, and can be applied at all steps of the algorithmic trading-strategy process. Key applications include:
Data mining to identify patterns and extract features
Supervised learning to generate risk factors or alphas and create trade ideas
Aggregation of individual signals into a strategy
Allocation of assets according to risk profiles learned by an algorithm
The testing and evaluation of strategies, including through the use of synthetic data
The interactive, automated refinement of a strategy using reinforcement learning

Supervised learning for alpha factor creation and aggregation
The main rationale for applying ML to trading is to obtain predictions of asset fundamentals, price movements, or market conditions. A strategy can leverage multiple ML algorithms that build on each other. Downstream models can generate signals at the portfolio level by integrating predictions about the prospects of individual assets, capital market expectations, and the correlation among securities. Alternatively, ML predictions can inform discretionary trades, as in the quantamental approach. ML predictions can also target specific risk factors, such as value or volatility, or implement technical approaches, such as trend following or mean reversion.

Asset allocation
ML has been used to allocate portfolios based on decision-tree models that compute a hierarchical form of risk parity. As a result, risk characteristics are driven by patterns in asset prices rather than by asset classes, achieving superior risk-return characteristics.

Testing trade ideas
Backtesting is a critical step in selecting successful algorithmic trading strategies. Cross-validation using synthetic data is a key ML technique for generating reliable out-of-sample results when combined with appropriate methods to correct for multiple testing. The time-series nature of financial data requires modifications to the standard approach to avoid look-ahead bias or otherwise contaminating the data used for training, validation, and testing. In addition, the limited availability of historical data has given rise to alternative approaches that use synthetic data.
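As a minimal sketch of the kind of modification meant here, scikit-learn's TimeSeriesSplit keeps every validation fold strictly after its training fold, unlike ordinary shuffled cross-validation; the data below is synthetic and purely illustrative:

import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(20).reshape(-1, 1)  # stand-in for time-ordered features
y = np.arange(20)                 # stand-in for forward returns

# Each split trains only on the past and validates on the future,
# which avoids look-ahead bias.
for train_idx, test_idx in TimeSeriesSplit(n_splits=3).split(X):
    print("train up to:", train_idx.max(), "-> test:", test_idx.min(), "to", test_idx.max())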
Reinforcement learning
Trading takes place in a competitive, interactive marketplace. Reinforcement learning aims to train agents to learn a policy function based on rewards.

In this article, we briefly discussed how ML has become a key ingredient for different stages of algorithmic trading strategies. If you want to learn more about trading strategies that use ML, be sure to check out the book 'Hands-On Machine Learning for Algorithmic Trading'.
Using machine learning for phishing domain detection [Tutorial]
Anatomy of an automated machine learning algorithm (AutoML)
10 machine learning algorithms every engineer needs to know

5 best practices to perform data wrangling with Python

Savia Lobo
18 Oct 2018
5 min read
Data wrangling is the process of cleaning and structuring complex data sets for easy analysis and speedy decision-making. Thanks to the internet explosion and the huge trove of IoT devices, a massive amount of data is available today. However, this data is most often in its raw form and includes a lot of noise, such as unnecessary data, broken data, and so on. Cleaning up this data is essential before organizations can use it for analysis, and data wrangling plays a very important role here. The Python language also has built-in features for applying wrangling methods to all kinds of data sets. Here are five best practices that will help you on your data wrangling journey with Python. Follow them, and at the end you'll have clean, ready-to-use data for your business needs.

5 best practices for data wrangling with Python

1. Learn the data structures in Python really well
Designed to be a very high-level language, Python offers an array of amazing data structures with great built-in methods. Having a solid grasp of their capabilities will be a potent weapon in your repertoire for handling data wrangling tasks. For example, a dictionary in Python can act almost like a mini in-memory database with key-value pairs; it supports extremely fast retrieval and search by utilizing a hash table underneath. Explore other built-in libraries related to these data structures, e.g. ordered dicts and the string library's advanced functions. Build your own versions of essential data structures like stacks, queues, heaps, and trees, using classes and basic structures, and keep them handy for quick data retrieval and traversal.

2. Learn and practice file and OS handling in Python
Learn how to open and manipulate files, and how to manipulate and navigate directory structures.

3. Have a solid understanding of the core data types and capabilities of Numpy and Pandas
Learn how to create, access, sort, and search a Numpy array. Always ask whether you can replace a conventional list traversal (a for loop) with a vectorized operation; this will increase the speed of your data operations (see the sketch after this list). Explore special file types like .npy (Numpy's native storage) to access and read large data sets with much higher speed than a usual list. Get to know all the file types you can read using built-in Pandas methods; this will greatly simplify your data scraping. Almost all of these methods have great data cleaning and other checks built in, so try to use such optimized routines instead of writing your own, to speed up the process.

4. Build a good understanding of basic statistical tests and a penchant for visualization
Running some standard statistical tests can quickly give you an idea of the quality of the data you need to wrangle. Plot data often, even if it is multi-dimensional. Do not try to create fancy 3D plots; learn to explore simple sets of pairwise scatter plots instead. Use boxplots often to see the spread and range of the data and to detect outliers. For time-series data, learn the basic concepts of ARIMA modeling to check the sanity of the data.

5. Apart from Python, if you want to master one more language, go for SQL
As a data engineer, you will inevitably run across situations where you have to read from a large, conventional database. Even if you use a Python interface to access such a database, it is always a good idea to know the basic concepts of database management and relational algebra. This knowledge will help you later when you move into the world of Big Data and massive data mining (technologies like Hadoop/Pig/Hive/Impala). Your basic data wrangling knowledge will surely help you deal with such scenarios.
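Here is a small sketch of the vectorization and built-in-cleaning points from practices 3 and 4; the CSV file name and the sentinel values are placeholders, so substitute your own data:

import numpy as np
import pandas as pd

prices = np.random.rand(100_000) + 0.5

# Conventional loop: one Python-level operation per element (slow).
returns_loop = [prices[i + 1] / prices[i] - 1 for i in range(len(prices) - 1)]

# Vectorized equivalent: a single NumPy operation over the whole array (fast).
returns_vec = prices[1:] / prices[:-1] - 1

# Pandas readers have cleaning checks built in, e.g. mapping sentinel
# strings to NaN while reading, so you don't have to scrub them yourself.
df = pd.read_csv("data.csv", na_values=["N/A", "-999"])
print(df.isna().sum())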
Although data wrangling may be the most time-consuming part of the process, it is also the most important part of data management. Data collected by businesses on a daily basis can help them make decisions based on the latest information available. It also allows businesses to find hidden insights, use them in their decision-making processes, and benefit from new analytics initiatives, improved reporting efficiency, and much more.

About the authors
Dr. Tirthajyoti Sarkar works in the San Francisco Bay Area as a senior semiconductor technologist, where he designs state-of-the-art power management products and applies cutting-edge data science/machine learning techniques for design automation and predictive analytics. He has 15+ years of R&D experience and is a senior member of the IEEE.
Shubhadeep Roychowdhury works as a Sr. Software Engineer at a Paris-based cyber security startup, where he applies state-of-the-art computer vision and data engineering algorithms and tools to develop a cutting-edge product.
Data cleaning is the worst part of data analysis, say data scientists
Python, Tensorflow, Excel and more – Data professionals reveal their top tools
Manipulating text data using Python Regular Expressions (regex)


Top 5 penetration testing tools for ethical hackers

Vijin Boricha
27 Apr 2018
5 min read
Software systems are vulnerable. That's down to a range of things, from the constant changes our software systems undergo to the sheer number of opportunities for criminals to exploit the gaps and vulnerabilities within them. Fortunately, penetration testers, or ethical hackers, are a vital line of defence. Yes, you need to properly understand the nature of cyber security threats before you take steps to tackle them, but penetration testing tools are the next step towards securing your software. There's a famous saying from Stephane Nappo that sums up cyber security today: it takes 20 years to build a reputation and a few minutes of cyber-incident to ruin it. So, make sure you have the right people with the right penetration testing tools to protect not only your software but your reputation too.

The most popular penetration testing tools

Kali Linux
Kali Linux is a Linux distro designed for digital forensics and penetration testing. The successor to BackTrack, it has grown in adoption to become one of the most widely used penetration testing tools. Kali Linux is based on Debian; most of its packages are imported from Debian repositories. Kali includes more than 500 preinstalled penetration testing programs that make it possible to exploit wired, wireless, and ARM devices. The recent release of Kali Linux 2018.1 supports cloud penetration testing: Kali has collaborated with some of the planet's leading cloud platforms, such as AWS and Azure, helping to change the way we approach cloud security.

Metasploit
Metasploit is another popular penetration testing framework. It was created in 2003 using Perl and was acquired by Rapid7 in 2009, by which time it had been completely rewritten in Ruby. The resulting Metasploit Project, a collaboration between the open source community and Rapid7, is well known for its anti-forensic and evasion tools. Metasploit is built around the concept of an 'exploit': code capable of bypassing security measures and entering a vulnerable system. Once through the security defences, it runs a 'payload', code that performs operations on the target machine, making Metasploit an ideal framework for penetration testing.

Wireshark
Wireshark is one of the world's foremost network protocol analyzers, also known as a packet analyzer. It was initially released as Ethereal back in 1998 and, owing to trademark issues, was renamed Wireshark in 2006. It is typically used for network analysis, troubleshooting, and software and communication protocol development. Wireshark operates on layers two through seven of the network protocol stack, and its analysis is presented in a human-readable form. Security Operations Center analysts and network forensics investigators use this protocol analysis to examine the bits and bytes flowing through a network. Its easy-to-use functionality, and the fact that it is open source, make Wireshark one of the most popular packet analyzers among security professionals and network administrators, including those looking to earn money as freelancers.

Burp Suite
Threats to web applications have grown in recent years; ransomware and cryptojacking have become increasingly common techniques used by cybercriminals to attack users in the browser. Burp, or Burp Suite, is a widely used graphical tool for testing web application security.
There are two versions of this application security tool: a paid version that includes all the functionality, and a free version that comes with a smaller set of important features. The free version still has the basic functionality you need for web application security checks. If you are looking to get into web penetration testing, this should definitely be your first choice, as it works on Linux, Mac, and Windows.

Nmap
Nmap, also known as Network Mapper, is a security scanner. As the name suggests, it builds a map of the network to discover hosts and services on a computer network. Nmap works by sending crafted packets to the target host and then analysing the responses (a bare-bones illustration of port scanning follows at the end of this piece). It was initially released in 1997 and has since added a variety of features for detecting vulnerabilities and network glitches. A major reason to opt for Nmap is that it can adapt to network conditions like delay and congestion during a scan.

To keep your environment protected from security threats, you should take the necessary measures. There are any number of penetration testing tools out there with exceptional capabilities; the most important thing is to choose the right tool for your environment's requirements. You can pick and choose from the tools mentioned above: they were shortlisted because they are effective, well supported, easy to understand, and, most importantly, open source.
Learn some of the most important penetration testing tools in cyber security
Kali Linux - An Ethical Hacker's Cookbook
Metasploit Penetration Testing Cookbook - Third Edition
Network Analysis using Wireshark 2 Cookbook - Second Edition
For a complete list of books and videos on this topic, check out our penetration testing products.
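As promised above, here is what port scanning, the core of what Nmap does, looks like at its very simplest: a TCP "connect" scan sketched in a few lines of Python. This is far cruder than Nmap's crafted-packet techniques (such as SYN scans), and the host and port range are placeholders; only run it against hosts you are authorized to test:

import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    # Attempt a full TCP handshake on each port; connect_ex returns 0 on success.
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

print(tcp_connect_scan("127.0.0.1", range(20, 1025)))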


Why is everyone going crazy over WebAssembly?

Amarabha Banerjee
09 Sep 2018
4 min read
The history of the web has seen a few major events in the past three decades. One of them was the launch of JavaScript 22 years ago, on December 4, 1995. Since then, JavaScript has slowly evolved to become the de facto standard of front-end web development. The present-day web is much more dynamic and data intensive, and heavy graphics-based games and applications require a much more robust browser. That is why developers are going crazy over the concept of WebAssembly. Is it here to replace JavaScript? Or is it just another hype that will fade away with time? The answer is neither.

Why use WebAssembly when you have JavaScript?

To understand the buzz around WebAssembly, we have to understand what JavaScript does best and what its limitations are. JavaScript is compiled into machine code as it runs in the browser; machine code is the language that communicates with the machine and instructs it what to do. On top of that, the browser must parse, analyze, and optimize the JavaScript (for ported native applications, the Emscripten-generated JavaScript) while loading the application. That's what makes the browser slow in compute-heavy applications.

JavaScript is a dynamically typed language, with no functions compiled and stored in advance. That's why, when the engine in your browser runs JavaScript, it doesn't know which function call is going to come next. That might seem very inconvenient, but this property is what makes JavaScript-based browsing so intuitive and interactive: the same application can be run from the browser, without your system having to install a standalone desktop application.

[Image: graphical representation of an assembler. Source: LogRocket]

The image above shows how an assembly-level language is transformed into machine code when it is compiled. This is exactly what happens when WebAssembly code runs in a browser, and since WebAssembly is a binary format, it is much easier for the compiler to convert it into machine code.

Unfortunately, JavaScript is not suitable for every single application. Gaming, for example, is an area where running JavaScript code in the browser for a highly interactive multiplayer game is not the best solution; it takes a heavy toll on system resources. That's where WebAssembly comes in. WebAssembly is a low-level binary format that runs alongside JavaScript. Its biggest advantages are speed, portability, and flexibility. The speed comes from the fact that WebAssembly is binary: JavaScript is a high-level language, and compiling it to machine code puts significant pressure on the JavaScript engine. By comparison, WebAssembly binary files are much smaller (measured in kilobytes) and easy to execute and convert to machine code.

[Image: how a WASM module functions. Source: LogRocket]

Code optimization in WebAssembly happens during the compilation of the source code, unlike JavaScript. WebAssembly manages memory manually, just like languages such as C and C++, so there's no garbage collection either. This enables performance similar to native code. You can also compile other languages like Rust, C, and C++ into the WASM format, which lets developers run their native code in the browser without knowing much JavaScript. WASM is not something you write by hand; it's a format created from your native code that compiles directly down to machine code. This allows it to run alongside HTML5, CSS, and JavaScript code, giving you a taste of both worlds.

So, is WebAssembly going to replace JavaScript? JavaScript is clearly not replaceable.
It's just that heavy graphics, audio, or AI based apps make a lot of function calls in the browser, which slows the browser down; WebAssembly eases this burden. There are separate compilers that can turn your C, C++, or Rust code into WASM code, which is then used in the browser through JavaScript objects. Since these modules are very small, they keep the application fast. Support for WebAssembly has been rolled out by all major browsers, so the majority of the world's browsers can already run it. Until JavaScript's capabilities improve, WebAssembly will work alongside JavaScript to make your apps perform better and to keep your browser interactive, intuitive, and lightweight.
Golang 1.11 rc1 is here with experimental port for WebAssembly!
Unity switches to WebAssembly as the output format for the Unity WebGL build target
Introducing Life: A cross-platform WebAssembly VM for decentralized Apps written in Go
Grain: A new functional programming language that compiles to Webassembly

DeOldify: Colorising and restoring B&W images and videos using a NoGAN approach

Savia Lobo
17 May 2019
5 min read
Wouldn't it be magical if we could watch old black-and-white movie footage and images in color? Deep learning, more precisely GANs, can help here. A recent project by software researcher Jason Antic, tagged 'DeOldify', is a deep learning based effort for colorizing and restoring old images and film footage.
https://twitter.com/johnbreslin/status/1127690102560448513
https://twitter.com/johnbreslin/status/1129360541955366913
In one of the sessions at the recent Facebook Developer Conference, held from April 30 to May 1, 2019, Antic, along with Jeremy Howard and Uri Manor, talked about how GANs can be used to reconstruct images and videos, for example by increasing their resolution or adding color to black-and-white film. However, they also pointed out that GANs can be slow, and difficult and expensive to train. They demonstrated how to colorize old black-and-white movies and drastically increase the resolution of microscopy images using new PyTorch-based tools from fast.ai, the Salk Institute, and DeOldify that can be trained in just a few hours on a single GPU.
https://twitter.com/citnaj/status/1123748626965114880
DeOldify makes use of NoGAN training, which keeps the benefits of GAN training (wonderful colorization) while eliminating the nasty side effects (like flickering objects in video). NoGAN training is crucial for getting stable and colorful images or videos. An example of DeOldify working towards a stable video is shown here:
[Video example. Source: GitHub]
Antic said, "the video is rendered using isolated image generation without any sort of temporal modeling tacked on. The process performs 30-60 minutes of the GAN portion of "NoGAN" training, using 1% to 3% of Imagenet data once. Then, as with still image colorization, we "DeOldify" individual frames before rebuilding the video."

The three models in DeOldify
DeOldify includes three models: video, stable, and artistic. Each model has its own strengths, weaknesses, and use cases. The video model is for video; the other two are for images.

Stable
https://twitter.com/johnbreslin/status/1126733668347564034
This model achieves the best results with landscapes and portraits and produces fewer "zombies" (images where faces or limbs stay gray rather than being colored in properly). It generally has fewer unusual miscolorations than the artistic model, but it is also less colorful in general. The stable model uses a resnet101 backbone on a UNet, with an emphasis on the width of the layers on the decoder side. It was trained with 3 critic pretrain/GAN cycle repeats via NoGAN, in addition to the initial generator/critic pretrain/GAN NoGAN training, at 192px. This adds up to a total of 7% of Imagenet data trained once (3 hours of direct GAN training).

Artistic
https://twitter.com/johnbreslin/status/1129364635730272256
This model achieves the highest-quality image colorization with respect to interesting details and vibrance. To achieve this, however, you have to adjust the rendering resolution, or render_factor. Additionally, the model does not do as well as the stable model in a few key common scenarios: nature scenes and portraits. The artistic model uses a resnet34 backbone on a UNet, with an emphasis on the depth of the layers on the decoder side. It was trained with 5 critic pretrain/GAN cycle repeats via NoGAN, in addition to the initial generator/critic pretrain/GAN NoGAN training, at 192px. This adds up to a total of 32% of Imagenet data trained once (12.5 hours of direct GAN training).
Video
https://twitter.com/citnaj/status/1124719757997907968
The video model is optimized for smooth, consistent, and flicker-free video. It is definitely the least colorful of the three models, though it comes close to the stable model. Architecturally, this model is the same as the stable model; it differs only in training. It's trained on a mere 2.2% of Imagenet data once at 192px, using only the initial generator/critic pretrain/GAN NoGAN training (1 hour of direct GAN training).

DeOldify was achieved by combining several approaches, including:
Self-Attention Generative Adversarial Network: Antic modified the generator, a pre-trained U-Net, to use spectral normalization and self-attention.
Two Time-Scale Update Rule: one-to-one generator/critic iterations with a higher critic learning rate, modified to incorporate a "threshold" critic loss that makes sure the critic is "caught up" before moving on to generator training. This is particularly useful for the "NoGAN" method (a minimal sketch follows at the end of this piece).
NoGAN: this doesn't have a separate research paper; it is, in fact, a new type of GAN training developed to solve some key problems in the previous DeOldify model. NoGAN provides the benefits of GAN training while spending minimal time on direct GAN training.

Antic says, "I'm looking to make old photos and film look reeeeaaally good with GANs, and more importantly, make the project useful." "I'll be actively updating and improving the code over the foreseeable future. I'll try to make this as user-friendly as possible, but I'm sure there's going to be hiccups along the way", he further added. To learn more about the hardware components and other details, head over to Jason Antic's GitHub page.
Training Deep Convolutional GANs to generate Anime Characters [Tutorial]
Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows
Using deep learning methods to detect malware in Android Applications
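As promised above, here is a minimal PyTorch sketch of the two time-scale update rule. The toy models and the exact learning-rate ratio are assumptions for illustration only, not DeOldify's actual configuration:

import torch

# Toy stand-ins; real generators and critics are convolutional networks.
generator = torch.nn.Linear(100, 784)
critic = torch.nn.Linear(784, 1)

# Two time-scale update rule: one-to-one generator/critic steps, but the
# critic gets a higher learning rate so it stays "caught up" (the 4x ratio
# here is a common choice in the GAN literature, not DeOldify's setting).
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.0, 0.9))
c_opt = torch.optim.Adam(critic.parameters(), lr=4e-4, betas=(0.0, 0.9))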


RxSwift Part 1: Where to Start? Beginning with Hot and Cold Observables

Darren Karl
09 Feb 2017
6 min read
In earlier articles, we gave a short introduction to RxSwift and talked about the advantages of the functional aspect of Rx, achieved by using operators and composing streams of operations. In my journey to discover and learn Rx, I was drawn to it after finding various people talking about its benefits. After I finally bought into the idea that I wanted to learn it, I began to read the documentation, and I was overwhelmed by how many objects, classes, and operators were provided. There was a load of terminology to take in. The documentation was (and still is) there, but because I was still at page one, my elementary Rx vocabulary prevented me from actually being able to appreciate, maximize, and use RxSwift. I had to soak in the documentation for months until it saturated in my brain and things finally clicked. After talking with some members of the RxSwift community in Slack, I found that I wasn't the only one with this experience.

This is the gap. RxSwift is a beautifully designed API (I'll talk about why exactly, later), but I personally didn't know how long it would take to go from my working non-Rx knowledge to the well-designed tools that Rx provides. The problem wasn't that the documentation was lacking, because it was sufficient. It was that while reading the documentation, I found I didn't even know what questions to ask, or which documentation answered which of my questions. What I did know was programming concepts in the context of application development, in a non-Rx way. What I wanted to discover was how things would be done in RxSwift, along with the thought processes that led to the design of elegant units of code, such as operators like flatMap or concatMap, or types such as Subjects or Drivers.

This article aims to walk you through real programming situations that software developers encounter, while gradually introducing the Rx concepts that can be used. It assumes that you've read the two previous articles on RxSwift linked above, and that you've found and read some of the documentation but don't know where to start. It also assumes that you're familiar with how network calls or database queries are made and how to wrap them using Rx.

A simple queuing application

Let's start with something simple: a mobile application for queuing. We can have multiple queues, each containing zero to many people in order. Let's say that we have the following code that performs a network query to get the queue data from your REST API. We assume that these are network requests wrapped using Observable.create():

private func getQueues() -> Observable<[Queue]>
private func getPeople(in queue: Queue) -> Observable<[Person]>
private var disposeBag = DisposeBag()

An example of the Observable code for getting the queue data is available here.

Where do I write my subscribe code?
Initially, a developer might write the following code in the viewDidLoad() method and bind it to some UITableView:

override func viewDidLoad() {
    super.viewDidLoad()
    getQueues()
        .subscribeOn(ConcurrentDispatchQueueScheduler(queue: networkQueue))
        .observeOn(MainScheduler.instance)
        .bindTo(tableView.rx.items(cellIdentifier: "Cell")) { index, model, cell in
            cell.textLabel?.text = model
        }
        .addDisposableTo(disposeBag)
}

However, if the getQueues() observable code loads the data from a cold observable network call, then, by definition, the cold observable will perform the network call only once, during viewDidLoad(), load the data into the views, and be done. The table view will not update when the queue is updated by the server, unless the view controller gets disposed and viewDidLoad() is performed again. Note that should the network call fail, we can use the catchError() operator right after it and swap in a database or cache query instead, assuming we've persisted the queue data to a file or database. This way, we're assured that this view controller will always have data to display.

Introduction to cold and hot observables

By cold observable, we mean that the observable code (that is, the network call to get the data) will only begin emitting items on subscription (which currently happens in viewDidLoad). This is the difference between hot and cold observables: hot observables can be emitting items even when there are no observers subscribed, while cold observables only run once an observer is subscribed. Examples of cold observables are things you've wrapped using Observable.create(), while examples of hot observables are things like UIButton.rx.tap or UITextField.rx.text, which can be emitting items, such as Void for a button press or String for a text field, even when there aren't any observers subscribed to them. Inherently, we are wrong to use a cold observable here, because by definition it simply will not meet the demands of our application.

A quick fix might be to write it in viewWillAppear

Going back to our queuing example, one could write the query in the viewWillAppear() life cycle method so that the data refreshes every time the view appears. The problem with this solution is that we perform a network query too frequently. Furthermore, every time viewWillAppear is called, a new subscription is added to the disposeBag. If, for some reason, the last subscription has not been disposed (that is, it is still processing and emitting items and has not yet reached the onCompleted or onError state) and you begin to perform a network query again, then you have a possible memory leak!

Here's an example of the (impractical) code that refreshes on every view appearance. The code will work (it will refresh every time), but it isn't good code:

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    getQueues()
        .bindTo(tableView.rx.items(cellIdentifier: "Cell")) { index, model, cell in
            cell.textLabel?.text = model
        }
        .addDisposableTo(self.disposeBag)
}

So, if we don't want to query only once, and we don't want to query too frequently, it begs the question: how many times should the queries really be performed? In part 2 of this article, we'll discuss what the right amount of querying is.

About the Author

Darren Karl Sapalo is a software developer, an advocate of UX, and a student taking up his Master's degree in Computer Science. He enjoyed developing games in his free time when he was twelve.
He finished his undergraduate thesis on computer vision and took up industry work with Apollo Technologies Inc., developing for both the Android and iOS platforms.