Abandoning Agile

Aaron Lazar
23 May 2018
7 min read
“We’re Agile”. That’s the kind of phrase I would expect from a football team, a troupe of ballet dancers, or maybe a martial artist. Every time I hear it come from the mouth of a software professional, I think, “Oh boy, not again!” So here I am to talk about something that might touch a nerve or two of an Agile fan. I’m talking about whether you should be abandoning Agile once and for all!

Okay, so what is Agile?

Agile software development is an approach in which requirements and solutions evolve through the collaborative effort of self-organizing, cross-functional teams and the end user. Agile advocates adaptive planning, evolutionary development, early delivery, and continuous improvement, and it encourages a rapid and flexible response to change. The Agile Manifesto was created by some of the top software gurus, the likes of Uncle Bob, Martin Fowler, et al. The values it stands for are:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

Apart from these, it follows 12 principles, as given here, through which it aims to improve software development. At its heart, it is a mindset.

So what’s wrong?

Honestly speaking, everything looks rosy from the outside until you’ve actually experienced it. Let me ask you at this point (and I’d love to hear your answers in the comments section below): has there never been a time when you felt at least one of the 12 principles was a hindrance to your personal, as well as your team’s, development process? If yes, you’re not alone. But before throwing the baby out with the bathwater, let’s try to understand things a bit and see whether some misinterpretation could be the actual culprit. Here are some common misinterpretations of what Agile is and what it can and cannot do. I like to call them the 7 Deadly Sins.

#1 It changes processes

One of the main myths about Agile is that it changes processes. It doesn’t really change your processes; it changes your focus. If you’ve been having problems with your process and you feel Agile would be your knight in shining armor, think again. You need something more than just Agile and Lean. This is one of the primary reasons teams feel that Agile isn’t working for them: they haven’t understood whether they should have gone Agile or not. In other words, they don’t know why they went Agile in the first place!

#2 Agile doesn’t work for large, remote teams

The fourth of the 12 Agile principles states that “business people and developers must work together daily throughout the project”. Have you ever thought about how “awesome” (read: impractical) it is to coordinate with teams in India, all the way from the US, on a daily basis? The fact is that this is not practically possible when teams are spread across time zones. What the principle intends is for the entire team to communicate daily, and there is always the option of a single point of contact who passes information on to the other team members. So no matter how large the team, if Agile is implemented in the right way, it works. Strong communication and documentation help a great deal here.

#3 Prefer the “move fast and break things” approach

Personally, I prefer to MFABT, mostly because at work I’m solely responsible for my own actions. But what about when you’re part of a huge team that’s working on something together? When you take such an approach, there are always hidden costs of being “wrong”. Moreover, what if every time you moved fast, all you did was break things? Do you think your team’s morale would be uplifted?

#4 Sprints are counterproductive

People might argue that sprints are dumb and ask what the point is of releasing software in bits and pieces. What you should actually think about is whether what you’re focusing on can really be done quicker. Faster doesn’t apply to everything. Take making babies, for example. Jokes apart, you’ll realise you might often need to slow things down in order to go fast, so that you reach your goal without making mistakes; at least, not too many costly ones. Before you dive right into Agile, understand whether it will add value to what you do.

#5 I love micromanagement

Too bad for you, then: Agile actually promotes self-driven, self-managed, autonomous teams that learn continuously to adapt and adjust. In enterprises where there is bureaucracy, it will not work. Bear in mind that most organizations (apart from startups, maybe) are hierarchical in nature, which brings with it bureaucracy in some form or flavor.

#6 Scrum saves time

Well, yes, it does. But if you’re a manager and think Scrum is going to save you a couple of hours you would otherwise spend paying attention to your individual team members, you’re wrong. The idea of Scrum is to identify where you’ve reached, what you need to do today, and whether anything might get in the way of that. Scrum is no substitute for knowing your team members’ problems and helping them overcome those problems.

#7 Test everything, every time

No, no, no. That’s a wrong notion, and one that wastes a lot of time. What you should actually be doing is automated regression testing. No testing is bad too; you surely don’t want nasty surprises before you release!

Teams and organisations tend to get carried away by the Agile movement and try to imitate others without understanding whether what they’re doing actually fits what the business needs. Now back to what I said at the beginning: when teams say they’re agile, half of them only think they are. Agile was built for the benefit of software teams all across the globe, and from what teams say, it does work wonders! Like any long-term relationship, though, it takes conscious effort and time every day to make it work.

Should you abandon Agile?

Yes and no. If you have even the slightest hint that one or more of the following are true for your organisation, you really need to abandon Agile, or it will backfire:

Your team is not self-managed and lacks mature, cross-functional developers
Your customers need you to seek approval at every release stage
Not everyone in your organisation believes in Agile
Your projects are not very complex

Always remember: Agile is not a tool, and if someone is trying to sell you a tool to help you become Agile, they’re looting you. It is a mindset: a family that trusts each other, and a team that communicates effectively to get things done. My suggestion is to go ahead and become agile, but only if the whole family is for it and is willing to transform together. In other words, Agile is not a panacea for all development projects. Your choice of methodology comes down to what makes the best sense for your project, your team, and your organization. Don’t be afraid to abandon Agile in favor of newer approaches such as chaos engineering and mob programming, or even to go back to the good ol’ waterfall model.

Let us know what you think of Agile and how well your organisation has adapted to it, if it has adopted it. You can look up some fun discussions about whether it works or sucks on Hacker News: In a nutshell, why do a lot of developers dislike Agile?

Poor Man’s Agile: Scrum in 5 Simple Steps
What is Mob Programming?
5 things that will matter in application development in 2018
Chaos Engineering: managing complexity by breaking things

Why MXNet is a versatile Deep Learning framework

Aaron Lazar
05 Sep 2017
5 min read
Tools to perform deep learning tasks are in abundance. You have programming languages that are adapted for the job, or those specifically created to get the job done. Then you have several frameworks and libraries that allow data scientists to design systems that sift through tonnes of data and learn from it. But a major challenge for all these tools lies in tackling two primary issues:

The size of the data
The speed of computation

With petabytes and exabytes of data around, it has become far more taxing for researchers to handle. Take image processing, for example: ImageNet is a massive dataset consisting of millions of images from several distinct classes, and tackling that scale is a serious lip-biting affair. The speed at which researchers can get actionable insights from the data is also an important factor. Powerful hardware like multi-core GPUs, rumbling with raw power and begging to be tamed, has waltzed into the mosh pit of big data. You may try to humble these mean machines with old-school machine learning stacks like R, SciPy, or NumPy, but in vain. So the deep learning community developed several powerful libraries to solve this problem, and they succeeded to an extent. But one major problem remained: the frameworks failed to solve the problems of efficiency and flexibility together. This is where a one-of-a-kind, powerful, and flexible library like MXNet rises to the challenge and makes developers' lives a lot easier.

What is MXNet?

MXNet sits happy at over 10k stars on GitHub and has recently been inducted into the Apache Software Foundation. It focuses on accelerating the development and deployment of deep neural networks at scale. This means exploiting the full potential of multi-core GPUs to process tonnes of data at blazing fast speeds. We'll take a look at some of MXNet's most interesting features over the next few minutes.

Why is MXNet so good?

Efficient: MXNet is backed by a C++ backend, which makes it extremely fast even on a single machine. It automatically parallelizes computation across devices and synchronises computation when multithreading is introduced. Its support for linear scaling means that not only can the number of GPUs be scaled up, MXNet also supports heavily distributed computing by scaling up the number of machines. Moreover, MXNet has a graph optimisation layer that sits on top of a dynamic dependency scheduler, which enhances memory efficiency and speed.

Extremely portable: MXNet can be programmed in umpteen languages, such as C++, R, Python, Julia, JavaScript, Go, and more. It is widely supported across most operating systems, such as Linux and Windows, as well as iOS and Android, making it truly multi-platform, including low-level platforms. Moreover, it works well in cloud environments like AWS, which is one of the reasons AWS officially adopted MXNet as its deep learning framework of choice. You can now run MXNet from the Deep Learning AMI.

Great for data visualization: MXNet not only provides the mx.viz.plot_network method for visualising neural networks, it also has built-in support for Graphviz, to visualise neural nets as computation graphs. Check Joseph Paul Cohen's blog for a great side-by-side visualisation of CNN architectures in MXNet. Alternatively, you could detach TensorBoard from TensorFlow and use it with MXNet; jump here for more info on that.

Flexible: MXNet supports both imperative and declarative/symbolic styles of programming, allowing you to blend both styles for increased efficiency. Libraries like NumPy and Torch support plain imperative programming, while TensorFlow, Theano, and Caffe support plain declarative programming. You can get a closer look at what these styles actually are here. MXNet is the only framework so far that mixes both styles to maximise efficiency and productivity (a short sketch of the two styles appears at the end of this article).

In-built profiler: MXNet comes packaged with an in-built profiler that lets you profile execution times, layer by layer, in the network. While you will still want general profiling tools, like gprof and nvprof, to profile at the kernel, function, or instruction level, the in-built profiler is specifically tuned to provide detailed information at the symbol or operator level.

Limitations

While MXNet has a host of attractive features that explain why it has earned public admiration, it has its share of limitations, just like any other popular tool. One of the biggest issues encountered with MXNet is that it tends to give varied results when compile settings are modified. For example, a model might work well with cuDNN3 but not with cuDNN4. To overcome issues like this, you may have to spend some time on forums. Moreover, writing your own operators or layers in C++ to achieve efficiency is a daunting task, although the official documentation mentions that this has become easier as of v0.9. Finally, the documentation is introductory and is not organised well enough for creating custom operators or performing other advanced tasks.

So, should I use MXNet?

MXNet is the new kid on the block that supports modern deep learning models like CNNs and LSTMs. It boasts immense speed, scalability, and flexibility to solve your deep learning problems, and it consumes as little as 4 GB of memory when running deep networks with almost a thousand layers. The core library, including its dependencies, can be amalgamated into a single C++ source file, which can be compiled on both Android and iOS, as well as in a browser with the JavaScript extensions. But like all other libraries, it has its own hurdles, none of which are serious enough to prevent you from getting the job done, and a good one at that! Is that enough to get you excited to start using MXNet? Go get working then! And don't forget to tell us your experiences of working with MXNet.
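To make the "Flexible" point above concrete, here is a minimal sketch of the two styles side by side. It is an illustration only, and assumes the pre-2.0 mxnet Python package, where the imperative API lives in mx.nd and the symbolic API in mx.sym:

```python
import mxnet as mx

# Imperative style: operations execute immediately, like NumPy
a = mx.nd.ones((2, 3))
b = a * 2 + 1
print(b.asnumpy())  # values are available right away

# Symbolic style: first describe a computation graph...
x = mx.sym.Variable('x')
y = x * 2 + 1

# ...then bind real data and a device to the graph, and run it
executor = y.bind(mx.cpu(), {'x': mx.nd.ones((2, 3))})
print(executor.forward()[0].asnumpy())
```

The imperative half is convenient for debugging and quick experiments, while the symbolic half lets MXNet's scheduler optimise the whole graph before execution; blending the two is where the claimed efficiency gains come from.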

Intro to Meteor for JS full-stack developers

Ken Lee
14 Oct 2015
9 min read
If you are like me, a JavaScript full-stack developer, your choices of technology might be limited when dealing with modern app/webapp development. You could choose the MEAN stack (MongoDB, Express, AngularJS, and Node.js) and learn all four of these technologies in order to mix and match, or employ a ready-made framework like DerbyJS. However, none of them provides the one-stop-shop experience of Meteor, which stands out among the few on the canvas.

What is Meteor?

Meteor is an open-source "platform" (more than a framework) in pure JavaScript, built on top of Node.js, communicating via the DDP protocol and leveraging MongoDB as data storage. It gives developers the power to build a modern app/webapp with production-ready, real-time (reactive), and cross-platform (web, iOS, Android) capabilities. It was designed to be easy to learn, even for beginners, so we can focus on developing business logic and user experience rather than getting bogged down in the nitty-gritty of each technology's learning curve.

Your first real-time app: Vote Me Up!

Below, we will look at how to build a reactive app with Meteor in 30 minutes or less.

Step 1: Installation (3-5 mins)

For OS X or Linux developers, head over to the terminal and install the official release from Meteor.

```
$ curl https://install.meteor.com/ | sh
```

For Windows developers, please download the official installer here.

Step 2: Create an app (3-5 mins)

After we have Meteor installed, we can create a new app simply by:

```
$ meteor create voteMeUp
```

This creates a new folder named voteMeUp under the current working directory. Check under the voteMeUp folder; we will see that three files and one folder have been created:

```
voteMeUp/
  .meteor/
  voteMeUp.html
  voteMeUp.css
  voteMeUp.js
```

.meteor is for internal use; we should not touch this folder. The other three files are obvious enough even for beginners: the HTML markup, the stylesheet, and one JavaScript file, the most barebones structure one can get for web/webapp development. The default app structure tells us that Meteor gives us freedom over folder structure. We can organise files and folders however we feel appropriate, as long as we don't step on the special folder names Meteor looks for. Here, we will be using a basic folder structure for our app. You can visit the official documentation for more info on folder structure and file load order.

```
voteMeUp/
  .meteor/
  client/
    votes/
      vote.html
      vote.js
    main.html
  collections/
    votes.js
  server/
    presets.js
    publications.js
```

Meteor is a client-database-server platform. We will be writing code for the client and server independently, communicating through the reactive DB driver APIs, publications, and subscriptions. For this brief tutorial, we just need to pay attention to the behaviour of these folders:

Files in the client/ folder run on the client side (the user's browser)
Files in the server/ folder run on the server side (the Node.js server)
Files in any other folder, e.g. collections/, run on both client and server

Step 3: Add some packages (< 3 mins)

Meteor is driven by an active community, with developers around the world creating reusable packages to complement app/webapp development. This is also why Meteor is well known for rapid prototyping. For brevity's sake, we will use one package from Meteor: underscore. Underscore is a JavaScript library that provides some useful helper functions, and the package provided by Meteor is a subset of the original library.

```
$ meteor add underscore
```

There are a lot of useful packages around, many well maintained and documented, developed by seasoned web developers around the world. Check them out:

Iron Router/Flow Router, used for application routing
Collection2, used for automatic validation on insert and update operations
Kadira, a monitoring platform for your app
Twitter Bootstrap, a popular frontend framework by Twitter

Step 4: Start the server (< 1 min)

Start the server simply by:

```
$ meteor
```

Now we can visit http://localhost:3000. Of course, you will be staring at a blank screen! We haven't written any code yet. Let's do that next.

Step 5: Write some code (< 20 mins)

As you start to write code, you will notice that the browser page reloads by itself every time you save a file. Thanks to the built-in hot code push mechanism, we don't need to refresh the page manually.

Database collections

Let's start with the database collection(s). We will keep our app simple: we just need one collection, votes, which we put in collections/votes.js like this:

```
Votes = new Mongo.Collection('votes');
```

All files in the collections/ folder run on both the client and the server side. When this line of code is executed, a Mongo collection is established on the server side. On the client side, a minimongo collection is established. The purpose of minimongo is to reimplement the MongoDB API against an in-memory JavaScript database; it is like a MongoDB emulator that runs inside the client browser.

Some preset data

We will need some data to start working with. We can put this in server/presets.js. These are just some random names, with a vote count of 0 to start with.

```
if (Votes.find().count() === 0) {
  Votes.insert({ name: "Janina Franny", voteCount: 0 });
  Votes.insert({ name: "Leigh Borivoi", voteCount: 0 });
  Votes.insert({ name: "Amon Shukri", voteCount: 0 });
  Votes.insert({ name: "Dareios Steponas", voteCount: 0 });
  Votes.insert({ name: "Franco Karl", voteCount: 0 });
}
```

Publications

Since this is for educational purposes, we will publish (Meteor.publish()) all the data to the client side in server/publications.js. You would most likely not do this in a production application. Planning publications is one major step in Meteor app/webapp development; we don't want to publish too little or too much data to the client. Just enough data is what we always keep an eye out for.

```
Meteor.publish('allVotes', function() {
  return Votes.find();
});
```

Subscriptions

Once we have the publication in place, we can subscribe to it by its name, allVotes, as shown above. Meteor provides template-level subscriptions, which means we can subscribe to a publication when a template is created and have it unsubscribed when the template is destroyed. We put our subscription (Meteor.subscribe()) in client/votes/votes.js. onCreated is a callback that runs when the template named votes is created.

```
Template.votes.onCreated(function() {
  Meteor.subscribe('allVotes');
});
```

The votes template, put in client/votes/votes.html, is some simple markup such as the following:

```
<template name="votes">
  <h2>All Votes</h2>
  <ul>
    {{#each sortedVotes}}
      <li>{{name}} ({{voteCount}}) <button class="btn-up-vote">Up Vote</button></li>
    {{/each}}
  </ul>
  <h3>Total votes: {{totalVotes}}</h3>
</template>
```

If you are curious about that markup with {{ and }}, enter Meteor Blaze, a powerful library for creating live-updating templates on the client side. Similar to AngularJS and React, Blaze serves as the default front-end templating engine for Meteor, but it is simpler to use and easier to understand.

The main template

There must be somewhere to start our application. client/main.html is the place to kick off our template(s).

```
<body>
  {{> votes}}
</body>
```

Helpers

In order to show all of our votes, we will need some helper functions. As you can see from the template above, {{#each sortedVotes}} is where a loop happens, printing out the names and their votes in sorted order, and {{totalVotes}} shows the total vote count. We will put this code into the same file we previously worked on, client/votes/votes.js; the complete code should be:

```
Template.votes.onCreated(function() {
  Meteor.subscribe('allVotes');
});

Template.votes.helpers({
  'sortedVotes': function() {
    return Votes.find({}, { sort: { voteCount: -1 } });
  },
  'totalVotes': function() {
    var votes = Votes.find();
    if (votes.count() > 0) {
      return _.reduce(votes.fetch(), function(memo, obj) {
        return memo + obj.voteCount;
      }, 0);
    }
  }
});
```

Sure enough, the helpers return all of the votes, sorted in descending order (the larger number on top), and the sum of the votes (reduce is a function provided by underscore). This is all we need to show the vote listing. Head over to the browser, and you should see the listing on screen!

Events

To make the app useful and reactive, we need an event to update the listing on the fly when someone votes on a name. This is done easily by binding an event to the 'Up Vote' button. We add the event handler in the same file, client/votes/votes.js:

```
Template.votes.events({
  'click .btn-up-vote': function() {
    Votes.update({ _id: this._id }, { $inc: { voteCount: 1 } });
  }
});
```

This new event handler does a quick-and-dirty update on the Votes collection by the _id field. Each event handler has this pointing to the current template context; the {{#each}} in the template introduces a new context, so this._id returns the _id of the current record in the collection.

Step 6: Done. Enjoy your first real-time app!

You can now visit the site with different browsers/tabs open side by side. An action in one triggers the reactive behavior in the other. Have fun voting!

Conclusion

By now, we can see how easily we can build a fully functional real-time app/webapp using Meteor. With great power comes great responsibility, and proper planning and structuring of our app/webapp is of the utmost importance once we are empowered by these technologies. Use it wisely and you can improve both the quality and performance of your app/webapp. Try it out, and let me know if you are sold.

Resources:

Meteor official site
Meteor official documentation
Meteor package library: Atmosphere
Discover Meteor

Want more JavaScript content? Look no further than our dedicated JavaScript page.

About the Author

Ken Lee is the co-founder of Innomonster Pte. Ltd. (http://innomonster.com/), a specialized website/app design & development company based in Singapore. He has eight years of experience in web development, and is passionate about front-end and JS full-stack development. You can reach him at [email protected].

Machine learning APIs for Google Cloud Platform

Amey Varangaonkar
28 Jun 2018
7 min read
Google Cloud Platform (GCP) is considered one of the Big 3 cloud platforms, alongside Microsoft Azure and AWS. GCP is a widely used cloud solution with AI capabilities for designing and developing smart models that turn your data into insights at an affordable cost. The following excerpt is taken from the book 'Cloud Analytics with Google Cloud Platform' authored by Sanket Thodge. GCP offers many machine learning APIs, of which we take a look at the three most popular.

Cloud Speech API

A powerful API from GCP! It enables the user to convert speech to text using a neural network model, and it recognizes over 100 languages from around the world. It can also filter unwanted noise and content from the text, under various types of environments. It supports context-aware recognition, and works on any device, any platform, anywhere, including IoT. It has features like Automatic Speech Recognition (ASR), global vocabulary, streaming recognition, word hints, real-time audio support, noise robustness, and inappropriate content filtering, and it supports integration with other GCP APIs. In essence, this model enables speech-to-text conversion through machine learning.

The components used by the Speech API are:

REST API or Google Remote Procedure Call (gRPC) API
Google Cloud Client Library
JSON API
Python
Cloud Datalab
Cloud Data Storage
Cloud Endpoints

The applications of the model include:

Voice user interfaces
Domotic appliance control
Preparation of structured documents
Aircraft / direct voice outputs
Speech-to-text processing
Telecommunication

It is free of charge for up to 60 minutes of audio per month, billed in 15-second increments; beyond that, usage is charged at $0.006 per 15 seconds.

Now that we have learned the concepts and applications of the model, let's look at some use cases where it can be implemented:

Solving crimes with voice recognition: AGNITIO, a voice biometrics specialist, partnered with Morpho (Safran) to bring Voice ID technology into its multimodal suite of criminal identification products.
Buying products and services with the sound of your voice: Another popular, mainstream application of biometrics in general is mobile payments, and voice recognition has made its way into this highly competitive arena.
A hands-free AI assistant that knows who you are: Almost any mobile phone nowadays ships with voice recognition software in the form of AI machine learning algorithms.

Cloud Translation API

Natural language processing (NLP) is a part of artificial intelligence, and Machine Translation (MT) has been the main focus of the NLP community for many years. MT deals with translating text in a source language into text in a target language. The Cloud Translation API provides an interface to translate an input string from one language to a target language; it is highly responsive, scalable, and dynamic in nature. The API enables translation between 100+ languages. It also supports automatic language detection with good accuracy, and it can read the contents of a web page and translate them into another language; the text need not be extracted from a document. The Translation API supports features such as programmatic access, text translation, language detection, continuous updates, an adjustable quota, and affordable pricing. In essence, the Cloud Translation API is an adaptive machine translation algorithm.
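Calling it from code is straightforward. Here is a minimal sketch using the google-cloud-translate Python client; this is an illustration only, and it assumes the translate_v2 module of that package is available and that the GOOGLE_APPLICATION_CREDENTIALS environment variable points at a service account key:

```python
# pip install google-cloud-translate
from google.cloud import translate_v2 as translate

client = translate.Client()

# translate a French string into English; the source language is detected
result = client.translate("Il pleut des cordes", target_language="en")

print(result["translatedText"])          # the translated string
print(result["detectedSourceLanguage"])  # e.g. "fr", detected automatically
```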
The components used by this model are:

REST API
Cloud Datalab
Cloud Data Storage
Python and Ruby client libraries
Cloud Endpoints

The most important application of the model is the conversion of a regional language into a foreign language. The cost of text translation and language detection is $20 per 1 million characters.

Use cases

Now that we have learned the concepts and applications of the API, let's look at two use cases where it has been successfully implemented:

Rule-based machine translation
Local tissue response to injury and trauma

We will discuss each of these use cases in the following sections.

Rule-based machine translation

The steps to implement rule-based machine translation successfully are as follows:

Input text
Parsing
Tokenization
Compare the rules to extract the meaning of a prepositional phrase
Map words of the input language to words of the target language
Frame the sentence in the target language

Local tissue response to injury and trauma

We can learn about the machine translation process from the response of local tissue to injury and trauma; the human body follows a process similar to machine translation when dealing with injuries. We can roughly describe the process as follows:

Hemorrhaging from lesioned vessels and blood clotting
Blood-borne physiological components, leaking from the usually closed sanguineous compartment, being recognized as foreign material by the surrounding tissue, since they are not tissue-specific
Inflammatory response mediated by macrophages (and, more rarely, by foreign-body giant cells)
Resorption of the blood clot
Ingrowth of blood vessels and fibroblasts, and the formation of granulation tissue
Deposition of an unspecific but biocompatible type of repair (scar) tissue by fibroblasts

Cloud Vision API

The Cloud Vision API is a powerful image analysis tool. It enables users to understand the content of an image. It helps find various attributes or categories of an image, such as labels, web, text, document, properties, and safe search, and it returns its analysis of the image as JSON. In the labels field there are many sub-categories, like text, line, font, area, graphics, screenshots, and points. The web content covers how much of the image is graphics, what percentage is text, what percentage is empty, and whether the image appears partially or fully anywhere on the web. The document part consists of blocks of the image with detailed descriptions, the properties show the colors used in the image, and safe search flags any unwanted or inappropriate content in the image. The main features of this API are label detection, explicit content detection, logo and landmark detection, face detection, and web detection; to extract text, the API uses an Optical Character Reader (OCR), with support for many languages. It does not support face recognition. We can summarize the functionality of the API as extracting quantitative information from images: it takes an image as input and produces numbers and text as output.

The components used in the API are:

Client Library
REST API
RPC API
OCR Language Support
Cloud Storage
Cloud Endpoints

Applications of the API include:

Industrial robotics
Cartography
Geology
Forensics and military
Medical and healthcare

Cost: free of charge for the first 1,000 units per month; after that, pay as you go.

Use cases

This technique can be successfully implemented in:

Image detection using an Android or iOS mobile device
Retinal image analysis (ophthalmology)

We will discuss each of these use cases in the following topics.

Image detection using an Android or iOS mobile device

The Cloud Vision API can be used to detect images with your smartphone. The steps to do this are simple:

Input the image
Run the Cloud Vision API
Execute the methods for detection of face, label, text, web, and document properties
Generate the response in the form of a phrase or string
Populate the image details as a text view

Retinal image analysis (ophthalmology)

Similarly, the API can be used to analyze retinal images. The steps to implement this are as follows:

Input the images of an eye
Estimate the retinal biomarkers
Process the image to remove the affected portion without losing necessary information
Identify the location of specific structures
Identify the boundaries of the object
Find similar regions in two or more images
Quantify the damage to the retinal portion of the image

You can learn a lot more about the machine learning capabilities of GCP on the official documentation page. If you found the above excerpt useful, make sure you check out our book 'Cloud Analytics with Google Cloud Platform' for more information on why GCP is a top cloud solution for machine learning and AI.

Read more

Google announces Cloud TPUs on the Cloud Machine Learning Engine (ML Engine)
How machine learning as a service is transforming cloud
Google announces the largest overhaul of their Cloud Speech-to-Text

Containerized Data Science with Docker

Darwin Corn
03 Jul 2016
4 min read
So, you're itching to begin your journey into data science but you aren't sure where to start. Well, I'm glad you've found this post, since I will detail, step by step, how I circumvented the unnecessarily large technological barrier to entry and got my feet wet, so to speak.

Containerization in general, and Docker in particular, have taken the IT world by storm in the last couple of years by making LXC containers more than just VM alternatives for the enterprising sysadmin. Even if you're coming at this post from a world devoid of IT, the odds are good that you've heard of Docker and its cute whale mascot. Of course, now that Microsoft is on board the containerization bandwagon and a consortium of bickering stakeholders has formed, you know that container tech is here to stay. Yes, FreeBSD has had the concept of 'jails' for almost two decades now. But thanks to Docker, container tech is now usable across the big three of Linux, Windows, and Mac (if a bit hack-y in the case of the latter two), and today we're going to use its strengths in an exploration of the world of data science.

Now that I have your interest piqued, you're wondering where the two intersect. Well, if you're like me, you've looked at the footprint of RStudio and the nightmare maze of dependencies of IPython and "noped" right out of there. Thanks to containers, these problems are solved! With Docker, you can limit the amount of memory available to a container, and the way containers are constructed ensures that you never have to troubleshoot broken dependencies after an update ever again.

So let's install Docker, which is as straightforward as using your package manager on Linux, or downloading Docker Toolbox and running the installer if you're using a Mac or Windows PC. The instructions that follow are tailored to a Linux installation, but are easily adapted to Windows or Mac as well. On those two platforms, you can even bypass these CLI commands and use Kitematic, or so I hear.

Now that you have Docker installed, let's look at some use cases for how it can facilitate our journey into data science. First, we are going to pull the Jupyter Notebook container so that you can work with that language-agnostic tool.

```
# docker run --rm -it -p 8888:8888 -v "$(pwd):/notebooks" jupyter/notebook
```

The -v "$(pwd):/notebooks" flag mounts the current directory onto the /notebooks directory in the container, allowing you to save your work outside the container. This is important because you're using the container as a temporary working environment: the --rm flag ensures that the container is destroyed when it exits, so if you rerun the command to get back to work after turning off your computer, for instance, the container is replaced with an entirely new one. Mounting the volume gives the container access to a folder on the local filesystem, ensuring that your work survives the casually disposable nature of development containers.

Now go ahead and navigate to http://localhost:8888, and let's get to work. You did bring a dataset to analyze in a notebook, right? The actual nuts and bolts of data science are beyond the scope of this post, but for a quick intro to data and learning materials, I've found Kaggle to be a great resource.
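If you would rather drive containers from Python than from the shell, the same launch can be scripted with the Docker SDK for Python. This is just a sketch, assuming the docker package (docker-py) is installed and the Docker daemon is running:

```python
# pip install docker
import os
import docker

client = docker.from_env()
container = client.containers.run(
    "jupyter/notebook",
    ports={"8888/tcp": 8888},                                     # -p 8888:8888
    volumes={os.getcwd(): {"bind": "/notebooks", "mode": "rw"}},  # -v "$(pwd):/notebooks"
    auto_remove=True,  # the daemon removes the container on exit, like --rm
    detach=True,       # run in the background and return a Container object
)
print("started container", container.short_id)
```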
While we're at it, let's look at that other issue I mentioned previously, that of the application footprint. Recently a friend of mine convinced me to use R, and I was enjoying working with the language until I got my hands on some real data and immediately felt the pain of an application not designed for endpoint use. I ran a regression and it locked up my computer for minutes! Fortunately, you can use a container to isolate it and feed it only limited resources, keeping the rest of the computer happy.

```
# docker run -m 1g -ti --rm r-base
```

This command drops you into an interactive R CLI that should keep even the leanest of modern computers humming along without a hiccup. Of course, you can also use the -c and --blkio-weight flags to restrict access to CPU and HDD resources respectively, if limiting the container to 1 GB of RAM wasn't enough.

So, a program installation and a command or two (or a couple of clicks in the Kitematic GUI), and we're off and running doing data science with none of the typical headaches.

About the Author

Darwin Corn is a systems analyst for the Consumer Direct Care Network. He is a mid-level professional with diverse experience in the information technology world.

Data science for non-techies: How I got started (Part 1)

Amey Varangaonkar
20 Jul 2018
7 min read
As a category manager, I manage the data science portfolio of product ideas for Packt Publishing, a leading tech publisher. In simple terms, I place informed bets on where to invest, what topics to publish on, and so on. While I have a decent idea of where the industry is heading and what data professionals are looking to learn and why, it is high time I walked in their shoes, for a couple of reasons. Basically, I want to understand the reason behind data science being the 'sexiest job of the 21st century', and whether the role is really worth all the fame and fortune. In the process, I also want to explore the underlying difficulties, challenges, and obstacles that every data scientist has had to endure at some point in their journey, or perhaps still does. The cherry on top is that I get to use the skills I develop to supercharge my success in my current role, which is primarily insight-driven. This is the first of a series of posts on how I got started with data science. Today, I'm sharing my experience of devising a learning path and then gathering appropriate learning resources.

Devising a learning path

To understand the concepts of data science, I had to research a lot. There are tons of resources out there, many of which are very good. But once you separate the good from the rest, it can be quite intimidating to pick the options that suit you best. Some of the primary questions that clouded my mind were:

What should be my programming language of choice? R or Python? Or something else?
What tools and frameworks do I need to learn?
What about the statistics and mathematical aspects of machine learning? How essential are they?

Two videos really helped me find the answers to these questions:

If you don't want to spend a lot of your time mastering the art of data science, there's a beautiful video on how to become a data scientist in six months.
What questions are asked in a data science interview? What in-demand skills do you need to master in order to get a data science job? This video on 5 Tips For Getting a Data Science Job is really helpful.

After a lot of research, including reading countless articles and blogs and having discussions with experts, here is my learning plan.

Learn Python

Per the recently conducted Stack Overflow Developer Survey 2018, Python stood out as the most-wanted programming language, meaning the developers who do not use it yet want to learn it the most. As one of the most widely used general-purpose programming languages, Python finds wide application in data science. Naturally, you get attracted to the best option available, and Python was the one for me. The major reasons why I chose to learn Python over the other programming languages:

Very easy to learn: Python is one of the easiest programming languages to learn. Not only is the syntax clean and easy to understand, even the most complex of data science tasks can be done in a few lines of Python code.
Efficient libraries for data science: Python has a vast array of libraries suited to various data science tasks, from scraping data to visualizing and manipulating it. NumPy, SciPy, pandas, matplotlib, and Seaborn are some of the libraries worth mentioning here.
Terrific libraries for machine learning: Learning a framework or library that makes machine learning easier to perform is very important. Python has libraries such as scikit-learn and TensorFlow that make machine learning easier and a fun activity.

To make the most of these libraries, it is important to understand the fundamentals of Python. My colleague and good friend Aaron has put out a list of top 7 Python programming books, which served as a brilliant starting point for understanding the different resources out there to learn Python. The one book that stood out for me was Learn Python Programming - Second Edition, a very good book for starting Python programming from scratch. There is also a neat skill map on Mapt, where you can progressively build up your knowledge of Python, right from the absolute basics to the most complex concepts. Another handy resource to learn the A-Z of Python is the Complete Python Masterclass. This is a slightly long course, but it will take you from the absolute fundamentals to the most advanced aspects of Python programming.

Task status: Ongoing

Learn the fundamentals of data manipulation

After learning the fundamentals of Python programming, the plan is to head straight to the Python-based libraries for data manipulation, analysis, and visualization. The major ones are the libraries we discussed above, and the plan is to learn them in the following order:

NumPy, used primarily for numerical computing
pandas, one of the most popular Python packages for data manipulation and analysis
matplotlib, the go-to Python library for data visualization, rivaling the likes of R's ggplot2
Seaborn, a data visualization library that runs on top of matplotlib, used for creating visually appealing charts, plots, and histograms

Some very good resources to learn about all these libraries:

Python Data Analysis
Python for Data Science and Machine Learning, a very good course with detailed coverage of the machine learning concepts; something to learn later.

The aim is to learn these libraries up to a fairly intermediate level, and to be able to manipulate, analyze, and visualize any kind of data, including missing, unstructured, and time-series data.

Understand the fundamentals of statistics, linear algebra, and probability

To take a step further and enter the foray of machine learning, the general consensus is to first understand the maths and statistics behind the concepts of machine learning. Implementing them in Python is relatively easy once you get the math right, and that is what I plan to do. I shortlisted some very good resources for this as well:

Statistics for Machine Learning
Stanford University - Machine Learning Course at Coursera

Task status: Ongoing

Learn machine learning (sounds odd, I know)

After understanding the math behind machine learning, the next step is to learn how to perform predictive modeling using popular machine learning algorithms such as linear regression, logistic regression, clustering, and more. Using real-world datasets, the plan is to learn the art of building state-of-the-art machine learning models with Python's very own scikit-learn library, as well as the popular TensorFlow package. To learn how to do this, the courses I mentioned above should come in handy:

Stanford University - Machine Learning Course at Coursera
Python for Data Science and Machine Learning
Python Machine Learning, Second Edition

Task status: To be started

During the course of this journey, websites like Stack Overflow and Stack Exchange will be my best friends, along with popular resources such as YouTube. As I start this journey, I plan to share my experiences and knowledge with you all.
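As a small preview of the data manipulation step above, this is the kind of toy snippet I have been practicing with; the column names and values are made up, and it assumes pandas is installed:

```python
import pandas as pd

# a tiny, made-up dataset with a missing value
df = pd.DataFrame({
    "title": ["Learn Python", "Python Data Analysis", "Statistics for ML"],
    "units_sold": [120, None, 95],
})

df["units_sold"] = df["units_sold"].fillna(0)          # handle missing data
print(df.sort_values("units_sold", ascending=False))   # rank titles by sales
```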
Do you think the learning path looks good? Is there anything else I should include? I would really love to hear your comments, suggestions, and experiences. Stay tuned for the next post, where I seek answers to questions such as 'How much Python should I learn in order to be comfortable with data science?', 'How much time should I devote per day or week to learning the concepts of data science?', and much more.

Read more

Why is data science important?
9 Data Science Myths Debunked
30 common data science terms explained

What is Keras?

Janu Verma
01 Jun 2017
5 min read
What is Keras?

Keras is a high-level library for deep learning, built on top of Theano and TensorFlow. It is written in Python and provides a scikit-learn-style API for building neural networks. It enables developers to quickly build neural networks without worrying about the mathematical details of tensor algebra, optimization methods, and numerical techniques. The key idea behind Keras is to facilitate fast prototyping and experimentation. In the words of Francois Chollet, the creator of Keras: "Being able to go from idea to result with the least possible delay is key to doing good research."

This is a huge advantage, especially for scientists and beginner developers, because one can jump right into deep learning without getting their hands dirty. Because deep learning is currently on the rise, the demand for people trained in deep learning is ever increasing. Organizations are trying to incorporate deep learning into their workflows, and Keras offers an easy API to test and build deep learning applications without considerable effort. Deep learning research is such a hot topic right now that scientists need a tool to quickly try out their ideas; they would rather spend their time coming up with ideas than putting together a neural network model. I use Keras in my own research, and I know a lot of other researchers who rely on Keras for its easy and flexible API.

What are the key features of Keras?

Keras is a high-level interface to Theano or TensorFlow, and either can be used as the backend. It is extremely easy to switch from one backend to the other.
Training deep neural networks is a memory- and time-intensive task. Modern deep learning frameworks like TensorFlow, Caffe, Torch, etc. can also run on GPU, though there might be some overhead in setting up and running the GPU. Keras runs seamlessly on both CPU and GPU.
Keras supports most of the neural layer types, e.g. fully connected, convolution, pooling, recurrent, embedding, dropout, etc., which can be combined in any way to build complex models.
Keras is modular in the sense that each component of a neural network model is a separate, standalone, fully configurable module, and these modules can be combined to create new models. Essentially, layers, activations, optimizers, dropout, losses, etc. are all different modules that can be assembled to build models. A key advantage of modularity is that new features are easy to add as separate modules. This makes Keras fully expressive, extremely flexible, and well suited to innovative research.
Coding in Keras is extremely easy. The API is very user-friendly, with the least amount of cognitive load. Keras is a full Python framework, and all coding is done in Python, which makes it easy to debug and explore. The coding style is very minimalistic, and operations are added in very intuitive Python statements.

How is Keras built?

The core component of the Keras architecture is a model. Essentially, a model is a neural network model with layers, activations, an optimizer, and a loss. The simplest Keras model is Sequential, which is just a linear stack of layers; other layer arrangements can be formed using the Functional model. We first initialize a model, add layers to it one by one, each layer followed by its activation function (and regularization, if desired), and then the cost function is added to the model. The model is then compiled.
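As a concrete illustration of that workflow, here is a minimal sketch; the data and layer sizes are toy values, and it assumes the keras package with a TensorFlow or Theano backend:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x_train = np.random.random((100, 20))          # 100 samples, 20 features
y_train = np.random.randint(2, size=(100, 1))  # binary labels

model = Sequential()                                   # initialize a model
model.add(Dense(64, activation='relu', input_dim=20))  # layer + activation
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop',                     # add loss and optimizer
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5, batch_size=32)   # train
print(model.predict(x_train[:3]))                      # predict
```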
A compiled model can be trained using the simple API (model.fit()), and once trained, the model can be used to make predictions (model.predict()). Note the similarity to the scikit-learn API. Two models can be combined sequentially or in parallel. A model trained on some data can be saved as an HDF5 file, which can be loaded at a later time. This eliminates the need to train a model again and again: train once, and make predictions whenever desired.

Keras provides an API for the most common types of layers. You can also merge or concatenate layers for a parallel model, and it is possible to write your own layers. The other ingredients of a neural network model, like the loss function, metrics, optimization methods, activation functions, and regularization, are all available with the most common choices. Another very useful component of Keras is the preprocessing module, with support for manipulating and processing image, text, and sequence data.

A number of deep learning models, with weights obtained by training on big datasets, are also made available. For example, we have the VGG16, VGG19, InceptionV3, Xception, and ResNet50 image recognition models, with their weights after training on ImageNet data. These models can be used for direct prediction, feature building, and/or transfer learning.

One of the greatest advantages of Keras is the huge list of example code available on the Keras GitHub repository (with discussions on the accompanying blog) and on the wider Internet. You can learn how to use Keras for text classification using an LSTM model, generate inceptionistic art using Deep Dream, use pre-trained word embeddings, build a variational autoencoder, or train a Siamese network, among other things. There is also a visualization module, which provides the functionality to draw a Keras model; it uses the graphviz library to plot and save the model graph to a file. All in all, Keras is a library worth exploring, if you haven't already.

About the Author

Janu Verma is a Researcher at the IBM T.J. Watson Research Center, New York. His research interests are in mathematics, machine learning, information visualization, computational biology, and healthcare analytics. He has held research positions at Cornell University, Kansas State University, the Tata Institute of Fundamental Research, the Indian Institute of Science, and the Indian Statistical Institute. He has written papers for IEEE Vis, KDD, the International Conference on Healthcare Informatics, Computer Graphics and Applications, Nature Genetics, IEEE Sensors Journal, etc. His current focus is on the development of visual analytics systems for prediction and understanding. He advises startups and companies on data science and machine learning in the Delhi-NCR area; email to schedule a meeting. Check out his personal website here.

5 ways to reduce App deployment time

Guest Contributor
27 Oct 2018
6 min read
Over 6,000 mobile apps are released on the Google Play Store every day. This breeds major competition among apps that are constantly trying to reach more consumers. Spoilt for choice, the average app user is no longer willing to put up with lags, errors, and other things that might go wrong with their app experience. Because consumers have such high expectations, developers need to find a way to release new updates, or deployments, faster. This means app developers need to keep deployment time low without compromising quality. The world of app development is always evolving, and every new deployment comes with risk. You need the right strategy to keep things from going wrong at every stage of the deployment process. Luckily, it's not as complicated as you might think to create a workflow that won't breed errors. Here are some tips to get you started.

1. Logging to catch issues before they happen

An application log is a file that keeps track of events logged by a piece of software, including vital information such as errors and warnings. Logging helps catch potential problems before they happen, and even if a problem does arise, you'll have a log showing why it occurred. Logging also provides a history of earlier version updates which you can restore from. You have two options for application logging: creating your own framework or utilizing one that's readily available. While it's entirely possible to create your own, based on your own decisions about what's important for your application, there are already tools that work effectively which you can adopt for your project. You can learn more about creating a system for finding problems before they happen here: Python Logging Basics - The Ultimate Guide to Logging.
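As a starting point, here is a minimal sketch of what application logging can look like with Python's standard logging module; the file name, logger name, and messages are illustrative:

```python
import logging

# write warnings, errors, and informational events to an application log file
logging.basicConfig(
    filename="app.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("checkout")

def place_order(user_id, amount):
    log.info("placing order for user %s, amount %s", user_id, amount)
    try:
        ...  # a hypothetical call to the payment gateway would go here
    except Exception:
        log.exception("order failed for user %s", user_id)  # records the traceback
        raise
```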
2. Batching to identify errors and breakdowns quickly

Deploying in batches gives developers much more control than releasing all major updates at once. When you reduce the amount of change in every update, it's easier to identify errors and breakdowns. If you update your app with large overhauls, you'll spend countless hours hunting for where something went wrong. Even if your team already uses small batch updates, you can make this process easier through automation, using batch automation tools from vendors such as Compuware, HelpSystems, or Microsystems. Writing fresh code every time you need to make a change takes time and effort. When you have an agile schedule, you need to optimize your code to ensure time isn't spent on repetitive tasks. Automated batching will help your team work faster and with fewer errors.

3. Key Performance Indicators to benchmark success

Key Performance Indicators, also known as KPIs, anticipate the success of your app. You should identify these early on so you're able not only to recognize the success of your app but also to notice areas that need improving. The KPIs you choose depend on the type of app. In the app world, some of the most common KPIs are:

Number of downloads
App open rate
New users
Retention rate
Session length
Conversion rate from users to customers

Knowing your KPIs will help you anticipate user trends. If you notice your session length going down, for example, that's a good sign it's time for an update. On the other hand, an increase in downloads is a good indicator that you're doing something right.

4. Testing

Next, you'll want to set up a system for testing your app deployments effectively. Testing is important to make sure everything is working properly, so you can quickly launch your newest deployment without worrying about things going wrong. You can create sample tests for every aspect of the user experience, like logins, key pages, and APIs. However, you'll need to choose a method (or several) of testing that makes sense for your deployment size. Common application testing types:

Functionality testing: ensures the app is working on all devices.
Performance testing: introduces mobile challenges, like poor network coverage and low memory, that stress the application's server.
Memory leakage testing: checks for optimized memory processing.
Security testing: as security becomes a greater concern for users, apps need to be tested to ensure data is protected.

The good news is that much of this testing can be done through automated tools. With just a few clicks, you can test for all of the things above. The most common automated testing tools include Selenium, TestingWhiz, and Test IO.

5. Deployment tracking software

When you're continuously deploying new updates for your app, you need a way to track these changes in real time. This helps your team see when deployments happened, how they relate to prior deployments, and how they've affected your predetermined KPIs. While you should still have a system for testing, automating code, and tracking errors, some errors will still happen, since there is no way to prevent every problem 100% of the time. Using deployment tracking software such as Loggly (full disclosure, I work at Loggly), Raygun, or Airbrake will help cut down on time spent searching for an error. Because these tools immediately identify whether an error is related to newly released code, you can spend less time locating a problem and more time solving it.

When it comes to your app's success, you need to make sure your deployments are as pain-free as possible. You don't have time to waste, since competition is fierce today, but that is no excuse to compromise on quality. The above tips will streamline your deployment process so you can focus on building something your users love.

About the Author

Ashley is an award-winning writer who discovered her passion in providing creative solutions for building brands online. Since her first high school award in Creative Writing, she continues to deliver awesome content through various niches.

Read more

Mastodon 2.5 released with UI, administration, and deployment changes
Google App Engine standard environment (beta) now includes PHP 7.2
Multi-Factor Authentication System – Is it a Good Idea for an App?

What you need to know about IoT product development

Raka Mahesa
10 Oct 2017
5 min read
Software is eating the world. It's a famous statement made by Marc Andreessen back in 2011 about the rise of software companies and how software would disrupt many, many industries. Today, as we live among devices that run on smart software, that statement couldn't be more true. We live surrounded by dozens of devices connected to each other, as the Internet of Things slowly spreads throughout our world. Each year, a batch of new smart devices is introduced to the market, hoping to find a place in our connected lives.

Have you ever wondered, though, how these smart devices are made? Are they a software project? Or are they actually a hardware project? What considerations do we need to think about when developing these products? With those questions in mind, let's take a closer look at product development for the Internet of Things.

Before we go on, let's clarify the kind of product we will be discussing. For this article, what counts as a product is a software or hardware project that was not made for personal use. The scale and complexity of the product don't really matter. It could be a simple connected camera network, it could be a brand new type of device that the world has never seen before, or it could simply be adding an analytics tool to a currently working device.

Working with hardware is expensive

Now that we have that cleared up, let's start with the first and most important thing you need to know about IoT product development: working with hardware is not only different from developing software, it's also more difficult and more expensive. In fact, the reason so many startup companies are popping up these days is that starting a software business is much cheaper than starting a hardware business. Before software was prevalent, it was much harder and more costly to start a technology business.

Unlike software, hardware isn't easy to change. Once you're set to manufacture a particular piece of hardware, there's no changing the end result, even if there's a mistake in your initial design. And even if your design is flawless, there could still be a problem with the material you're working with, or even with the manufacturer themselves. So, when working with hardware, you need to be extra careful, because a single mistake could end up being exceptionally costly.

Fortunately, these days there are solutions that can alleviate those issues, like 3D printing. With 3D printing, we can cheaply produce our hardware design. That way, we can quickly evaluate the look and detect any issue with the design without going back and forth with the manufacturer. Do keep in mind that even with 3D printing, we still need to test our hardware with the actual, final material and manufacturing method.

Requirements and functionality are important

Another thing you need to know about IoT product development is that you need to figure out the full requirements and functionality of your product very early on. Yes, when you're developing software, you also need to establish the requirements at the beginning, but it's a bit different with IoT, because the requirements affect everything in the project.

You see, with software development, your toolkit is meant to be general and capable of dealing with most problems. For example, if you want to build a web application, then most of the time the framework and language you choose will be able to build the application you want.
The development environment for IoT, however, doesn't work that way; it is much more specific. A given IoT toolkit is meant to solve problems under certain conditions. Coupled with the fact that IoT products have additional factors to consider, like power consumption, choosing the right platform for the right project is a must. For example, if you find out later in the project that you need more processing power than your hardware platform provides, you will need to retool plenty of things.

Consider UI

User interaction is another big thing to consider in IoT product development. A lot of devices don't have a screen or any complicated input method, so you need to figure out early how users will interact with your product. Should the user be able to do any interaction right on the device? Or should the user interact with the device using their phone? Should the user be able to access the device remotely? These are all questions you need to answer before you can determine the components your product requires.

Consider connectivity

Speaking of remote access, connectivity is another factor to consider in IoT product development. While there are many ways for your product to connect to the Internet, you should also ask whether it makes sense for your product to have an Internet connection at all. Maybe your product will be placed in a spot that wireless connections don't reach. Maybe, instead of using the Internet, your product should transfer its data and logs whenever a storage device is connected to it.

There are a lot of things to consider when you are developing products for the Internet of Things. The topics we discussed should provide you with a good place to start.

About the Author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

Top 5 NoSQL Databases

Akram Hussain
31 Oct 2014
4 min read
NoSQL has seen a sharp rise in both adoption and migration from the tried and tested relational database management systems. The open source world has accepted it with open arms, which wasn't the case with large enterprise organisations that still prefer and require ACID-compliant databases. However, with so many NoSQL databases around, it's difficult to keep track of them all! Let's explore the most popular and most distinctive ones available to us:

1 - Apache Cassandra

Apache Cassandra is an open source NoSQL database. Cassandra is a distributed database management system that is massively scalable. An advantage of using Cassandra is its ability to manage large amounts of structured, semi-structured, and unstructured data. What makes Cassandra more appealing as a database system is its ability to scale horizontally, and it's one of the few database systems that can process data in real time while delivering high performance and maintaining high availability. The mixture of a column-oriented database with a key-value store means not all rows require every column, but the columns are grouped, which is what makes them look like tables. Cassandra is perfect for 'mission critical' big data projects, as it offers no single point of failure if a data node goes down.

2 - MongoDB

MongoDB is an open source, schemaless NoSQL database system; its unique appeal is that it's a 'document database' as opposed to a relational database. This basically means it's a 'data dumpster' that's free for all. The added benefit of using MongoDB is that it provides high performance, high availability, and easy scalability (auto-sharding) for large sets of unstructured data in JSON-like documents. MongoDB is the ultimate opposite of the popular MySQL: MySQL data has to be read in rows and columns, which has its own set of benefits with smaller sets of data.
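To give a feel for the document model, here is a small sketch using pymongo, MongoDB's official Python driver. The connection string, database, and field names are placeholders, and you would need a MongoDB instance running locally to execute it:

    from pymongo import MongoClient

    # Connect to a local MongoDB instance (placeholder address).
    client = MongoClient("mongodb://localhost:27017/")
    products = client["shop"]["products"]  # created lazily on first write

    # Documents in the same collection need not share a schema.
    products.insert_one({"name": "laptop", "price": 899, "tags": ["electronics"]})
    products.insert_one({"name": "mug", "price": 7})  # no 'tags' field, and that's fine

    # Query with a filter document instead of SQL rows and columns.
    for doc in products.find({"price": {"$lt": 100}}):
        print(doc["name"], doc["price"])

The schemaless insert is precisely what separates a document store like MongoDB from the row-and-column model of MySQL described above.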
3 - Neo4j

Neo4j is an open source NoSQL graph database, and the frontrunner of the graph-based model. As a graph database, it manages and queries highly connected data reliably and efficiently. It allows developers to store data more naturally for domains such as social networks and recommendation engines. The data collected from sites and applications is initially stored in nodes, which are then represented as graphs.

4 - Hadoop

Hadoop is easy to overlook as a NoSQL database because of its wider ecosystem of big data tools. It is a framework for distributed data storage and processing, designed to handle huge amounts of data while limiting financial and processing-time overheads. Hadoop includes a database known as HBase, which runs on top of HDFS and is a distributed, column-oriented data store. HBase is better described as a distributed storage system for Hadoop nodes, which are then used to run analytics using MapReduce v2, also known as YARN.

5 - OrientDB

OrientDB has been included as a wildcard! It's a very interesting database and one that has everything going for it, but it has always been in the shadow of Neo4j. OrientDB is an open source NoSQL hybrid graph-document database, developed to combine the flexibility of a document database with the complexity of a graph database (Mongo and Neo4j all in one!). With the growth of complex and unstructured data (such as social media), relational databases were not able to handle the demands of storing and querying this type of data. Document databases were developed as one solution, and visualizing data through nodes was another. OrientDB has combined both into one, which sounds awesome in theory but might be very different in practice! Whether the hybrid approach works and is widely adopted remains to be seen.

Solving Day 7 of Advent of Code using Swift

Nicky Gerritsen
07 Mar 2016
7 min read
Eric Wastl created the website Advent of Code, which published a new programming exercise each day from the first of December until Christmas. I came across the website somewhere in the first few days of December, and as I had participated in the ACM ICPC in the past, I expected I would be able to solve these problems. I decided it would be a good idea to use Swift to write my solutions. While solving the problems, I came across one that worked out really well in Swift, and I'd like to explain it in this post.

Introduction

After reading the problem, I immediately noticed some interesting points:

- We can model the input as a graph, where each wire is a vertex and each connection in the circuit connects some vertices to another vertex. For example, x AND y -> z connects both x and y to vertex z.
- The example input is ordered in such a way that you can just iterate over the lines from top to bottom and apply the changes. However, the real input does not have this ordering.
- To process the real input in the correct order, note that the input is basically a DAG (or at least it should be, otherwise the problem cannot be solved). This means we can use topological sorting to arrange the vertices of the graph in the order we should walk them.
- Although in the example input AND, OR, NOT, LSHIFT, and RSHIFT always seem to operate on a wire, this is not the case: they can also operate on a constant value.

Implementation

Note that I replaced some guard lines with forced unwrapping here. The source code linked at the end contains the original code.

First off, we define a Source, which is an element of an operation; i.e. in x AND y, both x and y are a Source:

enum Source {
    case Vertex(String)
    case Constant(UInt16)

    func valueForGraph(graph: Graph) -> UInt16 {
        switch self {
        case let .Vertex(vertex):
            return graph.vertices[vertex]!.value!
        case let .Constant(val):
            return val
        }
    }

    var vertex: String? {
        switch self {
        case let .Vertex(v):
            return v
        case .Constant(_):
            return nil
        }
    }

    static func parse(s: String) -> Source {
        if let i = UInt16(s) {
            return .Constant(i)
        } else {
            return .Vertex(s)
        }
    }
}

A Source is either a Vertex (i.e. a wire), which has a corresponding string identifier, or a Constant, which contains some value. We define a function that returns the value of a Source given a Graph (more on this later). For a constant source the graph does not matter, but for a wire we look its value up in the graph. The computed vertex property extracts the identifier of the source's vertex, if any. Finally, we also have a function that helps us parse a string into a Source.

Next up we have an Operation enumeration, which holds all information about one line of input:

enum Operation {
    case Assign(Source)
    case And(Source, Source)
    case Or(Source, Source)
    case Not(Source)
    case LeftShift(Source, UInt16)
    case RightShift(Source, UInt16)

    func applytoGraph(graph: Graph, vertex: String) {
        let v = graph.vertices[vertex]!
        switch self {
        case let .Assign(source1):
            v.value = source1.valueForGraph(graph)
        case let .And(source1, source2):
            v.value = source1.valueForGraph(graph) & source2.valueForGraph(graph)
        /* etc. for the other cases */
        case let .RightShift(source1, bits):
            v.value = source1.valueForGraph(graph) >> bits
        }
    }

    static func parseOperation(input: String) -> Operation {
        if let and = input.rangeOfString(" AND ") {
            let before = input.substringToIndex(and.startIndex)
            let after = input.substringFromIndex(and.endIndex)
            return .And(Source.parse(before), Source.parse(after))
        }
        /* etc. for the other options */
    }

    var sourceVertices: [String] {
        /* code that switches on self and extracts the vertex from each source */
    }
}

The Operation enum has a static function that parses a line of input into an Operation, and a function that applies the operation to a graph. Furthermore, it has a computed variable that returns all source vertices of the operation.

Now, a Vertex is an easy class:

class Vertex {
    var idx: String
    var outgoing: Set<String>
    var incoming: Set<String>
    var operations: [Operation]
    var value: UInt16?

    init(idx: String) {
        self.idx = idx
        self.outgoing = []
        self.incoming = []
        self.operations = []
    }
}

It has an ID and keeps track of a set of incoming and outgoing edges (we need both for topological sorting). Furthermore, it has a value (which is initially not set) and a list of operations that have this vertex as their target. Because we want to store vertices in a set, we need to make the class conform to Equatable and Hashable. Since we have a unique string identifier for each vertex, this is easy:

extension Vertex: Equatable {}

func ==(lhs: Vertex, rhs: Vertex) -> Bool {
    return lhs.idx == rhs.idx
}

extension Vertex: Hashable {
    var hashValue: Int {
        return self.idx.hashValue
    }
}

The last structure we need is a Graph, which basically holds a list of all vertices:

class Graph {
    var vertices: [String: Vertex]

    init() {
        self.vertices = [:]
    }

    func addVertexIfNotExists(idx: String) {
        if let _ = self.vertices[idx] {
            return
        }
        self.vertices[idx] = Vertex(idx: idx)
    }

    func addOperation(operation: Operation, target: String) {
        // Add an operation for a given target to this graph
        self.addVertexIfNotExists(target)
        self.vertices[target]?.operations.append(operation)
        let sourceVertices = operation.sourceVertices
        for v in sourceVertices {
            self.addVertexIfNotExists(v)
            self.vertices[target]?.incoming.insert(v)
            self.vertices[v]?.outgoing.insert(target)
        }
    }
}

We define a helper function that adds a vertex if it does not already exist. We then use this helper to define a function that adds an operation to the graph, together with all required vertices and edges.

Now we need to topologically sort the vertices of the graph, which can be done using Kahn's algorithm. This can be written in Swift almost exactly following the pseudocode:

extension Graph {
    func topologicalOrder() -> [Vertex] {
        var L: [Vertex] = []
        var S: Set<Vertex> = Set(vertices.values.filter { $0.incoming.count == 0 })

        while S.count > 0 {
            guard let n = S.first else { fatalError("No more nodes in S") }
            S.remove(n)
            L.append(n)
            for midx in n.outgoing {
                guard let m = self.vertices[midx] else { fatalError("Can not find vertex") }
                n.outgoing.remove(m.idx)
                m.incoming.remove(n.idx)
                if m.incoming.count == 0 {
                    S.insert(m)
                }
            }
        }
        return L
    }
}

Now we are basically done, as we can write a function that calculates the value of a given wire in a graph:

func getFinalValueInGraph(graph: Graph, vertex: String) -> UInt16? {
    let topo = graph.topologicalOrder()
    for v in topo {
        for op in v.operations {
            op.applytoGraph(graph, vertex: v.idx)
        }
    }
    return graph.vertices[vertex]?.value
}

Conclusions

This post (hopefully) gave you some insight into how I solved one of the bigger Advent of Code problems. As you can see, Swift has some really nice features that help in this case, like enums with associated types and functional methods like filter. If you like these kinds of problems, I suggest you go to the Advent of Code website and start solving them; quite a few are really easy to get started with. The complete code for this blog post can be found at my GitHub account.

About the author

Nicky Gerritsen is currently a Software Architect at StreamOne, a small Dutch company specialized in video streaming and storage. In his spare time he loves to code on Swift projects and learn about new things in Swift. He can be found on Twitter @nickygerritsen and on GitHub: https://github.com/nickygerritsen/.

10+ reasons to love Raspberry Pi

Vincy Davis
26 Jun 2019
9 min read
It's 2019, and unless you've been living under a rock, you know what a Raspberry Pi is. The series of credit-card-sized board computers, initially developed to promote computer science in schools, released its Raspberry Pi 4 Model B in the market yesterday.

Read More: Raspberry Pi 4 is up for sale at $35, with 64-bit ARM core, up to 4GB memory, full-throughput gigabit Ethernet and more!

Since its release in 2012, Raspberry Pi has had several iterations and variations. Today it has become a phenomenon: it's the world's third best-selling, general-purpose computer. It's inside laptops, tablets, and robots. This year it's offering students and young people an opportunity to conduct scientific investigations in space, by writing computer programs that run on Raspberry Pi computers aboard the International Space Station. Developers around the world are using different models of this technology to implement varied applications.

What do you do with your Raspberry Pi?

Following the release of Raspberry Pi 4, an interesting HN thread on applications of the Raspberry Pi exploded with over a thousand comments and over 1.5k votes. The original poster asked, "I have a Raspberry Pi and I mainly use it for VPN and Pi-hole. I'm curious, if you have one, have you found it useful? What do you do with your Raspberry Pi?" Below are some select use cases from the thread.

Innovative: a Raspberry Pi Zero transformed a braille display into a full-featured Linux laptop

A braille user transformed a braille display into a full-featured Linux laptop using a Raspberry Pi Zero. The braille display featured a small compartment with micro-USB, which the user converted into an ARM-based, monitorless Linux laptop with a keyboard and a braille display. It can be charged and powered via USB, so it can also run from a power bank or a solar charger, potentially running for days, rather than just hours, without needing a standard wall jack. This helped the user save space, power, and weight.

Monitor climate change effects

Changes in climate have been affecting every one of us, in some way or the other. Some developers are using Raspberry Pi innovatively to tackle these climatic changes.

Monitoring in-house CO2 levels

A developer working with the IBM Watson Group states that he uses several Raspberry Pis to monitor CO2 levels in his house. Each Raspberry Pi has a CO2 sensor, with a Python script that retrieves data from the sensor and uploads it to a server, which is also a Raspberry Pi. After detecting that his bedroom had high levels of CO2, he improved the ventilation and brought the CO2 levels down.

Measuring the condition of coral reefs

NemoPi, a Nemo Foundation technology, works as an underwater weather station. It uses Raspberry Pi computers to protect coral reefs from climate change by measuring temperature, visibility, pH levels, and the concentration of CO2 and nitrogen oxide at each anchor point.

Checking weather updates remotely

You can also use the Raspberry Pi for weather monitoring, checking changes in the weather remotely from a smartphone. The main conditions in the weather monitor are temperature, humidity, and air quality. A Raspberry Pi 3 Model B can be programmed to take data from an Arduino and, depending on the data acquired, actuate the cameras. The Pi receives data from the sensors and uploads it to the cloud so that appropriate action can be taken.
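As a rough illustration of the pattern these monitoring projects share (read a sensor, then push the reading to a collection server), here is a minimal Python sketch. The sensor-reading function, server URL, and field names are hypothetical stand-ins for whatever hardware and endpoint you actually use:

    import json
    import random
    import time
    import urllib.request

    SERVER = "http://192.168.1.50:8080/readings"  # hypothetical collection server

    def read_co2_ppm():
        # Stand-in for a real sensor driver; returns a fake CO2 reading in ppm.
        return 400 + random.random() * 600

    while True:
        reading = {"sensor": "bedroom", "co2_ppm": read_co2_ppm(), "ts": time.time()}
        request = urllib.request.Request(
            SERVER,
            data=json.dumps(reading).encode(),
            headers={"Content-Type": "application/json"},
        )
        try:
            urllib.request.urlopen(request, timeout=5)
        except OSError:
            pass  # keep sampling even if the server is briefly unreachable
        time.sleep(60)  # one reading per minute

Swap read_co2_ppm for a real driver and the same loop covers the CO2, weather, and reef-monitoring setups described above.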
Making Home Automation feasible

Raspberry Pi has been designed to let you create whatever you can dream of, and of course developers are making full use of it. There are many instances of developers using Raspberry Pi to make their home automation more feasible.

Automatic pet door drive

A developer has used this technology to install a fire-protection-approved door drive for their pets. It is used along with another Raspberry Pi which analyzes a video stream and detects the pet. If the pet is in the frame for a set amount of time, a message is sent to the Pi connected to the door drive, which opens up slightly to let the pet in.

Home automation

The Raspberry Pi 3 model works with Home Assistant and a Z-Wave USB dongle, and provides information on climate, covers, lights, locks, sensors, switches, and thermostats. There are also many takers for the RaZberry card, a tiny daughter card that sits on top of the Raspberry Pi GPIO connector. It is powered by the Raspberry Pi board with 3.3 V and communicates using UART TTL signals. It supports home automation and is compatible not only with all models of Raspberry Pi but also with third-party software.

Watering a plant via a reddit bot!

In another simple instance, a subreddit has control over the watering of a live plant. The Pi runs a reddit bot that reads the votes and switches on the pump to water the plant. It also collects data about sunlight, moisture, temperature, and humidity to help inform the decision about watering (a minimal sketch of the pump-control side appears at the end of this section).

Build easy electronics projects

Raspberry Pi can be used to learn coding and to build electronics projects, and for many of the things your desktop PC does, like spreadsheets, word processing, and browsing the internet.

Make a presentation

Rob Reilly, an independent consultant, states that he uses a Raspberry Pi in his Steampunk conference badge while giving tech talks. He plugs it into the HDMI port, powers up the badge, and runs slides with a nano keyboard/mousepad and LibreOffice. This works great for him: the badge displays a promotional video on its 3.5" touchscreen and runs on a cell-phone power pack.

Control a 3D printer, a camera, or even IoT apps

One Raspberry Pi user states that he uses the Raspberry Pi 3 model to run OctoPrint, an open source web interface for 3D printers that lets you control and monitor all aspects of the printer and its print jobs. A system architect says that he regularly uses Raspberry Pis for digital signage, controlled servos, and cameras. Currently, he also uses a Pi Zero W model for demo Azure IoT solutions. Raspberry Pi is also used as a networked LED marquee controller.

Read More: Raspberry Pi Zero W: What you need to know and why it's great

FullPageOS is a Raspberry Pi distribution for displaying one webpage in full screen. It includes Chromium out of the box and the scripts necessary to load it at boot. Its repository contains the source script to generate the distribution from an existing Raspbian image. A developer who is the former VP of Engineering at the Blekko search engine states that he uses a Raspberry Pi to run the WaveForms Live software from Digilent, hooked to an Analog Discovery on his workbench. He also uses a Raspberry Pi to drive a display showing a dashboard of various things like Nagios alerts and data trends.
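For a flavor of how little code the hardware side of a project like the plant-watering bot needs, here is a tiny sketch using the gpiozero library on a Pi. The GPIO pin number, the vote-checking function, and the watering duration are invented for illustration:

    from time import sleep
    from gpiozero import OutputDevice

    pump = OutputDevice(17)  # hypothetical GPIO pin wired to the pump relay

    def subreddit_votes_say_water():
        # Stand-in for the reddit-bot logic described above.
        return True

    if subreddit_votes_say_water():
        pump.on()   # energize the relay, starting the pump
        sleep(5)    # water for five seconds
        pump.off()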
Read More: Intelligent mobile projects with TensorFlow: Build a basic Raspberry Pi robot that listens, moves, sees, and speaks [Tutorial]

Enjoy gaming with Raspberry Pi

There are many Raspberry Pi-exclusive games available to its users. Minecraft Pi Edition is one such game, which comes preinstalled with Raspbian. Most games designed to run natively on the Raspberry Pi are written in Python.

Raspberry Pi is being used to stream PlayStation backups over SMB, by networking the onboard Ethernet port of the Pi to allow access to a Samba share service running on it. This allows seamless playback of games with heavy full-motion-video sequences. With additional support for XLink Kai to play LAN-enabled games over the Pi's WiFi connection, it enables smooth, lag-free multiplayer on original hardware. A user on Hacker News comments that he uses RetroPie, which has a library of many interesting games.

Loved not only by developers, but also by the general public

These $35 masterpieces put big power in the hands of anyone with a little imagination and some spare electronics. With fast processing and good network connectivity, even beginners can use a Raspberry Pi for practical purposes.

A college student on Hacker News claims that he uses a Raspberry Pi 3B+ to automate his data entry job using Python and Selenium, a portable framework for testing web applications that provides a playback tool for authoring functional tests. Since the job is automated, he can take long coffee breaks, and even travel, without worrying about it.

Kevin Smith, the co-founder of Vault, states that his office uses a Raspberry Pi and blockchain NFTs to control the coffee machine. An owner of the NFT, once authenticated, can select a coffee type on their phone, which then signals the Raspberry Pi to make that coffee by jumping the contacts that previously had to be pressed via the machine's buttons.

In another interesting use, a user set up a Raspberry Pi to get real-time information from the local transit authority and its GPS-equipped buses to help those stranded at the bus station. A Raspberry Pi 3 can also be installed in a Tesla, within the car's internal network, as a bastion box running software that provides interaction with the car's entertainment system.

Read More: Build your first Raspberry Pi project

Last year, the Raspberry Pi Foundation launched a new device called the Raspberry Pi TV HAT, which lets you decode and stream live TV. It connects to the Raspberry Pi via a GPIO connector and has a port for a TV antenna. TensorFlow 1.9 also officially supports the Raspberry Pi, enabling users to try their hand at live machine learning projects.

There's no doubt that, with all these varied features and low-priced computers, developers and the general public have many opportunities to experiment with Raspberry Pi and get their work done. From student projects to international ones, Raspberry Pi is being put to use with confidence in many settings.

You can now install Windows 10 on a Raspberry Pi 3
Raspberry Pi opens its first offline store in England
Setting up a Raspberry Pi for a robot – Headless by Default [Tutorial]

How to become an exceptional Performance Engineer

Guest Contributor
14 Dec 2019
8 min read
Whenever I think of performance engineering, I am reminded of Amazon CEO Jeff Bezos' statement, "Focusing on the customer makes a company more resilient." Any company which follows this consumer-focused approach has a performance engineering domain in it, though in varying capacity and form. The connection is simple: more and more businesses are becoming web-based, interacting with their customers digitally. In such a scenario, if they want to provide an exceptional customer experience, they have to build resilient, stable, user-centric, and high-performing web systems and applications. And to do that, they need performance engineering.

What is Performance Engineering?

Let me explain performance engineering with an example. Suppose your team is building an online shopping portal. The developers will build a system that allows people to access products and buy them. They will ensure that the entire transaction is smooth, uncomplicated for the user, and quick. Now imagine that to promote the portal, you run a flash sale, and 1,000 users come onto the platform and start transacting simultaneously. Under this load, your system starts performing slower, a lot of transactions fail, and your users are dejected. This will directly affect your brand image, customer loyalty, and revenue. How about we fix this before such a situation occurs? That is exactly what performance engineering entails.

A performance engineer would take such scenarios into account and conduct load tests to check the system's performance during the development phase itself. Load tests check the behavior of your system in particular situations. A 'load' is a possible scenario that can affect the system, for instance, sale offers or peak times. If the system handles the load, the engineer checks whether it is scalable. If the system is unable to handle it, they analyze the results, find the possible bottleneck by checking the code, and try to rectify it. So, for the above example, a performance engineer would have tested the system for 100 transactions at a time, then 500, then 1,000, and perhaps up to one hundred thousand.

Hence, performance engineering ensures crash-free operation of a system, software, or application. Using systematic processes, techniques, practices, and activities, a performance engineer ensures that the performance requirements are met during the development cycle. However, this is not a blanket role; it varies with your field of operation. The work of a performance engineer on a web application is a lot different from that of a database performance engineer or a streaming performance engineer. For each of these, the 'load' varies, but the goal is the same: ensuring that your system is resilient enough to shoulder it.

Before I dive deeper into the role of a performance engineer, I'd like to clarify the difference between a performance tester and a performance engineer. (Yes, they are not the same!)

Performance Tester versus Performance Engineer

Many people think that 2-3 years of experience as a performance tester can easily land you a performance engineering job. Well, no. It is a long journey, which requires much more knowledge than a tester has. A performance tester has testing knowledge and knows about performance analysis and performance monitoring concepts across different applications.
They essentially conduct a 'load test' to check the performance, stability, and scalability of a system, and produce reports for the developers to work from. Their work ends there. This is not the case for a performance engineer. A performance engineer looks for the root cause of a performance issue, works towards finding a possible solution for it, and then tunes and optimizes the system until the performance parameters are met. Simply put, performance testing can be considered a part of performance engineering, but not the same thing.

Roles and Responsibilities of a Performance Engineer

Designing Effective Tests

As a performance engineer, your first task is to design an effective test of the system. I found a checklist on DZone that is really helpful for designing tests:

- Identify your goals, requirements, desires, workload model, and your stakeholders.
- Understand how to test concurrency, arrival rates, and scheduling.
- Understand the roles of scalability, capacity, and reliability as quality attributes and requirements.
- Understand how to set up and create test data, and how to manage it.

Scripting, Running Tests and Interpreting Results

There are several performance testing tools available in the market, but you will have to work in different languages based on the tool you use. For instance, you'd build your tests in C and JavaScript while working with Micro Focus LoadRunner, and script in Java and JavaScript for Apache JMeter. Once your test is ready, you run it against your system. Make sure you use consistent metrics while running these tests, or else your results will be inaccurate. Finally, you interpret the results. Here you have to figure out what the bottlenecks are and where they occur. For that, you have to read the results, analyze the graphs your performance testing tool has produced, and draw conclusions.
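To make the scripting-and-running step tangible, here is a deliberately minimal load-test sketch in Python using only the standard library. The target URL and the user and request counts are placeholders, and a real engagement would use a dedicated tool such as JMeter or LoadRunner; this just shows the shape of the workflow (generate concurrent load, record timings, summarize):

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET = "http://localhost:8000/"  # placeholder endpoint under test
    USERS = 50                         # simulated concurrent users
    REQUESTS_PER_USER = 10

    def simulate_user(_):
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            try:
                urllib.request.urlopen(TARGET, timeout=10).read()
                timings.append(time.perf_counter() - start)
            except OSError:
                timings.append(None)  # record a failed request
        return timings

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = [t for user in pool.map(simulate_user, range(USERS)) for t in user]

    ok = [t for t in results if t is not None]
    print(f"requests: {len(results)}, failures: {len(results) - len(ok)}")
    if ok:
        print(f"avg latency: {sum(ok) / len(ok):.3f}s, max: {max(ok):.3f}s")

Even a toy like this surfaces the questions a performance engineer cares about: what happens to latency and failure rate as USERS grows?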
Fine Tuning And Performance Optimisation

Once you know what the bottleneck is and where it occurs, you have to find a solution that overcomes it and enhances the performance of the system you are testing. (Something a performance tester won't do!) Your task is to ensure that the system or application is optimized to the point where it works well at the maximum possible load. Of course, you can seek aid from a developer (backend, frontend, or full-stack) working on the project to figure this out, but as a performance engineer you have to be actively involved in this fine-tuning and optimization process.

There are four major skills and attributes that differentiate an exceptional performance engineer from an average one.

Proves that their load results are scalable

If you are a good performance engineer, you will not serve a half-cooked meal. First of all, take all possibilities into account. Take the example of the same online shopping portal: if you are considering a load test for 1,000 simultaneous transactions, consider both the scenario where the transactions are spread across different products and the one where they all target the same product. If your portal runs a launch sale for an exclusive product available for a limited period, you may have too many people trying to buy it at the same time. Ask yourself whether your system could withstand that load.

Proves that their load results are sustainable

Not just this; you should also consider whether your results are sustainable over a defined period of time. The system should operate without crashing. It is often recommended that a load test run for 30 minutes. While thirty minutes is enough to detect most new performance changes as they are introduced, to make these tests legitimate it is necessary to prove they can run for at least two hours at the same load. These durations may vary for different programs, systems, and applications.

Uses Benchmarks

A benchmark is essentially a point of reference against which you can compare and assess the performance of your system. It is a set standard against which you can check the quality of your product, application, or system. For some systems, like databases, standard benchmarks are readily available for you to test against. As a performance engineer, you must be aware of the performance benchmarks in your field or domain. For example, you'll find benchmarks for testing firewalls, databases, and end-to-end IT systems. The most commonly used benchmarking frameworks are Benchmark Framework 2.0 and TechEmpower.

Understands User Behavior

If you don't understand how users react in different situations, you cannot design an effective load test. A good performance engineer knows their user demographics, understands their key behavior, and knows how users will interact with the system. While it is impossible to predict user behavior entirely (a sale may bring anywhere from barely 100 to 100,000 transactions per hour), you should check user statistics, analyze user activity, and prepare your system for optimum usage.

All in all, besides strong technical skills, a performance engineer must always be far-sighted: able to see beyond what meets the eye and catch what others might miss. The role invariably requires a lot of technical expertise, but it also requires non-technical skills like problem-solving, attention to detail, and insightfulness.

About the Author

Dr Sandeep Deshmukh is the founder and CEO at Workship. He holds a PhD from IIT Bombay, and has worked in Big Data, the Hadoop ecosystem, distributed systems, AI/ML, etc. for 12+ years. He has been an Engineering Manager at DataTorrent and a Data Scientist with Reliance Industries.

7 promising real-world applications of AI-powered Mixed Reality

Sugandha Lahoti
22 Nov 2017
8 min read
Mixed Reality has become a disruptive force that is bridging the gap between reality and imagination, and with AI it is now poised to change the world as we see it! The global mixed reality market is expected to reach USD 6.86 billion by 2024. Mixed Reality has found application not just in the obvious gaming and entertainment industries; it also has great potential in business sectors ranging from manufacturing, travel, and medicine to advertising. Maybe that is why the biggest names in tech are battling it out to capture the MR market with their devices: Microsoft HoloLens, Google Glass 2.0, and Meta 2 headsets, to name a few. Incorporating Artificial Intelligence is their next step towards MR market domination. So what's all the hype about MR, and how can AI take it to the next level?

Through the looking glass: Understanding Mixed Reality

Mixed reality is essentially a fantastic concoction of virtual reality (a virtual world with virtual objects) and augmented reality (the real world with digital information). This means virtual objects are overlaid on the real world, and mixed reality enables the person experiencing the MR environment to perceive those virtual objects as 'real ones'. While in augmented reality it is easy to break the illusion and recognize that the objects are not real (hello, Pokemon Go!), in Mixed Reality it is harder, because virtual objects behave like real-world objects. So when you lean in close to or interact with a virtual object in MR, it gets closer the way a real object would.

The MR experience is made possible by mixed reality devices, which are typically lightweight and wearable. They are generally equipped with front-mounted cameras that recognize the distinctive features of the real world (such as objects and walls) and blend them with the virtual imagery seen through the headset. They also include a processor for handling the information relayed by an array of sensors embedded in the headset; the algorithms used for pattern recognition run on a cloud-based server. The devices then use a projector to display virtual images in real environments, which are finally reflected to the eye with the help of beam-splitting technology. All of this already sounds magical, so what can AI do for MR to top it?

Curiouser and curiouser: The AI-powered Mixed Reality

Mixed Reality and Artificial Intelligence are two powerful technologies, and their convergence means a seamless immersive experience for users that blends virtual and physical reality. MR devices already enable interaction with virtual holograms in the physical environment, combining virtual worlds with reality. But most MR devices require a large number of calculations and adjustments to accurately determine the position of a virtual object in a real-world scene, and they then apply rules and logic to those objects to make them behave like real-world objects. Because these computations happen on the cloud, the results show a perceptible time lag that gets in the way of a truly immersive experience. User mobility is also restricted by current device limitations.

Recently there has been a rise of the AI coprocessor in Mixed Reality devices. The announcement of Microsoft's HoloLens 2 project, an upgrade to the existing MR device which now includes an AI coprocessor, is a case in point. By using AI chips for such computations, MR devices can deliver high-precision results faster.
It means algorithms and calculations can run instantaneously without the need for data to be sent to and from a cloud. Having the data locally on your headset eliminates the time lag, creating truly real-time immersive experiences. In other words, as the visual data is analyzed directly on the device and computationally exhaustive tasks are performed close to the data source, the enhanced processing speed results in quicker performance. And since the data always remains on your headset, fewer computations need to be performed on the cloud, so the data is more secure.

Using an AI chip also allows flexible implementation of deep neural networks. These help automate complex calculations such as depth-perception estimation and generally give MR devices a better understanding of the environment. Generative models from deep learning can be used to generate believable virtual characters (avatars) in the real world. Images can also be compressed more intelligently with AI techniques, enabling faster transmission over wireless networks. Motion capture techniques now employ AI functionality such as phase-functioned neural networks and self-teaching AI, using machine learning to draw on a vast library of stored movements and fit them to new characters.

By using AI-powered Mixed Reality devices, the plan is to provide a more realistic experience that is fast and allows more mobility. The ultimate goal is to build AI-powered mixed reality devices that are intelligent and self-learning.

Follow the white rabbit: Applications of AI-powered Mixed Reality

Let us look at various sectors where artificially intelligent Mixed Reality has started finding traction.

Gaming and Entertainment

In gaming, procedural content generation (PCG) techniques allow the automatic generation of Mixed Reality games (as opposed to manual creation by game designers) by encoding elements such as individual structures and enemies together with their relationships. Artificial Intelligence enhances PCG algorithms in object identification and in recognizing other relationships between real and virtual objects. Deep learning techniques can be used for tasks like super-resolution, photo-to-texture mapping, and texture multiplication.

Healthcare and Surgical Procedures

AI-powered mixed reality tech has also found use in healthcare and surgical operations. Scopis has announced a mixed reality surgical navigation system that uses the Microsoft HoloLens for spinal surgery applications. It employs image recognition and manipulation techniques that allow the surgeon to see both the patient and a superimposed image of the pedicle screws (used for vertebra-fixation surgeries) during the procedure.

Retail

Retail is another sector under the spell of this AI-infused MR tech. DigitalBridge, a mixed reality company, uses mixed reality, artificial intelligence, and deep learning to create a platform that allows consumers to virtually try out home decor products before buying them.

Image and Video Manipulation

AI algorithms and MR techniques can also enrich video and image manipulation. As we speak, Microsoft is readying the release of its Remix 3D service, software that adds 'mixed reality' digital images and animations to videos. It keeps the digital content in the same position relative to the real objects using image recognition, computer vision, and AI algorithms.
Military and Defence

AI-powered Mixed Reality is also finding use in the defence sector, where MR training simulations controlled by artificially intelligent software combine real people and physical environments with a virtual setup.

Construction and Homebuilding

Builders can visualize their options in life-sized models with MR devices. With AI, they can leave virtual messages or videos at key locations to keep other technicians and architects up to date when they're away. Using MR and AI techniques, an architect can call a remote expert into the virtual environment if need be, and virtual assistants can be utilized for further help. COINS, a construction and homebuilding organization, uses AI-powered Mixed Reality devices, collaborating with SketchUp for virtual messaging and 3D modeling and with Microsoft for HoloLens and Skype chatbot assistance.

Industrial Farming

Machine learning algorithms can be used to study a sensor-enabled field of crops, record growth and requirements, and anticipate future needs. An MR device can then provide a means to interact with the plants and analyze present conditions while adjusting for future needs. Infosys' Plant.IO is one such digital farm which, when combined with the power of an MR device, can overlay virtual objects on a real-world scene.

Conclusion

Through these examples, we can see the rapid adoption of AI-powered Mixed Reality recipes across diverse fields, enabled by the rise of AI chips and by more exhaustive computation with complex machine learning and deep learning algorithms. The next milestone is a Mixed Reality environment that is completely immersive and untethered. This will be made possible by the adoption of more complex AI techniques and by advances in AI hardware and software. Instead of voice or search-based commands in MR environments, AI techniques will be used to harness eye and body gestures. As MR devices become smaller and more mobile, AI-powered mixed reality will also give rise to intelligent application development that incorporates mixed reality through the phone's camera lens. AI technologies will thus help expand the scope of MR, not only as an interesting tool with applications in gaming and entertainment, but also as a practical and useful approach to how we see, interact with, and learn from our environment.

A chatbot toolkit for developers: design, develop, and manage conversational UI

Bhagyashree R
10 Sep 2018
7 min read
Although chatbots have been under development for at least a few decades, they did not become mainstream channels for customer engagement until recently. Thanks to serious efforts by industry giants like Apple, Google, Microsoft, Facebook, IBM, and Amazon, and their subsequent investments in developing toolkits, chatbots and conversational interfaces have become serious contenders to other customer contact channels. In this time, chatbots have been applied in various conversational scenarios across sectors such as retail, banking and finance, government, health, and legal services.

This tutorial is an excerpt from a book written by Srini Janarthanam titled Hands-On Chatbots and Conversational UI Development. The book is organized as eight chatbot projects that introduce the ecosystem of tools, techniques, concepts, and even gadgets relating to conversational interfaces.

Over the last few years, an ecosystem of tools and services has grown around the idea of conversational interfaces. There are a number of tools that we can plug and play to design, develop, and manage chatbots.

Mockup tools

Mockups can be used to show clients how a chatbot would look and behave. These are tools you may want to consider using during conversation design, after coming up with sample conversations between the user and the bot. Mockup tools allow you to visualize the conversation between the user and the bot and showcase the dynamics of conversational turn-taking. Some of these tools allow you to export the mockup design and make videos. BotSociety.io and BotMock.com are some of the popular mockup tools.

Channels in Chatbots

Channels refer to places where users can interact with the chatbot. There are several deployment channels over which your bots can be exposed to users. These include:

- Messaging services such as Facebook Messenger, Skype, Kik, Telegram, WeChat, and Line
- Office and team chat services such as Slack, Microsoft Teams, and many more
- Traditional channels such as web chat, SMS, and voice calls
- Smart speakers such as Amazon Echo and Google Home

Choose the channel based on your users and the requirements of the project. For instance, if you are building a chatbot targeting consumers, Facebook Messenger can be the best channel because of the growing number of users who already use the service to keep in touch with friends and family. Adding your chatbot to their contact list may be easier than getting them to download your app. If the user needs to interact with the bot by voice in a home or office environment, smart speaker channels can be an ideal choice. And finally, there are tools that can connect chatbots to many channels simultaneously (for example, Dialogflow integration, MS Bot Service, Smooch.io, and so on).

Chatbot development tools

There are many tools that you can use to build chatbots without having to write a single line of code: Chatfuel, ManyChat, Dialogflow, and so on. Chatfuel allows designers to create the conversational flow using visual elements. With ManyChat, you can build the flow using a visual map called the FlowBuilder. Conversational elements such as bot utterances and user response buttons can be configured using drag-and-drop UI elements. Dialogflow can be used to build chatbots that require advanced natural language understanding to interact with users.

On the other hand, there are scripting languages such as Artificial Intelligence Markup Language (AIML), ChatScript, and RiveScript that can be used to build chatbots. These scripts contain the conversational content and flow, which then need to be fed into an interpreter program or rules engine to bring the chatbot to life. The interpreter decides how to progress the conversation by matching user utterances against templates in the scripts. While it is straightforward to build conversational chatbots this way, it becomes difficult to build transactional chatbots without generating explicit semantic representations of user utterances. PandoraBots is a popular web-based platform for building AIML chatbots.
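To illustrate the template-matching idea these scripting languages rely on, here is a toy sketch in Python. The patterns and canned responses are invented for illustration; a real AIML or RiveScript file would express the same pairs in its own syntax, with an interpreter doing the matching:

    import re

    # Each rule pairs an utterance pattern with a canned response.
    RULES = [
        (re.compile(r"\b(hello|hi)\b", re.I), "Hello! How can I help you today?"),
        (re.compile(r"\bopening hours\b", re.I), "We are open 9am to 5pm, Monday to Friday."),
        (re.compile(r"\b(bye|goodbye)\b", re.I), "Goodbye!"),
    ]
    FALLBACK = "Sorry, I didn't understand that."

    def reply(utterance: str) -> str:
        # Progress the conversation by matching the utterance against each template.
        for pattern, response in RULES:
            if pattern.search(utterance):
                return response
        return FALLBACK

    print(reply("Hi there"))
    print(reply("What are your opening hours?"))

This also shows why purely template-based bots struggle with transactional tasks: nothing here extracts structured meaning from the utterance, which is where the explicit semantic representations mentioned above come in.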
Alternatively, there are SDK libraries you can use to build chatbots: MS Bot Builder, BotKit, and BotFuel, among others, provide SDKs in one or more programming languages to assist developers in building the core conversation management module. The ability to code the conversation manager gives developers the flexibility to shape the conversation and integrate the bot with backend tasks better than no-code and scripting platforms allow. Once built, the conversation manager can then be plugged into other services, such as natural language understanding, to understand user utterances.

Analytics in Chatbots

Like other digital solutions, chatbots can benefit from collecting and analyzing their usage statistics. While you can build a bespoke analytics platform from scratch, you can also use the off-the-shelf toolkits that are now widely available. Many of these can be plugged into a chatbot to log and examine incoming and outgoing messages. These tools tell chatbot builders and managers what kinds of conversations actually transpire between users and the chatbot. The data gives useful information such as which conversational tasks are popular, where the conversational experience breaks down, which utterances the bot did not understand, and which requests the chatbot still needs to scale up to. Dashbot.io, BotAnalytics, and Google's Chatbase are a few analytics toolkits you can use to analyze your chatbot's performance.

Natural language understanding

Chatbots can be built without the ability to understand utterances from the user. However, adding natural language understanding is not very difficult, and it is one of the hallmark features that sets chatbots apart from their digital counterparts such as websites and apps with visual elements. Many natural language understanding modules are available as cloud services. Major IT players like Google, Microsoft, Facebook, and IBM have created tools that you can plug into your chatbot. Google's Dialogflow, Microsoft LUIS, IBM Watson, SoundHound, and Facebook's Wit.ai are some of the NLU tools you can try.

Directory services

One of the challenges after building a bot is getting users to discover and use it. Chatbots are not as popular as websites and mobile apps, so a potential user may not know where to look to find one. Once your chatbot is deployed, you need to help users find it. There are directories that list bots in various categories. Chatbots.org is one of the oldest directory services, listing chatbots and virtual assistants since 2008. Other popular ones are Botlist.co, BotPages, BotFinder, and ChatBottle. These directories categorize bots by purpose, sector, supported languages, countries, and so on. In addition to these, channels such as Facebook and Telegram have their own directories for the bots hosted on their channels.
In the case of Facebook, you can help users find your Messenger bot using its Discover service.

Monetization

Chatbots are built for many purposes: to create awareness, to support customers after sales, to provide paid services, and many more. In addition to all these, chatbots with interesting content can engage users for a long time and can be used to make money through targeted, personalized advertising. Services such as CashBot.ai and AddyBot.com can integrate with your chatbot to send targeted advertisements and recommendations to users; when users engage, your chatbot makes money.

In this article, we saw tools that can help you build a chatbot, collect and analyze its usage statistics, add features like natural language understanding, and much more. This is not an exhaustive list of tools, nor of the services listed under each type. These tools are evolving as chatbots find their niche in the market. The list gives you an idea of how multidimensional the conversational UI ecosystem is, and should help you explore the space and feed your creative mind.

If you found this post useful, do check out the book, Hands-On Chatbots and Conversational UI Development, which will help you explore the world of conversational user interfaces.

How to build a chatbot with Microsoft Bot framework
Facebook's Wit.ai: Why we need yet another chatbot development framework?
How to build a basic server side chatbot using Go