
Tech Guides

851 Articles

Healthcare Analytics: Logistic Regression to Reduce Patient Readmissions

Guest Contributor
20 Dec 2017
8 min read
We bring you another guest post by Benjamin Rogojan, this time on using logistic regression to help the healthcare sector reduce patient readmissions. Ben's previous post, on ensemble methods to optimize machine learning models, is also available for a quick read.

ER visits are not cheap for any party involved, whether that is the patient or the insurance company. However, this does not stop some patients from being regular repeat visitors. These recurring visits are due to a lack of intervention for problems such as substance abuse, chronic disease and mental illness. This increases costs for everybody in the healthcare system and reduces quality of care by contributing to the overflowing of Emergency Departments (EDs).

Research teams at UW and other universities are partnering with companies like KenSci to figure out how to approach the problem of reducing readmission rates. The ability to predict the likelihood of a patient's readmission allows for targeted intervention, which in turn helps reduce the frequency of readmissions, making the population healthier and hopefully cutting into the estimated 41.3 billion USD this costs the healthcare system as a whole. How do they plan to do it? With big data and statistics, of course.

A plethora of algorithms are available for data scientists to approach this problem. Many possible variables could affect readmission and medical costs, and there are many different ways researchers might pose their questions. However, the researchers at UW and many other institutions have focused on reducing the readmission rate simply by trying to calculate whether a person would or would not be readmitted. In particular, this team of researchers was curious about chronic ailments. Patients with chronic ailments are likely to have random flare-ups that require immediate attention, and being able to predict whether a patient will have an ER visit can lead to managing the cause more effectively.

One approach taken by the data science team at UW, as well as the Department of Family and Community Medicine at the University of Toronto, was to use logistic regression to predict whether or not a patient would be readmitted. Patient readmission can be broken down into a binary output: either the patient is readmitted or not. As such, logistic regression has, in my experience, been a useful model for this problem.

Logistic regression to predict patient readmissions

Why do data scientists like to use logistic regression? Where is it used? And how does it compare to other algorithms? Logistic regression is a statistical method that statisticians and data scientists use to classify people, products, entities, and so on. It is used for analyzing data where the outcome is a binary classification based on one or many independent variables. That means it produces two clear classes (Yes or No, 1 or 0, etc.). In the example above, the binary classification is: is the patient readmitted or not? Other examples include whether to give a customer a loan, whether a medical claim is fraudulent, or whether a patient has diabetes.

Despite its name, logistic regression does not produce the same kind of output as linear regression. There are similarities: the linear model is still there, as you might notice in the equation below, which looks very much like a linear equation. But the final output is based on the log odds.
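For reference, the logistic regression equation being described here is conventionally written in terms of the log odds (the logit), with π standing for the probability of the event - in this case, readmission:

ln(π / (1 − π)) = β0 + β1·x1 + β2·x2 + … + βk·xk

The right-hand side is the familiar linear form; the left-hand side is what turns it into a classifier.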
Linear regression and multivariate regression both take one or more independent variables and produce some form of continuous output. Linear regression could be used to predict the price of a house, a person's age, or the price an e-commerce site should display to each customer; the output is not limited to a few discrete classes. Logistic regression, by contrast, produces discrete classifications. For instance, an algorithm using logistic regression could classify whether a certain stock price will be above or below $50 a share, whereas linear regression would be used to predict the actual value: $50.01, $50.02, and so on.

Logistic regression is a calculation built on the odds of a certain classification. In the equation above, the symbol you might know as pi actually represents the probability. To reduce the error rate, we predict Y = 1 when p ≥ 0.5 and Y = 0 when p < 0.5. This creates a linear classifier: a boundary such that when the linear combination β0 + x · β corresponds to a probability below 0.5, we predict Y = 0. By estimating coefficients that predict the logit transformation, the method allows us to classify for the characteristic of interest. That is a lot of complex math mumbo jumbo, so let's try to break it down into simpler terms.

Probability vs. Odds

Let's start with probability. Say a patient has a probability of 0.6 of being readmitted; then the probability that the patient won't be readmitted is 0.4. Now we want to convert this into odds, which is what the formula above is doing: you take 0.6/0.4 and get odds of 1.5. That means the odds of the patient being readmitted are 1.5 to 1. If instead the probability were 0.5 for both being readmitted and not being readmitted, the odds would be 1:1. The next step in the logistic regression model is to take the odds and get the log odds: put 1.5 into the log portion of the equation and you get roughly 0.41 (the natural logarithm of 1.5).

In logistic regression we don't actually know p; that is what we are trying to find and model using various coefficients and input variables. Each input provides a value that changes how much more or less likely an event is to occur, and all of these coefficients are used to calculate the log odds. The model can take multiple variables like age, sex, height, etc. and specify how much of an effect each one has on the odds that an event will occur.

Once the initial model is developed, then comes the work of deciding its value. How does a business go from creating an algorithm inside a computer to translating it into action? Some of us like to say the "computers" are the easy part; personally, I find the hard part to be the "people". At the end of the day it comes down to business value: will the algorithm save money or not? That means it has to be applied in real life, which could take the form of a new initiative, strategy, product recommendation, and so on. You need to find the outliers that are worth going after. Going back to the patient readmission example, the algorithm points out patients with high probabilities of being readmitted; however, if the readmission costs are low, they will probably be ignored, sadly. That is how businesses (including hospitals) look at problems.

Logistic regression is a great tool for binary classification, unlike many other algorithms that estimate continuous variables or distributions.
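To make the probability, odds and log-odds conversion concrete, here is a minimal Python sketch of the worked example above, together with an illustrative scikit-learn fit. The feature names and numbers (age, prior ER visits, chronic condition flag) are hypothetical and not taken from the UW or Toronto studies; the sketch only shows the shape of the approach.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Worked example from the article: a patient with probability 0.6 of readmission.
p = 0.6
odds = p / (1 - p)          # 0.6 / 0.4 = 1.5, i.e. odds of 1.5 to 1
log_odds = np.log(odds)     # natural log (the logit) of 1.5, roughly 0.41
print(f"odds = {odds:.2f}, log-odds = {log_odds:.2f}")

# Hypothetical training data: [age, prior ER visits, has chronic condition]
X = np.array([[72, 4, 1],
              [35, 0, 0],
              [60, 2, 1],
              [28, 1, 0],
              [81, 5, 1],
              [45, 0, 0]])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = readmitted, 0 = not readmitted

model = LogisticRegression().fit(X, y)

# Predicted probability of readmission for a new patient; predict Y = 1 when p >= 0.5.
new_patient = np.array([[65, 3, 1]])
prob = model.predict_proba(new_patient)[0, 1]
print(f"predicted readmission probability: {prob:.2f}")
```

The coefficients the model learns play the role of the β values discussed above: each one shifts the log odds of readmission up or down for a unit change in its input.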
This statistical method can be used, for example, to classify whether a person is likely to get cancer based on environmental variables like proximity to a highway, smoking habits, and so on. It has been used effectively in the medical, financial and insurance industries for a while. Knowing when to use which algorithm takes time; the more problems a data scientist faces, the faster they will recognize whether to use logistic regression or decision trees. Using logistic regression gives healthcare institutions the opportunity to accurately target at-risk individuals who should receive a more tailored behavioral health plan to help improve their daily health habits. This in turn opens the door to better health for patients and lower costs for hospitals.

About the Author

Benjamin Rogojan

Ben has spent his career focused on healthcare data. He has focused on developing algorithms to detect fraud, reduce patient readmission and redesign insurance provider policy to help reduce the overall cost of healthcare. He has also helped develop analytics for marketing and IT operations in order to optimize limited resources such as employees and budget. Ben privately consults on data science and engineering problems, both solo and with a company called Acheron Analytics. He has experience working hands-on with technical problems as well as helping leadership teams develop strategies to maximize their data.


What is React.js and how does it work?

Packt
05 Mar 2018
9 min read
What is React.js? React.js is one of the most talked about JavaScript web frameworks in years. Alongside Angular, and more recently Vue, React is a critical tool that has had a big impact on the way we build web applications. But it's hard to find a better description of React.js than the single sentence on the project's home page: A JavaScript library for building user interfaces. It's a library. For building user interfaces. This is perfect because, more often than not, this is all we want. The best part about this description is that it highlights React's simplicity. It's not a mega framework. It's not a full-stack solution that's going to handle everything from the database to real-time updates over web socket connections. We don't actually want most of these pre-packaged solutions, because in the end, they usually cause more problems than they solve. Facebook sure did listen to what we want. This is an extract from React and React Native by Adam Boduch. Learn more here. React.js is just the view. That's it. React.js is generally thought of as the view layer in an application. You might have used library like Handlebars, or jQuery in the past. Just as jQuery manipulates UI elements, or Handlebars templates are inserted onto the page, React components change what the user sees. The following diagram illustrates where React fits in our frontend code. This is literally all there is to React. We want to render this data to the UI, so we pass it to a React component which handles the job of getting the HTML into the page. You might be wondering what the big deal is. On the surface, React appears to be another rendering technology. But it's much more than that. It can make application development incredibly simple. That's why it's become so popular. React.js is simple React doesn't have many moving parts for us to learn about and understand. The advantage to having a small API to work with is that you can spend more time familiarizing yourself with it, experimenting with it, and so on. The opposite is true of large frameworks, where all your time is devoted to figuring out how everything works. The following diagram gives a rough idea of the APIs that we have to think about when programming with React. React is divided into two major APIs. First, there's the React DOM. This is the API that's used to perform the actual rendering on a web page. Second, there's the React component API. These are the parts of the page that are actually rendered by React DOM. Within a React component, we have the following areas to think about: Data: This is data that comes from somewhere (the component doesn't care where), and is rendered by the component. Lifecycle: These are methods that we implement that respond to changes in the lifecycle of the component. For example, the component is about to be rendered. Events: This is code that we write for responding to user interactions. JSX: This is the syntax of React components used to describe UI structures. Don't fixate on what these different areas of the React API represent just yet. The takeaway here is that React is simple. Just look at how little there is to figure out! This means that we don't have to spend a ton of time going through API details here. Instead, once you pick up on the basics, you can spend more time on nuanced React usage patterns. React has a declarative UI structure React newcomers have a hard time coming to grips with the idea that components mix markup in with their JavaScript. 
If you've looked at React examples and had the same adverse reaction, don't worry. Initially, we're all skeptical of this approach, and I think the reason is that we've been conditioned for decades by the separation of concerns principle. Now, whenever we see things mixed together, we automatically assume that this is bad and shouldn't happen. The syntax used by React components is called JSX (JavaScript XML). The idea is actually quite simple. A component renders content by returning some JSX. The JSX itself is usually HTML markup, mixed with custom tags for the React components. What's absolutely groundbreaking here is that we don't have to perform little micro-operations to change the content of a component. For example, think about using something like jQuery to build your application. You have a page with some content on it, and you want to add a class to a paragraph when a button is clicked. Performing these steps is easy enough, but the challenge is that there are steps to perform at all. This is called imperative programming, and it's problematic for UI development. While this example of changing the class of an element in response to an event is simple, real applications tend to involve more than 3 or 4 steps to make something happen. Read more: 5 reasons to learn React React components don't require executing steps in an imperative way to render content. This is why JSX is so central to React components. The XML-style syntax makes it easy to describe what the UI should look like. That is, what are the HTML elements that this component is going to render? This is called declarative programming, and is very well-suited for UI development. Time and data Another area that's difficult for React newcomers to grasp is the idea that JSX is like a static string, representing a chunk of rendered output. Are we just supposed to keep rendering this same view? This is where time and data come into play. React components rely on data being passed into them. This data represents the dynamic aspects of the UI. For example, a UI element that's rendered based on a Boolean value could change the next time the component is rendered. Here's an illustration of the idea. Each time the React component is rendered, it's like taking a snapshot of the JSX at that exact moment in time. As our application moves forward through time, we have an ordered collection of rendered user interface components. In addition to declaratively describing what a UI should be, re-rendering the same JSX content makes things much easier for developers. The challenge is making sure that React can handle the performance demands of this approach. Performance matters with React Using React to build user interfaces means that we can declare the structure of the UI with JSX. This is less error-prone than the imperative approach to assembling the UI piece by piece. However, the declarative approach does present us with one challenge—performance. For example, having a declarative UI structure is fine for the initial rendering, because there's nothing on the page yet. So the React renderer can look at the structure declared in JSX, and render it into the browser DOM. This is illustrated below. On the initial render, React components and their JSX are no different from other template libraries. For instance, Handlebars will render a template to HTML markup as a string, which is then inserted into the browser DOM. Where React is different from libraries like Handlebars is when data changes, and we need to re-render the component. 
Handlebars will just rebuild the entire HTML string, the same way it did on the initial render. Since this is problematic for performance, we often end up implementing imperative workarounds that manually update tiny bits of the DOM. What we end up with is a tangled mess of declarative templates, and imperative code to handle the dynamic aspects of the UI. We don't do this in React. This is what sets React apart from other view libraries. Components are declarative for the initial render, and they stay this way even as they're re-rendered. It's what React does under the hood that makes re-rendering declarative UI structures possible.

React has something called the virtual DOM, which is used to keep a representation of the real DOM elements in memory. It does this so that each time we re-render a component, it can compare the new content to the content that's already displayed on the page. Based on the difference, the virtual DOM can execute the imperative steps necessary to make the changes. So not only do we get to keep our declarative code when we need to update the UI, React will also make sure that it's done in a performant way. Here's what this process looks like: When you read about React, you'll often see words like diffing and patching. Diffing means comparing old content with new content to figure out what's changed. Patching means executing the necessary DOM operations to render the new content.

React.js has the right level of abstraction

React.js doesn't have a great deal of abstraction, but the abstractions the library does implement are crucial to its success. In the preceding section, you saw how JSX syntax translates to the low-level operations that we have no interest in maintaining. The more important way to look at how React translates our declarative UI components is the fact that we don't necessarily care what the render target is. The render target happens to be the browser DOM with React. But this is changing. We're only just starting to see this with React Native, but the possibilities are endless. I personally will not be surprised when React Toast becomes a thing, targeting toasters that can singe the rendered output of JSX on to bread. The abstraction level with React is at the right level, and it's in the right place.

The following diagram gives you an idea of how React can target more than just the browser. From left to right, we have React Web (just plain React), React Native, React Desktop, and React Toast. As you can see, to target something new, the same pattern applies:

Implement components specific to the target

Implement a React renderer that can perform the platform-specific operations under the hood

Profit

This is obviously an oversimplification of what's actually implemented for any given React environment. But the details aren't so important to us. What's important is that we can use our React knowledge to focus on describing the structure of our user interface on any platform. Disclaimer: React Toast will probably never be a thing, unfortunately.


Declarative UI programming faceoff: Apple’s SwiftUI vs Google’s Flutter

Guest Contributor
14 Jun 2019
5 min read
Apple recently announced a new declarative UI framework, SwiftUI, at its annual developer conference, WWDC 2019. SwiftUI will power all of Apple's devices (MacBooks, watches, TVs, iPads and smartphones). You can integrate SwiftUI views with objects from the UIKit, AppKit, and WatchKit frameworks to take further advantage of platform-specific functionality. It is said to be productive for developers and to save effort when writing code. The SwiftUI documentation states: "Declare the content and layout for any state of your view. SwiftUI knows when that state changes, and updates your view's rendering to match." This means that developers simply describe the desired UI state in response to events and leave the in-between transitions to the framework; the UI updates automatically as the state changes.

Benefits of a Declarative UI language

A declarative UI language expresses the logic of a computation without describing its control flow. You describe what elements you need and how they should look, without having to worry about their exact position or visual styling. Some of the benefits of a declarative UI language are: increased speed of development; seamless integration between designers and coders; a forced separation between logic and presentation; and UI changes that don't require recompilation.

SwiftUI's declarative syntax is quite similar to Google's Flutter, which is also built on declarative UI programming. Flutter contains beautiful widgets with captivating logos, fonts, and expressive style. The use of Flutter has increased significantly in 2019, and it is among the fastest-growing skills in the developer community. Similar to Flutter, SwiftUI provides layout structure, controls, and views for the application's user interface.

This is the first time Apple has stepped into declarative UI programming, and it has described SwiftUI as a modern way to declare user interfaces. With the imperative method, developers had to manually construct a fully functional UI entity and later change it using methods and setters. In SwiftUI the application layout only needs to be described once, vastly reducing code complexity. SwiftUI is also integrated with Xcode, Apple's IDE, which contains the platform's software development tools: if any code modifications are made inside Xcode, developers can now preview the code in real time and tweak parameters. SwiftUI also supports dark mode, drag-and-drop building tools in Xcode, and interface layout, and languages such as Hebrew and Arabic are also incorporated. However, one of the drawbacks of SwiftUI is that it only supports apps that target iOS 13 and later. It is a somewhat limited tool in this sense, and production would take at least a year or two if an older iOS version is to be supported.

SwiftUI vs Flutter Development

Apple's answer to Google is simple here. Flutter is compatible with both Android and iOS, whereas SwiftUI is a new member of Apple's ecosystem. Developers use Flutter for cross-platform apps with a single codebase, and Flutter is pushing other frameworks to adopt its simple way of developing UI. Now, with the introduction of SwiftUI, which works on the same mechanism as Flutter, Apple has announced itself to the world of declarative UI programming. What does it mean for developers who build exclusively for iOS? Well, now they can build native apps for clients who do not prefer the Flutter way.
SwiftUI will probably reduce the incentive for Apple-only developers to adopt Flutter. Many have pointed out that Apple has essentially introduced a new framework for the same UI experience, and we will have to wait and see what SwiftUI has in store over the longer run. Developers in communities like Reddit are actively sharing their thoughts on the arrival of SwiftUI, and many agree that "SwiftUI is Flutter with no Android support". Developers who target Apple-only platforms through SwiftUI will eventually return to Flutter to target all other platforms, which means Flutter could benefit from SwiftUI rather than the other way round.

The popularity of React Native is no surprise. Native mobile app development for iOS and Android is expensive, and companies usually work with two separate teams; cross-platform solutions drastically reduce development costs. One can think of Flutter as React Native with full support for native features (you don't have to depend on the native platforms for solutions, and Flutter delivers performance similar to native). Like React Native, Flutter uses reactive-style views. However, while React Native transpiles to native widgets, Flutter compiles all the way to native code.

Conclusion

SwiftUI is about making development interactive, faster and easier. The built-in graphical UI design tool allows designers to assemble a user interface without having to write any code, and once the code is modified it instantly appears in the visual design tool. Code can be assembled, refined and tested in real time with previews that can run on a range of Apple's devices. However, SwiftUI is still under development and will take time to mature. On the other hand, Flutter app development services continue to deliver scalable solutions for startups and enterprises. Building native apps is not cheap, and Flutter, with the same native feel, provides cost-effective services. It remains a competitive cross-platform framework with or without SwiftUI's presence.

Author Bio

Keval Padia is the CEO of Nimblechapps, a prominent mobile app development company based in India. He has a good knowledge of mobile app design and user experience design. He follows different tech blogs, and current developments in the field lead him to share his views and thoughts on certain topics.


Eight Things You Need To Learn with Python

Oli Huggins
02 Jun 2016
4 min read
We say it a lot, but Python really is a versatile language that can be applied to many different purposes. Web developers, data analysts, security pros - there's an impressive range of challenges that can be solved by Python. So, what exactly should you be learning to do with this great language to really get the most out of it?

Writing Python

What's the most important thing to learn with Python? How to write it. As Python becomes the popular language of choice for most developers, there is an increasing need to learn and adopt it in different environments for different purposes. The Beginning Python video course focuses on just that. Aimed at complete novices with no previous programming experience in Python, this course will guide viewers every step of the way. Starting with absolute basics like understanding variables, arrays, and strings, the course goes on to teach the intricacies of Python. It teaches how you can build your own functions, making use of the existing functions in Python. By the end, the course ensures that you have a strong foundation in the programming concepts of Python.

Design Patterns

As Python matures from being used just as a scripting language into enterprise development and data science, the need for clean, reusable code becomes ever more vital. The modern Python developer cannot go astray with tried and true design patterns when they want to write efficient, reliable Python code. The second edition of Learning Python Design Patterns is stuffed with rich examples of design pattern implementation. From OOP to more complex concepts, you'll find everything you need to improve your Python within.

Machine Learning Design

We all know how powerful Python is for machine learning - so why are your results proving sub-par and inaccurate? The issue is probably not your implementation, but rather your system design. Just knowing the relevant algorithms and tools is not enough for a really effective system - you need the right design. Designing Machine Learning Systems with Python covers various aspects of machine learning design with the help of real-world data sets and examples, and will enable you to evaluate and decide on the right design for your needs.

Python for the Next Generation

Python was built to be simple, and it's the perfect language to get kids coding. With programmers getting younger and younger these days, get them learning with a language that will serve them well for life. In Python for Kids, kids will create two interesting game projects that they can play and show off to their friends and teachers, as well as learn Python syntax and basic logic building.

Distributed Computing

What do you do when your Python application takes forever to produce output? Very heavy computing results in delayed responses or, sometimes, even failure. For special systems that deal with a lot of data and are mission critical, the response time becomes an important factor. In order to write highly available, reliable, and fault-tolerant programs, one needs to use distributed computing. Distributed Computing with Python will teach you how to manage your data-intensive and resource-hungry Python applications with the aid of parallel programming, synchronous and asynchronous programming, and many more effective techniques.

Deep Learning

Python is at the forefront of the deep learning revolution - the next stage of machine learning, and maybe even a step towards AI.
As machine learning becomes mainstream practice, deep learning has taken a front seat among data scientists. The Deep Learning with Python video course is a great stepping stone into the world of deep learning with Python - learn the basics, clear up your concepts, and start implementing efficient deep learning to make better sense of data. Get everything it takes to understand and implement Python deep learning libraries from this insightful tutorial.

Predictive Analytics

With the power of Python and predictive analytics, you can turn your data into amazing predictions of the future. It's not sorcery, just good data science. Written by Ashish Kumar, a data scientist at Tiger Analytics, Learning Predictive Analytics with Python is a comprehensive, intermediate-level book on predictive analytics and Python for aspiring data scientists.

Internet of Things

Python's rich data analytics libraries, combined with its popularity for scripting boards such as the Raspberry Pi and Arduino, make it an exceptional choice for building IoT applications. Internet of Things with Python offers an exciting view of IoT from many angles, whether you're a newbie or a pro. Leverage your existing Python knowledge to build awesome IoT projects and enhance your IoT skills with this book.


Systems programming with Go in UNIX and Linux

Mihalis Tsoukalos
24 Jan 2018
17 min read
This is a guest post by Mihalis Tsoukalos. Mihalis is a Unix administrator, programmer, and mathematician who enjoys writing. He is the author of Go Systems Programming, from which this Go programming tutorial is taken.

What is Go?

Back when UNIX was first introduced, the only way to write systems software was by using C; nowadays you can program systems software using programming languages including Go. Apart from Go, other preferred languages for developing system utilities are Python, Perl, Rust and Ruby. Go is a modern general-purpose open-source programming language that was officially announced at the end of 2009. It began as an internal Google project and has been inspired by many other programming languages including C, Pascal, Alef and Oberon. Its spiritual fathers are Robert Griesemer, Ken Thompson and Rob Pike, who designed Go as a language for professional programmers who want to build reliable and robust software. Apart from its syntax and standard functions, Go comes with a pretty rich and convenient standard library.

What is systems programming?

Systems programming is a special area of programming on UNIX machines. Please note that systems programming is not limited to UNIX machines. Most commands that have to do with system administration tasks, such as disk formatting, network interface configuration, module loading, kernel performance tracking, and so on, are implemented using the techniques of systems programming. Additionally, the /etc directory, which can be found on all UNIX systems, contains plain text files that deal with the configuration of a UNIX machine and its services, and these are also manipulated using systems software. You can group the various areas of systems software and related system calls into the following sets:

File I/O: This area deals with file reading and writing operations, which is the most important task of an operating system. File input and output must be fast and efficient and, above all, reliable.

Advanced File I/O: Apart from the basic input and output system calls, there are also more advanced ways to read or write a file, including asynchronous I/O and non-blocking I/O.

System files and Configuration: This group of systems software includes functions that allow you to handle system files such as /etc/passwd and get system-specific information such as the system time and DNS configuration.

Files and Directories: This cluster includes functions and system calls that allow the programmer to create and delete directories and get information such as the owner and permissions of a file or a directory.

Process Control: This group of software allows you to create and interact with UNIX processes.

Threads: When a process has multiple threads, it can perform multiple tasks. However, threads must be created, terminated and synchronized, which is the purpose of this collection of functions and system calls.

Server Processes: This set includes techniques that allow you to develop server processes, which are processes that get executed in the background without the need for an active terminal. Go is not that good at writing server processes in the traditional UNIX way – but let me explain this a little more. UNIX servers like Apache use fork(2) to create one or more child processes; this process is called forking and refers to cloning the parent process into a child process, which continues executing the same executable from the same point and, most importantly, shares memory.
Although Go does not offer an equivalent to the fork(2) function, this is not an issue because you can use goroutines to cover most of the uses of fork(2).

Interprocess Communication: This set of functions allows processes that run on the same UNIX machine to communicate with each other using features such as pipes, FIFOs, message queues, semaphores and shared memory.

Signal Processing: Signals offer processes a way of handling asynchronous events, which can be very handy. Almost all server processes have extra code that allows them to handle UNIX signals using the system calls of this group.

Network Programming: This is the art of developing applications that work over computer networks with the help of TCP/IP and is not systems programming per se. However, most TCP/IP servers and clients deal with system resources, users, files and directories, so most of the time you cannot create network applications without doing some kind of systems programming.

The challenging thing with systems programming is that you cannot afford to have an incomplete program; you can either have a fully working, secure program that can be used on a production system or nothing at all. This mainly happens because you cannot trust end users and hackers! The key difficulty in systems programming is the fact that an erroneous system call can make your UNIX machine misbehave or, even worse, crash it! Most security issues on UNIX systems usually come from wrongly implemented systems software, because bugs in systems software can compromise the security of an entire system. The worst part is that this can happen many years after a certain piece of software has been in use!

Systems programming examples with Go

Printing the permissions of a file or a directory

With the help of the ls(1) command, you can find out the permissions of a file: $ ls -l /bin/ls -rwxr-xr-x 1 root wheel 38624 Mar 23 01:57 /bin/ls The presented Go program, which is named permissions.go, will teach you how to print the permissions of a file or a directory using Go and will be presented in two parts. The first part is the next: package main import ( "fmt" "os" ) func main() { arguments := os.Args if len(arguments) == 1 { fmt.Println("Please provide an argument!") os.Exit(1) } file := arguments[1] The second part contains the important Go code: info, err := os.Stat(file) if err != nil { fmt.Println("Error:", err) os.Exit(1) } mode := info.Mode() fmt.Print(file, ": ", mode, "\n") } Once again most of the Go code is for dealing with the command line argument and making sure that you have one! The Go code that does the actual job is mainly the call to the os.Stat() function, which returns a FileInfo structure that describes the file or directory examined by os.Stat(). From the FileInfo structure you can discover the permissions of a file by calling the Mode() function. Executing permissions.go creates the following kind of output: $ go run permissions.go /bin/ls /bin/ls: -rwxr-xr-x $ go run permissions.go /usr /usr: drwxr-xr-x $ go run permissions.go /us Error: stat /us: no such file or directory exit status 1

How to write to files using fmt.Fprintf()

The use of the fmt.Fprintf() function allows you to write formatted text to files in a way that is similar to the way the fmt.Printf() function works. The Go code that illustrates the use of fmt.Fprintf() will be named fmtF.go and is going to be presented in three parts.
The first part is the expected preamble of the program: package main import ( "fmt" "os" ) The second part has the next Go code: func main() { if len(os.Args) != 2 { fmt.Println("Please provide a filename") os.Exit(1) } filename := os.Args[1] destination, err := os.Create(filename) if err != nil { fmt.Println("os.Create:", err) os.Exit(1) } defer destination.Close() First, you make sure that you have one command line argument before continuing. Then, you read that command line argument and you give it to os.Create() in order to create the file! Please note that the os.Create() function will truncate the file if it already exists. The last part is the following: fmt.Fprintf(destination, "[%s]: ", filename) fmt.Fprintf(destination, "Using fmt.Fprintf in %s\n", filename) } Here, you write the desired text data to the file that is identified by the destination variable using fmt.Fprintf() as if you were using the fmt.Printf() method. Executing fmtF.go will generate the following output: $ go run fmtF.go test $ cat test [test]: Using fmt.Fprintf in test In other words, you can create plain text files using fmt.Fprintf().

Developing wc(1) in Go

The principal idea behind the code of the wc.go program is that you read a text file line by line until there is nothing left to read. For each line you read you find out the number of characters and the number of words it has. As you need to read your input line by line, the use of bufio is preferred instead of the plain io because it simplifies the code. However, trying to implement wc.go on your own using io would be a very educational exercise. But first you will see the kind of output the wc(1) utility generates: $ wc wc.go cp.go 68 160 1231 wc.go 45 112 755 cp.go 113 272 1986 total So, if wc(1) has to process more than one file, it automatically generates summary information.

Counting words

The trickiest part of the implementation is word counting, which is implemented using Go regular expressions: r := regexp.MustCompile("[^\\s]+") for range r.FindAllString(line, -1) { numberOfWords++ } What the provided regular expression does is separate the words of a line based on whitespace characters in order to count them afterwards!

The code!

After this little introduction, it is time to see the Go code of wc.go, which will be presented in five parts. The first part is the expected preamble: package main import ( "bufio" "flag" "fmt" "io" "os" "regexp" ) The second part is the implementation of the count() function, which includes the core functionality of the program: func count(filename string) (int, int, int) { var err error var numberOfLines int var numberOfCharacters int var numberOfWords int numberOfLines = 0 numberOfCharacters = 0 numberOfWords = 0 f, err := os.Open(filename) if err != nil { fmt.Printf("error opening file %s", err) os.Exit(1) } defer f.Close() r := bufio.NewReader(f) for { line, err := r.ReadString('\n') if err == io.EOF { break } else if err != nil { fmt.Printf("error reading file %s", err) } numberOfLines++ r := regexp.MustCompile("[^\\s]+") for range r.FindAllString(line, -1) { numberOfWords++ } numberOfCharacters += len(line) } return numberOfLines, numberOfWords, numberOfCharacters } There are lots of interesting things here. First of all, you can see the Go code presented in the previous section for counting the words of each line. Counting lines is easy because each time the bufio reader reads a new line the value of the numberOfLines variable is increased by one.
The ReadString() function tells the program to read until the first occurrence of a '\n' in the input – multiple calls to ReadString() mean that you are reading a file line by line. Next, you can see that the count() function returns three integer values. Last, counting characters is implemented with the help of the len() function, which returns the number of characters in a given string, which in this case is the line that was read. The for loop terminates when you get the io.EOF error message, which signifies that there is nothing left to read from the input file. The third part of wc.go starts with the beginning of the implementation of the main() function, which also includes the configuration of the flag package: func main() { minusC := flag.Bool("c", false, "Characters") minusW := flag.Bool("w", false, "Words") minusL := flag.Bool("l", false, "Lines") flag.Parse() flags := flag.Args() if len(flags) == 0 { fmt.Printf("usage: wc <file1> [<file2> [... <fileN>]]\n") os.Exit(1) } totalLines := 0 totalWords := 0 totalCharacters := 0 printAll := false for _, filename := range flag.Args() { The last for statement is for processing all input files given to the program. The wc.go program supports three flags: the -c flag is for printing the character count, the -w flag is for printing the word count and the -l flag is for printing the line count. The fourth part is the next: numberOfLines, numberOfWords, numberOfCharacters := count(filename) totalLines = totalLines + numberOfLines totalWords = totalWords + numberOfWords totalCharacters = totalCharacters + numberOfCharacters if (*minusC && *minusW && *minusL) || (!*minusC && !*minusW && !*minusL) { fmt.Printf("%d", numberOfLines) fmt.Printf("\t%d", numberOfWords) fmt.Printf("\t%d", numberOfCharacters) fmt.Printf("\t%s\n", filename) printAll = true continue } if *minusL { fmt.Printf("%d", numberOfLines) } if *minusW { fmt.Printf("\t%d", numberOfWords) } if *minusC { fmt.Printf("\t%d", numberOfCharacters) } fmt.Printf("\t%s\n", filename) } This part deals with the printing of the information on a per-file basis depending on the command line flags. As you can see, most of the Go code here is for handling the output according to the command line flags. The last part is the following: if (len(flags) != 1) && printAll { fmt.Printf("%d", totalLines) fmt.Printf("\t%d", totalWords) fmt.Printf("\t%d", totalCharacters) fmt.Println("\ttotal") return } if (len(flags) != 1) && *minusL { fmt.Printf("%d", totalLines) } if (len(flags) != 1) && *minusW { fmt.Printf("\t%d", totalWords) } if (len(flags) != 1) && *minusC { fmt.Printf("\t%d", totalCharacters) } if len(flags) != 1 { fmt.Printf("\ttotal\n") } } This is where you print the total number of lines, words and characters read according to the flags of the program. Once again, most of the Go code here is for modifying the output according to the command line flags.
Executing wc.go will generate the following kind of output: $ go build wc.go $ ls -l wc -rwxr-xr-x 1 mtsouk staff 2264384 Apr 29 21:10 wc $ ./wc wc.go sparse.go notGoodCP.go 120 280 2319 wc.go 44 98 697 sparse.go 27 61 418 notGoodCP.go 191 439 3434 total $ ./wc -l wc.go sparse.go 120 wc.go 44 sparse.go 164 total $ ./wc -w -l wc.go sparse.go 120 280 wc.go 44 98 sparse.go 164 378 total If you do not execute go build wc.go in order to create an executable file, then executing go run wc.go using Go source files as arguments will fail because the compiler will try to compile the Go source files instead of treating them as command line arguments to the go run wc.go command: $ go run wc.go sparse.go # command-line-arguments ./sparse.go:11: main redeclared in this block previous declaration at ./wc.go:49 $ go run wc.go wc.go package main: case-insensitive file name collision: "wc.go" and "wc.go" $ go run wc.go cp.go sparse.go # command-line-arguments ./cp.go:35: main redeclared in this block previous declaration at ./wc.go:49 ./sparse.go:11: main redeclared in this block previous declaration at ./cp.go:35 Additionally, trying to execute wc.go on a Linux system with Go version 1.3.3 will fail because it uses features of Go that can be found in newer versions – if you use the latest Go version you will have no problem running wc.go. The error message you will get will be the following: $ go version go version go1.3.3 linux/amd64 $ go run wc.go # command-line-arguments ./wc.go:40: syntax error: unexpected range, expecting { ./wc.go:46: non-declaration statement outside function body ./wc.go:47: syntax error: unexpected }

Reading a text file character by character

Although reading a text file character by character is not needed for the development of the wc(1) utility, it would be good to know how to implement it in Go. The name of the file will be charByChar.go and it will be presented in four parts. The first part comes with the following Go code: package main import ( "bufio" "fmt" "io/ioutil" "os" "strings" ) Although charByChar.go does not have many lines of Go code, it needs lots of Go standard packages, which is a naïve indication that the task it implements is not trivial. The second part is: func main() { arguments := os.Args if len(arguments) == 1 { fmt.Println("Not enough arguments!") os.Exit(1) } input := arguments[1] The third part is the following: buf, err := ioutil.ReadFile(input) if err != nil { fmt.Println(err) os.Exit(1) } The last part has the next Go code: in := string(buf) s := bufio.NewScanner(strings.NewReader(in)) s.Split(bufio.ScanRunes) for s.Scan() { fmt.Print(s.Text()) } } ScanRunes is a split function that returns each character (rune) as a token. Then the call to Scan() allows us to process each character one by one. There also exist ScanWords and ScanLines for getting words and lines scanned, respectively. If you use fmt.Println(s.Text()) as the last statement of the program instead of fmt.Print(s.Text()), then each character will be printed on its own line and the task of the program will be more obvious.
Executing charByChar.go generates the following kind of output: $ go run charByChar.go test package main … The wc(1) command can verify the correctness of the Go code of charByChar.go by comparing the input file with the output generated by charByChar.go: $ go run charByChar.go test | wc 32 54 439 $ wc test 32 54 439 test

How to create sparse files in Go

Big files that are created with the os.Seek() function may have holes in them and occupy fewer disk blocks than files with the same size but without holes in them; such files are called sparse files. This section will develop a program that creates sparse files. The Go code of sparse.go will be presented in three parts. The first part is: package main import ( "fmt" "log" "os" "path/filepath" "strconv" ) The second part of sparse.go has the following Go code: func main() { if len(os.Args) != 3 { fmt.Printf("usage: %s SIZE filename\n", filepath.Base(os.Args[0])) os.Exit(1) } SIZE, _ := strconv.ParseInt(os.Args[1], 10, 64) filename := os.Args[2] _, err := os.Stat(filename) if err == nil { fmt.Printf("File %s already exists.\n", filename) os.Exit(1) } The strconv.ParseInt() function is used for converting the command line argument that defines the size of the sparse file from its string value to its integer value. Additionally, the os.Stat() call makes sure that you will not accidentally overwrite an existing file. The last part is where the action takes place: fd, err := os.Create(filename) if err != nil { log.Fatal("Failed to create output") } _, err = fd.Seek(SIZE-1, 0) if err != nil { fmt.Println(err) log.Fatal("Failed to seek") } _, err = fd.Write([]byte{0}) if err != nil { fmt.Println(err) log.Fatal("Write operation failed") } err = fd.Close() if err != nil { fmt.Println(err) log.Fatal("Failed to close file") } } First, you try to create the desired sparse file using os.Create(). Then, you call fd.Seek() in order to make the file bigger without adding actual data. Last, you write a byte to it using fd.Write(). As you do not have anything more to do with the file, you call fd.Close() and you are done. Executing sparse.go generates the following output: $ go run sparse.go 1000 test $ go run sparse.go 1000 test File test already exists. exit status 1 How can you tell whether a file is a sparse file or not? You will learn in a while, but first let us create some files: $ go run sparse.go 100000 testSparse $ dd if=/dev/urandom bs=1 count=100000 of=noSparseDD 100000+0 records in 100000+0 records out 100000 bytes (100 kB) copied, 0.152511 s, 656 kB/s $ dd if=/dev/urandom seek=100000 bs=1 count=0 of=sparseDD 0+0 records in 0+0 records out 0 bytes (0 B) copied, 0.000159399 s, 0.0 kB/s $ ls -l noSparseDD sparseDD testSparse -rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:43 noSparseDD -rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:43 sparseDD -rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:40 testSparse So, how can you tell if any of the three files is a sparse file or not? The -s flag of the ls(1) utility shows the number of file system blocks actually used by a file. So, the output of the ls -ls command allows you to detect if you are dealing with a sparse file or not: $ ls -ls noSparseDD sparseDD testSparse 104 -rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:43 noSparseDD 0 -rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:43 sparseDD 8 -rw-r--r-- 1 mtsouk mtsouk 100000 Apr 29 21:40 testSparse Now look at the first column of the output. The noSparseDD file, which was generated using the dd(1) utility, is not a sparse file.
The sparseDD file is a sparse file generated using the dd(1) utility. Last, the testSparse is also a sparse file that was created using sparse.go. Mihalis Tsoukalos is a Unix administrator, programmer, DBA and mathematician who enjoys writing. He is currently writing Mastering Go. His research interests include programming languages, databases and operating systems. He holds a B.Sc in Mathematics from the University of Patras and an M.Sc in IT from University College London (UK). He has written various technical articles for Sys Admin, MacTech, C/C++ Users Journal, Linux Journal, Linux User and Developer, Linux Format and Linux Voice.


The oldest programming languages in use today

Antonio Cucciniello
11 Jul 2017
5 min read
Today, we are going to be discussing some of the oldest, most established programming languages that are still in use today. Some developers may be surprised to learn that many of these languages surpass them in age, in a world where technology, especially in the world of development, is advancing at such a rapid rate. But then, old is gold, after all. So, in age order, let's present the oldest programming languages in use today:

C

The C language was created in 1972 (it's not that old, okay). C is a lower-level language that was based on an earlier language called B (do you see a trend here?). It is a general-purpose language, and a parent language from which many future programming languages derive, such as C#, Java, JavaScript, Perl, PHP and Python. It is used in many applications that must interface with hardware or work closely with memory.

C++

Pronounced see-plus-plus, C++ was developed 11 years later in 1983. It is very similar to C; in fact, it is often considered an extension of C. It added various concepts such as classes, virtual functions, and templates. It is more of an intermediate-level language that can be used at a lower or higher level, depending on the application. It is also known for being used in low-latency applications.

Objective-C

Around the same time as C++ was being released to the public, Objective-C was created. If you took an educated guess from the name and said that it would be another extension of C, then you'd be right. This version was meant to be an object-oriented version of C (there's a lot in a name, clearly). It is used, probably most famously, by Apple. If you are a Mac or iOS user, then your iPhone or Mac applications were most likely developed with Objective-C (until Apple recently moved over to Swift).

Python

We are going to take a quick jump ahead in time to the '90s for this one. In 1991, the Python programming language was released, though it had been in development since the late '80s. It is a dynamically typed, object-oriented language that is often used for scripting and web applications. It is usually used with some of its frameworks, like Django or Flask, on the backend. It is one of the most popular programming languages in use today.

Ruby

In 1993, Ruby was released. Today, you have probably heard of Ruby on Rails, which is primarily used to create the backend of web applications in Ruby. Unlike the many languages derived from C, this language was influenced by older languages such as Perl and Lisp. Ruby was designed for productive and fun programming, by making the language closer to human needs rather than machine needs.

Java

Two years later, in 1995, Java was developed. This is a high-level language derived from C. It is famously known for its use in web applications and as the language for developing Android applications and the Android OS. It used to be the most popular language a few years ago, but its popularity and usage have definitely decreased.

PHP

In the same year as Java was developed, PHP was born. It is an open source programming language developed for the purpose of creating dynamic websites. It is also used for server-side web development. Its usage is definitely declining, but it is still in use today.

JavaScript

That same year (yup, '95 was a good year for programming, not so much for fans of Full House), JavaScript was brought to the world. Its purpose was to be a high-level language that helped with the functionality of a web page.
Today, it is sometimes used as a scripting language, as well as on the backend of applications with the release of Node.js. It is one of the most popular and widely used programming languages today.

Conclusion

That was our brief history lesson on some of the programming languages still in use today. Even though some of them are 20, 30, even over 40 years old, they are being used by thousands of developers daily. They all have a variety of uses, from lower level to higher level, from web applications to mobile applications. Do you feel there is a need for newer languages, or are you happy with what we have? If you have any favorites, let us know which one and why!

About the author

Antonio Cucciniello is a Software Engineer from New Jersey with a background in C, C++ and JavaScript (Node.js). His most recent project, Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, and reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello

Polycloud: a better alternative to cloud agnosticism

Richard Gall
16 May 2018
3 min read
What is polycloud?

Polycloud is an emerging cloud strategy that is starting to take hold across a range of organizations. The concept is actually pretty simple: instead of using a single cloud vendor, you use multiple vendors. By doing this, you can develop a customized cloud solution that is suited to your needs. For example, you might use AWS for the bulk of your services and infrastructure, but decide to use Google's cloud for its machine learning capabilities.

Polycloud has emerged because of the intensely competitive nature of the cloud space today. The three major vendors - AWS, Azure, and Google Cloud - don't particularly differentiate their products; the core features are pretty much the same across the market. Of course, there are certain subtle differences between each solution, as the example above demonstrates; taking a polycloud approach means you can leverage these differences rather than compromising with your vendor of choice.

What's the difference between a polycloud approach and a cloud agnostic approach?

You might be thinking that polycloud sounds like cloud agnosticism. And while there are clearly many similarities, the differences between the two are very important. Cloud agnosticism aims for a certain degree of portability across different cloud solutions. This can, of course, be extremely expensive. It also adds a lot of complexity, especially in how you orchestrate deployments across different cloud providers. True, there are times when cloud agnosticism might work for you; if you're not using vendor-specific services, then yes, cloud agnosticism might be the way to go. However, in many (possibly most) cases, cloud agnosticism makes life harder. Polycloud makes it a hell of a lot easier. In fact, it ultimately does what many organizations have been trying to do with a cloud agnostic strategy: it takes the parts you want from each solution and builds around what you need. Perhaps one of the key benefits of a polycloud approach is that it gives more power back to users. Your strategic thinking is no longer limited to what AWS, Azure or Google offers - you can instead start with your needs and build the solution around them.

How quickly is polycloud being adopted?

Polycloud first featured in Thoughtworks' Radar in November 2017. At that point it was in the 'assess' stage of the Radar; this means it was simply considered worth exploring and investigating in more detail. However, in its May 2018 Radar report, polycloud had moved into the 'trial' phase, which means it is seen as an approach worth adopting. It will be worth watching the polycloud trend closely over the next few months to see how it evolves. There's a good chance that we'll see it come to replace cloud agnosticism. Equally, it's likely to impact the way AWS, Azure and Google respond. In many ways, the trend is a reaction to the way the market has evolved; it may force the big players in the market to evolve what they offer to customers and clients.

Read next

Serverless computing wars: AWS Lambdas vs Azure Functions

How to run Lambda functions on AWS Greengrass

What’s new in VR Haptics?

Natasha Mathur
16 Jul 2018
8 min read
Virtual Reality is evolving at a staggering rate. Some of the humankind’s most exciting tools and technologies are coming to the Virtual reality Space. One such technology which is taking over the VR world and making it more powerful is the VR haptics technology. VR Haptics technology offers an extra dimension to the VR world by letting users feel the virtual environment via the sense of touch, in addition to visual and aural perception. It makes you feel truly immersive in the artificial world. Imagine yourself in a desert seeing the sand and feeling it glide under your feet as you walk. It uses external devices like Gloves, Shoes, Joysticks, etc, via which users can receive feedback in the form of vibrations from these computer applications. This feedback provides physical sensations in the hand or other parts of the body. It also provides a realistic simulation of the movements and behaviors, similar to those realized in the real world. VR Haptics: a growing domain The VR haptics technology is growing beyond creating vibrations in game controllers. Now, in the near future, you might able to cuddle a dog and feel it licking your face in the VR world. This speaks volumes about the pace at which the haptic technology is growing. One famous example which discusses modern VR is the popular sci-fi novel “Ready Player One”. It illustrates the possibilities of haptic technology in the future. The novel explores the journey of a guy as he sets foot into a virtual reality simulator (OASIS). He uses a headset and a pair of gloves to maneuver around the virtual world. Apart from the gloves, a lot of future concept products are also covered in the novel which makes the illusion of immersion easier to picture, such as towers emitting smells in the VR world and Wind/Temperature generators that mimic real-life. Haptics came about just as head mounted displays (HMD) came to light in the 2010s. HMDs allowed people to see the virtual reality while haptic feedback gave people the opportunity to experience the virtual world and to act within it. Texture, temperature, pressure, taste, smell and other non-visual sensory inputs became real in VR. Apart from virtual reality games and apps, Haptics feedback is used widely in personal computers, mobile devices, robots, and more. But, in this article, we’ll stick to the use of haptic technology or haptic feedback in the VR space. Usually, most VR users use Touch Controllers for haptic feedback. But, recently, a lot of third-party companies are coming out with products such as gloves for systems like the Oculus Rift & HTC Vive. Here is a list of recent developments in the haptic technology for the VR world. Super affordable VR Haptic gloves by Plexus Most of the currently available options in the VR haptics field are somewhat pricey but earlier this month, Plexus announced their new product, a VR haptic and sensor glove. https://vimeo.com/276517370 Source: Plexus Key features Plexus VR haptics gloves offer a fully modular tracking solution which is capable of tracking up to 0.01 degrees of precision. These gloves are capable of individual finger tracking as well as tracking each joint on the finger, thereby, offering higher precision in the VR world. It is compatible with the HTC Vive, Oculus Rift as well as Windows Mixed Reality devices. The VR haptic gloves also come with additional adapter plates. The development kit version of the Plexus haptic gloves, priced at $249 per glove pair, can be pre-ordered on the official Plexus Website. 
The company will begin shipping in August 2018 but at the moment, shipping is only available to USA, Europe, Canada and Australia. Kaaya Tech’s full body tracking HoloSuit Kaaya came out with a motion capture (MoCap) suit called HoloSuit, last month, which offers motion capture as well as haptic feedback. HoloSuit is the world’s first affordable, wireless, easy to use, bi-directional, full body motion capture suit. User’s entire body movement data is captured by Holosuit and it uses haptic feedback to send information back to the user. https://www.youtube.com/watch?v=SEQsDR32gII&t=122s  Source: HoloSuit It can be used in various areas such as sports, healthcare, education, entertainment or industrial operations. Key Features The HoloSuit consists of 36 embedded sensors in the pro version and 26 embedded sensors in the less complex version. Embedded sensors carry out all the work of capturing body motion which is necessary for world-scale tracking. It also consists of 9 haptic feedback devices, and 6 embedded firing buttons ( buttons that govern specific tasks such as saving the game, pausing, etc ) which are dispersed across both arms, legs, and all the ten fingers. It delivers data wirelessly either through Wifi or Bluetooth LE to a VR setup by using Unity or a Wi-Fi SDK. The HoloSuit doesn’t come with an external camera tracking option. It supports all the major platforms such as Windows, macOS, iOS, and Android devices. A complete HoloSuit is quite expensive and starts at a regular price of $999. Jacket and Jersey are priced at $499, jersey or track pants for $399, and a pair of gloves are available for $799. HoloSuit Pro is priced at $1,599. Shipping for the full body VR haptic HoloSuit will start this November. Disney’s VR Haptic “Force Jacket” Disney came out with their VR haptic jacket, namely, “Force Jacket” back in April. It provides users with precisely directed force along with a high-frequency vibration which is felt against the user’s upper body in sync with the visual medium. The prototype is made out of a converted life jacket and is provided with 26 airbags. https://www.youtube.com/watch?v=5BOFHEow608   Source: DisneyResearchHub The Force Jacket is created by engineers at Disney Research, MIT and Carnegie Mellon University. Key Features The Haptic Jacket uses an air compressor and a vacuum pump. These air compartments in the jacket can be inflated to exert a force on the user’s body relative to force sensitive resistors. 26 air compartments are activated using microcontrollers for either pressure or vibrotactile feedback or both. Controllers are used to activating the solenoid valves which are connected to the vacuum. There are certain Jacket inflation parameters like speed, force, and duration which are specified using the haptic effects editor. The jacket makes use of the motion interface to sequentially inflate the compartments for simulating motion across the body. Each airbag within the haptic jacket can be influenced to mimic sensations such as being hit in the chest by a snowball, getting tapped on the shoulder, lime dripping on their back, getting punched in the side, and a snake coiling its body around the user. The jacket is mainly to be used in the entertainment and gaming industry and is not available for the consumer market. But, it seems to have great potential in the future for other applications as well. VR gloves by Haptx Haptx announced a pair of VR gloves back in November of last year. 
The gloves use micro-pneumatics technology for detailed haptics and force feedback (the ability to restrict your fingers’ movement to simulate holding objects) in the fingers. https://www.youtube.com/watch?v=2C2_kbjtjRU Source: HaptX Key Features It features technology that enables it to provide 100 points of tactile displacement feedback. It offers up to five pounds of resistance per finger. It also comes with sub-millimeter precision motion tracking The glove uses SDK of HaptX’s design, which is created by using Unreal Engine’s physics system. This tells the glove when and where it needs to apply haptic effects as well as when and how to engage the force feedback. No information on pricing or worldwide availability has been released by the company yet. But, it is rumored to launch the VR gloves for the consumer market sometime later this year. Apart from these products, there are other minor advancements that keep happening in the VR haptics space. For example, Heather Culbertson, Assistant Professor of USC's computer department, recently created a haptic armband which is capable of mimicking the sensation of a human touch. VR aims to provide you with an environment where you feel truly immersive and where you can feel the objects as in the real world. These products are bringing the VR world a step closer to achieve richer levels of immersive experiences. Gone are the days when haptic feedback was limited to just vibrating controllers and joysticks. As the technology advances, the whole new world of VR haptic devices is here to make your VR experience as seamlessly immersive as possible. In fact, some people even believe that without Haptics, VR is nothing but a picture and a sound. Game developers say Virtual Reality is here to stay CTA announces its first AR/VR Standard terminology Top 7 modern Virtual Reality hardware systems  

What can Artificial Intelligence do for the Aviation industry

Guest Contributor
14 May 2019
6 min read
The use of AI (Artificial Intelligence) technology in commercial aviation has brought some significant changes in the way flights are operated today. The world's leading airline service providers are now using AI tools and technologies to deliver a more personalized traveling experience to their customers. From building AI-powered airport kiosks to automating airline operations and security checks, AI will play even more critical roles in the aviation industry. Engineers have found AI can help the aviation industry with machine vision, machine learning, robotics, and natural language processing. Artificial intelligence has been found to be highly potent, and various studies have shown how its use can bring significant changes in aviation. A few airlines now use artificial intelligence for predictive analytics, pattern recognition, auto-scheduling, targeted advertising, and customer feedback analysis, with promising results for a better flight experience. A recent report shows that aviation professionals are considering using artificial intelligence to monitor pilot voices for a hassle-free flying experience for passengers. This technology is set to bring huge changes to the world of aviation.
Identification of the Passengers
There's no need to explain how modern inventions are contributing towards the betterment of mankind, and AI can help air transportation in numerous ways. Check-in before boarding is a vital task for an airline, and artificial intelligence can make it much easier; the same technology can also be used to identify passengers. American airline company Delta Airlines took the initiative in 2017. Their online check-in via the Delta mobile app and ticketing kiosks has shown promising results, and nowadays you can see many airlines taking similar features to a whole new level. The Transportation Security Administration of the United States has introduced new AI technology to identify potential threats at the John F. Kennedy, Los Angeles International, and Phoenix airports. Likewise, Hartsfield-Jackson Airport is planning to launch America's first biometric terminal. Once installed, the AI technology will make the process of passenger identification fast and easy for officials. Security scanners, biometric identification, and machine learning are some of the AI technologies that will make a number of jobs easier for us. In this way, AI helps us predict disruption in airline services.
Baggage Screening
Baggage screening is another tedious but important task that needs to be done at the airport. However, AI has simplified the process of baggage screening. American Airlines once conducted an app development competition on artificial intelligence, and Team Avatar won it for making an app that allows users to determine the size of their baggage at the airport. Osaka Airport in Japan is planning to install the Syntech ONE 200, an AI technology developed to screen baggage across multiple passenger lanes. Such tools will not only automate the process of baggage screening but also help authorities detect illegal items effectively. The Syntech ONE 200 is compatible with the X-ray security system, and it increases the probability of identifying potential threats.
Assisting Customers
AI can be used to assist customers in the airport, and it can help a company reduce its operational and labor costs at the same time.
Airline companies are now using AI technologies to help their customers resolve issues quickly by providing accurate information about upcoming flights and trips on their internet-enabled devices. More than 52% of airline companies across the world plan to install AI-based tools to improve their customer service functions in the next five years. Artificial intelligence can answer common customer questions, assisting them with check-in requests, flight status, and more. Nowadays artificial intelligence is also used in air cargo for different purposes such as revenue management, safety, and maintenance, and it has shown impressive results to date.
Maintenance Prediction
Airline companies are planning to implement AI technology to predict potential maintenance failures on aircraft. Leading aircraft manufacturer Airbus is taking measures to improve the reliability of aircraft maintenance. They are using Skywise, a cloud-based data storage system. It helps the fleet collect and record a huge amount of real-time data. The use of AI in predictive maintenance analytics will pave the way for a systematic approach to how and when aircraft maintenance should be done. Nowadays you can see how top-rated airlines use artificial intelligence to make the maintenance process easier and improve the user experience at the same time.
Pitfalls of using AI in Aviation
Despite being considered the future of the aviation industry, AI has some pitfalls. For instance, it takes time to implement, and it cannot be used as an ideal tool for customer service. The recent Ethiopian Airlines Boeing 737 incident was an eye-opener, and it clearly represents the drawbacks of AI technology in the aviation sector. The Boeing 737 crashed a few minutes after it took off from the capital of Ethiopia. The failure of the MCAS system was the key reason behind the fatal accident. Also, AI is quite expensive; for example, if an airline company plans to deploy a chatbot, it will have to invest more than $15,000. Thus, it would be hard for small companies to make the same investment, and this could create a barrier between small and big airlines in the future. As the market becomes highly competitive, big airlines will conquer it, and small airlines might face an existential threat as a result.
Conclusion
The use of artificial intelligence in aviation has made many tasks easier for airlines and airport authorities across the world, from identifying passengers to screening bags and providing fast, efficient customer care. Unlike in the software industry, the risks of real-life harm are exponentially higher in the aviation industry. While other industries started using this technology long ago, the adoption of AI in aviation has been one of caution, and rightly so. As the aviation industry embraces the benefits of artificial intelligence and machine learning, it must also invest in putting in place checks and balances to identify, reduce and eliminate harmful consequences of AI, whether intended or otherwise. As Silicon Valley reels from ethical dilemmas, the aviation industry will do well to learn from it while making the transition to a smart future. The aviation industry, known for its rigorous safety measures and processes, may in fact have a thing or two to teach Silicon Valley when it comes to designing, adopting and deploying AI systems into live systems that have high-risk profiles.
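As a closing illustration of the maintenance prediction idea discussed above, here is a toy sketch of how recorded fleet data could be used to flag components that are likely to need attention. This is not how Airbus's Skywise works; the sensor features, thresholds and tail numbers are invented, and the scikit-learn library is assumed purely for illustration.

```python
# Toy predictive-maintenance sketch; all features and data are made up.
from sklearn.linear_model import LogisticRegression

# Each row: [engine_vibration, exhaust_gas_temp_c, cycles_since_overhaul]
history = [
    [0.2, 610, 1200],
    [0.9, 680, 4800],
    [0.3, 615, 2000],
    [1.1, 700, 5200],
]
needed_maintenance = [0, 1, 0, 1]   # recorded outcome for each past case

model = LogisticRegression()
model.fit(history, needed_maintenance)

current_fleet = {
    "MSN-1042": [0.4, 630, 2500],
    "MSN-2087": [1.0, 690, 5000],
}
for aircraft, features in current_fleet.items():
    risk = model.predict_proba([features])[0][1]   # probability maintenance is needed
    if risk > 0.5:
        print(f"{aircraft}: schedule an inspection (predicted risk {risk:.0%})")
```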
Author Bio: Maria Brown is a content writer and blogger who handles social media optimization for 21Twelve Interactive. She believes in sharing her solid knowledge base with a focus on entrepreneurship and business. You can find her on Twitter.

6 common challenges faced by Android App developers

Guest Contributor
21 Sep 2018
5 min read
The primary target for businesses while working on mobile apps is the Android platform, thanks to the massive market share the mobile operating system holds. Its popularity can be attributed to the fact that it is open source and is regularly updated with new enhancements and features. Android devices generally tend to differ in their hardware features even when powered by the same version of the Android OS. This is why it is essential that, when developing apps for Android, developers create mobile apps capable of targeting a diverse range of devices running different versions of the Android OS. During the various stages of planning, developing and testing, developers need to focus comprehensively on the app's functionality, accessibility, usability, performance, and security so that users stay engaged regardless of their choice of device. They also need to look for ways to make their apps deliver a more personalized user experience across the various devices and operating system versions. Furthermore, developers need to understand and find solutions to the common challenges involved in Android app development.
Common Challenges Android App Developers Face
1. Hardware Features
The Android OS is unlike any other mobile operating system. For one thing, it is an open source system. Alphabet gives manufacturers the leeway to customize the operating system to their specific needs. Also, there are no regulations on the devices being released by the different manufacturers. As a result, you can find various Android devices with different hardware features running on the same Android version. Two smartphones running the latest Android version, for example, may have different screen resolutions, cameras, screen sizes, and other hardware. During Android app development, developers need to account for all of this to ensure the application delivers a personalized experience to each user.
2. Lack of Uniform User Interface Design Rules
Since Google is yet to release any standard UI (user interface) design rules or process for mobile app developers, most developers don't follow any standard UI development rules or procedure. Because developers are creating custom UIs in their preferred way, a lot of apps tend to function or look different across different devices. This diversity and incompatibility of the UI usually affects the user experience that the Android app directly delivers. Smart developers prefer to go for a responsive layout that'll keep the UI consistent across different devices. Moreover, developers need to test the UI of the app extensively by combining emulators and real mobile devices. Designing a UI that makes the app deliver the same user experience across varying Android devices is one of the more daunting challenges developers face.
3. API Incompatibility
A lot of developers make use of third-party APIs to enhance the functionality and interoperability of a mobile device. Unfortunately, not all third-party APIs available for Android app development are of high quality. Some APIs were created for a particular Android version and will not work on devices running a different version of the operating system. Developers usually have to come up with ways to make a single API work on all Android versions, a task they often find to be very challenging.
4. Security Flaws
As previously mentioned, Android is open source software, and because of that, manufacturers find it easy to customize Android to their desired specifications.
However, this openness and the massive market size make Android a frequent target for security attacks. There have been several instances where the security of millions of Android mobile devices has been affected by security flaws and bugs like mRST, Stagefright, FakeID, 'Certifi-gate,' TowelRoot and Installer Hijacking. Developers need to include robust security features in their applications and utilize the latest encryption mechanisms to keep user information secure and out of the hands of hackers.
5. Search Engine Visibility
The latest data from Statista shows that the Google Play Store contains a higher number of mobile apps than any other app store. Additionally, a large number of Android users prefer free apps to paid apps, which is why developers need to promote their mobile applications to increase download numbers and employ app monetization options. The best way to promote an app to its target audience is to use comprehensive digital marketing strategies. Most developers make use of digital marketing professionals to promote their apps aggressively.
6. Patent Issues
Google doesn't implement any guidelines for evaluating the quality of new apps being submitted to the Play Store. This lack of a quality assessment guideline causes a lot of patent-related issues for developers. Some developers, to avoid patent issues, have to modify and redesign their apps in the future. Based on my personal experience, I have tried to cover the general challenges faced by Android app developers. I'm sure staying wary of these challenges will help developers build successful apps in the most hassle-free way.
Author Bio: Harnil Oza is the CEO of Hyperlink InfoSystem, one of the leading app development companies in New York, USA and India, which delivers mobile solutions mainly on the Android and iOS platforms. He regularly contributes his knowledge on leading blogging sites.
Read next
LEGO launches BrickHeadz Builder AR, a new and free Android app to bring bricks and toys to life
How Android app developers can convert iPhone apps
How to Secure and Deploy an Android App
4 key benefits of using Firebase for mobile app development

Guest Contributor
19 Oct 2018
6 min read
A powerful backend solution is essential for building sophisticated mobile apps. In recent years, Firebase has risen to prominence as a power-packed Backend-as-a-Service (BaaS), thanks to its wide-ranging features and performance-boosting elements. After Firebase was acquired by Google in 2014, several of its features got a further performance boost. These features have made Firebase quite a popular backend solution for app developers and other emerging IT sectors. Let us look at its 4 key benefits for cross-platform mobile app development.
Unleashing the power of Google Analytics
Google Analytics for Firebase is a completely free solution with unconstrained reporting on many aspects. The reporting feature allows you to evaluate client behavior, report on broken links, user interactions and all other aspects of the user experience and user interface. The reporting helps developers make informed decisions while optimizing the UI and the app's performance.
The unmatched scale of reporting: Firebase analytics allows access to unlimited reports on as many as 500 different events. Developers can also create custom events for reporting as their needs dictate.
Robust audience segmentation: Firebase analytics also allows segmenting the app audience on different parameters and grounds. The integrated console allows segmenting the audience on the basis of device information, custom events, and user characteristics.
Crash reporting to fix bugs
Firebase also helps to address performance issues of an app by fixing bugs right from its backend solution. It is also equipped with a robust crash reporting feature. Its crash reporting helps to deliver intricate and detailed bug and crash reports to address all the coding errors in an app. The reporting feature is capable of grouping the issues into different categories as per the characteristics of the problem. Here are some of the attributes of this reporting feature.
Monitoring errors: It is capable of monitoring fatal errors for iOS apps and both fatal and non-fatal errors for Android apps. Generally, reports are triggered according to the impact such errors have on the user experience.
Required data collection to fix errors: The reports also list all the details concerning the device in use, performance shortfalls and user scenarios around the erroneous events. According to the contributing factors and other similarities, the issues are grouped into different categories.
Email alerts: It also allows sending email alerts as and when such issues or problems are detected.
The configuration of error reporting: The error reporting can also be configured remotely to control who can access the reports and the list of events that occurred before a crash.
It is free: Crash and bug reporting is free with Firebase. You don't need to pay a penny to access this feature.
Synchronizing data with the real-time database
With Firebase you can sync offline and online data through a NoSQL database. This makes the application data available in both the offline and online states of the app. This boosts collaboration on the application data in real time. Here are some of its benefits.
Real-time: Unlike conventional HTTP requests that update data across interfaces, the Firebase Real-time Database syncs data with every change, thus helping to reflect the change in real time across any device in use.
Offline: As the Firebase Real-time Database SDK saves your data to local disk, you can always access the data offline.
As and when connectivity is back, the changes are synced with the present state of the server.
Access from multiple devices: The Firebase Real-time Database allows accessing application data from multiple devices and interfaces, including mobile devices and the web.
Splitting and scaling your data: Thanks to the Firebase Real-time Database, you can split your data across multiple database instances within the same project and set rules for each instance.
Firebase is feature rich for futuristic app development
In addition to the above, Firebase comes with a host of rich features required for building sophisticated, feature-rich mobile apps. Let us have a look at some of the key features of Firebase that have made it a reliable platform for cross-platform development.
Hosting: The hosting feature of Firebase allows developers to update their content in the Content Delivery Network (CDN) during production. Firebase offers full hosting support with a custom domain, a global CDN, and an automatically provided SSL certificate.
Authentication: The Firebase backend service offers a powerful authentication feature. It comes equipped with simple SDKs and easy-to-use libraries to integrate authentication with any mobile app.
Storage: The Firebase storage feature is powered by Google Cloud Storage and allows users to easily download media files and visual content. This feature is also helpful in making use of user-generated content.
Cloud Messaging: With Cloud Messaging, a mobile app can easily send messages to users and engage in real-time communication.
Remote Configuration: This feature of Firebase allows developers to incorporate certain changes in the app remotely. Thanks to this, the changes are reflected in the existing version, and the user does not need to download the latest updated version.
Test Lab: With Test Lab, developers can easily test the app on all the devices listed in the Google data center. It can even do the testing without requiring any test code for the respective app.
Notifications: This feature gives developers a console to manage and send user-focused custom notifications to users.
App Indexing: This feature allows developers to index the app in Google Search and achieve higher search ranks in app marketplaces like the Play Store and App Store.
Dynamic Links: Firebase also equips the app to create dynamic links or smart URLs to present the respective app across all digital platforms, including social media, mobile app, web, email, and other channels.
All the above-mentioned benefits and features that empower mobile app developers to create dynamic user experiences have helped Firebase achieve such unprecedented popularity among developers worldwide. No wonder, in a short time span, it has become a very popular backend solution for so many successful cross-platform mobile apps.
Some exemplary use cases of Firebase
Here we have picked two use cases of Firebase: one relatively new and successful app, and one leading app in its niche.
Fabulous
Fabulous is a unique app that trains users to drop bad habits and adopt good ones to ensure health and wellbeing. By customizing the onboarding process through Firebase, the app managed to double its retention rate. The app could incorporate a custom user experience for different groups of users as per their preferences.
Onefootball
The leading mobile soccer app OneFootball experienced a more than 5% increase in user session time thanks to Firebase.
The new backend solution powered by Firebase helped the game app engage the audience more efficiently than ever before. The custom contents created by this popular app can enjoy better traction with users thanks to higher engagement. Author Bio: Juned Ahmed works as an IT consultant at IndianAppDevelopers, a leading Mobile app development company which offers to hire app developers in India for mobile solutions. He has more than 10 years of experience in developing and implementing marketing strategies. How to integrate Firebase on Android/iOS applications natively. Build powerful progressive web apps with Firebase. How to integrate Firebase with NativeScript for cross-platform app development.

How artificial intelligence can improve pentesting

Melisha Dsouza
21 Oct 2018
8 min read
686 cybersecurity breaches were reported in the first three months of 2018 alone, with unauthorized intrusion accounting for 38.9% of incidents. And with high-profile data breaches dominating headlines, it's clear that while modern, complex software architecture might be more adaptable and data-intensive than ever, securing that software is proving a real challenge. Penetration testing (or pentesting) is a vital component within the cybersecurity toolkit. In theory, it should be at the forefront of any robust security strategy. But it isn't as simple as just rolling something out with a few emails and new software - it demands people with great skills, as well as a culture where stress testing and hacking your own system is viewed as a necessity, not an optional extra. This is where artificial intelligence comes in - the automation that you can achieve through artificial intelligence could well help make pentesting much easier to do consistently and at scale. In turn, this would help organizations tackle both issues of skills and culture, and get serious about their cybersecurity strategies. But before we dive deeper into artificial intelligence and pentesting, let's take a look at where we are now, and the shortcomings of established pentesting methods.
The shortcomings of established methods of pentesting
Typically, pentesting is carried out in 5 stages (image source: Incapsula). Every one of these stages, when carried out by humans, opens up the chance of error. Yes, software is important, but contextual awareness and decisions are required. This process, then, provides plenty of opportunities for error. From misinterpreting data - like thinking a system is secure, when actually it isn't - to taking care of evidence and thoroughly and clearly recording the results of pentests, even the most experienced pentester will get things wrong. But even if you don't make any mistakes, this whole process is hard to do well at scale. It requires a significant amount of time and energy to test a piece of software, which, given the pace of change created by modern processes, makes it much harder to maintain the levels of rigor you ultimately want from pentesting. This is where artificial intelligence comes in.
The pentesting areas that artificial intelligence can impact
Let's dive into the different stages of pentesting that AI can impact.
#1 Reconnaissance Stage
The most important stage in pentesting is the Reconnaissance or information gathering stage. As rightly said by many in cybersecurity, "The more information gathered, the higher the likelihood of success." Therefore, a significant amount of time should be spent obtaining as much information as possible about the target. Using AI to automate this stage would provide accurate results as well as save a lot of the time invested. Using a combination of Natural Language Processing, Computer Vision, and Artificial Intelligence, experts can identify a wide variety of details that can be used to build a profile of the company, its employees, the security posture, and even the software/hardware components of the network and computers.
#2 Scanning Stage
Comprehensive coverage is needed in the scanning phase. Manually scanning through thousands of systems in an organization is not ideal. Nor is it ideal to manually interpret the results returned by scanning tools. AI can be used to tweak the code of the scanning tools to scan systems as well as interpret the results of the scan. It can save pentesters time and improve the overall efficiency of the pentesting process.
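The scanning idea lends itself to a small illustration. The sketch below is a toy example rather than a real scanner: it assumes that past scan findings have already been turned into numeric features and labeled as real issues or noise, and it uses scikit-learn (a library this article does not otherwise rely on) to rank new findings so a pentester can spend time on the riskiest ones first.

```python
# Toy sketch: rank new scan findings by predicted risk.
# Assumes historical findings were labeled (1 = led to a real issue, 0 = noise).
from sklearn.ensemble import RandomForestClassifier

# Each row: [open_port, cvss_score, days_since_last_patch, internet_facing]
past_findings = [
    [22,   2.0,  10, 0],
    [3389, 9.8, 400, 1],
    [80,   5.3,  30, 1],
    [8080, 7.5, 200, 0],
]
labels = [0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(past_findings, labels)

new_findings = [
    [445, 8.1, 365, 1],   # old SMB service exposed externally
    [443, 3.1,   5, 1],   # recently patched web server
]
risk = model.predict_proba(new_findings)[:, 1]   # probability of a real issue

# Highest-risk findings first, so the tester spends time where it matters.
for finding, score in sorted(zip(new_findings, risk), key=lambda pair: -pair[1]):
    print(f"risk={score:.2f}  finding={finding}")
```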
AI can focus on test management and on automatically creating test cases that check whether a particular program can be tagged as having a security flaw. These can also be used to check how a target system responds to an intrusion.
#3 Gaining and Maintaining access stage
The gaining access phase involves taking control of one or more network devices in order to either extract data from the target, or to use that device to then launch attacks on other targets. Once a system is scanned for vulnerabilities, the pentesters need to ensure that the system does not have any loopholes that attackers can exploit to get into the network devices. They need to check that the network devices are safely protected with strong passwords and other necessary credentials. AI-based algorithms can try out different combinations of passwords to check if the system is susceptible to a break-in. The algorithms can be trained to observe user data and look for trends or patterns to make inferences about possible passwords in use. Maintaining access focuses on establishing other entry points to the target. This phase is also expected to trigger mechanisms that ensure the penetration tester's own security while accessing the network. AI-based algorithms should be run at regular intervals of time to guarantee that the primary path to the device is closed. The algorithms should be able to discover backdoors, new administrator accounts, encrypted channels, new network access channels, and so on.
#4 Covering Tracks And Reporting
The last stage tests whether an attacker can actually remove all traces of his attack on the system. Evidence is most often stored in user logs, existing access channels, and in error messages caused by the infiltration process. AI-powered tools can assist in the discovery of hidden backdoors and of access points that have been left open on the target network. All of these findings should be automatically stored in a report with a proper timeline associated with every attack performed. A great example of a tool that efficiently performs all these stages of pentesting is CloudSEK's X-Vigil. This tool leverages AI to extract data, derive analysis and discover vulnerabilities in time to protect an organization from a data breach.
Manual vs automated vs AI-enabled pentesting
Now that you have gone through the shortcomings of manual pentesting and the advantages of AI-based pentesting, let's do a quick side-by-side comparison to understand the differences between them.
Accuracy: Manual testing is not accurate at all times due to human error. Automated testing is more likely to return false positives. AI-enabled pentesting is accurate as compared to automated testing.
Time: Manual testing is time-consuming and takes up human resources. Automated testing is executed by software tools, so it is significantly faster than a manual approach. AI-enabled testing does not consume much time; the algorithms can be deployed across thousands of systems in a single instance.
Investment: Manual testing requires investment in human resources. Automated testing requires investment in testing tools. AI saves the investment in human resources for pentesting; instead, the same employees can be used to perform less repetitive and more valuable tasks.
Practicality: Manual testing is only practical when the test cases are run once or twice, and frequent repetition is not required. Automated testing is practical when tools need to find vulnerabilities beyond programmable bounds. AI-based pentesting is practical in organizations with thousands of systems that need to be tested at once to save time and resources.
AI-based pentesting tools
Pentoma is an AI-powered penetration testing solution that allows software developers to conduct smart hacking attacks and efficiently pinpoint security vulnerabilities in web apps and servers. It identifies holes in web application security before hackers do, helping prevent any potential security damage. Pentoma analyzes web-based applications and servers to find unknown security risks. In Pentoma, with each hacking attempt, machine learning algorithms incorporate new vulnerability discoveries, thus continuously improving and expanding threat detection capability. Wallarm Security Testing is another AI-based testing tool that discovers network assets, scans for common vulnerabilities, and monitors application responses for abnormal patterns. It discovers application-specific vulnerabilities via Automated Threat Verification. The content of a blocked malicious request is used to create a sanitized test with the same attack vector to see how the application, or its copy in a sandbox, would respond. With such AI-based pentesting tools, pentesters can focus on the development process itself, confident that applications are secured against the latest hacking and reverse engineering attempts, thereby helping to streamline a product's time to market. Perhaps it is the increase in the number of costly data breaches, or the continually expanding attack surface and proliferation of sensitive data, secured with increasingly complex security technologies that businesses lack the in-house expertise to properly manage. Whatever the reason, more organizations are waking up to the fact that vulnerabilities that are not caught in time can be catastrophic for the business. These weaknesses, which can range from poorly coded web applications, to unpatched databases, to exploitable passwords, to an uneducated user population, can enable sophisticated adversaries to run amok across your business. It would be interesting to see the growth of AI in this field to overcome all the aforementioned shortcomings.
Read next
5 ways artificial intelligence is upgrading software engineering
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
8 ways Artificial Intelligence can improve DevOps

Data science on Windows is a big no

Aaron Lazar
13 Apr 2018
5 min read
I read a post from a LinkedIn connection about a week ago. It read: "The first step in becoming a data scientist: forget about Windows." Even if you're not a programmer, that's pretty controversial. The first nerdy thought I had was, that's not true. The first step to Data Science is not choosing an OS, it's statistics! Anyway, I kept wondering what's wrong with doing data science on Windows, exactly. Why is the legacy product (Windows), created by one of the leaders in Data Science and Artificial Intelligence, not suitable to support the very thing it is driving? As a publishing professional who has worked with a lot of authors, one of the main issues I've faced while collaborating with them is the compatibility of platforms, especially when it comes to sharing documents, working with code, etc. At least 80 percent of the authors I've worked with have been using something other than Windows. They are extremely particular about the platform they're working on, and have usually chosen Linux. I don't know if they consider it a punishable offence, but I've been using Windows since I was 12, even though I have played around with Macs and machines running Linux/Unix. I've never been affectionately drawn towards those machines as much as my beloved laptop that is happily rolling on Windows 10 Pro.
Why is data science on Windows a bad idea?
When Microsoft created Windows, its main idea was to make the platform as user friendly as possible, and it focused every ounce of energy on that. And voila! They created one of the simplest operating systems one could ever use. Microsoft wanted to make computing easy for everyone - teachers, housewives, kids, business professionals. However, they did not consider catering to the developer community as much as to its everyday users. Now that's not to say that you can't really use a Windows machine to code. Of course, you can run Python or R programs. But you're likely to face issues with compatibility and speed. If you're choosing to use the command line, and something goes wrong, it's a real PITA to debug on Windows. Also, if you're doing cluster computing with other Linux/Macs, it's better to have one of them yourself. Many would agree that Windows is more likely to suffer a BSoD (Blue Screen of Death) than a Mac or a Unix machine, messing up your algorithm that's been running for a long time. [box type="note" align="" class="" width=""]Check out our most read post 15 useful Python libraries to make your Data science tasks easier. [/box]
Is it all that bad?
Well, not really. In fact, if you need to pump in a couple more gigs of RAM, you can't think of doing that on a Mac. Although you might still encounter some weird stuff like the issues mentioned above, on a Windows PC you can always Google up a workaround. Don't beat yourself up if you own a PC. You can always set up a dual boot, running a Linux distribution in parallel. You might want to check out Vagrant for this. Also, you may be surprised that if you're a Mac owner and you plan some heavy-duty Deep Learning on a GPU, you can't really run CUDA without messing things up. CUDA will only work well with NVIDIA's GPUs on a PC. In Joey Tribbiani's words, "This is a moo point." To me, data science is really OS agnostic. For instance, now with Docker, you don't really have to worry much about which OS you're running - so from that perspective, data science on Windows may work for you. Still feel for Windows? Well, there are obviously drawbacks.
You’ll still keep living with the fear of isolation that Microsoft tries to create in the minds of customers. Moreover, you’ll be faced with “slowdom” if that’s a word, what with all the background processes eating away your computing power! You’ll be defying everything that modern computing is defined by - KISS, Open Source, Agile, etc. Another important thing you need to keep in mind is that when you’re working with so much data, you really don’t wanna get hacked! Last but not the least, if you’re intending to dabble with AI and Blockchain, your best bet is not going to be Windows. All said and done, if you’re a budding data scientist who’s looking to buy some new equipment, you might want to consider a few things before you invest in your machine. Think about what you’ll be working with, what tools you might want to use and if you want to play safe, it’s best to go with a Linux system. If you have the money and want to flaunt it, while still enjoying support from most tools, think about a Mac. And finally, if you’re brave and are not worried about having two OSes running on your system, go in for a Windows PC. So the next time someone decides to gift you a Windows PC, don’t politely decline right away. Grab it and swiftly install a Linux distro! Happy coding! :) *I will put an asterisk here, for the thoughts put in this article are completely my personal opinion and it might differ from person to person. Go ahead and share your thoughts in the comments section below.
How to create a strong data science project portfolio that lands you a job

Aaron Lazar
13 Feb 2018
8 min read
Okay, you’re probably here because you’ve got just a few months to graduate and the projects section of your resume is blank. Or you’re just an inquisitive little nerd scraping the WWW for ways to crack that dream job. Either way, you’re not alone and there are ten thousand others trying to build a great Data Science portfolio to land them a good job. Look no further, we’ll try our best to help you on how to make a portfolio that catches the recruiter’s eye! David “Trent” Salazar‘s portfolio is a great example of a wholesome one and Sajal Sharma’s, is a good example of how one can display their Data Science Portfolios on a platform like Github. Companies are on the lookout for employees who can add value to the business. To showcase this on your resume effectively, the first step is to understand the different ways in which you can add value. 4 things you need to show in a data science portfolio Data science can be broken down into 4 broad areas: Obtaining insights from data and presenting them to the business leaders Designing an application that directly benefits the customer Designing an application or system that directly benefits other teams in the organisation Sharing expertise on data science with other teams You’ll need to ensure that your portfolio portrays all or at least most of the above, in order to easily make it through a job selection. So let’s see what we can do to make a great portfolio. Demonstrate that you know what you're doing So the idea is to show the recruiter that you’re capable of performing the critical aspects of Data Science, i.e. import a data set, clean the data, extract useful information from the data using various techniques, and finally visualise the findings and communicate them. Apart from the technical skills, there are a few soft skills that are expected as well. For instance, the ability to communicate and collaborate with others, the ability to reason and take the initiative when required. If your project is actually able to communicate these things, you’re in! Stay focused and be specific You might know a lot, but rather than throwing all your skills, projects and knowledge in the employer’s face, it’s always better to be focused on doing something and doing it right. Just as you’d do in your resume, keeping things short and sweet, you can implement this while building your portfolio too. Always remember, the interviewer is looking for specific skills. Research the data science job market Find 5-6 jobs, probably from Linkedin or Indeed, that interest you and go through their descriptions thoroughly. Understand what kind of skills the employer is looking for. For example, it could be classification, machine learning, statistical modeling or regression. Pick up the tools that are required for the job - for example, Python, R, TensorFlow, Hadoop, or whatever might get the job done. If you don’t know how to use that tool, you’ll want to skill-up as you work your way through the projects. Also, identify the kind of data that they would like you to be working on, like text or numerical, etc. Now, once you have this information at hand, start building your project around these skills and tools. Be a problem solver Working on projects that are not actual ‘problems’ that you’re solving, won’t stand out in your portfolio. The closer your projects are to the real-world, the easier it will be for the recruiter to make their decision to choose you. This will also showcase your analytical skills and how you’ve applied data science to solve a prevailing problem. 
Put at least 3 diverse projects in your data science portfolio A nice way to create a portfolio is to list 3 good projects that are diverse in nature. Here are some interesting projects to get you started on your portfolio: Data Cleaning and wrangling Data Cleaning is one of the most critical tasks that a data scientist performs. By taking a group of diverse data sets, consolidating and making sense of them, you’re giving the recruiter confidence that you know how to prep them for analysis. For example, you can take Twitter or Whatsapp data and clean it for analysis. The process is pretty simple; you first find a “dirty” data set, then spot an interesting angle to approach the data from, clean it up and perform analysis on it, and finally present your findings. Data storytelling Storytelling showcases not only your ability to draw insight from raw data, but it also reveals how well you’re able to convey the insights to others and persuade them. For example, you can use data from the bus system in your country and gather insights to identify which stops incur the most delays. This could be fixed by changing their route. Make sure your analysis is descriptive and your code and logic can be followed. Here’s what you do; first you find a good dataset, then you explore the data and spot correlations in the data. Then you visualize it before you start writing up your narrative. Tackle the data from various angles and pick up the most interesting one. If it’s interesting to you, it will most probably be interesting to anyone else who’s reviewing it. Break down and explain each step in detail, each code snippet, as if you were describing it to a friend. The idea is to teach the reviewer something new as you run through the analysis. End to end data science If you’re more into Machine Learning, or algorithm writing, you should do an end-to-end data science project. The project should be capable of taking in data, processing it and finally learning from it, every step of the way. For example, you can pick up fuel pricing data for your city or maybe stock market data. The data needs to be dynamic and updated regularly. The trick for this one is to keep the code simple so that it’s easy to set up and run. You first need to identify a good topic. Understand here that we will not be working with a single dataset, rather you will need to import and parse all the data and bring it under a single dataset yourself. Next, get the training and test data ready to make predictions. Document your code and other findings and you’re good to go. Prove you have the data science skill set If you want to get that job, you’ve got to have the appropriate tools to get the job done. Here’s a list of some of the most popular tools with a link to the right material for you to skill-up: Data science languages There's a number of key languages in data science that are essential. It might seem obvious, but making sure they're on your resume and demonstrated in your portfolio is incredibly important. Include things like: Python R Java Scala SQL Big Data tools If you're applying for big data roles, demonstrating your experience with the key technologies is a must. It not only proves you have the skills, but also shows that you have an awareness of what tools can be used to build a big data solution or project. You'll need: Hadoop, Spark Hive Machine learning frameworks With machine learning so in demand, if you can prove you've used a number of machine learning frameworks, you've already done a lot to impress. 
Remember, many organizations won't actually know as much about machine learning as you think. In fact, they might even be hiring you with a view to building out this capability. Remember to include: TensorFlow, Caffe2, Keras, and PyTorch.
Data visualisation tools
Data visualization is a crucial component of any data science project. If you can visualize and communicate data effectively, you're immediately demonstrating you're able to collaborate with others and make your insights accessible and useful to the wider business. Include tools like these in your resume and portfolio: D3.js, Excel charts, Tableau, and ggplot2.
So there you have it. You know what to do to build a decent data science portfolio. It's really worth attending competitions and challenges. They will not only help you keep up to date and well-oiled with your skills, but also give you a broader picture of what people are actually working on and with what tools they're able to solve problems.
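To ground the "Data Cleaning and wrangling" project described earlier, here is a minimal pandas sketch of the kind of steps a reviewer expects to see in such a notebook. The file name and column names are made up for illustration (they echo the bus-delay storytelling example above); the point is the shape of the workflow rather than the specific dataset.

```python
# Minimal data-cleaning sketch for a portfolio notebook.
# "bus_delays.csv" and its columns are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("bus_delays.csv")

# 1. Drop exact duplicates and rows missing the key field.
df = df.drop_duplicates().dropna(subset=["stop_id"])

# 2. Normalise messy text values before grouping on them.
df["route"] = df["route"].str.strip().str.upper()

# 3. Parse timestamps and derive a feature worth analysing.
df["scheduled"] = pd.to_datetime(df["scheduled"], errors="coerce")
df["actual"] = pd.to_datetime(df["actual"], errors="coerce")
df["delay_minutes"] = (df["actual"] - df["scheduled"]).dt.total_seconds() / 60

# 4. A first insight: which stops accumulate the worst average delays?
worst_stops = (
    df.groupby("stop_id")["delay_minutes"]
      .mean()
      .sort_values(ascending=False)
      .head(10)
)
print(worst_stops)
```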

Forget C and Java. Learn Kotlin: the next universal programming language

Sugandha Lahoti
11 May 2018
14 min read
Kotlin is fast moving towards becoming the universal programming language. What is a universal programming language? From a simplistic view, the expectation could be that one language is used for all types of programming. While that may be far-fetched in today's complex world, the expectation could be adjusted to one language becoming the dominant programming language. Most certainly, it is the single, most important language to master. [box type="shadow" align="" class="" width=""]This article is an excerpt from the book,  Kotlin Blueprints, written by Ashish Belagali, Hardik Trivedi, and Akshay Chordiya. With this book, you will learn how to design and prototype professional-grade applications using various features of Kotlin.[/box] Historically, different languages have used strategies appropriate for those times to become the universal programming languages: In the 1970s, C became the universal programming language. Prior to C, the programming languages of the world were divided between low-level and high-level languages, the former being the languages that were close to machine code and the latter being ones that were more concise and worked better for human understanding. The C programming language was developed as a single language that could work as a low-level and a high-level language. The Unix operating system was showcased as one that was built ground-up entirely on C, without needing another low-level language. In the 1990s, Java became the universal programming language with the Write Once Run Anywhere strategy. Prior to Java, developers needed to create different programs to run on different platforms (different operating systems running on different hardware needed different programs to run). However, with Java, programs could be written targeting a single platform, namely the Java Virtual Machine (JVM). The JVM is available on all the popular platforms and takes care of all platform-specific nuances. The Java language became the universal language by being the language in which to write programs for the JVM. Another two decades have passed, and the stage is all set to welcome the next universal language. Let's examine Kotlin's strategy to become that. Why can Kotlin be described as a better Java than any other language? How does Kotlin address areas beyond the Java world? What is Kotlin's winning strategy? What does this all mean for a smart developer? Why Kotlin vs Java? Why is being a better Java important for a language? For over a decade, Java has consistently been the world's most widely used programming language. Therefore, a language that gets crowned as being a better Java should automatically attract the attention of the world's single largest community of programmers: the Java programmers. The TIOBE index is widely referred to as a gauge of the popularity of programming languages. Updated to August 2017, the index graph is reproduced in the following illustration:   The interesting point is that while Java has been the #1 programming language in the world for the last 15 years or so, it has been in a steady state of decline for many years now. Many new languages have kept coming, and existing ones have kept improving, chipping steadily into Java's developer base; however, none of them have managed to take the #1 position from Java so far. Today, Kotlin is poised to become the most serious challenger for the better Java crown, and subsequently, to take the first place, for reasons that we will see shortly. 
Presently at 41st place, Kotlin is marching ahead at a fast pace. In May 2017, Google announced Kotlin to be an officially supported language for Android development alongside Java. This has turned out to be a major boost for Kotlin, and the rate of its adoption has accelerated ever since.

Why not other languages?

Many languages prior to Kotlin have tried to become a better Java. Let's see why they could never become one. Every language attracts the programmer community by giving them the ability to do something that was cumbersome before. Adoption is directly driven by how much value the promise has for programmers and how much faith the community can put into that promise. All languages or frameworks that claimed to be a better Java and offered something worthwhile beyond what Java offers also took something back in turn. Here are a few examples:

- The .NET framework has been the longtime rival of Java and has supported multiple languages from day one. Based on the lessons learned from Java, the .NET designers came up with better language constructs. However, the biggest hurdle for .NET was that it was a proprietary technology, and that created an impediment to its adoption. Also, .NET was more aggressive in adding newer language constructs. While the framework evolved quickly as a result of that, it broke its backward compatibility many times.
- Ruby (and Python) offered shortened code, enticing programming constructs, and greater expressiveness as opposed to the boring Java; however, they took away static typing support (which helps to make robust programs) and made the programs slower.
- Scala offered shortened code and advanced programming constructs without sacrificing typing safety. However, Scala is complex and has a substantially high learning curve. It supports multiple coding styles, so there is a danger that Scala code written by one developer may not be understood easily by another. These are risk factors for any project that includes a team of developers and where the application is expected to be supported over a long period, which is true about most applications anyway.

Why Kotlin?

Unlike other languages, Kotlin offers a lot of power over Java, while not taking anything away. Let's take a look at the following screenshot to see how:

Kotlin is interoperable with Java. It is possible to write applications containing both Java and Kotlin code, calling one from the other. Calling Java code from Kotlin is simpler than the other way around, but the former will be the case most of the time anyway, where new Kotlin code is added on top of legacy Java code. Kotlin can use all the Java libraries and legacy code without needing any code conversion. It is possible to inject Kotlin into a Java project without boiling the ocean.

Concise yet expressive code

While being interoperable, Kotlin code is far superior to Java code. Like Scala, Kotlin uses type inference to cut down on a lot of boilerplate code and make it concise. (Type inference is a better feature than dynamic typing as it reduces the code without sacrificing the robustness of the end product.) However, unlike Scala, Kotlin code is easy to read and understand, even for someone who may not know Kotlin.
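As a minimal, illustrative sketch of type inference at work (the variable names here are our own, not from the book), the compiler works out every type from the right-hand side while the code stays statically typed:

val count = 42                              // inferred as Int
val names = listOf("Ada", "Grace")          // inferred as List<String>
val lookup = mapOf(1 to "one", 2 to "two")  // inferred as Map<Int, String>

In the Java equivalent, each of these declarations would have to spell out the full type on the left-hand side.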
Kotlin's data class construct is the most prominent example of being concise, as shown in the following:

data class Employee (val id: Long, var name: String)

Compared to its Java counterpart, the preceding line has packed into it the class definition, member variables, constructor, getter-setter methods, and also the utility methods, such as equals() and hashCode(). This would easily take 15-20 lines of Java code. The data class construct is not an isolated example. There are many others where the syntax is concise and expressive. Consider the following as additional examples (a small sketch combining both appears at the end of this section):

- Kotlin's default values for function parameters save the need to overload functions
- Kotlin's extension functions can be used to add domain-specific functionality to existing classes, making it easy for someone from the domain to understand

Enhanced robustness

Statically typed languages have a built-in safety net because of the assurance that the compiler will catch any incorrect type cast. Both Java and Kotlin support static typing. With Java Generics introduced in Java 1.5, they both fare better than the Java releases prior to 1.5. However, Kotlin takes a big step further in addressing the null pointer error. This null pointer error causes a lot of checks in Java programs:

String s = someOperation();
if (s != null) {
    ...
}

One can see that the null check is not needed if someOperation() never returns null. On the other hand, it is possible for a programmer to omit the null check while someOperation() returning null is a valid case. With Kotlin, the definition of someOperation() itself declares whether it returns String or String?, and that has implications for the subsequent code, so the developer just cannot go wrong. Refer to the following snippets:

fun someOperation() : String    // not nullable
fun someOperation() : String?   // nullable

// when someOperation() returns String (not nullable):
val s = someOperation()
if (s != null) {    // null check not needed – editor warning
    ...
}

// when someOperation() returns String? (nullable):
val s = someOperation()
n = s.length        // error, null check imposed
n = s?.length ?: 0  // handling the null condition

One may point out that Java developers can use the @Nullable and @NotNull annotations or the Optional class; however, these were added quite late, most developers are not aware of them, and they can always get away with not using them, as the code does not break. Finally, they are not as elegant as putting a question mark. There is also a subtle point here. If a Kotlin developer is careless, they would write just the type name, which would automatically become a non-nullable declaration. If they wanted to make it nullable, they would have to key in that extra question mark deliberately. Thus, you err on the side of caution as far as keeping the code robust is concerned.

Another example of this robustness is found in the val/var declarations. Seasoned programmers know that most variables get a value assigned to them only once. In Kotlin, you declare such a variable with val. At the time of variable declaration, the programmer has to select between val and var, and so puts some thought into it. On the other hand, in Java, you can get away with just declaring the type with its name, and you will rarely find Java code that defines a variable with the final keyword, which is Java's way of declaring that the variable can be assigned a value only once. Basically, with the same maturity level of programmers, you expect relatively more robust code in Kotlin as opposed to Java, and that's a big win from the business perspective.
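To tie the last few points together, here is a minimal sketch showing default parameter values, an extension function, and the val/var distinction in one place (the Order, discountFor, and applyDiscount names are our own illustrative choices, not from the book):

data class Order(val id: Long, var total: Double)

// A default parameter value removes the need for an overloaded variant.
fun discountFor(amount: Double, rate: Double = 0.10): Double = amount * rate

// An extension function adds domain-specific behaviour to an existing class.
fun Order.applyDiscount(rate: Double = 0.10) {
    total -= discountFor(total, rate)
}

fun main() {
    val order = Order(1L, 200.0)   // val: the reference is assigned exactly once
    order.applyDiscount()          // uses the default rate of 10%
    order.applyDiscount(0.25)      // overrides the default
    println(order.total)
    // order = Order(2L, 50.0)     // would not compile: 'order' is a val
}

Mutating order.total is allowed because total is declared with var, while reassigning order itself is rejected by the compiler because order is a val.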
Excellent IDE support from day one

Kotlin comes from JetBrains, who also develop a well-known Java integrated development environment (IDE): IntelliJ IDEA. The JetBrains developers made sure that Kotlin has first-class support in IDEA. Not only that, they also developed a Kotlin plugin for Eclipse, the #1 most widely used Java IDE. Contrast this with the situation when Java appeared on the scene roughly two decades ago. There was no good IDE support. Programmers were asked to use simple text editors. Coding Java was hard, with no safety net provided by an IDE, until the Eclipse editor was open-sourced. In the case of Kotlin, the editor's suggestions being available from day one means that developers can learn the language faster, make fewer mistakes, and write good quality compilable code with relative ease. Clearly, Kotlin does not want to waste any time in climbing up the ladder of popularity.

Beyond being a better Java

We saw that on the JVM platform, Kotlin is neat and quite superior. However, Kotlin has set its eyes beyond the JVM. Its strategy is to win based on its superior and modern feature set. Before we go ahead, let's list the top five appeals of Kotlin:

- Static typing (like in C or Java) means that there is built-in type safety. The compiler catches any incorrect type assignments. This makes programs robust.
- Kotlin is concise and expressive. Being concise implies that there is less to read and maintain. Being expressive implies better maintainability.
- Being a JVM language, Kotlin programs can take advantage of the features built into the JVM, such as its cross-platform nature, memory management, high performance, and sandbox security.
- Kotlin has inbuilt null-safety. Null references are famous as the billion-dollar mistake, as admitted by their inventor Tony Hoare, and they cost a great deal of unnecessary null checks in programs. Kotlin eliminates those and makes the programs more robust.
- Kotlin is easy to learn, especially for Java developers. Its syntax is clean and easy to understand, which makes Kotlin programs fun for developers to write and easy for their peers to understand and enhance. From a business angle, they are more robust and easier to maintain.

Kotlin is in the winning camp

The features of Kotlin receive good validation when one considers that other languages with similar features are also growing in popularity:

- The Crystal language attracts Ruby programmers by adding static typing support. Similarly, TypeScript adds static typing support to JavaScript and has become the preferred language for some JavaScript frameworks.
- Scala and F# add functional programming support to traditional non-functional paradigms without sacrificing type safety and, hence, are more attractive. Kotlin uses functional programming just enough to ease the programming in a lot of cases, but not so much as to make it complex.
- Like Kotlin, Swift and Rust also have inbuilt null-safety. Kotlin and Swift are often compared, as their syntaxes resemble one another a lot.
- For server-side languages designed after the emergence of parallel computing, providing inbuilt constructs that ease the programmer's work became one of the chief requirements. One can find this in both Kotlin (coroutines) and Rust; a short sketch follows this list.
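As a minimal, illustrative sketch of Kotlin's coroutines (this assumes the kotlinx-coroutines-core library is on the classpath; fetchPrice and fetchStock are our own hypothetical functions, not from the book):

import kotlinx.coroutines.*

// Two simulated I/O calls run concurrently, yet the code reads sequentially
// and no thread is blocked while each call waits.
fun main() = runBlocking {
    val price = async { fetchPrice() }
    val stock = async { fetchStock() }
    println("price=${price.await()}, in stock=${stock.await()}")
}

suspend fun fetchPrice(): Double { delay(100); return 9.99 }  // simulated I/O
suspend fun fetchStock(): Int { delay(100); return 42 }       // simulated I/O

This is the kind of inbuilt construct for parallel work that the last point above refers to.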
Go native strategy

The Kotlin developers figured that the same strategy that is used on the JVM platform could be used on other platforms too. Consider the following illustration:

On no platform does Kotlin disrupt the platform's existing technology:

- The JVM works with Java bytecode, and Kotlin gives an alternative to Java for generating the same bytecode (by no means is Kotlin the first alternative, as there are already 200+ languages that work with the JVM, but it is the most elegant one for all the reasons that we have seen previously).
- On modern browsers, where JavaScript is the de facto standard, Kotlin can work by transpiling to JavaScript. Again, this means that Kotlin is friendly with existing browsers without making any special effort.
- On the Node.js platform, where JavaScript is used on the server side, your Kotlin code transpiles into JavaScript, and hence there are no changes needed in the Node.js framework for Kotlin to run.
- Similarly, Kotlin/Native plans to work natively with other technologies.

Since the platform's technology is not disrupted, there are zero changes needed at the platform level to adopt Kotlin. Kotlin's compatibility with a given platform can be taken for granted from day one. This eliminates a big business risk.

Kotlin's winning strategy

Kotlin's winning strategy is the sum of the various factors that we have seen previously. It has a two-pronged strategy: to win over developers with the coolness of the language and the ease of working with it, and to win over business users with its business benefits. The following illustration shows us the different benefits of using Kotlin:

The other benefits also include:

- The growing popularity of the language
- Endorsement from Google, making Kotlin an officially supported language in May 2017
- Kotlin-specific development frameworks emerging
- Leading Java frameworks, such as Spring, offering Kotlin-specific improvements
- The growing number of applications being tried out with Kotlin
- The user groups spread across Kotlin developer hubs
- The growing number of technology companies using Kotlin

With this in mind, the winning strategy for smart programmers is to master Kotlin and learn to work with Kotlin on various platforms. Being ahead of the curve, as opposed to following the world after Kotlin is already big, will be a quick path to being recognized as a leader. Further chapters of this book will help you in exactly this mission.

Apart from going through this book, we strongly suggest you join the community:

- Join the Kotlin weekly mailing list at http://kotlinweekly.net
- Join the nearest Kotlin user group at http://kotlinlang.org/community/user-groups.html
- Join Kotlin's community on Slack at https://kotlinlang.slack.com/

We saw how Kotlin is well positioned to take off as the universal programming language. It offers an opportunity for smart programmers to establish themselves at the forefront of this rising tide.

This article was taken from the book Kotlin Blueprints. If you liked reading this piece, check out the book to build comprehensive applications using Kotlin features.

- Getting started with Kotlin programming
- Build your first Android app with Kotlin
- How to convert Java code into Kotlin