Tech Guides

851 Articles

Why you should learn Isomorphic JavaScript in 2017

Sam Wood
26 Jan 2017
3 min read
One of the great challenges of JavaScript development has been wrangling your code for both the server side and the client side of your site or app. Fullstack JS devs have worked to master the skills needed to work on both the frontend and the backend, and numerous JS libraries and frameworks have been created to make your life easier. That's why Isomorphic JavaScript is your next big thing to learn for 2017.

What even is Isomorphic JavaScript?

Isomorphic JavaScript refers to JavaScript applications that run on both the client and the server. The term comes from a mathematical concept whereby a property remains constant even as its context changes. Isomorphic JavaScript therefore shares the same code, whether it's running in the context of the backend or the frontend. It's often called the 'holy grail' of web app development.

Why should I use Isomorphic JavaScript?

"[Isomorphic JavaScript] provides several advantages over how things were done 'traditionally'. Faster perceived load times and simplified code maintenance, to name just two," says Google engineer and Packt author Matt Frisbie in our 2017 Developer Talk Report. Netflix, Facebook, and Airbnb have all adopted isomorphic libraries for building their JS apps.

Isomorphic JS apps are *fast*. Because the first page is rendered on the server, no time is spent loading and parsing client-side JavaScript before the user sees content. It might only be a second, but that slow load time can be all it takes to frustrate and lose a user. Isomorphic apps deliver ready-to-render HTML straight to the browser, ensuring a better user experience overall.

Isomorphic JavaScript isn't just quick for your users, it's also quick for you. By utilizing one framework that runs on both the client and the server, you'll open yourself up to a world of faster development times and easier code maintenance.

What tools should I learn for Isomorphic JavaScript?

The premier and most powerful tool for isomorphic JS is probably Meteor, the fullstack JavaScript platform. With 10 lines of JavaScript in Meteor, you can do what would take you thousands elsewhere. There's no need to worry about building your own stack of libraries and tools; Meteor does it all in one single package.

Other isomorphic-focused libraries include Rendr, created by Airbnb. Rendr allows you to build a Backbone.js + Handlebars.js single-page app that can also be fully rendered on the server side; it was used to build the Airbnb mobile web app for drastically improved page load times. Rendr also strives to be a library rather than a framework, meaning that it can be slotted into your stack as you like and gives you a bit more flexibility than a complete solution such as Meteor.
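To make the core idea concrete, here is a minimal sketch of code sharing between server and client. This is not Meteor or Rendr specifically, and the file names and function names are illustrative; it assumes Express is installed (npm install express).

// shared/render.js — the same file is require()'d on the server and bundled for the browser
function renderGreeting(name) {
  return '<h1 id="greeting">Hello, ' + name + '!</h1>';
}
if (typeof module !== 'undefined') {
  module.exports = renderGreeting; // export for Node / a bundler
}

// server.js — first paint is produced on the server, no client JS needed yet
const express = require('express');
const renderGreeting = require('./shared/render');

const app = express();
app.get('/', (req, res) => {
  res.send(renderGreeting('visitor'));
});
app.listen(3000);

In the browser, the very same renderGreeting function can rewrite the page after a user interaction. Sharing that one code path between contexts is exactly the property that isomorphic frameworks industrialize.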


Messaging app Telegram's updated Privacy Policy is an open challenge

Amarabha Banerjee
08 Sep 2018
7 min read
Social media companies are facing a lot of heat at present because of their privacy issues. One of them is Facebook: the Cambridge Analytica scandal even prompted a senate hearing for Mark Zuckerberg. On the other end of this spectrum there is the messaging app Telegram, registered in London, United Kingdom, and founded by the Russian entrepreneur Pavel Durov. Telegram has been in the news for the exact opposite situation. It's often touted as one of the most secure and secretive messaging apps. Its end-to-end encryption ensures that security agencies across the world have a tough time getting access to any suspicious piece of information. For this reason, Russia banned the Telegram app in April 2018.

Telegram recently updated its privacy policy. These updates have further ensured that Telegram will retain the title of the most secure messaging application on the planet. It's imperative for any messaging app to get access to our data, but how it chooses to use that data makes you either vulnerable or secure. Telegram states in the latest update that it processes personal data on the grounds that such processing caters to the following two goals:

Providing effective and innovative Services to our users
To detect, prevent or otherwise address fraud or security issues in respect of their provision of Services.

The caveat for the second point is that security interests shall not override the fundamental rights and freedoms that require protection of personal data. This clause is an excellent example of how applications can prove to be a torchbearer for human rights and basic privacy amidst glaring loopholes. Telegram has listed the kinds of user data accessed by the app. They are as follows:

Basic Account Data

Telegram stores basic account user data that includes the mobile number, profile name, profile picture, and 'about' information, which are needed to create a Telegram account. The most interesting part of this is that Telegram allows you to keep only your username public (if you choose to). The people who have you in their contact list will see you as you want them to; for example, you might be John Doe in public, but your mom will still see you as 'Dear Son' in her contacts. Telegram doesn't require your real name, gender, or age, or even that your screen name be your real name.

E-mail Address

When you enable 2-step verification for your account or store documents using the Telegram Passport feature, you can opt to set up a password recovery email. This address will only be used to send you a password recovery code if you forget your password. They are particular about not sending any unsolicited marketing emails to you.

Personal Messages

Cloud Chats: Telegram stores messages, photos, videos, and documents from your cloud chats on their servers so that you can access your data from any of your devices at any time without having to rely on third-party backups. All data is stored heavily encrypted, and the encryption keys in each case are stored in several other data centers in different jurisdictions. This way, local engineers or physical intruders cannot get access to user data.

Secret Chats: Telegram has a feature called secret chats that uses end-to-end encryption. This means that all data is encrypted with a key that only the sender and the recipient know. There is no way for Telegram, or anybody else without direct access to your device, to learn what content is being sent in those messages. Telegram does not store secret chats on their servers. They also do not keep any logs for messages in secret chats, so after a short period of time there is no way of determining who or when you messaged via secret chats. Secret chats are not available in the cloud; you can only access those messages from the device they were sent to or from.

Media in Secret Chats: When you send photos, videos, or files via secret chats, each item is encrypted with a separate key, not known to the server, before being uploaded. This key and the file's location are then encrypted again, this time with the secret chat's key, and sent to your recipient, who can then download and decipher the file. This means that the file is technically on one of Telegram's servers, but it looks like a piece of random, indecipherable garbage to everyone except you and the recipient. These random data packets are periodically purged from the storage disks, too.
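The pattern described for media in secret chats is a form of envelope encryption. The following toy sketch in Node.js illustrates the general idea only; it is not Telegram's actual MTProto protocol, and all names, algorithms, and key sizes are illustrative assumptions.

const crypto = require('crypto');

// Encrypt a file with a fresh per-file key, then encrypt ("wrap") that key
// with the key the two chat partners already share. The server only ever
// stores the encrypted file; the wrapped key travels to the recipient.
function encryptForSecretChat(fileBuffer, chatKey) {
  const fileKey = crypto.randomBytes(32); // never leaves the device in clear
  const fileIv = crypto.randomBytes(16);
  const fileCipher = crypto.createCipheriv('aes-256-cbc', fileKey, fileIv);
  const encryptedFile = Buffer.concat([fileCipher.update(fileBuffer), fileCipher.final()]);

  const keyIv = crypto.randomBytes(16);
  const keyCipher = crypto.createCipheriv('aes-256-cbc', chatKey, keyIv);
  const wrappedFileKey = Buffer.concat([keyCipher.update(fileKey), keyCipher.final()]);

  return { encryptedFile, fileIv, wrappedFileKey, keyIv };
}

// Usage: both parties are assumed to already share a 32-byte chat key.
const chatKey = crypto.randomBytes(32);
const blob = encryptForSecretChat(Buffer.from('photo bytes'), chatKey);
console.log(blob.encryptedFile.length, blob.wrappedFileKey.length);

Without the chat key, the stored encryptedFile is indistinguishable from random bytes, which is the "indecipherable garbage" property the policy describes.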
Public Chats: In addition to private messages, Telegram also supports public channels and public groups. All public chats are cloud chats. Like everything else on Telegram, the data you post in public communities is encrypted, both in storage and in transit, but everything you post in public will be accessible to everyone.

Phone Number and Contacts

Telegram uses phone numbers as unique identifiers so that it is easy for you to switch from SMS and other messaging apps and retain your social graph.

Cookies

Telegram promises that the only cookies they use are those needed to operate and provide their Services on the web, and permission from the user is a must before those cookies reach your browser. They clearly state that they don't use cookies for profiling or advertising. Their cookies are small text files that allow them to provide and customize their Services and deliver an enhanced user experience. Whether or not to use these cookies is a choice made by the users.

So, how does Telegram remain in business?

The Telegram business model doesn't match that of a revenue-generating service. The founder, Pavel Durov, is also the founder of the popular Russian social networking site VK. Telegram doesn't charge for any messaging services and it doesn't show ads yet, although some in-app purchase features might be included in a new version. As of now, the main sources of revenue for Telegram are donations and, mainly, the earnings of Pavel Durov himself (from the social networking site VK).

What can social networks learn from Telegram?

Telegram's policies elevate privacy standards that many are asking of other social messaging apps. The clamour for stopping the exploitation of user data, and for ending the use of location details in targeted marketing and advertising campaigns, is increasing. Telegram shows that privacy can be achieved, if intended, in today's overexposed social media world.

But there are also costs to this level of user privacy and secrecy that are sometimes not discussed enough. The ISIS members behind the 2015 Paris attacks used Telegram to spread propaganda. ISIS also used the app to recruit the perpetrators of the Christmas market attack in Berlin last year and claimed credit for the massacre. More recently, a Turkish prosecutor found that the shooter behind the New Year's Eve attack at the Reina nightclub in Istanbul used Telegram to receive directions for it from an ISIS leader in Raqqa.
While these incidents can never negate the need for a secure and less intrusive social media platform like Telegram, there should be workarounds and escape routes designed to stop extremist and terrorist activities. Telegram has assured that all ISIS messaging channels are deleted from its network, which is a great way to start. Content moderation, proactive sentiment and pattern recognition, and content/account isolation are the next challenges for Telegram. One thing is for sure: Telegram's continual pursuit of user secrecy and user data privacy is throwing down an open challenge for others to follow suit. Whether others will oblige or not, only time will tell.

To read about Telegram's updated privacy policies in detail, you can check out the official Telegram Privacy Settings.

How to stay safe while using Social Media
Time for Facebook, Twitter and other social media to take responsibility or face regulation
What RESTful APIs can do for Cloud, IoT, social media and other emerging technologies


MapReduce on Amazon EMR with Node.js

Pedro Narciso
14 Dec 2016
8 min read
In this post, you will learn how to write a Node.js MapReduce application and how to run it on Amazon EMR. You don't need to be familiar with Hadoop or the EMR APIs. In order to run the examples, you will need a GitHub account, an Amazon AWS account, some money to spend at AWS, and Bash or an equivalent installed on your computer.

EMR, BigData, and MapReduce

We define BigData as those data sets too large or too complex to be processed by traditional processing applications. BigData is also a relative term: a data set can be too big for your Raspberry Pi while being a piece of cake for your desktop.

What is MapReduce? MapReduce is a programming model that allows data sets to be processed in a parallel and distributed fashion. How does it work? You create a cluster and feed it with the data set. Then, you define a mapper and a reducer. MapReduce involves the following three steps:

Mapping step: This breaks down the input data into Key/Value pairs
Shuffling step: Key/Value pairs are grouped by key
Reducing step: Key/Value pairs are processed by key in parallel

It's guaranteed that all data belonging to a single key will be processed by a single reducer instance.

Our processing job project directory setup

Today, we will implement a very simple processing job: counting unique words from a set of text files. The code for this article is hosted on GitHub. Let's set up a new directory for our project:

$ mkdir -p emr-node/bin
$ cd emr-node
$ npm init --yes
$ git init

We also need some input data. In our case, we will download some books from Project Gutenberg as follows:

$ mkdir data
$ curl -Lo data/tmohah.txt http://www.gutenberg.org/ebooks/45315.txt.utf-8
$ curl -Lo data/mad.txt http://www.gutenberg.org/ebooks/5616.txt.utf-8

Mapper and Reducer

As we stated before, the mapper will break down its input into Key/Value pairs. Since we use the streaming API, we will read the input from stdin. We will then split each line into words and, for each word, print "word<TAB>1" to stdout; the TAB character is the expected field separator. We will see later the reason for setting "1" as the value. In plain JavaScript, our ./bin/mapper can be expressed as:

#!/usr/bin/env node
const readline = require('readline');
const rl = readline.createInterface({ input: process.stdin });

rl.on('line', function (line) {
  // Split on runs of whitespace so repeated spaces don't produce empty words.
  line.trim().split(/\s+/).forEach(function (word) {
    console.log(`${word}\t1`);
  });
});

As you can see, we have used the readline module (a Node built-in module) to parse stdin. Each line is broken down into words, and each word is printed to stdout as we stated before.

Time to implement our reducer. The reducer expects a set of Key/Value pairs, sorted by key, as input, such as the following:

First<TAB>1
First<TAB>1
Second<TAB>1
Second<TAB>1
Second<TAB>1

We then expect the reducer to output the following:

First<TAB>2
Second<TAB>3

Reducer logic is very simple and can be expressed in pseudocode as:

IF !previous_key
    previous_key = current_key
    counter = value
IF previous_key equals current_key
    counter = counter + value
ELSE
    print previous_key<TAB>counter
    previous_key = current_key
    counter = value

The first statement is necessary to initialize the previous_key and counter variables.
Let's see the real JavaScript implementation of ./bin/reducer:

#!/usr/bin/env node
var previousKey, counter;
const readline = require('readline');
const rl = readline.createInterface({ input: process.stdin });

function print() {
  console.log(`${previousKey}\t${counter}`);
}

function countWord(line) {
  let [currentKey, value] = line.split('\t');
  value = +value;
  if (typeof previousKey === 'undefined') {
    previousKey = currentKey;
    counter = value;
    return;
  }
  if (previousKey === currentKey) {
    counter = counter + value;
    return;
  }
  print();
  previousKey = currentKey;
  counter = value;
}

process.stdin.on('end', function () {
  print(); // flush the last key
});

rl.on('line', countWord);

Again, we use the readline module to parse stdin line by line. The countWord function implements the reducer logic described before. The last thing we need to do is to set execution permissions on those files:

chmod +x ./bin/mapper
chmod +x ./bin/reducer

How do I test it locally? You have two ways to test your code: install Hadoop and run a job, or use a simple shell script. The second one is my preferred one for its simplicity:

./bin/mapper <<EOF | sort | ./bin/reducer
first second first
first second
first
EOF

It should print the following:

first<TAB>4
second<TAB>2

We are now ready to run our job on EMR!

Amazon environment setup

Before we run any processing job, we need to perform some setup on the AWS side. If you do not have an S3 bucket, you should create one now. Under that bucket, create the following directory structure:

<your bucket>
├── EMR
│   └── logs
├── bootstrap
├── input
└── output

Upload our previously downloaded books from Project Gutenberg to the input folder. We also need the AWS CLI installed on the computer; you can install it with the Python package manager. If you do not have the AWS CLI on your computer, run:

$ sudo pip install awscli

awscli requires some configuration, so run the following and provide the requested data:

$ aws configure

You can find this data in your Amazon AWS web console. Be aware that usability is not Amazon's strongest point. If you do not have your IAM EMR roles yet, it is time to create them:

aws emr create-default-roles

Good. You are now ready to deploy your first cluster. Check out this run-cluster.sh script:

#!/bin/bash
MACHINE_TYPE='c1.medium'
BUCKET='pngr-emr-demo'
REGION='eu-west-1'
KEY_NAME='pedro@triffid'

aws emr create-cluster --release-label 'emr-4.0.0' --enable-debugging --visible-to-all-users --name PNGRDemo --instance-groups InstanceCount=1,InstanceGroupType=CORE,InstanceType=$MACHINE_TYPE InstanceCount=1,InstanceGroupType=MASTER,InstanceType=$MACHINE_TYPE --no-auto-terminate --log-uri s3://$BUCKET/EMR/logs --bootstrap-actions Path=s3://$BUCKET/bootstrap/bootstrap.sh,Name=Install --ec2-attributes KeyName=$KEY_NAME,InstanceProfile=EMR_EC2_DefaultRole --service-role EMR_DefaultRole --region $REGION

The previous script will create a cluster with one master and one core node, which is big enough for now. You will need to update this script with your own bucket, region, and key name. Remember that your keys are listed at "AWS EC2 console/Key pairs". Running this script will print something like the following:

{
    "ClusterId": "j-1HHM1B0U5DGUM"
}

That is your cluster ID, and you will need it later. Please visit your Amazon AWS EMR console and switch to your region; your cluster should be listed there. It is possible to add the processing steps with either the UI or the AWS CLI. Let's use a shell script (add-step.sh):

#!/bin/bash
CLUSTER_ID=$1
BUCKET='pngr-emr-demo'
OUTPUT='output/1'

aws emr add-steps --cluster-id $CLUSTER_ID --steps Name=CountWords,Type=Streaming,Args=[-input,s3://$BUCKET/input,-output,s3://$BUCKET/$OUTPUT,-mapper,mapper,-reducer,reducer]

It is important to point out that our OUTPUT directory must not already exist at S3; otherwise, the job will fail. Call ./add-step.sh plus the cluster ID to add our CountWords step:

./add-step.sh j-1HHM1B0U5DGUM

Done! So go back to the Amazon UI, reload the cluster page, and check the steps. The CountWords step should be listed there. You can track job progress from the UI (reload the page) or from the command line interface. Once the job is done, terminate the cluster. You will probably want to configure the cluster to terminate as soon as it finishes or when any step fails; termination behavior can be specified with the "aws emr create-cluster" command.

Sometimes the bootstrap process can be difficult. You can SSH into the machines, but before that you will need to modify their security groups, which are listed at "EC2 web console/security groups".

Where to go from here? You can (and should) break down your processing jobs into smaller steps, because it will simplify your code and add more composability to your steps. You can compose more complex processing jobs by using the output of one step as the input for the next step. Imagine that you have run the CountWords processing job several times and now you want to sum the outputs. Well, for that particular case, you just add a new step with an "identity mapper" and your already built reducer, and feed it with all of the previous outputs. Now you can see why we output "word<TAB>1" from the mapper.
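Such an identity mapper is trivial to write. Here is a minimal sketch following the same streaming conventions as ./bin/mapper above; the file name is illustrative and not part of the original project:

#!/usr/bin/env node
// Hypothetical ./bin/identity-mapper: pass every line through unchanged, so
// the sort/shuffle phase and the existing reducer can re-aggregate the
// "word<TAB>count" pairs produced by earlier CountWords runs.
const readline = require('readline');
const rl = readline.createInterface({ input: process.stdin });
rl.on('line', (line) => console.log(line));

Because the reducer simply adds up whatever number follows the TAB, feeding it "word<TAB>42" lines works just as well as "word<TAB>1" lines.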
About the author

Pedro Narciso García Revington is a Senior Full Stack Developer with 10+ years of experience in high scalability and availability, microservices, automated deployments, data processing, CI, (T,B,D)DD, and polyglot persistence.


How to Handle AWS Through the Command Line

Felix Rabe
04 Jan 2016
8 min read
Are you a developer running OS X or Linux who would like to give Amazon AWS a shot? And do you prefer the command line over fancy GUIs? Read on, and you might have your first AWS-provided, Ubuntu-powered Nginx web server running in 30 to 45 minutes.

This article will guide you from just having OS X or Linux installed on your computer to a running virtual server on Amazon Web Services (AWS), completely controlled via its Command Line Interface (CLI). If you are just starting out with Amazon AWS, you'll be able to benefit from the AWS Free Tier for 12 months; this tutorial only uses resources available via the AWS Free Tier. (Disclaimer: I am not affiliated with Amazon in any way other than as a user.)

Required skills: A basic knowledge of the command line (shell) and web technologies (SSH, HTTP) is all that's needed. Open your operating system's terminal to follow along.

What are Amazon Web Services (AWS)?

Amazon AWS is a collection of services based on Amazon's infrastructure that Amazon provides to the general public. These services include computing resources, file storage, databases, and even crowd-sourced manual labor. See http://aws.amazon.com/products/ for an overview of the provided services.

What is the Amazon AWS Command Line Interface (CLI)?

The AWS CLI tool enables you to control all operational aspects of AWS from the command line. This is a great advantage for automating processes and for people (like me) with a preference for textual user interfaces.

How to create an AWS account

Head over to https://aws.amazon.com/ and sign up for an account. This process will require you to have a credit card and a mobile phone at hand. Then come back and read on.

How to generate access keys

Before the AWS CLI can be configured for use, you need to create a user with the required permissions and download that user's access keys (AWS Access Key ID and AWS Secret Access Key) for use in the AWS CLI.

In the AWS Console (https://console.aws.amazon.com/), open your account menu in the upper right corner and click on "Security Credentials". If a dialog pops up, just dismiss it for now by clicking "Continue to Security Credentials". Then, in the sidebar on the left, click on "Groups". Create a group (e.g. "Developers") and attach the policy "AmazonEC2FullAccess" to it. Next, in the sidebar on the left, click on "Users". Create a new user, and then copy or download the security credentials to a safe place; you will need them soon. Click on the new user, then "Add User to Groups" to add the user to the group you've just created. This gives the user (and the keys) the required capabilities to manipulate EC2.

Install AWS CLI via Homebrew (OS X)

(Linux users can skip to the next section.) On OS X, Homebrew provides a simple way to install other software from the command line and is widely used. Even though the AWS CLI documentation recommends installation via pip (the Python package manager), I chose to install the AWS CLI via Homebrew as it is more common. AWS CLI on Homebrew might lag a version behind pip, though. Open the Terminal application and install Homebrew by running:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

The installation script will guide you through the necessary steps to get Homebrew set up. Once finished, install the AWS CLI using:

brew install awscli
aws --version

If the last command successfully shows you the version of the AWS CLI, you can continue on to the section about configuring the AWS CLI.
Install AWS CLI via pip (Linux)

On Debian or Ubuntu Linux, run:

sudo apt-get install python-pip
sudo pip install awscli
aws --version

On Fedora, run:

sudo yum install python-pip
sudo pip install awscli
aws --version

If the last command successfully shows you the version of the AWS CLI, you can continue on with the next section.

Configure AWS CLI and Run Your First Virtual Server

Run aws configure and paste in the credentials you received earlier:

$ aws configure
AWS Access Key ID [None]: AKIAJXIXMECPZBXKMK7A
AWS Secret Access Key [None]: XUEZaXQ32K+awu3W+I/qPyf6+PIbFFORNM4/3Wdd
Default region name [None]: us-west-1
Default output format [None]: json

Here you paste the credentials you copied or downloaded above for "AWS Access Key ID" and "AWS Secret Access Key". (Don't bother trying the values given in the example, as I've already changed the keys.) We'll use the region "us-west-1" here. If you want to use another one, you will have to find an equivalent AMI (HD image, "Ubuntu Server 14.04 LTS (HVM), SSD Volume Type", ID ami-df6a8b9b in region "us-west-1") with a different ID for your region.

The output formats available are "json", "table", and "text", and the format can be changed for each individual AWS CLI command by appending the --output <format> option. "json" is the default and produces pretty-printed (though not key-sorted) JSON output. "table" produces a human-readable presentation. "text" is a tab-delimited format that is easy to parse in shell scripts.

Help on the AWS CLI

The AWS CLI is well documented at http://aws.amazon.com/documentation/cli/, and man pages for all commands are available by appending help to the end of the command line:

aws help
aws ec2 help
aws ec2 run-instances help

Amazon EC2

Amazon EC2 is the central piece of AWS. The EC2 website says: "Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud." In other words, EC2 provides the capability to run virtual machines connected to the Internet. That's what will make our Nginx run, so let's make use of it. To do that, we need to enable SSH networking and generate an SSH key for logging in.

Setting Up the Security Group (Firewall)

Security groups are virtual firewalls. To make a virtual machine accessible, it is associated with one (or more) security groups. A security group defines which ports are open and to what IP ranges. Without further ado:

aws ec2 create-security-group --group-name tutorial-sg --description "Tutorial security group"
aws ec2 authorize-security-group-ingress --group-name tutorial-sg --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name tutorial-sg --protocol tcp --port 80 --cidr 0.0.0.0/0

This creates a security group called "tutorial-sg" with ports 22 and 80 open to the world. To confirm that you have set it up correctly, you can then run:

aws ec2 describe-security-groups --group-name tutorial-sg --query 'SecurityGroups[0].{name:GroupName,description:Description,ports:IpPermissions[*].{from:FromPort,to:ToPort,cidr:IpRanges[0].CidrIp,protocol:IpProtocol}}'

The --query option is a great way to filter through AWS CLI JSON output. You can safely remove the --query option from the aws ec2 describe-security-groups command to see the JSON output in full. The AWS CLI documentation has more information about AWS CLI output manipulation and the --query option.
Generate an SSH Key

To actually log in via SSH, we need an SSH key:

aws ec2 create-key-pair --key-name tutorial-key --query 'KeyMaterial' --output text > tutorial-key.pem
chmod 0400 tutorial-key.pem

Run Your First Instance (Virtual Machine) on EC2

Finally, we can run our first instance on AWS! Remember that the image ID "ami-df6a8b9b" is specific to the region "us-west-1". In case you wonder about the size of the disk: this command will also create a new 8 GB disk volume, based on the size of the specified disk image:

instance=$(aws ec2 run-instances --image-id ami-df6a8b9b --count 1 --instance-type t2.micro --security-groups tutorial-sg --key-name tutorial-key --query 'Instances[0].InstanceId' --output text) ; echo Instance: $instance

This shows you the IP address and the state of the new instance in a nice table:

aws ec2 describe-instances --instance-ids $instance --query 'Reservations[*].Instances[*].[InstanceId,PublicIpAddress,State.Name]' --output table

Install Nginx and Open Your Shiny New Website

And now we can log in to the new instance to install Nginx:

ipaddr=$(aws ec2 describe-instances --instance-ids $instance --query 'Reservations[0].Instances[0].PublicIpAddress' --output text) ; echo IP: $ipaddr
ssh -i tutorial-key.pem ubuntu@$ipaddr sudo apt-get update && ssh -i tutorial-key.pem ubuntu@$ipaddr sudo apt-get install -y nginx

If you now open the website at $ipaddr in your browser (OS X: open http://$ipaddr, Ubuntu: xdg-open http://$ipaddr), you should be greeted with the "Welcome to nginx!" message. Success! :)

Cleaning Up

In case you want to stop your instance again:

aws ec2 stop-instances --instance-ids $instance

To start the instance again, substitute start for stop (caution: the IP address will probably change), and to completely remove the instance (including the volume), substitute terminate for stop.

Resources

Amazon Web Services: http://aws.amazon.com/
AWS CLI documentation: http://aws.amazon.com/documentation/cli/
AWS CLI EC2 reference: http://docs.aws.amazon.com/cli/latest/reference/ec2/index.html
Official AWS Tutorials: http://docs.aws.amazon.com/gettingstarted/latest/awsgsg-intro/gsg-aws-tutorials.html
AWS Command Line Interface: http://aws.amazon.com/cli/
Homebrew: http://brew.sh/

About the author

Felix Rabe is a developer who develops in Go and deploys in Docker. He can be found on GitHub and Twitter as @felixrabe.


5 2D Game Engines you might not have considered

Ed Bowkett
30 Oct 2014
5 min read
In this blog we will cover five game engines you can use to create 2D games. 2D games are appealing for a wide range of reasons: they're great for the indie game scene, they're great for learning the fundamentals of game development, and they're a great place to start coding while having fun doing it. I've thrown in some odd ones that you might not have considered before and remember, this isn't a definitive list—just my thoughts!

Construct 2

[Image: Construct 2 start page]

Construct 2 is fantastically simple to get into, thanks to its primary focus on people with no programming experience and its drag-and-drop editor that allows the quick creation of games. It is an HTML5-based game editor and is fast and easy for novices to learn, with the ability to create certain kinds of games, such as platformers and shooters, very quickly. Added to this, the behavior system is very simple and rapid to use, so Construct 2 is an easy choice for newcomers getting to grips with their first game. Furthermore, with the announcement that Construct 2 is now supported on the Wii U, along with a booming indie game market, Construct 2 has become an appealing engine to use, particularly for the non-programmer user base. One potential downside for aspiring game developers, however, is the cost of Construct 2, coming in at a pricey £79.99.

Pygame

[Image: A basic game in Pygame]

I mention this game library simply because Python is just so goddamned awesome and, with the number of people using Python currently, it deserves a mention. With an increasing number of games being created using Pygame, it's possibly an area of interest to those that haven't considered it in the past. It's really easy to get started, it's cross-platform and, have I said it's made in Python? As far as negatives go, Python isn't the greatest performer when it comes to large games, but this shouldn't affect 2D development. It's also free and open source. You just need to learn Python…

GameMaker

[Image: 'Hotline Miami' screenshot made in GameMaker]

Similar to Construct 2, GameMaker uses a simple drag-and-drop system. It primarily uses 2D graphics but also allows limited use of 3D graphics. It has its own scripting language, Game Maker Language (GML for short), and is compatible with Windows, Mac, Ubuntu, Android, iOS, Windows Phone, and Tizen. A wealth of titles has been created with GameMaker, including the bestsellers Hotline Miami and Spelunky. GameMaker makes creating animations and sounds very simple, letting you produce them within minutes and focus on other things, such as possibly making your game multiplayer. Also, as mentioned above, the ease of exporting to multiple platforms is a great plus for developers wanting to expand their audience quickly. Again, a potential downside is the cost of GameMaker, which comes in at $49.99.

Cocos2D/Cocos2D-X

[Image: Cocos2D-X game]

Cocos2d is an open source framework, with Cocos2d-x being a branch within it. It allows developers to exploit their existing C++, Lua, and JavaScript knowledge, and its ability to deploy cross-platform on Android, Windows Phone, Mac, and Windows saves significant cost and time for many developers. Many of the top-grossing mobile games, and titles from many leading game studios, are made with Cocos2d-x. With its breadth of languages, usability, and functions, Cocos2d-x really is a no-brainer. A potential downside for users is that it is entirely library-based, so everything is done in code: in order to set up scenes, everything has to be done by the developer.
However, it is free, which is always a bonus.

Unity

[Image: A basic 2D game using Unity]

A curveball to end this list. Everyone has heard of Unity and the great 3D games you can create with it. However, with version 4.3, Unity introduced a huge opportunity to develop 2D games on its engine. Having used it, I find the ability to quickly set up a 2D environment and the ease of creating a basic game with professional tools very appealing. Coupled with the performance of Unity, this ensures that your games will be polished, and developers will be able to learn a bit of C# should they wish. It comes with a cost, either $1,500 up front or $75 a month (that's if you want to go professional; it's free to dabble with), which for some is a stretch and is easily the most expensive on this list.

This list provides a personal view on what I consider great 2D frameworks to get to grips with. It is not a definitive list; there are others out there that also do the job that these do. However, I hope I have provided some balance and produced some alternatives that people might not have considered before.

Read 'Tappy Defender - Building the home screen' to start developing your own game in Android Studio.


How to build a great game development team

Raka Mahesa
22 Oct 2017
5 min read
What is the secret behind a great video game? Different people will give different answers to this question. Some say that you need an unwavering vision of a game design and the will to apply it to the game; others say that you need to research the market and see what games people like to buy. However, there is one answer that everyone would agree with, and that is: to make a great game, you need to have a great team.

Before we go further talking about building a great development team, let's clarify one thing first. Forming a team can happen in two ways. The first starts with zero team members and recruits new people into the team; the other draws people from an already-formed team to make a new, additional team. Fortunately, the process of building a great team is the same for both methods. All right, with that out of the way, let's proceed.

The very first aspect to tackle when you want to build a game development team is scale. How big is the game you want to make and, in turn, how big is the team you need to make that game? After all, making a big game with a small team is very hard, and making a small game with a big team is a waste of resources. No matter the size of the game you want to make, though, it's always better to start off with a small, core team. Keep in mind that the difference in scale will affect the type of person you want on this initial core team. In a small project, it'd be fine to have people that usually work alone on the core team. But in a larger project, you want people with leadership qualities as the core team members, because they're expected to manage their own subordinates when the team grows.

Roles are key

Now that we have determined the scale of our project, let's focus on the next important aspect: roles. When we're building a game development team, it's pretty crucial to determine what roles are needed and how many people are required to fill those roles. There are a couple of things that affect team roles. One of them is the type of game you're going to make. If you're making a story-rich game with a lot of dialogue, then having a writer (or even multiple writers) on the team is pretty important. On the other hand, if you're making a racing game, which usually doesn't have a big narrative aspect, then having a programmer who can simulate car physics is much more important than having a writer.

And again, scale matters. In a big development team, you're going to want specialized roles, like network programmer, engine programmer, gameplay programmer, and so on. Meanwhile, in a smaller development team, you're better off with general roles and people who can work in various fields. And if the team is small enough, it's not strange for people to have multiple roles, like a programmer who is also a game designer.

Explore outsourcing opportunities

Another aspect that you should think about is outsourcing. How much of your game development should be done in-house? Which parts of the game can be outsourced to another party and still turn out okay? It's quite common to contract third parties to produce music and artwork for a game, while programming generally tends to be fully done in-house. Identifying this aspect is quite important to determine the actual team you're going to need to build your game.

Okay, after determining the roles needed on our team, here comes the next important question: what kind of people do we want for those roles?
Sure, you're going to want the usual traits, like hardworking, honest, passionate, and all the other CV-building words that all employers seek. But are there any specific traits that you want in a member of a game development team?

Collaboration and ownership

Game development is a massive collaboration between multiple disciplines, and in this kind of collaboration, communication is key to making a project successful. So that's the one trait that you want from the people on your team: being able to collaborate with others. For example, it's pretty great to have artists who are able to communicate the visuals they want in terms that programmers can understand, and vice versa.

Having great people isn't enough to build a great team, however. To spur people to do great work, they need to have a sense of ownership over the project they're working on. There are many ways to foster this sense of ownership among team members, but in essence, the people on the team need to feel that they're valued and that they have some sort of control over their work. Instead of being told how something is supposed to be done, it's better to have a discussion about how something should be done.

There are still many aspects of a game development team that we haven't covered, but hopefully all of the things we have discussed will help you in making a great team. Good luck!

About the Author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

Benefits of using Kotlin over Java for Android

Hari Vignesh Jayapalan
06 Mar 2017
6 min read
Kotlin is a statically typed programming language for the JVM, Android, and the browser. Kotlin is a new programming language from JetBrains, the maker of the world's best IDEs.

Why Kotlin?

Before we jump into the benefits of Kotlin, we need to understand how Kotlin originated and evolved. We already have many programming languages; how has Kotlin emerged to capture programmers' hearts? A 2013 study showed that language features matter little compared with ecosystem issues when developers evaluate programming languages.

Kotlin compiles to JVM bytecode or JavaScript. It is not a language you will write a kernel in. It is of the greatest interest to people who work with Java today, although it could appeal to all programmers who use a garbage-collected runtime, including people who currently use Scala, Go, Python, Ruby, and JavaScript.

Kotlin comes from industry, not academia. It solves problems faced by working programmers and developers today. As an example, the type system helps you avoid null pointer exceptions. Research languages tend to just not have null at all, but this is of no use to people working with large codebases and APIs that do.

Kotlin costs nothing to adopt! It's open source, but that's not the point. The point is that there's a high-quality, one-click Java-to-Kotlin converter tool (available in Android Studio) and a strong focus on Java binary compatibility. You can convert an existing Java project one file at a time and everything will still compile, even for complex programs that run to millions of lines of code. Kotlin programs can use all existing Java frameworks and libraries, even advanced frameworks that rely on annotation processing. The interop is seamless and does not require wrappers or adapter layers. It integrates with Maven, Gradle, and other build systems.

Kotlin is approachable, and it can be learned in a few hours by simply reading the language reference. The syntax is clean and intuitive. Kotlin looks a lot like Scala, but it's simpler. The language balances terseness and readability well. It also enforces no particular philosophy of programming, such as an overly functional or OOP style. Combined with the appearance of frameworks like Anko and Kovenant, this resource lightness means Kotlin has become popular among Android developers. You can read a report written by a developer at Square on their experience with Kotlin and Android.

Kotlin features

Let's summarize why it's the right time to jump from Java to Kotlin:

Concise: Drastically reduces the amount of boilerplate code you need to write.
Safe: Avoid entire classes of errors, such as null pointer exceptions.
Versatile: Build server-side applications, Android apps, or frontend code running in the browser.
Interoperable: Leverage existing frameworks and libraries of the JVM with 100% Java interoperability.

Brief discussion

Let's discuss a few important features in detail.

Functional programming support

Functional programming is not easy, at least in the beginning, until it becomes fun. There are zero-overhead lambdas and the ability to do mapping, folding, etc. over standard Java collections. The Kotlin type system also distinguishes between mutable and immutable views over collections.

Function purity

The concept of a pure function (a function that does not have side effects) is the most important functional concept; it allows us to greatly reduce code complexity and get rid of most mutable state.
Higher-order functions

Higher-order functions take functions as parameters, return functions, or both. Higher-order functions are everywhere; you just pass functions to collections to make the code easy to read. titles.map { it.toUpperCase() } reads like plain English. Isn't it beautiful?

Immutability

Immutability makes it easier to write, use, and reason about code (a class invariant is established once and then unchanged). The internal state of your app components will be more consistent. Kotlin enforces immutability by introducing the val keyword, as well as Kotlin collections, which are immutable by default. Once a val or a collection is initialized, you can be sure about its validity.

Null safety

Kotlin's type system is aimed at eliminating the danger of null references from code, also known as The Billion Dollar Mistake. One of the most common pitfalls in many programming languages, including Java, is accessing a member of a null reference, resulting in a null reference exception; in Java, this is the NullPointerException, or NPE for short. In Kotlin, the type system distinguishes between references that can hold null (nullable references) and those that can't (non-null references). For example, a regular variable of type String can't hold null:

var a: String = "abc"
a = null // compilation error

To allow nulls, you can declare a variable as a nullable string, written String?:

var b: String? = "abc"
b = null // ok

Anko DSL for Android

Anko DSL for Android is a great library that significantly simplifies working with views, threads, and the Android lifecycle. The GitHub description states that Anko offers "Pleasant Android application development", and it has truly proven to be so.

Removing the ButterKnife dependency

In Kotlin, you can just reference a view property by its @id XML parameter; these properties have the same names as those declared in your XML file. More info can be found in the official docs.

Smart casting

// Java
if (node instanceof Tree) {
  return ((Tree) node).symbol;
}

// Kotlin
if (node is Tree) {
  return node.symbol // smart cast, no explicit cast needed
}

if (document is Payable && document.pay()) { // smart cast
  println("Payable document ${document.title} was paid for.")
}

Kotlin uses short-circuit evaluation just like Java, so if the document were not a Payable, the second part would not be evaluated in the first place. Hence, if it is evaluated, Kotlin knows that the document is a Payable and applies a smart cast.

Try it now!

Like many modern languages, Kotlin has a way to try it out via your web browser. Unlike those other languages, Kotlin's tryout site is practically a full-blown IDE that features fast autocompletion, real-time background compilation, and even online static analysis! TRY IT NOW

About the author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and a wannabe entrepreneur.


8 ways Artificial Intelligence can improve DevOps

Prasad Ramesh
01 Sep 2018
6 min read
DevOps combines development and operations in an agile manner. ITOps refers to network infrastructure, computer operations, and device management. AIOps is artificial intelligence applied to ITOps, a term coined by Gartner. It makes us wonder what AI applied to DevOps would look like. Currently, there are some problem areas in DevOps that mainly revolve around data: accessing the large pool of data, taking actions on it, managing alerts, and so on. Moreover, there are errors caused by human intervention. AI works heavily with data and can help improve DevOps in numerous ways. Before we get into how AI can improve DevOps, let's take a look at some of the problem areas in DevOps today.

The trouble with DevOps

Human errors: When testing or deployment is performed manually and there is an error, it is hard to reproduce and fix. Also, software development is often outsourced, and in such cases there is a lack of coordination between the dev and ops teams.

Environment inconsistency: Software functionality breaks when the code moves to a different environment, because each environment has a different configuration. Teams can waste a lot of time chasing bugs when the software works fine in one environment but not in another.

Change management: Many companies have change management processes well in place, but they are outdated for DevOps. The time taken for reviews, passing a new module, and so on is manual and proves to be a bottleneck. Changes happen frequently in DevOps, and functioning suffers under old processes.

Monitoring: Monitoring is key to ensuring smooth functioning in agile. Many companies do not have the expertise to monitor the pipeline and infrastructure. Moreover, monitoring only the infrastructure is not enough: application performance also needs to be monitored, solutions need to be logged, and analytics need to be tracked.

Now let's take a look at eight ways AI can improve DevOps, given the above context.

1. Better data access

One of the most critical issues faced by DevOps teams is the lack of unregulated access to data. There is also a large amount of data, and teams rarely view all of it, focusing instead on the outliers. The outliers work only as an indicator and do not give robust information. Artificial intelligence can compile and organize data from multiple sources for repeated use. Organized data is much easier to access and understand than heaps of raw data. This will help in predictive analysis and, eventually, a better decision-making process. This is very important and enables many of the other improvements listed below.

2. Superior implementation efficiency

Artificially intelligent systems can work with minimal or no human intervention. Currently, DevOps teams follow a rules-based environment managed by humans. AI can transform this into self-governed systems and greatly improve operational efficiency. There are limits to the volume and complexity of analysis a human can perform; given the large volumes of data to be analyzed and processed, AI systems, which are good at exactly this, can set optimal rules to maximize operational efficiency.

3. Root cause analysis

Conducting root cause analysis is very important to fix an issue permanently. Not getting to the root cause allows the cause to persist and affect other areas further down the line. Often, engineers don't investigate failures in depth and are more focused on getting the release out. This is not surprising, given the limited amount of time they have to work with. If fixing a superficial area gets things working, the root cause is never found. AI can take all data into account and see patterns between activity and cause to find the root cause of a failure.

4. Automation

Complete automation is a problem in DevOps; many tasks are routine yet still done by humans. An AI model can automate these repeatable tasks and speed up the process significantly. A well-trained model increases the scope and complexity of the tasks that can be automated by machines. AI can help achieve the least possible human intervention, so that developers can focus on more complex, interactive problems. Complete automation also allows errors to be reproduced and fixed promptly.

5. Reduced operational complexity

AI can be used to simplify operations by providing a unified view. An engineer can view all the alerts and relevant data produced by the tools in a single place. This improves on the current scenario, where engineers have to switch between different tools to manually analyze and correlate data. Alert prioritization, root cause analysis, and evaluating unusual behavior are complex, time-consuming tasks that depend on data; an organized, singular view can greatly help when looking up that data.

"AI and machine learning makes it possible to get a high-level view of the tool-chain, but at the same time zoom in when it is required." - SignifAI

6. Predicting failures

A critical failure in a particular tool or area in DevOps can cripple the process and delay cycles. With enough data, machine learning models can predict when an error will occur. This goes beyond simple predictions: if an occurred fault is known to produce certain readings, AI can read patterns and predict the signs of failure. AI can see indicators that humans may not be able to. Early failure prediction and notification enable the team to fix issues before they can affect the software development life cycle (SDLC).

7. Optimizing a specific metric

AI can work towards solutions where uptime is maximized. An adaptive machine learning system can learn how the system works and improve it. Improving could mean tweaking a specific metric in the workflow for optimized performance. Configurations can be changed by AI for optimal performance as required during different production phases. Real-time analysis plays a big part in this.

8. Managing alerts

DevOps systems can be flooded with alerts, which are hard for humans to read and act upon. AI can analyze these alerts in real time and categorize them. Assigning priority to alerts helps teams work on fixing them rather than wading through a long list. Alerts can simply be tagged with a common ID for specific areas, or AI can be trained to classify good and bad alerts, as the toy sketch below illustrates. Prioritizing alerts so that flaws are surfaced first helps keep things running smoothly.
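To make the alert-classification idea concrete, here is a toy sketch in JavaScript: a tiny naive Bayes text classifier trained on a handful of invented alert strings. It illustrates the principle only and is nothing like a production AIOps system; all labels and alert texts are made up.

// Word counts per label, document totals, and the shared vocabulary.
const counts = { actionable: {}, noise: {} };
const totals = { actionable: 0, noise: 0 };
const vocabulary = new Set();

function tokenize(text) {
  return text.toLowerCase().split(/\W+/).filter(Boolean);
}

function train(text, label) {
  totals[label] += 1;
  for (const word of tokenize(text)) {
    vocabulary.add(word);
    counts[label][word] = (counts[label][word] || 0) + 1;
  }
}

function classify(text) {
  let best = null;
  let bestScore = -Infinity;
  const docCount = totals.actionable + totals.noise;
  for (const label of Object.keys(totals)) {
    const wordCount = Object.values(counts[label]).reduce((a, b) => a + b, 0);
    // log prior + log likelihoods with add-one smoothing
    let score = Math.log(totals[label] / docCount);
    for (const word of tokenize(text)) {
      const freq = (counts[label][word] || 0) + 1;
      score += Math.log(freq / (wordCount + vocabulary.size));
    }
    if (score > bestScore) { bestScore = score; best = label; }
  }
  return best;
}

train('disk usage above 95 percent on db-01', 'actionable');
train('deployment failed: health check timeout', 'actionable');
train('cpu spiked for five seconds then recovered', 'noise');
train('transient dns lookup retry succeeded', 'noise');

console.log(classify('health check timeout on api gateway')); // 'actionable'

Real systems use far richer features and models, but the workflow is the same: label historical alerts, train on them, and let the classifier triage the incoming stream.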
Conclusion

As we saw, most of these areas depend heavily on data, so getting the system right to enhance data accessibility is the first step to take. Predictions work better when data is organized, and performing root cause analysis becomes easier too. Automation can take over mundane tasks and allow engineers to focus on the interactive problems that machines cannot handle. With machine learning, the overall operational efficiency, simplicity, and speed of DevOps teams can be improved.

Why Agile, DevOps and Continuous Integration are here to stay: Interview with Nikhil Pathania, DevOps practitioner
Top 7 DevOps tools in 2018
GitLab's new DevOps solution


React Native Performance

Pierre Monge
21 Feb 2017
7 min read
Since React Native[1] came out, the core group of developers, as well as the community, has kept on improving the framework, including the performance and stability of the technology. In this article, we talk about React Native's performance. This blog post is aimed at those who want to learn more about React Native, but it might be a bit too complex for beginners.

How does it work?

In order to understand how to optimize our application, we have to understand how React Native works. But don't worry; it's not too hard. React Native is based on two environments: a JS (JavaScript) environment and a Native environment. These two entities communicate through a bridge:

JavaScript -> Bridge -> Native

The JS side is our "organizer." This is where we run our algorithms, moderate our views, make network calls, and so on. The Native side is there for the display and the physical link part. It senses physical events, as well as some virtual ones if we ask it to, and then sends them to the JS part. The bridge exists as the link between the two, as shown in the following code:

render(): ReactElement<*> {
  return (
    <TextInput
      value={''}
      onChangeText={() => { /* Here we handle the native event */ }}
    />
  );
}

Here, we simply have a TextInput. This component involves all the branches of the React Native stack: the TextInput is called in JS but is displayed on the device in Native. Every character typed into the Native component triggers a physical event, which is transformed into a letter or an action and then transmitted over the bridge to the JS side. In every transaction of data between JS and Native, the bridge intervenes so that the data is available on both sides; for this, the bridge has to serialize the data. The bridge is simple. It's a bit stupid, and it has only one job, but... it is the one that will bother us the most.

The Bridge and other losses of performance

Imagine that you are in an airport. You got your ticket online in five minutes; you are already on the plane, and the flight will take the time that it's supposed to. Before that, however, there's the regulation of the flight, the checking in: it takes horribly long to find the right flight, drop your luggage at the right place, go through security, and so on. Well, this is our bridge. JS is fast, even though it is the main thread. Native is also fast. But the bridge is slow. Actually, it's more that it can have so much data to serialize that the serialization takes too much time. Or, put differently: it is slow simply because you made it go slow!

The bridge is optimized to batch the data[2]. Therefore, we can't send it data too fast, and if we really have to, we must minimize the amount as much as possible. Let's take an animation as an example: we want to make a square go from the left to the right in 10 seconds.

The pure JS version:

/* at the top of the class */
let i = 0;
loadTimer = () => {
  if (i < 100) {
    i += 1;
    setTimeout(loadTimer, 100);
  }
};
...
componentDidMount(): void {
  loadTimer();
}
...
render(): ReactElement<*> {
  let animationStyle = {
    transform: [
      {
        translateX: i * Dimensions.get('window').width / 100,
      },
    ],
  };
  return (
    <View
      style={[animationStyle, { height: 50, width: 50, backgroundColor: 'red' }]}
    />
  );
}
render(): ReactElement<*> {     let animationStyle = {         transform: [             {                 translateX: i * Dimensions.get('window').width / 100,             },         ],     };     return ( <View             style={[animationStyle, { height: 50, width: 50, backgroundColor: 'red' }] }         />     ); } Here is an implementation in pure JS of a pseudo animation. This version, where we make raw data go through the bridge, is dirty. It's dirty code and very slow, TO BAN! Animated Version:       ... componentDidMount(): void {     this.state.animatedValue.setValue(0);     Animated.spring(this.state.animatedValue, {         toValue: 1,     }); } ... render(): ReactElement<*> {     let animationStyle = {         transform: [             {                 translateX: this.state.animatedValue.interpolate({                     inputRange: [0, 1],                     outputRange: [0, Dimensions.get('window').width],                 }),             },         ],     };     return ( <Animated.View             style={[animationStyle, { height: 50, width: 50, backgroundColor: 'red' }] }         />     ); } It's already much more understandable. The Animated library has been created to improve the performance of the animations, and its objective is to lighten the use of the bridge by sending predictions of the data to the native before starting the animation. The animation will be much softer and successful with the rightful library. The general perfomance of the app will automatically be improved. However, the animation is not the only one at fault here. You have to take time to verify that you don't have too much unnecessary data going through the bridge. Other Factors Thankfully, the Bridge isn't the only one at fault, and there are many other ways to optimize a React Native application.Therefore, here is an exhaustive list of why and/or how you can optimize your application: Do not neglect your business logic; even if JS and native are supposed to be fast; you have to optimize them. Ban the while or synchronize functions, which takes time in your application. Blocking the JS is the same as blocking the application. The rendering of a view is costly, and it is done most of the time without anything changing! It's why you MUST use the 'shouldComponentUpdate' method in your components. If you do not manage to optimize a JavaScript component, then it means that it would be good to transduce it in Native. The transactions with the bridge should be minimized. There are many states in a React Native application. A debug stage, which is a release state. The release state increases the performance of the app greatly with the flags of compilation while taking out the dev mode. On the other hand, it doesn't solve everything. The 'debugger mode' will slow down your application because the JS will turn on your browser and won't do it on your phone. Tools The React Native "tooling" is not yet very developed, but a great part of the toolset is that itis coming from the application. A hundred percentof the functionality is native. Here is a short list of some of the important tools that should help you out: Tools Platform Description react-addons-perf Both This is a tool that allows simple benchmarks to render components; it also gives the wasted time (the time loss to give the components), which didn't change. Systrace Android This is hard to use but useful to detect big bottlenecks. 
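Since shouldComponentUpdate is singled out above, here is a minimal sketch of the idea. The component and prop names are invented for the example:

import React from 'react';
import { Text } from 'react-native';

// A hypothetical list row: only re-render when its own label changes,
// instead of on every parent re-render.
class Row extends React.Component {
  shouldComponentUpdate(nextProps) {
    return nextProps.label !== this.props.label;
  }

  render() {
    return <Text>{this.props.label}</Text>;
  }
}

For simple cases like this one, extending React.PureComponent gives you equivalent shallow-compare behavior for free.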
Tools

The React Native "tooling" is not yet very developed, but a great part of the toolset comes from the native platforms themselves; that functionality is one hundred percent native. Here is a short list of some of the important tools that should help you out:

- react-addons-perf (both platforms): A tool that allows simple benchmarks of component rendering; it also reports wasted time (the time spent re-rendering components that did not change).
- Systrace (Android): Hard to use, but useful for detecting big bottlenecks.
- Xcode (iOS): This function of Xcode allows you to understand how our application is rendered (great to use if you have unnecessary views).
- rn-snoopy (both platforms): Snoopy is a piece of software that allows you to spy on the bridge. Its main use is debugging, but it can also be used for optimization.

You now have some more tricks and tools to optimize your React Native application. However, there is no hidden recipe or magic potion... it will take some time and research. The performance of a React Native application is very important. The joy of creating a mobile application in JavaScript must at least be equal to the experience of the user testing it.

About the Author

Pierre Monge is an IT student from Bordeaux, France. His interests include C, JS, Node, React, React Native, and more. He can be found on GitHub @azendoo.

[1] React Native allows you to build mobile apps using only JavaScript. It uses the same design as React, allowing you to compose a rich mobile UI from declarative components.

[2] Batch processing

5 reasons why your next app should be a PWA (progressive web app)

Sugandha Lahoti
24 Apr 2018
3 min read
Progressive Web Apps (PWAs) are a progression of web apps towards native apps: you get the experience of a native app from a mobile browser. They take up less storage space (the pitfall of native apps) and reduce loading time (the main reason people leave a website). In the past few years, PWAs have taken everything that is great about a native app (the functionality, touch gestures, user experience, push notifications) and transferred it into a web browser. Antonio Cucciniello's article on 'What is PWA?' highlights some of their characteristics, with details on how they affect you as a developer.

Many companies have already started with their PWAs and are reaping the benefits of this reliable, fast, and engaging approach. Twitter, for instance, introduced Twitter Lite and achieved a 65% increase in pages per session, a 75% increase in Tweets sent, and a 20% decrease in bounce rate. Lancôme rebuilt their mobile website as a PWA and increased conversions by 17%. Most recently, George.com, a leading UK clothing brand, saw a 31% increase in mobile conversion after upgrading their site to a Progressive Web App. Still not convinced? Here are 5 reasons why your next app should be a Progressive Web App.

PWAs are device-agnostic

What this means is that PWAs can work with various systems without requiring any special adaptations. As PWAs are hosted online, they are hyper-connected and work across all devices. Developers no longer need to build multiple apps across multiple mobile platforms, meaning huge savings in app development time and effort.

PWAs have a seamless UX

The user experience provided by a PWA across different devices is the same. You could use a PWA on your phone, switch to a tablet, and notice no difference. They offer a complete full-screen experience with help from a web app manifest file. Smooth animations, scrolling, and navigation keep the user experience fluid. Additionally, web push notifications show relevant and timely notifications even when the browser is closed, making it easy to re-engage users.

PWAs are frictionless

Native apps take a long time to download. With a PWA, users won't have to wait to download and install the app. PWAs use Service Workers, which run JavaScript off the main thread, to allow apps to load near-instantly and reliably no matter the kind of network connection (a minimal sketch follows at the end of this article). Moreover, PWAs have improved cross-app functionality, so switching between apps and sharing information between them becomes less intrusive. The user experience is faster and more intuitive.

PWAs are secure

PWAs are served over HTTPS. HTTPS secures the connection between a PWA and its user base by preventing intruders from actively tampering with the communications between your PWA and your users' browsers. It also prevents intruders from passively listening to those communications.

PWAs have better SEO

According to a report, over 60 percent of searches now occur on mobile devices. This makes PWAs very powerful from an SEO perspective, as a simple Google search might pull up your website and then launch the visitor into your PWA. PWAs can be indexed easily by search engines, making them more likely to appear in search results.

The PWA fever is on, and as long as PWAs increase app functionality and offer more offline capabilities, the list of reasons to consider them will only grow. Perhaps even more than a native app.
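To make the Service Worker idea concrete, here is a minimal, hypothetical sketch of offline caching. The file names, cache name, and asset list are invented for the example:

// main.js - register the service worker if the browser supports it.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// sw.js - pre-cache the app shell and serve it cache-first,
// so the app loads near-instantly even on a flaky connection.
const CACHE = 'app-shell-v1';

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(['/', '/app.css', '/app.js']))
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});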
Understanding Sentiment Analysis and other key NLP concepts

Sunith Shetty
20 Dec 2017
12 min read
[box type="note" align="" class="" width=""]This article is an excerpt taken from a book Big Data Analytics with Java written by Rajat Mehta. This book will help you learn to perform big data analytics tasks using machine learning concepts such as clustering, recommending products, data segmentation and more. [/box] With this post, you will learn what is sentiment analysis and how it is used to analyze emotions associated within the text. You will also learn key NLP concepts such as Tokenization, stemming among others and how they are used for sentiment analysis. What is sentiment analysis? One of the forms of text analysis is sentimental analysis. As the name suggests this technique is used to figure out the sentiment or emotion associated with the underlying text. So if you have a piece of text and you want to understand what kind of emotion it conveys, for example, anger, love, hate, positive, negative, and so on you can use the technique sentimental analysis. Sentimental analysis is used in various places, for example: To analyze the reviews of a product whether they are positive or negative This can be especially useful to predict how successful your new product is by analyzing user feedback To analyze the reviews of a movie to check if it's a hit or a flop Detecting the use of bad language (such as heated language, negative remarks, and so on) in forums, emails, and social media To analyze the content of tweets or information on other social media to check if a political party campaign was successful or not  Thus, sentimental analysis is a useful technique, but before we see the code for our sample sentimental analysis example, let's understand some of the concepts needed to solve this problem. [box type="shadow" align="" class="" width=""]For working on a sentimental analysis problem we will be using some techniques from natural language processing and we will be explaining some of those concepts.[/box] Concepts for sentimental analysis Before we dive into the fully-fledged problem of analyzing the sentiment behind text, we must understand some concepts from the NLP (Natural Language Processing) perspective. We will explain these concepts now. Tokenization From the perspective of machine learning one of the most important tasks is feature extraction and feature selection. When the data is plain text then we need some way to extract the information out of it. We use a technique called tokenization where the text content is pulled and tokens or words are extracted from it. The token can be a single word or a group of words too. There are various ways to extract the tokens, as follows: By using regular expressions: Regular expressions can be applied to textual content to extract words or tokens from it. By using a pre-trained model: Apache Spark ships with a pre-trained model (machine learning model) that is trained to pull tokens from a text. You can apply this model to a piece of text and it will return the predicted results as a set of tokens. To understand a tokenizer using an example, let's see a simple sentence as follows: Sentence: "The movie was awesome with nice songs" Once you extract tokens from it you will get an array of strings as follows: Tokens: ['The', 'movie', 'was', 'awesome', 'with', 'nice', 'songs'] [box type="shadow" align="" class="" width=""]The type of tokens you extract depends on the type of tokens you are interested in. 
Stop words removal

Not all the words present in the text are important. Some are common words of the English language that are important for keeping the grammar correct, but which convey little information or emotion, for example, common words such as is, was, were, and the. To remove these words, there are again some common techniques that you can use from natural language processing, such as:

- Store stop words in a file or dictionary and compare your extracted tokens with the words in this dictionary or file. If they match, simply ignore them.
- Use a pre-trained machine learning model that has been taught to remove stop words. Apache Spark ships with one such model in the Spark feature package.

Let's try to understand stop words removal using an example:

Sentence: "The movie was awesome with nice songs"

From the sentence we can see that the common words with no special meaning to convey are the, was, and with. So after applying the stop words removal program to this data, you will get:

After stop words removal: ['movie', 'awesome', 'nice', 'songs']
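Continuing the plain-JavaScript illustration, the dictionary-comparison approach can be sketched as follows. The stop-word list here is a tiny, made-up sample; real lists are far longer:

// Remove common low-information words from a token list.
const STOP_WORDS = new Set(['the', 'is', 'was', 'were', 'with', 'a', 'and']);

function removeStopWords(tokens) {
  return tokens.filter((t) => !STOP_WORDS.has(t));
}

console.log(removeStopWords(['the', 'movie', 'was', 'awesome', 'with', 'nice', 'songs']));
// -> ['movie', 'awesome', 'nice', 'songs']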
Stemming

Stemming is the process of reducing a word to its base or root form. For example, look at the following set of words: car, cars, car's, cars'. From the perspective of sentiment analysis, we are only interested in the main word they refer to, for the simple reason that the underlying meaning of the word is the same in each case. Whether we pick car's or cars, we are referring to a car. Hence the stem or root word for the previous set of words will be:

car, cars, car's, cars' => car (stem or root word)

For English words, you can again use a pre-trained model and apply it to a set of data to figure out the stem words. Of course, there are more complex and better ways (for example, you can retrain the model with more data), and you will need a different model or technique entirely if you are dealing with languages other than English. Diving into stemming in detail is beyond the scope of this book, and we encourage readers to check out the documentation on natural language processing from Wikipedia and the Stanford NLP website.

Note: To keep the sentiment analysis example in this book simple, we will not be stemming our tokens, but we urge readers to try it to get better predictive results.

N-grams

Sometimes a single word conveys the meaning of a context; other times a group of words conveys it better. For example, 'happy' is a word that in itself conveys happiness, but 'not happy' changes the picture completely: 'not happy' is the exact opposite of 'happy'. If we are extracting only single words, then 'not' and 'happy' would be two separate tokens, and the entire sentence might be classified as positive by the classifier. However, if the classifier picks up bi-grams (that is, two words per token), it would be trained with 'not happy' and would classify similar sentences containing 'not happy' as negative. Therefore, for training our models we can use uni-grams, bi-grams (two words per token), or, as the name suggests, n-grams ('n' words per token). It all depends on which token set trains our model well and improves the accuracy of its predictions. To see examples of n-grams, refer to the following:

Sentence: The movie was awesome with nice songs
- Uni-grams: ['The', 'movie', 'was', 'awesome', 'with', 'nice', 'songs']
- Bi-grams: ['The movie', 'movie was', 'was awesome', 'awesome with', 'with nice', 'nice songs']
- Tri-grams: ['The movie was', 'movie was awesome', 'was awesome with', 'awesome with nice', 'with nice songs']

For the purpose of this case study, we will only look at uni-grams to keep our example simple.
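A sliding-window n-gram generator, continuing the plain-JavaScript sketches (an illustration only):

// Build overlapping n-grams from a token list.
function ngrams(tokens, n) {
  const result = [];
  for (let i = 0; i + n <= tokens.length; i += 1) {
    result.push(tokens.slice(i, i + n).join(' '));
  }
  return result;
}

const tokens = ['not', 'happy', 'with', 'the', 'songs'];
console.log(ngrams(tokens, 2));
// -> ['not happy', 'happy with', 'with the', 'the songs']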
By now we know how to extract words from text and remove the unwanted ones, but how do we measure the importance of words, or the sentiment that originates from them? There are a few popular approaches for this, and we will now discuss two of them.

Term presence and term frequency

Term presence just means that if the term is present, we mark the value as 1, or else 0. Later we build a matrix out of it, where the rows represent the words and the columns represent each sentence. This matrix is later used for text analysis by feeding its content to a classifier.

Term frequency, as the name suggests, just depicts the count or occurrences of the word or token within the document. Let's refer to the following example, where we find term frequency:

Sentence: The movie was awesome with nice songs and nice dialogues.
Tokens (uni-grams only for now): ['The', 'movie', 'was', 'awesome', 'with', 'nice', 'songs', 'and', 'dialogues']
Term frequency: ['The = 1', 'movie = 1', 'was = 1', 'awesome = 1', 'with = 1', 'nice = 2', 'songs = 1', 'dialogues = 1']

As seen above, the word 'nice' is repeated twice, and hence it will get more weight in determining the opinion expressed by the sentence. Plain term frequency is not a precise approach, for the following reasons:

- There could be redundant, irrelevant words, for example the, it, and they, that have a high frequency or count and might skew the training of the model.
- There could be important rare words that convey the sentiment of the document, yet their frequency might be low, so they might not contribute much to the training of the model.

For these reasons, the better approach of TF-IDF is chosen, as shown next.

TF-IDF

TF-IDF stands for Term Frequency-Inverse Document Frequency, and in simple terms it measures the importance of a term to a document. It works using two simple steps, as follows:

- It counts the number of terms in the document; the higher the number of terms, the greater the importance of the term to the document.
- Counting just the frequency of words in a document is not a very precise way to find their importance, for the simple reason that there could be too many stop words with high counts, elevating their importance above that of real, meaningful words. To fix this, TF-IDF also checks for the occurrence of these words in other documents. If a word appears in large numbers in other documents as well, it is probably a grammatical word such as they, for, or is, and TF-IDF decreases its importance or weight.

Let's try to understand TF-IDF with an example. Picture a set of documents, doc-1, doc-2, and so on, from which we extract the tokens or words, and from those words we calculate the TF-IDFs. Words that are stop words or regular words, such as for and is, have low TF-IDFs, while rare words such as 'awesome movie' have higher TF-IDFs.

TF-IDF is the product of term frequency and inverse document frequency. Both are explained here:

- Term frequency: This is nothing but the count of the occurrences of the word in the document. There are other ways of measuring this, but the simplest approach is to just count the occurrences of the token: TF(t, d) = frequency count of the term t in document d.
- Inverse document frequency: This is the measure of how much information the word provides. It scales up the weight of rare words and scales down the weight of frequently occurring words: IDF(t) = log(N / n_t), where N is the total number of documents and n_t is the number of documents containing the term t.
- TF-IDF: A simple multiplication of the term frequency and the inverse document frequency. Hence: TF-IDF(t, d) = TF(t, d) x IDF(t).

This simple technique is very popular, and it is used in a lot of places for text analysis. Next, let's look at another simple approach, called bag of words, that is also used in text analytics.

Bag of words

As the name suggests, the bag of words approach uses a simple idea: we first extract the words or tokens from the text and then push them into a bag (an imaginary set). The main point is that the words are stored in the bag without any particular order. The mere presence of a word in the bag is what matters; the order of occurrence of the word in the sentence, as well as its grammatical context, carries no value. Since the bag of words gives no importance to the order of words, you can take the TF-IDFs of all the words in the bag, put them in a vector, and later train a classifier (naive Bayes or any other model) with it. Once trained, the model can be fed vectors of new data to predict their sentiment.

Summing it up, you should now be well versed in sentiment analysis techniques and the NLP concepts needed to apply them. If you want to implement machine learning algorithms to carry out predictive analytics and real-time streaming analytics, you can refer to the book Big Data Analytics with Java.
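To tie tokenization, term frequency, and inverse document frequency together, here is a compact TF-IDF sketch in plain JavaScript. The corpus is invented for the example; the same idea is available out of the box in Spark's feature package, which the book uses:

// Compute TF-IDF scores for terms in a tiny corpus of token lists.
function tf(term, docTokens) {
  return docTokens.filter((t) => t === term).length;
}

function idf(term, corpus) {
  const containing = corpus.filter((doc) => doc.includes(term)).length;
  return Math.log(corpus.length / containing); // assumes the term occurs somewhere
}

function tfidf(term, docTokens, corpus) {
  return tf(term, docTokens) * idf(term, corpus);
}

const corpus = [
  ['the', 'movie', 'was', 'awesome'],
  ['the', 'songs', 'were', 'nice'],
  ['the', 'movie', 'had', 'nice', 'songs'],
];

// 'the' appears in every document, so its IDF (and TF-IDF) is 0;
// 'awesome' is rare, so it scores higher.
console.log(tfidf('the', corpus[0], corpus));     // 0
console.log(tfidf('awesome', corpus[0], corpus)); // ~1.099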

What is Blockchain?

Lauren Stephanian
28 Apr 2017
6 min read
The difference between Blockchain and Bitcoin

Before we explore Blockchain in depth, it's worth looking at the key differences between Blockchain and Bitcoin, which are very closely associated with each other. Both were conceived by a mysterious figure who goes by the alias Satoshi Nakamoto, and while both ideas are revolutionary, the key distinction is this: Blockchain was created to implement Bitcoin, but it has much broader applications. Bitcoin is ultimately just a cryptocurrency, and is actually very similar to any other currency in its use.

So, then, what is Blockchain?

Blockchain, put simply, is a way to store data or transactions on a growing ledger. The way it works allows us to safely rely on the data shown to us, because it is built on the concept of decentralized consensus. Decentralized consensus is reached by a Blockchain because each block of data within the chain (for example, a certain amount of Bitcoin heading to someone's account) carries a time stamp and a cryptographic hash that links it to its neighboring block (see the sketch below). This chain of blocks continuously grows, and once a transaction is recorded it is immutable: there is no going back and altering the data, only building on top of it. Everyone sees the same unchanged data on a Blockchain, and therefore actions based on this data, such as sending money to someone, can be taken safely; the information shown cannot be disputed, because every party agrees on it.
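A minimal sketch of the hash-linking idea in Node.js. This illustrates the concept only; there is no network, consensus protocol, or mining here, and the block contents are invented for the example:

const crypto = require('crypto');

// Each block stores its data, a timestamp, and the hash of the previous
// block, so altering any earlier block breaks every hash after it.
function makeBlock(data, previousHash) {
  const timestamp = Date.now();
  const hash = crypto
    .createHash('sha256')
    .update(previousHash + timestamp + JSON.stringify(data))
    .digest('hex');
  return { data, timestamp, previousHash, hash };
}

const genesis = makeBlock({ amount: 50, to: 'alice' }, '0');
const next = makeBlock({ amount: 20, to: 'bob' }, genesis.hash);

// Tampering with the genesis block's data would no longer match next.previousHash.
console.log(next.previousHash === genesis.hash); // true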
Other uses for Blockchain

Although Blockchain is most commonly associated with Bitcoin, there are many different uses for this technology. Blockchain will change the way we work with one another, store data, identify ourselves, and even vote, by making these actions easier and secure by design. Below are eight ways Blockchain technology will revolutionize the way we conduct business and the way governments function. All of these areas create opportunities for developers.

Cryptocurrencies

The most common use for Blockchain is, as mentioned above, to create or mine cryptocurrencies. You can set up a store to accept these cryptocurrencies as payment, and you can send money around the world to different people, or mine for money by using a tool called a miner, which collects money every time a new transaction is made. These currencies are theoretically totally safeguarded from theft.

Digital assets

Just as you might do with regular currencies, you can create financial securities using cryptocurrencies. For example, you can use Bitcoin to create derivatives, such as futures contracts, based on what you think the future value of Bitcoin will be. You can also create stocks and bonds, and almost any other security you might want to trade.

Decentralized exchanges

In a similar vein, Blockchain can be used to make financial exchanges safer, faster, and easier to track. A decentralized exchange is an exchange market that does not rely on third parties to hold onto customers' funds, and instead allows customers to trade directly and anonymously with each other. It reduces the risk of hacking because there is no single point of failure, and it is potentially faster because third parties are not involved.

Smart contracts

Smart contracts, or decentralized applications (DApps), are contracts that you can use in any way you might use a regular contract. Because they are based on Blockchain technology, they are decentralized, and therefore third parties and middlemen (such as lawyers) can be effectively removed from the equation. They are cheaper, faster, and more secure than traditional contracts, which is why governments, banks, and real estate companies are beginning to turn to them.

Distributed cloud storage

Blockchain technology will be a huge disruptor in the data storage industry. Currently a third party, such as Dropbox, controls all your cloud data. Should they want to take it, alter it, or remove it, they are legally allowed to do so, and you won't be able to do anything to stop them. However, no single party can control decentralized cloud storage, so your data is more secure and untouchable by anyone you wouldn't want interfering with it.

Verifiable data (identification)

Identity theft is a problem caused by a lack of security and by information being held by a centralized party. Blockchain can decentralize the databases holding verifiable data, like identification, and make them less susceptible to hacking and theft. Once verifiable data is decentralized, it can easily be checked for accuracy by third parties needing to access it. Data breaches, such as what happened to Anthem in early 2015, are becoming more and more common, which indicates that we need our technology to adapt to keep up with the ever-changing landscape of the internet. Blockchain will be the answer to this; it's just a matter of when this change will happen.

Digital voting

Perhaps the most relevant item on this list is digital voting. Many countries currently see reports of voter fraud, and whether they are true or not, such reports cast doubt on the credibility of any administration's leadership. Additionally, Blockchain technology could eventually allow for online voting, which would help correct low voter turnout by making voting easier and more accessible to all citizens. In 2014, a political party in Denmark became the first group to use Blockchain in their voting process. This secured their results and made them more credible. It would be advantageous for more countries to begin using Blockchain to verify election results, and some might even adopt online vote counting for increased participation.

Academic credentialing

At some point in your life, you have probably heard about people lying about their alma mater on their resume, or even editing their transcripts. By having schools and certification programs upload credentials to a decentralized database, Blockchain can make verifying these important details fast and painless for all prospective employers.

What are the key takeaways?

Despite the current lack of implementation in businesses and governments, Blockchain will change our society for the better. It is already established in the bedrock of the tech realm, and will remain until the next great revolutionary technology comes along to change the way we store and share data.

About the Author

Lauren Stephanian is a software developer by training and an analyst for the structured notes trading desk at Bank of America Merrill Lynch. She is passionate about staying on top of the latest technologies and understanding their place in society. When she is not working, programming, or writing, she is playing tennis, traveling, or hanging out with her good friends in Manhattan or Brooklyn. You can follow her on Twitter or Medium at @lstephanian or via her website, http://lstephanian.com.

Why microservices and DevOps are a match made in heaven

Erik Kappelman
12 Oct 2017
4 min read
What are microservices?

In terms of software, 'services' can be thought of as little chunks of functionality. Services are part of service-oriented architecture (SOA). Services are stateless, adhere to a contract (shared standards), are autonomous, are relatively granular, and should be a 'black box' for the user. Microservices are a logical extension of services: microservices are services that perform only one function. This matches the Unix philosophy, "Do one thing, and do it well." But who cares? And what about DevOps? Well, although the title is a cliche, DevOps and microservice-based architecture are absolutely a match made in heaven. Following the philosophy of fully explaining terms, let's talk a bit about DevOps.

What is DevOps?

DevOps, whose name comes from the words "development operations," is a process that is used to create software. DevOps is not one specific philosophy or process; there are many variants, but there are some features shared across most of them. DevOps advocates a continuous development process in which as many elements of the process as possible are automated. Each iteration of a product is coded, built, tested, packaged, and released, and then monitored. This is referred to as the DevOps toolchain. When there is a need or desire to upgrade or change functionality, or the way a product is designed, the process begins again. The idea is that DevOps should run in a circular fashion, always upgrading and always getting better. There are myriad tools in use right now within various flavors of DevOps. These tools are designed to meld with the DevOps toolchain and have revolutionized the development process for many developers and companies.

Why microservices and DevOps go together

You should already see why microservices and DevOps go together so well. DevOps calls for continuous monitoring, testing, and deployment of software. Microservices are inherently modular, because they are intended to perform a single function. Software that is modular easily fits into the DevOps structure. Incremental changes can be made to parts of a project, perhaps a single microservice. If the service contracts and control mechanisms are properly created, a single microservice should be able to be easily upgraded, built, tested, deployed, and monitored without sending a cascading wave of bugs through adjacent services. DevOps really doesn't make much sense outside of a structure like this. If your software is designed as a behemoth (an interconnected, interdependent ball of wax), changing part of the functionality will 'break' everything. This means that as changes or upgrades are made, almost every change, no matter how big or small, will trigger what amounts to an almost full rewrite or upgrade of the software in question. When applied to this kind of project, most DevOps processes would actually hinder the development process instead of helping. When projects are modularized at a relatively granular level, such as when a project employs a microservice-based structure, DevOps expedites delivery time and quality simultaneously. It should be noted that neither a microservice architecture nor DevOps processes are tied to any specific tools or languages. These are philosophies for development and can be applied in many different ways. That being said, there are many continuous integration and deployment tools, as well as many automation tools, that are designed for use within a DevOps framework.
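To ground the "do one thing" idea, here is a minimal, hypothetical microservice in Node.js. It exposes exactly one function (uppercasing text) and nothing else; the port and behavior are invented for the example:

// A minimal single-purpose microservice: it converts text to uppercase
// and does nothing else.
const http = require('http');

http.createServer((req, res) => {
  let body = '';
  req.on('data', (chunk) => { body += chunk; });
  req.on('end', () => {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end(body.toUpperCase());
  });
}).listen(3000);

// Usage: curl -d 'hello' http://localhost:3000  ->  HELLO

A service this small stays trivial to rebuild, test, and redeploy in isolation, which is exactly the property the DevOps toolchain exploits.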
Criticisms of microservices

There are some criticisms of the microservice structure. One criticism is that using microservices does not get rid of the complexities of a traditional program; those complexities are just moved onto the network that the services use to communicate. Network stress is another criticism of the microservice architecture, because a service architecture distributes the elements of a program or process around a network in various places. For these services to perform cohesive functions, they must use the network to communicate. Depending on how many services make up a program or process, this can translate into significant network activity, which then creates problems of its own. Another criticism is that microservices can sometimes become so-called 'nanoservices': services that perform a function so small that the cost of the service outweighs its utility. These criticisms should be kept in mind but, in my opinion, they don't amount to enough to undermine the usual value of microservices in a DevOps environment. Using microservices in the DevOps process helps fully realize the potential of the continuous integration, testing, and deployment promised by DevOps. Combined, these approaches can optimize computing in a manner that should be utilized during development whenever possible.
A UX strategy is worthless without a solid usability test plan

Sugandha Lahoti
15 Jun 2018
48 min read
A UX practitioner's primary goal is to provide every user with the best possible experience of a product. The only way to do this is to connect repeatedly with users and make sure that the product is being designed according to their needs. At the beginning of a project, this research tends to be more exploratory. Towards the end of the project, it tends to be more about testing the product. In this article, we explore in detail one of the most common methods of testing a product with users: the usability test. We will describe the steps to plan and conduct usability tests, which provide insights into how to practically plan, conduct, and analyze any user research. This article is an excerpt from the book UX for the Web by Marli Ritter and Cara Winterbottom. The book teaches you how UX and design thinking can make your site stand out from the rest of the sites on the internet.

Tips to maximize the value of user testing

Testing with users is not only about making their experience better; it is also about getting more people to use your product. People will not use a product that they do not find useful, and they will choose the product that is most enjoyable and usable if they have options. This is especially the case with the web: people leave websites if they can't find or do the things they want. Unlike with other products, they will not take time to work it out. Research by organizations such as the Nielsen Norman Group generally shows that a website has between 5 and 10 seconds to show value to a visitor. User testing is one of the main methods available to us to ensure that we make websites that are useful, enjoyable, and usable. However, to be effective it must be done properly. Jared Spool, a usability expert, identified seven typical mistakes that people make when testing with users, which lessen its value. The following list addresses how not to make those mistakes:

- Know why you're testing: What are the goals of your test? Make sure that you specify the test goals clearly and concretely so that you choose the right method. Are you observing people's behavior (usability test), finding out whether they like your design (focus group or sentiment analysis), or finding out how many do something on your website (web analytics)? Posing specific questions will help to formulate the goals clearly. For example: will the new content reduce calls to the service center? What percentage of users return to the website within a week?
- Design the right tasks: If your testing involves tasks, design scenarios that correspond to tasks users would actually perform. Consider what would motivate someone to spend time on your website, and use this to create tasks. Provide participants with the information they would have to complete the tasks in a real-life situation; no more and no less. For example, do not specify tasks using terms from your website interface; then participants will simply be following instructions when they complete the tasks, rather than using their own mental models to work out what to do.
- Recruit the right users: If you design and conduct a test perfectly, but test on people who are not like your users, then the results will not be valid. If they know too much or too little about the product, subject area, or technology, then they will not behave like your users would and will not experience the same problems. When recruiting participants, ask what qualities define your users, and what qualities make one person experience the website differently from another. Then recruit on these qualities. In addition, recruit the right number of users for your method. Ongoing research by the Nielsen Norman Group and others indicates that usability tests typically require about five people per test, while A/B tests require about 40 people and card sorting about 15 people. These numbers have been calculated to maximize the return on investment of testing. For example, five users in a usability test have been shown by the Nielsen Norman Group (and confirmed repeatedly by other researchers) to find about 85% of the serious problems in an interface. Adding more users improves the percentage marginally but increases the costs significantly. If you use the wrong numbers, your results will not be valid, or the amount of data you need to analyze will be unmanageable for the time and resources you have available.
- Get the team and stakeholders involved: If user testing is seen as an outside activity, most of the team will not pay attention, as it is not part of their job and is easy to ignore. When team members are involved, they gain insights into their own work and its effectiveness. Try to get team members to attend some of the testing if possible. Otherwise, make sure everyone is involved in preparing the goals and tasks (if appropriate) for the test. Share the results in a workshop afterward, so everyone can be involved in reflecting on the results and their implications.
- Facilitate the test well: Facilitating a test well is a difficult task. A good facilitator makes users feel comfortable so they act more naturally. At the same time, the facilitator must control the flow of the test so that everything is accomplished in the available time, without giving participants hints about what to do or say. Make sure that facilitators get a lot of practice and constructive feedback from the team to improve their skills.
- Plan how to share the results: It takes time and skill to create an effective user testing report that communicates the test and results well. Even if you have the time and skill, most team members will probably not read the report. Find other ways to share results with those who need them. For example: create a bug list for developers using project management software or a shared online document; hold a workshop with the team and stakeholders and present the test and results to them; have review sessions immediately after test days.
- Iterate: Most user testing is most effective if performed regularly and iteratively: for testing different aspects or parts of the design; for testing solutions based on previous tests; for finding new problems or ideas introduced by the new solutions; for tracking changes to results based on time, seasonality, or maturity of the product or user base; or for uncovering problems that were previously hidden by larger problems. Many organizations only make provision to test with users once at the end of design, if at all. It is better to split your budget into multiple tests if possible.

As we explore usability testing, each of these guidelines will be addressed more concretely.

Planning and conducting usability tests

Before starting, let's look at what we mean by a usability test, and describe the different types. Usability testing involves watching a representative set of users attempt realistic tasks, and collecting data about what they do and say. Essentially, a usability test is about watching a user interact with a product. This is what makes it a core UX method: it persuades stakeholders of the importance of designing for and testing with their users. Team members who watch participants struggle to use their product are often shocked that they had not noticed the glaringly obvious design problems that are revealed. In later iterations, usability tests should reveal fewer or more minor problems, which provides proof of the success of a design before launch.

Apart from glaring problems, how do we know what makes a design successful? The definition of usability by the International Organization for Standardization (ISO) is:

"Extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use."

This definition shows us the kinds of things that make a successful design. From this definition, usability comprises:

- Effectiveness: How completely and accurately the required tasks can be accomplished.
- Efficiency: How quickly tasks can be performed.
- Satisfaction: How pleasant and enjoyable the task is. This can become delight if a design pleases us in unexpected ways.

There are three additional points that arise from the preceding ones:

- Discoverability: How easy it is to find out how to use the product the first time.
- Learnability: How easy it is to continue to improve at using the product, and to remember how to use it.
- Error proneness: How well the product prevents errors and helps users recover. This equates to the number and severity of errors that users experience while doing tasks.

These six points provide us with guidance on the kinds of tasks we should be designing and the kinds of observations we should be making when planning a usability test. There are three ways of gathering data in a usability test: using metrics to guide quantitative measurement, observing, and asking questions. The most important is observation. Metrics allow comparison, guide observation, and help us design tasks, but they are not as important as why things happen. We discover why by observing interactions and emotional responses during task performance. In addition, we must be very careful when assigning meaning to quantitative metrics because of the small number of users involved in usability tests. Typically, usability tests are conducted with about five participants. This number has been repeatedly shown to be most effective when weighing testing costs against the number of problems uncovered. However, it is too small for statistical significance testing, so any numbers must be reported carefully.

If we consider observation against asking questions, usability tests are about doing things, not discussing them. We may ask users to talk aloud while doing tasks to help us understand what they are thinking, but we need the context of what they are doing.

"To design an easy-to-use interface, pay attention to what users do, not what they say. Self-reported claims are unreliable, as are user speculations about future behavior." - Jakob Nielsen

This means that usability tests trump questionnaires and surveys. It also means that people are notoriously bad at remembering what they did or imagining what they will do. It does not mean that we never listen to what users say, as there is a lot of value to be gained from a well-formed question asked at the right time. We must just be careful about how we interpret it. We need to interpret what people say within the context of how they say it, what they are doing when they say it, and what their biases might be. For example, users tend to tell researchers what they think we want to hear, so any value judgment will likely be more positive than it should be. This is called experimenter bias.

Despite the preceding cautions, all three methods are useful and increase the value of a test. While observation is core, the most effective usability tests include tasks carefully designed around metrics, and begin and end with a contextual interview with the user. The interviews help us to understand the user's previous and current experiences, and the context in which they might use the website in their own lives.

Planning a usability test can seem like a daunting task. There are many details to work out and organize, and they all need to come together on the day(s) of the test. The usability test process can be pictured as a flowchart in which each box represents a different area that must be considered or organized. By using these areas to break the task down into logical steps, and by keeping a checklist, the task becomes manageable.

Planning usability tests

In designing and planning a usability test, you need to consider five broad questions:

- What: What are the objectives, scope, and focus of the test? What fidelity are you testing?
- How: How will you realize the objectives? Do you need permissions and sign-off? What metrics, tasks, and questions are needed? What are the hardware and software requirements? Do you need a prototype? What other materials will you need? How will you conduct the test?
- Who: How many participants, and who will they be? How will you recruit them? What roles are needed? Will team members, clients, and stakeholders attend? Will there be a facilitator and/or a notetaker?
- Where: What venue will you use? Is the test conducted in an internal or external lab, on the streets or in coffee shops, or in users' homes or workplaces?
- When: What is the date of the test? What will the schedule be? What is the timing of each part?

Documenting these questions and their answers forms your test plan. It is important to remember that no matter how carefully you plan usability testing, it can all go horribly wrong, so have backup plans wherever you can. For example: for participants who cancel late or do not arrive, have a couple of spares ready; for power cuts, be prepared with screenshots so you can at least simulate some tasks on paper; for testing live sites when the internet connection fails, have a portable wireless router or cache pages beforehand.

Designing the test - formulating goals and structure

The first thing to consider when planning a usability test is its goal. This will dictate the test scope, focus, tasks, and questions. For example, if your goal is a general usability test of the whole website, the tasks will be based on the business reasons for the site (the most important user interactions) and you will ask questions about general impressions of the site. However, if your goal is to test the search and filtering options, your tasks will involve finding things on the website, and you will ask questions about the difficulty of finding things. If you are not sure what the specific goal of the usability test might be, think about the following three points:

- Scope: Do you want to test part of the design, or the whole website?
- Focus: Which area of the website will you focus on? Even if you want to test the whole website, there will be areas that are more important, for example, checkout versus the contact page.
- Behavioral questions: Are there questions about how users behave, or how different designs might impact user behavior, that are being asked within the organization?

Thinking about these questions will help you refine your test goals. Once you have the goals, you can design the structure of the test and create a high-level test plan. When deciding how many tests to conduct in a day and how long each test should be, remember moderator and user fatigue. A test environment is a stressful situation. Even if you are testing with users in their own home, you are asking them to perform unfamiliar tasks with an unfamiliar system. If users become too tired, this will affect test results negatively. Likewise, facilitating a test is tiring, as the moderator must observe and question the user carefully while monitoring things like the time, their own language, and the script. Here are details to consider when creating a schedule for test sessions:

- Test length: Typically, each test should be between 60 and 90 minutes long.
- Number of tests: You should not be facilitating more than 5-6 tests in one day. When counting the hours, leave at least half an hour of cushioning space between tests. This gives you time to save the recording, make extra notes if necessary, and communicate with any observers, and it provides flexibility if participants arrive late or tests run long.
- Number of tasks: This is roughly the number of tasks you hope to include in the test. In a 60-minute test, you will probably have about 40-45 minutes for tasks; the rest of the time will be taken up with welcoming the participant, the initial interview, and closing questions at the end. In 45 minutes you can fit about 5-8 tasks, depending on their nature. It is important to remember that less is more in a test: you want to give participants time to explore the website and think about their options, not rush them on to the next task.

The last thing to consider is moderating technique. This is how you interact with the participant and ask for their input. There are two aspects: thinking aloud and probing. Thinking aloud is asking participants to talk about what they are thinking and doing, so you can understand what is in their heads. Probing is asking participants ad hoc questions about interesting things that they do. You can do both concurrently or retrospectively:

- Concurrent thinking aloud and probing: Here, the participant talks while doing tasks and looking at the interface, and the facilitator asks questions as they come up, while the participant is working. Concurrent probing interferes with metrics such as time on task and accuracy, as you might distract users. However, it also takes less test time and can deliver more accurate insights, as participants do not have to remember their thoughts and feelings; these are shared as they happen.
- Retrospective thinking aloud and probing: This involves retracing the test or task after it is finished and asking participants to describe what they were thinking in retrospect. The facilitator may note down questions during tasks and ask them later. While retrospective techniques simulate natural interaction more closely, they take longer because tasks are retraced. This means that the test must be longer, or there will be fewer tasks and interview questions. Retrospective techniques also require participants to remember what they were thinking previously, which can be faulty.

Concurrent moderating techniques are preferable because of the close alignment between users acting and talking about those actions. Retrospective techniques should only be used if timing metrics are very important. Even in these cases, concurrent thinking aloud can be used with retrospective probing, since thinking aloud concurrently generally interferes very little with task times and accuracy; users are ideally just verbalizing ideas already in their heads.

At each stage of test planning, share the ideas with the team and stakeholders and ask for feedback. You may need permission to go forward with test objectives and tasks. However, even if you do not need sign-off, sharing details with the team gets everyone involved in the testing. This is a good way to share and promote design values. It also benefits the test, as team members will probably have good ideas about tasks to include or elements of the website to test that you have not considered.

Designing tasks and metrics

As we have stated previously, usability testing is about watching users interacting with a product. Tasks direct the interactions that you want to see. Therefore, they should cover the focus area of the test, or all important interactions if the whole website is being tested. To make the test more natural, if possible create scenarios or user stories that link the tasks together, so participants perform a logical sequence of activities. If you have scenarios or task analyses from previous research, choose those that relate to your test goals and focus, and use them to guide your task design. If not, create brief scenarios that cover your goals. You can do this from a top-down or bottom-up perspective:

- Top down: What events or conditions in their world would motivate people to use this design? For example, if the website is a used-goods marketplace, a potential user might have an item they want to get rid of easily while making some money, or they might need an item and try to get it cheaply secondhand. Then, what tasks accomplish these goals?
- Bottom up: What are the common tasks that people do on the website? In the marketplace example, common tasks are searching for specific items; browsing through categories of items; and adding an item to the site to sell, which might include uploading photographs or videos and adding contact details and item descriptions. Then, create scenarios around these tasks to tie them together.

Tasks can be exploratory and open-ended, or specific and directed. A test should have both. For example, you can begin with an open-ended task, such as examining the home page and exploring the links that are interesting, and then move on to more directed tasks, such as finding a particular color, size, and brand of shoe and adding it to the checkout cart. It is always good to begin with exploratory tasks, but these can be open-ended or directed. For example, to gather first impressions of a website, you could ask users to explore as they prefer from the home page and give their impressions as they work; or you could ask users to look at each page for five seconds and then write down everything they remember seeing. The second option is much more controlled, which may be necessary if you want more direct comparison between participants, or are testing with a prototype where only parts of the website are available.

Metrics are needed for task, observation, and interview analysis, so that we can evaluate the success of the design we are testing. They guide how we examine the results of a usability test. They are based on the definition of usability, and so relate to effectiveness, efficiency, satisfaction, discoverability, learnability, and error proneness. Metrics can be qualitative or quantitative. Qualitative metrics aim to encode the data so that we can detect patterns and trends in it, and compare the success of participants, tasks, or tests; for example, noting expressions of delight or frustration during a task. Quantitative metrics collect numbers that we can manipulate and compare against each other or against benchmarks; for example, the number of errors each participant makes in a task. We must be careful how we use and assign meaning to quantitative metrics because of the small sample sizes. Here are some typical metrics:

- Task success or completion rate: A measure of effectiveness, and one that should always be captured as a base. It relates most closely to conversion, which is the primary business goal for a website, whether that is converting browsers to buyers or visitors to registered users. You may just note success or failure, but it is more revealing to capture the degree of task success. For example, you can specify whether the task was completed easily, with some struggle, with help, or not completed successfully.
- Time on task: A measure of efficiency: how long it takes to complete tasks.
- Errors per task: A measure of error proneness: the number and severity of errors per task, especially noting critical errors where participants may not even realize they have made a mistake.
- Steps per task: A measure of efficiency: the number of steps or pages needed to complete each task, often against a known minimum.
- First click: A measure of discoverability: noting the first click for each task, to report on the findability of items on the web page. This can also be used in more exploratory tasks to judge what attracts the user's attention first.

When you have designed your tasks, consider them against the definition of usability to make sure that you have covered everything you need or want to cover; each component of the usability definition has typical metrics associated with it, as in the list above.

A valid criticism of usability testing is that it only tests first-time use of a product, as participants do not have time to become familiar with the system. There are ways around this problem. For example, certain types of task, such as search and browsing, can be repeated with different items. In later tasks, participants will be more familiar with the controls. The facilitator can use observation, or metrics such as task time and accuracy, to judge the effect of familiarity. A more complicated method is to conduct longitudinal tests, where participants are asked to return a few days or a week later and perform similar tasks. This is only worth spending time and money on if learnability is an important metric.
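As a small illustration of how these quantitative metrics might be tallied after a round of tests, here is a hypothetical sketch in JavaScript. The session data, field names, and values are invented for the example:

// Hypothetical results from five usability test sessions for one task.
const sessions = [
  { completed: true,  timeOnTaskSec: 95,  errors: 0, easeOfUse: 6 },
  { completed: true,  timeOnTaskSec: 140, errors: 2, easeOfUse: 5 },
  { completed: false, timeOnTaskSec: 300, errors: 4, easeOfUse: 2 },
  { completed: true,  timeOnTaskSec: 120, errors: 1, easeOfUse: 6 },
  { completed: true,  timeOnTaskSec: 180, errors: 1, easeOfUse: 4 },
];

const mean = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;

console.log('Completion rate:', sessions.filter((s) => s.completed).length / sessions.length);
console.log('Mean time on task (s):', mean(sessions.map((s) => s.timeOnTaskSec)));
console.log('Mean errors per task:', mean(sessions.map((s) => s.errors)));
console.log('Mean ease-of-use (1-7):', mean(sessions.map((s) => s.easeOfUse)));

Remember that with only five participants these numbers describe the sessions themselves; as cautioned above, they are too few for statistical significance and must be reported carefully.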
Planning questions and observation

The interview questions asked at the beginning and end of a test provide valuable context for user actions and reactions, such as the user's background, their experiences with similar websites or the subject area, and their relationship with technology. They also help the facilitator to establish rapport with the user. Other questions provide valuable qualitative information about the user's emotional reaction to the website and the tasks they are doing. A combination of observation and questions provides data on aspects such as ease of use, usefulness, satisfaction, delight, and frustration.

For the initial interview, questions should cover:

- Welcome: These set the participant at ease, and can include questions about the participant's lifestyle, job, and family. These details help UX practitioners to present test participants as real people with normal lives when reporting on the test.
- Domain: These ask about the participant's experience with the domain of the website. For example, if the website is in the domain of financial services, questions might be about the participant's banking, investments, and loans, and their experiences with other financial websites. As part of this, you might investigate their feelings about security and privacy.
- Tech: These ask about the participant's usage of and experience with technology. For example, for testing a website on a computer, you might want to know how often the participant uses the internet or social media each day, what kinds of things they do on the internet, and whether they buy things online. If you are testing mobile usage, you might want to ask how often the participant uses the internet on their phone each day, and what kinds of sites they visit on mobile versus desktop.

Like tasks, questions can be open-ended or closed. An example of an open-ended question is: Tell me about how you use your phone throughout a normal workday, beginning with waking up in the morning and ending with going to sleep at night. The facilitator would then prompt the participant for further details suggested during the reply. A closed question might be: What is your job? Closed questions generate simple responses, but can be used as springboards into deeper answers. For example, if the answer is fireman, the facilitator might say: That's interesting. Tell me more about that. What do you do as a fireman?

Questions asked at the end of the test, or during it, are more about the specific experience of the website and the tasks. These are often made more quantifiable by using a rating scale to structure the answer. A typical example is a Likert scale, where participants specify their agreement or disagreement with a statement on a 5- to 7-point scale. For example, a statement might be: I can find what I want easily using this website, where 1 is labeled Strongly Agree and 7 is labeled Strongly Disagree. Participants choose the number that corresponds to the strength of their agreement or disagreement. You can then compare responses between participants or across different tests. Examples of typical questions include:

- Ease of use (after every task): On a scale of 1-7, where 1 is really hard and 7 is really easy, how difficult or easy did you find this task?
- Ease of use (at the end): On a scale of 1-7, how easy or difficult did you find working on this website?
- Usefulness: On a scale of 1-7, how useful do you think this website would be for doing your job?
- Recommendation: On a scale of 1-7, how likely are you to recommend this website to a friend?

It is important to always combine these kinds of questions with observation and task performance, and to ask why afterwards. People tend to self-report very positively, so often you will pay less attention to the number they give and more to how they talk about their answer afterwards.
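Because of this positive bias, it can also help to report the spread of ratings rather than a single average. The following is a small illustrative sketch (the ratings are invented) that prints the median and a tally of answers to a 1-7 ease-of-use question; the median is usually a safer summary than the mean for ordinal Likert data:

```java
import java.util.Arrays;

public class LikertSummary {
    public static void main(String[] args) {
        // Made-up answers to "How easy or difficult did you find this task?" (1 = really hard, 7 = really easy).
        int[] ratings = {6, 7, 4, 6, 2, 7};
        Arrays.sort(ratings);
        int n = ratings.length;

        // Median of the sorted ratings.
        double median = (n % 2 == 1)
                ? ratings[n / 2]
                : (ratings[n / 2 - 1] + ratings[n / 2]) / 2.0;
        System.out.printf("median = %.1f, mean = %.2f%n",
                median, Arrays.stream(ratings).average().orElse(0.0));

        // A simple tally makes outliers (the 2 above) stand out for follow-up questions.
        int[] counts = new int[8];
        for (int r : ratings) counts[r]++;
        for (int value = 1; value <= 7; value++) {
            System.out.println(value + " | " + "*".repeat(counts[value]));
        }
    }
}
```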
The final questions you ask provide closure for the test and end it gracefully. These can be more general and conversational; they might deliver useful data, but that is not the priority. For example: What did you think of the website? or Is there anything else you'd like to say about the website?

Questions during the test often arise ad hoc, because you do not understand why the participant does an action, or what they are thinking about if they stare at a page of the website for a while. You might also want to ask participants what they expect to find before they select a menu item or look at a page.

In preparing for observation, it is helpful to make a list of the kinds of things you especially want to observe during the test. Typical options are:

- Reactions to each new page of the website
- First reactions when they open the Home page
- The variety of steps used to complete each task
- Expressions of delight or frustration
- Reactions to specific elements of the website
- First clicks for each task
- First click off the Home page

Much of what you want to observe will be guided by the usability test objectives and the nature of the website.

Preparing the script

Once you have designed all the elements of the usability test, you can put them together in a script. This is a core document in usability testing, as it acts as the facilitator's guide during each test. There are different ideas about what to include in a script. Here, we describe a comprehensive script that covers the whole test procedure. This includes, in rough order:

- The information that must be given to the participant in the welcome speech. The welcome speech is very important, as it is the participant's first experience of the facilitator and where rapport is first established. The following information may need to be included:
  - An introduction to the facilitator, client, and product.
  - What will happen during the test, including its length.
  - The idea that the website is being tested, not the participant, and that any problems are the fault of the product. This means the participant is valuable and helpful to the team developing a great website.
  - Asking the participant to think aloud as much as possible, and to be honest and blunt about what they think.
  - Asking them to imagine that they are at home in a natural situation, exploring the website.
  - If there are observers, an indication that people may be watching and that they should be ignored.
  - Asking permission to record the session, and telling the participant why, while assuring them of their privacy and the limited use of the recordings for analysis and internal reporting.
- A list of any documents that the participant must look at or sign first, for example, an NDA.
- Instructions on when to switch any recording devices on and off.
- The questions to ask, in thematic sections, for example, welcome, domain, and technology. These can include potential follow-on questions, to delve for more information if necessary.
- A task section that has several parts:
  - An introduction to the prototype, if necessary. If you are testing with a prototype, there will probably be unfinished areas that are not clickable. It is worth alerting participants, so they know what to expect while doing tasks and talking aloud.
  - Instructions on how to use the technology, if necessary. Ideally, your participants should be familiar with the technology, but if this is not the case, you want to be testing the website, not the technology.
    For example, this applies if you are testing with a particular screen reader that the participant has not used before, or if you are using eye tracking technology.
  - An introduction to the tasks, including any scenarios provided to the participant.
  - The description of each task. Be careful not to use words from the website interface when describing tasks, so you do not guide the participant too much. For example, instead of How would you add this item to your cart?, say How would you buy this item?
  - Questions to include after each task, for example, the ease-of-use question.
  - Questions to prompt the participant if they are not thinking aloud when they should be, especially on each new page of the website or prototype. For example: What do you see here? What can you do here? What do you think these options mean?
- Final questions to finish off the test and give the participant a chance to emphasize any of their experiences.
- A list of any documents the participant must sign at the end, and instructions for giving the incentive, if appropriate.

Once the script is created, timing is added to each task and section to help the facilitator make sure that the tests do not run over time. This will be refined as the usability test is practiced. The script also provides a structure to take notes in during the test, either on paper or digitally:

- Create a spreadsheet with rows for each question and task.
- Use the first column for the script, from the welcome questions onwards.
- Capture notes in subsequent columns for each user.
- Use a separate spreadsheet for each participant during the test.
- After all the tests, combine the results into one spreadsheet so you can easily analyze and compare.

The following is a diagram showing sections of the script for notetaking, with sample questions and tasks, for a radio station website.

Securing a venue and inviting clients and team members

If you are testing at an external venue, this is one of the first things you will need to organize for a usability test, as these venues typically need to be booked about one to two months in advance. Even if you are testing in your own offices, you will still need to book space for the testing. When considering a test venue, you should look for the following:

- A quiet, dedicated space where the facilitator, participant, and potentially a notetaker can sit. This needs surfaces for all the equipment that will be used during the test, and comfortable space for the participant. Consider the lighting in the test room: it might cause glare if you are testing on mobile phones, so think about how best to handle this, for example, where the best place is for the participant to sit, and whether you can use indirect lighting of some kind.
- A reception room where participants can wait for their testing session. This should be comfortable, and you may want to provide refreshments for participants here.
- Ideally, an observation room for people to watch the usability tests. Observers should never be in the same space as the testing, as this will distract participants and probably make them uncomfortable. The observation room should be linked to the test room, either with cables or wirelessly, so observers can see what is happening on the participant's screen and hear (and ideally see) the participant during the test. Some observation rooms have two-way mirrors into the test room, so observers can watch the facilitator and participant directly. Refreshments should be available for the observers.

We have discussed the various testing roles previously.
Here, we describe them formally:

- Facilitator: This is the person who conducts the test with the participant. They sit with the participant, instruct them in the tasks and ask questions, and take notes. This is the most important role during the test. We will discuss it further in the Conducting usability tests section.
- Participant: This is the person who is doing the test. We will discuss recruiting test participants in the next section.
- Notetaker: This is an optional role. It can be worth having a separate notetaker, so the facilitator does not have to take notes during the test. This is especially the case if the facilitator is inexperienced. If there is a notetaker, they sit quietly in the test room and do not engage with the participant, except when introduced by the facilitator.
- Receptionist: Someone must act as receptionist for the participants who arrive. This cannot be the facilitator, as they will be in the sessions. Ask a team member or the office receptionist to take this role.
- Observers: Everyone else is an observer. These can be other team members and/or clients. Observers should be given guidelines for their behavior. For example, they should not interact with test participants or interrupt the test. They watch from a separate room, and should not be too noisy so that they can be heard in the test room (often these rooms are close to each other).

The facilitator should discuss the tests with observers between sessions, to check if they have any questions they would like added to the test, and to discuss observations. It is worth organizing a debriefing for immediately after the tests, or the next day if possible, for the observers and facilitator to discuss the tests and observations.

It is important that as many stakeholders as possible are persuaded to watch at least some of the usability testing. Watching people using your designs is always enlightening, and really helps to bring a team together. Remember to invite clients and team members early, and send reminders closer to the day.

Recruiting participants

When recruiting participants for usability tests, make sure that they are as close as possible to your target audience. If your website is live and you have a pool of existing users, then your job is much easier. However, if you do not have a user pool, or you want to test with people who have not used your site, then you need to create a specification for appropriate users that you can give to a recruiter or use yourself.

To specify your target audience, consider what kinds of people use your website, and what attributes will cause them to behave differently to other users. If you have created personas during previous research, use these to help identify target user characteristics. If you are designing a website for a client, work with them to identify their target users.

It is important to be specific, as it is difficult to look for people who fulfill abstract qualities. For example, instead of asking for tech savvy people, consider what kinds of technology such people are more likely to use, and what their activities are likely to be. Then ask for people who use the technology in the ways you have identified. Consider the behaviors that result from certain types of beliefs, attitudes, and lifestyle choices. The following are examples of general areas you should consider:

- Experience with technology: You may want users who are comfortable with technology or who have used specific technology, for example, the latest smartphones, or screen readers.
  Consider the properties that will identify these people. For example, you can specify that all participants must own a specific type or generation of mobile device, and must have owned it for at least two months.
- Online experience: You may want users with a certain level and frequency of internet usage. To elicit this, you can specify that you want people who have bought items online within the last few months, or who do online banking, or who have never done these things.
- Social media presence: Often, you want people who have a certain amount of social media interaction, potentially on specific platforms. In this case, you would specify that they must regularly post to or read social media such as Facebook, Twitter, Instagram, or Snapchat, or more hobbyist platforms such as Pinterest and/or Flickr.
- Experience with the domain: Participants should not know too much or too little about the domain. For example, if you are testing banking software, you may want to exclude bank employees, as they are familiar with how things work internally.
- Demographics: Unless your target audience is very skewed, you probably want to recruit a demographically varied group, for example, a range of ages, genders, ethnicities, and economic and education levels.

There may be other characteristics you need in your usability test participants; the previous characteristics should give you an idea of how to specify such people. For example, you may want hobbyist photographers, in which case you would recruit people who regularly take photographs and share them with friends in some way. Do not use people who have taken part in your previous tests, unless you specifically need people like this, as they will be familiar with your tests and procedures, which will interfere with results.

Recruiting takes time and is difficult to do well. There are various ways of recruiting people for user testing, depending on your business. You may be able to use people or organizations associated with your business, or target audience members, to recruit people using the screening questions and incentives that you give them. You can set up social media lists of people who follow your business and are willing to participate. You can also use professional recruiters, who will get you exactly the kinds of people you need, but will charge you for it. For most tests, an incentive is given to thank participants for their time. This is often money, but it can also be a gift, such as stationery or gift certificates.

A recruitment brief is the document that you give to recruiters. The following are the details you need to include:

- The day of the test, the test length, and the ideal schedule. This should state the times at which the first and last participants may be scheduled, how long each test will take, and the buffer period that should be left between sessions.
- The venue, including an address, maps, parking, and travel information.
- Contact details for the team members who will oversee the testing and recruitment.
- A description of the test that can be given to participants.
- The incentives that will be provided.
- The list of qualities you need in participants, or screening questions to check for these.

This document can be modified to share with less formal recruitment associates. The benefit of recruiters is that they handle the whole recruitment process. If you and your team recruit participants yourselves, you will need to remind them a week before the test and the day before the test, usually by messaging or emailing them.
On the day of the test, phone participants to confirm that they will be arriving and that they know how to get to the venue. Participants still often do not attend tests, even with all the reminders; this is the nature of testing with real people. Ideally, you will be given some notice, so try to recruit an extra couple of possible participants who you can call on at short notice on the day.

Setting up the hardware, software, and test materials

Depending on the usability test, you will have to prepare different hardware, software, and test materials. These include screen recording software and hardware, notetaking hardware, the prototype to test, screen sharing options, and so on. The first thing to consider is the prototype, as this has implications for hardware and software. Are you testing a live website, an electronic prototype, or a paper prototype?

- Live website: Set up any accounts or passwords that may be necessary. Make sure you have reliable access to the internet, or a way to cache the website on your machine if necessary.
- Electronic prototype: Make sure the prototype works the way it is supposed to, and that all the parts that are accessed during the tasks can be interacted with, if required. Try not to make it too obvious which parts work and which do not, as this may guide participants to the correct actions during the test. Be prepared to talk participants through parts of the prototype that do not work, so they have context for the tasks. Keep a safe copy of the prototype in case the working copy becomes corrupted in some way.
- Paper prototype: Make sure that you have sketches or printouts of all the screens needed to complete the tasks. With paper prototype testing, the facilitator takes the role of the computer, shows the results of the actions that the participant proposes, and talks participants through the screens. Make sure that you are prepared for this and know the order of the screens. Have multiple copies of the paper prototype in case parts get lost or damaged.

For any of the three, make sure the team goes through the test tasks to confirm that everything is working the way it should.

For hardware and other software, keep an equipment list, so you can check that you have all the necessary hardware with you. You may need to include:

- Hardware for the participant to interact with the prototype or live site: This may be a desktop, laptop, or mobile device. If testing on a mobile device, you can ask participants to use their own familiar phones instead of an unfamiliar test device. However, participants may have privacy concerns about using their own phones, and you will not be able to test the prototype or live site on the phone beforehand. If you provide a laptop, include a separate mouse, as people often have difficulty with unfamiliar mouse pads.
- Recording the screen and audio: This is usually screen capture software. There are many options, such as Lookback, an inexpensive option for iOS and Android, and CamStudio, a free option for the PC. Specialist software that handles multiple camera inputs allows you to record face and screen at the same time; examples are iSpy, free CCTV software; Silverback, an inexpensive option for the Mac; and Morae, an expensive but impressive option for the PC.
- Mobile recording alternative: You can also record mobile video with an external camera that captures the participant's fingers on the screen.
  This means you do not have to install additional software on the phone, which might cause performance problems. In this case, you would use a document camera attached to the table, or a portable rig holding the phone, with an attached camera connected to a nearby PC. The video will include hesitations and hovering gestures, which are useful for understanding user behavior, but fingers might occlude the screen. In addition, rigs may interfere with natural usage of the phone, as participants must hold the rig as well as the phone.
- Observer viewing screen: This is needed if there are observers. The venue might have screen sharing set up; if not, you will have to bring your own hardware and software. This could be an external monitor and extension cables connected to a laptop in the interview room, or screen sharing software such as join.me.
- Capturing notes: You will need a method of capturing notes. Even if you are screen recording, notes will help you to review the recordings more efficiently and remind you of the parts of the recording you wanted to pay special attention to. One method is a tablet or laptop and a spreadsheet: typing is fast, and electronic notes are easy to combine after the tests. An alternative is paper and pencil, which has the benefit of being the least disruptive to the participant; however, these notes must then be captured electronically.
- Camera for the participant's face: Capturing the participant's face is not crucial, but it provides good insight into their feelings about tasks and questions. If you don't record the face, you will only have tone of voice and the notes that were taken to remind you. Possible methods are a webcam attached to the computer doing the screen recording, or built-in software such as Hangouts, Skype, or FaceTime for Apple devices.
- Microphone: Sound quality is often not great with screen capturing software, because of feedback from computer equipment. Using an external microphone improves the quality of the sound.
- Wireless router: A portable wireless router, in case of internet problems (if you are using the internet).
- Extra extension cables and chargers for all devices.

You will also need to make sure that you have multiple copies of all the documents needed for the testing. These might include:

- Consent form: When you are testing people, they typically need to give their permission to be tested, and you typically need proof that the incentive has been received by the participant. These are usually combined into one form that the participant signs to give their permission and acknowledge receipt of the incentive.
- Non-disclosure agreement (NDA): Many businesses require test participants to sign an NDA before viewing the prototype. This must be signed before the test begins.
- Test materials: Any documents that provide details to the participants for the test.
- Checklists: It is worth printing out your checklists of things to do and equipment, so that you can check items off as you complete them and be sure that everything is done by the time it needs to be done.

The following figure shows a basic sample checklist for planning a usability test. For a more detailed checklist, add in timing and break the tasks down further; these refinements will depend on the specific usability test. Where you are uncertain about how long something will take, overestimate. Remember that once you have fixed the day, everything must be ready by then.
Checklist for usability test preparation

Conducting usability tests

On the day(s) of the usability test, if you have planned properly, all you should have to worry about are the tests themselves and interacting with the participants. Here is a list of things to double-check on the day of each test:

Before the first test:
- Set up and check equipment and rooms.
- Have a list of participants and their order.
- Make sure there are refreshments for participants and observers.
- Make sure you have a receptionist to welcome participants.
- Make sure that the prototype is installed, or that the website is accessible via the internet and working.
- Test all equipment, for example, recording software, screen sharing, and audio in the observation room.
- Turn off anything on the test computer or device that might interfere with the test, for example, email, instant messaging, virus scans, and so on.
- Create bookmarks for any web pages you need to open.

Before each test:
- Have the script ready to capture notes from a new participant.
- Have the screen recorder ready.
- Have the browser open in a neutral position, for example, Google search.
- Have sign sheets and the incentive ready.
- Start screen sharing.
- Reload sample data if necessary, and clear the browser history from the last test.

During each test:
- Follow the script, including when the participant must sign forms and receive the incentive.
- Press record on the screen recorder.
- Give the microphone to the participant if appropriate.

After each test:
- Stop recording and save the video.
- Save the script.
- End screen sharing.
- Note extra details that you did not have time for during the session.

Once you have all the details organized, the test session is in the hands of the facilitator.

Best practices for facilitating usability sessions

The facilitator should be welcoming and friendly, but relatively ordinary and not overly talkative. The participant and the website should be the focus of the interview and test, not the facilitator. To create rapport with the participant, the facilitator should be an ally. A good way to do this is to make light of the situation and reassure participants that their experiences in the test will be helpful. Another good technique is to ask more like an apprentice than an expert, so that the participant answers your questions, for example: Can you tell me more about how this works? and What happens next?

Since you want participants to feel as natural and comfortable as possible in their interactions, the facilitator should foster natural exploration and help satisfy participant curiosity as much as possible. However, they need to remain aware of the script and the goals of the test, so that the participant covers what is needed.

Participants often struggle to talk aloud; they forget to do so while doing tasks. The facilitator therefore often needs to nudge participants to talk aloud or volunteer information. Here are some useful questions and comments:

- What are you thinking?
- What do you think about that?
- Describe the steps you're doing here.
- What's going on here?
- What do you think will happen next?
- Is that what you expected to happen?
- Can you show me how you would do that?

When you are asking questions, you want to be sure that you help participants to be as honest and accurate as possible. We've previously stated that people are notoriously bad at projecting what they will do or remembering what they did. This does not mean that you cannot ask about what people do; you must just be careful about how you ask, and always try to keep it concrete.
The priorities in asking questions are:

- Now: Participants talking aloud about what they are doing and thinking right now.
- Retrospective: Participants talking about what they have done or thought in the past.
- Never prospective: Never ask participants what they would do in the future. Rather, ask about what they have done in similar situations in the past.

Here are some other techniques for getting the best out of participants without leading them too much yourself:

- Ask probing questions such as why and how to get to the real reasons for actions.
- Do not assume you know what participants are going to say. Check or paraphrase if you are not sure what they said or why they said it. For example: So are you saying the text on the left is hard to read? or You're not sure about what? or That picture is weird? How?
- Do not ask leading questions, as people will give positive answers to please you. For example, do not say Does that make sense?, Do you like that?, or Was that easy? Rather say: Can you explain how this works?, What do you think of that?, and How did you find doing that task?
- Do not tell participants what they are looking at; you are trying to find out what they think. For example, instead of Here is the product page, say Tell me what you see here, or Tell me what this page is about.
- Return the question to the participant if they ask what to do or what will happen: I can't tell you, because we need to find out what you would do if you were alone at home. What would you normally do? or What do you think will happen?
- Ask one question at a time, and make time for silence. Don't overload the participants; give them a chance to reply. People will often try to fill a silence, so you may get more responses if you don't rush to fill it yourself.
- Encourage action, but do not tell them what to do. For example: Give it a try.
- Use acknowledgment tokens to encourage action and talking aloud. For example: OK, uh huh, mm hmm.

A good facilitator makes participants feel comfortable and guides them through the tasks without leading, while observing carefully and asking questions where necessary. It takes practice to accomplish this well.

The facilitator (and the notetaker, if there is one) must also keep the later analysis in mind. Analysis is time-consuming, so consider what can be done beforehand to make it easier. Here are some pointers:

- Taking notes on a common spreadsheet built from the script is helpful, because the results are ready to be combined easily.
- If you are gathering quantitative results, such as timing tasks or counting steps to accomplish activities, prepare spaces for these on the spreadsheet before the test, so all the numbers are easily accessible afterward.
- If you are rating task completion, note a preliminary rating as each task is completed. This can be as simple as choosing appropriate cell colors beforehand and coloring each cell as the task is completed. The rating may change during analysis, but you will have initial guidance.
- Listen for useful and illustrative quotes or video segment opportunities. Note down the quote, or roughly note the timestamp, so you know where to look in the recording.
- In general, have a timer at hand, and note the timestamp of any important moments in each test. This will make reviewing the recordings easier and less time-consuming.
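As a sketch of the first pointer, here is one way the per-participant note spreadsheets might be merged after the tests. It assumes each participant's notes were exported as a simple two-column CSV (script item, notes) with identical rows in the same order; the file names and format are hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class MergeNotes {
    public static void main(String[] args) throws IOException {
        // Hypothetical per-participant note files, all sharing the same script rows.
        List<String> files = List.of("notes-p1.csv", "notes-p2.csv", "notes-p3.csv");

        List<String> merged = new ArrayList<>();
        for (String file : files) {
            List<String> lines = Files.readAllLines(Path.of(file));
            for (int row = 0; row < lines.size(); row++) {
                // Naive split: fine for a sketch, but use a real CSV library if notes contain commas.
                String[] cells = lines.get(row).split(",", 2);
                if (merged.size() <= row) {
                    merged.add(cells[0]); // the first file contributes the script column
                }
                // Append this participant's notes as a new column.
                merged.set(row, merged.get(row) + "," + (cells.length > 1 ? cells[1] : ""));
            }
        }
        Files.write(Path.of("notes-all.csv"), merged);
    }
}
```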
We have examined how to plan, organize, and conduct a usability test. As part of this, we discussed how to design a test with goals, tasks, metrics, and questions using the definition of usability.

If you liked this article, be sure to check out the book UX for the Web, which covers making a web app fully accessible from both a development and a design perspective.
Why use JVM (Java Virtual Machine) for deep learning

Guest Contributor
10 Nov 2019
Deep learning is one of the revolutionary breakthroughs of the decade for enterprise application development. Today, the majority of organizations and enterprises are transforming their applications to exploit the capabilities of deep learning. In this article, we will discuss how to leverage the capabilities of the JVM (Java Virtual Machine) to build deep learning applications.

Enterprises prefer the JVM

The major JVM languages used in the enterprise are Java, Scala, Groovy, and Kotlin. Java is the most widely used programming language in the world, and nearly all major enterprises use it in some way or other. Enterprises use JVM-based languages such as Java to build complex applications because JVM features are well suited to production applications. JVM applications are also significantly faster and require far fewer resources to run than counterparts such as Python, and Java can perform more computational operations per second than Python; there are interesting performance benchmarks comparing the two.

The JVM optimizes performance

Production applications represent a business and are very sensitive to performance degradation, latency, and other disruptions. Application performance is estimated from latency and throughput measures, and memory overload and high resource usage can hurt both. Applications that demand more resources or memory require good hardware and further optimization in the application itself. The JVM helps in optimizing performance and tuning the application to the hardware's fullest capabilities, and it can also help applications avoid large memory footprints.

We have discussed JVM features so far, but there is important context for why there is such demand for JVM-based deep learning in production, which we discuss next. Python is undoubtedly the leading programming language used in deep learning applications. For the same reason, the majority of enterprise developers, that is, Java developers, are forced to switch to a technology stack that they are less familiar with. On top of that, they need to address compatibility issues and production deployment while integrating neural network models.

DeepLearning4J, a deep learning library for the JVM

Java developers working on enterprise applications want to exploit build tools like Maven or Gradle for hassle-free builds and deployments, so there is demand for a JVM-based deep learning library that simplifies the whole process. Although multiple deep learning libraries serve this purpose, DL4J (Deeplearning4J) is one of the top choices.

DL4J is a deep learning library for the JVM and is among the most popular repositories on GitHub. DL4J, developed by the Skymind team, is the first open-source deep learning library that is commercially supported. What makes it special is that it is backed by ND4J (N-Dimensional Arrays for Java) and JavaCPP. ND4J is a scientific computing library developed by the Skymind team; it acts as the backend dependency for all neural network computations in DL4J, and it is much faster in computations than NumPy. JavaCPP acts as a bridge between Java and native C++ libraries, and ND4J depends on it internally to run native C++ code. DL4J also has a dedicated ETL component called DataVec, which transforms data into a format that a neural network can understand. Data analysis can be done using DataVec much as with Pandas, the popular Python data analysis library.
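To give a feel for the DL4J workflow, here is a minimal sketch of configuring and initializing a small multi-layer perceptron. The builder API shown follows the 1.0.0-beta-era releases, so treat the exact names and signatures as approximate and check the DL4J documentation for your version:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class Dl4jSketch {
    public static void main(String[] args) {
        // A small MLP: 784 inputs (e.g. flattened MNIST pixels), one hidden layer, 10 output classes.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(42)                // reproducible weight initialization
                .updater(new Adam(1e-3)) // optimizer and learning rate
                .list()
                .layer(0, new DenseLayer.Builder()
                        .nIn(784).nOut(128)
                        .activation(Activation.RELU)
                        .build())
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(128).nOut(10)
                        .activation(Activation.SOFTMAX)
                        .build())
                .build();

        MultiLayerNetwork model = new MultiLayerNetwork(conf);
        model.init();
        // model.fit(trainIterator); // training takes a DataSetIterator, typically built with DataVec
        System.out.println(model.summary());
    }
}
```

The INDArray type provided by ND4J is what flows through such a network, which is why ND4J's performance matters so much to DL4J as a whole.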
DL4J also uses the Arbiter component for hyperparameter optimization. Arbiter finds the configuration that yields good model scores by performing random or grid search over the hyperparameter values defined in a search space.

Why choose DL4J for your deep learning applications?

DL4J is a good choice for developing distributed deep learning applications. It can leverage the capabilities of Apache Spark and Hadoop to develop high-performing distributed deep learning applications, and its performance is on par with Caffe when multi-GPU hardware is used. We can use DL4J to develop multi-layer perceptrons, convolutional neural networks, recurrent neural networks, and autoencoders, and there are a number of hyperparameters that can be adjusted to further optimize neural network training.

The Skymind team has done a good job of explaining the important basics of DL4J on their website, and they also have a Gitter channel where you can discuss issues or report bugs straight to the developers. If you are keen on exploring reinforcement learning further, there is a dedicated library called RL4J (Reinforcement Learning for Java), also developed by Skymind; it can already play the game Doom!

DL4J combines all the components mentioned above (DataVec, ND4J, Arbiter, and RL4J) into a powerful software suite for the deep learning workflow. Most importantly, DL4J enables the productionization of deep learning applications for the business.

If you are interested in learning how to develop real-time applications on DL4J, check out my new book, Java Deep Learning Cookbook. In this book, I show you how to install and configure Deeplearning4j to implement deep learning models. You can also explore recipes for training and fine-tuning your neural network models in Java. By the end of the book, you'll have a clear understanding of how you can use Deeplearning4j to build robust deep learning applications in Java.

Author Bio

Rahul Raj has more than 7 years of IT industry experience in software development, business analysis, client communication, and consulting on medium/large-scale projects. He has extensive experience in development activities comprising requirement analysis, design, coding, implementation, code review, testing, user training, and enhancements. He has written a number of articles about neural networks in Java and has been featured by DL4J and the official Java community channel. You can follow Rahul on Twitter, LinkedIn, and GitHub.