
How-To Tutorials - Mobile

213 Articles

Creating a sample application (Simple)

Packt
04 Sep 2013
8 min read
How to do it...

To create an application, include the JavaScript and CSS files in your page. Perform the following steps:

1. Create an HTML document, index.html, under your project directory. Note that this directory should be placed in the web root of your web server.
2. Create the directories styles and scripts under your project directory.
3. Copy the CSS file kendo.mobile.all.min.css from <downloaded directory>/styles to the styles directory created in step 2. Then add a reference to the CSS file in the head section of the document.
4. Download the jQuery library from jQuery.com. Place this file in the scripts directory and add a reference to it in the document before the closing body tag. You can also specify the CDN location of the file in the document.
5. Copy the JavaScript file kendo.mobile.min.js from <downloaded directory>/js to the scripts directory created in step 2. Then add a reference to this JavaScript file in the document (after jQuery).
6. Add the text "Hello Kendo!!" in the body tag of the index.html file as follows:

```html
<!DOCTYPE HTML>
<html>
<head>
    <title>My first Kendo Mobile Application</title>
    <link rel="stylesheet" type="text/css" href="styles/kendo.mobile.all.min.css">
</head>
<body>
    Hello Kendo!!
    <script type="text/javascript" src="scripts/jquery.min.js"></script>
    <script type="text/javascript" src="scripts/kendo.mobile.min.js"></script>
</body>
</html>
```

The preceding code snippet is a simple HTML page with references to the Kendo Mobile CSS and JavaScript files. These files are minified and contain all the features, themes, and widgets. In production, you will want to include only the files that are required; the downloaded ZIP file includes CSS and JavaScript files for specific features. During development, however, you can use the minified files that contain all the features.

Another thing to note is that, apart from the reference to the script kendo.mobile.min.js, the page also includes a reference to jQuery. It is the only external dependency for Kendo UI.

When you view this page on a mobile device, you will see the text Hello Kendo!!. This page does not include any of the widgets that come as part of the library. Now let's build on top of our Hello World application and add some visual elements, that is, UI widgets, to the page. This can be done with the following steps:

1. Add a layout first. A mobile application generally has a header, a footer, and multiple views. It is also observed that, while navigating through different views in an application, the header and footer remain constant. The framework allows you to define a global layout that may contain a header and a footer for all the views in the application. It also allows you to define multiple views that can share the same layout. The following is the same page, now including a header and footer defined in the layout:

```html
<body>
    <div data-role="layout" data-id="defaultLayout">
        <header data-role="header">
            <div data-role="navbar">
                My first application
            </div>
        </header>
        <footer data-role="footer">
            <div data-role="tabstrip">
                <a data-icon="about">About</a>
                <a data-icon="settings">Settings</a>
            </div>
        </footer>
    </div>
</body>
```

The body contains a few div tags with data attributes. Let's look at one of these tags in detail:

```html
<div data-role="layout" data-id="defaultLayout">
```

Here, the div tag contains two data attributes, role and id. The role data attribute is used to initialize and configure a widget.
The data-role attribute has the value layout, identifying the target element as a layout widget. All widgets are expected to have a role data attribute that marks the target element for a specific purpose; it tells the library which widget needs to be added to the page. The id data attribute is used to identify the widget (here, the layout widget) in the page. A page may define several layout widgets, and each one must be identified by a unique ID. Here, the data-id attribute has defaultLayout as its value, and any number of views can refer to this layout by its id.

Similarly, there are other elements in the page with the data-role attribute, defining them as one of the widgets in the page. Let's take a look at the header and footer widgets defined inside the layout:

```html
<header data-role="header">...</header>
<footer data-role="footer">...</footer>
```

The header and footer tags have the role data attribute set to header and footer respectively. This aligns them to the top and bottom of the page, leaving the rest of the available space for the different views to render. Also note that there is a navbar widget in the header and a tabstrip widget defined in the footer. As mentioned earlier, the framework comes with several widgets that can help you build the application rapidly.

2. Now add views to the page. The index.html page now has a layout defined, and when you run the page in the browser you will see an error message in the console which says:

```
Uncaught Error: Your kendo mobile application element does not contain any direct child elements with data-role="view" attribute set. Make sure that you instantiate the mobile application using the correct container.
```

Views represent the actual content that is displayed between the header and the footer that we defined while creating the layout. A layout cannot exist without a view, hence the error message in the console. To fix this error, you need to define a view for your mobile application. Add the following to your index.html page:

```html
<div data-role="view" data-layout="defaultLayout">
    Hello Kendo!!
</div>
```

As mentioned earlier, every widget needs a role data attribute to identify itself as a particular widget in the page. Here, the target element is defined as a view widget and tied to the layout by the data-layout attribute. The data-layout attribute has the value defaultLayout, which is the same as the value of the data-id attribute of the layout we defined earlier. This attaches the view to the layout, and you will not see the error message any more. Similarly, you can define multiple views in the page that make use of the same layout (a sketch of this appears at the end of this article).

3. Now there is only one pending task for the application to start working: initializing it. A Kendo Mobile application is initialized using the Application object. To do that, add the following code to the page:

```html
<script>
    var app = new kendo.mobile.Application();
</script>
```

Include this script block right after the references to jQuery and Kendo Mobile and before the closing body tag. This single line of JavaScript code will initialize your Kendo Mobile application and all the widgets with the data-role attribute. The Application object is also used for many other purposes.

How it works...

When you run the index.html page in a browser, you will see a navbar and a tabstrip in the header and footer of the page, with the message Hello Kendo!! shown in the body of the page.
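For reference, here is the assembled index.html at this point: a consolidated sketch built only from the snippets above, assuming the styles and scripts directories created in the earlier steps.

```html
<!DOCTYPE HTML>
<html>
<head>
    <title>My first Kendo Mobile Application</title>
    <link rel="stylesheet" type="text/css" href="styles/kendo.mobile.all.min.css">
</head>
<body>
    <!-- The view, attached to the layout via data-layout -->
    <div data-role="view" data-layout="defaultLayout">
        Hello Kendo!!
    </div>

    <!-- The shared layout: a navbar in the header, a tabstrip in the footer -->
    <div data-role="layout" data-id="defaultLayout">
        <header data-role="header">
            <div data-role="navbar">My first application</div>
        </header>
        <footer data-role="footer">
            <div data-role="tabstrip">
                <a data-icon="about">About</a>
                <a data-icon="settings">Settings</a>
            </div>
        </footer>
    </div>

    <script type="text/javascript" src="scripts/jquery.min.js"></script>
    <script type="text/javascript" src="scripts/kendo.mobile.min.js"></script>
    <script>
        // Initialize the application and every element carrying a data-role attribute
        var app = new kendo.mobile.Application();
    </script>
</body>
</html>
```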
The following screenshot shows how it looks when you view the page on an iPhone. As you may have noticed, it looks like a native iOS application: the framework is able to render the application with a native look on the device. When you view the same page on an Android device, it will look like a native Android application, as shown in the second screenshot. The framework identifies the platform on which the mobile application is running and provides the native look and feel of that platform. There are ways in which you can customize this behavior.

Summary

Creating a sample application (Simple) got us started with the Kendo UI Mobile framework and showed us how to build a sample application using it. We also looked briefly at some of the mobile UI widgets, such as layouts, views, the navbar, and the tabstrip.
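Building on the note above about multiple views sharing one layout, here is a hedged sketch (not from the original article) of two views attached to defaultLayout that navigate to each other; Kendo UI Mobile navigates between views by id through ordinary anchors, and the ids home and details are illustrative.

```html
<!-- Two views sharing defaultLayout; an anchor's href="#viewId" triggers navigation -->
<div data-role="view" id="home" data-layout="defaultLayout">
    Hello Kendo!!
    <a data-role="button" href="#details">Go to details</a>
</div>

<div data-role="view" id="details" data-layout="defaultLayout">
    This second view reuses the same header and footer.
    <a data-role="button" href="#home">Back home</a>
</div>
```

Both views pick up the navbar and tabstrip from the layout, so only the content between the header and the footer changes during navigation.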


Building an iPhone App Using Swift: Part 1

Ryan Loomba
17 Mar 2016
6 min read
In this post, I'll be showing you how to create an iPhone app using Apple's new Swift programming language. Swift is a programming language that Apple released in June at their special WWDC event in San Francisco, CA. You can find more information about Swift on the official page. Apple has released a book on Swift, The Swift Programming Language, which is available on the iBooks Store or can be viewed online. OK, let's get started!

The first thing you need in order to write an iPhone app using Swift is a copy of Xcode 6. Currently, the only way to get a copy of Xcode 6 is to sign up for Apple's developer program. The cost to enroll is $99 USD/year. Once enrolled, click on the iOS 8 GM Seed link and scroll down to the link that says Xcode 6 GM Seed.

Once Xcode is installed, go to File -> New -> New Project. Click on Application within the iOS section and choose a Single View Application. Click on the play button at the top left of the project to build it; you should see the iPhone simulator open with a blank white screen. Next, click on the blue Sample Swift App project file at the top left and navigate to the General tab. In the Deployment Info section, select portrait for the device orientation. This will force the app to only be viewed in portrait mode.

First View Controller

If we navigate on the left to Main.storyboard, we see a single View Controller with a single View. First, make sure that Use Size Classes is unchecked in the Interface Builder Document section. Let's add a text view to the top of our view. In the bottom-right text box, search for Text View. Drag the Text View and position it at the top of the View. Click on the Attributes inspector on the right toolbar to adjust the font and alignment. If we click the play button to build the project, we should see the same white screen, but now with our Swift Sample App text.

View a web page

Let's add our first feature: a button that will open up a web page. First, embed our controller in a navigation controller so we can easily navigate back and forth between views. Select the view controller in the storyboard, then go to Editor -> Embed in -> Navigation Controller. Note that you might need to resize the text view you added in the previous step.

Now, let's add a button that will open up a web view. Back in our view, search for a button in the bottom right, drag it somewhere in the view, and label it Web View. If we build the project and click on the button, nothing will happen: we need to create a destination controller that will contain the web view. Go to File -> New and create a new Cocoa Touch Class. Name the new controller WebViewController and make it a subclass of UIViewController. Make sure you choose Swift as the language, and click Create to save the controller file.

Back in our storyboard, search for a View Controller in the bottom-right search box and drag it onto the storyboard. In the Attributes inspector on the right side of the screen, give this controller the title WebViewController. In the Identity inspector, give this view controller a custom class of WebViewController.

Let's wire up our two controllers. Ctrl + click on the Web View button we created earlier, hold, and drag your cursor over to the newly created WebViewController. Upon release, choose push. On our storyboard, search for a web view in the lower-right search box and drag it into our newly created WebViewController.
Resize the web view so that it takes up the entire screen, except for the top nav bar area. If we hit the large play button at the top left to build our app, clicking on the Web View button will take us to a blank screen, and a back button will take us back to the first screen.

Writing some Swift code

Let's have the web view load a predetermined website. Time to get our hands dirty writing some Swift! The first thing we need to do is link the web view in our storyboard to the WebViewController.swift file. In the storyboard, click on the Assistant editor button at the top right of the screen. You should see the storyboard view of WebViewController and WebViewController.swift next to each other. Ctrl + click on the web view in the storyboard and drag it over to the line right before the WebViewController class is defined. Name the variable webView.

In the viewDidLoad function, we are going to add some initialization to load our web page. After super.viewDidLoad(), first declare the URL we want to use. This can be any URL; for this example, I'm going to use my own homepage. It will look something like this:

```swift
let requestURL = NSURL(string: "http://ryanloomba.com")
```

In Swift, the keyword let is used to designate constants, that is, variables that will not change. Next, we convert this URL into an NSURLRequest object. Finally, we tell our web view to make this request and pass in the request object:

```swift
import UIKit

class WebViewController: UIViewController {
    @IBOutlet var webView: UIWebView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let requestURL = NSURL(string: "http://ryanloomba.com")
        let request = NSURLRequest(URL: requestURL)
        webView.loadRequest(request)
        // Do any additional setup after loading the view.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    /*
    // MARK: - Navigation
    // In a storyboard-based application, you will often want to do a little preparation before navigation
    override func prepareForSegue(segue: UIStoryboardSegue!, sender: AnyObject!) {
        // Get the new view controller using segue.destinationViewController.
        // Pass the selected object to the new view controller.
    }
    */
}
```

Try changing the URL to see different websites.

About the author

Ryan is a software engineer and electronic dance music producer currently residing in San Francisco, CA. Ryan started out as a biomedical engineer but fell in love with web/mobile programming after building his first Android app. You can find him on GitHub @rloomba.


The Art of Android Development Using Android Studio

Packt
28 Oct 2015
5 min read
In this article by Mike van Drongelen, the author of the book Android Studio Cookbook, you will see why Android Studio is the number one IDE for developing Android apps. It is available for free to anyone who wants to develop professional Android apps. Android Studio is not just a stable and fast IDE (based on JetBrains IntelliJ IDEA); it also comes with cool stuff such as Gradle, better refactoring methods, and a much better layout editor, to name just a few. If you have been using Eclipse before, then you're going to love this IDE.

Android Studio tip

Want to refactor your code? Use the shortcut Ctrl + T (for Windows: Ctrl + Alt + Shift + T) to see what options you have. You can, for example, rename a class or method or extract code from a method.

Any type of Android app can be developed using Android Studio. Think of apps for phones, phablets, tablets, TVs, cars, glasses, and other wearables such as watches. Or consider an app that uses a cloud-based backend such as Parse or App Engine, a watch face app, or even a complete media center solution for TV.

So, what is in the book? The sky is the limit, and the book will help you make the right choices while developing your apps. For example, on smaller screens, provide smart navigation and use fragments to make apps look great on a tablet too. Or see how content providers can help you manage and persist data and how to share data among applications. The observer pattern that comes with content providers will save you a lot of time.

Android Studio tip

Do you often need to return to a particular place in your code? Create a bookmark with Cmd + F3 (for Windows: F11). To display a list of bookmarks to choose from, use the shortcut Cmd + F3 (for Windows: Shift + F11).

Material design

The book will also elaborate on material design. Create cool apps using the CardView and RecyclerView widgets. Find out how to create special effects and how to perform great transitions. A chapter is dedicated to investigating the Camera2 API and how to capture and preview photos. In addition, you will learn how to apply filters and how to share the results on Facebook.

Android Studio tip

Are you looking for something? Press Shift twice and start typing what you're searching for. Or, to display all recent files, use the Cmd + E shortcut (for Windows: Ctrl + E).

Quality and performance

You will learn about patterns and how support annotations can help you improve the quality of your code. Testing your app is just as important as developing it, and it will take your app to the next level; aim for a five-star rating in the Google Play Store later. The book shows you how to do unit testing based on JUnit or Robolectric and how to use code analysis tools such as Android Lint. You will learn about memory optimization using the Android Device Monitor, how to detect issues, and how to fix them.

Android Studio tip

You can easily extract code from a method that has become too large. Just mark the code that you want to move and use the shortcut Cmd + Alt + M (for Windows: Ctrl + Alt + M).

Having a physical Android device to test your apps is strongly recommended, but with thousands of Android devices on the market, testing on real devices could be pretty expensive. Genymotion is a fast and easy-to-use emulator that comes with many real-world device configurations. Did all your unit tests succeed? Are there no more OutOfMemoryExceptions?
No memory leaks found? Then it is about time to distribute your app to your beta testers. The final chapters explain how to configure your app for a beta release by creating the build types and build flavours that you need. Finally, distribute your app to your beta testers using Google Play and learn from their feedback.

Did you know?

Android Marshmallow (Android 6.0) introduces runtime permissions, which will change the way users grant permissions to an app.

The book

The Art of Android Development Using Android Studio contains around 30 real-world recipes clarifying all the topics discussed. It is a great start for programmers who have been using Eclipse for Android development before, but it is also suitable for new Android developers who already know the Java syntax.

Summary

The book nicely explains all the things you need to know to find your way around Android Studio and how to create high-quality, great-looking apps.


Creating a Simple Application in Sencha Touch

Packt
15 Feb 2012
10 min read
Setting up your folder structure

Before we get started, you need to be sure that you've set up your development environment properly.

Root folder

You will need to have the folders and files for your application located in the correct web server folder on your local machine. On the Mac, this will be the Sites folder in your Home folder. On Windows, this will be C:\xampp\htdocs (assuming you installed XAMPP).

Setting up your application folder

Before we can start writing code, we have to perform some initial setup, copying in a few necessary resources and creating the basic structure of our application folder. This section will walk you through the basic setup of the Sencha Touch files, creating your style sheets folder, and creating the index.html file.

1. Locate the Sencha Touch folder you downloaded.
2. Create a folder in the root folder of your local web server. You may name it whatever you like; I have used the folder name TouchStart in this article.
3. Create three empty subfolders called lib, app, and css in your TouchStart folder.
4. Now, copy the resources and src folders from the Sencha Touch folder you downloaded earlier into the TouchStart/lib folder.
5. Copy the following files from your Sencha Touch folder to your TouchStart/lib folder:
   - sencha-touch.js
   - sencha-touch-debug.js
   - sencha-touch-debug-w-comments.js
6. Create an empty file in the TouchStart/css folder called TouchStart.css. This is where we will put custom styles for our application.
7. Create an empty index.html file in the main TouchStart folder. We will flesh this out in the next section.

Icon files

Both iOS and Android applications use image icon files for display. This creates the pretty rounded launch buttons found on most touch-style applications. If you are planning on sharing your application, you should also create PNG image files for the launch image and application icon. Generally, there are two launch images: one with a resolution of 320 x 460 px for iPhones, and one at 768 x 1004 px for iPads. The application icon should be 72 x 72 px. See Apple's iOS Human Interface Guidelines for specifics, at http://developer.apple.com/library/ios/#documentation/userexperience/conceptual/mobilehig/IconsImages/IconsImages.html. When you're done, your TouchStart folder should contain the lib, app, and css subfolders, with the Sencha resources and src folders inside lib.

Creating the HTML application file

Using your favorite HTML editor, open the index.html file we created when setting up our application folder. This HTML file is where you specify links to the other files we will need in order to run our application. The following code sample shows how the HTML should look:

```html
<!DOCTYPE html>
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <title>TouchStart Application - My Sample App</title>

    <!-- Sencha Touch CSS -->
    <link rel="stylesheet" href="lib/resources/css/sencha-touch.css" type="text/css">

    <!-- Sencha Touch JS -->
    <script type="text/javascript" src="lib/sencha-touch-debug.js"></script>

    <!-- Application JS -->
    <script type="text/javascript" src="app/TouchStart.js"></script>

    <!-- Custom CSS -->
    <link rel="stylesheet" href="css/TouchStart.css" type="text/css">
</head>
<body></body>
</html>
```

Comments

In HTML, anything between <!-- and --> is a comment, and it will not be displayed in the browser. These comments tell you what is going on in the file. It's a very good idea to add comments to your own files, in case you need to come back later and make changes.
Let's take a look at this HTML code piece by piece, to see what is going on in the file. The first five lines are just the basic setup for a typical web page:

```html
<!DOCTYPE html>
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <title>TouchStart Application - My Sample App</title>
```

With the exception of the last line containing the title, you should not need to change this code for any of your applications. The title line should contain the title of your application; in this case, TouchStart Application - My Sample App is our title.

The next few lines are where we begin loading the files to create our application, starting with the Sencha Touch files. The first file is the default CSS file for the Sencha Touch library, sencha-touch.css:

```html
<link rel="stylesheet" href="lib/resources/css/sencha-touch.css" type="text/css">
```

CSS files

CSS, or Cascading Style Sheet, files contain style information for the page, such as which items are bold or italic, which font sizes to use, and where items are positioned in the display. The Sencha Touch style library is very large and complex; it controls the default display of every single component in Sencha Touch and should not be edited directly.

The next file is the actual Sencha Touch JavaScript library. During development and testing, we use the debug version of the library, sencha-touch-debug.js:

```html
<script type="text/javascript" src="lib/sencha-touch-debug.js"></script>
```

The debug version of the library is not compressed and contains comments and documentation. This can be helpful if an error occurs, as it allows you to see exactly where in the library the error occurred. When you have completed your development and testing, you should edit this line to use sencha-touch.js instead. This alternate file is the version of the library optimized for production environments; it takes less bandwidth and memory to use, but it has no comments and is very hard to read. Neither sencha-touch-debug.js nor sencha-touch.js should ever be edited directly.

The next two lines are where we begin to include our own application files. The names of these files are totally arbitrary, as long as they match the names of the files you create later in the next section of this chapter. It's usually a good idea to name the file the same as your application, but that is entirely up to you. In this case, our files are named TouchStart.js and TouchStart.css.

```html
<script type="text/javascript" src="app/TouchStart.js"></script>
```

This first file, TouchStart.js, is the file that will contain our JavaScript application code. The last file we need to include is our own custom CSS file, called TouchStart.css. This file will contain any style information we need for our application. It can also be used to override some of the existing Sencha Touch CSS styles.

```html
<link rel="stylesheet" href="css/TouchStart.css" type="text/css">
```

This closes out the <head> area of the index.html file. The rest of the file contains the <body></body> tags and the closing </html> tag. If you have any experience with traditional web pages, it may seem a bit odd to have empty <body></body> tags in this fashion. In a traditional web page, this is where all the information for display would normally go. For our Sencha Touch application, the JavaScript we create will populate this area automatically. No further content is needed in the index.html file, and all of our code will live in our TouchStart.js file.
So, without further delay, let's write some code!

Starting from scratch with TouchStart.js

Let's start by opening the TouchStart.js file and adding the following:

```javascript
new Ext.Application({
    name: 'TouchStart',
    launch: function() {
        var hello = new Ext.Container({
            fullscreen: true,
            html: '<div id="hello">Hello World</div>'
        });
        this.viewport = hello;
    }
});
```

This is probably the most basic application you can possibly create: the ubiquitous "Hello World" application. Once you have saved the code, use the Safari web browser to navigate to the TouchStart folder in the root folder of your local web server. The address should look like the following:

- http://localhost/TouchStart/ on the PC
- http://127.0.0.1/~username/TouchStart on the Mac (replace username with the username for your Mac)

As you can see, all that this bit of code does is create a single window with the words Hello World. However, there are a few important elements to note in this example. The first line, new Ext.Application({, creates a new application for Sencha Touch. Everything listed between the curly braces is a configuration option of this new application. While there are a number of configuration options for an application, most consist of at least the application's name and a launch function.

Namespace

One of the biggest problems with using someone else's code is the issue of naming. For example, if the framework you are using has an object called Application, and you create your own object called Application, the two will conflict. JavaScript uses the concept of namespaces to keep these conflicts from happening. In this case, Sencha Touch uses the namespace Ext. It is simply a way to eliminate potential conflicts between the framework's objects and code and your own. Sencha will automatically set up a namespace for your own code as part of the new Ext.Application object. Ext is also part of the name of Sencha's web application framework, ExtJS. Sencha Touch uses the same namespace convention to allow developers familiar with one library to easily understand the other.

When we create a new application, we need to pass it some configuration options. This tells the application how to look and what to do. These configuration options are contained within curly braces ({}) and separated by commas. The first option is as follows:

```javascript
name: 'TouchStart'
```

The launch configuration option is actually a function that tells the application what to do once it starts up. Let's start backwards on this last bit of code for the launch configuration and explain this.viewport. By default, a new application has a viewport. The viewport is a pseudo-container for your application; it's where you will add everything else. Typically, this viewport will be set to a particular kind of container object. At the beginning of the launch function, we start out by creating a basic container, which we call hello:

```javascript
launch: function() {
    var hello = new Ext.Container({
        fullscreen: true,
        html: '<div id="hello">Hello World</div>'
    });
    this.viewport = hello;
}
```

Like the Application class, a new Ext.Container class is passed a configuration object consisting of a set of configuration options, contained within curly braces ({}) and separated by commas. The Container object has over 40 different configuration options, but for this simple example we only use two:

- fullscreen sets the size of the container to fill the entire screen (no matter which device is being used).
- html sets the content of the container itself. As the name implies, this can be a string containing either HTML or plain text.

Admittedly, this is a very basic application, without much in the way of style. Let's add something extra using the container's layout configuration option (a sketch follows at the end of this article).

My application didn't work!

When you are writing code, it is an absolute certainty that you will, at some point, encounter errors. Even a simple error can cause your application to behave in a number of interesting and aggravating ways. When this happens, it is important to keep one thing in mind: don't panic. Retrace your steps and use the tools to track down the error and fix it.
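Picking the example back up, here is a minimal sketch of the next step hinted at above: using the container's layout configuration option. It is based on the Sencha Touch 1.x API used in this article; the vbox settings and the extra panel are illustrative assumptions, not code from the article.

```javascript
new Ext.Application({
    name: 'TouchStart',
    launch: function() {
        var hello = new Ext.Container({
            fullscreen: true,
            layout: {
                type: 'vbox',    // stack child items vertically
                align: 'center'  // center them horizontally
            },
            items: [
                { xtype: 'panel', html: '<div id="hello">Hello World</div>' },
                { xtype: 'panel', html: 'A second panel, arranged by the vbox layout.' }
            ]
        });
        this.viewport = hello;
    }
});
```

Because the container now manages child items, the content moves from the html config into the items array, and the layout decides how those items are arranged on screen.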


Upgrading, packaging, and publishing your React VR app

Sunith Shetty
08 Jun 2018
19 min read
It is fun to develop and experience virtual worlds at home. Eventually, though, you want the world to see your creation. To do that, we need to package and publish our app. In the course of development, upgrades to React may come along; before publishing, you will need to decide whether to "code freeze" and ship with a stable version, or upgrade to a new version. This is a design decision. In today's tutorial, we will learn to upgrade React VR and bundle the code in order to publish on the web. This article is an excerpt from a book written by John Gwinner titled Getting Started with React VR. This book will get you well-versed with Virtual Reality (VR) and React VR components to create your own VR apps.

One of the neat things, although it can be frustrating, is that web projects are frequently updated. There are a couple of different ways to do an upgrade:

- You can install/create a new app with the same name, then go to your old app and copy everything over. This is a facelift upgrade, or rip and replace.
- You can do an update. Mostly, this is an update to package.json, after which you delete node_modules and rebuild it. This is an upgrade in place.

It is up to you which method you use, but the major difference is that an upgrade in place is somewhat easier (no source code to modify and copy), although it may or may not work. A facelift upgrade also relies on you using the correct react-vr-cli. There is a notice that runs whenever you run React VR from the Command Prompt that will tell you whether it's old. That error or warning may fly by quickly; it takes a while to run, so you may go away for a cup of coffee. Pay attention to red lines, seriously.

To do an upgrade in place, you will typically get an update notification from Git if you have subscribed to the project. If you haven't, you should go to http://bit.ly/ReactVR, create an account (if you don't have one already), and click on the eyeball icon to join the watch list. Then, you will get an email every time there is an upgrade. We will cover the most straightforward way to do an upgrade, the upgrade in place, first.

Upgrading in place

How do you know what version of React you have installed? From a Node.js prompt, type this:

```
npm list react-vr
```

Also, check the version of react-vr-web:

```
npm list react-vr-web
```

Check the version of react-vr-cli (the command-line interface, really only used for creating the hello world app):

```
npm list react-vr-cli
```

Check the version of ovrui (Open VR's user interface):

```
npm list ovrui
```

You can check these against the versions in the documentation. If you've subscribed to React VR on GitHub (and you should!), then you will get an email telling you that there is an upgrade. Note that the CLI will also tell you if it is out of date, although this only applies when you are creating a new application (folder/website). The release notes are at http://bit.ly/VRReleases; there, you will find instructions to upgrade. The upgrade instructions usually have you do the following:

1. Delete your node_modules directory.
2. Open your package.json file.
3. Update react-vr, react-vr-web, and ovrui to the new version number, for example, 2.0.0.
4. Update react to "a.b.c".
5. Update react-native to "~d.e.f".
6. Update three to "^g.h.k".
7. Run npm install or yarn.

Note the ~ and ^ symbols; ~version means approximately equivalent to version, and ^version means compatible with version.
This is a help, as you may have other packages that want specific versions of react-native and three. To get the values of {a...k}, refer to the release notes. I have also found that you may need to include these modules in the devDependencies section of package.json:

```json
"react-devtools": "^2.5.2",
"react-test-renderer": "16.0.0",
```

You may see this error:

```
module.js:529
    throw err;
    ^
Error: Cannot find module './node_modules/react-native/packager/blacklist'
```

If you do, make the following change in the rn-cli.config.js file in your project's root folder. Replace the line:

```javascript
var blacklist = require('./node_modules/react-native/packager/blacklist');
```

with:

```javascript
var blacklist = require('./node_modules/metro-bundler/src/blacklist');
```

Third-party dependencies

If you have been experimenting and adding modules with npm install <something>, you may find after an upgrade that things do not work. The package.json file also needs to know about all the additional packages you installed during experimentation. This is the project way (the npm way) to ensure that Node.js knows we need a particular piece of software. If you have this issue, you'll need to either repeat the install with the --save parameter, or edit the dependencies section in your package.json file:

```json
{
  "name": "WalkInAMaze",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node -e \"console.log('open browser at http://localhost:8081/vr/\\n\\n');\" && node node_modules/react-native/local-cli/cli.js start",
    "bundle": "node node_modules/react-vr/scripts/bundle.js",
    "open": "node -e \"require('xopen')('http://localhost:8081/vr/')\"",
    "devtools": "react-devtools",
    "test": "jest"
  },
  "dependencies": {
    "ovrui": "~2.0.0",
    "react": "16.0.0",
    "react-native": "~0.48.0",
    "three": "^0.87.0",
    "react-vr": "~2.0.0",
    "react-vr-web": "~2.0.0",
    "mersenne-twister": "^1.1.0"
  },
  "devDependencies": {
    "babel-jest": "^19.0.0",
    "babel-preset-react-native": "^1.9.1",
    "jest": "^19.0.2",
    "react-devtools": "^2.5.2",
    "react-test-renderer": "16.0.0",
    "xopen": "1.0.0"
  },
  "jest": {
    "preset": "react-vr"
  }
}
```

Again, this is the manual way; a better way is to use npm install <package> --save. The --save qualifier saves the new package you've installed in package.json. The manual edits can be handy to ensure that you've got the right versions if you get a version mismatch. If you mess around with installing and removing enough packages, you will eventually mess up your modules. If you get errors even after removing node_modules, issue these commands:

```
npm cache clean --force
npm start -- --reset-cache
```

The cache clean won't do it by itself; you need the reset-cache, otherwise the problem packages will still be saved, even if they don't physically exist!

Really broken upgrades - rip and replace

If, however, after all that work, your upgrade still does not work, all is not lost. We can do a rip and replace upgrade. Note that this is sort of a last resort, but it does work fairly well. Follow these steps:

1. Ensure that your react-vr-cli package is up to date, globally:

```
[F:\ReactVR]npm install react-vr-cli -g
C:\Users\John\AppData\Roaming\npm\react-vr -> C:\Users\John\AppData\Roaming\npm\node_modules\react-vr-cli\index.js
+ react-vr-cli@<version>
updated 8 packages in 2.83s
```

This is important, as when there is a new version of React, you may not have the most up-to-date react-vr-cli. It will tell you when you use it that there is a newer version out, but that line frequently scrolls by; if you get bored and don't notice, you can spend a lot of time trying to install an updated version, to no avail.
npm generates a lot of verbiage, but it is important to read what it says, especially red formatted lines.

2. Ensure that all CLI (DOS) windows, editing sessions, Node.js running CLIs, and so on, are closed. (You shouldn't need to reboot, however; just close everything using the old directory.)
3. Rename the old code to MyAppName140 (add a version number to the end of the old react-vr directory).
4. Create the application, using react-vr init MyAppName, in other words, the original app name.
5. The next step is easiest using a diff program (refer to http://bit.ly/WinDiff). I use Beyond Compare, but there are other ones too. Choose one and install it, if needed.
6. Compare the two directories, .\MyAppName (new) and .\MyAppName140, and see what files have changed.
7. Move over any new files from your old app, including assets (you can probably copy over the entire static_assets folder).
8. Merge any files that have changed, except package.json. Generally, you will need to merge these files:
   - index.vr.js
   - client.js (if you changed it)
9. For package.json, see what lines have been added, and install those packages in the new app via npm install <missed package> --save, or start the app and see what is missing.
10. Remove any files seeded by the hello world app, such as chess-world.jpg (unless you are using that background, of course).
11. Usually, you don't change the rn-cli.config.js file (unless you modified the seeded version).

Most code will move directly over. Ensure that you change the application name if you changed the directory name, but with the preceding directions, you won't have to. The preceding list of upgrade steps may be slightly harder if there are massive changes to React VR, as it will require some picking through source files; the source is pretty straightforward, though, so this should be easy in practice. I found that these techniques work best if the automatic upgrade did not work. As mentioned earlier, the time to do a major upgrade probably is not right before publishing the app, unless there is some new feature you need. You want to adequately test your app to ensure that there aren't any bugs. I'm including the upgrade steps here, though not because you should do them right before publishing.

Getting your code ready to publish

Honestly, you should never put off organizing your clothes until... oh, wait, we're talking about code. You should never put off organizing your code until the night you want to ship it. Even code you think is throwaway may end up in production. Learn good coding habits and style from the beginning.

Good code organization

Good code, from the very start, is very important for many reasons:

- If your code uses sloppy indentation, it's more difficult to read. Many code editors, such as Visual Studio Code, Atom, and WebStorm, will format code for you, but don't rely on these tools.
- Poor naming conventions can hide problems. An improper case on variables can hide problems, such as using this.State instead of this.state.
- Most of the time spent coding, as much as 80%, is in maintenance. If you can't read the code, you can't maintain it. When you're a starting-out programmer, you frequently think you'll always be able to read your own code, but when you pick up a piece years later, say "Who wrote this junk?", and then realize it was you, you will quit doing things like a, b, c, d variable names and the like.
- Most software at some point is maintained, read, copied, or used by someone other than the author.
- Most programmers think code standards are for "the other guy," yet complain when they have to code well. Who, then, does? Most programmers will immediately ask for the code documentation and roll their eyes when they don't find it. I usually ask to see the documentation they wrote for their last project; every programmer I've hired usually gives me a deer-in-the-headlights look. This is why I usually require good comments in the code.

A good comment is not something like this:

```javascript
//count from 99 to 1
for (i = 99; i > 0; i--)
...
```

A good comment is this:

```javascript
//we are counting bottles of beer
for (i = 99; i > 0; i--)
...
```

Cleaning the lint trap (checking code standards)

When you wash clothes, the lint builds up and will eventually clog your washing machine or dryer, or cause a fire. In the PC world, old code and poorly typed names can also build up. Refactoring is one way to clean up the code. I highly recommend that you use some form of version control, such as Git or Bitbucket, to check in your code; while refactoring, it's quite possible to totally mess up your code, and if you don't use version control, you may lose a lot of work.

A great way to do a code review of your work before you publish is to use a linter. Linters go through your code and point out problems (crud), improper syntax, and things that may work differently than you intend, and generally try to pick up your room after you, like your mom does. While you might not like it if your mom does that, these tools are invaluable. Computers are, after all, very picky, so why not use the machines against each other?

One of the most common ways to let software check your JavaScript is a program called ESLint. You can read about it at http://bit.ly/JSLinter. To install ESLint, you can do it via npm like most packages:

```
npm install eslint --save-dev
```

The --save-dev option puts a requirement in your project while you are developing. Once you've published your app, you won't need to pack the ESLint information with your project.

There are a number of other things you need to do to get ESLint to work properly; read the configuration pages and go through the tutorials. A lot depends on what IDE you use (you can use ESLint with Visual Studio, for example). Once you've installed ESLint, you need to configure a local configuration file. Do this with:

```
eslint --init
```

The --init command will display a prompt that asks how you want to configure the rules it will follow. It asks a series of questions, including what style to use. Airbnb is fairly common, although you can use others; there's no wrong choice. If you are working for a company, they may already have standards, so check with management. One of the prompts will ask if you need React.

React VR coding style

Coding style can be nearly religious, but in the JavaScript and React world, some standards are very common. Airbnb has a good, fairly well-regarded style guide at http://bit.ly/JStyle. For React VR, some style options to consider are as follows:

- Use lowercase for the first letter of a variable name. In other words, this.props.currentX, not this.props.CurrentX, and don't use underscores (this is called camelCase).
- Use PascalCase only when naming constructors or classes. As you're using PascalCase for files, make the filename match the class, so import MyClass from './MyClass'.
- Be careful about 0 vs {0}. In general, learn JavaScript and React.
- Always use const or let to declare variables, to avoid polluting the global namespace.
- Avoid using ++ and --.
  This one was hard for me, being a C++ programmer. Hopefully, by the time you've read this, I've fixed it in the source examples. If not, do as I say, not as I do!
- Learn the difference between == and ===, and use them properly; this is another thing that is new for C++ and C# programmers.

In general, I highly recommend that you pore over these coding styles and use a linter when you write your code.

Third-party dependencies

For your published website/application to really work reliably, we also need to update package.json; this is sort of the "project" way to ensure that Node.js knows we need a particular piece of software. We will edit the dependencies section to add the last line (the mersenne-twister entry, which was emphasized in bold in the book):

```json
{
  "name": "WalkInAMaze",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node -e \"console.log('open browser at http://localhost:8081/vr/\\n\\n');\" && node node_modules/react-native/local-cli/cli.js start",
    "bundle": "node node_modules/react-vr/scripts/bundle.js",
    "open": "node -e \"require('xopen')('http://localhost:8081/vr/')\"",
    "devtools": "react-devtools",
    "test": "jest"
  },
  "dependencies": {
    "ovrui": "~2.0.0",
    "react": "16.0.0",
    "react-native": "~0.48.0",
    "three": "^0.87.0",
    "react-vr": "~2.0.0",
    "react-vr-web": "~2.0.0",
    "mersenne-twister": "^1.1.0"
  },
  "devDependencies": {
    "babel-jest": "^19.0.0",
    "babel-preset-react-native": "^1.9.1",
    "jest": "^19.0.2",
    "react-devtools": "^2.5.2",
    "react-test-renderer": "16.0.0",
    "xopen": "1.0.0"
  },
  "jest": {
    "preset": "react-vr"
  }
}
```

This is the manual way; a better way is to use npm install <package> --save. The --save qualifier saves the new package you've installed in package.json. The manual edits can be handy to ensure that you've got the right versions if you get a version mismatch. If you mess around with installing and removing enough packages, you will eventually mess up your modules. If you get errors, even after removing node_modules, issue these commands:

```
npm start -- --reset-cache
npm cache clean --force
```

The cache clean won't do it by itself; you need the reset-cache, otherwise the problem packages will still be saved, even if they don't physically exist!

Bundling for publishing on the web

Assuming that you have your project dependencies set up correctly, to get your project to run from a web server (typically through an ISP or service provider) you need to "bundle" it. React VR has a script that will package up everything into just a few files. Note, of course, that your desktop machine counts as a "web server," although I wouldn't recommend that you expose your development machine to the web. The better way to have other people experience your new virtual reality is to bundle it and put it on a commercial web service.

Packaging React VR for release on a website

The basic process is easy with the script React VR provides:

1. Go to the VR directory where you normally run npm start, and run the npm run bundle command.
2. Then go to your website the same way you normally upload files, and create a directory called vr.
3. In your project directory, in our case f:\ReactVR\WalkInAMaze, find the following files in the build output folder (.\vr\build):
   - client.bundle.js
   - index.bundle.js
4. Copy those to your website.
5. Make a directory called static_assets.
6. Copy all of the files that your app uses from <AppName>\static_assets to the new static_assets folder.
7. Ensure that you have MIME mapping set up for all of your content; in particular, .obj, .mtl, and .gltf files may need new mappings.
   Check with your web server documentation:
   - For .gltf files, use model/gltf-binary.
   - Any .bin files used by glTF should be application/octet-stream.
   - For .obj files, I've used application/octet-stream.
   - The official list is at http://bit.ly/MimeTypes.
   - Very generally, application/octet-stream will send the files "exactly" as they are on the server, so it is sort of a general-purpose catch-all.
8. Copy the index.html from the root of your application to the directory on your website where you are publishing the app; in our case, it'll be the vr directory, so the file sits alongside the two .js files.
9. Modify index.html as follows (note the change to ./index.vr):

```html
<html>
  <head>
    <title>WalkInAMaze</title>
    <style>body { margin: 0; }</style>
    <meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">
  </head>
  <body>
    <!-- When you're ready to deploy your app, update this line to point to your compiled client.bundle.js -->
    <script src="./client.bundle?platform=vr"></script>
    <script>
      // Initialize the React VR application
      ReactVR.init(
        // When you're ready to deploy your app, update this line to point to
        // your compiled index.bundle.js
        './index.vr.bundle?platform=vr&dev=false',
        // Attach it to the body tag
        document.body
      );
    </script>
  </body>
</html>
```

Note that for a production release, which means you're pointing to a prebuilt bundle on a static web server and not the React Native bundler, the dev and platform flags actually won't do anything, so there's no difference between dev=true, dev=false, or even dev=foobar.

Obtaining releases and attribution

If you used any assets from anywhere on the web, ensure that you have the proper release. For example, many Daz3D or Poser models do not include the rights to publish the geometry information; including these on your website as an OBJ or glTF file may be a violation of that agreement. Someone could fairly easily download the model, or nearly all the geometry, and then use it for something else. I am not a lawyer; you should check with wherever you get your models to ensure that you have permission and, if necessary, attribute properly. Attribution licenses are a little difficult with a VR world unless you embed the attribution into a graphic somewhere; as we've seen, adding text can sometimes be distracting, and you will always have scale issues. If you embed a VR world in a page with <iframe>, you can always give proper attribution on the HTML side. However, this isn't really VR.

Checking image sizes and using content delivery sites

Some of the images you use, especially the ones in a <Pano> statement, can be quite large. You may need to optimize these for proper web speed and responsiveness. This is a fairly general topic, but one thing that can help is a content delivery network (CDN), especially if your world will be a high-volume one. Adding a CDN to your web server is easy. You host your asset files from a separate location, and you pass the root directory as the assetRoot in the ReactVR.init() call. For example, if your files were hosted at https://cdn.example.com/vr_assets/, you would change the method call in index.html to include the following third argument:

```javascript
ReactVR.init(
  './index.bundle.js?platform=vr&dev=false',
  document.body,
  { assetRoot: 'https://cdn.example.com/vr_assets/' }
);
```

Optimizing your models

If you were watching the web console, you may have noticed the model being loaded over and over. That is not necessarily the most efficient way.
Consider other techniques, such as loading a model once and passing it to the various child components as a prop (see the sketch at the end of this article). Polygon decimation is another technique that is very valuable in optimizing models for the web and VR. With the glTF file format, you can use normal maps and still make a low-polygon model look like a high-resolution one. Techniques to do this are well documented in the game development field, and they really do work well.

You should also optimize models so they do not include unseen geometry. If you are showing a car model with blacked-out windows, for example, there is no need to have engine and interior detail loaded (unless the windows are transparent). This sounds obvious, but I found that the lamp I used to illustrate the lighting examples had almost triple the number of polygons needed; the glass lamp shade had inner and outer polygons that were inside the model.

We learned to do version upgrades and, if need be, how to do rip and replace upgrades. We further discussed when to do an upgrade and how to publish the app on the web. If you are interested in how to include existing high-performance web code in a VR app, you may refer to the book Getting Started with React VR.
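To make the prop-passing idea above concrete, here is a hedged sketch of a parent component resolving a model source once and handing it to child components, so the same .gltf file is not requested separately by every child. The component names and the wall.gltf asset are illustrative, not from the book.

```javascript
import React from 'react';
import { AppRegistry, asset, Model, View } from 'react-vr';

// Resolve the asset once, relative to static_assets/
const wallModel = asset('wall.gltf');

class MazeWall extends React.Component {
  render() {
    // The child only positions the model; the parent decides which model it is.
    return (
      <Model
        source={{ gltf2: this.props.modelSource }}
        style={{ transform: [{ translate: this.props.position }] }}
      />
    );
  }
}

class WalkInAMaze extends React.Component {
  render() {
    return (
      <View>
        <MazeWall modelSource={wallModel} position={[0, 0, -4]} />
        <MazeWall modelSource={wallModel} position={[2, 0, -4]} />
      </View>
    );
  }
}

AppRegistry.registerComponent('WalkInAMaze', () => WalkInAMaze);
```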


Introduction to HoloLens

Packt
11 Jul 2017
10 min read
In this article by Abhijit Jana, Manish Sharma, and Mallikarjuna Rao, the authors of the book HoloLens Blueprints, we will be covering the following points to introduce you to using HoloLens for exploratory data analysis:

- Digital Reality - Under the Hood
- Holograms in reality
- Sketching the scenarios
- 3D Modeling workflow
- Adding Air Tap on speaker
- Real-time visualization through HoloLens

Digital Reality - Under the Hood

Welcome to the world of Digital Reality. The purpose of Digital Reality is to bring immersive experiences, such as taking or transporting you to a different world or place, letting you interact within those immersive experiences, mixing digital experiences with reality, and ultimately opening new horizons to make you more productive. Applications of Digital Reality are advancing day by day; some of them are in the fields of gaming, education, defense, tourism, aerospace, corporate productivity, enterprise applications, and so on. The spectrum and scenarios of Digital Reality are huge. In order to understand them better, they are broken down into three different categories:

- Virtual Reality (VR): You are disconnected from the real world and experience a virtual world. Devices available on the market for VR include Oculus Rift and Google VR.
- Augmented Reality (AR): Digital data is overlaid on the real world. Pokémon GO, one of the most famous games, is a global example of this. A device available on the market that falls under this category is Google Glass.
- Mixed Reality (MR): It spreads across the boundary of the real environment and VR. Using MR, you can have a seamless and immersive integration of the virtual and the real world.

This topic is mainly focused on developing MR applications using the Microsoft HoloLens device. Although these technologies look similar in the way they are used, and the difference can sometimes be confusing, there is a very clear boundary that distinguishes them from each other. As you can see in the following diagram, there is a very clear distinction between AR and VR; MR, however, has a spectrum that overlaps all three boundaries of the real world, AR, and VR.

[Diagram: Digital Reality Spectrum]

The following table describes the differences between the three:

- Virtual Reality: a complete virtual world; the user is completely isolated from the real world. Device examples: Oculus Rift and Google VR.
- Augmented Reality: overlays data over the real world; often used on mobile devices. Device example: Google Glass. Application example: Pokémon GO.
- Mixed Reality: seamless integration of the real and virtual worlds; the virtual world interacts with the real world through natural interactions. Device examples: HoloLens and Meta.

Holograms in reality

Until now, we have mentioned holograms several times. It is evident that they are crucial for HoloLens and holographic apps, but what is a hologram? Holograms are virtual objects made up of light and sound that blend with the real world to give us an immersive MR experience of both the real and virtual worlds. In other words, a hologram is an object like any other real-world object; the only difference is that it is made up of light rather than matter. The technology behind making holograms is known as holography.
The following figure represents two holographic objects placed on top of a real-size table and gives the experience of placing a real object on a real surface:
Holographic objects in a real environment
Interacting with holograms
There are basically five ways you can interact with holograms and HoloLens: using your Gaze, Gestures, and Voice, and with spatial audio and spatial mapping. Spatial mapping provides a detailed illustration of the real-world surfaces in the environment around HoloLens. This allows developers to understand the digitized real environment and mix holograms into the world around you. Gaze is the most common interaction, and we start every interaction with it. At any time, HoloLens knows what you are looking at using Gaze; based on that, the device can decide which object a gesture or voice command should be targeted at. Spatial audio is the sound coming out of HoloLens, and we use it to extend the MR experience beyond the visual.
HoloLens Interaction Model
Sketching the scenarios
The next step after elaborating the scenario details is to come up with sketches for the scenario. Sketching has a twofold purpose: it is the input to the next phase of asset development for the 3D Artist, and it helps validate requirements with the customer so that there are no surprises at the time of delivery. The designer can either build the sketches on their own or take help from the 3D Artist. Let's start with the sketch for the primary view of the scenario, where the user is viewing the HoloLens hologram:
Roam around the hologram to view it from different angles
Gaze at different interactive components
Sketch for the user viewing the HoloLens hologram
Sketching - interaction with speakers
While viewing the hologram, a user can gaze at different interactive components. One such component, identified earlier, is the speaker. When the user gazes at the speaker, it should be highlighted, and the user can then Air Tap on it. The Air Tap action should expand the speaker hologram so that the user can view the speaker component in detail.
Sketch for expanded speakers
After the speakers are expanded, the user should be able to visualize the speaker components in detail. Now, if the user Air Taps on the expanded speakers, the application should do the following:
Open the textual detail component about the speakers; the user can read the content and learn about the speakers in detail
Start voice narration, detailing the speaker details
The user can also Air Tap on the expanded speaker component, and this action should close the expanded speaker
Textual and voice narration for speaker details
As you did for the speakers, apply a similar approach and sketch the other components, such as lenses, buttons, and so on.
3D Modeling workflow
Before jumping into 3D Modeling, let's understand the 3D Modeling workflow across the different tools that we are going to use during the course of this topic. The following diagram explains the 3D Modeling workflow:
Flow of 3D Modeling workflow
Adding Air Tap on speaker
In this project, we will be adding the Air Tap to the left-side speaker. However, you can apply the same approach to the right-side speaker as well. Similar to the lenses, we have two objects here that we need to identify from the object explorer.
Navigate to Left_speaker_geo and left_speaker_details_geo in the Object Hierarchy window, and tag them as leftspeaker and speakerDetails respectively. By default, when you are just viewing the holograms, we hide the speaker details section. This section only becomes visible when we Air Tap, and it goes away again when we Air Tap once more:
Speaker with Box Collider
Add a new script inside the Scripts folder, and name it ShowHideBehaviour. This script will handle the show and hide behaviour of the speakerDetails game object. Use the following script inside the ShowHideBehaviour.cs file; we can reuse it for any other object that needs to be shown or hidden.
public class ShowHideBehaviour : MonoBehaviour
{
    public GameObject showHideObject;
    public bool showhide = false;

    private void Start()
    {
        try
        {
            MeshRenderer render = showHideObject.GetComponentInChildren<MeshRenderer>();
            if (render != null)
            {
                render.enabled = showhide;
            }
        }
        catch (System.Exception)
        {
        }
    }
}
The script finds the MeshRenderer component in the children of showHideObject and enables or disables it based on the showhide property. In this script, showhide is exposed as a public field so that you can provide the reference of the object from the Unity scene itself. Attach ShowHideBehaviour.cs as a component to the object tagged speakerDetails. Then drag and drop the speaker details object into the showHideObject slot. This takes the reference of the current speaker details object and hides it in the first instance.
Attach show-hide script to the object
By default, showhide is unchecked (set to false), so the object will be hidden from view. At this point, you must turn left_speaker_details_geo back on, as we are now handling its visibility in code. Now, in the Air Tapped event handler, we can enable the renderer to make the object visible. Add a new script by navigating from the context menu Create | C# Script, and name it SpeakerGestureHandler. Open the script file in Visual Studio. Similar to ShowHideBehaviour, the SpeakerGestureHandler class inherits from MonoBehaviour by default. In the next step, implement the InputClickHandler interface in the SpeakerGestureHandler class. This interface defines the OnInputClicked() method, which is invoked on click input. So, whenever you perform an Air Tap gesture, this method is invoked.
RaycastHit hit;
bool isTapped = false;

public void OnInputClicked(InputEventData eventData)
{
    hit = GazeManager.Instance.HitInfo;
    if (hit.transform.gameObject != null)
    {
        isTapped = !isTapped;
        var lftSpeaker = GameObject.FindWithTag("LeftSpeaker");
        var lftSpeakerDetails = GameObject.FindWithTag("speakerDetails");
        MeshRenderer render = lftSpeakerDetails.GetComponentInChildren<MeshRenderer>();
        if (isTapped)
        {
            lftSpeaker.transform.Translate(0.0f, -1.0f * Time.deltaTime, 0.0f);
            render.enabled = true;
        }
        else
        {
            lftSpeaker.transform.Translate(0.0f, 1.0f * Time.deltaTime, 0.0f);
            render.enabled = false;
        }
    }
}
When the gazed-at object is tapped, we find the game objects for both LeftSpeaker and speakerDetails by their tag names. For the LeftSpeaker object, we apply a translation based on whether it is tapped or not, just as we did for the lenses. For the speaker details object, we also take a reference to its MeshRenderer and toggle its visibility based on the Air Tap. Attach the SpeakerGestureHandler class to the leftSpeaker game object.
Air Tap on speaker – see it in action
The Air Tap action for the speaker is now done. Save the scene, then build and run the solution in the emulator once again.
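Before you test it, it helps to know what the gaze cursor you are about to use actually does: conceptually it is nothing more than a raycast from the camera into the scene. The toolkit used in this project ships its own gaze manager and cursor, so the following Unity C# snippet is only an illustrative sketch of the idea, not code from the book; the cursor field and the maxGazeDistance value are assumptions you would wire up yourself.
using UnityEngine;

// Illustrative only: place a small cursor object wherever the user is gazing.
public class SimpleGazeCursor : MonoBehaviour
{
    public GameObject cursor;           // a small quad or sphere assigned in the Inspector
    public float maxGazeDistance = 10f; // assumed gaze range in meters

    void Update()
    {
        Transform cam = Camera.main.transform;
        RaycastHit hit;
        if (Physics.Raycast(cam.position, cam.forward, out hit, maxGazeDistance))
        {
            // Snap the cursor onto the surface being gazed at and align it with the surface normal.
            cursor.transform.position = hit.point;
            cursor.transform.rotation = Quaternion.LookRotation(hit.normal);
        }
        else
        {
            // Nothing hit: float the cursor in front of the camera.
            cursor.transform.position = cam.position + cam.forward * maxGazeDistance;
            cursor.transform.rotation = cam.rotation;
        }
    }
}
This is essentially the information that GazeManager.Instance.HitInfo gave us in the gesture handler above: the point in the scene that the user is currently looking at.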
When you can see the cursor on the speaker, perform an Air Tap.
Default View and Air Tapped View
Real-time visualization through HoloLens
We have learned about the data ingress flow, where devices connect to the IoT Hub and Stream Analytics processes the stream of data and pushes it to storage. Now, in this section, let's discuss how this stored data will be consumed for data visualization within the holographic application.
Solution to consume data through services
Summary
In this article, we introduced HoloLens by exploring Digital Reality - Under the Hood, Holograms in reality, Sketching the scenarios, the 3D Modeling workflow, Adding Air Tap on speaker, and Real-time visualization through HoloLens.
Resources for Article: Further resources on this subject: Creating Controllers with Blueprints [article] Raspberry Pi LED Blueprints [article] Exploring and Interacting with Materials using Blueprints [article]

Tracking Objects in Videos

Packt
12 Aug 2015
13 min read
In this article by Salil Kapur and Nisarg Thakkar, authors of the book Mastering OpenCV Android Application Programming, we will look at the broader aspects of object tracking in videos. Object tracking is one of the most important applications of computer vision. It can be used for many applications, some of which are as follows:
Human-computer interaction: We might want to track the position of a person's finger and use its motion to control the cursor on our machines
Surveillance: Street cameras can capture pedestrians' motions that can be tracked to detect suspicious activities
Video stabilization and compression
Statistics in sports: By tracking a player's movement in a game of football, we can provide statistics such as distance travelled, heat maps, and so on
In this article, you will learn the following topics:
Optical flow
Image pyramids
(For more resources related to this topic, see here.)
Optical flow
Optical flow is an algorithm that detects the pattern of the motion of objects, or edges, between consecutive frames in a video. This motion may be caused by the motion of the object or the motion of the camera. Optical flow is a vector that depicts the motion of a point from the first frame to the second. The optical flow algorithm works under two basic assumptions:
The pixel intensities are almost constant between consecutive frames
The neighboring pixels have the same motion as the anchor pixel
We can represent the intensity of a pixel in any frame by f(x,y,t). Here, the parameter t represents the frame in a video. Let's assume that, in the next dt time, the pixel moves by (dx,dy). Since we have assumed that the intensity doesn't change in consecutive frames, we can say:
f(x,y,t) = f(x + dx, y + dy, t + dt)
Now we take the Taylor series expansion of the RHS in the preceding equation:
f(x + dx, y + dy, t + dt) = f(x,y,t) + f_x dx + f_y dy + f_t dt + higher-order terms
Cancelling the common term f(x,y,t) and ignoring the higher-order terms, we get:
f_x dx + f_y dy + f_t dt = 0
where f_x, f_y, and f_t are the partial derivatives of f with respect to x, y, and t. Dividing both sides of the equation by dt, and writing u = dx/dt and v = dy/dt, we get:
f_x u + f_y v + f_t = 0
This equation is called the optical flow equation. Rearranging the equation, we get:
f_x u + f_y v = -f_t
We can see that this represents the equation of a line in the (u,v) plane. However, with only one equation available and two unknowns, this problem is under-constrained at the moment.
The Horn and Schunck method
By taking into account our assumptions, we minimize the following energy over the whole image (α is a regularization weight):
E = ∫∫ [ (f_x u + f_y v + f_t)^2 + α^2 ( |∇u|^2 + |∇v|^2 ) ] dx dy
We can say that the first term will be small due to our assumption that the brightness is constant between consecutive frames, so the square of this term will be even smaller. The second term corresponds to the assumption that the neighboring pixels have similar motion to the anchor pixel. We need to minimize the preceding equation. For this, we differentiate it with respect to u and v. We get the following equations:
f_x (f_x u + f_y v + f_t) = α^2 ∇²u
f_y (f_x u + f_y v + f_t) = α^2 ∇²v
Here, ∇²u and ∇²v are the Laplacians of u and v respectively.
The Lucas and Kanade method
We start off with the optical flow equation that we derived earlier and notice that it is under-constrained, as it has one equation and two variables:
f_x u + f_y v = -f_t
To overcome this problem, we make use of the assumption that pixels in a 3x3 neighborhood have the same optical flow:
f_x(p_1) u + f_y(p_1) v = -f_t(p_1)
f_x(p_2) u + f_y(p_2) v = -f_t(p_2)
...
f_x(p_9) u + f_y(p_9) v = -f_t(p_9)
We can rewrite these equations in the form of matrices, as shown here:
A U = b
Where A is the matrix whose i-th row is [ f_x(p_i)  f_y(p_i) ], U = [u v]^T, and b is the vector whose i-th entry is -f_t(p_i). As we can see, A is a 9x2 matrix, U is a 2x1 matrix, and b is a 9x1 matrix. Ideally, to solve for U, we would just multiply both sides of the equation by the inverse of A. However, this is not possible, as we can only take the inverse of square matrices.
Thus, we try to transform A into a square matrix by multiplying both sides of the equation by A^T:
A^T A U = A^T b
Now A^T A is a square matrix of dimension 2x2, hence we can take its inverse:
U = (A^T A)^-1 A^T b
This method of multiplying by the transpose and then taking an inverse is called the pseudo-inverse. This equation can also be obtained by finding the minimum of the following equation, where the sum runs over the pixels p_i in the neighborhood:
E = Σ ( f_x(p_i) u + f_y(p_i) v + f_t(p_i) )^2
According to the optical flow equation and our assumptions, this value should be equal to zero. Since the neighborhood pixels do not have exactly the same motion as the anchor pixel, this value is very small. This method is called Least Square Error. To solve for the minimum, we differentiate this equation with respect to u and v, and equate it to zero. We get the following equations:
Σ f_x (f_x u + f_y v + f_t) = 0
Σ f_y (f_x u + f_y v + f_t) = 0
Now we have two equations and two variables, so this system of equations can be solved. We rewrite the preceding equations as follows:
u Σ f_x^2 + v Σ f_x f_y = -Σ f_x f_t
u Σ f_x f_y + v Σ f_y^2 = -Σ f_y f_t
So, by arranging these equations in the form of a matrix, we get the same equation as obtained earlier:
[ Σ f_x^2    Σ f_x f_y ] [u]   [ -Σ f_x f_t ]
[ Σ f_x f_y  Σ f_y^2   ] [v] = [ -Σ f_y f_t ]
Since the matrix on the left (which is A^T A) is a 2x2 matrix, it is possible to take its inverse. Solving for u and v, we get:
u = ( -Σ f_x f_t Σ f_y^2 + Σ f_y f_t Σ f_x f_y ) / ( Σ f_x^2 Σ f_y^2 - (Σ f_x f_y)^2 )
v = ( -Σ f_y f_t Σ f_x^2 + Σ f_x f_t Σ f_x f_y ) / ( Σ f_x^2 Σ f_y^2 - (Σ f_x f_y)^2 )
Now we have the values of f_x, f_y, and f_t at every pixel, so we can find the values of u and v for each pixel. When we implement this algorithm, it is observed that the optical flow is not very smooth near the edges of the objects. This is because the brightness constraint is not satisfied there. To overcome this situation, we use image pyramids.
Checking out the optical flow on Android
To see the optical flow in action on Android, we will create a grid of points over a video feed from the camera, and lines will then be drawn for each point to depict its motion on the video, superimposed on the grid point. Before we begin, we will set up our project to use OpenCV and obtain the feed from the camera. We will then process the frames to calculate the optical flow.
First, create a new project in Android Studio. We will set the activity name to MainActivity.java and the XML resource file as activity_main.xml.
Second, we will give the app permission to access the camera. In the AndroidManifest.xml file, add the following line to the manifest tag:
<uses-permission android:name="android.permission.CAMERA" />
Make sure that your activity tag for MainActivity contains the following line as an attribute:
android:screenOrientation="landscape"
Our activity_main.xml file will contain a simple JavaCameraView. This is a custom OpenCV-defined view that enables us to access the camera frames and process them as normal Mat objects. The XML code is shown here:
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="horizontal">

    <org.opencv.android.JavaCameraView
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:id="@+id/main_activity_surface_view" />

</LinearLayout>
Now, let's work on some Java code.
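As a warm-up, here is a minimal, standalone sketch of the core call we will rely on, Video.calcOpticalFlowPyrLK, used from plain desktop Java with the OpenCV 2.4 bindings (where image loading lives in Highgui). The image paths and the goodFeaturesToTrack parameters are illustrative assumptions rather than values from the project; only the OpenCV classes themselves are the ones this article uses.
import org.opencv.core.*;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.Imgproc;
import org.opencv.video.Video;

public class LKDemo {
    public static void main(String[] args) {
        // Load the native OpenCV library bundled with the Java SDK.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Two consecutive frames; these paths are placeholders for your own images.
        Mat prev = Highgui.imread("frame1.png", 0); // flag 0 loads the image as grayscale
        Mat next = Highgui.imread("frame2.png", 0);

        // Pick up to 100 strong corners in the first frame to track.
        MatOfPoint corners = new MatOfPoint();
        Imgproc.goodFeaturesToTrack(prev, corners, 100, 0.01, 10);
        MatOfPoint2f prevPts = new MatOfPoint2f(corners.toArray());
        // Seed the output points with the previous positions, as the article does later.
        MatOfPoint2f nextPts = new MatOfPoint2f(corners.toArray());

        // status[i] == 1 if the flow for point i was found; err[i] holds its tracking error.
        MatOfByte status = new MatOfByte();
        MatOfFloat err = new MatOfFloat();
        Video.calcOpticalFlowPyrLK(prev, next, prevPts, nextPts, status, err);

        // Print the displacement of each successfully tracked point.
        Point[] p0 = prevPts.toArray(), p1 = nextPts.toArray();
        byte[] ok = status.toArray();
        for (int i = 0; i < p0.length; i++) {
            if (ok[i] == 1) {
                System.out.printf("(%.1f, %.1f) -> (%.1f, %.1f)%n", p0[i].x, p0[i].y, p1[i].x, p1[i].y);
            }
        }
    }
}
With the core call clear, we can now build the same thing on top of the Android camera feed.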
First, we'll define some global variables that we will use later in the code: private static final String   TAG = "com.packtpub.masteringopencvandroid.chapter5.MainActivity";      private static final int       VIEW_MODE_KLT_TRACKER = 0;    private static final int       VIEW_MODE_OPTICAL_FLOW = 1;      private int                   mViewMode;    private Mat                   mRgba;    private Mat                   mIntermediateMat;    private Mat                   mGray;    private Mat                   mPrevGray;      MatOfPoint2f prevFeatures, nextFeatures;    MatOfPoint features;      MatOfByte status;    MatOfFloat err;      private MenuItem               mItemPreviewOpticalFlow, mItemPreviewKLT;      private CameraBridgeViewBase   mOpenCvCameraView; We will need to create a callback function for OpenCV, like we did earlier. In addition to the code we used earlier, we will also enable CameraView to capture frames for processing: private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {        @Override        public void onManagerConnected(int status) {            switch (status) {                case LoaderCallbackInterface.SUCCESS:                {                    Log.i(TAG, "OpenCV loaded successfully");                      mOpenCvCameraView.enableView();                } break;                default:                {                    super.onManagerConnected(status);                } break;            }        }    }; We will now check whether the OpenCV manager is installed on the phone, which contains the required libraries. In the onResume function, add the following line of code: OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_10,   this, mLoaderCallback); In the onCreate() function, add the following line before calling setContentView to prevent the screen from turning off, while using the app: getWindow().addFlags(WindowManager.LayoutParams. FLAG_KEEP_SCREEN_ON); We will now initialize our JavaCameraView object. Add the following lines after setContentView has been called: mOpenCvCameraView = (CameraBridgeViewBase)   findViewById(R.id.main_activity_surface_view); mOpenCvCameraView.setCvCameraViewListener(this); Notice that we called setCvCameraViewListener with the this parameter. For this, we need to make our activity implement the CvCameraViewListener2 interface. So, your class definition for the MainActivity class should look like the following code: public class MainActivity extends Activity   implements CvCameraViewListener2 We will add a menu to this activity to toggle between different examples in this article. Add the following lines to the onCreateOptionsMenu function: mItemPreviewKLT = menu.add("KLT Tracker"); mItemPreviewOpticalFlow = menu.add("Optical Flow"); We will now add some actions to the menu items. In the onOptionsItemSelected function, add the following lines: if (item == mItemPreviewOpticalFlow) {            mViewMode = VIEW_MODE_OPTICAL_FLOW;            resetVars();        } else if (item == mItemPreviewKLT){            mViewMode = VIEW_MODE_KLT_TRACKER;            resetVars();        }          return true; We used a resetVars function to reset all the Mat objects. 
It has been defined as follows: private void resetVars(){        mPrevGray = new Mat(mGray.rows(), mGray.cols(), CvType.CV_8UC1);        features = new MatOfPoint();        prevFeatures = new MatOfPoint2f();        nextFeatures = new MatOfPoint2f();        status = new MatOfByte();        err = new MatOfFloat();    } We will also add the code to make sure that the camera is released for use by other applications, whenever our application is suspended or killed. So, add the following snippet of code to the onPause and onDestroy functions: if (mOpenCvCameraView != null)            mOpenCvCameraView.disableView(); After the OpenCV camera has been started, the onCameraViewStarted function is called, which is where we will add all our object initializations: public void onCameraViewStarted(int width, int height) {        mRgba = new Mat(height, width, CvType.CV_8UC4);        mIntermediateMat = new Mat(height, width, CvType.CV_8UC4);        mGray = new Mat(height, width, CvType.CV_8UC1);        resetVars();    } Similarly, the onCameraViewStopped function is called when we stop capturing frames. Here we will release all the objects we created when the view was started: public void onCameraViewStopped() {        mRgba.release();        mGray.release();        mIntermediateMat.release();    } Now we will add the implementation to process each frame of the feed that we captured from the camera. OpenCV calls the onCameraFrame method for each frame, with the frame as a parameter. We will use this to process each frame. We will use the viewMode variable to distinguish between the optical flow and the KLT tracker, and have different case constructs for the two: public Mat onCameraFrame(CvCameraViewFrame inputFrame) {        final int viewMode = mViewMode;        switch (viewMode) {            case VIEW_MODE_OPTICAL_FLOW: We will use the gray()function to obtain the Mat object that contains the captured frame in a grayscale format. OpenCV also provides a similar function called rgba() to obtain a colored frame. Then we will check whether this is the first run. If this is the first run, we will create and fill up a features array that stores the position of all the points in a grid, where we will compute the optical flow:                mGray = inputFrame.gray();                if(features.toArray().length==0){                   int rowStep = 50, colStep = 100;                    int nRows = mGray.rows()/rowStep, nCols = mGray.cols()/colStep;                      Point points[] = new Point[nRows*nCols];                    for(int i=0; i<nRows; i++){                        for(int j=0; j<nCols; j++){                            points[i*nCols+j]=new Point(j*colStep, i*rowStep);                        }                    }                      features.fromArray(points);                      prevFeatures.fromList(features.toList());                    mPrevGray = mGray.clone();                    break;                } The mPrevGray object refers to the previous frame in a grayscale format. We copied the points to a prevFeatures object that we will use to calculate the optical flow and store the corresponding points in the next frame in nextFeatures. All of the computation is carried out in the calcOpticalFlowPyrLK OpenCV defined function. 
This function takes in the grayscale version of the previous frame, the current grayscale frame, an object that contains the feature points whose optical flow needs to be calculated, and an object that will store the position of the corresponding points in the current frame:                nextFeatures.fromArray(prevFeatures.toArray());                Video.calcOpticalFlowPyrLK(mPrevGray, mGray,                    prevFeatures, nextFeatures, status, err); Now, we have the position of the grid of points and their position in the next frame as well. So, we will now draw a line that depicts the motion of each point on the grid:                List<Point> prevList=features.toList(), nextList=nextFeatures.toList();                Scalar color = new Scalar(255);                  for(int i = 0; i<prevList.size(); i++){                    Core.line(mGray, prevList.get(i), nextList.get(i), color);                } Before the loop ends, we have to copy the current frame to mPrevGray so that we can calculate the optical flow in the subsequent frames:                mPrevGray = mGray.clone();                break; default: mViewMode = VIEW_MODE_OPTICAL_FLOW; After we end the switch case construct, we will return a Mat object. This is the image that will be displayed as an output to the user of the application. Here, since all our operations and processing were performed on the grayscale image, we will return this image: return mGray; So, this is all about optical flow. The result can be seen in the following image: Optical flow at various points in the camera feed Image pyramids Pyramids are multiple copies of the same images that differ in their sizes. They are represented as layers, as shown in the following figure. Each level in the pyramid is obtained by reducing the rows and columns by half. Thus, effectively, we make the image's size one quarter of its original size: Relative sizes of pyramids Pyramids intrinsically define reduce and expand as their two operations. Reduce refers to a reduction in the image's size, whereas expand refers to an increase in its size. We will use a convention that lower levels in a pyramid mean downsized images and higher levels mean upsized images. Gaussian pyramids In the reduce operation, the equation that we use to successively find levels in pyramids, while using a 5x5 sliding window, has been written as follows. Notice that the size of the image reduces to a quarter of its original size: The elements of the weight kernel, w, should add up to 1. We use a 5x5 Gaussian kernel for this task. This operation is similar to convolution with the exception that the resulting image doesn't have the same size as the original image. The following image shows you the reduce operation: The reduce operation The expand operation is the reverse process of reduce. We try to generate images of a higher size from images that belong to lower layers. Thus, the resulting image is blurred and is of a lower resolution. The equation we use to perform expansion is as follows: The weight kernel in this case, w, is the same as the one used to perform the reduce operation. The following image shows you the expand operation: The expand operation The weights are calculated using the Gaussian function to perform Gaussian blur. Summary In this article, we have seen how to detect a local and global motion in a video, and how we can track objects. We have also learned about Gaussian pyramids, and how they can be used to improve the performance of some computer vision tasks. 
Resources for Article: Further resources on this subject: New functionality in OpenCV 3.0 [article] Seeing a Heartbeat with a Motion Amplifying Camera [article] Camera Calibration [article]


Augmented Reality

Packt
22 Nov 2013
6 min read
(For more resources related to this topic, see here.)
A quick overview of AR concepts
As AR has become increasingly popular in the media over the last few years, several distorted notions of Augmented Reality have unfortunately evolved. Anything that is somehow related to the real world and involves some computing, such as standing in front of a shop and watching 3D models wear the latest fashions, has become AR. Augmented Reality emerged from research labs a few decades ago and different definitions of AR have been produced. As more and more research fields (for example, computer vision, computer graphics, human-computer interaction, medicine, humanities, and art) have investigated AR as a technology, application, or concept, multiple overlapping definitions now exist for AR. Rather than providing you with an exhaustive list of definitions, we will present some major concepts present in any AR application.
Sensory augmentation
The term Augmented Reality itself contains the notion of reality. Augmenting generally refers to influencing one of your human sensory systems, such as vision or hearing, with additional information. This information is generally defined as digital or virtual and will be produced by a computer. The technology currently uses displays to overlay and merge the physical information with the digital information. To augment your hearing, modified headphones or earphones equipped with microphones are able to mix sound from your surroundings in real time with sound generated by your computer.
Displays
The TV screen at home is the ideal device to perceive virtual content, streamed from broadcasts or played from your DVD. Unfortunately, most common TV screens are not able to capture the real world and augment it. An Augmented Reality display needs to simultaneously show the real and virtual worlds. One of the first display technologies for AR was produced by Ivan Sutherland in 1964 (named "The Sword of Damocles"). The system was rigidly mounted on the ceiling and used CRT screens and a transparent display to create the sensation of visually merging the real and virtual. Since then, we have seen different trends in AR displays, going from static to wearable and handheld displays. One of the major trends is the use of optical see-through (OST) technology. The idea is to still see the real world through a semitransparent screen and project some virtual content onto that screen. The merging of the real and virtual worlds does not happen on the computer screen, but directly on the retina of your eye, as depicted in the following figure:
The other major trend in AR displays is what we call video see-through (VST) technology. You can imagine perceiving the world not directly, but through a video on a monitor. The video image is mixed with some virtual content (as you would see in a movie) and sent back to some standard display, such as your desktop screen, your mobile phone, or the upcoming generation of head-mounted displays, as shown in the following figure:
In this book, we will work on Android-driven mobile phones and, therefore, discuss only VST systems; the video camera used will be the one on the back of your phone.
Registration in 3D
With a display (OST or VST) in your hands, you are already able to superimpose things onto your view of the real world, as you will see in TV advertisements with text banners at the bottom of the screen. However, any virtual content (such as text or images) will remain fixed in its position on the screen.
Since the superposition is completely static, your AR display will act as a head-up display (HUD), but it won't really be AR, as shown in the following figure:
Google Glass is an example of a HUD. While it uses a semitransparent screen like an OST display, the digital content remains in a static position.
AR needs to know more about the real and virtual content. It needs to know where things are in space (registration) and follow where they are moving (tracking). Registration is basically the idea of aligning virtual and real content in the same space. If you are into movies or sports, you will notice that 2D or 3D graphics are quite often superimposed onto scenes of the physical world. In ice hockey, the puck is often highlighted with a colored trail. In movies such as Walt Disney's TRON (1982 version), the real and virtual elements are seamlessly blended. However, AR differs from those effects as it is based on all of the following aspects (proposed by Ronald T. Azuma in 1997):
It's in 3D: In the olden days, some movies were edited manually to merge virtual visual effects with real content. A well-known example is Star Wars, where all the lightsaber effects were painted by hand, frame by frame, by hundreds of artists. Nowadays, more complex techniques support merging digital 3D content (such as characters or cars) with the video image (a technique called match moving). AR inherently always does this in a 3D space.
The registration happens in real time: In a movie, everything is prerecorded and generated in a studio; you just play the media. In AR, everything is in real time, so your application needs to merge reality and virtuality at every instant.
It's interactive: In a movie, you only look passively at the scene from where it has been shot. In AR, you can actively move around, forward, and backward, and turn your AR display, and you will still see an alignment between both worlds.
Interaction with the environment
Building a rich AR application requires interaction with the environment; otherwise you end up with pretty 3D graphics that can turn boring quite fast. AR interaction refers to selecting and manipulating digital and physical objects and navigating in the augmented scene. Rich AR applications allow you to use objects on your table to move some virtual characters, use your hands to select some floating virtual objects while walking down the street, or speak to a virtual agent appearing on your watch to arrange a meeting later in the day. We will look at how some of the standard mobile interaction techniques can also be applied to AR. We will also dig into specific techniques involving the manipulation of the real world.
Summary
Thus, we have learned about the core AR concepts in this article.
Resources for Article: Further resources on this subject: Marker-based Augmented Reality on iPhone or iPad [Article] Creating Dynamic UI with Android Fragments [Article] Introducing an Android platform [Article]


AR experience using Vuforia and features definition

Packt
01 Oct 2013
4 min read
(For more resources related to this topic, see here.)
What decides trackable score?
Trackables are the foundation of the AR experience using Vuforia. It is paramount to understand and create a suitable trackable for the experience to be robust and useful. The score attributed to the trackable in the target manager is our indication of how robustly the target image is going to perform, but what decides that score? The best way of understanding this is by understanding how Vuforia tracks images. The idea is simple: it looks for the positions of contrasting edges in clusters all around the image. Those edges are tracked, and based on the map of positions stored in the dataset, Vuforia can tell the relative position of the trackable in the real world and accordingly render the 3D content on top of it. This means that tracking the image is not so much a function of its color or of what is really in it, as of how many contrasting edges there are in the image and how well they are distributed across it. To better understand this, we can look at the edges that are recognizable in the image we have just uploaded. To do that, simply click on the Show Features link on the top left of the webpage. The following image shows the features in the Stones target image:
Once the Show Features link has been clicked, the target manager layers over the target image an overlay of every recognizable edge that it can track in a Vuforia image target. Notice that it is only tracking the dark edges between the stones and nothing else in the image. It is even tracking only the high-contrast edges between the stones, while ignoring some of the lighter ones. Also notice that the number of edges found in the image is large, and they are evenly distributed all around the image. This is a large part of what makes this image great for tracking. To contrast this image's result, let's try an image that will yield a 1-star score when tried on the target manager. The following image shows a landscape image added as a target image:
Before adding this image, we might intuitively think that it is suitable for tracking. It certainly has a lot of detail of a wide-angle landscape. But this image yielded a shocking 1-star result when added to the Target Manager. The main reason for the low score is the fact that the entire image is a shade of green, which greatly diminishes the contrasting edges in the image. If we click on the Show Features link at the top, we will be able to see what the target manager detected in the image. The following image shows the features in the mountain landscape image:
Immediately, we notice the considerably lower number of features detected in the image compared to the Stones one. It only detected the edges created by the shadows of the objects in the image, which is clearly not enough to award it any score above 1 star.
Features definition
To help us get a higher score, we must understand what features the target manager is looking for. We now know that the main thing the target manager is looking for in an image is edges, but what kind of edges specifically? To understand that, we need the definition of a feature. A feature is a sharp, spiked detail in the image, like the corner of an edge. Features must be high-contrast to be found, and they have to be distributed evenly across the image and in a random manner.
The following image shows several shapes and the features recognized in them:
In the shapes illustrated above, the yellow crosses represent the features recognizable in each shape. The representation is as follows:
Shape 1: It is a perfect circle without any corners at all, and as such no features are recognizable in it.
Shape 2: It has an edge to the left with two recognizable corners. That yields two recognizable features in the shape.
Shape 3: It is a square with four edges and four corners. This yields four recognizable features in the shape.
This means that any curved object yields few or no features at all. In particular, humans and animals make very poor trackables due to their curved nature.
Summary
Thus, in this article, we learned how an image is tracked and which features are recognizable in an image.
Resources for Article: Further resources on this subject: Interface Designing for Games in iOS [Article] Unity Game Development: Welcome to the 3D world [Article] Unity Game Development: Interactions (Part 1) [Article]


So, what is XenMobile?

Packt
08 Oct 2013
7 min read
(For more resources related to this topic, see here.) XenMobile is the next generation of mobile device management (MDM) from Citrix. XenMobile provides organizations the ability to automate most of the administrative tasks on mobile devices for both corporate and highly secured environments. In addition, it can also help organizations in managing bring your own device (BYOD) environments. XenMobile MDM allows administrators to configure role-based management for device provisioning and security for both corporate and employee-owned BYOD devices. When a user enrolls with their mobile device, an organization can provision policies and apps to devices automatically, blacklist or whitelist apps, detect and protect against jail broken devices, and wipe or selectively wipe a device that is lost, stolen, or out of compliance. A selective wipe means that only corporate or sensitive data is deleted, but personal data stays intact on a user's device. XenMobile supports every major mobile OS that is being used today, giving users the freedom to choose and use a device of their choice. Citrix has done an excellent job recognizing that organizations need MDM as a key component for a secure mobile ecosystem. Citrix XenMobile adds other features such as secure @WorkMail, @WorkWeb, and ShareFile integration, so that organizations can securely and safely access e-mail, the Internet, and exchange documents in a secure manner. There are other popular solutions on the market that have similar claims. Unlike other solutions, they rely on container-based solutions which limit native applications. Container-based solutions are applications that embed corporate data, e-mail, contact, and calendar data. Unfortunately, in many cases these solutions break the user experience by limiting how they can use native applications. XenMobile does this without compromising the user experience, allowing the secure applications to exist and share the same calendar, contact, and other key integration points on the mobile device. They were the only vendors at the time of writing this article, that had a single management platform which provided MDM features with secure storage, integrated VDI, multitenant, and application load balancing features, which we believe are some of the differentiators between XenMobile and its competitors. Citrix XenMobile MDM Architecture Mobile application stores (MAS) and mobile application management (MAN) are the concepts which you manage and secure access to individual applications on a mobile device, but leave the rest of the mobile device unmanaged. In some scenarios, people consider this as a great way of managing BYOD environments because organizations only need to worry about the applications and the data they manage. XenMobile has support for mobile application management and supports individual application policies, in addition to the holistic device policies found on other competing products. In this article, you will gain a deep understanding of XenMobile and its key features. You will learn how to install, configure, and use XenMobile in your environment to manage corporate and BYOD environments. We will then explore how to get started with XenMobile, configure policies and security, and how to deploy XenMobile in our organization. Next, we will look at some of the advanced features in XenMobile, how and when to use them, how to manage compliance breaches, and other top features. Finally, we will explore what do next when you have XenMobile configured. 
Welcome to the world of XenMobile MDM. Let's get started. Mobile device management (MDM) is a software solution that helps the organizations to manage, provision, and secure the lifecycle of a mobile device. MDM systems allow enterprises to mass deploy policies, settings, and applications to mobile devices. These features can include provisioning the mobile devices for Wi-Fi access, corporate e-mail, develop in-house applications, tracking locations, and remote wipe. Mobile device management solutions for enterprise corporations provide these capabilities over the air and for multiple mobile operating systems. Blackberry can be considered as the world's first real mobile enterprise solution with their product Blackberry Enterprise Server (BES). BES is still considered as a very capable and well-respected MDM solution. Blackberry devices were one of the first devices that provided organizations an accurate control of their users' mobile devices. The Blackberry device was essentially a dumb device until it was connected to a BES server. Once connected to a BES server, the Blackberry device would download policies, which would govern what features the device could use. This included everything from voice roaming, Internet usage, and even camera and storage policies. Because of its detailed configurability, Blackberry devices became the standard for most corporations wanting to use mobile devices and securing them. Apple and Google have made the smartphone a mainstream device and the tablet the computing platform of choice. People ended up waiting days in line to buy the latest gadget, and once they had it, you better believe they wanted to use it all the time. All of a sudden, organizations were getting hundreds of people wanting to connect their personal devices to the corporate network in order to work more efficiently with a device they enjoyed. The revolution of consumerization of IT had begun. In addition to Apple and Google devices, XenMobile supports Blackberry, Windows Phone, and other well-known mobile operating systems. Many vendors rushed to bring solutions to organizations to help them manage their Apple and mobile devices in enterprise architectures. Vendors tried to give organizations the same management and security that Blackberry had provided them with previous BES features. Over the years, Apple and Google both recognized the need for mobile management and started building mobile device management features in their operating system, so that MDM solutions could provide better granular management and security control for enterprise organizations. Today organizations are replacing older mobile devices in favor of Apple and Google devices. They feel comfortable in having these devices connected to corporate networks because they believe that they can manage them and secure them with MDM solutions. MDM solutions are the platform for organizations to ensure that mobile devices meet the technical, legal, and business compliance needed for their users to use devices of their choice, that are modern, and in many cases more productive than their legacy counterparts. MDM vendors have chosen to be container-based solutions, or device-based management. Container-based solutions provide segmentation of device data and allow organizations to completely ignore the rest of the device since all corporate data is self-contained. A good analogy for container-based solutions is Outlook Web Access. Outlook Web Access allows any computer to access Exchange email through a web browser. 
Computer software and applications are completely agnostic to corporate e-mail. Container-based solutions are similar, since they are indifferent to the mobile device data and other configuration components when being used to access an organization's resources, for example, e-mail on a mobile phone. Device-based management solutions allow organizations to manage device and application settings, but can only enforce security policies based on the features made available to them by device manufacturers. XenMobile is a device-based management solution, however, it has many of the features found in container-based solutions giving organizations the best of both worlds. Summary This article briefs about the functionalities of XenMobile and it covers XenMobile's features, gives an idea to the user regarding XenMobile. Resources for Article: Further resources on this subject: Creating mobile friendly themes [Article] Creating and configuring a basic mobile application [Article] Mobiles First – How and Why [Article]

Making subtle color shifts with curves

Packt
23 Sep 2013
7 min read
(For more resources related to this topic, see here.) When looking at a scene, we may pick up subtle cues from the way colors shift between different image regions. For example, outdoors on a clear day, shadows have a slightly blue tint due to the ambient light reflected from the blue sky, while highlights have a slightly yellow tint because they are in direct sunlight. When we see bluish shadows and yellowish highlights in a photograph, we may get a "warm and sunny" feeling. This effect may be natural, or it may be exaggerated by a filter. Curve filters are useful for this type of manipulation. A curve filter is parameterized by sets of control points. For example, there might be one set of control points for each color channel. Each control point is a pair of numbers representing the input and output values for the given channel. For example, the pair (128, 180) means that a value of 128 in the given color channel is brightened to become a value of 180. Values between the control points are interpolated along a curve (hence the name, curve filter). In Gimp, a curve with the control points (0, 0), (128, 180), and (255, 255) is visualized as shown in the following screenshot: The x axis shows the input values ranging from 0 to 255, while the y axis shows the output values over the same range. Besides showing the curve, the graph shows the line y = x (no change) for comparison. Curvilinear interpolation helps to ensure that color transitions are smooth, not abrupt. Thus, a curve filter makes it relatively easy to create subtle, natural-looking effects. We may define an RGB curve filter in pseudocode as follows: dst.b = funcB(src.b) where funcB interpolates pointsB dst.g = funcG(src.g) where funcG interpolates pointsG dst.r = funcR(src.r) where funcR interpolates pointsR For now, we will work with RGB and RGBA curve filters, and with channel values that range from 0 to 255. If we want such a curve filter to produce natural-looking results, we should use the following rules of thumb: Every set of control points should include (0, 0) and (255, 255). This way, black remains black, white remains white, and the image does not appear to have an overall tint. As the input value increases, the output value should always increase too. (Their relationship should be monotonically increasing.) This way, shadows remain shadows, highlights remain highlights, and the image does not appear to have inconsistent lighting or contrast. OpenCV does not provide curvilinear interpolation functions but the Apache Commons Math library does. (See Adding files to the project, earlier in this chapter, for instructions on setting up Apache Commons Math.) This library provides interfaces called UnivariateInterpolator and UnivariateFunction, which have implementations including LinearInterpolator, SplineInterpolator, LinearFunction, and PolynomialSplineFunction. (Splines are a type of curve.) UnivariateInterpolator has an instance method, interpolate(double[] xval, double[] yval), which takes arrays of input and output values for the control points and returns a UnivariateFunction object. The UnivariateFunction object can provide interpolated values via the method value(double x). API documentation for Apache Commons Math is available at http://commons.apache.org/proper/commons-math/apidocs/. These interpolation functions are computationally expensive. We do not want to run them again and again for every channel of every pixel and every frame. Fortunately, we do not have to. 
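To see how little code the interpolation itself takes, here is a short, self-contained sketch of the Commons Math API described above. It assumes Apache Commons Math 3 is on the classpath, and the control points are simply the Gimp example from earlier rather than values from the book.
import org.apache.commons.math3.analysis.UnivariateFunction;
import org.apache.commons.math3.analysis.interpolation.SplineInterpolator;
import org.apache.commons.math3.analysis.interpolation.UnivariateInterpolator;

public class CurveDemo {
    public static void main(String[] args) {
        // Control points (0, 0), (128, 180), and (255, 255), as in the Gimp screenshot.
        double[] xval = { 0, 128, 255 };
        double[] yval = { 0, 180, 255 };

        // Three or more points: use a spline; two points would call for LinearInterpolator.
        UnivariateInterpolator interpolator = new SplineInterpolator();
        UnivariateFunction curve = interpolator.interpolate(xval, yval);

        // Evaluate the curve at a few input levels; values between control points are interpolated.
        for (int level : new int[] { 0, 64, 128, 192, 255 }) {
            System.out.printf("%3d -> %6.1f%n", level, curve.value(level));
        }
    }
}
Running value() like this for every channel of every pixel of every frame would be wasteful, which is exactly where the lookup table comes in.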
There are only 256 possible input values per channel, so it is practical to precompute all possible output values and store them in a lookup table. For OpenCV's purposes, a lookup table is a Mat object whose indices represent input values and whose elements represent output values. The lookup can be performed using the static method Core.LUT(Mat src, Mat lut, Mat dst). In pseudocode, dst = lut[src]. The number of elements in lut should match the range of values in src, and the number of channels in lut should match the number of channels in src. Now, using Apache Commons Math and OpenCV, let's implement a curve filter for RGBA images with channel values ranging from 0 to 255. Open CurveFilter.java and write the following code: public class CurveFilter implements Filter { // The lookup table. private final Mat mLUT = new MatOfInt(); public CurveFilter( final double[] vValIn, final double[] vValOut, final double[] rValIn, final double[] rValOut, final double[] gValIn, final double[] gValOut, final double[] bValIn, final double[] bValOut) { // Create the interpolation functions. UnivariateFunction vFunc = newFunc(vValIn, vValOut); UnivariateFunction rFunc = newFunc(rValIn, rValOut); UnivariateFunction gFunc = newFunc(gValIn, gValOut); UnivariateFunction bFunc = newFunc(bValIn, bValOut); // Create and populate the lookup table. mLUT.create(256, 1, CvType.CV_8UC4); for (int i = 0; i < 256; i++) { final double v = vFunc.value(i); final double r = rFunc.value(v); final double g = gFunc.value(v); final double b = bFunc.value(v); mLUT.put(i, 0, r, g, b, i); // alpha is unchanged } } @Override public void apply(final Mat src, final Mat dst) { // Apply the lookup table. Core.LUT(src, mLUT, dst); } private UnivariateFunction newFunc(final double[] valIn, final double[] valOut) { UnivariateInterpolator interpolator; if (valIn.length > 2) { interpolator = new SplineInterpolator(); } else { interpolator = new LinearInterpolator(); } return interpolator.interpolate(valIn, valOut); } } CurveFilter stores the lookup table in a member variable. The constructor method populates the lookup table based on the four sets of control points that are taken as arguments. As well as a set of control points for each of the RGB channels, the constructor also takes a set of control points for the image's overall brightness, just for convenience. A helper method, newFunc, creates an appropriate interpolation function (linear or spline) for each set of control points. Then, we iterate over the possible input values and populate the lookup table. The apply method is a one-liner. It simply uses the precomputed lookup table with the given source and destination matrices. CurveFilter can be subclassed to define a filter with a specific set of control points. For example, let's open PortraCurveFilter.java and write the following code: public class PortraCurveFilter extends CurveFilter { public PortraCurveFilter() { super( new double[] { 0, 23, 157, 255 }, // vValIn new double[] { 0, 20, 173, 255 }, // vValOut new double[] { 0, 69, 213, 255 }, // rValIn new double[] { 0, 69, 218, 255 }, // rValOut new double[] { 0, 52, 189, 255 }, // gValIn new double[] { 0, 47, 196, 255 }, // gValOut new double[] { 0, 41, 231, 255 }, // bValIn new double[] { 0, 46, 228, 255 }); // bValOut } } This filter brightens the image, makes shadows cooler (more blue), and makes highlights warmer (more yellow). It produces flattering skin tones and tends to make things look sunnier and cleaner. 
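A quick usage sketch, assuming the project's Filter interface and an RGBA frame that is already available (for example, the Mat returned by inputFrame.rgba() in an OpenCV Android camera callback); the variable names here are illustrative, not from the book:
// rgbaIn is an 8-bit, 4-channel RGBA frame obtained elsewhere in the app.
Filter portra = new PortraCurveFilter();
Mat rgbaOut = new Mat();
portra.apply(rgbaIn, rgbaOut); // apply() runs the precomputed lookup table via Core.LUT
Applied to a typical outdoor shot, the result is the warm, clean look described above.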
It resembles the color characteristics of a brand of photo film called Kodak Portra, which was often used for portraits. The code for our other three channel mixing filters is similar. The ProviaCurveFilter class uses the following arguments for its control points: new double[] { 0, 255 }, // vValIn new double[] { 0, 255 }, // vValOut new double[] { 0, 59, 202, 255 }, // rValIn new double[] { 0, 54, 210, 255 }, // rValOut new double[] { 0, 27, 196, 255 }, // gValIn new double[] { 0, 21, 207, 255 }, // gValOut new double[] { 0, 35, 205, 255 }, // bValIn new double[] { 0, 25, 227, 255 }); // bValOut The effect is a strong, blue or greenish-blue tint in shadows and a strong, yellow or greenish-yellow tint in highlights. It resembles a film processing technique called cross-processing, which was sometimes used to produce grungy-looking photos of fashion models, pop stars, and so on. For a good discussion of how to emulate various brands of photo film, see Petteri Sulonen's blog at http://www.prime-junta.net/pont/How_to/100_Curves_and_Films/_Curves_and_films.html. The control points that we use are based on examples given in this article. Curve filters are a convenient tool for manipulating color and contrast, but they are limited insofar as each destination pixel is affected by only a single input pixel. Next, we will examine a more flexible family of filters, which enable each destination pixel to be affected by a neighborhood of input pixels. Summary In this article we learned how to make subtle color shifts with curves. Resources for Article: Further resources on this subject: Linking OpenCV to an iOS project [Article] A quick start – OpenCV fundamentals [Article] OpenCV: Image Processing using Morphological Filters [Article]


A Command-line Companion Called Artisan

Packt
06 May 2015
17 min read
In this article by Martin Bean, author of the book Laravel 5 Essentials, we will see how Laravel's command-line utility has far more capabilities and can be used to run and automate all sorts of tasks. In the next pages, you will learn how Artisan can help you: Inspect and interact with your application Enhance the overall performance of your application Write your own commands By the end of this tour of Artisan's capabilities, you will understand how it can become an indispensable companion in your projects. (For more resources related to this topic, see here.) Keeping up with the latest changes New features are constantly being added to Laravel. If a few days have passed since you first installed it, try running a composer update command from your terminal. You should see the latest versions of Laravel and its dependencies being downloaded. Since you are already in the terminal, finding out about the latest features is just one command away: $ php artisan changes This saves you from going online to find a change log or reading through a long history of commits on GitHub. It can also help you learn about features that you were not aware of. You can also find out which version of Laravel you are running by entering the following command: $ php artisan --version Laravel Framework version 5.0.16 All Artisan commands have to be run from your project's root directory. With the help of a short script such as Artisan Anywhere, available at https://github.com/antonioribeiro/artisan-anywhere, it is also possible to run Artisan from any subfolder in your project. Inspecting and interacting with your application With the route:list command, you can see at a glance which URLs your application will respond to, what their names are, and if any middleware has been registered to handle requests. This is probably the quickest way to get acquainted with a Laravel application that someone else has built. To display a table with all the routes, all you have to do is enter the following command: $ php artisan route:list In some applications, you might see /{v1}/{v2}/{v3}/{v4}/{v5} appended to particular routes. This is because the developer has registered a controller with implicit routing, and Laravel will try to match and pass up to five parameters to the controller. Fiddling with the internals When developing your application, you will sometimes need to run short, one-off commands to inspect the contents of your database, insert some data into it, or check the syntax and results of an Eloquent query. One way you could do this is by creating a temporary route with a closure that is going to trigger these actions. However, this is less than practical since it requires you to switch back and forth between your code editor and your web browser. To make these small changes easier, Artisan provides a command called tinker, which boots up the application and lets you interact with it. Just enter the following command: $ php artisan tinker This will start a Read-Eval-Print Loop (REPL) similar to what you get when running the php -a command, which starts an interactive shell. 
In this REPL, you can enter PHP commands in the context of the application and immediately see their output: > $cat = 'Garfield'; > AppCat::create(['name' => $cat,'date_of_birth' => new DateTime]); > echo AppCat::whereName($cat)->get(); [{"id":"4","name":"Garfield 2","date_of_birth":…}] > dd(Config::get('database.default')); Version 5 of Laravel leverages PsySH, a PHP-specific REPL that provides a more robust shell with support for keyboard shortcuts and history. Turning the engine off Whether it is because you are upgrading a database or waiting to push a fix for a critical bug to production, you may want to manually put your application on hold to avoid serving a broken page to your visitors. You can do this by entering the following command: $ php artisan down This will put your application into maintenance mode. You can determine what to display to users when they visit your application in this mode by editing the template file at resources/views/errors/503.blade.php (since maintenance mode sends an HTTP status code of 503 Service Unavailable to the client). To exit maintenance mode, simply run the following command: $ php artisan up Fine-tuning your application For every incoming request, Laravel has to load many different classes and this can slow down your application, particularly if you are not using a PHP accelerator such as APC, eAccelerator, or XCache. In order to reduce disk I/O and shave off precious milliseconds from each request, you can run the following command: $ php artisan optimize This will trim and merge many common classes into one file located inside storage/framework/compiled.php. The optimize command is something you could, for example, include in a deployment script. By default, Laravel will not compile your classes if app.debug is set to true. You can override this by adding the --force flag to the command but bear in mind that this will make your error messages less readable. Caching routes Apart from caching class maps to improve the response time of your application, you can also cache the routes of your application. This is something else you can include in your deployment process. The command? Simply enter the following: $ php artisan route:cache The advantage of caching routes is that your application will get a little faster as its routes will have been pre-compiled, instead of evaluating the URL and any matches routes on each request. However, as the routing process now refers to a cache file, any new routes added will not be parsed. You will need to re-cache them by running the route:cache command again. Therefore, this is not suitable during development, where routes might be changing frequently. Generators Laravel 5 ships with various commands to generate new files of different types. If you run $ php artisan list under the make namespace, you will find the following entries: make:command make:console make:controller make:event make:middleware make:migration make:model make:provider make:request These commands create a stub file in the appropriate location in your Laravel application containing boilerplate code ready for you to get started with. This saves keystrokes, creating these files from scratch. All of these commands require a name to be specified, as shown in the following command: $ php artisan make:model Cat This will create an Eloquent model class called Cat at app/Cat.php, as well as a corresponding migration to create a cats table. 
If you do not need to create a migration when making a model (for example, if the table already exists), then you can pass the --no-migration option as follows:

$ php artisan make:model Cat --no-migration

A new model class will look like this:

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Cat extends Model
{
    //
}

From here, you can define your own properties and methods. The other commands may have options. The best way to check is to append --help after the command name, as shown in the following command:

$ php artisan make:command --help

You will see that this command has --handler and --queued options to modify the class stub that is created.

Rolling out your own Artisan commands

At this stage, you might be thinking about writing your own bespoke commands. As you will see, this is surprisingly easy to do with Artisan. If you have used Symfony's Console component, you will be pleased to know that an Artisan command is simply an extension of it with a slightly more expressive syntax. This means that the various helpers that prompt for input, show a progress bar, or format a table are all available from within Artisan.

The command that we are going to write depends on the application we built. It will allow you to export all cat records present in the database as a CSV with or without a header line. If no output file is specified, the command will simply dump all records onto the screen in a formatted table.

Creating the command

There are only two required steps to create a command. Firstly, you need to create the command itself, and then you need to register it manually. We can use the make:console command we saw previously to create the command:

$ php artisan make:console ExportCatsCommand

This will generate a class inside app/Console/Commands. We will then need to register this command with the console kernel, located at app/Console/Kernel.php:

protected $commands = [
    'App\Console\Commands\ExportCatsCommand',
];

If you now run php artisan, you should see a new command called command:name. This command does not do anything yet. However, before we start writing the functionality, let's briefly look at how it works internally.

The anatomy of a command

Inside the newly created command class, you will find some code that has been generated for you. We will walk through the different properties and methods and see what their purpose is.

The first two properties are the name and description of the command. Nothing exciting here; this is only the information that will be shown in the command line when you run Artisan. The colon is used to namespace the commands, as shown here:

protected $name = 'export:cats';

protected $description = 'Export all cats';

Then you will find the fire method. This is the method that gets called when you run a particular command. From there, you can retrieve the arguments and options passed to the command, or run other methods:

public function fire()

Lastly, there are two methods that are responsible for defining the list of arguments or options that are passed to the command:

protected function getArguments() { /* Array of arguments */ }

protected function getOptions() { /* Array of options */ }

Each argument or option can have a name, a description, and a default value that can be mandatory or optional. Additionally, options can have a shortcut.
To understand the difference between arguments and options, consider the following command, where options are prefixed with two dashes:

$ command --option_one=value --option_two -v=1 argument_one argument_two

In this example, option_two does not have a value; it is only used as a flag. The -v flag only has one dash since it is a shortcut. In your console commands, you'll need to verify any option and argument values the user provides (for example, if you're expecting a number, to ensure the value passed is actually a numerical value).

Arguments can be retrieved with $this->argument($arg), and options—you guessed it—with $this->option($opt). If these methods do not receive any parameters, they simply return the full list of parameters. You refer to arguments and options via their names, that is, $this->argument('argument_name');.

Writing the command

We are going to start by writing a method that retrieves all cats from the database and returns them as an array:

protected function getCatsData()
{
    $cats = App\Cat::with('breed')->get();

    foreach ($cats as $cat) {
        $output[] = [
            $cat->name,
            $cat->date_of_birth,
            $cat->breed->name,
        ];
    }

    return $output;
}

There should not be anything new here. We could have used the toArray() method, which turns an Eloquent collection into an array, but we would have had to flatten the array and exclude certain fields.

Then we need to define what arguments and options our command expects:

protected function getArguments()
{
    return [
        ['file', InputArgument::OPTIONAL, 'The output file', null],
    ];
}

To specify additional arguments, just add an additional element to the array with the same parameters:

return [
    ['arg_one', InputArgument::OPTIONAL, 'Argument 1', null],
    ['arg_two', InputArgument::OPTIONAL, 'Argument 2', null],
];

The options are defined in a similar way:

protected function getOptions()
{
    return [
        ['headers', 'h', InputOption::VALUE_NONE, 'Display headers?', null],
    ];
}

The last parameter is the default value that the argument and option should have if it is not specified. In both cases, we want it to be null.

Lastly, we write the logic for the fire method:

public function fire()
{
    $output_path = $this->argument('file');

    $headers = ['Name', 'Date of Birth', 'Breed'];
    $rows = $this->getCatsData();

    if ($output_path) {
        $handle = fopen($output_path, 'w');

        if ($this->option('headers')) {
            fputcsv($handle, $headers);
        }

        foreach ($rows as $row) {
            fputcsv($handle, $row);
        }

        fclose($handle);
    } else {
        $table = $this->getHelperSet()->get('table');
        $table->setHeaders($headers)->setRows($rows);
        $table->render($this->getOutput());
    }
}

While the bulk of this method is relatively straightforward, there are a few helpers worth pointing out. The $this->info() method writes an informative message to the output, and if you need to show an error message in a different color, you can use the $this->error() method. Further down in the code, you will see some functions that are used to generate a table. As we mentioned previously, an Artisan command extends the Symfony console component and, therefore, inherits all of its helpers. These can be accessed with $this->getHelperSet(). Then it is only a matter of passing arrays for the header and rows of the table, and calling the render method.
To see the output of our command, we can run the following commands:

$ php artisan export:cats
$ php artisan export:cats --headers file.csv

Scheduling commands

Traditionally, if you wanted a command to run periodically (hourly, daily, weekly, and so on), then you would have to set up a Cron job in Linux-based environments, or a scheduled task in Windows environments. However, this comes with drawbacks. It requires the user to have server access and familiarity with creating such schedules. Also, in cloud-based environments, the application may not be hosted on a single machine, or the user might not have the privileges to create Cron jobs. The creators of Laravel saw this as something that could be improved, and have come up with an expressive way of scheduling Artisan tasks.

Your schedule is defined in app/Console/Kernel.php, and because the schedule is defined in this file, it has the added advantage of being present in source control. If you open the Kernel class file, you will see a method named schedule. Laravel ships with one task by default that serves as an example:

$schedule->command('inspire')->hourly();

If you've set up a Cron job in the past, you will see that this is instantly more readable than the crontab equivalent:

0 * * * * /path/to/artisan inspire

Specifying the task in code also means we can easily change the console command to be run without having to update the crontab entry.

By default, scheduled commands will not run. To enable them, you need a single Cron job that runs the scheduler each and every minute:

* * * * * php /path/to/artisan schedule:run 1>> /dev/null 2>&1

When the scheduler is run, it will check for any jobs whose schedules match and then run them. If no schedules match, then no commands are run in that pass. You are free to schedule as many commands as you wish, and there are various methods to schedule them that are expressive and descriptive:

$schedule->command('foo')->everyFiveMinutes();
$schedule->command('bar')->everyTenMinutes();
$schedule->command('baz')->everyThirtyMinutes();
$schedule->command('qux')->daily();

You can also specify a time for a scheduled command to run:

$schedule->command('foo')->dailyAt('21:00');

Alternatively, you can create less frequent scheduled commands:

$schedule->command('foo')->weekly();
$schedule->command('bar')->weeklyOn(1, '21:00');

The first parameter in the second example is the day, with 0 representing Sunday and 1 through 6 representing Monday through Saturday, and the second parameter is the time, again specified in 24-hour format. You can also explicitly specify the day on which to run a scheduled command:

$schedule->command('foo')->mondays();
$schedule->command('foo')->tuesdays();
$schedule->command('foo')->wednesdays();
// And so on
$schedule->command('foo')->weekdays();

If you have a potentially long-running command, then you can prevent it from overlapping:

$schedule->command('foo')->everyFiveMinutes()
         ->withoutOverlapping();

Along with the schedule, you can also specify the environment under which a scheduled command should run, as shown in the following command:

$schedule->command('foo')->weekly()->environments('production');

You could use this to run commands in a production environment, for example, archiving data or running a report periodically.

By default, scheduled commands won't execute if the maintenance mode is enabled.
This behavior can be easily overridden:

$schedule->command('foo')->weekly()->evenInMaintenanceMode();

Viewing the output of scheduled commands

For some scheduled commands, you probably want to view the output somehow, whether that is via e-mail, a log file on disk, or a callback to a pre-defined URL. All of these scenarios are possible in Laravel. To send the output of a job via e-mail, use the following command:

$schedule->command('foo')->weekly()
         ->emailOutputTo('[email protected]');

If you wish to write the output of a job to a file on disk, that is easy enough too:

$schedule->command('foo')->weekly()->sendOutputTo($filepath);

You can also ping a URL after a job is run:

$schedule->command('foo')->weekly()->thenPing($url);

This will execute a GET request to the specified URL, at which point you could send a message to your favorite chat client to notify you that the command has run. Finally, you can chain these methods to send multiple notifications:

$schedule->command('foo')->weekly()
         ->sendOutputTo($filepath)
         ->emailOutputTo('[email protected]');

However, note that you have to send the output to a file before it can be e-mailed if you wish to do both.

Summary

In this article, you have learned the different ways in which Artisan can assist you in the development, debugging, and deployment process. We have also seen how easy it is to build a custom Artisan command and adapt it to your own needs. If you are relatively new to the command line, you will have had a glimpse into the power of command-line utilities. If, on the other hand, you are a seasoned user of the command line and you have written scripts with other programming languages, you can surely appreciate the simplicity and expressiveness of Artisan.

Resources for Article:

Further resources on this subject:

Your First Application [article]
Creating and Using Composer Packages [article]
Eloquent relationships [article]
Building Mobile Games with Crafty.js and PhoneGap, Part 3
Robi Sen
13 Jul 2015
In this post, we will build upon what we learned in our previous series on using Crafty.js, HTML5, JavaScript, and PhoneGap to make a mobile game. We will add a trigger to call back our monster AI, letting the monsters know it's their turn to move, so each time the player moves the monsters will also move.

Structuring our code with components

Before we begin updating our game, let's clean up our code a little bit. First, let's abstract out some of the code into separate files so it's easier to work with, read, edit, and develop our project. Let's make a couple of components. The first one will be called PlayerControls.js and will tell the system what direction to move an entity when we touch the screen. To do this, first create a new directory under your project's www directory called src. Then create a new directory in src called com. In that folder, create a new file called PlayerControls.js. Now open the file and make it look like the following:

// create a simple object that describes player movement
Crafty.c("PlayerControls", {
  init: function() {
    // lets now make the hero move wherever we touch
    Crafty.addEvent(this, Crafty.stage.elem, 'mousedown', function(e) {
      // lets simulate an 8-way controller or old school joystick
      // build out the direction of the mouse point.
      // Remember that y increases as it goes 'downward'
      if (e.clientX < (player.x + Crafty.viewport.x) && (e.clientX - (player.x + Crafty.viewport.x)) < 32) {
        myx = -1;
      } else if (e.clientX > (player.x + Crafty.viewport.x) && (e.clientX - (player.x + Crafty.viewport.x)) > 32) {
        myx = 1;
      } else {
        myx = 0;
      }
      if (e.clientY < (player.y + Crafty.viewport.y) && (e.clientY - (player.y + Crafty.viewport.y)) < 32) {
        myy = -1;
      } else if (e.clientY > (player.y + Crafty.viewport.y) && (e.clientY - (player.y + Crafty.viewport.y)) > 32) {
        myy = 1;
      } else {
        myy = 0;
      }
      // let the game know we moved and where to
      var direction = [myx, myy];
      this.trigger('Slide', direction);
      Crafty.trigger('Turn');
      lastclientY = e.clientY;
      lastclientX = e.clientX;
      console.log("my x direction is " + myx + " my y direction is " + myy);
      console.log('mousedown at (' + e.clientX + ', ' + e.clientY + ')');
    });
  }
});

You will note that this is very similar to the PlayerControls component in our current index.html. One of the major differences is that we are now decoupling the actual movement of our player from the mouse/touch controls. If you look at the new PlayerControls component, you will notice that all it does is set the X and Y direction, relative to a player object, and pass those directions off to a new component we are going to make called Slide. You will also see that we are using Crafty.trigger to trigger an event called Turn. Later in our code, we are going to detect that trigger to activate a callback to our monster AI, letting the monsters know it's their turn to move, so each time the player moves the monsters will also move. So let's create a new component called Slide.js; it will go in your com directory with PlayerControls.js.
Now open the file and make it look like this:

Crafty.c("Slide", {
  init: function() {
    this._stepFrames = 5;
    this._tileSize = 32;
    this._moving = false;

    this._vx = 0; this._destX = 0; this._sourceX = 0;
    this._vy = 0; this._destY = 0; this._sourceY = 0;
    this._frames = 0;

    this.bind("Slide", function(direction) {
      // Don't continue to slide if we're already moving
      if (this._moving) return false;
      this._moving = true;

      // Let's keep our pre-movement location
      this._sourceX = this.x;
      this._sourceY = this.y;

      // Figure out our destination
      this._destX = this.x + direction[0] * 32;
      this._destY = this.y + direction[1] * 32;

      // Get our x and y velocity
      this._vx = direction[0] * this._tileSize / this._stepFrames;
      this._vy = direction[1] * this._tileSize / this._stepFrames;

      this._frames = this._stepFrames;
    }).bind("EnterFrame", function(e) {
      if (!this._moving) return false;

      // If we're moving, update our position by our per-frame velocity
      this.x += this._vx;
      this.y += this._vy;
      this._frames--;

      if (this._frames == 0) {
        // If we've run out of frames,
        // move us to our destination to avoid rounding errors.
        this._moving = false;
        this.x = this._destX;
        this.y = this._destY;
      }
      this.trigger('Moved', {x: this.x, y: this.y});
    });
  },
  slideFrames: function(frames) {
    this._stepFrames = frames;
  },
  // A function we'll use later to
  // cancel our movement and send us back to where we started
  cancelSlide: function() {
    this.x = this._sourceX;
    this.y = this._sourceY;
    this._moving = false;
  }
});

As you can see, it is pretty straightforward. Basically, it handles movement by accepting a direction as -1, 0, or 1 on the X and Y axes. It then moves any entity that inherits its behavior some number of pixels; in this case 32, which is the height and width of our floor tiles.

Now let's do a little more housekeeping. Let's pull out the sprite code into a Sprites.js file and the asset loading code into a Loading.js file. So create two new files, Sprites.js and Loading.js respectively, in your com directory and edit them to look like the following two listings.

Sprites.js:

Crafty.sprite(32, "assets/dungeon.png", {
  floor: [0,1],
  wall1: [18,0],
  stairs: [3,1]
});

// This will create entities called hero and goblin1
Crafty.sprite(32, "assets/characters.png", {
  hero: [11,4],
  goblin1: [8,14]
});

Loading.js:

Crafty.scene("loading", function() {
  //console.log("pants")
  Crafty.load(["assets/dungeon.png", "assets/characters.png"], function() {
    Crafty.scene("main"); // Run the main scene
    console.log("Done loading");
  }, function(e) {
    // progress
  }, function(e) {
    // something is wrong, error loading
    console.log("Error, failed to load", e)
  });
});

Okay, now that is done, let's redo our index.html to make it cleaner:

<!DOCTYPE html>
<html>
<head></head>
<body>
<div id="game"></div>
<script type="text/javascript" src="lib/crafty.js"></script>
<script type="text/javascript" src="src/com/loading.js"></script>
<script type="text/javascript" src="src/com/sprites.js"></script>
<script type="text/javascript" src="src/com/Slide.js"></script>
<script type="text/javascript" src="src/com/PlayerControls.js"></script>
<script>
// Initialize Crafty
Crafty.init(500, 320);
// Background
Crafty.background('green');
Crafty.scene("main", function() {
  Crafty.background("#FFF");
  player = Crafty.e("2D, Canvas, PlayerControls, Slide, hero")
    .attr({x: 0, y: 0});
  goblin = Crafty.e("2D, Canvas, goblin1")
    .attr({x: 50, y: 50});
});
Crafty.scene("loading");
</script>
</body>
</html>

Go ahead, save the file and load it in your browser.
Everything should work as expected, but now our index file and directory are a lot cleaner and easier to work with. Now that this is done, let's get to giving the monster the ability to move on its own.

Monster fun – moving game agents

We are up to the point that we are able to move the hero of our game around the game screen with mouse clicks/touches. Now we need to make things difficult for our hero and make the monster move as well. To do this, we need to add a very simple component that will move the monster around after our hero moves. Create a file called AI.js in the com directory. Now open it and edit it to look like this:

Crafty.c("AI", {
  _directions: [[0,-1], [0,1], [1,0], [-1,0]],
  init: function() {
    this._moveChance = 0.5;
    this.requires('Slide');
    this.bind("Turn", function() {
      if (Math.random() < this._moveChance) {
        this.trigger("Slide", this._randomDirection());
      }
    });
  },
  moveChance: function(val) {
    this._moveChance = val;
  },
  _randomDirection: function() {
    return this._directions[Math.floor(Math.random() * 4)];
  }
});

As you can see, all AI.js does when its Turn event fires is feed a random direction to Slide. Now we will add the AI component to the goblin entity. To do this, edit your index.html to look like the following:

<!DOCTYPE html>
<html>
<head></head>
<body>
<div id="game"></div>
<script type="text/javascript" src="lib/crafty.js"></script>
<script type="text/javascript" src="src/com/loading.js"></script>
<script type="text/javascript" src="src/com/sprites.js"></script>
<script type="text/javascript" src="src/com/Slide.js"></script>
<script type="text/javascript" src="src/com/AI.js"></script>
<script type="text/javascript" src="src/com/PlayerControls.js"></script>
<script>
Crafty.init(500, 320);
Crafty.background('green');
Crafty.scene("main", function() {
  Crafty.background("#FFF");
  player = Crafty.e("2D, Canvas, PlayerControls, Slide, hero")
    .attr({x: 0, y: 0});
  goblin = Crafty.e("2D, Canvas, AI, Slide, goblin1")
    .attr({x: 50, y: 50});
});
Crafty.scene("loading");
</script>
</body>
</html>

Here you will note we added the Slide and AI components to the goblin entity. Now save the file and load it. When you move your hero, you should see the goblin move as well, as in this screenshot:

Summary

While this was a long post, you have learned a lot. Now that we have the hero and goblin moving in our game, we will build a dungeon in part 4, enable our hero to fight goblins, and create a PhoneGap build for our game.

About the author

Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus year career in technology, engineering, and research has led him to work on cutting edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as UnderArmour, Sony, CISCO, IBM, and many others to help build out new products and services. Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.
Sending and Syncing Data
Packt
10 Aug 2015
This article, by Steven F. Daniel, author of the book Android Wearable Programming, will provide you with the background and understanding of how you can effectively build applications that communicate between the Android handheld device and the Android wearable. Android Wear comes with a number of APIs that will help to make communicating between the handheld and the wearable a breeze. We will be learning the differences between using MessageAPI, which is sometimes referred to as a "fire and forget" type of message; DataLayerAPI, which supports syncing of data between a handheld and a wearable; and NodeAPI, which handles events related to each of the local and connected device nodes.

(For more resources related to this topic, see here.)

Creating a wearable send and receive application

In this section, we will take a look at how to create an Android wearable application that will send an image and a message, and display this on our wearable device. In the next sections, we will take a look at the steps required to send data to the Android wearable using DataAPI, NodeAPI, and MessageAPI. Firstly, create a new project in Android Studio by following these simple steps:

Launch Android Studio, and then click on the File | New Project menu option.
Next, enter SendReceiveData for the Application name field.
Then, provide the name for the Company Domain field.
Now, choose Project location and select where you would like to save your application code.
Click on the Next button to proceed to the next step.

Next, we will need to specify the form factors for our phone/tablet and Android Wear devices on which our application will run. On this screen, we will need to choose the minimum SDK version for our phone/tablet and Android Wear:

Click on the Phone and Tablet option and choose API 19: Android 4.4 (KitKat) for Minimum SDK.
Click on the Wear option and choose API 21: Android 5.0 (Lollipop) for Minimum SDK.
Click on the Next button to proceed to the next step.

In our next step, we will need to add Blank Activity to our application project for the mobile section of our app. From the Add an activity to Mobile screen, choose the Add Blank Activity option from the list of activities shown and click on the Next button to proceed to the next step.

Next, we need to customize the properties for Blank Activity so that it can be used by our application. Here we will need to specify the name of our activity, layout information, title, and menu resource file. From the Customize the Activity screen, enter MobileActivity for Activity Name and click on the Next button to proceed to the next step in the wizard.

In the next step, we will need to add Blank Activity to our application project for the Android wearable section of our app. From the Add an activity to Wear screen, choose the Blank Wear Activity option from the list of activities shown and click on the Next button to proceed to the next step.

Next, we need to customize the properties for Blank Wear Activity so that our Android wearable can use it. Here we will need to specify the name of our activity and the layout information. From the Customize the Activity screen, enter WearActivity for Activity Name and click on the Next button to proceed to the next step in the wizard.

Finally, click on the Finish button and the wizard will generate your project, and after a few moments, the Android Studio window will appear with your project displayed.
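Before moving on, it is worth sketching what the communication itself can look like once the project skeleton is in place. The following Java snippet is only a rough, hedged illustration and is not code from the book: the class name MessageSender, the message path "/send-receive-data/demo", and the timeout value are made up for this example, and it assumes the Google Play services wearable dependency is on the classpath. It uses NodeApi to discover connected nodes and MessageApi to send a "fire and forget" message to each one; because the calls block, it must run off the main thread.

import android.content.Context;
import android.util.Log;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.MessageApi;
import com.google.android.gms.wearable.Node;
import com.google.android.gms.wearable.NodeApi;
import com.google.android.gms.wearable.Wearable;

import java.util.concurrent.TimeUnit;

public class MessageSender {

    private static final String TAG = "MessageSender";
    // Hypothetical message path used only for this illustration.
    private static final String DEMO_PATH = "/send-receive-data/demo";

    // Call this from a background thread; blockingConnect() and await() block.
    public static void sendToAllNodes(Context context, byte[] payload) {
        GoogleApiClient client = new GoogleApiClient.Builder(context)
                .addApi(Wearable.API)
                .build();

        // Wait briefly for the connection to Google Play services.
        if (!client.blockingConnect(10, TimeUnit.SECONDS).isSuccess()) {
            Log.w(TAG, "Could not connect to Google Play services");
            return;
        }

        // NodeApi: discover every wearable node currently paired and connected.
        NodeApi.GetConnectedNodesResult nodes =
                Wearable.NodeApi.getConnectedNodes(client).await();

        for (Node node : nodes.getNodes()) {
            // MessageApi: "fire and forget" delivery to one specific node.
            MessageApi.SendMessageResult result = Wearable.MessageApi.sendMessage(
                    client, node.getId(), DEMO_PATH, payload).await();

            if (!result.getStatus().isSuccess()) {
                Log.w(TAG, "Failed to send message to " + node.getDisplayName());
            }
        }

        client.disconnect();
    }
}

On the wearable side, the message would typically be picked up by a WearableListenerService (or a listener registered in the activity) in its onMessageReceived() callback; the book covers that receiving path in detail.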
Summary

In this article, we learned about three new APIs, DataAPI, NodeAPI, and MessageAPI, and how we can use them and their associated methods to transmit information between the handheld mobile and the wearable. If, for whatever reason, the connected wearable node gets disconnected from the paired handheld device, the DataApi class is smart enough to try sending again automatically once the connection is reestablished.

Resources for Article:

Further resources on this subject:

Speeding up Gradle builds for Android [article]
Saying Hello to Unity and Android [article]
Testing with the Android SDK [article]
How Android app developers can convert iPhone apps
Michael Kordvani
02 May 2018
Businesses like to cast their nets as wide as possible in search of new customers. This type of broad outreach requires designing mobile apps for both iOS and Android phones. Although iPhones are very popular in the U.S. market, if you want to step up and attract global customers, you need to expand your product to the Android platform. Most Android app developers will face this challenge at some point: how to create an Android app from an iPhone app, and make it at least as successful as the primary product.

It's not surprising that developers tend to concentrate on building up their skills for one platform in particular. Both platforms have their challenges. Spreading yourself too thin in an effort to meet the requirements for both phones can mean that the user experience suffers. But the challenges can be overcome. iPhone apps are great, but limited in terms of market size. Android apps are the biggest market players, and companies often ask the same team of Android app developers to take on both projects at once. With a few tips and tricks to help you along, you'll be able to make your project a success.

What are the benefits of redesigning an iPhone app into an Android app?

Before converting your iPhone app into an Android app, it's important to keep in mind that enlarging the customer base is not the only benefit. You will also get the chance to add more features, diversify money-making methods with new options for in-app purchasing and advertisements, as well as get a full product overhaul at only a fraction of the cost of starting from scratch. These are the obvious reasons why companies usually don't overlook the possibility of iPhone app conversion. When a company has a team of iPhone and Android app developers and can save on new projects, it often pays off handsomely in the end.

Hiring a product manager to oversee the process is not a bad idea if you have the budget for it. A manager can help the team understand the similar elements of these otherwise different platforms. Despite the UX/UI design differences in terms of navigation, icons and app architecture, you still need to code with customer requirements in mind. Also, before you start redesigning the product, keep in mind that the business model may need to be tweaked and the store submission process is quite different.

UX and UI design differences between Android and iOS

The platforms have significant differences in terms of design. You cannot simply copy the elements from an iPhone to an Android phone environment, at least not in a clear-cut way. You must design with the already-set styles in mind. For example, Android apps use a specific icon library, which is different from the one used for iOS. Android app developers and designers work with a wider color palette, varying in nuances and shades, while iPhone apps are more standardized. Roboto is the preferred Android font, and San Francisco is its iPhone counterpart. The hierarchical typography is not the same either.

Because of the variations in the navigation tools, the user interface looks very different on Android phones. iPhone navigation is concentrated at the bottom; Android phones use more side and top navigation bars. Don't forget about the thumb issue. iPhones are generally built around an average-sized thumb. With Android, you have a bit more leeway to accommodate all thumb sizes. Even if you focus only on these design basics, the user interface on an iPhone will still look different than the one on an Android smartphone.
If we factor in button styles (flat on iOS vs. flat/floating on Android), grids and action sheets, as well as dropdown menus, things get even more complex. This guide offers a helpful comparison overview you can use when converting iPhone apps into Android apps.

Sizing and resolution on Android phones also present their own challenges. Designers need to account for many different Android screen resolutions, which is already significantly more challenging than designing for the unified iPhone layout. iPhone app developers use points, and Android app developers use pixels when measuring screen objects, such as fonts and icons. The pt/px ratio is 0.75; a small worked example of this arithmetic appears at the end of this article. At the same time, clients need some degree of standardization for brand recognition. They don't want to confuse users with two apps that don't appear to be from the same company.

Further considerations Android app developers need to make about code and external libraries

It can be challenging to find a team of Android app developers who also know how to code in iOS-friendly languages. However, it may be more efficient and cost-effective than working with two different teams. Programming languages that work for both Android apps and iOS apps are Kotlin and C-languages. Nonetheless, both platforms have widely preferred languages: Swift for iPhone apps and Java for Android apps. Android app developers should also check for compatibility before using external libraries and tools in the conversion project.

Although challenging, converting iPhone apps for the Android OS platform is far from impossible. After all, people do it every day, as dual-platform apps are the rule rather than the exception. All you need to do to make a great product is to understand the key differences and make the necessary adjustments.

Build your first Android app with Kotlin
How to Secure and Deploy an Android App
Why are Android developers switching from Java to Kotlin?
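As the worked example promised above, the small Java helper below applies the 0.75 pt/px ratio quoted in this article, together with the standard Android DisplayMetrics density scale. It is only a sketch under those assumptions: the class and method names are hypothetical, and a real project should validate the conversion against its own target screen densities rather than a single fixed ratio.

import android.content.Context;
import android.util.DisplayMetrics;
import android.util.TypedValue;

// Hypothetical helper for translating iOS design measurements to Android ones.
public final class DesignUnitConverter {

    // The article quotes a pt/px ratio of 0.75, i.e. 1 px is roughly 0.75 pt.
    private static final float PT_PER_PX = 0.75f;

    private DesignUnitConverter() {
        // Static utility class; no instances.
    }

    // Convert an iOS point measurement to raw pixels using the ratio above.
    public static float iosPointsToPixels(float points) {
        return points / PT_PER_PX;
    }

    // Convert density-independent pixels (dp) to physical pixels on the current
    // device, using the framework's own density scaling.
    public static float dpToPixels(Context context, float dp) {
        DisplayMetrics metrics = context.getResources().getDisplayMetrics();
        return TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, dp, metrics);
    }
}

For instance, under this ratio a 17 pt iOS title maps to roughly 17 / 0.75 ≈ 22.7 px, which you would then round and express in dp or sp in the Android layout.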