
How-To Tutorials - Mobile

213 Articles

Using Location Data with PhoneGap

Packt
30 Oct 2013
11 min read
An introduction to Geolocation

The term geolocation refers to the process of identifying the real-world geographic location of an object. Devices that can detect the user's position are becoming more common every day, and we are now used to receiving content based on our location (geotargeting). Using the Global Positioning System (GPS), a space-based satellite navigation system that provides location and time information consistently across the globe, you can now get the accurate location of a device.

During the early 1970s, the US military created Navstar, a defense navigation satellite system. Navstar created the basis for the GPS infrastructure used today by billions of devices. Since 1978, more than 60 GPS satellites have been successfully placed in orbit around the Earth (refer to http://en.wikipedia.org/wiki/List_of_GPS_satellite_launches for a detailed report about past and planned launches).

The location of a device is represented by a point comprising two components: latitude and longitude. Modern devices can determine this location information in several ways, including:

Global Positioning System (GPS)
IP address
GSM/CDMA cell IDs
Wi-Fi and Bluetooth MAC addresses

Each approach delivers the same information; what changes is the accuracy of the device's position. The GPS satellites continuously transmit information that a receiver can parse, for example, the general health of the GPS array, roughly where all of the satellites are in orbit, information on the precise orbit or path of the transmitting satellite, and the time of the transmission. The receiver calculates its own position by timing the signals sent by any of the satellites in the array that are visible. The process of measuring the distance from a point to a group of satellites to locate a position is known as trilateration. The distance is determined using the speed of light as a constant along with the time at which the signal left the satellite.

The emerging trend in mobile development is GPS-based "people discovery" apps such as Highlight, Sonar, Banjo, and Foursquare. Each app has different features and has been built for different purposes, but all of them share the same killer feature: using location as a piece of metadata to filter information according to the user's needs.

The PhoneGap Geolocation API

The Geolocation API is not part of the HTML5 specification, but it is tightly integrated with mobile development. The PhoneGap Geolocation API and the W3C Geolocation API mirror each other; both define the same methods and arguments. Several devices already implement the W3C Geolocation API; on those devices you can use the native support instead of the PhoneGap API. As per the specification, the user has to explicitly allow the website or the app to use the device's current position.

The Geolocation API is exposed through the geolocation object, a child of the navigator object, and consists of the following three methods:

getCurrentPosition() returns the device position.
watchPosition() watches for changes in the device position.
clearWatch() stops the watcher for the device's position changes.

The watchPosition() and clearWatch() methods work the same way as the setInterval() and clearInterval() methods: the first one returns an identifier that is then passed to the second one.
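For example (a minimal sketch; onSuccess and onFailure are placeholder handlers like the ones defined later in this article):

// watchPosition() returns an identifier...
var watchId = navigator.geolocation.watchPosition(onSuccess, onFailure);

// ...which is later passed to clearWatch() to stop receiving position updates.
navigator.geolocation.clearWatch(watchId);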
The getCurrentPosition() and watchPosition() methods mirror each other and take the same arguments: a success callback, a failure callback, and an optional configuration object. The configuration object is used to specify the maximum age of a cached value of the device's position, to set a timeout after which the method will fail, and to specify whether the application requires only accurate results.

var options = {maximumAge: 3000, timeout: 5000, enableHighAccuracy: true};
navigator.geolocation.watchPosition(onSuccess, onFailure, options);

Only the first argument is mandatory, but it's recommended to always handle the failure case as well. The success handler receives a Position object as its argument. Through its properties you can read the device's coordinates and the creation timestamp of the object that stores them.

function onSuccess(position) {
    console.log('Coordinates: ' + position.coords);
    console.log('Timestamp: ' + position.timestamp);
}

The coords property of the Position object contains a Coordinates object; its most important properties are longitude and latitude. Using those properties it's possible to start integrating positioning information as relevant metadata in your app. The failure handler receives a PositionError object as its argument. Using the code and message properties of this object you can gracefully handle every possible error.

function onError(error) {
    console.log('message: ' + error.message);
    console.log('code: ' + error.code);
}

The message property returns a detailed description of the error; the code property returns an integer whose possible values are represented by the following pseudo constants:

PositionError.PERMISSION_DENIED, the user denied the app permission to use the device's current position
PositionError.POSITION_UNAVAILABLE, the position of the device cannot be determined

If you want to recover the last available position when the POSITION_UNAVAILABLE error is returned, you have to write a custom plugin that uses the platform-specific API. Android and iOS both expose this feature. You can find a detailed example at http://stackoverflow.com/questions/10897081/retrieving-last-known-geolocation-phonegap.

PositionError.TIMEOUT, the specified timeout elapsed before the implementation could successfully acquire a new Position object

JavaScript doesn't support constants the way Java and other object-oriented programming languages do. By "pseudo constants", I mean values that should never change in a JavaScript app.

One of the most common tasks to perform with the device position information is to show the device location on a map. You can quickly perform this task by integrating Google Maps into your app; the only requirement is a valid API key. To get the key, use the following steps:

Visit the APIs console at https://code.google.com/apis/console and log in with your Google account.
Click the Services link on the left-hand menu.
Activate the Google Maps API v3 service.

Time for action – showing device position with Google Maps

Get ready to add a map renderer to the PhoneGap default app template. Refer to the following steps:

Open the command-line tool and create a new PhoneGap project named MapSample.

$ cordova create ~/the/path/to/your/source/mapsample com.gnstudio.pg.MapSample MapSample

Add the Geolocation API plugin using the command line.
$ cordova plugins add https://git-wip-us.apache.org/repos/asf/cordova-plugin-geolocation.git

Go to the www folder, open the index.html file, and add a div element with the id value #map inside the main div of the app, below the #deviceready one.

<div id='map'></div>

Add a new script tag to include the Google Maps JavaScript library.

<script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&sensor=true"></script>

Go to the css folder and define a new rule inside the index.css file to give the div element and its content an appropriate size.

#map {
    width: 280px;
    height: 230px;
    display: block;
    margin: 5px auto;
    position: relative;
}

Go to the js folder, open the index.js file, and define a new function named initMap.

initMap: function(lat, long) {
    // The code needed to show the map and the
    // device position will be added here
}

In the body of the function, define an options object in order to specify how the map has to be rendered.

var options = {
    zoom: 8,
    center: new google.maps.LatLng(lat, long),
    mapTypeId: google.maps.MapTypeId.ROADMAP
};

Add to the body of the initMap function the code to initialize the rendering of the map and to show a marker representing the device's current position on it.

var map = new google.maps.Map(document.getElementById('map'), options);
var markerPoint = new google.maps.LatLng(lat, long);
var marker = new google.maps.Marker({
    position: markerPoint,
    map: map,
    title: 'Device\'s Location'
});

Define a function to use as the success handler and call the previously defined initMap function from its body.

onSuccess: function(position) {
    var coords = position.coords;
    app.initMap(coords.latitude, coords.longitude);
}

Define another function to act as a failure handler, able to notify the user that something went wrong.

onFailure: function(error) {
    navigator.notification.alert(error.message, null);
}

Go into the deviceready function and add, as the last statement, the call to the Geolocation API needed to recover the device's position.

navigator.geolocation.getCurrentPosition(app.onSuccess, app.onFailure,
    {timeout: 5000, enableHighAccuracy: false});

Open the command-line tool, build the app, and then run it on your testing devices.

$ cordova build
$ cordova run android

What just happened?

You integrated Google Maps inside an app. The map is the interactive map most users are familiar with: the most common gestures already work and the Google Street View controls are already enabled. To successfully load the Google Maps API on iOS, it's mandatory to whitelist the googleapis.com and gstatic.com domains. Open the .plist file of the project as source code (right-click on the file and then Open As | Source Code) and add the following array of domains:

<key>ExternalHosts</key>
<array>
    <string>*.googleapis.com</string>
    <string>*.gstatic.com</string>
</array>

Other Geolocation data

In the previous example, you only used the latitude and longitude properties of the position object that you received. There are other attributes that can be accessed as properties of the Coordinates object:

altitude, the height of the device, in meters, above sea level.
accuracy, the accuracy level of the latitude and longitude, in meters; it can be used to show a radius of accuracy when mapping the device's position (see the sketch after this list).
altitudeAccuracy, the accuracy of the altitude, in meters.
heading, the direction of the device in degrees clockwise from true north.
speed, the current ground speed of the device in meters per second.
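For instance, here is a minimal sketch (not part of the original recipe) of how the accuracy value could be visualized; it assumes coords.accuracy is passed into initMap as an extra accuracy argument, alongside the latitude and longitude used above:

// Inside initMap, after the map and markerPoint have been created:
var accuracyCircle = new google.maps.Circle({
    map: map,               // the Google Map created earlier in initMap
    center: markerPoint,    // the device's reported position
    radius: accuracy,       // coords.accuracy, in meters (assumed extra parameter)
    fillColor: '#4285F4',
    fillOpacity: 0.15,
    strokeOpacity: 0.4
});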
Latitude and longitude are the best supported of these properties, and the ones that will be most useful when communicating with remote APIs. The other properties are mainly useful if you're developing an application for which geolocation is a core part of its standard functionality, such as apps that use this data to create a flow of information contextualized to the user's location. The accuracy property is the most important of these additional features, because as an application developer you typically won't know which particular sensor is giving you the location, and you can use the accuracy value as a range in your queries to external services.

There are several APIs that allow you to discover interesting data related to a place; among these, the most interesting are the Google Places API and the Foursquare API. The Google Places and Foursquare online documentation is very well organized and is the right place to start if you want to dig deeper into these topics. You can access the Google Places docs at https://developers.google.com/maps/documentation/javascript/places and the Foursquare docs at https://developer.foursquare.com/. The itinero reference app for this article implements both APIs.

In the next example, you will look at how to integrate Google Places inside the RequireJS app. In order to include the Google Places API inside an app, all you have to do is add the libraries parameter to the Google Maps API call. The resulting URL should look similar to http://maps.google.com/maps/api/js?key=SECRET_KEY&sensor=true&libraries=places.

The itinero app lets users create and plan a trip with friends. Once the user provides the name of the trip, the name of the country to be visited, and the trip mates and dates, it's time to start selecting the travel, eat, and sleep options. When the user selects the Eat option, the Google Places data provider will return bakeries, take-out places, groceries, and so on, close to the trip's destination. The app will show on the screen a list of possible places the user can select to plan the trip. For a complete list of the place search types supported by the Google API, refer to the online documentation at https://developers.google.com/places/documentation/supported_types.
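To give a rough idea of the kind of request behind the Eat option, here is a hedged sketch (not the itinero source); it assumes the places library has been loaded through the libraries parameter shown above, that a google.maps.Map instance named map already exists, and that destLat and destLng hold the trip destination:

var service = new google.maps.places.PlacesService(map);
service.nearbySearch({
    location: new google.maps.LatLng(destLat, destLng), // trip destination (hypothetical variables)
    radius: 1000,                                       // search radius in meters
    types: ['bakery', 'grocery_or_supermarket']         // values from the supported_types list
}, function(results, status) {
    if (status === google.maps.places.PlacesServiceStatus.OK) {
        // List the candidate places so the user can pick one for the trip plan.
        results.forEach(function(place) {
            console.log(place.name + ' - ' + place.vicinity);
        });
    }
});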


How to Make Generic typealiases in Swift

Alexander Altman
16 Nov 2015
4 min read
Swift's typealias declarations are a good way to clean up our code. It's generally considered good practice in Swift to use typealiases to give more meaningful and domain-specific names to types that would otherwise be either too general-purpose or too long and complex. For example, the declaration:

typealias Username = String

gives a less vague name to the type String, since we're going to be using strings as usernames and we want a more domain-relevant name for that type. Similarly, the declaration:

typealias IDMultimap = [Int: Set<Username>]

gives a name for [Int: Set<Username>] that is not only more meaningful, but somewhat shorter.

However, we run into problems when we want to do something a little more advanced; there is a possible application of typealias that Swift doesn't let us make use of. Specifically, Swift doesn't accept typealiases with generic parameters. If we try it the naïvely obvious way,

typealias Multimap<Key: Hashable, Value: Hashable> = [Key: Set<Value>]

we get an error at the beginning of the type parameter list: Expected '=' in typealias declaration. Swift (as of version 2.1, at least) doesn't let us directly declare a generic typealias. This is quite a shame, as such an ability would be very useful in a lot of different contexts, and languages that have it (such as Rust, Haskell, OCaml, F♯, or Scala) make use of it all the time, including in their standard libraries. Is there any way to work around this linguistic lack? As it turns out, there is!

The Solution

It's actually possible to effectively give a typealias type parameters, despite Swift appearing to disallow it. Here's how we can trick Swift into accepting a generic typealias:

enum Multimap<Key: Hashable, Value: Hashable> {
    typealias T = [Key: Set<Value>]
}

The basic idea here is that we declare a generic enum whose sole purpose is to hold our (now technically non-generic) typealias as a member. We can then supply the enum with its type parameters and project out the actual typealias inside, like so:

let idMMap: Multimap<Int, Username>.T = [0: ["alexander", "bob"], 1: [], 2: ["christina"]]

func hasID0(user: Username) -> Bool {
    return idMMap[0]?.contains(user) ?? false
}

Notice that we used an enum rather than a struct; this is because an enum with no cases cannot have any instances (which is exactly what we want), but a struct with no members still has (at least) one instance, which breaks our layer of abstraction. We are essentially treating our caseless enum as a tiny generic module, within which everything (that is, just the typealias) has access to the Key and Value type parameters. This pattern is used in at least a few libraries dedicated to functional programming in Swift, since such constructs are especially valuable there. Nonetheless, this is a broadly useful technique, and it's the best method currently available for creating generic typealiases in Swift.

The Applications

As sketched above, this technique works because Swift doesn't object to an ordinary typealias nested inside a generic type declaration. However, it does object to multiple generic types being nested inside each other; it even objects to either a generic type being nested inside a non-generic type or a non-generic type being nested inside a generic type. As a result, type-level currying is not possible.
Despite this limitation, this kind of generic typealias is still useful for a lot of purposes; one big one is specialized error-return types, for which Swift can use this technique to imitate Rust's standard library:

enum Result<V, E> {
    case Ok(V)
    case Err(E)
}

enum IO_Result<V> {
    typealias T = Result<V, ErrorType>
}

Another use for generic typealiases comes in the form of nested collection types:

enum DenseMatrix<Element> {
    typealias T = [[Element]]
}

enum FlatMatrix<Element> {
    typealias T = (width: Int, elements: [Element])
}

enum SparseMatrix<Element> {
    typealias T = [(x: Int, y: Int, value: Element)]
}

Finally, since Swift is a relatively young language, there are sure to be undiscovered applications for things like this; if you search, maybe you'll find one!

About the author

Alexander Altman (https://pthariensflame.wordpress.com) is a functional programming enthusiast who enjoys the mathematical and ergonomic aspects of programming language design. He's been working with Swift since the language's first public release, and he is one of the core contributors to the TypeLift (https://github.com/typelift) project.


Appcelerator Titanium: Creating Animations, Transformations, and Understanding Drag-and-drop

Packt
22 Dec 2011
10 min read
Animating a View using the "animate" method

Any Window, View, or Component in Titanium can be animated using the animate method. This allows you to quickly and confidently create animated objects that can give your applications the "wow" factor. Additionally, you can use animations as a way of holding information or elements off screen until they are actually required. A good example of this would be if you had three different TableViews but only wanted one of those views visible at any one time. Using animations, you could slide those tables in and out of the screen space whenever it suited you, without the complication of creating additional Windows.

In the following recipe, we will create the basic structure of our application by laying out a number of different components, and then get down to animating four different ImageViews. These will each contain a different image to use as our "Funny Face" character. Complete source code for this recipe can be found in the /Chapter 7/Recipe 1 folder.

Getting ready

To prepare for this recipe, open up Titanium Studio and log in if you have not already done so. If you need to register a new account, you can do so for free directly from within the application. Once you are logged in, click on New Project, and the details window for creating a new project will appear. Enter FunnyFaces as the name of the app, and fill in the rest of the details with your own information. Pay attention to the app identifier, which is normally written in reverse domain notation (that is, com.packtpub.funnyfaces). This identifier cannot be easily changed after the project is created, and you will need to match it exactly when creating provisioning profiles for distributing your apps later on.

The first thing to do is copy all of the required images into an images folder under your project's Resources folder. Then, open the app.js file in your IDE and replace its contents with the following code. This code will form the basis of our FunnyFaces application layout.

// this sets the background color of the master UIView
Titanium.UI.setBackgroundColor('#fff');

//create root window
var win1 = Titanium.UI.createWindow({
    title: 'Funny Faces',
    backgroundColor: '#fff'
});

//this will determine whether we load the 4 funny face
//images or whether one is selected already
var imageSelected = false;

//the 4 image face objects, yet to be instantiated
var image1;
var image2;
var image3;
var image4;

var imageViewMe = Titanium.UI.createImageView({
    image: 'images/me.png',
    width: 320, height: 480,
    left: 0, top: 0,
    zIndex: 0,
    visible: false
});
win1.add(imageViewMe);

var imageViewFace = Titanium.UI.createImageView({
    image: 'images/choose.png',
    width: 320, height: 480,
    zIndex: 1
});
imageViewFace.addEventListener('click', function(e){
    if(imageSelected == false){
        //transform our 4 image views onto screen so
        //the user can choose one!
    }
});
win1.add(imageViewFace);

//this footer will hold our save button and zoom slider objects
var footer = Titanium.UI.createView({
    height: 40,
    backgroundColor: '#000',
    bottom: 0, left: 0,
    zIndex: 2
});

var btnSave = Titanium.UI.createButton({
    title: 'Save Photo',
    width: 100, left: 10,
    height: 34, top: 3
});
footer.add(btnSave);

var zoomSlider = Titanium.UI.createSlider({
    left: 125, top: 8,
    height: 30, width: 180
});
footer.add(zoomSlider);

win1.add(footer);

//open root window
win1.open();

Build and run your application in the emulator for the first time, and you should end up with a screen similar to the following example:

How to do it…

Now, back in the app.js file, we are going to animate the four ImageViews which will each provide an option for our funny face image. Inside the declaration of the imageViewFace object's event handler, type in the following code:

imageViewFace.addEventListener('click', function(e){
    if(imageSelected == false){
        //transform our 4 image views onto screen so
        //the user can choose one!
        image1 = Titanium.UI.createImageView({
            backgroundImage: 'images/clown.png',
            left: -160, top: -140, width: 160, height: 220, zIndex: 2
        });
        image1.addEventListener('click', setChosenImage);
        win1.add(image1);

        image2 = Titanium.UI.createImageView({
            backgroundImage: 'images/policewoman.png',
            left: 321, top: -140, width: 160, height: 220, zIndex: 2
        });
        image2.addEventListener('click', setChosenImage);
        win1.add(image2);

        image3 = Titanium.UI.createImageView({
            backgroundImage: 'images/vampire.png',
            left: -160, bottom: -220, width: 160, height: 220, zIndex: 2
        });
        image3.addEventListener('click', setChosenImage);
        win1.add(image3);

        image4 = Titanium.UI.createImageView({
            backgroundImage: 'images/monk.png',
            left: 321, bottom: -220, width: 160, height: 220, zIndex: 2
        });
        image4.addEventListener('click', setChosenImage);
        win1.add(image4);

        image1.animate({
            left: 0, top: 0, duration: 500,
            curve: Titanium.UI.ANIMATION_CURVE_EASE_IN
        });
        image2.animate({
            left: 160, top: 0, duration: 500,
            curve: Titanium.UI.ANIMATION_CURVE_EASE_OUT
        });
        image3.animate({
            left: 0, bottom: 20, duration: 500,
            curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
        });
        image4.animate({
            left: 160, bottom: 20, duration: 500,
            curve: Titanium.UI.ANIMATION_CURVE_LINEAR
        });
    }
});

Now launch the emulator from Titanium Studio and you should see the initial layout with our "Tap To Choose An Image" view visible. Tapping the choose ImageView should now animate our four funny face options onto the screen, as seen in the following screenshot:

How it works…

The first block of code creates the basic layout for our application, which consists of a couple of ImageViews, a footer view holding our "save" button, and the Slider control, which we'll use later on to increase the zoom scale of our own photograph.

Our second block of code is where it gets interesting. Here, we're doing a simple check that the user hasn't already selected an image, using the imageSelected Boolean, before getting into our animated ImageViews, named image1, image2, image3, and image4. The concept behind the animation of these four ImageViews is pretty simple. All we're essentially doing is changing the properties of our control over a period of time, defined by us in milliseconds. Here, we are changing the top and left properties of all of our images over a period of half a second so that we get the effect of them sliding into place on the screen.
You can further enhance these animations by adding more properties to animate. For example, if we wanted to change the opacity of image1 from 50 percent to 100 percent as it slides into place, we could change the code to look similar to the following:

image1 = Titanium.UI.createImageView({
    backgroundImage: 'images/clown.png',
    left: -160, top: -140, width: 160, height: 220,
    zIndex: 2, opacity: 0.5
});
image1.addEventListener('click', setChosenImage);
win1.add(image1);
image1.animate({
    left: 0, top: 0, duration: 500,
    curve: Titanium.UI.ANIMATION_CURVE_EASE_IN,
    opacity: 1.0
});

Finally, the curve property of animate() allows you to adjust the easing of your animated component. Here, we used all four animation-curve constants, one on each of our ImageViews. They are:

Titanium.UI.ANIMATION_CURVE_EASE_IN: Accelerate the animation slowly
Titanium.UI.ANIMATION_CURVE_EASE_OUT: Decelerate the animation slowly
Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT: Accelerate and decelerate the animation slowly
Titanium.UI.ANIMATION_CURVE_LINEAR: Keep the animation speed constant throughout the animation cycle

Animating a View using 2D matrix and 3D matrix transforms

You may have noticed that each of our ImageViews in the previous recipe had a click event listener attached to it, calling an event handler named setChosenImage. This event handler is going to handle setting our chosen "funny face" image to the imageViewFace control. It will then animate all four "funny face" ImageView objects on our screen area using a number of different 2D and 3D matrix transforms. Complete source code for this recipe can be found in the /Chapter 7/Recipe 2 folder.

How to do it…

Replace the existing setChosenImage function, which currently stands empty, with the following source code:

//this function sets the chosen image and removes the 4
//funny faces from the screen
function setChosenImage(e){
    imageViewFace.image = e.source.backgroundImage;
    imageViewMe.visible = true;

    //create the first transform
    var transform1 = Titanium.UI.create2DMatrix();
    transform1 = transform1.rotate(-180);
    var animation1 = Titanium.UI.createAnimation({
        transform: transform1, duration: 500,
        curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
    });
    image1.animate(animation1);
    animation1.addEventListener('complete', function(e){
        //remove our image selection from win1
        win1.remove(image1);
    });

    //create the second transform
    var transform2 = Titanium.UI.create2DMatrix();
    transform2 = transform2.scale(0);
    var animation2 = Titanium.UI.createAnimation({
        transform: transform2, duration: 500,
        curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
    });
    image2.animate(animation2);
    animation2.addEventListener('complete', function(e){
        //remove our image selection from win1
        win1.remove(image2);
    });

    //create the third transform
    var transform3 = Titanium.UI.create2DMatrix();
    transform3 = transform3.rotate(180);
    transform3 = transform3.scale(0);
    var animation3 = Titanium.UI.createAnimation({
        transform: transform3, duration: 1000,
        curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
    });
    image3.animate(animation3);
    animation3.addEventListener('complete', function(e){
        //remove our image selection from win1
        win1.remove(image3);
    });

    //create the fourth and final transform
    var transform4 = Titanium.UI.create3DMatrix();
    transform4 = transform4.rotate(200, 0, 1, 1);
    transform4 = transform4.scale(2);
    transform4 = transform4.translate(20, 50, 170);
    //the m34 property controls the perspective of the 3D view
    transform4.m34 = 1.0 / -3000; //m34 is the position at [3,4] in the matrix
    var animation4 =
        Titanium.UI.createAnimation({
            transform: transform4, duration: 1500,
            curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
        });
    image4.animate(animation4);
    animation4.addEventListener('complete', function(e){
        //remove our image selection from win1
        win1.remove(image4);
    });

    //change the status of the imageSelected variable
    imageSelected = true;
}

How it works…

Again, we are creating animations for each of the four ImageViews, but this time in a slightly different way. Instead of using the built-in animate method, we are creating a separate animation object for each ImageView before calling the ImageView's animate method and passing this animation object to it. This method of creating animations allows you to have finer control over them, including the use of transforms.

Transforms have a couple of shortcuts to help you perform some of the most common animation types quickly and easily. The image1 and image2 transforms, as shown in the previous code, use the rotate and scale methods respectively. Scale and rotate in this case are 2D matrix transforms, meaning they only transform the object in two-dimensional space along its X and Y axes. Each of these transformation types takes a single integer parameter: for scale, it is 0-100 percent, and for rotate, it is 0-360 degrees.

Another advantage of using transforms for your animations is that you can easily chain them together to perform a more complex animation style. In the previous code, you can see that both a scale and a rotate transform are applied to the image3 component. When you run the application in the emulator or on your device, you should notice that both of these transform animations are applied to the image3 control!

Finally, the image4 control also has a transform animation applied to it, but this time we are using a 3D matrix transform instead of the 2D matrix transforms used for the other three ImageViews. These work the same way as regular 2D matrix transforms, except that you can also animate your control in 3D space, along the Z-axis.

It's important to note that animations have two event listeners: start and complete. These event handlers allow you to perform actions based on the beginning or ending of your animation's life cycle. As an example, you could chain animations together by using the complete event to add a new animation or transform to an object after the previous animation has finished. In our previous example, we are using this complete event to remove our ImageView from the Window once its animation has finished.
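For example, here is a small, hypothetical sketch of that chaining technique (someView stands in for any Titanium view you have already created; it is not part of the recipe's code):

var slideIn = Titanium.UI.createAnimation({ left: 0, duration: 500 });
var fadeOut = Titanium.UI.createAnimation({ opacity: 0, duration: 1000 });

slideIn.addEventListener('complete', function(e){
    //start the second animation only once the first has finished
    someView.animate(fadeOut);
});
someView.animate(slideIn);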


Getting Started with Kinect

Packt
30 Aug 2013
8 min read
Before the birth of Microsoft Kinect, few people were familiar with the technology of motion sensing. Similar devices were originally invented and developed for monitoring aerial and undersea aggressors in wars. In non-military settings, motion sensors are widely used in alarm systems, lighting systems, and so on; they can detect when someone or something disrupts the waves throughout a room and trigger predefined events. Although radar sensors and modern infrared motion sensors are now common in our lives, we seldom notice their existence, and we can hardly make use of these devices in our own applications. But Kinect changed everything from the time it was launched in North America at the end of 2010.

Unlike most other user input controllers, Kinect enables users to interact with programs without actually touching a mouse or a pad, but only through gestures. In a top-level view, a Kinect sensor is made up of an RGB camera, a depth sensor, an IR emitter, and a microphone array, which consists of several microphones for sound and voice recognition. A standard Kinect (for Windows) device is shown as follows:

The Kinect device

The Kinect drivers and software, which are available either from Microsoft or from third-party companies, can even track and analyze advanced gestures and the skeletons of multiple players. All these features make it possible to design brilliant and exciting applications with hands-free user input. By now, Kinect has already brought a lot of games and software to an entirely new level. It is believed to be the bridge between the physical world we live in and the virtual reality we create, a completely new way of interacting with art, and a profitable business opportunity for individuals and companies.

In this article, we will try to make an interesting game with the popular Kinect technology for user input. As Kinect captures the camera and depth images as video streams, we can also merge this view of our real-world environment with virtual elements, which is called Augmented Reality (AR). This enables users to feel as if they appear and live in a nonexistent world, or as if something unbelievable exists in the physical world. In this article, we will first introduce the installation of Kinect hardware and software on personal computers, and then consider a good enough idea compounded of Kinect and augmented reality elements.

Before installing the Kinect device on your PC, you obviously need to buy the Kinect equipment first. In this article, we will depend on Kinect for Windows or Kinect for Xbox 360, which can be learned about and bought at:

http://www.microsoft.com/en-us/kinectforwindows/
http://www.xbox.com/en-US/kinect

Please note that you don't need to buy an Xbox 360 at all. Kinect will be connected to a PC so that we can write custom programs for it. An alternative choice is Kinect for Windows, which is located at:

http://www.microsoft.com/en-us/kinectforwindows/purchase/

The use and development of both will be no different for our purposes.

Installation of Kinect

It is strongly suggested that you have a Windows 7 operating system or higher. It can be either 32-bit or 64-bit and with dual-core or faster processors. Linux developers can also benefit from third-party drivers and SDKs to manipulate Kinect components.
Before we start to discuss the software installation, you can download both the Microsoft Kinect SDK and the Developer Toolkit from:

http://www.microsoft.com/en-us/kinectforwindows/develop/developerdownloads.aspx

In this article, we prefer to develop Kinect-based applications using Kinect SDK Version 1.5 (or higher) and the C++ language. Later versions should be backward compatible, so the source code provided in this article shouldn't need to be changed.

Setting up your Kinect software on PCs

After we have downloaded the SDK and the Developer Toolkit, it's time to install them on the PC and ensure that they can work with the Kinect hardware. Let's perform the following steps:

Run the setup executable with administrator permissions.
Select I agree to the license terms and conditions after reading the License Agreement.

The Kinect SDK setup dialog

Follow the steps until the SDK installation has finished. Then, install the toolkit following similar instructions.

The hardware installation is easy: plug the ends of the cable into the USB port and a power point, plug the USB into your PC, and wait for the drivers to be found automatically.

Now, start the Developer Toolkit Browser, choose Samples: C++ from the tabs, and find and run the sample named Skeletal Viewer. You should see a new window demonstrating the depth/skeleton/color images of the current physical scene, similar to the following image:

The depth (left), skeleton (middle), and color (right) images read from Kinect

Why did I do that?

We chose to set up the SDK software first so that it installs the motor and camera drivers, the APIs, and the documentation, as well as the toolkit including resources and samples, onto the PC. If the steps are reversed, that is, if the hardware is connected before installing the SDK, your Windows OS may not be able to recognize the device. Just start the SDK setup at this point and the device should be identified again during the installation process.

But before actually using Kinect, you still have to ensure there is nothing between the device and you (the player). It's best to keep the play space at least 1.8 m wide and about 1.8 m to 3.6 m long from the sensor. If you have more than one Kinect device, don't keep them face-to-face, as there may be infrared interference between them.

If you have multiple Kinects to install on the same PC, please note that one USB root hub can have one and only one Kinect connected. The problem arises because Kinect takes over 50 percent of the USB bandwidth and needs an individual USB controller to run, so plugging more than one device into the same USB hub means only one of them will work.

The depth image at the left in the preceding image shows a human (in fact, the author) standing in front of the camera. Some parts may be totally black if they are too near (often less than 80 cm) or too far (often more than 4 m). If you are using Kinect for Windows, you can turn on Near Mode to show objects that are near the camera; however, Kinect for Xbox 360 doesn't have such a feature. You can read more about the software and hardware setup at:

http://www.microsoft.com/en-us/kinectforwindows/purchase/sensor_setup.aspx

The idea of the AR-based Fruit Ninja game

Now it's time for us to define the goal we are going to achieve in this article.
As a quick but practical guide to Kinect and augmented reality, we should make use of the depth detection, video streaming, and motion tracking functionalities in our project. 3D graphics APIs are also important here, because virtual elements should be included and interacted with through irregular user inputs (not common mouse or keyboard inputs). A fine example is the Fruit Ninja game, which is already very popular all over the world. Especially on mobile devices such as smartphones and pads, you can see people destroy different kinds of fruit by touching and swiping their fingers on the screen.

With the help of Kinect, our arms can act as blades to cut off flying fruits, and our images can also be shown within the virtual environment, so that we can see the posture of our bodies and the position of our arms on the screen display.

Unfortunately, this idea is not fresh any more. There are already commercial products with similar purposes available in the market; for example:

http://marketplace.xbox.com/en-US/Product/Fruit-Ninja-Kinect/66acd000-77fe-1000-9115-d80258410b79

But please note that we are not going to design a completely different product here, or even bring it to market after finishing this article. We will only learn how to develop Kinect-based applications, work in our own way from the very beginning, and benefit from the experience in our professional work or as amateurs. So it is okay to reinvent the wheel this time, and have fun in the process and the results.

Summary

Kinect, which is a portmanteau of the words "kinetic" and "connect", is a motion sensor developed and released by Microsoft. It provides a natural user interface (NUI) for tracking and manipulating hands-free user inputs such as gestures and skeleton motions. It can be considered one of the most successful consumer electronics devices of recent years, and we will be using this novel device to build the Fruit Ninja game in this article. We will focus on developing Kinect and AR-based applications on Windows 7 or higher using the Microsoft Kinect SDK 1.5 (or higher) and the C++ programming language. In this article, we have mainly introduced how to install the Kinect for Windows SDK.


Building Mobile Games with Crafty.js and PhoneGap: Part 1

Robi Sen
18 Mar 2015
7 min read
In this post, we will build a mobile game using HTML5, CSS, and JavaScript. To make things easier, we are going to make use of the Crafty.js JavaScript game engine, which is both free and open source. In this first part of a two-part series, we will look at making a simple turn-based RPG-like game based on Pascal Rettig's Crafty Workshop presentation. You will learn how to add sprites to a game, control them, and work with mouse/touch events.

Setting up

To get started, first create a new PhoneGap project wherever you want to do your development. For this article, let's call the project simplerpg.

Figure 1: Creating the simplerpg project in PhoneGap.

Navigate to the www directory in your PhoneGap project and add a new directory called lib. This is where we are going to put several JavaScript libraries we will use for the project. Now, download the jQuery library to the lib directory; for this project, we will use jQuery 2.1. Once you have downloaded jQuery, download the Crafty.js library and add it to your lib directory as well.

For later parts of this series, you will want to use a web server such as Apache or IIS to make development easier. For this first part of the post, you can just drag-and-drop the HTML files into your browser to test, but later you will need to serve the files from a web server to avoid Same Origin Policy errors. This article assumes you are developing in Chrome; IE or Firefox will work just fine, but the examples and debugging environment shown here use Chrome.

Finally, the source code for this article can be found here on GitHub. In the lessons directory, you will see a series of index files with a listing number matching each code listing in this article.

Crafty

PhoneGap allows you to take almost any HTML5 application and turn it into a mobile app with little to no extra work. Perhaps the most complex of all mobile apps are video games. Video games often have complex routines, graphics, and controls. As such, developing a video game from the ground up is very difficult; so much so that even major video game companies rarely do so. What they usually do, and what we will do here, is make use of libraries and game engines that take care of many of the complex tasks of managing objects, animation, collision detection, and more. For our project, we will be making use of the open source JavaScript game engine Crafty. Before you get started with the code, it's recommended to quickly review the Crafty website here and the Crafty API here.

Bootstrapping Crafty and creating an entity

Crafty is very simple to start working with. All you need to do is load the Crafty.js library and initialize Crafty. Let's try that. Create an index.html file in your www root directory, if one does not exist; if you already have one, go ahead and overwrite it. Then, cut and paste listing 1 into it.

Listing 1: Creating an entity

<!DOCTYPE html>
<html>
<head></head>
<body>
<div id="game"></div>
<script type="text/javascript" src="lib/crafty.js"></script>
<script>
    // Height and Width
    var WIDTH = 500, HEIGHT = 320;
    // Initialize Crafty
    Crafty.init(WIDTH, HEIGHT);
    var player = Crafty.e();
    player.addComponent("2D, Canvas, Color");
    player.color("red").attr({w:50, h:50});
</script>
</body>
</html>

As you can see in listing 1, we are creating an HTML5 document and loading the Crafty.js library. Then, we initialize Crafty and pass it a width and height. Next, we create a Crafty entity called player.
Crafty, like many other game engines, follows a design pattern called Entity-Component-System (ECS). Entities are objects that you can attach things like behaviors and data to. For our player entity, we are going to add several components, including 2D, Canvas, and Color. Components can be data, metadata, or behaviors. Finally, we add a specific color and position to our entity. If you now save your file and drag-and-drop it into the browser, you should see something like figure 2.

Figure 2: A simple entity in Crafty.

Moving a box

Now, let's do something a bit more complex in Crafty: let's move the red box based on where we move our mouse or, if you have a touch-enabled device, where we touch the screen. To do this, open your index.html file and edit it so it looks like listing 2.

Listing 2: Moving the box

<!DOCTYPE html>
<html>
<head></head>
<body>
<div id="game"></div>
<script type="text/javascript" src="lib/crafty.js"></script>
<script>
    var WIDTH = 500, HEIGHT = 320;
    Crafty.init(WIDTH, HEIGHT);
    // Background
    Crafty.background("black");
    //add mousetracking so block follows your mouse
    Crafty.e("mouseTracking, 2D, Mouse, Touch, Canvas")
        .attr({ w:500, h:320, x:0, y:0 })
        .bind("MouseMove", function(e) {
            console.log("MouseDown:"+ Crafty.mousePos.x +", "+ Crafty.mousePos.y);
            // when you touch on the canvas redraw the player
            player.x = Crafty.mousePos.x;
            player.y = Crafty.mousePos.y;
        });
    // Create the player entity
    var player = Crafty.e();
    player.addComponent("2D, DOM");
    //set where your player starts
    player.attr({ x : 10, y : 10, w : 50, h : 50 });
    player.addComponent("Color").color("red");
</script>
</body>
</html>

As you can see, there is a lot more going on in this listing. The first difference is that we are using Crafty.background to set the background to black, but we are also creating a new entity called mouseTracking that is the same size as the whole canvas. We assign several components to the entity so that it can inherit their methods and properties. We then use .bind to bind the mouse's movements to our entity and tell Crafty to reposition our player entity to wherever the mouse's x and y position is. So, if you save this code and run it, you will find that the red box goes wherever your mouse moves or wherever you touch or drag, as in figure 3.

Figure 3: Controlling the movement of a box in Crafty.

Summary

In this post, you learned about working with Crafty.js. Specifically, you learned how to work with the Crafty API and create entities. In Part 2, you will work with sprites, create components, and control entities via mouse/touch.

About the author

Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus-year career in technology, engineering, and research has led him to work on cutting-edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as Under Armour, Sony, Cisco, IBM, and many others to help build new products and services. Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.


Designing a Simple, Robust Object Detector and Classifier

Packt
13 Jun 2016
19 min read
This article, by Joseph Howse, author of the book iOS Application Development with OpenCV 3, illustrates a scale-invariant, rotation-invariant approach to object detection and classification using OpenCV 3 and just 250 lines of custom C++ code. The technique relies on blob detection, histogram analysis, and SURF (or ORB if SURF is unavailable). The classifier is sensitive to colors as well as keypoints, and it can work with a small number of training images. For background information, sample images, and a complete tutorial on how to integrate this detector and classifier into an iOS application, refer to Chapter 5, Classifying Coins and Commodities, in the book iOS Application Development with OpenCV 3 (Packt Publishing, 2016). You could also use this article's C++ code on other platforms besides iOS.

Defining blobs and a blob detector

For our purposes, a blob simply has an image and a label. The image is a cv::Mat and the label is an unsigned integer. The label's default value is 0, which shall signify that the blob has not yet been classified. Create a new header file, Blob.h, and fill it with the following declaration of a Blob class:

#ifndef BLOB_H
#define BLOB_H

#include <opencv2/core.hpp>

class Blob {
public:
  Blob(const cv::Mat &mat, uint32_t label = 0ul);

  /**
   * Construct an empty blob.
   */
  Blob();

  /**
   * Construct a blob by copying another blob.
   */
  Blob(const Blob &other);

  bool isEmpty() const;

  uint32_t getLabel() const;
  void setLabel(uint32_t value);

  const cv::Mat &getMat() const;
  int getWidth() const;
  int getHeight() const;

private:
  uint32_t label;

  cv::Mat mat;
};

#endif // BLOB_H

A Blob's image does not change after construction, but the label may change as a result of our classification process. Note that most of Blob's methods have the const modifier, but of course setLabel does not, because it changes the label.

Now, let's declare a BlobDetector class in another new header file, BlobDetector.h. This class provides a detect public method to analyze a given image and populate a vector<Blob> based on detected objects in the image. Another public method, getMask, returns a thresholded version of the most recent image that the detect method received. Internally, BlobDetector uses several more matrices and vectors to hold intermediate results, including the mask, detected edges, detected contours, and the hierarchy that describes the contours' relationship to each other. Here is the detector's declaration (the opening include guard and the include of Blob.h, which the scraped text omitted, are restored here to match the closing #endif):

#ifndef BLOB_DETECTOR_H
#define BLOB_DETECTOR_H

#include <vector>

#include "Blob.h"

class BlobDetector {
public:
  void detect(cv::Mat &image, std::vector<Blob> &blobs,
    double resizeFactor = 1.0, bool draw = false);

  const cv::Mat &getMask() const;

private:
  void createMask(const cv::Mat &image);

  cv::Mat resizedImage;
  cv::Mat mask;
  cv::Mat edges;
  std::vector<std::vector<cv::Point>> contours;
  std::vector<cv::Vec4i> hierarchy;
};

#endif // !BLOB_DETECTOR_H

Later, in the Detecting blobs against a plain background section, we will define the methods' bodies in new files called Blob.cpp and BlobDetector.cpp.

Defining blob descriptors and a blob classifier

If you are familiar with keypoint matching, you know that a keypoint has a descriptor or set of descriptive statistics. Similarly, we can define a custom descriptor for a blob. As our classifier relies on histogram comparison and keypoint matching, let's say that a blob's descriptor consists of a normalized histogram and a matrix of keypoint descriptors.
The descriptor object is also a convenient place to put the label. Create a new header file, BlobDescriptor.h, and put the following declaration of a BlobDescriptor class in it:

#ifndef BLOB_DESCRIPTOR_H
#define BLOB_DESCRIPTOR_H

#include <opencv2/core.hpp>

class BlobDescriptor {
public:
  BlobDescriptor(const cv::Mat &normalizedHistogram,
    const cv::Mat &keypointDescriptors, uint32_t label);

  const cv::Mat &getNormalizedHistogram() const;
  const cv::Mat &getKeypointDescriptors() const;
  uint32_t getLabel() const;

private:
  cv::Mat normalizedHistogram;
  cv::Mat keypointDescriptors;
  uint32_t label;
};

#endif // !BLOB_DESCRIPTOR_H

Note that BlobDescriptor is designed as an immutable class. All its methods, except the constructor, have the const modifier.

Now, let's declare a BlobClassifier class in another new header file, BlobClassifier.h. Publicly, this class receives Blob objects via an update method (for reference blobs) and a classify method (for blobs that the detector found in the scene). Privately, BlobClassifier creates, owns, and compares BlobDescriptor objects that pertain to the Blob objects. Thus, BlobClassifier is the only part of our program that needs to deal with BlobDescriptor. BlobClassifier also owns instances of OpenCV classes that are responsible for keypoint detection, description, and matching. Here is our classifier's declaration:

#ifndef BLOB_CLASSIFIER_H
#define BLOB_CLASSIFIER_H

#import "Blob.h"
#import "BlobDescriptor.h"

#include <opencv2/features2d.hpp>

class BlobClassifier {
public:
  BlobClassifier();

  /**
   * Add a reference blob to the classification model.
   */
  void update(const Blob &referenceBlob);

  /**
   * Clear the classification model.
   */
  void clear();

  /**
   * Classify a blob that was detected in a scene.
   */
  void classify(Blob &detectedBlob) const;

private:
  BlobDescriptor createBlobDescriptor(const Blob &blob) const;
  float findDistance(const BlobDescriptor &detectedBlobDescriptor,
    const BlobDescriptor &referenceBlobDescriptor) const;

  /**
   * A feature detector and descriptor extractor.
   * It finds features in images.
   * Then, it creates descriptors of the features.
   */
  cv::Ptr<cv::Feature2D> featureDetectorAndDescriptorExtractor;

  /**
   * A descriptor matcher.
   * It matches features based on their descriptors.
   */
  cv::Ptr<cv::DescriptorMatcher> descriptorMatcher;

  /**
   * Descriptors of the reference blobs.
   */
  std::vector<BlobDescriptor> referenceBlobDescriptors;
};

#endif // !BLOB_CLASSIFIER_H

Later, in the Classifying blobs by color and keypoints section, we will write the methods' bodies in new files called BlobDescriptor.cpp and BlobClassifier.cpp.

Detecting blobs against a plain background

Let's assume that the background has a distinctive color range, such as "cream to snow white". Our blob detector will calculate the image's dominant color range and search for large regions whose colors differ from this range. These anomalous regions will constitute the detected blobs. For small objects such as a bean or coin, a user can easily find a plain background such as a blank sheet of paper, a plain table-top, a plain piece of clothing, or even the palm of a hand. As our blob detector dynamically estimates the background color range, it can cope with various backgrounds and lighting conditions; it is not limited to a lab environment.

Create a new file, BlobDetector.cpp, for the implementation of our BlobDetector class.
(To review the header, refer back to the Defining blobs and a blob detector section.) At the top of BlobDetector.cpp, we will define several constants that pertain to the breadth of the background color range, the size and smoothing of the blobs, and the color of the blobs' rectangles in the preview image. Here is the relevant code:

#include <opencv2/imgproc.hpp>

#include "BlobDetector.h"

const double MASK_STD_DEVS_FROM_MEAN = 1.0;
const double MASK_EROSION_KERNEL_RELATIVE_SIZE_IN_IMAGE = 0.005;
const int MASK_NUM_EROSION_ITERATIONS = 8;

const double BLOB_RELATIVE_MIN_SIZE_IN_IMAGE = 0.05;

const cv::Scalar DRAW_RECT_COLOR(0, 255, 0); // Green

Of course, the heart of BlobDetector is its detect method. Optionally, the method creates a downsized version of the image for faster processing. Then, we call a helper method, createMask, to perform thresholding and erosion on the (resized) image. We pass the resulting mask to the cv::Canny function to perform Canny edge detection. We pass the edge mask to the cv::findContours function, which populates a vector of contours in the vector<vector<cv::Point>> format; that is to say, each contour is a vector of points. For each contour, we find the points' bounding rectangle. If we are working with a resized image, we restore the bounding rectangle to the original scale. We reject rectangles that are very small. Finally, for each accepted rectangle, we put a new Blob object in the output vector and optionally draw the rectangle in the original image. Here is the detect method's implementation:

void BlobDetector::detect(cv::Mat &image,
  std::vector<Blob> &blobs, double resizeFactor, bool draw)
{
  blobs.clear();

  if (resizeFactor == 1.0) {
    createMask(image);
  } else {
    cv::resize(image, resizedImage, cv::Size(), resizeFactor,
      resizeFactor, cv::INTER_AREA);
    createMask(resizedImage);
  }

  // Find the edges in the mask.
  cv::Canny(mask, edges, 191, 255);

  // Find the contours of the edges.
  cv::findContours(edges, contours, hierarchy, cv::RETR_TREE,
    cv::CHAIN_APPROX_SIMPLE);

  std::vector<cv::Rect> rects;
  int blobMinSize = (int)(MIN(image.rows, image.cols) *
    BLOB_RELATIVE_MIN_SIZE_IN_IMAGE);
  for (std::vector<cv::Point> contour : contours) {

    // Find the contour's bounding rectangle.
    cv::Rect rect = cv::boundingRect(contour);

    // Restore the bounding rectangle to the original scale.
    rect.x /= resizeFactor;
    rect.y /= resizeFactor;
    rect.width /= resizeFactor;
    rect.height /= resizeFactor;

    if (rect.width < blobMinSize || rect.height < blobMinSize) {
      continue;
    }

    // Create the blob from the sub-image inside the bounding
    // rectangle.
    blobs.push_back(Blob(cv::Mat(image, rect)));

    // Remember the bounding rectangle in order to draw it later.
    rects.push_back(rect);
  }

  if (draw) {
    // Draw the bounding rectangles.
    for (const cv::Rect &rect : rects) {
      cv::rectangle(image, rect.tl(), rect.br(), DRAW_RECT_COLOR);
    }
  }
}

The getMask method simply returns the mask that we previously computed in the detect method:

const cv::Mat &BlobDetector::getMask() const {
  return mask;
}

The createMask helper method begins by finding the image's mean color and standard deviation using the cv::meanStdDev function. We calculate a range of background colors based on a certain number of standard deviations from the mean, as defined by the MASK_STD_DEVS_FROM_MEAN constant near the top of BlobDetector.cpp.
We deem values outside this range to be foreground colors. Using the cv::inRange function, we map the background colors (in the image) to white (in the mask) and the foreground colors (in the image) to black (in the mask). Then, we create a square kernel using the cv::getStructuringElement function. Finally, we use the kernel in the cv::erode function to apply the erosion morphological operation to the mask. This has the effect of smoothing the black (foreground) regions such that they swallow up little gaps that are probably just noise. Here is the relevant code: void BlobDetector::createMask(const cv::Mat &image) {     // Find the image's mean color.   // Presumably, this is the background color.   // Also find the standard deviation.   cv::Scalar meanColor;   cv::Scalar stdDevColor;   cv::meanStdDev(image, meanColor, stdDevColor);     // Create a mask based on a range around the mean color.   cv::Scalar halfRange = MASK_STD_DEVS_FROM_MEAN * stdDevColor;   cv::Scalar lowerBound = meanColor - halfRange;   cv::Scalar upperBound = meanColor + halfRange;   cv::inRange(image, lowerBound, upperBound, mask);     // Erode the mask to merge neighboring blobs.   int kernelWidth = (int)(MIN(image.cols, image.rows) *     MASK_EROSION_KERNEL_RELATIVE_SIZE_IN_IMAGE);   if (kernelWidth > 0) {     cv::Size kernelSize(kernelWidth, kernelWidth);     cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT,       kernelSize);     cv::erode(mask, mask, kernel, cv::Point(-1, -1),       MASK_NUM_EROSION_ITERATIONS);   } } That is the end of the blob detector's code. As you can see, it uses a general-purpose and rather linear approach, without any special cases for different kinds of objects. Moreover, we are using a separate blob detector and blob classifier, and this separation of responsibilities enables us to keep each class's implementation relatively simple. For completeness, note that the Blob class's constructors have straightforward implementations that copy the arguments. For the blob's image, we make a deep copy because the original may change. (For example, the original may be a subimage in a frame of video, and after detection we may draw rectangles atop the frame of video.) Similarly, Blob's getter and setter methods are self-explanatory. Create a new file, Blob.cpp, and fill it with the following implementation: #import "Blob.h"   Blob::Blob(const cv::Mat &mat, uint32_t label) : label(label) {   mat.copyTo(this->mat); }   Blob::Blob() { }   Blob::Blob(const Blob &other) : label(other.label) {   other.mat.copyTo(mat); }   bool Blob::isEmpty() const {   return mat.empty(); } uint32_t Blob::getLabel() const {   return label; } void Blob::setLabel(uint32_t value) {   label = value; } const cv::Mat &Blob::getMat() const {   return mat; } int Blob::getWidth() const {   return mat.cols; } int Blob::getHeight() const {   return mat.rows; }

Classifying blobs by color and keypoints

Our classifier operates on the assumption that a blob contains distinctive colors, distinctive keypoints, or both. To conserve memory and precompute as much relevant information as possible, we do not store images of the reference blobs, but instead we store histograms and keypoint descriptors. Create a new file, BlobClassifier.cpp, for the implementation of our BlobClassifier class. (To review the header, refer back to the Defining blob descriptors and a blob classifier section.)
At the top of BlobDetector.cpp, we will define several constants that pertain to the number of histogram bins, the histogram comparison method, and the relative importance of the histogram comparison versus the keypoint comparison. Here is the relevant code: #include <opencv2/imgproc.hpp>   #include "BlobClassifier.h"   #ifdef WITH_OPENCV_CONTRIB #include <opencv2/xfeatures2d.hpp> #endif   const int HISTOGRAM_NUM_BINS_PER_CHANNEL = 32; const int HISTOGRAM_COMPARISON_METHOD = cv::HISTCMP_CHISQR_ALT;   const float HISTOGRAM_DISTANCE_WEIGHT = 0.98f; const float KEYPOINT_MATCHING_DISTANCE_WEIGHT = 1.0f -   HISTOGRAM_DISTANCE_WEIGHT; Beware that the HISTOGRAM_NUM_BINS_PER_CHANNEL constant has a cubic relationship to memory usage. For each blob descriptor, we store a three-dimensional (BGR) histogram with HISTOGRAM_NUM_BINS_PER_CHANNEL^3 elements, and each element is a 32-bit floating point number. If the constant is 32, each histogram's size in megabytes is (32^3)*32/(10^6)=1.0. This is fine for a small set of reference descriptors. If the constant is 256 (the maximum number of bins for an 8-bit color channel), the histogram's size goes up to a whopping value of (256^3)*32/(10^6)=536.9 megabytes! For an iOS application, this is unacceptable, given the platform's memory constraints. At best, in a high-end iOS device, one gigabyte of RAM might be available to each application. Conservatively, you should worry if your app's memory usage approaches 100 megabytes. Remember that OpenCV's SURF implementation is in the xfeatures2d module, which is part of opencv_contrib. If opencv_contrib is available, let's define the WITH_OPENCV_CONTRIB preprocessor flag. Then, our code imports the <opencv/xfeatures2d.hpp> header, and we use SURF. Otherwise, we use ORB. This selection also affects the implementation of BlobClassifier's constructor. OpenCV provides factory methods for various feature detectors, descriptors, and matchers, so we simply have to use the right combination of factory methods for SURF with Flann matching or ORB with brute-force matching based on the Hamming distance. Here is the constructor's implementation: BlobClassifier::BlobClassifier() { #ifdef WITH_OPENCV_CONTRIB   featureDetectorAndDescriptorExtractor =     cv::xfeatures2d::SURF::create();   descriptorMatcher = cv::DescriptorMatcher::create("FlannBased"); #else   featureDetectorAndDescriptorExtractor = cv::ORB::create();   descriptorMatcher = cv::DescriptorMatcher::create( "BruteForce-HammingLUT"); #endif } The update method's implementation calls a helper method, createBlobDescriptor, and adds the resulting BlobDescriptor to a vector of reference descriptors: void BlobClassifier::update(const Blob &referenceBlob) {   referenceBlobDescriptors.push_back(     createBlobDescriptor(referenceBlob)); } The clear method's implementation discards all the reference descriptors such that the BlobClassifier reverts to its initial, untrained state: void BlobClassifier::clear() {   referenceBlobDescriptors.clear(); } The implementation of the classify method relies on another helper method, findDistance. For each reference descriptor, classify calls findDistance to obtain a measure of dissimilarity between the query blob's descriptor and reference descriptor. We find the reference descriptor with the least distance (best similarity) and return its label as the classification result. If there are no reference descriptors, classify returns 0, the "unknown" label. 
Here is classify's implementation: void BlobClassifier::classify(Blob &detectedBlob) const {   BlobDescriptor detectedBlobDescriptor =     createBlobDescriptor(detectedBlob);   float bestDistance = FLT_MAX;   uint32_t bestLabel = 0;   for (const BlobDescriptor &referenceBlobDescriptor :       referenceBlobDescriptors) {     float distance = findDistance(detectedBlobDescriptor,       referenceBlobDescriptor);     if (distance < bestDistance) {       bestDistance = distance;       bestLabel = referenceBlobDescriptor.getLabel();     }   }   detectedBlob.setLabel(bestLabel); } The createBlobDescriptor helper method is responsible for calculating a normalized histogram of the Blob's image and its keypoint descriptors, and for using them to build a new BlobDescriptor. To calculate the (non-normalized) histogram, we use the cv::calcHist function. Among its arguments, it requires three arrays to specify the channels we want to use, the number of bins per channel, and the range of each channel's values. To normalize the resulting histogram, we divide by the number of pixels in the blob's image. The following code, pertaining to the histogram, is the first half of the implementation of createBlobDescriptor: BlobDescriptor BlobClassifier::createBlobDescriptor(   const Blob &blob) const {    const cv::Mat &mat = blob.getMat();   int numChannels = mat.channels();     // Calculate the histogram of the blob's image.   cv::Mat histogram;   int channels[] = { 0, 1, 2 };   int numBins[] = { HISTOGRAM_NUM_BINS_PER_CHANNEL,     HISTOGRAM_NUM_BINS_PER_CHANNEL,     HISTOGRAM_NUM_BINS_PER_CHANNEL };   float range[] = { 0.0f, 256.0f };   const float *ranges[] = { range, range, range };   cv::calcHist(&mat, 1, channels, cv::Mat(), histogram, 3,     numBins, ranges);     // Normalize the histogram.   histogram *= (1.0f / (mat.rows * mat.cols)); Now, we must convert the blob's image to grayscale and obtain keypoints and keypoint descriptors using the detect and compute methods of cv::Feature2D. With the normalized histogram and keypoint descriptors, we have everything that we need to construct and return a new BlobDescriptor. Here is the remainder of the implementation of createBlobDescriptor: // Convert the blob's image to grayscale.   cv::Mat grayMat;   switch (numChannels) {     case 4:       cv::cvtColor(mat, grayMat, cv::COLOR_BGRA2GRAY);       break;     default:       cv::cvtColor(mat, grayMat, cv::COLOR_BGR2GRAY);       break;   }     // Detect features in the grayscale image.   std::vector<cv::KeyPoint> keypoints;   featureDetectorAndDescriptorExtractor->detect(grayMat,     keypoints);     // Extract descriptors of the features.   cv::Mat keypointDescriptors;   featureDetectorAndDescriptorExtractor->compute(grayMat,     keypoints, keypointDescriptors);     return BlobDescriptor(histogram, keypointDescriptors,     blob.getLabel()); } The findDistance helper method performs histogram comparison using the cv::compareHist function and keypoint matching using the match method of cv::DescriptorMatcher. Each of the resulting keypoint matches has a distance, and we sum these distances. Then, as an overall measure of distance between the two blob descriptors, we return a weighted average of the histogram distance and the total keypoint matching distance. Here is the relevant code: float BlobClassifier::findDistance(   const BlobDescriptor &detectedBlobDescriptor,   const BlobDescriptor &referenceBlobDescriptor) const {    // Calculate the histogram distance.   
float histogramDistance = (float)cv::compareHist(     detectedBlobDescriptor.getNormalizedHistogram(),     referenceBlobDescriptor.getNormalizedHistogram(),     HISTOGRAM_COMPARISON_METHOD);     // Calculate the keypoint matching distance.   float keypointMatchingDistance = 0.0f;   std::vector<cv::DMatch> keypointMatches;   descriptorMatcher->match(     detectedBlobDescriptor.getKeypointDescriptors(),     referenceBlobDescriptor.getKeypointDescriptors(),     keypointMatches);   for (const cv::DMatch &keypointMatch : keypointMatches) {     keypointMatchingDistance += keypointMatch.distance;   }     return histogramDistance * HISTOGRAM_DISTANCE_WEIGHT +     keypointMatchingDistance * KEYPOINT_MATCHING_DISTANCE_WEIGHT; } That is the end of the blob classifier's code. Again, we see that a single class can provide useful, general-purpose computer vision functionality without a terribly complicated implementation. Perhaps this is a Zen moment; our previous work and studies have been a path to (some kind of) simplicity! Of course, OpenCV hides a lot of complexity for us in its implementations of histogram-related functions and keypoint-related classes, and in this way, the library offers us a relatively gentle path. For completeness, note that the BlobDescriptor class has a straightforward implementation. Create a new file, BlobDescriptor.cpp, and fill it with the following bodies for a constructor and getters: #include "BlobDescriptor.h"   BlobDescriptor::BlobDescriptor(const cv::Mat &normalizedHistogram, const cv::Mat &keypointDescriptors, uint32_t label) : normalizedHistogram(normalizedHistogram) , keypointDescriptors(keypointDescriptors) , label(label) { }   const cv::Mat &BlobDescriptor::getNormalizedHistogram() const {   return normalizedHistogram; } const cv::Mat &BlobDescriptor::getKeypointDescriptors() const {   return keypointDescriptors; } uint32_t BlobDescriptor::getLabel() const {   return label; }

Summary

Now, we have finished all the code for the detector, descriptor, and classifier! Again, for more information, refer to Chapter 5, Classifying Coins and Commodities in the book, iOS Application Development with OpenCV 3.

Resources for Article:

Further resources on this subject:
Making subtle color shifts with curves [article]
New functionality in OpenCV 3.0 [article]
Camera Calibration [article]
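As a short usage sketch (my own illustration, not code from the book), here is roughly how BlobDetector and BlobClassifier could be wired together in a desktop-style capture loop. The method calls match the declarations above, but the default-constructed BlobDetector, the reference image path, the label value 1, and the use of cv::VideoCapture and cv::imshow are assumptions made for the sake of the example; the book's iOS app feeds camera frames through the platform's own video APIs instead.

#include <vector>

#include <opencv2/imgcodecs.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>

#include "BlobDetector.h"
#include "BlobClassifier.h"

int main() {
  BlobDetector detector;
  BlobClassifier classifier;

  // Train the classifier with a single reference blob (label 1 is arbitrary).
  cv::Mat referenceImage = cv::imread("reference_coin.png");
  classifier.update(Blob(referenceImage, 1));

  cv::VideoCapture capture(0);
  cv::Mat frame;
  while (capture.read(frame)) {
    // Detect blobs at half resolution and draw their bounding rectangles.
    std::vector<Blob> blobs;
    detector.detect(frame, blobs, 0.5, true);

    // Classify each detected blob; getLabel() is 0 for unknown blobs.
    for (Blob &blob : blobs) {
      classifier.classify(blob);
    }

    cv::imshow("Blobs", frame);
    if (cv::waitKey(1) == 27) {  // Esc quits.
      break;
    }
  }
  return 0;
}
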
Aarthi Kumaraswamy
06 Apr 2018
5 min read

Creating a custom layout implementation for your Android app

In most applications, you'll find that a combination of the ConstraintLayout, CoordinatorLayout, and some of the more primitive layout classes (such as LinearLayout and FrameLayout) is more than enough to achieve any layout requirements you can dream up for your user interface. Every now and again though, you'll find yourself needing a custom layout manager to achieve an effect required for the application. Layout classes extend from the ViewGroup class, and their job is to tell their child widgets where to position themselves, and how large they should be. They do this in two phases: the measurement phase and the layout phase. All View implementations are expected to provide measurements for their actual size according to specifications. These measurements are then used by the View widget's parent ViewGroup to allocate the amount of space the widget will consume on the screen. For example, a View might be told to consume, at most, the screen width. The View must then determine how much of that space it actually requires, and records that size in its measured dimensions. The measured dimensions are then used by the parent ViewGroup during the layout process. The second phase is the layout phase, and it is conducted by the ViewGroup parent of each View widget. This phase positions the View on the screen, relative to its parent ViewGroup location, and specifies the actual size that the widget will consume on the screen (typically based on the measured size calculated in the measurement phase). When you implement your own ViewGroup, you'll need to ensure that all of your child View widgets are given a chance to measure themselves before you perform the actual layout operations. Let's build a layout class to arrange its children in a circle. To keep the implementation simple, we'll assume that all the child widgets are the same size (for example, if they were all icons):
Right-click on the widget package in the travel claim example app, and select New|Java Class.
Name the new class CircleLayout.
Change the Superclass to android.view.ViewGroup.
Click OK to create the new class.
Declare the standard ViewGroup constructors: public CircleLayout(final Context context) {  super(context); } public CircleLayout(    final Context context,    final AttributeSet attrs) {  super(context, attrs); } public CircleLayout(      final Context context,      final AttributeSet attrs,      final int defStyleAttr) {  super(context, attrs, defStyleAttr); }
Override the onMeasure method to calculate the size of the CircleLayout and all of its child View widgets. The measurement specifications are passed in as int values, which are interpreted using the static methods in the MeasureSpec class. Measurement specifications come in several flavors, most commonly at most and exactly, and each has a size value attached. In this particular layout, we always measure the CircleLayout as the size given in the specification. This means that the CircleLayout will always consume the maximum amount of space available.
It also expects all of its children to be able to specify sizes without the match_parent attribute (as this will cause each child to take up all the available space): @Override protected void onMeasure(    final int widthMeasureSpec,    final int heightMeasureSpec) {  super.onMeasure(widthMeasureSpec, heightMeasureSpec);  measureChildren(widthMeasureSpec, heightMeasureSpec);  setMeasuredDimension(        MeasureSpec.getSize(widthMeasureSpec),        MeasureSpec.getSize(heightMeasureSpec)); } The next method to implement is the onLayout method. This performs the actual arrangement of the child View widgets within the CircleLayout by invoking their layout method. The layout method should never be overridden, because it's closely tied to the platform and performs several other important actions (such as notifying layout listeners). Instead, you should override onLayout but still invoke layout on each child. CircleLayout assumes that all the child View widgets are of the same size (and forces this as part of the onLayout implementation). This onLayout method simply calculates the available space, and then positions the child View widgets in a circle around the outside edge:
@Override
protected void onLayout(
    final boolean changed,
    final int left,
    final int top,
    final int right,
    final int bottom) {
  final int childCount = getChildCount();
  if (childCount == 0) {
    return;
  }
  final int width = right - left;
  final int height = bottom - top;
  // if we have children, we assume they're all the same size
  final int childrenWidth = getChildAt(0).getMeasuredWidth();
  final int childrenHeight = getChildAt(0).getMeasuredHeight();
  final int boxSize = Math.min(
      width - childrenWidth,
      height - childrenHeight);
  for (int i = 0; i < childCount; i++) {
    final View child = getChildAt(i);
    final int childWidth = child.getMeasuredWidth();
    final int childHeight = child.getMeasuredHeight();
    final double x = Math.sin((Math.PI * 2.0)
        * ((double) i / (double) childCount));
    final double y = -Math.cos((Math.PI * 2.0)
        * ((double) i / (double) childCount));
    final int childLeft = (int) (x * (boxSize / 2))
        + (width / 2) - (childWidth / 2);
    final int childTop = (int) (y * (boxSize / 2))
        + (height / 2) - (childHeight / 2);
    final int childRight = childLeft + childWidth;
    final int childBottom = childTop + childHeight;
    child.layout(childLeft, childTop, childRight, childBottom);
  }
}
Although the implementation of the onLayout method is quite long, it's also relatively simple. Most of the code is concerned with determining the desired position of the child View widgets. Layout code needs to execute as quickly as possible, and should avoid allocating any objects during the onMeasure and onLayout methods (similar to the rules of onDraw). Layout is a critical part of building the screen from a performance standpoint, because no rendering can actually occur without the layout being completed. The layout will also be rerun every time the layout changes its structure, for example, if you add or remove any child View widgets, or change the size or position of the ViewGroup. Changing the size of a ViewGroup might happen on every frame if you use a CoordinatorLayout, where the ViewGroup is being collapsed (or if you change its size as part of a property-animation). You read an excerpt from the book, Hands-On Android UI Development by Jason Morris. For more recipes on cutting edge Android UI tasks such as creating themes, animations, custom widgets and more, give this book a try.
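As a quick usage sketch (my addition, not from the book or the excerpt), an Activity could populate a CircleLayout programmatically as shown below; the Activity class name, the import for CircleLayout, and the drawable resource are placeholders. Because CircleLayout also declares the (Context, AttributeSet) constructor, it could equally be referenced from an XML layout by its fully qualified class name.

import android.app.Activity;
import android.os.Bundle;
import android.view.ViewGroup;
import android.widget.ImageView;

public class CircleDemoActivity extends Activity {

    @Override
    protected void onCreate(final Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        final CircleLayout circle = new CircleLayout(this);
        for (int i = 0; i < 8; i++) {
            // Every child is the same size, which is what CircleLayout expects.
            final ImageView icon = new ImageView(this);
            icon.setImageResource(R.drawable.ic_sample); // placeholder drawable
            circle.addView(icon, new ViewGroup.LayoutParams(
                    ViewGroup.LayoutParams.WRAP_CONTENT,
                    ViewGroup.LayoutParams.WRAP_CONTENT));
        }
        setContentView(circle);
    }
}
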

Packt Editorial Staff
04 Jul 2019
6 min read

Are you looking at transitioning from being a developer to manager? Here are some leadership roles to consider

What does the phrase "a manager" really mean anyway? It means different things to different people, and the title is often applied loosely to positions that are really closer to an analyst-level profile! Although the term is common, it is worth defining what it really means, especially in the context of software development. This article is an excerpt from the book The Successful Software Manager written by an internationally experienced IT manager, Herman Fung. The book is a comprehensive and practical guide to managing software developers and software customers, and to deciding what software needs to be built, not how to build it. In this article, we'll look into aspects you must be aware of before making the move to become a manager in the software industry. A simple distinction I once used to illustrate the difference between an analyst and a manager is that while an analyst identifies, collects, and analyzes information, a manager uses this analysis and makes decisions, or more accurately, is responsible and accountable for the decisions they make. The structure of software companies is now enormously diverse and varies a lot from one to another, which has an obvious impact on how the manager's role and responsibilities are defined; these will be unique to each company. Even within the same company, the role is subject to change from time to time, as the company itself changes. Broadly speaking, a manager within software development can be classified into three categories, as we will now discuss.

Team Leader/Manager

This role is often a lead developer who also doubles up as the team spokesperson and single point of contact. They'll typically be the most senior and knowledgeable member of a small group of developers who work on the same project, product, and technology. There is often a direct link between each developer in the team and their code, which means the team manager has a direct responsibility to ensure the product as a whole works. Usually, the team manager is also asked to fulfill people management duties, such as performance reviews and appraisals, and day-to-day HR responsibilities.

Development/Delivery Manager

This person could be either a techie or a non-techie. They will have a good understanding of the requirements, design, code, and end product. They will run workshops and huddles to facilitate better overall team working and delivery. This role may include setting up visual aids, such as team/project charts or boards. In a matrix management model, where developers and other experts are temporarily asked to work in project teams, the development manager will not be responsible for HR and people management duties.

Project Manager

This person is most probably a non-techie, but there are exceptions, and being technical can be a distinct advantage on certain projects. Most importantly, a project manager will be process-focused and output-driven and will focus on distributing tasks to individuals. They are not expected to jump in to solve technical problems, but they are responsible for ensuring that the proper resources are available, while managing expectations. Specifically, they take part in managing the project budget, timeline, and risks. They should also be aware of the political landscape and management agenda within the organization to be able to navigate through them. The project manager ensures the project follows the required methodology or process framework mandated by the Project Management Office (PMO).
They will not have people-management responsibilities for project team members.

Agile practitioner

As with all roles in today's world of tech, these categories vary and overlap. They can even be held by the same person, which is becoming increasingly common. They are also constantly evolving, which exemplifies the need to learn and grow continually, regardless of your role or position. If you are a true Agile practitioner, you may take issue with these generalized categories (Team Leader, Development Manager, and Project Manager), and you'd be right to do so! These categories are most applicable to an organization that practises the traditional Waterfall model. Without diving into the everlasting Waterfall vs Agile debate, let's just say that these are the categories that transcend any methodology. Even if they're not referred to by these names, they are the roles that need to be performed, to varying degrees, at various times. For completeness, it is worth noting one role specific to Agile: the scrum master.

Scrum master

A scrum master is a role often compared, rightly or wrongly, with that of the project manager. The key difference is that their focus is on facilitation and coaching, instead of organizing and control. This difference is as much a mindset as it is a strict practice, and is often described in terms of Servant Leadership. I believe a good scrum master will show traits of a good project manager at various times, and vice versa. This is especially true in ensuring that there is clear communication at all times and that the team stays focused on delivering together. Yet, as we look back at all these roles, it's worth remembering that with the advent of new disciplines such as big data, blockchain, artificial intelligence, and machine learning, there are new categories and opportunities to move from a developer role into a management position, for example, as an algorithm manager or data manager. Transitioning, growing, progressing, or simply changing from a developer to a manager is a wonderfully rewarding journey that is unique to everyone. After clarifying what being a "modern manager" really means, and the broad categories applicable in software development (Team / Development / Project / Agile), the overarching and often key consideration for developers is whether the move means they will be managing people and writing less code. In this article, we looked into the different leadership roles that are available to developers planning their career progression. Develop crucial skills to enhance your performance and advance your career with The Successful Software Manager written by Herman Fung.
"Developers don't belong on a pedestal, they're doing a job like everyone else" – April Wensel on toxic tech culture and Compassionate Coding [Interview]
Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"
'I code in my dreams too', say developers in Jetbrains State of Developer Ecosystem 2019 Survey

Packt
20 Feb 2013
3 min read

So, what is Spring for Android?

(For more resources related to this topic, see here.)

RestTemplate

The RestTemplate module is a port of the Java-based REST client RestTemplate, which first appeared in 2009 in Spring MVC. Like the other Spring template counterparts (JdbcTemplate, JmsTemplate, and so on), its aim is to bring to Java developers (and thus Android developers) a high-level abstraction of lower-level Java APIs; in this case, it eases the development of HTTP clients. In its Android version, RestTemplate relies on the core Java HTTP facilities (HttpURLConnection) or the Apache HTTP Client. Depending on the Android version your app runs on, RestTemplate for Android can pick the most appropriate one for you, in line with the Android developers' recommendations. See http://android-developers.blogspot.ca/2011/09/androids-http-clients.html. This blog post explains why, in certain cases, Apache HTTP Client is preferred over HttpURLConnection. RestTemplate for Android also supports gzip compression and different message converters to convert your Java objects from and to JSON, XML, and so on. (A short usage sketch of RestTemplate appears at the end of this article.)

Auth/Spring Social

The goal of the Spring Android Auth module is to let an Android app gain authorization to a web service provider using OAuth (Version 1 or 2). OAuth is probably the most popular authorization protocol (and it is worth mentioning that it is an open standard) and is currently used by Facebook, Twitter, Google apps (and many others) to let third-party applications access users' accounts. The Spring for Android Auth module is based on several Spring libraries because it needs to securely (with cryptography) persist (via JDBC) a token obtained via HTTP; here is a list of the needed libraries for OAuth:
Spring Security Crypto: To encrypt the token
Spring Android OAuth: This extends Spring Security Crypto, adding a dedicated encryptor for Android and an SQLite-based persistence provider
Spring Android Rest Template: To interact with the HTTP services
Spring Social Core: The OAuth workflow abstraction
While performing the OAuth workflow, we will also need the browser to take the user to the service provider's authentication page; for example, Twitter presents its OAuth authentication dialog at this step.

What Spring for Android is not

SpringSource (the company behind Spring for Android) is very famous among Java developers. Their most popular product is the Spring Framework for Java, which includes a dependency injection framework (also called an inversion of control framework). Spring for Android does not bring inversion of control to the Android platform. In its very first release (1.0.0.M1), Spring for Android brought a common logging facade for Android; the authors removed it in the next version.

Summary

In this article, we have learned that Spring for Android eases the development of Android applications. We looked at its main modules and their functions. We also saw, in brief, what a dependency injection framework is, and that Spring for Android does not bring inversion of control to the Android platform.

Resources for Article:

Further resources on this subject:
Top 5 Must-have Android Applications [Article]
Creating, Compiling, and Deploying Native Projects from the Android NDK [Article]
Manifest Assurance: Security and Android Permissions for Flash [Article]
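As promised above, here is a minimal, hypothetical sketch of the RestTemplate module in use; the URL is a placeholder and the code is not taken from the article:

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

public class RestTemplateSketch {

    public String fetchJson() {
        // The default constructor registers a basic set of message
        // converters, including one for plain strings.
        RestTemplate restTemplate = new RestTemplate();

        // Ask the server for a gzip-compressed response.
        HttpHeaders headers = new HttpHeaders();
        headers.add("Accept-Encoding", "gzip");
        HttpEntity<Void> requestEntity = new HttpEntity<Void>(headers);

        ResponseEntity<String> response = restTemplate.exchange(
                "https://example.com/api/items.json", // placeholder URL
                HttpMethod.GET, requestEntity, String.class);
        return response.getBody();
    }
}

To map the JSON directly onto your own Java classes instead of a raw string, you would register an appropriate message converter (for example, a Jackson- or Gson-based one) on the template.
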

Sunith Shetty
12 Jun 2018
16 min read

Building VR experiences with React VR 2.0: How to create maze that's new every time you play

In today's tutorial, we will examine the functionality required to build a simple maze. There are a few ways we could build a maze. The most straightforward way would be to fire up our 3D modeler package (say, Blender) and create a labyrinth out of polygons. This would work fine and could be very detailed. However, it would also be very boring. Why? The first time through the maze will be exciting, but after a few tries, you'll know the way through. When we construct VR experiences, we usually want people to visit often and have fun every time. This tutorial is an excerpt from a book written by John Gwinner titled Getting Started with React VR. In this book, you will learn how to create amazing 360 and virtual reality content that runs directly in your browsers. A modeled labyrinth would be boring. Life is too short to do boring things. So, we want to generate a Maze randomly. This way, you can change the Maze every time so that it'll be fresh and different. The way to do that is through random numbers; to ensure that the Maze doesn't shift around us while we play, we actually want to do it with pseudo-random numbers. To start doing that, we'll need a basic application created. Please go to your VR directory and create an application called 'WalkInAMaze': react-vr init WalkInAMaze

Almost random–pseudo random number generators

To have any replay value, or to be able to compare scores between people, we really need a pseudo-random number generator. The built-in JavaScript Math.random() cannot be seeded, so it gives you a different, unrepeatable sequence every time. We need a pseudo-random number generator that takes a seed value. If you give the same seed to the random number generator, it will generate the same sequence of random numbers. (They aren't completely random but are very close.) Random number generators are a complex topic; for example, they are used in cryptography, and if your random number generator isn't completely random, someone could break your code. We aren't so worried about that, we just want repeatability. Although the UI for this may be a bit beyond the scope of this book, creating the Maze in a way that clicking on Refresh won't generate a totally different Maze is really a good thing and will avoid frustration on the part of the user. This will also allow two users to compare scores; we could persist a board number for the Maze and show this. This may be out of scope for our book; however, having a predictable Maze will help immensely during development. If it wasn't for this, you might get lost while working on your world. (Well, probably not, but it makes testing easier.)

Including library code from other projects

Up to this point, I've shown you how to create components in React VR (or React). JavaScript interestingly has a historical issue with include. With C++, Java, or C#, you can include a file in another file or make a reference to a file in a project. After doing that, everything in those other files, such as functions, classes, and global properties (variables), is then usable from the file in which you've issued the include statement. With a browser, the concept of "including" JavaScript is a little different. With Node.js, we use package.json to indicate what packages we need.
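For example, after running the install commands shown a little further below, the dependencies section of package.json would contain entries roughly like the following, alongside the existing React VR dependencies (the version numbers here are purely illustrative):

{
  "dependencies": {
    "mersenne-twister": "^1.1.0",
    "react-vr-gaze-button": "^1.0.0"
  }
}
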
To bring those packages into our code, we will use the following syntax in our .js files: var MersenneTwister = require('mersenne-twister'); Then, instead of using Math.random(), we will create a new random number generator and pass a seed, as follows: var rng = new MersenneTwister(this.props.Seed); From this point on, you just call rng.random() instead of Math.random(). We can just use npm install <package> and the require statement for properly formatted packages. Much of this can be done for you by executing the npm command: npm install mersenne-twister --save Remember, the --save option updates the manifest (package.json) in our project. While we are at it, we can install another package we'll need later: npm install react-vr-gaze-button --save Now that we have a good random number generator, let's use it to complicate our world.

The Maze render()

How do we build a Maze? I wanted to develop some code that dynamically generates the Maze; anyone could model it in a package, but a VR world should be living. Having code that can dynamically build a Maze of any size (to a point) will allow repeat playing of your world. There are a number of JavaScript packages out there for printing mazes. I took one that seemed to be everywhere, in the public domain, on GitHub and modified it for HTML. This app consists of two parts: Maze.html and makeMaze.js. Neither is React, but both are JavaScript. It works fairly well, although the numbers don't really represent exactly how wide it is. First, I made sure that only one x was displaying, both vertically and horizontally. This will not print well (lines are usually taller than wide), but we are building a virtually real Maze, not a paper Maze. The Maze that we generate with the files at Maze.html (localhost:8081/vr/maze.html) and the JavaScript file makeMaze.js will now look like this: x1xxxxxxx x x x xxx x x x x x x x x xxxxx x x x x x x x x x x x x 2 xxxxxxxxx It is a little hard to read, but you can count the squares vs. xs. Don't worry, it's going to look a lot fancier. Now that we have the HTML version of a Maze working, we'll start building the hedges. This is a slightly larger piece of code than I expected, so I broke it into pieces and loaded the Maze object onto GitHub rather than pasting the entire code here, as it's long. You can find a link for the source at: http://bit.ly/VR_Chap11

Adding the floors and type checking

One of the things that looks odd with a 360 Pano background, as we've talked about before, is that you can seem to "float" against the ground. One fix, other than fixing the original image, is to simply add a floor. This is what we did with the Space Gallery, and it looks pretty good as we were assuming we were floating in space anyway. For this version, let's import a ground square. We could use a large square that would encompass the entire Maze; we'd then have to resize it if the size of the Maze changes. I decided to use a smaller cube and alter it so that it's "underneath" every cell of the Maze. This would allow us some leeway in the future to rotate the squares for worn paths, water traps, or whatever. To make the floor, we will use a simple cube object that I altered slightly and UV mapped. I used Blender for this. We also import a Hedge model, and a Gem, which will represent where we can teleport to.
Inside 'Maze.js' we added the following code: import Hedge from './Hedge.js'; import Floor from './Hedge.js'; import Gem from './Gem.js'; Then, inside the Maze.js we could instantiate our floor with the code: <Floor X={-2} Y={-4}/> Notice that we don't use 'vr/components/Hedge.js' when we do the import; we're inside Maze.js. However, in index.vr.js, to include the Maze, we do need: import Maze from './vr/components/Maze.js'; It's slightly more complicated though. In our code, the Maze builds the data structures when props have changed; when moving, if the maze needs rendering again, it simply loops through the data structure and builds a collection (mazeHedges) with all of the floors, teleport targets, and hedges in it. Given this, to create the floors, the line in Maze.js is actually: mazeHedges.push(<Floor {...cellLoc} />); Here is where I ran into two big problems, and I'll show you what happened so that you can avoid these issues. Initially, I was bashing my head against the wall trying to figure out why my floors looked like hedges. This one is pretty easy: we imported Floor from the Hedge.js file. The floors will look like hedges (did you notice this in my preceding code? If so, I did this on purpose as a learning experience. Honest). This is an easy fix. Make sure that you code import Floor from './floor.js'; note that Floor is not type-checked. (It is, after all, JavaScript.) I thought this was odd, as the Hedge.js file exports a Hedge object, not a Floor object, but be aware that you can rename the objects as you import them. The second problem I had was more of a simple goof that is easy to make if you aren't really thinking in React. You may run into this. JavaScript is a lovely language, but sometimes I miss a strongly typed language. Here is what I did: <Maze SizeX='4' SizeZ='4' CellSpacing='2.1' Seed='7' /> Inside the maze.js file, I had code like this: for (var j = 0; j < this.props.SizeX + 2; j++) { After some debugging, I found out that the value of j was going from 0 to 42. Why did it get 42 instead of 6? The reason was simple. We need to fully understand JavaScript to program complex apps. The mistake was in initializing SizeX to be '4'; this makes it a string variable. When evaluating the loop condition, JavaScript takes the integer 2, adds it to the string '4', and gets the string '42'; the < comparison then converts that string to the number 42, so the loop runs until j reaches 42 instead of 6. When this happens, very weird things occur. When we were building the Space Gallery, we could easily use the '5.1' values for the input to the box: <Pedestal MyX='0.0' MyZ='-5.1'/> Then, later use the transform statement below inside the class: transform: [ { translate: [ this.props.MyX, -1.7, this.props.MyZ] } ] React/JavaScript will put the string values into this.props.MyX, then realize it needs a number, and then quietly do the conversion. However, when you get more complicated objects, such as our Maze generation, you won't get away with this. Remember that your code isn't "really" JavaScript. It's processed. At the heart, this processing is fairly simple, but the implications can be a killer. Pay attention to what you code. With a loosely typed language such as JavaScript, with React on top, any mistakes you make will be quietly converted to something you didn't intend. You are the programmer. Program correctly. (A small example of passing numeric props more safely appears at the end of this article.) So, back to the Maze. The Hedge and Floor are straightforward copies of the initial Gem code.
Let's take a look at our starting Gem, although note it gets a lot more complicated later (and in your source files): import React, { Component } from 'react'; import { asset, Box, Model, Text, View } from 'react-vr'; export default class Gem extends Component { constructor() { super(); this.state = { Height: -3 }; } render() { return ( <Model source={{ gltf2: asset('TeleportGem.gltf'), }} style={{ transform: [{ translate: [this.props.X, this.state.Height, this.props.Z] }] }} /> ); } } The Hedge and Floor are essentially the same thing. (We could have made a prop be the file loaded, but we want a different behavior for the Gem, so we will edit this file extensively.) To run this sample, first, we should have created a directory as you have before, called WalkInAMaze. Once you do this, download the files from the Git source for this part of the article (http://bit.ly/VR_Chap11). Once you've created the app, copied the files, and fired it up (go to the WalkInAMaze directory and type npm start), you should see something like the following once you look around, except there is a bug. This is what the maze should look like (if you use the file 'MazeHedges2DoubleSided.gltf' in Hedge.js, in the <Model> statement). Now, how did we get those neat-looking hedges in the game? (OK, they are pretty low poly, but it is still pushing it.) One of the nice things about the pace of improvement on web standards is their new features. Instead of just the OBJ file format, React VR now has the capability to load glTF files.

Using the glTF file format for models

glTF is a new file format that works pretty naturally with WebGL. There are exporters for many different CAD packages. The reason I like glTF files is that getting a proper export is fairly straightforward. Lightwave OBJ files are an industry standard, but in the case of React, not all of the options are imported. One major one is transparency. The OBJ file format allows that, but as of the time of writing this book, it wasn't an option. Many other graphics shaders that modern hardware can handle can't be described with the OBJ file format. This is why glTF files are the next best alternative for WebVR. It is a modern and evolving format, and work is being done to enhance the capabilities and make a fairly good match between what WebGL can display and what glTF can export. This tutorial, however, is about interacting with the world, so I'll only briefly mention how to export glTF files and will provide the objects, especially the Hedge, as glTF models. The nice thing with glTF from the modeling side is that if you use their material specifications, for example, for Blender, then you don't have to worry that the export won't be quite right. Today's Physically Based Rendering (PBR) tends to use the metallic/roughness model, and these materials import better than trying to figure out how to convert PBR materials into the OBJ file's specular lighting model. Here is the metallic-looking Gem that I'm using as the gaze point. Using the glTF Metallic Roughness model, we can assign the texture maps that programs such as Substance Designer calculate, and import them easily. The resulting figures look metallic where they are supposed to be metallic and dull where the paint still holds on. I didn't use Ambient Occlusion here, as this is a very convex model; something with more surface depressions would look fantastic with Ambient Occlusion. It would also look great with architectural models, for example, furniture.
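For reference, inside an exported .gltf file a metallic/roughness material is stored as plain JSON along these lines; the material name, factor values, and texture indices below are illustrative and not taken from the book's assets:

{
  "materials": [
    {
      "name": "GemMetal",
      "pbrMetallicRoughness": {
        "baseColorTexture": { "index": 0 },
        "metallicRoughnessTexture": { "index": 1 },
        "metallicFactor": 1.0,
        "roughnessFactor": 0.4
      }
    }
  ]
}
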
To convert your models, there is user documentation at http://bit.ly/glTFExporting. You will need to download and install the Blender glTF exporter. Or, you can just download the files I have already converted. If you do the export, in brief, the steps are as follows:
Download the files from http://bit.ly/gLTFFiles. You will need the gltf2_Principled.blend file, assuming that you are on a newer version of Blender.
In Blender, open your file, then link to the new materials. Go to File->Link, then choose the gltf2_Principled.blend file. Once you do that, drill into "NodeTree" and choose either glTF Metallic Roughness (for metal), or glTF specular glossiness for other materials.
Choose the object you are going to export; make sure that you choose the Cycles renderer.
Open the Node Editor in a window. Scroll down to the bottom of the Node Editor window, and make sure that the box Use Nodes is checked.
Add the node via the nodal menu, Add->Group->glTF Specular Glossiness or Metallic Roughness.
Once the node is added, go to Add->Texture->Image texture. Add as many image textures as you have image maps, then wire them up. You should end up with something similar to this diagram.
To export the models, I recommend that you disable camera export and combine the buffers unless you think you will be exporting several models that share geometry or materials. Now, to include the exported glTF object, use the <Model> component as you would with an OBJ file, except you have no MTL file. The materials are all described inside the .glTF file. You just put the filename as a gltf2 prop in the <Model> component: <Model source={{ gltf2: asset('TeleportGem2.gltf'),}} ... To find out more about these options and processes, you can go to the glTF export web site. This site also includes tutorials on major CAD packages and the all-important glTF shaders (for example, the Blender model I showed earlier). I have loaded several .OBJ files and .glTF files so you can experiment with different combinations of low poly and transparency. When glTF support was added in React VR version 2.0.0, I was very excited, as transparency maps are very important for a lot of VR models, especially vegetation, just like our hedges. However, it turns out there is a bug in WebGL or three.js that does not render the transparency properly. As a result, I have gone with a low polygon version in the files on the GitHub site; the pictures above were taken with the file MazeHedges2DoubleSided.gltf in the Hedges.js file (in vr/components). If you get 404 errors, check the paths in the glTF file. It depends on which exporter you use. If you are working with Blender, the gltf2 exporter from the Khronos group calculates the path correctly, but the one from Kupoman has options, and you could export the wrong paths. We discussed important mechanics of props, state, and events. We also discussed how to create a maze using pseudo-random number generators to make sure that our props and state didn't change chaotically. To know more about how to create, move around in, and make worlds react to us in a Virtual Reality world, including basic teleport mechanics, do check out the book Getting Started with React VR.
Read More: Google Daydream powered Lenovo Mirage solo hits the market
Google open sources Seurat to bring high precision graphics to Mobile VR
Oculus Go, the first stand alone VR headset arrives!
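As mentioned earlier, here is a small illustration (my own, not from the book) of the string-versus-number props pitfall. Passing the values in JSX braces keeps props such as SizeX numeric, so an expression like this.props.SizeX + 2 evaluates to 6 rather than the string '42'. The surrounding index.vr.js boilerplate is a simplified sketch:

import React from 'react';
import { AppRegistry, View } from 'react-vr';
import Maze from './vr/components/Maze.js';

export default class WalkInAMaze extends React.Component {
  render() {
    return (
      <View>
        {/* Braced props stay numeric; quoted props would arrive as strings. */}
        <Maze SizeX={4} SizeZ={4} CellSpacing={2.1} Seed={7} />
      </View>
    );
  }
}

AppRegistry.registerComponent('WalkInAMaze', () => WalkInAMaze);
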
Packt
25 Jan 2012
13 min read

Geolocation and Accelerometer APIs

(For more resources on iOS, see here.) The iOS family makes use of many onboard sensors including the three-axis accelerometer, digital compass, camera, microphone, and global positioning system (GPS). Their inclusion has created a world of opportunity for developers, and has resulted in a slew of innovative, creative, and fun apps that have contributed to the overwhelming success of the App Store. Determining your current location The iOS family of devices are location-aware, allowing your approximate geographic position to be determined. How this is achieved depends on the hardware present in the device. For example, the original iPhone, all models of the iPod touch, and Wi-Fi-only iPads use Wi-Fi network triangulation to provide location information. The remaining devices can more accurately calculate their position using an on-board GPS chip or cell-phone tower triangulation. The AIR SDK provides a layer of abstraction that allows you to extract location information in a hardware-independent manner, meaning you can access the information on any iOS device using the same code. This recipe will take you through the steps required to determine your current location. Getting ready An FLA has been provided as a starting point for this recipe. From Flash Professional, open chapter9recipe1recipe.fla from the code bundle which can be downloaded from http://www.packtpub.com/support.   How to do it... Perform the following steps to listen for and display geolocation data: Create a document class and name it Main. Import the following classes and add a member variable of type Geolocation: package { import flash.display.MovieClip; import flash.events.GeolocationEvent; import flash.sensors.Geolocation; public class Main extends MovieClip { private var geo:Geolocation; public function Main() { // constructor code } }}   Within the class' constructor, instantiate a Geolocation object and listen for updates from it: public function Main() { if(Geolocation.isSupported) { geo = new Geolocation(); geo.setRequestedUpdateInterval(1000); geo.addEventListener(GeolocationEvent.UPDATE, geoUpdated); }} Now, write an event handler that will obtain the updated geolocation data and populate the dynamic text fields with it: private function geoUpdated(e:GeolocationEvent):void { latitudeField.text = e.latitude.toString(); longitudeField.text = e.longitude.toString(); altitudeField.text = e.altitude.toString(); hAccuracyField.text = e.horizontalAccuracy.toString(); vAccuracyField.text = e.verticalAccuracy.toString(); timestampField.text = e.timestamp.toString();} Save the class file as Main.as within the same folder as the FLA. Move back to the FLA and save it too. Publish and test the app on your device. When launched for the first time, a native iOS dialog will appear. Tap the OK button to grant your app access to the device's location data.     Devices running iOS 4 and above will remember your choice, while devices running older versions of iOS will prompt you each time the app is launched.   The location data will be shown on screen and periodically updated. Take your device on the move and you will see changes in the data as your geographical location changes. How it works... AIR provides the Geolocation class in the flash.sensors package, allowing the location data to be retrieved from your device. To access the data, create a Geolocation instance and listen for it dispatching GeolocationEvent.UPDATE events. 
We did this within our document class' constructor, using the geo member variable to hold a reference to the object: geo = new Geolocation(); geo.setRequestedUpdateInterval(1000); geo.addEventListener(GeolocationEvent.UPDATE, geoUpdated); The frequency with which location data is retrieved can be set by calling the Geolocation.setRequestedUpdateInterval() method. You can see this in the earlier code where we requested an update interval of 1000 milliseconds. This only acts as a hint to the device, meaning the actual time between updates may be greater or smaller than your request. Omitting this call will result in the device using a default update interval. The default interval can be anything ranging from milliseconds to seconds depending on the device's hardware capabilities. Each UPDATE event dispatches a GeolocationEvent object, which contains properties describing your current location. Our geoUpdated() method handles this event by outputting several of the properties to the dynamic text fields sitting on the stage: private function geoUpdated(e:GeolocationEvent):void { latitudeField.text = e.latitude.toString(); longitudeField.text = e.longitude.toString(); altitudeField.text = e.altitude.toString(); hAccuracyField.text = e.horizontalAccuracy.toString(); vAccuracyField.text = e.verticalAccuracy.toString(); timestampField.text = e.timestamp.toString();} The following information was output: latitude and longitude; altitude; horizontal and vertical accuracy; and the timestamp. The latitude and longitude positions are used to identify your geographical location. Your altitude is also obtained and is measured in meters. As you move with the device, these values will update to reflect your new location. The accuracy of the location data is also shown and depends on the hardware capabilities of the device. Both the horizontal and vertical accuracy are measured in meters. Finally, a timestamp is associated with every GeolocationEvent object that is dispatched, allowing you to determine the actual time interval between each. The timestamp specifies the milliseconds that have passed since the app was launched. Some older devices that do not include a GPS unit only dispatch UPDATE events occasionally. Initially, one or two UPDATE events are dispatched, with additional events only being dispatched when location information changes noticeably. Also note the use of the static Geolocation.isSupported property within the constructor. Although this will currently return true for all iOS devices, it cannot be guaranteed for future devices. Checking for geolocation support is also advisable when writing cross-platform code. For more information, perform a search for flash.sensors.Geolocation and flash.events.GeolocationEvent within Adobe Community Help. There's more... The amount of information made available and the accuracy of that information depends on the capabilities of the device. Accuracy The accuracy of the location data depends on the method employed by the device to calculate your position. Typically, iOS devices with an on-board GPS chip will have an advantage over those that rely on Wi-Fi triangulation. For example, running this recipe's app on an iPhone 4, which contains a GPS unit, results in a horizontal accuracy of around 10 meters. The same app running on a third-generation iPod touch and relying on a Wi-Fi network reports a horizontal accuracy of around 100 meters. Quite a difference! Altitude support The current altitude can only be obtained from GPS-enabled devices. 
On devices without a GPS unit, the GeolocationEvent.verticalAccuracy property will return -1 and GeolocationEvent.altitude will return 0. A vertical accuracy of -1 indicates that altitude cannot be detected. You should be aware of, and code for these restrictions when developing apps that provide location-based services. Do not make assumptions about a device's capabilities. If your application relies on the presence of GPS hardware, then it is possible to state this within your application descriptor file. Doing so will prevent users without the necessary hardware from downloading your app from the App Store. Mapping your location The most obvious use for the retrieval of geolocation data is mapping. Typically, an app will obtain a geographic location and display a map of its surrounding area. There are several ways to achieve this, but launching and passing location data to the device's native maps application is possibly the easiest solution. If you would prefer an ActionScript solution, then there is the UMap ActionScript 3.0 API, which integrates with map data from a wide range of providers including Bing, Google, and Yahoo!. You can sign up and download the API from www.umapper.com. Calculating distance between geolocations When the geographic coordinates of two separate locations are known, it is possible to determine the distance between them. AIR does not provide an API for this but an AS3 solution can be found on the Adobe Developer Connection website at: http://cookbooks.adobe.com/index.cfm?event=showdetails&postId=5701. The UMap ActionScript 3.0 API can also be used to calculate distances. Refer to www.umapper.com. Geocoding Mapping providers, such as Google and Yahoo!, provide geocoding and reverse-geocoding web services. Geocoding is the process of finding the latitude and longitude of an address, whereas reverse-geocoding converts a latitude-longitude pair into a readable address. You can make HTTP requests from your AIR for iOS application to any of these services. As an example, take a look at the Yahoo! PlaceFinder web service at http://developer.yahoo.com/geo/placefinder. Alternatively, the UMap ActionScript 3.0 API integrates with many of these services to provide geocoding functionality directly within your Flash projects. Refer to the uMapper website. Gyroscope support Another popular sensor is the gyroscope, which is found in more recent iOS devices. While the AIR SDK does not directly support gyroscope access, Adobe has made available a native extension for AIR 3.0, which provides a Gyroscope ActionScript class. A download link and usage examples can be found on the Adobe Developer Connection site at www.adobe.com/devnet/air/native-extensions-for-air/extensions/gyroscope.html. Determining your speed and heading The availability of an on-board GPS unit makes it possible to determine your speed and heading. In this recipe, we will write a simple app that uses the Geolocation class to obtain and use this information. In addition, we will add compass functionality by utilizing the user's current heading. Getting ready You will need a GPS-enabled iOS device. The iPhone has featured an on-board GPS unit since the release of the 3G. GPS hardware can also be found in all cellular network-enabled iPads. From Flash Professional, open chapter9recipe2recipe.fla from the code bundle. Sitting on the stage are three dynamic text fields. The first two (speed1Field and speed2Field) will be used to display the current speed in meters per second and miles per hour respectively. 
We will write the device's current heading into the third—headingField. Also, a movie clip named compass has been positioned near the bottom of the stage and represents a compass with north, south, east, and west clearly marked on it. We will update the rotation of this clip in response to heading changes to ensure that it always points towards true north. How to do it... To obtain the device's speed and heading, carry out the following steps: Create a document class and name it Main. Add the necessary import statements, a constant, and a member variable of type Geolocation: package { import flash.display.MovieClip; import flash.events.GeolocationEvent; import flash.sensors.Geolocation; public class Main extends MovieClip { private const CONVERSION_FACTOR:Number = 2.237; private var geo:Geolocation; public function Main() { // constructor code } }} Within the constructor, instantiate a Geolocation object and listen for updates: public function Main() { if(Geolocation.isSupported) { geo = new Geolocation(); geo.setRequestedUpdateInterval(50); geo.addEventListener(GeolocationEvent.UPDATE, geoUpdated); }} We will need an event listener for the Geolocation object's UPDATE event. This is where we will obtain and display the current speed and heading, and also update the compass movie clip to ensure it points towards true north. Add the following method: private function geoUpdated(e:GeolocationEvent):void { var metersPerSecond:Number = e.speed; var milesPerHour:uint = getMilesPerHour(metersPerSecond); speed1Field.text = String(metersPerSecond); speed2Field.text = String(milesPerHour); var heading:Number = e.heading; compass.rotation = 360 - heading; headingField.text = String(heading);} Finally, add this support method to convert meters per second to miles per hour: private function getMilesPerHour(metersPerSecond:Number):uint { return metersPerSecond * CONVERSION_FACTOR;} Save the class file as Main.as. Move back to the FLA and save it too. Compile the FLA and deploy the IPA to your device. Launch the app. When prompted, grant your app access to the GPS unit. Hold the device in front of you and start turning on the spot. The heading (degrees) field will update to show the direction you are facing. The compass movie clip will also update, showing you where true north is in relation to your current heading. Take your device outside and start walking, or better still, start running. On average every 50 milliseconds you will see the top two text fields update and show your current speed, measured in both meters per second and miles per hour. How it works... In this recipe, we created a Geolocation object and listened for it dispatching UPDATE events. An update interval of 50 milliseconds was specified in an attempt to receive the speed and heading information frequently. Both the speed and heading information are obtained from the GeolocationEvent object, which is dispatched on each UPDATE event. The event is captured and handled by our geoUpdated() handler, which displays the speed and heading information from the accelerometer. The current speed is measured in meters per second and is obtained by querying the GeolocationEvent.speed property. Our handler also converts the speed to miles per hour before displaying each value within the appropriate text field. 
The following code does this: var metersPerSecond:Number = e.speed;var milesPerHour:uint = getMilesPerHour(metersPerSecond);speed1Field.text = String(metersPerSecond);speed2Field.text = String(milesPerHour); The heading, which represents the direction of movement (with respect to true north) in degrees, is retrieved from the GeolocationEvent.heading property. The value is used to set the rotation property of the compass movie clip and is also written to the headingField text field: var heading:Number = e.heading;compass.rotation = 360 - heading;headingField.text = String(heading); The remaining method is getMilesPerHour() and is used within geoUpdated() to convert the current speed from meters per second into miles per hour. Notice the use of the CONVERSION_FACTOR constant that was declared within your document class: private function getMilesPerHour(metersPerSecond:Number):uint { return metersPerSecond * CONVERSION_FACTOR;} Although the speed and heading obtained from the GPS unit will suffice for most applications, the accuracy can vary across devices. Your surroundings can also have an affect; moving through streets with tall buildings or under tree coverage can impair the readings. You can find more information regarding flash.sensors.Geolocation and flash.events.GeolocationEvent within Adobe Community Help. There's more... The following information provides some additional detail. Determining support Your current speed and heading can only be determined by devices that possess a GPS receiver. Although you can install this recipe's app on any iOS device, you won't receive valid readings from any model of iPod touch, the original iPhone, or W-Fi-only iPads. Instead the GeolocationEvent.speed property will return -1 and GeolocationEvent.heading will return NaN. If your application relies on the presence of GPS hardware, then it is possible to state this within the application descriptor file. Doing so will prevent users without the necessary hardware from downloading your app from the App Store. Simulating the GPS receiver During the development lifecycle it is not feasible to continually test your app in a live environment. Instead you will probably want to record live data from your device and re-use it during testing. There are various apps available that will log data from the sensors on your device. One such app is xSensor, which can be downloaded from iTunes or the App Store and is free. Its data sensor log is limited to 5KB but this restriction can be lifted by purchasing xSensor Pro. Preventing screen idle Many of this article's apps don't require you to touch the screen that often. Therefore you will be likely to experience the backlight dimming or the screen locking while testing them. This can be inconvenient and can be prevented by disabling screen locking.  
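If you would rather keep the screen awake from code instead of changing the device settings, AIR exposes a system idle mode for this purpose. The following is a minimal sketch of that approach; it assumes an AIR mobile profile where the flash.desktop APIs are available and is not part of this recipe's project:

import flash.desktop.NativeApplication;
import flash.desktop.SystemIdleMode;

// Prevent the backlight from dimming and the screen from locking
// while the app is running in the foreground.
NativeApplication.nativeApplication.systemIdleMode = SystemIdleMode.KEEP_AWAKE;

// Restore the default behavior once it is no longer needed,
// for example when the app is sent to the background.
NativeApplication.nativeApplication.systemIdleMode = SystemIdleMode.NORMAL;

Setting the idle mode back to NORMAL as soon as it is no longer required helps to preserve the device's battery.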

Introducing an Android platform

Packt
12 Sep 2013
9 min read
(For more resources related to this topic, see here.) Introducing an Android app Mobile software application that runs on Android is an Android app. The apps use the extension of .apk as the installer file extension. There are several popular examples of mobile apps, such as Foursquare, Angry Birds, and Fruit Ninja. Primarily in an Eclipse environment, we use Java, which is then compiled into Dalvik bytecode (not the ordinary Java bytecode). Android provides Dalvik virtual machine (DVM) inside Android (not Java virtual machine JVM). Dalvik VM does not ally with Java SE and Java ME libraries, and is built on an Apache Harmony java implementation. What is Dalvik virtual machine? Dalvik VM is a register-based architecture, authored by Dan Bornstein. It is being optimized for low memory requirements, and the virtual machine was slimmed down to use less space and less power consumption. Preparing for Android development Eclipse ADT In this part of the article, we will see how to install the development environment for Android on Eclipse Juno (4.2). Eclipse is a major IDE for Android development. We need to install an Eclipse extension ADT Android Development Toolkit (ADT) for development of the Android application. Debugging an Android project It is advisable to use the Log class for this purpose, the reason being we can filter, print different colors, and define log types. This could be one of the ways of debugging your program, by displaying variables value or parameters. To use Log, import android.util.Log, and use one the following methods to print messages to LogCat: v(String, String) (verbose) d(String, String) (debug) i(String, String) (information) w(String, String) (warning) e(String, String) (error) LogCat is used to view the internal log of the Android system. It is useful to trace any activity happening inside the device or emulator through the Android Debug Bridge (ADB). The Android project structure The following table illustrates the brief description of the important folders and files available in an Android project: Folder Functions /src The Java codes are placed in this folder. /gen It is generated automatically. /assets You can put your fonts, videos, and sounds here. It is more like a filesystem, and can also place CSS, JavaScript files, and so on. /libs It is an external library (normally in JAR). /res It contains images, layout, and global variables. /drawable-xhdpi It is used for extra high specification devices (for example, Tablet, Galaxy SIII, HTC One X). /drawable-hdpi  It is used for high specification phones (for example, SGSI, SGSII) /drawable-mdpi It is used for medium specification phones (for example, Galaxy W and HTC Desire). /drawable-ldpi  It is used for low specification phones (for example: Galaxy Y and HTC WildFire). /layout  It includes all the XML file for the screen(s) layout. /menu XML files for screen menu. /values It includes global constants. /values-v11 These are template styles definitions for devices with Honeycomb (Android API level 11). /values-v14 These are template styles definitions for devices with ICS (Android API level 14). AndroidManifest.xml This is one of important files to define the apps. This is the first file located by the Android OS in order to run the app. It contains the app's properties, activity declarations, and list of permissions.   Dalvik Debug Monitor Server (DDMS) DDMS is a must have tool to view the emulator/device activities. 
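As a quick illustration of the Log methods listed earlier, the sketch below writes one message of each type under a common tag so that they can be filtered in LogCat; the class name and the messages are placeholders rather than code from any particular project:

import android.util.Log;

public class LogExample {

    // An arbitrary tag used to filter these messages in LogCat
    private static final String TAG = "SimpleNumb3r5";

    public void reportStatus(int value) {
        Log.v(TAG, "Verbose: entering reportStatus()");
        Log.d(TAG, "Debug: current value is " + value);
        Log.i(TAG, "Info: processing started");
        Log.w(TAG, "Warning: the value looks unusually large");
        Log.e(TAG, "Error: something went wrong");
    }
}

The output of these calls appears in the LogCat view, which is available from the DDMS perspective described next.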
To access DDMS in the Eclipse, navigate to Windows | Open Perspective | Other, and then choose DDMS. By default it is available in the Android SDK (it's inside the folder android-sdk/tools by the file ddms). From this perspective the following aspects are available: Devices: The list of the devices and AVD that are connected to ADB Emulator control: It helps to carry out device functions LogCat: It views real-time system log messages Threads: It gives an idea of currently running threads within a VM Heap: It shows heap usage by application Allocation tracker: It provides information on memory allocation of objects File explorer: It explores the device filesystem Creating a new Android project using Eclipse ADT To create a new Android project in Eclipse, navigate to File | New | Project. A new project window will appear, from there choose Android | Android Application Project from the list. Then click on the Next button. Application Name: This is the name of your application, it will appear side-by-side to the launcher icon. Choose a project name that relevant to your application. Project Name: This is typically similar to your application name. Avoid having the same name with existing projects in Eclipse, it is not permitted. Package Name: This is the package name of the application. It will act as an ID in the Google Play app store if we wish to publish. Typically it will be the reverse of your domain name if we have one (since this is unique) followed by the application name, and a valid Java package name else we can have anything now and refactor it before publishing. Running the application on an Android device To run and deploy on real device, first install the driver of the device. This varies as per device model and manufacturer. These are a few links you could refer to: For Google Android devices refer to http://developer.android.com/sdk/win-usb.html. For others refer to http://www.teamandroid.com/download-android-usb-drivers/. Make sure the Android phone is connected to the computer through the USB cable. To check whether the phone is properly connected to your PC and in debug mode, please switch to the DDMS perspective. Adding multiple activity in Android application This exercise is to add an information screen on the SimpleNumb3r5 app. The information regarding the developer, e-mail, Facebook fan page, and other information is displayed. Since the screen contains a lot of text information including several pictures, so we make use of an HTML page as our approach here: Create an activity class to handle the new screen. Open the src folder, right-click on the package name (net.kerul.SimpleNumb3r5), and choose New | Other... from the selections, choose to add a new Android activity, and click on the Next button. Then, choose a blank activity and click on Next. Set the activity name as Info, and the wizard will suggest the screen layout as info_activity. Click on the Finish button. Adding the RadioGroup or RadioButton controls Android SDK provides two types of radio controls to be used in conjunction, where only one control can be chosen at a given time. RadioGroup (Android widget RadioGroup) is used to encapsulate a set of RadioButton controls for this purpose. Defining the preference screen Preferences are an important aspect of the Android applications. It allows users to have the choice to modify and personalize it. 
Preferences can be set in two ways: first method is to create the preferences.xml file in the res/xml directory, and second method is to set the preferences from the code. We will use the former also the easier one, by creating the preferences.xml file. Usually, there are five different preference views as listed in the following table: Views Description CheckBoxPreference It is a simple checkbox which returns true/false ListPreference It shows RadioGroup, where only 1 item can be selected EditTextPreference It shows dialog box to edit TextView, and returns String RingTonePreference It is a radioGroup that shows ringtone PreferenceCategory It is a category with preferences Fragment A fragment is an independent component that can be connected to an Activity or simply is subactivity. Typically it defines a part of UI but can also exist with no user interface, that is, headless. An instance of fragment must exist within an activity. Fragments ease the reuse of components for different layouts. Fragments are the way to support UI variances across different types of screens. The most popular use is of building single pane layouts for phone and multipane layouts for tablets (large screens). Adding an external library Android project – AdMob An Android application cannot achieve everything on its own, it will always need the company of external jars/libraries to achieve different goals and serve various purposes. Almost every free Android application published on store has advertisement embedded in it, which makes use of external component to achieve it. Incorporating advertisement in the Android application is a vital aspect of today's application development. In this article, we will continue on our DistanceConverter application, and make use of the external library, AdMob to incorporate advertisement in our application. Adding the AdMob SDK to the project Let's extract the previously downloaded AdMob SDK zip file, and we should get the folder GoogleAdMobAdsSdkAndroid-6.*.*, under that folder there is GoogleAdMobAdsSdk-6.x.x.jar. You should copy this JAR file in the libs folder of the project. Signing and distributing APK The Android package (APK), in simple terms is similar to the runnable JAR or executable file (on Windows OS) which consists of everything that is needed to run the application. The Android ecosystem uses a virtual machine, that is, Dalvik virtual machine (DVM) to run the Java applications. Dalvik uses its own bytecode which is quite different from the Java bytecode. Generating a private key An android application must be signed with our own private key. It identifies a person, corporate, or entity associated with the application. This can be generated using the program keytool from the Java SDK. The following command is used for generating the key: keytool -genkey -v -keystore <filename>.keystore -alias <key-name> -keyalg RSA -keysize 2048 -validity 10000 We can use different key for each published application, and specify different name to identify it. Also, Google expects validity of at least 25 years or more. A very important thing to consider is to keep back up and securely store key, because once it is compromised it impossible to update an already published application. Publishing to Google Play Publishing at Google Play is very simple and involves register for Google play. You just have to visit and register it at https://play.google.com/. It requires $25 USD to register, and is fairly straight and can take a few days until you get the final access. 
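Before uploading to Google Play, the APK has to be signed with the private key generated earlier and then aligned. The commands below sketch one common command-line flow using jarsigner (from the JDK) and zipalign (from the Android SDK tools); the keystore, alias, and APK names are placeholders, and the exact flags may differ between SDK versions:

# Sign the release APK with the key created using keytool
jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore my-release-key.keystore DistanceConverter-release-unsigned.apk my-key-alias

# Verify that the APK was signed correctly
jarsigner -verify -verbose -certs DistanceConverter-release-unsigned.apk

# Align the archive on 4-byte boundaries before publishing
zipalign -v 4 DistanceConverter-release-unsigned.apk DistanceConverter-release.apk

Alternatively, Eclipse ADT can perform the same steps through its Export Signed Application Package wizard (under Android Tools).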
Summary In this article, we learned how to install Eclipse Juno (the IDE), the Android SDK, and the testing platform. We also learned about fragments and their usage, and used them to provide a different landscape layout for our DistanceConverter application, as well as how to handle different screen types and persist state during screen mode changes. Resources for Article: Further resources on this subject: Installing Alfresco Software Development Kit (SDK) [Article] JBoss AS plug-in and the Eclipse Web Tools Platform [Article] Creating a pop-up menu [Article]

Directives and Services of Ionic

Packt
28 Jul 2015
18 min read
In this article by Arvind Ravulavaru, author of Learning Ionic, we are going to take a look at Ionic directives and services, which provides reusable components and functionality that can help us in developing applications even faster. (For more resources related to this topic, see here.) Ionic Platform service The first service we are going to deal with is the Ionic Platform service ($ionicPlatform). This service provides device-level hooks that you can tap into to better control your application behavior. We will start off with the very basic ready method. This method is fired once the device is ready or immediately, if the device is already ready. All the Cordova-related code needs to be written inside the $ionicPlatform.ready method, as this is the point in the app life cycle where all the plugins are initialized and ready to be used. To try out Ionic Platform services, we will be scaffolding a blank app and then working with the services. Before we scaffold the blank app, we will create a folder named chapter5. Inside that folder, we will run the following command: ionic start -a "Example 16" -i app.example.sixteen example16 blank Once the app is scaffolded, if you open www/js/app.js, you should find a section such as: .run(function($ionicPlatform) {   $ionicPlatform.ready(function() {     // Hide the accessory bar by default (remove this to show the accessory bar above the keyboard     // for form inputs)     if(window.cordova && window.cordova.plugins.Keyboard) {       cordova.plugins.Keyboard.hideKeyboardAccessoryBar(true);     }     if(window.StatusBar) {       StatusBar.styleDefault();     }   }); }) You can see that the $ionicPlatform service is injected as a dependency to the run method. It is highly recommended to use $ionicPlatform.ready method inside other AngularJS components such as controllers and directives, where you are planning to interact with Cordova plugins. In the preceding run method, note that we are hiding the keyboard accessory bar by setting: cordova.plugins.Keyboard.hideKeyboardAccessoryBar(true); You can override this by setting the value to false. Also, do notice the if condition before the statement. It is always better to check for variables related to Cordova before using them. The $ionicPlatform service comes with a handy method to detect the hardware back button event. A few (Android) devices have a hardware back button and, if you want to listen to the back button pressed event, you will need to hook into the onHardwareBackButton method on the $ionicPlatform service: var hardwareBackButtonHandler = function() {   console.log('Hardware back button pressed');   // do more interesting things here }         $ionicPlatform.onHardwareBackButton(hardwareBackButtonHandler); This event needs to be registered inside the $ionicPlatform.ready method preferably inside AngularJS's run method. The hardwareBackButtonHandler callback will be called whenever the user presses the device back button. A simple functionally that you can do with this handler is to ask the user if they want to really quit your app, making sure that they have not accidently hit the back button. Sometimes this may be annoying. Thus, you can provide a setting in your app whereby the user selects if he/she wants to be alerted when they try to quit. Based on that, you can either defer registering the event or you can unsubscribe to it. 
The code for the preceding logic will look something like this: .run(function($ionicPlatform) {     $ionicPlatform.ready(function() {         var alertOnBackPress = localStorage.getItem('alertOnBackPress');           var hardwareBackButtonHandler = function() {             console.log('Hardware back button pressed');             // do more interesting things here         }           function manageBackPressEvent(alertOnBackPress) {             if (alertOnBackPress) {                 $ionicPlatform.onHardwareBackButton(hardwareBackButtonHandler);             } else {                 $ionicPlatform.offHardwareBackButton(hardwareBackButtonHandler);             }         }           // when the app boots up         manageBackPressEvent(alertOnBackPress);           // later in the code/controller when you let         // the user update the setting         function updateSettings(alertOnBackPressModified) {             localStorage.setItem('alertOnBackPress', alertOnBackPressModified);             manageBackPressEvent(alertOnBackPressModified)         }       }); }) In the preceding code snippet, we are looking in localStorage for the value of alertOnBackPress. Next, we create a handler named hardwareBackButtonHandler, which will be triggered when the back button is pressed. Finally, a utility method named manageBackPressEvent() takes in a Boolean value that decides whether to register or de-register the callback for HardwareBackButton. With this set up, when the app starts we call the manageBackPressEvent method with the value from localStorage. If the value is present and is equal to true, we register the event; otherwise, we do not. Later on, we can have a settings controller that lets users change this setting. When the user changes the state of alertOnBackPress, we call the updateSettings method passing in if the user wants to be alerted or not. The updateSettings method updates localStorage with this setting and calls the manageBackPressEvent method, which takes care of registering or de-registering the callback for the hardware back pressed event. This is one powerful example that showcases the power of AngularJS when combined with Cordova to provide APIs to manage your application easily. This example may seem a bit complex at first, but most of the services that you are going to consume will be quite similar. There will be events that you need to register and de-register conditionally, based on preferences. So, I thought this would be a good place to share an example such as this, assuming that this concept will grow on you. registerBackButtonAction The $ionicPlatform also provides a method named registerBackButtonAction. This is another API that lets you control the way your application behaves when the back button is pressed. By default, pressing the back button executes one task. For example, if you have a multi-page application and you are navigating from page one to page two and then you press the back button, you will be taken back to page one. In another scenario, when a user navigates from page one to page two and page two displays a pop-up dialog when it loads, pressing the back button here will only hide the pop-up dialog but will not navigate to page one. The registerBackButtonAction method provides a hook to override this behavior. 
The registerBackButtonAction method takes the following three arguments: callback: This is the method to be called when the event is fired priority: This is the number that indicates the priority of the listener actionId (optional): This is the ID assigned to the action By default the priority is as follows: Previous view = 100 Close side menu = 150 Dismiss modal = 200 Close action sheet = 300 Dismiss popup = 400 Dismiss loading overlay = 500 So, if you want a certain functionality/custom code to override the default behavior of the back button, you will be writing something like this: var cancelRegisterBackButtonAction = $ionicPlatform.registerBackButtonAction(backButtonCustomHandler, 201); This listener will override (take precedence over) all the default listeners below the priority value of 201—that is dismiss modal, close side menu, and previous view but not above the priority value. When the $ionicPlatform.registerBackButtonAction method executes, it returns a function. We have assigned that function to the cancelRegisterBackButtonAction variable. Executing cancelRegisterBackButtonAction de-registers the registerBackButtonAction listener. The on method Apart from the preceding handy methods, $ionicPlatform has a generic on method that can be used to listen to all of Cordova's events (https://cordova.apache.org/docs/en/edge/cordova_events_events.md.html). You can set up hooks for application pause, application resume, volumedownbutton, volumeupbutton, and so on, and execute a custom functionality accordingly. You can set up these listeners inside the $ionicPlatform.ready method as follows: var cancelPause = $ionicPlatform.on('pause', function() {             console.log('App is sent to background');             // do stuff to save power         });   var cancelResume = $ionicPlatform.on('resume', function() {             console.log('App is retrieved from background');             // re-init the app         });           // Supported only in BlackBerry 10 & Android var cancelVolumeUpButton = $ionicPlatform.on('volumeupbutton', function() {             console.log('Volume up button pressed');             // moving a slider up         });   var cancelVolumeDownButton = $ionicPlatform.on('volumedownbutton', function() {             console.log('Volume down button pressed');             // moving a slider down         }); The on method returns a function that, when executed, de-registers the event. Now you know how to control your app better when dealing with mobile OS events and hardware keys. Content Next, we will take a look at content-related directives. The first is the ion-content directive. Navigation The next component we are going to take a look at is the navigation component. The navigation component has a bunch of directives as well as a couple of services. The first directive we are going to take a look at is ion-nav-view. When the app boots up, $stateProvider will look for the default state and then will try to load the corresponding template inside the ion-nav-view. Tabs and side menu To understand navigation a bit better, we will explore the tabs directive and the side menu directive. We will scaffold the tabs template and go through the directives related to tabs; run this: ionic start -a "Example 19" -i app.example.nineteen example19 tabs Using the cd command, go to the example19 folder and run this:   ionic serve This will launch the tabs app. 
If you open www/index.html file, you will notice that this template uses ion-nav-bar to manage the header with ion-nav-back-button inside it. Next open www/js/app.js and you will find the application states configured: .state('tab.dash', {     url: '/dash',     views: {       'tab-dash': {         templateUrl: 'templates/tab-dash.html',         controller: 'DashCtrl'       }     }   }) Do notice that the views object has a named object: tab-dash. This will be used when we work with the tabs directive. This name will be used to load a given view when a tab is selected into the ion-nav-view directive with the name tab-dash. If you open www/templates/tabs.html, you will find a markup for the tabs component: <ion-tabs class="tabs-icon-top tabs-color-active-positive">     <!-- Dashboard Tab -->   <ion-tab title="Status" icon-off="ion-ios-pulse" icon-on="ion- ios-pulse-strong" href="#/tab/dash">     <ion-nav-view name="tab-dash"></ion-nav-view>   </ion-tab>     <!-- Chats Tab -->   <ion-tab title="Chats" icon-off="ion-ios-chatboxes-outline" icon-on="ion-ios-chatboxes" href="#/tab/chats">     <ion-nav-view name="tab-chats"></ion-nav-view>   </ion-tab>     <!-- Account Tab -->   <ion-tab title="Account" icon-off="ion-ios-gear-outline" icon- on="ion-ios-gear" href="#/tab/account">     <ion-nav-view name="tab-account"></ion-nav-view>   </ion-tab>   </ion-tabs> The tabs.html will be loaded before any of the child tabs load, since tab state is defined as an abstract route. The ion-tab directive is nested inside ion-tabs and every ion-tab directive has an ion-nav-view directive nested inside it. When a tab is selected, the route with the same name as the name attribute on the ion-nav-view will be loaded inside the corresponding tab. Very neatly structured! You can read more about tabs directive and its services at http://ionicframework.com/docs/nightly/api/directive/ionTabs/. Next, we are going to scaffold an app using the side menu template and go through the navigation inside it; run this: ionic start -a "Example 20" -i app.example.twenty example20 sidemenu Using the cd command, go to the example20 folder and run this:   ionic serve This will launch the side menu app. We start off exploring with www/index.html. This file has only the ion-nav-view directive inside the body. Next, we open www/js/app/js. Here, the routes are defined as expected. But one thing to notice is the name of the views for search, browse, and playlists. It is the same—menuContent—for all: .state('app.search', {     url: "/search",     views: {       'menuContent': {         templateUrl: "templates/search.html"       }     } }) If we open www/templates/menu.html, you will notice ion-side-menus directive. It has two children ion-side-menu-content and ion-side-menu. The ion-side-menu-content displays the content for each menu item inside the ion-nav-view named menuContent. This is why all the menu items in the state router have the same view name. The ion-side-menu is displayed on the left-hand side of the page. You can set the location on the ion-side-menu to the right to show the side menu on the right or you can have two side menus. Do notice the menu-toggle directive on the button inside ion-nav-buttons. This directive is used to toggle the side menu. 
If you want to have the menu on both sides, your menu.html will look as follows: <ion-side-menus enable-menu-with-back-views="false">   <ion-side-menu-content>     <ion-nav-bar class="bar-stable">       <ion-nav-back-button>       </ion-nav-back-button>         <ion-nav-buttons side="left">         <button class="button button-icon button-clear ion- navicon" menu-toggle="left">         </button>       </ion-nav-buttons>       <ion-nav-buttons side="right">         <button class="button button-icon button-clear ion- navicon" menu-toggle="right">         </button>       </ion-nav-buttons>     </ion-nav-bar>     <ion-nav-view name="menuContent"></ion-nav-view>   </ion-side-menu-content>     <ion-side-menu side="left">     <ion-header-bar class="bar-stable">       <h1 class="title">Left</h1>     </ion-header-bar>     <ion-content>       <ion-list>         <ion-item menu-close ng-click="login()">           Login         </ion-item>         <ion-item menu-close href="#/app/search">           Search         </ion-item>         <ion-item menu-close href="#/app/browse">           Browse         </ion-item>         <ion-item menu-close href="#/app/playlists">           Playlists         </ion-item>       </ion-list>     </ion-content>   </ion-side-menu>   <ion-side-menu side="right">     <ion-header-bar class="bar-stable">       <h1 class="title">Right</h1>     </ion-header-bar>     <ion-content>       <ion-list>         <ion-item menu-close ng-click="login()">           Login         </ion-item>         <ion-item menu-close href="#/app/search">           Search         </ion-item>         <ion-item menu-close href="#/app/browse">           Browse         </ion-item>         <ion-item menu-close href="#/app/playlists">           Playlists         </ion-item>       </ion-list>     </ion-content>   </ion-side-menu> </ion-side-menus> You can read more about side menu directive and its services at http://ionicframework.com/docs/nightly/api/directive/ionSideMenus/. This concludes our journey through the navigation directives and services. Next, we will move to Ionic loading. Ionic loading The first service we are going to take a look at is $ionicLoading. This service is highly useful when you want to block a user's interaction from the main page and indicate to the user that there is some activity going on in the background. To test this, we will scaffold a new blank template and implement $ionicLoading; run this: ionic start -a "Example 21" -i app.example.twentyone example21 blank Using the cd command, go to the example21 folder and run this:   ionic serve This will launch the blank template in the browser. We will create an app controller and define the show and hide methods inside it. Open www/js/app.js and add the following code: .controller('AppCtrl', function($scope, $ionicLoading, $timeout) {       $scope.showLoadingOverlay = function() {         $ionicLoading.show({             template: 'Loading...'         });     };     $scope.hideLoadingOverlay = function() {         $ionicLoading.hide();     };       $scope.toggleOverlay = function() {         $scope.showLoadingOverlay();           // wait for 3 seconds and hide the overlay         $timeout(function() {             $scope.hideLoadingOverlay();         }, 3000);     };   }) We have a function named showLoadingOverlay, which will call $ionicLoading.show(), and a function named hideLoadingOverlay(), which will call $ionicLoading.hide(). 
We have also created a utility function named toggleOverlay(), which will call showLoadingOverlay() and after 3 seconds will call hideLoadingOverlay(). We will update our www/index.html body section as follows: <body ng-app="starter" ng-controller="AppCtrl">     <ion-header-bar class="bar-stable">         <h1 class="title">$ionicLoading service</h1>     </ion-header-bar>     <ion-content class="padding">         <button class="button button-dark" ng-click="toggleOverlay()">             Toggle Overlay         </button>     </ion-content> </body> We have a button that calls toggleOverlay(). If you save all the files, head back to the browser, and click on the Toggle Overlay button, you will see the following screenshot: As you can see, the overlay is shown till the hide method is called on $ionicLoading. You can also move the preceding logic inside a service and reuse it across the app. The service will look like this: .service('Loading', function($ionicLoading, $timeout) {     this.show = function() {         $ionicLoading.show({             template: 'Loading...'         });     };     this.hide = function() {         $ionicLoading.hide();     };       this.toggle= function() {         var self  = this;         self.show();           // wait for 3 seconds and hide the overlay         $timeout(function() {             self.hide();         }, 3000);     };   }) Now, once you inject the Loading service into your controller or directive, you can use Loading.show(), Loading.hide(), or Loading.toggle(). If you would like to show only a spinner icon instead of text, you can call the $ionicLoading.show method without any options: $scope.showLoadingOverlay = function() {         $ionicLoading.show();     }; Then, you will see this: You can configure the show method further. More information is available at http://ionicframework.com/docs/nightly/api/service/$ionicLoading/. You can also use the $ionicBackdrop service to show just a backdrop. Read more about $ionicBackdrop at http://ionicframework.com/docs/nightly/api/service/$ionicBackdrop/. You can also checkout the $ionicModal service at http://ionicframework.com/docs/api/service/$ionicModal/; it is quite similar to the loading service. Popover and Popup services Popover is a contextual view that generally appears next to the selected item. This component is used to show contextual information or to show more information about a component. To test this service, we will be scaffolding a new blank app: ionic start -a "Example 23" -i app.example.twentythree example23 blank Using the cd command, go to the example23 folder and run this:   ionic serve This will launch the blank template in the browser. We will add a new controller to the blank project named AppCtrl. We will be adding our controller code in www/js/app.js. .controller('AppCtrl', function($scope, $ionicPopover) {       // init the popover     $ionicPopover.fromTemplateUrl('button-options.html', {         scope: $scope     }).then(function(popover) {         $scope.popover = popover;     });       $scope.openPopover = function($event, type) {         $scope.type = type;         $scope.popover.show($event);     };       $scope.closePopover = function() {         $scope.popover.hide();         // if you are navigating away from the page once         // an option is selected, make sure to call         // $scope.popover.remove();     };   }); We are using the $ionicPopover service and setting up a popover from a template named button-options.html. 
We are assigning the current controller scope as the scope to the popover. We have two methods on the controller scope that will show and hide the popover. The openPopover method receives two options. One is the event and second is the type of the button we are clicking (more on this in a moment). Next, we update our www/index.html body section as follows: <body ng-app="starter" ng-controller="AppCtrl">     <ion-header-bar class="bar-positive">         <h1 class="title">Popover Service</h1>     </ion-header-bar>     <ion-content class="padding">         <button class="button button-block button-dark" ng- click="openPopover($event, 'dark')">             Dark Button         </button>         <button class="button button-block button-assertive" ng- click="openPopover($event, 'assertive')">             Assertive Button         </button>         <button class="button button-block button-calm" ng- click="openPopover($event, 'calm')">             Calm Button         </button>     </ion-content>     <script id="button-options.html" type="text/ng-template">         <ion-popover-view>             <ion-header-bar>                 <h1 class="title">{{type}} options</h1>             </ion-header-bar>             <ion-content>                 <div class="list">                     <a href="#" class="item item-icon-left">                         <i class="icon ion-ionic"></i> Option One                     </a>                     <a href="#" class="item item-icon-left">                         <i class="icon ion-help-buoy"></i> Option Two                     </a>                     <a href="#" class="item item-icon-left">                         <i class="icon ion-hammer"></i> Option Three                     </a>                     <a href="#" class="item item-icon-left" ng- click="closePopover()">                         <i class="icon ion-close"></i> Close                     </a>                 </div>             </ion-content>         </ion-popover-view>     </script> </body> Inside ion-content, we have created three buttons, each themed with a different mood (dark, assertive, and calm). When a user clicks on the button, we show a popover that is specific for that button. For this example, all we are doing is passing in the name of the mood and showing the mood name as the heading in the popover. But you can definitely do more. Do notice that we have wrapped our template content inside ion-popover-view. This takes care of positioning the modal appropriately. The template must be wrapped inside the ion-popover-view for the popover to work correctly. When we save all the files and head back to the browser, we will see the three buttons. Depending on the button you click, the heading of the popover changes, but the options remain the same for all of them: Then, when we click anywhere on the page or the close option, the popover closes. If you are navigating away from the page when an option is selected, make sure to call: $scope.popover.remove(); You can read more about Popover at http://ionicframework.com/docs/api/controller/ionicPopover/. Our GitHub organization With the ever-changing frontend world, keeping up with latest in the business is quite essential. During the course of the book, Cordova, Ionic, and Ionic CLI has evolved a lot and we are predicting that they will keep evolving till they become stable. So, we have created a GitHub organization named Learning Ionic (https://github.com/learning-ionic), which consists of code for all the chapters. 
You can raise issues and submit pull requests, and we will try to keep it updated with the latest changes, so you can always refer back to the GitHub organization for any updates. Summary In this article, we looked at various Ionic directives and services that help us develop applications more easily. Resources for Article: Further resources on this subject: Mailing with Spring Mail [article] Implementing Membership Roles, Permissions, and Features [article] Time Travelling with Spring [article]

Introducing TabLayout

Packt
29 Oct 2015
14 min read
 In this article by Antonio Pachón, author of the book, Mastering Android Application Development, we take a look on the TabLayout design library and the different activities you can do with it. The TabLayout design library allows us to have fixed or scrollable tabs with text, icons, or a customized view. You would remember from the first instance of customizing tabs in this book that it isn't very easy to do, and to change from scrolling to fixed tabs, we need different implementations. (For more resources related to this topic, see here.) We want to change the color and design of the tabs now to be fixed; for this, we need to first go to activity_main.xml and add TabLayout, removing the previous PagerTabStrip. Our view will look as follows: <?xml version="1.0" encoding="utf-8"?> <LinearLayout     android_layout_height="fill_parent"     android_layout_width="fill_parent"     android_orientation="vertical"     >       <android.support.design.widget.TabLayout         android_id="@+id/tab_layout"         android_layout_width="match_parent"         android_layout_height="50dp"/>       <android.support.v4.view.ViewPager           android_id="@+id/pager"         android_layout_width="match_parent"         android_layout_height="wrap_content">       </android.support.v4.view.ViewPager>   </LinearLayout> When we have this, we need to add tabs to TabLayout. There are two ways to do this; one is to create the tabs manually as follows: tabLayout.addTab(tabLayout.newTab().setText("Tab 1")); The second way, which is the one we'll use, is to set the view pager to TabLayout. In our example, MainActivity.java should look as follows: public class MainActivity extends ActionBarActivity {       @Override     protected void onCreate(Bundle savedInstanceState) {         super.onCreate(savedInstanceState);         setContentView(R.layout.activity_main);           MyPagerAdapter adapter = new         MyPagerAdapter(getSupportFragmentManager());         ViewPager viewPager = (ViewPager)         findViewById(R.id.pager);         viewPager.setAdapter(adapter);         TabLayout tabLayout = (TabLayout)         findViewById(R.id.tab_layout);         tabLayout.setupWithViewPager(viewPager);         }     @Override     protected void attachBaseContext(Context newBase) {         super.attachBaseContext(CalligraphyContextWrapper         .wrap(newBase));     } } If we don't specify any color, TabLayout will use the default color from the theme, and the position of the tabs will be fixed. Our new tab bar will look as follows: Toolbar, action bar, and app bar Before continuing to add motion and animation to our app, we need to clarify the concepts of toolbar, the action bar, the app bar, and AppBarLayout, which might cause a bit of confusion. The action bar and app bar are the same component; app bar is just a new name that has been acquired in material design. This is the opaque bar fixed at the top of our activity that usually shows the title of the app, navigation options, and the different actions. The icon is or isn't displayed depending on the theme. Since Android 3.0, the theme Holo or any of its descendants is used by default for the action bar. Moving onto the next concept, the toolbar; introduced in API 21, Andorid Lollipop, it is a generalization of the action bar that doesn't need to be fixed at the top of the activity. We can specify whether a toolbar is to act as the activity action bar with the setActionBar() method. This means that a toolbar can act as an action bar depending on what we want. 
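As a brief, generic sketch of that idea (not taken from this chapter's project), a Toolbar can be declared in the activity layout and then promoted to the app bar from code; because the activities in this article extend the support library classes, the appcompat variant setSupportActionBar() is used here instead of setActionBar(). The layout name, view id, and title text are only examples:

<!-- In the activity layout (activity_toolbar.xml) -->
<android.support.v7.widget.Toolbar
    android:id="@+id/my_toolbar"
    android:layout_width="match_parent"
    android:layout_height="?attr/actionBarSize"
    android:background="?attr/colorPrimary" />

// In the activity, assuming a Theme.AppCompat.*.NoActionBar theme
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.support.v7.widget.Toolbar;

public class ToolbarActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_toolbar);
        // Promote the Toolbar to act as this activity's app bar
        Toolbar toolbar = (Toolbar) findViewById(R.id.my_toolbar);
        setSupportActionBar(toolbar);
        getSupportActionBar().setTitle("Job offers");
    }
}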
If we create a toolbar and set it as an action bar, we must use a theme with the .NoActionBar option to avoid having a duplicated action bar. Otherwise, we would have the one that comes by default in a theme along with the toolbar that we have created as the action bar. A new element, called AppBarLayout, has been introduced in the design support library; it is LinearLayout intended to contain the toolbar to display animations based on scrolling events. We can specify the behavior while scrolling in the children with the app:layout_scrollFlag attribute. AppBarLayout is intended to be contained in CoordinatorLayout—the component introduced as well in the design support library—which we will describe in the following section. Adding motion with CoordinatorLayout CoordinatorLayout allows us to add motion to our app, connecting touch events and gestures with views. We can coordinate a scroll movement with the collapsing animation of a view, for instance. These gestures or touch events are handled by the Coordinator.Behaviour class; AppBarLayout already has this private class. If we want to use this motion with a custom view, we would have to create this behavior ourselves. CoordinatorLayout can be implemented at the top level of our app, so we can combine this with the application bar or any element inside our activity or fragment. It also can be implemented as a container to interact with its child views. Continuing with our app, we will show a full view of a job offer when we click on a card. This will be displayed in a new activity. This activity will contain a toolbar showing the title of the job offer and logo of the company. If the description is long, we will need to scroll down to read it, and at the same time, we want to collapse the logo at the top, as it is not relevant anymore. In the same way, while scrolling back up, we want it to expand it again. To control the collapsing of the toolbar we will need CollapsingToolbarLayout. The description will be contained in NestedScrollView, which is a scroll view from the android v4 support library. The reason to use NestedScrollView is that this class can propagate the scroll events to the toolbar, while ScrollView can't. Ensure that compile com.android.support:support-v4:22.2.0 is up to date. So, for now, we can just place an image from the drawable folder to implement the CoordinatorLayout functionality. 
Our offer detail view, activity_offer_detail.xml, will look as follows: <android.support.design.widget.CoordinatorLayout     android_layout_width="match_parent"     android_layout_height="match_parent">      <android.support.design.widget.AppBarLayout         android_id="@+id/appbar"         android_layout_height="256dp"         android_layout_width="match_parent">         <android.support.design.widget.CollapsingToolbarLayout             android_id="@+id/collapsingtoolbar"             android_layout_width="match_parent"             android_layout_height="match_parent"             app_layout_scrollFlags="scroll|exitUntilCollapsed">               <ImageView                 android_id="@+id/logo"                 android_layout_width="match_parent"                 android_layout_height="match_parent"                 android_scaleType="centerInside"                 android_src="@drawable/googlelogo"                 app_layout_collapseMode="parallax" />             <android.support.v7.widget.Toolbar                 android_id="@+id/toolbar"                 android:layout_height="?attr/actionBarSize"                 android_layout_width="match_parent"                 app_layout_collapseMode="pin"/>         </android.support.design.widget.CollapsingToolbarLayout>     </android.support.design.widget.AppBarLayout>     <android.support.v4.widget.NestedScrollView         android_layout_width="fill_parent"         android_layout_height="fill_parent"         android_paddingLeft="20dp"         android_paddingRight="20dp"         app_layout_behavior=         "@string/appbar_scrolling_view_behavior">              <TextView                 android_id="@+id/rowJobOfferDesc"                 android_layout_width="fill_parent"                 android_layout_height="fill_parent"                 android_text="Long scrollabe text"                   android_textColor="#999"                 android_textSize="18sp"                 />      </android.support.v4.widget.NestedScrollView>  </android.support.design.widget.CoordinatorLayout> As you can see here, the CollapsingToolbar layout reacts to the scroll flag and tells its children how to react. The toolbar is pinned at the top, always remaining visible, app:layout_collapseMode="pin" however the logo will disappear with a parallax effect app:layout_collapseMode="parallax". Don't forget to add to NestedScrollview to the app:layout_behavior="@string/appbar_scrolling_view_behavior" attribute and clean the project to generate this string resource internally. If you have problems, you can set the string directly by typing "android.support.design.widget.AppBarLayout$ScrollingViewBehavior", and this will help you to identify the issue. When we click on a job offer, we need to navigate to OfferDetailActivity and send information about the offer. As you probably know from the beginner level, to send information between activities we use intents. In these intents, we can put the data or serialized objects to be able to send an object of the JobOffer type we have to make the The JobOffer class implements Serialization. 
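Since intent extras can only carry primitives and serializable (or parcelable) values, the JobOffer model needs to implement java.io.Serializable before a whole object can travel between activities. A minimal sketch of such a class is shown below; the fields are assumed from the getTitle() and getDescription() calls used in the adapter and may differ from the book's actual model:

import java.io.Serializable;

public class JobOffer implements Serializable {

    // Recommended for Serializable classes to keep versions compatible
    private static final long serialVersionUID = 1L;

    private String title;
    private String description;

    public JobOffer(String title, String description) {
        this.title = title;
        this.description = description;
    }

    public String getTitle() {
        return title;
    }

    public String getDescription() {
        return description;
    }
}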
Once we have done this, we can detect the click on the element in JobOffersAdapter: public class MyViewHolder extends RecyclerView.ViewHolder implements View.OnClickListener, View.OnLongClickListener{     public TextView textViewName;     public TextView textViewDescription;       public MyViewHolder(View v){         super(v);         textViewName =         (TextView)v.findViewById(R.id.rowJobOfferTitle);         textViewDescription =         (TextView)v.findViewById(R.id.rowJobOfferDesc);         v.setOnClickListener(this);         v.setOnLongClickListener(this);     }     @Override     public void onClick(View view) {             Intent intent = new Intent(view.getContext(),             OfferDetailActivity.class);             JobOffer selectedJobOffer =             mOfferList.get(getPosition());             intent.putExtra("job_title",             selectedJobOffer.getTitle());             intent.putExtra("job_description",             selectedJobOffer.getDescription());             view.getContext().startActivity(intent);     } Once we start the activity, we need to retrieve the title and set it to the toolbar. Add a long text to the TextView description inside NestedScrollView to first test with dummy data. We want to be able to scroll to test the animation: public class OfferDetailActivity extends AppCompatActivity {     @Override     protected void onCreate(Bundle savedInstanceState) {         super.onCreate(savedInstanceState);         setContentView(R.layout.activity_offer_detail);         String job_title =         getIntent().getStringExtra("job_title");         CollapsingToolbarLayout collapsingToolbar =         (CollapsingToolbarLayout)         findViewById(R.id.collapsingtoolbar);         collapsingToolbar.setTitle(job_title);     } } Finally, ensure that your styles.xml file in the folder values uses a theme with no action bar by default: <resources>     <!-- Base application theme. -->     <style name="AppTheme"     parent="Theme.AppCompat.Light.NoActionBar">         <!-- Customize your theme here. -->     </style> </resources> We are now ready to test the behavior of the app. Launch it and scroll down. Take a look at how the image collapses and the toolbar is pinned at the top. It will look similar to this: We are missing an attribute to achieve a nice effect in the animation. Just collapsing the image doesn't collapse it enough; we need to make the image disappear in a smooth way and be replaced by the background color of the toolbar. Add the contentScrim attribute to CollapsingToolbarLayout, and this will fade in the image as is collapsing using the primary color of the theme, which is the same used by the toolbar at the moment: <android.support.design.widget.CollapsingToolbarLayout     android_id="@+id/collapsingtoolbar"     android_layout_width="match_parent"     android_layout_height="match_parent"     app_layout_scrollFlags="scroll|exitUntilCollapsed"     app_contentScrim="?attr/colorPrimary"> With this attribute, the app looks better when collapsed and expanded: We just need to style the app a bit more by changing colors and adding padding to the image; we can change the colors of the theme in styles.xml through the following code: <resources>      <!-- Base application theme. 
-->     <style name="AppTheme"     parent="Theme.AppCompat.Light.NoActionBar">         <item name="colorPrimary">#8bc34a</item>         <item name="colorPrimaryDark">#33691e</item>         <item name="colorAccent">#FF4081</item>     </style> </resources> Resize AppBarLayout to 190dp and add 50dp of paddingLeft and paddingRight to ImageView to achieve the following result: Summary In this article we learned what the TabLayout design library is and the different activities you can do with it. Resources for Article: Further resources on this subject: Using Resources[article] Prerequisites for a Map Application[article] Remote Desktop to Your Pi from Everywhere [article]

What is NGUI?

Packt
20 Jan 2014
8 min read
(For more resources related to this topic, see here.) The Next-Gen User Interface kit is a plugin for Unity 3D. It has the great advantage of being easy to use, very powerful, and optimized compared to Unity's built-in GUI system, UnityGUI. Since it is written in C#, it is easily understandable and you may tweak it or add your own features, if necessary. The NGUI Standard License costs $95. With this, you will have useful example scenes included. I recommend this license to start comfortably—a free evaluation version is available, but it is limited, outdated, and not recommended. The NGUI Professional License, priced at $200, gives you access to NGUI's GIT repository to access the latest beta features and releases in advance. A $2000 Site License is available for an unlimited number of developers within the same studio. Let's have an overview of the main features of this plugin and see how they work. UnityGUI versus NGUI With Unity's GUI, you must create the entire UI in code by adding lines that display labels, textures, or any other UI element on the screen. These lines have to be written inside a special function, OnGUI(), that is called for every frame. This is no longer necessary; with NGUI, UI elements are simple GameObjects! You can create widgets—this is what NGUI calls labels, sprites, input fields, and so on—move them, rotate them, and change their dimensions using handles or the Inspector. Copying, pasting, creating prefabs, and every other useful feature of Unity's workflow is also available. These widgets are viewed by a camera and rendered on a layer that you can specify. Most of the parameters are accessible through Unity's Inspector, and you can see what your UI looks like directly in the Game window, without having to hit the Play button. Atlases Sprites and fonts are all contained in a large texture called atlas. With only a few clicks, you can easily create and edit your atlases. If you don't have any images to create your own UI assets, simple default atlases come with the plugin. That system means that for a complex UI window composed of different textures and fonts, the same material and texture will be used when rendering. This results in only one draw call for the entire window. This, along with other optimizations, makes NGUI the perfect tool to work on mobile platforms. Events NGUI also comes with an easy-to-use event framework that is written in C#. The plugin comes with a large number of additional components that you can attach to GameObjects. These components can perform advanced tasks depending on which events are triggered: hover, click, input, and so on. Therefore, you may enhance your UI experience while keeping it simple to configure. Code less, get more! Localization NGUI comes with its own localization system, enabling you to easily set up and change your UI's language with the push of a button. All your strings are located in the .txt files: one file per language. Shaders Lighting, normal mapping, and refraction shaders are supported in NGUI, which can give you beautiful results. Clipping is also a shader-controlled feature with NGUI, used for showing or hiding specific areas of your UI. We've now covered what NGUI's main features are, and how it can be useful to us as a plugin, and now it's time to import it inside Unity. Importing NGUI After buying the product from the Asset Store or getting the evaluation version, you have to download it. Perform the following steps to do so: Create a new Unity project. Navigate to Window | Asset Store. 
Importing NGUI
After buying the product from the Asset Store or getting the evaluation version, you have to download it. Perform the following steps to do so:

1. Create a new Unity project.
2. Navigate to Window | Asset Store.
3. Select your downloads library.
4. Click on the Download button next to NGUI: Next-Gen UI.
5. When the download completes, click on the NGUI icon / product name in the library to access the product page.
6. Click on the Import button and wait for a pop-up window to appear.
7. Check the checkbox for the NGUI v.3.0.2 .unitypackage and click on Import.
8. In the Project view, navigate to Assets | NGUI and double-click on NGUI v.3.0.2. A new import pop-up window will appear. Click on Import again.
9. Click any button on the toolbar to refresh it. The NGUI tray will appear!

The NGUI tray will look like the following screenshot:

You have now successfully imported NGUI into your project. Let's create your first 2D UI.

Creating your UI
We will now create our first 2D user interface with NGUI's UI Wizard. This wizard will add all the elements needed for NGUI to work. Before we continue, please save your scene as Menu.unity.

UI Wizard
Create your UI by opening the UI Wizard: navigate to NGUI | Open | UIWizard from the toolbar. Let's now take a look at the UI Wizard window and its parameters.

Window
You should now have the following pop-up window with two parameters:

Parameters
The two parameters are as follows:

Layer: This is the layer on which your UI will be displayed.
Camera: This decides whether the UI will have a camera, and its drop-down options are as follows:
  None: No camera will be created.
  Simple 2D: Uses a camera with orthographic projection.
  Advanced 3D: Uses a camera with perspective projection.

Separate UI Layer
I recommend that you separate your UI from the other usual layers. We should do it as shown in the following steps:

1. Click on the drop-down menu next to the Layer parameter.
2. Select Add Layer.
3. Create a new layer and name it GUI2D.
4. Go back to the UI Wizard window and select this new GUI2D layer for your UI.

You can now click on the Create Your UI button. Your first 2D UI has been created!

Your UI structure
The wizard has created four new GameObjects in the scene for us:

UI Root (2D)
Camera
Anchor
Panel

Let's now review each in detail.

UI Root (2D)
The UIRoot component scales widgets down to keep them at a manageable size. It is also responsible for the Scaling Style: it will either scale UI elements to remain pixel perfect or to occupy the same percentage of the screen, depending on the parameters you specify.

Select the UI Root (2D) GameObject in the Hierarchy. It has the UIRoot.cs script attached to it. This script adjusts the scale of the GameObject it's attached to in order to let you specify widget coordinates in pixels, instead of Unity units, as shown in the following screenshot:
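Before going through the parameters visible in that screenshot, the following is a deliberately simplified sketch of what this pixel-based scaling boils down to. It is not NGUI's actual source code: the class and field names are illustrative, and it assumes the orthographic UI camera has a size of 1, which displays 2 world units vertically, so dividing 2 by a reference height gives the scale that maps widget coordinates to pixels.

using UnityEngine;

// Simplified illustration of the idea behind UIRoot's scaling (not NGUI's real code).
public class SimplifiedUIRoot : MonoBehaviour
{
    public int manualHeight = 1080;    // FixedSize: virtual screen height in pixels
    public bool pixelPerfect = false;  // PixelPerfect: follow the real screen height instead

    void Update()
    {
        // Pick the height that widget pixel coordinates are measured against.
        float activeHeight = pixelPerfect ? Screen.height : manualHeight;

        // An orthographic camera of size 1 shows 2 world units vertically, so this
        // scale makes one widget unit correspond to one pixel of the reference height.
        float scale = 2f / activeHeight;
        transform.localScale = new Vector3(scale, scale, scale);
    }
}

With this kind of scaling, a widget set to 300 x 200 in the Inspector is laid out as 300 x 200 pixels of the chosen reference height, which is exactly what the following parameters control.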
Parameters
The UIRoot component has four parameters:

Scaling Style: The following scaling styles are available:
  PixelPerfect: This ensures that your UI will always try to remain the same size in pixels, no matter what the resolution. In this scaling mode, a 300 x 200 window will be huge on a 320 x 240 screen and tiny on a 1920 x 1080 screen. It also means that if your resolution is smaller than your UI, the UI will be cropped.
  FixedSize: This ensures that your UI will be proportionally resized depending on the screen's height. The result is that your UI will not be pixel perfect but will scale to fit the current screen size.
  FixedSizeOnMobiles: This ensures fixed size on mobiles and pixel perfect everywhere else.
Manual Height: With the FixedSize scaling style, the scale is based on this height. If your screen's height goes over or under this value, the UI will be resized so that it is displayed identically while maintaining the aspect ratio (the width/height proportional relationship).
Minimum Height: With the PixelPerfect scaling style, this parameter defines the minimum height for the screen. If your screen height goes below this value, your UI will resize as if the Scaling Style parameter were set to FixedSize with Manual Height set to this value.
Maximum Height: With the PixelPerfect scaling style, this parameter defines the maximum height for the screen. If your screen height goes over this value, your UI will resize as if the Scaling Style parameter were set to FixedSize with Manual Height set to this value.

Please set the Scaling Style parameter to FixedSize with a Manual Height value of 1080. This will allow us to have the same UI on any screen size up to 1920 x 1080.

Even though the UI will look the same at different resolutions, the aspect ratio is still a problem, since the rescale is based on the screen's height only. If you want to cover both 4:3 and 16:9 screens, your UI should not be too large; try to keep it square. Otherwise, your UI might be cropped at certain screen resolutions. On the other hand, if you want a 16:9 UI, I recommend you force this aspect ratio only. Let's do it now for this project by performing the following steps:

1. Navigate to Edit | Project Settings | Player.
2. In the Inspector, unfold the Resolution and Presentation group.
3. Unfold the Supported Aspect Ratios group.
4. Check only the 16:9 box.

Summary
In this article, we discussed NGUI's basic workflow: it works with GameObjects, uses atlases to combine multiple textures into one large texture, has an event system, can use shaders, and has a localization system. After importing the NGUI plugin, we created our first 2D UI with the UI Wizard, reviewed its parameters, and created our own GUI2D layer for our UI to reside on.

Resources for Article:
Further resources on this subject:
Unity 3D Game Development: Don't Be a Clock Blocker [Article]
Component-based approach of Unity [Article]
Unity 3: Building a Rocket Launcher [Article]