
How-To Tutorials - Mobile

213 Articles
Sencha Touch: Layouts Revisited

Packt
16 Feb 2012
7 min read
(For more resources on this topic, see here.)

The reader can benefit from the previous article on The Various Components in Sencha Touch.

The base component class

When we talk about components in Sencha Touch, we are generally talking about buttons, panels, sliders, toolbars, form fields, and other tangible items that we can see on the screen. However, all of these components inherit their configuration options, methods, properties, and events from a single base component with the startlingly original name of Component. This can obviously lead to a bit of confusion, so we will refer to it as Ext.Component.

One of the most important things to understand is that you will never actually use Ext.Component directly. It is simply a building block for all of the other components in Sencha Touch. However, it is important to be familiar with the base component class, because anything it can do, all the other components can do. Learning this one class can give you a huge head start on everything else.

Some of the more useful configuration options of Ext.Component are as follows:

- border
- cls
- disabled
- height/width
- hidden
- html
- margin
- padding
- scroll
- style
- ui

Ext.Component also contains a number of useful methods that will allow you to get and set properties on any Sencha Touch component. Here are a few of those methods:

- addCls and removeCls: add or remove a CSS class from your component.
- destroy: remove the component from memory.
- disable and enable: disable or enable the component (very useful in forms).
- getHeight, getWidth, and getSize: get the current height, width, or size of the component (getSize returns both height and width). You can also use setHeight, setWidth, and setSize to change the dimensions of your component.
- show and hide: show or hide the component.
- setPosition: set the top and left values for the component.
- update: update the content area of a component.

Unlike configuration options, methods can only be used once the component has been created.
This means we also need to understand how to get hold of the component itself before we can begin using its methods. This is where the Ext class comes into play.

The Ext object and Ext.getCmp()

The Ext object is created, by default, when the Sencha Touch library is loaded. This object has methods that are used to create our initial application and its components. It also allows us to talk to our other components after they have been created. For example, let's take a look at a code example:

    new Ext.Application({
        name: 'TouchStart',
        launch: function() {
            var hello = new Ext.Container({
                fullscreen: true,
                html: '<div id="hello">Hello World</div>',
                id: 'helloContainer'
            });
            this.viewport = hello;
        }
    });

The configuration option id: 'helloContainer' will allow us to grab the container later on, using the Ext class and its getCmp() method. For example, we can add the following code after this.viewport = hello;:

    var myContainer = Ext.getCmp('helloContainer');
    myContainer.update('Hello Again!');

By using Ext.getCmp, we get back the component with an id value of helloContainer, which we then assign to our variable myContainer. This method returns an actual component, in this case a container. Since we get this object back as a container component, we can use any of the methods that the container understands. For our example, we used the update() method to change the content of the container to 'Hello Again!'. Typically, these types of changes will be triggered by a button click, not performed in the launch function. This example simply shows that we can manipulate the component on the fly even after it gets created.

The ID configuration

It's a good idea to include an id configuration in all of your components. This makes it possible to use Ext.getCmp() to get to those components later on, when we need them. Remember to keep the ID of every component unique. If you plan on creating multiple copies of a component, you will need to make sure that a unique ID is generated for each copy.
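The idea behind Ext.getCmp() is an id-keyed component registry. Here is a minimal plain-JavaScript sketch of how such a registry can work internally; this is illustrative only, not Sencha Touch's actual implementation, and the names Component, registry, and getCmp are made up for the example:

```javascript
// A global lookup table from component id to component instance.
var registry = {};

function Component(config) {
  this.id = config.id;
  this.html = config.html || '';
  registry[this.id] = this; // register the component on creation
}

// update() replaces the content area, like the method described above.
Component.prototype.update = function (html) {
  this.html = html;
};

// Look a component up by its unique id, as Ext.getCmp() does.
function getCmp(id) {
  return registry[id];
}

// Usage: create a component, then retrieve and change it later by id.
var hello = new Component({ id: 'helloContainer', html: 'Hello World' });
getCmp('helloContainer').update('Hello Again!');
```

This also shows why every id must be unique: a second component created with the same id would silently overwrite the first entry in the registry.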
The Ext.getCmp() method is great for grabbing Sencha Touch components and manipulating them.

Layouts revisited

Layouts are another area we need to expand upon. When you start creating your own applications, you will need a firm understanding of how the different layouts affect what you see on the screen. To this end, we are going to start out with a demonstration application that shows how the different layouts work.

For the purposes of this demo application, we will create the different components, one at a time, as individual variables. This is done for the sake of readability and should not be considered the best programming style. Remember that any items created this way will take up memory, even if the user never views the component:

    var myPanel = new Ext.Panel({ ...

It is always a much better practice to create your components, using xtype attributes, within your main container:

    items: [{
        xtype: 'panel',
        ...

This allows Sencha Touch to render the components as they are needed, instead of all at once when the page loads.

The card layout

To begin with, we will create a simple card layout:

    new Ext.Application({
        name: 'TouchStart',
        launch: function() {
            var layoutPanel = new Ext.Panel({
                fullscreen: true,
                layout: 'card',
                id: 'layoutPanel',
                cardSwitchAnimation: 'slide',
                items: [hboxTest]
            });
            this.viewport = layoutPanel;
        }
    });

This sets up a single panel with a card layout. The card layout arranges its items like a stack of cards: only one card is active and displayed at a time. The card layout keeps any additional cards in the background and only creates them when the panel receives the setActiveItem() command. Each item in the list can be activated by calling setActiveItem() with the item number. This can be a bit confusing, as the numbering of the items is zero-indexed, meaning that you start counting at zero, not one.
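The activation logic of a card layout can be sketched in a few lines of plain JavaScript. This is a conceptual illustration of the zero-indexed behavior described above, not the Sencha Touch implementation; CardPanel is a made-up name:

```javascript
// A card stack: all items are held, but only one index is "active".
function CardPanel(items) {
  this.items = items;
  this.activeIndex = 0; // the first card is item 0, not item 1
}

CardPanel.prototype.setActiveItem = function (index) {
  if (index < 0 || index >= this.items.length) {
    throw new Error('No item at index ' + index);
  }
  this.activeIndex = index;
};

CardPanel.prototype.getActiveItem = function () {
  return this.items[this.activeIndex];
};

var panel = new CardPanel(['card A', 'card B', 'card C', 'card D']);
panel.setActiveItem(3); // activates the FOURTH card, because counting starts at zero
```

Off-by-one mistakes with setActiveItem() are common precisely because of this zero-based counting.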
For example, if you want to activate the fourth item in the list, you would use:

    layoutPanel.setActiveItem(3);

In this case, we are starting out with only a single card/item called hboxTest. We need to add this container to make our program run.

The hbox layout

Above the line that says var layoutPanel = new Ext.Panel({, in the preceding code, add the following code:

    var hboxTest = new Ext.Container({
        layout: {
            type: 'hbox',
            align: 'stretch'
        },
        items: [{
            xtype: 'container',
            flex: 1,
            html: 'My flex is 1',
            margin: 5,
            style: 'background-color: #7FADCF'
        }, {
            xtype: 'container',
            flex: 2,
            html: 'My flex is 2',
            margin: 5,
            style: 'background-color: #7FADCF'
        }, {
            xtype: 'container',
            width: 80,
            html: 'My width is 80',
            margin: 5,
            style: 'background-color: #7FADCF'
        }]
    });

This gives us a container with an hbox layout and three child items.

Child and parent

In Sencha Touch, we often find ourselves dealing with very large arrays of items, nested in containers that are in turn nested in other containers. It is often helpful to refer to a container as the parent of any items it contains. These items are then referred to as the children of the container.

The hbox layout stacks its items horizontally and uses the width and flex values to determine how much horizontal space each of its child items will take up. The align: 'stretch' configuration causes the items to stretch to fill all of the available vertical space. You should try adjusting the flex and width values to see how they affect the size of the child containers. You can also try the other available values for the align configuration option (center, end, start, and stretch) to see the different options available. Once you are finished, let's move on and add some more items to our card layout.
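The way an hbox layout divides horizontal space can be sketched as a simple calculation: fixed-width children claim their width first, and the remaining space is split among the flexed children in proportion to their flex values. This is a conceptual illustration (ignoring margins), not Sencha Touch's actual layout engine; hboxWidths is a made-up helper:

```javascript
// Compute the pixel width of each child in an hbox-style layout.
function hboxWidths(totalWidth, children) {
  var fixed = 0, totalFlex = 0;
  children.forEach(function (c) {
    if (c.width) { fixed += c.width; }  // fixed-width children claim space first
    else { totalFlex += c.flex; }
  });
  var remaining = totalWidth - fixed;    // what is left for the flexed children
  return children.map(function (c) {
    return c.width ? c.width : remaining * (c.flex / totalFlex);
  });
}

// The three children from the example (flex 1, flex 2, width 80),
// laid out in a hypothetical 320 px wide container:
// remaining = 320 - 80 = 240, so flex 1 gets one third and flex 2 gets two thirds.
var widths = hboxWidths(320, [{ flex: 1 }, { flex: 2 }, { width: 80 }]);
```

Doubling a child's flex therefore doubles its share of the leftover space, while a width value pins a child to an exact size regardless of flex.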
The Various Components in Sencha Touch

Packt
16 Feb 2012
8 min read
(For more resources on this topic, see here.)

The reader can benefit from the previous article on Sencha Touch: Layouts Revisited.

The TabPanel and Carousel components

In our last application, we used buttons and a card layout to create an application that switched between different child items. While it is often desirable for your application to do this programmatically (with your own buttons and code), you can also have Sencha Touch set this up automatically, using TabPanel or Carousel.

TabPanel

TabPanel is useful when you have a number of views the user needs to switch between, such as contacts, tasks, and settings. The TabPanel component autogenerates the navigation for the layout, which makes it very useful as the main container for an application. The following is a code example:

    new Ext.Application({
        name: 'TouchStart',
        launch: function() {
            this.viewport = new Ext.TabPanel({
                fullscreen: true,
                cardSwitchAnimation: 'slide',
                tabBar: {
                    dock: 'bottom',
                    layout: {
                        pack: 'center'
                    }
                },
                items: [{
                    xtype: 'container',
                    title: 'Item 1',
                    fullscreen: false,
                    html: 'TouchStart container 1',
                    iconCls: 'info'
                }, {
                    xtype: 'container',
                    html: 'TouchStart container 2',
                    iconCls: 'home',
                    title: 'Item 2'
                }, {
                    xtype: 'container',
                    html: 'TouchStart container 3',
                    iconCls: 'favorites',
                    title: 'Item 3'
                }]
            });
        }
    });

TabPanel, in this code, automatically generates a card layout; you don't have to declare a layout. You do need to declare a configuration for the tabBar component. This is where your tabs will automatically appear. In this example, we dock the tab bar at the bottom, which generates a large square button for each child item in the items list. Each button uses the iconCls value to assign an icon, and the title configuration to name the button. If you dock the tabBar component at the top, the buttons become small and round, and the icons are dropped, even if you declare a value for iconCls in your child items.
Only the title configuration is used when the bar is docked at the top.

Carousel

The Carousel component is similar to TabPanel, but the navigation it generates is more appropriate for things such as slide shows. It probably would not work as well as a main interface for your application, but it does work well as a way to display multiple items in a single swipeable container. Like TabPanel, Carousel gathers its child items and automatically arranges them in a card layout. In fact, we can make just a few simple modifications to our previous code to turn it into a Carousel:

    new Ext.Application({
        name: 'TouchStart',
        launch: function() {
            this.viewport = new Ext.Carousel({
                fullscreen: true,
                direction: 'horizontal',
                items: [{
                    html: 'TouchStart container 1'
                }, {
                    html: 'TouchStart container 2'
                }, {
                    html: 'TouchStart container 3'
                }]
            });
        }
    });

The first thing we did was create a new Ext.Carousel class instead of a new Ext.TabPanel class. We also added a configuration for direction, which can be either horizontal (scrolling from left to right) or vertical (scrolling up or down). We removed the docked toolbar because, as we will see, Carousel doesn't use one, and we removed iconCls and title from each of our child items for the same reason. Finally, we removed the xtype configuration, since the Carousel automatically creates a panel for each of its items.

Unlike TabPanel, Carousel has no buttons, only a series of dots at the bottom, with one dot for each child item. While it is possible to navigate using the dots, the Carousel component automatically sets itself up to respond to a swipe on a touch screen. You can duplicate this gesture in the browser by clicking and holding with the mouse while moving it horizontally. If you declare a direction: vertical configuration in your Carousel, you can swipe vertically to move between the child items. Both TabPanel and Carousel understand the activeItem configuration.
This lets you set which item appears when the application first loads. Additionally, both understand the setActiveItem() method, which allows you to change the selected child item after the application loads. Carousel also has next() and previous() methods, which allow you to step through the items in order. It should also be noted that, since TabPanel and Carousel both inherit from the panel, they also understand any methods and configurations that panels and containers understand. Along with containers and panels, TabPanel and Carousel will serve as the main starting point for most of your applications. However, there is another type of panel you will likely want to use at some point: the FormPanel.

FormPanel

The FormPanel is a very specialized version of the panel and, as the name implies, it is designed to handle form elements. Unlike panels and containers, you don't need to specify a layout for FormPanel; it automatically uses its own special form layout. A basic example of creating a FormPanel looks something like this:

    var form = new Ext.form.FormPanel({
        items: [{
            xtype: 'textfield',
            name: 'first',
            label: 'First name'
        }, {
            xtype: 'textfield',
            name: 'last',
            label: 'Last name'
        }, {
            xtype: 'emailfield',
            name: 'email',
            label: 'Email'
        }]
    });

For this example, we just create the panel and add items for each field in the form. The xtype tells the form what type of field to create.
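The next()/previous() stepping behavior can be sketched in plain JavaScript. This is a conceptual illustration of stepping through a card stack (here clamping at the ends, as a simplifying assumption), not the actual Sencha Touch Carousel class:

```javascript
// A minimal carousel: an item list plus an index that next()/previous() move.
function Carousel(items) {
  this.items = items;
  this.activeIndex = 0;
}

Carousel.prototype.setActiveItem = function (i) { this.activeIndex = i; };

Carousel.prototype.next = function () {
  if (this.activeIndex < this.items.length - 1) { this.activeIndex++; }
};

Carousel.prototype.previous = function () {
  if (this.activeIndex > 0) { this.activeIndex--; }
};

var c = new Carousel(['container 1', 'container 2', 'container 3']);
c.next(); // moves to container 2
c.next(); // moves to container 3
c.next(); // already at the last item; in this sketch it simply stays there
```

A swipe gesture on a device effectively calls next() or previous() depending on the swipe direction.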
We can add this to our Carousel and replace our first container, as follows:

    this.viewport = new Ext.Carousel({
        fullscreen: true,
        direction: 'horizontal',
        items: [form, {
            layout: 'fit',
            html: 'TouchStart container 2'
        }, {
            layout: 'fit',
            html: 'TouchStart container 3'
        }]
    });

Anyone who has worked with forms in HTML should be familiar with all of the standard field types, so the following xtype names will make sense to anyone who is used to standard HTML forms:

- checkboxfield
- fieldset
- hiddenfield
- passwordfield
- radiofield
- selectfield
- textfield
- textareafield

These field types all match their HTML cousins, for the most part. Sencha Touch also offers a few specialized text fields that can assist with validating the user's input:

- emailfield: accepts only a valid e-mail address and, on iOS devices, pulls up an alternate e-mail- and URL-friendly keyboard
- numberfield: accepts only numbers
- urlfield: accepts only a valid web URL, and also brings up the special keyboard

These special fields will only submit if the input is valid.

All of these basic form fields inherit from the main container class, so they have all of the standard height, width, cls, style, and other container configuration options. They also have a few field-specific options:

- label: a text label to use with the field
- labelAlign: where the label appears; this can be top or left, and defaults to left
- labelWidth: how wide the label should be
- name: corresponds to the HTML name attribute, which is how the value of the field will be submitted
- maxLength: how many characters can be used in the field
- required: whether the field is required in order for the form to submit

Form field placement

While FormPanel is typically the container you will use when displaying form elements, you can also place form fields in any panel or toolbar, if desired. FormPanel has the advantage of understanding the submit() method, which will post the form values via an AJAX request or POST.
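The "only submit if the input is valid" behavior of the specialized fields boils down to a per-field validity check. Here is a plain-JavaScript sketch of that idea; the regular expressions are deliberately simplified illustrations, not the exact rules Sencha Touch applies, and validators/canSubmit are made-up names:

```javascript
// One validity predicate per specialized field type.
var validators = {
  emailfield:  function (v) { return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(v); },
  numberfield: function (v) { return /^-?\d+(\.\d+)?$/.test(v); },
  urlfield:    function (v) { return /^https?:\/\/\S+$/.test(v); }
};

// A field's value may be submitted only when its validator accepts it.
function canSubmit(fieldType, value) {
  return validators[fieldType](value);
}
```

For example, canSubmit('numberfield', 'abc') is false, so a form containing that value would be held back rather than posted.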
If you include a form field in something that is not a FormPanel, you will need to get and set the values for the field using your own custom JavaScript methods.

In addition to the standard HTML fields, there are a few specialty fields available in Sencha Touch. These include the datepicker, slider, spinner, and toggle fields.

DatePicker

datepickerfield places a clickable field in the form with a small triangle on the far right side. You can add a date picker to our form by adding the following code after the emailfield item:

    , {
        xtype: 'datepickerfield',
        name: 'date',
        label: 'Date'
    }

When the user clicks the field, a DatePicker will appear, allowing the user to select a date by rotating the month, day, and year wheels by swiping up or down.

Sliders, spinners, and toggles

Sliders allow for the selection of a single value from a specified numerical range. sliderfield displays a bar with an indicator that can be slid horizontally to select a value. This can be useful for setting volume, color values, and other ranged options. Like the slider, a spinner allows for the selection of a single value from a specified numerical range. spinnerfield displays a form field with a numerical value and + and - buttons on either side of the field. A toggle allows for a simple selection between one and zero (on and off) and displays a toggle-style button on the form.

Add the following new components to the end of our list of items:

    , {
        xtype: 'sliderfield',
        label: 'Volume',
        value: 5,
        minValue: 0,
        maxValue: 10
    }, {
        xtype: 'togglefield',
        name: 'turbo',
        label: 'Turbo'
    }, {
        xtype: 'spinnerfield',
        minValue: 0,
        maxValue: 100,
        incrementValue: 2,
        cycle: true
    }

Our sliderfield and spinnerfield have configuration options for minValue and maxValue. We also added an incrementValue attribute to spinnerfield that will cause it to move in increments of 2 whenever the + or - button is pressed.
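The interaction between incrementValue, minValue, maxValue, and cycle can be sketched in a few lines. This is a conceptual illustration of the stepping behavior (assuming the value starts at minValue and that cycle wraps past either end), not the Sencha Touch spinnerfield implementation:

```javascript
// A minimal spinner: step() moves the value by incrementValue in the
// given direction (+1 or -1); with cycle enabled, stepping past an end
// wraps around to the other end instead of stopping.
function Spinner(config) {
  this.minValue = config.minValue;
  this.maxValue = config.maxValue;
  this.incrementValue = config.incrementValue;
  this.cycle = !!config.cycle;
  this.value = config.minValue; // assumed starting value for this sketch
}

Spinner.prototype.step = function (direction) {
  var next = this.value + direction * this.incrementValue;
  if (next > this.maxValue) {
    this.value = this.cycle ? this.minValue : this.maxValue;
  } else if (next < this.minValue) {
    this.value = this.cycle ? this.maxValue : this.minValue;
  } else {
    this.value = next;
  }
};

var s = new Spinner({ minValue: 0, maxValue: 100, incrementValue: 2, cycle: true });
s.step(-1); // stepping below 0 with cycle enabled wraps to 100
```

Without cycle, the same step would simply pin the value at minValue.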
Creating a Simple Application in Sencha Touch

Packt
15 Feb 2012
10 min read
(For more resources on this topic, see here.)

Setting up your folder structure

Before we get started, you need to be sure that you've set up your development environment properly.

Root folder

You will need to have the folders and files for your application located in the correct web server folder on your local machine. On the Mac, this will be the Sites folder in your Home folder. On Windows, this will be C:\xampp\htdocs (assuming you installed XAMPP).

Setting up your application folder

Before we can start writing code, we have to perform some initial setup, copying in a few necessary resources and creating the basic structure of our application folder. This section will walk you through the basic setup for the Sencha Touch files, creating your style sheets folder, and creating the index.html file.

1. Locate the Sencha Touch folder you downloaded.
2. Create a folder in the root folder of your local web server. You may name it whatever you like; I have used the folder name TouchStart in this article.
3. Create three empty subfolders called lib, app, and css in your TouchStart folder.
4. Copy the resources and src folders, from the Sencha Touch folder you downloaded earlier, into the TouchStart/lib folder.
5. Copy the following files from your Sencha Touch folder to your TouchStart/lib folder: sencha-touch.js, sencha-touch-debug.js, and sencha-touch-debug-w-comments.js.
6. Create an empty file in the TouchStart/css folder called TouchStart.css. This is where we will put custom styles for our application.
7. Create an empty index.html file in the main TouchStart folder. We will flesh this out in the next section.

Icon files

Both iOS and Android applications use image icon files for display. These create the pretty rounded launch buttons found on most touch-style applications. If you are planning on sharing your application, you should also create PNG image files for the launch image and application icon.
Generally, there are two launch images: one with a resolution of 320 x 460 px for iPhones, and one at 768 x 1004 px for iPads. The application icon should be 72 x 72 px. See Apple's iOS Human Interface Guidelines for specifics, at http://developer.apple.com/library/ios/#documentation/userexperience/conceptual/mobilehig/IconsImages/IconsImages.html.

Creating the HTML application file

Using your favorite HTML editor, open the index.html file we created when we were setting up our application folder. This HTML file is where you specify links to the other files we will need in order to run our application. The following code sample shows how the HTML should look:

    <!DOCTYPE html>
    <html>
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
        <title>TouchStart Application - My Sample App</title>

        <!-- Sencha Touch CSS -->
        <link rel="stylesheet" href="lib/resources/css/sencha-touch.css" type="text/css">

        <!-- Sencha Touch JS -->
        <script type="text/javascript" src="lib/sencha-touch-debug.js"></script>

        <!-- Application JS -->
        <script type="text/javascript" src="app/TouchStart.js"></script>

        <!-- Custom CSS -->
        <link rel="stylesheet" href="css/TouchStart.css" type="text/css">
    </head>
    <body></body>
    </html>

Comments

In HTML, anything between <!-- and --> is a comment, and it will not be displayed in the browser. These comments tell you what is going on in the file. It's a very good idea to add comments to your own files, in case you need to come back later and make changes.

Let's take a look at this HTML code piece by piece, to see what is going on in this file.
The first five lines are just the basic setup lines for a typical web page:

    <!DOCTYPE html>
    <html>
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
        <title>TouchStart Application - My Sample App</title>

With the exception of the last line containing the title, you should not need to change this code for any of your applications. The title line should contain the title of your application; in this case, TouchStart Application - My Sample App is our title.

The next few lines are where we begin loading the files to create our application, starting with the Sencha Touch files. The first file is the default CSS file for the Sencha Touch library, sencha-touch.css:

    <link rel="stylesheet" href="lib/resources/css/sencha-touch.css" type="text/css">

CSS files

CSS (Cascading Style Sheet) files contain style information for the page, such as which items are bold or italic, which font sizes to use, and where items are positioned in the display. The Sencha Touch style library is very large and complex. It controls the default display of every single component in Sencha Touch. It should not be edited directly.

The next file is the actual Sencha Touch JavaScript library. During development and testing, we use the debug version of the library, sencha-touch-debug.js:

    <script type="text/javascript" src="lib/sencha-touch-debug.js"></script>

The debug version of the library is not compressed and contains comments and documentation. This can be helpful if an error occurs, as it allows you to see exactly where in the library the error occurred. When you have completed your development and testing, you should edit this line to use sencha-touch.js instead. This alternate file is the version of the library that is optimized for production environments; it takes less bandwidth and memory to use, but it has no comments and is very hard to read. Neither sencha-touch-debug.js nor sencha-touch.js should ever be edited directly.
The next two lines are where we begin to include our own application files. The names of these files are arbitrary, as long as they match the names of the files you create later, in the next section of this chapter. It's usually a good idea to name the files after your application, but that is entirely up to you. In this case, our files are named TouchStart.js and TouchStart.css:

    <script type="text/javascript" src="app/TouchStart.js"></script>

This first file, TouchStart.js, will contain our JavaScript application code. The last file we need to include is our own custom CSS file, called TouchStart.css. This file will contain any style information we need for our application. It can also be used to override some of the existing Sencha Touch CSS styles:

    <link rel="stylesheet" href="css/TouchStart.css" type="text/css">

This closes out the <head> area of the index.html file. The rest of the index.html file contains the <body></body> tags and the closing </html> tag. If you have any experience with traditional web pages, it may seem a bit odd to have empty <body></body> tags in this fashion. In a traditional web page, this is where all the information for display would normally go. For our Sencha Touch application, the JavaScript we create will populate this area automatically. No further content is needed in the index.html file, and all of our code will live in our TouchStart.js file. So, without further delay, let's write some code!

Starting from scratch with TouchStart.js

Let's start by opening the TouchStart.js file and adding the following:

    new Ext.Application({
        name: 'TouchStart',
        launch: function() {
            var hello = new Ext.Container({
                fullscreen: true,
                html: '<div id="hello">Hello World</div>'
            });
            this.viewport = hello;
        }
    });

This is probably the most basic application you can possibly create: the ubiquitous "Hello World" application.
Once you have saved the code, use the Safari web browser to navigate to the TouchStart folder in the root folder of your local web server. The address should look like one of the following:

- http://localhost/TouchStart/ on the PC
- http://127.0.0.1/~username/TouchStart on the Mac (replace username with the username for your Mac)

As you can see, all that this bit of code does is create a single window with the words Hello World. However, there are a few important elements to note in this example.

The first line, new Ext.Application({, creates a new application for Sencha Touch. Everything listed between the curly braces is a configuration option of this new application. While there are a number of configuration options for an application, most consist of at least the application's name and a launch function.

Namespace

One of the biggest problems with using someone else's code is the issue of naming. For example, if the framework you are using has an object called Application, and you create your own object called Application, the two will conflict. JavaScript uses the concept of namespaces to keep these conflicts from happening. In this case, Sencha Touch uses the namespace Ext. It is simply a way to eliminate potential conflicts between the framework's objects and code and your own. Sencha will automatically set up a namespace for your own code as part of the new Ext.Application object. Ext is also part of the name of Sencha's web application framework, ExtJS. Sencha Touch uses the same namespace convention to allow developers familiar with one library to easily understand the other.

When we create a new application, we need to pass it some configuration options. This will tell the application how to look and what to do. These configuration options are contained within the curly braces ({}) and separated by commas.
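The namespace idea described above can be shown in a few lines of plain JavaScript. This is an illustrative sketch (MyLib is a made-up name), not how Sencha Touch itself is built:

```javascript
// Instead of defining Application as a bare global (which could collide
// with another library's Application), each library hangs its classes
// off a single global namespace object.
var Ext = {}; // the framework's one global name
Ext.Application = function (config) { this.name = config.name; };

var MyLib = {}; // your own code gets its own namespace
MyLib.Application = function () { this.kind = 'mine'; };

// Both "Application" constructors now coexist without conflict.
var a = new Ext.Application({ name: 'TouchStart' });
var b = new MyLib.Application();
```

Only one name per library ever lands in the global scope, which is why two frameworks can share a page without trampling each other's objects.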
The first option is as follows:

    name: 'TouchStart'

The launch configuration option is a function that tells the application what to do once it starts up. Let's work backwards through this launch code and explain this.viewport. By default, a new application has a viewport. The viewport is a pseudo-container for your application; it's where you will add everything else. Typically, this viewport will be set to a particular kind of container object. At the beginning of the launch function, we start out by creating a basic container, which we call hello:

    launch: function() {
        var hello = new Ext.Container({
            fullscreen: true,
            html: '<div id="hello">Hello World</div>'
        });
        this.viewport = hello;
    }

Like the Application class, a new Ext.Container class is passed a configuration object consisting of a set of configuration options, contained within the curly braces ({}) and separated by commas. The Container object has over 40 different configuration options, but for this simple example, we only use two:

- fullscreen sets the size of the container to fill the entire screen (no matter which device is being used).
- html sets the content of the container itself. As the name implies, this can be a string containing either HTML or plain text.

Admittedly, this is a very basic application, without much in the way of style. Let's add something extra using the container's layout configuration option.

My application didn't work!

When you are writing code, it is an absolute certainty that you will, at some point, encounter errors. Even a simple error can cause your application to behave in a number of interesting and aggravating ways. When this happens, it is important to keep the following in mind: don't panic. Retrace your steps and use the tools available to track down the error and fix it.
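The pattern of "pass a configuration object between curly braces" can be sketched as defaults overridden by the caller's options. This is a conceptual illustration of how a component might consume such an object (Container here is a made-up stand-in, not Sencha's actual mechanism):

```javascript
// A component copies a set of defaults onto itself, then copies the
// caller's configuration object over them, so any option the caller
// supplies overrides the default.
function Container(config) {
  var defaults = { fullscreen: false, html: '' };
  var key;
  for (key in defaults) { this[key] = defaults[key]; }
  for (key in config) { this[key] = config[key]; }
}

// Only two of the options are supplied; everything else keeps its default.
var hello = new Container({
  fullscreen: true,
  html: '<div id="hello">Hello World</div>'
});
```

This is why a container with over 40 configuration options can still be created with just two lines of configuration: the rest fall back to defaults.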

Creating, Compiling, and Deploying Native Projects from the Android NDK

Packt
13 Feb 2012
13 min read
(For more resources on Android, see here.)

Compiling and deploying NDK sample applications

I guess you cannot wait any longer to test your new development environment. So why not compile and deploy the elementary samples provided by the Android NDK first, to see it in action? To get started, I propose to run HelloJni, a sample application which retrieves a character string defined inside a native C library into a Java activity (an activity in Android being more or less equivalent to an application screen).

Time for action - compiling and deploying the hello-jni sample

Let's compile and deploy the HelloJni project from the command line using Ant:

1. Open a command-line prompt (or a Cygwin prompt on Windows).

2. Go to the hello-jni sample directory inside the Android NDK. All the following steps have to be performed from this directory:

    $ cd $ANDROID_NDK/samples/hello-jni

3. Create the Ant build file and all related configuration files automatically using the android command (android.bat on Windows). These files describe how to compile and package an Android application:

    $ android update project -p .

4. Build the libhello-jni native library with ndk-build, which is a wrapper Bash script around Make. The ndk-build command sets up the compilation toolchain for native C/C++ code and automatically calls the version of GCC featured with the NDK:

    $ ndk-build

5. Make sure your Android development device or emulator is connected and running.

6. Compile, package, and install the final HelloJni APK (an Android application package). All these steps can be performed in one command, thanks to the Ant build automation tool. Among other things, Ant runs javac to compile the Java code, AAPT to package the application with its resources, and finally ADB to deploy it on the development device:

    $ ant install

7. Launch a shell session using adb (or adb.exe on Windows).
The ADB shell is similar to the shells found on Linux systems:

    $ adb shell

8. From this shell, launch the HelloJni application on your device or emulator. To do so, use am, the Android Activity Manager. The am command allows you to start Android activities and services, or send intents (that is, inter-activity messages), from the command line. The command parameters come from the Android manifest:

    # am start -a android.intent.action.MAIN -n com.example.hellojni/com.example.hellojni.HelloJni

9. Finally, look at your development device. HelloJni appears on the screen!

What just happened?

We have compiled, packaged, and deployed an official NDK sample application with Ant and the SDK command-line tools. We will explore these tools more later. We have also compiled our first native C library (also called a module) using the ndk-build command. This library simply returns a character string to the Java part of the application on request. Both sides of the application, the native one and the Java one, communicate through the Java Native Interface. JNI is a standard framework that allows Java code to explicitly call native C/C++ code through a dedicated API.

Finally, we launched HelloJni on our device from an Android shell (adb shell) with the am Activity Manager command. The command parameters passed in step 8 come from the Android manifest: com.example.hellojni is the package name, and com.example.hellojni.HelloJni is the main Activity class name concatenated with the main package:

    <?xml version="1.0" encoding="utf-8"?>
    <manifest package="com.example.hellojni"
              android:versionCode="1"
              android:versionName="1.0">
        ...
        <activity android:name=".HelloJni"
                  android:label="@string/app_name">
        ...

Automated build

Because the Android SDK, the NDK, and their open source bricks are not bound to Eclipse or any specific IDE, creating an automated build chain or setting up a continuous integration server becomes possible. A simple Bash script with Ant is enough to make it work!

The HelloJni sample is a little bit...
let's say rustic! So what about trying something fancier? The Android NDK provides a sample named San Angeles. San Angeles is a coding demo created in 2004 for the Assembly 2004 competition. It was later ported to OpenGL ES and reused as a sample demonstration in several languages and systems, including Android. You can find more information by visiting one of the author's pages: http://jet.ro/visuals/4k-intros/san-angeles-observation/.

Have a go hero – compiling the San Angeles OpenGL demo

To test this demo, you need to follow the same steps:

1. Go to the San Angeles sample directory.
2. Generate the project files.
3. Compile and install the final San Angeles application.
4. Finally, run it.

As this application uses OpenGL ES 1, AVD emulation will work, but may be somewhat slow!

You may encounter some errors while compiling the application with Ant. The reason is simple: in the res/layout/ directory, a main.xml file is defined. This file usually defines the main screen layout in a Java application: the displayed components and how they are organized. However, when Android 2.2 (API Level 8) was released, the layout_width and layout_height enumerations, which describe the way UI components should be sized, were modified: FILL_PARENT became MATCH_PARENT. But San Angeles uses API Level 4. There are basically two ways to overcome this problem.

The first one is selecting the right Android version as the target. To do so, specify the target when creating the Ant project files:
   $ android update project -p . --target android-8
This way, the build target is set to API Level 8 and MATCH_PARENT is recognized. You can also change the build target manually by editing default.properties at the project root and replacing:
   target=android-4
with the following line:
   target=android-8

The second way is more straightforward: erase the main.xml file! Indeed, this file is in fact not used by the San Angeles demo, as only an OpenGL screen created programmatically is displayed, without any UI components.

Target right!
When compiling an Android application, always check carefully whether you are using the right target platform, as some features are added or updated between Android versions. A target can also dramatically change the breadth of your audience because of the multiple versions of Android in the wild. Indeed, targets move a lot, and fast, on Android!

All these efforts are not in vain: it is just a pleasure to see this old-school 3D environment full of flat-shaded polygons running for the first time. So just stop reading and run it!

Exploring Android SDK tools

The Android SDK includes tools which are quite useful for developers and integrators. We have already come across some of them, including the Android Debug Bridge and the android command. Let's explore them more deeply.

Android Debug Bridge

You may not have noticed it specifically since the beginning, but it has always been there, over your shoulder. The Android Debug Bridge is a multifaceted tool used as an intermediary between the development environment and emulators/devices. More specifically, ADB is:

- A background process running on emulators and devices to receive orders or requests from an external computer.
- A background server on your development computer communicating with connected devices and emulators. When listing devices, the ADB server is involved. When debugging, the ADB server is involved. When any communication with a device happens, the ADB server is involved!
- A client running on your development computer and communicating with devices through the ADB server. That is what we did to launch HelloJni: we connected to our device using adb shell before issuing the required commands.

ADB shell is a real Linux shell embedded in the ADB client. Although not all standard commands are available, classical commands, such as ls, cd, pwd, cat, chmod, ps, and so on, are executable.
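Because the ADB client is an ordinary command-line program, it is easy to drive from scripts. As an illustration, the following Python sketch (a hypothetical helper, not part of the SDK; the serial numbers in the sample output are made up) parses the text printed by adb devices into a list of connected devices:

```python
def parse_adb_devices(output):
    """Parse the text printed by 'adb devices' into a list of
    (serial, state) tuples, skipping the banner and blank lines."""
    devices = []
    for line in output.splitlines():
        line = line.strip()
        # Skip the "List of devices attached" banner and empty lines.
        if not line or line.startswith("List of devices"):
            continue
        serial, _, state = line.partition("\t")
        devices.append((serial, state))
    return devices

if __name__ == "__main__":
    sample = "List of devices attached\nHT07PHL5\tdevice\nemulator-5554\tdevice\n"
    print(parse_adb_devices(sample))
    # [('HT07PHL5', 'device'), ('emulator-5554', 'device')]
```

In a real script you would feed it the output of a subprocess call to adb devices instead of a hard-coded sample.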
A few specific commands are also provided, such as:

- logcat: to display device log messages
- dumpsys: to dump the system state
- dmesg: to dump kernel messages

ADB shell is a real Swiss Army knife. It also allows manipulating your device in a flexible way, especially with root access. For example, it becomes possible to observe applications deployed in their "sandbox" (see the directory /data/data) or to list and kill currently running processes.

ADB also offers other interesting options; some of them are as follows:

- pull <device path> <local path>: to transfer a file to your computer
- push <local path> <device path>: to transfer a file to your device or emulator
- install <application package>: to install an application package
- install -r <package to reinstall>: to reinstall an application, if already deployed
- devices: to list all Android devices currently connected, including emulators
- reboot: to restart an Android device programmatically
- wait-for-device: to sleep until a device or emulator is connected to your computer (for example, in a script)
- start-server: to launch the ADB server communicating with devices and emulators
- kill-server: to terminate the ADB server
- bugreport: to print the whole device state (like dumpsys)
- help: to get exhaustive help with all the options and flags available

To ease the writing of issued commands, ADB provides optional flags to specify before options:

- -s <device id>: to target a specific device
- -d: to target the current physical device, if only one is connected (or an error message is raised)
- -e: to target the currently running emulator, if only one is connected (or an error message is raised)

The ADB client and its shell can be used for advanced manipulation of the system, but most of the time it will not be necessary. ADB itself is generally used transparently. In addition, without root access to your phone, the possible actions are limited. For more information, see http://developer.android.com/guide/developing/tools/adb.html.

Root or not root
If you know the Android ecosystem a bit, you may have heard about rooted phones and non-rooted phones. Rooting a phone means getting root access to it, either "officially" while using development phones or using hacks with an end user phone. The main interest is to upgrade your system before the manufacturer provides updates (if any!) or to use a custom version (optimized or modified, for example, CyanogenMod). You can also perform any possible (especially dangerous) manipulations that an administrator can do (for example, deploying a custom kernel). Rooting is not an illegal operation, as you are modifying YOUR device. But not all manufacturers appreciate this practice, and it usually voids the warranty.

Have a go hero – transferring a file to the SD card from the command line

Using the information provided, you should be able to connect to your phone like in the good old days of computers (I mean a few years ago!) and execute some basic manipulations using a shell prompt. I propose that you transfer a resource file by hand, like a music clip or a resource that you will be reading from a future program of yours. To do so, you need to open a command-line prompt and perform the following steps:

1. Check if your device is available using adb from the command line.
2. Connect to your device using the Android Debug Bridge shell prompt.
3. Check the content of your SD card using the standard Unix ls command. Please note that ls on Android has a specific behavior, as it differentiates ls mydir from ls mydir/ when mydir is a symbolic link.
4. Create a new directory on your SD card using the classic mkdir command.
5. Finally, transfer your file by issuing the appropriate adb command.

Project configuration tool

The command named android is the main entry point when manipulating not only projects but also AVDs and SDK updates. There are a few options available, which are as follows:

- create project: This option is used to create a new Android project through the command line.
A few additional options must be specified to allow proper generation:

- -p: the project path
- -n: the project name
- -t: the Android API target
- -k: the Java package which contains the application's main class
- -a: the application's main class name (Activity in Android terms)

For example:
   $ android create project -p ./MyProjectDir -n MyProject -t android-8 -k com.mypackage -a MyActivity

- update project: This is what we used to create Ant project files from an existing source. It can also be used to upgrade an existing project to a new version. The main parameters are as follows:

- -p: the project path
- -n: to change the project name
- -l: to include an Android library project (that is, reusable code); the path must be relative to the project directory
- -t: to change the Android API target

There are also options to create library projects (create lib-project, update lib-project) and test projects (create test-project, update test-project). I will not go into detail here, as this is more related to the Java world. As with ADB, the android command is your friend and can give you some help:
   $ android create project --help

The android command is a crucial tool for implementing a continuous integration toolchain in order to compile, package, deploy, and test a project automatically, entirely from the command line.

Have a go hero – towards continuous integration

With the adb, android, and ant commands, you have enough knowledge to build a minimal automatic compilation and deployment script to perform some continuous integration. I assume here that you have versioning software available and you know how to use it. Subversion (also known as SVN) is a good candidate and can work locally (without a server). Perform the following operations:

1. Create a new project by hand using the android command.
2. Then, create a Unix or Cygwin shell script and assign it the necessary execution rights (chmod command). All the following steps have to be scripted in it.
3. In the script, check out the sources from your versioning system (for example, using an svn checkout command) to disk. If you do not have a versioning system, you can still copy your own project directory using Unix commands.
4. Build the application using ant. Do not forget to check command results using $?. If the returned value is different from 0, it means an error occurred. Additionally, you can use grep or some custom tools to check potential error messages.
5. If needed, you can deploy resource files using adb.
6. Install the application on your device or on the emulator (which you can launch from the script) using ant, as shown previously.
7. You can even try to launch your application automatically and check the Android logs (see the logcat option in adb). Of course, your application needs to make use of logs!

A free monkey to test your app!

In order to automate UI testing of an Android application, an interesting utility provided with the Android SDK is MonkeyRunner, which can simulate user actions on a device to perform some automated UI testing. Have a look at http://developer.android.com/guide/developing/tools/monkeyrunner_concepts.html.

To favor automation, a single Android shell statement can be executed from the command line as follows:
   adb shell ls /sdcard/

To execute a command on an Android device and retrieve its result back in your host shell, execute the following command:
   adb shell "ls /notexistingdir/ 1> /dev/null 2>&1; echo \$?"
Redirection is necessary to avoid polluting the standard output. The escape character before $? is required to avoid early interpretation by the host shell. Now you are fully prepared to automate your own build toolchain!
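The operations above can also be sketched as a small driver script. The following Python version is an illustrative sketch only: the repository URL, project name, package, and activity are placeholders, and the runner is injectable so the sequencing logic can be exercised without the real tools. It runs each build step in order and stops at the first non-zero exit code, mirroring the $? checks described above:

```python
import subprocess

def run_steps(steps, runner=subprocess.call):
    """Run each command list in order; return the index of the first
    failing step, or -1 if every step exited with status 0. The runner
    is injectable so the pipeline can be tested without real tools."""
    for i, cmd in enumerate(steps):
        if runner(cmd) != 0:
            return i
    return -1

# Hypothetical pipeline; the URL, paths, package, and activity below are
# placeholders, not values taken from this book's project.
PIPELINE = [
    ["svn", "checkout", "http://myserver/svn/MyProject", "MyProject"],
    ["ant", "-f", "MyProject/build.xml", "install"],
    ["adb", "shell", "am", "start",
     "-a", "android.intent.action.MAIN",
     "-n", "com.mypackage/com.mypackage.MyActivity"],
]
```

Calling run_steps(PIPELINE) on a machine with svn, ant, and adb on the PATH would execute the chain for real; returning the failing index instead of raising makes it easy to log which stage broke.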
Packt
25 Jan 2012
14 min read

Debugging with OpenGL ES in iOS 5

(For more resources on Debugging with OpenGL ES in iOS 5, see here.)

The Open Graphics Library (OpenGL) can be simply defined as a software interface to the graphics hardware. It is a 3D graphics and modeling library that is highly portable and extremely fast. Using the OpenGL graphics API, you can create some brilliant graphics that are capable of representing 2D and 3D data.

The OpenGL library is a multi-purpose, open-source graphics library that supports applications for 2D and 3D digital content creation, mechanical and architectural design, virtual prototyping, flight simulation, and video games, and allows application developers to configure a 3D graphics pipeline and submit data to it. An object is defined by connected vertices. The vertices of the object are then transformed, lit, assembled into primitives, and rasterized to create a 2D image that can be sent directly to the underlying graphics hardware to render the drawing, which is typically very fast, due to the hardware being dedicated to processing graphics commands. We have some fantastic stuff to cover in this article, so let's get started.

Understanding the new workflow feature within Xcode

In this section, we will be taking a look at the improvements that have been made to the Xcode 4 development environment, and how they can enable us to debug OpenGL ES applications much more easily compared to previous versions of Xcode. We will look at how we can use the frame capture feature of the debugger to capture all frame objects that are included within an OpenGL ES application. This tool enables you to list all the frame objects that are currently used by your application at a given point in time. We will familiarize ourselves with the new OpenGL ES debugger within Xcode, to enable us to track down specific issues relating to OpenGL ES within the code.

Creating a simple project to debug an OpenGL ES application

Before we can proceed, we first need to create our OpenGLESExample project.
1. Launch Xcode from the /Developer/Applications folder.
2. Select the OpenGL Game template from the Project template dialog box. Then, click on the Next button to proceed to the next step in the wizard.
3. This will allow you to enter the Product Name and your Company Identifier. Enter OpenGLESExample for the Product Name, and ensure that you have selected iPhone from the Device Family dropdown box.
4. Next, click on the Next button to proceed to the final step in the wizard. Choose the folder location where you would like to save your project. Then, click on the Create button to save your project at the specified location.

Once your project has been created, you will be presented with the Xcode development interface, along with the project files that the template created for you within the Project Navigator window. Now that we have our project created, we need to configure it to enable us to debug the state of its objects.

Detecting OpenGL ES state information and objects

To enable us to detect and monitor the state of the objects within our application, we need to enable this feature through the Edit Scheme… section of our project, as shown in the following screenshot:

From the Edit Scheme section, as shown in the following screenshot, select the Run OpenGLESExampleDebug action, then click on the Options tab, and then select the OpenGL ES Enable frame capture checkbox. For this feature to work, you must run the application on an iOS device, and the device must be running iOS 5.0 or later. This feature will not work within the iOS simulator. You will also need to ensure that after you have attached your device, you restart Xcode for this option to become available.

When you have configured your project correctly, click on the OK button to accept the changes, and close the dialog box. Next, build and run your OpenGL ES application. When you run your application, you will see two colored, three-dimensional cubes.
When you run your application on the iOS device, you will notice that the frame capture button appears within the Xcode 4 debug bar, as shown in the following screenshot:

When using the OpenGL ES features of Xcode 4.2, these debugging features enable you to do the following:

- Inspect OpenGL ES state information.
- Introspect OpenGL ES objects, such as view textures and shaders.
- Step through draw calls and watch the changes with each call.
- Step through the state calls that precede each draw call to see exactly how the image is constructed.

The following screenshot displays the captured frame of our sample application. The debug navigator contains a list of every draw call and state call associated with that particular frame. The buffers that are associated with the frame are shown within the editor pane, and the state information is shown in the debug windowpane.

The default view when the OpenGL ES frame capture is launched is the Auto view. This view displays the color portion, which is Renderbuffer #1, as well as its grayscale equivalent, Renderbuffer #2. You can also toggle the visibility of each of the color channels (red, green, and blue), as well as the alpha channel, and then use the Range scroll to adjust the color range. This can be done easily by selecting each of the cog buttons shown in the previous screenshot.

You also have the ability to step through each of the draw calls in the debug navigator, or by using the double arrows and slider in the debug bar. When using the draw call arrows or sliders, you can have Xcode select the stepped-to draw call in the debug navigator. This can be achieved by Control + clicking below the captured frame and choosing Reveal in Debug Navigator from the shortcut menu.
You can also use the shortcut menu to toggle between the standard rendered view of the image and the wireframe view of the object, by selecting the Show Wireframe option from the pop-up menu, as shown in the previous screenshot. The wireframe view highlights the element that is being drawn by the selected draw call. To turn off the wireframe feature and return the image to its normal state, select the Hide Wireframe option from the pop-up menu, as shown in the following screenshot:

Now that you have a reasonable understanding of debugging an OpenGL ES application and its draw calls, let's take a look at how we can view the textures associated with an OpenGL ES application.

View textures

A texture in OpenGL ES 2.0 is basically an image that can be sampled by the graphics pipeline, and is used to map a colored image onto a surface. To view objects that have been captured with the frame capture button, follow these simple steps:

1. Open the Assistant Editor to see the objects associated with the captured frame. In this view, you can choose to see all of the objects, only bound objects, or the stack. This can be accessed from the View | Assistant Editor | Show Assistant Editor menu, as shown in the following screenshot:
2. Open a secondary assistant editor pane, so that you can see both the objects and the stack frame at the same time. This can be accessed from the View | Assistant Editor | Add Assistant Editor menu shown previously, or by clicking on the + symbol, as shown in the following screenshot:
3. To see details about any object contained within the OpenGL ES assistant editor, double-click on the object, or choose the item from the pop-up list, as shown in the next screenshot. It is worth mentioning that, from within this view, you have the ability to change the orientation of any object that has been captured and rendered to the view.
To change the orientation, locate the Orientation options shown at the bottom-right of the screen. Objects can be changed to appear in one or more views as needed, and these are as follows:

- Rotate clockwise
- Rotate counter-clockwise
- Flip orientation vertically
- Flip orientation horizontally

For example, if you want to see information about the vertex array object (VAO), you would double-click on it to see it in more detail, as shown in the following screenshot. This displays all the X, Y, and Z axes required to construct each of our objects. Next, we will take a look at how shaders are constructed.

Shaders

There are two types of shaders that you can write for OpenGL ES; these are vertex shaders and fragment shaders. These two shaders make up what is known as the programmable portion of the OpenGL ES 2.0 pipeline, and are written in a language with C-like syntax, called the OpenGL ES Shading Language (GLSL). The following screenshot outlines the OpenGL ES 2.0 programmable pipeline, which combines a version of the OpenGL Shading Language for programming vertex shaders and fragment shaders that has been adapted for the embedded platforms of iOS devices:

Shaders are not new; they have been used in a variety of games that use OpenGL. Games that come to mind include Doom 3 and Quake 4, and several flight simulators, such as Microsoft's Flight Simulator X. One thing to note about shaders is that they are not compiled when your application is built. The source code of the shader gets stored within your application bundle as a text file, or defined within your code as a string literal, that is:

vertShaderPathname = [[NSBundle mainBundle] pathForResource:@"Shader" ofType:@"vsh"];

Before you can use your shaders, your application has to load and compile each of them. This is done to preserve device independence.
Take, for example, what would happen if Apple decided to switch to a different GPU manufacturer for a future release of its iPhone: the compiled shaders might not work on the new GPU. Having your application defer compilation to runtime avoids this problem, and any newer versions of the GPU will be fully supported without you needing to rebuild your application. The following table explains the differences between the two shaders.

Vertex shaders: These are programs that get called once per vertex in your scene. For example, if you were rendering a simple scene with a single square, with one vertex at each corner, the vertex shader would be called four times. Its job is to perform calculations such as lighting and geometry transforms (moving, scaling, and rotating objects) to simulate realism.

Fragment shaders: These are programs that get called once per pixel in your scene. So, if you're rendering that same simple scene with a single square, the fragment shader will be called once for each pixel that the square covers. Fragment shaders can also perform lighting calculations, and so on, but their most important job is to set the final color for the pixel.

Next, we will examine the implementation of the vertex shader that the OpenGL template created for us. You will notice that these shaders are code files implemented with a C-like syntax. Let's start by examining each section of the vertex shader file, by following these simple steps:

Open the Shader.vsh vertex shader file located within the OpenGLESExample folder of the Project Navigator window, and examine the following code snippet.
attribute vec4 position;
attribute vec3 normal;

varying lowp vec4 colorVarying;

uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;

void main()
{
    vec3 eyeNormal = normalize(normalMatrix * normal);
    vec3 lightPosition = vec3(0.0, 0.0, 1.0);
    vec4 diffuseColor = vec4(0.4, 0.4, 1.0, 1.0);

    float nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)));

    colorVarying = diffuseColor * nDotVP;

    gl_Position = modelViewProjectionMatrix * position;
}

Next, we will take a look at what this piece of code is doing and explain what is actually going on. So let's start.

The attribute keyword declares that this shader is going to be passed an input variable called position. This will be used to indicate the position of the vertex. You will notice that the position variable has been declared of type vec4, which means that each vertex position contains four floating-point values. The second attribute input variable, named normal, has been declared of type vec3, which means that it holds three floating-point values describing the surface normal at the vertex, used in the lighting calculation. The local variable diffuseColor, declared inside main(), defines the color to be used for the vertex.

We declare another variable called colorVarying. You will notice that it doesn't use the attribute keyword. This is because it is an output variable that will be passed to the fragment shader. The varying keyword tells us the value for a particular vertex. This basically means that you can specify a different color for each vertex, and the values in between will be interpolated into a neat gradient that you will see in the final output. We have declared this as vec4, because colors are comprised of four component values.

Finally, we declare two uniform variables called modelViewProjectionMatrix and normalMatrix. The model, view, and projection matrices are three separate matrices.
Model maps from an object's local coordinate space into world space, view from world space to camera space, and projection from camera space to screen space. When all three are combined, you can use the single result to map all the way from object space to screen space, enabling you to work out what you need to pass on to the next stage of a programmable pipeline from the incoming vertex positions. The normal matrix is used to transform the normal vectors, which determine how much light is received at the specified vertex or surface.

Uniforms are a second form of data that you can pass from your application code to the shaders. Uniform types are available to both vertex and fragment shaders, unlike attributes, which are only available to the vertex shader. The value of a uniform cannot be changed by the shaders, and will have the same value every time a shader runs for a given trip through the pipeline. Uniforms can contain any kind of data that you want to pass along for use in your shader.

Next, we assign the computed per-vertex color to the varying variable colorVarying. This value will then be available in the fragment shader in interpolated form. Finally, we set the gl_Position output variable by multiplying the incoming vertex position by modelViewProjectionMatrix, transforming the vertex from object space into clip space.

Next, we will take a look at the fragment shader that the OpenGL ES template created for us. Open the Shader.fsh fragment shader file located within the OpenGLESExample folder of the Project Navigator window, and examine the following code snippet.

varying lowp vec4 colorVarying;

void main()
{
    gl_FragColor = colorVarying;
}

We will now take a look at this code snippet and explain what is actually going on here. You will notice that within the fragment shader, the declaration of the varying variable colorVarying, as highlighted in the code, has the same name as it did in the vertex shader.
This is very important; if these names were different, OpenGL ES wouldn't recognize them as the same variable, and your program would produce unexpected results. The type is also very important; it has to be the same data type as was declared within the vertex shader.

The lowp keyword is a GLSL precision qualifier, used to specify how many bits are used to represent a number. From a programming point of view, the more bits used to represent a number, the fewer problems you will have with the rounding of floating-point calculations. GLSL allows you to add precision qualifiers any time a variable is declared, and in the fragment shader a precision must be declared. Failure to declare it within the fragment shader will result in your shader failing to compile.

The lowp qualifier gives you the best performance with the least accuracy during interpolation. This is the better option when dealing with colors, where small rounding errors don't matter. Should you find the need to increase the precision, it is better to use mediump or highp if the lack of precision causes problems within your application.

For more information on the OpenGL ES Shading Language (GLSL) or the precision qualifiers, refer to the following documentation: http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf.
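To make the per-vertex lighting in the vertex shader more tangible, here is the same math reproduced in plain Python (an illustrative sketch only; GLSL's vec types are modeled with tuples, and the normal is normalized inside the helper rather than by a normal matrix multiply):

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def vertex_color(eye_normal, light_position, diffuse_color):
    """Mirror of the shader: colorVarying = diffuseColor * nDotVP,
    where nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)))."""
    n_dot_vp = max(0.0, dot(normalize(eye_normal), normalize(light_position)))
    return tuple(c * n_dot_vp for c in diffuse_color)

# A vertex whose normal points straight at the light receives the full
# diffuse color; one facing away from it receives black.
print(vertex_color((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.4, 0.4, 1.0, 1.0)))
# (0.4, 0.4, 1.0, 1.0)
```

Running this for normals at intermediate angles produces proportionally dimmed colors, which is exactly the gradient you see interpolated across the cube faces.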
Packt
25 Jan 2012
13 min read

Geolocation and Accelerometer APIs

(For more resources on iOS, see here.)

The iOS family makes use of many onboard sensors, including the three-axis accelerometer, digital compass, camera, microphone, and global positioning system (GPS). Their inclusion has created a world of opportunity for developers, and has resulted in a slew of innovative, creative, and fun apps that have contributed to the overwhelming success of the App Store.

Determining your current location

The iOS family of devices are location-aware, allowing your approximate geographic position to be determined. How this is achieved depends on the hardware present in the device. For example, the original iPhone, all models of the iPod touch, and Wi-Fi-only iPads use Wi-Fi network triangulation to provide location information. The remaining devices can more accurately calculate their position using an on-board GPS chip or cell-phone tower triangulation. The AIR SDK provides a layer of abstraction that allows you to extract location information in a hardware-independent manner, meaning you can access the information on any iOS device using the same code. This recipe will take you through the steps required to determine your current location.

Getting ready

An FLA has been provided as a starting point for this recipe. From Flash Professional, open chapter9recipe1recipe.fla from the code bundle, which can be downloaded from http://www.packtpub.com/support.

How to do it...

Perform the following steps to listen for and display geolocation data:

1. Create a document class and name it Main.
2. Import the following classes and add a member variable of type Geolocation:

package {
    import flash.display.MovieClip;
    import flash.events.GeolocationEvent;
    import flash.sensors.Geolocation;

    public class Main extends MovieClip {

        private var geo:Geolocation;

        public function Main() {
            // constructor code
        }

    }
}

3. Within the class' constructor, instantiate a Geolocation object and listen for updates from it:

public function Main() {
    if(Geolocation.isSupported)
    {
        geo = new Geolocation();
        geo.setRequestedUpdateInterval(1000);
        geo.addEventListener(GeolocationEvent.UPDATE, geoUpdated);
    }
}

4. Now, write an event handler that will obtain the updated geolocation data and populate the dynamic text fields with it:

private function geoUpdated(e:GeolocationEvent):void {
    latitudeField.text = e.latitude.toString();
    longitudeField.text = e.longitude.toString();
    altitudeField.text = e.altitude.toString();
    hAccuracyField.text = e.horizontalAccuracy.toString();
    vAccuracyField.text = e.verticalAccuracy.toString();
    timestampField.text = e.timestamp.toString();
}

5. Save the class file as Main.as within the same folder as the FLA. Move back to the FLA and save it too.
6. Publish and test the app on your device.
7. When launched for the first time, a native iOS dialog will appear. Tap the OK button to grant your app access to the device's location data.

Devices running iOS 4 and above will remember your choice, while devices running older versions of iOS will prompt you each time the app is launched.

The location data will be shown on screen and periodically updated. Take your device on the move and you will see changes in the data as your geographical location changes.

How it works...

AIR provides the Geolocation class in the flash.sensors package, allowing location data to be retrieved from your device. To access the data, create a Geolocation instance and listen for it dispatching GeolocationEvent.UPDATE events.
We did this within our document class' constructor, using the geo member variable to hold a reference to the object:

geo = new Geolocation();
geo.setRequestedUpdateInterval(1000);
geo.addEventListener(GeolocationEvent.UPDATE, geoUpdated);

The frequency with which location data is retrieved can be set by calling the Geolocation.setRequestedUpdateInterval() method. You can see this in the earlier code, where we requested an update interval of 1000 milliseconds. This only acts as a hint to the device, meaning the actual time between updates may be greater or smaller than your request. Omitting this call will result in the device using a default update interval. The default interval can be anything ranging from milliseconds to seconds, depending on the device's hardware capabilities.

Each UPDATE event dispatches a GeolocationEvent object, which contains properties describing your current location. Our geoUpdated() method handles this event by outputting several of the properties to the dynamic text fields sitting on the stage:

private function geoUpdated(e:GeolocationEvent):void {
    latitudeField.text = e.latitude.toString();
    longitudeField.text = e.longitude.toString();
    altitudeField.text = e.altitude.toString();
    hAccuracyField.text = e.horizontalAccuracy.toString();
    vAccuracyField.text = e.verticalAccuracy.toString();
    timestampField.text = e.timestamp.toString();
}

The following information was output:

- Latitude and longitude
- Altitude
- Horizontal and vertical accuracy
- Timestamp

The latitude and longitude positions are used to identify your geographical location. Your altitude is also obtained, and is measured in meters. As you move with the device, these values will update to reflect your new location. The accuracy of the location data is also shown, and depends on the hardware capabilities of the device. Both the horizontal and vertical accuracy are measured in meters.
Finally, a timestamp is associated with every GeolocationEvent object that is dispatched, allowing you to determine the actual time interval between each. The timestamp specifies the milliseconds that have passed since the app was launched. Some older devices that do not include a GPS unit only dispatch UPDATE events occasionally. Initially, one or two UPDATE events are dispatched, with additional events only being dispatched when location information changes noticeably. Also note the use of the static Geolocation.isSupported property within the constructor. Although this will currently return true for all iOS devices, it cannot be guaranteed for future devices. Checking for geolocation support is also advisable when writing cross-platform code. For more information, perform a search for flash.sensors.Geolocation and flash.events.GeolocationEvent within Adobe Community Help. There's more... The amount of information made available and the accuracy of that information depends on the capabilities of the device. Accuracy The accuracy of the location data depends on the method employed by the device to calculate your position. Typically, iOS devices with an on-board GPS chip will have a benefit over those that rely on Wi-Fi triangulation. For example, running this recipe's app on an iPhone 4, which contains a GPS unit, results in a horizontal accuracy of around 10 meters. The same app running on a third-generation iPod touch and relying on a Wi-Fi network reports a horizontal accuracy of around 100 meters. Quite a difference! 
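Since the requested update interval is only a hint, comparing timestamps is the reliable way to measure, or enforce, the gap between handled updates. The following JavaScript sketch is illustrative only — the recipe itself is ActionScript 3, and the function names here are hypothetical:

```javascript
// Enforce a minimum gap between handled geolocation updates by comparing
// event timestamps (milliseconds since launch, as in GeolocationEvent).
function makeThrottledHandler(minIntervalMs, handler) {
  let lastHandled = -Infinity;
  return function (update) {
    if (update.timestamp - lastHandled >= minIntervalMs) {
      lastHandled = update.timestamp;
      handler(update);
    }
  };
}

// Usage: only updates at least 1000 ms apart reach the real handler.
const handled = [];
const onUpdate = makeThrottledHandler(1000, u => handled.push(u.timestamp));
[0, 400, 900, 1200, 2500].forEach(t => onUpdate({ timestamp: t }));
// handled is now [0, 1200, 2500]
```

The same pattern is useful on devices whose hardware delivers updates faster than your app can usefully consume them.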
Do not make assumptions about a device's capabilities. If your application relies on the presence of GPS hardware, then it is possible to state this within your application descriptor file. Doing so will prevent users without the necessary hardware from downloading your app from the App Store. Mapping your location The most obvious use for the retrieval of geolocation data is mapping. Typically, an app will obtain a geographic location and display a map of its surrounding area. There are several ways to achieve this, but launching and passing location data to the device's native maps application is possibly the easiest solution. If you would prefer an ActionScript solution, then there is the UMap ActionScript 3.0 API, which integrates with map data from a wide range of providers including Bing, Google, and Yahoo!. You can sign up and download the API from www.umapper.com. Calculating distance between geolocations When the geographic coordinates of two separate locations are known, it is possible to determine the distance between them. AIR does not provide an API for this but an AS3 solution can be found on the Adobe Developer Connection website at: http://cookbooks.adobe.com/index.cfm?event=showdetails&postId=5701. The UMap ActionScript 3.0 API can also be used to calculate distances. Refer to www.umapper.com. Geocoding Mapping providers, such as Google and Yahoo!, provide geocoding and reverse-geocoding web services. Geocoding is the process of finding the latitude and longitude of an address, whereas reverse-geocoding converts a latitude-longitude pair into a readable address. You can make HTTP requests from your AIR for iOS application to any of these services. As an example, take a look at the Yahoo! PlaceFinder web service at http://developer.yahoo.com/geo/placefinder. Alternatively, the UMap ActionScript 3.0 API integrates with many of these services to provide geocoding functionality directly within your Flash projects. Refer to the uMapper website. 
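For reference, the distance calculation mentioned under Calculating distance between geolocations usually comes down to the haversine formula. A JavaScript sketch (illustrative only; this is not part of the AIR API):

```javascript
// Haversine formula: great-circle distance in kilometers between two
// latitude/longitude pairs given in degrees.
function distanceKm(lat1, lon1, lat2, lon2) {
  const toRad = deg => deg * Math.PI / 180;
  const R = 6371; // mean Earth radius in km
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
            Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// London to Paris is roughly 344 km.
const d = distanceKm(51.5074, -0.1278, 48.8566, 2.3522);
```

Plugging the latitude and longitude from two GeolocationEvent objects into a function like this gives the straight-line distance between the two readings.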
Gyroscope support Another popular sensor is the gyroscope, which is found in more recent iOS devices. While the AIR SDK does not directly support gyroscope access, Adobe has made available a native extension for AIR 3.0, which provides a Gyroscope ActionScript class. A download link and usage examples can be found on the Adobe Developer Connection site at www.adobe.com/devnet/air/native-extensions-for-air/extensions/gyroscope.html. Determining your speed and heading The availability of an on-board GPS unit makes it possible to determine your speed and heading. In this recipe, we will write a simple app that uses the Geolocation class to obtain and use this information. In addition, we will add compass functionality by utilizing the user's current heading. Getting ready You will need a GPS-enabled iOS device. The iPhone has featured an on-board GPS unit since the release of the 3G. GPS hardware can also be found in all cellular network-enabled iPads. From Flash Professional, open chapter9recipe2recipe.fla from the code bundle. Sitting on the stage are three dynamic text fields. The first two (speed1Field and speed2Field) will be used to display the current speed in meters per second and miles per hour respectively. We will write the device's current heading into the third—headingField. Also, a movie clip named compass has been positioned near the bottom of the stage and represents a compass with north, south, east, and west clearly marked on it. We will update the rotation of this clip in response to heading changes to ensure that it always points towards true north. How to do it... To obtain the device's speed and heading, carry out the following steps: Create a document class and name it Main. 
Add the necessary import statements, a constant, and a member variable of type Geolocation: package { import flash.display.MovieClip; import flash.events.GeolocationEvent; import flash.sensors.Geolocation; public class Main extends MovieClip { private const CONVERSION_FACTOR:Number = 2.237; private var geo:Geolocation; public function Main() { // constructor code } }} Within the constructor, instantiate a Geolocation object and listen for updates: public function Main() { if(Geolocation.isSupported) { geo = new Geolocation(); geo.setRequestedUpdateInterval(50); geo.addEventListener(GeolocationEvent.UPDATE, geoUpdated); }} We will need an event listener for the Geolocation object's UPDATE event. This is where we will obtain and display the current speed and heading, and also update the compass movie clip to ensure it points towards true north. Add the following method: private function geoUpdated(e:GeolocationEvent):void { var metersPerSecond:Number = e.speed; var milesPerHour:uint = getMilesPerHour(metersPerSecond); speed1Field.text = String(metersPerSecond); speed2Field.text = String(milesPerHour); var heading:Number = e.heading; compass.rotation = 360 - heading; headingField.text = String(heading);} Finally, add this support method to convert meters per second to miles per hour: private function getMilesPerHour(metersPerSecond:Number):uint { return metersPerSecond * CONVERSION_FACTOR;} Save the class file as Main.as. Move back to the FLA and save it too. Compile the FLA and deploy the IPA to your device. Launch the app. When prompted, grant your app access to the GPS unit. Hold the device in front of you and start turning on the spot. The heading (degrees) field will update to show the direction you are facing. The compass movie clip will also update, showing you where true north is in relation to your current heading. Take your device outside and start walking, or better still, start running. 
On average every 50 milliseconds you will see the top two text fields update and show your current speed, measured in both meters per second and miles per hour. How it works... In this recipe, we created a Geolocation object and listened for it dispatching UPDATE events. An update interval of 50 milliseconds was specified in an attempt to receive the speed and heading information frequently. Both the speed and heading information are obtained from the GeolocationEvent object, which is dispatched on each UPDATE event. The event is captured and handled by our geoUpdated() handler, which displays the speed and heading information obtained from the GPS unit. The current speed is measured in meters per second and is obtained by querying the GeolocationEvent.speed property. Our handler also converts the speed to miles per hour before displaying each value within the appropriate text field. The following code does this: var metersPerSecond:Number = e.speed;var milesPerHour:uint = getMilesPerHour(metersPerSecond);speed1Field.text = String(metersPerSecond);speed2Field.text = String(milesPerHour); The heading, which represents the direction of movement (with respect to true north) in degrees, is retrieved from the GeolocationEvent.heading property. The value is used to set the rotation property of the compass movie clip and is also written to the headingField text field: var heading:Number = e.heading;compass.rotation = 360 - heading;headingField.text = String(heading); The remaining method is getMilesPerHour() and is used within geoUpdated() to convert the current speed from meters per second into miles per hour. Notice the use of the CONVERSION_FACTOR constant that was declared within your document class: private function getMilesPerHour(metersPerSecond:Number):uint { return metersPerSecond * CONVERSION_FACTOR;} Although the speed and heading obtained from the GPS unit will suffice for most applications, the accuracy can vary across devices. 
Your surroundings can also have an effect; moving through streets with tall buildings or under tree coverage can impair the readings. You can find more information regarding flash.sensors.Geolocation and flash.events.GeolocationEvent within Adobe Community Help. There's more... The following information provides some additional detail. Determining support Your current speed and heading can only be determined by devices that possess a GPS receiver. Although you can install this recipe's app on any iOS device, you won't receive valid readings from any model of iPod touch, the original iPhone, or Wi-Fi-only iPads. Instead the GeolocationEvent.speed property will return -1 and GeolocationEvent.heading will return NaN. If your application relies on the presence of GPS hardware, then it is possible to state this within the application descriptor file. Doing so will prevent users without the necessary hardware from downloading your app from the App Store. Simulating the GPS receiver During the development lifecycle it is not feasible to continually test your app in a live environment. Instead you will probably want to record live data from your device and re-use it during testing. There are various apps available that will log data from the sensors on your device. One such app is xSensor, which can be downloaded from iTunes or the App Store and is free. Its data sensor log is limited to 5KB but this restriction can be lifted by purchasing xSensor Pro. Preventing screen idle Many of this article's apps don't require you to touch the screen that often. Therefore you will be likely to experience the backlight dimming or the screen locking while testing them. This can be inconvenient and can be prevented by disabling screen locking. 
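The arithmetic used in this recipe's geoUpdated() handler — the meters-per-second conversion and the compass rotation — can be checked in isolation. A JavaScript sketch of the same calculations (the recipe itself is ActionScript 3):

```javascript
// The same arithmetic as the recipe's handler, extracted for clarity.
const CONVERSION_FACTOR = 2.237; // multiply m/s by this to get mph

function getMilesPerHour(metersPerSecond) {
  // The ActionScript version returns a uint, so truncate here as well.
  return Math.floor(metersPerSecond * CONVERSION_FACTOR);
}

function compassRotation(heading) {
  // Rotating the compass clip by (360 - heading) keeps its north marker
  // pointing at true north as the device turns.
  return 360 - heading;
}

const mph = getMilesPerHour(10);      // 10 m/s is roughly 22 mph
const rotation = compassRotation(90); // facing east -> clip rotated 270 degrees
```

Working through a couple of values like this makes it easy to sanity-check the display before testing on a moving device.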
Sencha Touch: Catering Form Related Needs

Packt
28 Dec 2011
14 min read
(For more resources on Sencha Touch, see here.) Most of the useful applications not only present the data, but also accept inputs from their users. When we think of having a way to accept inputs from the user, send them to the server for further processing, and allow the user to modify them, we think of forms and the form fields. If our application requires users to enter some information, then we go about using the HTML form fields, such as <input>, <select>, and so on, and wrap inside a <form> element. Sencha Touch uses these tags and provides convenient JavaScript classes to work with the form and its fields. It provides field classes such as Url, Toggle, Select, Text, and so on. Each of these classes provides properties to initialize the field, handle the events, and utility methods to manipulate the behavior and the values of the field. On the other side, the form takes care of the rendering of the fields and also handles the data submission. Each field can be created by using the JSON notation (JavaScript Object Notation—http://www.json.org) or by creating an instance of the class. For example, a text field can either be constructed by using the following JSON notation: { xtype: 'textfield', name: 'text', label: 'My Text' } Alternatively, we can use the following class constructor: var txtField = new Ext.form.Text({ name: 'text', label: 'My Text' }); The first approach relies on xtype, which is a type assigned to each of the Sencha Touch components. It is used as shorthand for the class. The basic difference between the two is that the xtype approach is more for the lazy initialization and rendering. The object is created only when it is required. In any application, we would use a combination of these two approaches. In this article, we will go through all the form fields and understand how to make use of them and learn about their specific behaviors. 
In addition, we will see how to create a form using one or more form fields and handle the form validation and submission. Getting your form ready with FormPanel This recipe shows how to create a basic form using Sencha Touch and implement some of the behaviors such as submitting the form data, handling errors during the submission, and so on. Getting ready Make sure that you have set up your development environment How to do it... Carry out the following steps: Create a ch02 folder in the same folder where we had created the ch01 folder. Create and open a new file named ch02_01.js and paste the following code into it: Ext.setup({ onReady: function() { var form; //form and related fields config var formBase = { //enable vertical scrolling in case the form exceeds the page height scroll: 'vertical', url: 'http://localhost/test.php', items: [{//add a fieldset xtype: 'fieldset', title: 'Personal Info', instructions: 'Please enter the information above.', //apply the common settings to all the child items of the fieldset defaults: { required: true, //required field labelAlign: 'left', labelWidth: '40%' }, items: [ {//add a text field xtype: 'textfield', name : 'name', label: 'Name', useClearIcon: true,//shows the clear icon in the field when user types autoCapitalize : false }, {//add a password field xtype: 'passwordfield', name : 'password', label: 'Password', useClearIcon: false }, { xtype: 'passwordfield', name : 'reenter', label: 'Re-enter Password', useClearIcon: true }, {//add an email field xtype: 'emailfield', name : 'email', label: 'Email', placeHolder: '[email protected]', useClearIcon: true }] } ], listeners : { //listener if the form is submitted, successfully submit : function(form, result){ console.log('success', Ext.toArray(arguments)); }, //listener if the form submission fails exception : function(form, result){ console.log('failure', Ext.toArray(arguments)); } }, //items docked to the bottom of the form dockedItems: [ { xtype: 'toolbar', dock: 
'bottom', items: [ { text: 'Reset', handler: function() { form.reset(); //reset the fields } }, { text: 'Save', ui: 'confirm', handler: function() { //submit the form data to the url form.submit(); } } ] } ] }; if (Ext.is.Phone) { formBase.fullscreen = true; } else { //if desktop Ext.apply(formBase, { autoRender: true, floating: true, modal: true, centered: true, hideOnMaskTap: false, height: 385, width: 480 }); } //create form panel form = new Ext.form.FormPanel(formBase); form.show(); //render the form to the body } });   Include the following line in index.html: <script type="text/javascript" charset="utf-8" src="ch02/ch02_01.js"></script> Deploy and access it from the browser. You will see the following screen: How it works... The code creates a form panel, with a field set inside it. The field set has four fields specified as part of its child items. xtype mentioned for each field instructs the Sencha Touch component manager which class to use to instantiate them. form = new Ext.form.FormPanel(formBase) creates the form and the other field components using the config defined as part of the formBase. form.show() renders the form to the body and that is how it will appear on the screen. url contains the URL where the form data will be posted upon submission. The form can be submitted in the following two ways: By hitting Go, on the virtual keyboard or Enter on a field that ends up generating the action event. By clicking on the Save button, which internally calls the submit() method on the form object. form.reset() resets the status of the form and its fields to the original state. Therefore, if you had entered the values in the fields and clicked on the Reset button, all the fields would be cleared. form.submit() posts the form data to the specified url. The data is posted as an Ajax request using the POST method. 
Use of useClearIcon on the field instructs Sencha Touch whether it should show the clear icon in the field when the user starts entering a value in it. On clicking on this icon, the value in the field is cleared. There's more... In the preceding code, we saw how to construct a form panel, add fields to it, and handle events. We will see what other non-trivial things we may have to do in the project and how we can achieve these using Sencha Touch. Standard submit This is the old and traditional way for form data posting to the server url. If your application need is to use the standard form submit, rather than Ajax, then you will have to set standardSubmit to true on the form panel. This is set to false, by default. The following code snippet shows the usage of this property: var formBase = { scroll: 'vertical', standardSubmit: true, ... After this property is set to true on the FormPanel, form.submit() will load the complete page specified in url. Do not submit on field action As we saw earlier, the form data is automatically posted to the url if the action event (when the Go or Enter key is hit) occurs. In many applications, this default feature may not be desirable. In order to disable this feature, you will have to set submitOnAction to false on the form panel. Post-submission handling Say we posted our data to the url. Now, either the call may fail or it may succeed. In order to handle these specific conditions and act accordingly, we will have to pass additional config options to the form's submit() method. The following code shows the enhanced version of the submit call: form.submit({ success: function(form, result) { Ext.Msg.alert("INFO", "Form submitted!"); }, failure: function(form, result) { Ext.Msg.alert("INFO", "Form submission failed!"); } }); If the Ajax call (to post form data) fails, then the failure callback function is called, and in the case of success, the success callback function is called. 
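The success/failure callback contract used by submit() can be sketched generically in plain JavaScript. The transport below is a hypothetical stand-in for the real Ajax post, not Sencha Touch's implementation:

```javascript
// Illustrative sketch of the callback contract used by form.submit().
// submitForm is a stand-in: it "succeeds" when no required value is empty.
function submitForm(values, callbacks) {
  const missing = Object.keys(values).filter(k => values[k] === "");
  const result = { success: missing.length === 0, missing: missing };
  if (result.success) {
    callbacks.success(values, result);
  } else {
    callbacks.failure(values, result);
  }
}

// Usage mirrors the recipe's submit({success, failure}) call.
let outcome;
submitForm({ name: "Ada", email: "" }, {
  success: function () { outcome = "Form submitted!"; },
  failure: function (form, result) {
    outcome = "Missing: " + result.missing.join(", ");
  }
});
// outcome is now "Missing: email"
```

Keeping the two outcomes in separate callbacks, as the framework does, avoids tangling the happy path with the error handling.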
This works only if the standardSubmit is set to false. Working with search In this and the subsequent recipes of the article, we will go over each of the form fields and understand how to work with them. This recipe describes the steps required to create and use a search form field. Getting ready Make sure that you have set up your development environment. Make sure that you have followed the Getting your form ready with FormPanel recipe. How to do it... Carry out the following steps: Copy ch02_01.js to ch02_02.js. Open a new file named ch02_02.js and replace the definition of formBase with the following code: var formBase = { items: [{ xtype: 'searchfield', name: 'search', label: 'Search' }] }; Include ch02_02.js in place of ch02_01.js in index.html. Deploy and access the application in the browser. You will see a form panel with a search field. How it works... The search field can be constructed by using the Ext.form.Search class instance or by using the xtype—searchfield. The search form field implements HTML5 <input> with type="search". However, the implementation is very limited. For example, the HTML5 search field allows us to associate a data list to it which it can use during the search, whereas this feature is not present in Sencha Touch. Similarly, the W3 search field spec defines a pattern attribute to allow us to specify a regular expression against which a User Agent is meant to check the value, which is not supported yet in Sencha Touch. For more detail, you may refer to the W3 search field (http://www.w3.org/TR/html-markup/input.search.html) and the source code of the Ext.form.Search class. There's more... Often, in the application, for the search fields we do not use a label. Rather, we would like to show a text, such as Search inside the field that will disappear when the focus is on the field. Let's see how we can achieve this. 
Using a placeholder Placeholders are supported by most of the form fields in Sencha Touch by using the property placeHolder. The placeholder text appears in the field as long as there is no value entered in it and the field does not have the focus. The following code snippet shows the typical usage of it: { xtype: 'searchfield', name: 'search', label: 'Search', placeHolder: 'Search...' } Putting custom validation in the e-mail field This recipe describes how to make use of the e-mail form field provided by Sencha Touch and how to validate the value entered into it to find out whether the entered e-mail passes the validation rule or not. Getting ready Make sure that you have set up your development environment. Make sure that you have followed the Getting your form ready with FormPanel recipe in this article. How to do it... Carry out the following steps: Copy ch02_01.js to ch02_03.js. Open a new file named ch02_03.js and replace the definition of formBase with the following code: var formBase = { items: [{ xtype: 'emailfield', name : 'email', label: 'Email', placeHolder: '[email protected]', useClearIcon: true, listeners: { blur: function(thisTxt, eventObj) { var val = thisTxt.getValue(); //validate using the pattern if (val.search("[a-z]+@[a-z]+[.][a-z]+") == -1) Ext.Msg.alert("Error", "Invalid e-mail address!!"); else Ext.Msg.alert("Info", "Valid e-mail address!!"); } } }] }; Include ch02_03.js in place of ch02_02.js in index.html. Deploy and access the application in the browser. How it works... The e-mail field can be constructed by using the Ext.form.Email class instance or by using the xtype: emailfield. The e-mail form field implements HTML5 <input> with type="email". However, as with the search field, the implementation is very limited. For example, the HTML5 e-mail field allows us to specify a regular expression pattern which can be used to validate the value entered in the field. 
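The blur-handler check can be tried outside the framework too. A JavaScript sketch with a slightly fuller pattern (illustrative only; this is a loose structural check, not an RFC-compliant validator):

```javascript
// Loose structural e-mail check, of the kind used in the recipe's blur
// handler. Real validation should still happen server-side.
function isValidEmail(value) {
  return /^[a-z0-9._-]+@[a-z0-9-]+(\.[a-z]{2,})+$/i.test(value);
}

const valid = isValidEmail("someone@example.com");  // true
const invalid = isValidEmail("not-an-email");       // false
```

A pattern like this catches obvious typos at the point of entry, while the authoritative check remains with the server that receives the form data.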
Working with dates using DatePicker This recipe describes how to make use of the date picker form field provided by Sencha Touch which allows the user to select a date. Getting ready Make sure that you have set up your development environment. Make sure that you have followed the Getting your form ready with FormPanel recipe in this article. How to do it... Carry out the following steps: Copy ch02_01.js to ch02_04.js. Open a new file named ch02_04.js and replace the definition of formBase with the following code: var formBase = { items: [{ xtype: 'datepickerfield', name: 'date', label: 'Date' }] }; Include ch02_04.js in place of ch02_03.js in index.html. Deploy and access the application in the browser. How it works... The date picker field can be constructed by using the Ext.form.DatePicker class instance or by using xtype: datepickerfield. The date picker form field implements HTML <select>. When the user tries to select an entry, it shows the date picker with the month, day, and year slots for selection. After selection, when the user clicks on the Done button, the field is set with the selected value. There's more... Additionally, there are other things that can be done such as setting the date to the current date, or any particular date, or changing the order of appearance of a month, day, and year. Let's see what it takes to accomplish this. Setting the default date to the current date In order to set the default value to the current date, the value property must be set to the current date. The following code shows how to do it: var formBase = { items: [{ xtype: 'datepickerfield', name: 'date', label: 'Date', value: new Date(), Setting the default date to a particular date The default date is 01/01/1970. Let's assume that you need to set the date to a different date, but not the current date. 
To do so, you will have to set the value using the year, month, and day properties, as follows: var formBase = { items: [{ xtype: 'datepickerfield', name: 'date', label: 'Date', value: {year: 2011, month: 6, day: 11}, ... Changing the slot order By default, the slot order is month, day, and year. You can change it by setting the slotOrder property of the picker property of date picker, as shown in the following code: var formBase = { items: [{ xtype: 'datepickerfield', name: 'date', label: 'Date', picker: {slotOrder: ['day', 'month', 'year']} }] }; Setting the picker date range By default, the date range shown by the picker is 1970 until the current year. For our application need, if we have to alter the year range, we can do so by setting the yearFrom and yearTo properties of the picker property of the date picker, as follows: var formBase = { items: [{ xtype: 'datepickerfield', name: 'date', label: 'Date', picker: {yearFrom: 2000, yearTo: 2010} }] }; Making a field hidden Often in an application, there would be a need to hide fields which are not needed in a particular context but are required and hence need to be shown in another. In this recipe, we will see how to make a field hidden and show it, conditionally. Getting ready Make sure that you have set up your development environment. Make sure that you have followed the Getting your form ready with FormPanel recipe in this article. How to do it... Carry out the following steps: Edit ch02_04.js and modify the code as follows by adding the hidden property: var formBase = { items: [{ xtype: 'datepickerfield', id: 'datefield-id', name: 'date', hidden: true, label: 'Date'}] }; Deploy and access the application in the browser. How it works... When a field is marked as hidden, Sencha Touch uses the DOM's hide method on the element to hide that particular field. There's more... Let's see how we can programmatically show/hide a field. 
Showing/Hiding a field at runtime Each component in Sencha Touch supports two methods: show and hide. The show method shows the element and hide hides the element. In order to call these methods, we will have to first find the reference to the component, which can be achieved by either using the object reference or by using the Ext.getCmp() method. Given a component ID, the getCmp method returns us the component. The following code snippet demonstrates how to show an element: var cmp = Ext.getCmp('datefield-id'); cmp.show(); To hide an element, we will have to call cmp.hide();
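The component-lookup idea behind Ext.getCmp() can be sketched as a simple id-to-instance registry. The names below are hypothetical and not Sencha internals:

```javascript
// Minimal sketch of an id-to-component registry -- the idea behind
// Ext.getCmp(). Components register themselves by id when created.
const components = {};

function getCmp(id) { return components[id]; }

function Component(id) {
  this.id = id;
  this.hidden = false;
  components[id] = this; // register on creation
}
Component.prototype.show = function () { this.hidden = false; };
Component.prototype.hide = function () { this.hidden = true; };

// Usage mirrors the recipe: look the field up by its id, then toggle it.
new Component("datefield-id");
const cmp = getCmp("datefield-id");
cmp.hide();
// cmp.hidden is now true
```

This is why every component you want to retrieve later needs a unique id: the lookup is nothing more than a keyed map from id to instance.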

Appcelerator Titanium: Creating Animations, Transformations, and Understanding Drag-and-drop

Packt
22 Dec 2011
10 min read
(For more resources related to this subject, see here.) Animating a View using the "animate" method Any Window, View, or Component in Titanium can be animated using the animate method. This allows you to quickly and confidently create animated objects that can give your applications the "wow" factor. Additionally, you can use animations as a way of holding information or elements off screen until they are actually required. A good example of this would be if you had three different TableViews but only wanted one of those views visible at any one time. Using animations, you could slide those tables in and out of the screen space whenever it suited you, without the complication of creating additional Windows. In the following recipe, we will create the basic structure of our application by laying out a number of different components and then get down to animating four different ImageViews. These will each contain a different image to use as our "Funny Face" character. Complete source code for this recipe can be found in the /Chapter 7/Recipe 1 folder. Getting ready To prepare for this recipe, open up Titanium Studio and log in if you have not already done so. If you need to register a new account, you can do so for free directly from within the application. Once you are logged in, click on New Project, and the details window for creating a new project will appear. Enter in FunnyFaces as the name of the app, and fill in the rest of the details with your own information. Pay attention to the app identifier, which is written normally in reverse domain notation (that is, com.packtpub.funnyfaces). This identifier cannot be easily changed after the project is created and you will need to match it exactly when creating provisioning profiles for distributing your apps later on. The first thing to do is copy all of the required images into an images folder under your project's Resources folder. 
Then, open the app.js file in your IDE and replace its contents with the following code. This code will form the basis of our FunnyFaces application layout. // this sets the background color of the master UIView Titanium.UI.setBackgroundColor('#fff');////create root window//var win1 = Titanium.UI.createWindow({ title:'Funny Faces', backgroundColor:'#fff'});//this will determine whether we load the 4 funny face//images or whether one is selected alreadyvar imageSelected = false;//the 4 image face objects, yet to be instantiatedvar image1;var image2;var image3;var image4;var imageViewMe = Titanium.UI.createImageView({ image: 'images/me.png', width: 320, height: 480, left: 0, top: 0, zIndex: 0, visible: false});win1.add(imageViewMe);var imageViewFace = Titanium.UI.createImageView({ image: 'images/choose.png', width: 320, height: 480, zIndex: 1});imageViewFace.addEventListener('click', function(e){ if(imageSelected == false){ //transform our 4 image views onto screen so //the user can choose one! }});win1.add(imageViewFace);//this footer will hold our save button and zoom slider objectsvar footer = Titanium.UI.createView({ height: 40, backgroundColor: '#000', bottom: 0, left: 0, zIndex: 2});var btnSave = Titanium.UI.createButton({ title: 'Save Photo', width: 100, left: 10, height: 34, top: 3});footer.add(btnSave);var zoomSlider = Titanium.UI.createSlider({ left: 125, top: 8, height: 30, width: 180});footer.add(zoomSlider);win1.add(footer);//open root windowwin1.open(); Build and run your application in the emulator for the first time, and you should end up with a screen that looks similar to the following example: 
Inside the declaration of the imageViewFace object's event handler, type in the following code: imageViewFace.addEventListener('click', function(e){ if(imageSelected == false){ //transform our 4 image views onto screen so //the user can choose one! image1 = Titanium.UI.createImageView({ backgroundImage: 'images/clown.png', left: -160, top: -140, width: 160, height: 220, zIndex: 2 }); image1.addEventListener('click', setChosenImage); win1.add(image1); image2 = Titanium.UI.createImageView({ backgroundImage: 'images/policewoman.png', left: 321, top: -140, width: 160, height: 220, zIndex: 2 }); image2.addEventListener('click', setChosenImage); win1.add(image2); image3 = Titanium.UI.createImageView({ backgroundImage: 'images/vampire.png', left: -160, bottom: -220, width: 160, height: 220, zIndex: 2 }); image3.addEventListener('click', setChosenImage); win1.add(image3); image4 = Titanium.UI.createImageView({ backgroundImage: 'images/monk.png', left: 321, bottom: -220, width: 160, height: 220, zIndex: 2 }); image4.addEventListener('click', setChosenImage); win1.add(image4); image1.animate({ left: 0, top: 0, duration: 500, curve: Titanium.UI.ANIMATION_CURVE_EASE_IN }); image2.animate({ left: 160, top: 0, duration: 500, curve: Titanium.UI.ANIMATION_CURVE_EASE_OUT }); image3.animate({ left: 0, bottom: 20, duration: 500, curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT }); image4.animate({ left: 160, bottom: 20, duration: 500, curve: Titanium.UI.ANIMATION_CURVE_LINEAR }); }}); Now launch the emulator from Titanium Studio and you should see the initial layout with our "Tap To Choose An Image" view visible. 
Tapping the choose ImageView should now animate our four funny face options onto the screen, as seen in the following screenshot:

How it works…

The first block of code creates the basic layout for our application, which consists of a couple of ImageViews, a footer view holding our "save" button, and the Slider control, which we'll use later on to increase the zoom scale of our own photograph. Our second block of code is where it gets interesting. Here, we're doing a simple check that the user hasn't already selected an image using the imageSelected Boolean, before getting into our animated ImageViews, named image1, image2, image3, and image4. The concept behind the animation of these four ImageViews is pretty simple. All we're essentially doing is changing the properties of our control over a period of time, defined by us in milliseconds. Here, we are changing the top and left properties of all of our images over a period of half a second, so that we get an effect of them sliding into place on our screen. You can further enhance these animations by adding more properties to animate. For example, if we wanted to change the opacity of image1 from 50 percent to 100 percent as it slides into place, we could change the code to look similar to the following:

```javascript
image1 = Titanium.UI.createImageView({
  backgroundImage: 'images/clown.png',
  left: -160,
  top: -140,
  width: 160,
  height: 220,
  zIndex: 2,
  opacity: 0.5
});
image1.addEventListener('click', setChosenImage);
win1.add(image1);

image1.animate({
  left: 0,
  top: 0,
  duration: 500,
  curve: Titanium.UI.ANIMATION_CURVE_EASE_IN,
  opacity: 1.0
});
```

Finally, the curve property of animate() allows you to adjust the easing of your animated component. Here, we used all four animation-curve constants on each of our ImageViews.
They are:

- Titanium.UI.ANIMATION_CURVE_EASE_IN: accelerates the animation slowly
- Titanium.UI.ANIMATION_CURVE_EASE_OUT: decelerates the animation slowly
- Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT: accelerates and decelerates the animation slowly
- Titanium.UI.ANIMATION_CURVE_LINEAR: keeps the animation speed constant throughout the animation cycle

Animating a View using 2D matrix and 3D matrix transforms

You may have noticed that each of our ImageViews in the previous recipe had a click event listener attached to it, calling an event handler named setChosenImage. This event handler is going to handle setting our chosen "funny face" image to the imageViewFace control. It will then animate all four "funny face" ImageView objects off our screen area using a number of different 2D and 3D matrix transforms. Complete source code for this recipe can be found in the /Chapter 7/Recipe 2 folder.

How to do it…

Replace the existing setChosenImage function, which currently stands empty, with the following source code:

```javascript
// this function sets the chosen image and removes the 4
// funny faces from the screen
function setChosenImage(e){
  imageViewFace.image = e.source.backgroundImage;
  imageViewMe.visible = true;

  // create the first transform
  var transform1 = Titanium.UI.create2DMatrix();
  transform1 = transform1.rotate(-180);
  var animation1 = Titanium.UI.createAnimation({
    transform: transform1,
    duration: 500,
    curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
  });
  image1.animate(animation1);
  animation1.addEventListener('complete', function(e){
    // remove our image selection from win1
    win1.remove(image1);
  });

  // create the second transform
  var transform2 = Titanium.UI.create2DMatrix();
  transform2 = transform2.scale(0);
  var animation2 = Titanium.UI.createAnimation({
    transform: transform2,
    duration: 500,
    curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
  });
  image2.animate(animation2);
  animation2.addEventListener('complete', function(e){
    // remove our image selection from win1
    win1.remove(image2);
  });

  // create the third transform
  var transform3 = Titanium.UI.create2DMatrix();
  transform3 = transform3.rotate(180);
  transform3 = transform3.scale(0);
  var animation3 = Titanium.UI.createAnimation({
    transform: transform3,
    duration: 1000,
    curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
  });
  image3.animate(animation3);
  animation3.addEventListener('complete', function(e){
    // remove our image selection from win1
    win1.remove(image3);
  });

  // create the fourth and final transform
  var transform4 = Titanium.UI.create3DMatrix();
  transform4 = transform4.rotate(200, 0, 1, 1);
  transform4 = transform4.scale(2);
  transform4 = transform4.translate(20, 50, 170);
  // the m34 property controls the perspective of the 3D view
  transform4.m34 = 1.0 / -3000; // m34 is the position at [3,4] in the matrix
  var animation4 = Titanium.UI.createAnimation({
    transform: transform4,
    duration: 1500,
    curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
  });
  image4.animate(animation4);
  animation4.addEventListener('complete', function(e){
    // remove our image selection from win1
    win1.remove(image4);
  });

  // change the status of the imageSelected variable
  imageSelected = true;
}
```

How it works…

Again, we are creating animations for each of the four ImageViews, but this time in a slightly different way. Instead of using the built-in animate method, we are creating a separate animation object for each ImageView, before calling the ImageView's animate method and passing this animation object to it. This method of creating animations allows you to have finer control over them, including the use of transforms. Transforms have a couple of shortcuts to help you perform some of the most common animation types quickly and easily. The image1 and image2 transforms, as shown in the previous code, use the rotate and scale methods respectively. Scale and rotate in this case are 2D matrix transforms, meaning they only transform the object in two-dimensional space along its X-axis and Y-axis.
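Chaining transforms this way is equivalent to multiplying transformation matrices together, so the chained rotate and scale collapse into a single composite matrix before the animation runs. The following plain-JavaScript sketch of the underlying 2D math is illustrative only; in Titanium, the object returned by create2DMatrix does this work for you:

```javascript
// 2x2 rotation and uniform-scale matrices, plus matrix multiplication.
function rotationMatrix(degrees) {
  var r = degrees * Math.PI / 180;
  return [[Math.cos(r), -Math.sin(r)],
          [Math.sin(r),  Math.cos(r)]];
}

function scaleMatrix(factor) {
  return [[factor, 0],
          [0, factor]];
}

function multiply(a, b) {
  return [
    [a[0][0] * b[0][0] + a[0][1] * b[1][0],
     a[0][0] * b[0][1] + a[0][1] * b[1][1]],
    [a[1][0] * b[0][0] + a[1][1] * b[1][0],
     a[1][0] * b[0][1] + a[1][1] * b[1][1]]
  ];
}

// Chaining rotate(180) and scale(2) yields one combined matrix:
// every point is rotated half a turn and doubled in size.
var combined = multiply(scaleMatrix(2), rotationMatrix(180));
console.log(combined);

// Chaining scale(0), as image3 does, collapses every coefficient to zero,
// which is why the view appears to shrink away to nothing.
var collapsed = multiply(scaleMatrix(0), rotationMatrix(180));
console.log(collapsed);
```

Because the chained calls reduce to a single matrix, the order of the calls matters in general, although for a rotation combined with a uniform scale, as here, either order produces the same result.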
Each of these transformation types takes a single numeric parameter: for scale, a percentage from 0 to 100, and for rotate, a number of degrees from 0 to 360. Another advantage of using transforms for your animations is that you can easily chain them together to perform a more complex animation style. In the previous code, you can see that both a scale and a rotate transform are applied to the image3 component. When you run the application in the emulator or on your device, you should notice that both of these transform animations are applied to the image3 control! Finally, the image4 control also has a transform animation applied to it, but this time we are using a 3D matrix transform instead of the 2D matrix transforms used for the other three ImageViews. These work the same way as regular 2D matrix transforms, except that you can also animate your control in 3D space, along the Z-axis. It's important to note that animations have two events: start and complete. These allow you to perform actions based on the beginning or ending of your animation's life cycle. As an example, you could chain animations together by using the complete event to add a new animation or transform to an object after the previous animation has finished. In our previous example, we are using this complete event to remove our ImageView from the Window once its animation has finished.
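The complete-event chaining idea can be sketched with plain JavaScript callbacks. Here, runAnimation is a hypothetical stand-in for starting a Titanium animation and firing its complete event; it is not a real Titanium API:

```javascript
// Each "animation" here is just a named step that invokes its
// completion callback when done, mimicking Titanium's 'complete' event.
function runAnimation(name, log, onComplete) {
  log.push(name + ' started');
  // ...property changes would happen here over the duration...
  log.push(name + ' complete');
  if (onComplete) { onComplete(); }
}

var log = [];

// Chain: slide-in runs first; fade-out starts only once it completes.
runAnimation('slide-in', log, function () {
  runAnimation('fade-out', log, null);
});

console.log(log.join(', '));
// slide-in started, slide-in complete, fade-out started, fade-out complete
```

In a real application, the body of the completion callback would call animate with the next animation object, rather than pushing log entries.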
Integrating iOS Features Using MonoTouch

Packt
13 Dec 2011
10 min read
(For more resources on this topic, see here.)

Mobile devices offer a handful of features to the user. Creating an application that interacts with those features to provide a complete experience to users can surely be considered an advantage. In this article, we will discuss some of the most common features of iOS and how to integrate some or all of their functionality into our applications. We will see how to offer the user the ability to make telephone calls and send SMS and e-mails, either by using the native platform applications, or by integrating the native user interface in our projects. Also, we will discuss the following components:

- MFMessageComposeViewController: the controller suitable for sending text (SMS) messages
- MFMailComposeViewController: the controller for sending e-mails with or without attachments
- ABAddressBook: the class that provides us access to the address book database
- ABPersonViewController: the controller that displays and/or edits contact information from the address book
- EKEventStore: the class that is responsible for managing calendar events

Furthermore, we will learn how to read and save contact information, how to display contact details, and how to interact with the device calendar. Note that some of the examples in this article will require a device. For example, the simulator does not contain the messaging application. To deploy to a device, you will need to enroll as an iOS Developer through Apple's Developer Portal and obtain a commercial license of MonoTouch.

Starting phone calls

In this recipe, we will learn how to invoke the native phone application to allow the user to place a call.

Getting ready

Create a new project in MonoDevelop, and name it PhoneCallApp. The native phone application is not available on the simulator. It is only available on an iPhone device.

How to do it...

Add a button on the view of MainController, and override the ViewDidLoad method.
Implement it with the following code. Replace the number with a real phone number, if you actually want the call to be placed:

```csharp
this.buttonCall.TouchUpInside += delegate {
  NSUrl url = new NSUrl("tel:+123456789012");
  if (UIApplication.SharedApplication.CanOpenUrl(url)) {
    UIApplication.SharedApplication.OpenUrl(url);
  } else {
    Console.WriteLine("Cannot open url: {0}", url.AbsoluteString);
  }
};
```

Compile and run the application on the device. Tap the Call! button to start the call. The following screenshot shows the phone application placing a call:

How it works...

Through the UIApplication.SharedApplication static property, we have access to the application's UIApplication object. We can use its OpenUrl method, which accepts an NSUrl variable, to initiate a call:

```csharp
UIApplication.SharedApplication.OpenUrl(url);
```

Since not all iOS devices support the native phone application, it is useful to check for availability first:

```csharp
if (UIApplication.SharedApplication.CanOpenUrl(url))
```

When the OpenUrl method is called, the native phone application will be executed, and it will start calling the number immediately. Note that the tel: prefix is needed to initiate the call.

There's more...

MonoTouch also supports the CoreTelephony framework, through the MonoTouch.CoreTelephony namespace. This is a simple framework that provides information on call state, connection, carrier info, and so on. Note that when a call starts, the native phone application enters into the foreground, causing the application to be suspended. The following is a simple usage of the CoreTelephony framework:

```csharp
CTCallCenter callCenter = new CTCallCenter();
callCenter.CallEventHandler = delegate(CTCall call) {
  Console.WriteLine(call.CallState);
};
```

Note that the handler is assigned with an equals sign (=) instead of the common plus-equals (+=) combination. This is because CallEventHandler is a property and not an event. When the application enters into the background, events are not distributed to it.
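The single-handler semantics of a property assignment, as opposed to the subscriber list behind a C# event's += operator, can be illustrated in plain JavaScript (the callCenter object here is a mock, not the real CTCallCenter):

```javascript
var calls = [];

// Property-style handler (like CallEventHandler): plain assignment,
// so a second assignment silently replaces the first handler.
var callCenter = { callEventHandler: null };
callCenter.callEventHandler = function () { calls.push('first'); };
callCenter.callEventHandler = function () { calls.push('second'); };
callCenter.callEventHandler();

// Event-style subscription (like += on a C# event): each subscriber
// is appended to a list, and all of them run when the event fires.
var subscribers = [];
subscribers.push(function () { calls.push('a'); });
subscribers.push(function () { calls.push('b'); });
subscribers.forEach(function (handler) { handler(); });

console.log(calls); // [ 'second', 'a', 'b' ]
```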
Only the last occurred event will be distributed when the application returns to the foreground.

More info on OpenUrl

The OpenUrl method can be used to open various native and non-native applications. For example, to open a web page in Safari, just create an NSUrl object with the following link:

```csharp
NSUrl url = new NSUrl("http://www.packtpub.com");
```

See also

In this article:
- Sending text messages and e-mails

Sending text messages and e-mails

In this recipe, we will learn how to invoke the native mail and messaging applications within our own application.

Getting ready

Create a new project in MonoDevelop, and name it SendTextApp.

How to do it...

Add two buttons on the main view of MainController. Override the ViewDidLoad method of the MainController class, and implement it with the following code:

```csharp
this.buttonSendText.TouchUpInside += delegate {
  NSUrl textUrl = new NSUrl("sms:");
  if (UIApplication.SharedApplication.CanOpenUrl(textUrl)) {
    UIApplication.SharedApplication.OpenUrl(textUrl);
  } else {
    Console.WriteLine("Cannot send text message!");
  }
};

this.buttonSendEmail.TouchUpInside += delegate {
  NSUrl emailUrl = new NSUrl("mailto:");
  if (UIApplication.SharedApplication.CanOpenUrl(emailUrl)) {
    UIApplication.SharedApplication.OpenUrl(emailUrl);
  } else {
    Console.WriteLine("Cannot send e-mail message!");
  }
};
```

Compile and run the application on the device. Tap on one of the buttons to open the corresponding application.

How it works...

Once again, using the OpenUrl method, we can send text or e-mail messages. In this example code, just using the sms: prefix will open the native text messaging application. Adding a cell phone number after the sms: prefix will open the native messaging application with that number as the recipient:

```csharp
UIApplication.SharedApplication.OpenUrl(new NSUrl("sms:+123456789012"));
```

Apart from the recipient number, there is no other data that can be set before the native text message application is displayed.
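These URL schemes are plain strings, so they can be assembled programmatically. The following plain-JavaScript sketch shows the idea, including the percent-encoding that parameterized schemes such as mailto: require; buildSchemeUrl is an illustrative helper (and the address a placeholder), not part of any SDK:

```javascript
// Build a scheme URL such as tel:, sms:, or mailto:, optionally
// appending percent-encoded query parameters (subject, body, and so on).
function buildSchemeUrl(scheme, target, params) {
  var url = scheme + ':' + (target || '');
  if (params) {
    var parts = [];
    for (var key in params) {
      parts.push(key + '=' + encodeURIComponent(params[key]));
    }
    url += '?' + parts.join('&');
  }
  return url;
}

console.log(buildSchemeUrl('sms', '+123456789012'));
// sms:+123456789012

console.log(buildSchemeUrl('mailto', 'user@example.com', {
  subject: 'Email with MonoTouch!',
  body: 'This is the message body!'
}));
// mailto:user@example.com?subject=Email%20with%20MonoTouch!&body=This%20is%20the%20message%20body!
```

Encoding the parameter values matters because characters such as spaces are not valid in a URL; encodeURIComponent turns each space into %20.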
For opening the native e-mail application, the process is similar. Passing the mailto: prefix opens the edit mail controller:

```csharp
UIApplication.SharedApplication.OpenUrl(new NSUrl("mailto:"));
```

The mailto: URL scheme supports various parameters for customizing an e-mail message. These parameters allow us to enter the recipient address, subject, and message:

```csharp
UIApplication.SharedApplication.OpenUrl(new NSUrl("mailto:[email protected]?subject=Email%20with%20MonoTouch!&body=This%20is%20the%20message%20body!"));
```

There's more...

Although iOS provides access to opening the native messaging applications, and, in the case of e-mails, allows message content to be pre-defined, this is where control from inside the application stops. There is no way of actually sending the message through code. It is the user who will decide whether to send the message or not.

More info on opening external applications

The OpenUrl method provides an interface for opening the native messaging applications. Opening external applications has one drawback: the application that calls the OpenUrl method transitions to the background. Up to iOS version 3.*, this was the only way of providing messaging through an application. Since iOS version 4.0, Apple has provided the messaging controllers in the SDK. The following recipes discuss their usage.

See also

In this article:
- Starting phone calls
- Using text messaging in our application

Using text messaging in our application

In this recipe, we will learn how to provide text messaging functionality within our application using the native messaging user interface.

Getting ready

Create a new project in MonoDevelop, and name it TextMessageApp.

How to do it...

Add a button on the view of MainController.
Enter the following using directive in the MainController.cs file:

```csharp
using MonoTouch.MessageUI;
```

Implement the ViewDidLoad method with the following code, changing the recipient number and/or the message body at your discretion:

```csharp
private MFMessageComposeViewController messageController;

public override void ViewDidLoad ()
{
  base.ViewDidLoad ();
  this.buttonSendMessage.TouchUpInside += delegate {
    if (MFMessageComposeViewController.CanSendText) {
      this.messageController = new MFMessageComposeViewController();
      this.messageController.Recipients = new string[] { "+123456789012" };
      this.messageController.Body = "Text from MonoTouch";
      this.messageController.MessageComposeDelegate = new MessageComposerDelegate();
      this.PresentModalViewController(this.messageController, true);
    } else {
      Console.WriteLine("Cannot send text message!");
    }
  };
}
```

Add the following nested class:

```csharp
private class MessageComposerDelegate : MFMessageComposeViewControllerDelegate
{
  public override void Finished (MFMessageComposeViewController controller, MessageComposeResult result)
  {
    switch (result) {
      case MessageComposeResult.Sent:
        Console.WriteLine("Message sent!");
        break;
      case MessageComposeResult.Cancelled:
        Console.WriteLine("Message cancelled!");
        break;
      default:
        Console.WriteLine("Message sending failed!");
        break;
    }
    controller.DismissModalViewControllerAnimated(true);
  }
}
```

Compile and run the application on the device. Tap the Send message button to open the message controller. Tap the Send button to send the message, or the Cancel button to return to the application.

How it works...

The MonoTouch.MessageUI namespace contains the necessary UI elements that allow us to implement messaging in an iOS application. For text messaging (SMS), we need the MFMessageComposeViewController class. Only the iPhone is capable of sending text messages out of the box.
With iOS 5, both the iPod and the iPad can send text messages, but the user might not have enabled this feature on the device. For this reason, checking for availability is the best practice. The MFMessageComposeViewController class contains a static method, named CanSendText, which returns a boolean value indicating whether we can use this functionality. The important thing in this case is that we should check whether sending text messages is available prior to initializing the controller. This is because when you try to initialize the controller on a device that does not support text messaging, or on the simulator, you will get the following message on the screen:

To determine when the user has taken action in the message UI, we implement a Delegate object and override the Finished method:

```csharp
private class MessageComposerDelegate : MFMessageComposeViewControllerDelegate
```

Another option, provided by MonoTouch, is to subscribe to the Finished event of the MFMessageComposeViewController class. Inside the Finished method, we can provide functionality according to the MessageComposeResult parameter. Its value can be one of the following three:

- Sent: indicates that the message was sent successfully
- Cancelled: indicates that the user has tapped the Cancel button, and the message will not be sent
- Failed: indicates that message sending failed

The last thing to do is to dismiss the message controller, which is done as follows:

```csharp
controller.DismissModalViewControllerAnimated(true);
```

After initializing the controller, we can set the recipients and message body through the appropriate properties:

```csharp
this.messageController.Recipients = new string[] { "+123456789012" };
this.messageController.Body = "Text from MonoTouch";
```

The Recipients property accepts a string array that allows for multiple recipient numbers.
You may have noticed that the Delegate object for the message controller is set to its MessageComposeDelegate property, instead of the common Delegate. This is because the MFMessageComposeViewController class directly inherits from the UINavigationController class, so the Delegate property accepts values of the type UINavigationControllerDelegate.

There's more...

The fact that the SDK provides the user interface to send text messages does not mean that it is customizable. Just like invoking the native messaging application, it is the user who will decide whether to send the message or discard it. In fact, after the controller is presented on the screen, any attempt to change the actual object or any of its properties will simply fail. Furthermore, the user can change or delete both the recipient and the message body. The real benefit, though, is that the messaging user interface is displayed within our application, instead of running separately.

SMS only

The MFMessageComposeViewController can only be used for sending Short Message Service (SMS) messages, not Multimedia Messaging Service (MMS) messages.

iPhone Applications Tune-Up: Design for Performance

Packt
11 Oct 2011
10 min read
(For more resources on iPhone, see here.)

The design phase of development is typically where we take into account any element of an application that may have a significant impact on the overall architecture of the final product. Project structuring, required functions, preferred features, hardware specifications, interoperability, and logical limitations are all factors that should be considered within this phase. Elements not regularly included during the design phase are visuals, color schemes, intricate feature details, and other interchangeable aspects of the final product.

When designing with performance in mind, you must take into account the desired characteristics and levels of performance you are looking to achieve in your application. Knowing precisely where your application's performance needs lie and focusing greater attention on those areas is the basic premise of the performance-tuning concept. Identifying the areas where performance tuning is necessary may, in many circumstances, be the most difficult part. Obvious areas like memory, database, and network communications may stand out and be somewhat simple to diagnose; however, less common user interface or architectural issues may require profiling and even user feedback for identification. For instance, a database-laden application would be expected to be as optimized as possible for efficient querying, while an application tailored towards video recording and playback may not necessarily require a focus on database efficiency. Similarly, a project which may end up with as little as a few thousand lines of source code may not require a great deal of project structuring and framework planning, while a much larger project will need more time dedicated to these areas.

Overloading your application and testing for weaknesses by pushing it beyond its capabilities can prove to be extremely valuable.
As an example, databases and table views can be loaded with overly large datasets to identify missing keys or object misuse. The design phase may help you identify potential bottlenecks, giving you an opportunity to alter the layout and design of your project before any development has taken place and it becomes too cumbersome to resolve in the midst of coding. Unavoidable bottlenecks can be highlighted as areas of your application where you may want to spend more time squeezing out efficiency. Bottlenecks that are identified early stand a good chance of being resolved much more easily than those left until after an application project is secured and in motion.

Preparing the project

To take full advantage of Xcode means to understand in depth the philosophy behind the Xcode user interface. Becoming proficient with Xcode will have a great impact on your effectiveness as a developer. Like any tool, knowing its capabilities as well as its limitations allows you to make smarter decisions, quicker. A car is not designed from the interior to the exterior or from the roof to the tires; it is designed from the core outward. Any good vehicle gets its start from a well-engineered, tested, and proven frame. The frame is the single key component to which all other components will be bolted and attached. A poor frame design will lead to various structural issues, which in turn lead to more granular problems as these components get further away from the frame. An application project is quite similar: without a solid frame to build an application upon, the quality of the final product will surely be affected. Source code, files, and other resources become cluttered, which has the potential to create similarly damaging granular issues later on in the development lifecycle.
Just as one single automotive frame is not the answer for every vehicle on the road, developers are free to organize a project in the way that is most beneficial for the application as well as the workflow and preference of the developer. Although refactoring has come a long way and organizational project changes can be made during the development phase, it is highly recommended that project decisions be made early on to limit problems and keep productivity as high as possible. A large portion of project management, as far as iOS applications are concerned, is handled by and through Xcode, Apple's standard integrated development environment. Xcode is an extremely powerful and feature-rich integrated development environment with dozens of configuration options that directly affect an individual project. Xcode is not limited to iOS development and is quite capable of creating virtually any type of application, including applications for OS X, command-line utilities, libraries, frameworks, plugins, kernel extensions, and more. Xcode is regularly used as a development environment for various compiled languages as well as nearly all mainstream scripting languages. For those of you who keep regular tabs on Apple and Xcode, you are more than likely well aware of the release of Xcode 4 and may have actually followed it throughout the beta process as well. Xcode 4 is a complete rewrite of the popular development environment, making needed changes to a tool that was begging for upgrades. Xcode 4 follows the paradigm of single-window applications, in which all development and testing is performed within the single Xcode 4 interface. Most notable is the integration of Interface Builder into the core Xcode 4 interface, which brings all of the functionality of these previously separate tools together, integrating them completely.
Xcode's abilities far surpass the needs of iOS application development, and it is again very important to understand the development environment in great detail in order to maximize its benefits. One particularly useful project configuration option is the ability to treat compiler warnings as errors. Warnings are the compiler's way of telling the developer that something is happening that it may not understand, or that is not severe enough to prevent an application from running but is still noteworthy. Good programming practice suggests that every developer should strive to produce warning-free code. Warning-free code is simply healthy code, and the practice of resolving warnings as early as possible is a habit that will ultimately help in producing code that performs well. Within the Build Settings for a specific target, we can enable the Treat Warnings as Errors option to nudge us in the proper direction for maintaining healthy code. Although this feature can have a slight impact on development and testing time, it comes highly recommended and should be considered by developers interested in high-quality and well-performing code. In addition to helping create higher-quality code, it's a forced education that may be priceless for career programmers and weekend code warriors alike. It is shown in the following screenshot:

Project organization

Every feature, function, and activity that is performed within Xcode revolves around the project. Much like any other project concept we have been exposed to, Xcode uses projects to organize files, resources, and properties for the ultimate purpose of creating applications and products. For most intents and purposes, the default project settings of Xcode will be sufficient for the average developer to create a multitude of applications with relatively little issue. However, we are interested not in achieving averages but in tuning, optimizing, and grabbing every bit of performance possible.
We're also interested in streamlining the development process as much as we can. This is precisely why a better-than-average understanding of the development environment we will be working in is critical. Obviously, the majority of an application project is going to be made up of its classes, libraries, and other source-code-specific components. Organization of source code is a core principle for any project that is more than a handful of classes and libraries. Once a project begins to mature into dozens or hundreds of files, the importance of well-organized code becomes more apparent. Inevitably, without some type of organizational form, source code and general project resources become unruly and difficult to find. We've all experienced giant monolithic source code and project structures with resources wildly strewn about without order. Personally, I find this level of chaos revolting and believe order and organization to be a few of the many characteristics of quality.

Project structure

Xcode's project source code structure is an open canvas, one that doesn't force a developer to use any particular method for organizing code. Unlike various programming environments, Xcode provides the freedom for a developer to build in virtually any way they like. While this level of freedom allows developers to use the project structure that best fits their style and experience, it leaves plenty of room for mess and confusion, if not entirely setting them up for failure. The solution to this problem is to have a well-organized and thought-out plan of action for how a project and its resources will be laid out. Remember that no single project structure will work, nor should it, for every proposed project; however, knowing what options are available and the positive and negative effects they might have is quite important. To understand the basic principles of how to organize an Xcode project, we must first understand how a default Xcode project is structured.
Following is a screenshot of a new default Xcode project, in which the structure in the left panel appears to be well organized: Contrast the logical organization of the Xcode interface with the project's underlying file structure, and the grouping principle becomes clearer. Xcode stores a logical reference to the project's underlying file structure and uses groups to help developers visualize order within the development environment. In other words, what you see within Xcode is not what is actually happening on disk. The structure within Xcode is comprised of references to the on-disk files and directories. This additional layer of abstraction allows developers to group or relocate a project's resources within Xcode for easier management without affecting the actual disk structure of the project, as shown in the following screenshot: At first glance, the underlying structure looks rather clean and simplistic, but imagine this directory in a few days', weeks', or even months' time, with dozens more classes and resources. Now, one might argue that as long as the logical representation of the project is clear and concise, the underlying file architecture is unimportant. While this might be true for smaller and less complicated application projects, as a project grows in size there are many factors to consider other than how data is represented within a development environment. Consider the impact that a flat storage architecture might have throughout the life of an Xcode project. The free naming of classes and other resources may be significantly limited, as all files are stored within the same base folder.
Additionally, browsing source code within a source code repository such as GitHub or Google Code may become difficult and tedious. Choosing exactly how a project is laid out and how its components will be organized is akin to selecting the right vehicle frame on which our project will be based.
iPhone JavaScript: Web 2.0 Integration

Packt
04 Oct 2011
7 min read
  (For more resources on iPhone JavaScript, see here.)

Introduction

Mashup applications allow us to exchange data with other web applications or services. Web 2.0 applications provide this feature through different mechanisms. Currently, some of the most popular websites, such as YouTube, Flickr, and Twitter, provide a way of exchanging data through their APIs. From the point of view of user interfaces, mashups allow us to build rich interfaces for our application. The first recipe in this article covers embedding a standard RSS feed; later we'll delve into YouTube, Facebook, Twitter, and Flickr and build some interesting mashup web applications for the iPhone.

Embedding an RSS feed

Our goal for this recipe is to read an RSS feed and present the information it provides in our application. In practice, we're going to use a feed offered by The New York Times newspaper, where each item provides a summary and a link to the original web page where the information resides. You could also choose another RSS feed for testing this recipe. The code for this recipe can be found at code/ch10/rss.html in the code bundle provided on the Packtpub site.

Getting ready

Make sure iWebKit is installed on your computer before continuing.

How to do it...

As you learned in the previous recipes, you need to create an XHTML file with the required headers for loading the files provided by iWebKit:

<link href="../iwebkit/css/style.css" rel="stylesheet" media="screen" type="text/css" />
<script src="../iwebkit/javascript/functions.js" type="text/javascript"></script>

The second step is to build our simple user interface, containing only a top bar and an unordered list with one item.
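Later in this recipe, a third-party service (RSSxpressLite) will do the feed parsing for us. To make clear what that service handles, here is a minimal, self-contained sketch in Python of parsing an RSS 2.0 feed by hand; the feed content and helper names are invented for illustration and are not part of the recipe:

```python
import xml.etree.ElementTree as ET

# Invented sample feed following the standard RSS 2.0 layout.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Headlines</title>
    <item>
      <title>First story</title>
      <link>http://example.com/first</link>
      <description>A short summary.</description>
    </item>
    <item>
      <title>Second story</title>
      <link>http://example.com/second</link>
      <description>Another summary.</description>
    </item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    """Return a list of {title, link, description} dicts, one per <item>."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title"),
            "link": item.findtext("link"),
            "description": item.findtext("description"),
        })
    return items

def render_list(items):
    """Render the items as the kind of linked list the recipe displays."""
    rows = ['<li><a href="%s">%s</a> - %s</li>' % (i["link"], i["title"], i["description"])
            for i in items]
    return "<ul>\n%s\n</ul>" % "\n".join(rows)

items = parse_feed(SAMPLE_FEED)
print(render_list(items))
```

If the third-party service were ever unavailable, server-side parsing along these lines would be the fallback.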
The top bar is added with the following lines:

<div id="topbar">
  <div id="title">RSS feed</div>
</div>

To add the unordered list, use the following code:

<div id="content">
  <ul class="pageitem">
    <li class="textbox">
      <p>
        <script src="http://rssxpress.ukoln.ac.uk/lite/viewer/?rss=http://www.nytimes.com/services/xml/rss/nyt/HomePage.xml" type="text/javascript"></script>
      </p>
    </li>
  </ul>
</div>

Finally, you should add the code for closing the body and html tags and save the new file as rss.html. After loading your new application, you will see a screen as shown in the screenshot. If you click on one of the items, Safari Mobile will open the web page for the article, as shown in the following screenshot.

How it works...

To avoid complexity and keep our recipe as simple as possible, we used a free web service provided by RSSxpress. This service is called RSSxpressLite, and it works by returning a chunk of JavaScript code. This code inserts an HTML table containing a summary and a link for each item provided by the referenced RSS feed. Thanks to this web service, we don't need to parse the response of the original feed; RSSxpressLite does the job for us. Since the web service returns the code we need, we only have to write a small line of JavaScript referring to the web service through its URL, passing the RSS feed to display as a parameter.

There's more...

To learn more about RSSxpressLite, take a look at http://rssxpress.ukoln.ac.uk/lite/include/.

Opening a YouTube video

It is safe to say that everyone who uses the Internet knows of YouTube. It is one of the most popular websites in the world. Millions of people use YouTube to watch videos through an assortment of devices, such as PCs, tablets, and smartphones. Apple's devices are no exception, and of course we can watch YouTube videos on the iPhone and iPad. In this case, we're going to load a YouTube video when the user clicks on a specific button.
The link will open a new web page, which allows us to play it. The simple XHTML recipe can be found at code/ch10/youtube.html in the code bundle provided on the Packtpub site.

Getting ready

This recipe only requires the UiUIKit framework for building the user interface of the application. You can use your favorite YouTube video for this recipe. By default, we're using a video provided by Apple introducing the new iPad 2 device.

How to do it...

Following the example from the previous recipe, create a new XHTML file called youtube.html and insert the standard headers for loading the UiUIKit framework. Then add the following CSS inside the <head> section to style the main button:

<style type="text/css">
  #btn {
    margin-right: 12px;
  }
</style>

Our graphical user interface will be completed by adding the following XHTML code:

<div id="header">
  <h1>YouTube video</h1>
</div>
<h1>Video</h1>
<p id="btn">
  <a href="http://m.youtube.com/watch?v=Z_d6_gbb90I" class="button white">Watch</a>
</p>

After loading the new application on your device, you will see a screen similar to the following screenshot. When the user clicks on our main button, Safari Mobile will go to the video's page on YouTube, as shown in the following screenshot. After clicking on the play button, the video will start playing. We can rotate our device for a better aspect ratio, as shown in the following screenshot.

How it works...

This recipe is pretty simple; we only need to create a link to the desired YouTube video. The most important thing to keep in mind is that we use a mobile-specific domain for loading our video on mobile devices: the URL is simply http://m.youtube.com instead of the regular http://www.youtube.com.

Posting on your Facebook wall

The application developed for this recipe shows how to authenticate with Facebook and how to write a post on your public wall. If everything is successful, an alert box reports it to the user.
Although there are myriad complex applications with richer functionality that could be built for Facebook, we will focus on simple posting in this recipe: the application will only allow you to post on your own wall, and for this you need to hardcode your Facebook account details. This keeps the recipe as simple as possible while still giving a good understanding of the complex processes involved in dealing with the OAuth protocol used by Facebook. Thanks to this open protocol, secure authentication against APIs from web applications becomes easier. Note that this recipe requires a real Internet domain and a server with public access, so you cannot test this application on your local machine. Our application needs a server-side language, for which we'll use PHP. Currently, it's very easy, and often very cheap, to find hosting services for PHP code. You can find the complete code for this recipe at code/ch10/facebook/ in the code bundle provided on the Packtpub site.

Getting ready

To build the application for this recipe, you need a public server with an Internet domain linked to it. You must also install a web server with a PHP interpreter and have the UiUIKit framework ready to use. Finally, you need to install the cURL library, which allows PHP to connect and communicate with many different types of servers. Two interesting resources on this topic are:

http://www.php.net/manual/en/book.curl.php
http://www.php.net/manual/en/curl.setup.php
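The recipe relies on PHP and cURL to drive Facebook's OAuth flow. The general shape of that flow, independent of any provider, can be sketched in Python; every URL, parameter name, and credential below is a placeholder invented for illustration and does not correspond to Facebook's actual endpoints:

```python
from urllib.parse import urlencode

# Hypothetical endpoints and credentials: placeholders only, NOT a real API.
AUTHORIZE_URL = "https://provider.example.com/oauth/authorize"
TOKEN_URL = "https://provider.example.com/oauth/access_token"
APP_ID = "my-app-id"
REDIRECT_URI = "http://myserver.example.com/callback.php"

def build_authorize_url(scope):
    """Step 1: redirect the user's browser here to ask for permission."""
    params = {"client_id": APP_ID, "redirect_uri": REDIRECT_URI, "scope": scope}
    return AUTHORIZE_URL + "?" + urlencode(params)

def build_token_request(code):
    """Step 2: after the provider redirects back with ?code=..., exchange the
    code server-side (the recipe does this with cURL from PHP) for a token."""
    params = {"client_id": APP_ID, "redirect_uri": REDIRECT_URI, "code": code}
    return TOKEN_URL + "?" + urlencode(params)

print(build_authorize_url("publish_stream"))
print(build_token_request("example-code"))
```

The server-side exchange is the part that must run on a publicly reachable host, which is why the recipe cannot be tested locally.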
Windows Phone 7 Silverlight: Location Services

Packt
02 Sep 2011
11 min read
  (For more resources on this subject, see here.)

Introduction

One of the most powerful features of smartphones today is location awareness, and Windows Phone 7 is no exception. The wide consumerization of GPS around 10 years ago brought handheld GPS receivers to consumers on the go, but few individuals could justify the expense or pocket space. Now that smartphones have GPS built in, developers have built incredibly powerful location-aware applications: for example, apps that help users track their jogging route, get real-time navigation assistance while driving, and map or analyze their golf game. In this article, we will take a deep dive into the location API for Windows Phone 7 by building an application to help navigate during travel and another to map the user's location.

Tracking latitude and longitude

In this recipe, we will implement the most fundamental use of location services: tracking latitude and longitude. Our sample application will be a navigation helper that displays all the available location information. We will also review the different ways in which the phone gets its location information, and their attributes.

Getting ready

We will be working in Visual Studio for this tutorial, so start by opening Studio and creating a new Windows Phone 7 application using the Windows Phone Application project template. All the location/GPS-related methods and classes are found in the System.Device assembly, so add this reference next. We will need some UI to start tracking and displaying the data, so go to the MainPage.xaml file, if it's not already open. Change the ContentPanel from a Grid to a StackPanel, then add a button to the designer and set its Content property to Start Tracking. Next, add four TextBlocks. Two of these will be the Latitude and Longitude labels. We will use the others to display the latitude/longitude coordinates, so set their x:Name properties to txtLatitude and txtLongitude respectively.
You can also set the application and page titles if you like. The resulting page should look similar to the following screenshot.

How to do it...

The core class used for tracking location is the GeoCoordinateWatcher. We subscribe to events on the watcher to be notified when changes occur:

1. Double-click on your button in the designer to go to the button's click event handler in the code-behind file. This is where we will start watching for location changes.

2. Create a GeoCoordinateWatcher field variable named _watcher. Set this field inside your click event handler to a new GeoCoordinateWatcher.

3. Next, add a handler to the PositionChanged event named _watcher_PositionChanged, then start watching for position changes by calling the Start method.

4. In order to use the position information, create the void handler method with parameters named sender of type object and e of type GeoPositionChangedEventArgs<GeoCoordinate>. Inside this method, set the Text properties of the txtLatitude and txtLongitude text blocks to the coordinate values e.Position.Location.Latitude and e.Position.Location.Longitude respectively.

Latitude and longitude as strings: latitude and longitude are of type double and can be converted to strings using the ToString method for display.
You should end up with a class similar to the following block of code:

public partial class MainPage : PhoneApplicationPage
{
    private IGeoPositionWatcher<GeoCoordinate> _watcher;

    public MainPage()
    {
        InitializeComponent();
    }

    private void butTrack_Click(object sender, RoutedEventArgs e)
    {
        _watcher = new GeoCoordinateWatcher();
        _watcher.PositionChanged += _watcher_PositionChanged;
        _watcher.Start();
    }

    void _watcher_PositionChanged(object sender, GeoPositionChangedEventArgs<GeoCoordinate> e)
    {
        txtLatitude.Text = e.Position.Location.Latitude.ToString();
        txtLongitude.Text = e.Position.Location.Longitude.ToString();
    }
}

That's it. You can now deploy this app to your phone, start tracking, and see the latitude and longitude changes on your screen.

How it works...

The watcher starts a new background thread to watch for position changes. Each change is passed to your event handler(s) for processing. Windows Phone 7 provides location services through the following three sources:

GPS: satellite based
Wi-Fi: known wireless network positions
Cellular: cellular tower triangulation

Each of these position providers has its strengths and weaknesses, but the combination of the three covers nearly any possible use case:

GPS is the most accurate, but you must have an unobstructed view of the sky.
Wi-Fi can be accurate depending on how close you are to the access point, but you must be in range of a known wireless network.
Cellular is the least accurate, but only needs a cell signal.

So if you're in an urban area with tall buildings, GPS may be intermittent, but Wi-Fi networks and cellular coverage should be plentiful. If you are in a rural area, GPS should work well and cellular triangulation might help where available.

Tracking altitude, speed, and course

In this section, we will discuss the different types of location information provided by the GeoCoordinateWatcher and how they might be used.
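The pattern at work here (subscribe a handler to an event, call Start, and let the watcher push position updates from a background thread) is platform-neutral. The following Python sketch mimics it synchronously; the class and method names are invented for illustration and are not the Windows Phone API:

```python
class PositionWatcher:
    """Illustrative stand-in for a position watcher: subscribers register a
    handler, and start() pushes each position fix to every handler. The real
    watcher raises its events from a background thread; this toy is synchronous."""

    def __init__(self, source):
        self._source = source          # iterable of (latitude, longitude) fixes
        self._handlers = []

    def position_changed(self, handler):
        self._handlers.append(handler)

    def start(self):
        for lat, lon in self._source:
            for handler in self._handlers:
                handler(lat, lon)

display = {}

def update_display(lat, lon):
    # Mirrors setting txtLatitude.Text / txtLongitude.Text in the recipe.
    display["latitude"] = str(lat)
    display["longitude"] = str(lon)

watcher = PositionWatcher([(47.6097, -122.3331), (47.6103, -122.334)])
watcher.position_changed(update_display)
watcher.start()
print(display)   # the display always holds the most recent fix
```

The key design point is that the UI code never polls: it simply reacts to each update pushed by the watcher.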
A quick look at the Object Browser shows us that the GeoCoordinate object has several interesting properties: in addition to Latitude and Longitude, there are Altitude, Speed, and Course, among others. Altitude and Speed are pretty self-explanatory, but Course might not be as obvious: Course is your heading, the direction you are traveling given two points. The following table shows each property and its unit of measurement. Horizontal and vertical accuracy specify the accuracy of Latitude/Longitude and Altitude, respectively, in meters. For example, this means your actual latitude is between the reported Latitude minus the accuracy value and the reported Latitude plus the accuracy value. The smaller the accuracy value, the more accurate the position, but the longer it may take to obtain one.

Getting ready

Add three more sets of TextBlock controls under the longitude control, one for each of the following properties: Altitude, Speed, and Course. Set the speed label's Text property to Speed (mph). Name the TextBlock controls as you did for latitude/longitude so we can assign their Text properties from the code-behind. The page should look similar to the following screenshot.

How to do it...

Perform the following steps to add altitude, speed, and course to the application:

1. Open the code-behind file for the page and, in the position-changed handler, set Altitude in the same way as we did for latitude/longitude before; simply set the Text property of the txtAltitude TextBlock to the Altitude property as a string.

2. For the Speed property, convert from meters per second to miles per hour: one meter per second equals 2.2369363 miles per hour, so multiply the Speed property by 2.2369363.

3. Display Course so that ordinary users can understand it, using the names of the directions (that is, North, South, East, West). The Course value is a degree value from 0 to 360, where 0/360 is north and the degrees increase clockwise, as on a compass.
4. Create a chain of if statements that provides the correct heading: 46 through 135 degrees is East, 136 through 225 is South, 226 through 315 is West, and anything else (316 through 360 and 0 through 45) is North.

Our _watcher_PositionChanged method is now as follows:

void _watcher_PositionChanged(object sender, GeoPositionChangedEventArgs<GeoCoordinate> e)
{
    txtLatitude.Text = e.Position.Location.Latitude.ToString();
    txtLongitude.Text = e.Position.Location.Longitude.ToString();
    txtAltitude.Text = e.Position.Location.Altitude.ToString();
    txtSpeed.Text = (e.Position.Location.Speed * 2.2369363).ToString();

    double course = e.Position.Location.Course;
    string heading;
    if (course >= 46 && course <= 135)
        heading = "East";
    else if (course >= 136 && course <= 225)
        heading = "South";
    else if (course >= 226 && course <= 315)
        heading = "West";
    else
        heading = "North";
    txtCourse.Text = heading;
}

How it works...

If you deploy the application to your phone now, you will see Speed display NaN (Not a Number), Altitude display zero, and Course blank. This is because Altitude, Speed, and Course are only available when you request high-accuracy location information. We do this by instantiating the GeoCoordinateWatcher with an accuracy of GeoPositionAccuracy.High in the constructor. By default, the accuracy is set to GeoPositionAccuracy.Default, which only uses cellular triangulation and is not accurate enough to calculate speed, altitude, or course. GeoPositionAccuracy.High uses GPS and Wi-Fi, when available, which provide more accurate positions. Although it is more accurate, it also uses more power and can take longer to get your position, which is why High is not the default. It is strongly recommended that you only use the higher accuracy when it is absolutely needed; in this case, we need Altitude, Speed, and Course, so it is necessary.
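The course-to-heading mapping and the speed conversion used in this handler are pure arithmetic, so they are easy to verify outside the phone. A quick Python check (illustrative only, not part of the recipe):

```python
METERS_PER_SECOND_TO_MPH = 2.2369363

def mph(speed_mps):
    """Convert a speed in meters per second to miles per hour."""
    return speed_mps * METERS_PER_SECOND_TO_MPH

def heading(course_degrees):
    """Map a 0-360 compass course to a cardinal name; 316-360 and 0-45 are North."""
    if 46 <= course_degrees <= 135:
        return "East"
    if 136 <= course_degrees <= 225:
        return "South"
    if 226 <= course_degrees <= 315:
        return "West"
    return "North"

print(heading(90), heading(180), heading(270), heading(0), heading(350))
print(round(mph(10), 2))   # 10 m/s is about 22.37 mph
```

Note that North is the fallback branch, which is what makes the wrap-around range (316 through 45) work without extra comparisons.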
Set the accuracy level to high in the GeoCoordinateWatcher constructor, like so:

_watcher = new GeoCoordinateWatcher(GeoPositionAccuracy.High);

If you redeploy the application to the phone, you may notice it still shows NaN for Speed. This may be because you are indoors with an obstructed view of the sky, or it may just take a few moments to get a good signal. Once you have a good GPS signal, you should see valid Speed, Altitude, and Course values. The best way to test this application is in the passenger seat of a moving vehicle, so you can compare the vehicle's speedometer to the speed in the application. There may also be times when you lose the GPS signal. When this occurs, the latitude and longitude values will be set to NaN. In such cases, you may want to give the user a friendlier explanation of the problem. You can simply check the IsUnknown property in the position-changed event and provide a better message. For example:

void _watcher_PositionChanged(object sender, GeoPositionChangedEventArgs<GeoCoordinate> e)
{
    if (e.Position.Location.IsUnknown)
    {
        txtLatitude.Text = "Finding your position. Please wait ...";
        txtLongitude.Text = "";
        txtAltitude.Text = "";
        txtSpeed.Text = "";
        txtCourse.Text = "";
        return;
    }

    txtLatitude.Text = e.Position.Location.Latitude.ToString();
    txtLongitude.Text = e.Position.Location.Longitude.ToString();
    txtAltitude.Text = e.Position.Location.Altitude.ToString();
    txtSpeed.Text = (e.Position.Location.Speed * 2.2369363).ToString();

    double course = e.Position.Location.Course;
    string heading;
    if (course >= 46 && course <= 135)
        heading = "East";
    else if (course >= 136 && course <= 225)
        heading = "South";
    else if (course >= 226 && course <= 315)
        heading = "West";
    else
        heading = "North";
    txtCourse.Text = heading;
}

The last property we will cover in this recipe is the Permission property on the GeoCoordinateWatcher. Before submitting your app to the marketplace, you must define which phone capabilities your app requires.
One of those capabilities is location. Before a user installs an application, they are informed of the capabilities the app requires and must accept them to install it. Even though the user has given the app permission to use the phone's location services, the user can still turn off location services for all apps from the Settings menu. The Permission property helps us check for this and tell the user why the app isn't working. There is a slight trick, though: the Permission property will be set to Granted when the watcher is first created, even if location services are disabled in the Settings menu. It is only reset to Denied after the Start method is called, so we must check for a Denied permission value after calling Start. For instance:

private void butTrack_Click(object sender, RoutedEventArgs e)
{
    _watcher = new GeoCoordinateWatcher(GeoPositionAccuracy.High);
    _watcher.PositionChanged += _watcher_PositionChanged;
    _watcher.Start();
    if (_watcher.Permission == GeoPositionPermission.Denied)
        txtLatitude.Text = "Please enable location services and retry";
}

We can test this by turning off location services. From the Start screen, flick left to the App list, tap Settings, and then tap location. Swipe the switch to the Off position. Redeploy the application to your phone, click the Start Tracking button, and you will see our new message. As mentioned previously, the user must accept the capabilities of the application before installing it. There may be future updates to the phone that allow the user to change the allowed capabilities of individual apps from the Settings menu as well; the Permission property would also be useful in that scenario.

There's more...

You may have also noticed the CivicAddressResolver and CivicAddress classes in the System.Device.Location namespace. As its name implies, the CivicAddressResolver returns an address from a GeoCoordinate.
Unfortunately, this is not yet implemented for Windows Phone. You can instantiate them and attempt to use them, but the returned CivicAddress will always be unknown. Hopefully, this will be implemented in future updates of the operating system.
Android 3.0 Application Development: Multimedia Management

Packt
08 Aug 2011
6 min read
Android 3.0 Application Development Cookbook: over 70 working recipes covering every aspect of Android development.

Very few successful applications are completely silent or have only static graphics, and so that Android developers can take full advantage of the advanced multimedia capabilities of today's smartphones, the system provides the android.media package, which contains many useful classes. The MediaPlayer class allows the playback of both audio and video from raw resources, files, and network streams, and the MediaRecorder class makes it possible to record both sound and images. Android also offers ways to manipulate sounds and create interactive effects through the SoundPool class, which allows us not only to bend the pitch of our sounds but also to play more than one at a time.

Playing an audio file from within an application

One of the first things we may want to do with regard to multimedia is play back an audio file. Android provides the android.media.MediaPlayer class, which makes playback and most media-related functions remarkably simple. In this recipe we will create a simple media player that plays a single audio file.

Getting ready

Before we start this project we will need an audio file for playback. Android can decode audio with any of the following file extensions:

.3GP
.MP4
.M4A
.MP3
.OGG
.WAV

There are also quite a few acceptable MIDI file formats, but they have not been included here, as their use is less common and their availability often depends on whether a device is running the standard Android platform or a specific vendor extension. Before you start this exercise, create or find a short sound sample in one of the given formats. We used a five-second Ogg Vorbis file and called it my_sound_file.ogg.

How to do it...

Start up a new Android project in Eclipse and create a new folder: res/raw. Place the sound file that you just prepared in this folder. In this example we refer to it as my_sound_file.
Using either the Graphical Layout or the main.xml panel, edit the file res/layout/main.xml to contain three buttons, as seen in the following screenshot. Call these buttons play_button, pause_button, and stop_button.

In the Java activity code, declare a MediaPlayer field and set up the layout in the onCreate() method:

private MediaPlayer mPlayer;

@Override
public void onCreate(Bundle state) {
    super.onCreate(state);
    setContentView(R.layout.main);

Associate the buttons we added in step 3 with Java variables by adding the following lines to onCreate():

Button playButton = (Button) findViewById(R.id.play_button);
Button pauseButton = (Button) findViewById(R.id.pause_button);
Button stopButton = (Button) findViewById(R.id.stop_button);

We need a click listener for our play button. This can also be defined from within onCreate():

playButton.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        mPlayer = MediaPlayer.create(getApplicationContext(), R.raw.my_sound_file);
        mPlayer.setLooping(true);
        mPlayer.start();
    }
});

Next, add a listener for the pause button as follows:

pauseButton.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        mPlayer.pause();
    }
});

Finally, include a listener for the stop button:

stopButton.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        mPlayer.stop();
        mPlayer.reset();
    }
});

Now run this code on an emulator or your handset and test each of the buttons.

How it works...

The MediaPlayer class provides some useful functions, and the use of start(), pause(), stop(), and setLooping() should be clear. However, if you are thinking that calling MediaPlayer.create(context, id) every time the play button is pressed is overkill, you would be correct. This is because once stop() has been called on the MediaPlayer, the media needs to be reset and prepared (with reset() and prepare()) before start() can be called again.
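The reset/prepare/start lifecycle described above can be pictured as a small state machine. The following Python sketch is a toy model of those rules, not the real Android API; the state names and transitions are simplified for illustration:

```python
class LifecycleError(Exception):
    pass

class PlayerSketch:
    """Toy model of the lifecycle rules described above: start() is only legal
    from a prepared (or paused) state, and a stopped player must be reset and
    prepared before it can start again."""

    def __init__(self):
        self.state = "idle"

    def prepare(self):
        if self.state != "idle":
            raise LifecycleError("cannot prepare from %s" % self.state)
        self.state = "prepared"

    def start(self):
        if self.state not in ("prepared", "paused"):
            raise LifecycleError("cannot start from %s" % self.state)
        self.state = "started"

    def pause(self):
        if self.state != "started":
            raise LifecycleError("cannot pause from %s" % self.state)
        self.state = "paused"

    def stop(self):
        if self.state not in ("started", "paused"):
            raise LifecycleError("cannot stop from %s" % self.state)
        self.state = "stopped"

    def reset(self):
        self.state = "idle"

player = PlayerSketch()
player.prepare()
player.start()
player.stop()
try:
    player.start()              # illegal: stopped players need reset() + prepare()
except LifecycleError as err:
    print("caught:", err)
player.reset()
player.prepare()
player.start()
print(player.state)
```

Modeling the transitions this way makes it obvious why a second press of the play button fails unless the player is reset and prepared first.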
Fortunately, MediaPlayer.create() also calls prepare(), so the first time we play an audio file we do not have to worry about this. The lifecycle of the MediaPlayer is not always straightforward, and the order in which it takes on various states is best explained diagrammatically. Otherwise, MediaPlayer has lots of useful methods, such as isPlaying(), which returns a Boolean telling us whether our file is being played, or getDuration() and getCurrentPosition(), which tell us how long the sample is and how far through it we are. There are also some useful hooks we can employ with MediaPlayer, the most commonly used being onCompletionListener() and onErrorListener().

There's more...

We are not restricted to playing back raw resources. We can also play back local files or even stream audio.

Playing back a file or a stream

Use the MediaPlayer.setDataSource(String) method to play an audio file or stream. In the case of streaming audio, this will need to be a URL representing a media file that is capable of being played progressively, and you will need to prepare the media player each time it runs:

MediaPlayer player = new MediaPlayer();
player.setDataSource("string value of your file path or URL");
player.prepare();
player.start();

It is essential to surround setDataSource() with a try/catch clause in case the source does not exist when dealing with removable or online media.

Playing back video from external memory

The MediaPlayer class that we met in the previous recipe works for video in the same manner that it does for audio, so as not to make this task a near copy of the last, here we will look at how to play back video files stored on an SD card using the VideoView object.

Getting ready

This recipe requires a video file for our application to play back. Android can decode H.263, H.264, and MPEG-4 files; generally speaking, this means files with .3gp and .mp4 file extensions.
For platforms since 3.0 (API level 11) it is also possible to manage H.264 AVC files. Find a short video clip in one of these compatible formats and save it on the SD card of your handset. Alternatively you can create an emulator with an SD card enabled and push your video file onto it. This can be done easily through Eclipse's DDMS perspective from the File Explorer tab: In this example we called our video file my_video.3gp.  
How to Interact with a Database using Rhom

Packt
28 Jul 2011
5 min read
  Rhomobile Beginner's Guide: step-by-step instructions to build an enterprise mobile web application from scratch.

(For more resources on this topic, see here.)

What is ORM?

ORM connects business objects and database tables to create a domain model where logic and data are presented in a single wrapper. In addition, the ORM classes wrap our database tables to provide a set of class-level methods that perform table-level operations. For example, we might need to find the Employee with a particular ID. This is implemented as a class method that returns the corresponding Employee object. In Ruby code, this looks like:

employee = Employee.find(1)

This code returns an employee object whose ID is 1.

Exploring Rhom

Rhom is a mini Object-Relational Mapper (ORM) for Rhodes. It is similar to another ORM, Active Record in Rails, but with a more limited feature set. Interaction with the database is simplified, as we don't need to worry about which database the phone uses: the iPhone uses SQLite, and BlackBerry uses HSQL or SQLite depending on the device. Now we will create a new model and see how Rhom interacts with the database.

Time for action – Creating a company model

We will create a model called company. In addition to the default ID attribute created by Rhodes, it will have one attribute, name, that stores the name of the company. Now, go to the application directory and run the following command:

$ rhogen model company name

which generates the following:

[ADDED] app/Company/index.erb
[ADDED] app/Company/edit.erb
[ADDED] app/Company/new.erb
[ADDED] app/Company/show.erb
[ADDED] app/Company/index.bb.erb
[ADDED] app/Company/edit.bb.erb
[ADDED] app/Company/new.bb.erb
[ADDED] app/Company/show.bb.erb
[ADDED] app/Company/company_controller.rb
[ADDED] app/Company/company.rb
[ADDED] app/test/company_spec.rb

Notice the number of files generated by the rhogen command.
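The ORM idea described above (a class wraps a table, and class-level methods perform table-level operations) can be illustrated with a minimal, self-contained Python sketch; this is a toy for illustration, not how Rhom is implemented:

```python
class Model:
    """Toy ORM in the spirit of Rhom/Active Record: each subclass wraps one
    in-memory "table", and class methods perform table-level operations."""

    def __init_subclass__(cls):
        cls._table = {}      # id -> record, one table per model subclass
        cls._next_id = 1

    @classmethod
    def create(cls, **attrs):
        record = cls()
        record.id = cls._next_id
        for name, value in attrs.items():
            setattr(record, name, value)
        cls._table[record.id] = record
        cls._next_id += 1
        return record

    @classmethod
    def find(cls, key):
        """find(1) returns the record with ID 1; find("all") returns every record,
        mirroring Rhom's Model.find(:all)."""
        if key == "all":
            return list(cls._table.values())
        return cls._table.get(key)

class Employee(Model):
    pass

Employee.create(name="Ada")
Employee.create(name="Grace")
print(Employee.find(1).name)        # Ada
print(len(Employee.find("all")))    # 2
```

The point of the pattern is that calling code never writes SQL or touches the storage engine directly; the class-level API hides which database sits underneath, exactly the property Rhom exploits across iPhone and BlackBerry.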
Now, we will add a link on the index page so that we can browse to it from our home page. Add the link to the index.erb file for all phones except BlackBerry; if the target phone is a BlackBerry, add it to the index.bb.erb file inside the app folder (we have different views for BlackBerry):

<li>
  <a href="<%= url_for :controller => :Company %>">
    <span class="title">Company</span><span class="disclosure_indicator"/>
  </a>
</li>

We can see from the image that a Company link is created on the home page of our application. Now, we can build our application and add some dummy data. You can see that we have added three companies: Google, Apple, and Microsoft.

What just happened?

We just created a model company with an attribute name, made a link to access it from our home page, and added some dummy data to it. Adding a few company names now will help us in the next section.

Association

Associations are connections between two models that make common operations simpler and easier in your code. We will create an association between the Employee model and the Company model.

Time for action – Creating an association between employee and company

The relationship between an employee and a company can be defined as: "An employee can be in only one company, but one company may have many employees." So now we will add an association between the employee and the company model. Once we have made entries in the company model, we will be able to see the company select box populated in the employee form. The relationship between the two models is defined in the employee.rb file as:

belongs_to :company_id, 'Company'

Here, Company corresponds to the model name and company_id corresponds to the foreign key. Since at present we have a company field instead of company_id in the employee model, we will rename company to company_id.
To retrieve all the companies stored in the Company model, we need to add this line to the new action of the employee_controller:

@companies = Company.find(:all)

The find command is provided by Rhom and is used to form a query and retrieve results from the database. Company.find(:all) returns all the values stored in the Company model as an array of objects. Now, we will edit the new.erb and edit.erb files present inside the Employee folder:

<h4 class="groupTitle">Company</h4>
<ul>
  <li>
    <select name="employee[company_id]">
      <% @companies.each do |company| %>
        <option value="<%= company.object %>" <%= "selected" if company.object == @employee.company_id %>>
          <%= company.name %>
        </option>
      <% end %>
    </select>
  </li>
</ul>

If you look at the code, we have created a select box for choosing a company. We defined a variable @companies, which is an array of objects; each object has two fields, the company name and its ID. We loop over @companies and show each company it contains. In the above image the companies are populated in the select box, which we added earlier, and displayed in the employee form.

What just happened?

We just created an association between the employee and company models and used it to populate the company select box present in the employee form. As of now, Rhom has fewer features than other ORMs such as Active Record, and there is very little support for database associations.
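The belongs_to association boils down to storing a foreign key on one record and resolving it to another record on demand. A tiny Python sketch of that resolution, with invented data and helper names (this is not Rhom's implementation):

```python
# Each employee row stores a foreign key (company_id) pointing into the
# companies "table"; the association follows that key to the Company record.
companies = {1: {"name": "Google"}, 2: {"name": "Apple"}, 3: {"name": "Microsoft"}}
employees = [
    {"name": "Alice", "company_id": 2},
    {"name": "Bob", "company_id": 1},
]

def company_for(employee):
    """Resolve the foreign key, as an employee.company accessor would."""
    return companies[employee["company_id"]]

for employee in employees:
    print(employee["name"], "works at", company_for(employee)["name"])
```

The select box in the form above is doing exactly this in reverse: it writes the chosen company's ID into employee[company_id], so the association can be resolved later.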

Android 3.0 Application Development: Managing Menus

Packt
26 Jul 2011
7 min read
Android 3.0 Application Development Cookbook

All Android handsets have a hard menu key for calling up secondary choices that do not need to be made available from the main screen, or that need to be available across an application. In keeping with Android's philosophy of separating appearance from function, menus are generally created in the same way as other visual elements, that is, with the use of a definitive XML layout file. There is a lot that can be done to control menus dynamically, and Android provides classes and interfaces for displaying context-sensitive menus, organizing menu items into groups, and including shortcuts.

Creating and inflating an options menu
To keep our application code separate from our menu layout information, Android uses a designated resource folder (res/menu) and an XML layout file to define the physical appearance of our menu, such as the titles and icons we see in Android pop-up menus. The Activity class contains a callback method, onCreateOptionsMenu(), that can be overridden to inflate a menu.

Getting ready
Android menus are defined in a specific, designated folder. Eclipse does not create this folder by default, so start up a new project and add a new folder called menu inside the res folder.

How to do it...
Create a new XML file in our new res/menu folder and call it my_menu.xml. Complete the new file as follows:

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:id="@+id/item_one"
          android:title="first item" />
    <item android:id="@+id/item_two"
          android:title="second item" />
</menu>

In the Java application file, include the following overridden callback:

@Override
public boolean onCreateOptionsMenu(Menu menu) {
    MenuInflater inflater = getMenuInflater();
    inflater.inflate(R.menu.my_menu, menu);
    return true;
}

Run the application on a handset or emulator and press the hard menu key to view the menu:

How it works...
Whenever we create an Android menu using XML, we must place it in the folder we used here (res/menu). Likewise, the base node of our XML structure must be <menu>. The purpose of the id element should be self-explanatory, and the title attribute sets the text that the user sees when the menu item is inflated. The MenuInflater object is a straightforward way of turning an XML layout file into a Java object. We create a MenuInflater with getMenuInflater(), which returns a MenuInflater from the current activity, of which it is a member. The inflate() call takes both the XML file and the equivalent Java object as its parameters.

There's more...
The type of menu we created here is referred to as an options menu, and it comes in two flavors depending on how many items it contains. There is also a neater way to handle item titles when they are too long to be completely displayed.

Handling longer options menus
When an options menu has six or fewer items, it appears as a block of items at the bottom of the screen. This is called the icon menu and is, as its name suggests, the only menu type capable of displaying icons. On tablets running API level 11 or greater, the Action bar can also be used to access the menu. The icon menu is also the only menu type that cannot display radio buttons or check marks. When an inflated options menu has more than six items, the sixth place on the icon menu is replaced by the system's own More item which, when pressed, calls up the extended menu; this displays all items from the sixth onwards, adding a scroll bar if necessary.

Providing condensed menu titles
If Android cannot fit an item's title text into the space provided (often as little as one third of the screen width), it will simply truncate it. To provide a more readable alternative, include the android:titleCondensed="string" attribute alongside android:title in the item definition.
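As a quick sketch of that tip (the id and title strings here are illustrative, not part of the recipe), a condensed title is declared per item alongside the full title:

```xml
<item
    android:id="@+id/item_long"
    android:title="A much longer, more descriptive menu title"
    android:titleCondensed="Short title" />
```

Android uses the android:titleCondensed value in contexts where the full title will not fit, such as the icon menu.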
Adding Option menu items to the Action Bar
For tablet devices targeting Android 3.0 or greater, option menu items can be added to the Action Bar. Adjust the target build of the above project to API level 11 or above and replace the res/menu/my_menu.xml file with the following:

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:id="@+id/item_one"
          android:title="first item"
          android:icon="@drawable/icon"
          android:showAsAction="ifRoom" />
    <item android:id="@+id/item_two"
          android:title="second item"
          android:icon="@drawable/icon"
          android:showAsAction="ifRoom|withText" />
    <item android:id="@+id/item_three"
          android:title="third item"
          android:icon="@drawable/icon"
          android:showAsAction="always" />
    <item android:id="@+id/item_four"
          android:title="fourth item"
          android:icon="@drawable/icon"
          android:showAsAction="never" />
</menu>

Note from the output that, unless the withText flag is included, the menu item will display only as an icon:

Designing Android-compliant menu icons
The menu items we defined in the previous recipe had only text titles to identify them to the user; however, nearly all icon menus that we see on Android devices combine a text title with an icon. Although it is perfectly possible to use any graphic image as a menu icon, using images that do not conform to Android's own guidelines on icon design is strongly discouraged, and Android's own development team is particularly insistent that only the prescribed color palette and effects are used. This is so that the built-in menus, which are universal across Android applications, provide a consistent experience for the user. Here we examine the colors and dimensions prescribed, and also examine how to provide the resulting images as system resources in such a way as to cater for a variety of screen densities.

Getting ready
The little application we put together in the last recipe makes a good starting point for this one.
Most of the information here concerns the design of the icons, so you may want to have a graphics editor such as GIMP or Photoshop open, or you may want to refer back here later for the exact dimensions and palettes.

How to do it...
Open the res/menu/my_menu.xml file and add the android:icon elements seen here to each item:

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:id="@+id/item_one"
          android:icon="@drawable/my_menu_icon"
          android:title="first item" />
    <item android:id="@+id/item_two"
          android:icon="@drawable/my_menu_icon"
          android:title="second item" />
</menu>

With your graphics editor, create a new transparent PNG file, precisely 48 by 48 pixels in dimension. Ensuring that there is at least a 6 pixel border all the way around, produce your icon as a simple two-dimensional flat shape, something like this:

Fill the shape with a grayscale gradient that ranges from 47% to 64% (white), with the lighter end at the top. Provide a black inner shadow with the following settings:

20% opaque
90° angle (top to bottom)
2 pixel width
2 pixel distance

Next, add an inner bevel with:

Depth of 1%
90° altitude
70% opaque, white highlight
25% opaque, black shadow

Now give the graphic a white outer glow with:

55% opacity
3 pixel size
10% spread

Make two copies of our graphic, one resized to 36 by 36 pixels and one to 72 by 72 pixels. Save the largest file in res/drawable-hdpi as my_menu_icon.png. Save the 48 by 48 pixel file with the same name in the drawable-mdpi folder, and the smallest image in drawable-ldpi. To see the full effect of these three files in action, you will need to run the software on handsets with different screen resolutions or construct emulators for that purpose.

How it works...
As already mentioned, Android currently insists that menu icons conform to their guidelines, and most of the terms used here should be familiar to anyone who has designed an icon before.
The designated drawable folders allow us to provide the best possible graphics for a wide variety of screen densities. Android will automatically select the most appropriate graphic for a handset or tablet so that we can refer to our icons generically with @drawable/. It is only ever necessary to provide icons for the first five menu items as the Icon Menu is the only type to allow icons.  