
How-To Tutorials - Mobile


Offloading work from the UI Thread on Android

Packt
13 Oct 2016
8 min read
In this article, Helder Vasconcelos, the author of Asynchronous Android Programming - Second Edition, presents the most common asynchronous techniques used on Android to develop an application that, by using the multicore CPUs available on the most recent devices, is able to deliver up-to-date results quickly, respond to user interactions immediately, and produce smooth transitions between the different UI states without draining the device battery.

(For more resources related to this topic, see here.)

Several reports have shown that efficient applications that provide a great user experience have better review ratings, higher user engagement, and are able to achieve higher revenues.

Why do we need asynchronous programming on Android?

The Android system, by default, executes the UI rendering pipeline, the lifecycle callback handling of the base components (Activity, Fragment, Service, BroadcastReceiver, and so on), and UI interaction processing on a single thread, sometimes known as the UI thread or main thread.

The main thread handles its work sequentially, collecting tasks from a queue (the message queue) that are submitted to be processed by a particular application component. When any unit of work, such as an I/O operation, takes a significant period of time to complete, it blocks and prevents the main thread from handling the next tasks waiting on the main thread queue to be processed.

Besides that, most Android devices refresh the screen 60 times per second, so every 16 milliseconds (1 s / 60 frames) a UI rendering task has to be processed by the main thread in order to draw and update the device screen. The next figure shows a typical main thread timeline that runs the UI rendering task every 16 ms. When a long-lasting operation prevents the main thread from executing frame rendering in time, the current frame drawing is deferred or some frame drawings are missed, generating a UI glitch noticeable to the application user.

A typical long-lasting operation could be:

- Network data communication:
  - HTTP REST request
  - SOAP service access
  - File upload or backup
- Reading or writing of files to the filesystem:
  - Shared preferences files
  - File cache access
- Internal database reading or writing
- Camera, image, video, or binary file processing

A user interface glitch produced by dropped frames is known on Android as jank. The Android SDK command systrace (https://developer.android.com/studio/profile/systrace.html) comes with the ability to measure the performance of UI rendering and then diagnose and identify problems that may arise from the various threads that are running in the application process.

In the next image, we illustrate a typical main thread timeline when a blocking operation dominates the main thread for more than two frame-rendering windows. As you can see, two frames are dropped and one UI rendering frame is deferred because our blocking operation took approximately 35 ms to finish.

When a long-running operation on the main thread does not complete within 5 seconds, the Android system displays an "Application Not Responding" (ANR) dialog to the user, giving them the option to close the application.

Hence, in order to execute compute-intensive or blocking I/O operations without dropping a UI frame, generating UI glitches, or degrading the application responsiveness, we have to offload the task execution to a background thread, with lower priority, that runs concurrently and asynchronously in an independent line of execution, as shown in the following picture.
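Before looking at the higher-level constructs below, it helps to see what offloading means at its most basic. The following is a minimal sketch, not taken from the book: fetchDataFromNetwork and onResult are hypothetical placeholders for your own blocking work and UI update. It hands the work to a plain background thread and posts the result back to the main thread through a Handler:

```java
// Handler bound to the main (UI) thread's Looper
final Handler mainHandler = new Handler(Looper.getMainLooper());

new Thread(new Runnable() {
    @Override
    public void run() {
        // Blocking work runs off the main thread
        final String result = fetchDataFromNetwork(); // hypothetical blocking call

        // Deliver the result back to the main thread for the UI update
        mainHandler.post(new Runnable() {
            @Override
            public void run() {
                onResult(result); // hypothetical UI callback; safe to touch views here
            }
        });
    }
}).start();
```

Every construct that follows is, in essence, a safer and more structured version of this pattern.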
Although the use of asynchronous and multithreaded techniques always introduces complexity to the application, the Android SDK and some well-known open source libraries provide high-level asynchronous constructs that allow us to perform reliable asynchronous work that relieves the main thread from the hard work. Each asynchronous construct has advantages and disadvantages, and choosing the right construct for your requirements can make your code more reliable, simpler, easier to maintain, and less error prone. Let's enumerate the most common techniques, which are covered in detail in the Asynchronous Android Programming book.

AsyncTask

AsyncTask is a simple construct available on the Android platform since Android Cupcake (API level 3) and is the most widely used asynchronous construct. AsyncTask was designed to run short background operations that, once finished, update the UI. The AsyncTask construct performs the time-consuming operation, defined in the doInBackground function, on a global static thread pool of background threads. Once doInBackground terminates with success, the AsyncTask construct switches the processing back to the main thread (onPostExecute), delivering the operation result for further processing. If this technique is not used properly, it can lead to memory leaks or inconsistent results.

```java
public class DownloadImageTask extends AsyncTask<URL, Integer, Bitmap> {
    protected Bitmap doInBackground(URL... urls) { ... }
    protected void onProgressUpdate(Integer... progress) { ... }
    protected void onPostExecute(Bitmap image) { ... }
}
```

HandlerThread

The HandlerThread is a Thread that incorporates a message queue and an Android Looper that runs continuously, waiting for incoming operations to execute. To submit new work to the thread, we have to instantiate a Handler that is attached to the HandlerThread's Looper.

```java
HandlerThread handlerThread = new HandlerThread("MyHandlerThread");
handlerThread.start();

Looper looper = handlerThread.getLooper();
Handler handler = new Handler(looper);
handler.post(new Runnable() {
    @Override
    public void run() { ... }
});
```

The Handler class allows us to submit a Message or a Runnable subclass object that can aggregate data and a chunk of work to be executed.

Loader

The Loader construct allows us to run asynchronous operations that load content from a content provider or a data source, such as an internal database or an HTTP service. The API can load data asynchronously, detect data changes, cache data, and is aware of the Fragment and Activity lifecycles. The Loader API was introduced to the Android platform in API level 11, but is available for backwards compatibility through the Android Support libraries.

```java
public static class TextLoader extends AsyncTaskLoader<String> {
    @Override
    public String loadInBackground() {
        // Background work
    }

    @Override
    public void deliverResult(String text) { ... }

    @Override
    protected void onStartLoading() { ... }

    @Override
    protected void onStopLoading() { ... }

    @Override
    public void onCanceled(String text) { ... }

    @Override
    protected void onReset() { ... }
}
```

IntentService

The IntentService class is a specialized subclass of Service that implements a background work queue using a single HandlerThread. When work is submitted to an IntentService, it is queued for processing by a HandlerThread and processed in order of submission.

```java
public class BackupService extends IntentService {
    public BackupService() {
        super("BackupService");
    }

    @Override
    protected void onHandleIntent(Intent workIntent) {
        // Background Work
    }
}
```
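Submitting work to this service is just a matter of starting it with an Intent; each call is queued and handled one at a time on the worker thread. Here is a minimal, hypothetical usage sketch (the "filePath" extra is an illustrative assumption, not part of the article's project; context stands for any available Context):

```java
// Each startService() call enqueues one Intent; onHandleIntent() processes
// them sequentially on the IntentService's single worker thread.
Intent workIntent = new Intent(context, BackupService.class);
workIntent.putExtra("filePath", "/sdcard/backups/data.db"); // hypothetical payload
context.startService(workIntent);
```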
JobScheduler

The JobScheduler API allows us to execute jobs in the background when several prerequisites are fulfilled, taking into account the energy and network context of the device. This technique allows us to defer and batch job executions until the device is charging or an unmetered network is available.

```java
JobScheduler scheduler = (JobScheduler) getSystemService(Context.JOB_SCHEDULER_SERVICE);

JobInfo.Builder builder = new JobInfo.Builder(JOB_ID, serviceComponent);
builder.setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED);
builder.setRequiresCharging(true);

scheduler.schedule(builder.build());
```

RxJava

RxJava is an implementation of the Reactive Extensions (ReactiveX) on the JVM, developed by Netflix, used to compose asynchronous event processing that reacts to an observable source of events. The framework extends the Observer pattern by allowing us to create a stream of events that can be intercepted by operators (input/output) that modify the original stream of events and deliver the result, or an error, to a final Observer. The RxJava Schedulers allow us to control the thread on which our Observable begins operating and the thread on which events are delivered to the final Observer or Subscriber.

```java
Subscription subscription = getPostsFromNetwork()
    .map(new Func1<Post, Post>() {
        @Override
        public Post call(Post post) {
            ...
            return result;
        }
    })
    .subscribeOn(Schedulers.io())
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe(new Observer<Post>() {
        @Override
        public void onCompleted() { }

        @Override
        public void onError(Throwable e) { }

        @Override
        public void onNext(Post post) {
            // Process the result
        }
    });
```

Summary

As you have seen, there are several high-level asynchronous constructs available to offload long computations or blocking tasks from the UI thread, and it is up to the developer to choose the right construct for each situation, because there is no ideal choice that can be applied everywhere. Asynchronous multithreaded programming that produces reliable results is a difficult and error-prone task, so using a high-level technique tends to simplify the application source code and the multithreading logic required to scale the application over the available CPU cores. Remember that keeping your application responsive and smooth is essential to delight your users and increase your chances of creating a noteworthy mobile application.

The techniques and asynchronous constructs summarized in the previous paragraphs are covered in detail in the Asynchronous Android Programming book published by Packt Publishing.

Resources for Article:

Further resources on this subject:
- Getting started with Android Development [article]
- Practical How-To Recipes for Android [article]
- Speeding up Gradle builds for Android [article]


Creating an Adaptive Application Layout

Packt
14 Jan 2016
17 min read
In this article by Jim Wilson, the author of the book Creating Dynamic UI with Android Fragments - Second Edition, we will see how to create an adaptive application layout.

(For more resources related to this topic, see here.)

In our application, we'll leave the wide-display aspect of the program alone because static layout management is working fine there. We'll work on the portrait-oriented handset aspect of the application. For these devices, we'll update the application's main activity to dynamically switch between displaying the fragment containing the list of books and the fragment displaying the selected book's description.

Updating the layout to support dynamic fragments

Before we write any code to dynamically manage the fragments within our application, we first need to modify the activity layout resource for portrait-oriented handset devices. This resource is contained in the activity_main.xml layout resource file that is not qualified with (land) or (600dp). The layout resource currently appears as shown here:

```xml
<LinearLayout
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <!-- List of Book Titles -->
    <fragment
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1"
        android:name="com.jwhh.fragments.BookListFragment2"
        android:id="@+id/fragmentTitles"
        tools:layout="@layout/fragment_book_list"/>
</LinearLayout>
```

We need to make two changes to the layout resource. The first is to add an id attribute to the LinearLayout view group so that we can easily locate it in code. The other change is to completely remove the fragment element. The updated layout resource now contains only the LinearLayout view group, which includes an id attribute value of @+id/layoutRoot. The layout resource now appears as shown here:

```xml
<LinearLayout
    android:id="@+id/layoutRoot"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >
</LinearLayout>
```

We still want our application to initially display the book list fragment, so removing the fragment element may seem like a strange change, but doing so is essential as we move our application to dynamically manage the fragments. We will eventually need to remove the book list fragment to replace it with the book description fragment. If we were to leave the book list fragment in the layout resource, our attempt to dynamically remove it later would silently fail. Only dynamically added fragments can be dynamically removed; attempting to dynamically remove a fragment that was statically added with the fragment element in a layout resource will silently fail.

Adapting to device differences

When our application is running on a portrait-oriented handset device, the activity needs to programmatically load the fragment containing the book list. This is the same Fragment class, BookListFragment2, that we were previously loading with the fragment element in the activity_main.xml layout resource file. Before we load the book list fragment, we first need to determine whether we're running on a device that requires dynamic fragment management. Remember that, for wide-display devices, we're going to leave the static fragment management in place. There'll be a couple of places in our code where we'll need to take different logic paths depending on which layout we're using.
So we'll need to add a boolean class-level field to the MainActivity class in which we can store whether we're using dynamic or static fragment management:

```java
boolean mIsDynamic;
```

We can interrogate the device for its specific characteristics, such as screen size and orientation. However, remember that much of our previous work was to configure our application to take advantage of the Android resource system to automatically load the appropriate layout resources based on the device characteristics. Rather than repeating those characteristics checks in code, we can instead simply include code to determine which layout resource was loaded.

The layout resource for wide-display devices we created earlier, activity_main_wide.xml, statically loads both the book list fragment and the book description fragment. We can include code in our activity's onCreate method to determine whether the loaded layout resource includes one of those fragments, as shown here:

```java
public class MainActivity extends Activity
        implements BookListFragment.OnSelectedBookChangeListener {

    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main_dynamic);

        // Get the book description fragment
        FragmentManager fm = getFragmentManager();
        Fragment bookDescFragment = fm.findFragmentById(R.id.fragmentDescription);

        // If not found, then we're doing dynamic mgmt
        mIsDynamic = bookDescFragment == null || !bookDescFragment.isInLayout();
    }

    // Other members elided for clarity
}
```

When the call to the setContentView method returns, we know that the appropriate layout resource for the current device has been loaded. We then use the FragmentManager instance to search for the fragment with an id value of R.id.fragmentDescription, which is included in the layout resource for wide-display devices but not in the layout resource for portrait-oriented handsets. A return value of null indicates that the fragment was not loaded and we are, therefore, on a device that requires us to dynamically manage the fragments.

In addition to the test for null, we also include a call to the isInLayout method to protect against one special-case scenario. In the scenario where the device is in a landscape layout and is then rotated to portrait, a cached instance of the fragment identified by R.id.fragmentDescription may still exist even though, in the current orientation, the activity is not using the fragment. By calling the isInLayout method, we're able to determine whether the returned reference is part of the currently loaded layout. With this, our test to set the mIsDynamic member variable effectively says that we'll set mIsDynamic to true when the R.id.fragmentDescription fragment is not found (equals null), or when it's found but is not a part of the currently loaded layout (!bookDescFragment.isInLayout).
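A common alternative, not used in this article but worth knowing, is to let the resource system answer the question directly with a boolean resource that varies by configuration. A hedged sketch, assuming hypothetical resource files rather than anything in this project:

```java
// Hypothetical alternative: declare <bool name="dual_pane">false</bool> in
// res/values/bools.xml and <bool name="dual_pane">true</bool> in
// res/values-sw600dp/bools.xml. The resource system then resolves the right
// value for the current device, and onCreate() can simply ask:
mIsDynamic = !getResources().getBoolean(R.bool.dual_pane);
```

Both approaches avoid duplicating screen-size checks in code; the fragment-probing version used here has the advantage of requiring no extra resource files.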
Dynamically loading a fragment at startup

Now that we're able to determine whether dynamically loading the book list fragment is necessary, we add the code to do so to our onCreate method, as shown here:

```java
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main_dynamic);

    // Get the book description fragment
    FragmentManager fm = getFragmentManager();
    Fragment bookDescFragment = fm.findFragmentById(R.id.fragmentDescription);

    // If not found, then we're doing dynamic mgmt
    mIsDynamic = bookDescFragment == null || !bookDescFragment.isInLayout();

    // Load the list fragment if necessary
    if (mIsDynamic) {
        // Begin transaction
        FragmentTransaction ft = fm.beginTransaction();

        // Create the Fragment and add it
        BookListFragment2 listFragment = new BookListFragment2();
        ft.add(R.id.layoutRoot, listFragment, "bookList");

        // Commit the changes
        ft.commit();
    }
}
```

Following the check to determine whether we're on a device that requires dynamic fragment management, we use a FragmentTransaction to add an instance of the BookListFragment2 class to the activity as a child of the LinearLayout view group identified by the id value R.id.layoutRoot. This code capitalizes on the changes we made to the activity_main.xml resource file by removing the fragment element and including an id value on the LinearLayout view group. Now that we're dynamically loading the book list, we're ready to get rid of that other activity.

Transitioning between fragments

As you'll recall, whenever the user selects a book title within the BookListFragment2 class, the fragment notifies the main activity by calling the MainActivity.onSelectedBookChanged method, passing the index of the selected book. The onSelectedBookChanged method currently appears as follows:

```java
public void onSelectedBookChanged(int bookIndex) {
    FragmentManager fm = getFragmentManager();

    // Get the book description fragment
    BookDescFragment bookDescFragment = (BookDescFragment)
            fm.findFragmentById(R.id.fragmentDescription);

    // Check validity of fragment reference
    if (bookDescFragment == null || !bookDescFragment.isVisible()) {
        // Use activity to display description
        Intent intent = new Intent(this, BookDescActivity.class);
        intent.putExtra("bookIndex", bookIndex);
        startActivity(intent);
    } else {
        // Use contained fragment to display description
        bookDescFragment.setBook(bookIndex);
    }
}
```

In the current implementation, we use a technique similar to what we did in the onCreate method to determine which layout is loaded; we try to find the book description fragment within the currently loaded layout. If we find it, we know the current layout includes the fragment, so we go ahead and set the book description directly on the fragment. If we don't find it, we call the startActivity method to display the activity that contains the book description fragment.

Starting a separate activity to handle the interaction with the BookListFragment2 class unnecessarily adds complexity to our program. Doing so requires that we pass data from one activity to another, which can sometimes be complex, especially if there are a large number of values or some of those values are object types that require additional coding to be passed in an Intent instance. More importantly, using a separate activity to manage the interaction with the BookListFragment2 class results in redundant work, because we already have all the code necessary to interact with the BookListFragment2 class in the MainActivity class.
We'd prefer to handle the interaction with the BookListFragment2 class consistently in all cases.

Eliminating redundant handling

To eliminate this redundant handling, we start by stripping any code in the current implementation that deals with starting an activity. We can also avoid repeating the check for the book description fragment, because we performed that check earlier in the onCreate method. Instead, we can now check the mIsDynamic class-level field to determine the proper handling. With that in mind, we can initially modify the onSelectedBookChanged method to look like the following code:

```java
public void onSelectedBookChanged(int bookIndex) {
    BookDescFragment bookDescFragment;
    FragmentManager fm = getFragmentManager();

    // Check validity of fragment reference
    if (mIsDynamic) {
        // Handle dynamic switch to description fragment
    } else {
        // Use the already visible description fragment
        bookDescFragment = (BookDescFragment)
                fm.findFragmentById(R.id.fragmentDescription);
        bookDescFragment.setBook(bookIndex);
    }
}
```

We now check the mIsDynamic member field to determine the appropriate code path. We still have some work to do if it turns out to be true, but in the case of it being false, we can simply get a reference to the book description fragment that we know is contained within the current layout and set the book index on it, much like we were doing before.

Creating the fragment on the fly

In the case of the mIsDynamic field being true, we can display the book description fragment by simply replacing the book list fragment we added in the onCreate method with the book description fragment, using the code shown here:

```java
FragmentTransaction ft = fm.beginTransaction();
bookDescFragment = new BookDescFragment();
ft.replace(R.id.layoutRoot, bookDescFragment, "bookDescription");
ft.addToBackStack(null);
ft.setCustomAnimations(android.R.animator.fade_in, android.R.animator.fade_out);
ft.commit();
```

Within the FragmentTransaction, we create an instance of the BookDescFragment class and call the replace method, passing the ID of the same view group that contains the BookListFragment2 instance we added in the onCreate method. We include a call to the addToBackStack method so that the back button functions correctly, allowing the user to tap the back button to return to the book list. The code also includes a call to the FragmentTransaction class' setCustomAnimations method, which creates a fade effect when the user switches from one fragment to another.

Managing asynchronous creation

We have one final challenge: to set the book index on the dynamically added book description fragment. Our initial thought might be to simply call the BookDescFragment class' setBook method after we create the BookDescFragment instance, but let's first take a look at the current implementation of the setBook method. The method currently appears as follows:

```java
public void setBook(int bookIndex) {
    // Lookup the book description
    String bookDescription = mBookDescriptions[bookIndex];

    // Display it
    mBookDescriptionTextView.setText(bookDescription);
}
```

The last line of the method attempts to set the value of mBookDescriptionTextView within the fragment, which is a problem. Remember that the work we do within a FragmentTransaction is not immediately applied to the user interface. Instead, as we discussed earlier in this chapter, in the section on the deferred execution of transaction changes, the work within the transaction is performed sometime after the completion of the call to the commit method. Therefore, the BookDescFragment instance's onCreate and onCreateView methods have not yet been called. As a result, any views associated with the BookDescFragment instance have not yet been created, and an attempt to call the setText method on the mBookDescriptionTextView instance will result in a null reference exception.
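As an aside, if we genuinely needed the fragment's view hierarchy to exist immediately after committing, the FragmentManager can be forced to run its pending transactions synchronously. A minimal sketch of that option (not the approach taken in this article):

```java
ft.commit();

// Force any committed transactions to execute now instead of at the next
// opportunity on the main thread; after this returns, the fragment's
// onCreate()/onCreateView() have run and its views exist.
fm.executePendingTransactions();
bookDescFragment.setBook(bookIndex); // would no longer hit a null TextView
```

This works, but the argument-based approach described next is cleaner because it avoids forcing work to happen earlier than the framework intends.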
One possible solution is to modify the setBook method to be aware of the current state of the fragment. In this scenario, the setBook method checks whether the BookDescFragment instance has been fully created. If not, it stores the book index value in a class-level field and later automatically sets the mBookDescriptionTextView value as part of the creation process. Although there may be some scenarios that warrant such a complicated solution, fragments give us an easier one.

The Fragment base class includes a method called setArguments. With the setArguments method, we can attach data values, otherwise known as arguments, to the fragment, which can then be accessed later in the fragment lifecycle using the getArguments method. Much like we do when associating extras with an Intent instance, a good practice is to define constants on the target class to name the argument values. It is also a good programming practice to provide a constant for an argument default value in the case of non-nullable types such as integers, as shown here:

```java
public class BookDescFragment extends Fragment {
    // Book index argument name
    public static final String BOOK_INDEX = "book index";

    // Book index default value
    private static final int BOOK_INDEX_NOT_SET = -1;

    // Other members elided for clarity
}
```

If you used Android Studio to generate the BookDescFragment class, you'll find that the ARG_PARAM1 and ARG_PARAM2 constants are included in the class. Android Studio includes these constants to provide examples of how to pass values to fragments, just as we're discussing now. Since we're adding our own constant declarations, you can delete the ARG_PARAM1 and ARG_PARAM2 constants from the BookDescFragment class, along with the lines in the generated BookDescFragment.onCreate and BookDescFragment.newInstance methods that reference them.

We'll use the BOOK_INDEX constant to get and set the book index value and the BOOK_INDEX_NOT_SET constant to indicate whether the book index argument has been set. To simplify the process of creating the BookDescFragment instance and passing it the book index value, we'll add a static factory method named newInstance to the BookDescFragment class, which appears as follows:

```java
public static BookDescFragment newInstance(int bookIndex) {
    BookDescFragment fragment = new BookDescFragment();

    Bundle args = new Bundle();
    args.putInt(BOOK_INDEX, bookIndex);
    fragment.setArguments(args);

    return fragment;
}
```

The newInstance method starts by creating an instance of the BookDescFragment class. It then creates an instance of the Bundle class, stores the book index in the Bundle instance, and uses the setArguments method to attach it to the BookDescFragment instance. Finally, the newInstance method returns the BookDescFragment instance. We'll use this method shortly within the MainActivity class to create our BookDescFragment instance.

If you used Android Studio to generate the BookDescFragment class, you'll find that most of the newInstance method is already in place. The only change you'll have to make is to replace the two lines that referenced the ARG_PARAM1 and ARG_PARAM2 constants you deleted with the call to the args.putInt method shown in the preceding code.
We can now update the BookDescFragment class' onCreateView method to look for arguments that might be attached to the fragment. Before we make any changes to the onCreateView method, let's look at the current implementation, which appears as follows:

```java
public View onCreateView(LayoutInflater inflater, ViewGroup container,
        Bundle savedInstanceState) {
    View viewHierarchy = inflater.inflate(R.layout.fragment_book_desc, container, false);

    // Load array of book descriptions
    mBookDescriptions = getResources().getStringArray(R.array.bookDescriptions);

    // Get reference to book description text view
    mBookDescriptionTextView = (TextView) viewHierarchy.findViewById(R.id.bookDescription);

    return viewHierarchy;
}
```

As currently implemented, the onCreateView method simply inflates the layout resource, loads the array containing the book descriptions, and caches a reference to the TextView instance where the book description is loaded. We can now update the method to look for and use a book index that might be attached as an argument. The updated method appears as follows:

```java
public View onCreateView(LayoutInflater inflater, ViewGroup container,
        Bundle savedInstanceState) {
    View viewHierarchy = inflater.inflate(R.layout.fragment_book_desc, container, false);

    // Load array of book descriptions
    mBookDescriptions = getResources().getStringArray(R.array.bookDescriptions);

    // Get reference to book description text view
    mBookDescriptionTextView = (TextView) viewHierarchy.findViewById(R.id.bookDescription);

    // Retrieve the book index if attached
    Bundle args = getArguments();
    int bookIndex = args != null
            ? args.getInt(BOOK_INDEX, BOOK_INDEX_NOT_SET)
            : BOOK_INDEX_NOT_SET;

    // If we find the book index, use it
    if (bookIndex != BOOK_INDEX_NOT_SET)
        setBook(bookIndex);

    return viewHierarchy;
}
```

Just before we return the fragment's view hierarchy, we call the getArguments method to retrieve any arguments that might be attached. The arguments are returned as an instance of the Bundle class. If the Bundle instance is non-null, we call the Bundle class' getInt method to retrieve the book index and assign it to the bookIndex local variable. The second parameter of the getInt method, BOOK_INDEX_NOT_SET, is returned if the fragment happens to have arguments attached that do not include the book index. Although this should not normally be the case, being prepared for any such unexpected circumstance is a good idea. Finally, we check the value of the bookIndex variable; if it contains a book index, we call the fragment's setBook method to display it.

Putting it all together

With the BookDescFragment class now including support for attaching the book index as an argument, we're ready to fully implement the main activity's onSelectedBookChanged method to include switching to the BookDescFragment instance and attaching the book index as an argument.
The method now appears as follows:

```java
public void onSelectedBookChanged(int bookIndex) {
    BookDescFragment bookDescFragment;
    FragmentManager fm = getFragmentManager();

    // Check validity of fragment reference
    if (mIsDynamic) {
        // Handle dynamic switch to description fragment
        FragmentTransaction ft = fm.beginTransaction();

        // Create the fragment and pass the book index
        bookDescFragment = BookDescFragment.newInstance(bookIndex);

        // Replace the book list with the description
        ft.replace(R.id.layoutRoot, bookDescFragment, "bookDescription");
        ft.addToBackStack(null);
        ft.setCustomAnimations(android.R.animator.fade_in, android.R.animator.fade_out);
        ft.commit();
    } else {
        // Use the already visible description fragment
        bookDescFragment = (BookDescFragment)
                fm.findFragmentById(R.id.fragmentDescription);
        bookDescFragment.setBook(bookIndex);
    }
}
```

Just as before, we start with the check to see whether we're doing dynamic fragment management. Once we determine that we are, we start the FragmentTransaction and create the BookDescFragment instance with the newInstance factory method, which stores the book index in a Bundle instance and attaches it to the BookDescFragment instance with the setArguments method. Finally, we put the BookDescFragment instance into place as the current fragment, take care of the back stack, enable animation, and complete the transaction.

Everything is now complete. When the user selects a book title from the list, the onSelectedBookChanged method gets called. The onSelectedBookChanged method then creates and displays the BookDescFragment instance with the appropriate book index attached as an argument. When the BookDescFragment instance is ultimately created, its onCreateView method retrieves the book index from the arguments and displays the appropriate description.

Summary

In this article, we saw that, using the FragmentTransaction class, we're able to dynamically switch between individual fragments within an activity, eliminating the need to create a separate activity class for each screen in our application. This helps to prevent the proliferation of unnecessary activity classes, better organize our applications, and avoid the associated increase in complexity.

Resources for Article:

Further resources on this subject:
- UI elements and their implementation [article]
- Creating Dynamic UI with Android Fragments [article]
- Android Fragmentation Management [article]


Integrating with Objective-C

Packt
01 Apr 2016
11 min read
In this article written by Kyle Begeman, author of the book Swift 2 Cookbook, we will cover the following recipes:

- Porting your code from one language to another
- Replacing the user interface classes
- Upgrading the app delegate

Introduction

Swift 2 is out, and we can see that it is going to replace Objective-C in iOS development sooner or later; however, how should you migrate your Objective-C app? Is it necessary to rewrite everything again? Of course you don't have to rewrite a whole application in Swift from scratch; you can migrate it gradually. Imagine an app developed over four years by 10 developers; it would take a long time to rewrite. Actually, you've already seen that some of the code we've used in this book has some kind of "old Objective-C fashion". The reason is that not even Apple could migrate the whole of its Objective-C code into Swift.

(For more resources related to this topic, see here.)

Porting your code from one language to another

In the previous recipe we learned how to add new code to an existing Objective-C project; however, you shouldn't only add new code, but also, as far as possible, migrate your old code to the new Swift language. If you would like to keep your application core in Objective-C, that's OK, but remember that new features are going to be added in Swift and it will be difficult keeping two languages in the same project. In this recipe we are going to port part of the code, which is written in Objective-C, to Swift.

Getting ready

Make a copy of the previous recipe; if you are using any version control, it's a good time to commit your changes.

How to do it…

Open the project and add a new file called Setup.swift; here we are going to add a new class with the same name (Setup):

```swift
class Setup {
    class func generate() -> [Car] {
        var result = [Car]()
        for distance in [1.2, 0.5, 5.0] {
            var car = Car()
            car.distance = Float(distance)
            result.append(car)
        }
        var car = Car()
        car.distance = 4
        var van = Van()
        van.distance = 3.8
        result += [car, van]
        return result
    }
}
```

Now that we have this car array generator, we can call it in the viewDidLoad method, replacing the previous code:

```objc
- (void)viewDidLoad {
    [super viewDidLoad];
    vehicles = [Setup generate];
    [self->tableView reloadData];
}
```

Again, press play and check that the application is still working.

How it works…

The reason we had to create a class instead of creating a function is that you can only export classes, protocols, properties, and subscripts to Objective-C. Bear that in mind when developing with the two languages. If you would like to export a class to Objective-C, you have two choices: the first one is inheriting from NSObject, and the other one is adding the @objc attribute before your class, protocol, property, or subscript.

If you paid attention, our method returns a Swift array, but it was converted to an NSArray. As you might know, they are different kinds of array: firstly, Swift arrays are mutable and NSArray is not, and secondly, their methods are different. Can we use NSArray in Swift? The answer is yes, but I would recommend avoiding it: imagine that once you've finished migrating to Swift, your code still follows the old way; it would mean another migration.
There's more…

Migrating from Objective-C is something that you should do with care; don't try to change the whole application at once. Remember that some Swift objects behave differently from Objective-C; for example, dictionaries in Swift have the key and the value types specified, but in Objective-C they can be of any type.

Replacing the user interface classes

At this moment you know how to migrate the model part of an application; however, in real life we also have to replace the graphical classes. Doing this is not complicated, but it can be a bit full of details.

Getting ready

Continuing with the previous recipe, make a copy of it, or just commit the changes you have, and let's continue with our migration.

How to do it…

First, create a new file called MainViewController.swift and start by importing UIKit:

```swift
import UIKit
```

The next step is creating a class called MainViewController; this class must inherit from UIViewController and implement the protocols UITableViewDataSource and UITableViewDelegate:

```swift
class MainViewController: UIViewController, UITableViewDataSource, UITableViewDelegate {
```

Then, add the attributes we had in the previous view controller, keeping the same names you used before:

```swift
    private var vehicles = [Car]()

    @IBOutlet var tableView: UITableView!
```

Next, we need to implement the methods; let's start with the table view data source methods:

```swift
    func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return vehicles.count
    }

    func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
        var cell: UITableViewCell? = self.tableView.dequeueReusableCellWithIdentifier("vehiclecell")
        if cell == nil {
            cell = UITableViewCell(style: .Subtitle, reuseIdentifier: "vehiclecell")
        }
        var currentCar = self.vehicles[indexPath.row]
        cell!.textLabel?.numberOfLines = 1
        cell!.textLabel?.text = "Distance \(currentCar.distance * 1000) meters"
        var detailText = "Pax: \(currentCar.pax) Fare: \(currentCar.fare)"
        if currentCar is Van {
            detailText += ", Volume: \((currentCar as! Van).capacity)"
        }
        cell!.detailTextLabel?.text = detailText
        cell!.imageView?.image = currentCar.image
        return cell!
    }
```

Pay attention: this conversion is not 100% equivalent; the fare, for example, isn't going to be shown with two digits of precision. There is an explanation later of why we are not going to fix this now.

The next step is adding the event, in this case the action performed when the user selects a car:

```swift
    func tableView(tableView: UITableView, willSelectRowAtIndexPath indexPath: NSIndexPath) -> NSIndexPath? {
        var currentCar = self.vehicles[indexPath.row]
        var time = currentCar.distance / 50.0 * 60.0
        UIAlertView(title: "Car booked",
            message: "The car will arrive in \(time) minutes",
            delegate: nil,
            cancelButtonTitle: "OK").show()
        return indexPath
    }
```

As you can see, we need only one more step to complete our code, in this case the viewDidLoad method. Pay attention to another difference between Objective-C and Swift: in Swift, you have to specify that you are overriding an existing method:

```swift
    override func viewDidLoad() {
        super.viewDidLoad()
        vehicles = Setup.generate()
        self.tableView.reloadData()
    }
} // end of class
```

Our code is complete, but of course our application is still using the old code.
To complete this operation, click on the storyboard. If the document outline isn't being displayed, click on the Editor menu and then on Show Document Outline. Now that you can see the document outline, click on View Controller, which appears with a yellow circle with a square inside. Then, on the right-hand side, click on the identity inspector, go to the custom class, and change the value of the class from ViewController to MainViewController.

After that, press play and check that your application is running; select a car and check that it is working. Be sure that it is working with your new Swift class by paying attention to the fare value, which in this case isn't shown with two digits of precision. Is everything done? I would say no; it's a good time to commit your changes. Lastly, delete the original Objective-C files, because you won't need them anymore.

How it works…

As you can see, it's not so hard to replace an old view controller with a Swift one. The first thing you need to do is create a new view controller class with its protocols. Keep the same names you had in your old code for attributes and methods that are linked as IBActions; it will make the switch very straightforward, otherwise you will have to link them again.

Bear in mind that you need to be sure that your changes are applied and that they are working, and sometimes it is a good idea to have something visibly different; otherwise, your application could still be using the old Objective-C code without you realizing it. Try to modernize your code using the Swift way instead of the old Objective-C style; for example, nowadays it's preferable to use string interpolation rather than stringWithFormat.

We also learned that you don't need to relink any action or outlet if you keep the same name. If you want to change the name of anything, you should first keep its original name, test your app, and after that refactor it following the traditional refactoring steps. Don't delete the original Objective-C files until you are sure that the equivalent Swift file is working for every functionality.

There's more…

This application had only one view controller; however, applications usually have more than one. In this case, the best way to update them is one by one instead of all of them at the same time.

Upgrading the app delegate

As you know, there is an object that controls the events of an application, which is called the application delegate. Usually you shouldn't have much code here, but you might have some. For example, you may deactivate the camera or GPS requests when your application goes to the background and reactivate them when the app becomes active again. It is certainly a good idea to update this file even if you don't have any new code in it, so it won't be a problem in the future.

Getting ready

If you are using a version control system, commit your changes from the last recipe, or if you prefer, just copy your application.

How to do it…

Open the previous application recipe and create a new Swift file called ApplicationDelegate.swift, then create a class with the same name. As our previous application delegate didn't contain any code, we can differentiate the new one by printing to the log console. So add this traditional application delegate to your Swift file:
```swift
class ApplicationDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    func application(application: UIApplication, didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {
        print("didFinishLaunchingWithOptions")
        return true
    }

    func applicationWillResignActive(application: UIApplication) {
        print("applicationWillResignActive")
    }

    func applicationDidEnterBackground(application: UIApplication) {
        print("applicationDidEnterBackground")
    }

    func applicationWillEnterForeground(application: UIApplication) {
        print("applicationWillEnterForeground")
    }

    func applicationDidBecomeActive(application: UIApplication) {
        print("applicationDidBecomeActive")
    }

    func applicationWillTerminate(application: UIApplication) {
        print("applicationWillTerminate")
    }
}
```

Now go to your project navigator and expand the Supporting Files group, then click on the main.m file. In this file we are going to import the magic file, the Swift header file:

```objc
#import "Chapter_8_Vehicles-Swift.h"
```

After that, we have to specify that the application delegate is the new class we have, so replace the AppDelegate class in the UIApplicationMain call with ApplicationDelegate. Your main function should look like this:

```objc
int main(int argc, char * argv[]) {
    @autoreleasepool {
        return UIApplicationMain(argc, argv, nil,
            NSStringFromClass([ApplicationDelegate class]));
    }
}
```

It's time to press play and check whether the application is working. Press the home button, or the combination shift + command + H if you are using the simulator, and open your application again. Notice that you now have some messages in the log console. Now that you are sure that your Swift code is working, remove the original app delegate and its import in main.m. Test your app just in case.

You could consider this part finished, but actually we still have another step to do: removing the main.m file. This is very easy: just click on the ApplicationDelegate.swift file and, before the class declaration, add the attribute @UIApplicationMain; then right-click on main.m and choose to delete it. Test it, and your application is done.

How it works…

The application delegate class has always been specified at the start of an application. In Objective-C, it follows the C entry point, which is a function called main. In iOS, you can specify the class that you want to use as the application delegate. If you program for OS X, the procedure is different: you have to go to your nib file and change its class name to the new one.

Why did we have to change the main function and then eliminate it? The reason is that you should avoid massive changes; if something goes wrong, you won't know at which step you failed, so you will probably have to roll back everything again. If you do your migration step by step, ensuring that it still works at each stage, it will be easier to solve any error you find. Avoid making massive changes to your project; changing step by step makes issues easier to solve.

There's more…

In this recipe, we learned the last steps of migrating an app from Objective-C to Swift; however, we have to remember that programming is not only about applications; you can also have a framework. In the next recipe, we are going to learn how to create your own framework compatible with Swift and Objective-C.

Summary

This article showed you how Swift and Objective-C can live together and gave you a step-by-step guide on how to migrate your Objective-C app to Swift.

Resources for Article:

Further resources on this subject:
- Concurrency and Parallelism with Swift 2 [article]
- Swift for Open Source Developers [article]
- Your First Swift 2 Project [article]


Physics with UIKit Dynamics

Packt
14 May 2014
8 min read
(For more resources related to this topic, see here.)

Motion and physics in UIKit

With the introduction of iOS 7, Apple completely removed the skeuomorphic design that had been used since the introduction of the iPhone and iOS. In its place is a new and refreshing flat design that features muted gradients and minimal interface elements. Apple has strongly encouraged developers to move away from skeuomorphic, real-world-based design in favor of these flat designs. Although we are guided away from a real-world look, Apple also strongly encourages that your user interface have a real-world feel. Some may think this is a contradiction; however, the goal is to give users a deeper connection to the user interface. UI elements that respond to touch, gestures, and changes in orientation are examples of how to apply this new design paradigm. In order to assist in this new design approach, Apple has introduced two very nifty APIs: UIKit Dynamics and Motion Effects.

UIKit Dynamics

To put it simply, iOS 7 has a fully featured physics engine built into UIKit. You can manipulate specific properties to provide a more real-world feel to your interface. This includes gravity, springs, elasticity, bounce, and force, to name a few. Each interface item has its own properties, and the dynamics engine will abide by these properties.

Motion effects

One of the coolest features of iOS 7 is the parallax effect found on the home screen. Tilting the device in any direction will pan the background image to emphasize depth. Using motion effects, we can monitor the data supplied by the device's accelerometer to adjust our interface based on movement and orientation. By combining these two features, you can create great-looking interfaces with a realistic feel that brings them to life. To demonstrate UIKit Dynamics, we will be adding some code to our FoodDetailViewController.m file to create some nice effects and animations.

Adding gravity

Open FoodDetailViewController.m and add the following instance variables to the view controller:

```objc
UIDynamicAnimator* animator;
UIGravityBehavior* gravity;
```

Scroll to viewDidLoad and add the following code to the bottom of the method:

```objc
animator = [[UIDynamicAnimator alloc] initWithReferenceView:self.view];
gravity = [[UIGravityBehavior alloc] initWithItems:@[self.foodImageView]];
[animator addBehavior:gravity];
```

Run the application, open the My Foods view, select a food item from the table view, and watch what happens. The food image should start to accelerate towards the bottom of the screen until it eventually falls off the screen, as shown in the following set of screenshots.

Let's go over the code, specifically the two new classes that were just introduced: UIDynamicAnimator and UIGravityBehavior.

UIDynamicAnimator

This is the core component of UIKit Dynamics. It is safe to say that the dynamic animator is the physics engine itself, wrapped in a convenient and easy-to-use class. The animator will do nothing on its own; instead, it keeps track of the behaviors assigned to it, and each behavior interacts inside this physics engine.

UIGravityBehavior

Behaviors are the core compositions of UIKit Dynamics animation. These behaviors each define an individual response to the physics environment. This particular behavior mimics the effects of gravity by applying force. Each behavior is associated with a view (or views) when created. Because you explicitly define this association, you can control which views will perform the behavior.
Behavior properties

Almost all behaviors have multiple properties that can be adjusted to achieve the desired effect. A good example is the gravity behavior: we can adjust its angle and magnitude. Add the following code before adding the behavior to the animator:

```objc
gravity.magnitude = 0.1f;
```

Run the application and test it to see what happens. The picture view will start to fall; however, this time it will be at a much slower rate. Replace the preceding code line with the following line:

```objc
gravity.magnitude = 10.0f;
```

Run the application, and this time you will notice that the image falls much faster. Feel free to play with these properties and get a feel for each value.

Creating boundaries

When dealing with gravity, UIKit Dynamics does not conform to the boundaries of the screen. Although it is not visible, the food image continues to fall after it has passed the edge of the screen. It will continue to fall unless we set boundaries that will contain the image view. At the top of the file, create another instance variable:

```objc
UICollisionBehavior *collision;
```

Now, in our viewDidLoad method, add the following code below our gravity code:

```objc
collision = [[UICollisionBehavior alloc] initWithItems:@[self.foodImageView]];
collision.translatesReferenceBoundsIntoBoundary = YES;
[animator addBehavior:collision];
```

Here we are creating an instance of a new class (which is a behavior), UICollisionBehavior. Just like our gravity behavior, we associate this behavior with our food image view. Rather than explicitly defining the coordinates for the boundary, we use the convenient translatesReferenceBoundsIntoBoundary property on our collision behavior. By setting this property to YES, the boundary is defined by the bounds of the reference view that we set when allocating our dynamic animator. Because the reference view is self.view, the boundary is now the visible space of our view. Run the application and watch how the image falls but stops once it reaches the bottom of the screen, as shown in the following screenshot.

Collisions

With our image view responding to gravity and our screen bounds, we can start detecting collisions. You may have noticed that when the image view falls, it falls right through our two labels below it. This is because UIKit Dynamics will only respond to UIView elements that have been assigned behaviors. Each behavior can be assigned to multiple objects, and each object can have multiple behaviors. Because our labels have no behaviors associated with them, the UIKit Dynamics physics engine simply ignores them.

Let's make the food image view collide with the date label. To do this, we simply need to add the label to the collision behavior allocation call. Here is what the new code looks like:

```objc
collision = [[UICollisionBehavior alloc] initWithItems:@[self.foodImageView, self.foodDateLabel]];
```

As you can see, all we have done is add self.foodDateLabel to the initWithItems array property. As mentioned before, any single behavior can be associated with multiple items. Run your code and see what happens. When the image falls, it hits the date label but continues to fall, pushing the date label with it. Because we didn't associate the gravity behavior with the label, it does not fall immediately. Although it does not respond to gravity, the label will still be moved, because it is a physics object after all. This approach is not ideal, so let's use another awesome feature of UIKit Dynamics: invisible boundaries.
Creating invisible boundaries

We are going to take a slightly different approach to this problem. Our label is only a point of reference for where we want to add a boundary that will stop our food image view. Because of this, the label does not need to be associated with any UIKit Dynamics behaviors. Remove self.foodDateLabel from the following code:

```objc
collision = [[UICollisionBehavior alloc] initWithItems:@[self.foodImageView, self.foodDateLabel]];
```

Instead, add the following code to the bottom of viewDidLoad, but before we add our collision behavior to the animator:

```objc
// Add a boundary to the top edge
CGPoint topEdge = CGPointMake(self.foodDateLabel.frame.origin.x + self.foodDateLabel.frame.size.width,
                              self.foodDateLabel.frame.origin.y);
[collision addBoundaryWithIdentifier:@"barrier"
                           fromPoint:self.foodDateLabel.frame.origin
                             toPoint:topEdge];
```

Here we add a boundary to the collision behavior and pass some parameters. First we define an identifier, which we will use later, and then we pass the food date label's origin as the fromPoint property. The toPoint property is set to the CGPoint we created using the food date label's frame. Go ahead and run the application, and you will see that the food image now stops at the invisible boundary we defined. The label is still visible to the user, but the dynamic animator ignores it. Instead, the animator sees the barrier we defined and responds accordingly, even though the barrier is invisible to the user. Here is a side-by-side comparison of the before and after.

Dynamic items

When using UIKit Dynamics, it is important to understand what UIKit Dynamics items are. Rather than referencing dynamics as views, they are referenced as items, which adhere to the UIDynamicItem protocol. This protocol defines the center, transform, and bounds of any object that adheres to it. UIView is the most common class that adheres to the UIDynamicItem protocol. Another example of a class that conforms to this protocol is the UICollectionViewLayoutAttributes class.

Summary

In this article, we covered some basics of how UIKit Dynamics manages your application's behaviors, which enables us to create some really unique interface effects.

Resources for Article:

Further resources on this subject:
- Linking OpenCV to an iOS project [article]
- Unity iOS Essentials: Flyby Background [article]
- New iPad Features in iOS 6 [article]


Updating data in the background

Packt
20 May 2014
4 min read
(For more resources related to this topic, see here.)

Getting ready

Create a new Single View Application in Xamarin Studio and name it BackgroundFetchApp. Add a label to the controller.

How to do it...

Perform the following steps:

1. We need access to the label from outside the scope of the BackgroundFetchAppViewController class, so create a public property for it as follows:

```csharp
public UILabel LabelStatus {
    get { return this.lblStatus; }
}
```

2. Open the Info.plist file and, under the Source tab, add the UIBackgroundModes key (Required background modes) with the string value fetch. The following screenshot shows the editor after it has been set.

3. In the FinishedLaunching method of the AppDelegate class, enter the following line:

```csharp
UIApplication.SharedApplication.SetMinimumBackgroundFetchInterval(UIApplication.BackgroundFetchIntervalMinimum);
```

4. Enter the following code, again in the AppDelegate class:

```csharp
private int updateCount;

public override void PerformFetch(UIApplication application, Action<UIBackgroundFetchResult> completionHandler)
{
    try {
        HttpWebRequest request = WebRequest.Create("http://software.tavlikos.com") as HttpWebRequest;
        using (StreamReader sr = new StreamReader(request.GetResponse().GetResponseStream())) {
            Console.WriteLine("Received response: {0}", sr.ReadToEnd());
        }
        this.viewController.LabelStatus.Text =
            string.Format("Update count: {0}\n{1}", ++updateCount, DateTime.Now);
        completionHandler(UIBackgroundFetchResult.NewData);
    } catch {
        this.viewController.LabelStatus.Text =
            string.Format("Update {0} failed at {1}!", ++updateCount, DateTime.Now);
        completionHandler(UIBackgroundFetchResult.Failed);
    }
}
```

5. Compile and run the app on the simulator or on the device. Press the home button (or Command + Shift + H) to move the app to the background and wait for an output. This might take a while, though.

How it works...

The UIBackgroundModes key with the fetch value enables the background fetch functionality for our app. Without setting it, the app will not wake up in the background. After setting the key in Info.plist, we override the PerformFetch method in the AppDelegate class, as follows:

```csharp
public override void PerformFetch(UIApplication application, Action<UIBackgroundFetchResult> completionHandler)
```

This method is called whenever the system wakes up the app. Inside this method, we can connect to a server and retrieve the data we need. An important thing to note here is that we do not have to use iOS-specific APIs to connect to a server. In this example, a simple HttpWebRequest is used to fetch the contents of this blog: http://software.tavlikos.com.

After we have received the data we need, we must call the callback that is passed to the method, as follows:

```csharp
completionHandler(UIBackgroundFetchResult.NewData);
```

We also need to pass the result of the fetch. In this example, we pass UIBackgroundFetchResult.NewData if the update is successful and UIBackgroundFetchResult.Failed if an exception occurs. If we do not call the callback in the specified amount of time, the app will be terminated. Furthermore, it might get fewer opportunities to fetch data in the future.

Lastly, to make sure that everything works correctly, we have to set the interval at which the app will be woken up, as follows:

```csharp
UIApplication.SharedApplication.SetMinimumBackgroundFetchInterval(UIApplication.BackgroundFetchIntervalMinimum);
```

The default interval is UIApplication.BackgroundFetchIntervalNever, so if we do not set an interval, the background fetch will never be triggered.
There's more...

Except for the functionality we added in this project, the background fetch is completely managed by the system. The interval we set is merely an indication, and the only guarantee we have is that a fetch will not be triggered sooner than the interval. In general, the system monitors the usage of all apps and will make sure to trigger the background fetch according to how often each app is used. Apart from the predefined values, we can pass whatever value we want in seconds, as shown in the sketch above.

UI updates

We can update the UI in the PerformFetch method. iOS allows this so that the app's screenshot is updated while the app is in the background. However, note that we need to keep UI updates to the absolute minimum.

Summary

This article covered the things to keep in mind to make use of iOS 7's background fetch feature.

Resources for Article:

Further resources on this subject:

Getting Started on UDK with iOS [Article]
Interface Designing for Games in iOS [Article]
Linking OpenCV to an iOS project [Article]


ALM – Developers and QA

Packt
30 Mar 2016
15 min read
This article by Can Bilgin, the author of Mastering Cross-Platform Development with Xamarin, provides an introduction to Application Lifecycle Management (ALM) and continuous integration methodologies for Xamarin cross-platform applications. As the part of the ALM process that is most relevant for developers, unit test strategies will be discussed and demonstrated, as well as automated UI testing.

This article is divided into the following sections:

Development pipeline
Troubleshooting
Unit testing
UI testing

(For more resources related to this topic, see here.)

Development pipeline

The development pipeline can be described as the virtual production line that steers a project from a mere bundle of business requirements to the consumers. Stakeholders that are part of this pipeline include, but are not limited to, business proxies, developers, the QA team, the release and configuration team, and finally the consumers themselves. Each stakeholder in this production line assumes different responsibilities, and they should all function in harmony. Hence, having an efficient, healthy, and preferably automated pipeline that provides the communication and transfer of deliverables between units is vital for the success of a project.

In the Agile project management framework, the development pipeline is cyclical rather than a linear delivery queue. In the application life cycle, requirements are inserted continuously into a backlog. The backlog leads to a planning and development phase, which is followed by testing and QA. Once the production-ready application is released, consumers can be made part of this cycle using live application telemetry instrumentation.

Figure 1: Application life cycle management

In Xamarin cross-platform application projects, development teams are blessed with various tools and frameworks that can ease the execution of ALM strategies. From sketching and mock-up tools available for early prototyping and design, to the source control and project management tools that make up the backbone of ALM, Xamarin projects can utilize various tools to automate and systematically analyze the project timeline.

The following sections of this article concentrate mainly on the lines of defense that protect the health and stability of a Xamarin cross-platform project in the timeline between the assignment of a task to a developer and the point at which the task or bug is completed/resolved and checked into a source control repository.

Troubleshooting and diagnostics

The SDKs associated with Xamarin target platforms and the development IDEs are equipped with comprehensive analytic tools. Utilizing these tools, developers can identify issues causing app freezes, crashes, slow response times, and other resource-related problems (for example, excessive battery usage).

Xamarin.iOS applications are analyzed using the XCode Instruments toolset. In this toolset, there are a number of profiling templates, each used to analyze a certain perspective of application execution. Instrument templates can be executed on an application running on the iOS simulator or on an actual device.

Figure 2: XCode Instruments

Similarly, Android applications can be analyzed using the device monitor provided by the Android SDK. Using Android Monitor, the memory profile, CPU/GPU utilization, and network usage can be analyzed, and application-provided diagnostic information can be gathered. Android Debug Bridge (ADB) is a command-line tool that allows various manual or automated device-related operations.
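For instance, a few typical ADB invocations look like the following (a minimal sketch; the package name and APK path are placeholders):

adb devices                                   # list connected devices/emulators
adb logcat                                    # stream the device log
adb install -r path/to/MyApp.apk              # (re)install an application package
adb shell dumpsys meminfo com.example.myapp   # memory statistics for one app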
For Windows Phone applications, Visual Studio provides a number of analysis tools for profiling CPU usage, energy consumption, memory usage, and XAML UI responsiveness. XAML diagnostic sessions in particular can provide valuable information on problematic sections of a view implementation and pinpoint possible visual and performance issues:

Figure 3: Visual Studio XAML analyses

Finally, Xamarin Profiler, as a maturing application (currently in preview release), can help analyze memory allocations and execution time. Xamarin Profiler can be used with iOS and Android applications.

Unit testing

The test-driven development (TDD) pattern dictates that the business requirements, and the granular use cases defined by these requirements, should initially be reflected in unit test fixtures. This allows a mobile application to grow/evolve within the defined borders of these assertive unit test models. Whether you are following a TDD strategy or implementing tests to ensure the stability of the development pipeline, unit tests are fundamental components of a development project.

Figure 4: Unit test project templates

Xamarin Studio and Visual Studio both provide a number of test project templates targeting different areas of a cross-platform project. In Xamarin cross-platform projects, unit tests can be categorized into two groups: platform-agnostic and platform-specific testing.

Platform-agnostic unit tests

Platform-agnostic components, such as portable class libraries containing shared logic for Xamarin applications, can be tested using common unit test projects targeting the .NET framework. Visual Studio Test Tools or the NUnit test framework can be used according to the development environment of choice. It is also important to note that shared projects used to create shared logic containers for Xamarin projects cannot be tested with .NET unit test fixtures; for shared projects and the referencing platform-specific projects, platform-specific unit test fixtures should be prepared.

When following an MVVM pattern, view models are the focus of unit test fixtures since, as previously explained, a view model can be perceived as a finite state machine where the bindable properties are used to create a certain state on which the commands are executed, simulating a specific use case to be tested. This approach is the most convenient way to test the UI behavior of a Xamarin application without having to implement and configure automated UI tests.

While implementing unit tests for such projects, a mocking framework is generally used to replace the platform-dependent sections of the business logic. Loosely coupling these dependent components makes it easier for developers to inject mocked interface implementations and increases the testability of these modules (see the sketch below). The most popular mocking frameworks for unit testing are Moq and RhinoMocks. Both Moq and RhinoMocks utilize reflection and, more specifically, the Reflection.Emit namespace, which is used to generate types, methods, events, and other artifacts at runtime. The aforementioned iOS restrictions on code generation make these libraries inapplicable for platform-specific testing, but they can still be included in unit test fixtures targeting the .NET framework. For platform-specific implementations, the True Fakes library provides compile-time code generation and mocking features.
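As a minimal sketch of this idea, the following NUnit test mocks a dependency before exercising a view model command. All type and member names here (IDataService, NewsViewModel, LoadCommand, Headlines) are illustrative placeholders, not types from a real project:

using Moq;
using NUnit.Framework;

[TestFixture]
public class NewsViewModelTests
{
    [Test]
    public void LoadCommand_PopulatesHeadlines()
    {
        // Arrange: replace the platform-dependent service with a mock.
        var service = new Mock<IDataService>();
        service.Setup(s => s.GetHeadlines()).Returns(new[] { "First", "Second" });

        var viewModel = new NewsViewModel(service.Object);

        // Act: execute the command, simulating the use case under test.
        viewModel.LoadCommand.Execute(null);

        // Assert: the bindable state reflects the expected outcome.
        Assert.AreEqual(2, viewModel.Headlines.Count);
        service.Verify(s => s.GetHeadlines(), Times.Once);
    }
}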
Depending on the implementation specifics (such as the namespaces used, network communication, multithreading, and so on), in some scenarios it is imperative to test the common logic implementation on specific platforms as well. For instance, some multithreading and parallel task implementations give different results on Windows Runtime, Xamarin.Android, and Xamarin.iOS. These variations generally occur because of the underlying platform's mechanisms or slight differences between the .NET and Mono implementation logic. In order to ensure the integrity of these components, common unit test fixtures can be added as linked/referenced files to platform-specific test projects and executed on the test harness.

Platform-specific unit tests

In a Xamarin project, platform-dependent features cannot be unit tested using the conventional unit test runners available in the Visual Studio Test Suite and NUnit frameworks. Platform-dependent tests are executed on empty platform-specific projects that serve as a harness for unit tests for that specific platform. Windows Runtime application projects can be tested using the Visual Studio Test Suite. However, for Android and iOS, the NUnit testing framework should be used, since Visual Studio Test Tools are not available for the Xamarin.Android and Xamarin.iOS platforms.

Figure 5: Test harnesses

The unit test runner for Windows Phone (Silverlight) and Windows Phone 8.1 applications uses a test harness integrated with the Visual Studio test explorer. The unit tests can be executed and debugged from within Visual Studio.

Xamarin.Android and Xamarin.iOS test project templates use the NUnitLite implementation for the respective platforms. In order to run these tests, the test application should be deployed on the simulator (or the testing device) and the application has to be executed manually. It is possible to automate the unit tests on the Android and iOS platforms through instrumentation. In each Xamarin target platform, the initial application lifetime event is used to add the necessary unit tests:

[Activity(Label = "Xamarin.Master.Fibonacci.Android.Tests", MainLauncher = true, Icon = "@drawable/icon")]
public class MainActivity : TestSuiteActivity
{
    protected override void OnCreate(Bundle bundle)
    {
        // Tests can be inside the main assembly
        //AddTest(Assembly.GetExecutingAssembly());
        // or in any referenced assemblies.
        AddTest(typeof(Fibonacci.Android.Tests.TestsSample).Assembly);

        // Once you have called base.OnCreate(), you cannot add more assemblies.
        base.OnCreate(bundle);
    }
}

In the Xamarin.Android implementation, the MainActivity class derives from TestSuiteActivity, which implements the necessary infrastructure to run the unit tests and the UI elements to visualize the test results. On the Xamarin.iOS platform, the test application uses the default UIApplicationDelegate, and generally, the FinishedLaunching event delegate is used to create the ViewController for the unit test run fixture:

public override bool FinishedLaunching(UIApplication application, NSDictionary launchOptions)
{
    // Override point for customization after application launch.
    // If not required for your application, you can safely delete this method.
    var window = new UIWindow(UIScreen.MainScreen.Bounds);
    var touchRunner = new TouchRunner(window);
    touchRunner.Add(System.Reflection.Assembly.GetExecutingAssembly());
    window.RootViewController = new UINavigationController(touchRunner.GetViewController());
    window.MakeKeyAndVisible();
    return true;
}

The main shortcoming of executing unit tests this way is the fact that it is not easy to generate a code coverage report and archive the test results.

Neither of these testing methods provides the ability to test the UI layer. They are simply used to test platform-dependent implementations. In order to test the interactive layer, platform-specific or cross-platform (Xamarin.Forms) coded UI tests need to be implemented.

UI testing

In general terms, the code coverage of unit tests directly correlates with the amount of shared code, which amounts to, at the very least, 70-80 percent of the code base in a mundane Xamarin project. One of the main driving factors behind the architectural patterns was to decrease the amount of logic and code in the view layer so that the testability of the project using conventional unit tests reaches a satisfactory level. Coded UI (or automated UI acceptance) tests are used to test the uppermost layer of the cross-platform solution: the views.

Xamarin.UITests and Xamarin Test Cloud

The main UI testing framework used for Xamarin projects is the Xamarin.UITests testing framework. This testing component can be used on various platform-specific projects, varying from native mobile applications to Xamarin.Forms implementations, except for the Windows Phone platform and applications. Xamarin.UITests is an implementation based on the Calabash framework, which is an automated UI acceptance testing framework targeting mobile applications.

Xamarin.UITests is introduced to Xamarin.iOS or Xamarin.Android applications using the publicly available NuGet packages. The included framework components are used to provide an entry point to the native applications. The entry point is the Xamarin Test Cloud Agent, which is embedded into the native application during compilation. The cloud agent is similar to a local server that allows either the Xamarin Test Cloud or the test runner to communicate with the app infrastructure and simulate user interaction with the application.

Xamarin Test Cloud is a subscription-based service that allows Xamarin applications to be tested on real mobile devices using UI tests implemented via Xamarin.UITests. Xamarin Test Cloud not only provides a powerful testing infrastructure for Xamarin.iOS and Xamarin.Android applications with an abundant number of mobile devices, but can also be integrated into continuous integration workflows.

After installing the appropriate NuGet package, the UI tests can be initialized for a specific application on a specific device. In order to initialize the interaction adapter for the application, the app package and the device should be configured.
On Android, the APK package path and the device serial can be used for the initialization:

IApp app = ConfigureApp.Android
    .ApkFile("<APK Path>/MyApplication.apk")
    .DeviceSerial("<DeviceID>")
    .StartApp();

For an iOS application, the procedure is similar:

IApp app = ConfigureApp.iOS
    .AppBundle("<App Bundle Path>/MyApplication.app")
    .DeviceIdentifier("<DeviceID of Simulator>")
    .StartApp();

Once the app handle has been created, each test written using NUnit should first create the preconditions for the test, simulate the interaction, and finally verify the outcome. The IApp interface provides a set of methods to select elements on the visual tree and simulate certain interactions, such as text entry and tapping. On top of the main testing functionality, screenshots can be taken to document test steps and possible bugs. Both Visual Studio and Xamarin Studio provide project templates for Xamarin.UITests.

Xamarin Test Recorder

Xamarin Test Recorder is an application that can ease the creation of automated UI tests. It is currently in its preview version and is only available for the Mac OS platform.

Figure 6: Xamarin Test Recorder

Using this application, developers can select the application in need of testing and the device/simulator that is going to run the application. Once the recording session starts, each interaction on the screen is recorded as execution steps on a separate screen, and these steps can be used to generate the preparation or testing steps for the Xamarin.UITests implementation.

Coded UI tests (Windows Phone)

Coded UI tests are used for automated UI testing on the Windows Phone platform. Coded UI tests for Windows Phone and Windows Store applications are no different from their counterparts for other .NET platforms such as Windows Forms, WPF, or ASP.NET. It is also important to note that only XAML applications support coded UI tests.

Coded UI tests are generated on a simulator and written on an Automation ID premise. The Automation ID property is an automatically generated or manually configured identifier for Windows Phone applications (only in XAML) and the UI controls used in the application. Coded UI tests depend on the UIMap created for each control on a specific screen using the Automation IDs. While creating the UIMap, a crosshair tool can be used to select the application and the controls on the simulator screen to define the interactive elements:

Figure 7: Generating coded UI accessors and tests

Once the UIMap has been created and the designer files have been generated, gestures and the generated XAML accessors can be used to create testing preconditions and assertions.

For coded UI tests, multiple scenario-specific input values can be used and tested on a single assertion. Using the DataRow attribute, unit tests can be expanded to test multiple data-driven scenarios. The code snippet below uses multiple input values to test different incorrect inputs (note that each DataRow argument list must match the test method's parameters):

[DataRow(0, "Zero Value")]
[DataRow(-2, "Negative Value")]
[TestMethod]
public void FibonacciCalculateTest_IncorrectOrdinal(int ordinalInput, string description)
{
    // TODO: Check if bad values are handled correctly
}

Automated tests can run on the available simulators and/or a real device. They can also be included in CI build workflows and made part of the automated development pipeline.

Calabash

Calabash is an automated UI acceptance testing framework used to execute Cucumber tests. Cucumber tests provide an assertion strategy similar to coded UI tests, only broader and behavior oriented.
The Cucumber test framework supports tests written in the Gherkin language (a human-readable programming grammar description for behavior definitions). Calabash makes up the necessary infrastructure to execute these tests on various platforms and application runtimes.

A simple declaration of the feature and the scenarios previously tested with coded UI tests using the data-driven model would look similar to the excerpt below. Only two of the possible test scenarios are declared in this feature for demonstration; the feature can be extended:

Feature: Calculate Single Fibonacci number.
  Ordinal entry should be greater than 0.

  Scenario: Ordinal is lower than 0.
    Given I use the native keyboard to enter "-2" into text field Ordinal
    And I touch the "Calculate" button
    Then I see the text "Ordinal cannot be a negative number."

  Scenario: Ordinal is 0.
    Given I use the native keyboard to enter "0" into text field Ordinal
    And I touch the "Calculate" button
    Then I see the text "Cannot calculate the number for the 0th ordinal."

Calabash test execution is possible on Xamarin target platforms because the Ruby API exposed by the Calabash framework has a bidirectional communication line with the Xamarin Test Cloud Agent embedded in Xamarin applications through the NuGet packages. Calabash/Cucumber tests can be executed on Xamarin Test Cloud on real devices, since the communication between the application runtime and the Calabash framework is maintained by the Xamarin Test Cloud Agent, the same as with Xamarin.UITests.

Summary

Xamarin projects can benefit from a properly established development pipeline and the use of ALM principles. This type of approach makes it easier for teams to share responsibilities and work out business requirements in an iterative manner.

In the ALM timeline, the development phase is the main domain in which most of the concrete implementation takes place. In order for the development team to provide quality code that can survive the ALM cycle, it is highly advisable to analyze and test native applications using the available tooling in the Xamarin development IDEs.

While the common code base for a target platform in a Xamarin project can be treated and tested as a .NET implementation using conventional unit tests, platform-specific implementations require more particular handling. Platform-specific parts of the application need to be tested on empty shell applications, called test harnesses, on the respective platform simulators or devices. To test views, available frameworks such as coded UI tests (for Windows Phone) and Xamarin.UITests (for Xamarin.Android and Xamarin.iOS) can be utilized to increase test code coverage and create a stable foundation for the delivery pipeline. Most of the tests and analysis tools discussed in this article can be integrated into automated continuous integration processes.

Resources for Article:

Further resources on this subject:

A cross-platform solution with Xamarin.Forms and MVVM architecture [article]
Working with Xamarin.Android [article]
Application Development Workflow [article]

Speeding up Gradle builds for Android

Packt
16 Jul 2015
7 min read
In this article by Kevin Pelgrims, the author of the book Gradle for Android, we will cover a few tips and tricks that will help speed up Gradle builds. A lot of Android developers that start using Gradle complain about the prolonged compilation time. Builds can take longer than they do with Ant, because Gradle has three phases in the build lifecycle that it goes through every time you execute a task. This makes the whole process very configurable, but also quite slow. Luckily, there are several ways to speed up Gradle builds.

Gradle properties

One way to tweak the speed of a Gradle build is to change some of the default settings. You can enable parallel builds by setting a property in a gradle.properties file that is placed in the root of a project. All you need to do is add the following line:

org.gradle.parallel=true

Another easy win is to enable the Gradle daemon, which starts a background process when you run a build the first time. Any subsequent builds will then reuse that background process, thus cutting out the startup cost. The process is kept alive as long as you use Gradle, and is terminated after three hours of idle time. Using the daemon is particularly useful when you use Gradle several times in a short time span. You can enable the daemon in the gradle.properties file like this:

org.gradle.daemon=true

In Android Studio, the Gradle daemon is enabled by default. This means that after the first build from inside the IDE, the next builds are a bit faster. If you build from the command-line interface, however, the Gradle daemon is disabled, unless you enable it in the properties.

To speed up the compilation itself, you can tweak parameters on the Java Virtual Machine (JVM). There is a Gradle property called org.gradle.jvmargs that enables you to set different values for the memory allocation pool of the JVM. The two parameters that have a direct influence on your build speed are Xms and Xmx. The Xms parameter is used to set the initial amount of memory to be used, while the Xmx parameter is used to set a maximum. You can manually set these values in the gradle.properties file like this:

org.gradle.jvmargs=-Xms256m -Xmx1024m

You need to set the desired amount and a unit, which can be k for kilobytes, m for megabytes, and g for gigabytes. By default, the maximum memory allocation (Xmx) is set to 256 MB, and the starting memory allocation (Xms) is not set at all. The optimal settings depend on the capabilities of your computer.

The last property you can configure to influence build speed is org.gradle.configureondemand. This property is particularly useful if you have complex projects with several modules, as it tries to limit the time spent in the configuration phase by skipping modules that are not required for the task that is being executed. If you set this property to true, Gradle will try to figure out which modules have configuration changes and which ones do not, before it runs the configuration phase. This feature will not be very useful if you only have an Android app and a library in your project. If you have a lot of loosely coupled modules, though, this feature can save you a lot of build time.

System-wide Gradle properties

If you want to apply these properties system-wide to all your Gradle-based projects, you can create a gradle.properties file in the .gradle folder in your home directory. On Microsoft Windows, the full path to this directory is %UserProfile%\.gradle; on Linux and Mac OS X, it is ~/.gradle.
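Putting these settings together, a home-level gradle.properties might look like the following minimal sketch (the memory values are illustrative and should be tuned to your machine):

# ~/.gradle/gradle.properties - applied to all Gradle builds for this user
org.gradle.parallel=true
org.gradle.daemon=true
org.gradle.configureondemand=true
org.gradle.jvmargs=-Xms256m -Xmx1024m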
It is a good practice to set these properties in your home directory, rather than at the project level. The reason for this is that you usually want to keep memory consumption down on build servers, where build time is of less importance.

Android Studio

The Gradle properties you can change to speed up the compilation process are also configurable in the Android Studio settings. To find the compiler settings, open the Settings dialog, and then navigate to Build, Execution, Deployment | Compiler. On that screen, you can find settings for parallel builds, JVM options, configure on demand, and so on. These settings only show up for Gradle-based Android modules. Have a look at the following screenshot:

Configuring these settings from Android Studio is easier than configuring them manually in the build configuration file, and the Settings dialog makes it easy to find the properties that influence the build process.

Profiling

If you want to find out which parts of the build are slowing the process down, you can profile the entire build process. You can do this by adding the --profile flag whenever you execute a Gradle task. When you provide this flag, Gradle creates a profiling report, which can tell you which parts of the build process are the most time-consuming. Once you know where the bottlenecks are, you can make the necessary changes. The report is saved as an HTML file in your module in build/reports/profile. This is the report generated after executing the build task on a multimodule project:

The profiling report shows an overview of the time spent in each phase while executing the task. Below that summary is an overview of how much time Gradle spent on the configuration phase of each module. There are two more sections in the report that are not shown in the screenshot. The Dependency Resolution section shows how long it took to resolve dependencies, per module. Lastly, the Task Execution section contains an extremely detailed task execution overview. This overview has the timing for every single task, ordered by execution time from high to low.

Jack and Jill

If you are willing to use experimental tools, you can enable Jack and Jill to speed up builds. Jack (Java Android Compiler Kit) is a new Android build toolchain that compiles Java source code directly to the Android Dalvik executable (dex) format. It has its own .jack library format and takes care of packaging and shrinking as well. Jill (Jack Intermediate Library Linker) is a tool that can convert .aar and .jar files to .jack libraries. These tools are still quite experimental, but they were made to improve build times and to simplify the Android build process. It is not recommended to start using Jack and Jill for production versions of your projects, but they are made available so that you can try them out.

To be able to use Jack and Jill, you need to use build tools version 21.1.1 or higher, and the Android plugin for Gradle version 1.0.0 or higher. Enabling Jack and Jill is as easy as setting one property in the defaultConfig block:

android {
    buildToolsVersion '22.0.1'
    defaultConfig {
        useJack = true
    }
}

You can also enable Jack and Jill on a certain build type or product flavor.
This way, you can continue using the regular build toolchain, and have an experimental build on the side:

android {
    productFlavors {
        regular {
            useJack = false
        }
        experimental {
            useJack = true
        }
    }
}

As soon as you set useJack to true, minification and obfuscation will no longer go through ProGuard, but you can still use the ProGuard rules syntax to specify certain rules and exceptions. Use the same proguardFiles method that we mentioned before, when talking about ProGuard.

Summary

This article showed us different ways to speed up builds: we first saw how to tweak the settings and configure Gradle and the JVM, we then saw how to detect the parts that are slowing down the process, and finally we learned about the Jack and Jill toolchain.

Resources for Article:

Further resources on this subject:

Android Tech Page
Android Virtual Device Manager
Apache Maven and m2eclipse
Saying Hello to Unity and Android


React Native Tools and Resources

Packt
03 Jan 2017
14 min read
In this article written by Eric Masiello and Jacob Friedmann, authors of the book Mastering React Native, we will cover:

Tools that improve upon the React Native development experience
Ways to build React Native apps for platforms other than iOS and Android
Great online resources for React Native development

(For more resources related to this topic, see here.)

Evaluating React Native Editors, Plugins, and IDEs

I'm hard pressed to think of another topic that developers are more passionate about than their preferred code editor. Of the many options, two popular editors today are GitHub's Atom and Microsoft's Visual Studio Code (not to be confused with Visual Studio 2015). Both are cross-platform editors for Windows, macOS, and Linux that are easily extended with additional features. In this section, I'll detail my personal experience with these tools and where I have found they complement the React Native development experience.

Atom and Nuclide

Facebook has created a package for Atom known as Nuclide that provides a first-class development environment for React Native. It features a built-in debugger similar to Chrome's DevTools, a React Native Inspector (think the Elements tab in Chrome DevTools), and support for the static type checker Flow. Download Atom from https://atom.io/ and Nuclide from https://nuclide.io/.

To install the Nuclide package, click on the Atom menu and then on Preferences..., and then select Packages. Search for Nuclide and click on Install. Once installed, you can actually start and stop the React Native Packager directly from Atom (though you need to launch the simulator/editor separately) and set breakpoints in Atom itself rather than using Chrome's DevTools. Take a look at the following screenshot:

If you plan to use Flow, Nuclide will identify errors and display them inline. Take the following example: I've annotated the function timesTen such that it expects a number as a parameter and should return a number. However, you can see that there are some errors in the usage. Refer to the following code snippet:

/* @flow */
function timesTen(x: number): number {
  var result = x * 10;
  return 'I am not a number';
}
timesTen("Hello, world!");

Thankfully, the Flow integration will call out these errors in Atom for you. Refer to the following screenshot:

The Flow integration of Nuclide exposes two other useful features. You'll see annotated auto completion as you type. And, if you hold the Command key and click on a variable or function name, Nuclide will jump straight to the source definition, even if it's defined in a separate file. Refer to the following screenshot:

Visual Studio Code

Visual Studio Code is a first-class editor for JavaScript authors. Out of the box, it's packaged with a built-in debugger that can be used to debug Node applications. Additionally, VS Code comes with an integrated terminal and a git tool that nicely shows visual diffs. Download Visual Studio Code from https://code.visualstudio.com/.

The React Native Tools extension for VS Code adds some useful capabilities to the editor. For starters, you'll be able to execute the React Native: Run-iOS and React Native: Run Android commands directly from VS Code without needing to reach for the terminal, as shown in the following screenshot:

And, while a bit more involved than Atom to configure, you can use VS Code as a React Native debugger.
Take a look at the following screenshot:

The React Native Tools extension also provides IntelliSense for much of the React Native API, as shown in the following screenshot:

When reading through the VS Code documentation, I found it (unsurprisingly) more catered toward Windows users. So, if Windows is your thing, you may feel more at home with VS Code. As a macOS user, I slightly prefer Atom/Nuclide over VS Code. VS Code comes with more useful features out of the box, but that can easily be addressed by installing a few Atom packages. Plus, I found the Flow support with Nuclide really useful. But don't let me dissuade you from VS Code. Both are solid editors with great React Native support. And they're both free, so there's no harm in trying both.

Before totally switching gears, there is one more editor worth mentioning. Deco is an Integrated Development Environment (IDE) built specifically for React Native development. Standing up a new React Native project is super quick, since Deco keeps a local copy of everything you'd get when running react-native init. Deco also makes creating new stateful and stateless components super easy. Download Deco from https://www.decosoftware.com/.

Once you create a new component using Deco, it gives you a nicely prefilled template including a place to add propTypes and defaultProps (something I often forget to do). Refer to the following screenshot:

From there, you can drag and drop components from the sidebar directly into your code. Deco will auto-populate many of the props for you as well as add the necessary import statements. Take a look at the following code snippet:

<Image
  style={{
    width: 300,
    height: 200,
  }}
  resizeMode={"contain"}
  source={{uri:'https://unsplash.it/600/400/?random'}}/>

The other nice feature Deco adds is the ability to easily launch your app from the toolbar in any installed iOS simulator or Android AVD. You don't even need to first manually open the AVD; Deco will do it all for you. Refer to the following screenshot:

Currently, creating a new project with Deco starts you off with an outdated version of React Native (version 0.27.2 as of this writing). If you're not concerned with using the latest version, Deco is a great way to get a React Native app up quickly. However, if you require more advanced tooling, I suggest you look at Atom with Nuclide or Visual Studio Code with the React Native Tools extension.

Taking React Native beyond iOS and Android

The development experience is one of the most highly touted features by React Native proponents. But as we well know by now, React Native is more than just a great development experience. It's also about building cross-platform applications with a common language and, often times, reusable code and components. Out of the box, the Facebook team has provided tremendous support for iOS and Android. And thanks to the community, React Native has expanded to other promising platforms. In this section, I'll take you through a few of these React Native projects. I won't go into great technical depth, but I'll provide a high-level overview and explain how to get each running.

Introducing React Native Web

React Native Web is an interesting one. It treats many of the React Native components you've learned about, such as View, Text, and TextInput, as higher-level abstractions that map to HTML elements, such as div, span, and input, thus allowing you to build a web app that runs in a browser from your React Native code. Now if you're like me, your initial reaction might be—But why? We already have React for the web.
It's called... React! However, where React Native Web shines over React is in its ability to share components between your mobile app and the web, because you're still working with the same basic React Native APIs. Learn more about React Native Web at https://github.com/necolas/react-native-web.

Configuring React Native Web

React Native Web can be installed into your existing React Native project just like any other npm dependency:

npm install --save react react-native-web

Depending on the versions of React Native and React Native Web you've installed, you may encounter conflicting peer dependencies of React. This may require manually adjusting which version of React Native or React Native Web is installed. Sometimes, just deleting the node_modules folder and rerunning npm install does the trick.

From there, you'll need some additional tools to build the web bundle. In this example, we'll use webpack and some related tooling:

npm install webpack babel-loader babel-preset-react babel-preset-es2015 babel-preset-stage-1 webpack-validator webpack-merge --save
npm install webpack-dev-server --save-dev

Next, create a webpack.config.js in the root of the project:

const webpack = require('webpack');
const validator = require('webpack-validator');
const merge = require('webpack-merge');

const target = process.env.npm_lifecycle_event;
let config = {};

const commonConfig = {
  entry: {
    main: './index.web.js'
  },
  output: {
    filename: 'app.js'
  },
  resolve: {
    alias: {
      'react-native': 'react-native-web'
    }
  },
  module: {
    loaders: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        loader: 'babel',
        query: {
          presets: ['react', 'es2015', 'stage-1']
        }
      }
    ]
  }
};

switch (target) {
  case 'web:prod':
    config = merge(commonConfig, {
      devtool: 'source-map',
      plugins: [
        new webpack.DefinePlugin({
          'process.env.NODE_ENV': JSON.stringify('production')
        })
      ]
    });
    break;
  default:
    config = merge(commonConfig, {
      devtool: 'eval-source-map'
    });
    break;
}

module.exports = validator(config);

Add the following two entries to the scripts section of package.json:

"web:dev": "webpack-dev-server --inline --hot",
"web:prod": "webpack -p"

Next, create an index.html file in the root of the project:

<!DOCTYPE html>
<html>
<head>
  <title>RNNYT</title>
  <meta charset="utf-8" />
  <meta content="initial-scale=1,width=device-width" name="viewport" />
</head>
<body>
  <div id="app"></div>
  <script type="text/javascript" src="/app.js"></script>
</body>
</html>

And, finally, add an index.web.js file to the root of the project:

import React, { Component } from 'react';
import { View, Text, StyleSheet, AppRegistry } from 'react-native';

class App extends Component {
  render() {
    return (
      <View style={styles.container}>
        <Text style={styles.text}>Hello World!</Text>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#efefef',
    alignItems: 'center',
    justifyContent: 'center'
  },
  text: {
    fontSize: 18
  }
});

AppRegistry.registerComponent('RNNYT', () => App);
AppRegistry.runApplication('RNNYT', { rootTag: document.getElementById('app') });

To run the development build, we'll run the webpack dev server by executing the following command:

npm run web:dev

web:prod can be substituted to create a production-ready build. While developing, you can add React Native Web specific code much like you can with iOS and Android, by using Platform.OS === 'web' or by creating custom *.web.js components (see the sketch at the end of this section).

React Native Web still feels pretty early days. Not every component and API is supported, and the HTML that's generated looks a bit rough for my tastes.
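A minimal sketch of that platform check might look like the following (the message strings are just placeholders):

import { Platform } from 'react-native';

// Platform.OS reports 'web' when the react-native alias resolves to react-native-web.
const greeting = Platform.OS === 'web'
  ? 'Hello from the browser!'
  : 'Hello from a mobile device!';

export default greeting;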
While developing with React Native Web, I think it helps to keep the right mindset. That is, think of it as "I'm building a React Native mobile app", not a website. Otherwise, you may find yourself reaching for web-specific solutions that aren't appropriate for the technology.

React Native plugin for Universal Windows Platform

Announced at the Facebook F8 conference in April 2016, the React Native plugin for Universal Windows Platform (UWP) lets you author React Native apps for Windows 10 desktop, Windows 10 Mobile, and Xbox One. Learn more about the React Native plugin for UWP at https://github.com/ReactWindows/react-native-windows.

You'll need to be running Windows 10 in order to build UWP apps. You'll also need to follow the React Native documentation for configuring your Windows environment for building React Native apps. If you're not concerned with building Android on Windows, you can skip installing Android Studio.

The plugin itself also has a few additional requirements. You'll need to be running at least npm 3.x and to install Visual Studio 2015 Community (not to be confused with Visual Studio Code). Thankfully, the Community version is free to use. The UWP plugin docs also tell you to install the Windows 10 SDK Build 10586. However, I found it's easier to do that from within Visual Studio once we've created the app, so we can save that part for later.

Configuring the React Native plugin for UWP

I won't walk you through every step of the installation; the UWP plugin docs detail the process well enough. Once you've satisfied the requirements, start by creating a new React Native project as normal:

react-native init RNWindows
cd RNWindows

Next, install and initialize the UWP plugin:

npm install --save-dev rnpm-plugin-windows
react-native windows

Running react-native windows will actually create a windows directory inside your project containing a Visual Studio solution file. If this is your first time installing the plugin, I recommend opening the solution (.sln) file with Visual Studio 2015. Visual Studio will then ask you to download several dependencies, including the latest Windows 10 SDK. Once Visual Studio has installed all the dependencies, you can run the app either from within Visual Studio or by running the following command:

react-native run-windows

Take a look at the following screenshot:

React Native macOS

Much as the name implies, React Native macOS allows you to create macOS desktop applications using React Native. This project works a little differently than React Native Web and the React Native plugin for UWP. As best I can tell, since React Native macOS requires its own custom CLI for creating and packaging applications, you are not able to build a macOS and mobile app from the same project. Learn more about React Native macOS at https://github.com/ptmt/react-native-macos.

Configuring React Native macOS

Much like you did with the React Native CLI, begin by installing the custom CLI globally using the following command:

npm install react-native-macos-cli -g

Then, use it to create a new React Native macOS app by running the following commands:

react-native-macos init RNDesktopApp
cd RNDesktopApp

This will set you up with all the required dependencies along with an entry point file, index.macos.js. There is no CLI command to spin up the app, so you'll need to open the Xcode project and run it manually.
Run the following command:

open macos/RNDesktopApp.xcodeproj

The documentation is pretty limited, but there is a nice UIExplorer app that can be downloaded and run to give you a good feel for what's available. While on some level it's unfortunate that your macOS app cannot live alongside your iOS and Android code, I cannot think of a use case that would call for such a thing. That said, I was delighted with how easy it was to get this project up and running.

Summary

I think it's fair to say that React Native is moving quickly. With a new version released roughly every two weeks, I've lost count of how many versions have passed by in the course of writing this book. I'm willing to bet React Native has probably bumped a version or two from the time you started reading this book until now. So, as much as I'd love to wrap up by saying you now know everything possible about React Native, sadly that isn't the case.

References

Let me leave you with a few valuable resources to continue your journey of learning and building apps with React Native:

React Native Apple TV is a fork of React Native for building apps for Apple's tvOS. For more information, refer to https://github.com/douglowder/react-native-appletv. (Note that preliminary tvOS support has appeared in early versions of React Native 0.36.)
React Native Ubuntu is another fork of React Native for developing React Native apps on Ubuntu for Desktop Ubuntu and Ubuntu Touch. For more information, refer to https://github.com/CanonicalLtd/react-native
JS.Coach is a collection of community favorite components and plugins for all things React, React Native, Webpack, and related tools. For more information, refer to https://js.coach/react-native
Exponent is described as Rails for React Native. It supports additional system functionality and UI components beyond what's provided by React Native. It will also let you build your apps without needing to touch Xcode or Android Studio. For more information, refer to https://getexponent.com/
React Native Elements is a cross-platform UI toolkit for React Native. You can think of it as Bootstrap for React Native. For more information, refer to https://github.com/react-native-community/react-native-elements
The Use React Native site is how I keep up with React Native releases and news in the React Native space. For more information, refer to http://www.reactnative.com/
React Native Radio is a fantastic podcast hosted by Nader Dabit and a panel of hosts that interview other developers contributing to the React Native community. For more information, refer to https://devchat.tv/react-native-radio
React Native Newsletter is an occasional newsletter curated by a team of React Native enthusiasts. For more information, refer to http://reactnative.cc/
And, finally, Dotan J. Nahum maintains an amazing resource titled Awesome React Native that includes articles, tutorials, videos, and well-tested components you can use in your next project. For more information, refer to https://github.com/jondot/awesome-react-native

Resources for Article:

Further resources on this subject:

Getting Started [article]
Getting Started with React [article]
Understanding React Native Fundamentals [article]


Top 5 Must-have Android Applications

Packt
28 Jun 2011
6 min read
1. ES File Explorer

Description: ES File Explorer has everything you would expect from a file explorer – you can copy, paste, rename, and delete files. You can select multiple files at a time, just as you would on your PC or Mac. You can also compress files to zip or gz. One of the best features of ES File Explorer is the ability to connect to network shares – this means you can connect to a shared folder on your LAN and transfer files to and from your Android device. Its user interface is very simple, quick, and easy to use.

Screenshots:

Features:

Multiselect and operate on files (copy, paste, cut/move, create, delete, rename, share/send) on the phone and computers
Application manager – manage apps (install, uninstall, backup, shortcuts, category)
View different file formats, photos, docs, and videos anywhere; support for third-party applications such as Documents To Go to open document files
Text viewers and editors
Bluetooth file transfer tool
Access your home PC via WiFi with SMB
Compress and decompress ZIP files, unpack RAR files, create encrypted (AES 256-bit) ZIP files
Manage the files on an FTP server like the ones on the SD card

Link: This application is available for download at https://market.android.com/details?id=com.estrongs.android.pop

2. GO SMS Pro

Description: GO SMS Pro is the ultimate messaging application for Android devices. There is a nice setting that launches a pop-up for incoming messages; users can then respond or delete directly within the window. The app supports batch actions for deleting, marking all, or backing up. It is highly customizable; everything from the text color to the color of the background, SMS ringtones for specific contacts, and themes for the SMS application can be customized. Another interesting feature that you can take advantage of is the multiple plug-ins that are available as free downloads in the market. The Facebook chat plug-in makes it possible to receive and send Facebook chat messages, and since these messages are sent through the Facebook network, they do not affect your SMS messages at all.

Screenshots:

Features:

GO-MMS service (free); you may send pictures/music to your friends (even if they have no GO SMS) through one SMS with 2G/3G/4G or WiFi
Many cool themes; also supports DIY themes and the Wallpaper Maker plug-in; fully customizable look; supports chat style and list style; changeable fonts
SMS backup and restore, for all messages or by conversation; supports XML format; send the backup file by email
Support for scheduled SMS; group texting
Settings backup and restore
Notification with privacy mode and reminder notification
Security lock, with support for locking by thread; blacklist

Link: This application is available for download at https://market.android.com/details?id=com.jb.gosms

3. Dolphin Browser HD

Description: Dolphin Browser HD is a professional mobile browser presented by Mobotap Inc. It is the most advanced and customizable web browser, and you can browse the Web with great speed and efficiency using it. The main browsing screen is clean and uncluttered. Other than the Home and Refresh buttons that flank the address bar, Dolphin HD doesn't clutter the main interface with other quick-access buttons. In addition to tabbed browsing, bookmarking (which syncs to Google bookmarks), and multitouch zooming, it can also flag sites to read later, as well as tie in to Delicious.
You can search content within a page, subscribe to RSS feeds through Google Reader, and share links with social networks. Another great feature of this app is the capability to download YouTube videos.

Screenshots:

Features:

Manage bookmarks
Multitouch pinch zoom
Unlimited tabs
Colorful theme packs
Gestures as shortcuts for common commands
You can save web pages to read them offline with all images preserved

Link: This application is available for download at https://market.android.com/details?id=mobi.mgeek.TunnyBrowser

4. Winamp

Description: The one big advantage that Winamp has over other playback apps is that it can sync tracks wirelessly to the device over your home network, so you don't have to fuss with a USB cable, making it easier to manage your music. You can set Winamp to sync automatically every time you connect your Android phone to Winamp, which makes it incredibly easy to send new playlists, purchases, and downloads to your portable player, sans USB. The interface is probably the most notable upgrade over the stock player. The playback controls remain on-screen pretty much wherever you are in the app – a small touch, but one that vastly improves the functionality of Winamp. Being able to control playback from any point is more useful than you might expect.

Screenshots:

Features:

iTunes library and playlist import
Wireless and wired sync with the desktop Winamp Media Player
Over 45k+ SHOUTcast Internet radio stations
Integrated Android Search and "Listen to" voice actions
Play queue management
Playlists and playlist shortcuts
Extras menu – Now Playing data interacts with other installed apps

Link: This application is available for download at https://market.android.com/details?id=com.nullsoft.winamp

5. Advanced Task Killer

Description: One click to kill applications running in the background. Advanced Task Killer is pretty simple and easy to use. It allows you to see what applications are currently running and offers the ability to terminate them quickly and easily, thus freeing up valuable memory for other processes. It also remembers your selections, so the next time you launch it, the previously spared apps remain unchecked and the previously selected ones are checked and ready to be shut down. You can choose to have Advanced Task Killer start at launch, and there's even the option to have it appear in your notifications bar for swift access.

Screenshots:

Features:

Kill multiple apps with one tap
Adjust the security levels
It comes with a notification bar icon
You can kill apps automatically by selecting one of the auto-kill levels: Safe, Aggressive, or Crazy

Link: This application is available for download at https://market.android.com/details?id=com.rechild.advancedtaskkiller

Summary

In this article we discussed the top 5 must-have applications for your Android phone.

Further resources on this subject:

Android Application Testing: Getting Started [Article]
Flash Development for Android: Audio Input via Microphone [Article]
Android Application Testing: TDD and the Temperature Converter [Article]
Android User Interface Development: Animating Widgets and Layouts [Article]


Android Fragmentation Management

Packt
16 Sep 2013
8 min read
(For more resources related to this topic, see here.)

Smartphones, by now, have entered our lives not only as users and consumers but also as producers of our own content. Though this kind of device has been on the market since 1992 (the first was the Simon model by IBM), the big diffusion was driven by Apple's iPhone, released in 2007 (last year, the fifth generation of this device was released). Meanwhile, another big giant, Google, developed an open source product to be used as the internal operating system in mobile devices; unlike the leader of the market, Google doesn't constrain the system to a single, specific hardware device, but allows third-party companies to use it on their phones, which have different characteristics. The big advantage was also the ability to sell devices to consumers who don't want to (or can't) spend as much money as the Apple phone costs. This allowed Android to win the battle of diffusion. But there is another side to the coin: a variety of devices from different producers means more fragmentation of the underlying system and a non-uniform user experience that can be really disappointing. As programmers, we have to take these problems into account, and this article strives to be a useful guideline for solving them.

The Android platform was born in 2003, as the product of a company which at first was known as Android Inc. and which was acquired by Google in 2005. Its direct competitors were, and still are today, the iOS platform by Apple and RIM's Blackberry. Technically speaking, its core is an operating system using a Linux kernel, aimed to be installed on devices with very different hardware (mainly mobile devices, but today it is also used in general embedded systems, for example, the game console OUYA that features a modified version of Android 4.0). Like any software that has been around for a while, many changes happened to the functionality, and many versions came out, each named after a dessert:

Apple Pie (API level 1)
Banana Bread (API level 2)
1.5 – Cupcake (API level 3)
1.6 – Donut (API level 4)
2.0-2.1x – Eclair (API level 5 to 7)
2.2 – Froyo (API level 8)
2.3 – Gingerbread (API level 9 and 10)
3.0-3.2 – Honeycomb (API level 11 to 13)
4.0 – Ice Cream Sandwich (API level 14 and 15)
4.1 – Jelly Bean (API level 16)

Like in many other software projects, the names are in alphabetical order (another project that follows this approach is the Ubuntu distribution). The API level written in parentheses is the main point about fragmentation. Each version of the software introduces or removes features and bugs. In its lifetime, an operating system such as Android aims to add innovations while avoiding breaking applications installed on older versions, but it also aims to make the same features available to those older versions through a process technically called backporting. For more information about the API levels, carefully read the official documentation available at http://developer.android.com/guide/topics/manifest/uses-sdk-element.html#ApiLevels.
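In practice, guarding newer API calls with a runtime version check is the usual first line of defense against fragmentation. The following is a minimal sketch; the two layout methods are placeholders for real implementations:

import android.os.Build;

public class LayoutChooser {

    // Placeholder methods standing in for real implementations.
    void useFragmentBasedLayout() { /* API level 11+ path */ }
    void useLegacyActivityLayout() { /* backported path */ }

    void chooseLayout() {
        // Run Honeycomb-only code paths only where API level 11 is available.
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
            useFragmentBasedLayout();
        } else {
            useLegacyActivityLayout();
        }
    }
}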
If you look at the diffusion of these versions as given by the following pie chart, you can see that more than 50 percent of devices have installed versions that we consider outdated compared with the latest; all that you will read in this article is meant to address these problems, mainly using backporting, and in particular to address the backward compatibility issues with version 3.0 of the Android operating system—the version named Honeycomb.

Version 3.0 was first intended to be installed on tablets, and in general, on devices with large screens. Android is a platform that from the beginning was intended to be used on devices with very different characteristics (think of a system where an application must be usable on VGA screens, with or without physical keyboards, with a camera, and so on); with the release of 3.0, all this was improved with specific APIs designed to extend and ease application development, and also to create new patterns for graphical user interfaces.

The most important innovation was the introduction of the Fragment class. Earlier, the only main class for developing Android applications was Activity, a class that provides the user with a screen in order to accomplish a specific task, but that was too coarse-grained and not reusable enough to be used in applications with large screens. With the introduction of the Fragment class as the basic building block, it is now possible to create responsive mobile designs; that is, to produce content that adapts to the context, optimizing each block's placement by reflowing or combining each Fragment inside the main Activity. These concepts are inspired by so-called responsive web design, where developers build web pages that adapt to the viewport's size; the preeminent article on this subject is Responsive Web Design by Ethan Marcotte.

For the sake of completeness, let me list the other new capabilities introduced with Honeycomb (look into the official documentation for a better understanding of them):

Copy and Paste: A clipboard-based framework
Loaders: Load data asynchronously
Drag and Drop: Permits the moving of data between views
Property animation framework: Supersedes the old Animation package, allowing the animation of almost everything in an application
Hardware acceleration: From API level 11, the graphics pipeline uses dedicated hardware when it is present
Support for encrypted storage

In particular, to address these changes and new features, Google makes available a library called the Support Library that backports Fragment and Loader. Although the main characteristics of these classes are maintained, the article explains in detail how to use the low-level APIs related to threading. Indeed, an Android application is not a unique block of instructions executed one after the other, but is composed of multiple pipelines of execution. The main concepts here are the process and the thread. When an application is started, the operating system creates a process (technically, a Linux process) and each component is associated with this process. Together with the process, a thread of execution named main is also created. This is a very important thread because it is in charge of dispatching events to the appropriate user interface elements and receiving events from them. This thread is also called the UI thread. It's important to note that the system does not create a separate thread for each element, but instead uses the same UI thread for all of them. This can be dangerous for the responsiveness of your application, since if you perform an intensive or time-consuming operation, it will block the entire UI. All Android developers fight against the ANR (Application Not Responding) message that is presented when the UI is not responsive for more than 5 seconds. Following Android's documentation, there are only two rules to follow to avoid the ANR:

Do not block the UI thread
Do not access the Android UI toolkit from outside the UI thread
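A minimal sketch of the usual pattern for respecting both rules follows, assuming we are inside an Activity subclass; doSomeExpensiveWork and updateUi are placeholder methods, not part of the Android API:

// Inside an Activity subclass: never block the UI thread with heavy work.
private void startExpensiveWork() {
    new Thread(new Runnable() {
        @Override
        public void run() {
            final String result = doSomeExpensiveWork(); // runs off the UI thread

            // Views may only be touched on the UI thread, so post back to it.
            runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    updateUi(result); // placeholder: e.g., set a TextView's text
                }
            });
        }
    }).start();
}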
Another important element introduced in Google's platform is the UI pattern named ActionBar—a piece of the interface at the top of an application where the most important menu buttons are displayed so that they are easily accessible. A new contextual menu is also available in the action bar: when, for example, one or more items in a list are selected (as in the Gmail application), the appearance of the bar changes and shows new buttons related to the actions available for the selected items. One thing not addressed by the compatibility package is the ActionBar. Since this is a very important element for integration with the Android ecosystem, some alternatives have been born; the first one comes from Google itself, as a simple code sample named ActionBar Compatibility that you can find in the samples/ directory of the Android SDK. In the article, we will follow a different approach, using a famous open source project, ActionBarSherlock. The code for this library is not available from the SDK, so we need to download it from its website (http://actionbarsherlock.com/). This library allows us to use most of the functionality of the original ActionBar implementation, such as the Up button (which permits hierarchical navigation), ActionView, and the contextual action menu.

Summary

In this article, we learned about managing Android fragmentation.

Resources for Article:

Further resources on this subject:

- Android Native Application API [Article]
- New Connectivity APIs – Android Beam [Article]
- So, what is Spring for Android? [Article]
Unit and Functional Tests

Packt
21 Aug 2014
13 min read
In this article by Belén Cruz Zapata and Antonio Hernández Niñirola, authors of the book Testing and Securing Android Studio Applications, you will learn how to use unit tests that allow developers to quickly verify the state and behavior of an activity on its own.

(For more resources related to this topic, see here.)

Testing activities

There are two possible modes of testing activities:

- Functional testing: In functional testing, the activity being tested is created using the system infrastructure. The test code can communicate with the Android system, send events to the UI, or launch another activity.
- Unit testing: In unit testing, the activity being tested is created with minimal connection to the system infrastructure. The activity is tested in isolation.

In this article, we will explore the Android testing API to learn about the classes and methods that will help you test the activities of your application.

The test case classes

The Android testing API is based on JUnit. Android JUnit extensions are included in the android.test package. The following figure presents the main classes that are involved when testing activities:

Let's learn more about these classes:

- TestCase: This JUnit class belongs to the junit.framework package. The TestCase class represents a general test case. This class is extended by the Android API.
- InstrumentationTestCase: This class and its subclasses belong to the android.test package. It represents a test case that has access to instrumentation.
- ActivityTestCase: This class is used to test activities, but for more useful classes, you should use one of its subclasses instead of the main class.
- ActivityInstrumentationTestCase2: This class provides functional testing of an activity and is parameterized with the activity under test. For example, to evaluate your MainActivity, you have to create a test class named MainActivityTest that extends the ActivityInstrumentationTestCase2 class, shown as follows:

public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity>

- ActivityUnitTestCase: This class provides unit testing of an activity and is parameterized with the activity under test. For example, to evaluate your MainActivity, you can create a test class named MainActivityUnitTest that extends the ActivityUnitTestCase class, shown as follows:

public class MainActivityUnitTest extends ActivityUnitTestCase<MainActivity>

There is a new term that has emerged from the previous classes: Instrumentation.

Instrumentation

The execution of an application is ruled by its life cycle, which is determined by the Android system. For example, the life cycle of an activity is controlled by the invocation of certain methods: onCreate(), onResume(), onDestroy(), and so on. These methods are called by the Android system and your code cannot invoke them, except while testing. The mechanism that allows your test code to invoke callback methods is known as Android instrumentation.

Android instrumentation is a set of methods to control a component independently of its normal life cycle. To invoke the callback methods from your test code, you have to use the classes that are instrumented. For example, to start the activity under test, you can use the getActivity() method, which returns the activity instance. For each test method invocation, the activity will not be created until the first time this method is called. Instrumentation is necessary to test activities, considering that the life cycle of an activity is based on callback methods; these callback methods include the UI events as well.
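As a quick illustration before going deeper, the following is a minimal functional test case built on these classes (a sketch only, assuming a hypothetical MainActivity; the assertion is just a placeholder):

import android.test.ActivityInstrumentationTestCase2;

public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity> {

    private MainActivity mActivity;

    public MainActivityTest() {
        super(MainActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        // Launches MainActivity through the instrumentation the first time
        // it is called, and returns the running instance.
        mActivity = getActivity();
    }

    public void testActivityWasLaunched() {
        assertNotNull("MainActivity was not launched", mActivity);
    }
}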
From an instrumented test case, you can use the getInstrumentation() method to get access to an Instrumentation object. This class provides methods related to the system's interaction with the application. The complete documentation about this class can be found at http://developer.android.com/reference/android/app/Instrumentation.html. Some of the most important methods are as follows:

- The addMonitor method: This method adds a monitor to get information about a particular type of Intent and can be used to look for the creation of an activity. A monitor can be created by indicating an IntentFilter or by giving the name of the activity class to the monitor. Optionally, the monitor can block the activity start and return a canned result instead. You can use the following call definitions to add a monitor:

ActivityMonitor addMonitor(IntentFilter filter, ActivityResult result, boolean block)
ActivityMonitor addMonitor(String cls, ActivityResult result, boolean block)

The following is an example line of code that adds a monitor:

Instrumentation.ActivityMonitor monitor = getInstrumentation().addMonitor(SecondActivity.class.getName(), null, false);

- The activity lifecycle methods: The methods to call the activity lifecycle methods are callActivityOnCreate, callActivityOnDestroy, callActivityOnPause, callActivityOnRestart, callActivityOnResume, callActivityOnStart, finish, and so on. For example, you can pause an activity using the following line of code:

getInstrumentation().callActivityOnPause(mActivity);

- The getTargetContext method: This method returns the context for the application.
- The startActivitySync method: This method starts a new activity and waits for it to begin running. The function returns when the new activity has gone through the full initialization after the call to its onCreate method.
- The waitForIdleSync method: This method waits for the application to be idle synchronously.

The test case methods

JUnit's TestCase class provides the following protected methods that can be overridden by the subclasses:

- setUp(): This method is used to initialize the fixture state of the test case. It is executed before every test method is run. If you override this method, the first line of code should call the superclass. A standard setUp method should follow the given code definition:

@Override
protected void setUp() throws Exception {
    super.setUp();
    // Initialize the fixture state
}

- tearDown(): This method is used to tear down the fixture state of the test case. You should use this method to release resources. It is executed after running every test method. If you override this method, the last line of the code should call the superclass, shown as follows:

@Override
protected void tearDown() throws Exception {
    // Tear down the fixture state
    super.tearDown();
}

The fixture state is usually implemented as a group of member variables, but it can also consist of database or network connections. If you open or initialize connections in the setUp method, they should be closed or released in the tearDown method. When testing activities in Android, you have to initialize the activity under test in the setUp method. This can be done with the getActivity() method.
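Putting the monitor and instrumentation methods together, a test like the following could verify that another activity gets launched (a sketch only; it assumes the MainActivityTest fixture shown earlier, plus the SecondActivity and bValidate button of the example app described at the end of this article):

public void testSecondActivityIsLaunched() {
    // Watch for SecondActivity without blocking its start.
    Instrumentation.ActivityMonitor monitor = getInstrumentation()
            .addMonitor(SecondActivity.class.getName(), null, false);

    final Button button = (Button) mActivity.findViewById(R.id.bValidate);
    mActivity.runOnUiThread(new Runnable() {
        public void run() {
            button.performClick(); // UI calls must run on the UI thread
        }
    });

    // Block until the monitored activity is created (or time out).
    Activity secondActivity = monitor.waitForActivityWithTimeout(5000);
    assertNotNull("SecondActivity was not started", secondActivity);

    secondActivity.finish();
    getInstrumentation().removeMonitor(monitor);
}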
The Assert class and methods

JUnit's TestCase class extends the Assert class, which provides a set of assert methods to check for certain conditions. When an assert method fails, an AssertionFailedError is thrown. The test runner will handle the multiple assertion failures to present the testing results. Optionally, you can specify the error message that will be shown if the assert fails. You can read the Android reference for the Assert class to examine all the available methods at http://developer.android.com/reference/junit/framework/Assert.html. The assertion methods provided by the Assert superclass are as follows:

- assertEquals: This method checks whether the two values provided are equal. It receives the actual and expected values to be compared with each other. This method is overloaded to support values of different types, such as short, String, char, int, byte, boolean, float, double, long, or Object. For example, the following assertion method throws an exception since the two values are not equal:

assertEquals(true, false);

- assertTrue or assertFalse: These methods check whether the given Boolean condition is true or false.
- assertNull or assertNotNull: These methods check whether an object is null or not.
- assertSame or assertNotSame: These methods check whether two references point to the same object or not.
- fail: This method fails a test. It can be used to make sure that a part of the code is never reached, for example, if you want to test that a method throws an exception when it receives a wrong value, as shown in the following code snippet:

try {
    dontAcceptNullValuesMethod(null);
    fail("No exception was thrown");
} catch (NullPointerException e) {
    // OK
}

The Android testing API, which extends JUnit, provides additional and more powerful assertion classes: ViewAsserts and MoreAsserts.

The ViewAsserts class

The assertion methods offered by JUnit's Assert class are not enough if you want to test some Android-specific objects, such as the ones related to the UI. The ViewAsserts class implements more sophisticated methods related to Android views, that is, for View objects. The whole list of assertion methods can be explored in the Android reference for this class at http://developer.android.com/reference/android/test/ViewAsserts.html. Some of them are described as follows (an example follows this list):

- assertBottomAligned, assertLeftAligned, assertRightAligned, or assertTopAligned(View first, View second): These methods check that the two specified View objects are bottom, left, right, or top aligned, respectively
- assertGroupContains or assertGroupNotContains(ViewGroup parent, View child): These methods check whether the specified ViewGroup object contains the specified child View
- assertHasScreenCoordinates(View origin, View view, int x, int y): This method checks that the specified View object has a particular position on the origin screen
- assertHorizontalCenterAligned or assertVerticalCenterAligned(View reference, View view): These methods check that the specified View object is horizontally or vertically aligned with respect to the reference view
- assertOffScreenAbove or assertOffScreenBelow(View origin, View view): These methods check that the specified View object is above or below the visible screen
- assertOnScreen(View origin, View view): This method checks that the specified View object is loaded on the screen even if it is not visible
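For instance, a test method could use ViewAsserts like this (a sketch that borrows the tvHour and tvMinute views from the alarm example described later in this article, and assumes they sit side by side):

public void testClockViewsLayout() {
    View root = mActivity.getWindow().getDecorView();
    View hour = mActivity.findViewById(R.id.tvHour);
    View minute = mActivity.findViewById(R.id.tvMinute);

    // Both text views must be laid out somewhere on the screen...
    ViewAsserts.assertOnScreen(root, hour);
    ViewAsserts.assertOnScreen(root, minute);
    // ...and the minute view should be top aligned with the hour view.
    ViewAsserts.assertTopAligned(hour, minute);
}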
The MoreAsserts class

The Android API extends some of the basic assertion methods from the Assert class to provide some additional methods. Some of the methods included in the MoreAsserts class are:

- assertContainsRegex(String expectedRegex, String actual): This method checks that the actual given string contains a match for the expected regular expression (regex)
- assertContentsInAnyOrder(Iterable<?> actual, Object… expected): This method checks that the iterable object contains the given objects in any order
- assertContentsInOrder(Iterable<?> actual, Object… expected): This method checks that the iterable object contains the given objects in the same order
- assertEmpty: This method checks whether a collection is empty
- assertEquals: This method extends the assertEquals method from JUnit to cover collections: Set objects, int arrays, String arrays, Object arrays, and so on
- assertMatchesRegex(String expectedRegex, String actual): This method checks whether the expected regex matches the given actual string exactly

Opposite methods such as assertNotContainsRegex, assertNotEmpty, assertNotEquals, and assertNotMatchesRegex are included as well. All these methods are overloaded to optionally include a custom error message. The Android reference for the MoreAsserts class can be inspected to learn more about these assert methods at http://developer.android.com/reference/android/test/MoreAsserts.html.

UI testing and TouchUtils

The test code is executed in a different thread than the application under test, although both threads run in the same process. When testing the UI of an application, UI objects can be referenced from the test code, but you cannot change their properties or send events to them directly. There are two strategies to invoke methods that should run in the UI thread:

- Activity.runOnUiThread(): This method creates a Runnable object in the UI thread in which you can add the code in the run() method. For example, if you want to request the focus of a UI component:

public void testComponent() {
    mActivity.runOnUiThread(
        new Runnable() {
            public void run() {
                mComponent.requestFocus();
            }
        }
    );
    …
}

- @UiThreadTest: This annotation affects the whole method because it is executed on the UI thread. Considering that the annotation refers to an entire method, statements that do not interact with the UI are not allowed in it. For example, consider the previous example using this annotation, shown as follows:

@UiThreadTest
public void testComponent() {
    mComponent.requestFocus();
    …
}

There is also a helper class that provides methods to perform touch interactions on the views of your application: TouchUtils. The touch events are sent to the UI thread safely from the test thread; therefore, the methods of the TouchUtils class should not be invoked in the UI thread. Some of the methods provided by this helper class are as follows (a short example follows the list):

- The clickView method: This method simulates a click on the center of a view
- The drag, dragQuarterScreenDown, dragViewBy, dragViewTo, and dragViewToTop methods: These methods simulate a click on a UI element and then drag it accordingly
- The longClickView method: This method simulates a long press click on the center of a view
- The scrollToTop or scrollToBottom methods: These methods scroll a ViewGroup to the top or bottom
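As an illustration, a test could tap one of the buttons of the alarm example described later (a sketch only; TouchUtils.clickView receives the test case itself, which must extend InstrumentationTestCase, as our fixture does):

public void testPlusButtonClick() {
    View plusButton = mActivity.findViewById(R.id.bPlus);

    // Runs on the test thread; TouchUtils forwards the touch events
    // to the UI thread safely.
    TouchUtils.clickView(this, plusButton);

    // Here you would assert that the minutes shown on screen changed.
}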
The mock object classes

The Android testing API provides some classes to create mock system objects. Mock objects are fake objects that simulate the behavior of real objects but are totally controlled by the test. They allow the isolation of tests from the rest of the system. Mock objects can, for example, simulate a part of the system that has not been implemented yet, or a part that is not practical to test.

In Android, the following mock classes can be found: MockApplication, MockContext, MockContentProvider, MockCursor, MockDialogInterface, MockPackageManager, MockResources, and MockContentResolver. These classes are under the android.test.mock package. The methods of these objects are nonfunctional and throw an exception if they are called. You have to override the methods that you want to use.

Creating an activity test

In this section, we will create an example application so that we can learn how to implement the test cases that evaluate it. Some of the methods presented in the previous sections will be put into practice. You can download the example code files from your account at http://www.packtpub.com.

Our example is a simple alarm application that consists of two activities: MainActivity and SecondActivity. The MainActivity implements a self-built digital clock using text views and buttons. The purpose of creating a self-built digital clock is to have more code and elements to use in our tests. The layout of MainActivity is a relative layout that includes two text views: one for the hour (the tvHour ID) and one for the minutes (the tvMinute ID). There are two buttons below the clock: one to subtract 10 minutes from the clock (the bMinus ID) and one to add 10 minutes to the clock (the bPlus ID). There is also an edit text field to specify the alarm name. Finally, there is a button to launch the second activity (the bValidate ID). Each button has a pertinent method that receives the click event when the button is pressed. The layout looks like the following screenshot:

The SecondActivity receives the hour from the MainActivity and shows its value in a text view, simulating that the alarm was saved. The objective of creating this second activity is to be able to test the launch of another activity in our test case.

Summary

In this article, you learned how to use unit tests that allow developers to quickly verify the state and behavior of an activity on its own.

Resources for Article:

Further resources on this subject:

- Creating Dynamic UI with Android Fragments [article]
- Saying Hello to Unity and Android [article]
- Augmented Reality [article]

Flappy Swift

Packt
16 Jun 2015
15 min read
Let's start using the first framework by implementing a nice clone of Flappy Bird with the help of this article by Giordano Scalzo, the author of Swift by Example.

(For more resources related to this topic, see here.)

The app is…

Only someone who has been living under a rock for the past two years may not have heard of Flappy Bird, but to be sure that everybody understands the game, let's go through a brief introduction. Flappy Bird is a simple, but addictive, game where the player controls a bird that must fly between a series of pipes. Gravity pulls the bird down, but by touching the screen, the player can make the bird flap and move towards the sky, driving the bird through a gap in a couple of pipes. The goal is to pass through as many pipes as possible. Our implementation will be a high-fidelity tribute to the original game, with the same simplicity and difficulty level. The app will consist of only two screens—a clean menu screen and the game itself—as shown in the following screenshot:

Building the skeleton of the app

Let's start implementing the skeleton of our game using the SpriteKit game template.

Creating the project

For implementing a SpriteKit game, Xcode provides a convenient template, which prepares a project with all the useful settings:

1. Go to New | Project and select the Game template, as shown in this screenshot:
2. In the following screen, after filling in all the fields, pay attention and select SpriteKit under Game Technology, like this:

By running the app and touching the screen, you will be delighted by the cute, rotating airplanes!

Implementing the menu

First of all, let's add CocoaPods; write the following code in the Podfile:

use_frameworks!

target 'FlappySwift' do
    pod 'Cartography', '~> 0.5'
    pod 'HTPressableButton', '~> 1.3'
end

Then install CocoaPods by running the pod install command.

As usual, we are going to implement the UI without using Interface Builder and the storyboards. Go to AppDelegate and add these lines to create the main ViewController:

func application(application: UIApplication, didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {
    let viewController = MenuViewController()

    let mainWindow = UIWindow(frame: UIScreen.mainScreen().bounds)
    mainWindow.backgroundColor = UIColor.whiteColor()
    mainWindow.rootViewController = viewController
    mainWindow.makeKeyAndVisible()
    window = mainWindow

    return true
}

The MenuViewController, as the name suggests, implements a nice menu to choose between the game and the Game Center:

import UIKit
import HTPressableButton
import Cartography

class MenuViewController: UIViewController {
    private let playButton = HTPressableButton(frame: CGRectMake(0, 0, 260, 50), buttonStyle: .Rect)
    private let gameCenterButton = HTPressableButton(frame: CGRectMake(0, 0, 260, 50), buttonStyle: .Rect)

    override func viewDidLoad() {
        super.viewDidLoad()
        setup()
        layoutView()
        style()
        render()
    }
}

As you can see, we are using the usual structure. Just for the sake of making the UI prettier, we are using HTPressableButtons instead of the default buttons.
Despite the fact that we are using Auto Layout, the implementation of this custom button requires that we instantiate the button by passing a frame to it:

// MARK: Setup
private extension MenuViewController {
    func setup() {
        playButton.addTarget(self, action: "onPlayPressed:", forControlEvents: .TouchUpInside)
        view.addSubview(playButton)
        gameCenterButton.addTarget(self, action: "onGameCenterPressed:", forControlEvents: .TouchUpInside)
        view.addSubview(gameCenterButton)
    }

    @objc func onPlayPressed(sender: UIButton) {
        let vc = GameViewController()
        vc.modalTransitionStyle = .CrossDissolve
        presentViewController(vc, animated: true, completion: nil)
    }

    @objc func onGameCenterPressed(sender: UIButton) {
        println("onGameCenterPressed")
    }
}

The only thing to note is that, because we are setting the function to be called when the button is pressed using the addTarget() function, we must prefix the designated methods with @objc. Otherwise, it will be impossible for the Objective-C runtime to find the correct method when the button is pressed. This is because they are implemented in a private extension; of course, you can set the extension as internal or public, and then you won't need to prepend @objc to the functions:

// MARK: Layout
extension MenuViewController {
    func layoutView() {
        layout(playButton) { view in
            view.bottom == view.superview!.centerY - 60
            view.centerX == view.superview!.centerX
            view.height == 80
            view.width == view.superview!.width - 40
        }
        layout(gameCenterButton) { view in
            view.bottom == view.superview!.centerY + 60
            view.centerX == view.superview!.centerX
            view.height == 80
            view.width == view.superview!.width - 40
        }
    }
}

The layout functions simply put the two buttons in the correct places on the screen:

// MARK: Style
private extension MenuViewController {
    func style() {
        playButton.buttonColor = UIColor.ht_grapeFruitColor()
        playButton.shadowColor = UIColor.ht_grapeFruitDarkColor()
        gameCenterButton.buttonColor = UIColor.ht_aquaColor()
        gameCenterButton.shadowColor = UIColor.ht_aquaDarkColor()
    }
}

// MARK: Render
private extension MenuViewController {
    func render() {
        playButton.setTitle("Play", forState: .Normal)
        gameCenterButton.setTitle("Game Center", forState: .Normal)
    }
}

Finally, we set the colors and text for the titles of the buttons. The following screenshot shows the complete menu:

You will notice that on pressing Play, the app crashes. This is because the template is using the view defined in the storyboard, while we are creating the controllers directly. Let's change the code in GameViewController:

class GameViewController: UIViewController {
    private let skView = SKView()

    override func viewDidLoad() {
        super.viewDidLoad()
        skView.frame = view.bounds
        view.addSubview(skView)
        if let scene = GameScene.unarchiveFromFile("GameScene") as? GameScene {
            scene.size = skView.frame.size
            skView.showsFPS = true
            skView.showsNodeCount = true
            skView.ignoresSiblingOrder = true
            scene.scaleMode = .AspectFill
            skView.presentScene(scene)
        }
    }
}

We are basically creating the SKView programmatically and setting its size, just as we did for the main view. If the app is run now, everything will work fine.
You can find the code for this version at https://github.com/gscalzo/FlappySwift/tree/the_menu_is_ready.

A stage for a bird

Let's kick-start the game by implementing the background, which is not as straightforward as it might sound.

SpriteKit in a nutshell

SpriteKit is a powerful, but easy-to-use, game framework introduced in iOS 7. It basically provides the infrastructure to move images onto the screen and interact with them. It also provides a physics engine (based on Box2D), a particle engine, and basic sound playback support, making it particularly suitable for casual games.

The content of the game is drawn inside an SKView, which is a particular kind of UIView, so it can be placed inside a normal hierarchy of UIViews. The content of the game is organized into scenes, represented by subclasses of SKScene. Different parts of the game, such as the menu, levels, and so on, must be implemented in different SKScenes. You can consider an SKScene in SpriteKit as the equivalent of a UIViewController.

Inside an SKScene, the elements of the game are grouped in a tree of SKNodes, which tells the SKScene how to render the components. An SKNode can be either a drawable node, such as SKSpriteNode or SKShapeNode, or something to be applied to the subtree of its descendants, such as SKEffectNode or SKCropNode. Note that SKScene is an SKNode itself.

Nodes are animated using SKAction. An SKAction is a change that must be applied to a node, such as a move to a particular position, a change of scale, or a change in the way the node appears. Actions can be grouped together to create actions that run in parallel, or that wait for the end of a previous action.

Finally, we can define physics-based relations between objects, defining mass, gravity, and how the nodes interact with each other.

That said, the best way to understand and learn SpriteKit is by starting to play with it. So, without further ado, let's move on to the implementation of our tiny game. In this way, you'll get a complete understanding of the most important features of SpriteKit.

Explaining the code

In the previous section, we implemented the menu view, leaving the code similar to what was created by the template. With basic knowledge of SpriteKit, you can now start understanding the code:

class GameViewController: UIViewController {
    private let skView = SKView()

    override func viewDidLoad() {
        super.viewDidLoad()
        skView.frame = view.bounds
        view.addSubview(skView)
        if let scene = GameScene.unarchiveFromFile("GameScene") as? GameScene {
            scene.size = skView.frame.size
            skView.showsFPS = true
            skView.showsNodeCount = true
            skView.ignoresSiblingOrder = true
            scene.scaleMode = .AspectFill
            skView.presentScene(scene)
        }
    }
}

This is the UIViewController that starts the game; it creates an SKView that is presented full screen. Then it instantiates the scene from GameScene.sks, which can be considered the equivalent of a Storyboard. Next, it enables some debug information before presenting the scene. It's now clear that we must implement the game inside the GameScene class.

Simulating a three-dimensional world using parallax

To simulate the depth of the in-game world, we are going to use the technique of parallax scrolling, a really popular method wherein the images that are farther away on the game screen move slower than the closer images.
In our case, we have three different levels, and we'll use three different speeds:

Before implementing the scrolling background, we must import the images into our project, remembering to set each image as 2x in the assets. The assets can be downloaded from https://github.com/gscalzo/FlappySwift/blob/master/assets.zip?raw=true. The GameScene class basically sets up the background levels:

import SpriteKit

class GameScene: SKScene {
    private var screenNode: SKSpriteNode!
    private var actors: [Startable]!

    override func didMoveToView(view: SKView) {
        screenNode = SKSpriteNode(color: UIColor.clearColor(), size: self.size)
        addChild(screenNode)
        let sky = Background(textureNamed: "sky", duration:60.0).addTo(screenNode)
        let city = Background(textureNamed: "city", duration:20.0).addTo(screenNode)
        let ground = Background(textureNamed: "ground", duration:5.0).addTo(screenNode)
        actors = [sky, city, ground]

        for actor in actors {
            actor.start()
        }
    }
}

The only implemented function is didMoveToView(), which can be considered the equivalent of viewDidAppear for a UIViewController. We define an array of Startable objects, where Startable is a protocol for making the life cycle of the scene uniform:

import SpriteKit

protocol Startable {
    func start()
    func stop()
}

This will be handy for giving us an easy way to stop the game later, when either we reach the final goal or our character dies. The Background class holds the behavior for a scrollable level:

import SpriteKit

class Background {
    private let parallaxNode: ParallaxNode
    private let duration: Double

    init(textureNamed textureName: String, duration: Double) {
        parallaxNode = ParallaxNode(textureNamed: textureName)
        self.duration = duration
    }

    func addTo(parentNode: SKSpriteNode) -> Self {
        parallaxNode.addTo(parentNode)
        return self
    }
}

As you can see, the class saves the requested duration of a cycle, and then it forwards the calls to a class called ParallaxNode:

// Startable
extension Background : Startable {
    func start() {
        parallaxNode.start(duration: duration)
    }

    func stop() {
        parallaxNode.stop()
    }
}

The Startable protocol is implemented by forwarding the methods to ParallaxNode.

How to implement the scrolling

The idea of implementing scrolling is really straightforward: we implement a node where we put two copies of the same image in a tiled format. We then place the node such that the left half is fully visible. Then we move the entire node to the left until the right half is fully visible. Finally, we reset the position to the original one and restart the cycle. The following figure explains this algorithm:

import SpriteKit

class ParallaxNode {
    private let node: SKSpriteNode!
    init(textureNamed: String) {
        let leftHalf = createHalfNodeTexture(textureNamed, offsetX: 0)
        let rightHalf = createHalfNodeTexture(textureNamed, offsetX: leftHalf.size.width)

        let size = CGSize(width: leftHalf.size.width + rightHalf.size.width,
            height: leftHalf.size.height)

        node = SKSpriteNode(color: UIColor.whiteColor(), size: size)
        node.anchorPoint = CGPointZero
        node.position = CGPointZero
        node.addChild(leftHalf)
        node.addChild(rightHalf)
    }

    func zPosition(zPosition: CGFloat) -> ParallaxNode {
        node.zPosition = zPosition
        return self
    }

    func addTo(parentNode: SKSpriteNode) -> ParallaxNode {
        parentNode.addChild(node)
        return self
    }
}

The init() method simply creates the two halves, puts them side by side, and sets the position of the node:

// MARK: Private
private func createHalfNodeTexture(textureNamed: String, offsetX: CGFloat) -> SKSpriteNode {
    let node = SKSpriteNode(imageNamed: textureNamed,
                            normalMapped: true)
    node.anchorPoint = CGPointZero
    node.position = CGPoint(x: offsetX, y: 0)
    return node
}

The half node is just a node with the correct offset for the x-coordinate:

// MARK: Startable
extension ParallaxNode {
    func start(#duration: NSTimeInterval) {
        node.runAction(SKAction.repeatActionForever(SKAction.sequence(
            [
                SKAction.moveToX(-node.size.width/2.0, duration: duration),
                SKAction.moveToX(0, duration: 0)
            ]
            )))
    }

    func stop() {
        node.removeAllActions()
    }
}

Finally, the Startable protocol is implemented using two actions in a sequence. First, we move half the size—which means one image width—to the left, and then we move the node back to the original position to start the cycle again. This is what the final result looks like:

You can find the code for this version at https://github.com/gscalzo/FlappySwift/tree/stage_with_parallax_levels.

Summary

This article has given you an idea of the direction you need to take to build a clone of the Flappy Bird app. For the complete exercise and a lot more, please refer to Swift by Example by Giordano Scalzo.

Resources for Article:

Further resources on this subject:

- Playing with Swift [article]
- Configuring Your Operating System [article]
- Updating data in the background [article]

Understanding UIKit Fundamentals

Packt
01 Jun 2016
9 min read
In this article by Jak Tiano, author of the book Learning Xcode, we're mostly going to be talking about concepts rather than concrete code examples. Since we've been using UIKit throughout the whole book (and we will continue to do so), I'm going to do my best to elaborate on some things we've already seen and give you new information that you can apply to what we do in the future.

(For more resources related to this topic, see here.)

We've heard a lot about UIKit: we've seen it at the top of our Swift files in the form of import UIKit, and we've used many of the UI elements and classes it provides for us. Now, it's time to take an isolated look at the biggest and most important framework in iOS development.

Application management

Unlike most other frameworks in the iOS SDK, UIKit is deeply integrated into the way your app runs. That's because UIKit is responsible for some of the most essential functionalities of an app: it manages your application's window and view architecture, which we'll be talking about next, and it drives the main run loop, which basically means that it is executing your program.

The UIDevice class

In addition to these very important features, UIKit also gives you access to some other useful information about the device the app is currently running on through the UIDevice class.

Using online resources and documentation: Since this article is about exploring frameworks, it is a good time to remind you that you can (and should!) always be searching online for anything and everything. For example, if you search for UIDevice, you'll end up on Apple's developer page for the UIDevice class, where you can see even more bits of information that you can pull from it. As we progress, keep in mind that searching the name of a class or framework will usually give you quick access to the full documentation.

Here are some code examples of the information you can access:

UIDevice.currentDevice().name
UIDevice.currentDevice().model
UIDevice.currentDevice().orientation
UIDevice.currentDevice().batteryLevel
UIDevice.currentDevice().systemVersion

Some developers have a little bit of fun with this information: for example, Snapchat gives you a special filter to use for photos when your battery is fully charged. Always keep an open mind about what you can do with the data you have access to!

Views

One of the most important responsibilities of UIKit is that it provides views and the view hierarchy architecture. We've talked before about what a view is within the MVC programming paradigm, but here we're referring to the UIView class that acts as the base for (almost) all of our visual content in iOS programming. While it wasn't too important to know about when just getting our feet wet, now is a good time to really dig in a bit and understand what UIViews are and how they work, both on their own and together.

Let's start from the beginning: a view (UIView) defines a rectangle on your screen that is responsible for output and input, meaning drawing to the screen and receiving touch events. It can also contain other views, known as subviews, which ultimately create a view hierarchy. As a result of this hierarchy, we have to be aware of the coordinate systems involved. Now, let's talk about each of these three functions: drawing, hierarchies, and coordinate systems.

Drawing

Each UIView is responsible for drawing itself to the screen. In order to optimize drawing performance, the views will usually try to render their content once and then reuse that image content when it doesn't change. A view can even move and scale content around inside of it without needing to redraw, which can be an expensive operation:

An overview of how UIView draws itself to the screen

With the system-provided views, all of this is handled automatically. However, if you ever need to create your own UIView subclass that uses custom drawing, it's important to know what goes on behind the scenes. To implement custom drawing in a view, you need to implement the drawRect() function in your subclass. When something changes in your view, you need to call the setNeedsDisplay() function, which acts as a marker to let the system know that your view needs to be redrawn. During the next drawing cycle, the code in your drawRect() function will be executed to refresh the content of your view, which will then be cached for performance. A full treatment of this custom drawing functionality is a bit beyond the scope of this article, but discussing it will hopefully give you a better understanding of how drawing works, in addition to giving you a jumping-off point should you need to do this in the future.
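Still, a minimal sketch can make the mechanism clearer. Here, a hypothetical BadgeView (not from the original text) fills itself with a color and asks to be redrawn whenever that color changes:

import UIKit

class BadgeView: UIView {

    // Changing the color marks the view dirty, so the system calls
    // drawRect() again during the next drawing cycle.
    var badgeColor = UIColor.redColor() {
        didSet { setNeedsDisplay() }
    }

    override func drawRect(rect: CGRect) {
        let path = UIBezierPath(ovalInRect: bounds)
        badgeColor.setFill()
        path.fill()
    }
}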
Hierarchies

Now, let's discuss view hierarchies. When we used a view controller in a storyboard, we would drag UI elements onto the view controller. However, what we were actually doing was adding a subview to the base view of the view controller. In fact, that base view was a subview of the UIWindow, which is also a UIView. So, though we haven't really acknowledged it, we've already put view hierarchies to work many times.

The easiest way to think about what happens in a view hierarchy is that you set one view's coordinate system to be relative to another view's. By default, you'd be setting a view's coordinate system to be relative to the base view, which is normally just the whole screen. But you can also set the parent coordinate system to some other view, so that when you move or transform the parent view, the child views are moved and transformed along with it.

Example of how parenting works with a view hierarchy

It's also important to note that the view hierarchy impacts the draw order of your views. All of a view's subviews will be drawn on top of the parent view, and the subviews will be drawn in the order they were added (the last subview added will be on top). To add a subview through code, you can use the addSubview() function. Here's an example:

var view1 = UIView()
var view2 = UIView()
view1.addSubview(view2)

The top-most views will intercept a touch first, and if a view doesn't respond, the touch is passed down the view hierarchy until a view does respond.

Coordinate systems

With all of this drawing and parenting, we need to take a minute to look at how the coordinate system works in UIKit for our views. The origin (the 0,0 point) in UIKit is the top-left corner of the screen, and coordinates increase along X to the right and along Y downward. Each view is placed in this upper-left positioning system relative to its parent view's origin.

Be careful! Other frameworks in iOS use different coordinate systems. For example, SpriteKit uses the lower-left corner as the origin.

Each view also has its own set of positioning information. This is composed of the view's frame, bounds, and center. The frame rectangle describes the origin and the size of the view relative to its parent view's coordinate system. The bounds rectangle describes the origin and the size of the view in its local coordinate system. The center is just the center point of the view relative to the parent view.
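To see how these three relate, you can try a quick sketch like the following in a playground (the values in the comments are what I'd expect for a view placed as shown):

import UIKit

let view = UIView(frame: CGRect(x: 20, y: 40, width: 100, height: 100))
view.frame   // (20.0, 40.0, 100.0, 100.0): origin in the parent's coordinates
view.bounds  // (0.0, 0.0, 100.0, 100.0): origin in local coordinates
view.center  // (70.0, 90.0): the middle of the frame, in parent coordinates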
When dealing with so many different coordinate systems, it can seem like a nightmare to compare positions from different views. Luckily, the UIView class provides a simple convertPoint() function to convert points between systems. Try running this little experiment in a playground to see how a point gets converted from one view's coordinate system to the other:

import UIKit

let view1 = UIView(frame: CGRect(x: 0, y: 0, width: 50, height: 50))
let view2 = UIView(frame: CGRect(x: 10, y: 10, width: 30, height: 30))
view1.addSubview(view2)

let pointFrom1 = CGPoint(x: 20, y: 20)
let pointFromView2 = view1.convertPoint(pointFrom1, toView: view2)

Hopefully, you now have a much better understanding of some of the underlying workings of the view system in UIKit.

Documents, displays, printing, and more

In this section, I'm going to do my best to introduce you to the many additional features of the UIKit framework. The idea is to give you a better understanding of what is possible with UIKit, and if anything sounds interesting to you, you can go off and explore these features on your own.

Documents

UIKit has built-in support for documents, much like you'd find on a desktop operating system. Using the UIDocument class, UIKit can help you save and load documents in the background, in addition to saving them to iCloud. This could be a powerful feature for any app that allows the user to create content that they expect to save and resume working on later.

Displays

On most new iOS devices, you can connect external screens via HDMI. You can take advantage of these external displays by creating a new instance of the UIWindow class and associating it with the external display's screen. You can then add subviews to that window to create a second-screen experience for devices like a big-screen TV. While most consumers don't ever use HDMI-connected external displays, this is a great feature to keep in mind when working on internal applications for corporate or personal use.

Printing

Using the UIPrintInteractionController, you can set up and send print jobs to AirPrint-enabled printers on the user's network. Before you print, you can also create PDFs by drawing content off screen to make printing easier.

And more!

There are many more features of UIKit that are just waiting to be explored! To be honest, UIKit seems to be pretty much a dumping ground for any general features that were just a bit too small to deserve their own framework. If you do some digging in Apple's documentation, you'll find all kinds of interesting things you can do with UIKit, such as creating custom keyboards, creating share sheets, and custom cut-copy-paste support.

Summary

In this article, we looked at UIKit, the biggest and most important framework, and learned about some of the most important system processes, like the view hierarchy.

Resources for Article:

Further resources on this subject:

- Building Surveys using Xcode [article]
- Run Xcode Run [article]
- Tour of Xcode [article]
Debugging

Packt
14 Jan 2016
11 min read
In this article by Brett Ohland and Jayant Varma, the authors of Xcode 7 Essentials (Second Edition), we learn about the debugging process. Grace Hopper had a remarkable career: she not only became a Rear Admiral in the U.S. Navy but also contributed to the field of computer science in many important ways during her lifetime. She was one of the first programmers on an early computer called the Mark I during World War II. She invented the first compiler and was the head of a group that created the FLOW-MATIC language, which would later be extended to create the COBOL programming language.

How does this relate to debugging? Well, in 1947, she was leading a team of engineers who were creating the successor of the Mark I computer (called Mark II) when the computer stopped working correctly. The culprit turned out to be a moth that had managed to fly into, and block the operation of, an important relay inside the computer; she remarked that they had successfully debugged the system and went on to popularize the term. The moth and the log book page that it is attached to are on display in the Smithsonian Museum of American History in Washington, D.C.

While physical bugs in your systems are an astronomically rare occurrence, software bugs are extremely common. No developer sets out to write a piece of code that doesn't act as expected and crashes, but bugs are inevitable. Luckily, Xcode has an assortment of tools and utilities to help you become a great detective and exterminator.

In this article, we will cover the following topics:

- Breakpoints
- The LLDB console
- Debugging the view hierarchy
- Tooltips and a Quick Look

(For more resources related to this topic, see here.)

Breakpoints

The typical development cycle of writing code, compiling, and then running your app on a device or in the simulator doesn't give you much insight into the internal state of the program. Clicking the stop button will, as the name suggests, stop the program and remove it from memory. If your app crashes, or if you notice that a string is formatted incorrectly or the wrong photo is loaded into a view, you have no idea what code caused the issue. Surely, you could use print statements in your code to output values on the console, but as your application involves more and more moving parts, this becomes unmanageable.

A better option is to create breakpoints. A breakpoint is simply a point in your source code at which the execution of your program will pause and put your IDE into debugging mode. While in this mode, you can inspect any objects that are in memory, see a trace of the program's execution flow, and even step into or over instructions in your source code.

Creating a breakpoint is simple. In the standard editor, there is a gutter shown to the immediate left of your code. Most often, there are line numbers in this area, and clicking on a line number will add a blue arrow to that line. That is a breakpoint. If you don't see the line numbers in your gutter, simply open Xcode's preferences by going to Xcode | Preferences (or pressing the Cmd + , keyboard shortcut) and toggling the line numbers option in the editor section.

Clicking on the breakpoint again will dim the arrow and disable the breakpoint. You can remove the breakpoint completely by clicking, dragging, and releasing the arrow indicator well outside of the gutter. A dust ball animation will let you know that it has been removed. You can also delete breakpoints by right-clicking on the arrow indicator and selecting Delete.

The Standard Editor showing a breakpoint on line 14
Listing breakpoints

You can see a list of all active or inactive breakpoints in your app by opening the breakpoint navigator in the sidebar on the left (Cmd + 7). The list will show the file and line number of each breakpoint. Selecting any of them will quickly take the source editor to that location.

Using the + icon at the bottom of the sidebar will let you set many more types of advanced breakpoints, such as the Exception, Symbolic, OpenGL Errors, and Test Failure breakpoints. More information about these types can be found on Apple's developer site at https://developer.apple.com.

The debug area

When Xcode reaches a breakpoint, its debugging mode will become active. The debug navigator will appear in the sidebar on the left, and if you've printed any information onto the console, the debug area will automatically appear below the editor area:

The buttons in the top bar of the debug area are as follows (from left to right):

- Hide/Show debug area: Toggles the visibility of the debug area
- Activate/Deactivate breakpoints: This icon activates or deactivates all breakpoints
- Step Over: Executes the current line or function and pauses at the next line
- Step Into: This executes the current line and jumps to any functions that are called
- Step Out: This exits the current function and places you at the line of code that called the current function
- Debug view hierarchy: This shows you a stacked representation of the view hierarchy
- Location: Since the simulator doesn't have GPS, you can set its location here (Apple HQ, London, and City Bike Ride are some of the options)
- Stack Frame selector: This lets you choose a frame in which the current breakpoint is running

The main window of the debug area can be split into two panes: the variables view and the LLDB console.

The variables view

The variables view shows all the variables and constants that are in memory and within the current scope of your code. If the instance is a value type, it will show the value, and if it's an object type, you'll see a memory address, as shown in the following screenshot:

For collection types (arrays and dictionaries), you have the ability to drill down to the contents of the collection by toggling the arrow indicator on the left-hand side of each instance.

In the bottom toolbar, there are three sections:

- Scope selector: This lets you toggle between showing the current scope, showing the global scope, or setting it to auto to select for you
- Information icons: The Quick Look icon will show Quick Look information in a popup, while the print information button will print the instance's description on the console
- Filter area: You can filter the list of values using this standard filter input box

The console area

The console area is where the system will place all system-generated messages as well as any messages that are printed using print or NSLog statements. These messages will be displayed only while the application is running.

While you are in debug mode, the console area becomes an interactive console for the LLDB debugger. This interactive mode lets you print the values of instances in memory, run debugger commands, inspect code, evaluate code, step through, and even skip code.
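For example, a short console session at a breakpoint might look like the following (the myCounter variable is hypothetical, and the annotations after // are explanations rather than console output):

(lldb) p myCounter                 // print the value of a variable in scope
(lldb) po self.view                // print an object's description
(lldb) expression myCounter = 10   // evaluate code and change state in memory
(lldb) bt                          // print a backtrace of the current thread
(lldb) continue                    // resume running until the next breakpoint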
As Xcode has matured over the years, more and more of the advanced information available for you in the console has become accessible in the GUI. Two important and useful commands for use within the console are p and po:

- po: Prints the description of the object
- p: Prints the value of an object

Depending on the type of the variable or constant, p and po may give different information. As an example, let's take a UITableViewCell that was created in our showcase app, and place a breakpoint in the tableView:cellForRowAtIndexPath method of the ExampleTableView class:

(lldb) p cell
(UITableViewCell) $R0 = 0x7a8f0000 {
  UIView = {
    UIResponder = {
      NSObject = {
        isa = 0x7a8f0000
      }
      _hasAlternateNextResponder = ' '
      _hasInputAssistantItem = ' '
    }
    ... removed 100 lines
(lldb) po cell
<UITableViewCell: 0x7a8f0000; frame = (0 0; 320 44); text = 'Helium'; clipsToBounds = YES; autoresize = W; layer = <CALayer: 0x7a6a02c0>>

Here, p has printed out a lot of detail about the object, while po has displayed a curated list of information. It's good to know and use each of these commands; each one displays different information.

The debug navigator

To open the debug navigator, you can click on its icon in the navigator sidebar or use the Cmd + 6 keyboard shortcut. This navigator will show data only while you are in debug mode:

The navigator has several groups of information:

- The gauges area: At a glance, this shows information about how your application is using the CPU, Memory, Disk, Network, and iCloud (only when your app is using it) resources. Clicking on each gauge will show more details in the standard editor area.
- The processes area: This lists all currently active threads and their processes. This is important information for advanced debugging.

The information in the gauges area used to be accessible only if you ran your application in, and attached the running process to, a separate application called Instruments. Because of the extra step of running our app in this separate application, Apple started including this information inside Xcode, starting from Xcode 6. This information is invaluable for spotting issues in your application, such as memory leaks, or spotting inefficient code by watching the CPU usage.

The preceding screenshot shows the Memory Report screen. Because this is running in the simulator, the amount of RAM available for your application is the amount available on your system. The gauge on the left-hand side shows the current usage as a percentage of the available RAM. The Usage Comparison area shows how much RAM your application is using compared to other processes and the available free space. Finally, the bottom area shows a graph of memory usage over time. Each of these pieces of information is useful for debugging your application for crashes and performance.

Quick Look

Earlier, we were able to see the contents of variables and constants from within the standard editor. While Xcode is in debug mode, we have this very ability. Simply hover the mouse over most instance variables and a Quick Look popup will appear to show you a graphical representation, like this:

Quick Look knows how to handle built-in classes, such as CGRect or UIImage, but what about custom classes that you create? Let's say that you create a class representation of an Employee object. What would be the best way to visualize an instance? You could show the employee's photo, their name, their employee number, and so on.
Since the system can't judge what information is important, it relies on the developer to decide. The debugQuickLookObject() method can be added to any class, and any object or value that is returned will show up within the Quick Look popup:

func debugQuickLookObject() -> AnyObject {
    return "This is the custom preview text"
}

Suppose we were to add that code to the ViewController class and put a breakpoint on the super.viewDidLoad() line:

import UIKit

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    func debugQuickLookObject() -> AnyObject {
        return "I am a quick look value"
    }

}

Now, in the variables list in the debug area, you can click on the Quick Look icon, and the following screen will appear:

Summary

Debugging is an art as well as a science. Because of this, it easily can—and does—take entire books to cover the subject. The best way to learn the techniques and tools is by experimenting on your own code.

Resources for Article:

Further resources on this subject:

- Introduction to GameMaker: Studio [article]
- Exploring Swift [article]
- Using Protocols and Protocol Extensions [article]

Getting Started With Cocos2d

Packt
28 Feb 2011
8 min read
This article is based on Cocos2d for iPhone 0.99 Beginner's Guide.

(For more resources on this subject, see here.)

Downloading Cocos2d for iPhone

Visit http://www.cocos2d-iphone.org to download the latest files. Search the downloads page, and there you will find the different versions available. As of this writing, version 0.99.5 is the latest stable version. Once you have downloaded the file to your computer, uncompress the folder to your desktop and rename it to Cocos2d. Open the uncompressed folder and take a look at its contents. Inside that folder, you will find everything you need to make a game with Cocos2d. The following is a list of the important folders and files:

- Cocos2d: The bread and butter of Cocos2d.
- CocosDenshion: The sound engine that comes along.
- Cocoslive: Cocos2d comes packed with its own high-score server. You can easily set it up to create a scoreboard and ranking for your game.
- External: Here are the sources for the Box2d and Chipmunk physics engines, among other things.
- Extras: Useful classes created by the community.
- Resources: Images, sounds, tilemaps, and so on, used in the tests.
- Templates: The contents of the templates we'll soon install.
- Tests: The samples used to demonstrate all the things Cocos2d can do.
- Tools: Some tools you may find useful, such as a ParticleView project that lets you preview how particles will look.
- License files: They are here in case you need to look at them.

We'll start by taking a look at the samples.

Time for action – opening the samples project

Inside the cocos2d-iphone folder, you will find a file named cocos2d-iphone.xcodeproj. This project contains all the samples, named tests, that come with the source code. Let's run one of them:

1. Open the .xcodeproj file. In the Groups & Files panel, you will find the source code of all the samples. There are more than 35 samples that cover all of the capabilities of Cocos2d, as shown in the following screenshot:
2. Compile and run the SpriteTest. This project comes with a target for each of the available tests, so in order to try any of them, you have to select it from the overview dropdown. Go ahead and select the SpriteTest target and click on Build and Run. If everything goes well, the test should start running. Play a little with it, feel how it behaves, and be amazed by the endless possibilities.

What just happened?

You have just run your first Cocos2d application. If you want to run another sample, just select it by changing the active target. When you do so, the active executable should change to match; if it doesn't, select it manually from the overview dropdown box. You can see which active target and active executable are selected from that overview dropdown box; they should match.

Installing the templates

Cocos2d comes with three templates. These templates are the starting point for any Cocos2d game.
Installing the templates

Cocos2d comes with three templates; these templates are the starting point for any Cocos2d game. They let you:

- Create a simple Cocos2d project
- Create a Cocos2d project with Chipmunk as the physics engine
- Create a Cocos2d project with Box2d as the physics engine

Which one you decide to use depends on your project's needs. Right now, we'll create a simple project from the first template.

Time for action – installing the templates

Carry out the following steps to install the templates:

1. Open the Terminal. It is located in Applications | Utilities | Terminal.
2. Assuming that you have uncompressed the folder on your desktop and renamed it Cocos2d, type the following:

    cd desktop/Cocos2d
    ./install_template.sh

3. You will see a lot of output in the terminal; read it to check whether the templates were installed successfully. If you are getting errors, check that you downloaded the files correctly and uncompressed them onto the desktop; if the folder is in another place, the commands above will fail.

We just installed the Xcode templates for Cocos2d. Now, each time you create a new project in Xcode, you will be given the choice of creating a Cocos2d application. We'll see how to do that in a moment.

Each time a new version of Cocos2d is released, a template file ships with that version, so you should remember to install the templates again with each release. If you have accumulated older templates and want to remove them, go to Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Project Templates/Application and delete the respective folders.

What just happened?

Templates are very useful, and they are a great starting point for any new project. Having these templates available makes starting a new project an easy task: you get all the Cocos2d sources arranged in groups, an AppDelegate already configured to make use of the framework, and even a simple starting Layer.
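That simple starting Layer is worth a quick look before we create the project. As a rough sketch of what the generated code looks like in the 0.99-era templates (the class and selector names here are from memory and may differ slightly between template versions), the layer simply puts a centered label on the screen:

    // Sketch of the template's starting layer (Objective-C).
    // It creates a "Hello World" label and centers it in the window.
    - (id)init {
        if ((self = [super init])) {
            CCLabel *label = [CCLabel labelWithString:@"Hello World"
                                             fontName:@"Marker Felt"
                                             fontSize:64];
            CGSize size = [[CCDirector sharedDirector] winSize];
            label.position = ccp(size.width / 2, size.height / 2);
            [self addChild:label];
        }
        return self;
    }

This also explains the Hello World text you are about to see when you run the project below.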
Creating a new project from the templates

Now that you have the templates ready to use, it is time to make your first project.

Time for action – creating a HelloCocos2d project

We are going to create a new project named HelloCocos2d from the templates you have just installed. This won't be anything like a game, just an introduction to getting started. The steps are as follows:

1. Open Xcode and select File | New Project (Shift + cmd + N).
2. The Cocos2d templates will appear along with the other Xcode project templates.
3. Select the Cocos2d-0.99.1 Application template.
4. Name the project HelloCocos2d and save it to your Documents folder.

Once you perform the preceding steps, the project you just created should open up. Let's take a look at the important folders that were generated:

- Cocos2d Sources: This is where the Cocos2d source code resides. Generally, you won't touch anything in here unless you want to modify the framework or understand how it works.
- Classes: This is where the classes you create will reside. As you may see, two classes were automatically created, thanks to the template. We'll explore them in a moment.
- Resources: This is where you will include all the assets needed for your game.

Go ahead and run the application. Click on Build and Go, and congratulate yourself: you have created your first Cocos2d project. Let's stop for a moment and look at what was created. When you run the application, you'll notice a couple of things:

- The Cocos2d for iPhone logo shows up as soon as the application starts.
- Then a CCLabel is created with the text Hello World.
- Some numbers appear in the lower-left corner of the screen. That is the current FPS the game is running at.

In a moment, we'll see how this is achieved by taking a look at the generated classes.

What just happened?

We have just created our first project from one of the templates we installed earlier. As you can see, using those templates makes starting a Cocos2d project quite easy and lets you get your hands on the actual game code sooner.

Managing the game with the CCDirector

The CCDirector is the class whose main purpose is scene management. It is responsible for switching scenes, setting the desired FPS, handling the device orientation, and a lot of other things. The CCDirector is also the class responsible for initializing OpenGL ES.

If you grab an older Cocos2d project, you might notice that the Cocos2d classes are missing the "CC" prefix. The prefix was added recently to avoid naming problems: Objective-C doesn't have the concept of namespaces, so if Apple at some point decided to create a Director class, the two would collide.

Types of CCDirectors

There are currently four types of directors; for most applications, you will want to use the default one:

- NSTimer Director: Triggers the main loop from an NSTimer object. It is the slowest of the directors, but it integrates well with UIKit objects, and you can customize its update interval from 1 to 60.
- Mainloop Director: Triggers the main loop from a custom main loop. It is faster than the NSTimer Director, but it does not integrate well with UIKit objects, and its update interval can't be customized.
- ThreadMainLoop Director: Has the same advantages and limitations as the Mainloop Director. With this director, the main loop is triggered from a thread, but it is executed on the main thread.
- DisplayLink Director: Only available in OS 3.1+, and the one used by default. It is faster than the NSTimer Director and integrates well with UIKit objects. The update interval can be set to 1/60, 1/30, or 1/15.

Those are the available directors; most of the time, you will not need to make any changes here.
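If you ever do need a different director type, or want to tweak settings such as the FPS counter you saw in the corner of the screen, it is a one-time setup performed before the director is first used. The following sketch is based on the 0.99-era API as best I recall it; the exact constant and selector names (kCCDirectorTypeDisplayLink, setDisplayFPS:, setAnimationInterval:) may vary slightly between 0.99.x releases:

    // Ask for the DisplayLink director before the shared director is
    // first accessed. (In 0.99 this call may fail and return NO if the
    // requested type is unavailable on the device's OS version.)
    [CCDirector setDirectorType:kCCDirectorTypeDisplayLink];

    // Show the FPS counter and aim for 60 frames per second.
    [[CCDirector sharedDirector] setDisplayFPS:YES];
    [[CCDirector sharedDirector] setAnimationInterval:1.0 / 60];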