Search icon CANCEL
Subscription
0
Cart icon
Your Cart (0 item)
Close icon
You have no products in your basket yet
Save more on your purchases! discount-offer-chevron-icon
Savings automatically calculated. No voucher code required.
Arrow left icon
Explore Products
Best Sellers
New Releases
Books
Videos
Audiobooks
Learning Hub
Newsletter Hub
Free Learning
Arrow right icon
timer SALE ENDS IN
0 Days
:
00 Hours
:
00 Minutes
:
00 Seconds

How-To Tutorials - iOS Programming

63 Articles
article-image-flexible-layouts-swift-and-uistackview-0
Milton Moura
04 Jan 2016
12 min read
Save for later

Flexible Layouts with Swift and UIStackview

Milton Moura
04 Jan 2016
12 min read
In this post we will build a Sign In and Password Recovery form with a single flexible layout, using Swift and the UIStackView class, which has been available since the release of the iOS 9 SDK. By taking advantage of UIStackView's properties, we will dynamically adapt to the device's orientation and show / hide different form components with animations. The source code for this post can the found in this github repository. Auto Layout Auto Layout has become a requirement for any application that wants to adhere to modern best practices of iOS development. When introduced in iOS 6, it was optional and full visual support in Interface Builder just wasn't there. With the release of iOS 8 and the introduction of Size Classes, the tools and the API improved but you could still dodge and avoid Auto Layout. But now, we are at a point where, in order to fully support all device sizes and split-screen multitasking on the iPad, you must embrace it and design your applications with a flexible UI in mind. The problem with Auto Layout Auto Layout basically works as an linear equation solver, taking all of the constraints defined in your views and subviews, and calculates the correct sizes and positioning for them. One disadvantage of this approach is that you are obligated to define, typically, between 2 to 6 constraints for each control you add to your view. With different constraint sets for different size classes, the total number of constraints increases considerably and the complexity of managing them increases as well. Enter the Stack View In order to reduce this complexity, the iOS 9 SDK introduced the UIStackView, an interface control that serves the single purpose of laying out collections of views. A UIStackView will dynamically adapt its containing views' layout to the device's current orientation, screen sizes and other changes in its views. You should keep the following stack view properties in mind: The views contained in a stack view can be arranged either Vertically or Horizontally, in the order they were added to the arrangedSubviews array. You can embed stack views within each other, recursively. The containing views are laid out according to the stack view's [distribution](...) and [alignment](...) types. These attributes specify how the view collection is laid out across the span of the stack view (distribution) and how to align all subviews within the stack view's container (alignment). Most properties are animatable and inserting / deleting / hiding / showing views within an animation block will also be animated. Even though you can use a stack view within an UIScrollView, don't try to replicate the behaviour of an UITableView or UICollectionView, as you'll soon regret it. Apple recommends that you use UIStackView for all cases, as it will seriously reduce constraint overhead. Just be sure to judiciously use compression and content hugging priorities to solve possible layout ambiguities. A Flexible Sign In / Recover Form The sample application we'll build features a simple Sign In form, with the option for recovering a forgotten password, all in a single screen. When tapping on the "Forgot your password?" button, the form will change, hiding the password text field and showing the new call-to-action buttons and message labels. By canceling the password recovery action, these new controls will be hidden once again and the form will return to it's initial state. 1. Creating the form This is what the form will look like when we're done. Let's start by creating a new iOS > Single View Application template. Then, we add a new UIStackView to the ViewController and add some constraints for positioning it within its parent view. Since we want a full screen width vertical form, we set its axis to .Vertical, the alignment to .Fill and the distribution to .FillProportionally, so that individual views within the stack view can grow bigger or smaller, according to their content.    class ViewController : UIViewController    {        let formStackView = UIStackView()        ...        override func viewDidLoad() {            super.viewDidLoad()                       // Initialize the top-level form stack view            formStackView.axis = .Vertical            formStackView.alignment = .Fill            formStackView.distribution = .FillProportionally            formStackView.spacing = 8            formStackView.translatesAutoresizingMaskIntoConstraints = false                       view.addSubview(formStackView)                       // Anchor it to the parent view            view.addConstraints(                NSLayoutConstraint.constraintsWithVisualFormat("H:|-20-[formStackView]-20-|", options: [.AlignAllRight,.AlignAllLeft], metrics: nil, views: ["formStackView": formStackView])            )            view.addConstraints(                NSLayoutConstraint.constraintsWithVisualFormat("V:|-20-[formStackView]-8-|", options: [.AlignAllTop,.AlignAllBottom], metrics: nil, views: ["formStackView": formStackView])            )            ...        }        ...    } Next, we'll add all the fields and buttons that make up our form. We'll only present a couple of them here as the rest of the code is boilerplate. In order to refrain UIStackView from growing the height of our inputs and buttons as needed to fill vertical space, we add height constraints to set the maximum value for their vertical size.    class ViewController : UIViewController    {        ...        var passwordField: UITextField!        var signInButton: UIButton!        var signInLabel: UILabel!        var forgotButton: UIButton!        var backToSignIn: UIButton!        var recoverLabel: UILabel!        var recoverButton: UIButton!        ...               override func viewDidLoad() {            ...                       // Add the email field            let emailField = UITextField()            emailField.translatesAutoresizingMaskIntoConstraints = false            emailField.borderStyle = .RoundedRect            emailField.placeholder = "Email Address"            formStackView.addArrangedSubview(emailField)                       // Make sure we have a height constraint, so it doesn't change according to the stackview auto-layout            emailField.addConstraints(                NSLayoutConstraint.constraintsWithVisualFormat("V:[emailField(<=30)]", options: [.AlignAllTop, .AlignAllBottom], metrics: nil, views: ["emailField": emailField])             )                       // Add the password field            passwordField = UITextField()            passwordField.translatesAutoresizingMaskIntoConstraints = false            passwordField.borderStyle = .RoundedRect            passwordField.placeholder = "Password"            formStackView.addArrangedSubview(passwordField)                       // Make sure we have a height constraint, so it doesn't change according to the stackview auto-layout            passwordField.addConstraints(                 NSLayoutConstraint.constraintsWithVisualFormat("V:[passwordField(<=30)]", options: .AlignAllCenterY, metrics: nil, views: ["passwordField": passwordField])            )            ...        }        ...    } 2. Animating by showing / hiding specific views By taking advantage of the previously mentioned properties of UIStackView, we can transition from the Sign In form to the Password Recovery form by showing and hiding specific field and buttons. We do this by setting the hidden property within a UIView.animateWithDuration block.    class ViewController : UIViewController    {        ...        // Callback target for the Forgot my password button, animates old and new controls in / out        func forgotTapped(sender: AnyObject) {            UIView.animateWithDuration(0.2) { [weak self] () -> Void in                self?.signInButton.hidden = true                self?.signInLabel.hidden = true                self?.forgotButton.hidden = true                self?.passwordField.hidden = true                self?.recoverButton.hidden = false                self?.recoverLabel.hidden = false                self?.backToSignIn.hidden = false            }        }               // Callback target for the Back to Sign In button, animates old and new controls in / out        func backToSignInTapped(sender: AnyObject) {            UIView.animateWithDuration(0.2) { [weak self] () -> Void in                self?.signInButton.hidden = false                self?.signInLabel.hidden = false                self?.forgotButton.hidden = false                self?.passwordField.hidden = false                self?.recoverButton.hidden = true                self?.recoverLabel.hidden = true                self?.backToSignIn.hidden = true            }        }        ...    } 3. Handling different Size Classes Because we have many vertical input fields and buttons, space can become an issue when presenting in a compact vertical size, like the iPhone in landscape. To overcome this, we add a stack view to the header section of the form and change its axis orientation between Vertical and Horizontal, according to the current active size class.    override func viewDidLoad() {        ...        // Initialize the header stack view, that will change orientation type according to the current size class        headerStackView.axis = .Vertical        headerStackView.alignment = .Fill        headerStackView.distribution = .Fill        headerStackView.spacing = 8        headerStackView.translatesAutoresizingMaskIntoConstraints = false        ...    }       // If we are presenting in a Compact Vertical Size Class, let's change the header stack view axis orientation    override func willTransitionToTraitCollection(newCollection: UITraitCollection, withTransitionCoordinator coordinator: UIViewControllerTransitionCoordinator) {        if newCollection.verticalSizeClass == .Compact {            headerStackView.axis = .Horizontal        } else {            headerStackView.axis = .Vertical        }    } 4. The flexible form layout So, with a couple of UIStackViews, we've built a flexible form only by defining a few height constraints for our input fields and buttons, with all the remaining constraints magically managed by the stack views. Here is the end result: Conclusion We have included in the sample source code a view controller with this same example but designed with Interface Builder. There, you can clearly see that we have less than 10 constraints, on a layout that could easily have up to 40-50 constraints if we had not used UIStackView. Stack Views are here to stay and you should use them now if you are targeting iOS 9 and above. About the author Milton Moura (@mgcm) is a freelance iOS developer based in Portugal. He has worked professionally in several industries, from aviation to telecommunications and energy and is now fully dedicated to creating amazing applications using Apple technologies. With a passion for design and user interaction, he is also very interested in new approaches to software development. You can find out more at http://defaultbreak.com
Read more
  • 0
  • 0
  • 15282

article-image-lldb-and-command-line
Packt
15 Feb 2017
23 min read
Save for later

LLDB and the Command Line

Packt
15 Feb 2017
23 min read
In this article by Stuart Grimshaw, authors of the book Mastering macOS Programming, we're going to shift up a gear and reach deep into the LLDB environment, both in Xcode and as a standalone process. Some of this stuff may look a little esoteric at first glance, but rest assured, it is anything but. Working professionally as a developer means being comfortable with the command line, being aware of what is available in LLDB, and having at least a working knowledge of the scripting languages that are a part of developing software on almost all platforms. It's also really, really cool. Once you have a bit of momentum behind you, you'll wonder how you ever lived without these tools, which allow you to automate and customize and get right into the inner workings of your code. (For more resources related to this topic, see here.) In this article, you will learn about the following: What LLDB does Running code from within LLDB Creating and manipulating breakpoints and watchpoints How to customize LLDB to your requirements How to run the Swift REPL from within LLDB How to run a Python REPL within LLDB This one will require a lot of hands-on experimentation to familiarize yourself with what is presented here. So, take this one slowly, and try out as much as you can. Once you have absorbed this stuff, you'll never be the same developer again. LLDB So, what is this LLDB? Why do we see it at the console whenever we hit a breakpoint (or a bug)? This is the LLDB prompt: (lldb) Well, LLDB stands for low level debugger, and it does what it says it does. It actually does a bit more than that, which we'll get to very soon. LLDB makes use of several components from the LLVM project, which is basically a compiler, parser, and loads of other stuff that weneed (or Xcode needs) to build a program. However, as interesting as that is to look at, it is out of the scope of this article, and we will focus entirely on the debugger in this article. LLDB is not only available in the debug area console of Xcode, where we have made frequent use of it already,it can also be used at the command line, which we will also cover in this article. The LLDB interface is a very powerful tool; you can print to it from your code, you can interact with it, configure it to your requirements, change settings on the fly, and access both the Swift REPL and the Unix system that underlies Mac OS, without having to terminate a session. Using LLDB Returning to the code, set a breakpoint at this line of code: printObservation() Once execution has stopped at that breakpoint, type the following into the console: (lldb)bt This will produce a list of all the stack frames on the current thread that lead up to the line of code where execution was halted by the breakpoint, bt stands for backtrace. The output will look like this: (lldb) bt * thread #1: tid = 0x105a7d, 0x000000010000177c 5623_16_code`ViewController.testStepCommands(self=0x00006080000c2220) -> () + 12 at ViewController.swift:62, queue = 'com.apple.main-thread', stop reason = breakpoint 3.1 * frame #0: 0x000000010000177c 5623_16_code`ViewController.testStepCommands(self=0x00006080000c2220) -> () + 12 at ViewController.swift:62 frame #1: 0x00000001000014b2 5623_16_code`ViewController.viewDidLoad(self=0x00006080000c2220) -> () + 98 at ViewController.swift:17 ... lots more edited out frame #16: 0x00007fffbfafc255 libdyld.dylib`start + 1 (lldb) There is no need to understand all of this immediately,but looking through it, you will probably recognize the symbols at the top of the stack, since they are the methods that you have coded yourself. Everything leading up to that is provided by the frameworks you use. In this case, we see that most of the work is being done by AppKit, with a modest contribution from our own code (sorry, the truth can hurt at times). Frame0 is the last frame that was put on the stack. Above that, we see some information about the current thread, its number and thread ID (tid) and so on, as well as the line of code at which execution has been halted, and even the reason for doing so. In this case, we have hit breakpoint 3.1. Remember the viewDidLoad symbolic breakpoint produced several breakpoints? That's what the number behind the decimal point means, and breakpoint3 only occurs once, hence there can only be a 3.1. In large projects, this is an excellent way to answer the perennial question, how did the code get here? Debugging line-by-line Once we have halted the program at a breakpoint, we can take over control of its further execution: Continue The following commands are available to continue execution of the code (up until any subsequent breakpoints): (lldb) continue (lldb) c (lldb) thread continue They are all equivalent. Step over The following commands are equivalent to clicking on the step-over button: (lldb) next (lldb) n (lldb) thread step-over Step into The following commands are equivalent to clicking on the step-into button: (lldb) step (lldb) s (lldb) thread step-in This will advance execution of the code by one line, and when used at a function call, it will step into the function. The step-over and step-incommands are only different at a function call, otherwise they behave the same. Step out The following commands are equivalent to clicking on the step-out button: (lldb) finish (lldb) f (lldb) thread step-out Trying it out Spend some time getting used to stepping through the code using these commands. You'll find that very quickly, your muscle memory starts to do most of the work for you, freeing up some mental capacity for actually dealing with whatever bugs you may be dealing with. If you use one of the step-over or step-into commands (n and s are the most convenient), you can hit return again to repeat that command, which is a more comfortable way of repeatedly stepping through a debug session. Enable The following commands are available for enabling and disabling breakpoints: br disable br enable Enabling breakpoints en massedoes not re-enable those that were disabled individually. Printing in LLDB LLDB provides three convenient commands to print details of an object to the console: The p or printcommand gives us a standard description of the object. The pocommand (meaning print object) gives us either a standard description, or a custom description if we have implemented one. The frame variable command prints the same asprint, but does so without executing any code, thus avoiding the danger of side effects. Preparing classes for LLDB description We can provide custom classeswith any description we wish, and have it print with the po command, by declaring the class to conform to the CustomDebugStringConvertibleprotocol, and then implementing its required variable debugDescription. Let's create a Report class, that has a few properties of mixed types, and a debugDescription variable: class Report: CustomDebugStringConvertible { var title: String? var date: Date? var approved: Bool? var debugDescription: String { return "("Report")n(title)n(date)n(approved)" } } Now create an instance of the class, and one by one, populate the optional properties: let report = Report() report.title = "Weekly Summary" report.date = Date() report.approved = true Set a breakpoint at the first of those lines, where the report instance is declared. Now type po into the LLDB console. You'll get <uninitialized> in the console. Type s into the console, to step through the code. Keep your eye on the green tinted position pointer in the breakpoint gutter, as it will show you which lines of code are being executed during the initialization of a Report instance. Type s (or return) another five times until the initialization is complete, and then type po again. At this point we have the report variable initialized, with its properties still set to nil. (lldb) po report <uninitialized> (lldb) s (lldb) s (lldb) s (lldb) s (lldb) s (lldb) po report <uninitialized> (lldb) s (lldb) po report Report nil nil nil Continue stepping through the code, following this session: (lldb) s (lldb) s (lldb) s (lldb) po report Report Optional("Weekly Summary") nil nil etc... As you step through the code, you can see the report variable being populated with data. Stop hooks But what about having an update of an object's state automatically logged to the console everytime the code is stopped? Then we wouldn't need to go through all this typing, and as we continue debugging, we only need to worry about where to set breakpoints. To do that, we can add a so-called one-liner to a breakpoint that is then attached to a stop hook. Run the code again, and this time when the code halts, enter the following into the LLDB console: (lldb) b logReport (lldb) target stop-hook add --one-liner "po report" The first line creates a breakpoint and names it logReport. The second line attaches a command to that breakpoint, equivalent to selecting the debugger commandfrom the Edit Breakpoint... menu as, which does the same as if we had typed po report ourselves. Now add breakpoints to the next three lines of code, so that the code stops at each: Use the continue command (or just c) to move from one breakpoint to the next, and you'll see that, each time, we get a description of the report current state. Later on, we'll see that there is a way to log an object's state every time it changes, without it being dependent on halting the code, and without our having to specify where the changes take place. Printing formattednumbers LLDB contains its own formatting functionality, so when necessary we can format integers before they are printed to the console. We can print the integer asis, using the p command: (lldb) p 111 (Int) $R10 = 111 We can print its binary representation using the p/t command (the t stands for two): (lldb) p/t 111 (Int) $R11 = 0b0000...lots of zeroes...001101111 A hexadecimal representation is also available, using the p/x command: (lldb) p/x 111 (Int) $R12 = 0x000000000000006f An octal representation is available with the p/o command: (lldb) p/o 111 (Int) $R13 = 0157 Executingcode from LLDB One of LLDB's great strengths is its ability to reach into the current state of the program and alter it. So, for example, we could change the title property of our report object while the code is halted on a later breakpoint, with this code: expr report.title = "Monthly Summary" After this, the code runs as if "Monthly Summary" had been entered all along. This is an essential tool to master when debugging; the ability to change values on the fly can save you hundreds of program relaunches in a single day's debugging or prototyping. Type lookups This one is short, but sweet. Using the typelookup command, we get a summary of whatever type we may be interested in. Taking a very small class, our own Report class, as an example, type the following into LLDB: (lldb) type lookup Report This will produce output to the console like the following: class Report : CustomDebugStringConvertible { var title: Swift.String? var date: Foundation.Date? var approved: Swift.Bool? var debugDescription: Swift.String { get {} } @objc deinit init() } Now try it again with a few other types. Try this: (lldb) type lookup Array We won't reproduce the output here (it's quite extensive), but it will give you an idea of just how much information is only an LLDB command away. Breakpoints in LLDB We can add breakpoints of all types, and configure them to our requirements, from within LLDB, and with a little practice you'll find this is a whole lot quicker (as well as being more flexible) than clicking in the breakpoint gutter, right-clicking the breakpoint, and wading through the breakpoint edit window (which rather bizarrely is extremely difficult not to dismiss accidentally). It might take a while before you're convinced of this, but try to remember the time that you quit Macintosh or OS X apps using the menus. Adding a breakpoint To set a breakpoint that is in the current scope, use this command: (lldb) breakpoint set --line 56 This can be abbreviated to: (lldb) b 56 To set a breakpoint somewhere else, add the name of the file after the --file option: (lldb) breakpoint set --file AppDelegate.swift --line 17 You can also set a breakpoint by specifying a method name: (lldb) breakpoint set --method testBreakpoints Imagine setting two dozen breakpoints, across a number of files, some by name, some by line number, and doing it all with the mouse. This console business seriously is faster. Breakpoint list We can inspect a detailed list of all the breakpoints, of whatever type, and whether they are user or project bound breakpoints, with the following command: (lldb) breakpoint list Or its shorter version: (lldb) br li Attaching commands to breakpoints Using the breakpoint list, take note of one of the breakpoints (set an extra one if you have to), which you will alter once you have hit the first breakpoint set in your code. With the program execution halted on that first breakpoint, type the following into the LLDB console (using a breakpoint number from your own breakpoint list): (lldb) breakpoint command add 4.1 You will be prompted to add the command: Enter your debugger command(s). Type 'DONE' to end. > So, do as it says: Enter your debugger command(s). Type 'DONE' to end. >bt > DONE We have chosen bt here, as it's a nice big dump of data onto the console; we won't miss it. Now type: (lldb) c to continue execution, and the program will continue, until it halts at breakpoint 4.1 (or whichever breakpoint you added the command to), and prints the backtrace to the console. Creating a Swift Error breakpoint We saw that we can select Swift Error from the breakpoint types list at the bottom of the breakpoint navigator pane (using the + button), and this can also be done (substantially more quickly) directly in LLDB. (lldb) breakpoint set -E swift Or we can abbreviate this to one of the following: (lldb) br s -E swift (lldb) b -E swift We can also restrict the breakpoint to a specific error type: (lldb) b -E swift -O myError Naming breakpoints You can create de facto groups of breakpoints by naming them. Multiple breakpoints can share a name, and by referring to that name you can enable, disable and delete all those breakpoints with a single command: (lldb) br set -n testStepCommands -N group1 Breakpoint 3: 2 locations. (lldb) br set -n testBreakpoints -N group1 Breakpoint 4: 2 locations. (lldb) breakpoint disable group1 2 breakpoints disabled. Having created two breakpoints that share the name group1, we then disable them by passing the breakpointdisable command the name of the group. Watchpoints We mentioned earlier that we were going to see a way to log any changes tovariables without having to stop the code. This is where we do that. We could, of course, use the variable's setter and getter methods to do this, but the point here is that we don't need to change the program's code and then recompile etc.; all of this can be done without restarting the app. The syntax and console output of watchpoints is generally similar to that of breakpoints, as is made clear from the following input (and console output): (lldb) watchpoint set variable x LLDB confirms watchpoint creation with the details of the watchpoint: Watchpoint created: Watchpoint 1: addr = 0x7fff5fbfec48 size = 8 state = enabled type = w declare @ '/Users/stu/Documents/Books/myBooks/Packt/macOS Programming/Content/ch17 LLDB CLI/5623_17_code/5623_17_code/ViewController.swift:93' watchpoint spec = 'x' When you run code containing a watchpoint, you are basically creating a breakpoint that is attached to a variable rather than a line of code. Adding conditions and commands You can also attach conditions to watchpoint, just as you can to a breakpoint: (lldb) watchpoint modify -c (x==2) And similarly, command add works just as you would expect it to: (lldb) watchpoint command add 1 Enter your debugger command(s). Type 'DONE' to end. >print x > DONE So now you can monitor the progress and state of a variable without disturbing the code at all. In the preceding code, we simply assign a debugger command to watchpoint 1;but how did we get the number of the watchpoint? By checking the watchpoint list. Watchpoint lists Just as for breakpoints, we can get a list of watchpoints, using the following command: (lldb) watchpoint list This gives us a detailed list of all watchpoints, similar to the list below: Number of supported hardware watchpoints: 4 Current watchpoint: Watchpoint 1: addr = 0x7fff5fbfec48 size = 8 state = enabled type = w declare @ '/Users/stu/Documents/Books/myBooks/Packt/macOS Programming/Content/ch17 LLDB CLI/5623_17_code/5623_17_code/ViewController.swift:93' watchpoint spec = 'x' Watchpoints must be small. An Int of Bool is okay, a String object will be too large. Enabling, disabling, and deleting watchpoints The syntax for disabling, enabling and deleting watchpoints is the same as for breakpoints: (lldb) watchpoint disable All watchpoints disabled. (1 watchpoints) This disables watchpoints only, not breakpoints (lldb) watchpoint delete About to delete all watchpoints, do you want to do that?: [Y/n] y All watchpoints removed. (1 watchpoints) Persistent customization If you want LLDB to run commands as it starts up, like defining a list of watchpoints, breakpoints (and a whole ton of other stuff), you can place them in the .lldbinitfile, at these paths: ~.lldbinit-Xcode for LLDB within the Xcode debug console ~/.lldbinit for LLDB in the command line My .initlldb file contains a group of named symbolic breakpoints for things such as viewDidLoad, which are of use in most projects, but which are also disabled by the same file. When I need them for debugging I just need to enable them by name, as opposed to define them from scratch. This is particularly appropriate when breakpoints and watchpoints are a complicated combination of conditions, commands and so on. Getting help in LLBD The following command will provide you with the online docs regarding breakpoints and watchpoints: (lldb) help breakpoint (lldb) help watchpoint Of course, you can access the complete help pages with the following: (lldb) help Try to get accustomed to using the help pages; they contain a lot of information that is otherwise difficult (or tedious) to find, and overcoming any (understandable) reticence to dive into them is better done sooner rather than later. Using shell commands in LLDB You can send shell commands to the system without leaving LLDB by using the platform shell command: (lldb) platform shell pwd /path/to/current/directory... Often, being able to type in these one liners will save you breaking the flow while opening up a Terminal window, navigating to the current directory and so on. Given that shell commands can achieve almost anything on Unix-based machines, you might be tempted to spend a fair amount of time getting used to doing this. Forgot to launch your local server? No problem, just drop into the shell and fire up the server, and then return to the debug session. REPL in LLDB The Swift REPL also runs in LLDB. Actually, whenever you use the Swift REPL, it is running on LLDB: The next time you're running the REPL in a terminal window, try typing a colon, and you'll suddenly discover you were running LLDB all along. To the Swift REPL it from the LLDB console type repl: (lldb) repl 1> Now the REPL prompt is awaiting input. A session might look something like this: (lldb) repl 1>var a = [1,2] a: [Int] = 2 values { [0] = 1 [1] = 2 } 2>a.append(3) 3>a $R0: [Int] = 3 values { [0] = 1 [1] = 2 [2] = 3 } 4> This come is really handy when you need to check an idea in Swift while you are, for example, in the middle of an LLDB session. Switching between LLDB and Swift REPL While in the REPL, you can still punch through to the underlying LLDB session by prepending a command with a colon: 2>:type lookup Report class Report : CustomDebugStringConvertible { var title: Swift.String? var date: Foundation.Date? var approved: Swift.Bool? var debugDescription: Swift.String { get {} } @objc deinit init() } In this example, we’ve printed a summary of our Report class (from a REPL session that knows nothing of that class) by using LLDB's type lookup command preceded by the colon. Leaving a REPL session To leave the REPL and return to LLDB, type just a colon on its own: (lldb) repl 4>print("repl says hi") repl says hi 5>: (lldb) The REPL session has not ended, however. If you re-enter it with the repl command, the variables you defined previously are still in scope. As long as you don't terminate the LLDB session, you can switch between the two as necessary. Using Python in LLDB LLDB has full, built-in Python support. If you type script into the LLDB console, it will open a Python REPL: (lldb) script Python Interactive Interpreter. To exit, type 'quit()', 'exit()'. >>> This is about as powerful a scripting language as you are likely to come across, and now it's just waiting there for you to use it. We'll see an example of what it can do shortly, once we have seen how to reach into the debug session from within the Python REPL. Accessing program variables from Python The Python REPL has a number of tools with which you can reach into your program's session: >>> mynumber = lldb.frame.FindVariable("i") >>> mynumber.GetValue() '42' So, we have the entire Python scripting language at our disposal, with access to the running program's variables. How we could attach Python scripts to a breakpoint; now we can run spontaneous ad hoc Python sessions as well. Pretty amazing stuff! Switching between LLDB and Python REPL Enter quit to leave the Python REPL. However, just as we saw with the Swift REPL, the Python REPL session is not killed as long as the LLDB session is not terminated: (lldb) script Python Interactive Interpreter. To exit, type 'quit()', 'exit()'. >>>a = 1 >>>quit (lldb) script Python Interactive Interpreter. To exit, type 'quit()', 'exit()'. >>>a 1 >>> This ability to switch seamlessly between the two without interrupting either LLDB or the REPL makes this a killer feature of LLDB. One liners from LLDB to Python You can also pass a line of Python to the script command and have it executed without entering the REPL: (lldb) script import os (lldb) script os.system("open http://www.apple.com") Once again, we see an enormous amount of scripting power at our disposal. Getting help in Python There is a pretty huge help document available with the following command from within LLDB: (lldb) script help(lldb) Or use the following command for a smaller, more manageable doc set: (lldb) script help(lldb.process) From these documents you have access to more specific topics about Python in LLDB. It has to be said that these docs are going to be of more use to an experienced Python programmer who is integrating Python into his or her workflow. To learn Python from scratch (and you should, it's easy), you're better off checking out more tutorial-orientated resources. Altogether now You can see that from the debug console, you can run LLDB, shell commands, the Swift REPL, and a Python REPL, all without having to kill the running debug session or fire up the Terminal. Once you get comfortable with making use of this, it becomes an indispensable part of the development process. Standalone LLDB Starting with Xcode 8, LLDB can run in a standalone Terminal session. To launch it, type the lldb command: ~ lldb (lldb) This is great for exploring LLDB, but prototyping and debugging an Xcode project is what we are here for, so let's do that next. Running Xcode projects in a standalone LLDB session To debug an Xcode project, we first need the path of the debug build, which we can get by right-clicking on theProductsfolder in the projectnavigator, and selecting Show in Finder: Now add that path as the argument to the lldb command (dragging the build from the Finder into the Terminal window is a pretty quick way to do that): ~ lldb /Users/.../Build/Products/Debug/5623_17_code.app Hit return. This will produce something similar to the following output: (lldb) target create "/Users/.../Build/Products/Debug/5623_17_code.app" Current executable set to '/Users/.../Build/Products/Debug/5623_17_code.app' (x86_64). We haven't got the code running yet, and we'll set a breakpoint before we do: (lldb) breakpoint set --file ViewController.swift --line 31 Breakpoint 2: where = 5623_17_code`_623_17_code.ViewController.viewDidLoad () -> () + 12 at ViewController.swift:31, address = 0x0000000100001bdc Now run the code: (lldb) run Once we hit that breakpoint, something like the following output will be generated: Process 32522 launched: '/Users/.../Build/Products/Debug/5623_17_code.app/Contents/MacOS/5623_17_code' (x86_64) Process 32522 stopped * thread #1: tid = 0x2039e3, 0x0000000100001bdc 5623_17_code`ViewController.viewDidLoad(self=0x0000000100a69620) -> () + 12 at ViewController.swift:31, queue = 'com.apple.main-thread', stop reason = breakpoint 2.1 frame #0: 0x0000000100001bdc 5623_17_code`ViewController.viewDidLoad(self=0x0000000100a69620) -> () + 12 at ViewController.swift:31 28 29 override func viewDidLoad() 30 { -> 31 super.viewDidLoad() 32 33 let report = Report() 34 report.title = "Weekly Summary" (lldb) As we can see (there's even a little arrow and a code excerpt), the code has been stopped at line 31, as required. Let's print details of the view controller's viewproperty while we're here: (lldb) p self.view (NSView) $R0 = 0x0000000100d18070 { AppKit.NSResponder = { baseNSObject@0 = { isa = NSView } _nextResponder = 0x0000000100d324f0 } } (lldb) Now we can type c to continue the program's execution;but what happens then is that the terminal's input is no longer going to LLDB, which you will notice by the absence of the (lldb) prompt. It is going to the running program. You need to typecontrol + C to get back to LLDB. To return to using the terminal for input to the program, type c to continue. Using these two commands, you can toggle between LLDB and the running process, should you need to. Differences between standalone and Xcode LLDB There are a few important things to note about running standalone LLDB sessions: An LLDB session that is running in a Terminal window ignores all Xcode breakpoints (and other sessions). A command line session is available all of the time, without necessarily having to halt the program's execution. We can have two (or more) open processes at the same time by launching from different terminal sessions (that isTerminal app windows). This means we can try variations in our code at the same time: We can use different variable values We can set different breakpoints and watchpoints Some of these differences make clear the advantages that a standalone session offers over LLDB running in Xcode. We won't necessarily need these features all the time, but it's a good idea to get comfortable with debugging in a separate app. When you do need those features, you'll be glad you did. Summary In this article, we have had more than a superficial glimpse of Xcode and LLDB's advanced features, though there is more to discover. You have learned the following in this article: Using LLDB's advanced debugging features Making your custom classes debug-printable Running your projects in standalone LLDB sessions Using stop hooks and watchpoints in addition to breakpoints Using shell, Swift, and Python commands in an LLDB session in addition to LLDB's own command set Resources for Article:  Further resources on this subject: Maximizing everyday debugging [article] Debugging Applications with PDB and Log Files [article] Debugging mechanisms in Oracle SQL Developer 2.1 using Pl/SQL [article]
Read more
  • 0
  • 1
  • 12309

article-image-creating-mutable-and-immutable-classes-swift
Packt
20 Jan 2016
8 min read
Save for later

Creating Mutable and Immutable Classes in Swift

Packt
20 Jan 2016
8 min read
In this article by Gastón Hillar, author of the book Object-Oriented Programming with Swift, we will learn how to create mutable and immutable classes in Swift. (For more resources related to this topic, see here.) Creating mutable classes So far, we worked with different type of properties. When we declare stored instance properties with the var keyword, we create a mutable instance property, which means that we can change their values for each new instance we create. When we create an instance of a class that defines many public-stored properties, we create a mutable object, which is an object that can change its state. For example, let's think about a class named MutableVector3D that represents a mutable 3D vector with three public-stored properties: x, y, and z. We can create a new MutableVector3D instance and initialize the x, y, and z attributes. Then, we can call the sum method with the delta values for x, y, and z as arguments. The delta values specify the difference between the existing and new or desired value. So, for example, if we specify a positive value of 30 in the deltaX parameter, it means we want to add 30 to the X value. The following lines declare the MutableVector3D class that represents the mutable version of a 3D vector in Swift: public class MutableVector3D { public var x: Float public var y: Float public var z: Float init(x: Float, y: Float, z: Float) { self.x = x self.y = y self.z = z } public func sum(deltaX: Float, deltaY: Float, deltaZ: Float) { x += deltaX y += deltaY z += deltaZ } public func printValues() { print("X: (self.x), Y: (self.y), Z: (self.z))") } } Note that the declaration of the sum instance method uses the func keyword, specifies the arguments with their types enclosed in parentheses, and then declares the body for the method enclosed in curly brackets. The public sum instance method receives the delta values for x, y, and z (deltaX, deltaY and deltaZ) and mutates the object, which means that the method changes the values of x, y, and z. The public printValues method prints the values of the three instance-stored properties: x, y, and z. The following lines create a new MutableVector3D instance method called myMutableVector, initialized with the values for the x, y, and z properties. Then, the code calls the sum method with the delta values for x, y, and z as arguments and finally calls the printValues method to check the new values after the object mutated with the call to the sum method: var myMutableVector = MutableVector3D(x: 30, y: 50, z: 70) myMutableVector.sum(20, deltaY: 30, deltaZ: 15) myMutableVector.printValues() The results of the execution in the Playground are shown in the following screenshot: The initial values for the myMutableVector fields are 30 for x, 50 for y, and 70 for z. The sum method changes the values of the three instance-stored properties; therefore, the object state mutates as follows: myMutableVector.X mutates from 30 to 30 + 20 = 50 myMutableVector.Y mutates from 50 to 50 + 30 = 80 myMutableVector.Z mutates from 70 to 70 + 15 = 85 The values for the myMutableVector fields after the call to the sum method are 50 for x, 80 for y, and 85 for z. We can say that the method mutated the object's state; therefore, myMutableVector is a mutable object and an instance of a mutable class. It's a very common requirement to generate a 3D vector with all the values initialized to 0—that is, x = 0, y = 0, and z = 0. A 3D vector with these values is known as an origin vector. We can add a type method to the MutableVector3D class named originVector to generate a new instance of the class initialized with all the values in 0. Type methods are also known as class or static methods in other object-oriented programming languages. It is necessary to add the class keyword before the func keyword to generate a type method instead of an instance. The following lines define the originVector type method: public class func originVector() -> MutableVector3D { return MutableVector3D(x: 0, y: 0, z: 0) } The preceding method returns a new instance of the MutableVector3D class with 0 as the initial value for all the three elements. The following lines call the originVector type method to generate a 3D vector, the sum method for the generated instance, and finally, the printValues method to check the values for the three elements on the Playground: var myMutableVector2 = MutableVector3D.originVector() myMutableVector2.sum(5, deltaY: 10, deltaZ: 15) myMutableVector2.printValues() The following screenshot shows the results of executing the preceding code in the Playground: Creating immutable classes Mutability is very important in object-oriented programming. In fact, whenever we expose mutable properties, we create a class that will generate mutable instances. However, sometimes a mutable object can become a problem and in certain situations, we want to avoid the objects to change their state. For example, when we work with concurrent code, an object that cannot change its state solves many concurrency problems and avoids potential bugs. For example, we can create an immutable version of the previous MutableVector3D class to represent an immutable 3D vector. The new ImmutableVector3D class has three immutable instance properties declared with the let keyword instead of the previously used var[SI1]  keyword: x, y, and z. We can create a new ImmutableVector3D instance and initialize the immutable instance properties. Then, we can call the sum method with the delta values for x, y, and z as arguments. The sum public instance method receives the delta values for x, y, and z (deltaX, deltaY, and deltaZ), and returns a new instance of the same class with the values of x, y, and z initialized with the results of the sum. The following lines show the code of the ImmutableVector3D class: public class ImmutableVector3D { public let x: Float public let y: Float public let z: Float init(x: Float, y: Float, z: Float) { self.x = x self.y = y self.z = z } public func sum(deltaX: Float, deltaY: Float, deltaZ: Float) -> ImmutableVector3D { return ImmutableVector3D(x: x + deltaX, y: y + deltaY, z: z + deltaZ) } public func printValues() { print("X: (self.x), Y: (self.y), Z: (self.z))") } public class func equalElementsVector(initialValue: Float) -> ImmutableVector3D { return ImmutableVector3D(x: initialValue, y: initialValue, z: initialValue) } public class func originVector() -> ImmutableVector3D { return equalElementsVector(0) } } In the new ImmutableVector3D class, the sum method returns a new instance of the ImmutableVector3D class—that is, the current class. In this case, the originVector type method returns the results of calling the equalElementsVector type method with 0 as an argument. The equalElementsVector type method receives an initialValue argument for all the elements of the 3D vector, creates an instance of the actual class, and initializes all the elements with the received unique value. The originVector type method demonstrates how we can call another type method within a type method. Note that both the type methods specify the returned type with -> followed by the type name (ImmutableVector3D) after the arguments enclosed in parentheses. The following line shows the declaration for the equalElementsVector type method with the specified return type: public class func equalElementsVector(initialValue: Float) -> ImmutableVector3D { The following lines call the originVector type method to generate an immutable 3D vector named vector0 and the sum method for the generated instance and save the returned instance in the new vector1 variable. The call to the sum method generates a new instance and doesn't mutate the existing object: var vector0 = ImmutableVector3D.originVector() var vector1 = vector0.sum(5, deltaX: 10, deltaY: 15) vector1.printValues() The code doesn't allow the users of the ImmutableVector3D class to change the values of the x, y, and z properties declared with the let keyword. The code doesn't compile if you try to assign a new value to any of these properties after they were initialized. Thus, we can say that the ImmutableVector3D class is 100 percent immutable. Finally, the code calls the printValues method for the returned instance (vector1) to check the values for the three elements on the Playground, as shown in the following screenshot: The immutable version adds an overhead compared with the mutable version because it is necessary to create a new instance of the class as a result of calling the sum method. The previously analyzed mutable version just changed the values for the attributes, and it wasn't necessary to generate a new instance. Obviously, the immutable version has both a memory and performance overhead. However, when we work with concurrent code, it makes sense to pay for the extra overhead to avoid potential issues caused by mutable objects. We just have to make sure we analyze the advantages and tradeoffs in order to decide which is the most convenient way of coding our specific classes. Summary In this article, we learned how to create mutable and immutable classes in Swift. Resources for Article: Further resources on this subject: Exploring Swift[article] The Swift Programming Language[article] Playing with Swift[article]
Read more
  • 0
  • 0
  • 8934

article-image-building-surveys-using-xcode
Packt
19 Jan 2016
14 min read
Save for later

Building Surveys using Xcode

Packt
19 Jan 2016
14 min read
In this article by Dhanushram Balachandran and Edward Cessna author of book Getting Started with ResearchKit, you can find the Softwareitis.xcodeproj project in the Chapter_3/Softwareitis folder of the RKBook GitHub repository (https://github.com/dhanushram/RKBook/tree/master/Chapter_3/Softwareitis). (For more resources related to this topic, see here.) Now that you have learned about the results of tasks from the previous section, we can modify the Softwareitis project to incorporate processing of the task results. In the TableViewController.swift file, let's update the rows data structure to include the reference for processResultsMethod: as shown in the following: //Array of dictionaries. Each dictionary contains [ rowTitle : (didSelectRowMethod, processResultsMethod) ] var rows : [ [String : ( didSelectRowMethod:()->(), processResultsMethod:(ORKTaskResult?)->() )] ] = [] Update the ORKTaskViewControllerDelegate method taskViewController(taskViewController:, didFinishWithReason:, error:) in TableViewController to call processResultsMethod, as shown in the following: func taskViewController(taskViewController: ORKTaskViewController, didFinishWithReason reason: ORKTaskViewControllerFinishReason, error: NSError?) { if let indexPath = tappedIndexPath { //1 let rowDict = rows[indexPath.row] if let tuple = rowDict.values.first { //2 tuple.processResultsMethod(taskViewController.result) } } dismissViewControllerAnimated(true, completion: nil) } Retrieves the dictionary of the tapped row and its associated tuple containing the didSelectRowMethod and processResultsMethod references from rows. Invokes the processResultsMethod with taskViewController.result as the parameter. Now, we are ready to create our first survey. In Survey.swift, under the Surveys folder, you will find two methods defined in the TableViewController extension: showSurvey() and processSurveyResults(). These are the methods that we will be using to create the survey and process the results. Instruction step Instruction step is used to show instruction or introductory content to the user at the beginning or middle of a task. It does not produce any result as its an informational step. We can create an instruction step using the ORKInstructionStep object. It has title and detailText properties to set the appropriate content. It also has the image property to show an image. The ORKCompletionStep is a special type of ORKInstructionStep used to show the completion of a task. The ORKCompletionStep shows an animation to indicate the completion of the task along with title and detailText, similar to ORKInstructionStep. In creating our first Softwareitis survey, let's use the following two steps to show the information: func showSurvey() { //1 let instStep = ORKInstructionStep(identifier: "Instruction Step") instStep.title = "Softwareitis Survey" instStep.detailText = "This survey demonstrates different question types." //2 let completionStep = ORKCompletionStep(identifier: "Completion Step") completionStep.title = "Thank you for taking this survey!" //3 let task = ORKOrderedTask(identifier: "first survey", steps: [instStep, completionStep]) //4 let taskViewController = ORKTaskViewController(task: task, taskRunUUID: nil) taskViewController.delegate = self presentViewController(taskViewController, animated: true, completion: nil) } The explanation of the preceding code is as follows: Creates an ORKInstructionStep object with an identifier "Instruction Step" and sets its title and detailText properties. Creates an ORKCompletionStep object with an identifier "Completion Step" and sets its title property. Creates an ORKOrderedTask object with the instruction and completion step as its parameters. Creates an ORKTaskViewController object with the ordered task that was previously created and presents it to the user. Let's update the processSurveyResults method to process the results of the instruction step and the completion step as shown in the following: func processSurveyResults(taskResult: ORKTaskResult?) { if let taskResultValue = taskResult { //1 print("Task Run UUID : " + taskResultValue.taskRunUUID.UUIDString) print("Survey started at : (taskResultValue.startDate!) Ended at : (taskResultValue.endDate!)") //2 if let instStepResult = taskResultValue.stepResultForStepIdentifier("Instruction Step") { print("Instruction Step started at : (instStepResult.startDate!) Ended at : (instStepResult.endDate!)") } //3 if let compStepResult = taskResultValue.stepResultForStepIdentifier("Completion Step") { print("Completion Step started at : (compStepResult.startDate!) Ended at : (compStepResult.endDate!)") } } } The explanation of the preceding code is given in the following: As mentioned at the beginning, each task run is associated with a UUID. This UUID is available in the taskRunUUID property, which is printed in the first line. The second line prints the start and end date of the task. These are useful user analytics data with regards to how much time the user took to finish the survey. Obtains the ORKStepResult object corresponding to the instruction step using the stepResultForStepIdentifier method of the ORKTaskResult object. Prints the start and end date of the step result, which shows the amount of time for which the instruction step was shown before the user pressed the Get Started or Cancel buttons. Note that, as mentioned earlier, ORKInstructionStep does not produce any results. Therefore, the results property of the ORKStepResult object will be nil. You can use a breakpoint to stop the execution at this line of code and verify it. Obtains the ORKStepResult object corresponding to the completion step. Similar to the instruction step, this prints the start and end date of the step. The preceding code produces screens as shown in the following image: After the Done button is pressed in the completion step, Xcode prints the output that is similar to the following: Task Run UUID : 0A343E5A-A5CD-4E7C-88C6-893E2B10E7F7 Survey started at : 2015-08-11 00:41:03 +0000     Ended at : 2015-08-11 00:41:07 +0000Instruction Step started at : 2015-08-11 00:41:03 +0000   Ended at : 2015-08-11 00:41:05 +0000Completion Step started at : 2015-08-11 00:41:05 +0000   Ended at : 2015-08-11 00:41:07 +0000 Question step Question steps make up the body of a survey. ResearchKit supports question steps with various answer types such as boolean (Yes or No), numeric input, date selection, and so on. Let's first create a question step with the simplest boolean answer type by inserting the following line of code in showSurvey(): let question1 = ORKQuestionStep(identifier: "question 1", title: "Have you ever been diagnosed with Softwareitis?", answer: ORKAnswerFormat.booleanAnswerFormat()) The preceding code creates a ORKQuestionStep object with identifier question 1, title with the question, and an ORKBooleanAnswerFormat object created using the booleanAnswerFormat() class method of ORKAnswerFormat. The answer type for a question is determined by the type of the ORKAnswerFormat object that is passed in the answer parameter. The ORKAnswerFormat has several subclasses such as ORKBooleanAnswerFormat, ORKNumericAnswerFormat, and so on. Here, we are using ORKBooleanAnswerFormat. Don't forget to insert the created question step in the ORKOrderedTask steps parameter by updating the following line: let task = ORKOrderedTask(identifier: "first survey", steps: [instStep, question1, completionStep]) When you run the preceding changes in Xcode and start the survey, you will see the question step with the Yes or No options. We have now successfully added a boolean question step to our survey, as shown in the following image: Now, its time to process the results of this question step. The result is produced in an ORKBooleanQuestionResult object. Insert the following lines of code in processSurveyResults(): //1 if let question1Result = taskResultValue.stepResultForStepIdentifier("question 1")?.results?.first as? ORKBooleanQuestionResult { //2 if question1Result.booleanAnswer != nil { let answerString = question1Result.booleanAnswer!.boolValue ? "Yes" : "No" print("Answer to question 1 is (answerString)") } else { print("question 1 was skipped") } } The explanation of the preceding code is as follows: Obtains the ORKBooleanQuestionResult object by first obtaining the step result using the stepResultForStepIdentifier method, accessing its results property, and finally obtaining the only ORKBooleanQuestionResult object available in the results array. The booleanAnswer property of ORKBooleanQuestionResult contains the user's answer. We will print the answer if booleanAnswer is non-nil. If booleanAnswer is nil, it indicates that the user has skipped answering the question by pressing the Skip this question button. You can disable the skipping-of-a-question step by setting its optional property to false. We can add the numeric and scale type question steps using the following lines of code in showSurvey(): //1 let question2 = ORKQuestionStep(identifier: "question 2", title: "How many apps do you download per week?", answer: ORKAnswerFormat.integerAnswerFormatWithUnit("Apps per week")) //2 let answerFormat3 = ORKNumericAnswerFormat.scaleAnswerFormatWithMaximumValue(10, minimumValue: 0, defaultValue: 5, step: 1, vertical: false, maximumValueDescription: nil, minimumValueDescription: nil) let question3 = ORKQuestionStep(identifier: "question 3", title: "How many apps do you download per week (range)?", answer: answerFormat3) The explanation of the preceding code is as follows: Creates ORKQuestionStep with the ORKNumericAnswerFormat object, created using the integerAnswerFormatWithUnit method with Apps per week as the unit. Feel free to refer to the ORKNumericAnswerFormat documentation for decimal answer format and other validation options that you can use. First creates ORKScaleAnswerFormat with minimum and maximum values and step. Note that the number of step increments required to go from minimumValue to maximumValue cannot exceed 10. For example, maximum value of 100 and minimum value of 0 with a step of 1 is not valid and ResearchKit will raise an exception. The step needs to be at least 10. In the second line, ORKScaleAnswerFormat is fed in the ORKQuestionStep object. The following lines in processSurveyResults() process the results from the number and the scale questions: //1 if let question2Result = taskResultValue.stepResultForStepIdentifier("question 2")?.results?.first as? ORKNumericQuestionResult { if question2Result.numericAnswer != nil { print("Answer to question 2 is (question2Result.numericAnswer!)") } else { print("question 2 was skipped") } } //2 if let question3Result = taskResultValue.stepResultForStepIdentifier("question 3")?.results?.first as? ORKScaleQuestionResult { if question3Result.scaleAnswer != nil { print("Answer to question 3 is (question3Result.scaleAnswer!)") } else { print("question 3 was skipped") } } The explanation of the preceding code is as follows: Question step with ORKNumericAnswerFormat generates the result with the ORKNumericQuestionResult object. The numericAnswer property of ORKNumericQuestionResult contains the answer value if the question is not skipped by the user. The scaleAnswer property of ORKScaleQuestionResult contains the answer for a scale question. As you can see in the following image, the numeric type question generates a free form text field to enter the value, while scale type generates a slider: Let's look at a slightly complicated question type with ORKTextChoiceAnswerFormat. In order to use this answer format, we need to create the ORKTextChoice objects before hand. Each text choice object provides the necessary data to act as a choice in a single choice or multiple choice question. The following lines in showSurvey() create a single choice question with three options: //1 let textChoice1 = ORKTextChoice(text: "Games", detailText: nil, value: 1, exclusive: false) let textChoice2 = ORKTextChoice(text: "Lifestyle", detailText: nil, value: 2, exclusive: false) let textChoice3 = ORKTextChoice(text: "Utility", detailText: nil, value: 3, exclusive: false) //2 let answerFormat4 = ORKNumericAnswerFormat.choiceAnswerFormatWithStyle(ORKChoiceAnswerStyle.SingleChoice, textChoices: [textChoice1, textChoice2, textChoice3]) let question4 = ORKQuestionStep(identifier: "question 4", title: "Which category of apps do you download the most?", answer: answerFormat4) The explanation of the preceding code is as follows: Creates text choice objects with text and value. When a choice is selected, the object in the value property is returned in the corresponding ORKChoiceQuestionResult object. The exclusive property is used in multiple choice questions context. Refer to the documentation for its use. First, creates an ORKChoiceAnswerFormat object with the text choices that were previously created and specifies a single choice type using the ORKChoiceAnswerStyle enum. You can easily change this question to multiple choice question by changing the ORKChoiceAnswerStyle enum to multiple choice. Then, an ORKQuestionStep object is created using the answer format object. Processing the results from a single or multiple choice question is shown in the following. Needless to say, this code goes in the processSurveyResults() method: //1 if let question4Result = taskResultValue.stepResultForStepIdentifier("question 4")?.results?.first as? ORKChoiceQuestionResult { //2 if question4Result.choiceAnswers != nil { print("Answer to question 4 is (question4Result.choiceAnswers!)") } else { print("question 4 was skipped") } } The explanation of the preceding code is as follows: The result for a single or multiple choice question is returned in an ORKChoiceQuestionResult object. The choiceAnswers property holds the array of values for the chosen options. The following image shows the generated choice question UI for the preceding code: There are several other question types, which operate in a very similar manner like the ones we discussed so far. You can find them in the documentations of ORKAnswerFormat and ORKResult classes. The Softwareitis project has implementation of two additional types: date format and time interval format. Using custom tasks, you can create surveys that can skip the display of certain questions based on the answers that the users have provided so far. For example, in a smoking habits survey, if the user chooses "I do not smoke" option, then the ability to not display the "How many cigarettes per day?" question. Form step A form step allows you to combine several related questions in a single scrollable page and reduces the number of the Next button taps for the user. The ORKFormStep object is used to create the form step. The questions in the form are represented using the ORKFormItem objects. The ORKFormItem is similar to ORKQuestionStep, in which it takes the same parameters (title and answer format). Let's create a new survey with a form step by creating a form.swift extension file and adding the form entry to the rows array in TableViewController.swift, as shown in the following: func setupTableViewRows() { rows += [ ["Survey" : (didSelectRowMethod: self.showSurvey, processResultsMethod: self.processSurveyResults)], //1 ["Form" : (didSelectRowMethod: self.showForm, processResultsMethod: self.processFormResults)] ] } The explanation of the preceding code is as follows: The "Form" entry added to the rows array to create a new form survey with the showForm() method to show the form survey and the processFormResults() method to process the results from the form. The following code shows the showForm() method in Form.swift file: func showForm() { //1 let instStep = ORKInstructionStep(identifier: "Instruction Step") instStep.title = "Softwareitis Form Type Survey" instStep.detailText = "This survey demonstrates a form type step." //2 let question1 = ORKFormItem(identifier: "question 1", text: "Have you ever been diagnosed with Softwareitis?", answerFormat: ORKAnswerFormat.booleanAnswerFormat()) let question2 = ORKFormItem(identifier: "question 2", text: "How many apps do you download per week?", answerFormat: ORKAnswerFormat.integerAnswerFormatWithUnit("Apps per week")) //3 let formStep = ORKFormStep(identifier: "form step", title: "Softwareitis Survey", text: nil) formStep.formItems = [question1, question2] //1 let completionStep = ORKCompletionStep(identifier: "Completion Step") completionStep.title = "Thank you for taking this survey!" //4 let task = ORKOrderedTask(identifier: "survey with form", steps: [instStep, formStep, completionStep]) let taskViewController = ORKTaskViewController(task: task, taskRunUUID: nil) taskViewController.delegate = self presentViewController(taskViewController, animated: true, completion: nil) } The explanation of the preceding code is as follows: Creates an instruction and a completion step, similar to the earlier survey. Creates two ORKFormItem objects using the questions from the earlier survey. Notice the similarity with the ORKQuestionStep constructors. Creates ORKFormStep object with an identifier form step and sets the formItems property of the ORKFormStep object with the ORKFormItem objects that are created earlier. Creates an ordered task using the instruction, form, and completion steps and presents it to the user using a new ORKTaskViewController object. The results are processed using the following processFormResults() method: func processFormResults(taskResult: ORKTaskResult?) { if let taskResultValue = taskResult { //1 if let formStepResult = taskResultValue.stepResultForStepIdentifier("form step"), formItemResults = formStepResult.results { //2 for result in formItemResults { //3 switch result { case let booleanResult as ORKBooleanQuestionResult: if booleanResult.booleanAnswer != nil { let answerString = booleanResult.booleanAnswer!.boolValue ? "Yes" : "No" print("Answer to (booleanResult.identifier) is (answerString)") } else { print("(booleanResult.identifier) was skipped") } case let numericResult as ORKNumericQuestionResult: if numericResult.numericAnswer != nil { print("Answer to (numericResult.identifier) is (numericResult.numericAnswer!)") } else { print("(numericResult.identifier) was skipped") } default: break } } } } } The explanation of the preceding code is as follows: Obtains the ORKStepResult object of the form step and unwraps the form item results from the results property. Iterates through each of the formItemResults, each of which will be the result for a question in the form. The switch statement detects the different types of question results and accesses the appropriate property that contains the answer. The following image shows the form step: Considerations for real world surveys Many clinical research studies that are conducted using a pen and paper tend to have well established surveys. When you try to convert these surveys to ResearchKit, they may not convert perfectly. Some questions and answer choices may have to be reworded so that they can fit on a phone screen. You are advised to work closely with the clinical researchers so that the changes in the surveys still produce comparable results with their pen and paper counterparts. Another aspect to consider is to eliminate some of the survey questions if the answers can be found elsewhere in the user's device. For example, age, blood type, and so on, can be obtained from HealthKit if the user has already set them. This will help in improving the user experience of your app. Summary Here we have learned to build surveys using Xcode. Resources for Article: Further resources on this subject: Signing up to be an iOS developer[article] Code Sharing Between iOS and Android[article] Creating a New iOS Social Project[article]
Read more
  • 0
  • 0
  • 6719

article-image-using-protocols-and-protocol-extensions
Packt
13 Oct 2015
22 min read
Save for later

Using Protocols and Protocol Extensions

Packt
13 Oct 2015
22 min read
In this article by Jon Hoffman, the author of Mastering Swift 2, we'll see how protocols are used as a type, how we can implement polymorphism in Swift using protocols, how to use protocol extensions, and why we would want to use protocol extensions. (For more resources related to this topic, see here.) While watching the presentations from WWDC 2015 about protocol extensions and protocol-oriented programming, I will admit that I was very skeptical. I have worked with object-oriented programming for so long that I was unsure if this new programming paradigm would solve all of the problems that Apple was claiming it would. Since I am not one to let my skepticism get in the way of trying something new, I set up a new project that mirrored the one I was currently working on, but wrote the code using Apple's recommendations for protocol-oriented programming and used protocol extensions extensively in the code. I can honestly say that I was amazed with how much cleaner the new project was compared to the original one. I believe that protocol extensions is going to be one of those defining features that set one programming language apart from the rest. I also believe that many major languages will soon have similar features. Protocol extensions are the backbone for Apple's new protocol-oriented programming paradigm and is arguably one of the most important additions to the Swift programming language. With protocol extensions, we are able to provide method and property implementations to any type that conforms to a protocol. To really understand how useful protocols and protocol extensions are, let's get a better understanding of protocols. While classes, structs, and enums can all conform to protocols in Swift, for this article, we will be focusing on classes and structs. Enums are used when we need to represent a finite number of cases and while there are valid use cases where we would have an enum conform to a protocol, they are very rare in my experience. Just remember that anywhere that we refer to a class or struct, we can also use an enum. Let's begin exploring protocols by seeing how they are full-fledged types in Swift. Protocol as a type Even though no functionality is implemented in a protocol, they are still considered a full-fledged type in the Swift programming language and can be used like any other type. What this means is we can use protocols as a parameter type or a return type in a function. We can also use them as the type for variables, constants, and collections. Let's take a look at some examples. For these few examples, we will use the PersonProtocol protocol: protocol PersonProtocol { var firstName: String {get set} var lastName: String {get set} var birthDate: NSDate {get set} var profession: String {get} init (firstName: String, lastName: String, birthDate: NSDate) } In this first example, we will see how we would use protocols as a parameter type or return type in functions, methods, or initializers: func updatePerson(person: PersonProtocol) -> PersonProtocol { // Code to update person goes here return person } In this example, the updatePerson() function accepts one parameter of the PersonProtocol protocol type and then returns a value of the PersonProtocol protocol type. Now let's see how we can use protocols as a type for constants, variables, or properties: var myPerson: PersonProtocol In this example, we create a variable of the PersonProtocol protocol type that is named myPerson. We can also use protocols as the item type to store in collection such as arrays, dictionaries, or sets: var people: [PersonProtocol] = [] In this final example, we create an array of PersonProtocol protocol types. As we can see from these three examples, even though the PersonProtocol protocol does not implement any functionality, we can still use protocols when we need to specify a type. We cannot, however, create an instance of a protocol. This is because no functionality is implemented in a protocol. As an example, if we tried to create an instance of the PersonProtocol protocol, we would be receiving the error: protocol type 'PersonProtocol' cannot be instantiated error, as shown in the following example: var test = PersonProtocol(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer) We can use the instance of any class or struct that conforms to our protocol anywhere that the protocol type is required. As an example, if we defined a variable to be of the PersonProtocol protocol type, we could then populate that variable with any class or struct that conforms to the PersonProtocol protocol. For this example, let's assume that we have two types named SwiftProgrammer and FootballPlayer, which conform to the PersonProtocol protocol: var myPerson: PersonProtocol myPerson = SwiftProgrammer(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer) print("(myPerson.firstName) (myPerson.lastName)") myPerson = FootballPlayer(firstName: "Dan", lastName: "Marino", birthDate: bDatePlayer) print("(myPerson.firstName) (myPerson.lastName)") In this example, we start off by creating the myPerson variable of the PersonProtocol protocol type. We then set the variable with an instance of the SwiftProgrammer type and print out the first and last names. Next, we set the myPerson variable to an instance of the FootballPlayer type and print out the first and last names again. One thing to note is that Swift does not care if the instance is a class or struct. It only matters that the type conforms to the PersonProtocol protocol type. Therefore, if our SwiftProgrammer type was a struct and the FootballPlayer type was a class, our previous example would be perfectly valid. As we saw earlier, we can use our PersonProtocol protocol as the type for an array. This means that we can populate the array with instances of any type that conforms to the PersonProtocol protocol. Once again, it does not matter if the type is a class or a struct as long as it conforms to the PersonProtocol protocol. Here is an example of this: var programmer = SwiftProgrammer(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer) var player = FootballPlayer(firstName: "Dan", lastName: "Marino", birthDate: bDatePlayer) var people: [PersonProtocol] = [] people.append(programmer) people.append(player) In this example, we create an instance of the SwiftProgrammer type and an instance of the FootballPlayer type. We then add both instances to the people array. Polymorphism with protocols What we were seeing in the previous examples is a form of Polymorphism. The word polymorphism comes from the Greek roots Poly, meaning many and morphe, meaning form. In programming languages, polymorphism is a single interface to multiple types (many forms). In the previous example, the single interface was the PersonProtocol protocol and the multiple types were any type that conforms to that protocol. Polymorphism gives us the ability to interact with multiple types in a uniform manner. To illustrate this, we can extend our previous example where we created an array of the PersonProtocol types and loop through the array. We can then access each item in the array using the properties and methods define in the PersonProtocol protocol, regardless of the actual type. Let's see an example of this: for person in people { print("(person.firstName) (person.lastName): (person.profession)") } If we ran this example, the output would look similar to this: Jon Hoffman: Swift Programmer Dan Marino: Football Player We have mentioned a few times in this article that when we define the type of a variable, constant, collection type, and so on to be a protocol type, we can then use the instance of any type that conforms to that protocol. This is a very important concept to understand and it is what makes protocols and protocol extensions so powerful. When we use a protocol to access instances, as shown in the previous example, we are limited to using only properties and methods that are defined in the protocol. If we want to use properties or methods that are specific to the individual types, we would need to cast the instance to that type. Type casting with protocols Type casting is a way to check the type of the instance and/or to treat the instance as a specified type. In Swift, we use the is keyword to check if an instance is a specific type and the as keyword to treat the instance as a specific type. To start with, let's see how we would check the instance type using the is keyword. The following example shows how would we do this: for person in people { if person is SwiftProgrammer { print("(person.firstName) is a Swift Programmer") } } In this example, we use the if conditional statement to check whether each element in the people array is an instance of the SwiftProgrammer type and if so, we print that the person is a Swift programmer to the console. While this is a good method to check whether we have an instance of a specific class or struct, it is not very efficient if we wanted to check for multiple types. It is a lot more efficient to use the switch statement, as shown in the next example, if we want to check for multiple types: for person in people { switch (person) { case is SwiftProgrammer: print("(person.firstName) is a Swift Programmer") case is FootballPlayer: print("(person.firstName) is a Football Player") default: print("(person.firstName) is an unknown type") } } In the previous example, we showed how to use the switch statement to check the instance type for each element of the array. To do this check, we use the is keyword in each of the case statements in an attempt to match the instance type. We can also use the where statement with the is keyword to filter the array, as shown in the following example: for person in people where person is SwiftProgrammer { print("(person.firstName) is a Swift Programmer") } Now let's look at how we can cast an instance of a class or struct to a specific type. To do this, we can use the as keyword. Since the cast can fail if the instance is not of the specified type, the as keyword comes in two forms: as? and as!. With the as? form, if the casting fails, it returns a nil, and with the as! form, if the casting fails, we get a runtime error; therefore, it is recommended to use the as? form unless we are absolutely sure of the instance type or we perform a check of the instance type prior to doing the cast. Let's look at how we would use the as? keyword to cast an instance of a class or struct to a specified type: for person in people { if let p = person as? SwiftProgrammer { print("(person.firstName) is a Swift Programmer") } } Since the as? keyword returns an optional, we can use optional binding to perform the cast, as shown in this example. If we are sure of the instance type, we can use the as! keyword. The following example shows how to use the as! keyword when we filter the results of the array to only return instances of the SwiftProgrammer type: for person in people where person is SwiftProgrammer { let p = person as! SwiftProgrammer } Now that we have covered the basics of protocols, that is, how polymorphism works and type casting, let's dive into one of the most exciting new features of Swift protocol extensions. Protocol extensions Protocol extensions allow us to extend a protocol to provide method and property implementations to conforming types. It also allows us to provide common implementations to all the confirming types eliminating the need to provide an implementation in each individual type or the need to create a class hierarchy. While protocol extensions may not seem too exciting, once you see how powerful they really are, they will transform the way you think about and write code. Let's begin by looking at how we would use protocol extension with a very simplistic example. We will start off by defining a protocol called DogProtocol as follows: protocol DogProtocol { var name: String {get set} var color: String {get set} } With this protocol, we are saying that any type that conforms to the DogProtocol protocol, must have the two properties of the String type, namely, name and color. Now let's define the three types that conform to this protocol. We will name these types JackRussel, WhiteLab, and Mutt as follows: struct JackRussel: DogProtocol { var name: String var color: String } class WhiteLab: DogProtocol { var name: String var color: String init(name: String, color: String) { self.name = name self.color = color } } struct Mutt: DogProtocol { var name: String var color: String } We purposely created the JackRussel and Mutt types as structs and the WhiteLab type as a class to show the differences between how the two types are set up and to illustrate how they are treated in the same way when it comes to protocols and protocol extensions. The biggest difference that we can see in this example is the struct types provide a default initiator, but in the class, we must provide the initiator to populate the properties. Now let's say that we want to provide a method named speak to each type that conforms to the DogProtocol protocol. Prior to protocol extensions, we would start off by adding the method definition to the protocol, as shown in the following code: protocol DogProtocol { var name: String {get set} var color: String {get set} func speak() -> String } Once the method is defined in the protocol, we would then need to provide an implementation of the method in every type that conform to the protocol. Depending on the number of types that conformed to this protocol, this could take a bit of time to implement. The following code sample shows how we might implement this method: struct JackRussel: DogProtocol { var name: String var color: String func speak() -> String { return "Woof Woof" } } class WhiteLab: DogProtocol { var name: String var color: String init(name: String, color: String) { self.name = name self.color = color } func speak() -> String { return "Woof Woof" } } struct Mutt: DogProtocol { var name: String var color: String func speak() -> String { return "Woof Woof" } } While this method works, it is not very efficient because anytime we update the protocol, we would need to update all the types that conform to it and we may be duplicating a lot of code, as shown in this example. Another concern is, if we need to change the default behavior of the speak() method, we would have to go in each implementation and change the speak() method. This is where protocol extensions come in. With protocol extensions, we could take the speak() method definition out of the protocol itself and define it with the default behavior, in protocol extension. The following code shows how we would define the protocol and the protocol extension: protocol DogProtocol { var name: String {get set} var color: String {get set} } extension DogProtocol { func speak() -> String { return "Woof Woof" } } We begin by defining DogProtocol with the original two properties. We then create a protocol extension that extends DogProtocol and contains the default implementation of the speak() method. With this code, there is no need to provide an implementation of the speak() method in each of the types that conform to DogProtocol because they automatically receive the implementation as part of the protocol. Let's see how this works by setting our three types that conform to DogProtocol back to their original implementations and they should receive the speak() method from the protocol extension: struct JackRussel: DogProtocol { var name: String var color: String } class WhiteLab: DogProtocol { var name: String var color: String init(name: String, color: String) { self.name = name self.color = color } } struct Mutt: DogProtocol { var name: String var color: String } We can now use each of the types as shown in the following code: let dash = JackRussel(name: "Dash", color: "Brown and White") let lily = WhiteLab(name: "Lily", color: "White") let buddy = Mutt(name: "Buddy", color: "Brown") let dSpeak = dash.speak() // returns "woof woof" let lSpeak = lily.speak() // returns "woof woof" let bSpeak = buddy.speak() // returns "woof woof" As we can see in this example, by adding the speak() method to the DogProtocol protocol extension, we are automatically adding that method to all the types that conform to DogProtocol. The speak() method in the DogProtocol protocol extension can be considered a default implementation of the speak() method because we are able to override it in the type implementations. As an example, we could override the speak() method in the Mutt struct, as shown in the following code: struct Mutt: DogProtocol { var name: String var color: String func speak() -> String { return "I am hungry" } } When we call the speak() method for an instance of the Mutt type, it will return the string, "I am hungry". Now that we have seen how we would use protocols and protocol extensions, let's look at a more real-world example. In numerous apps, across multiple platforms (iOS, Android, and Windows), I have had the requirement to validate user input as it is entered. This validation can be done very easily with regular expressions; however, we do not want various regular expressions littered through out our code. It is very easy to solve this problem by creating different classes or structs that contains the validation code; however, we would have to organize these classes to make them easy to use and maintain. Prior to protocol extensions in Swift, I would use protocols to define the validation requirements and then create a struct that would conform to the protocol for each validation that I needed. Let's take a look at this preprotocol extension method. A regular expression is a sequence of characters that define a particular pattern. This pattern can then be used to search a string to see whether the string matches the pattern or contains a match of the pattern. Most major programming languages contain a regular expression parser, and if you are not familiar with regular expressions, it may be worth to learn more about them. The following code shows the TextValidationProtocol protocol that defines the requirements for any type that we want to use for text validation: protocol TextValidationProtocol { var regExMatchingString: String {get} var regExFindMatchString: String {get} var validationMessage: String {get} func validateString(str: String) -> Bool func getMatchingString(str: String) -> String? } In this protocol, we define three properties and two methods that any type that conforms to TextValidationProtocol must implement. The three properties are: regExMatchingString: This is a regular expression string used to verify that the input string contains only valid characters. regExFindMatchString: This is a regular expression string used to retrieve a new string from the input string that contains only valid characters. This regular expression is generally used when we need to validate the input real time, as the user enters information, because it will find the longest matching prefix of the input string. validationMessage: This is the error message to display if the input string contains non-valid characters. The two methods for this protocol are as follows: validateString: This method will return true if the input string contains only valid characters. The regExMatchingString property will be used in this method to perform the match. getMatchingString: This method will return a new string that contains only valid characters. This method is generally used when we need to validate the input real time as the user enters information because it will find the longest matching prefix of the input string. We will use the regExFindMatchString property in this method to retrieve the new string. Now let's see how we would create a struct that conforms to this protocol. The following struct would be used to verify that the input string contains only alpha characters: struct AlphaValidation1: TextValidationProtocol { static let sharedInstance = AlphaValidation1() private init(){} let regExFindMatchString = "^[a-zA-Z]{0,10}" let validationMessage = "Can only contain Alpha characters" var regExMatchingString: String { get { return regExFindMatchString + "$" } } func validateString(str: String) -> Bool { if let _ = str.rangeOfString(regExMatchingString, options: .RegularExpressionSearch) { return true } else { return false } } func getMatchingString(str: String) -> String? { if let newMatch = str.rangeOfString(regExFindMatchString, options: .RegularExpressionSearch) { return str.substringWithRange(newMatch) } else { return nil } } } In this implementation, the regExFindMatchString and validationMessage properties are stored properties, and the regExMatchingString property is a computed property. We also implement the validateString() and getMatchingString() methods within the struct. Normally, we would have several different types that conform to TextValidationProtocol where each one would validate a different type of input. As we can see from the AlphaValidation1 struct, there is a bit of code involved with each validation type. A lot of the code would also be duplicated in each type. The code for both methods (validateString() and getMatchingString()) and the regExMatchingString property would be duplicated in every validation class. This is not ideal, but if we wanted to avoid creating a class hierarchy with a super class that contains the duplicate code (I personally prefer using value types over classes), we would have no other choice. Now let's see how we would implement this using protocol extensions. With protocol extensions we need to think about the code a little differently. The big difference is, we do not need, nor want to define everything in the protocol. With standard protocols or when we use class hierarchy, all the methods and properties that you would want to access using the generic superclass or protocol would have to be defined within the superclass or protocol. With protocol extensions, it is preferred for us not to define a property or method in the protocol if we are going to be defining it within the protocol extension. Therefore, when we rewrite our text validation types with protocol extensions, TextValidationProtocol would be greatly simplified to look similar to this: protocol TextValidationProtocol { var regExFindMatchString: String {get} var validationMessage: String {get} } In original TextValidationProtocol, we defined three properties and two methods. As we can see in this new protocol, we are only defining two properties. Now that we have our TextValidationProtocol defined, let's create the protocol extension for it: extension TextValidationProtocol { var regExMatchingString: String { get { return regExFindMatchString + "$" } } func validateString(str: String) -> Bool { if let _ = str.rangeOfString(regExMatchingString, options: .RegularExpressionSearch) { return true } else { return false } } func getMatchingString(str: String) -> String? { if let newMatch = str.rangeOfString(regExFindMatchString, options: .RegularExpressionSearch) { return str.substringWithRange(newMatch) } else { return nil } } } In the TextValidationProtocol protocol extension, we define the two methods and the third property that were defined in original TextValidationProtocol, but were not defined in the new one. Now that we have created our protocol and protocol extension, we are able to define our text validation types. In the following code, we define three structs that we will use to validate text when a users types it in: struct AlphaValidation: TextValidationProtocol { static let sharedInstance = AlphaValidation() private init(){} let regExFindMatchString = "^[a-zA-Z]{0,10}" let validationMessage = "Can only contain Alpha characters" } struct AlphaNumericValidation: TextValidationProtocol { static let sharedInstance = AlphaNumericValidation() private init(){} let regExFindMatchString = "^[a-zA-Z0-9]{0,15}" let validationMessage = "Can only contain Alpha Numeric characters" } struct DisplayNameValidation: TextValidationProtocol { static let sharedInstance = DisplayNameValidation() private init(){} let regExFindMatchString = "^[\s?[a-zA-Z0-9\-_\s]]{0,15}" let validationMessage = "Display Name can contain only contain Alphanumeric Characters" } In each one of the text validation structs, we create a static constant and a private initiator so that we can use the struct as a singleton. After we define the singleton pattern, all we do in each type is set the values for the regExFindMatchString and validationMessage properties. Now we have not duplicated the code virtually because even if we could, we would not want to define the singleton code in the protocol extension because we would not want to force that pattern on all the conforming types. To use the text validation classes, we would want to create a dictionary object that would map the UITextField objects to the validation class to use it like this: var validators = [UITextField: TextValidationProtocol]() We could then populate the validators dictionary as shown here: validators[alphaTextField] = AlphaValidation.sharedInstance validators[alphaNumericTextField] = AlphaNumericValidation.sharedInstance validators[displayNameTextField] = DisplayNameValidation.sharedInstance We can now set the EditingChanged event of the text fields to a single method named keyPressed(). To set the edition changed event for each field, we would add the following code to the viewDidLoad() method of our view controller: alphaTextField.addTarget(self, action:Selector("keyPressed:"), forControlEvents: UIControlEvents.EditingChanged) alphaNumericTextField.addTarget(self, action: Selector("keyPressed:"), forControlEvents: UIControlEvents.EditingChanged) displayNameTextField.addTarget(self, action: Selector("keyPressed:"), forControlEvents: UIControlEvents.EditingChanged) Now lets create the keyPressed() method that each text field calls when a user types a character into the field: @IBAction func keyPressed(textField: UITextField) { if let validator = validators[textField] where !validator.validateString(textField.text!) { textField.text = validator.getMatchingString(textField.text!) messageLabel?.text = validator.validationMessage } } In this method, we use the if let validator = validators[textField] statement to retrieve the validator for the particular text field and then we use the where !validator.validateString(textField.text!) statement to validate the string that the user has entered. If the string fails validation, we use the getMatchingString() method to update the text in the text field by removing all the characters from the input string, starting with the first invalid character and then displaying the error message from the text validation class. If the string passes validation, the text in the text field is left unchanged. Summary In this article, we saw that protocols are treated as full-fledged types by Swift. We also saw how polymorphism can be implemented in Swift with protocols. We concluded this article with an in-depth look at protocol extensions and saw how we would use them in Swift. Protocols and protocol extensions are the backbone of Apple's new protocol-oriented programming paradigm. This new model for programming has the potential to change the way we write and think about code. While we did not specifically cover protocol-oriented programming in this article, understanding the topics in this article gives us the solid understanding of protocols and protocol extensions needed to learn about this new programming model. Resources for Article: Further resources on this subject: Using OpenStack Swift [Article] Dragging a CCNode in Cocos2D-Swift [Article] Installing OpenStack Swift [Article]
Read more
  • 0
  • 0
  • 6125

article-image-introduction-sql-and-sqlite
Packt
10 Feb 2016
22 min read
Save for later

Introduction to SQL and SQLite

Packt
10 Feb 2016
22 min read
In this article by Gene Da Rocha, author or the book Learning SQLite for iOS we are introduced to the background of the Structured Query Language (SQL) and the mobile database SQLite. Whether you are an experienced technologist at SQL or a novice, using the book will be a great aid to help you understand this cool subject, which is gaining momentum. SQLite is the database used on the mobile smartphone or tablet that is local to the device. SQLite has been modified by different vendors to harden and secure it for a variety of uses and applications. (For more resources related to this topic, see here.) SQLite was released in 2000 and has grown to be as a defacto database on a mobile or smartphone today. It is an open source piece of software with a low footprint or overhead, which is packaged with a relational database management system. Mr D. Richard Hipp is the inventor and author for SQLite, which was designed and developed on a battleship while he was at a company called General Dynamics at the U. S. Navy. The programming was built for a HP-UX operating system with Informix as the database engine. It took many hours in the data to upgrade or install the database software and was an over-the-top database for this experience DBA (database administrator). Mr Hipp wanted a portable, self-contained, easy-to-use database, which could be mobile, quick to install, and not dependent on the operating. Initially, SQLite 1.0 used the gdbm as its storage system, but later, it was replaced with its own B-tree implementation and technology for the database. The B-tree implementation was enhanced to support transactions and store rows of data with key order. By 2001 onwards, open source family extensions for other languages, such as Java, Python, and Perl, were written to support their applications. The database and its popularity within the open source community and others were growing. Originally based upon relational algebra and tuple relational calculus, SQL consists of a data definition and manipulation language. The scope of SQL includes data insert, query, update and delete, schema creation and modification, and data access control. Although SQL is often described as, and to a great extent is, a declarative language (4GL), it also includes procedural elements. Internationalization supported UTF-16 and UTF-8 and included text-collating sequences in version 2 and 3 in 2004. It was supported by funding from AOL (America Online) in 2004. It works with a variety of browsers, which sometimes have in-built support for this technology. For example, there are so many extensions that use Chrome or Firefox, which allow you to manage the database. There have been many features added to this product. The future with the growth in mobile phones sets this quick and easy relational database system to quantum leap its use within the mobile and tablet application space. SQLite is based on the PostgreSQL as a point of reference. SQLite does not enforce any type checking. The schema does not constrain it since the type of value is dynamic, and a trigger will be activated by converting the data type. About SQL In June 1970, a research paper was published by Dr. E.F. Codd called A Relational Model of Data for Large Shared Data Banks. The Association of Computer Machinery (ACM) accepted Codd data and technology model, which has today become the standard for the RDBMS (Relational Database Management System). IBM Corporation had invented the language called by Structured English Query Language (SEQUEL), where the word "English" was dropped to become SQL. SQL is still pronounced as what has today become the standard for the RDBMS (Relational Database Management System) had a product called which has today become the SQL technology, followed by Oracle, Sybase and Microsoft's SQL Server. The standard commercial relational database management system language today is SQL (SEQUEL). Today, there are ANSI standards for SQL, and there are many variations of this technology. Among the mentioned manufacturers, there are also others available in the open source world, for example, an SQL query engine such as Presto. This is the distribution engine for SQL under open source, which is made to execute interactive analytic queries. Presto queries are run under databases from a variety of data source sizes—gigabytes to petabytes. Companies such as Facebook and Dropbox use the Presto SQL engine for their queries and analytics in data warehouse and related applications. SQL is made up of a data manipulation and definition language built with tuple and algebra calculation in a relational format. The SQL language has a variety of statements but most would recognize the INSERT, SELECT, UPDATE and DELETE statements. These statements form a part of the database schema management process and aid the data access and security access. SQL includes procedural elements as part of its setup. Is SQLite used anywhere? Companies may use applications but they are not aware of the SQL engines that drive their data storage and information. Although, it has become a standard with the American National Standards Institute (ANSI) in 1986, SQL features and functionality are not 100% portable among different SQL systems and require code changes to be useful. These standards are always up for revision to ensure ANSI is maintained. There are many variants of SQL engines on the market from companies, such as Oracle, SQL Server (Microsoft), DB2 (IBM), Sybase (SAP), MYSQL (Oracle), and others. Different companies operate several types of pricing structures, such as free open source, or a paid per seat or by transactions or server types or loads. Today, there is a preference for using server technology and SQL in the cloud with different providers, for example, Amazon Web Services (AWS). SQLite, as it names suggests, is SQL in a light environment, which is also flexible and versatile. Enveloped and embedded database among other processes SQLite has been designed and developed to work and coexist with other applications and processes in its area. RDBMS is tightly integrated with the native application software, which requires storing information but is masked and hidden from users, and it requires minimal administration or maintenance. SQLite can work with different API hidden from users and requires minimal administration or maintenance areas. RDBMS is intertwined with other applications; that is, it requires minimal supervision; there is no network traffic; no network access conflicts or configuration; no access limitations with privileges or permissions; and a large reduced overhead. These make it easier and quicker to deploy your applications to the app stores or other locations. The different components work seamlessly together in a harmonized way to link up data with the SQLite library and other processes. These show how the Apache process and the C/C++ process work together with the SQLite-C library to interface and link with it so that it becomes seamless and integrates with the operating system. SQLite has been developed and integrated in such a way that it will interface and gel with a variety of applications and multiple solutions. As a lightweight RDBMS, it can stand on its own by its versatility and is not cumbersome or too complex to benefit your application. It can be used on many platforms and comes with a binary compatible format, which is easier to dovetail within your mobile application. The different types of I.T. professionals will be involved with SQLite since it holds the data, affects performance, and involves database design, user or mobile interface design specialists, analysts and consultancy types. These professionals could use their previous knowledge of SQL to quickly grasp SQLite. SQLite can act as both data processor for information or deal with data in memory to perform well. The different software pieces of a jigsaw can interface properly by using the C API interface to SQLite, which some another programming language code. For example, C or C++ code can be programmed to communicate with the SQLITE C API, which will then talk to the operating system, and thus communicate with the database engine. Another language such as PHP can communicate using its own language data objects, which will in turn communicate with the SQLite C API and the database. SQLite is a great database to learn especially for computer scientists who want to use a tool that can open your mind to investigate caching, B-Tree structures and algorithms, database design architecture, and other concepts. The architecture of the SQLite database As a library within the OS-Interface, SQLite will have many functions implemented through a programming called tclsqlite.c. Since many technologies and reserved words are used, to language, and in this case, it will have the C language. The core functions are to be found in main.c, legacy.c, and vmbeapi.c. There is also a source code file in C for the TCL language to avoid any confusion; the prefix of sqlite3 is used at the beginning within the SQLite library. The Tokeniser code base is found within tokenize.c. Its task is to look at strings that are passed to it and partition or separate them into tokens, which are then passed to the parser. The Parser code base is found within parse.y. The Lemon LALR(1) parser generator is the parser for SQLite; it uses the context of tokens and assigns them a meaning. To keep within the low-sized footprint of RDBMS, only one C file is used for the parse generator. The Code Generator is then used to create SQL statements from the outputted tokens of the parser. It will produce virtual machine code that will carry out the work of the SQL statements. Several files such as attach.c, build.c, delete.c, select.c, and update.c will handle the SQL statements and syntax. Virtual machine executes the code that is generated from the Code Generator. It has in-built storage where each instruction may have up to three additional operands as a part of each code. The source file is called vdbe.c, which is a part of the SQLite database library. Built-in is also a computing engine, which has been specially created to integrate with the database system. There are two header files for virtual machine; the header files that interface a link between the SQLite libraries are vdbe.h and vdbeaux.c, which have utilities used by other modules. The vdbeapi.c file also connects to virtual machine with sqlite_bind and other related interfaces. The C language routines are called from the SQL functions that reference them. For example, functions such as count() are defined in func.c and date functions are located in date.c. B-tree is the type of table implementation used in SQLite; and the C source file is btree.c. The btree.h header file defines the interface to the B-tree system. There is a different B-tree setup for every table and index and held within the same file. There is a header portion within the btree.c, which will have details of the B-tree in a large comment field. The Pager or Page Cache using the B-tree will ask for data in a fixed sized format. The default size is 1024 bytes, which can be between 512 and 65536 bytes. Commit and Rollback operations, coupled with the caching, reading, and writing of data are handled by Page Cache or Pager. Data locking mechanisms are also handled by the Page Cache. The C file page.c is implemented to handle requests within the SQLite library and the header file is pager.h. The OS Interface C file is defined in os.h. It addresses how SQLite can be used on different operating systems and become transparent and portable to the user thus, becoming a valuable solution for any developer. An abstract layer to handle Win32 and POSIX compliant systems is also in place. Different operating systems have their own C file. For example, os_win.c is for Windows, os_unix.c is for Unix, coupled with their own os_win.h and os_unix.h header files. Util.c is the C file that will handle memory allocation and string comparisons. The Utf.c C file will hold the Unicode conversion subroutines. The Utf.c C file will hold the Unicode data, sort it within the SQL engine, and use the engine itself as a mechanism for computing data. Since the memory of the device is limited and the database size has the same constraints, the developer has to think outside the box to use these techniques. These types of memory and resource management form a part of the approach when the overlay techniques were used in the past when disk and memory was limited.   SELECT parameter1, STTDEV(parameter2)       FROM Table1 Group by parameter1       HAVING parameter1 > MAX(parameter3) IFeatures As part of its standards, SQLite uses and implements most of the SQL-92 standards, but not all the potential features or parts of functionality are used or realized. For example, the SQLite uses and implements most of the SQL-92 standards but not all potent columns. The support for triggers is not 100% as it cannot write output to views, but as a substitute, the INSTEAD OF statement can be used. As mentioned previously, the use of a type for a column is different; most relational database systems assign them to individual values. SQLite will convert a string into an integer if the columns preferred type is an integer. It is a good piece of functionality when bound to this type of scripting language, but the technique is not portable to other RDBMS systems. It also has its criticisms for not having a good data integrity mechanism compared to others in relation to statically typed columns. As mentioned previously, it has many bindings to many languages, such as Basic, C, C#, C++, D, Java, JavaScript, Lua, PHP, Objective-C, Python, Ruby, and TCL. Its popularity by the open source community and its usage by customers and developers have enabled its growth to continue. This lightweight RDBMS can be used on Google Chrome, Firefox, Safari, Opera, and the Android Browsers and has middleware support using ADO.NET, ODBC, COM (ActiveX), and XULRunner. It also has the support for web application frameworks such as Django (Python-based), Ruby on Rails, and Bugzilla (Mozilla). There are other applications such as Adobe Photoshop Light, which uses SQLite and Skype. It is also part of the Windows 8, Symbian OS, Android, and OpenBSD operating. Apple also included it via API support via OSXvia OSXother applications like Adobe Photoshop Light. Apart from not having the large overhead of other database engines, SQLite has some major enhancements such as the EXPLAIN keyword with its manifest typing. To control constraint conflicts, the REPLACE and ON CONFLICT statements are used. Within the same query, multiple independent databases can be accessed using the DETACH and ATTACH statements. New SQL functions and collating sequences can be created using the predefined API's, which offer much more flexibility. As there is no configuration required, SQLite just does the job and works. There is no need to initialize, stop, restart, or start server processes and no administrator is required to create the database with proper access control or security permits. After any failure, no user actions are required to recover the database since it is self-repairing: SQLite is more advanced than is thought of in the first place. Unlike other RDBMS, it does not require a server setup via a server to serve up data or incur network traffic costs. There are no TCP/IP calls and frequent communication backwards or forwards. SQLite is direct; the operating system process will deal with database access to its file; and control database writes and reads with no middle-man process handshaking. By having no server backend, the process of installation, configuration, or administration is reduced significantly and the access to the database is granted to programs that require this type of data operations. This is an advantage in one way but is also a disadvantage for security and protection from data-driven misuse and data concurrency or data row locking mechanisms. It also allows the database to be accessed several times by different applications at the same time. It supports a form of portability for the cross-platform database file that can be located with the database file structure. The database file can be updated on one system and copied to another on either 32 bit or 64 bit with different architectures. This does not make a difference to SQLite. The usage of different architecture and the promises of developers to keep the file system stable and compatible with the previous, current, and future developments will allow this database to grow and thrive. SQLite databases don't need to upload old data to the new formatted and upgraded databases; it just works. By having a single disk file for the database, the information can be copied on a USB and shared or just reused on another device very quickly keeping all the information intact. Other RDBMS single-disk file for the database; the information can be copied on a USB and shared or just reused on another device very quickly keeping all the information in tact to grow and thrive. Another feature of this portable database is its size, which can start on a single 512-byte page and expand to 2147483646 pages at 65536 bytes per page or in bytes 140,737,488,224,256, which equates to about 140 terabytes. Most other RDBMS are much larger, but IBM's Cloudscape is small with a 2MB jar file. It is still larger than SQLite. The Firebird alternative's client (frontend) library is about 350KB, whereas the Berkeley Oracle database is around 450kb without SQL support and with one simple key/value pair's option. This advanced portable database system and its source code is in the public domain. They have no copyright or any claim on the source code. However, there are open source license issues and controls for some test code and documentation. This is great news for developers who might want to code up new extensions or database functionality that works with their programs, which could be made into a 'product extension' for SQLite. You cannot have this sort of access to SQL source code around since everything has a patent, limited access, or just no access. There are signed affidavits by developers to disown any copyright interest in the SQLite code. SQLite is different, because it is just not governed or ruled by copyright law; the way software should really work or it used. There are signed affidavits by developers to disown any copyright interest in the SQLite code. This means that you can define a column with a datatype of integer, but its property is dictated by the inputted values and not the column itself. This can allow any value to be stored in any declared data type for this column with the exception of an integer primary key. This feature would suit TCL or Python, which are dynamically typed programming languages. When you allocate space in most RDBMS in any declared char(50), the database system will allocate the full 50 bytes of disk space even if you do not allocate the full 50 bytes of disk space. So, out of char(50) sized column, three characters were used, then the disk space would be only three characters plus two for overhead including data type, length but not 50 characters such as other database engines. This type of operation would reduce disk space usage and use only what space was required. By using the small allocation with variable length records, the applications runs faster, the database access is quicker, manifest typing can be used, and the database is small and nimble. The ease of using this RDBMS makes it easier for most programmers at an intermediate level to create applications using this technology with its detailed documentation and examples. Other RDBMS are internally complex with links to data structures and objects. SQLite comprises using a virtual machine language that uses the EXPLAIN reserved word in front of a query. Virtual machine has increased and benefitted this database engine by providing an excellent process or controlled environment between back end (where the results are computed and outputted) and the front end (where the SQL is parsed and executed). The SQL implementation language is comparable to other RDBMS especially with its lightweight base; it does support recursive triggers and requires the FOR EACH row behavior. The FOR EACH statement is not currently supported, but functionality cannot be ruled out in the future. There is a complete ALTER TABLE support with some exceptions. For example, the RENAME TABLE, ADD COLUMN, or ALTER COLUMN is supported, but the DROP COLUMN, ADD CONSTRAINT, or ALTER COLUMN is not supported. Again, this functionality cannot be ruled out in the future. The RIGHT OUTER JOIN and FULL OUTER JOIN are not support, but the RIGHT OUTER JOIN, FULL OUTER JOIN, and LEFT OUTER JOIN are implemented. The views within this RDBMS are read only. As described so far in the this article, SQLite is a nimble and easy way to use database that developers can engage with quickly, use existing skills, and output systems to mobile devices and tablets far simpler than ever before. With the advantage of today's HTML5 and other JavaScript frameworks, the advancement of SQL and the number of SQLite installations will quantum leap. Working with SQLite The website for SQLite is www.sqlite.org where you can download all the binaries for the database, documentation, and source code, which works on operating systems such as Linux, Windows and MAC OS X. The SQLite share library or DLL is the library to be used for the Windows operating system and can be installed or seen via Visual Studio with the C++ language. So, the developer can write the code using the library that is presently linked in reference via the application. When execution has taken place, the DLL will load and all references in the code will link to those in the DLL at the right time. The SQLite3 command-line program, CLP, is a self-contained program that has all the components built in for you to run at the command line. It also comes with an extension for TCL. So within TCL, you can connect and update the SQLite database. SQLite downloads come with the TAR version for Unix systems and the ZIP version for Windows systems. iOS with SQLite On the hundreds of thousands of apps on all the app stores, it would be difficult to find the one that does not require a database of some sort to store or handle data in a particular way. There are different formats of data called datafeeds, but they all require some temporary or permanent storage. Small amounts of data may not be applicable but medium or large amounts of data will require a storage mechanism such as a database to assist the app. Using SQLite with iOS will enable developers to use their existing skills to run their DBMS on this platform as well. For SQLite, there is the C-library that is embedded and available to use with iOS with the Xcode IDE. Apple fully supports SQLite, which uses an include statement as a part of the library call, but there is not easy made mechanism to engage. Developers also tend to use FMDB—a cocoa/objective-C wrapper around SQLite. As SQLite is fast and lightweight, its usage of existing SQL knowledge is reliable and supported by Apple on Mac OS and iOS and support from many developers as well as being integrated without much outside involvement. The third SQLite library is under the general tab once the main project name is highlighted on the left-hand side. Then, at the bottom of the page or within the 'Linked Frameworks and Library', click + and a modal window appears. Enter the word sqlite and select sqlite; then, select the libsqlite3.dylib library. This one way to set up the environment to get going. In effect, it is the C++ wrapper called the libsqlite3.dylib library within the framework section, which allows the API to work with the SQLite commands. The way in which a text file is created in iOS is the way SQLite will be created. It will use the location (document directory) to save the file that is the one used by iOS. Before anything can happen, the database must be opened and ready for querying and upon the success of data, the constant SQLITE_OK is set to 0. In order to create a table in the SQLite table using the iOS connection and API, the method sqlite3_exec is set up to work with the open sqlite3 object and the create table SQL statement with a callback function. When the callback function is executed and a status is returned of SQLITE_OK, it is successful; otherwise, the other constant SQLITE_ERROR is set to 1. Once the C++ wrapper is used and the access to SQLite commands are available, it is an easier process to use SQLite with iOS. Summary In this article, you read the history of SQL, the impact of relational databases, and the use of a mobile SQL database namely SQLite. It outlines the history and beginnings of SQLite and how it has grown to be the most used database on mobile devices so far. Resources for Article:   Further resources on this subject: Team Project Setup [article] Introducing Sails.js [article] Advanced Fetching [article]
Read more
  • 0
  • 0
  • 5912
Unlock access to the largest independent learning library in Tech for FREE!
Get unlimited access to 7500+ expert-authored eBooks and video courses covering every tech area you can think of.
Renews at $15.99/month. Cancel anytime
article-image-understanding-the-foundation-of-protocol-oriented-design
Expert Network
30 Jun 2021
7 min read
Save for later

Understanding the Foundation of Protocol-oriented Design

Expert Network
30 Jun 2021
7 min read
When Apple announced Swift 2 at the World Wide Developers Conference (WWDC) in 2016, they also declared that Swift was the world’s first protocol-oriented programming (POP) language. From its name, we might assume that POP is all about protocol; however, that would be a wrong assumption. POP is about so much more than just protocol; it is actually a new way of not only writing applications but also thinking about programming. This article is an excerpt from the book Mastering Swift, 6th Edition by Jon Hoffman. In this article, we will discuss a protocol-oriented design and how we can use protocols and protocol extensions to replace superclasses. We will look at how to define animal types for a video game in a protocol-oriented way. Requirements When we develop applications, we usually have a set of requirements that we need to develop against. With that in mind, let’s define the requirements for the animal types that we will be creating in this article: We will have three categories of animals: land, sea, and air. Animals may be members of multiple categories. For example, an alligator can be a member of both the land and sea categories. Animals may attack and/or move when they are on a tile that matches the categories they are in. Animals will start off with a certain number of hit points, and if those hit points reach 0 or less, then they will be considered dead. POP Design We will start off by looking at how we would design the animal types needed and the relationships between them. Figure 1 shows our protocol-oriented design: Figure 1: Protocol-oriented design In this design, we use three techniques: protocol inheritance, protocol composition, and protocol extensions. Protocol inheritance Protocol inheritance is where one protocol can inherit the requirements from one or more additional protocols. We can also inherit requirements from multiple protocols, whereas a class in Swift can have only one superclass. Protocol inheritance is extremely powerful because we can define several smaller protocols and mix/match them to create larger protocols. You will want to be careful not to create protocols that are too granular because they will become hard to maintain and manage. Protocol composition Protocol composition allows types to conform to more than one protocol. With protocol-oriented design, we are encouraged to create multiple smaller protocols with very specific requirements. Let’s look at how protocol composition works. Protocol inheritance and composition are really powerful features but can also cause problems if used wrongly. Protocol composition and inheritance may not seem that powerful on their own; however, when we combine them with protocol extensions, we have a very powerful programming paradigm. Let’s look at how powerful this paradigm is. Protocol-oriented design — putting it all together We will begin by writing the Animal superclass as a protocol: protocol Animal { var hitPoints: Int { get set } } In the Animal protocol, the only item that we are defining is the hitPoints property. If we were putting in all the requirements for an animal in a video game, this protocol would contain all the requirements that would be common to every animal. We only need to add the hitPoints property to this protocol. Next, we need to add an Animal protocol extension, which will contain the functionality that is common for all types that conform to the protocol. Our Animal protocol extension would contain the following code: extension Animal { mutating func takeHit(amount: Int) { hitPoints -= amount } func hitPointsRemaining() -> Int { return hitPoints } func isAlive() -> Bool { return hitPoints > 0 ? true : false } } The Animal protocol extension contains the same takeHit(), hitPointsRemaining(), and isAlive() methods. Any type that conforms to the Animal protocol will automatically inherit these three methods. Now let’s define our LandAnimal, SeaAnimal, and AirAnimal protocols. These protocols will define the requirements for the land, sea, and air animals respectively: protocol LandAnimal: Animal { var landAttack: Bool { get } var landMovement: Bool { get } func doLandAttack() func doLandMovement() } protocol SeaAnimal: Animal { var seaAttack: Bool { get } var seaMovement: Bool { get } func doSeaAttack() func doSeaMovement() } protocol AirAnimal: Animal { var airAttack: Bool { get } var airMovement: Bool { get } func doAirAttack() func doAirMovement() } These three protocols only contain the functionality needed for their particular type of animal. Each of these protocols only contains four lines of code. This makes our protocol design much easier to read and manage. The protocol design is also much safer because the functionalities for the various animal types are isolated in their own protocols rather than being embedded in a giant superclass. We are also able to avoid the use of flags to define the animal category and, instead, define the category of the animal by the protocols it conforms to. In a full design, we would probably need to add some protocol extensions for each of the animal types, but we do not need them for our example here. Now, let’s look at how we would create our Lion and Alligator types using protocol-oriented design: struct Lion: LandAnimal { var hitPoints = 20 let landAttack = true let landMovement = true func doLandAttack() { print(“Lion Attack”) } func doLandMovement() { print(“Lion Move”) } } struct Alligator: LandAnimal, SeaAnimal { var hitPoints = 35 let landAttack = true let landMovement = true let seaAttack = true let seaMovement = true func doLandAttack() { print(“Alligator Land Attack”) } func doLandMovement() { print(“Alligator Land Move”) } func doSeaAttack() { print(“Alligator Sea Attack”) } func doSeaMovement() { print(“Alligator Sea Move”) } } Notice that we specify that the Lion type conforms to the LandAnimal protocol, while the Alligator type conforms to both the LandAnimal and SeaAnimal protocols. As we saw previously, having a single type that conforms to multiple protocols is called protocol composition and is what allows us to use smaller protocols, rather than one giant monolithic superclass. Both the Lion and Alligator types originate from the Animal protocol; therefore, they will inherit the functionality added with the Animal protocol extension. If our animal type protocols also had extensions, then they would also inherit the function added by those extensions. With protocol inheritance, composition, and extensions, our concrete types contain only the functionality needed by the particular animal types that they conform to. Since the Lion and Alligator types originate from the Animal protocol, we can use polymorphism. Let’s look at how this works: var animals = [Animal]() animals.append(Alligator()) animals.append(Alligator()) animals.append(Lion()) for (index, animal) in animals.enumerated() { if let _ = animal as? AirAnimal { print(“Animal at \(index) is Air”) } if let _ = animal as? LandAnimal { print(“Animal at \(index) is Land”) } if let _ = animal as? SeaAnimal { print(“Animal at \(index) is Sea”) } } In this example, we create an array that will contain Animal types named animals. We then create two instances of the Alligator type and one instance of the Lion type that are added to the animals array. Finally, we use a for-in loop to loop through the array and print out the animal type based on the protocol that the instance conforms to. Upgrade your knowledge and become an expert in the latest version of the Swift programming language with Mastering Swift 5.3, 6th Edition by Jon Hoffman. About Jon Hoffman has over 25 years of experience in the field of information technology. He has worked in the areas of system administration, network administration, network security, application development, and architecture. Currently, Jon works as an Enterprise Software Manager for Syn-Tech Systems.
Read more
  • 0
  • 0
  • 5818

article-image-concurrency-and-parallelism-swift-2
Packt
22 Feb 2016
35 min read
Save for later

Concurrency and Parallelism with Swift 2

Packt
22 Feb 2016
35 min read
When I first started learning Objective-C, I already had a good understanding of concurrency and multitasking with my background in other languages such as C and Java. This background made it very easy for me to create multithreaded applications using threads in Objective-C. Then, Apple changed everything for me when they released Grand Central Dispatch (GCD) with OS X 10.6 and iOS 4. At first, I went into denial; there was no way GCD could manage my application's threads better than I could. Then I entered the anger phase, GCD was hard to use and understand. Next was the bargaining phase, maybe I can use GCD with my threading code, so I could still control how the threading worked. Then there was the depression phase, maybe GCD does handle the threading better than I can. Finally, I entered the wow phase; this GCD thing is really easy to use and works amazingly well. After using Grand Central Dispatch and Operation Queues with Objective-C, I do not see a reason for using manual threads with Swift. In this artcle, we will learn the following topics: Basics of concurrency and parallelism How to use GCD to create and manage concurrent dispatch queues How to use GCD to create and manage serial dispatch queues How to use various GCD functions to add tasks to the dispatch queues How to use NSOperation and NSOperationQueues to add concurrency to our applications (For more resources related to this topic, see here.) Concurrency and parallelism Concurrency is the concept of multiple tasks starting, running, and completing within the same time period. This does not necessarily mean that the tasks are executing simultaneously. In order for tasks to be run simultaneously, our application needs to be running on a multicore or multiprocessor system. Concurrency allows us to share the processor or cores with multiple tasks; however, a single core can only execute one task at a given time. Parallelism is the concept of two or more tasks running simultaneously. Since each core of our processor can only execute one task at a time, the number of tasks executing simultaneously is limited to the number of cores within our processors. Therefore, if we have, for example, a four-core processor, then we are limited to only four tasks running simultaneously. Today's processors can execute tasks so quickly that it may appear that larger tasks are executing simultaneously. However, within the system, the larger tasks are actually taking turns executing subtasks on the cores. In order to understand the difference between concurrency and parallelism, let's look at how a juggler juggles balls. If you watch a juggler, it seems they are catching and throwing multiple balls at any given time; however, a closer look reveals that they are, in fact, only catching and throwing one ball at a time. The other balls are in the air waiting to be caught and thrown. If we want to be able to catch and throw multiple balls simultaneously, we need to add multiple jugglers. This example is really good because we can think of jugglers as the cores of a processer. A system with a single core processor (one juggler), regardless of how it seems, can only execute one task (catch and throw one ball) at a time. If we want to execute more than one task at a time, we need to use a multicore processor (more than one juggler). Back in the old days when all the processors were single core, the only way to have a system that executed tasks simultaneously was to have multiple processors in the system. This also required specialized software to take advantage of the multiple processors. In today's world, just about every device has a processor that has multiple cores, and both the iOS and OS X operating systems are designed to take advantage of the multiple cores to run tasks simultaneously. Traditionally, the way applications added concurrency was to create multiple threads; however, this model does not scale well to an arbitrary number of cores. The biggest problem with using threads was that our applications ran on a variety of systems (and processors), and in order to optimize our code, we needed to know how many cores/processors could be efficiently used at a given time, which is sometimes not known at the time of development. In order to solve this problem, many operating systems, including iOS and OS X, started relying on asynchronous functions. These functions are often used to initiate tasks that could possibly take a long time to complete, such as making an HTTP request or writing data to disk. An asynchronous function typically starts the long running task and then returns prior to the task completion. Usually, this task runs in the background and uses a callback function (such as closure in Swift) when the task completes. These asynchronous functions work great for the tasks that the OS provides them for, but what if we needed to create our own asynchronous functions and do not want to manage the threads ourselves? For this, Apple provides a couple of technologies. In this artcle, we will be covering two of these technologies. These are GCD and operation queues. GCD is a low-level C-based API that allows specific tasks to be queued up for execution and schedules the execution on any of the available processor cores. Operation queues are similar to GCD; however, they are Cocoa objects and are internally implemented using GCD. Let's begin by looking at GCD. Grand Central Dispatch Grand Central Dispatch provides what is known as dispatch queues to manage submitted tasks. The queues manage these submitted tasks and execute them in a first-in, first- out (FIFO) order. This ensures that the tasks are started in the order they were submitted. A task is simply some work that our application needs to perform. As examples, we can create tasks that perform simple calculations, read/write data to disk, make an HTTP request, or anything else that our application needs to do. We define these tasks by placing the code inside either a function or a closure and adding it to a dispatch queue. GCD provides three types of queues: Serial queues: Tasks in a serial queue (also known as a private queue) are executed one at a time in the order they were submitted. Each task is started only after the preceding task is completed. Serial queues are often used to synchronize access to specific resources because we are guaranteed that no two tasks in a serial queue will ever run simultaneously. Therefore, if the only way to access the specific resource is through the tasks in the serial queue, then no two tasks will attempt to access the resource at the same time or be out of order. Concurrent queues: Tasks in a concurrent queue (also known as a global dispatch queue) execute concurrently; however, the tasks are still started in the order that they were added to the queue. The exact number of tasks that can be executing at any given instance is variable and is dependent on the system's current conditions and resources. The decision on when to start a task is up to GCD and is not something that we can control within our application. Main dispatch queue: The main dispatch queue is a globally available serial queue that executes tasks on the application's main thread. Since tasks put into the main dispatch queue run on the main thread, it is usually called from a background queue when some background processing has finished and the user interface needs to be updated. Dispatch queues offer a number of advantages over traditional threads. The first and foremost advantage is, with dispatch queues, the system handles the creation and management of threads rather than the application itself. The system can scale the number of threads, dynamically based on the overall available resources of the system and the current system conditions. This means that dispatch queues can manage the threads with greater efficiency than we could. Another advantage of dispatch queues is we are able to control the order that our tasks are started. With serial queues, not only do we control the order in which tasks are started, but also ensure that one task does not start before the preceding one is complete. With traditional threads, this can be very cumbersome and brittle to implement, but with dispatch queues, as we will see later in this artcle, it is quite easy. Creating and managing dispatch queues Let's look at how to create and use a dispatch queue. The following three functions are used to create or retrieve queues. These functions are as follows: dispatch_queue_create: This creates a dispatch queue of either the concurrent or serial type dispatch_get_global_queue: This returns a system-defined global concurrent queue with a specified quality of service dispatch_get_main_queue: This returns the serial dispatch queue associated with the application's main thread We will also be looking at several functions that submit tasks to a queue for execution. These functions are as follows: dispatch_async: This submits a task for asynchronous execution and returns immediately. dispatch_sync: This submits a task for synchronous execution and waits until it is complete before it returns. dispatch_after: This submits a task for execution at a specified time. dispatch_once: This submits a task to be executed once and only once while this application is running. It will execute the task again if the application restarts. Before we look at how to use the dispatch queues, we need to create a class that will help us demonstrate how the various types of queues work. This class will contain two basic functions. The first function will simply perform some basic calculations and then return a value. Here is the code for this function, which is named doCalc(): func doCalc() { var x=100 var y = x*x _ = y/x } The other function, which is named performCalculation(), accepts two parameters. One is an integer named iterations, and the other is a string named tag. The performCalculation () function calls the doCalc() function repeatedly until it calls the function the same number of times as defined by the iterations parameter. We also use the CFAbsoluteTimeGetCurrent() function to calculate the elapsed time it took to perform all of the iterations and then print the elapse time with the tag string to the console. This will let us know when the function completes and how long it took to complete it. The code for this function looks similar to this: func performCalculation(iterations: Int, tag: String) { let start = CFAbsoluteTimeGetCurrent() for var i=0; i<iterations; i++ { self.doCalc() } let end = CFAbsoluteTimeGetCurrent() print("time for (tag): (end-start)") } These functions will be used together to keep our queues busy, so we can see how they work. Let's begin by looking at the GCD functions by using the dispatch_queue_create() function to create both concurrent and serial queues. Creating queues with the dispatch_queue_create() function The dispatch_queue_create() function is used to create both concurrent and serial queues. The syntax of the dispatch_queue_create() function looks similar to this: func dispatch_queue_t dispatch_queue_create (label:   UnsafePointer<Int8>, attr: dispatch_queue_attr_t!) - >     dispatch_queue_t! It takes the following parameters: label: This is a string label that is attached to the queue to uniquely identify it in debugging tools, such as Instruments and crash reports. It is recommended that we use a reverse DNS naming convention. This parameter is optional and can be nil. attr: This specifies the type of queue to make. This can be DISPATCH_QUEUE_SERIAL, DISPATCH_QUEUE_CONCURRENT or nil. If this parameter is nil, a serial queue is created. The return value for this function is the newly created dispatch queue. Let's see how to use the dispatch_queue_create() function by creating a concurrent queue and seeing how it works. Some programming languages use the reverse DNS naming convention to name certain components. This convention is based on a registered domain name that is reversed. As an example, if we worked for company that had a domain name mycompany.com with a product called widget, the reverse DNS name will be com.mycompany.widget. Creating concurrent dispatch queues with the dispatch_queue_create() function The following line creates a concurrent dispatch queue with the label of cqueue.hoffman.jon: let queue = dispatch_queue_create("cqueue.hoffman.jon", DIS-PATCH_QUEUE_CONCURRENT) As we saw in the beginning of this section, there are several functions that we can use to submit tasks to a dispatch queue. When we work with queues, we generally want to use the dispatch_async() function to submit tasks because when we submit a task to a queue, we usually do not want to wait for a response. The dispatch_async() function has the following signature: func dispatch_async (queue: dispatch_queue_t!, block: dis- patch_queue_block!) The following example shows how to use the dispatch_async() function with the concurrent queue we just created: let c = { performCalculation(1000, tag: "async0") } dispatch_async(queue, c) In the preceding code, we created a closure, which represents our task, that simply calls the performCalculation() function of the DoCalculation instance requesting that it runs through 1000 iterations of the doCalc() function. Finally, we use the dispatch_async() function to submit the task to the concurrent dispatch queue. This code will execute the task in a concurrent dispatch queue, which is separate from the main thread. While the preceding example works perfectly, we can actually shorten the code a little bit. The next example shows that we do not need to create a separate closure as we did in the preceding example; we can also submit the task to execute like this: dispatch_async (queue) { calculation.performCalculation(10000000, tag: "async1") } This shorthand version is how we usually submit small code blocks to our queues. If we have larger tasks, or tasks that we need to submit multiple times, we will generally want to create a closure and submit the closure to the queue as we showed originally. Let's see how the concurrent queue actually works by adding several items to the queue and looking at the order and time that they return. The following code will add three tasks to the queue. Each task will call the performCalculation() function with various iteration counts. Remember that the performCalculation() function will execute the calculation routine continuously until it is executed the number of times as defined by the iteration count passed in. Therefore, the larger the iteration count we pass into the performCalculation() function, the longer it should take to execute. Let's take a look at the following code: dispatch_async (queue) { calculation.performCalculation(10000000, tag: "async1") } dispatch_async(queue) { calculation.performCalculation(1000, tag: "async2") } dispatch_async(queue) { calculation.performCalculation(100000, tag: "async3") } Notice that each of the functions is called with a different value in the tag parameter. Since the performCalculation() function prints out the tag variable with the elapsed time, we can see the order in which the tasks complete and the time it took to execute. If we execute the preceding code, we should see the following results: time for async2: 0.000200986862182617 time for async3: 0.00800204277038574 time for async1: 0.461670994758606 The elapse time will vary from one run to the next and from system to system. Since the queues function in a FIFO order, the task that had the tag of async1 was executed first. However, as we can see from the results, it was the last task to finish. Since this is a concurrent queue, if it is possible (if the system has available resources), the blocks of code will execute concurrently. This is why the tasks with the tags of async2 and async3 completed prior to the task that had the async1 tag, even though the execution of the async1 task began before the other two. Now, let's see how a serial queue executes tasks. Creating a serial dispatch queue with the dispatch_queue_create() function A serial queue functions is a little different than a concurrent queue. A serial queue will only execute one task at a time and will wait for one task to complete before starting the next task. This queue, like the concurrent dispatch queue, follows a first-in first-out order. The following line of code will create a serial queue with the label of squeue.hoffman.jon: let queue2 = dispatch_queue_create("squeue.hoffman.jon", DIS-PATCH_QUEUE_SERIAL) Notice that we create the serial queue with the DISPATCH_QUEUE_SERIAL attribute. If you recall, when we created the concurrent queue, we created it with the DISPATCH_QUEUE_CONCURRENT attribute. We can also set this attribute to nil, which will create a serial queue by default. However, it is recommended to always set the attribute to either DISPATCH_QUEUE_SERIAL or DISPATCH_QUEUE_CONCURRENT to make it easier to identify which type of queue we are creating. As we saw with the concurrent dispatch queues, we generally want to use the dispatch_async() function to submit tasks because when we submit a task to a queue, we usually do not want to wait for a response. If, however, we did want to wait for a response, we would use the dispatch_synch() function. var calculation = DoCalculations() let c = { calculation.performCalculation(1000, tag: "sync0") } dispatch_async(queue2, c) Just like with the concurrent queues, we do not need to create a closure to submit a task to the queue. We can also submit the task like this: dispatch_async (queue2) { calculation.performCalculation(100000, tag: "sync1") } Let's see how the serial queues works by adding several items to the queue and looking at the order and time that they complete. The following code will add three tasks, which will call the performCalculation() function with various iteration counts, to the queue: dispatch_async (queue2) { calculation.performCalculation(100000, tag: "sync1") } dispatch_async(queue2) { calculation.performCalculation(1000, tag: "sync2") } dispatch_async(queue2) { calculation.performCalculation(100000, tag: "sync3") } Just like with the concurrent queue example, we call the performCalculation() function with various iteration counts and different values in the tag parameter. Since the performCalculation() function prints out the tag string with the elapsed time, we can see the order that the tasks complete in and the time it takes to execute. If we execute this code, we should see the following results: time for sync1: 0.00648999214172363 time for sync2: 0.00009602308273315 time for sync3: 0.00515800714492798 The elapse time will vary from one run to the next and from system to system.  Unlike the concurrent queues, we can see that the tasks completed in the same order that they were submitted, even though the sync2 and sync3 tasks took considerably less time to complete. This demonstrates that a serial queue only executes one task at a time and that the queue waits for each task to complete before starting the next one. Now that we have seen how to use the dispatch_queue_create() function to create both concurrent and serial queues, let's look at how we can get one of the four system- defined, global concurrent queues using the dispatch_get_global_queue() function. Requesting concurrent queues with the dispatch_get_global_queue() function The system provides each application with four concurrent global dispatch queues of different priority levels. The different priority levels are what distinguish these queues. The four priorities are: DISPATCH_QUEUE_PRIORITY_HIGH: The items in this queue run with the highest priority and are scheduled before items in the default and low priority queues DISPATCH_QUEUE_PRIORITY_DEFAULT: The items in this queue run at the default priority and are scheduled before items in the low priority queue but after items in the high priority queue DISPATCH_QUEUE_PRIORITY_LOW: The items in this queue run with a low priority and are schedule only after items in the high and default queues DISPATCH_QUEUE_PRIORITY_BACKGROUND: The items in this queue run with a background priority, which has the lowest priority Since these are global queues, we do not need to actually create them; instead, we ask for a reference to the queue with the priority level needed. To request a global queue, we use the dispatch_get_global_queue() function. This function has the following syntax: func dispatch_get_global_queue(identifier: Int, flags: UInt) -> ? dispatch_queue_t! Here, the following parameters are defined: identifier: This is the priority of the queue we are requesting flags: This is reserved for future expansion and should be set to zero at this time We request a queue using the dispatch_get_global_queue() function, as shown in the following example: let queue = dispatch_get_global_queue (DISPATCH_QUEUE_PRIORITY_DEFAULT, 0) In this example, we are requesting the global queue with the default priority. We can then use this queue exactly as we used the concurrent queues that we created with the dispatch_queue_create() function. The difference between the queues returned with the dispatch_get_global_queue() function and the ones created with the dispatch_create_queue() function is that with the dispatch_create_queue() function, we are actually creating a new queue. The queues that are returned with the dispatch_get_global_queue() function are global queues that are created when our application first starts; therefore, we are requesting a queue rather than creating a new one. When we use the dispatch_get_global_queue() function, we avoid the overhead of creating the queue; therefore, I recommend using the dispatch_get_global_queue() function unless you have a specific reason to create a queue. Requesting the main queue with the dispatch_get_main_queue() function The dispatch_get_main_queue() function returns the main queue for our application. The main queue is automatically created for the main thread when the application starts. This main queue is a serial queue; therefore, items in this queue are executed one at a time, in the order that they were submitted. We will generally want to avoid using this queue unless we have a need to update the user interface from a background thread. The dispatch_get_main_queue() function has the following syntax: func dispatch_get_main_queue() -> dispatch_queue_t! The following code example shows how to request the main queue: let mainQueue = dispatch_get_main_queue(); We will then submit tasks to the main queue exactly as we would any other serial queue. Just remember that anything submitted to this queue will run on the main thread, which is the thread that all the user interface updates run on; therefore, if we submitted a long running task, the user interface will freeze until that task is completed. In the previous sections, we saw how the dispatch_async() functions submit tasks to concurrent and serial queues. Now, let's look at two additional functions that we can use to submit tasks to our queues. The first function we will look at is the dispatch_after() function. Using the dispatch_after() function There will be times that we need to execute tasks after a delay. If we were using a threading model, we would need to create a new thread, perform some sort of delay or sleep function, and execute our task. With GCD, we can use the dispatch_after() function. The dispatch_after() function takes the following syntax: func dispatch_after(when: dispatch_time_t, queue:   dispatch_queue_t, block: dispatch_block_t) Here, the dispatch_after() function takes the following parameters: when: This is the time that we wish the queue to execute our task in queue: This is the queue that we want to execute our task in block: This is the task to execute As with the dispatch_async() and dispatch_synch() functions, we do not need to include our task as a parameter. We can include our task to execute between two curly brackets exactly as we did previously with the dispatch_async() and dispatch_synch() functions. As we can see from the dispatch_after() function, we use the dispatch_time_t type to define the time to execute the task. We use the dispatch_time() function to create the dispatch_time_t type. The dispatch_time() function has the following syntax: func dispatch_time(when: dispatch_time_t, delta:Int64) ->   dispatch_time_t Here, the dispatch_time() function takes the following parameter: when: This value is used as the basis for the time to execute the task. We generally pass the DISPATCH_TIME_NOW value to create the time, based on the current time. delta: This is the number of nanoseconds to add to the when parameter to get our time. We will use the dispatch_time() and dispatch_after() functions like this: var delayInSeconds = 2.0 let eTime = dispatch_time(DISPATCH_TIME_NOW, Int64(delayInSeconds * Double(NSEC_PER_SEC))) dispatch_after(eTime, queue2) { print("Times Up") } The preceding code will execute the task after a two-second delay. In the dispatch_ time() function, we create a dispatch_time_t type that is two seconds in the future. The NSEC_PER_SEC constant is use to calculate the nanoseconds from seconds. After the two-second delay, we print the message, Times Up, to the console. There is one thing to watch out for with the dispatch_after() function. Let's take a look at the following code: let queue2 = dispatch_queue_create("squeue.hoffman.jon", DIS-PATCH_QUEUE_SERIAL) var delayInSeconds = 2.0 let pTime = dispatch_time (DISPATCH_TIME_NOW,Int64(delayInSeconds * Double(NSEC_PER_SEC))) dispatch_after(pTime, queue2) { print("Times Up") } dispatch_sync(queue2) { calculation.performCalculation(100000, tag: "sync1") } In this code, we begin by creating a serial queue and then adding two tasks to the queue. The first task uses the dispatch_after() function, and the second task uses the dispatch_sync() function. Our initial thought would be that when we executed this code within the serial queue, the first task would execute after a two-second delay and then the second task would execute; however, this would not be correct. The first task is submitted to the queue and executed immediately. It also returns immediately, which lets the queue execute the next task while it waits for the correct time to execute the first task. Therefore, even though we are running the tasks in a serial queue, the second task completes before the first task. The following is an example of the output if we run the preceding code: time for sync1: 0.00407701730728149 Times Up The final GCD function that we are going to look at is dispatch_once(). Using the dispatch_once() function The dispatch_once() function will execute a task once, and only once, for the lifetime of the application. What this means is that the task will be executed and marked as executed, then that task will not be executed again unless the application restarts. While the dispatch_once() function can be and has been used to implement the singleton pattern, there are other easier ways to do this. The dispatch_once() function is great for executing initialization tasks that need to run when our application initially starts. These initialization tasks can consist of initializing our data store or variables and objects. The following code shows the syntax for the dispatch_once() function: func dispatch_once (predicate: UnsafeMutablePoin- ter<dispatch_once_t>,block: dispatch_block_t!) Let's look at how to use the dispatch_once() function: var token: dispatch_once_t = 0 func example() { dispatch_once(&token) { print("Printed only on the first call") } print("Printed for each call") } In this example, the line that prints the message, Printed only on the first call, will be executed only once, no matter how many times the function is called. However, the line that prints the Printed for each call message will be executed each time the function is called. Let's see this in action by calling this function four times, like this: for i in 0..<4 { example() } If we execute this example, we should see the following output: Printed only on the first call Printed for each call Printed for each call Printed for each call Printed for each call Notice, in this example, that we only see the Printed only on the first call message once whereas we see the Printed for each call message all the four times that we call the function. Now that we have looked at GCD, let's take a look at operation queues. Using NSOperation and NSOperationQueue types The NSOperation and NSOperationQueues types, working together, provide us with an alternative to GCD for adding concurrency to our applications. Operation queues are Cocoa objects that function like dispatch queues and internally, operation queues are implemented using GCD. We define the tasks (NSOperations) that we wish to execute and then add the task to the operation queue (NSOperationQueue). The operation queue will then handle the scheduling and execution of tasks. Operation queues are instances of the NSOperationQueue class and operations are instances of the NSOperation class. The operation represents a single unit of work or task. The NSOperation type is an abstract class that provides a thread-safe structure for modeling the state, priority, and dependencies. This class must be subclassed in order to perform any useful work. Apple does provide two concrete implementations of the NSOperation type that we can use as-is for situations where it does not make sense to build a custom subclass. These subclasses are NSBlockOperation and NSInvocationOperation. More than one operation queue can exist at the same time, and actually, there is always at least one operation queue running. This operation queue is known as the main queue. The main queue is automatically created for the main thread when the application starts and is where all the UI operations are performed. There are several ways that we can use the NSOperation and NSOperationQueues classes to add concurrency to our application. In this artcle, we will look at three different ways. The first one we will look at is using the NSBlockOperation implementation of the NSOperation abstract class. Using the NSBlockOperation implementation of NSOperation In this section, we will be using the same DoCalculation class that we used in the Grand Central Dispatch section to keep our queues busy with work so that we can see how the NSOpererationQueues class work. The NSBlockOperation class is a concrete implementation of the NSOperation type that can manage the execution of one or more blocks. This class can be used to execute several tasks at once without the need to create separate operations for each task. Let's see how to use the NSBlockOperation class to add concurrency to our application. The following code shows how to add three tasks to an operation queue using a single NSBlockOperation instance: let calculation = DoCalculations() let operationQueue = NSOperationQueue() let blockOperation1: NSBlockOperation = NSBlockOpera-tion.init(block: { calculation.performCalculation(10000000, tag: "Operation 1") }) blockOperation1.addExecutionBlock( { calculation.performCalculation(10000, tag: "Operation 2") } ) blockOperation1.addExecutionBlock( { calculation.performCalculation(1000000, tag: "Operation 3") } ) operationQueue.addOperation(blockOperation1) In this code, we begin by creating an instance of the DoCalculation class and an instance of the NSOperationQueue class. Next, we created an instance of the NSBlockOperation class using the init constructor. This constructor takes a single parameter, which is a block of code that represents one of the tasks we want to execute in the queue. Next, we add two additional tasks to the NSBlockOperation instance using the addExecutionBlock() method. This is one of the differences between dispatch queues and operations. With dispatch queues, if resources are available, the tasks are executed as they are added to the queue. With operations, the individual tasks are not executed until the operation itself is submitted to an operation queue. Once we add all of the tasks to the NSBlockOperation instance, we then add the operation to the NSOperationQueue instance that we created at the beginning of the code. At this point, the individual tasks within the operation start to execute. This example shows how to use NSBlockOperation to queue up multiple tasks and then pass the tasks to the operation queue. The tasks are executed in a FIFO order; therefore, the first task that is added to the NSBlockOperation instance will be the first task executed. However, since the tasks can be executed concurrently if we have the available resources, the output from this code should look similar to this: time for Operation 2: 0.00546294450759888 time for Operation 3: 0.0800899863243103 time for Operation 1: 0.484337985515594 What if we do not want our tasks to run concurrently? What if we wanted them to run serially like the serial dispatch queue? We can set a property in our operation queue that defines the number of tasks that can be run concurrently in the queue. The property is called maxConcurrentOperationCount and is used like this: operationQueue.maxConcurrentOperatio nCount = 1 However, if we added this line to our previous example, it will not work as expected. To see why this is, we need to understand what the property actually defines. If we look at Apple's NSOperationQueue class reference, the definition of the property says, "The maximum number of queued operations that can execute at the same time." What this tells us is that the maxConcurrentOperationCount property defines the number of operations (this is the key word) that can be executed at the same time. The NSBlockOperation instance, which we added all of our tasks to, represents a single operation; therefore, no other NSBlockOperation added to the queue will execute until the first one is complete, but the individual tasks within the operation will execute concurrently. To run the tasks serially, we would need to create a separate instance of the NSBlockOperations for each task. Using an instance of the NSBlockOperation class good if we have a number of tasks that we want to execute concurrently, but they will not start executing until we add the operation to an operation queue. Let's look at a simpler way of adding tasks to an operation queue using the queues addOperationWithBlock() methods. Using the addOperationWithBlock() method of the operation queue The NSOperationQueue class has a method named addOperationWithBlock() that makes it easy to add a block of code to the queue. This method automatically wraps the block of code in an operation object and then passes that operation to the queue itself. Let's see how to use this method to add tasks to a queue: let operationQueue = NSOperationQueue() let calculation = DoCalculations() operationQueue.addOperationWithBlock() { calculation.performCalculation(10000000, tag: "Operation1") } operationQueue.addOperationWithBlock() { calculation.performCalculation(10000, tag: "Operation2") } operationQueue.addOperationWithBlock() { calculation.performCalculation(1000000, tag: "Operation3") } In the NSBlockOperation example, earlier in this artcle, we added the tasks that we wished to execute into an NSBlockOperation instance. In this example, we are adding the tasks directly to the operation queue, and each task represents one complete operation. Once we create the instance of the operation queue, we then use the addOperationWithBlock() method to add the tasks to the queue. Also, in the NSBlockOperation example, the individual tasks did not execute until all of the tasks were added to the NSBlockOperation object and then that operation was added to the queue. This addOperationWithBlock() example is similar to the GCD example where the tasks begin executing as soon as they are added to the operation queue. If we run the preceding code, the output should be similar to this: time for Operation2: 0.0115870237350464 time for Operation3: 0.0790849924087524 time for Operation1: 0.520610988140106 You will notice that the operations are executed concurrently. With this example, we can execute the tasks serially by using the maxConcurrentOperationCount property that we mentioned earlier. Let's try this by initializing the NSOperationQueue instance like this: var operationQueue = NSOperationQueue() operationQueue.maxConcurrentOperationCount = 1 Now, if we run the example, the output should be similar to this: time for Operation1: 0.418763995170593 time for Operation2: 0.000427007675170898 time for Operation3: 0.0441589951515198 In this example, we can see that each task waited for the previous task to complete prior to starting. Using the addOperationWithBlock() method to add tasks, the operation queue is generally easier than using the NSBlockOperation method; however, the tasks will begin as soon as they are added to the queue, which is usually the desired behavior. Now, let's look at how we can subclass the NSOperation class to create an operation that we can add directly to an operation queue. Subclassing the NSOperation class The previous two examples showed how to add small blocks of code to our operation queues. In these examples, we called the performCalculations method in the DoCalculation class to perform our tasks. These examples illustrate two really good ways to add concurrency for functionally that is already written, but what if, at design time, we want to design our DoCalculation class for concurrency? For this, we can subclass the NSOperation class. The NSOperation abstract class provides a significant amount of infrastructure. This allows us to very easily create a subclass without a lot of work. We should at least provide an initialization method and a main method. The main method will be called when the queue begins executing the operation: Let's see how to implement the DoCalculation class as a subclass of the NSOperation class; we will call this new class MyOperation: class MyOperation: NSOperation { let iterations: Int let tag: String init(iterations: Int, tag: String) { self.iterations = iterations self.tag = tag } override func main() { performCalculation() } func performCalculation() { let start = CFAbsoluteTimeGetCurrent() for var i=0; i<iterations; i++ { self.doCalc() } let end = CFAbsoluteTimeGetCurrent() print("time for (tag): (end-start)") } func doCalc() { let x=100 let y = x*x _ = y/x } } We begin by defining that the MyOperation class is a subclass of the NSOperation class. Within the implementation of the class, we define two class constants, which represent the iteration count and the tag that the performCalculations() method uses. Keep in mind that when the operation queue begins executing the operation, it will call the main() method with no parameters; therefore, any parameters that we need to pass in must be passed in through the initializer. In this example, our initializer takes two parameters that are used to set the iterations and tag classes constants. Then the main() method, that the operation queue is going to call to begin execution of the operation, simply calls the performCalculation() method. We can now very easily add instances of our MyOperation class to an operation queue, like this: var operationQueue = NSOperationQueue() operationQueue.addOperation(MyOperation (iterations: 10000000, tag: "Operation 1")) operationQueue.addOperation(MyOperation (iterations: 10000, tag: "Operation 2")) operationQueue.addOperation(MyOperation (iterations: 1000000, tag: "Operation 3")) If we run this code, we will see the following results: time for Operation 2: 0.00187397003173828 time for Operation 3: 0.104826986789703 time for Operation 1: 0.866684019565582 As we saw earlier, we can also execute the tasks serially by adding the following line, which sets the maxConcurrentOperationCount property of the operation queue: operationQueue.maxConcurrentOperationCount = 1 If we know that we need to execute some functionality concurrently prior to writing the code, I will recommend subclassing the NSOperation class, as shown in this example, rather than using the previous examples. This gives us the cleanest implementation; however, there is nothing wrong with using the NSBlockOperation class or the addOperationWithBlock() methods described earlier in this section. Summary Before we consider adding concurrency to our application, we should make sure that we understand why we are adding it and ask ourselves whether it is necessary. While concurrency can make our application more responsive by offloading work from our main application thread to a background thread, it also adds extra complexity to our code and overhead to our application. I have even seen numerous applications, in various languages, which actually run better after we pulled out some of the concurrency code. This is because the concurrency was not well thought out or planned. With this in mind, it is always a good idea to think and talk about concurrency while we are discussing the application's expected behavior. At the start of this artcle, we had a discussion about running tasks concurrently compared to running tasks in parallel. We also discussed the hardware limitation that limits how many tasks can run in parallel on a given device. Having a good understanding of those concepts is very important to understanding how and when to add concurrency to our projects. While GCD is not limited to system-level applications, before we use it in our application, we should consider whether operation queues would be easier and more appropriate for our needs. In general, we should use the highest level of abstraction that meets our needs. This will usually point us to using operation queues; however, there really is nothing preventing us from using GCD, and it may be more appropriate for our needs. One thing to keep in mind with operation queues is that they do add additional overhead because they are Cocoa objects. For the large majority of applications, this little extra overhead should not be an issue or even noticed; however, for some projects, such as games that need every last resource that they can get, this extra overhead might very well be an issue. Resources for Article: Further resources on this subject: Swift for Open Source Developers [article] Your First Swift 2 Project [article] Exploring Swift [article]
Read more
  • 0
  • 0
  • 5671

article-image-add-a-twitter-sign-in-to-your-ios-app-with-twitterkit
Doron Katz
15 Mar 2015
5 min read
Save for later

Add a Twitter Sign In To Your iOS App with TwitterKit

Doron Katz
15 Mar 2015
5 min read
What is TwitterKit & Digits? In this post we take a look at Twitter’s new Sign-in API, TwitterKit and Digits, bundled as part of it’s new Fabric suite of tools, announced by Twitter earlier this year, as well as providing you with two quick snippets on on how to integrate Twitter’s sign-in mechanism into your iOS app. Facebook, and to a lesser-extent Google, have for a while dominated the single sign-on paradigm, through their SDK or Accounts.framework on iOS, to encourage developers to provide a consolidated form of login for their users. Twitter has decided to finally get on the band-wagon and work toward improving its brand through increasing sign-on participation, and providing a concise way for users to log-in to their favorite apps without needing to remember individual passwords. By providing a Login via Twitter button, developers will gain the user’s Twitter identity, and subsequently their associated twitter tweets and associations. That is, once the twitter account is identified, the app can engage followers of that account (friends), or to access the user’s history of tweets, to data-mind for a specific keyword or hashtag. In addition to offering a single sign-on, Twitter is also offering Digits, the ability for users to sign-on anonymously using one’s phone number, synonymous with Facebook’s new Anonymous Login API.   The benefits of Digit The rationale behind Digits is to provide users with the option of trusting the app or website and providing their Twitter identification in order to log-in. Another option for the more hesitant ones wanting to protect their social graph history, is to only provide a unique number, which happens to be a mobile number, as a means of identification and authentication. Another benefit for users is that logging in is dead-simple, and rather than having to go through a deterring form of identification questions, you just ask them for their number, from which they will get an authentication confirmation SMS, allowing them to log-in. With a brief introduction to TwitterKit and Digits, let’s show you how simple it is to implement each one. Logging in with TwitterKit Twitter wanted to make implementing its authentication mechanism a more simpler and attractive process for developers, which they did. By using the SDK as part of Twitter’s Fabric suite, you will already get your Twitter app set-up and ready, registered for use with the company’s SDK. TwitterKit aims to leverage the existing Twitter account on the iOS, using the Accounts.framework, which is the preferred and most rudimentary approach, with a fallback to using the OAuth mechanism. The easiest way to implement Twitter authentication is through the generated button, TWTRLogInButton, as we will demonstrate using iOS’Swift language. let authenticationButton = TWTRLogInButton(logInCompletion: { (session, error) in if (session != nil) { //We signed in, storing session in session object. } else { //we get an error, accessible from error object } }) It’s quite simple, leaving you with a TWTRLoginButton button subclass, that users can add to your view hierarchy and have users interact with. Logging in with Digits Having created a login button using TwitterKit, we will now create the same feature using Digits. The simplicity of implementation is maintained with Digits, with the simplest process once again to create a pre-configured button, DGTAuthenticateButton: let authenticationButton = TWTRLogInButton(logInCompletion: { (session, error) in if (session != nil) { //We signed in, storing session in session object. } else { //we get an error, accessible from error object } }) Summary Implementing TwitterKit and Digits are both quite straight forward in iOS, with different intentions. Whereas TwitterKit allows you to have full-access to the authenticated user’s social history, the latter allows for a more abbreviated, privacy-protected approach to authenticating. If at some stage the user decides to trust the app and feels more comfortable providing full access of her or his social history, you can defer catering to that till later in the app usage. The complete iOS reference for TwitterKit and Digits can be found by clicking here. The popularity and uptake of TwitterKit remains to be seen, but as an extra option for developers, when adding Facebook and Google+ login, users will have the option to pick their most trusted social media tool as their choice of authentication. Providing an anonymous mode of login also falls in line with the more privacy-conscious world, and Digits certainly provides a seamless way of implementing, and impressively straight-forward way for users to authenticate using their phone number. We have briefly demonstrated how to interact with Twitter’s SDK using iOS and Swift, but there is also an Android SDK version, with a Web version in the pipeline very soon, according to Twitter. This is certainly worth exploring, along with the rest of the tools offered in the Fabric suite, including analytics and beta-distribution tools, and more. About the author Doron Katz is an established Mobile Project Manager, Architect and Developer, and a pupil of the methodology of Agile Project Management,such as applying Kanban principles. Doron also believes in BehaviourDriven Development (BDD), anticipating user interaction prior to design, that is. Doron is also heavily involved in various technical user groups, such as CocoaHeads Sydney, and Adobe user Group.
Read more
  • 0
  • 1
  • 5537

article-image-your-first-swift-program
Packt
20 Feb 2018
4 min read
Save for later

Your First Swift Program

Packt
20 Feb 2018
4 min read
 In this article, by Keith Moon author of the book Swift 4 Programming Cookbook, we will learn how to write your first swift program. (For more resources related to this topic, see here.) Your first Swift program In this first recipe will be get up and running with Swift using a Swift Playground, and run our first piece of Swift code. Getting ready To run our first Swift program, we first need to download and install our IDE. During the beta of Apple's Xcode 9, it is available as a direct download from Apple's developer website at http://developer.apple.com/download, access to this beta will require a free Apple developer account. Once the beta has ended and Xcode 9 is publically available, it will also be available from the Mac App Store. By obtaining it from the Mac App Store, you will automatically be informed of updates, so this is the preferred route, once Xcode 9 is out of beta. Xcode from the Mac App Store Open up the Mac App Store, either from the dock or via Spotlight: Search for xcode: Click Install: Xcode is a large download (over 4 GB). So, depending on your internet connection, this could take a while! Progress can be monitored from Launchpad: Xcode as a direct download Go to the Apple Developer download page at http://developer.apple.com/download  Click the Download button to download Xcode within a .xip file.  Double click on the downloaded file to unpack the Xcode application. Drag the Xcode application into your Applications folder How to do it... With Xcode downloaded, let create our first Swift playground: Launch Xcode from the icon in your dock. From the welcome screen, choose Get started with a playground. From the template chooser, select the blank template from the iOS tab: Choose a name for your playground and a location to save it: Xcode Playgrounds can be based on one of three different Apple platforms, iOS, tvOS and macOS (the operating system formerly known as OSX). Playgrounds provide full access to the frameworks available to either iOS, tvOS or macOS, depending on which you choose. An iOS playground will be assumed for the entirety of this chapter, chiefly because this is the platform of choice of the author. Where recipes do have UI components, the iOS platform will be used until otherwise stated. You are now presented with a view that looks like this: Let's replace the word playground with Swift!. Press the blue play button in the bottom left-hand corner of the window to execute the code in the playground: Congratulations! You have just run some Swift code. On the right-hand side of the window, you will see the output of each line of code in the playground. We can see our line of code has output "Hello, Swift!": There's more... If you put your cursor over the output on the left-hand side, you will see two buttons, one that looks like an eye, another that is a circle: Click on the eye button and you get a Quick Look box of the output. This isn't that useful for just a string, but can be useful for more visual output like colors and views. Click on the square button, and a box will be added in-line, under your code, showing the output of the code. This can be really useful if you want to see how the output changes as you change the code. Summary In this article, we learnt how to run your first swift program. Resources for Article: Further resources on this subject: Your First Swift App [article] Exploring Swift [article] Functions in Swift [article]
Read more
  • 0
  • 0
  • 5500
article-image-delegate-pattern-limitations-swift
Anthony Miller
18 Mar 2016
5 min read
Save for later

Delegate Pattern Limitations in Swift

Anthony Miller
18 Mar 2016
5 min read
If you've ever built anything using UIKit, then you are probably familiar with the delegate pattern. The delegate pattern is used frequently throughout Apple's frameworks and many open source libraries you may come in contact with. But many times, it is treated as a one-size-fits-all solution for problems that it is just not suited for. This post will describe the major shortcomings of the delegate pattern. Note: This article assumes that you have a working knowledge of the delegate pattern. If you would like to learn more about the delegate pattern, see The Swift Programming Language - Delegation. 1. Too Many Lines! Implementation of the delegate pattern can be cumbersome. Most experienced developers will tell you that less code is better code, and the delegate pattern does not really allow for this. To demonstrate, let's try implementing a new view controller that has a delegate using the least amount of lines possible. First, we have to create a view controller and give it a property for its delegate: class MyViewController: UIViewController { var delegate: MyViewControllerDelegate? } Then, we define the delegate protocol. protocol MyViewControllerDelegate { func foo() } Now we have to implement the delegate. Let's make another view controller that presents a MyViewController: class DelegateViewController: UIViewController { func presentMyViewController() { let myViewController = MyViewController() presentViewController(myViewController, animated: false, completion: nil) } } Next, our DelegateViewController needs to conform to the delegate protocol: class DelegateViewController: UIViewController, MyViewControllerDelegate { func presentMyViewController() { let myViewController = MyViewController() presentViewController(myViewController, animated: false, completion: nil) } func foo() { /// Respond to the delegate method. } } Finally, we can make our DelegateViewController the delegate of MyViewController: class DelegateViewController: UIViewController, MyViewControllerDelegate { func presentMyViewController() { let myViewController = MyViewController() myViewController.delegate = self presentViewController(myViewController, animated: false, completion: nil) } func foo() { /// Respond to the delegate method. } } That's a lot of boilerplate code that is repeated every time you want to create a new delegate. This opens you up to a lot of room for errors. In fact, the above code has a pretty big error already that we are going to fix now. 2. No Non-Class Type Delegates Whenever you create a delegate property on an object, you should use the weak keyword. Otherwise, you are likely to create a retain cycle. Retain cycles are one of the most common ways to create memory leaks and can be difficult to track down. Let's fix this by making our delegate weak: class MyViewController: UIViewController { weak var delegate: MyViewControllerDelegate? } This causes another problem though. Now we are getting a build error from Xcode! 'weak' cannot be applied to non-class type 'MyViewControllerDelegate'; consider adding a class bound. This is because you can't make a weak reference to a value type, such as a struct or an enum, so in order to use the weak keyword here, we have to guarantee that our delegate is going to be a class. Let's take Xcode's advice here and add a class bound to our protocol: protocol MyViewControllerDelegate: class { func foo() } Well, now everything builds just fine, but we have another issue. Now your delegate must be an object (sorry structs and enums!). You are now creating more constraints on what can conform to your delegate. The whole point of the delegate pattern is to allow an unknown "something" to respond to the delegate events. We should be putting as few constraints as possible on our delegate object, which brings us to the next issue with the delegate pattern. 3. Optional Delegate Methods In pure Swift, protocols don't have optional functions. This means, your delegate must implement every method in the delegate protocol, even if it is irrelevant in your case. For example, you may not always need to be notified when a user taps a cell in a UITableView. There are ways to get around this though. In Swift 2.0+, you can make a protocol extension on your delegate protocol that contains a default implementation for protocol methods that you want to make optional. Let's make a new optional method on our delegate protocol using this method: protocol MyViewControllerDelegate: class { func foo() func optionalFunction() } extension MyViewControllerDelegate { func optionalFunction() { } } This adds even more unnecessary code. It isn't really clear what the intention of this extension is unless you understand what's going on already, and there is no way to explicitly show that this method is optional. Alternatively, if you mark your protocol as @objc, you can use the optional keyword in your function declaration. The problem here is that now your delegate must be an Objective-C object. Just like our last example, this is creating additional constraints on your delegate, and this time they are even more restrictive. 4. There Can Be Only One The delegate pattern only allows for one delegate to respond to events. This may be just fine for some situations, but if you need multiple objects to be notified of an event, the delegate pattern may not work for you. Another common scenario you may come across is when you need different objects to be notified of different delegate events. The delegate pattern can be a very useful tool, which is why it is so widely used, but recognizing the limitations that it creates is important when you are deciding whether it is the right solution for any given problem. About the author Anthony Miller is the lead iOS developer at App-Order in Las Vegas, Nevada, USA. He has written and released numerous apps on the App Store and is an avid open source contributor. When he's not developing, Anthony loves board games, line-dancing, and frequent trips to Disneyland.
Read more
  • 0
  • 0
  • 5426

article-image-evenly-spaced-views-auto-layout-ios
Joe Masilotti
16 Apr 2015
5 min read
Save for later

Evenly Spaced Views with Auto Layout in iOS

Joe Masilotti
16 Apr 2015
5 min read
When the iPhone first came out there was only one screen size to worry about 320, 480. Then the Retina screen was introduced doubling the screen's resolution. Apple quickly introduced the iPhone 5 and added an extra 88 points to the bottom. With the most recent line of iPhones two more sizes were added to the mix. Before even mentioning the iPad line that is already five different combinations of heights and widths to account for. To help remedy this growing number of sizes and resolutions Apple introduced Auto Layout with iOS 6. Auto Layout is a dynamic way of laying out views with constraints and rules to let the content fit on multiple screen sizes; think "responsive" for mobile. Lots of layouts are possible with Auto Layout but some require an extra bit of work. One of the more common, albeit tricky, arrangements is to have evenly spaced elements. Having the view scale up to different resolutions and look great on all devices isn't hard and can be done in both Interface Builder or manually in code. Let's walk through how to evenly space views with Auto Layout using Xcode's Interface Builder. Using Interface Builder The easiest way to play around and test layout in IB is to create a new Single View Application iOS project.   Open Main.storyboard and select ViewController on the left. Don't worry that it is showing a square view since we will be laying everything out dynamically. The first addition to the view will be the three `UIView`s we will be evenly spacing. Add them along the view from left to right and assign different colors to each. This will make it easier to distinguish them later. Don't worry about where they are we will fix the layout soon enough.   Spacer View Layout Ideally we would be able to add constraints that evenly space these out directly. Unfortunately, you can not set *equals* restrictions on constraints, only on views. What that means is we have to create spacer views in between our content and then set equal constraints on those. Add four more views, one between the edges and the content views.   Before we add our first constraint let's name each view so we can have a little more context when adding their attributes. One of the most frustrating things when working with Auto Layout is seeing the little red arrow telling you something is wrong. Let's try and incrementally add constraints and get back to a working state as quickly as possible. The first item we want to add will constrain the Left Content view using the spacer. Select the Left Spacer and add left 20, top 20, and bottom 20 constraints.   To fix this first error we need to assign a width to the spacer. While we will be removing it later it makes sense to always have a clean slate when moving on to another view. Add in a width (50) constraint and let IB automatically and update its frame.   Now do the same thing to the Right Spacer.   Content View Layout We will remove the width constraints when everything else is working correctly. Consider them temporary placeholders for now. Next lets lay out the Left Content view. Add a left 0, top 20, bottom 20, width 20 constraint to it.   Follow the same method on the Right Content view.   Twice more follow the same procedure for the Middle Spacer Views giving them left/right 0, top 20, bottom 20, width 50 constraints.   Finally, let's constrain the Middle Content view. Add left 0, top 20, right 0, bottom 20 constraints to it and lay it out.   Final Constraints Remember when I said it was tricky? Maybe a better way to describe this process is long and tedious. All of the setup we have done so far was to set us up in a good position to give the constraints we actually want. If you look at the view it doesn't look very special and it won't even resize the right way yet. To start fix this we bring in the magic constraint of this entire example, Equal Widths on the spacer views. Go ahead and delete the four explicit Width constraints on the spacer views and add an Equal Width constraint to each. Select them all at the same time then add the constraint so they work off of each other.   Finally, set explicit widths on the three content views. This is where you can start customizing the layout to have it look the way you want. For my view I want the three views to be 75 points wide so I removed all of the Width constraints and added them back in for each. Now set the background color of the four spacer views to clear and hide them. Running the app on different size simulators will produce the same result: the three content views remain the same width and stay evenly spaced out along the screen. Even when you rotate the device the views remain spaced out correctly.   Try playing around with different explicit widths of the content views. The same technique can be used to create very dynamic layouts for a variety of applications. For example, this procedurecan be used to create a table cell with an image on the left, text in the middle, and a button on the right. Or it can make one row in a calculator that sizes to fit the screen of the device. What are you using it for? About the author Joe Masilotti is a test-driven iOS developer living in Brooklyn, NY. He contributes to open-source testing tools on GitHub and talks about development, cooking, and craft beer on Twitter.
Read more
  • 0
  • 0
  • 5114

article-image-string-management-in-swift
Jorge Izquierdo
21 Sep 2016
7 min read
Save for later

String management in Swift

Jorge Izquierdo
21 Sep 2016
7 min read
One of the most common tasks when building a production app is translating the user interface into multiple languages. I won't go into much detail explaining this or how to set it up, because there are lots of good articles and tutorials on the topic. As a summary, the default system is pretty straightforward. You have a file named Localizable.strings with a set of keys and then different values depending on the file's language. To use these strings from within your app, there is a simple macro in Foundation, NSLocalizedString(key, comment: comment), that will take care of looking up that key in your localizable strings and return the value for the user's device language. Magic numbers, magic strings The problem with this handy macro is that as you can add a new string inline, you will presumably end up with dozens of NSLocalizedStrings in the middle of the code of your app, resulting in something like this: mainLabel.text = NSLocalizedString("Hello world", comment: "") Or maybe, you will write a simple String extension for not having to write it every time. That extension would be something like: extension String { var localized: String { return NSLocalizedString(self, comment: "") } } mainLabel.text = "Hello world".localized This is an improvement, but you still have the problem that the strings are all over the place in the app, and it is difficult to maintain a scalable format for the strings as there is not a central repository of strings that follows the same structure. The other problem with this approach is that you have plain strings inside your code, where you could change a character and not notice it until seeing a weird string in the user interface. For that not to happen, you can take advantage of Swift's awesome strongly typed nature and make the compiler catch these errors with your strings, so that nothing unexpected happens at runtime. Writing a Swift strings file So that is what we are going to do. The goal is to be able to have a data structure that will hold all the strings in your app. The idea is to have something like this: enum Strings { case Title enum Menu { case Feed case Profile case Settings } } And then whenever you want to display a string from the app, you just do: Strings.Title.Feed // "Feed" Strings.Title.Feed.localized // "Feed" or the value for "Feed" in Localizable.strings This system is not likely to scale when you have dozens of strings in your app, so you need to add some sort of organization for the keys. The basic approach would be to just set the value of the enum to the key: enum Strings: String { case Title = "app.title" enum Menu: String { case Feed = "app.menu.feed" case Profile = "app.menu.profile" case Settings = "app.menu.settings" } } But you can see that this is very repetitive and verbose. Also, whenever you add a new string, you need to write its key in the file and then add it to the Localizable.strings file. We can do better than this. Autogenerating the keys Let’s look into how you can automate this process so that you will have something similar to the first example, where you didn't write the key, but you want an outcome like the second example, where you get a reasonable key organization that will be scalable as you add more and more strings during development. We will take advantage of protocol extensions to do this. For starters, you will define a Localizable protocol to make the string enums conform to: protocol Localizable { var rawValue: String { get } } enum Strings: String, Localizable { case Title case Description } And now with the help of a protocol extension, you can get a better key organization: extension Localizable { var localizableKey: String { return self.dynamicType.entityName + "." rawValue } static var entityName: String { return String(self) } } With that key, you can fetch the localized string in a similar way as we did with the String extension: extension Localizable { var localized: String { return NSLocalizedString(localizableKey, comment: "") } } What you have done so far allows you to do Strings.Title.localized, which will look in the localizable strings file for the key Strings.Title and return the value for that language. Polishing the solution This works great when you only have one level of strings, but if you want to group a bit more, say Strings.Menu.Home.Title, you need to make some changes. The first one is that each child needs to know who its parent is in order to generate a full key. That is impossible to do in Swift in an elegant way today, so what I propose is to explicitly have a variable that holds the type of the parent. This way you can recurse back the strings tree until the parent is nil, where we assume it is the root node. For this to happen, you need to change your Localizable protocol a bit: public protocol Localizable { static var parent: LocalizeParent { get } var rawValue: String { get } } public typealias LocalizeParent = Localizable.Type? Now that you have the parent idea in place, the key generation needs to recurse up the tree in order to find the full path for the key. rivate let stringSeparator: String = "." private extension Localizable { static func concatComponent(parent parent: String?, child: String) -> String { guard let p = parent else { return child.snakeCaseString } return p + stringSeparator + child.snakeCaseString } static var entityName: String { return String(self) } static var entityPath: String { return concatComponent(parent: parent?.entityName, child: entityName) } var localizableKey: String { return self.dynamicType.concatComponent(parent: self.dynamicType.entityPath, child: rawValue) } } And to finish, you have to make enums conform to the updated protocol: enum Strings: String, Localizable { case Title enum Menu: String, Localizable { case Feed case Profile case Settings static let parent: LocalizeParent = Strings.self } static let parent: LocalizeParent = nil } With all this in place you can do the following in your app: label.text = Strings.Menu.Settings.localized And the label will have the value for the "strings.menu.settings" key in Localizable.strings. Source code The final code for this article is available on Github. You can find there the instructions for using it within your project. But also you can just add the Localize.swift and modify it according to your project's needs. You can also check out a simple example project to see the whole solution together.  Next time The next steps we would need to take in order to have a full solution is a way for the Localizable.strings file to autogenerate. The solution for this at the current state of Swift wouldn't be very elegant, because it would require either inspecting the objects using the ObjC runtime (which would be difficult to do since we are dealing with pure Swift types here) or defining all the children of a given object explicitly, in the same way as open source XCTest does. Each test case defines all of its tests in a static property. About the author Jorge Izquierdo has been developing iOS apps for 5 years. The day Swift was released, he starting hacking around with the language and built the first Swift HTTP server, Taylor. He has worked on several projects and right now works as an iOS development contractor.
Read more
  • 0
  • 0
  • 4994
article-image-swift-open-source-developers
Packt
16 Feb 2016
43 min read
Save for later

Swift for Open Source Developers

Packt
16 Feb 2016
43 min read
Apple announced Swift at WWDC 2014 as a new programming language that combines experience with the Objective-C platform and advances in dynamic and statically typed languages over the last few decades. Before Swift, most code written for iOS and OS X applications was in Objective-C, a set of object-oriented extensions to the C programming language. Swift aims to build upon patterns and frameworks of Objective-C but with a more modern runtime and automatic memory management. In December 2015, Apple open sourced Swift at https://swift.org and made binaries available for Linux as well as OS X. The content in this article can be run on either Linux or OS X. Developing iOS applications requires Xcode and OS X. In this article, we will present the following topics: How to use the Swift REPL to evaluate Swift code The different types of Swift literals How to use arrays and dictionaries Functions and the different types of function arguments Compiling and running Swift from the command line (For more resources related to this topic, see here.) Open source Swift Apple released Swift as an open source project in December 2015, hosted at https://github.com/apple/swift/ and related repositories. Information about the open source version of Swift is available from the https://swift.org site. The open-source version of Swift is similar from a runtime perspective on both Linux and OS X; however, the set of libraries available differ between the two platforms. For example, the Objective-C runtime was not present in the initial release of Swift for Linux; as a result, several methods that are delegated to Objective-C implementations are not available. "hello".hasPrefix("he") compiles and runs successfully on OS X and iOS but is a compile error in the first Swift release for Linux. In addition to missing functions, there is also a different set of modules (frameworks) between the two platforms. The base functionality on OS X and iOS is provided by the Darwin module, but on Linux, the base functionality is provided by the Glibc module. The Foundation module, which provides many of the data types that are outside of the base-collections library, is implemented in Objective-C on OS X and iOS, but on Linux, it is a clean-room reimplementation in Swift. As Swift on Linux evolves, more of this functionality will be filled in, but it is worth testing on both OS X and Linux specifically if cross platform functionality is required. Finally, although the Swift language and core libraries have been open sourced, this does not apply to the iOS libraries or other functionality in Xcode. As a result, it is not possible to compile iOS or OS X applications from Linux, and building iOS applications and editing user interfaces is something that must be done in Xcode on OS X. Getting started with Swift Swift provides a runtime interpreter that executes statements and expressions. Swift is open source, and precompiled binaries can be downloaded from https://swift.org/download/ for both OS X and Linux platforms. Ports are in progress to other platforms and operating systems but are not supported by the Swift development team. The Swift interpreter is called swift and on OS X can be launched using the xcrun command in a Terminal.app shell: $ xcrun swift Welcome to Swift version 2.2!  Type :help for assistance. >  The xcrun command allows a toolchain command to be executed; in this case, it finds /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/swift. The swift command sits alongside other compilation tools, such as clang and ld, and permits multiple versions of the commands and libraries on the same machine without conflicting. On Linux, the swift binary can be executed provided that it and the dependent libraries are in a suitable location. The Swift prompt displays > for new statements and . for a continuation. Statements and expressions that are typed into the interpreter are evaluated and displayed. Anonymous values are given references so that they can be used subsequently: > "Hello World" $R0: String = "Hello World" > 3 + 4 $R1: Int = 7 > $R0 $R2: String = "Hello World" > $R1 $R3: Int = 7 Numeric literals Numeric types in Swift can represent both signed and unsigned integral values with sizes of 8, 16, 32, or 64 bits, as well as signed 32 or 64 bit floating point values. Numbers can include underscores to provide better readability; so, 68_040 is the same as 68040: > 3.141 $R0: Double = 3.141 > 299_792_458 $R1: Int = 299792458 > -1 $R2: Int = -1 > 1_800_123456 $R3: Int = 1800123456 Numbers can also be written in binary, octal, or hexadecimal using prefixes 0b, 0o (zero and the letter "o") or 0x. Please note that Swift does not inherit C's use of a leading zero (0) to represent an octal value, unlike Java and JavaScript which do. Examples include: > 0b1010011 $R0: Int = 83 > 0o123 $R1: Int = 83 > 0123 $R2: Int = 123 > 0x7b $R3: Int = 123   Floating point literals There are three floating point types that are available in Swift which use the IEEE754 floating point standard. The Double type represents 64 bits worth of data, while Float stores 32 bits of data. In addition, Float80 is a specialized type that stores 80 bits worth of data (Float32 and Float64 are available as aliases for Float and Double, respectively, although they are not commonly used in Swift programs). Some CPUs internally use 80 bit precision to perform math operations, and the Float80 type allows this accuracy to be used in Swift. Not all architectures support Float80 natively, so this should be used sparingly. By default, floating point values in Swift use the Double type. As floating point representation cannot represent some numbers exactly, some values will be displayed with a rounding error; for example: > 3.141 $R0: Double = 3.141 > Float(3.141) $R1: Float = 3.1400003 Floating point values can be specified in decimal or hexadecimal. Decimal floating point uses e as the exponent for base 10, whereas hexadecimal floating point uses p as the exponent for base 2. A value of AeB has the value A*10^B and a value of 0xApB has the value A*2^B. For example: > 299.792458e6 $R0: Double = 299792458 > 299.792_458_e6 $R1: Double = 299792458 > 0x1p8 $R2: Double = 256 > 0x1p10 $R3: Double = 1024 > 0x4p10 $R4: Double = 4096 > 1e-1 $R5: Double = 0.10000000000000001 > 1e-2 $R6: Double = 0.01> 0x1p-1 $R7: Double = 0.5 > 0x1p-2 $R8: Double = 0.25 > 0xAp-1 $R9: Double = 5 String literals Strings can contain escaped characters, Unicode characters, and interpolated expressions. Escaped characters start with a slash () and can be one of the following: \: This is a literal slash
Read more
  • 0
  • 0
  • 4802

article-image-bsd-socket-library
Packt
17 Jan 2014
9 min read
Save for later

BSD Socket Library

Packt
17 Jan 2014
9 min read
(For more resources related to this topic, see here.) Introduction The Berkeley Socket API (where API stands for Application Programming Interface) is a set of standard functions used for inter-process network communications. Other socket APIs also exist; however, the Berkeley socket is generally regarded as the standard. The Berkeley Socket API was originally introduced in 1983 when 4.2 BSD was released. The API has evolved with very few modifications into a part of the Portable Operating System Interface for Unix (POSIX) specification. All modern operating systems have some implementation of the Berkeley Socket Interface for connecting devices to the Internet. Even Winsock, which is MS Window's socket implementation, closely follows the Berkeley standards. BSD sockets generally rely on client/server architecture when they establish their connections. Client/server architecture is a networking approach where a device is assigned one of the two following roles: Server: A server is a device that selectively shares resources with other devices on the network Client: A client is a device that connects to a server to make use of the shared resources Great examples of the client/server architecture are web pages. When you open a web page in your favorite browser, for example https://www.packtpub.com, your browser (and therefore your computer) becomes the client and Packt Publishing's web servers become the servers. One very important concept to keep in mind is that any device can be a server, a client, or both. For example, you may be visiting the Packt Publishing website, which makes you a client, and at the same time you have file sharing enabled, which also makes your device a server. The Socket API generally uses one of the following two core protocols: Transmission Control Protocol (TCP): TCP provides a reliable, ordered, and error-checked delivery of a stream of data between two devices on the same network. TCP is generally used when you need to ensure that all packets are correctly received and are in the correct order (for example, web pages). User Datagram Protocol (UDP): UDP does not provide any of the error-checking or reliability features of TCP, but offers much less overhead. UDP is generally used when providing information to the client quickly is more important than missing packets (for example, a streaming video). Darwin, which is an open source POSIX compliant operating system, forms the core set of components upon which Mac OS X and iOS are based. This means that both OS X and iOS contain the BSD Socket Library. The last paragraph is very important to understand when you begin thinking about creating network applications for the iOS platform, because almost any code example that uses the BSD Socket Library will work on the iOS platform. The biggest difference between using the BSD Socket API on any standard Unix platform and the iOS platform is that the iOS platform does not support forking of processes. You will need to use multiple threads rather than multiple processes. The BSD Socket API can be used to build both client and server applications. There are a few networking concepts that you should understand: IP address: Any device on an Internet Protocol (IP) network, whether it is a client or server, has a unique identifier known as an IP address. The IP address serves two basic purposes: host identification and location identification. There are currently two IP address formats: IPv4: This is currently the standard for the Internet and most internal intranets. This is an example of an IPv4 address: 83.166.169.231. IPv6: This is the latest revision of the Internet Protocol (IP). It was developed to eventually replace IPv4 and to address the long-anticipated problem of running out of IPv4 addresses. This is an example of an IPv6 address: 2001:0db8:0000:0000:0000:ff00:0042:8329. An IPv6 can be shortened by replacing all the consecutive zero fields with two colons. The previous address could be rewritten as 2001:0db8::ff00:0042:8329. Ports: A port is an application or process-specific software construct serving as a communications endpoint on a device connected to an IP network, where the IP address identifies the device to connect to, and the port number identifies the application to connect to. The best way to think of network addressing is to think about how you mail a letter. For a letter to reach its destination, you must put the complete address on the envelope. For example, if you were going to send a letter to friend who lived at the following address: Your Friend 123 Main St Apt. 223 San Francisco CA, 94123 If I were to translate that into network addressing, the IP address would be equal to the street address, city, state, and zip code (123 Main St, San Francisco CA, 94123), and the apartment number would be equal to the port number (223). So the IP address gets you to the exact location, and the port number will tell you which door to knock on. A device has 65,536 available ports with the first 1024 being reserved for common protocols such as HTTP, HTTPS, SSH, and SMTP. Fully Qualified Domain Name (FQDN): As humans, we are not very good at remembering numbers; for example, if your friend tells you that he found a really excellent website for books and the address was 83.166.169.231, you probably would not remember that two minutes from now. However, if he tells you that the address was www.packtpub.com, you would probably remember it. FQDN is the name that can be used to refer to a device on an IP network. So now you may be asking yourself, how does the name get translated to the IP address? The Domain Name Server (DNS) would do that. Domain Name System Servers: A Domain Name System Server translates a fully qualified domain name to an IP address. When you use an FQDN of www.packtpub.com, your computer must get the IP address of the device from the DNS configured in your system. To find out what the primary DNS is for your machine, open a terminal window and type the following command: cat /etc/resolv.conf Byte order: As humans, when we look at a number, we put the most significant number first and the least significant number last; for example, in number 123, 1 represents 100, so it is the most significant number, while 3 is the least significant number. For computers, the byte order refers to the order in which data (not only integers) is stored into memory. Some computers store the most significant bytes first (at the lowest byte address), while others store the most significant bytes last. If a device stores the most significant bytes first, it is known as big-endian, while a device that stores the most significant bytes last is known as little-endian. The order of how data is stored in memory is of great importance when developing network applications, where you may have two devices that use different byte-ordering communication. You will need to account for this by using the Network-to-Host and Host-to-Network functions to convert between the byte order of your device and the byte order of the network. The byte order of the device is commonly referred to as the host byte order, and the byte order of the network is commonly referred to as the network byte order. Finding the byte order of your device One of the concepts that was briefly discussed in this article was how devices store information in memory (byte order). After that discussion, you may be wondering what the byte order of your device is. The byte order of a device depends on the Microprocessor architecture being used by the device. You can pretty easily go on to the Internet and search for "Mac OS X i386 byte order" and find out what the byte order is, but where is the fun in that? We are developers, so let's see if we can figure it out with code. We can determine the byte order of our devices with a few lines of C code; however, like most of the code in this article, we will put the C code within an Objective-C wrapper to make it easy to port to your projects. How to do it… Let's get started by defining an ENUM in our header file: We create an ENUM that will be used to identify the byte order of the system as shown in the following code: typedef NS_ENUM(NSUInteger, EndianType) { ENDIAN_UNKNOWN, ENDIAN_LITTLE, ENDIAN_BIG }; To determine the byte order of the device, we will use the byteOrder method as shown in the following code: -( EndianType)byteOrder { union { short sNum; char cNum[sizeof(short)]; } un; un.sNum = 0x0102; if (sizeof(short) == 2) { if(un.cNum[0] == 1 && un.cNum[1] == 2) return ENDIAN_BIG; else if (un.cNum[0] == 2 && un.cNum[1] == 1) return ENDIAN_LITTLE; else return ENDIAN_UNKNOWN; } else return ENDIAN_UNKNOWN; } How it works… In the ByteOrder header file, we defined an ENUM with three constants. The constants are as follows: ENDIAN_UNKNOWN: We are unable to determine the byte order of the device ENDIAN_LITTLE: This specifies that the most significant bytes are last (little-endian) ENDIAN_BIG: This specifies that the most significant bytes are first (big-endian) The byteOrder method determines the byte order of our device and returns an integer that can be translated using the constants defined in the header file. To determine the byte order of our device, we begin by creating a union of short int and char[]. We then store the value 0x0102 in the union. Finally, we look at the character array to determine the order in which the integer was stored in the character array. If the number one was stored first, it means that the device uses big-endian; if the number two was stored first, it means that the device uses little-endian.
Read more
  • 0
  • 0
  • 4710
Modal Close icon
Modal Close icon