
Keeping Innovation Alive at Oracle

Packt Publishing
23 May 2016
6 min read
Saurabh Gupta is the author of Advanced Oracle PL/SQL Developer's Guide. We spoke to him about his work at Oracle, and the organization's future in a changing and increasingly open source landscape. All of the views expressed by Saurabh are his own and do not reflect the views of Oracle.

Tell us about yourself – who are you and what do you do?

I am a database technologist with experience in database design, development, and management. I work for the Database Product Management group at Oracle, where I am fortunate to work alongside some really smart minds. As part of my job, I interact and engage with the Oracle partner community to drive the adoption of database technologies like Oracle 12c, Multitenant, Database In-Memory, and Database Cloud Services in their solution landscape. I evangelize Oracle database technologies through product road shows, conferences, workshops, and various user group events.

I love sharing my knowledge, and I use two of the best mediums to achieve that. I have authored the first and second editions of "Oracle Advanced PL/SQL Developer Professional Guide" with Packt. I am a regular speaker at AIOUG events like Tech Days, OTN Yatra, and SANGAM, and I was selected by the IOUG committee to present at Collaborate'15. I'm a blogger and pretty active on Twitter – I tweet at @saurabhkg.

Tell us about Oracle PL/SQL. What is it for and what does it do?

PL/SQL is the procedural extension of SQL (Structured Query Language). Although SQL is the de facto industry language for querying data from a database, it doesn't support high-level programming concepts. For this reason, Oracle introduced the PL/SQL language to code business logic in the database and store it as a program for subsequent use. In its first release, alongside Oracle 6, PL/SQL was limited in its capacity, but over the years it has grown into one of the most mature high-level languages. It is used by almost all Oracle professionals involved in database development and design.
How has the Oracle database landscape changed over the last few years? How does it compare to other databases such as SQL Server?

In the last thirty years, Oracle has innovated; it has developed great products and has been a consistent leader in the database management space. Technologies such as Oracle Database, Real Application Clusters, Multitenant, Database In-Memory, and Exadata's smart innovations have all gained huge traction within Oracle's partner and customer community. A few years ago, Oracle focused on its engineered systems family, a smart blend of hardware and software. Oracle Exadata Database Machine is an engineered system that runs Oracle Database in a grid computing model. The smart innovative features on Exadata help address scenarios such as database consolidation, mixed workloads, and resource management.

With the industry paradigm shifting to cloud services, Oracle's service offerings span all tiers of the enterprise IT landscape, including Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Within PaaS, Oracle is investing a lot of effort in strengthening its data management portfolio. The message is loud and clear that Oracle's cloud services are designed to cater to the needs of enterprise data-driven applications. The portfolio includes Oracle Database Cloud Service, Oracle Big Data Cloud Service, Oracle NoSQL Cloud Service, Oracle Big Data Preparation Cloud Service, Oracle Big Data Discovery Cloud Service, and Oracle Big Data SQL. Oracle Big Data SQL, a cutting-edge technology, is a superset of SQL that allows end users to issue a query against data lying in different data artefacts, i.e. HDFS, Hive, NoSQL, or an RDBMS. On Exadata, Big Data SQL performance is complemented by smart features like smart scan.

What are the biggest challenges you face – in terms of software and broader business pressures?
In the software industry, the biggest challenge is to keep innovation alive and to control adoption timelines. If you look at IT industry trends, for any given problem a consumer has multiple solutions. This state of "flux" pushes software vendors to the limit if they are to consider what makes their product distinctive. This is where innovation comes in and gives the product its edge: if you compromise on innovation, you lose the market in no time. At the same time, it is important to control the adoption rate in the market; the user community has to be briefed and empowered to work effectively with the product. Other challenges include cost effectiveness, product marketing, rollout strategies, and supporting the community.

What do you think the future holds for Oracle? Can it remain relevant in a world where open-source software is mainstream?

Looking at the recent technology landscape, cloud services seem to be a central pillar in future roadmaps. Oracle has announced the Oracle Public Cloud Machine, which brings Oracle Public Cloud's PaaS and IaaS capabilities into the partner's data center, shielded within the company's firewall. Oracle is also taking initiatives in nurturing startups: it recently announced the Oracle Startup Cloud Accelerator program, which will help IT startups embrace Oracle cloud services by providing resources like technology, mentoring, go-to-market strategy, investors, and incubation centres.

There is no doubt that the open-source community has grown steadily over the years. Developers across the world share product code and help develop free software. However, I disagree with the assertion that "open-source software is mainstream". For enterprise-level IT management, you have to pick products that are secure, compliant, and manageable, and that offer support services. Most open source databases are developer managed and create a dependency on a limited set of resources.
This is where commercial products have the edge: they satisfy compliance requirements, provide technical and business support, and invest heavily in innovation. Organizations should do a thorough TCO/ROI study before adopting open source products in the IT mainstream. Oracle also offers some of the leading open source solutions for development; you can find a list of Oracle's open source initiatives listed here.

Find Saurabh's book – Advanced Oracle PL/SQL Developer's Guide, Second Edition – here.
iOS 9: Up to Speed

Samrat Shaw
20 May 2016
5 min read
iOS 9 is the biggest iOS release to date. The new OS introduced intricate new features and refined existing ones. The biggest focus is on intelligence and proactivity, allowing iOS devices to learn user habits and act on that information. While it isn't a groundbreaking change like iOS 7, there is a lot of new functionality for developers to learn. Along with iOS 9 and Xcode 7, Apple also announced major changes to the Swift language (Swift 2.0) and announced open source plans. In this post, I will discuss some of my favorite changes and additions in iOS 9.

1 List of new features

Let's examine the new features.

1.1 Search Extensibility

Spotlight search in iOS now includes searching within third-party apps. This allows you to deep link from Search in iOS 9. You can allow users to supply relevant information that they can then navigate directly to. When a user taps on any of the search results, the app is opened and the user can be redirected to the location where the search keyword is present. The new enhancements to the Search API include the NSUserActivity APIs, the Core Spotlight APIs, and web markup.

1.2 App Thinning

App thinning optimizes the install sizes of apps to use the lowest amount of storage space while retaining critical functionality. Thus, users will only download those parts of the binary that are relevant to them. The app's resources are now split, so that if a user installs an app on an iPhone 6, they do not download iPad code or other assets that are used to make an app universal. App thinning has three main aspects: app slicing, on-demand resources, and bitcode. Faster downloads and more space for other apps and content provide a better user experience.

1.3 3D Touch

The iPhone 6s and 6s Plus added a whole new dimension to UI interactions. A user can now press an app's Home screen icon to immediately access functionality provided by the app.
Within the app, a user can now press views to see previews of additional content and gain accelerated access to features. 3D Touch works by detecting the amount of pressure that you are applying to your phone's screen in order to perform different actions. In addition to the UITouch APIs, Apple has also provided two new sets of classes for adding 3D Touch functionality to apps: UIPreviewAction and UIApplicationShortcutItem. This unlocks a whole new paradigm of iOS device interaction and will enable a new generation of innovation in upcoming iOS apps.

1.4 App Transport Security (ATS)

With the introduction of App Transport Security, Apple is leading by example to improve the security of its operating system. Apple expects developers to adopt App Transport Security in their applications. With App Transport Security enabled, network requests are automatically made over HTTPS instead of HTTP. App Transport Security requires TLS 1.2 or higher. Developers also have the option to disable ATS, either selectively or as a whole, by declaring exceptions in the Info.plist of their applications.

1.5 UIStackView

The newly introduced UIStackView is similar to Android's LinearLayout. Developers embed views in a UIStackView (either horizontally or vertically) without the need to specify auto layout constraints; the constraints are inserted by UIKit at runtime, making life easier for developers. They also have the option to specify the spacing between the subviews. It is important to note that UIStackViews don't scroll; they just act as containers that automatically fit their content.

1.6 SFSafariViewController

With SFSafariViewController, developers can use nearly all of the benefits of viewing web content inside Safari without forcing users to leave an app. It saves developers a lot of time, since they no longer need to create their own custom browsing experiences.
For users too, it is more convenient, since they will have their passwords pre-filled, not have to leave the app, have their browsing history available, and more. The controller also comes with a built-in reader mode.

1.7 Multitasking for iPad

Apple has introduced Slide Over, Split View, and Picture in Picture for iPad, allowing certain models to use the much larger screen space for more tasks. From the developer's point of view, this can be supported by using iOS Auto Layout and Size Classes. If the code base already uses these, then the app will automatically respond to the new multitasking setup. Starting from Xcode 7, each iOS app template is preconfigured to support Slide Over and Split View.

1.8 The Contacts Framework

Apple has introduced a brand new framework, Contacts, which replaces the function-based AddressBook framework. The Contacts framework provides an object-oriented approach to working with the user's contact information, and it provides an Objective-C API that works well with Swift too. This is a big improvement over the previous method of accessing a user's contacts with the AddressBook framework.

As you can see from this post, there are a lot of exciting new features and capabilities in iOS 9 that developers can tap into, thus providing new and exciting apps for the millions of Apple users around the world.

About the author

Samrat Shaw is a graduate student (software engineering) at the National University of Singapore and an iOS intern at Massive Infinity.
Jupyter as a Data Laboratory: Part 1

Marijn van Vliet
18 May 2016
5 min read
This is part one of a two-part piece on Jupyter, a computing platform used by many scientists to perform their data analysis and modeling. This first part will help you understand what Jupyter is, and the second part will cover why it represents a leap forward in scientific computing.

Jupyter: a data laboratory

If you think that scientists, famous for being careful and precise, always produce well-documented, well-tested, and beautiful code, you are dead wrong. More often than not, a scientist's local code folder is an ill-organized heap of horrible spaghetti code that will give any seasoned software developer nightmares. But the scientist will sleep soundly. That is because, usually, the sort of programming that scientists do is a lot different from software development. They tend to write code for a whole different purpose, with a whole different mindset, and with a whole different approach to computing. If you have never done scientific computing before—by which I mean you have never used your computer to analyze measurement data or to "do science"—then leave your TDD, SCRUM, agile, and so on at the door and come join me for a little excursion into Jupyter.

The programming language is your user interface

Over the years, programmers have created applications to cover most computing needs of most users. In domains such as content creation, communication, and entertainment, chances are good that someone already wrote an application that does what you want to do. If you're lucky, there's even a friendly GUI to help guide you through the process. But in science, the point is usually to try something that nobody has done before. Hence, any application used for data analysis needs to be flexible: it has to enable the user to do, well, anything imaginable with a dataset, and here the GUI paradigm breaks down. Instead of presenting the user with a list of available options, it becomes more efficient to just ask the user what needs to be accomplished.
When driven to the extreme, you end up dropping the whole concept of an application and working directly with a programming language. So it is understandable that when you start Jupyter, you are staring at a mostly blank screen with a blinking cursor. Realize that behind that blinking cursor sits the considerable computational power of your computer: most likely a multicore processor, gigabytes of RAM, and terabytes of storage space, awaiting your command. In many domains, a programming language is used to create an application, which in turn presents you with an interface to do the operation you wanted to do in the first place. In scientific computing, however, the programming language is your interface.

The ingredients of a data laboratory

I think of Jupyter as a data laboratory. The heart of a data laboratory is a REPL (a read-eval-print loop), which allows you to enter lines of programming code that immediately get executed, with the result displayed on the screen. The REPL can be regarded as a workbench, and loading a chunk of data into working memory can be regarded as placing a sample on it, ready to be examined. Jupyter offers several advanced REPL environments, most notably IPython, which runs on your terminal and also ships with its own tricked-out terminal that can display inline graphics and offers easier copy-paste. However, the most powerful REPL that Jupyter offers runs in your browser, allowing you to use multiple programming languages at the same time and embed inline markdown, images, videos, and basically anything the browser can render.

The REPL gives access to the underlying programming language. Since the language acts as our primary user interface, it needs to get out of our way as much as possible. This generally means it should be high-level, with terse syntax, and not too picky about correctness. And of course, it must support an interpreted mode to allow a quick back-and-forth between a line of code and the result of the computation.
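To make this concrete, here is a small illustration of my own (not from the article): a few Python one-liners of the kind you would type at such a REPL, each immediately answering a question about some made-up data before you decide on the next step.

```python
import numpy as np

# A made-up "measurement": 1000 noisy samples around a true value of 5.0
rng = np.random.default_rng(seed=42)
data = 5.0 + 0.1 * rng.standard_normal(1000)

# Each line below is the quick question-and-answer exchange the REPL
# is built for: type it, see the result, decide what to ask next.
print(data.shape)   # how much data is there?
print(data.mean())  # a quick point estimate
print(data.std())   # how noisy is it?
```

At a real REPL you would not even need the print calls; typing `data.mean()` alone echoes the result straight back.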
Of the multitude of programming languages supported by Jupyter, it ships with Python by default, which fulfills the above requirements nicely. In order to work with the data efficiently (for example, to get it onto your workbench in the first place), you'll want software libraries, which can be regarded as shelves that hold various tools like saws, magnifiers, and pipettes. Over the years, scientists have contributed a lot of useful libraries to the Python ecosystem, so you can have your pick of favorite tools. Since the APIs that these libraries expose are as much a part of the user interface as the programming language, a lot of thought gets put into them.

While executing single lines or blocks of code to interactively examine your data is essential, the final ingredient of the data laboratory is the text editor. The editor should be intimately connected to the REPL and allow for a seamless transmission of text between the two. The typical workflow is to first try a step of the data analysis live in the REPL and, when it seems to work, write it down into a growing analysis script. More complicated algorithms are written in the editor first in an iterative fashion, testing the implementation by executing the code in the REPL. Jupyter's notebook environment is notable in this regard, as it blends the REPL and the editor together.

Go check it out

If you are interested in learning more about Jupyter, I recommend installing it and checking out this wonderful collection of interesting Jupyter notebooks.

About the author

Marijn van Vliet is a postdoctoral researcher at the Department of Neuroscience and Biomedical Engineering of Aalto University in Finland. He received his PhD in biomedical sciences in 2015.
‘Soft’ Skills Every Data Pro Needs

Sam Wood
16 May 2016
4 min read
Your technical data skills are at the top of your game: you've mastered machine learning, are a wizard at stats, and know the tools of the trade from Excel to R. But to be a truly top-notch data professional, you're going to need some exceptional 'soft' data skills as well. It's not enough to be good at crunching numbers; you've got to know how to ask the right question, and then how to explain the answer in a way that your business or clients can act upon. So what are the essential soft skills you need to ensure you're not just a good data scientist, but a great one?

Asking Questions, Not Proving Hunches

As a data analyst, how many times have you been asked to produce figures that prove something your boss or colleague already believes to be true? The key to good data analysis is not starting with an assertion and then looking for the evidence to support it. It's coming up with the right questions that will get you the valuable insight your business needs. Don't go trying to prove that customers leave your business because of X reason; ask your data 'Why do our customers leave?'

Playing to the Audience

Who's making a data request? The way you want to present your findings, and even the kind of answers you give, will depend on the role of the person asking. Project managers and executives are likely to be looking for a slate of options, with multiple scenarios and suggestions, and raw results that they can draw their own conclusions from. Directors, CEOs, and other busy leadership types will be looking for a specific recommendation, usually in a polished, quick presentation that they can simply say 'Yes' or 'No' to. They're busy people; they don't want to have to wade through reams of results to get to the core. Instead, it's often your job to do that for them.

Keeping It Simple

One of the most essential skills of a data wrangler is defining a problem, and then narrowing down the answers you'll need to find.
There is an endless number of questions you can end up asking your data; understanding the needs of a data request and not getting bogged down in too much information is vital to solving the core issues of a business. There's a saying that "Smart people ask hard questions, but very smart people ask simple ones."

Still feel like you keep getting asked stupid questions, or asked to provide evidence for an assertion that's already been made? Cut your non-data-analyst colleagues some slack; you've got an advantage over them by already knowing how data works. Working directly with databases gives you the discipline you need to start asking better questions, and to structure questions with the precision and accuracy needed to get the big answers. Developing these skills will allow you to contribute towards solving the challenges that your business faces.

Delivering Your Results

Your amazing data insight isn't going to be worth squat if you don't present it in a way that lets people recognize its importance. You might have great results, but without a great presentation or a stunning visualization, you're going to find your findings put on the back burner or even ditched from a roadmap entirely. If you've managed to get the right message, you need to make sure your message is delivered right. If you're not the most confident public speaker, don't underestimate the power of a good written report. Amazon CEO Jeff Bezos notably requires all senior staff to put their ideas forward in written memos, which are read in silence at the start of meetings. Presenting your results in writing allows you to be clear about the 'story' of your data, and to resist the temptation to explain the meanings of your charts on the fly.

Why Soft Skills Are Essential

You might think you'll be able to get by on your technical mastery alone - and you might be right, for a while.
But the future of business is data, and more and more people are going to start seeking roles in data analysis; people who are already in possession of the creative thinking and expert presentation skills that make a great data worker. So make sure you stay at the top of your game, and hone your soft data skills with almost as much rigor as you keep on top of the latest data tech.
Reactive Programming with RxSwift

Darren Sapalo
13 May 2016
7 min read
In a previous article, Building an iPhone app Using Swift, Ryan Loomba showed us how to build iOS apps using Swift, starting from a new project (Building an iPhone app Using Swift Part 1), and how to create lists using a table view and present a map using map view (Building an iPhone app Using Swift Part 2). In this article, we'll discuss what RxSwift is and how it can be used to improve our Swift code.

The GitHub repository for Rx.NET defines Reactive Extensions (Rx) as "a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators." Rx originated at Microsoft; Netflix later ported it to the JVM as RxJava to improve the way their API backend handles streams of data, and found uses for the library even on the frontend to achieve a responsive user interface. The above links provide a better explanation of what the library is and the rationale for why it was developed. This article will focus on staying as simple as possible and explaining how Rx can help Swift developers with the problems they might encounter in mobile development.

Mobile Development and Background Threads

If you've worked on mobile apps requiring Internet access, you'll realize that there are things that should be done on the main thread (UI manipulation and accessing context-related resources) and things that should be done on a background thread (network queries and code that takes some time to perform). This is because you shouldn't block the main thread with long-running code, such as performing a network query to get some JSON data from a server, or else your user interface will appear to hang!

The Non-Rx Approach

For our example, let's say you need to query some JSON data from a server and display it on the screen. On your AppDelegate class, you could have a queue set up for network requests.
AppDelegate.swift:

    static var networkQueue = dispatch_queue_create("com.appName.networkQueue", DISPATCH_QUEUE_CONCURRENT)

We normally dispatch code to be run on a different thread by writing the code below:

    dispatch_async(AppDelegate.networkQueue) {
        // Query some server to get some JSON data
    }

Let's say that the best-case scenario will always happen and your network call will be successful. You have Internet access, the server was alive and responded, you have proper authorization to access the data you are requesting, and you successfully retrieve the data. I have enumerated these because I want to emphasize that there are so many things that can go wrong and prevent a successful network query. In the best-case scenario, you have your parsed data and you're ready to display it on your UILabel. However, you're currently on a background thread, which means that you should switch back to the main thread to manipulate the UI. This means your code will look something like this:

    dispatch_async(AppDelegate.networkQueue) {
        // Query some server to get some JSON data
        let url = "http://myapi.myserver.com/users"
        let request = NSMutableURLRequest(URL: NSURL(string: url)!)
        let task = session.dataTaskWithRequest(request, completionHandler: { data, response, error -> Void in
            let json = try! NSJSONSerialization.JSONObjectWithData(data!, options: .MutableLeaves) as? NSDictionary
            // Switch back to the main thread to update the UI
            dispatch_async(dispatch_get_main_queue()) {
                self.label.text = json?.valueForKey("result") as? String
            }
        })
        task.resume()
    }

There are two things I want to point out here. Firstly, there are two calls to a global method called "dispatch_async" to run code on a specified queue, and they are nested inside each other. Secondly, we're expecting this code to run perfectly at all times; there is no error checking of whether the request was successful, whether data or response was nil, or whether error has some value or not.
As mentioned above, there are many things that can go wrong when performing network queries, and your code needs to handle them elegantly.

The RxSwift Approach

With RxSwift, network queries and code that takes some time to perform are converted into Observables, which emit data of a specified type. Views and controllers subscribe to them as Observers.

Observables

The network query can be in three possible states:

- It is currently emitting a new value (onNext), which can occur repeatedly or not at all
- An error has occurred and the stream has stopped completely (onError)
- The stream has ended (onCompleted)

For instance, the above example of a network query returning a JSON value could be defined as an observable that emits an NSDictionary, because that's exactly the type of result we're expecting:

    func rxGetUsers() -> Observable<NSDictionary> {
        return Observable.create { observer in
            let url = "http://myapi.myserver.com/users"
            let request = NSMutableURLRequest(URL: NSURL(string: url)!)
            let task = session.dataTaskWithRequest(request, completionHandler: { data, response, error -> Void in
                if (error != nil) {
                    // An error occurred
                    observer.onError(NSError(domain: "Getting user data", code: 1, userInfo: nil))
                    return
                } else if (data == nil) {
                    // No data in the response
                    observer.onError(NSError(domain: "Getting user data", code: 2, userInfo: nil))
                    return
                }
                // ... other error checking ...
                let json = try! NSJSONSerialization.JSONObjectWithData(data!, options: .MutableLeaves) as? NSDictionary
                if (json == nil) {
                    // No JSON data found
                    observer.onError(NSError(domain: "Getting user data", code: 3, userInfo: nil))
                    return
                }
                observer.onNext(json!)
                observer.onCompleted()
            })
            task.resume()
            return NopDisposable.instance
        }
    }

With the rxGetUsers function defined above, it is easier to see what the code does: when an error occurs, observer.onError is called and the handling of the error is deferred to the observer (a ViewController, for example) instead of the observable (the network query).
Only when the error checking for the network is done is the observer.onNext method called, and the stream is finished with the observer.onCompleted method call.

Observers

With the network query encapsulated in a single function and returned as an Observable instance, we can proceed to use this query by subscribing an observer (see the subscribe method). The relationship between an Observer that observes an Observable is called a subscription. The Rx library provides options for which threads the code will run on (observeOn and subscribeOn), a way for you to handle the result or errors with direct access to the ViewController's properties such as UI references (onNext and onError), a way for you to be informed when the observable stream is finished (onCompleted), and a way for you to disregard the results of a network query (by disposing of the subscription variable). You might need to do that last one if your user suddenly presses home and leaves your app, and you lose access to the context and resources you need to interact with.

    let subscription = rxGetUsers()
        // When finished, return to the main thread to update the UI
        .observeOn(MainScheduler.instance)
        // Perform the work in parallel on a separate thread
        .subscribeOn(ConcurrentDispatchQueueScheduler.init(queue: AppDelegate.networkQueue))
        .subscribe(
            // What to do on each emission
            onNext: { (dict: NSDictionary) in
                self.label.text = dict.valueForKey("result") as? String
            },
            // What to do when an error occurs
            onError: { error in
                print(error) // or you could display an alert!
            },
            // What to do when the stream is finished
            onCompleted: {
                print("Done with the network request!") // or perform another network query here!
            }
        )

Summary

Once you understand the basics of Rx, I am sure that you will come to appreciate its great use of the Observer pattern and move past the annoyance and difficulty of handling network requests that respect the life cycle of a mobile app, or the confusing callback-ception/hell required when multiple queries need to be put together. In the next article, we'll show the simple usage of operators and data binding provided by Rx.

About the author

Darren Sapalo is a software developer, an advocate for UX, and a student taking up his Master's degree in Computer Science. He enjoyed developing games in his free time when he was twelve. Having finally finished his undergraduate thesis on computer vision, he took up industry work with Apollo Technologies Inc., developing for both the Android and iOS platforms.
Being a Data Scientist with Jupyter – Part 2

Marijn van Vliet
04 May 2016
8 min read
This is the second part of a two-part piece on Jupyter, a computing platform used by many scientists to perform their data analysis and modeling. This second part will dive into some code and give you a taste of what it is like to be a data scientist. If you want to type along, you will need a Python installation with the following packages: the Jupyter notebook (formerly the IPython notebook) and Python's scientific stack. For installation instructions, see here or here. Go ahead and fire up the notebook by typing jupyter notebook into a command prompt, which will start a web server and point your browser to localhost:8888, and click on the button to create a new notebook backed by an IPython kernel. The code for our first cell will be the following.

Cell 1:

    %pylab inline

By executing the cell (Shift + Enter), Jupyter will populate the namespace with various functions from the NumPy and Matplotlib packages, as well as configure the plotting engine to display figures as inline HTML images.

Output 1:

    Populating the interactive namespace from numpy and matplotlib

The experiment

I'm a neuroscientist myself, so I'm going to show you a magic trick I once performed for my students. One student would volunteer to be equipped with an EEG cap and was put in front of a screen. On the screen, nine playing cards were presented to the volunteer with the instruction, "Pick any of these cards."

[Image: the nine cards the volunteer could select]

After the volunteer memorized the card, playing cards would flash across the screen one by one. The volunteer would mentally count the number of times his/her card was shown and not say anything to anyone. At the end of the sequence, I would analyze the EEG data and could tell with frightful accuracy which of the cards the volunteer had picked. The secret to the trick is an EEG component called the P300: a sharp peak in the signal when something grabs your attention (such as your card flashing across the screen).
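The reason the counting task works is that the P300 is time-locked to the flashes: averaging many one-second epochs cancels out the background EEG while the repeated response survives. A self-contained NumPy sketch with synthetic data (my own illustration, not the article's recording) shows the principle:

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "P300-like" template: a bump peaking early in a 100-sample epoch
t = np.linspace(0, 1, 100)
template = np.exp(-((t - 0.3) ** 2) / 0.005)

# 30 single-trial epochs: the same template buried in heavy noise
trials = template + rng.standard_normal((30, 100))

# Averaging across trials suppresses the noise by roughly sqrt(30)
average = trials.mean(axis=0)

err_single = np.abs(trials[0] - template).mean()
err_average = np.abs(average - template).mean()
print(err_single > err_average)  # True: the average tracks the template far better
```

The same averaging idea, applied separately per card, is what lets the trick pick out which card grabbed the volunteer's attention.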
The data
I've got a recording of myself as a volunteer; grab it here. It is stored as a MATLAB file, which can be loaded using SciPy's loadmat function. The code will be the following.
Cell 2:
import scipy.io  # Import the IO module of SciPy

m = scipy.io.loadmat('tutorial1-01.mat')  # Load the MATLAB file
EEG = m['EEG']  # The EEG data stream
labels = m['labels'].flatten()  # Markers indicating when which card was shown

# The 9 possible cards the volunteer could have picked
cards = [
    'Ace of spades',
    'Jack of clubs',
    'Queen of hearts',
    'King of diamonds',
    '10 of spades',
    '3 of clubs',
    '10 of hearts',
    '3 of diamonds',
    'King of spades',
]
The preceding code slaps the data onto our workbench. From here, we can use a huge assortment of tools to visualize and manipulate the data. The EEG and labels variables are of the numpy.ndarray type, a data structure that is the bread and butter of data analysis in Python. It makes it easy to work with numeric data in the form of a multidimensional array. For example, we can query the size of the array via the following code.
Cell 3:
print 'EEG dimensions:', EEG.shape
print 'Label dimensions:', labels.shape
Output 3:
EEG dimensions: (7, 288349)
Label dimensions: (288349,)
I recorded EEG with seven electrodes, collecting numerous samples over time. Let's visualize the EEG stream through the following code.
Cell 4:
figure(figsize=(15,3))  # Make a new figure of the given dimensions (in inches)
bases = 100 * arange(7)  # Add some vertical whitespace between the 7 channels
plot(EEG.T + bases)  # The .T property returns a version where rows and columns are transposed
xlabel('Time (samples)')  # Label the X-axis; a good scientist always labels his/her axes!
Output 4:
Output of cell 4
Note that NumPy's arrays are very clever concerning arithmetical operators such as addition. Adding a single value to an array will add the value to each element in the array. Adding two equally sized arrays will sum up the corresponding elements.
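Those addition rules, along with the indexing tricks that later cells lean on, are easy to verify in a tiny standalone snippet. The arrays below are made-up toy data for illustration, not the EEG recording:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([10, 20, 30])

# Adding a single value adds it to every element
print(a + 100)  # [101 102 103]

# Adding two equally sized arrays sums the corresponding elements
print(a + b)    # [11 22 33]

# Adding a 1D array to a 2D array adds it to every row (broadcasting),
# which is what an expression like plot(EEG.T + bases) relies on
m = np.zeros((2, 3))
print(m + b)    # two rows, each [10. 20. 30.]

# flatnonzero plus integer ("fancy") indexing, the trick used for card onsets
labels = np.array([0, 0, 3, 0, 7, 0])
onsets = np.flatnonzero(labels)  # indices of the non-zero entries
print(onsets)          # [2 4]
print(labels[onsets])  # [3 7]

# Boolean (masked) indexing, the trick used later to average trials per card
values = np.array([1.0, 3.0, 10.0, 2.0, 12.0, 14.0])
classes = np.array([1, 1, 2, 1, 2, 2])
print(values[classes == 2].mean())  # 12.0
```

The same few operations, applied to the real EEG and labels arrays, are all the indexing machinery the rest of this article uses.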
Adding a 1D array (a vector) to a 2D array (a matrix) will add the 1D array to every row of the 2D array. This is known as broadcasting and can save a ton of tedious for loops. The labels variable is a 1D array that contains mostly zeros. However, at the exact onset of the presentation of a playing card, it contains the integer index (starting from 1) of the card being shown. Take a look at the following code.
Cell 5:
figure(figsize=(15,3))
scatter(arange(len(labels)), labels, edgecolor='white')  # Scatter plot
Output 5:
Output of cell 5
Slicing up the data
Cards were shown at a rate of two per second. We are interested in the response generated whenever a card was shown, so we cut one-second-long pieces of the EEG signal, each starting from the moment a card was shown. These pieces will be named "trials". A useful function here is flatnonzero, which returns all the indices of an array that contain a non-zero value. Used cleverly, it gives us the time (as an index) at which each card was shown. Execute the following code.
Cell 6:
# Get the onset of the presentation of each card
onsets = flatnonzero(labels)
print 'Onset of the first 10 cards:', onsets[:10]
print 'Total number of onsets:', len(onsets)

# Here is how we can use the onsets variable
classes = labels[onsets]
print 'First 10 cards shown:', classes[:10]
Output 6:
Onset of the first 10 cards: [ 7789  8790  9814 10838 11862 12886 13910 14934 15958 16982]
Total number of onsets: 270
First 10 cards shown: [3 6 7 9 1 8 5 2 4 9]
In line 7, we used another cool feature of NumPy's arrays: fancy indexing. In addition to the classical indexing of an array using a single integer, we can index a NumPy array with another NumPy array, as long as the second array contains only integers. Another useful way to index arrays is to use slices. Let's use this to create a three-dimensional array containing all the trials. Take a look at the following code.
Cell 7:
nchannels = 7  # 7 EEG channels
sample_rate = 2048.  # The sample rate of the EEG recording device was 2048Hz
nsamples = int(1.0 * sample_rate)  # one second's worth of data samples
ntrials = len(onsets)

trials = zeros((ntrials, nchannels, nsamples))
for i, onset in enumerate(onsets):
    # Extract a slice of EEG data
    trials[i, :, :] = EEG[:, onset:onset + nsamples]

print trials.shape
Output 7:
(270, 7, 2048)
We now have 270 trials (one trial for each time a card was flashed across the screen). Each trial consists of a little one-second piece of EEG, recorded on seven channels with 2,048 samples each. Let's plot one of the trials by running the following code.
Cell 8:
figure(figsize=(4,4))
bases = 100 * arange(7)  # Add some vertical whitespace between the 7 channels
plot(trials[0, :, :].T + bases)
xlabel('Time (samples)')
Output 8:
Output of cell 8
Reading my mind
Looking at the individual trials is not all that informative. Let's calculate the average response to each card and plot it. To get all the trials where a particular card was shown, we can use the final way to index a NumPy array: using another array consisting of Boolean values. This is called Boolean or masked indexing. Take a look at the following code.
Cell 9:
# Let's give each response a different color
colors = ['k', 'b', 'g', 'y', 'm', 'r', 'c', '#ffff00', '#aaaaaa']

figure(figsize=(4,8))
bases = 20 * arange(7)  # Add some vertical whitespace between the 7 channels

# Plot the mean EEG response to each card; such an average is called an ERP in the literature
for i, card in enumerate(cards):
    # Use boolean indexing to get the right trial indices
    erp = mean(trials[classes == i+1, :, :], axis=0)
    plot(erp.T + bases, color=colors[i])
Output 9:
Output of cell 9
One of the cards jumps out: the one corresponding to the green line. This line corresponds to the third card, which turns out to be the Queen of Hearts. Yes, this was indeed the card I had picked!
Do you want to learn more?
This was a small taste of the pleasure it can be to manipulate data with modern tools such as Jupyter and Python's scientific stack. To learn more, take a look at the NumPy tutorial and the Matplotlib tutorial. To learn even more, I recommend Cyrille Rossant's IPython Interactive Computing and Visualization Cookbook.
About the author
Marijn van Vliet is a postdoctoral researcher at the department of Neuroscience and Biomedical Engineering at Aalto University. He uses Jupyter to analyse EEG and MEG recordings of the human brain in order to understand more about how it processes written and spoken language. He can be found on Twitter @wmvanvliet.
Liz Tom
25 Apr 2016
5 min read

Python - OOP! (Python: Object-Oriented Programming)

Or Currency Conversion using Python
I love to travel and one of my favorite programming languages is Python. Sometimes when I travel I like to make things a bit more difficult and, instead of just asking Google to convert currency for me, I like to have a script on my computer that requires me to know the conversion rate and then calculate my needs. But seriously, let's use a currency converter to help explain some neat reasons why object-oriented programming is awesome.
Money Money Money
First let's build a currency class.
class Currency(object):
    def __init__(self, country, value):
        self.country = country
        self.value = float(value)
OK, neato. Here's a currency class. Let's break this down. In Python, every class has an __init__ method (whether you write it yourself or inherit it); this is how we build an instance of a class. If I call Currency() with no arguments, it breaks, because our class happens to require two arguments. self is not required to be passed. In order to create an instance of currency we just use Currency('Canada', 1.41) and we've now got a Canadian instance of our currency class. Now let's add some helpful methods onto the Currency class.
    def from_usd(self, dollars):
        """If you provide USD it will convert to foreign currency"""
        return self.value * float(dollars)

    def to_usd(self, dollars):
        """If you provide foreign currency it will convert to USD"""
        return float(dollars) / self.value
Again, self isn't needed by us to use the methods but needs to be passed to every method in our class. self is our instance. In some cases self will refer to our Canadian instance, but if we were to create a new instance Currency('Mexico', 18.45), self can now also refer to our Mexican instance. Fun. We've got some awesome methods that help me do math without me having to think. How does this help us? Well, we don't have to write new methods for each country. This way, as currency rates change, we can update the values rather quickly and also deal with many countries at once.
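As a quick sanity check, here is the class with its two methods assembled into one runnable sketch. This uses Python 3 print syntax and float conversions so fractional amounts survive the round trip; the 1.41 rate is just the made-up Canadian example from above:

```python
class Currency(object):
    def __init__(self, country, value):
        self.country = country
        self.value = float(value)  # units of foreign currency per 1 USD

    def from_usd(self, dollars):
        """If you provide USD it will convert to foreign currency."""
        return self.value * float(dollars)

    def to_usd(self, dollars):
        """If you provide foreign currency it will convert to USD."""
        return float(dollars) / self.value


canada = Currency('Canada', 1.41)
print(round(canada.from_usd(100), 2))  # 141.0 -- 100 USD in Canadian dollars
print(round(canada.to_usd(141), 2))    # 100.0 -- 141 Canadian dollars in USD
```

Creating a second instance, say Currency('Mexico', 18.45), reuses exactly the same two methods with a different stored rate, which is the whole point of putting the rate on the instance.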
Conversion between USD and foreign currency is all done the same way. We don't need to change the math based on the country we're planning on visiting; we only need to change the value of the currency relative to USD. I'm an American, so I used USD because that's the currency I'd be converting to and from most often. But if I wanted, I could have named them from_home_country and to_home_country. Now how does this work? Well, if I wanted to run this script I'd just do this:
again = True
while again:
    country = raw_input('What country are you going to?\n')
    value = float(raw_input('How many of their dollars equal 1 US dollar?\n'))
    foreign_country = Currency(country, value)
    convert = raw_input('What would you like to convert?\n1. To USD\n2. To %s dollars\n' % country)
    dollars = raw_input('How many dollars would you like to convert?\n')
    if convert == '1':
        print dollars + ' ' + country + ' dollars are worth ' + str(foreign_country.to_usd(dollars)) + ' US dollars\n'
    elif convert == '2':
        print dollars + ' US dollars are worth ' + str(foreign_country.from_usd(dollars)) + ' ' + country + ' dollars'
    again = raw_input('\n\n\nWant to go again? (Y/N)\n')
    if again == 'y' or again == 'Y':
        again = True
    elif again == 'n' or again == 'N':
        again = False
**I'm still using Python 2, so if you're using Python 3 you'll want to change those raw_input calls to just input. This way we can convert as much currency as we want between USD and any country! I can now travel the world feeling comfortable: even if I can't access the Internet, as long as I have my computer nearby and an exchange rate board in sight at a bank or hotel lobby, I'll be able to convert currency with ease, without having to remember which direction converts my money to USD and which converts USD to Canadian dollars. Object-oriented programming allows us to create objects that all behave in the same way but store different values, like a blue car, red car, or green car.
The cars all behave the same way but they are all described differently. They might all have different MPG, but the way we calculate their MPG is the same. They all have four wheels and an engine. While it can be harder to build your program with object-oriented design in mind, it definitely helps with maintainability in the long run.
About the Author
Liz Tom is a Software Developer at Pop Art, Inc in Portland, OR. Liz's passion for full stack development and digital media makes her a natural fit at Pop Art. When she's not in the office, you can find Liz attempting parkour and going to check out interactive displays at museums.
Mark Price
22 Apr 2016
5 min read

.NET Core and the future of .NET

.NET Core is Microsoft's new cross-platform implementation of .NET. Although the .NET Framework isn't vanishing, .NET Core is set to be the future focus of Microsoft's development platform. Born of the "need and desire to have a modern runtime that is modular and whose features and libraries can be cherry picked", the .NET Core execution engine is also open source on GitHub.
What .NET Core means for the future of working with .NET
There are three groups of programmers that need to evaluate how they work with .NET:
Existing .NET developers who are happy with their applications as-is.
Existing .NET developers who want to move their applications cross-platform.
Developers new to .NET.
For existing .NET developers who have been working with the .NET Framework for the past 15 years, switching to the cross-platform .NET Core means giving up a vast number of familiar APIs and entire frameworks such as WPF and WCF. WPF is used to build Windows desktop applications, so that is not a surprise; WCF, however, is the best technology for building and consuming SOAP services, so it is a shame that it can currently only be used on Windows. Microsoft have hinted that WCF might be included in future versions of .NET Core, but personally I wouldn't bet my business on that. If existing .NET developers need to continue to use technologies such as WCF, they should be aware that they must continue to use Windows as the host operating system. For those new to .NET, I would recommend learning .NET Core first: evaluate it and see whether it can provide the platform you need. Only if it cannot should you look at the .NET Framework. Today, .NET Core has not had a final release. In November 2015 Microsoft released a first release candidate (RC1) with a "go-live" licence, meaning that they support its use in production environments. Since then there have been delays, primarily to the command line tools, and Microsoft announced there will be an RC2 release, likely in May 2016.
A final release date has not been announced. I recommend that a .NET developer not switch an existing application to .NET Core today. .NET Core should be evaluated for future projects, especially those that would benefit from the flexibility of cross-platform deployment.
Taking .NET Core open source
The decision to make .NET Core a piece of open source technology is important because it enables Microsoft teams to collaborate with external developers in the community and accept code contributions to improve .NET. A richer flow of ideas and solutions helps Microsoft teams step outside their bubble. Open source developers use a variety of programming languages and platforms. They are quick to learn new platforms, and Microsoft's .NET platform is one of the easiest to learn and most productive to use. The biggest challenge will be trusting that Microsoft really has embraced open source. As long as Scott Guthrie, one of the proponents of open source within Microsoft, is in charge of .NET, I believe we can trust them.
Building and working cross-platform with .NET Core
.NET Core 1.0 does not support the creation of desktop applications, whether for Windows or for other platforms such as Linux. For cross-platform development, .NET Core 1.0 only supports web applications and services (using ASP.NET Core) and command line applications. .NET Core 1.0 also supports Universal Windows Platform apps that are cross-device, but they are limited to Microsoft Windows 10 platforms, including Xbox One and HoloLens. For cross-platform mobile development, use Xamarin. Xamarin is based on the Mono project, not on .NET Core. It is likely that Microsoft will slowly merge .NET Core and Xamarin so that in a few years Microsoft has a single ".NET Core 2.0" that supports cross-platform mobile and web development. This fits with Microsoft CEO Satya Nadella's "Mobile First, Cloud First" strategy.
.NET Core is designed to be componentized so that only the minimum number of packages that your application requires will be deployed as part of a continuous integration strategy. .NET Core supports Linux and Docker to allow the virtual machine infrastructure flexibility that DevOps needs.
Changing the landscape with "Bash on Ubuntu on Windows"
The Linux world has a more vibrant and faster-moving developer tool set than Microsoft's Windows. Although Microsoft has the powerful PowerShell command line platform, it cannot compete with open source Bash. Bringing Bash to Windows is the final piece of a master plan that enables Linux developers to immediately feel at home on Windows, and Microsoft developers to instantly gain all the amazing tools available for Bash, as well as use the same tools on both Windows and Linux. "Bash on Ubuntu on Windows" (BUW) means that not only has Microsoft provided the cross-platform .NET Core and the cross-platform Visual Studio Code, but also cross-platform developer command line tools. Now a developer can do their work on any platform and not have to learn new tools or APIs.
.NET Core and the future of Microsoft
.NET Core (and BUW and Visual Studio Code) all point to Microsoft finally recognizing that their Windows platform is a de facto legacy platform. Windows had a long 30-year run, but its reign is almost over. Microsoft's Azure is their new developer platform, and it is amazing. With Azure Stack coming soon to enable on-premise private clouds with the same features as their public cloud, I believe Microsoft is in the best position to dominate as a cloud developer platform. To enable our heterogeneous future, and to build mobile and cloud solutions, Microsoft has embraced open source and cross-platform tools and technologies. Microsoft's wonderful developer tools and technologies aren't just for Windows users any more; they are for everyone.
About the Author
Mark J. Price is a Microsoft Certified Trainer (MCT) and Microsoft Specialist, Programming in C# and Architecting Microsoft Azure Solutions, with more than 20 years of educational and programming experience. He is the author of C# 6 and .NET Core 1.0: Modern Cross-Platform Development.
Darren Sapalo
22 Apr 2016
6 min read

RxSwift Operators

In the previous article, we talked about how the Rx framework for Swift could help in performing asynchronous tasks, creating an observable from a network request, dealing with streams of data, and handling errors and displaying successfully retrieved data elegantly on the main thread coming from the background thread. This article will talk about how to take advantage of the operators on observables to transform data.
Hot and Cold Observables
There are different ways to create observables, and we saw an example of it previously using the Observable.create method. Conveniently, RxSwift provides extensions to arrays: the Array.toObservable method.
var data = ["alpha": ["title": "Doctor Who"], "beta": ["title": "One Punch Man"]]
var dataObservable = data.toObservable()
Note, however, that the code inside the Observable.create method does not run when you call it. This is because it is a Cold Observable, meaning that it requires an observer to be subscribed to the observable before it will run the code segment defined in the Observable.create method. In the previous article, this means that running Observable.create won't trigger the network query until an observer is subscribed to the observable. IntroToRx provides a better explanation of Hot and Cold Observables in their article.
Rx Operators
When you begin to work with observables, you'll realize that RxSwift provides numerous functions that encourage you to think of processing data as streams or sequences. For example, you might want to filter an array of numbers to only get the even numbers. You can do this using the filter operation on an observable.
var data = [1, 2, 3, 4, 5, 6, 7, 8]
var dataObservable = data.toObservable().filter { (elem: Int) -> Bool in
    return elem % 2 == 0
}
dataObservable.subscribeNext { (elem: Int) in
    print("Element value: \(elem)")
}
Chaining Operators
These operators can be chained together, which is actually much more readable (and easier to debug) than a lot of nested code caused by numerous callbacks.
For example, I might want to query a list of news articles, keep only the ones after a certain date, and take only three to be displayed at a time.
API.rxGetAllNews()
    .filter { (elem: News) -> Bool in
        return elem.date.compare(dateParam) == NSOrderedDescending
    }
    .take(3)
    .subscribe(onNext: { (elem: News) in
        print(elem.description)
    })
Elegantly Handling Errors
Rx gives you control over your data streams so that you can handle errors more easily. For example, your network call might fail because you don't have any network connection. Some applications would then work better if they fall back to the data available on the local device. You can check the type of error (e.g. no server response), use an Rx observable as a replacement for the stream, and still proceed to run the same observer code.
API.rxGetAllNews()
    .filter { (elem: News) -> Bool in
        return elem.date.compare(dateParam) == NSOrderedDescending
    }
    .take(3)
    .catchError { (e: ErrorType) -> Observable<News> in
        return LocalData.rxGetAllNewsFromCache()
    }
    .subscribe(onNext: { (elem: News) in
        print(elem.description)
    })
Cleaning up Your Data
One of my experiences wherein Rx was useful was when I was retrieving JSON data from a server but the JSON data had some items that needed to be merged. The data looked something like below:
[
    ["name": "apple", "count": 4],
    ["name": "orange", "count": 6],
    ["name": "grapes", "count": 4],
    ["name": "flour", "count": 2],
    ["name": "apple", "count": 7],
    ["name": "flour", "count": 1.3]
]
The problem is, I need to update my local data based on the total of these quantities, not create multiple rows/instances in my database! What I did was first transform the JSON array entries into an observable, emitting each element.
class func dictToObservable(dict: [NSDictionary]) -> Observable<NSDictionary> {
    return Observable.create { observer in
        dict.forEach({ (e: NSDictionary) -> () in
            observer.onNext(e)
        })
        observer.onCompleted()
        return NopDisposable.instance
    }
}
Afterwards, I called the observable and performed a reduce function to merge the data.
class func mergeDuplicates(dict: [NSDictionary]) -> Observable<[NSMutableDictionary]> {
    let observable = dictToObservable(dict) as Observable<NSDictionary>

    return observable.reduce([], accumulator: { (var result, elem: NSDictionary) -> [NSMutableDictionary] in
        let filteredSet = result.filter({ (filteredElem: NSDictionary) -> Bool in
            return filteredElem.valueForKey("name") as! String == elem.valueForKey("name") as! String
        })

        if filteredSet.count > 0 {
            if let element = filteredSet.first {
                let a = NSDecimalNumber(decimal: (element.valueForKey("count") as! NSNumber).decimalValue)
                let b = NSDecimalNumber(decimal: (elem.valueForKey("count") as! NSNumber).decimalValue)
                element.setValue(a.decimalNumberByAdding(b), forKey: "count")
            }
        } else {
            let m = NSMutableDictionary(dictionary: elem)
            m.setValue(NSDecimalNumber(decimal: (elem.valueForKey("count") as! NSNumber).decimalValue), forKey: "count")
            result.append(m)
        }

        return result
    })
}
I created an accumulator variable, which I initialized to [], an empty array. Then, for each element emitted by the observable, I checked whether the name already exists in the accumulator (result) by filtering the result for a matching name. If the filteredSet count is greater than zero, the name already exists. That means that element is the instance inside the result whose count should be updated, which ultimately updates my accumulator (result). If it doesn't exist, then a new entry is added to the result. Once all entries are finished, the accumulator (result) is returned to be used by the next emission, or as the final result after processing the data sequence.
Where Do I Go From Here?
The Rx community is slowly growing, with more and more people contributing to the documentation and bringing it to their languages and platforms. I highly suggest you go straight to their website and documentation for a more thorough introduction to the framework. This gentle introduction to Rx was meant to prepare you for the wealth of knowledge and great design patterns they have provided in the documentation! If you're having difficulty understanding streams, sequences, and what the operators do, RxMarbles.com provides interactive diagrams for some of the Rx operators. It's an intuitive way of playing with Rx at a higher level of understanding, without touching code. Go check them out! RxMarbles is also available on the Android platform.
About the Author
Darren Sapalo is a software developer, an advocate for UX, and a student taking up his Master's degree in Computer Science. He enjoyed developing games in his free time when he was twelve. Finally finished with his undergraduate thesis on computer vision, he took up industry work with Apollo Technologies Inc., developing for both the Android and iOS platforms.
Joakim Verona
21 Apr 2016
4 min read

The Tao of DevOps

What is Tao? It's a complex idea - but one method of thinking of it is to find the natural way of doing things, and then making sure you do things that way. It is intuitive knowing, an approach that can't be grasped just in principle but only through putting it into practice in daily life. The principles of Tao can be applied to almost anything. We've seen the Tao of Physics, the Tao of Pooh, even the Tao of Dating. Tao principles apply just as well to DevOps - because who can know fully what DevOps actually is? It is an idiom as hard to define as "quality" - and good DevOps is closely tied to the good quality of a software product. Want a simple example? A recipe for cooking a dish normally starts with a list of ingredients, because that's the most efficient way of describing cooking. When making a simple dessert, the recipe starts with a title: "Strawberries And Cream". Already we can infer a number of steps in making the dish. We must acquire strawberries and cream, and probably put them together on a plate. The recipe will continue to describe the preparation of the dish in more detail, but even if we read only the heading, we will make few mistakes. So what does this mean for DevOps and product creation? When you are putting things together and building things, the intuitive and natural way to describe the process is to do it declaratively. Describe the "whats" rather than the "hows", and then the "hows" can be inferred.
The Tao of Building Software
Most build tools have at their core a way of declaring relationships between software components. Here's a Make snippet:
a : b
    cc b
And here's an Ant snippet:
<build>
    cc b
</build>
And a Maven snippet:
<dependency>
    lala
</dependency>
Many people think they wound up in a Lovecraftian hell when they see XML, even though the brackets are perfectly Euclidean. But if you squint hard enough, you will see that most tools at their core describe dependency trees.
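That "declare the whats, infer the hows" idea can be sketched in a few lines of Python: given only a declared dependency mapping, a concrete build order falls out. This is a toy resolver for illustration only, not how Make, Ant, or Maven are actually implemented, and the target names are made up:

```python
# Toy declarative build: each target lists only what it depends on (the "whats")
deps = {
    'app': ['lib', 'resources'],
    'lib': ['resources'],
    'resources': [],
}

def build_order(target, deps, done=None):
    """Infer the concrete build steps (the 'hows') from the declarations."""
    if done is None:
        done = []
    for d in deps[target]:          # dependencies must be built first
        build_order(d, deps, done)
    if target not in done:          # build each target at most once
        done.append(target)
    return done

print(build_order('app', deps))     # ['resources', 'lib', 'app']
```

Swap in any acyclic mapping of targets to dependencies and the same few lines infer a valid order; the declaration never has to spell out the sequencing itself.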
The Apache Maven tool is well-known, and very explicit about the declarative approach. So, let's focus on that and try to find the Tao of Maven. When we are having a good day with Maven and we are following the ways of Tao, we describe what type of software artifact we want to build, and the components we are going to use to put it together. That's all. The concrete building steps are inferred. Of course, since life is interesting and complex, we will often encounter situations where the way of Tao eludes us. Consider this example:
type:pom
antcall tar together ../*/target/*.jar
Although abbreviated, I have observed this antipattern several times in real-world projects. What's wrong with it? After all, this antipattern occurs because the alternatives are non-obvious, or more verbose. You might think it's fine. But first of all, notice that we are not describing whats (at least not in a way that Maven can interpret). We are describing hows. Fixing this will probably require a lot of work, but any larger build will ensure that it eventually becomes mandatory to find a fix. Pause (perhaps in your Zen garden) and consider that dependency trees are already described within the code of most programming languages. Isn't the "import" statement of Java, Python and the like enough? In theory this is adequate - if we disregard the dynamism afforded by Java, where it is possible to construct a class name as a string and load it. In practice, there are a lot of different artifact types that might contain various resources. Even so, it is clearly possible in theory to package all required code if the language just supported it. JSR 294 - "modularity in Java" - is an effort to provide such support at the language level.
In Summary
So what have we learned? The two most important lessons are simple - when building software (or indeed, any product), focus on the "Whats" before the "Hows".
And when you're empowered with build tools such as Maven, make sure you work with the tool rather than around it.
About the Author
Joakim Verona is a consultant with a specialty in Continuous Delivery and DevOps, and the author of Practical DevOps. He has worked as the lead implementer of complex multilayered systems such as web systems, multimedia systems, and mixed software/hardware systems. His wide-ranging technical interests led him to the emerging field of DevOps in 2004, where he has stayed ever since. Joakim completed his master's in computer science at Linköping Institute of Technology. He is a certified Scrum master, Scrum product owner, and Java professional.
Erik Kappelman
20 Apr 2016
5 min read

Beating jQuery: Making a Web Framework Worth its Weight in Code

Let me give you a quick disclaimer. This is a bit of a manifesto. Last year I started a little technology company with some friends of mine. We were lucky enough to get a solid client for web development right away. He was an author in need of a blogging app to communicate with the fans of his upcoming book. In another post I have detailed how I used Angular.js, among other tools, to build this responsive, dynamic web app. Using Angular.js is a wonderful experience and I would recommend it to anyone. However, Angular.js really only looks good by comparison. By this I mean, if we allow any web framework to exist in a vacuum and not simply rank them against one another, they are all pretty bad. Before you gather your pitchforks and torches to defend your favorite flavor, let me explain myself. What I am arguing in this post is that many of the frameworks we use are not worth their weight in code. In other words, we add a whole lot of code to our apps when we import the frameworks, and then in practice using the framework is only a little bit better than using jQuery, or even pure JavaScript. And yes, I know that using jQuery means including a whole bunch of code in your web app, but frameworks like Angular.js are many times built on top of jQuery anyway. So, the weight of jQuery seems to be a necessary evil. Let's start with a simple http request for information from the backend. This is what it looks like in Angular.js:
$http.get('/dataSource').success(function(data) {
    $scope.pageData = data;
});
Here is a similar request using Ember.js:
App.DataRoute = Ember.Route.extend({
    model: function(params) {
        return this.store.find('data', params.data_id);
    }
});
Here is a similar jQuery request:
$.get( "ajax/stuff.html", function( data ) {
    $( ".result" ).html( data );
    alert( "Load was performed." );
});
It's important for readers to remember that I am a front-end web developer.
By this, I mean I am sure there are complicated, technical, and valid reasons why Ember.js and Angular.js are far superior to using jQuery. But, as a front-end developer, I am interested in speed and simplicity. When I look at these http requests and see that they are overwhelmingly similar, I begin to wonder if these frameworks are actually getting any better. One of the big draws to Angular.js and Ember.js is the use of handlebars to ease the creation of dynamic content. Angular.js using handlebars looks something like this:
<h1> {{ dynamicStuff }} </h1>
This is great because I can go into my controller and make changes to the dynamicStuff variable and it shows up on my page. However, the following accomplishes a similar task using jQuery.
$(function () {
    var dynamicStuff = "This is dog";
    $('h1').html( dynamicStuff );
});
I admit that there are many ways in which Angular.js or Ember.js make developing easier. DOM manipulation definitely takes less code and overall the development process is faster. However, there are many times that the limitations of the framework drive the development process. This means that developers sacrifice or change functionality simply to fit the framework. Of course, this is somewhat expected. What I am trying to say with this post is that if we are going to sacrifice load times and constrict our development methods in order to use the framework of our choice, can they at least be simpler to use? So, just for the sake of advancement, let's think about what the perfect web framework would be able to do. First of all, there needs to be less setup. The brevity and simplicity of the http request in Angular.js is great, but it requires injecting the correct dependencies in multiple files. This adds stress, opportunities to make mistakes, and development time. So, instead of requiring the developer to grab each specific tool for each specific implementation, what if the framework took care of that for you?
By this I mean that if I were to make an HTTP request like this:

http('targetURL', get, data)

then, when the source is compiled or interpreted, the dependencies needed for this HTTP request would be dynamically brought into the mix. This way we can make a simpler HTTP request and avoid the hassle of setting up the dependencies.

As far as DOM manipulation goes, handlebars seem to be about as good as it gets. However, there need to be better ways to target individual instances of a repeated element, such as the <p> tags holding the captions in a photo gallery. The current solutions for problems like this one are overly complex, especially when the issue involves one of the most common things on the internet: a photo gallery.

About the Author

As you can see, I am more of a critic than a problem solver. I believe the issues I bring up here are valid. As we all become more and more entrenched in the Internet of Things, it would be nice if the development process caught up with the standards of ease that end users demand.
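Just to make the wish concrete: the simplified call imagined above could be approximated today as a thin wrapper over fetch(). This is a hypothetical sketch, not part of any framework; the http() name and signature are taken from the wish-list example, and the JSON handling is my own assumption:

```javascript
// Hypothetical http() helper sketching the simplified API wished for above.
// Signature follows the article's example: http(url, method, data).
function http(url, method, data) {
  var options = {
    method: String(method).toUpperCase(),
    headers: { "Content-Type": "application/json" }
  };
  // Only attach a body for methods that carry one.
  if (data !== undefined && options.method !== "GET") {
    options.body = JSON.stringify(data);
  }
  // fetch() resolves with the parsed JSON, keeping call sites as terse
  // as the Angular example earlier in the post.
  return fetch(url, options).then(function (res) { return res.json(); });
}

// Usage mirroring the Angular snippet:
// http('/dataSource', 'get').then(function (data) { pageData = data; });
```

Whether the dependency wiring happens at compile time, as the article proposes, or at runtime as here, the point stands: the call site should be one line.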

Sam Wood
11 Apr 2016
6 min read

Adblocking and the Future of the Web

Kicked into overdrive by Apple's iOS 9 infamously shipping with adblocking options for Safari, the content creators of the Internet have woken up to the serious challenge of ad-blocking tech. The AdBlock+ Chrome extension boasts over 50 million active users. I'm one of them. I'm willing to bet that you might be one too. AdBlock use is rising massively and globally, and shows no sign of slowing down. Commentators have blamed the web-reading public, declared that web publishers have brought this on themselves, and even made worryingly convincing arguments that adblocking is a conspiracy by corporate supergiants to kill the web as we know it. They all agree on one point, though: the way we present and consume web content is going to have to evolve or die. So how might adblocking change the web?

We All Go Native

One of the most proposed and most popular solutions to the adblocking crisis is to embrace "native" advertising. Similar to sponsorship or product placement in other media, native advertising interweaves its sponsor into the body of the content piece. By doing so, an advert is made immune to the traditional scripts and methods that identify and block net ads. This might be a thank-you note to a sponsor at the end of a blog, an 'advertorial' upsell of a product or service, or corporate content marketing, where a company produces and promotes its own content in a bid to garner your attention for its paid products. (Just like this blog. I'm afraid it's content marketing. Would you like to buy a quality tech eBook? How about the Web Developer's Reference guide - your Bible for everything you need to know about web dev! Help keep this Millennial creative in a Netflix account and pop culture tee-shirts.)

The Inevitable Downsides

It turns out nobody wants to read sponsored content: only 24% of readers scroll down on a native ad, and a 2014 survey by Contently revealed two-thirds of respondents saying they felt deceived by sponsored advertising. We may see this changing.
As the practice becomes more mainstream, readers may come to realize it does not impact quality or journalistic integrity. But it's a worrying set of statistics for anyone who hoped native advertising might save them from the scourge of adblock.

The Great App Exodus

There's an increasingly popular prediction that adblocking may lead to a great exodus of content from browser-based websites to a scattered, app-based ecosystem. We can already see the start of this movement: every major content site bugs you to download its dedicated app, where ads can live free of fear. If you consume most of your mobile media through Snapchat Discover channels, through Facebook mobile sharing, or even through IM services like Telegram, you'll be reading your web content in that app's dedicated built-in reader. That reader is, of course, free of adblocking extensions.

The Inevitable Downsides

The issue here is one of corporate monopoly. Some journalists have criticized Facebook Instant (the tech that has Facebook host articles from popular news sites for faster load times) for giving Facebook too much power over the news business. Vox's Matthew Yglesias predicts a restructuring where "instead of digital media brands being companies that build websites, they will operate more like television studios — bringing together teams that collaborate on the creation of content, which is then distributed through diverse channels that are not themselves controlled by the studio." The control that these platforms could exert raises troubling concerns for the future of the Internet as a bastion of free and public speech.

User Experience with Added Guilt

Alongside their advertising <script> tags, web developers are increasingly building sites that detect whether you're using adblocking software and punish you accordingly.
This can take many forms, from a simple plea to be put on your whitelist in order to keep the servers running, to the cruel and inhuman: some sites go as far as actively blocking content for users running adblockers. Try accessing an article on the likes of Forbes or CityAM with an adblocker turned on. You'll find yourself greeted with an officious note and a scrambled page that refuses to show you the goods unless you switch off the blocker.

The Inevitable Downsides

No website wants to be in a position where it has to beg or bully its visitors. Whilst your committed readers might be happy to whitelist your URL, antagonizing new users is a surefire way to get them to bounce from the site. Sadly, sabotaging their own sites for adblocking visitors might be one of the most effective ways for 'traditional' web content providers to survive. After all, most users block ads in order to improve their browsing experience. If the UX of a site on the whitelist is vastly superior to the UX under adblock, it might end up being more pleasant to browse with the extension off.

An Uneasy Truce between Adblockers and Content

In many ways, adblocking was a war that web adverts started. From the pop-up to the autoplaying video, web ad software has been built to be aggressive. The response of adblockers is an indiscriminate, all-or-nothing approach. As Marco Arment, creator of the Peace adblocking app, notes: "Today’s web readers [are so] fed up that they disable all ads, or even all Javascript. Web developers and standards bodies couldn’t be more out of touch with this issue, racing ahead to give browsers and Javascript even more capabilities without adequately addressing the fundamental problems that will drive many people to disable huge chunks of their browser’s functionality." Both sides need to learn to trust one another again.
The AdBlock+ Chrome extension now comes with an automatic whitelist of sites; 'guilt' website UX works to remind us that a few banner ads might be the vital price we pay to keep our favorite mid-sized content site free and accessible. If content providers work to restore sanity to the web on their end, then our need for adblocking software as users will diminish accordingly. It's a complex balance that will need a lot of goodwill from both 'sides' - but if we're going to save the web as we know it, a truce might be necessary.

Building a better web? How about checking out our Essential Web Dev? Get five titles for only $50!
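As a technical footnote to the detection arms race described above: adblock detection is commonly implemented with a "bait" element that blockers hide. This is a hedged sketch of that general trick, not the code any particular site uses; the class names and timing are assumptions:

```javascript
// Hypothetical sketch of the common "bait element" adblock detection trick:
// insert an element with an ad-like class name; most blockers hide it,
// so a zero height afterwards suggests an adblocker is active.
function detectAdblock(done) {
  var bait = document.createElement("div");
  bait.className = "adsbox ad-banner";   // class names blockers typically target
  bait.style.height = "10px";
  document.body.appendChild(bait);
  // Give the blocker's element-hiding rules a moment to apply.
  setTimeout(function () {
    var blocked = bait.offsetHeight === 0;
    document.body.removeChild(bait);
    done(blocked);
  }, 100);
}

// Usage: detectAdblock(function (blocked) { if (blocked) showPlea(); });
```

Filter lists change constantly, so real-world detectors rotate their bait class names; treat the result as a hint, not a verdict.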

Edward Gordon
07 Apr 2016
5 min read

The Future as a Service

“As a Service” services (service²?) generally allow younger companies to scale quickly and efficiently. A lot of the pain of implementation is abstracted away, letting start-ups focus on the key drivers of any company: product quality and product availability. For less than the cost of proper infrastructure investment, you can have highly available, fully distributed, buzzword-enabled things at your fingertips to start running wild with. However, “as a Service” providers feel like they're filling a short-term void rather than building a long-term viable option for companies. Here's why.

1. Cost

The main driver of SaaS is that there are lower upfront costs. But it's a bit like the debit card versus credit card debate: if you have the money, you can pay for it upfront and never worry about it again. If you don't have the money but need it now, then credit is the answer - along with the associated continued costs. For start-ups, a perceived low-cost model is ideal at first glance. With that comes the downside that you'll be paying out of your aaS for the rest of your service with them, and moving out of the ecosystem that looked so robust four years ago will give the sys admin you have to hire in to fix it nightmares. Cost is a difficult thing to balance, but there are still companies happily running on SQL Server 2005 without any problems; a high upfront cost normally means that it's going to stick around for ages (you'll make it work!). To be honest, for most small businesses, investment in a developer who can stitch together open source technologies to suit your needs will be better than running to the closest spangly Service provider. However, aaS does mean you don't need System Administrators stressing about ORM-generated queries.

2. Ownership of data

An under-discussed but vital issue that lies behind the aaS movement is the ownership of data, and what this means to companies. How secure are the bank details of your clients?
How does the aaS provider secure against attacks? Where does this fit in terms of compliance? To me, the risks associated with giving your data to another company to keep are too high to justify, even if it's backed up by license agreements and all types of unhackable SSL things (#Heartbleed). After all, a bank is more appealing to thieves than a safe behind a picture in your living room. Probably*. As a company, regardless of size, your integrity is all. I think you should own that.

3. The Internet as kingmaker

We once had an issue at the Packt office where, during a desk move, someone plugged an Internet cable (that's the correct term for them, right?) from one port into another, rather than into their computer. The Internet went down for half the day without anyone really knowing what was going on. Luckily, we still had local access to stuff - chapters, databases, schedules, and so on. If we had been fully bought into the cloud, we would have lost a collective 240 man-hours from one office because of an honest mistake. Using the Internet as your only connection point to the data you work with can, and will, have consequences for businesses that work with time-critical pieces of data.

This leaves an interesting space open that, as far as I'm aware, very few “as a Service” providers have explored: the hybrid cloud. If the issue is, basically, the Internet and what cloud storage means to you operationally and in terms of data compliance, then a world where you can keep sensitive and “critical” data local while keeping bulk data with your cloud provider lets you leverage the benefits of both worlds. The advantages of speed and low overheads would still be there, along with the added security of knowing that you're still “owning” your data and your brand reputation. Hybrid clouds generally seem to be an emergent solution in the market at large. There are even solutions now on Kickstarter that provide you with a “cloud” where you own your data. Lovely.
Hell, you can even make your own PaaS with Chef and Docker. I could go on. The clear popularity of “as a Service” products means there's value in the services they're offering. At the moment, though, there are enough problems inherent in adoption to believe that they're a stop-gap to something more definite. The future, I think, lies away from the black and white of aaS versus on-premises software. There are advantages in both, and as we continue to develop services and solutions that blend the two, I think we're going to end up at a more permanent solution to the argument.

*I don't actually advocate the safe-behind-a-picture method. More of a loose-floorboard man myself.

From 4th-10th April, save 50% on 20 of our top cloud titles. From AWS to Azure and OpenStack - and even Docker for good measure - learn how to build the services of tomorrow. If one isn't enough, grab 5 for just $50! Find them here.
Nicholas Maccharoli
01 Apr 2016
4 min read

Carthage: Dependency management made git-like

Why do I need another dependency manager?

Carthage is a decentralized dependency manager for iOS and OS X frameworks. Unlike CocoaPods, Carthage has no central location for hosting repository information (like podspecs). It dictates nothing about what kind of project structure you should have, aside from optionally having a Carthage/ folder in your project's root folder, housing built frameworks in Build/ and, optionally, source files in Checkouts/ if you are building directly from source. This folder hierarchy is generated automatically after running carthage bootstrap.

Carthage leaves it open to the end user to decide how to manage third-party libraries: either check in both the Cartfile and the Carthage/* folders under source control, or just the Cartfile that lists the frameworks you wish to use in your project. Since there is no centralized source of information, project discovery is more difficult with Carthage, but other than that, normal operation is simpler and less error-prone when compared to other package managers.

The Setup of Champions

The best way to install and manage Carthage, in my opinion, is through Homebrew. Just run the following command and you should be in business in no time:

brew install carthage

If for some reason you don't want to go the Homebrew route, you are still in luck! Just download the latest and greatest Carthage.pkg from the Releases page.

Common Carthage Work-flow

Create a Cartfile with dependencies listed and, optionally, branch or version info.

Cartfile grammar notes: The first keyword is either 'git', for a repository not hosted on GitHub, or 'github', for a repository hosted on GitHub. Next is the location of the repository. If the prefix is 'git', then this will be the same as the address you type when running git clone.
The third piece is either the branch you wish to pull the latest from, or the version number of a release combined with one of the following operators: ==, >= or ~>.

github "ReactiveCocoa/ReactiveCocoa" "master"  # Latest version of the master branch of ReactiveCocoa
github "rs/SDWebImage" ~> 3.7                  # Version 3.7 and versions compatible with 3.7
github "realm/realm-cocoa" == 0.96.2           # Only use version 0.96.2

Basic Commands

Assuming that all went well with the installation step, you should now be able to run carthage bootstrap and watch Carthage go through the Cartfile one by one and fetch the frameworks (or build them after fetching from source, if using --no-use-binaries).

Given that this goes without a hitch, all that is left to do is add a new run script phase to your target. To do this, simply click on your target in Xcode and, under the 'Build Phases' tab, click the '+' button and select "New Run Script Phase". Type this in the script section:

/usr/local/bin/carthage copy-frameworks

Then, below the box where you just typed the last line, add the input files of all the frameworks you wish to include and their dependencies.

Last but not least

Once again, click on your target and navigate to the General tab, then go to the Linked Frameworks and Libraries section and add the frameworks from [Project Root]/Carthage/Build/[iOS or Mac]/* to your project. At this point, everything should build and run just fine. As the project requirements change and you wish to add, remove, or upgrade framework versions, just edit the Cartfile, run carthage update, and if needed add new frameworks to, or remove unused frameworks from, your project settings. It's that simple!

A Note on Source Control with Carthage

Given that all of your project's third-party source and frameworks are located under the Carthage/ folder, in my experience it is much easier to simply place this entire folder under source control.
The merits of doing so are simple: when cloning the project or switching branches, there is no need to run carthage bootstrap or carthage update. This saves a considerable amount of time, and the only expense is an increase in the size of the repository.

About the author

Nick Maccharoli is an iOS/backend developer and open source enthusiast working with a startup in Tokyo, enjoying the current development scene. You can see what he is up to at @din0sr or github.com/nirma.
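If you want to try the Cartfile grammar from this article without an Xcode project handy, the pinned examples can be written out and sanity-checked from the shell. This is purely an illustrative check; the repositories are the same ones used in the examples above:

```shell
# Write a Cartfile exercising the three pinning styles described above
# (branch, compatible-version ~>, and exact-version ==).
cat > Cartfile <<'EOF'
github "ReactiveCocoa/ReactiveCocoa" "master"
github "rs/SDWebImage" ~> 3.7
github "realm/realm-cocoa" == 0.96.2
EOF

# Every dependency line starts with its origin keyword ('github' here),
# so a quick sanity check is to count them before running carthage bootstrap.
grep -c '^github' Cartfile   # prints 3
```

From there, carthage bootstrap in the same directory would resolve and build each entry.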

Sam Wood
23 Mar 2016
5 min read

Computerizing Our World with Wearables and IoT

“Sure, the Frinkiac 7 looks impressive– Don’t touch it!– But I predict that within 100 years, computers will be twice as powerful, 10,000 times larger, and so expensive only the five richest kings in Europe will own them.”
Professor Frink, The Simpsons, Series 7, Episode 23, 1996

We've always been laughably bad at predicting what technology is going to look like in the future. Alexander Graham Bell famously said, "I truly believe that one day there will be a telephone in every town in America". Today, we've got a telephone - complete with a supercomputer - for every cerebral hemisphere. From the mainframe, through the PC, and on to the smartphone, computers have been getting smaller and smaller even as their processing power keeps increasing. This has gone hand in hand with the ubiquity of computing devices: from something that could only be afforded by the largest of organizations, to something that is owned by every individual. As computers continue to get smaller and cheaper, it's easy to foresee a world where they easily outnumber people; a world where they move from being separate devices to being utterly integrated into our lives.

So what form will this integration take? The two most likely options are wearables and home automation: computerizing ourselves, and computerizing our world.

Automating Our Homes

Currently, the smart home is a luxury product. But with ever-improving micro technology intersecting with rising energy prices, we may soon find that home automation becomes a necessity for us all. It's easy to imagine a home controlled from your mobile device; already, you can buy commercial systems that set your thermostat from an app. In the next two years, we will be developing systems that cater to generic real-life needs, such as stable lighting systems, entertainment systems, intrusion detection and monitoring systems, and so on.
The immediate challenge for developers looking to work in the home automation space is likely to be wrangling amazing UI for our new remote controls. Are you relishing the prospect of creating an interface that lets a whole family squabble over the thermostat, the lighting, and more, all from their cellphones? Beyond that? Established systems will become more advanced and scalable, performing complex and dynamic tasks. Security will be much more than retina and fingerprint scans, moving towards mapping multiple biometric feeds. Home assistance will progress to speech, mood, and behavioral recognition. How about your own AI assistant that learns the needs of you and your family, and controls your home accordingly? Already, the likes of Google Now can make predictions about your movements and habits. The future will see this technology become refined and integrated across our automated homes: smart houses that turn on the lights when we get up, get the hot water ready for when we shower, and track the weather forecast to adjust the heating accordingly, all without us lifting a finger.

Wearing Our Tech

Today, the successful wearables are either those with narrow applications, like fitness monitoring, or those providing 'second screen' options for our phones. Tomorrow? So many of the proposed visions of the future of tech require us to be plugged into a wearable device. From the Internet of Things monitoring and tracking our body's vital signs, to gesture computing, to virtual and augmented reality: all of these assume taking computing devices out of our pockets and putting them onto our bodies. Skeptical? Sure, Google Glass may have died a death, but the history of tech is littered with the bodies of ideas that were just before their time. The Apple Newton did not usher in the age of personal devices like the iPhone did; the failure of the Rocket eBook did not mean the Kindle was doomed from the start.
I'm personally hoping that just because Glass failed to get any serious traction does not mean I'm never going to get my Spider Jerusalem or Adam Jensen glasses-based HUD. One thing's for sure, though: we'll carry more and more computing power wherever we go. Just as advances in miniaturization meant that the pocket watch was superseded by the wristwatch in the name of efficiency, we'll see our outfits come to incorporate these always-available personal assistants, giving us information and guidance based on context. One study by ONWorld portrays a wearable tech industry on the cusp of reaching $50 billion within the next five years. According to the research, more than 700 million wearable tech units will be shipped to a market eager for advanced technology on their wrists, on their faces, and even incorporated into their clothing.

The Internet of Everything

Ultimately, the success of wearables and home automation will be about incorporating our own squishy, organic forms into the Internet of Things. As wearables and home automation allow the data of our lives to be seamlessly recorded and crunched, the ability of technology to predict our needs and accommodate us only increases. In ten years' time? Our phones might know us better than we know ourselves.

From the 23rd to the 25th, you can save 50% on some of our best Internet of Things titles. If one simply isn't enough, grab any 5 of the featured titles for $50. Start exploring here.