
Reactive Python - Real-time events processing

Xavier Bruhiere
04 Oct 2016
8 min read
A recent trend in programming literature promotes functional programming as a sensible alternative to object-oriented programs for many use cases. This subject feeds many discussions and highlights how important program design is as our applications become more and more complex. Although there is some seductive intellectual challenge here (because yeah, we love to juggle elegant abstractions), there is also real business value:

- Building sustainable, maintainable programs
- Decoupling architecture components for proper teamwork
- Limiting bug exposure
- Better product iteration

When developers spot an interesting approach to a recurrent issue in our industry, they formalize it as a design pattern. Today, we will discuss a powerful member of this family: the observer pattern. We won't dive into the strict rhetorical details (sorry, not sorry). Instead, we will look at how reactive programming can level up the quality of our work.

The scene

That was a bold statement; let's illustrate it with a real-world scenario. Say we are tasked with building a monitoring system. We need some way to collect data, analyze it, and take action when things go wrong. Anomaly detection is an exciting yet challenging problem. We don't want our data scientists to be bothered by infrastructure failures, and in the same spirit, we need other engineers to focus only on how to react to specific disaster scenarios.

The core of our approach consists of two components: a monitoring module firing and forgetting its discoveries on channels, and another processing brick intercepting those events with an appropriate response. This is the UNIX philosophy at its best: do one thing and do it well. We split the infrastructure by concerns and the workers by event types.
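Before wiring up a real message broker, the fire-and-forget idea above can be sketched in a few lines of plain Python. This is a toy, in-memory illustration of the observer pattern; the class and channel names are invented for the example and are not part of the article's codebase.

```python
# A toy, in-memory sketch of the observer pattern: a monitor publishes events
# on named channels, and workers subscribed to a channel react independently.
# Names here (EventChannel, 'cpu_overload') are illustrative only.
class EventChannel:
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, channel, callback):
        # register a worker for a given event type
        self._subscribers.setdefault(channel, []).append(callback)

    def publish(self, channel, payload):
        # fire and forget: the publisher does not know who reacts, or how
        for callback in self._subscribers.get(channel, []):
            callback(payload)


bus = EventChannel()
seen = []
bus.subscribe('cpu_overload', seen.append)
bus.publish('cpu_overload', 'host-42 at 99%')
print(seen)  # ['host-42 at 99%']
```

The monitoring side only knows channel names, and each worker only knows the events it subscribed to, which is exactly the split by concerns and by event types described above.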
Assuming that our team defines well-documented interfaces, this is a promising design. The rest of the article discusses the technical implementation, but keep in mind that I/O documentation and proper load estimation are also fundamental.

The strategy

Our local lab is composed of three elements:

- The alert module, which we will emulate with a simple cli tool that publishes alert messages.
- The actual processing unit, subscribing to the events it knows how to react to.
- A message broker supporting the publish/subscribe (PUBSUB) pattern.

For this last purpose, Redis offers a popular, efficient, and rock-solid option, but the database wasn't designed for this use case. NATS, however, presents itself as follows:

"NATS acts as a central nervous system for distributed systems such as mobile devices, IoT networks, enterprise microservices and cloud native infrastructure. Unlike traditional enterprise messaging systems, NATS provides an always on 'dial-tone'."

Sounds promising! Client libraries are available for major languages, and Apcera, the company sponsoring the technology, has a solid reputation for building reliable distributed systems. Again, we won't look into how the processing actually happens, only at the orchestration of these three moving parts.

The setup

Since NATS is a message broker, we need to run a server locally (version 0.8.0 as of today). Gnatsd is the official and scalable first choice. It is written in Go, so we get performance and a drop-in binary out of the box. For fans of microservices (as I am), an official Docker image is available for pulling. Also, for lazy ones (as I am), a demo server is already running at nats://demo.nats.io:4222. Services will use Python 3.5.1, but 2.7.10 should do the job with minimal changes. Our scenario is mostly about data analysis and system administration on the backend, and Python has a wide range of tools for both areas.
So let's install the requirements:

```
$ pip --version
pip 8.1.1
$ pip install -e git+https://github.com/mcuadros/pynats@6851e84eb4b244d22ffae65e9fbf79bd9872a5b3#egg=pynats click==6.6  # click for cli integration
```

That's all. We are now ready to write services.

Publishing events

Let's warm up by sending some alerts to the cloud. First, we need to connect to the NATS server:

```python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# filename: broker.py

import pynats


def nats_conn(conf):
    """Connect to the nats server from environment variables.

    The point is to allow easy switching without changing the code.
    You can read more on this approach, borrowed from 12-factor apps.
    """
    # the default value comes from docker-compose
    # (https://docs.docker.com/compose/) services link behavior
    host = conf.get('__BROKER_HOST__', 'nats')
    port = conf.get('__BROKER_PORT__', 4222)
    opts = {
        'url': conf.get('url', 'nats://{host}:{port}'.format(host=host, port=port)),
        'verbose': conf.get('verbose', False)
    }
    print('connecting to broker ({opts})'.format(opts=opts))
    conn = pynats.Connection(**opts)
    conn.connect()
    return conn
```

This should be enough to start our client:

```python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# filename: observer.py

import os

import broker


def send(channel, msg):
    # use environment variables for configuration
    nats = broker.nats_conn(os.environ)
    nats.publish(channel, msg)
    nats.close()
```

And right after that, a few lines of code to shape a cli tool:

```python
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim:fenc=utf-8
#
# filename: __main__.py

import click

import observer


@click.command()
@click.argument('command')
@click.option('--on', default='some_event', help='messages topic name')
def main(command, on):
    if command == 'send':
        click.echo('publishing message')
        observer.send(on, 'Terminator just dropped in our space-time')


if __name__ == '__main__':
    main()
```

`chmod +x ./__main__.py` gives it execution permission, so we can test how our first bytes are doing.
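The environment-driven configuration in broker.py can be exercised without a running NATS server. The sketch below mimics (rather than imports) the URL resolution that nats_conn performs; it is a standalone illustration and does not touch pynats or open any connection.

```python
# Mimics the URL resolution performed inside broker.py's nats_conn
# (a standalone sketch; no pynats import, no network access).
def resolve_broker_url(conf):
    # same defaults as the article's code: docker-compose-style 'nats' host
    host = conf.get('__BROKER_HOST__', 'nats')
    port = conf.get('__BROKER_PORT__', 4222)
    return conf.get('url', 'nats://{host}:{port}'.format(host=host, port=port))


print(resolve_broker_url({}))                                   # nats://nats:4222
print(resolve_broker_url({'__BROKER_HOST__': 'demo.nats.io'}))  # nats://demo.nats.io:4222
```

Passing os.environ as the conf dict is what lets us switch brokers per deployment without touching the code, which is the 12-factor idea the docstring alludes to.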
```
$ # the `click` package gives us a productive cli interface
$ ./__main__.py --help
Usage: __main__.py [OPTIONS] COMMAND

Options:
  --on TEXT  messages topic name
  --help     Show this message and exit.

$ __BROKER_HOST__="demo.nats.io" ./__main__.py send --on=click
connecting to broker ({'verbose': False, 'url': 'nats://demo.nats.io:4222'})
publishing message
...
```

This is indeed quite poor in feedback, but no exception means that we did connect to the server and published a message.

Reacting to events

We're done with the heavy lifting! Now that interesting events are flying through the Internet, we can catch them and actually provide business value. Don't forget the point: let the team write reactive programs without worrying about how they will be triggered. I found the following snippet to be a readable syntax for such a goal:

```python
# filename: __main__.py

import observer


@observer.On('terminator_detected')
def alert_sarah_connor(msg):
    print(msg.data)
```

As the capitalized first letter of On suggests, this is a Python class wrapping a NATS connection. It aims to call the decorated function whenever a new message goes through the given channel.
Here is a naive implementation, shamefully ignoring any reasonable error handling and safe connection termination (broker.nats_conn would be much more production-ready as a context manager, but hey, we do things that don't scale, move fast, and break things):

```python
# filename: observer.py

import os

import broker


class On(object):

    def __init__(self, event_name, **kwargs):
        self._count = kwargs.pop('count', None)
        self._event = event_name
        self._opts = kwargs or os.environ

    def __call__(self, fn):
        nats = broker.nats_conn(self._opts)
        subscription = nats.subscribe(self._event, fn)

        def inner():
            print('waiting for incoming messages')
            nats.wait(self._count)
            # we are done
            nats.unsubscribe(subscription)
            return nats.close()
        return inner
```

Instil some life into this file from __main__.py:

```python
# filename: __main__.py

@click.command()
@click.argument('command')
@click.option('--on', default='some_event', help='messages topic name')
def main(command, on):
    if command == 'send':
        click.echo('publishing message')
        observer.send(on, 'bad robot detected')
    elif command == 'listen':
        try:
            alert_sarah_connor()
        except KeyboardInterrupt:
            click.echo('caught CTRL-C, cleaning after ourselves...')
```

Your linter might complain about the injection of the msg argument into alert_sarah_connor, but no offense, it should just work (tm):

```
$ # In a first terminal, listen to messages
$ __BROKER_HOST__="demo.nats.io" ./__main__.py listen
connecting to broker ({'url': 'nats://demo.nats.io:4222', 'verbose': False})
waiting for incoming messages

$ # And fire up alerts in a second terminal
$ __BROKER_HOST__="demo.nats.io" ./__main__.py send --on='terminator_detected'
```

The data appears in the first terminal; celebrate!

Conclusion

Reactive programming implemented with the publish/subscribe pattern brings a lot of benefits for event-oriented products: modular development, decoupled components, scalable distributed infrastructure, and the single-responsibility principle. One should think about how data flows through the system before diving into the technical details.
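The parenthetical above hints that broker.nats_conn would be cleaner as a context manager. Here is a hedged sketch of what that shape could look like; DummyConn is a stand-in object invented for the example so that it runs without a broker, and the real pynats API is deliberately not used.

```python
# Sketch: wrapping connection setup/teardown in a context manager, so the
# connection is closed even on KeyboardInterrupt or an unhandled exception.
# DummyConn is a hypothetical stand-in, not the pynats connection class.
from contextlib import contextmanager


class DummyConn:
    """Stand-in for a broker connection so the sketch runs offline."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


@contextmanager
def nats_conn(conf):
    conn = DummyConn()  # a real version would connect to the broker here
    try:
        yield conn
    finally:
        conn.close()  # guaranteed cleanup, whatever happens in the block


with nats_conn({}) as conn:
    pass  # publish or subscribe here

print(conn.closed)  # True
```

With this shape, the `try/except KeyboardInterrupt` in __main__.py would no longer need to worry about leaked connections.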
This kind of approach is also gaining traction in real-time data processing pipelines (Riemann, Spark, and Kafka). NATS's performance, indeed, allows building ultra-low-latency architectures without much deployment overhead. We covered, in a few lines of Python, the basics of a reactive programming design, with a lot of improvement opportunities: event filtering, built-in instrumentation, and infrastructure-wide error tracing. I hope you found in this article the building blocks to develop upon!

About the author

Xavier Bruhiere is the lead developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high-intensity sports.


Functions in Swift

Packt
30 Sep 2016
15 min read
In this article by Dr. Fatih Nayebi, the author of the book Swift 3 Functional Programming, we will see that, as functions are the fundamental building blocks in functional programming, this article dives deeper into them and explains all the aspects related to the definition and usage of functions in functional Swift, with coding examples. This article covers the following topics:

- The general syntax of functions
- Defining and using function parameters
- Setting internal and external parameters
- Setting default parameter values
- Defining and using variadic functions
- Returning values from functions
- Defining and using nested functions

(For more resources related to this topic, see here.)

What is a function?

Object-oriented programming (OOP) looks very natural to most developers, as it simulates a real-life situation of classes or, in other words, blueprints and their instances, but it brought a lot of complexities and problems, such as instance and memory management, complex multithreading, and concurrency programming. Before OOP became mainstream, we used to develop in procedural languages. In the C programming language, we did not have objects and classes; we would use structs and function pointers. So now we are talking about functional programming, which relies mostly on functions just as procedural languages relied on procedures. We are able to develop very powerful programs in C without classes; in fact, most operating systems are developed in C. There are other multipurpose programming languages, such as Go by Google, that are not object-oriented and are getting very popular because of their performance and simplicity. So, are we going to be able to write very complex applications without classes in Swift? We might wonder why we should do this. Generally, we should not, but attempting it will introduce us to the capabilities of functional programming.
A function is a block of code that executes a specific task, can be stored, can persist data, and can be passed around. We define them in standalone Swift files as global functions, or inside other building blocks such as classes, structs, enums, and protocols as methods. They are called methods if they are defined in classes, but in terms of definition there is no difference between a function and a method in Swift. Defining them in other building blocks enables methods to use the scope of the parent or to be able to change it. They can access the scope of their parent and they have their own scope. Any variable that is defined inside a function is not accessible outside of it. The variables defined inside a function, and the corresponding allocated memory, go away when the function terminates.

Functions are very powerful in Swift. We can compose a program with only functions, as functions can receive and return functions, capture variables that exist in the context where they were declared, and persist data inside themselves. To understand the functional programming paradigms, we need to understand the capability of functions in detail. We need to think about whether we can avoid classes and only use functions, so we will cover all the details related to functions in the upcoming sections of this article.

The general syntax of functions and methods

We can define functions or methods as follows:

```swift
accessControl func functionName(parameter: ParameterType) throws -> ReturnType { }
```

As we know already, when functions are defined in objects, they become methods. The first step to defining a method is to tell the compiler from where it can be accessed. This concept is called access control in Swift, and there are three levels of access control. We are going to explain them for methods as follows:

- Public access: Any entity can access a method that is defined as public if it is in the same module.
  If an entity is not in the same module, we will need to import the module to be able to call the method. We need to mark our methods and objects as public when we develop frameworks, in order to enable other modules to use them.
- Internal access: Any method that is defined as internal can be accessed from other entities in a module, but cannot be accessed from other modules.
- Private access: Any method that is defined as private can be accessed only from the same source file.

By default, if we do not provide an access modifier, a variable or function becomes internal. Using these access modifiers, we can structure our code properly; for instance, we can hide details from other modules if we define an entity as internal. We can even hide the details of a method from other files if we define it as private. Before Swift 2.0, we had to define everything as public or add all source files to the testing target. Swift 2.0 introduced the @testable import syntax, which enables us to define internal or private methods that can be accessed from testing modules.

Methods can generally come in three forms:

- Instance methods: We need to obtain an instance of an object (in this article we will refer to classes, structs, and enums as objects) in order to be able to call the method defined in it, and then we will be able to access the scope and data of the object.
- Static methods: Swift also calls them type methods. They do not need any instance of an object and they cannot access instance data. They are called by putting a dot after the name of the object type (for example, Person.sayHi()). Static methods cannot be overridden by subclasses of the object they reside in.
- Class methods: Class methods are like static methods, but they can be overridden by subclasses.

We have covered the keywords that are required for method definitions; now we will concentrate on the syntax that is shared among functions and methods.
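The access levels and method forms above can be seen together in a small example. This is an illustrative sketch in the Swift 2.x syntax the article uses; the Person type and its methods are invented for the example.

```swift
// Illustrative sketch of access levels and method forms (Swift 2.x era).
// The Person type and its members are invented for this example.
public class Person {
    private let name: String          // private: visible only in this source file

    init(name: String) {              // internal by default
        self.name = name
    }

    func greet() -> String {          // internal instance method: needs an instance
        return "Hi, I am \(name)"
    }

    static func species() -> String { // type (static) method: no instance needed
        return "Homo sapiens"
    }

    class func describe() -> String { // class method: subclasses may override it
        return "a person"
    }
}

print(Person(name: "Ada").greet())   // Hi, I am Ada
print(Person.species())              // Homo sapiens
```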
There are other concepts related to methods that are out of the scope of this article, as we will concentrate on functional programming in Swift.

Continuing with the function definition: next comes the func keyword, which is mandatory and tells the compiler that it is going to deal with a function. Then comes the function name, which is mandatory and is recommended to be camel-cased with the first letter lowercase. The function name should state what the function does, and is recommended to be in the form of a verb when we define our methods in objects. Basically, our classes will be named with nouns and methods will be verbs that are in the form of orders to the class. In pure functional programming, as functions do not reside in other objects, they can be named by their functionality.

Parameters follow the function name. They are defined in parentheses, which are used to pass arguments to the function. Parentheses are mandatory even if we do not have any parameters. We will cover all aspects of parameters in an upcoming section of this article.

Then comes throws, which is not mandatory. A function or method that is marked with the throws keyword may or may not throw errors. At this point, it is enough to recognize it when we see it in a function or method signature.

The next entity in a function type declaration is the return type. If a function is not void, the return type comes after the -> sign. The return type indicates the type of entity that is going to be returned from the function. We will cover return types in detail in an upcoming section of this article, so now we can move on to the last piece of a function that is present in most programming languages, our beloved { }. We defined functions as blocks of functionality, and { } defines the borders of the block, so that the function body is declared and execution happens in there. We will write the functionality inside { }.
Best practices in function definition

There are proven best practices for function and method definition, provided by amazing software engineering resources such as Clean Code, Code Complete, and Coding Horror, that we can summarize as follows:

- Try not to exceed 8-10 lines of code in each function, as shorter functions or methods are easier to read, understand, and maintain.
- Keep the number of parameters minimal, because the more parameters a function has, the more complex it is.
- Functions should have at least one parameter and one return value.
- Avoid using type names in function names, as it is going to be redundant.
- Aim for one and only one functionality in a function.
- Name a function or method in a way that describes its functionality properly and is easy to understand.
- Name functions and methods consistently. If we have a connect function, we can have a disconnect one.
- Write functions to solve the current problem and generalize them when needed. Try to avoid "what if" scenarios, as probably you aren't going to need it (YAGNI).

Calling functions

We have covered the general syntax to define a function, and a method if it resides in an object. Now it is time to talk about how we call our defined functions and methods. To call a function, we use its name and provide its required parameters. There are complexities in providing parameters that we will cover in the upcoming section. For now, we are going to cover the most basic way of providing parameters, as follows:

```swift
funcName(paramName, secondParam: secondParamName)
```

This type of function calling should be familiar to Objective-C developers, as the first parameter is not named and the rest are named. To call a method, we need to use the dot notation provided by Swift.
The following examples are for class instance methods and static class methods:

```swift
let someClassInstance = SomeClass()
someClassInstance.funcName(paramName, secondParam: secondParamName)
StaticClass.funcName(paramName, secondParam: secondParamName)
```

Defining and using function parameters

In a function definition, parameters follow the function name. They are constants by default, so we are not able to alter them inside the function body unless we mark them with var. In functional programming we avoid mutability; therefore, we would never use mutable parameters in functions. Parameters should be inside parentheses. If we do not have any parameters, we simply put open and close parentheses without any characters between them:

```swift
func functionName() { }
```

In functional programming, it is important to have functions that take at least one parameter. We will explain why this is important in upcoming sections. We can have multiple parameters separated by commas. In Swift, parameters are named, so we need to provide the parameter name and type after a colon, as shown in the following example:

```swift
func functionName(parameter: ParameterType, secondParameter: ParameterType) { }

// To call:
functionName(parameter, secondParameter: secondParam)
```

ParameterType can also be an optional type, so the function becomes the following if our parameters need to be optionals:

```swift
func functionName(parameter: ParameterType?, secondParameter: ParameterType?) { }
```

Swift enables us to provide external parameter names that will be used when functions are called. The following example presents the syntax:

```swift
func functionName(externalParamName localParamName: ParameterType) { }

// To call:
functionName(externalParamName: parameter)
```

Only the local parameter name is usable in the function body.
It is possible to omit parameter names with the _ syntax; for instance, if we do not want to provide any parameter name when the function is called, we can use _ as the externalParamName for the second or subsequent parameters. If we want to have a parameter name for the first parameter in function calls, we can simply provide the local parameter name as the external one as well. In this article, we are going to use the default function parameter definition.

Parameters can have default values, as follows:

```swift
func functionName(parameter: Int = 3) {
    print("\(parameter) is provided.")
}

functionName(5) // prints "5 is provided."
functionName()  // prints "3 is provided."
```

Parameters can be defined as inout to enable function callers to obtain parameters that are going to be changed in the body of a function. As we can use tuples for function returns, it is not recommended to use inout parameters unless we really need them.

We can define function parameters as tuples. For instance, the following example function accepts a tuple of the (Int, Int) type:

```swift
func functionWithTupleParam(tupleParam: (Int, Int)) { }
```

As, under the hood, variables are represented by tuples in Swift, the parameters to a function can also be tuples. For instance, let's have a simple convert function that takes an array of Int and a multiplier and converts it to a different structure. Let's not worry about the implementation of this function for now:

```swift
let numbers = [3, 5, 9, 10]

func convert(numbers: [Int], multiplier: Int) -> [String] {
    let convertedValues = numbers.enumerate().map { (index, element) in
        return "\(index): \(element * multiplier)"
    }
    return convertedValues
}
```

If we use this function as convert(numbers, multiplier: 3), the result is going to be ["0: 9", "1: 15", "2: 27", "3: 30"]. We can call our function with a tuple. Let's create a tuple and pass it to our function:

```swift
let parameters = (numbers, multiplier: 3)
convert(parameters)
```

The result is identical to our previous function call.
However, passing tuples in function calls is deprecated and will be removed in Swift 3.0, so it is not recommended to use them.

We can define higher-order functions that receive functions as parameters. In the following example, we define funcParam as a function type of (Int, Int) -> Int:

```swift
func functionWithFunctionParam(funcParam: (Int, Int) -> Int) { }
```

In Swift, parameters can be of a generic type. The following example presents a function that has two generic parameters. In this syntax, any type (for example, T or V) that we put inside <> should be used in the parameter definition:

```swift
func functionWithGenerics<T, V>(firstParam: T, secondParam: V) { }
```

Defining and using variadic functions

Swift enables us to define functions with variadic parameters. A variadic parameter accepts zero or more values of a specified type. Variadic parameters are similar to array parameters, but they are more readable and can only be used as the last parameter in multiparameter functions. As variadic parameters can accept zero values, we will need to check whether the list is empty. The following example presents a function with a variadic parameter of the String type:

```swift
func greet(names: String...) {
    for name in names {
        print("Greetings, \(name)")
    }
}

// To call this function:
greet("Steve", "Craig")          // prints twice
greet("Steve", "Craig", "Johny") // prints three times
```

Returning values from functions

If we need our function to return a value, tuple, or another function, we can specify it by providing the ReturnType after ->. For instance, the following example returns String:

```swift
func functionName() -> String { }
```

Any function that has a ReturnType in its definition should have a return keyword with the matching type in its body. Return types can be optional in Swift, so the function becomes as follows if the return needs to be optional:

```swift
func functionName() -> String? { }
```

Tuples can be used to provide multiple return values.
For instance, the following function returns a tuple of the (Int, String) type:

```swift
func functionName() -> (code: Int, status: String) { }
```

As we are using parentheses for tuples, we should avoid using parentheses for single-return-value functions. Tuple return types can be optional too, so the syntax becomes as follows:

```swift
func functionName() -> (code: Int, status: String)? { }
```

This syntax makes the entire tuple optional; if we want to make only status optional, we can define the function as follows:

```swift
func functionName() -> (code: Int, status: String?) { }
```

In Swift, functions can return functions. The following example presents a function with the return type of a function that takes two Int values and returns Int:

```swift
func funcName() -> (Int, Int) -> Int { }
```

If we do not expect a function to return any value, tuple, or function, we simply do not provide the ReturnType:

```swift
func functionName() { }
```

We could also explicitly declare it with the Void keyword:

```swift
func functionName() -> Void { }
```

In functional programming, it is important to have return types in functions. In other words, it is good practice to avoid functions that have Void as their return type. A function with a Void return type is typically a function that changes another entity in the code; otherwise, why would we need to have a function? OK, we might have wanted to log an expression to the console/log file, or write data to a database, or write a file to a filesystem. In these cases, it is also preferable to have a return or feedback related to the success of the operation. As we try to avoid mutability and stateful programming in functional programming, we can assume that our functions will have returns in different forms. This requirement is in line with the mathematical underpinnings of functional programming. In mathematics, a simple function is defined as follows:

y = f(x) or f(x) -> y

Here, f is a function that takes x and returns y. Therefore, a function receives at least one parameter and returns at least one value.
In functional programming, following the same paradigm makes reasoning easier and function composition possible, and it makes the code more readable.

Summary

This article explained function definition and usage in detail, with examples for parameter and return types.

You can also refer to the following books on similar topics:

- Protocol-Oriented Programming with Swift: https://www.packtpub.com/application-development/protocol-oriented-programming-swift
- OpenStack Object Storage (Swift) Essentials: https://www.packtpub.com/virtualization-and-cloud/openstack-object-storage-swift-essentials
- Implementing Cloud Storage with OpenStack Swift: https://www.packtpub.com/virtualization-and-cloud/implementing-cloud-storage-openstack-swift

Further resources on this subject:

- Introducing the Swift Programming Language [article]
- Swift for Open Source Developers [article]
- Your First Swift App [article]


How to Apply Themes to Sails Applications, Part 1

Luis Lobo
29 Sep 2016
8 min read
The Sails Framework is a popular MVC framework designed for building practical, production-ready Node.js apps. Themes customize the look and feel of your app, but Sails does not come with a configuration or setting for handling themes by itself. This two-part post shows one way you can set up theming for your Sails application, making use of some of Sails' capabilities.

You may have an application that needs to handle theming for different reasons, like custom branding, licensing, dynamic theme configuration, and so on. You can adjust the theming of your application based on external factors, like patterns in the domain of the site you are browsing. Imagine you have an application that handles deliveries and that you customize per client. Your app renders the default theme when browsed as http://www.smartdelivery.com, but when a customer, let's say "Burrito", logs in, the domain name changes to http://burrito.smartdelivery.com.

In this series we use Less as our language to define our CSS. Sails handles Less right out of the box; the default Less file is located at /assets/styles/importer.less. We will also use Bootstrap as our base CSS framework, importing its Less file into our importer.less file. The technique shown here consists of having a base CSS and a theme CSS that varies according to the host name.

Step 1 - Adding Bootstrap to Sails

We use Bower to add Bootstrap to our project. First, install it by issuing the following command:

```
npm install bower --save-dev
```

Then, initialize the Bower configuration file:

```
node_modules/bower/bin/bower init
```

This command allows us to configure our bower.json file. Answer the questions asked by bower:

```
? name sails-themed-application
? description Sails Themed Application
? main file app.js
? keywords
? authors lobo
? license MIT
? homepage
? set currently installed components as dependencies? Yes
? add commonly ignored files to ignore list? Yes
? would you like to mark this package as private which prevents it from being accidentally published to the registry? No

{
  name: 'sails-themed-application',
  description: 'Sails Themed Application',
  main: 'app.js',
  authors: [
    'lobo'
  ],
  license: 'MIT',
  homepage: '',
  ignore: [
    '**/.*',
    'node_modules',
    'bower_components',
    'assets/vendor',
    'test',
    'tests'
  ]
}
```

This generates a bower.json file in the root of your project. Now we need to tell bower to install everything in a specific directory. Create a file named .bowerrc and put this configuration into it:

```
{"directory" : "assets/vendor"}
```

Finally, install Bootstrap:

```
node_modules/bower/bin/bower install bootstrap --save --production
```

This action creates a folder in assets named vendor, with bootstrap inside of it. Since Bootstrap uses JQuery, you also have a jquery folder:

```
├── api
│   ├── controllers
│   ├── models
│   ├── policies
│   ├── responses
│   └── services
├── assets
│   ├── images
│   ├── js
│   │   └── dependencies
│   ├── styles
│   ├── templates
│   ├── themes
│   └── vendor
│       ├── bootstrap
│       │   ├── dist
│       │   │   ├── css
│       │   │   ├── fonts
│       │   │   └── js
│       │   ├── fonts
│       │   ├── grunt
│       │   ├── js
│       │   ├── less
│       │   │   └── mixins
│       │   └── nuget
│       └── jquery
│           ├── dist
│           ├── external
│           │   └── sizzle
│           │       └── dist
│           └── src
│               ├── ajax
│               │   └── var
│               ├── attributes
│               ├── core
│               │   └── var
│               ├── css
│               │   └── var
│               ├── data
│               │   └── var
│               ├── effects
│               ├── event
│               ├── exports
│               ├── manipulation
│               │   └── var
│               ├── queue
│               ├── traversing
│               │   └── var
│               └── var
├── config
│   ├── env
│   └── locales
├── tasks
│   ├── config
│   └── register
└── views
```

We now need to add Bootstrap to our importer. Edit /assets/styles/importer.less and add this instruction at the end of it:

```
@import "../vendor/bootstrap/less/bootstrap.less";
```

Now you need to tell Sails where to import the Bootstrap and JQuery JavaScript files from.
Edit /tasks/pipeline.js and add the following code after it loads the sails.io.js file:

// Load sails.io before everything else
'js/dependencies/sails.io.js',

// <ADD THESE LINES>
// jQuery JS
'vendor/jquery/dist/jquery.min.js',
// Bootstrap JS
'vendor/bootstrap/dist/js/bootstrap.min.js',
// </ADD THESE LINES>

Now you have to edit your views' layout and pages to use the Bootstrap style. In this series I created an application from scratch, so I have the default views and layouts. In your layout, insert the following line after the <!--STYLES END--> tag:

<link rel="stylesheet" href="/themes/<%= typeof theme == 'undefined' ? 'default' : theme %>.css">

This loads a second CSS file, which defaults to /themes/default.css, into your views. As a sample, here are the /views/layout.ejs and /views/homepage.ejs files I changed (the text under the headings is random text):

/views/layout.ejs

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags -->
  <title><%= typeof title == 'undefined' ? 'Sails Themed Application' : title %></title>
  <!--STYLES-->
  <link rel="stylesheet" href="/styles/importer.css">
  <!--STYLES END-->
  <!-- THIS IS WHERE THE THEME CSS IS LOADED -->
  <link rel="stylesheet" href="/themes/<%= typeof theme == 'undefined' ? 'default' : theme %>.css">
</head>
<body>
  <%- body %>
  <!--TEMPLATES-->
  <!--TEMPLATES END-->
  <!--SCRIPTS-->
  <script src="/js/dependencies/sails.io.js"></script>
  <script src="/vendor/jquery/dist/jquery.min.js"></script>
  <script src="/vendor/bootstrap/dist/js/bootstrap.min.js"></script>
  <!--SCRIPTS END-->
</body>
</html>

Notice the lines after the <!--STYLES END--> tag.
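The theme variable used by that link tag has to be set somewhere on the server side. As a rough sketch of the idea (the helper name, the default theme name, and the subdomain convention are all assumptions for illustration, not part of Sails), a plain function can derive the theme from the request's host name:

```javascript
// Hypothetical helper: derive a theme name from a request's host name.
// Assumes client themes are served as /themes/<subdomain>.css and that
// "www" or a bare domain should fall back to the default theme.
function themeForHost(host) {
  const name = host.split(':')[0];   // drop an optional port, e.g. ":1337"
  const parts = name.split('.');
  if (parts.length < 3 || parts[0] === 'www') {
    return 'default';
  }
  return parts[0];                   // e.g. "burrito"
}
```

A controller or, as we will see in Part 2, a Sails hook could then expose the result to the views, for example with something like res.locals.theme = themeForHost(req.headers.host).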
/views/homepage.ejs <nav class="navbar navbar-inverse navbar-fixed-top"> <div class="container"> <div class="navbar-header"> <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false" aria-controls="navbar"> <span class="sr-only">Toggle navigation</span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> <a class="navbar-brand" href="#">Project name</a> </div> <div id="navbar" class="navbar-collapse collapse"> <form class="navbar-form navbar-right"> <div class="form-group"> <input type="text" placeholder="Email" class="form-control"> </div> <div class="form-group"> <input type="password" placeholder="Password" class="form-control"> </div> <button type="submit" class="btn btn-success">Sign in</button> </form> </div><!--/.navbar-collapse --> </div> </nav> <!-- Main jumbotron for a primary marketing message or call to action --> <div class="jumbotron"> <div class="container"> <h1>Hello, world!</h1> <p>This is a template for a simple marketing or informational website. It includes a large callout called a jumbotron and three supporting pieces of content. Use it as a starting point to create something more unique.</p> <p><a class="btn btn-primary btn-lg" href="#" role="button">Learn more &raquo;</a></p> </div> </div> <div class="container"> <!-- Example row of columns --> <div class="row"> <div class="col-md-4"> <h2>Heading</h2> <p>Donec id elit non mi porta gravida at eget metus. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus. Etiam porta sem malesuada magna mollis euismod. Donec sed odio dui. </p> <p><a class="btn btn-default" href="#" role="button">View details &raquo;</a></p> </div> <div class="col-md-4"> <h2>Heading</h2> <p>Donec id elit non mi porta gravida at eget metus. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus. 
Etiam porta sem malesuada magna mollis euismod. Donec sed odio dui. </p> <p><a class="btn btn-default" href="#" role="button">View details &raquo;</a></p> </div> <div class="col-md-4"> <h2>Heading</h2> <p>Donec sed odio dui. Cras justo odio, dapibus ac facilisis in, egestas eget quam. Vestibulum id ligula porta felis euismod semper. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus.</p> <p><a class="btn btn-default" href="#" role="button">View details &raquo;</a></p> </div> </div> <hr> <footer> <p>&copy; 2015 Company, Inc.</p> </footer> </div> <!-- /container --> You can now lift Sails and see your Bootstrapped Sails application. Now that we have our Bootstrapped Sails app set up, in Part 2 we will compile our theme's CSS and the necessary Less files, and we will set up the theme Sails hook to complete our application. About the author Luis Lobo Borobia is the CTO at FictionCity.NET, a mentor and advisor, an independent software engineering consultant, and a conference speaker. He has a background as a software analyst and designer, creating, designing, and implementing software products and solutions, frameworks, and platforms for several kinds of industries. In recent years he has focused on research and development for the Internet of Things, using the latest bleeding-edge software and hardware technologies available.
Qt Style Sheets

Packt
29 Sep 2016
26 min read
In this article by Lee Zhi Eng, author of the book Qt5 C++ GUI Programming Cookbook, we will see how Qt allows us to easily design our program's user interface through a method most people are familiar with. Qt not only provides us with a powerful user interface toolkit called Qt Designer, which enables us to design our user interface without writing a single line of code, but it also allows advanced users to customize their user interface components through a simple scripting language called Qt Style Sheets. In this article, we will cover the following recipes: Using style sheets with Qt Designer Basic style sheet customization Creating a login screen using style sheets Using style sheets with Qt Designer In this example, we will learn how to change the look and feel of our program and make it look more professional by using style sheets and resources. Qt allows you to decorate your GUIs (graphical user interfaces) using a style sheet language called Qt Style Sheets, which is very similar to CSS (Cascading Style Sheets), used by web designers to decorate their websites. How to do it… The first thing we need to do is open up Qt Creator and create a new project. If this is the first time you have used Qt Creator, you can either click the big button that says New Project with a + sign, or simply go to File | New File or Project. Then, select Application under the Project window and select Qt Widgets Application. After that, click the Choose button at the bottom. A window will then pop up and ask you to insert the project name and its location. Once you're done with that, click Next several times and click the Finish button to create the project. We will just stick to all the default settings for now.
Once the project is created, the first thing you will see is the panel with tons of big icons on the left side of the window, which is called the Mode Selector panel; we will discuss this more later in the How it works section. You will also see all your source files listed on the Side Bar panel, which is located right next to the Mode Selector panel. This is where you can select which file you want to edit, which, in this case, is mainwindow.ui, because we are about to start designing the program's UI! Double-click mainwindow.ui and you will see an entirely different interface appear. Qt Creator helped you switch from the script editor to the UI editor (Qt Designer) because it detected the .ui extension on the file you're trying to open. You will also notice that the highlighted button on the Mode Selector panel has changed from the Edit button to the Design button. You can switch back to the script editor or change to any other tool by clicking one of the buttons located in the upper half of the Mode Selector panel. Let's go back to Qt Designer and look at the mainwindow.ui file. This is basically the main window of our program (as the file name implies) and it's empty by default, without any widget on it. You can try to compile and run the program by pressing the Run button (the green arrow button) at the bottom of the Mode Selector panel, and you will see an empty window pop up once the compilation is complete: Now, let's add a push button to our program's UI by clicking the Push Button item in the widget box (under the Buttons category) and dragging it to your main window in the form editor. Then, keep the push button selected and you will see all the properties of this button inside the property editor on the right side of your window. Scroll down to somewhere around the middle and look for a property called styleSheet.
This is where you apply styles to your widget; the styles may or may not be inherited by its children and grandchildren, depending on how you write your style sheet. Alternatively, you can right click on any widget in your UI in the form editor and select Change Style Sheet from the pop-up menu. You can click the input field of the styleSheet property to directly write the style sheet code, or click the … button beside the input field to open up the Edit Style Sheet window, which has a bigger space for writing longer style sheet code. At the top of the window you can find several buttons, such as Add Resource, Add Gradient, Add Color, and Add Font, that can help you kick-start your coding if you don't remember the property names. Let's try some simple styling with the Edit Style Sheet window. Click Add Color and choose color. Pick a random color from the color picker window, let's say a pure red color. Then click Ok. Now you will see that a line of code has been added to the text field on the Edit Style Sheet window, which in my case is as follows: color: rgb(255, 0, 0); Click the Ok button and you will see that the text on your push button has changed to red. How it works Let's take a bit of time to get ourselves familiar with Qt Designer's interface before we start learning how to design our own UI: Menu bar: The menu bar houses application-specific menus that provide easy access to essential functions such as creating new projects, saving files, undo, redo, copy, paste, and so on. It also allows you to access the development tools that come with Qt Creator, such as the compiler, debugger, profiler, and so on. Widget box: This is where you can find all the different types of widgets provided by Qt Designer. You can add a widget to your program's UI by clicking one of the widgets in the widget box and dragging it to the form editor. Mode selector: The mode selector is a side panel that places shortcut buttons for easy access to different tools.
You can quickly switch between the script editor and the form editor by clicking the Edit or Design buttons on the mode selector panel, which is very useful for multitasking. You can also navigate to the debugger and profiler tools with the same speed and ease. Build shortcuts: The build shortcuts are located at the bottom of the mode selector panel. You can build, run, and debug your project easily by pressing the shortcut buttons here. Form editor: The form editor is where you edit your program's UI. You can add different widgets to your program by selecting a widget from the widget box and dragging it to the form editor. Form toolbar: From here, you can quickly select a different form to edit: click the drop-down box located above the widget box and select the file you want to open with Qt Designer. Beside the drop-down box are buttons for switching between different modes for the form editor, and also buttons for changing the layout of your UI. Object inspector: The object inspector lists all the widgets within your current .ui file. All the widgets are arranged according to their parent-child relationships in the hierarchy. You can select a widget from the object inspector to display its properties in the property editor. Property editor: The property editor displays all the properties of the widget you selected, either from the object inspector window or from the form editor window. Action Editor and Signals & Slots Editor: This window contains two editors, the Action Editor and the Signals & Slots Editor, which can be accessed from the tabs below the window. The action editor is where you create actions that can be added to a menu bar or toolbar in your program's UI. Output panes: The output panes consist of several different windows that display information and output messages related to script compilation and debugging.
You can switch between different output panes by pressing the buttons that carry a number before them, such as 1-Issues, 2-Search Results, 3-Application Output, and so on. There's more… In the previous section, we discussed how to apply style sheets to Qt widgets through C++ code. Although that method works really well, most of the time the person in charge of designing the program's UI is not the programmer, but a UI designer who specializes in designing user-friendly UIs. In this case, it's better to let the UI designer design the program's layout and style sheet with a different tool and not mess around with the code. Qt provides an all-in-one editor called Qt Creator. Qt Creator consists of several different tools, such as a script editor, compiler, debugger, profiler, and UI editor. The UI editor, which is also called Qt Designer, is the perfect tool for designers to design their program's UI without writing any code. This is because Qt Designer adopts the what-you-see-is-what-you-get approach, providing an accurate visual representation of the final result, which means whatever you design with Qt Designer will turn out exactly the same when the program is compiled and run. The similarities between Qt Style Sheets and CSS are as follows:

CSS:
h1 { color: red; background-color: white; }

Qt Style Sheets:
QLineEdit { color: red; background-color: white; }

As you can see, both of them contain a selector and a declaration block. Each declaration contains a property and a value, separated by a colon. In Qt, a style sheet can be applied to a single widget by calling the QWidget::setStyleSheet() function in C++ code. For example:

myPushButton->setStyleSheet("color : blue");

The preceding code will turn the text of a button with the variable name myPushButton blue. You can also achieve the same result by writing the declaration in the styleSheet property field in Qt Designer. We will discuss Qt Designer more in the next section.
Qt Style Sheets also support all the different types of selectors defined in the CSS2 standard, including the universal selector, type selector, class selector, ID selector, and so on, which allows us to apply styling to a very specific individual widget or group of widgets. For instance, if we want to change the background color of a specific line edit widget with the object name usernameEdit, we can do so by using an ID selector to refer to it:

QLineEdit#usernameEdit { background-color: blue }

To learn about all the selectors available in CSS2 (which are also supported by Qt Style Sheets), please refer to this document: http://www.w3.org/TR/REC-CSS2/selector.html. Basic style sheet customization In the previous example, you learned how to apply a style sheet to a widget with Qt Designer. Let's go crazy and push things further by creating a few other types of widgets and changing their style properties to something bizarre for the sake of learning. This time, however, we will not apply the style to every single widget one by one; instead, we will learn to apply the style sheet to the main window and let it inherit down the hierarchy to all the other widgets, so that the style sheet is easier to manage and maintain in the long run. How to do it… First of all, let's remove the style sheet from the push button by selecting it and clicking the small arrow button beside the styleSheet property. This button reverts the property to its default value, which in this case is an empty style sheet. Then, add a few more widgets to the UI by dragging them one by one from the widget box to the form editor. I've added a line edit, combo box, horizontal slider, radio button, and a check box. For the sake of simplicity, delete the menu bar, main toolbar, and status bar from your UI by selecting them in the object inspector, right clicking, and choosing Remove.
Now your UI should look something like this: Select the main window either from the form editor or the object inspector, then right click and choose Change Stylesheet to open up the Edit Style Sheet window, and add the following:

border: 2px solid gray;
border-radius: 10px;
padding: 0 8px;
background: yellow;

Now what you will see is a completely bizarre-looking UI, with everything covered in yellow with a thick border. This is because the preceding style sheet does not have any selector, which means the style will apply to all the children widgets of the main window, all the way down the hierarchy. To change that, let's try something different:

QPushButton {
  border: 2px solid gray;
  border-radius: 10px;
  padding: 0 8px;
  background: yellow;
}

This time, only the push button will get the style described in the preceding code, and all other widgets will return to the default styling. You can try adding a few more push buttons to your UI and they will all look the same: This happens because we specifically tell the selector to apply the style to all the widgets of the class QPushButton. We can also apply the style to just one of the push buttons by mentioning its name in the style sheet, like so:

QPushButton#pushButton_3 {
  border: 2px solid gray;
  border-radius: 10px;
  padding: 0 8px;
  background: yellow;
}

Once you understand this method, we can add the following code to the style sheet:

QPushButton {
  color: red;
  border: 0px;
  padding: 0 8px;
  background: white;
}

QPushButton#pushButton_2 {
  border: 1px solid red;
  border-radius: 10px;
}

QPushButton#pushButton_3 {
  border: 2px solid gray;
  border-radius: 10px;
  padding: 0 8px;
  background: yellow;
}

What this does is change the style of all the push buttons, as well as some properties of a specific button named pushButton_2. We keep the style sheet of pushButton_3 as it is. Now the buttons will look like this: The first set of style sheets will change all widgets of the QPushButton type to a white rectangular button with no border and red text.
Then the second set of style sheets changes only the border of the specific QPushButton widget named pushButton_2. Notice that the background color and text color of pushButton_2 remain white and red respectively, because we didn't override them in the second set of style sheets, so it falls back to the style described in the first set, which applies to all QPushButton type widgets. Notice also that the text of the third button has changed to red, because we didn't describe the color property in the third set of style sheets. After that, create another set of styles using the universal selector, like so:

* {
  background: qradialgradient(cx: 0.3, cy: -0.4, fx: 0.3, fy: -0.4, radius: 1.35, stop: 0 #fff, stop: 1 #888);
  color: rgb(255, 255, 255);
  border: 1px solid #ffffff;
}

The universal selector affects all the widgets, regardless of their type. Therefore, the preceding style sheet applies a nice gradient color to all the widgets' backgrounds, sets their text to white, and gives them a one-pixel solid outline, also in white. Instead of writing the name of the color (that is, white), we can also use the rgb function (rgb(255, 255, 255)) or a hex code (#ffffff) to describe the color value. Just as before, the preceding style sheet will not affect the push buttons, because we have already given them their own styles, which override the general style described in the universal selector. Just remember that in Qt, the more specific style is ultimately used when there is more than one style influencing a widget. This is how the UI looks now: How it works If you have ever been involved in web development using HTML and CSS, Qt's style sheets work exactly the same way as CSS. Style sheets provide the definitions for describing the presentation of the widgets: what the colors are for each element in the widget group, how thick the border should be, and so on.
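To make the "more specific wins" rule above concrete, here is a small self-contained C++ sketch (plain C++, not Qt's actual resolution code) that ranks the three selector forms used in this recipe: the universal selector *, a type selector such as QPushButton, and an ID selector such as QPushButton#pushButton_2. The ranking function is a simplified illustration of the idea only.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy specificity rank: universal (*) = 0, type (QPushButton) = 1,
// ID (QPushButton#name) = 2. This mirrors the "more specific wins"
// rule described above; it is not Qt's real resolution code.
int specificity(const std::string& selector) {
    if (selector == "*") return 0;
    if (selector.find('#') != std::string::npos) return 2;
    return 1;
}

// Pick the most specific selector out of those that apply to a widget.
// Assumes the list is non-empty.
std::string winningSelector(const std::vector<std::string>& applicable) {
    std::string best = applicable.front();
    for (const auto& s : applicable)
        if (specificity(s) > specificity(best)) best = s;
    return best;
}
```

In this model an ID selector always beats a type selector, which beats the universal selector, which is why pushButton_2 and pushButton_3 keep their individual styles while all the other widgets fall back to the general QPushButton and universal rules.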
If you specify the name of a widget in the style sheet, it will change the style of the particular widget with the name you provide. All the other widgets will not be affected and will remain in the default style. To change the name of a widget, select the widget either in the form editor or the object inspector and change the property called objectName in the property window. If you have previously used the ID selector to change the style of the widget, changing its object name will break the style sheet and the style will be lost. To fix this problem, simply change the object name in the style sheet as well. Creating a login screen using style sheets Next, we will learn how to put together all the knowledge we gained in the previous example and create a fake graphical login screen for an imaginary operating system. Style sheets are not the only thing you need to master in order to design a good UI. You will also need to learn how to arrange the widgets neatly using the layout system in Qt Designer. How to do it… The first thing we need to do is design the layout of the graphical login screen before we start doing anything. Planning is very important in order to produce good software. The following is a sample layout design I made to show you how I imagine the login screen will look. A simple line drawing like this is sufficient as long as it conveys the message clearly: Now that we know exactly how the login screen should look, let's go back to Qt Designer. We will place the widgets in the top panel first, then the logo and the login form below it. Select the main window and change its width and height from 400 and 300 to 800 and 600 respectively, because we'll need a bigger space in which to place all the widgets in a moment. Click and drag a label under the Display Widgets category from the widget box to the form editor.
Change the objectName property of the label to currentDateTime and change its Text property to the current date and time just for display purposes, such as Monday, 25-10-2015 3:14 PM. Click and drag a push button under the Buttons category to the form editor. Repeat this process one more time because we have two buttons on the top panel. Rename the two buttons to restartButton and shutdownButton respectively. Next, select the main window and click the small icon button on the form toolbar that says Lay Out Vertically when you mouse-over it. Now you will see the widgets are being automatically arranged on the main window, but it's not exactly what we want yet. Click and drag a horizontal layout widget under the Layouts category to the main window. Click and drag the two push buttons and the text label into the horizontal layout. Now you will see the three widgets being arranged in a horizontal row, but vertically they are located in the middle of the screen. The horizontal arrangement is almost correct, but the vertical position is totally off. Click and drag a vertical spacer from the Spacers category and place it below the horizontal layout we just created previously (below the red rectangular outline). Now you will see all the widgets are being pushed to the top by the spacer. Now, place a horizontal spacer between the text label and the two buttons to keep them apart. This will make the text label always stick to the left and the buttons align to the right. Set both the Horizontal Policy and Vertical Policy properties of the two buttons to Fixed and set the minimumSize property to 55x55. Then, set the text property of the buttons to empty as we will be using icons instead of text. 
We will learn how to place an icon in the button widgets in the following section: Now your UI should look similar to this: Next, we will add the logo by using the following steps: Add a horizontal layout between the top panel and the vertical spacer to serve as a container for the logo. After adding the horizontal layout, you will find that the layout is way too thin in height to be able to add any widget to it. This is because the layout is empty and it's being pushed to zero height by the vertical spacer below it. To solve this problem, we can set its vertical margin (either layoutTopMargin or layoutBottomMargin) to be temporarily bigger until a widget is added to the layout. Next, add a label to the horizontal layout that you just created and rename it logo. We will learn more about how to insert an image into the label to use it as a logo in the next section. For now, just empty out the text property and set both its Horizontal Policy and Vertical Policy properties to Fixed. Then, set the minimumSize property to 150x150. Set the vertical margin of the layout back to zero if you haven't done so. The logo is now invisible, so we will just apply a temporary style sheet to make it visible until we add an image to it in the next section. The style sheet is really simple:

border: 1px solid;

Now your UI should look something like this: Now let's create the login form by using the following steps: Add a horizontal layout between the logo's layout and the vertical spacer. Just as we did previously, set the layoutTopMargin property to a bigger number (that is, 100) so that you can add a widget to it more easily. After that, add a vertical layout inside the horizontal layout you just created. This layout will be used as a container for the login form. Set its layoutTopMargin to a number lower than that of the horizontal layout (that is, 20) so that we can place widgets in it.
Next, right click the vertical layout you just created and choose Morph into -> QWidget. The vertical layout is now converted into an empty widget. This step is essential because we will be adjusting the width and height of the container for the login form. A layout widget does not contain any properties for width and height, only margins, because a layout expands into the empty space surrounding it, which makes sense considering that it does not have any size properties. After you have converted the layout to a QWidget object, it automatically inherits all the properties from the widget class, so we are now able to adjust its size to suit our needs. Rename the QWidget object we just converted from the layout to loginForm and change both its Horizontal Policy and Vertical Policy properties to Fixed. Then, set the minimumSize property to 350x200. Since we already placed the loginForm widget inside the horizontal layout, we can now set its layoutTopMargin property back to zero. Add the same style sheet as the logo to the loginForm widget to make it visible temporarily, except this time we need to add an ID selector in front so that it will only apply the style to loginForm and not its children widgets:

#loginForm {
  border: 1px solid;
}

Now your UI should look something like this: We are not done with the login form yet. Now that we have created the container for the login form, it's time to put more widgets into the form: Place two horizontal layouts into the login form container. We need two layouts: one for the username field and another for the password field. Add a label and a line edit to each of the layouts you just added. Change the text property of the upper label to Username: and the one below to Password:. Then, rename the two line edits to username and password respectively. Add a push button below the password layout and change its text property to Login. After that, rename it loginButton.
You can add a vertical spacer between the password layout and the login button to distance them slightly. After the vertical spacer has been placed, change its sizeType property to Fixed and change the Height to 5. Now, select the loginForm container and set all its margins to 35. This makes the login form look better by adding some space on all its sides. You can also set the Height property of the username, password, and loginButton widgets to 25 so that they don't look so cramped. Now your UI should look something like this: We're not done yet! As you can see, the login form and the logo are both sticking to the top of the main window due to the vertical spacer below them. The logo and the login form should be placed at the center of the main window instead of the top. To fix this problem, use the following steps:
If everything went well, you should see something like this: After we've done the layout, it's time for us to add some fanciness to the UI using style sheets! Since all the important widgets have been given an object name, it's easier for us to apply the style sheets to it from the main window, since we will only write the style sheets to the main window and let them inherit down the hierarchy tree. Right click on MainWindow from the object inspector window and choose Change Stylesheet. Add the following code to the style sheet: #centralWidget { background: rgba(32, 80, 96, 100); } Now you will see that the background of the main window changes its color. We will learn how to use an image for the background in the next section so the color is just temporary. In Qt, if you want to apply styles to the main window itself, you must apply it to its central widget instead of the main window itself because the window is just a container. Then, we will add a nice gradient color to the top panel: #topPanel { background-color: qlineargradient(spread:reflect, x1:0.5, y1:0, x2:0, y2:0, stop:0 rgba(91, 204, 233, 100), stop:1 rgba(32, 80, 96, 100)); } After that, we will apply black color to the login form and make it look semi-transparent. After that, we will also make the corners of the login form container slightly rounded by setting the border-radius property: #loginForm { background: rgba(0, 0, 0, 80); border-radius: 8px; } After we're done applying styles to the specific widgets, we will apply styles to the general types of widgets instead: QLabel { color: white; } QLineEdit { border-radius: 3px; } The style sheets above will change all the labels' texts to a white color, which includes the text on the widgets as well because, internally, Qt uses the same type of label on the widgets that have text on it. Also, we made the corners of the line edit widgets slightly rounded. 
Next, we will apply style sheets to all the push buttons on our UI:

QPushButton { color: white; background-color: #27a9e3; border-width: 0px; border-radius: 3px; }

The preceding style sheet changes the text of all the buttons to a white color, sets their background color to blue, and makes their corners slightly rounded as well. To push things even further, we will change the color of the push buttons when the mouse hovers over them, using the hover keyword:

QPushButton:hover { background-color: #66c011; }

The preceding style sheet changes the background color of the push buttons to green on mouse-over. We will talk more about this in the following section. You can further adjust the size and margins of the widgets to make them look even better. Remember to remove the border line of the login form by removing the style sheet we applied directly to it earlier. Now your login screen should look something like this: How it works This example focuses on the layout system of Qt. The Qt layout system provides a simple and powerful way of automatically arranging child widgets within a widget to ensure that they make good use of the available space. The spacer items used in the preceding example help push the widgets contained in a layout outward to create spacing along the width of the spacer item. To position a widget in the middle of a layout, put two spacer items in the layout, one on the left side of the widget and another on the right side. The widget will then be pushed to the middle of the layout by the two spacers. Summary In this article we saw how Qt allows us to easily design our program's user interface through a method most people are familiar with. We also covered the toolkit, Qt Designer, which enables us to design our user interface without writing a single line of code. Finally, we saw how to create a login screen.
For more information on Qt5 and C++ you can check other books by Packt, mentioned as follows: Qt 5 Blueprints: https://www.packtpub.com/application-development/qt-5-blueprints Boost C++ Application Development Cookbook: https://www.packtpub.com/application-development/boost-c-application-development-cookbook Learning Boost C++ Libraries: https://www.packtpub.com/application-development/learning-boost-c-libraries Resources for Article: Further resources on this subject: OpenCart Themes: Styling Effects of jQuery Plugins [article] Responsive Web Design [article] Gearing Up for Bootstrap 4 [article]


How to Add Frameworks to iOS Applications with Carthage

Fabrizio Brancati
27 Sep 2016
5 min read
With the advent of iOS 8, Apple allowed the option of creating dynamic frameworks. In this post, you will learn how to create a dynamic framework from the ground up, and you will use Carthage to add frameworks to your apps. Let's get started! Creating the Xcode project Open Xcode and create a new project. Select Frameworks & Library under the iOS menu from the templates and then Cocoa Touch Framework. Type a name for your framework and select Swift for the language. Now we will create a framework that helps to store data using NSUserDefaults. We can name it DataStore, which is a generic name, in case we want to expand it in the future to allow for the use of other data stores such as CoreData. The project is now empty and you have to add your first class, so add a new Swift file and name it DataStore, like the framework name. You need to create the class:

public enum DataStoreType {
    case UserDefaults
}

public class DataStore {
    private init() {}

    public static func save(data: AnyObject, forKey key: String, in store: DataStoreType) {
        switch store {
        case .UserDefaults:
            NSUserDefaults.standardUserDefaults().setObject(data, forKey: key)
        }
    }

    public static func read(forKey key: String, in store: DataStoreType) -> AnyObject? {
        switch store {
        case .UserDefaults:
            return NSUserDefaults.standardUserDefaults().objectForKey(key)
        }
    }

    public static func delete(forKey key: String, in store: DataStoreType) {
        switch store {
        case .UserDefaults:
            NSUserDefaults.standardUserDefaults().removeObjectForKey(key)
        }
    }
}

Here we have created a DataStoreType enum to allow the feature to expand in the future, and the DataStore class with functions to save, read, and delete. That's it! You have just created the framework! How to use the framework To use the created framework, build it with CMD + B, right-click on the framework in the Products folder in the Xcode project, and click on Show in Finder. To use it, you must drag and drop this file into your project.
In this case, we will create an example project to show you how to do it. Add the framework to your app project by adding it in the Embedded Binaries section in the General page of the Xcode project. Note that if you see it duplicated in the Linked Frameworks and Libraries section, you can remove the first one. You have just included your framework in the app. Now we have to use it, so import it (I will import it in the ViewController class for test purposes, but you can import it wherever you want). Let's use the DataStore framework by saving and reading a String from NSUserDefaults. This is the code:

import UIKit
import DataStore

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        DataStore.save("Test", forKey: "Test", in: .UserDefaults)
        print(DataStore.read(forKey: "Test", in: .UserDefaults)!)
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}

Build the app and see the framework do its work! You should see this in the Xcode console: Test Now you have created a framework in Swift and you have used it with an app! Note that the framework created for the iOS Simulator is different from the one created for a device, because it is built for a different architecture. To build a universal framework, you can use Carthage, which is shown in the next section. Using Carthage Carthage is a decentralized dependency manager that builds your dependencies and provides you with binary frameworks. To install it, you can download the Carthage.pkg file from GitHub or use Homebrew:

brew update
brew install carthage

Because Carthage is only able to build a framework from Git, we will use Alamofire, a popular HTTP networking library available on GitHub. Open the project folder and create a file named Cartfile.
Here is where we will tell Carthage what it has to build and in what version:

github "Alamofire/Alamofire"

We don't specify a version because this is only a test, but it's good practice to do so. Now open the Terminal app, go into the project folder, and type:

carthage update

You should see Carthage do some work; when it has finished, use Finder to go into the project folder, then Carthage, Build, iOS, and that is where the framework is. To add it to the app, we have to do a bit more work than before. Drag and drop the framework from the Carthage/Build/iOS folder into the Linked Frameworks and Libraries section on the General settings tab of the Xcode project. On the Build Phases tab, click on the + icon and choose New Run Script Phase with the following script:

/usr/local/bin/carthage copy-frameworks

Now you can add the paths of the frameworks under Input Files, which in this case is:

$(SRCROOT)/FrameworkTest/Carthage/Build/iOS/Alamofire.framework

This script works around an App Store submission bug triggered by universal binaries and ensures that the necessary bitcode-related files and dSYMs are copied when archiving. Now you only have to import the frameworks in your Swift files and use them like we did earlier in this post! Summary In this post, you learned how to create a custom framework for sharing code between your apps, along with the creation of a GitHub repository to share your open source framework with the community of developers. You also learned how to use Carthage for your GitHub repository, or with a popular framework like Alamofire, and how to import it in your apps. About the author Fabrizio Brancati is a mobile app developer and web developer currently working and living in Milan, Italy, with a passion for innovation and discovering new things. He has developed with Objective-C since iOS 3 and the iPod touch.
When Swift came out, he learned it and was so excited that he remade an Objective-C framework available on GitHub in Swift (BFKit / BFKit-Swift). Software development is his driving passion, and he loves when others make use of his software.


How to add Unit Tests to a Sails Framework Application

Luis Lobo
26 Sep 2016
8 min read
There are different ways to implement unit tests for a Node.js application. Most of them use Mocha as the test framework, Chai as the assertion library, and some of them include Istanbul for code coverage. We will be using those tools, not going into deep detail on how to use them, but rather focusing on how to successfully configure and implement them for a Sails project. 1) Creating a new application from scratch (if you don't have one already) First of all, let's create a Sails application from scratch. The Sails version in use for this article is 0.12.3. If you already have a Sails application, then you can continue to step 2. Issuing the following command creates the new application:

$ sails new sails-test-article

Once we create it, we will have the following file structure:

./sails-test-article
├── api
│   ├── controllers
│   ├── models
│   ├── policies
│   ├── responses
│   └── services
├── assets
│   ├── images
│   ├── js
│   │   └── dependencies
│   ├── styles
│   └── templates
├── config
│   ├── env
│   └── locales
├── tasks
│   ├── config
│   └── register
└── views

2) Create a basic test structure We want a folder structure that contains all our tests. For now, we will only add unit tests. In this project we want to test only services and controllers. Add the necessary modules:

npm install --save-dev mocha chai istanbul supertest

Folder structure Let's create the test folder structure that supports our tests:

mkdir -p test/fixtures test/helpers test/unit/controllers test/unit/services

After the creation of the folders, we will have this structure:

./sails-test-article
├── api [...]
├── test
│   ├── fixtures
│   ├── helpers
│   └── unit
│       ├── controllers
│       └── services
└── views

We now create a mocha.opts file inside the test folder. It contains mocha options, such as a timeout per test run, that will be passed by default to mocha every time it runs: one option per line, as described in the mocha docs.
--require chai
--reporter spec
--recursive
--ui bdd
--globals sails
--timeout 5s
--slow 2000

Up to this point, we have all our tools set up. We can do a very basic test run:

mocha test

It prints out this:

0 passing (2ms)

Normally, Node.js applications define a test script in the package.json file. Edit it so that it now looks like this:

"scripts": {
  "debug": "node debug app.js",
  "start": "node app.js",
  "test": "mocha test"
}

We are ready for the next step. 3) Bootstrap file The bootstrap.js file is the one that defines the environment that all tests use. Inside it, we define before and after events. In them, we start and stop (or 'lift' and 'lower' in Sails language) our Sails application. Since Sails makes models, controllers, and services globally available at runtime, we need to start them here.

var sails = require('sails');
var _ = require('lodash');
global.chai = require('chai');
global.should = chai.should();

before(function (done) {
  // Increase the Mocha timeout so that Sails has enough time to lift.
  this.timeout(5000);
  sails.lift({
    log: { level: 'silent' },
    hooks: { grunt: false },
    models: {
      connection: 'unitTestConnection',
      migrate: 'drop'
    },
    connections: {
      unitTestConnection: {
        adapter: 'sails-disk'
      }
    }
  }, function (err, server) {
    if (err) return done(err);
    // here you can load fixtures, etc.
    done(err, sails);
  });
});

after(function (done) {
  // here you can clear fixtures, etc.
  if (sails && _.isFunction(sails.lower)) {
    sails.lower(done);
  }
});

This file will be required by each of our tests. That way, each test can be run individually if needed, or as part of the whole suite.
4) Services tests We are now adding two models and one service to show how to test services. Create a Comment model in /api/models/Comment.js:

/**
 * Comment.js
 */
module.exports = {
  attributes: {
    comment: {type: 'string'},
    timestamp: {type: 'datetime'}
  }
};

Create a Post model in /api/models/Post.js:

/**
 * Post.js
 */
module.exports = {
  attributes: {
    title: {type: 'string'},
    body: {type: 'string'},
    timestamp: {type: 'datetime'},
    comments: {model: 'Comment'}
  }
};

Create a Post service in /api/services/PostService.js:

/**
 * PostService
 *
 * @description :: Service that handles posts
 */
module.exports = {
  getPostsWithComments: function () {
    return Post
      .find()
      .populate('comments');
  }
};

To test the Post service, we need to create a test for it in /test/unit/services/PostService.spec.js. In the case of services, we want to test business logic. So basically, you call your service methods and evaluate the results using an assertion library. In this case, we are using Chai's should.
/* global PostService */
// Here is where we init our 'sails' environment and application
require('../../bootstrap');

// Here we have our tests
describe('The PostService', function () {
  before(function (done) {
    // Create three empty posts as fixtures
    Post.create([{}, {}, {}])
      .then(function () { done(); })
      .catch(done);
  });

  it('should return all posts with their comments', function (done) {
    PostService
      .getPostsWithComments()
      .then(function (posts) {
        posts.should.be.an('array');
        posts.should.have.length(3);
        done();
      })
      .catch(done);
  });
});

We can now test our service by running:

npm test

The result should be similar to this one:

> [email protected] test /home/lobo/dev/luislobo/sails-test-article
> mocha test

  The PostService
    ✓ should return all posts with their comments

  1 passing (979ms)

5) Controllers tests In the case of controllers, we want to validate that our requests are working, and that they are returning the correct error codes and the correct data. In this case, we make use of the SuperTest module, which provides HTTP assertions.
We now add a Post controller with this content in /api/controllers/PostController.js:

/**
 * PostController
 */
module.exports = {
  getPostsWithComments: function (req, res) {
    PostService.getPostsWithComments()
      .then(function (posts) {
        res.ok(posts);
      })
      .catch(res.negotiate);
  }
};

And now we create a Post controller test in /test/unit/controllers/PostController.spec.js:

// Here is where we init our 'sails' environment and application
var supertest = require('supertest');
require('../../bootstrap');

describe('The PostController', function () {
  var createdPostId = 0;

  it('should create a post', function (done) {
    var agent = supertest.agent(sails.hooks.http.app);
    agent
      .post('/post')
      .set('Accept', 'application/json')
      .send({"title": "a post", "body": "some body"})
      .expect('Content-Type', /json/)
      .expect(201)
      .end(function (err, result) {
        if (err) {
          done(err);
        } else {
          result.body.should.be.an('object');
          result.body.should.have.property('id');
          result.body.should.have.property('title', 'a post');
          result.body.should.have.property('body', 'some body');
          createdPostId = result.body.id;
          done();
        }
      });
  });

  it('should get posts with comments', function (done) {
    var agent = supertest.agent(sails.hooks.http.app);
    agent
      .get('/post/getPostsWithComments')
      .set('Accept', 'application/json')
      .expect('Content-Type', /json/)
      .expect(200)
      .end(function (err, result) {
        if (err) {
          done(err);
        } else {
          result.body.should.be.an('array');
          result.body.should.have.length(1);
          done();
        }
      });
  });

  it('should delete post created', function (done) {
    var agent = supertest.agent(sails.hooks.http.app);
    agent
      .delete('/post/' + createdPostId)
      .set('Accept', 'application/json')
      .expect('Content-Type', /json/)
      .expect(200)
      .end(function (err, result) {
        if (err) {
          return done(err);
        } else {
          return done(null, result.text);
        }
      });
  });
});

After running the tests again:

npm test

We can see that now we have 4 tests:

> [email protected] test /home/lobo/dev/luislobo/sails-test-article
> mocha test

  The PostController
    ✓ should create a post
    ✓ should get posts with comments
    ✓ should delete post created

  The PostService
    ✓ should return all posts with their comments

  4 passing (1s)

6) Code Coverage Finally, we want to know if our code is being covered by our unit tests, with the help of Istanbul. To generate a report, we just need to run:

istanbul cover _mocha test

Once we run it, we will have a result similar to this one:

  The PostController
    ✓ should create a post
    ✓ should get posts with comments
    ✓ should delete post created

  The PostService
    ✓ should return all posts with their comments

  4 passing (1s)

=============================================================================
Writing coverage object [/home/lobo/dev/luislobo/sails-test-article/coverage/coverage.json]
Writing coverage reports at [/home/lobo/dev/luislobo/sails-test-article/coverage]
=============================================================================
=============================== Coverage summary ===============================
Statements   : 26.95% ( 45/167 )
Branches     : 3.28% ( 4/122 )
Functions    : 35.29% ( 6/17 )
Lines        : 26.95% ( 45/167 )
================================================================================

In this case, we can see that the percentages are not very nice. We don't have to worry much about these, since most of the "not covered" code is in /api/policies and /api/responses. You can check that result in a file that was created after istanbul ran, in ./coverage/lcov-report/index.html. If you remove those folders and run it again, you will see the difference:

rm -rf api/policies api/responses
istanbul cover _mocha test

Now the result is much better: 100% coverage!

  The PostController
    ✓ should create a post
    ✓ should get posts with comments
    ✓ should delete post created

  The PostService
    ✓ should return all posts with their comments

  4 passing (1s)

=============================================================================
Writing coverage object [/home/lobo/dev/luislobo/sails-test-article/coverage/coverage.json]
Writing coverage reports at [/home/lobo/dev/luislobo/sails-test-article/coverage]
=============================================================================
=============================== Coverage summary ===============================
Statements   : 100% ( 24/24 )
Branches     : 100% ( 0/0 )
Functions    : 100% ( 4/4 )
Lines        : 100% ( 24/24 )
================================================================================

Now if you check the report again, you will see a different picture: Coverage report You can get the source code for each of the steps here. I hope you enjoyed the post! Reference This article:
- follows the Sails documentation on testing your code;
- follows recommendations from Sails author, Mike McNeil;
- and adds some extra material based on my own experience developing applications using the Sails framework.
About the author Luis Lobo Borobia is the CTO at FictionCity.NET, mentor and advisor, independent software engineer, consultant, and conference speaker. He has a background as a software analyst and designer, creating, designing, and implementing software products and solutions, frameworks, and platforms for several kinds of industries. In the last few years, he has focused on research and development for the Internet of Things, using the latest bleeding-edge software and hardware technologies available.

Making History with Event Sourcing

Packt
15 Sep 2016
18 min read
In this article by Christian Baxter, author of the book Mastering Akka, we will see that when it comes to the persistence needs of an application, the most common, tried, and true approach is to model the data in a relational database. Following this approach has been the de facto way to store data until recently, when NoSQL (and to a lesser extent NewSQL) started to chip away at the footholds of relational database dominance. There's nothing wrong with storing your application's data this way—it's how we initially chose to do so for the bookstore application, using PostgreSQL as the storage engine. This article deals with event sourcing and how to implement that approach using Akka Persistence. These are the main things you can expect to learn from this article. Akka persistence for event sourcing Akka persistence is a relatively new module within the Akka toolkit. It became available as experimental in the 2.3.x series. Throughout that series, it went through quite a few changes as the team worked on getting the API and functionality right. When Akka 2.4.2 was released, the experimental label was removed, signifying that persistence was stable and ready to be leveraged in production code. Akka persistence allows stateful actors to persist their internal state. It does this not by persisting the state itself, but rather the changes to that state. It uses an append-only model to persist these state changes, allowing you to later reconstitute the state by replaying the changes to that state. It also allows you to take periodic snapshots and use those to reestablish an actor's state, as a performance optimization for long-lived entities with lots of state changes. Akka persistence's approach should certainly sound familiar, as it's almost a direct overlay of the features of event sourcing. In fact, it was inspired by the eventsourced Scala library, so that overlay is no coincidence.
Because of this alignment with event sourcing, Akka persistence will be the perfect tool for us to switch over to an event sourced model. Before getting into the details of the refactor, I want to describe some of the high-level concepts in the framework. The PersistentActor trait The PersistentActor trait is the core building block to create event sourced entities. This actor is able to persist its events to a pluggable journal. When a persistent actor is restarted (reloaded), it will replay its journaled events to reestablish its current internal state. These two behaviors perfectly fit what we need to do for our event sourced entities, so this will be our core building block. The PersistentActor trait has a lot of features, more than I will cover in the next few sections. I'll cover the things that we will use in the bookstore refactoring, which I consider to be the most useful features in PersistentActor. If you want to learn more, then I suggest you take a look at the Akka documentation, as it covers pretty much everything else that you can do with PersistentActor. Persistent actor state handling A PersistentActor implementation has two basic states that it can be in—Recovering and Receiving Commands. When Recovering, it's in the process of reloading its event stream from the journal to rebuild its internal state. Any external messages that come in during this time will be stashed until the recovery process is complete. Once the recovery process completes, the persistent actor transitions into the Receiving Commands state, where it can start to handle commands. These commands can then generate new events that can further modify the state of this entity. This two-state flow can be visualized in the following diagram: These two states are both represented by custom actor receive handling partial functions.
You must provide implementations for both of the following vals in order to properly implement these two states for your persistent actor:

val receiveRecover: Receive = { . . . }
val receiveCommand: Receive = { . . . }

While in the recovering state, there are two possible messages that you need to be able to handle. The first is one of the event types that you previously persisted for this entity type. When you get that type of message, you have to reapply the change implied by that event to the internal state of the actor. For example, if we had a SalesOrderFO fields object as the internal state, and we received a replayed event indicating that the order was approved, the handling might look something like this:

var state: SalesOrderFO = ...
val receiveRecover: Receive = {
  case OrderApproved(id) =>
    state = state.copy(status = SalesOrderStatus.Approved)
}

We'd, of course, need to handle a lot more than that one event. This code sample was just to show you how you can modify the internal state of a persistent actor when it's being recovered. Once the actor has completed the recovery process, it can transition into the state where it starts to handle incoming command requests. Event sourcing is all about action (command) and reaction (events). When the persistent actor receives a command, it has the option to generate zero to many events as a result of that command. These events represent a happening on that entity that will affect its current state. Events you receive while in the Recovering state will have been previously generated while in the Receiving Commands state. So, the preceding example that I coded, where we receive OrderApproved, must have previously come from some command that we handled earlier. The handling of that command could have looked something like this:

val receiveCommand: Receive = {
  case ApproveOrder(id) =>
    persist(OrderApproved(id)){ event =>
      state = state.copy(status = SalesOrderStatus.Approved)
      sender() ! FullResult(state)
    }
}

After receiving the command request to change the order status to approved, the code makes a call to persist, which will asynchronously write an event into the journal. The full signature for persist is:

persist[A](event: A)(handler: (A) ⇒ Unit): Unit

The first argument there represents the event that you want to write to the journal. The second argument is a callback function that will be executed after the event has been successfully persisted (and won't be called at all if the persistence fails). For our example, we use that callback function to mutate the internal state, updating the status field to match the requested action. One thing to note is that the write to the journal is asynchronous. So, one may think that it's possible to be closing over that internal state in an unsafe way when the callback function is executed. If you persisted two events in rapid succession, couldn't it be possible for both of them to access that internal state at the same time in separate threads, kind of like when using Futures in an actor? Thankfully, this is not the case. The completion of a persistence action is sent back as a new message to the actor. The hidden receive handling for this message will then invoke the callback associated with that persistence action. By using the mailbox again, we know these post-persistence actions will be executed one at a time, in a safe manner. As an added bonus, the sender associated with those post-persistence messages will be the original sender of the command, so you can safely use sender() in a persistence callback to reply to the original requestor, as shown in my example. Another guarantee that the persistence framework makes when persisting events is that no other commands will be processed in between the persistence action and the associated callback. Any commands that come in during that time will be stashed until all of the post-persistence actions have been completed.
This makes the persist/callback sequence atomic and isolated, in that nothing else can interfere with it while it's happening. Allowing additional commands to be executed during this process could lead to an inconsistent state and response to the caller who sent the commands. If, for some reason, persisting to the journal fails, there is an onPersistFailure callback that will be invoked. If you want to implement custom handling for this, you can override this method. No matter what, when persistence fails, the actor will be stopped after making this callback. At this point, it's possible that the actor is in an inconsistent state, so it's safer to stop it than to allow it to continue on in this state. Persistence failures probably mean something is failing with the journal anyway, so restarting as opposed to stopping will more than likely lead to even more failures. There's one more callback that you can implement in your persistent actors, and that's onPersistRejected. This will happen if the serialization framework rejects the serialization of the event to store. When this happens, the persist callback does not get invoked, so no internal state update will happen. In this case, the actor does not stop or restart, because it's not in an inconsistent state and the journal itself is not failing. The PersistenceId Another important concept that you need to understand with PersistentActor is the persistenceId method. This abstract method must be defined for every type of PersistentActor you define, returning a String that is to be unique across different entity types and also between actor instances within the same type. Let's say I create the Book entity as a PersistentActor and define the persistenceId method as follows:

override def persistenceId = "book"

If I do that, then I will have a problem with this entity, in that every instance will share the entire event stream of every other Book instance. If I want each instance of the Book entity to have its own separate event stream (and trust me, you will), then I will do something like this when defining the Book PersistentActor:

class Book(id: Int) extends PersistentActor {
  override def persistenceId = s"book-$id"
}

If I follow an approach like this, then I can be assured that each of my entity instances will have its own separate event stream, as the persistenceId will be unique for every Int keyed book we have. In the current model, when creating a new instance of an entity, we pass in the special ID of 0 to indicate that this entity does not yet exist and needs to be persisted. We defer ID creation to the database, and once we have an ID (after persistence), we stop that actor instance, as it is not properly associated with that newly generated ID. With the persistenceId model of associating the event stream with an entity, we will need the ID as soon as we create the actor instance. This means we will need a way to have a unique identifier even before persisting the initial entity state. This is something to think about before we get to the upcoming refactor. Taking snapshots for faster recovery I've mentioned the concept of taking a snapshot of the current state of an entity to speed up the process of recovering its state. If you have a long-lived entity that has generated a large number of events, it will take progressively more and more time to recover its state. Akka's PersistentActor supports the snapshot concept, putting it in your hands as to when to take the snapshot. Once you have taken snapshots, the latest one will be offered to the entity during the recovery phase instead of all of the events that led up to it. This will reduce the total number of events to process to recover state, thus speeding up that process. This is a two-part process, with the first part being taking snapshots periodically and the second being handling them during the recovery phase.
Let's take a look at the snapshot-taking process first. Let's say that you coded a particular entity to save a new snapshot for every one hundred events received. To make this happen, your command handling block may look something like this:

var eventTotal = ...
val receiveCommand: Receive = {
  case UpdateStatus(status) =>
    persist(StatusUpdated(status)) { event =>
      state = state.copy(status = event.status)
      eventTotal += 1
      if (eventTotal % 100 == 0)
        saveSnapshot(state)
    }
  case SaveSnapshotSuccess(metadata) => . . .
  case SaveSnapshotFailure(metadata, reason) => . . .
}

You can see in the post-persist logic that when we make the call to saveSnapshot, we pass the latest version of the actor's internal state. You're not limited to doing this just in the post-persist logic in reaction to a new event; you can also set up the actor to publish a snapshot at regular intervals. You can leverage Akka's scheduler to send a special message to the entity to instruct it to save a snapshot periodically. If you start saving snapshots, then you will have to start handling the two new messages that will be sent to the entity indicating the status of the saved snapshot. These two new message types are SaveSnapshotSuccess and SaveSnapshotFailure. The metadata that appears on both messages will tell you things such as the persistence ID where the failure occurred, the sequence number of the snapshot that failed, and the timestamp of the failure. You can see these two new messages being handled in the command handling block shown in the preceding code. Once you have saved a snapshot, you will need to start handling it in the recovery phase. The logic to handle a snapshot during recovery will look like the following code block:

val receiveRecover: Receive = {
  case SnapshotOffer(metadata, offeredSnapshot) =>
    state = offeredSnapshot
  case event => . . .
}

Here, you can see that if we get a snapshot during recovery, instead of just making an incremental change, as we do with real replayed events, we set the entire state to whatever the offered snapshot is. There may be hundreds of events that led up to that snapshot, but all we need to handle here is one message in order to wind the state forward to when we took that snapshot. This process will certainly pay dividends if we have lots of events for this entity and we continue to take periodic snapshots. One thing to note about snapshots is that you will only ever be offered the latest snapshot (per persistence id) during the recovery process. Even though I'm taking a new snapshot every 100 events, I will only ever be offered one, the latest one, during the recovery phase. Another thing to note is that there is no real harm in losing a snapshot. If your snapshot storage was wiped out for some reason, the only negative side effect is that you'll be stuck processing all of the events for an entity when recovering it. When you take snapshots, you don't lose any of the event history. Snapshots are completely supplemental and only benefit the performance of the recovery phase. You don't need to take them, and you can live without them if something happens to the ones you had taken.

Serialization of events and snapshots

Within both the persistence and snapshot examples, you can see that I was passing objects into the persist and saveSnapshot calls. So, how are these objects marshaled to and from a format that can actually be written to those stores? The answer is via Akka serialization. Akka persistence depends on Akka serialization to convert event and snapshot objects to and from a binary format that can be saved into a data store. If you don't make any changes to the default serialization configuration, then your objects will be converted into binary via Java serialization. Java serialization is both slow and inefficient in terms of the size of the serialized object.
It's also inflexible if the object definition changes after the binary was produced and you then try to read it back in. It's not a good choice for the needs of our event-sourced app. Luckily, Akka serialization allows you to provide your own custom serializers. If, perhaps, you wanted to use JSON as your serialized object representation, then you can pretty easily build a custom serializer to do that. Akka also has a built-in Google Protobuf serializer that can convert your Protobuf binding classes into their binary format. We'll explore both custom serializers and the Protobuf serializer when we get into the sections dealing with the refactors.

The AsyncWriteJournal

Another important component in Akka persistence, which I've mentioned a few times already, is the AsyncWriteJournal. This component is an append-only data store that stores the sequence of events (per persistence id) a PersistentActor generates via calls to persist. The journal also stores the highestSequenceNr per persistence id, which tracks the total number of persisted events for that persistence id. The journal is a pluggable component. You have the ability to configure the default journal and also to override it on a per-entity basis. The default configuration for Akka does not provide a value for the journal to use, so you must either configure this setting or add a per-entity override (more on that in a moment) in order to start using persistence. If you want to set the default journal, then it can be set in your config with the following property:

akka.persistence.journal.plugin = "akka.persistence.journal.leveldb"

The value in the preceding code must be the fully qualified path to another configuration section of the same name where the journal plugin's config lives. For this example, I set it to the already provided leveldb config section (from Akka's reference.conf).
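A custom serializer of the kind mentioned above extends Akka's Serializer trait. Here is a minimal sketch; the class name and the toJson/fromJson helpers are illustrative placeholders (you would plug in a real JSON library), not APIs from the original source:

```scala
import akka.serialization.Serializer

// A hypothetical JSON-style serializer for our events
class BookEventSerializer extends Serializer {
  // Unique identifier for this serializer; low values are reserved by Akka itself
  override def identifier: Int = 90010

  // Whether fromBinary should be handed the class manifest of the object
  override def includeManifest: Boolean = true

  override def toBinary(obj: AnyRef): Array[Byte] =
    toJson(obj).getBytes("UTF-8")

  override def fromBinary(bytes: Array[Byte], manifest: Option[Class[_]]): AnyRef =
    fromJson(new String(bytes, "UTF-8"), manifest)

  // Placeholders: wire these up to whatever JSON library you choose
  private def toJson(obj: AnyRef): String = ???
  private def fromJson(s: String, m: Option[Class[_]]): AnyRef = ???
}
```

To put it into effect, you would register it under akka.actor.serializers in your config and bind your event types to it under akka.actor.serialization-bindings; the exact class and binding names here are assumptions for illustration.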
If you want to override the journal plugin for a particular entity instance only, then you can do so by overriding the journalPluginId method on that entity actor, as follows:

class MyEntity extends PersistentActor {
  override def journalPluginId = "my-other-journal"
  . . .
}

The same rules apply here: my-other-journal must be the fully qualified name of a config section where the config for that plugin lives. My example config showed the use of the leveldb plugin, which writes to the local file system. If you actually want to play around with this simple plugin, then you will also need to add the following dependencies to your sbt file:

"org.iq80.leveldb" % "leveldb" % "0.7"
"org.fusesource.leveldbjni" % "leveldbjni-all" % "1.8"

If you want to use something different, then you can check the community plugins page on the Akka site to find one that suits your needs. For our app, we will use the Cassandra journal plugin. I'll show you how to set up the config for that in the section dealing with the installation of Cassandra.

The SnapshotStore

The last thing I want to cover before we start the refactoring process is the SnapshotStore. Like the AsyncWriteJournal, the SnapshotStore is a pluggable and configurable storage system, but this one stores just snapshots, as opposed to the entire event stream, for a persistence id. As I mentioned earlier, you don't need snapshots, and you can survive if the storage system you used for them gets wiped out for some reason. Because of this, you may consider using a separate storage plugin for them. When selecting the storage system for your events, you need something that is robust, distributed, highly available, fault tolerant, and backup capable. If you lose these events, you lose the entire data set for your application. The same is not true for snapshots, so take that information into consideration when selecting the storage. You may decide to use the same system for both, but you certainly don't have to.
Also, not every journal plugin can act as a snapshot plugin, so if you decide to use the same one for both, make sure that the journal plugin you select can handle snapshots. If you want to configure the snapshot store, then the config setting to do that is as follows:

akka.persistence.snapshot-store.plugin = "my-snapshot-plugin"

The setting here follows the same rules as the write journal; the value must be the fully qualified name of a config section where the plugin's config lives. If you want to override the default setting on a per-entity basis, then you can do so by overriding the snapshotPluginId method on your actor like this:

class MyEntity extends PersistentActor {
  override def snapshotPluginId = "my-other-snap-plugin"
  . . .
}

The same rules apply here as well: the value must be a fully qualified path to a config section where the plugin's config lives. Also, there are no out-of-the-box default settings for the snapshot store, so if you want to use snapshots, you must either set the appropriate setting in your config or provide the earlier mentioned override on a per-entity basis. For our needs, we will use the same storage mechanism, Cassandra, for both the write journal and the snapshot storage. We have a multi-node system currently, so using something that writes to the local file system, or a simple in-memory plugin, won't work for us.

Summary

In this article, you learned about Akka persistence for event sourcing and the need to take snapshots of the current state of an entity to speed up the process of recovering its state.

Resources for Article:

Further resources on this subject: Introduction to Akka [article] Using NoSQL Databases [article] PostgreSQL – New Features [article]

Packt
14 Sep 2016
20 min read

Decoding Why "Good PHP Developer"Isn't an Oxymoron

In this article by Junade Ali, author of the book Mastering PHP Design Patterns, we will be revisiting object-oriented programming. Back in 2010, MailChimp published a post on their blog entitled Ewww, You Use PHP? In this blog post they described the horror they encountered when they explained their choice of PHP to developers who consider the phrase good PHP programmer an oxymoron. In their rebuttal they argued that their PHP wasn't your grandfather's PHP and that they use a sophisticated framework. I tend to judge the quality of PHP on the basis of not only how it functions, but how secure it is and how it is architected. This book focuses on ideas of how you should architect your code. Good software design allows developers to extend code beyond its original purpose in a bug-free and elegant fashion. (For more resources related to this topic, see here.) As Martin Fowler put it: Any fool can write code that a computer can understand. Good programmers write code that humans can understand. This isn't just limited to code style, but to how developers architect and structure their code. I've encountered many developers with their noses constantly stuck in documentation, copying and pasting bits of code and hacking snippets together until they work. Moreover, I far too often see the software development process rapidly deteriorate as developers ever more tightly couple their classes with functions of ever-increasing length. Software engineers mustn't just code software; they must know how to design it. Indeed, a good software engineer, when interviewing other software engineers, will often ask questions surrounding the design of the code itself. It is trivial to get a piece of code to execute, and there is little value in quizzing a developer as to whether strtolower or str2lower is the correct name of a function (for the record, it's strtolower).
Knowing the difference between a class and an object doesn't make you a competent developer; a better interview question would, for example, be how one could apply subtype polymorphism to a real software development challenge. Failure to assess software design skills dumbs down an interview and results in there being no way to differentiate between those who are good at it and those who aren't. These advanced topics will be discussed throughout this book; by learning these tactics you will better understand what the right questions to ask are when discussing software architecture. Moxie Marlinspike once tweeted: As a software developer, I envy writers, musicians, and filmmakers. Unlike software, when they create something it is really done, forever. When developing software we mustn't forget that we are authors, not just of instructions for a machine, but of something that we later expect others to extend. Therefore, our code mustn't just be targeted at machines, but at humans also. Code isn't just poetry for a machine; it should be poetry for humans as well. This is, of course, easier said than done. In PHP this may be found especially difficult given the freedom PHP offers developers in how they may architect and structure their code. By the very nature of freedom, it may be both used and abused, and so it is with the freedom offered in PHP. Therefore, it is increasingly important that developers understand proper software design practices to ensure their code remains maintainable in the long term. Indeed, another key skill lies in refactoring code: improving the design of existing code to make it easier to extend in the longer term. Technical debt, the eventual consequence of poor system design, is something that I've found comes with the career of a PHP developer.
This has been true for me whether dealing with systems that provide advanced functionality or with simple websites. It usually arises because a developer elects to implement bad design for a variety of reasons, whether when adding functionality to an existing codebase or when taking poor design decisions during the initial construction of software. Refactoring can help us address these issues. SensioLabs (the creators of the Symfony framework) have a tool called Insight that allows developers to calculate the technical debt in their own code. In 2011 they did an evaluation of technical debt in various projects using this tool; rather unsurprisingly, they found that WordPress 4.1 topped the chart of all platforms they evaluated, claiming it would take 20.1 years to resolve the technical debt the project contains. Those familiar with the WordPress core may not be surprised by this, but this issue is of course not only associated with WordPress. In my career of working with PHP, from security-critical cryptography systems to mission-critical embedded systems, dealing with technical debt has come with the job. Dealing with technical debt is not something to be ashamed of for a PHP developer; indeed, some may consider tackling it courageous. It is no easy task, especially in the face of an ever more demanding user base, client, or project manager constantly demanding more functionality without being familiar with the technical debt the project carries. I recently emailed the PHP Internals group asking whether they should consider deprecating the error suppression operator @. When any PHP function call is prepended by an @ symbol, any error it raises is suppressed. This can be brutal, especially when that function triggers a fatal error that stops the execution of the script, making debugging a tough task.
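As an illustration of the behavior described above, here is a small, hypothetical sketch contrasting suppression with explicit handling (the file path is made up):

```php
<?php
// With @, the warning from a missing file is silenced and we just get false,
// leaving no clue as to why the read failed
$contents = @file_get_contents('/path/that/does/not/exist.txt');
var_dump($contents); // bool(false), with no warning shown

// Without suppression, we can check explicitly and react properly
$path = '/path/that/does/not/exist.txt';
if (is_readable($path)) {
    $contents = file_get_contents($path);
} else {
    error_log("Could not read: $path");
}
```

The second form gives the developer a recorded reason for the failure instead of a silent false.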
If the error is suppressed, the script may fail to execute without giving developers any reason why. No one objected that there are better ways of handling errors (try/catch, proper validation) than abusing the error suppression operator, or that deprecation should be an eventual aim of PHP; however, some functions return needless warnings even though they already have a success/failure return value. This means that, due to technical debt in the PHP core itself, this operator cannot be deprecated until a lot of other prerequisite work is done. In the meantime, it is down to developers to be educated as to the proper methodologies for handling errors and not to constantly resort to using an @ symbol. Fundamentally, technical debt slows down the development of a project and often leads to broken code being deployed as developers try to work on a fragile project. When starting a new project, never be afraid to discuss architecture, as architecture meetings are vital to developer collaboration; as one scrum master I've worked with said in the face of the criticism that "meetings are a great alternative to work": "meetings are work…how much work would you be doing without meetings?".

Coding style - the PSR standards

When it comes to coding style, I would like to introduce you to the PSR standards created by the PHP Framework Interop Group. Namely, the two standards that apply to coding style are PSR-1 (Basic Coding Style) and PSR-2 (Coding Style Guide). In addition to these, there are PSR standards that cover additional areas; for example, as of today, the PSR-4 standard is the most up-to-date autoloading standard published by the group.
You can find out more about the standards at http://www.php-fig.org/. Using coding style to enforce consistency throughout a codebase is something I strongly believe in; it does make a difference to your code's readability throughout a project. It is especially important when you are starting a project (chances are you may be reading this book to find out how to do that right), as your coding style determines the style that the developers who follow you in working on the project will adopt. Using a global standard such as PSR-1 or PSR-2 means that developers can easily switch between projects without having to reconfigure their code style in their IDE. Good code style can also make formatting errors easier to spot. Needless to say, coding styles will develop as time progresses; to date I elect to work with the PSR standards. I am a strong believer in the phrase: Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live. It isn't known who wrote this phrase originally, but it's widely thought that it could have been John Woods or potentially Martin Golding. I would strongly recommend familiarizing yourself with these standards before proceeding in this book.

Revising object-oriented programming

Object-oriented programming is more than just classes and objects; it's a whole programming paradigm based around objects (data structures) that contain data fields and methods. It is essential to understand this; using classes to organize a bunch of unrelated methods together is not object orientation. Assuming you're aware of classes (and how to instantiate them), allow me to remind you of a few different bits and pieces.

Polymorphism

Polymorphism is a fairly long word for a fairly simple concept. Essentially, polymorphism means the same interface is used with different underlying code.
So multiple classes could have a draw function, each accepting the same arguments, but at an underlying level the code is implemented differently. In this article, I would also like to talk about Subtype Polymorphism in particular (also known as Subtyping or Inclusion Polymorphism). Let's say we have animals as our supertype; our subtypes may well be cats, dogs, and sheep. In PHP, interfaces allow you to define a set of functionality that a class implementing them must contain; as of PHP 7 you can also use scalar type hints to define the return types we expect. So, for example, suppose we defined the following interface:

interface Animal
{
    public function eat(string $food): bool;
    public function talk(bool $shout): string;
}

We could then implement this interface in our own class, as follows:

class Cat implements Animal
{
}

If we were to run this code without defining the methods, we would get an error message as follows:

Class Cat contains 2 abstract methods and must therefore be declared abstract or implement the remaining methods (Animal::eat, Animal::talk)

Essentially, we are required to implement the methods we defined in our interface, so now let's go ahead and create a class that implements them:

class Cat implements Animal
{
    public function eat(string $food): bool
    {
        if ($food === "tuna") {
            return true;
        } else {
            return false;
        }
    }

    public function talk(bool $shout): string
    {
        if ($shout === true) {
            return "MEOW!";
        } else {
            return "Meow.";
        }
    }
}

Now that we've implemented these methods, we can just instantiate the class we are after and use the functions contained in it:

$felix = new Cat();
echo $felix->talk(false);

So where does polymorphism come into this?
Suppose we had another class for a dog:

class Dog implements Animal
{
    public function eat(string $food): bool
    {
        if (($food === "dog food") || ($food === "meat")) {
            return true;
        } else {
            return false;
        }
    }

    public function talk(bool $shout): string
    {
        if ($shout === true) {
            return "WOOF!";
        } else {
            return "Woof woof.";
        }
    }
}

Now let's suppose we have multiple different types of animals in a pets array:

$pets = array(
    'felix'     => new Cat(),
    'oscar'     => new Dog(),
    'snowflake' => new Cat()
);

We can now go ahead and loop through all these pets individually in order to run the talk function. We don't care about the type of pet, because the talk method is implemented in every class we get, by virtue of it implementing the Animal interface. So, supposing we wanted to have all our animals run the talk method, we could just use the following code:

foreach ($pets as $pet) {
    echo $pet->talk(false);
}

No need for unnecessary switch/case blocks wrapped around our classes; we just use software design to make things easier for us in the long term. Abstract classes work in a similar way, except for the fact that abstract classes can contain functionality where interfaces cannot. It is important to note that any class that defines one or more abstract methods must itself be declared abstract. You cannot have a normal class defining abstract methods, but you can have normal methods in abstract classes. Let's start off by refactoring our interface to be an abstract class:

abstract class Animal
{
    abstract public function eat(string $food): bool;
    abstract public function talk(bool $shout): string;

    public function walk(int $speed): bool
    {
        if ($speed > 0) {
            return true;
        } else {
            return false;
        }
    }
}

You might have noticed that I have also added a walk method as an ordinary, non-abstract method; this is a standard method that can be used or extended by any class that inherits from the parent abstract class. It already has an implementation.
Note that it is impossible to instantiate an abstract class (much as it's not possible to instantiate an interface). Instead we must extend it. So, in our Cat class, let's substitute:

class Cat implements Animal

with the following code:

class Cat extends Animal

That's all we need to refactor in order to get classes to extend the Animal abstract class. We must implement the abstract functions in the classes as we outlined for the interfaces, plus we can use the ordinary functions without needing to implement them:

$whiskers = new Cat();
$whiskers->walk(1);

As of PHP 5.4 it has also become possible to instantiate a class and access a property of it in one statement. PHP.net advertised it as: Class member access on instantiation has been added, e.g. (new Foo)->bar(). You can also do it with individual properties, for example, (new Cat)->legs. In our example, we can use it as follows:

(new IcyAprilChapterOneCat())->walk(1);

Just to recap a few other points about how PHP implements OOP: the final keyword before a class declaration or indeed a function declaration means that you cannot override such classes or functions after they've been defined. So, if we were to try extending a class we have marked as final:

final class Animal
{
    public function walk()
    {
        return "walking...";
    }
}

class Cat extends Animal
{
}

This results in the following output:

Fatal error: Class Cat may not inherit from final class (Animal)

Similarly, if we were to do the same except at a function level:

class Animal
{
    final public function walk()
    {
        return "walking...";
    }
}

class Cat extends Animal
{
    public function walk()
    {
        return "walking with tail wagging...";
    }
}

This results in the following output:

Fatal error: Cannot override final method Animal::walk()

Traits (multiple inheritance)

Traits were introduced into PHP as a mechanism for introducing Horizontal Reuse.
PHP conventionally acts as a single inheritance language, because a class can't inherit from more than one class. Traditional multiple inheritance is a controversial process that is often looked down upon by software engineers. Let me give you a first-hand example of using Traits; let's define an abstract Animal class which we want to extend into another class:

class Animal
{
    public function walk()
    {
        return "walking...";
    }
}

class Cat extends Animal
{
    public function walk()
    {
        return "walking with tail wagging...";
    }
}

Now let's suppose we have functions to name our objects, but we don't want them to apply to all the classes that extend the Animal class; we want them to apply to certain classes irrespective of whether they inherit the properties of the abstract Animal class or not. So we've defined our functions like so:

function setFirstName(string $name): bool
{
    $this->firstName = $name;
    return true;
}

function setLastName(string $name): bool
{
    $this->lastName = $name;
    return true;
}

The problem now is that there is no place we can put them without using Horizontal Reuse, apart from copying and pasting different bits of code or resorting to conditional inheritance. This is where Traits come to the rescue; let's start off by wrapping these methods in a Trait called Name:

trait Name
{
    function setFirstName(string $name): bool
    {
        $this->firstName = $name;
        return true;
    }

    function setLastName(string $name): bool
    {
        $this->lastName = $name;
        return true;
    }
}

Now that we've defined our Trait, we can just tell PHP to use it in our Cat class:

class Cat extends Animal
{
    use Name;

    public function walk()
    {
        return "walking with tail wagging...";
    }
}

Notice the use Name statement? That's where the magic happens.
Now you can call the functions in that Trait without any problems:

$whiskers = new Cat();
$whiskers->setFirstName('Paul');
echo $whiskers->firstName;

All put together, the new code block looks as follows:

trait Name
{
    function setFirstName(string $name): bool
    {
        $this->firstName = $name;
        return true;
    }

    function setLastName(string $name): bool
    {
        $this->lastName = $name;
        return true;
    }
}

class Animal
{
    public function walk()
    {
        return "walking...";
    }
}

class Cat extends Animal
{
    use Name;

    public function walk()
    {
        return "walking with tail wagging...";
    }
}

$whiskers = new Cat();
$whiskers->setFirstName('Paul');
echo $whiskers->firstName;

Scalar type hints

Let me take this opportunity to introduce you to a PHP 7 concept known as scalar type hinting; it allows you to define the types of function arguments and return values (yes, I know this isn't strictly under the scope of OOP; deal with it). Let's define a function, as follows:

function addNumbers(int $a, int $b): int
{
    return $a + $b;
}

Let's take a look at this function. Firstly, you will notice that before each of the arguments we define the type of variable we want to receive, in this case int, or integer. Next up, you'll notice there's a bit of code after the function definition, : int, which defines our return type, so our function can only return an integer. If you don't provide the right type of variable as a function argument or don't return the right type of variable from the function, you will get a TypeError exception. PHP will also throw a TypeError exception if strict mode is enabled and you provide the incorrect number of arguments. It is also possible in PHP to define strict_types; let me explain why you might want to do this. Without strict_types, PHP will attempt to automatically convert a variable to the defined type in very limited circumstances.
For example, if you pass a string containing solely numbers, it will be converted to an integer; a non-numeric string, however, will result in a TypeError exception. Once you enable strict_types, this all changes: you can no longer have this automatic casting behavior. Taking our previous example, without strict_types, you could do the following:

echo addNumbers(5, "5.0");

Trying it again after enabling strict_types, you will find that PHP throws a TypeError exception. This configuration only applies on an individual file basis; putting it before you include other files will not result in this configuration being inherited by those files. There are multiple reasons why PHP chose to go down this route; they are listed very clearly in Version 0.5.3 of the RFC that implemented scalar type hints, called PHP RFC: Scalar Type Declarations. You can read about it by going to http://www.wiki.php.net (the wiki, not the main PHP website) and searching for scalar_type_hints_v5. In order to enable it, make sure you put this as the very first statement in your PHP script:

declare(strict_types=1);

This will not work unless you define strict_types as the very first statement in a PHP script; no other usages of this definition are permitted. Indeed, if you try to define it later on, PHP will throw a fatal error. Of course, in the interests of the rage-induced PHP core fanatic reading this book in its coffee-stained form, I should mention that there are other valid types that can be used in type hinting. For example, PHP 5.1.0 introduced this for arrays and PHP 5.0.0 introduced the ability for a developer to do this with their own classes.
Let me give you a quick example of how this would work in practice. Suppose we had an Address class:

class Address
{
    public $firstLine;
    public $postcode;
    public $country;

    public function __construct(string $firstLine, string $postcode, string $country)
    {
        $this->firstLine = $firstLine;
        $this->postcode = $postcode;
        $this->country = $country;
    }
}

We can then type hint the Address class when we inject it into a Customer class:

class Customer
{
    public $name;
    public $address;

    public function __construct($name, Address $address)
    {
        $this->name = $name;
        $this->address = $address;
    }
}

And just to show how it all comes together:

$address = new Address('10 Downing Street', 'SW1A2AA', 'UK');
$customer = new Customer('Davey Cameron', $address);
var_dump($customer);

Limiting debug access to private/protected properties

If you define a class which contains private or protected variables, you will notice an odd behavior if you var_dump an object of that class. You will notice that when you wrap the object in a var_dump, it reveals all variables, be they protected, private, or public. PHP treats var_dump as an internal debugging function, meaning all data becomes visible. Fortunately, there is a workaround for this. PHP 5.6 introduced the __debugInfo magic method. Functions in classes preceded by a double underscore are magic methods and have special functionality associated with them. Every time you var_dump an object that has the __debugInfo magic method defined, the output of var_dump will be replaced with the result of that function call instead.
Let me show you how this works in practice. Let's start by defining a class:

class Bear
{
    private $hasPaws = true;
}

Let's instantiate this class:

$richard = new Bear();

Now if we were to try and access the private variable hasPaws, we would get a fatal error; so this call:

echo $richard->hasPaws;

would result in the following fatal error being thrown:

Fatal error: Cannot access private property Bear::$hasPaws

That is the expected output; we don't want a private property visible outside its object. That being said, if we wrap the object with a var_dump as follows:

var_dump($richard);

we would then get the following output:

object(Bear)#1 (1) {
  ["hasPaws":"Bear":private]=>
  bool(true)
}

As you can see, our private property is marked as private, but it is nevertheless visible. So how would we go about preventing this? Let's redefine our class as follows:

class Bear
{
    private $hasPaws = true;

    public function __debugInfo()
    {
        return call_user_func('get_object_vars', $this);
    }
}

Now, after we instantiate our class and var_dump the resulting object, we get the following output:

object(Bear)#1 (0) {
}

The script all put together looks like this now; you will notice I've added an extra public property called growls, which I have set to true:

<?php
class Bear
{
    private $hasPaws = true;
    public $growls = true;

    public function __debugInfo()
    {
        return call_user_func('get_object_vars', $this);
    }
}

$richard = new Bear();
var_dump($richard);

If we were to run this script (with both a public and a private property to play with), we would get the following output:

object(Bear)#1 (1) {
  ["growls"]=>
  bool(true)
}

As you can see, only the public property is visible. So what is the moral of the story from this little experiment? Firstly, that var_dump exposes private and protected properties inside objects, and secondly, that this behavior can be overridden.

Summary

In this article, we revised some PHP principles, including OOP principles.
We also revised some PHP syntax basics.

Resources for Article:

Further resources on this subject:
  • Running Simpletest and PHPUnit [article]
  • Data Tables and DataTables Plugin in jQuery 1.3 with PHP [article]
  • Understanding PHP basics [article]
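As a closing aside: Python exposes a similar hook to PHP's __debugInfo. An object's debug representation is produced by __repr__, so you can hide internal (underscore-prefixed) attributes the same way. This is a small sketch of my own, not taken from the article:

```python
class Bear:
    def __init__(self):
        self._has_paws = True   # underscore marks it as internal
        self.growls = True

    def __repr__(self):
        # Only expose attributes that are not underscore-prefixed.
        public = {k: v for k, v in vars(self).items() if not k.startswith("_")}
        return f"Bear({public})"


richard = Bear()
print(repr(richard))  # Bear({'growls': True})
```

Unlike PHP, Python does not enforce privacy at all, so this only changes what the debug output shows, not what code can access.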
It's All About Data

Packt
14 Sep 2016
12 min read
In this article by Samuli Thomasson, the author of the book Haskell High Performance Programming, we will learn how to choose and design optimal data structures in applications. You will be able to drop the level of abstraction in slow parts of code, all the way down to mutable data structures if necessary. (For more resources related to this topic, see here.)

Annotating strictness and unpacking datatype fields

We used the BangPatterns extension to make function arguments strict:

```haskell
{-# LANGUAGE BangPatterns #-}

f !s (x:xs) = f (s + 1) xs
f !s _      = s
```

Using bangs for annotating strictness in fact predates the BangPatterns extension (and the older compiler flag -fbang-patterns in GHC 6.x). With just plain Haskell 98, we are allowed to use bangs to make datatype fields strict:

```haskell
data T = T !Int
```

A bang in front of a field ensures that whenever the outer constructor (T) is in WHNF, the inner field is in WHNF as well. We can check this:

```
> T undefined `seq` ()
*** Exception: Prelude.undefined
```

There are no restrictions on which fields can be strict, be they recursive or polymorphic fields, although it rarely makes sense to make recursive fields strict. Consider the fully strict linked list:

```haskell
data List a = List !a !(List a)
            | ListEnd
```

With this much strictness, you cannot represent parts of infinite lists without always requiring infinite space. Moreover, before accessing the head of a finite strict list you must evaluate the list all the way to the last element. Strict lists don't have the streaming property of lazy lists.

By default, all data constructor fields are pointers to other data constructors or primitives, regardless of their strictness. This applies to basic data types such as Int, Double, Char, and so on, which are not primitive in Haskell.
They are data constructors over their primitive counterparts Int#, Double#, and Char#:

```
> :info Int
data Int = GHC.Types.I# GHC.Prim.Int#
```

There is a performance overhead, the size of a pointer dereference, between types such as Int and Int#, but an Int can represent lazy values (called thunks), whereas primitives cannot. Without thunks, we couldn't have lazy evaluation. Luckily, GHC is intelligent enough to unroll wrapper types as primitives in many situations, completely eliminating indirect references.

The hash suffix is specific to GHC and always denotes a primitive type. The GHC modules do expose the primitive interface. Programming with primitives, you can further micro-optimize code and get C-like performance. However, several limitations and drawbacks apply.

Using anonymous tuples

Tuples may seem harmless at first; they just lump a bunch of values together. But note that the fields in a tuple aren't strict, so a two-tuple corresponds to the slowest PairP data type from our previous benchmark. If you need a strict Tuple type, you need to define one yourself. This is also one more reason to prefer custom types over nameless tuples in many situations. These two structurally similar tuple types have widely different performance semantics:

```haskell
data Tuple  = Tuple  {-# UNPACK #-} !Int {-# UNPACK #-} !Int
data Tuple2 = Tuple2 {-# UNPACK #-} !(Int, Int)
```

If you really want unboxed anonymous tuples, you can enable the UnboxedTuples extension and write things with types like (# Int#, Char# #). But note that a number of restrictions apply to unboxed tuples, as to all primitives. The most important restriction is that unboxed types may not occur where polymorphic types or values are expected, because polymorphic values are always considered as pointers.

Representing bit arrays

One way to define a bit array in Haskell that still retains the convenience of Bool is:

```haskell
import Data.Array.Unboxed

type BitArray = UArray Int Bool
```

This representation packs 8 bits per byte, so it's space efficient.
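As a language-neutral warm-up for the accumulator benchmark that follows, here is a small Python sketch (my own illustration, not from the book) of a three-flag divisibility-parity fold: for each of 2, 3, and 5, track whether the count of divisible numbers seen so far is odd, by XOR-ing the flag on each hit:

```python
def divisible(n, x):
    return x % n == 0


def parity_counts(xs):
    """Return flags: is the count of multiples of 2, 3, 5 odd?"""
    two = three = five = False
    for x in xs:
        two ^= divisible(2, x)     # flip the flag on every multiple of 2
        three ^= divisible(3, x)
        five ^= divisible(5, x)
    return two, three, five


print(parity_counts([2, 3, 4, 5, 6]))  # (True, False, True)
```

The interesting question in the Haskell version is not the algorithm but how the three-bit accumulator is represented, which is what the benchmark below compares.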
See the following section on arrays in general to learn about time efficiency; for now we only note that BitArray is an immutable data structure, like BitStruct, and that copying small BitStructs is cheaper than copying BitArrays due to overheads in UArray.

Consider a program that processes a list of integers and tells whether they contain even or odd counts of numbers divisible by 2, 3, and 5. We can implement this with simple recursion and a three-bit accumulator. Here are three alternative representations for the accumulator:

```haskell
type BitTuple = (Bool, Bool, Bool)

data BitStruct = BitStruct !Bool !Bool !Bool deriving Show

type BitArray = UArray Int Bool
```

And the program itself is defined along these lines:

```haskell
go :: acc -> [Int] -> acc
go acc [] = acc
go (two three five) (x:xs) =
    go ((test 2 x `xor` two) (test 3 x `xor` three) (test 5 x `xor` five)) xs

test n x = x `mod` n == 0
```

I've omitted the details here. They can be found in the bitstore.hs file. The fastest variant is BitStruct, then comes BitTuple (30% slower), and BitArray is the slowest (130% slower than BitStruct). Although BitArray is the slowest (due to making a copy of the array on every iteration), it would be easy to scale the array in size or make it dynamic. Note also that this benchmark is really on the extreme side; normally programs do a bunch of other stuff besides updating an array in a tight loop.

If you need fast array updates, you can resort to the mutable arrays discussed later on. It might also be tempting to use Data.Vector.Unboxed.Vector Bool from the vector package, due to its nice interface. But beware that that representation uses one byte for every bit, wasting 7 bits per bit.

Mutable references are slow

Data.IORef and Data.STRef are the smallest bits of mutable state: references to mutable variables, one for IO and the other for ST. There is also a Data.STRef.Lazy module, which provides a wrapper over strict STRef for lazy ST.
However, because IORef and STRef are references, they imply a level of indirection. GHC intentionally does not optimize it away, as that would cause problems in concurrent settings. For this reason, IORef and STRef shouldn't be used like variables in C, for example; performance will surely be very bad. Let's verify the performance hit by considering the following ST-based sum-of-range implementation:

```haskell
-- file: sum_mutable.hs
import Control.Monad.ST
import Data.STRef

count_st :: Int -> Int
count_st n = runST $ do
    ref <- newSTRef 0
    let go 0 = readSTRef ref
        go i = modifySTRef' ref (+ i) >> go (i - 1)
    go n
```

And compare it to this pure recursive implementation:

```haskell
count_pure :: Int -> Int
count_pure n = go n 0
  where
    go 0 s = s
    go i s = go (i - 1) $! (s + i)
```

The ST implementation is many times slower when at least -O is enabled. Without optimizations, the two functions are more or less equivalent in performance; there is a similar amount of indirection from not unboxing arguments in the latter version. This is one example of the wonders that can be done to optimize referentially transparent code.

Bubble sort with vectors

Bubble sort is not an efficient sort algorithm, but because it's an in-place algorithm and simple, we will implement it as a demonstration of mutable vectors:

```haskell
-- file: bubblesort.hs
import Control.Monad.ST
import Data.Vector as V
import Data.Vector.Mutable as MV
import System.Random (randomIO) -- for testing
```

The (naive) bubble sort compares the values at all adjacent indices in order, and swaps the values if necessary. After reaching the last element, it starts from the beginning or, if no swaps were made, the list is sorted and the algorithm is done:

```haskell
bubblesortM :: (Ord a, PrimMonad m) => MVector (PrimState m) a -> m ()
bubblesortM v = loop
  where
    indices = V.fromList [1 .. MV.length v - 1]
    loop = do
        swapped <- V.foldM' f False indices   -- (1)
        if swapped then loop else return ()   -- (2)
    f swapped i = do                          -- (3)
        a <- MV.read v (i - 1)
        b <- MV.read v i
        if a > b
            then MV.swap v (i - 1) i >> return True
            else return swapped
```

At (1), we fold monadically over all but the last index, keeping state about whether or not we have performed a swap in this iteration. If we had, at (2) we rerun the fold or, if not, we can return. At (3) we compare an index and possibly swap values. We can write a pure function that wraps the stateful algorithm:

```haskell
bubblesort :: Ord a => Vector a -> Vector a
bubblesort v = runST $ do
    mv <- V.thaw v
    bubblesortM mv
    V.freeze mv
```

V.thaw and V.freeze (both O(n)) can be used to go back and forth between mutable and immutable vectors. Now, there are multiple code optimization opportunities in our implementation of bubble sort. But before tackling those, let's see how well our straightforward implementation fares using the following main:

```haskell
main = do
    v <- V.generateM 10000 $ \_ -> randomIO :: IO Double
    let v_sorted = bubblesort v
        median   = v_sorted ! 5000
    print median
```

We should remember to compile with -O2. On my machine, this program takes about 1.55s, and the Runtime System reports 99.9% productivity, 18.7 megabytes of allocated heap, and 570 kilobytes copied during GC. So now, with a baseline, let's see if we can squeeze out more performance from vectors. This is a non-exhaustive list:

  • Use unboxed vectors instead. This restricts the types of elements we can store, but it saves us a level of indirection. Down to 960ms and approximately halved GC traffic.
  • Large lists are inefficient, and they don't compose with vector stream fusion. We should change indices so that it uses V.enumFromTo instead (alternatively, turn on the OverloadedLists extension and drop V.fromList). Down to 360ms and 94% less GC traffic.
  • The conversion functions V.thaw and V.freeze are O(n), that is, they modify copies. Using the in-place V.unsafeThaw and V.unsafeFreeze instead is sometimes useful. V.unsafeFreeze in the bubblesort wrapper is completely safe, but V.unsafeThaw is not. In our example, however, with -O2, the program is optimized into a single loop and all those conversions get eliminated.
  • The vector operations (V.read, V.swap) in bubblesortM are guaranteed to never be out of bounds, so it's perfectly safe to replace them with the unsafe variants (V.unsafeRead, V.unsafeSwap) that don't check bounds. Speed-up of about 25 milliseconds, or 5%.

To summarize, by applying good practices and the safe usage of unsafe functions, our bubble sort just got 80% faster. These optimizations are applied in the bubblesort-optimized.hs file (omitted here). We noticed that almost all GC traffic came from a linked list, which was constructed and immediately consumed. Lists are bad for performance in that they don't fuse like vectors. To ensure good vector performance, ensure that the fusion framework can work effectively. Anything that can be done with a vector should be done.

As a final note, when working with vectors (and other libraries) it's a good idea to keep the Haddock documentation handy. There are several big and small performance choices to be made. Often the difference is between O(n) and O(1).

Speedup via continuation-passing style

Implementing monads in continuation-passing style (CPS) can have very good results. Unfortunately, no widely-used or supported library I'm aware of provides drop-in replacements for the ubiquitous Maybe, List, Reader, Writer, and State monads. It's not that hard to implement the standard monads in CPS from scratch.
For example, the State monad can be implemented using the Cont monad from mtl as follows:

```haskell
-- file: cont-state-writer.hs
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FlexibleContexts #-}

import Control.Monad.State.Strict
import Control.Monad.Cont

newtype StateCPS s r a = StateCPS (Cont (s -> r) a)
    deriving (Functor, Applicative, Monad, MonadCont)

instance MonadState s (StateCPS s r) where
    get          = StateCPS $ cont $ \next curState -> next curState curState
    put newState = StateCPS $ cont $ \next curState -> next () newState

runStateCPS :: StateCPS s s () -> s -> s
runStateCPS (StateCPS m) = runCont m (\_ -> id)
```

In case you're not familiar with the continuation-passing style and the Cont monad, the details might not make much sense: instead of just returning results from a function, a function in CPS applies its results to a continuation. So in short, to "get" the state in continuation-passing style, we pass the current state to the "next" continuation (first argument) and don't change the state (second argument). To "put", we call the continuation with unit (no return value) and change the state to the new state (second argument to next). StateCPS is used just like the State monad:

```haskell
action :: MonadState Int m => m ()
action = replicateM_ 1000000 $ do
    i <- get
    put $! i + 1

main = do
    print (runStateCPS action 0 :: Int)
    print (snd $ runState action 0 :: Int)
```

That action operation is, in the CPS version of the state monad, about 5% faster and performs 30% less heap allocation than the state monad from mtl. This program is limited pretty much only by the speed of monadic composition, so these numbers are at least very close to the maximum speedup we can get from CPSing the state monad. Speedups for the writer monad are probably near these results. Other standard monads can be implemented similarly to StateCPS.
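If the Haskell is hard to parse, the core trick can be mimicked in plain Python: a stateful computation becomes a function that receives a continuation and the current state, and "returns" by calling the continuation with a result and the next state. This is an illustrative sketch of my own (all names are mine), not a drop-in for anything in mtl:

```python
def get(next_):
    # Pass the current state as the result; leave the state unchanged.
    return lambda state: next_(state)(state)


def put(new_state):
    # The result is None (unit); the state becomes new_state.
    return lambda next_: lambda state: next_(None)(new_state)


def increment(next_):
    # Equivalent of: get >>= \i -> put (i + 1)
    return get(lambda i: put(i + 1)(next_))


def seq_(a, b):
    # Run a, discard its result, then run b (monadic >>).
    return lambda next_: a(lambda _r: b(next_))


def run_state(action, initial):
    # The final continuation ignores the result and returns the final state.
    return action(lambda _result: lambda state: state)(initial)


action = increment
for _ in range(4):          # five increments in total
    action = seq_(action, increment)

print(run_state(action, 0))  # 5
```

There is no function-call "return" anywhere in the pipeline: control only ever flows forward into continuations, which is exactly the property that lets a compiler like GHC turn the composed monadic code into a tight loop.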
The definitions can also be generalized to monad transformers over an arbitrary monad (a la ContT). For extra speed, you might wish to combine many monads in a single CPS monad, similarly to what RWST does.

Summary

We witnessed the performance of the bytestring, text, and vector libraries, all of which get their speed from fusion optimizations, in contrast to linked lists, which have a huge overhead despite also being subject to fusion to some degree. However, linked lists give rise to simple difference lists and zippers. The builder patterns for lists, bytestring, and text were introduced. We discovered that the array package is low-level and clumsy compared to the superior vector package, unless you must support Haskell 98. We also saw how to implement bubble sort using vectors and how to speed it up via continuation-passing style.

Resources for Article:

Further resources on this subject:
  • Data Tables and DataTables Plugin in jQuery 1.3 with PHP [article]
  • Data Extracting, Transforming, and Loading [article]
  • Linking Data to Shapes [article]
Simple Slack Websocket Integrations in <10 lines of Python

Bradley Cicenas
09 Sep 2016
3 min read
If you use Slack, you've probably added a handful of integrations for your team from the ever-growing App Directory, and maybe even had an idea for your own Slack app. While the Slack API is featureful and robust, writing your own integration can be exceptionally easy. Through the Slack RTM (Real Time Messaging) API, you can write your own basic integrations in just a few lines of Python using the SlackSocket library.

Want an accessible introduction to Python that's comprehensive enough to give you the confidence you need to dive deeper? This week, follow our Python Fundamentals course inside Mapt. It's completely free - so what have you got to lose?

Structure

Our integration will be structured with the following basic components:

  • Listener
  • Integration/bot logic
  • Response

The listener watches for one or more pre-defined "trigger" words, while the response posts the result of our intended task.

Basic Integration

We'll start by setting up SlackSocket with our API token:

```python
from slacksocket import SlackSocket

slack = SlackSocket('<slack-token>', event_filter=['message'])
```

By default, SlackSocket will listen for all Slack events. There are a lot of different events sent via RTM, but we're only concerned with 'message' events for our integration, so we've set an event_filter for only this type. Using the SlackSocket events() generator, we'll read each 'message' event that comes in and can act on various conditions:

```python
for e in slack.events():
    if e.event['text'] == '!hello':
        slack.send_msg('it works!', channel_name=e.event['channel'])
```

If our message text matches the string '!hello', we'll respond to the source channel of the event with a given message ('it works!'). At this point, we've created a complete integration that can connect to Slack as a bot user (or regular user), follow messages, and respond accordingly. Let's build something a bit more useful, like a password generator for throwaway accounts.
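Before that, a quick aside: the trigger/response logic is easy to unit test without a live Slack connection if you separate it from the event loop. Here is a small sketch of my own (independent of SlackSocket) of a trigger-word dispatcher that the loop above could delegate to:

```python
def make_dispatcher(handlers):
    """handlers maps a trigger word to a function of the event text."""
    def dispatch(text):
        for trigger, handler in handlers.items():
            if text.startswith(trigger):
                return handler(text)
        return None  # no trigger matched; stay silent
    return dispatch


dispatch = make_dispatcher({
    '!hello': lambda text: 'it works!',
    '!echo': lambda text: text[len('!echo'):].strip(),
})

print(dispatch('!hello'))          # it works!
print(dispatch('!echo hi there'))  # hi there
```

The event loop then reduces to calling dispatch on each event's text and sending any non-None reply back to the source channel.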
Expanding Functionality

For this integration command, we'll write a simple function to generate a random alphanumeric string 15 characters long:

```python
import random
import string

def randomstr():
    chars = string.ascii_letters + string.digits
    return ''.join(random.choice(chars) for _ in range(15))
```

Now we're ready to provide our random string generator to the rest of the team using the same chat logic as before, responding to the source channel with our generated password:

```python
for e in slack.events():
    if e.event['text'].startswith('!random'):
        slack.send_msg(randomstr(), channel_name=e.event['channel'])
```

Altogether:

```python
import random
import string

from slacksocket import SlackSocket

slack = SlackSocket('<slack-token>', event_filter=['message'])

def randomstr():
    chars = string.ascii_letters + string.digits
    return ''.join(random.choice(chars) for _ in range(15))

for e in slack.events():
    if e.event['text'].startswith('!random'):
        slack.send_msg(randomstr(), channel_name=e.event['channel'])
```

And the results: a complete integration in 10 lines of Python. Not bad!

Beyond simplicity, SlackSocket provides a great deal of flexibility for writing apps, bots, or integrations. In the case of massive Slack groups with several thousand users, messages are buffered locally to ensure that none are missed. Dropped websocket connections are automatically reconnected as well, making it an ideal base for a chat client. The code for SlackSocket is available on GitHub, and as always, we welcome any contributions or feature requests!

About the author

Bradley Cicenas is a New York City-based infrastructure engineer with an affinity for microservices, systems design, data science, and stoops.
Using Web API to Extend Your Application

Packt
08 Sep 2016
14 min read
In this article by Shahed Chowdhuri, author of the book ASP.NET Core Essentials, we will work through a working sample of a web API project. During this lesson, we will cover the following:

  • Web API
  • Web API configuration
  • Web API routes
  • Consuming Web API applications

(For more resources related to this topic, see here.)

Understanding a web API

Building web applications can be a rewarding experience. The satisfaction of reaching a broad set of potential users can trump the frustrating nights spent fine-tuning an application and fixing bugs. But some mobile users demand a more streamlined experience that only a native mobile app can provide. Mobile browsers may experience performance issues in low-bandwidth situations, where HTML5 applications can only go so far with a heavy server-side back-end. Enter web API, with its RESTful endpoints, built with mobile-friendly server-side code.

The case for web APIs

In order to create a piece of software, years of wisdom tell us that we should build software with users in mind. Without use cases, its features are literally useless. By designing features around user stories, it makes sense to reveal public endpoints that relate directly to user actions. As a result, you will end up with a leaner web application that works for more users.

If you need more convincing, here's a recap of features and benefits:

  • It lets you build modern lightweight web services, which are a great choice for your application, as long as you don't need SOAP
  • It's easier to work with than any past work you may have done with ASP.NET Windows Communication Foundation (WCF) services
  • It supports RESTful endpoints
  • It's great for a variety of clients, both mobile and web
  • It's unified with ASP.NET MVC and can be included with/without your web application

Creating a new web API project from scratch

Let's build a sample web application named Patient Records.
In this application, we will create a web API from scratch to allow the following tasks:

  • Add a new patient
  • Edit an existing patient
  • Delete an existing patient
  • View a specific patient or a list of patients

These four actions make up the so-called CRUD operations of our system: to Create, Read, Update, or Delete patient records. Following the steps below, we will create a new project in Visual Studio 2015:

  1. Create a new web API project.
  2. Add an API controller.
  3. Add methods for CRUD operations.

The preceding steps have been expanded into detailed instructions with the following screenshots:

  1. In Visual Studio 2015, click File | New | Project. You can also press Ctrl+Shift+N on your keyboard.
  2. On the left panel, locate the Web node below Visual C#, then select ASP.NET Core Web Application (.NET Core), as shown in the following screenshot:
  3. With this project template selected, type in a name for your project, for example PatientRecordsApi, and choose a location on your computer, as shown in the following screenshot:
  4. Optionally, you may select the checkboxes on the lower right to create a directory for your solution file and/or add your new project to source control. Click OK to proceed.
  5. In the dialog that follows, select Empty from the list of the ASP.NET Core Templates, then click OK, as shown in the following screenshot:
  6. Optionally, you can check the checkbox for Microsoft Azure to host your project in the cloud. Click OK to proceed.

Building your web API project

In the Solution Explorer, you may observe that your References are being restored.
This occurs every time you create a new project or add new references to your project that have to be restored through NuGet, as shown in the following screenshot:

Follow these steps to fix your references and build your web API project:

  1. Right-click on your project, and click Add | New Folder to add a new folder, as shown in the following screenshot:
  2. Perform the preceding step three times to create new folders for your Controllers, Models, and Views, as shown in the following screenshot:
  3. Right-click on your Controllers folder, then click Add | New Item to create a new API controller for patient records on your system, as shown in the following screenshot:
  4. In the dialog box that appears, choose Web API Controller Class from the list of options under .NET Core, as shown in the following screenshot:
  5. Name your new API controller, for example PatientController.cs, then click Add to proceed.
  6. In your new PatientController, you will most likely have several areas highlighted with red squiggly lines due to a lack of necessary dependencies, as shown in the following screenshot. As a result, you won't be able to build your project/solution at this time.

In the next section, we will learn how to configure your web API so that it has the proper references and dependencies in its configuration files.

Configuring the web API in your web application

How does the web server know what to send to the browser when a specific URL is requested? The answer lies in the configuration of your web API project.

Setting up dependencies

In this section, we will learn how to set up your dependencies automatically using the IDE, or manually by editing your project's configuration file. To pull in the necessary dependencies, you may right-click on the using statement for Microsoft.AspNet.Mvc and select Quick Actions and Refactorings…. This can also be triggered by pressing Ctrl+.
(period) on your keyboard, or simply by hovering over the underlined term, as shown in the following screenshot:

Visual Studio should offer you several possible options, from which you can select the one that adds the package Microsoft.AspNetCore.Mvc.Core for the namespace Microsoft.AspNetCore.Mvc. For the Controller class, add a reference for the Microsoft.AspNetCore.Mvc.ViewFeatures package, as shown in the following screenshot:

Fig 12: Adding the Microsoft.AspNetCore.Mvc.Core 1.0.0 package

If you select the latest version that's available, this should update your references and remove the red squiggly lines, as shown in the following screenshot:

Fig 13: Updating your references and removing the red squiggly lines

The preceding step should automatically update your project.json file with the correct dependencies for Microsoft.AspNetCore.Mvc.Core and Microsoft.AspNetCore.Mvc.ViewFeatures, as shown in the following screenshot:

The "frameworks" section of the project.json file identifies the type and version of the .NET Framework that your web app is using, for example netcoreapp1.0 for the 1.0 version of .NET Core. You will see something similar in your project, as shown in the following screenshot:

Click the Build Solution button from the top menu/toolbar. Depending on how you have your shortcuts set up, you may press Ctrl+Shift+B or press F6 on your keyboard to build the solution. You should now be able to build your project/solution without errors, as shown in the following screenshot:

Before running the web API project, open the Startup.cs class file, and replace the app.Run() statement/block (along with its contents) with a call to app.UseMvc() in the Configure() method. To add MVC to the project, add a call to services.AddMvcCore() in the ConfigureServices() method. To allow this code to compile, add a reference to Microsoft.AspNetCore.Mvc.

Parts of a web API project

Let's take a closer look at the PatientController class.
The auto-generated class has the following methods:

```csharp
public IEnumerable<string> Get()
public string Get(int id)
public void Post([FromBody]string value)
public void Put(int id, [FromBody]string value)
public void Delete(int id)
```

The Get() method simply returns a JSON object as an enumerable string of values, while the Get(int id) method is an overloaded variant that gets a particular value for a specified ID. The Post() and Put() methods can be used for creating and updating entities. Note that the Put() method takes in an ID value as the first parameter so that it knows which entity to update. Finally, we have the Delete() method, which can be used to delete an entity using the specified ID.

Running the web API project

You may run the web API project in a web browser that can display JSON data. If you use Google Chrome, I would suggest using the JSONView extension (or another similar extension) to properly display JSON data. The aforementioned extension is also available on GitHub at the following URL: https://github.com/gildas-lormeau/JSONView-for-Chrome

If you use Microsoft Edge, you can view the raw JSON data directly in the browser. Once your browser is ready, you can select your browser of choice from the top toolbar of Visual Studio. Click on the tiny triangle icon next to the Debug button, then select a browser, as shown in the following screenshot:
In the preceding screenshot, you can see that multiple installed browsers are available, including Firefox, Google Chrome, Internet Explorer, and Edge. To choose a different browser, simply click on Browse With… in the menu to select a different one. Now, click the Debug button (that is, the green play button) to see the web API project in action in your web browser, as shown in the following screenshot. If you don't have a web application set up, you won't be able to browse the site from the root URL. Don't worry if you see this error; you can update the URL to include a path to your API controller, for example http://localhost:12345/api/Patient. Note that your port number may vary. Now, you should be able to see a list of values that are being spat out by your API controller, as shown in the following screenshot:

Adding routes to handle anticipated URL paths

Back in the days of classic ASP, application URL paths typically reflected physical file paths. This continued with ASP.NET web forms, even though the concept of custom URL routing was introduced. With ASP.NET MVC, routes were designed to cater to functionality rather than physical paths. ASP.NET web API continues this newer tradition, with the ability to set up custom routes from within your code. You can create routes for your application using fluent configuration in your startup code or with declarative attributes surrounded by square brackets.

Understanding routes

To understand the purpose of having routes, let's focus on the features and benefits of routes in your application. This applies to both ASP.NET MVC and ASP.NET web API:

  • By defining routes, you can introduce predictable patterns for URL access
  • This gives you more control over how URLs are mapped to your controllers
  • Human-readable route paths are also SEO-friendly, which is great for Search Engine Optimization
  • It provides some level of obscurity when it comes to revealing the underlying web technology and physical file names in your system

Setting up routes

Let's start with this simple class-level attribute that specifies a route for your API controller, as follows:

```csharp
[Route("api/[controller]")]
public class PatientController : Controller
{
    // ...
}
```

Here, we can dissect the attribute (seen in square brackets, used to affect the class below it) and its parameter to understand what's going on:

  • The Route attribute indicates that we are going to define a route for this controller.
  • Within the parentheses that follow, the route path is defined in double quotes.
  • The first part of this path is the string literal api/, which declares that the path to an API method call will begin with the term api followed by a forward slash.
  • The rest of the path is the word controller in square brackets, which refers to the controller name. By convention, the controller's name is the part of the controller's class name that precedes the term Controller. For a class PatientController, the controller name is just the word Patient.

This means that all API methods for this controller can be accessed using the following syntax, where MyApplicationServer should be replaced with your own server or domain name:

http://MyApplicationServer/api/Patient

For method calls, you can define a route with or without parameters. The following two examples illustrate both types of route definitions:

```csharp
[HttpGet]
public IEnumerable<string> Get()
{
    return new string[] { "value1", "value2" };
}
```

In this example, the Get() method performs an action related to the HTTP verb HttpGet, which is declared in the attribute directly above the method. This identifies the default method for accessing the controller through a browser without any parameters, which means that this API method can be accessed using the following syntax:

http://MyApplicationServer/api/Patient

To include parameters, we can use the following syntax:

```csharp
[HttpGet("{id}")]
public string Get(int id)
{
    return "value";
}
```

Here, the HttpGet attribute is coupled with an "{id}" parameter, enclosed in curly braces within double quotes. The overloaded version of the Get() method also includes an integer value named id to correspond with the expected parameter.
If no parameter is specified, the value of id is equal to default(int), which is zero. This can be called without any parameters with the following syntax:

http://MyApplicationServer/api/Patient/Get

In order to pass parameters, you can add any integer value right after the controller name, with the following syntax:

http://MyApplicationServer/api/Patient/1

This will assign the number 1 to the integer variable id.

Testing routes

To test the aforementioned routes, simply run the application from Visual Studio and access the specified URLs without parameters. The preceding screenshot shows the results of accessing the following path:

http://MyApplicationServer/api/Patient/1

Consuming a web API from a client application

If a web API exposes public endpoints, but there is no client application there to consume it, does it really exist? Without getting too philosophical, let's go over the possible ways you can consume a web API. You can do any of the following:

  • Consume the web API using external tools
  • Consume the web API with a mobile app
  • Consume the web API with a web client

Testing with external tools

If you don't have a client application set up, you can use an external tool such as Fiddler. Fiddler is a free tool that is now available from Telerik, available at http://www.telerik.com/download/fiddler, as shown in the following screenshot:

You can use Fiddler to inspect URLs that are being retrieved and submitted on your machine. You can also use it to trigger any URL and change the request type (Get, Post, and others).

Consuming a web API from a mobile app

Since this article is primarily about the ASP.NET Core web API, we won't go into detail about mobile application development. However, it's important to note that a web API can provide a backend for your mobile app projects. Mobile apps may include Windows Mobile apps, iOS apps, Android apps, and any modern app that you can build for today's smartphones and tablets.
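Returning to routes for a moment: the `api/[controller]` plus `{id}` convention is essentially string templating, independent of ASP.NET itself. As a language-neutral illustration (a sketch of my own, not ASP.NET's actual routing engine), here is how such a template can be expanded and matched in Python:

```python
import re


def build_route(template, controller):
    """Expand '[controller]' in a route template, e.g. 'api/[controller]'."""
    # PatientController -> Patient, by the same naming convention.
    name = controller[:-len('Controller')] if controller.endswith('Controller') else controller
    return template.replace('[controller]', name)


def match(route, path):
    """Match a path against a route, binding '{param}' segments."""
    pattern = re.sub(r'\{(\w+)\}', r'(?P<\1>[^/]+)', route)
    m = re.fullmatch(pattern, path)
    return m.groupdict() if m else None


base = build_route('api/[controller]', 'PatientController')
print(base)                                    # api/Patient
print(match(base + '/{id}', 'api/Patient/1'))  # {'id': '1'}
```

A real framework additionally handles constraints, defaults, and precedence between candidate routes, but the core mapping from URL to controller and parameters works like this.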
You may consult the documentation for your particular platform of choice to determine what is needed to call a RESTful API.

Consuming a web API from a web client

A web client, in this case, refers to any HTML/JavaScript application that has the ability to call a RESTful API. At the very least, you can build a complete client-side solution with plain JavaScript to perform the necessary actions. For a better experience, you may use jQuery or one of the many popular JavaScript frameworks. A web client can also be part of a larger ASP.NET MVC application or a Single-Page Application (SPA). As long as your application emits JavaScript contained in HTML pages, you can build a frontend that works with your backend web API.

Summary

In this article, we've taken a look at the basic structure of an ASP.NET web API project, and observed the unification of web API and MVC in ASP.NET Core. We also learned how to use a web API as our backend to provide support for various frontend applications.
Packt
08 Sep 2016
30 min read

Customizing Xtext Components

In this article, written by Lorenzo Bettini, author of the book Implementing Domain Specific Languages Using Xtend and Xtext, Second Edition, we describe the main mechanism for customizing Xtext components: Google Guice, a dependency injection framework. With Google Guice, we can easily and consistently inject custom implementations of specific components into Xtext. In the first section, we will briefly show some Java examples that use Google Guice. Then, we will show how Xtext uses this dependency injection framework. In particular, you will learn how to customize both the runtime and the UI aspects. This article will cover the following topics:

An introduction to the Google Guice dependency injection framework
How Xtext uses Google Guice
How to customize several aspects of an Xtext DSL

Dependency injection

The Dependency Injection pattern (see the article Fowler, 2004) allows you to inject implementation objects into a class hierarchy in a consistent way. This is useful when classes delegate specific tasks to objects referenced in fields. These fields have abstract types (that is, interfaces or abstract classes) so that the dependency on actual implementation classes is removed. In this first section, we will briefly show some Java examples that use Google Guice. Of course, all the injection principles naturally apply to Xtend as well. If you want to try the following examples yourself, you need to create a new Plug-in Project, for example, org.example.guice, and add com.google.inject and javax.inject as dependencies in the MANIFEST.MF. Let's consider a possible scenario: a Service class that abstracts from the actual implementation of a Processor class and a Logger class.
The following is a possible implementation:

public class Service {
  private Logger logger;
  private Processor processor;

  public void execute(String command) {
    logger.log("executing " + command);
    processor.process(command);
    logger.log("executed " + command);
  }
}

public class Logger {
  public void log(String message) {
    System.out.println("LOG: " + message);
  }
}

public interface Processor {
  public void process(Object o);
}

public class ProcessorImpl implements Processor {
  private Logger logger;

  public void process(Object o) {
    logger.log("processing");
    System.out.println("processing " + o + "...");
  }
}

These classes correctly abstract from the implementation details, but the problem of initializing the fields correctly still persists. If we initialize the fields in the constructor, then the user still needs to hardcode the actual implementation class names. Also, note that Logger is used in two independent classes; thus, if we have a custom logger, we must make sure that all the instances use the correct one. These issues can be dealt with using dependency injection. With dependency injection, hardcoded dependencies will be removed. Moreover, we will be able to easily and consistently switch the implementation classes throughout the code. Although the same goal can be achieved manually by implementing the factory method or abstract factory patterns (see the book Gamma et al, 1995), with a dependency injection framework it is easier to keep the desired consistency, and the programmer needs to write less code. Xtext uses the dependency injection framework Google Guice, https://github.com/google/guice. We refer to the Google Guice documentation for all the features provided by this framework. In this section, we just briefly describe its main features.
You annotate the fields you want Guice to inject with the @Inject annotation (com.google.inject.Inject):

public class Service {
  @Inject private Logger logger;
  @Inject private Processor processor;

  public void execute(String command) {
    logger.log("executing " + command);
    processor.process(command);
    logger.log("executed " + command);
  }
}

public class ProcessorImpl implements Processor {
  @Inject private Logger logger;

  public void process(Object o) {
    logger.log("processing");
    System.out.println("processing " + o + "...");
  }
}

The mapping from injection requests to instances is specified in a Guice Module, a class that is derived from com.google.inject.AbstractModule. The method configure is implemented to specify the bindings using a simple and intuitive API. You only need to specify the bindings for interfaces, abstract classes, and custom classes. This means that you do not need to specify a binding for Logger, since it is a concrete class. On the contrary, you need to specify a binding for the interface Processor. The following is an example of a Guice module for our scenario:

public class StandardModule extends AbstractModule {
  @Override
  protected void configure() {
    bind(Processor.class).to(ProcessorImpl.class);
  }
}

You create an Injector using the static method Guice.createInjector, passing a module. You then use the injector to create instances:

Injector injector = Guice.createInjector(new StandardModule());
Service service = injector.getInstance(Service.class);
service.execute("First command");

The initialization of injected fields will be done automatically by Google Guice. It is worth noting that the framework is also able to initialize (inject) private fields, as in our example. Instances of classes that use dependency injection must be created only through an injector. Creating instances with new will not trigger injection, thus all the fields annotated with @Inject will be null.
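To appreciate what the module and the injector automate, it may help to contrast them with the manual alternative mentioned earlier (factory-style wiring). The following plain-Java sketch, with illustrative names and no Guice involved, wires the same Service/Processor/Logger hierarchy by hand through constructors; methods return strings instead of printing, just to keep the sketch easy to check:

```java
// Hand-wired version of the Service/Processor/Logger example
// (illustrative names; no Guice involved). The caller builds the
// object graph itself, which is exactly the boilerplate that a
// module plus an injector removes.
public class ManualWiring {

    public interface Processor {
        String process(Object o);
    }

    public static class Logger {
        public String log(String message) { return "LOG: " + message; }
    }

    public static class ProcessorImpl implements Processor {
        private final Logger logger;
        // The dependency is passed in rather than hardcoded...
        public ProcessorImpl(Logger logger) { this.logger = logger; }
        public String process(Object o) {
            return logger.log("processing") + " / processing " + o;
        }
    }

    public static class Service {
        private final Logger logger;
        private final Processor processor;
        // ...but every constructor must now thread the dependencies through.
        public Service(Logger logger, Processor processor) {
            this.logger = logger;
            this.processor = processor;
        }
        public String execute(String command) {
            return logger.log("executing " + command) + " / " + processor.process(command);
        }
    }

    public static String run() {
        // The same Logger must be passed everywhere by hand to keep the
        // configuration consistent; this is easy to get wrong at scale.
        Logger logger = new Logger();
        Service service = new Service(logger, new ProcessorImpl(logger));
        return service.execute("cmd");
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

Every call site has to build and share the Logger itself; with Guice, the module states the wiring once and the injector applies it consistently.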
When implementing a DSL with Xtext, you will never have to create a new injector manually. In fact, Xtext generates utility classes to easily obtain an injector, for example, when testing your DSL with JUnit. We also refer to the article Köhnlein, 2012 for more details. The example shown in this section only aims at presenting the main features of Google Guice. If we need a different configuration of the bindings, all we need to do is define another module. For example, let's assume that we defined additional derived implementations for logging and processing. Here is an example where Logger and Processor are bound to custom implementations:

public class CustomModule extends AbstractModule {
  @Override
  protected void configure() {
    bind(Logger.class).to(CustomLogger.class);
    bind(Processor.class).to(AdvancedProcessor.class);
  }
}

Creating instances with an injector obtained using this module will ensure that the right classes are used consistently. For example, the CustomLogger class will be used both by Service and Processor. You can create instances from different injectors in the same application, for example:

executeService(Guice.createInjector(new StandardModule()));
executeService(Guice.createInjector(new CustomModule()));

void executeService(Injector injector) {
  Service service = injector.getInstance(Service.class);
  service.execute("First command");
  service.execute("Second command");
}

It is possible to request injection in many different ways, such as injection of parameters to constructors, using named instances, specification of default implementation of an interface, setter methods, and much more. In this book, we will mainly use injected fields. Injected fields are instantiated only once when the class is instantiated. Each injection will create a new instance, unless the type to inject is marked as @Singleton (com.google.inject.Singleton). The @Singleton annotation indicates that only one instance per injector will be used.
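The "one instance per injector" rule can be made concrete with a small plain-Java sketch. The ToyInjector below is purely illustrative (Guice's real scoping machinery is far more sophisticated); it simply caches one instance per requested class, so two injectors hold two distinct singletons:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Toy model of @Singleton scoping (illustrative, not Guice's implementation):
// each injector keeps its own cache, hence "one instance per injector".
public class ScopeSketch {

    public static class ToyInjector {
        private final Map<Class<?>, Object> singletons = new HashMap<>();

        // Return this injector's cached instance, creating it on first use.
        @SuppressWarnings("unchecked")
        public <T> T getInstance(Class<T> type, Supplier<T> factory) {
            return (T) singletons.computeIfAbsent(type, k -> factory.get());
        }
    }

    public static class Logger { }

    public static void main(String[] args) {
        ToyInjector first = new ToyInjector();
        ToyInjector second = new ToyInjector();

        Logger a = first.getInstance(Logger.class, Logger::new);
        Logger b = first.getInstance(Logger.class, Logger::new);
        Logger c = second.getInstance(Logger.class, Logger::new);

        System.out.println(a == b);  // same injector: same instance
        System.out.println(a == c);  // different injectors: different instances
    }
}
```

This mirrors the behaviour described above: within one injector the singleton is shared, while a second injector (for example, one created from CustomModule) builds its own instance.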
We will see an example of singleton injection. If you want to decide when an element should be instantiated from within method bodies, you can use a provider. Instead of injecting an instance of the wanted type C, you inject a com.google.inject.Provider<C> instance, whose get method produces an instance of C. For example:

public class Logger {
  @Inject
  private Provider<Utility> utilityProvider;

  public void log(String message) {
    System.out.println("LOG: " + message + " - " + utilityProvider.get().m());
  }
}

Each time, we create a new instance of Utility using the injected Provider class. Even in this case, if the type of the created instance is annotated with @Singleton, then the same instance will always be returned for the same injector. The nice thing is that to inject a custom implementation of Utility, you do not need to provide a custom Provider: you just bind the Utility class in the Guice module and everything will work as expected:

public class CustomModule extends AbstractModule {
  @Override
  protected void configure() {
    bind(Logger.class).to(CustomLogger.class);
    bind(Processor.class).to(AdvancedProcessor.class);
    bind(Utility.class).to(CustomUtility.class);
  }
}

It is crucial to keep in mind that once classes rely on injection, their instances must be created only through an injector; otherwise, all the injected elements will be null. In general, once dependency injection is used in a framework, all classes of the framework must rely on injection.

Google Guice in Xtext

All Xtext components rely on Google Guice dependency injection, even the classes that Xtext generates for your DSL. This means that, in your classes, if you need to use a class from Xtext, you just have to declare a field of that type with the @Inject annotation. The injection mechanism allows a DSL developer to customize basically every component of the Xtext framework.
This boils down to another property of dependency injection which, in fact, inverts dependencies. The Xtext runtime can use your classes without having a dependency on their implementer. Instead, the implementer has a dependency on the interface defined by the Xtext runtime. For this reason, dependency injection is said to implement inversion of control and the dependency inversion principle. When running the MWE2 workflow, Xtext generates both a fully configured module and an empty module that inherits from the generated one. This allows you to override generated or default bindings. Customizations are added to the empty stub module. The generated module should not be touched. Xtext generates one runtime module that defines the non-user-interface-related parts of the configuration and one specific for usage in the Eclipse IDE. Guice provides a mechanism for composing modules that is used by Xtext: the module in the UI project uses the module in the runtime project and overrides some bindings. Let's consider the Entities DSL example. You can find in the src directory of the runtime project the Xtend class EntitiesRuntimeModule, which inherits from AbstractEntitiesRuntimeModule in the src-gen directory. Similarly, in the UI project, you can find in the src directory the Xtend class EntitiesUiModule, which inherits from AbstractEntitiesUiModule in the src-gen directory. The Guice modules in src-gen are already configured with the bindings for the stub classes generated during the MWE2 workflow. Thus, if you want to customize an aspect using a stub class, then you do not have to specify any specific binding. The generated stub classes concern typical aspects that the programmer usually wants to customize, for example, validation and generation in the runtime project, and labels and outline in the UI project (as we will see in the next sections).
If you need to customize an aspect which is not covered by any of the generated stub classes, then you will need to write a class yourself and then specify the binding for your class in the Guice module in the src folder. We will see an example of this scenario in the Other customizations section. Bindings in these Guice module classes can be specified as we saw in the previous section, by implementing the configure method. However, Xtext provides an enhanced API for defining bindings; Xtext reflectively searches for methods with a specific signature in order to find Guice bindings. Thus, assuming you want to bind a BaseClass class to your derived CustomClass, you can simply define a method in your module with a specific signature, as follows:

def Class<? extends BaseClass> bindBaseClass() {
  return CustomClass
}

Remember that in Xtend, you must explicitly specify that you are overriding a method of the base class; thus, in case the bind method is already defined in the base class, you need to use override instead of def. These methods are invoked reflectively, thus their signature must follow the expected convention. We refer to the official Xtext documentation for the complete description of the module API. Typically, the binding methods that you will see in this book will have the preceding shape; in particular, the name of the method must start with bind followed by the name of the class or interface we want to provide a binding for. It is important to understand that these bind methods do not necessarily have to override a method in the module base class. You can also make your own classes, which are not related to Xtext framework classes at all, participants of this injection mechanism, as long as you follow the preceding convention on method signatures. In the rest of this article, we will show examples of customizations of both IDE and runtime concepts.
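The naming convention itself is easy to model. The following sketch, which is not Xtext's actual implementation, shows how such bindXyz() methods could be discovered reflectively; all class and method names here are made up for the example:

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of convention-based binding discovery (NOT Xtext's
// real code): parameterless methods named bindXyz() that return a Class
// object are found reflectively, and each maps "Xyz" to the returned
// implementation class.
public class ConventionScanner {

    // A toy module in the Xtext style; names are hypothetical.
    public static class MyModule {
        public Class<? extends CharSequence> bindCharSequence() {
            return String.class;
        }
        public Class<? extends Number> bindNumber() {
            return Integer.class;
        }
        public String notABinding() { return "ignored"; }  // wrong shape, skipped
    }

    public static Map<String, Class<?>> scan(Object module) {
        Map<String, Class<?>> bindings = new HashMap<>();
        try {
            for (Method m : module.getClass().getMethods()) {
                // Convention: no parameters, name starts with "bind",
                // return type is a Class naming the implementation.
                if (m.getName().startsWith("bind")
                        && m.getParameterCount() == 0
                        && Class.class.isAssignableFrom(m.getReturnType())) {
                    bindings.put(m.getName().substring(4),
                                 (Class<?>) m.invoke(module));
                }
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
        return bindings;
    }

    public static void main(String[] args) {
        System.out.println(scan(new MyModule()));
    }
}
```

This is why the method name must follow the expected shape: the reflective lookup derives the bound type from the name and the implementation from the return value.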
For most of these customizations, we will modify the corresponding Xtend stub class that Xtext generated when running the MWE2 workflow. As hinted before, in these cases, we will not need to write a custom Guice binding. We will also show an example of a customization which does not have an automatically generated stub class. Xtext uses injection to inject services and not to inject state (apart from EMF Singleton registries). Thus, the things that are injected are interfaces consisting of functions that take state as arguments (for example, the document, the resource, and so on). This leads to a service-oriented architecture, which is different from an object-oriented architecture where state is encapsulated with operations. An advantage of this approach is that there are far fewer problems with synchronization of multiple threads.

Customizations of IDE concepts

In this section, we show typical concepts of the IDE for your DSL that you may want to customize. Xtext shows its usability in this context as well since, as you will see, it reduces the customization effort.

Labels

Xtext UI classes make use of an ILabelProvider interface to obtain textual labels and icons through its methods getText and getImage, respectively. ILabelProvider is a standard component of Eclipse JFace-based viewers. You can see the label provider in action in the Outline view and in content assist proposal popups (as well as in various other places). Xtext provides a default implementation of a label provider for all DSLs, which does its best to produce a sensible representation of the EMF model objects using the name feature, if it is found in the corresponding object class, and a default image. You can see that in the Outline view when editing an entities file; refer to the following screenshot: However, you surely want to customize the representation of some elements of your DSL.
The label provider Xtend stub class for your DSL can be found in the UI plug-in project, in the subpackage ui.labeling. This stub class extends the base class DefaultEObjectLabelProvider. In the Entities DSL, the class is called EntitiesLabelProvider. This class employs a Polymorphic Dispatcher mechanism, which is also used in many other places in Xtext. Thus, instead of implementing the getText and getImage methods, you can simply define several versions of methods text and image taking as parameter an EObject object of the type you want to provide a representation for. Xtext will then search for such methods according to the runtime type of the elements to represent. For example, for our Entities DSL, we can change the textual representation of attributes in order to show their names and a better representation of types (for example, name : type). We then define a method text taking Attribute as a parameter and returning a string:

class EntitiesLabelProvider extends ... {

  @Inject extension TypeRepresentation

  def text(Attribute a) {
    a.name +
      if (a.type != null)
        " : " + a.type.representation
      else ""
  }
}

To get a representation of the AttributeType element, we use an injected extension, TypeRepresentation, in particular its method representation:

class TypeRepresentation {
  def representation(AttributeType t) {
    val elementType = t.elementType
    val elementTypeRepr =
      switch (elementType) {
        BasicType : elementType.typeName
        EntityType : elementType?.entity.name
      }
    elementTypeRepr + if (t.array) "[]" else ""
  }
}

Remember that the label provider is used, for example, for the Outline view, which is refreshed when the editor contents change, and its contents might contain errors. Thus, you must be ready to deal with an incomplete model, and some features might still be null. That is why you should always check that the features are not null before accessing them.
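The dispatch-by-runtime-type idea behind those text and image methods can be sketched in plain Java. This is only a rough illustration (Xtext's PolymorphicDispatcher is far more general); the classes below are stand-ins, and the generic getText entry point picks the most specific text(...) method by inspecting the argument's runtime type:

```java
import java.lang.reflect.Method;

// Rough sketch of polymorphic dispatch (illustrative only): getText()
// looks for a text(...) method whose single parameter type matches the
// runtime type of the element, and falls back to toString() otherwise.
public class DispatchSketch {

    public static class Entity { }
    public static class Attribute { }

    public String text(Entity e)    { return "entity"; }
    public String text(Attribute a) { return "attribute : type"; }

    // Generic entry point, as in ILabelProvider.getText(Object).
    public String getText(Object element) {
        for (Method m : getClass().getMethods()) {
            if (m.getName().equals("text")
                    && m.getParameterCount() == 1
                    && m.getParameterTypes()[0].isInstance(element)) {
                try {
                    return (String) m.invoke(this, element);
                } catch (ReflectiveOperationException e) {
                    throw new RuntimeException(e);
                }
            }
        }
        return element.toString();  // default representation
    }

    public static void main(String[] args) {
        DispatchSketch provider = new DispatchSketch();
        System.out.println(provider.getText(new Entity()));
        System.out.println(provider.getText(new Attribute()));
    }
}
```

The convenience is the same as in the Xtend stub class: you add one overload per model type and never touch the generic dispatch logic.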
Note that we inject an extension field of type TypeRepresentation instead of creating an instance with new in the field declaration. Although it is not necessary to use injection for this class, we decided to rely on it because in the future we might want to provide a different implementation for this class. Another reason for using injection instead of new is that the other class may rely on injection in the future. Using injection leaves the door open for future and unanticipated customizations. The Outline view now shows as in the following screenshot: We can further enrich the labels for entities and attributes using images for them. To do this, we create a directory in the org.example.entities.ui project where we place the image files of the icons we want to use. In order to benefit from Xtext's default handling of images, we call the directory icons, and we place two GIF images there, Entity.gif and Attribute.gif (for entities and attributes, respectively). You will find the icon files in the accompanying source code in the org.example.entities.ui/icons folder. We then define two image methods in EntitiesLabelProvider, where we only need to return the names of the image files and Xtext will do the rest for us:

class EntitiesLabelProvider extends DefaultEObjectLabelProvider {
  ... // as before

  def image(Entity e) { "Entity.gif" }

  def image(Attribute a) { "Attribute.gif" }
}

You can see the result by relaunching Eclipse, as seen in the following screenshot: Now, the entities and attributes labels look nicer. If you plan to export the plugins for your DSL so that others can install them in their Eclipse, you must make sure that the icons directory is added to the build.properties file, otherwise that directory will not be exported.
The bin.includes section of the build.properties file of your UI plugin should look like the following:

bin.includes = META-INF/,\
               .,\
               plugin.xml,\
               icons/

The Outline view

The default Outline view comes with nice features. In particular, it provides toolbar buttons to keep the Outline view selection synchronized with the element currently selected in the editor. Moreover, it provides a button to sort the elements of the tree alphabetically. By default, the tree structure is built using the containment relations of the metamodel of the DSL. This strategy is not optimal in some cases. For example, an Attribute definition also contains the AttributeType element, which is a structured definition with children (for example, elementType, array, and length). This is reflected in the Outline view (refer to the previous screenshot) if you expand the Attribute elements. This shows unnecessary elements, such as BasicType names, which are now redundant since they are shown in the label of the attribute, and additional elements which are not representable with a name, such as the array feature. We can influence the structure of the Outline tree using the generated stub class EntitiesOutlineTreeProvider, in the src folder, package org.example.entities.ui.outline. Also in this class, customizations are specified in a declarative way using the polymorphic dispatch mechanism. The official documentation, https://www.eclipse.org/Xtext/documentation/, details all the features that can be customized. In our example, we just want to make sure that the nodes for attributes are leaf nodes, that is, they cannot be further expanded and they have no children. In order to achieve this, we just need to define a method named _isLeaf (note the underscore) with a parameter of the type of the element, returning true.
Thus, in our case, we write the following code:

class EntitiesOutlineTreeProvider extends DefaultOutlineTreeProvider {
  def _isLeaf(Attribute a) { true }
}

Let's relaunch Eclipse and see that the attribute nodes do not expose children anymore. Besides defining leaf nodes, you can also specify the children in the tree for a specific node by defining a _createChildren method taking as parameters the type of outline node and the type of the model element. This can be useful to define the actual root elements of the Outline tree. By default, the tree is rooted with a single node for the source file. In this example, it might be better to have a tree with many root nodes, each one representing an entity. The root of the Outline tree is always represented by a node of type DefaultRootNode. The root node is actually not visible, it is just the container of all nodes that will be displayed as roots in the tree. Thus, we define the following method (our Entities model is rooted by a Model element):

class EntitiesOutlineTreeProvider ... {
  ... // as before

  def void _createChildren(DocumentRootNode outlineNode,
                           Model model) {
    model.entities.forEach [ entity |
      createNode(outlineNode, entity)
    ]
  }
}

This way, when the Outline tree is built, we create a root node for each entity instead of having a single root for the source file. The createNode method is part of the Xtext base class. The result can be seen in the following screenshot:

Customizing other aspects

We will not show how to customize the content assistant: there is no need to do this for the simple Entities DSL, since the default implementation already does a fine job.

Custom formatting

An editor for a DSL should provide a mechanism for rearranging the text of the program in order to improve its readability, without changing its semantics. For example, nested regions inside blocks should be indented, and the user should be able to achieve that with a menu.
Besides that, implementing a custom formatter has other benefits, since the formatter is automatically used by Xtext when you change the EMF model of the AST. If you tried to apply the quickfixes, you might have noticed that after the EMF model has changed, the editor immediately reflects this change. However, the resulting textual representation is not well formatted, especially for the quickfix that adds the missing referred entity. In fact, the EMF model representing the AST does not contain any information about the textual representation, that is, all white space characters are not part of the EMF model (after all, the AST is an abstraction of the actual program). Xtext keeps track of such information in another in-memory model called the node model. The node model carries the syntactical information, that is, offset and length in the textual document. However, when we manually change the EMF model, we do not provide any formatting directives, and Xtext uses the default formatter to get a textual representation of the modified or added model parts. Xtext already generates the menu for formatting your DSL source programs in the Eclipse editor. As is standard in Eclipse editors (for example, the JDT editor), you can access the Format menu from the context menu of the editor or using the Ctrl + Shift + F key combination. The default formatter is OneWhitespaceFormatter and you can test this in the Entities DSL editor; this formatter simply separates all tokens of your program with a space. Typically, you will want to change this default behavior. If you provide a custom formatter, it will be used not only when the Format menu is invoked, but also when Xtext needs to update the editor contents after a manual modification of the AST model, for example, a quickfix performing a semantic modification. The easiest way to customize the formatting is to have the Xtext generator create a stub class.
To achieve this, you need to add the following formatter specification to the StandardLanguage block in the MWE2 workflow file, requesting the generation of an Xtend stub class:

language = StandardLanguage {
  name = "org.example.entities.Entities"
  fileExtensions = "entities"
  ...
  formatter = {
    generateStub = true
    generateXtendStub = true
  }
}

If you now run the workflow, you will find the formatter Xtend stub class in the main plugin project, in the formatting2 package. For our Entities DSL, the class is org.example.entities.formatting2.EntitiesFormatter. This stub class extends the Xtext class AbstractFormatter2. Note that the name of the package ends with 2. That is because Xtext recently completely changed the customization of the formatter to enhance its mechanisms. The old formatter is still available, though deprecated, so the new formatter classes have the 2 in the package name in order not to be mixed up with the old formatter classes. In the generated stub class, you will get lots of warnings of the shape "Discouraged access: the type AbstractFormatter2 is not accessible due to restriction on required project org.example.entities". That is because the new formatting API is still provisional, and it may change in future releases in a non-backward-compatible way. Once you are aware of that, you can decide to ignore the warnings. In order to make the warnings disappear from the Eclipse project, you can configure the project-specific settings to ignore such warnings, as shown in the following screenshot: The Xtend stub class already implements a few dispatch methods, taking as parameters the AST element to format and an IFormattableDocument object. The latter is used to specify the formatting requests. A formatting request will result in a textual replacement in the program text. Since it is an extension parameter, you can use its methods as extension methods.
The IFormattableDocument interface provides a Java API for specifying formatting requests. Xtend features such as extension methods and lambdas allow you to specify formatting requests in an easy and readable way. The typical formatting requests are line wraps, indentations, space addition and removal, and so on. These will be applied on the textual regions of AST elements. As we will show in this section, the textual regions can be specified by the EObject of the AST or by its keywords and features. For our Entities DSL, we decide to perform formatting as follows:

Insert two newlines after each entity so that entities will be separated by an empty line; after the last entity, we want a single empty line.
Indent attributes between an entity's curly brackets.
Insert one line wrap after each attribute declaration.
Make sure that the entity name, super entity, and the extends keyword are surrounded by a single space.
Remove possible white spaces around the ; of an attribute declaration.

To achieve the empty lines among entities, we modify the stub method for the Entities Model element:

def dispatch void format(Model model,
                         extension IFormattableDocument document) {
  val lastEntity = model.entities.last
  for (entity : model.entities) {
    entity.format
    if (entity === lastEntity)
      entity.append[setNewLines(1)]
    else
      entity.append[setNewLines(2)]
  }
}

We append two newlines after each entity. This way, each entity will be separated by an empty line, since each entity, except for the first one, will start on the second added newline. We append only one newline after the last entity. Now start a new Eclipse instance and manually test the formatter with some entities by pressing Ctrl + Shift + F. We then modify the format stub method for the Entity elements. In order to separate each attribute, we follow a logic similar to the previous format method.
For the sake of the example, we use a different version of setNewLines, that is, setNewLines(int minNewLines, int defaultNewLines, int maxNewLines), whose signature is self-explanatory:

for (attribute : entity.attributes) {
  attribute.append[setNewLines(1, 1, 2)]
}

Up to now, we referred to a textual region of the AST by specifying the EObject. Now, we need to specify the textual regions of keywords and features of a given AST element. In order to specify that the extends keyword is surrounded by one single space, we write the following:

entity.regionFor.keyword("extends").surround[oneSpace]

We also want to have no space around the terminating semicolon of attributes, so we write the following:

attribute.regionFor.keyword(";").surround[noSpace]

In order to specify that the entity's name and the super entity are surrounded by one single space, we write the following:

entity.regionFor.feature(ENTITY__NAME).surround[oneSpace]
entity.regionFor.feature(ENTITY__SUPER_TYPE).surround[oneSpace]

after having statically imported all the EntitiesPackage.Literals members, as follows:

import static org.example.entities.entities.EntitiesPackage.Literals.*

Finally, we want to handle the indentation inside the curly brackets of an entity, and to have a newline after the opening curly bracket.
This is achieved with the following lines:

val open = entity.regionFor.keyword("{")
val close = entity.regionFor.keyword("}")
open.append[newLine]
interior(open, close)[indent]

Summarizing, the format method for an Entity is the following one:

def dispatch void format(Entity entity,
                         extension IFormattableDocument document) {
  entity.regionFor.keyword("extends").surround[oneSpace]
  entity.regionFor.feature(ENTITY__NAME).surround[oneSpace]
  entity.regionFor.feature(ENTITY__SUPER_TYPE).surround[oneSpace]

  val open = entity.regionFor.keyword("{")
  val close = entity.regionFor.keyword("}")
  open.append[newLine]
  interior(open, close)[indent]

  for (attribute : entity.attributes) {
    attribute.regionFor.keyword(";").surround[noSpace]
    attribute.append[setNewLines(1, 1, 2)]
  }
}

Now, start a new Eclipse instance and manually test the formatter with some attributes and entities by pressing Ctrl + Shift + F. In the generated Xtend stub class, you also find an injected extension for accessing the elements of your grammar programmatically. In this DSL, it is the following:

@Inject extension EntitiesGrammarAccess

For example, to specify the left curly bracket of an entity, we could have written this alternative line:

val open = entity.regionFor.keyword(entityAccess.leftCurlyBracketKeyword_3)

Similarly, to specify the terminating semicolon of an attribute, we could have written this alternative line:

attribute.regionFor.keyword(attributeAccess.semicolonKeyword_2)
  .surround[noSpace]

Eclipse content assist will help you in selecting the right method to use. Note that the method names are suffixed with numbers that relate to the position of the keyword in the grammar's rule. Changing a rule in the DSL's grammar with additional elements or by removing some parts will make such method invocations invalid, since the method names will change.
On the other hand, if you change a keyword in your grammar, for example, you use square brackets instead of curly brackets, then referring to keywords with string literals as we did in the original implementation of the format methods will issue no compilation errors, but the formatting will no longer work as expected. Thus, you need to choose your preferred strategy according to the likelihood of your DSL's grammar evolving. You can also try to apply our quickfixes for missing entities and you will see that the added entity is nicely formatted, according to the logic we implemented. What is left to be done is to format the attribute type nicely, including the array specification. This is left as an exercise. The EntitiesFormatter you find in the accompanying sources of this example DSL also contains the formatting logic for attribute types. You should specify formatting requests so as to avoid conflicting requests on the same textual region. In case of conflicts, the formatter will throw an exception with the details of the conflict.

Other customizations

All the customizations you have seen so far were based on modification of a generated stub class with accompanying generated Guice bindings in the module under the src-gen directory. However, since Xtext relies on injection everywhere, it is possible to inject a custom implementation for any mechanism, even if no stub class has been generated. If you installed the Xtext SDK in your Eclipse, the sources of Xtext are available for you to inspect. You should learn to inspect these sources by navigating to them to see what gets injected and how it is used. Then, you are ready to provide a custom implementation and inject it. You can use the Eclipse Navigate menu. In particular, to quickly open a Java file (even from a library, if it comes with sources), use Ctrl + Shift + T (Open Type…). This works both for Java classes and Xtend classes.
If you want to quickly open another source file (for example, an Xtext grammar file), use Ctrl + Shift + R (Open Resource…). Both dialogs have a text field where, if you start typing, the available elements soon show up. Eclipse supports CamelCase everywhere, so you can just type the capital letters of a compound name to quickly get to the desired element. For example, to open the EntitiesRuntimeModule Java class, use the Open Type… menu and just type ERM to see the filtered results. As an example, we show how to customize the output directory where the generated files will be stored (the default is src-gen). Of course, this output directory can be modified by the user using the Properties dialog that Xtext generated for your DSL, but we want to customize the default output directory for the Entities DSL so that it becomes entities-gen. The default output directory is retrieved internally by Xtext using an injected IOutputConfigurationProvider instance. If you take a look at this class (see the preceding tip), you will see the following:

import com.google.inject.ImplementedBy;

@ImplementedBy(OutputConfigurationProvider.class)
public interface IOutputConfigurationProvider {
  Set<OutputConfiguration> getOutputConfigurations();
  ...

The @ImplementedBy Guice annotation tells the injection mechanism the default implementation of the interface. Thus, what we need to do is create a subclass of the default implementation (that is, OutputConfigurationProvider) and provide a custom binding for the IOutputConfigurationProvider interface.
The method we need to override is getOutputConfigurations; if we take a look at its default implementation, we see the following:

public Set<OutputConfiguration> getOutputConfigurations() {
  OutputConfiguration defaultOutput = new
    OutputConfiguration(IFileSystemAccess.DEFAULT_OUTPUT);
  defaultOutput.setDescription("Output Folder");
  defaultOutput.setOutputDirectory("./src-gen");
  defaultOutput.setOverrideExistingResources(true);
  defaultOutput.setCreateOutputDirectory(true);
  defaultOutput.setCleanUpDerivedResources(true);
  defaultOutput.setSetDerivedProperty(true);
  defaultOutput.setKeepLocalHistory(true);
  return newHashSet(defaultOutput);
}

Of course, the interesting part is the call to setOutputDirectory. We define an Xtend subclass as follows:

class EntitiesOutputConfigurationProvider extends OutputConfigurationProvider {

  public static val ENTITIES_GEN = "./entities-gen"

  override getOutputConfigurations() {
    super.getOutputConfigurations() => [
      head.outputDirectory = ENTITIES_GEN
    ]
  }
}

Note that we use a public constant for the output directory, since we might need it later in other classes. We use several Xtend features: the with operator, the implicit static extension method head, which returns the first element of a collection, and the syntactic sugar for setter methods. We create this class in the main plug-in project, since this concept is not just a UI concept and it is used also in other parts of the framework. Since it deals with generation, we create it in the generator subpackage. Now, we must bind our implementation in the EntitiesRuntimeModule class:

class EntitiesRuntimeModule extends AbstractEntitiesRuntimeModule {
  def Class<? extends IOutputConfigurationProvider>
      bindIOutputConfigurationProvider() {
    return EntitiesOutputConfigurationProvider
  }
}

If we now relaunch Eclipse, we can verify that the Java code is generated into entities-gen instead of src-gen.
If you previously used the same project, the src-gen directory might still be there from previous generations; you need to manually remove it and set the new entities-gen as a source folder. Summary In this article, we introduced the Google Guice dependency injection framework on which Xtext relies. You should now be aware of how easy it is to inject custom implementations consistently throughout the framework. You also learned how to customize some basic runtime and IDE concepts for a DSL. Resources for Article: Further resources on this subject: Testing with Xtext and Xtend [article] Clojure for Domain-specific Languages - Design Concepts with Clojure [article] Java Development [article]

Packt
07 Sep 2016
20 min read

Hello, Small World!

In this article by Stefan Björnander, the author of the book C++ Windows Programming, we will see how to create Windows applications using C++. This article introduces Small Windows by presenting two small applications:

The first application writes "Hello, Small Windows!" in a window
The second application handles circles of different colors in a document window

(For more resources related to this topic, see here.)

Hello, Small Windows!

In The C Programming Language by Brian Kernighan and Dennis Ritchie, the hello-world example was introduced. It was a small program that wrote hello, world on the screen. In this section, we shall write a similar program for Small Windows. In regular C++, the execution of the application starts with the main function. In Small Windows, however, main is hidden in the framework and has been replaced by MainWindow, whose task is to define the application name and create the main window object. The argumentList parameter corresponds to argc and argv in main. The windowShow parameter forwards the system's request regarding the window's appearance.

MainWindow.cpp

#include "..\SmallWindows\SmallWindows.h"
#include "HelloWindow.h"

void MainWindow(vector<String> /* argumentList */,
                WindowShow windowShow) {
  Application::ApplicationName() = TEXT("Hello");
  Application::MainWindowPtr() = new HelloWindow(windowShow);
}

In C++, there are two character types: char and wchar_t, where char holds a regular character of one byte and wchar_t holds a wide character of larger size, usually two bytes. There is also the string class that holds a string of char values and the wstring class that holds a string of wchar_t values. However, in Windows there is also the generic character type TCHAR, which is char or wchar_t, depending on system settings. There is also the String class that holds a string of TCHAR values. Moreover, TEXT is a macro that translates a character value to TCHAR and a text value to an array of TCHAR values.
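The generic-character mapping just described can be sketched outside of Windows. The block below mimics what the Windows <tchar.h> header does; the real header is more elaborate, and these names are redefined here only for illustration.

```cpp
#include <cassert>
#include <string>

// Minimal stand-in for the Windows generic-text mapping: the same
// names resolve to narrow or wide variants depending on UNICODE.
#ifdef UNICODE
typedef wchar_t TCHAR;            // wide build: larger than one byte
typedef std::wstring String;
#define TEXT(text) L##text        // prefix literals with L
#else
typedef char TCHAR;               // narrow build: one byte
typedef std::string String;
#define TEXT(text) text           // literals pass through unchanged
#endif

// The same source line compiles unchanged in both builds:
inline std::size_t greetingLength() {
    String greeting = TEXT("Hello, Small Windows!");
    return greeting.size();
}
```

In a narrow build, sizeof(TCHAR) is 1 and the greeting holds 21 TCHAR values; defining UNICODE switches every declaration to the wide variants without touching the code.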
To sum it up, the following is a table with the character types and string classes:

Regular character   Wide character   Generic character
char                wchar_t          TCHAR
string              wstring          String

In the applications of this book, we always use the TCHAR type, the String class, and the TEXT macro. The only exception to that rule is the clipboard handling. Our version of the hello-world program writes Hello, Small Windows! in the center of the client area. The client area of the window is the part of the window where it is possible to draw graphical objects. In the following window, the client area is the white area. The HelloWindow class extends the Small Windows Window class. It holds a constructor and the OnDraw method. The constructor calls the Window constructor with suitable information regarding the appearance of the window. OnDraw is called every time the client area of the window needs to be redrawn.

HelloWindow.h

class HelloWindow : public Window {
public:
  HelloWindow(WindowShow windowShow);
  void OnDraw(Graphics& graphics, DrawMode drawMode);
};

The constructor of HelloWindow calls the constructor of Window with the following parameters:

The first parameter of the HelloWindow constructor is the coordinate system. LogicalWithScroll indicates that each logical unit is one hundredth of a millimeter, regardless of the physical resolution of the screen. The current scroll bar settings are taken into consideration.
The second parameter of the Window constructor is the preferred size of the window. It indicates that a default size shall be used.
The third parameter is a pointer to the parent window. It is null since the window has no parent window.
The fourth and fifth parameters set the window's style, in this case overlapped windows.
The last parameter is windowShow, given by the surrounding system to MainWindow, which decides the window's initial appearance (minimized, normal, or maximized).

Finally, the constructor sets the header of the window by calling the Window method SetHeader.
HelloWindow.cpp

#include "..\SmallWindows\SmallWindows.h"
#include "HelloWindow.h"

HelloWindow::HelloWindow(WindowShow windowShow)
 :Window(LogicalWithScroll, ZeroSize, nullptr,
         OverlappedWindow, NoStyle, windowShow) {
  SetHeader(TEXT("Hello Window"));
}

The OnDraw method is called every time the client area of the window needs to be redrawn. It obtains the size of the client area and draws the text in its center with black text on a white background. The SystemFont parameter will make the text appear in the default system font. The Small Windows Color class holds the constants Black and White. Point holds a 2-dimensional point. Size holds a width and a height. The Rect class holds a rectangle; more specifically, it holds the four corners of a rectangle.

void HelloWindow::OnDraw(Graphics& graphics,
                         DrawMode /* drawMode */) {
  Size clientSize = GetClientSize();
  Rect clientRect(Point(0, 0), clientSize);
  Font textFont("New Times Roman", 12, true);
  graphics.DrawText(clientRect, TEXT("Hello, Small Windows!"),
                    textFont, Black, White);
}

The Circle application

In this section, we look into a simple circle application. As the name implies, it provides the user with the possibility to handle circles in a graphical application. The user can add a new circle by clicking the left mouse button. They can also move an existing circle by dragging it. Moreover, the user can change the color of a circle as well as save and open the document.

The main window

As we will see throughout this book, MainWindow always does the same thing: it sets the application name and creates the main window of the application. The name is used by the Save and Open standard dialogs, the About menu item, and the registry. The difference between the main window and the other windows of the application is that when the user closes the main window, the application exits. Moreover, when the user selects the Exit menu item, the main window is closed and its destructor is called.
MainWindow.cpp

#include "..\SmallWindows\SmallWindows.h"
#include "Circle.h"
#include "CircleDocument.h"

void MainWindow(vector<String> /* argumentList */,
                WindowShow windowShow) {
  Application::ApplicationName() = TEXT("Circle");
  Application::MainWindowPtr() = new CircleDocument(windowShow);
}

The CircleDocument class

The CircleDocument class extends the Small Windows class StandardDocument, which in turn extends Document and Window. In fact, StandardDocument constitutes a framework; that is, a base class with a set of virtual methods with functionality we can override and further specify. The OnMouseDown and OnMouseUp methods are overridden from Window and are called when the user presses or releases one of the mouse buttons. OnMouseMove is called when the user moves the mouse. The OnDraw method is also overridden from Window and is called every time the window needs to be redrawn. The ClearDocument, ReadDocumentFromStream, and WriteDocumentToStream methods are overridden from StandardDocument and are called when the user creates a new file, opens a file, or saves a file.

CircleDocument.h

class CircleDocument : public StandardDocument {
public:
  CircleDocument(WindowShow windowShow);
  ~CircleDocument();

  void OnMouseDown(MouseButton mouseButtons, Point mousePoint,
                   bool shiftPressed, bool controlPressed);
  void OnMouseUp(MouseButton mouseButtons, Point mousePoint,
                 bool shiftPressed, bool controlPressed);
  void OnMouseMove(MouseButton mouseButtons, Point mousePoint,
                   bool shiftPressed, bool controlPressed);

  void OnDraw(Graphics& graphics, DrawMode drawMode);

  bool ReadDocumentFromStream(String name, istream& inStream);
  bool WriteDocumentToStream(String name, ostream& outStream) const;

  void ClearDocument();

The DEFINE_BOOL_LISTENER and DEFINE_VOID_LISTENER macros define listeners: methods without parameters that are called when the user selects a menu item. The only difference between the macros is the return type of the defined methods: bool or void.
In the applications of this book, we use the common standard that the listeners called in response to user actions are prefixed with On, for instance OnRed. The methods that decide whether the menu item shall be enabled are suffixed with Enable, and the methods that decide whether the menu item shall be marked with a check mark or a radio button are suffixed with Check or Radio. In this application, we define menu items for the red, green, and blue colors. We also define a menu item for the Color standard dialog.

  DEFINE_VOID_LISTENER(CircleDocument, OnRed);
  DEFINE_VOID_LISTENER(CircleDocument, OnGreen);
  DEFINE_VOID_LISTENER(CircleDocument, OnBlue);
  DEFINE_VOID_LISTENER(CircleDocument, OnColorDialog);

When the user has chosen one of the colors red, green, or blue, its corresponding menu item shall be checked with a radio button. RedRadio, GreenRadio, and BlueRadio are called before the menu items become visible and return a Boolean value indicating whether the menu item shall be marked with a radio button.

  DEFINE_BOOL_LISTENER(CircleDocument, RedRadio);
  DEFINE_BOOL_LISTENER(CircleDocument, GreenRadio);
  DEFINE_BOOL_LISTENER(CircleDocument, BlueRadio);

The circle radius is always 500 units, which corresponds to 5 millimeters.

  static const int CircleRadius = 500;

The circleList field holds the circles, where the topmost circle is located at the beginning of the list. The nextColor field holds the color of the next circle to be added by the user. The moveIndex field is initialized to minus one to indicate that no circle is being moved at the beginning. The moveIndex and movePoint fields are used by OnMouseDown and OnMouseMove to keep track of the circle being moved by the user.

private:
  vector<Circle> circleList;
  Color nextColor;
  int moveIndex = -1;
  Point movePoint;
};

In the StandardDocument constructor call, the first two parameters are LogicalWithScroll and USLetterPortrait.
They indicate that the logical size is hundredths of millimeters and that the client area holds the logical size of a US letter: 215.9 * 279.4 millimeters (8.5 * 11 inches). If the window is resized so that the client area becomes smaller than a US letter, scroll bars are added to the window. The third parameter sets the file information used by the standard Save and Open dialogs; the text description is set to Circle Files and the file suffix is set to cle. The null pointer parameter indicates that the window does not have a parent window. The OverlappedWindow constant parameter indicates that the window shall overlap other windows, and the windowShow parameter is the window's initial appearance passed on from the surrounding system by MainWindow.

CircleDocument.cpp

#include "..\SmallWindows\SmallWindows.h"
#include "Circle.h"
#include "CircleDocument.h"

CircleDocument::CircleDocument(WindowShow windowShow)
 :StandardDocument(LogicalWithScroll, USLetterPortrait,
                   TEXT("Circle Files, cle"), nullptr,
                   OverlappedWindow, windowShow) {

The StandardDocument framework adds the standard File, Edit, and Help menus to the window menu bar. The File menu holds the New, Open, Save, Save As, Page Setup, Print Preview, and Exit items. The Page Setup and Print Preview items are optional; the seventh parameter of the StandardDocument constructor (default false) indicates their presence. The Edit menu holds the Cut, Copy, Paste, and Delete items. They are disabled by default; we will not use them in this application. The Help menu holds the About item; the application name set in MainWindow is used to display a message box with a standard message: Circle, version 1.0. We add the standard File and Edit menus to the menu bar. Then we add the Color menu, which is the application-specific menu of this application. Finally, we add the standard Help menu and set the menu bar of the document. The Color menu holds the menu items used to set the circle colors.
The OnRed, OnGreen, and OnBlue methods are called when the user selects the corresponding menu item, and RedRadio, GreenRadio, and BlueRadio are called before the user opens the color menu, in order to decide whether the items shall be marked with a radio button. OnColorDialog opens a standard color dialog. In the text &Red\tCtrl+R, the ampersand (&) indicates that the menu item has a mnemonic; that is, the letter R will be underlined and it is possible to select the menu item by pressing R after the menu has been opened. The tabulator character (\t) indicates that the second part of the text defines an accelerator; that is, the text Ctrl+R will occur right-justified in the menu item and the item can be selected by pressing Ctrl+R.

Menu menuBar(this);
menuBar.AddMenu(StandardFileMenu(false));

The AddItem method in the Menu class also takes two more parameters for enabling the menu item and setting a check box. However, we do not use them in this application; therefore, we send null pointers.

Menu colorMenu(this, TEXT("&Color"));
colorMenu.AddItem(TEXT("&Red\tCtrl+R"), OnRed,
                  nullptr, nullptr, RedRadio);
colorMenu.AddItem(TEXT("&Green\tCtrl+G"), OnGreen,
                  nullptr, nullptr, GreenRadio);
colorMenu.AddItem(TEXT("&Blue\tCtrl+B"), OnBlue,
                  nullptr, nullptr, BlueRadio);
colorMenu.AddSeparator();
colorMenu.AddItem(TEXT("&Dialog ..."), OnColorDialog);
menuBar.AddMenu(colorMenu);

menuBar.AddMenu(StandardHelpMenu());
SetMenuBar(menuBar);

Finally, we read the current color (the color of the next circle to be added) from the registry; red is the default color in case there is no color stored in the registry.

nextColor.ReadColorFromRegistry(TEXT("NextColor"), Red);
}

The destructor saves the current color in the registry. In this application, we do not need to perform the destructor's normal tasks, such as deallocating memory or closing files.

CircleDocument::~CircleDocument() {
  nextColor.WriteColorToRegistry(TEXT("NextColor"));
}

The ClearDocument method is called when the user selects the New menu item.
In this case, we just clear the circle list. Every other action, such as redrawing the window or changing its title, is taken care of by StandardDocument.

void CircleDocument::ClearDocument() {
  circleList.clear();
}

The WriteDocumentToStream method is called by StandardDocument when the user saves a file (by selecting Save or Save As). It writes the number of circles (the size of the circle list) to the output stream and calls WriteCircle for each circle in order to write their states to the stream.

bool CircleDocument::WriteDocumentToStream(String name,
                                           ostream& outStream) const {
  int size = circleList.size();
  outStream.write((char*) &size, sizeof size);

  for (Circle circle : circleList) {
    circle.WriteCircle(outStream);
  }

  return ((bool) outStream);
}

The ReadDocumentFromStream method is called by StandardDocument when the user opens a file by selecting the Open menu item. It reads the number of circles (the size of the circle list), and for each circle it creates a new object of the Circle class, calls ReadCircle in order to read the state of the circle, and adds the circle object to circleList.

bool CircleDocument::ReadDocumentFromStream(String name,
                                            istream& inStream) {
  int size;
  inStream.read((char*) &size, sizeof size);

  for (int count = 0; count < size; ++count) {
    Circle circle;
    circle.ReadCircle(inStream);
    circleList.push_back(circle);
  }

  return ((bool) inStream);
}

The OnMouseDown method is called when the user presses one of the mouse buttons. First we need to check that the user has pressed the left mouse button. If so, we loop through the circle list and call IsClick for each circle in order to decide whether the user has clicked at a circle. Note that the topmost circle is located at the beginning of the list; therefore, we loop from the beginning of the list. If we find a clicked circle, we break the loop. If the user has clicked at a circle, we store its index in moveIndex and the current mouse position in movePoint.
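The hit-test loop just described can be sketched without the framework. The block below is an illustrative stand-alone version; PlainCircle and Pos are hypothetical stand-ins for the book's Circle and Point classes, not the framework's actual types.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative stand-ins for the framework's Circle and Point classes.
struct Pos { int x, y; };
struct PlainCircle { Pos center; int radius; };

// A point lies inside a circle when its distance to the center
// does not exceed the radius (Pythagorean theorem).
bool isClick(const PlainCircle& circle, Pos point) {
    int width = point.x - circle.center.x,
        height = point.y - circle.center.y;
    int distance = (int) std::sqrt(
        (double) (width * width) + (height * height));
    return distance <= circle.radius;
}

// The list is ordered topmost-first, so a forward scan returns the
// topmost clicked circle; -1 means no circle was hit.
int findClickedIndex(const std::vector<PlainCircle>& circleList,
                     Pos mousePoint) {
    int size = (int) circleList.size();
    for (int index = 0; index < size; ++index) {
        if (isClick(circleList[index], mousePoint)) {
            return index;
        }
    }
    return -1;
}
```

Breaking out of the scan on the first hit is what makes the circle drawn last (stored first) win when several circles overlap under the mouse.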
Both values are needed by the OnMouseMove method, which will be called when the user moves the mouse.

void CircleDocument::OnMouseDown
  (MouseButton mouseButtons, Point mousePoint,
   bool shiftPressed /* = false */,
   bool controlPressed /* = false */) {
  if (mouseButtons == LeftButton) {
    moveIndex = -1;
    int size = circleList.size();

    for (int index = 0; index < size; ++index) {
      if (circleList[index].IsClick(mousePoint)) {
        moveIndex = index;
        movePoint = mousePoint;
        break;
      }
    }

However, if the user has not clicked at a circle, we add a new circle. A circle is defined by its center position (mousePoint), radius (CircleRadius), and color (nextColor). An invalidated area is a part of the client area that needs to be redrawn. Remember that in Windows we normally do not draw figures directly. Instead, we call Invalidate to tell the system that an area needs to be redrawn and force the actual redrawing by calling UpdateWindow, which eventually results in a call to OnDraw. The invalidated area is always a rectangle. Invalidate has a second parameter (default true) indicating that the invalidated area shall be cleared. Technically, it is painted in the window's client color, which in this case is white. In this way, the previous location of the circle becomes cleared and the circle is drawn at its new location. The SetDirty method tells the framework that the document has been altered (the document has become dirty), which causes the Save menu item to be enabled and the user to be warned if they try to close the window without saving it.

    if (moveIndex == -1) {
      Circle newCircle(mousePoint, CircleRadius, nextColor);
      circleList.push_back(newCircle);
      Invalidate(newCircle.Area());
      UpdateWindow();
      SetDirty(true);
    }
  }
}

The OnMouseMove method is called every time the user moves the mouse with at least one mouse button pressed. We first need to check whether the user is pressing the left mouse button and is clicking at a circle (whether moveIndex does not equal minus one).
If they have, we calculate the distance from the previous mouse event (OnMouseDown or OnMouseMove) by comparing the previous mouse position movePoint with the current mouse position mousePoint. We update the circle position, invalidate both the old and new areas, force a redrawing of the invalidated areas with UpdateWindow, and set the dirty flag.

void CircleDocument::OnMouseMove
  (MouseButton mouseButtons, Point mousePoint,
   bool shiftPressed /* = false */,
   bool controlPressed /* = false */) {
  if ((mouseButtons == LeftButton) && (moveIndex != -1)) {
    Size distanceSize = mousePoint - movePoint;
    movePoint = mousePoint;

    Circle& movedCircle = circleList[moveIndex];
    Invalidate(movedCircle.Area());

    movedCircle.Center() += distanceSize;
    Invalidate(movedCircle.Area());

    UpdateWindow();
    SetDirty(true);
  }
}

Strictly speaking, OnMouseUp could be excluded, since moveIndex is set to minus one in OnMouseDown, which is always called before OnMouseMove. However, it has been included for the sake of completeness.

void CircleDocument::OnMouseUp
  (MouseButton mouseButtons, Point mousePoint,
   bool shiftPressed /* = false */,
   bool controlPressed /* = false */) {
  moveIndex = -1;
}

The OnDraw method is called every time the window needs to be (partly or completely) redrawn. The call can have been initialized by the system as a response to an event (for instance, the window has been resized) or by an earlier call to UpdateWindow. The Graphics reference parameter has been created by the framework and can be considered a toolbox for drawing lines, painting areas, and writing text. However, in this application we do not write text. We iterate through the circle list and, for each circle, call the Draw method. Note that we do not care about which circles are to be physically redrawn. We simply redraw all circles. However, only the circles located in an area that has been invalidated by a previous call to Invalidate will be physically redrawn.
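The drag arithmetic described above, shifting the center by the difference between the current and the previous mouse position, can be sketched on its own. Pt is an illustrative stand-in for the framework's Point class.

```cpp
#include <cassert>

// Illustrative stand-in for the framework's Point class.
struct Pt { int x, y; };

// Mirrors the OnMouseMove arithmetic:
// distance = mousePoint - movePoint, then center += distance.
Pt dragCenter(Pt center, Pt previousMouse, Pt currentMouse) {
    center.x += currentMouse.x - previousMouse.x;
    center.y += currentMouse.y - previousMouse.y;
    return center;
}
```

Because each move event also replaces the stored previous position, the deltas accumulate correctly over a whole drag gesture.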
The Draw method has a second parameter indicating the draw mode, which can be Paint or Print. Paint indicates that OnDraw is called by OnPaint in Window and that the painting is performed in the window's client area. Print indicates that OnDraw is called by OnPrint and that the painting is sent to a printer. However, in this application we do not use that parameter.

void CircleDocument::OnDraw(Graphics& graphics,
                            DrawMode /* drawMode */) {
  for (Circle circle : circleList) {
    circle.Draw(graphics);
  }
}

The RedRadio, GreenRadio, and BlueRadio methods are called before the menu items are shown, and the items will be marked with a radio button in case they return true. The Red, Green, and Blue constants are defined in the Color class.

bool CircleDocument::RedRadio() const {
  return (nextColor == Red);
}

bool CircleDocument::GreenRadio() const {
  return (nextColor == Green);
}

bool CircleDocument::BlueRadio() const {
  return (nextColor == Blue);
}

The OnRed, OnGreen, and OnBlue methods are called when the user selects the corresponding menu item. They all set the nextColor field to an appropriate value.

void CircleDocument::OnRed() {
  nextColor = Red;
}

void CircleDocument::OnGreen() {
  nextColor = Green;
}

void CircleDocument::OnBlue() {
  nextColor = Blue;
}

The OnColorDialog method is called when the user selects the Color dialog menu item and displays the standard Color dialog. If the user chooses a new color, nextColor will be given the chosen color value.

void CircleDocument::OnColorDialog() {
  ColorDialog(this, nextColor);
}

The Circle class

The Circle class is a class holding the information about a single circle. The default constructor is used when reading a circle from a file. The second constructor is used when creating a new circle.
The IsClick method returns true if the given point is located inside the circle (to check whether the user has clicked in the circle), Area returns the circle's surrounding rectangle (for invalidating), and Draw is called to redraw the circle.

Circle.h

class Circle {
public:
  Circle();
  Circle(Point center, int radius, Color color);

  bool WriteCircle(ostream& outStream) const;
  bool ReadCircle(istream& inStream);

  bool IsClick(Point point) const;
  Rect Area() const;
  void Draw(Graphics& graphics) const;

  Point Center() const {return center;}
  Point& Center() {return center;}
  Color GetColor() {return color;}

As mentioned in the previous section, a circle is defined by its center position (center), radius (radius), and color (color).

private:
  Point center;
  int radius;
  Color color;
};

The default constructor does not need to initialize the fields, since it is called when the user opens a file and the values are read from the file. The second constructor, however, initializes the center point, radius, and color of the circle.

Circle.cpp

#include "..\SmallWindows\SmallWindows.h"
#include "Circle.h"

Circle::Circle() {
  // Empty.
}

Circle::Circle(Point center, int radius, Color color)
 :color(color), center(center), radius(radius) {
  // Empty.
}

The WriteCircle method writes the color, center point, and radius to the stream. Since the radius is a regular integer, we simply use the C standard function write, while Color and Point have their own methods to write their values to a stream. In ReadCircle we read the color, center point, and radius from the stream in a similar manner.
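The size-then-records stream layout used by the document and circle serialization above can be exercised without the framework. The block below is a hypothetical stand-alone sketch; CircleRecord is an illustrative flat struct, not the book's Circle class, which delegates to Color and Point stream methods.

```cpp
#include <cassert>
#include <sstream>
#include <vector>

// Illustrative flat record standing in for the book's Circle class.
struct CircleRecord { int x, y, radius; };

// Write the record count first, then each record, as the document does.
void writeCircles(std::ostream& outStream,
                  const std::vector<CircleRecord>& circleList) {
    int size = (int) circleList.size();
    outStream.write((char*) &size, sizeof size);

    for (const CircleRecord& circle : circleList) {
        outStream.write((char*) &circle, sizeof circle);
    }
}

// Read the count back, then read exactly that many records.
std::vector<CircleRecord> readCircles(std::istream& inStream) {
    int size = 0;
    inStream.read((char*) &size, sizeof size);

    std::vector<CircleRecord> circleList(size);
    for (CircleRecord& circle : circleList) {
        inStream.read((char*) &circle, sizeof circle);
    }
    return circleList;
}
```

A round trip through a std::stringstream reproduces the original list, which is exactly what happens when a saved document is reopened.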
bool Circle::WriteCircle(ostream& outStream) const {
  color.WriteColorToStream(outStream);
  center.WritePointToStream(outStream);
  outStream.write((char*) &radius, sizeof radius);
  return ((bool) outStream);
}

bool Circle::ReadCircle(istream& inStream) {
  color.ReadColorFromStream(inStream);
  center.ReadPointFromStream(inStream);
  inStream.read((char*) &radius, sizeof radius);
  return ((bool) inStream);
}

The IsClick method uses the Pythagorean theorem to calculate the distance between the given point and the circle's center point, and returns true if the point is located inside the circle (if the distance is less than or equal to the circle radius).

bool Circle::IsClick(Point point) const {
  int width = point.X() - center.X(),
      height = point.Y() - center.Y();
  int distance = (int) sqrt((width * width) +
                            (height * height));
  return (distance <= radius);
}

The top-left corner of the resulting rectangle is the center point minus the radius, and the bottom-right corner is the center point plus the radius.

Rect Circle::Area() const {
  Point topLeft = center - radius,
        bottomRight = center + radius;
  return Rect(topLeft, bottomRight);
}

We use the FillEllipse method (there is no FillCircle method) of the Small Windows Graphics class to draw the circle. The circle's border is always black, while its interior color is given by the color field.

void Circle::Draw(Graphics& graphics) const {
  Point topLeft = center - radius,
        bottomRight = center + radius;
  Rect circleRect(topLeft, bottomRight);
  graphics.FillEllipse(circleRect, Black, color);
}

Summary

In this article, we have looked into two applications in Small Windows: a simple hello-world application and a slightly more advanced circle application, which has introduced the framework. We have looked into menus, circle drawing, and mouse handling.

Resources for Article: Further resources on this subject: C++, SFML, Visual Studio, and Starting the first game [article] Game Development Using C++ [article] Boost.Asio C++ Network Programming [article]
Cheryl Adams
19 Aug 2016
6 min read

Running Your Applications with AWS - Part 2

An active account with AWS means you are on your way to building in the cloud. Before you start building, you need to tackle Billing and Cost Management, under Account. It is likely that you are starting with the free tier, so it is important to know that you still have the option of paying for additional services. Also, if you decide to continue with AWS, you should get familiar with this page. This is not your average bill or invoice page—it is much more than that. The Billing & Cost Management Dashboard is a bird's-eye view of all of your account activity. Once you start accumulating pay-as-you-go services, this page will give you a quick review of your monthly spending based on services. Part of managing your cloud services includes billing, so it is a good idea to become familiar with this from the start. Amazon also gives you the option of setting up cost-based alerts for your system, which is essential if you want to be alerted about any excessive cost related to your cloud services. Budgets allow you to receive e-mailed notifications or alerts if spending exceeds the budget that you have created. If you want to dig in even deeper, try turning on the Cost Explorer for an analysis of your spending. The Billing and Cost Management section of your account is much more than just invoices. It is the complete AWS cost management system for your cloud. Being familiar with all aspects of the cost management system will help you to monitor your cloud services, and hopefully avoid any expenses that may exceed your budget. In our previous discussion, we considered all AWS services. Let's take another look at the details of the services.

Amazon Web Services

Based on this illustration, you can see that the build options are grouped by terms such as Compute, Storage & Content Delivery, and Databases. Each of these objects or services lists a step-by-step routine that is easy to follow. Within the AWS site, there are numerous tutorials with detailed build instructions.
If you are still exploring in the free tier, AWS also has an active online community of users who try to answer most questions. Let's look at the build process for Amazon's EC2 virtual server. The first thing that you will notice is that Amazon provides 22 different Amazon Machine Images (AMIs) to choose from (at the time this post was written). At the top of the screen is a step process that will guide you through the build. It should be noted that some of the images available are not part of the free-tier plan. The remaining images that do fit into the plan should suit almost any project need. For this walkthrough, let's select SUSE Linux (free-tier eligible). It is important to note that just because the image itself is free, that does not mean all the options available within that image are free. Notice on this screen that Amazon has pre-selected the only free-tier option available for this image. From this screen you are given two options: Review and Launch, or Next: Configure Instance Details. Let's try Review and Launch to see what occurs. Notice that our step process advanced to Step 7. Amazon gives you a soft warning regarding the state of the build and its potential risks. If you are okay with these risks, you can proceed and launch your server. It is important to note that the Amazon build process is user driven: it will allow you to build a server with these potential risks in your cloud, so it is recommended that you carefully consider each screen before proceeding. In this instance, select Previous, not Cancel, to return to Step 3. Selecting Cancel will stop the build process and return you to the AWS main services page. Until you actually launch your server, nothing is built or saved. There are information bubbles for each line in Step 3: Configure Instance Details. Review the content of each bubble, make any changes if needed, and then proceed to the next step. Select the storage size; then select Next: Tag Instance.
Enter values and continue, or select Learn More for further information. Select the Next: Configure Security Group button. Security is an extremely important part of setting up your virtual server, so it is recommended that you speak to your security administrator to determine the best option. For the source, it is recommended that you avoid the Anywhere option; this selection will put your build at risk. Select My IP or a custom IP as shown. If you are on a self-study plan, you can follow the Learn More link to determine the best option. Next: Review and Launch. The full details of this screen can be expanded, reviewed, or edited. If everything appears to be okay, proceed to Launch. One additional screen will appear for adding private and/or public keys to access your new server. Make the appropriate selection and proceed to Launch Instances to see the build process. You can access your new server from the EC2 Dashboard. This example gives you a window into how the AWS build process works; the other objects and services have a similar step-through process. Once you have launched your server, you should be able to access it and proceed with your development. Additional details for development are also available through the site. Amazon's Web Services platform is an all-in-one solution for your graduation to the cloud. Not only can you manage your technical environment, but it also has features that allow you to manage your budget. By setting up your virtual appliances and servers appropriately, you can maximize the value of the first 12 months of your free tier. Carefully monitoring activities through alerts and notifications will help you avoid any billing surprises. Going through the tutorials and visiting the online community will only help to increase your knowledge of AWS.
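For readers who later want to script what the console wizard does, the launch steps above can be sketched with boto3's EC2 API. Everything concrete below (the AMI ID, key pair name, and security group ID) is a placeholder assumption; note how the ingress rule opens SSH to a single /32 address rather than the risky Anywhere source (0.0.0.0/0) discussed above. The API calls themselves are commented out because they require credentials.

```python
def ssh_ingress_rule(cidr):
    """An ingress rule opening SSH (port 22) only to `cidr` --
    a single /32 address rather than 0.0.0.0/0 ("Anywhere")."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": cidr, "Description": "SSH from my IP only"}],
    }

def free_tier_instance(image_id, key_name, security_group_id):
    """RunInstances parameters mirroring the walkthrough: one t2.micro,
    the size the console pre-selects as free-tier eligible."""
    return {
        "ImageId": image_id,            # e.g. the SUSE Linux AMI chosen in Step 1
        "InstanceType": "t2.micro",
        "MinCount": 1,
        "MaxCount": 1,
        "KeyName": key_name,
        "SecurityGroupIds": [security_group_id],
        "TagSpecifications": [
            {"ResourceType": "instance",
             "Tags": [{"Key": "Name", "Value": "my-first-server"}]},
        ],
    }

# Placeholder identifiers -- substitute your own.
rule = ssh_ingress_rule("203.0.113.10/32")
params = free_tier_instance("ami-00000000", "my-key-pair", "sg-00000000")

# With credentials configured, the launch would look like:
# import boto3
# ec2 = boto3.client("ec2")
# ec2.authorize_security_group_ingress(GroupId="sg-00000000", IpPermissions=[rule])
# ec2.run_instances(**params)
```

Just as in the console, nothing is built or billed until the launch call actually runs.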
AWS is inviting everyone to test their services on this exciting platform, so I would definitely recommend taking advantage of it. Have fun! About the author Cheryl Adams is a senior cloud data and infrastructure architect in the healthcare data realm. She is also the co-author of Professional Hadoop by Wrox.
Running Your Applications with AWS
Cheryl Adams
17 Aug 2016
4 min read
If you’ve ever been told not to run with scissors, you should not have the same concern when running with AWS. It is neither dangerous nor unsafe when you know what you are doing, and where to look when you don’t. Amazon’s current service offering, AWS (Amazon Web Services), is a collection of services, applications, and tools that can be used to deploy your infrastructure and application environment to the cloud. Amazon gives you the option to start with a free tier and then move toward a pay-as-you-go model. We will highlight a few of the features you will see when you open your account with AWS. One of the first things you will notice is that Amazon offers a wealth of information about cloud computing right up front. Whether you are a novice, an amateur, or an expert in cloud computing, Amazon offers documented information before you create your account. This type of information is essential if you are exploring this tool for a project or doing some self-study on your own. If you are a pre-existing Amazon customer, you can use your same account to get started with AWS; if you want to keep your personal account separate from your development or business, it would be best to create a separate account. Amazon Web Services Landing Page The free tier is one of the most attractive features of AWS. As a new account holder, you are entitled to twelve months within the free tier, and in addition there are services that can continue after the free tier is over. This gives the user ample time to explore the offerings within this free-tier period. The caution is not to exceed the free service limitations, as doing so will incur charges. Setting up the free tier still requires a credit card, and fee-based services will be offered throughout the free tier, so it is important not to select a fee-based option unless you are ready to start paying for it. Actual paid use will vary based on what you have selected.
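One practical way to honor that caution is to check your month-to-date spend regularly. The sketch below only builds the request for the Cost Explorer `GetCostAndUsage` API (boto3's `ce` client); the dates are examples, and the call itself is commented out since it needs credentials and Cost Explorer enabled on the account.

```python
def month_to_date_query(start, end):
    """GetCostAndUsage parameters for total unblended cost, broken
    down by service -- a quick free-tier sanity check."""
    return {
        "TimePeriod": {"Start": start, "End": end},  # ISO dates, end exclusive
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "DIMENSION", "Key": "SERVICE"}],
    }

# Example dates -- substitute the current month.
query = month_to_date_query("2016-08-01", "2016-08-17")

# With credentials and Cost Explorer enabled:
# import boto3
# resp = boto3.client("ce").get_cost_and_usage(**query)
```

Grouping by service makes it easy to spot which offering, if any, has drifted outside the free tier.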
AWS Service and Offerings (shown on an open account) AWS overview of services available on the landing page Amazon’s service list is very robust. If you are already considering AWS, hopefully this means you are aware of what you need, or at least what you would like to use. If not, this would be a good time to press pause and look at some resource-based materials. Before the clock starts ticking on your free tier, I would recommend a slow walk through the introductory information on the site to ensure that you are selecting the right mix of services before creating your account. Amazon’s technical resources include a 10-minute tutorial that gives you a complete overview of the services. Topics like ‘AWS Training and Introduction’ and ‘Get Started with AWS’ include a list of 10-minute videos as well as a short list of how-to instructions for some of the more commonly used features. If you are a techie by trade or hobby, this may be something you want to dive into immediately. In a company, there is generally a predefined need or issue that the organization feels can be resolved by the cloud. If it is a team initiative, it would be good to review the resources mentioned in this article so that everyone is on the same page as to what this solution can do. It’s recommended before you start any trial, subscription, or new service that you have a set goal or expectation of why you are doing it. Simply stated, a cloud solution is not the perfect solution for everyone. There is a wealth of information on the AWS site, which is also useful if you are comparing competing cloud service vendors in the same space. You will be able to do a complete assessment of most services within the free tier, and you can map use-case scenarios to determine if AWS is the right fit for your project. AWS First Project is a great place to get started if you are new to AWS. If you are wondering how to get started, these technical resources will set you in the right direction.
By reviewing this information during your setup, or before you start, you will be able to make good use of your first few months and your introduction to AWS. About the author Cheryl Adams is a senior cloud data and infrastructure architect in the healthcare data realm. She is also the co-author of Professional Hadoop by Wrox.