How-To Tutorials - Programming


Understanding the Dependencies of a C++ Application

Packt
05 Apr 2017
9 min read
This article by Richard Grimes, author of the book Beginning C++ Programming, explains the dependencies of a C++ application. A C++ project produces an executable or library, which the linker builds from object files; the executable or library is therefore dependent upon those object files. Each object file is compiled from a C++ source file (and potentially one or more header files), so the object file is dependent upon those source and header files. Understanding dependencies matters for two reasons: it tells you the order in which the files in your project must be compiled, and it lets you speed up builds by recompiling only the files that have changed.

Libraries

When you include a header file in your source file, the code in that header becomes accessible to your code. A header may contain whole function or class definitions (these will be covered in later chapters), but that can lead to a problem: multiple definitions of the same function or class. Instead, you can declare a function prototype or class declaration, which tells the compiler how calling code will call the function without actually defining it. The code clearly has to be defined somewhere else, either a source file or a library, but the compiler is satisfied because it sees only one definition.

A library is code that has already been written, fully debugged, and tested, so its users should not need access to the source code. The C++ Standard Library is mostly supplied through header files, which helps when you debug your code, but you must resist any temptation to edit those files. Other libraries are provided in compiled form, and there are essentially two types: static libraries and dynamic link libraries. If you use a static library, the linker copies the compiled code you use from the library into your executable. If you use a dynamic link (or shared) library, the linker adds information used at runtime (when the executable is loaded, or possibly delayed until the function is first called) to load the shared library into memory and access the function. Windows uses the extension .lib for static libraries and .dll for dynamic link libraries; GNU gcc uses .a for static libraries and .so for shared libraries.

Whether you use a static or a dynamic link library, the compiler still needs to check that you are calling each function correctly, with the right number of parameters and the right types. This is the purpose of a function prototype: it gives the compiler the information it needs about calling the function without providing the actual body of the function, the function definition. In general, the C++ Standard Library is brought into your code through the standard header files. The C Runtime Library (which provides some of the code used by the C++ Standard Library) is statically linked, but if the compiler provides a dynamically linked version there will be a compiler option to use it.

Pre-compiled Headers

When you include a file in your source file, the preprocessor inserts the contents of that file (after taking into account any conditional compilation directives) and, recursively, the contents of any files included by that file. As illustrated earlier, this can result in thousands of lines of code.
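As a minimal sketch of this idea (the file names and the add() function are purely illustrative, not taken from the book), the header declares a prototype, exactly one source file provides the definition, and any other source file that includes the header can call the function:

// maths.h - declaration only; safe to include from many source files
#ifndef MATHS_H
#define MATHS_H
int add(int a, int b);   // function prototype: tells the compiler how add() is called
#endif

// maths.cpp - the one and only definition of add()
#include "maths.h"
int add(int a, int b) { return a + b; }

// main.cpp - calls add() through the prototype; the linker resolves the call
#include <cstdio>
#include "maths.h"
int main() { std::printf("%d\n", add(2, 3)); return 0; }

Because the header contains only a declaration, including it from both maths.cpp and main.cpp does not produce multiple definitions; the linker finds the single definition in maths.cpp (or, equally, it could live in a library).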
As you develop your code you will compile the project frequently so that you can test it. Every time you compile, the code defined in the header files is compiled too, even though the code in library headers will not have changed. On a large project this can make compilation take a long time. To get around this problem, compilers offer an option to pre-compile headers that will not change. Creating and using precompiled headers is compiler specific. For example, with gcc you compile a header as if it were a C++ source file (using the -x option) and the compiler creates a file with the extension .gch. When gcc later compiles source files that use the header, it searches for the .gch file; if it finds the precompiled header it uses that, otherwise it uses the header file itself.

In Visual C++ the process is a little more involved, because you have to tell the compiler explicitly to look for a precompiled header when it compiles a source file. The convention in Visual C++ projects is to have a source file called stdafx.cpp that contains a single line including the file stdafx.h. You put all your stable header includes in stdafx.h. Next, you create the precompiled header by compiling stdafx.cpp with the /Yc compiler option, which specifies that stdafx.h contains the stable headers to compile. This creates a .pch file (typically Visual C++ names it after your project) containing the code compiled up to the point of the inclusion of the stdafx.h header. Your other source files must include stdafx.h as their first header, but they may also include other files. When you compile those source files you use the /Yu switch to name the stable header (stdafx.h), and the compiler uses the precompiled .pch file instead of the header. When you examine large projects you will often find that precompiled headers are used, and as you can see, they alter the file structure of the project. The example later in this chapter shows how to create and use precompiled headers.

Project Structure

It is important to organize your code into modules so that you can maintain it effectively. Even if you are writing C-like procedural code (that is, code that calls functions in a linear way) you will benefit from organizing it into modules. For example, you may have some functions that manipulate strings and others that access files, so you might put the definitions of the string functions in one source file, string.cpp, and the definitions of the file functions in another, file.cpp. So that other modules in the project can use these functions, you must declare their prototypes in a header file and include that header in each module that uses them.

There is no absolute rule in the language about the relationship between header files and the source files that contain the function definitions. You may have a header called string.h for the functions in string.cpp and a header called file.h for the functions in file.cpp, or you may have a single file called utilities.h that contains the declarations for the functions in both. The only rule you have to abide by is that, at compile time, the compiler must have access to a declaration of each function used in the current source file, either through a header file or through the function definition itself.
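A minimal sketch of the Visual C++ convention just described might look like the following; the header contents and the exact /Yc and /Yu invocations are illustrative assumptions rather than settings taken from a real project.

// stdafx.h - stable headers that rarely change
#pragma once
#include <vector>
#include <string>
#include <iostream>

// stdafx.cpp - compiled once with /Ycstdafx.h to produce the project's .pch file
#include "stdafx.h"

// widget.cpp - compiled with /Yustdafx.h; stdafx.h must be the first include
#include "stdafx.h"
#include "widget.h"   // hypothetical project-specific header
// ... definitions of the widget functions go here

The point of the convention is that the expensive headers in stdafx.h are compiled once into the .pch file, and every other source file that starts with #include "stdafx.h" reuses that result instead of re-parsing the headers.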
The compiler does not look forward in a source file, so if a function calls another function defined later in the same file, the called function must either be defined before the calling function or have a prototype declaration earlier in the file. This leads to the typical convention of giving each source file an associated header that contains the prototypes of the functions in that source file, and having the source file include its own header. This convention becomes even more important when you write classes.

Managing Dependencies

When a project is built with a build tool, the tool checks whether the output of each build step exists and, if not, performs the appropriate actions to build it. In the usual terminology, the output of a build step is called a target and the inputs of that build step (for example, source files) are the dependencies of the target. A dependency may itself be the target of another build action and have dependencies of its own.

Consider, for example, a project with the following dependencies. There are three source files (main.cpp, file1.cpp, and file2.cpp), each of which includes the same header, utils.h, which is precompiled (hence the fourth source file, utils.cpp, which contains nothing but the inclusion of utils.h). All of the source files depend on utils.pch, which in turn depends upon utils.h. The source file main.cpp contains the main function and calls functions in the other two source files (file1.cpp and file2.cpp), accessing those functions through the associated headers file1.h and file2.h.

On the first compilation the build tool sees that the executable depends on the four object files, so it looks for the rule to build each one. For the three ordinary C++ source files this means compiling the .cpp files, but since utils.obj is used to support the precompiled header, its build rule is different from the others. Once the build tool has made the object files, it links them together along with any library code (not shown here).

Subsequently, if you change file2.cpp and build the project, the build tool sees that only file2.cpp has changed, and since only file2.obj depends on file2.cpp, all the tool needs to do is compile file2.cpp and then link the new file2.obj with the existing object files to create the executable. If you change the header file file2.h, the build tool sees that two files depend on it, file2.cpp and main.cpp, so it compiles those two source files and links the new file2.obj and main.obj with the existing object files to form the executable. If, however, the precompiled header's source, utils.h, changes, then every source file has to be recompiled.

Summary

For a small project, dependencies are easy to manage; as you have seen, for a single-source-file project you do not even have to invoke the linker yourself, because the compiler does that automatically. As a C++ project grows, managing dependencies becomes more complex, and this is where development environments such as Visual C++ become vital.
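Expressed as explicit build rules, the dependency graph just described might look something like the following make fragment. It is only a sketch assuming a gcc-style toolchain (so .o object files and g++ commands, and ignoring the precompiled-header step); it is not the exact rule set an IDE such as Visual C++ would generate, and recipe lines must be indented with a tab.

# Link step: the executable depends on the four object files
app: main.o file1.o file2.o utils.o
	g++ -o app main.o file1.o file2.o utils.o

# Each object file depends on its source file and on the headers it includes
main.o: main.cpp utils.h file1.h file2.h
	g++ -c main.cpp
file1.o: file1.cpp utils.h file1.h
	g++ -c file1.cpp
file2.o: file2.cpp utils.h file2.h
	g++ -c file2.cpp
utils.o: utils.cpp utils.h
	g++ -c utils.cpp

With rules like these, touching file2.cpp causes only file2.o to be rebuilt before relinking, touching file2.h rebuilds file2.o and main.o, and touching utils.h rebuilds everything, exactly as described above.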


C compiler, Device Drivers and Useful Developing Techniques

Packt
17 Mar 2017
22 min read
In this article by Rodolfo Giometti, author of the book GNU/Linux Rapid Embedded Programming, we are going to focus our attention on the C compiler (and its counterpart, the cross-compiler), on when we have to (or can choose to) use native or cross-compilation, and on the differences between them. Then we'll look at some kernel topics used later in the article (configuration, recompilation, and the device tree) before looking a little deeper at device drivers, how they are compiled, and how they can be packaged into a kernel module (that is, kernel code that can be loaded at runtime). We'll present different kinds of computer peripherals and, for each of them, try to explain how the corresponding device driver works, from the compilation stage through configuration to final usage. As an example, we'll implement a very simple driver in order to give the reader some useful points of view and some simple advice about kernel programming (which is not otherwise covered by this article!). We're also going to present the root filesystem's internals, and we'll spend a few words on a particular root filesystem that can be very useful during the early development stages: the Network File System. As a final step, we'll propose the use of an emulator to execute a complete target machine's Debian distribution on a host PC. This article is still part of the introductory material; experienced developers who already know these topics may skip it, but the suggestion remains to read it anyway, to discover which development tools will be used later and, perhaps, some new techniques for managing programs.

The C compiler

The C compiler is a program that translates the C language into a binary format that the CPU can understand and execute. This is the most basic (and the most powerful) way to develop programs on a GNU/Linux system. Despite this, most developers prefer other, higher-level languages to C, because C has no garbage collection, no object-oriented programming, and other limitations, and they give up part of the execution speed that a C program offers. But if we have to recompile the kernel (the Linux kernel is written in C, plus a little assembler), develop a device driver, or write high-performance applications, then the C language is a must-have.

We can have a compiler and a cross-compiler, and so far we've already used the cross-compiler several times to recompile the kernel and the bootloaders; however, we can decide to use a native compiler too. Native compilation may be easier but, in most cases, it is very time consuming, which is why it's really important to know the pros and cons. Programs for embedded systems are traditionally written and compiled using a cross-compiler for that architecture on a host PC. That is, we use a compiler that can generate code for a foreign machine architecture, meaning a CPU instruction set different from the compiler host's one.

Native and foreign machine architecture

For example, the developer kits shown in this article are ARM machines while (most probably) our host machine is an x86 (that is, a normal PC), so if we try to compile a C program on the host machine the generated code cannot be used on an ARM machine, and vice versa. Let's verify it!
Here is the classic Hello World program:

#include <stdio.h>

int main()
{
    printf("Hello World!\n");

    return 0;
}

Now we compile it on the host machine using the following command:

$ make CFLAGS="-Wall -O2" helloworld
cc -Wall -O2 helloworld.c -o helloworld

The careful reader should notice that we've used the make command instead of the usual cc. This is a perfectly equivalent way to invoke the compiler because, even without a Makefile, make already knows how to compile a C program. We can verify that this file is for the x86 (that is, the PC) platform by using the file command:

$ file helloworld
helloworld: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=0f0db5e65e1cd09957ad06a7c1b7771d949dfc84, not stripped

Note that the output may vary according to the reader's host machine platform. Now we can copy the program onto one of the developer kits (for instance, the BeagleBone Black) and try to execute it:

root@bbb:~# ./helloworld
-bash: ./helloworld: cannot execute binary file

As expected, the system refuses to execute code generated for a different architecture! On the other hand, if we use a cross-compiler for this specific CPU architecture, the program will run like a charm. Let's verify this by recompiling the code, this time specifying that we wish to use the cross-compiler. So delete the previously generated x86 executable (just in case) with the rm helloworld command and then recompile it using the cross-compiler:

$ make CC=arm-linux-gnueabihf-gcc CFLAGS="-Wall -O2" helloworld
arm-linux-gnueabihf-gcc -Wall -O2 helloworld.c -o helloworld

Note that the cross-compiler's filename has a special meaning: the form is <architecture>-<platform>-<binary-format>-<tool-name>. So the filename arm-linux-gnueabihf-gcc means: ARM architecture, Linux platform, gnueabihf (GNU EABI Hard-Float) binary format, and gcc (GNU C Compiler) tool. Now we use the file command again to see whether the code is indeed generated for the ARM architecture:

$ file helloworld
helloworld: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=31251570b8a17803b0e0db01fb394a6394de8d2d, not stripped

Now if we transfer the file to the BeagleBone Black as before and try to execute it, we get:

root@bbb:~# ./helloworld
Hello World!

So the cross-compiler ensures that the generated code is compatible with the architecture we are executing it on. In reality, in order to have a perfectly functional binary image, we have to make sure that the library versions, the header files (including the headers related to the kernel), and the cross-compiler options match the target exactly, or are at least compatible. For instance, we cannot run cross-compiled code linked against glibc on a system that uses, say, musl libc (or it may run in an unpredictable manner). In this case the libraries and compilers are perfectly compatible, but in general the embedded developer must know exactly what he or she is doing. A common trick to avoid compatibility problems is to use static compilation but, in that case, we get huge binary files.

Now the question is: when should we use the native compiler and when the cross-compiler? We should compile natively, on the embedded system itself, because:

- We can (see below why).
- There would be no compatibility issues, as all the target libraries will be available.
- In cross-compilation it becomes hell when we need all the libraries (if the project uses any) in ARM format on the host PC: we not only have to cross-compile the program but also its dependencies. And if the same versions of the dependencies are not installed on the embedded system's rootfs, good luck with the troubleshooting!
- It's easy and quick.

We should cross-compile because:

- We are working on a large codebase and we don't want to waste too much time compiling the program on the target, which may take from several minutes to several hours (or may even prove impossible). This reason alone can be strong enough to outweigh the reasons in favor of compiling on the embedded system itself.
- PCs nowadays have multiple cores, so the compiler can process more files simultaneously.
- We are building a full Linux system from scratch.

In any case, we will show an example of both native compilation and cross-compilation of a software package below, so the reader can appreciate the differences between them.

Compiling a C program

As a first step, let's see how we can compile a C program. To keep it simple we'll start by compiling a user-space program; in the following sections we'll compile some kernel-space code. Knowing how to compile a C program can be useful, because it may happen that a specific tool (most probably written in C) is missing from our distribution, or is present but outdated. In both cases we need to recompile it! To show the differences between a native compilation and a cross-compilation we will demonstrate both methods. A word of caution for the reader, though: this guide is not exhaustive at all! The cross-compilation steps may vary according to the software package we are going to cross-compile.

The package we are going to use is the PicoC interpreter. Every Real Programmer (TM) knows the C compiler, which is normally used to translate a C program into machine language, but (maybe) not all of them know that C interpreters exist too! Actually there are several C interpreters, but we focus our attention on PicoC because of how simple it is to cross-compile. As we already know, an interpreter is a program that converts source code into executable code on the fly and does not need to parse the complete file and generate code all at once. This is quite useful when we need a flexible way to write brief programs to solve easy tasks: to fix bugs in the code and/or change the program's behavior we simply change the source and then re-execute it, without any compilation at all. We just need an editor to change our code! For instance, if we wish to read some bytes from a file we can do it with a standard C program, but for such an easy task we can also write a script for an interpreter. Which interpreter to choose is up to the developer and, since we are C programmers, the choice is quite obvious. That's why we have decided to use PicoC.

Note that PicoC is quite far from being able to interpret every C program! It implements only a fraction of the features of a standard C compiler; however, it can be used for several common and easy tasks. Please consider PicoC an educational tool and avoid using it in a production environment!

The native compilation

As a first step we need to download the PicoC source code from its repository at http://github.com/zsaleeba/picoc.git onto our embedded system.
This time we decided to use the BeagleBone Black, and the command is as follows:

root@bbb:~# git clone http://github.com/zsaleeba/picoc.git

When it finishes we can start compiling the PicoC source code with:

root@bbb:~# cd picoc/
root@bbb:~/picoc# make

Note that if we get the error below during the compilation, we can safely ignore it:

/bin/sh: 1: svnversion: not found

However, during the compilation we do get:

platform/platform_unix.c:5:31: fatal error: readline/readline.h: No such file or directory
 #include <readline/readline.h>
                               ^
compilation terminated.
<builtin>: recipe for target 'platform/platform_unix.o' failed
make: *** [platform/platform_unix.o] Error 1

Bad news, we have got an error! This is because the readline library is missing; hence we need to install it to keep going. To discover which package holds the readline library, we can use the following command:

root@bbb:~# apt-cache search readline

The command output is quite long, but if we look at it carefully we can see the following lines:

libreadline5 - GNU readline and history libraries, run-time libraries
libreadline5-dbg - GNU readline and history libraries, debugging libraries
libreadline-dev - GNU readline and history libraries, development files
libreadline6 - GNU readline and history libraries, run-time libraries
libreadline6-dbg - GNU readline and history libraries, debugging libraries
libreadline6-dev - GNU readline and history libraries, development files

This is exactly what we need to know! The required package is named libreadline-dev. In the Debian distribution all library packages are prefixed with the lib string, while the -dev suffix marks the development version of a library package. Note also that we chose the package libreadline-dev intentionally, leaving the system to decide whether to install version 5 or 6 of the library. The development version of a library package holds all the files that allow a developer to compile his or her software against the library, and/or some documentation about the library functions. For instance, in the development version of the readline library package (that is, in the package libreadline6-dev) we can find the header and object files needed by the compiler. We can see these files using the following command:

root@bbb:~# dpkg -L libreadline6-dev | egrep '.(so|h)'
/usr/include/readline/rltypedefs.h
/usr/include/readline/readline.h
/usr/include/readline/history.h
/usr/include/readline/keymaps.h
/usr/include/readline/rlconf.h
/usr/include/readline/tilde.h
/usr/include/readline/rlstdc.h
/usr/include/readline/chardefs.h
/usr/lib/arm-linux-gnueabihf/libreadline.so
/usr/lib/arm-linux-gnueabihf/libhistory.so

So let's install it:

root@bbb:~# aptitude install libreadline-dev

When that finishes we can relaunch the make command to compile our new C interpreter once and for all:

root@bbb:~/picoc# make
gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -c -o clibrary.o clibrary.c
...
gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -o picoc picoc.o table.o lex.o parse.o expression.o heap.o type.o variable.o clibrary.o platform.o include.o debug.o platform/platform_unix.o platform/library_unix.o cstdlib/stdio.o cstdlib/math.o cstdlib/string.o cstdlib/stdlib.o cstdlib/time.o cstdlib/errno.o cstdlib/ctype.o cstdlib/stdbool.o cstdlib/unistd.o -lm -lreadline

Now the tool has been compiled successfully, as expected!
To test it we can use the standard Hello World program again, with one small caveat: PicoC can be picky about the exact form of the main() function definition and returns an error for some typical definitions. Here is the code:

#include <stdio.h>

int main()
{
    printf("Hello World\n");

    return 0;
}

Now we can execute it directly (that is, without compiling it) using our new C interpreter:

root@bbb:~/picoc# ./picoc helloworld.c
Hello World

An interesting feature of PicoC is that it can execute a C source file like a script; that is, we don't need to specify a main() function as C requires, and the instructions are executed one by one from the beginning of the file, as in a normal scripting language. To show this, we can use the following script, which implements the Hello World program as a C-like script (note that no main() function is defined!):

printf("Hello World!\n");
return 0;

If we put the above code into the file helloworld.picoc we can execute it with:

root@bbb:~/picoc# ./picoc -s helloworld.picoc
Hello World!

Note that this time we added the -s option to the command line to tell the PicoC interpreter that we wish to use its scripting behavior.

The cross-compilation

Now let's try to cross-compile the PicoC interpreter on the host system. Before continuing, however, we have to point out that this is just an example of a possible cross-compilation, useful to show a quick and dirty way to recompile a program when native compilation is not possible. As already noted above, cross-compilation works perfectly for the bootloader and the kernel, while for user-space applications we must ensure that all the libraries (and header files) used by the cross-compiler are perfectly compatible with the ones present on the target machine; otherwise the program may not work at all! In our case everything is perfectly compatible, so we can go ahead. As before, we need to download PicoC's source code using the same git command as above. Then we enter the newly created picoc directory and run the following command:

$ cd picoc/
$ make CC=arm-linux-gnueabihf-gcc
arm-linux-gnueabihf-gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -c -o picoc.o picoc.c
...
platform/platform_unix.c:5:31: fatal error: readline/readline.h: No such file or directory
compilation terminated.
<builtin>: recipe for target 'platform/platform_unix.o' failed
make: *** [platform/platform_unix.o] Error 1

We specify the CC=arm-linux-gnueabihf-gcc command-line option to force the cross-compilation. However, as already stated, the cross-compilation commands may vary according to the compilation method used by each software package. As before, the build stops because the readline library is missing; this time, however, we cannot simply install it as before, since we need the ARM version (specifically the armhf version) of this library and our host system is a normal PC! A way to install a foreign package into a Debian/Ubuntu distribution does exist, but it is not a trivial task nor within the scope of this article; a curious reader may take a look at Debian/Ubuntu Multiarch at https://help.ubuntu.com/community/MultiArch. So we have to resolve this issue, and we have two possibilities:

- We can try to find a way to install the missing package, or
- We can try to find a way to continue the compilation without it.
The former method is quite complex, since the readline library has in turn other dependencies and we might spend a lot of time trying to compile them all, so let's try the latter option. Knowing that the readline library is only used to implement powerful interactive features (such as recalling a previous command line to re-edit it, and so on), and since we are not interested in the interactive usage of this interpreter, we can hope to avoid using it. Looking carefully at the code, we see that a USE_READLINE define exists, and changing the code as shown below should resolve the issue, allowing us to compile the tool without readline support:

$ git diff
diff --git a/Makefile b/Makefile
index 6e01a17..c24d09d 100644
--- a/Makefile
+++ b/Makefile
@@ -1,6 +1,6 @@
 CC=gcc
 CFLAGS=-Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`"
-LIBS=-lm -lreadline
+LIBS=-lm

 TARGET = picoc
 SRCS = picoc.c table.c lex.c parse.c expression.c heap.c type.c
diff --git a/platform.h b/platform.h
index 2d7c8eb..c0b3a9a 100644
--- a/platform.h
+++ b/platform.h
@@ -49,7 +49,6 @@
 # ifndef NO_FP
 # include <math.h>
 # define PICOC_MATH_LIBRARY
-# define USE_READLINE
 # undef BIG_ENDIAN
 # if defined(__powerpc__) || defined(__hppa__) || defined(__sparc__)
 # define BIG_ENDIAN

The output above is in the unified diff format, so it means that in the file Makefile the option -lreadline must be removed from the LIBS variable, and that in the file platform.h the USE_READLINE define must be removed (or commented out). After the changes are in place we can try to recompile the package with the same command as before:

$ make CC=arm-linux-gnueabihf-gcc
arm-linux-gnueabihf-gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -c -o table.o table.c
...
arm-linux-gnueabihf-gcc -Wall -pedantic -g -DUNIX_HOST -DVER="`svnversion -n`" -o picoc picoc.o table.o lex.o parse.o expression.o heap.o type.o variable.o clibrary.o platform.o include.o debug.o platform/platform_unix.o platform/library_unix.o cstdlib/stdio.o cstdlib/math.o cstdlib/string.o cstdlib/stdlib.o cstdlib/time.o cstdlib/errno.o cstdlib/ctype.o cstdlib/stdbool.o cstdlib/unistd.o -lm

Great! We did it! Now, just to verify that everything is working correctly, we can simply copy the picoc file onto our BeagleBone Black and test it as before.

Compiling a kernel module

As a special example of cross-compilation we'll take a look at a very simple piece of code that implements a dummy module for the Linux kernel (the code does nothing but print some messages on the console), and we'll try to cross-compile it. Consider the following kernel C code for the dummy module:

#include <linux/module.h>
#include <linux/init.h>

/* This is the function executed during the module loading */
static int dummy_module_init(void)
{
    printk("dummy_module loaded!\n");
    return 0;
}

/* This is the function executed during the module unloading */
static void dummy_module_exit(void)
{
    printk("dummy_module unloaded!\n");
    return;
}

module_init(dummy_module_init);
module_exit(dummy_module_exit);

MODULE_AUTHOR("Rodolfo Giometti <[email protected]>");
MODULE_LICENSE("GPL");
MODULE_VERSION("1.0.0");

Apart from some defines related to the kernel tree, the file holds two main functions, dummy_module_init() and dummy_module_exit(), plus some special declarations, in particular module_init() and module_exit(), which register those two functions as the entry and exit points of the module (that is, the functions called at module loading and unloading).
Then consider the following Makefile:

ifndef KERNEL_DIR
$(error KERNEL_DIR must be set in the command line)
endif
PWD := $(shell pwd)
CROSS_COMPILE = arm-linux-gnueabihf-

# This specifies the kernel module to be compiled
obj-m += module.o

# The default action
all: modules

# The main tasks
modules clean:
	make -C $(KERNEL_DIR) ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- SUBDIRS=$(PWD) $@

OK, now to cross-compile the dummy module on the host PC we can use the following command:

$ make KERNEL_DIR=~/A5D3/armv7_devel/KERNEL/
make -C /home/giometti/A5D3/armv7_devel/KERNEL/ SUBDIRS=/home/giometti/github/chapter_03/module modules
make[1]: Entering directory '/home/giometti/A5D3/armv7_devel/KERNEL'
  CC [M] /home/giometti/github/chapter_03/module/dummy.o
  Building modules, stage 2.
  MODPOST 1 modules
  CC /home/giometti/github/chapter_03/module/dummy.mod.o
  LD [M] /home/giometti/github/chapter_03/module/dummy.ko
make[1]: Leaving directory '/home/giometti/A5D3/armv7_devel/KERNEL'

It's important to note that when a device driver is released as a separate package with a Makefile compatible with the kernel's build system, we can compile it natively too! However, even in that case, we need to install a kernel source tree on the target machine, and the sources must be configured in the same manner as the running kernel, or the resulting driver will not work at all: a kernel module will only load and run with the kernel it was compiled against. The result of the cross-compilation is stored in the file dummy.ko; in fact we have:

$ file dummy.ko
dummy.ko: ELF 32-bit LSB relocatable, ARM, EABI5 version 1 (SYSV), BuildID[sha1]=ecfcbb04aae1a5dbc66318479ab9a33fcc2b5dc4, not stripped

The kernel module has been compiled for the SAMA5D3 Xplained but, of course, it can be cross-compiled for the other developer kits in a similar manner. So let's copy our new module to the SAMA5D3 Xplained using the scp command over the USB Ethernet connection:

$ scp dummy.ko [email protected]:
[email protected]'s password:
dummy.ko 100% 3228 3.2KB/s 00:00

Now, on the SAMA5D3 Xplained, we can use the modinfo command to get some information about the kernel module:

root@a5d3:~# modinfo dummy.ko
filename: /root/dummy.ko
version: 1.0.0
license: GPL
author: Rodolfo Giometti <[email protected]>
srcversion: 1B0D8DE7CF5182FAF437083
depends:
vermagic: 4.4.6-sama5-armv7-r5 mod_unload modversions ARMv7 thumb2 p2v8

Then, to load it into and unload it from the kernel, we can use the insmod and rmmod commands as follows:

root@a5d3:~# insmod dummy.ko
[ 3151.090000] dummy_module loaded!
root@a5d3:~# rmmod dummy.ko
[ 3153.780000] dummy_module unloaded!

As expected, the dummy module's messages are displayed on the serial console. Note that if we are using an SSH connection we have to use the dmesg or tail -f /var/log/kern.log commands to see the kernel's messages. Note also that the commands modinfo, insmod, and rmmod are explained in detail in a later section.

The Kernel and DTS files

The main goal of this article is to give several suggestions for rapid programming methods to be used on an embedded GNU/Linux system. However, the main goal of every embedded developer is to write programs that manage peripherals, and that monitor or control devices and perform similar tasks to interact with the real world, so we mainly need to know the techniques used to get access to a peripheral's data and settings. That's why we first need to know how to recompile the kernel and how to configure it.
Summary

In this article we took a long tour through three of the most important topics of GNU/Linux embedded programming: the C compiler (and the cross-compiler), the kernel (with the device drivers and the device tree), and the root filesystem. We also presented NFS as a way to have a remote root filesystem over the network, and we introduced the use of an emulator to execute foreign code on the host PC.


About Java Virtual Machine – JVM Languages

Packt
15 Mar 2017
13 min read
In this article by Vincent van der Leun, author of the book Introduction to JVM Languages, you will learn the history of the JVM and five important languages that run on it. While many other programming languages have come in and gone out of the spotlight, Java has always managed to return to impressive spots, either near to, and lately even on, the top of the list of the most used languages in the world. It didn't take language designers long to realize that they too could run their languages on the JVM—the virtual machine that powers Java applications—and take advantage of its performance, features, and extensive class library. In this article, we will take a look at common JVM use cases and at various JVM languages.

The JVM was designed from the ground up to run anywhere. Its initial goal was to run on set-top boxes, but when Sun Microsystems found out that the market was not ready in the mid '90s, they decided to bring the platform to desktop computers as well. To make all those use cases possible, Sun invented its own binary executable format and called it Java bytecode. To run programs compiled to Java bytecode, a Java Virtual Machine implementation must be installed on the system. The most popular JVM implementations nowadays are Oracle's free but partially proprietary implementation and the fully open source OpenJDK project (Oracle's Java runtime is largely based on OpenJDK).

This article covers the following subjects:

- Popular JVM use cases
- The Java language
- The Scala language
- The Clojure language
- The Kotlin language
- The Groovy language

The Java platform as published by Google on Android phones and tablets is not covered in this article. One of the reasons is that the Java version used on Android is still based on the Java 6 SE platform from 2006. However, some of the languages covered here can be used with Android; Kotlin, in particular, is a very popular choice for modern Android development.

Popular use cases

Since the JVM platform was designed with a lot of different use cases in mind, it is no surprise that the JVM can be a very viable choice for very different scenarios. We will briefly look at the following use cases:

- Web applications
- Big data
- Internet of Things (IoT)

Web applications

With its focus on performance, the JVM is a very popular choice for web applications. When built correctly, applications can scale really well across many different servers if needed. The JVM is a well-understood platform, meaning that it is predictable, and many tools are available to debug and profile problematic applications. Because of its open nature, monitoring JVM internals is also very well supported. For web applications that have to serve thousands of users concurrently, this is an important advantage. The JVM already plays a huge role in the cloud. Popular examples of companies that use the JVM for core parts of their cloud-based services include Twitter (famously using Scala), Amazon, Spotify, and Netflix—but the actual list is much longer.

Big data

Big data is a hot topic. When data is considered too big for traditional databases to analyze, one can set up multiple clusters of servers to process it. Analyzing data in this context can, for example, mean searching for something specific, looking for patterns, and calculating statistics.
This data could have been collected from web servers (that, for example, logged visitors' clicks), from external sensors at a manufacturing plant, from legacy servers that have been producing log files over many years, and so forth. Data sizes vary wildly as well, but often take up multiple terabytes in total. Two popular technologies in the big data arena are:

- Apache Hadoop (provides storage of data and takes care of data distribution to other servers)
- Apache Spark (uses Hadoop to stream data and makes it possible to analyze the incoming data)

Both Hadoop and Spark are for the most part written in Java. While both offer interfaces for many programming languages and platforms, it will not be a surprise that the JVM is among them. The functional programming paradigm focuses on creating code that can run safely on multiple CPU cores, so languages that are fully specialized in this style, such as Scala or Clojure, are very appropriate candidates to be used with either Spark or Hadoop.

Internet of Things - IoT

Portable devices that feature internet connectivity are very common these days. Since Java was created with the idea of running on embedded devices from the beginning, the JVM is, yet again, at an advantage here. For memory-constrained systems, Oracle offers the Java Micro Edition Embedded platform. It is meant for commercial IoT devices that do not require a standard graphical or console-based user interface. For devices that can spare more memory, the Java SE Embedded edition is available. Java SE Embedded is very close to the Java Standard Edition discussed in this article. When running a full Linux environment, it can be used to provide desktop GUIs for full user interaction. Java SE Embedded is installed by default on Raspbian, the standard Linux distribution of the popular Raspberry Pi low-cost, credit-card-sized computers. Both Java ME Embedded and Java SE Embedded can access the General Purpose Input/Output (GPIO) pins on the Raspberry Pi, which means that sensors and other peripherals connected to these ports can be accessed by Java code.

Java

Java is the language that started it all. Source code written in Java is generally easy to read and comprehend. It started out as a relatively simple language to learn. As more and more features were added over the years, its complexity increased somewhat. The good news is that beginners don't have to worry about the more advanced topics too much until they are ready to learn them. Programmers who choose a JVM language other than Java can still benefit from learning the Java syntax, especially once they start using libraries or frameworks that provide Javadocs as API documentation. Javadocs is a tool that generates HTML documentation based on special comments in the source code; many libraries and frameworks provide the HTML documents generated by Javadocs as part of their documentation. While Java is not considered a pure Object-Oriented Programming (OOP) language because of its support for primitive types, it is still a serious OOP language. Java is known for its verbosity and has strict requirements for its syntax.
A typical Java class looks like this:

package com.example;

import java.util.Date;

public class JavaDemo {
    private Date dueDate = new Date();

    public void setDueDate(Date dueDate) {
        this.dueDate = dueDate;
    }

    public Date getDueDate() {
        return this.dueDate;
    }
}

A real-world example would implement some other important methods, omitted here for readability. Note that when declaring the dueDate variable, the Date class name has to be specified twice: first when declaring the variable's type and a second time when instantiating an object of the class.

Scala

Scala is a rather unique language. It has strong support for functional programming while also being a pure object-oriented programming language at the same time. While a lot more can be said about functional programming, in a nutshell it is about writing code in such a way that existing variables are not modified while the program is running. Values are specified as function parameters and output is generated based on those parameters. Functions are required to return the same output when given the same parameters on each call. A class is not supposed to hold internal state that can change over time; when data changes, a new copy of the object must be returned, and all existing copies of the data must be left alone. Following the rules of functional programming requires a specific mindset from programmers, but the resulting code is safe to execute on multiple threads on different CPU cores simultaneously.

The Scala installation offers two ways of running Scala code. It provides an interactive shell where code can be entered directly and run right away; this program can also be used to run Scala source code directly without manually compiling it first. Also offered is scalac, a traditional compiler that compiles Scala source code to Java bytecode in files with the .class extension. Scala comes with its own Scala Standard Library. It complements the Java Class Library that is bundled with the Java Runtime Environment (JRE) and installed as part of the Java Development Kit (JDK). It contains classes that are optimized to work with Scala's language features; among many other things, it implements its own collection classes, while still offering compatibility with Java's collections.

Scala's equivalent of the code shown in the Java section would be something like the following:

package com.example

import java.util.Date

class ScalaDemo(var dueDate: Date) {
}

Scala will generate the getter and setter methods automatically. Note that this class does not follow the rules of functional programming, as the dueDate variable is mutable (it can be changed at any time). It would be better to define the class like this:

class ScalaDemo(val dueDate: Date) {
}

By defining dueDate with the val keyword instead of the var keyword, the variable becomes immutable. Now Scala generates only a getter method, and dueDate can only be set when creating an instance of the ScalaDemo class; it will never change during the lifetime of the object.

Clojure

Clojure is a language that is rather different from the other languages covered in this article. It is largely inspired by the Lisp programming language, which originally dates from the late 1950s. Lisp stayed relevant by keeping up to date with technology and the times; today, Common Lisp and Scheme are arguably the two most popular Lisp dialects in use, and Clojure is influenced by both. Unlike Java and Scala, Clojure is a dynamic language.
Variables do not have fixed types, and no type checking is performed by the compiler at compile time. When a variable that is not compatible with the code in a function is passed to that function, an exception is thrown at run time. Also noteworthy is that Clojure is not an object-oriented language, unlike all the other languages in this article. Clojure still offers interoperability with Java and the JVM, as it can create instances of objects and can also generate class files that other languages on the JVM can use to run bytecode compiled by Clojure.

Instead of demonstrating how to generate a class in Clojure, let's write a function in Clojure that consumes a JavaDemo instance and prints its dueDate:

(defn consume-javademo-instance [d]
  (println (.getDueDate d)))

This looks rather different from the other source code in this article. Code in Clojure is written by adding code to a list. Each opening parenthesis and the corresponding closing parenthesis in the preceding code starts and ends a new list. The first entry in the list is the function that will be called, while the other entries of that list are its parameters. By nesting the lists, complex evaluations can be written. The defn macro defines a new function called consume-javademo-instance. It takes one parameter, called d, which should be the JavaDemo instance. The list that follows is the body of the function, which prints the value returned by the getDueDate method of the JavaDemo instance passed in the variable d.

Kotlin

Like Java and Scala, Kotlin is a statically typed language. Kotlin is mainly focused on object-oriented programming but supports procedural programming as well, so the use of classes and objects is not required. Kotlin's syntax is not compatible with Java; the code in Kotlin is much less verbose. It still offers very strong compatibility with Java and the JVM platform. The Kotlin equivalent of the Java code would be as follows:

import java.util.Date

data class KotlinDemo(var dueDate: Date)

One of the more noticeable features of Kotlin is its type system, especially its handling of null references. In many programming languages, a reference-type variable can hold a null reference, which means that the reference literally points to nothing. When accessing members of such a null reference on the JVM, the dreaded NullPointerException is thrown. When declaring variables in the normal way, Kotlin does not allow references to be assigned null. If you want a variable that can be null, you have to add a question mark (?) to its definition:

var thisDateCanBeNull: Date? = Date()

When you now access the variable, you have to let the compiler know that you are aware that it can be null:

if (thisDateCanBeNull != null) println("${thisDateCanBeNull.toString()}")

Without the if check, the code would refuse to compile.

Groovy

Groovy was an early alternative language for the JVM. It offers, to a large degree, Java syntax compatibility, but code in Groovy can be much more compact because many source code elements that are required in Java are optional in Groovy. Like Clojure and mainstream languages such as Python, Groovy is a dynamic language (with a twist, as we will discuss next). Unusually, while Groovy is a dynamic language (types do not have to be specified when defining variables), it still offers optional static compilation of classes.
Since statically compiled code usually performs better than dynamic code, this can be used when performance is important for a particular class, although you give up some convenience when switching to static compilation. Another difference from Java is that Groovy supports operator overloading. Because Groovy is a dynamic language, it offers some tricks that would be very hard to implement in Java. It comes with a huge library of support classes, including many wrapper classes that make working with the Java Class Library a much more enjoyable experience. A JavaDemo equivalent in Groovy would look as follows:

@Canonical
class GroovyDemo {
    Date dueDate
}

The @Canonical annotation is not necessary but is recommended, because it automatically generates several support methods that are used often and required in many use cases. Even without it, Groovy will automatically generate the getter and setter methods that we had to define manually in Java.

Summary

We started by looking at the history of the Java Virtual Machine and studied some important use cases of the JVM: web applications, big data, and the Internet of Things (IoT). We then looked at five important languages that run on the JVM: Java (a very readable but also very verbose statically typed language), Scala (both a strong functional and OOP language), Clojure (a non-OOP functional language inspired by Lisp and Haskell), Kotlin (a statically typed language that protects the programmer from the very common NullPointerException errors), and Groovy (a dynamic language with static compiler support that offers a ton of features).


Testing in Agile Development and the State of Agile Adoption

Packt
15 Mar 2017
6 min read
In this article written by Renu Rajani, author of the book Testing Practitioner Handbook, we will discuss agile development. Organizations increasingly struggle to reach the right balance of quality versus speed. Some key issues with traditional development and testing include the following:

- Excessively long time to market for products and applications
- Inadequate customer orientation and regular interaction
- Over-engineered products: most of the features in a product or application may never be used
- High project failure rate
- ROI below expectation
- Inability to respond quickly to change
- Inadequate software quality

To address this, QA and testing should be blended with agile development. Agile engagements should take a business-centric approach to selecting the right test focus areas, such as behavior-driven development (BDD) to define acceptance criteria. This requires skills not only in testing but also in business and software development. The latest World Quality Report reveals an increase in the adoption of agile testing methodologies, which helps expedite time to market for products and services. The need for agile development (and testing) is primarily driven by digital transformation. The major trends in digital transformation are:

- More continual integration, fueled by digital transformation
- Complex integration across multi-channel, omnipresent commerce, making it necessary to integrate multiple channels, devices, and wearable technology
- Unlike yesterday's nomenclature, when agile meant colocation, today's advanced telepresence infrastructure makes it possible to work in distributed agile models and has removed the colocation dependency

Agile is not just a concept. It is a manner of working, made possible by multiple tools that enable development and testing in agile environments.

What do agile projects promise compared to traditional waterfall? Waterfall engagements are plan driven: one must know the software requirements and estimate the time and effort needed to accomplish the task at hand. In agile engagements, one knows the time and resources available and estimates the features that can go into a release.

Flavors of agile

There are various flavors of agile, including the following:

- Scrum: prioritizes the highest-value features and delivers incrementally once every 2-4 weeks
- Kanban: pinpoints bottlenecks to avoid holdups
- Lean: eliminates waste and unnecessary documentation and provides future flexibility
- XP: reconfigures and ensures the simplest design to deliver iteration features

Let's look at Scrum and Kanban in more detail.

Scrum

- Reacts quickly in volatile markets
- Focuses on customer benefits and avoids unnecessary outlays and time investments
- Uses organized development teams within a structured framework to coordinate activities and work together for quick decision-making
- Involves customers directly in the development process

Kanban

- Works with existing roles and processes and may be introduced either step by step or by establishing pioneer teams

Scrum and Kanban complement one another. While Scrum ensures adaptability and agility, Kanban improves efficiency and throughput. Both techniques increase overall transparency.

How is testing done in agile sprints?

I have often heard that agile projects do not require testers. Is this true?
Would you compromise on quality in the name of agile? Like any other development life cycle, agile also needs quality and testing. Agile engagements involve testers from the start of the sprint, that is, from the requirement analysis stage, in a process known as user story grooming. In sprint planning, the team selects the story points depending on various factors, including the availability of resources and user story complexity. All members of the cross-functional sprint team are involved in this process: developers, business analysts, testers, configuration teams, build teams, the scrum master, and the product owner. Once the user stories destined for the sprint are finalized, they are analyzed. Then the developers work on the design while the testers write the test cases and share them with the business analysts for review. At the end of each sprint, the team demonstrates the user stories selected during the sprint to the product owner and gets a go or no-go ruling. Once the demo is complete, the team gathers for the retrospective.

The benefits of this approach include:

- Productive, collaborative, and high-performing teams
- Predictability and project control, featuring transparency and flexibility
- Superior prioritization and risk management for business success
- High-value revenue with low upfront and ongoing costs
- High-quality products delivered with minimum time to market
- Increased stakeholder engagement and high customer satisfaction

Agile in distributed environments

People often assume agile means colocation. Today's technology infrastructure and the maturity of distributed teams have enabled agile to be practiced in a distributed mode. As per the World Quality Report 2016-2017, more than 42% of the organizations that adopt an agile delivery model use distributed agile. Distributed agile allows organizations to achieve higher cost savings with the global delivery model.

Key challenges in a distributed agile model include:

- Communication challenges across the distributed team
- Increasing product backlogs
- An ever-growing regression pack
- Poor knowledge management and handover for new people, due to lighter documentation and high-level placeholder tests
- Little time overlap with isolated regional developers for distributed teams

These challenges can be addressed as follows:

- Communication: live meetings, video conference calls, and common chat rooms
- Product backlogs: better prioritization within the iteration scope
- Regression scope: better impact analysis and targeted regression only
- Knowledge management: efficient tools and processes, along with audio and video recordings of important tests, virtual scrum boards, and up-to-date communication and tracking tools
- Distributed teams: optimal overlap timings through working shifts (40-50%)

State of agile adoption - findings from the World Quality Report 2016-2017

As per the latest World Quality Report, there are various challenges in applying testing to agile environments. Colocation and a lack of required skills are the two biggest challenges and are considered major risks to agile adoption. That said, organizations have been able to find solutions to these challenges.

Approaches to testing in agile development environments

Organizations use different ways to speed up cycle times and utilize agile.
Some of these tactics include predictive analytics, BDD/TDD, continuous monitoring, automated test data generation, and test environment virtualization. The following diagram provides a snapshot of the practices organizations use in their transition to agile:

Skills needed from QA and testing professionals for agile

The following diagram from WQR 2016 depicts the state of skills relating to agile testing as organizations strive to adopt agile methodologies:

Conclusion

An ideal agile engagement needs a test ecosystem that is flexible and supports both continual testing and quality monitoring. Given the complexity of agile engagements, automated decision-making adds value in achieving both speed and quality. Agile development has attained critical mass and is now widely adopted; the initial hesitation no longer prevails. The QA function is a key enabler in this journey. The coexistence of traditional IT with agile delivery principles is giving rise to a new methodology based on bimodal development.

Resources for Article:

Further resources on this subject:
Unit Testing and End-To-End Testing [article]
Storing Records and Interface customization [article]
Overview of Certificate Management [article]

Microservices and Service Oriented Architecture

Packt
09 Mar 2017
6 min read
Microservices are an architectural style and an approach to software development that satisfies modern business demands. They are not a new invention as such; they are instead an evolution of previous architecture styles. Many organizations today use them: they can improve organizational agility, speed of delivery, and the ability to scale. Microservices give you a way to develop more physically separated, modular applications. This tutorial has been taken from Spring 5.0 Microservices - Second Edition. Microservices are similar to conventional service-oriented architectures. In this article, we will see how microservices are related to SOA.

The emergence of microservices

Many organizations, such as Netflix, Amazon, and eBay, successfully used what is known as the 'divide and conquer' technique to functionally partition their monolithic applications into smaller atomic units, each of which performs a single function: a 'service'. These organizations solved a number of prevailing issues they were experiencing with their monolithic applications. Following their success, many other organizations started adopting this as a common pattern to refactor their monolithic applications. Later, evangelists termed this pattern the microservices architecture. Microservices originated from the idea of Hexagonal Architecture, coined by Alistair Cockburn back in 2005. Hexagonal Architecture, or the Hexagonal pattern, is also known as the Ports and Adapters pattern. Cockburn defined microservices as:

"...an architectural style or an approach for building IT systems as a set of business capabilities that are autonomous, self contained, and loosely coupled."

The following diagram depicts a traditional N-tier application architecture with a presentation layer, a business layer, and a database layer: Modules A, B, and C represent three different business capabilities. The layers in the diagram represent the separation of architectural concerns, and each layer holds all three business capabilities pertaining to that layer: the presentation layer has the web components of all three modules, the business layer has the business components of all three modules, and the database hosts the tables of all three modules. In most cases, the layers can be physically distributed, whereas the modules within a layer are hardwired.

Let's now examine a microservice-based architecture: As we can see in the preceding diagram, the boundaries are inverted in the microservices architecture. Each vertical slice represents a microservice, and each microservice has its own presentation layer, business layer, and database layer. Microservices are aligned with business capabilities, so changes to one microservice do not impact the others. There is no standard communication or transport mechanism for microservices. In general, microservices communicate with each other using widely adopted lightweight protocols, such as HTTP and REST, or messaging protocols, such as JMS or AMQP. In specific cases, one might choose more optimized communication protocols, such as Thrift, ZeroMQ, Protocol Buffers, or Avro. Because microservices are closely aligned with business capabilities and have independently manageable lifecycles, they are an ideal choice for enterprises embarking on DevOps and cloud. DevOps and cloud are two facets of microservices.
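To make the idea concrete before comparing microservices with SOA, the following is a minimal, hypothetical sketch of a single Spring Boot service exposing one business capability over HTTP/REST, in the spirit of the lightweight communication just described. The class name, endpoint path, and JSON payload are illustrative assumptions and are not taken from the book's example code.

// A minimal, hypothetical sketch of one microservice endpoint using Spring Boot.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class CustomerServiceApplication {

    // Each microservice owns one business capability and exposes it over HTTP/REST.
    @GetMapping("/customers/{id}")
    public String getCustomer(@PathVariable("id") String id) {
        // In a real service this would read from the service's own datastore.
        return "{\"id\": \"" + id + "\", \"name\": \"Sample Customer\"}";
    }

    public static void main(String[] args) {
        SpringApplication.run(CustomerServiceApplication.class, args);
    }
}

Because such a service owns its own presentation, business, and data concerns, it can be built, deployed, and scaled independently of its peers, which is exactly the property that makes the style attractive for DevOps and cloud.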
How do microservices compare to Service Oriented Architectures?

One of the common questions that arises when dealing with the microservices architecture is: how is it different from SOA? SOA and microservices follow similar concepts. Earlier in this article, we saw that microservices evolved from SOA and that many service characteristics are common to both approaches. However, are they the same or different?

Let's first examine the definition of SOA. The Open Group definition of SOA is as follows:

"SOA is an architectural style that supports service-orientation. Service-orientation is a way of thinking in terms of services and service-based development and the outcomes of services. A service:
Is self-contained
May be composed of other services
Is a "black box" to consumers of the service"

You have learned similar aspects in microservices as well. So, in what way are microservices different? The answer is: it depends. It could be yes or no, depending upon the organization and its adoption of SOA. SOA is a broader term, and different organizations approached SOA differently to solve different organizational problems. The difference between microservices and SOA therefore lies in how an organization has approached SOA. In order to get clarity, a few cases will be examined here.

Service oriented integration

Service-oriented integration refers to a service-based integration approach used by many organizations. Many organizations would have used SOA primarily to solve their integration complexities, also known as integration spaghetti. Generally, this is termed Service Oriented Integration (SOI). In such cases, applications communicate with each other through a common integration layer using standard protocols and message formats, such as SOAP/XML-based web services over HTTP or Java Message Service (JMS). These types of organizations focus on Enterprise Integration Patterns (EIP) to model their integration requirements. This approach relies heavily on a heavyweight Enterprise Service Bus (ESB), such as TIBCO Business Works, WebSphere ESB, Oracle ESB, and the like. Most ESB vendors also packaged a set of related products, such as rules engines, business process management engines, and so on, as a SOA suite. Such organizations' integrations are deeply rooted in these products: they either write heavy orchestration logic in the ESB layer or the business logic itself in the service bus. In both cases, all enterprise services are deployed and accessed through the ESB, and these services are managed through an enterprise governance model. For such organizations, microservices are altogether different from SOA.

Legacy modernization

SOA is also used to build service layers on top of legacy applications, as shown in the following diagram: Another category of organizations would have used SOA in transformation projects or legacy modernization projects. In such cases, the services are built and deployed in the ESB, connecting to backend systems using ESB adapters. For these organizations, microservices are different from SOA.

Service oriented application

Some organizations would have adopted SOA at an application level: In this approach, as shown in the preceding diagram, lightweight integration frameworks, such as Apache Camel or Spring Integration, are embedded within applications to handle service-related cross-cutting capabilities, such as protocol mediation, parallel execution, orchestration, and service integration.
Because some of these lightweight integration frameworks have native Java object support, such applications may even have used native Plain Old Java Object (POJO) services for integration and data exchange between services. As a result, all services have to be packaged as one monolithic web archive. Such organizations could see microservices as the next logical step of their SOA.

Monolithic migration using SOA

The following diagram represents logical system boundaries: The last possibility is transforming a monolithic application into smaller units after hitting the breaking point with the monolithic system. Such organizations would have broken the application into smaller, physically deployable subsystems, similar to the Y-axis scaling approach explained earlier, and deployed them as web archives on web servers or as JARs on home-grown containers. These subsystems, exposed as services, would have used web services or other lightweight protocols to exchange data, and they would also have applied SOA and service design principles to achieve this. Such organizations may tend to think that microservices are the same old wine in a new bottle.

Further resources on this subject:
Building Scalable Microservices [article]
Breaking into Microservices Architecture [article]
A capability model for microservices [article]

The Interface

Packt
09 Mar 2017
18 min read
In this article by Tim Woodruff, authors of the book Learning ServiceNow, No matter what software system you're interested in learning about, understanding the interface is likely to be the first step toward success. ServiceNow is a very robust IT service management tool, and has an interface to match. Designed both to be easy to use, and to support a multitude of business processes and applications (both foreseen and unforeseen), it must be able to bend to the will of the business, and be putty in the hands of a capable developer. (For more resources related to this topic, see here.) You'll learn all the major components of the UI (user interface), and how to manipulate them to suit your business needs, and look good doing it. You'll also learn some time-saving tips, tricks, and UI shortcuts that have been built into the interface for power users to get around more quickly. This article will cover the key components of the user interface, including: The content and ServiceNow frames The application navigator UI settings and personalization We recommend that you follow along in your own development instance as you read through this section, to gain a more intimate familiarity with the interface. Frames ServiceNow is a cloud platform that runs inside your browser window. Within your browser, the ServiceNow interface is broken up into frames. Frames, in web parlance, are just separately divided sections of a page. In ServiceNow, there are two main frames: the ServiceNow frame, and the content frame.  Both have different controls and display different information. This section will show you what the different frames are, what they generally contain, and the major UI elements within them. The ServiceNow frame consists of many UI elements spanning across both the top, and left side of the ServiceNow window in your browser. ServiceNow frame Technically, the ServiceNow frame can be further broken up into two frames: The banner frame along the top edge of the interface, and the application navigator along the left side. Banner frame The banner frame runs along the top of every page in ServiceNow, save for a few exceptions. It's got room for some branding and a logo, but the more functional components for administrators and developers is on the right. There, you'll find: System settings cog Help and documentation button Conversations panel button Instance search button Profile/session dropdown System settings In your developer instance, on the far-top-right, you will see a sort of cog or sprocket. That is a universal sort of the Settings menu icon. Clicking on that icon reveals the System settings menu. This menu is broken down into several sections: General Theme Lists Forms Notifications Developer (Admins only) Fig 1.1: System Settings The settings in this menu generally apply only to the current user who's signed in, so you can freely toggle and modify these settings without worrying about breaking anything. In the General tab (as seen in the preceding figure) of the System settings UI, you'll find toggles to control accessibility options, compact the user interface, select how date/time fields are shown, select your time-zone, and even an option to display a printer-friendly version of the page you're on. In Geneva, you'll also see an option to Wrap Longer Text in List Columns and Compact list date/time. On the Theme tab (in the preceding figure), you'll find several pre-made ServiceNow themes with names like System and Blues. 
One of the first things that a company often does when deploying ServiceNow, is to create a custom-branded theme. We'll go over how to do that in a later section, and you'll be able to see your custom themes there. The Lists tab (not available in Geneva) contains the option to wrap longer text in list columns (which was under the General tab in Geneva), as well as options to enable striped table rows (which alternates rows in a table between contrasting shades of gray, making it easier to follow with the eye from left to right) and modern cell styles. All options in the Lists tab except Wrap longer text in list columns require the List V3 plugin to be enabled before they'll show up, as they only apply to List V3. If you've installed a fresh ServiceNow instance using Helsinki or a later version, the List V3 plugin will be enabled by default. However, if you've upgraded from Geneva or an earlier version, to Helsinki, you'll be on List V2 by default, and list V3 will need to be enabled. This, and any other plugins, can be enabled from System Definition | Plugins in the application navigator. The Forms tab contains settings to enable tabbed forms, as well as to control how and when related lists load. Related lists are lists (like tables in a spreadsheet) of related that appear at the bottom of forms. Forms are where key data about an individual record are displayed. The Notifications tab (not available in Geneva) allows you to choose whether to get notifications on your mobile device, desktop toast notifications, e-mail notifications, or audio notifications. Finally, the Developer tab (only available to users with the admin role) is where you can find settings relating to application and update set-based development. By default, your selected update set should say Default [Global], which means that any configuration changes you make in the instance will not be captured in a portable update set that you can move between instances. We'll go into detail about what these things mean later on. For now, follow along with the following steps in your developer instance using your Administrator account, as we create a new update set to contain any configuration changes we'll be making in this article: If you don't already have the System Settings menu open, click on the System Settings gear in the top-right of the ServiceNow interface. If you haven't already done so, click on the Developer tab on the bottom-left. Next, navigate to the Local Update Setstable. In the main section of the System Settings dialog, you should see the third row down labeled Update Sets. To the right of that should be a dropdown with Default [Global] selected, followed by three buttons. The first button () is called a Reference icon. Clicking it will take you to the currently selected update set (in this case, Default). The second button () will take you to the list view, showing you all of the local update sets. The third button will refresh the currently selected update set, in case you've changed update sets in another window or tab. Click on the second button, to navigate to the Local Update Sets list view. Click on the blue New button at the top-left of the page to go to the new update set form. Give this update set a name. Let's enter Article 1 into the Name field. Fill out the Description by writing in something like Learning about the ServiceNow interface! Leave State and Release date to their default values. Click Submit and Make Current. 
Alternately, you could click Submitor right-click the header and click Save, then return to the record and click the Make This My Current Set related link. Now that we've created an update set, any configuration changes we make will be captured and stored in a nice little package that we can back out or move into another instance to deploy the same changes. Now let's just confirm that we've got the right update set selected: Once again, click on the System Settings gear at the top-right of the ServiceNow window, and open the Developer tab. If the selected update set still shows as Default, click the Refresh button (the third icon to the right of the selected update set). If the update set still shows as Default, just select your new Article1 update set from the Update Set drop-down list. Help Next on the right side of the banner frame, is the Help icon. Clicking on this icon opens up the Help panel on the right side of the page. The Help menu has three sections: What's New, User Guide, and Search Documentation. Or, if you're in Geneva, it shows only What's New and Search Product Documentation. Clicking What's New just brings up the introduction to your instance version, with a couple of examples of the more prominent new features over the previous version. The User Guidewill redirect you to an internal mini-guide with some useful pocket-reference types of info in Helsinki. It's very slim on the details though, so you might be better off searching the developer site (http://developer.servicenow.com) or documentation (http://docs.servicenow.com ) if you have any specific questions. Speaking of the documentation site, Search Documentation is essentially a link. Clicking this link from a form or list will automatically populate a query relating to the type of record(s) you were viewing. Conversations Moving further left in the banner frame, you'll find the Conversations button. This opens up the Conversations side-bar, showing an (initially blank) list of the conversations you've recently been a part of. You can enter text in the filter box to filter the conversation list by participant name. Unfortunately, it doesn't allow you to filter/search by message contents at this point. You can also click the Plus icon to initiate a new conversation with a user of your choice. Global text search The next link to the right in the banner frame is probably the most useful one of all – the global text search. The global text search box allows you to enter a term, ticket number, or keyword and search a configurable multitude of tables. As an example of this functionality, let's search for a user that should be present in the demo data that came with your developer instance: Click on the Search icon (the one that looks like a magnifying glass). It should expand to the left, displaying a search keyword input box. In that input box, type in abel tuter. This is the name of one of the demo users that comes with your developer instance. Press Enter, and you should see the relevant search results divided into sections. Entering an exact ticket number for a given task (such as an incident, request, or problem ticket) will take you directly to that ticket rather than showing the search results. This is a great way to quickly navigate to a ticket you've received an e-mail notification about, or for a service desk agent to look up a ticket number provided by a customer. The search results from the Global Text Search are divided into search groups. The default groups are Tasks, Live Feed, Policy, and People & Places. 
To the right of each search group is a list of the tables that the search is run against for that group. The Policy search group, for example, contains several script types, including Business Rules, UI Actions, Client Scripts, and UI Policies. Profile The last item on our list of banner-frame elements, is the profile link. This will show your photo/icon (if you've uploaded one), and your name. As indicated by the small down-facing arrow to the right of your name (or System Administrator), clicking on this will show a little drop-down menu. This menu consists of up to four main components: Profile Impersonate User Elevate Roles Logout The Profile link in the dropdown will take you directly to the Self Service view of your profile. This is generally not what Administrators want, but it's a quick way for users to view their profile information. Impersonate User is a highly useful tool for administrators and developers, allowing them to view the instance as though they were another user, including that user's security permissions, and viewing the behavior of UI policies and scripts when that user is logged in. Elevate Roles is an option only available when the High Security plugin is enabled (which may or may not be turned on by default in your organization). Clicking this option opens a dialog that allows you to check a box, and re-initialize your session with a special security role called security_admin (assuming you have this role in your instance). With high security settings enabled, the security_admin role allows you to perform certain actions, such as modifying ACLs (Access Control Lists – security rules), and running background scripts (scripts you can write and execute directly on the server). Finally, the Logout link does just what you'd expect: logs you out. If you have difficulty with a session that you can't log out, you can always log out by visiting /logout.do on your instance. For example: http://your-instance.service-now.com/logout.do/. The application navigator The application navigator is one of the UI components with which you will become most familiar, as you work in ServiceNow. Nearly everything you do will begin either by searching in the Global Text Search box, or by filtering the application navigator. The contents of the Application Navigator consists of Modules nested underneath application menu. The first application menu in the application navigator is Self-Service. This application menu is generally what's available to a user who doesn't have any special roles or permissions. Underneath this application menu, you'll see various modules such as Homepage, Service Catalog, Knowledge, and so on. The Self-Service application menu, and several modules under it. When you hear the term application as it relates to ServiceNow, you might think of an application on your smartphone. Applications in ServiceNow and applications on your smartphone both generally consist of packaged functionality, presented in a coherent way. However in ServiceNow, there are some differences. For example, an application header might consist only of links to other areas in ServiceNow, and contain no new functionality of its' own. An application might not even necessarily have an application header. 
Generally, we refer to the major ITIL processes in ServiceNow as applications (Incident, Change, Problem, Knowledge, and so on) – but these can often consist of various components linked up with one another; so the functionality within an application need not necessarily be packaged in a way that it's closed off from the rest of the system. You'll often be given instructions to navigate to a particular module in a way similar to this: Self-Service | My Requests. In this example, the left portion (Self-Service) is the application menu header, and the right portion (My Requests) is the module. Filter text box The filter text box in the Application Navigator allows you to enter a string to – you guessed it – filter the Application Navigator list with! It isn't strictly a search, it's just filtering the list of items in the application navigator, which means that the term you enter must appear somewhere in the name of either an application menu, or a module. So if you enter the term Incident, you'll see modules with names like Incidents and Watched Incidents, as well as every module inside the Incident application menu. However, if you enter Create Incident, you won't get any results. This is because the module for creating a new Incident, is called Create New, inside the Incident module, and the term Create Incident doesn't appear in that title. In addition to filtering the application navigator, the filter text box has some hidden shortcuts that ServiceNow wizards use to fly around the interface with the speed of a ninja. Here are a few pro tips for you: Once you've entered a term into the filter text box in the application navigator, the first module result is automatically selected. You can navigate to it by pressing Enter. Enter a table name followed by .list and then press Enter to navigate directly to the default list view for that table. For example, entering sc_req_item.list [Enter] will direct you to the list view for the sc_req_item (Requested Item) table. Enter a table name followed by either .form, or .do and then press Enter to take you directly to the default view of that table's form (allowing you to quickly create a new record). For example, entering sc_request.form [Enter] will take you to the New Record intake form for the sc_request (Request) table. Each table has a corresponding form, with certain fields displayed by default. Use either .FORM or .LIST in caps, to navigate to the list or form view in a new tab or window!  Opening a list or form in a new tab (either using this method, by middle-clicking a link, or otherwise) breaks it out of the ServiceNow frame, showing only the Content frame. Try it yourself: Enter sys_user.list into the application navigator filter text field in your developer instance, and press Enter. You should see the list of all the demo users in your instance! No matter which application navigator tab you have selected when you start typing in the filter text box, it will always show you results from the all applications tab, with any of your favorites that match the filter showing up first. Favorites Users can add favorites within the Application Navigator by clicking the star icon, visible on the right when hovering over any application menu or module in the application navigator. Adding a favorite will make it come up first when filtering the application navigator using any term that it matches. 
It'll also show up under your favorites list, which you can see by clicking the tab at the top of the application navigator, below the filter text box, with the same star icon you see when adding a module to your favorites. Let's try out favorites now by adding some favorites that an admin or developer is likely to want to come back to on frequent occasions. Add the following modules to your favorites list by filtering the application navigator by the module name, hovering over the module, and clicking the star icon on the right: Workflow | Workflow Editor System Definition | Script Includes System Definition | Dictionary System Update Sets | Local Update Sets System Logs | System Log | All This one (All) is nested under a module (System Log) that doesn't point anywhere, but it is just there to serve as a separator for other modules. It's not much use searching for All, so try searching for System Log! Now that we've got a few favorites, let's rename them so they're easier to identify at a glance. While we're at it, we'll give them some new icons as well: Click the favorites tab in the application navigator, and you should see your newly added favorites in the list. At the bottom-right of the application navigator in the ServiceNow frame, click on Edit Favorites. Click on the favorite item called Workflow – Workflow Editor. This will select it so you can edit it in the content frame on the right: 01-10-Editing workflow favorite.png In the Name field, give it something simpler, such as Workflow Editor. Then choose a color and an icon. I chose white, and the icon that looks like a flowchart. I also removed my default Home favorite, but you don't have to. Here is what my favorites look like after I make my modifications: 01-11-Favorites after customizing.png Another way to add something to your favorites is to drag it there. Certain specific elements in the ServiceNow UI can be dragged directly into your Favorites tab. Let's give it a try! Head over to the Incident table by using the .list trick. In your developer instance, enter incident.list into the filter text box in the application navigator; and then press Enter. Click on the Filter icon at the top-left of the Incident list, and filter the Incident list using the condition builder. Add some conditions so that it only displays records where Active is true, and Assigned to is empty. Then click on Run. 01-12-Incident condition for favorites.png The list should now be filtered, after you hit Run. You should see just a few incidents in the list. Now, at the top-left of the Incident table, to the left of the Incidents table label, click on the hamburger menu (yeah, that's really what it's called). It looks like three horizontal bars atop one another. In that menu, click on Create Favorite. Choose a good name, like Unassigned Incidents, and an appropriate icon and color. Then click Done. You should now have an Unassigned Incidents favorite listed! Finally, if you click on the little white left-facing arrow at the bottom-left of the application navigator, you'll notice that whichever navigator tab you have selected, your favorites show up in a stacked list on the left. This gives you a bit more screen real-estate for the content frame. Summary In this article, we learned about: How content is organized on the screen, within frames – the banner frame, Application Navigator, and content frame. How to access the built-in help and documentation for ServiceNow. How to use the global text search functionality, to find the records we're looking for. 
What it means to elevate roles or impersonate a user. How to get around the Application Navigator, including some pro tips on getting around like a power user from the filter text box. How to use favorites and navigation history within ServiceNow, to our advantage. What UI settings are available, and how to personalize our interface. Resources for Article: Further resources on this subject: Getting Things Done with Tasks [article] Events, Notifications, and Reporting [article] Start Treating your Infrastructure as Code [article]

Members Inheritance and Polymorphism

Packt
09 Mar 2017
16 min read
In this article by Gastón C. Hillar, the author of the book Java 9 with JShell, we will learn about one of the most exciting features of object-oriented programming in Java 9: polymorphism. We will code many classes and then we will work with their instances in JShell to understand how objects can take many different forms. We will: Create concrete classes that inherit from abstract superclasses Work with instances of subclasses Understand polymorphism Control whether subclasses can or cannot override members Control whether classes can be subclassed Use methods that perform operations with instances of different subclasses (For more resources related to this topic, see here.) Creating concrete classes that inherit from abstract superclasses We will consider the existence of an abstract base class named VirtualAnimal and the following three abstract subclasses: VirtualMammal, VirtualDomesticMammal, and VirtualHorse. Next, we will code the following three concrete classes. Each class represents a different horse breed and is a subclass of the VirtualHorse abstract class. AmericanQuarterHorse: This class represents a virtual horse that belongs to the American Quarter Horse breed. ShireHorse: This class represents a virtual horse that belongs to the Shire Horse breed. Thoroughbred: This class represents a virtual horse that belongs to the Thoroughbred breed. The three concrete classes will implement the following three abstract methods they inherited from abstract superclasses: String getAsciiArt(): This abstract method is inherited from the VirtualAnimal abstract class. String getBaby(): This abstract method is inherited from the VirtualAnimal abstract class. String getBreed(): This abstract method is inherited from the VirtualHorse abstract class. The following UML diagram shows the members for the three concrete classes that we will code: AmericanQuarterHorse, ShireHorse, and Thoroughbred. We don’t use bold text format for the three methods that each of these concrete classes will declare because they aren’t overriding the methods, they are implementing the abstract methods that the classes inherited. First, we will create the AmericanQuarterHorse concrete class. The following lines show the code for this class in Java 9. Notice that there is no abstract keyword before class, and therefore, our class must make sure that it implements all the inherited abstract methods. public class AmericanQuarterHorse extends VirtualHorse { public AmericanQuarterHorse( int age, boolean isPregnant, String name, String favoriteToy) { super(age, isPregnant, name, favoriteToy); System.out.println("AmericanQuarterHorse created."); } public AmericanQuarterHorse( int age, String name, String favoriteToy) { this(age, false, name, favoriteToy); } public String getBaby() { return "AQH baby "; } public String getBreed() { return "American Quarter Horse"; } public String getAsciiArt() { return " >>\.n" + " /* )`.n" + " // _)`^)`. _.---. _n" + " (_,' \ `^-)'' `.\n" + " | | \n" + " \ / |n" + " / \ /.___.'\ (\ (_n" + " < ,'|| \ |`. \`-'n" + " \\ () )| )/n" + " |_>|> /_] //n" + " /_] /_]n"; } } Now, we will create the ShireHorse concrete class. 
The following lines show the code for this class in Java 9: public class ShireHorse extends VirtualHorse { public ShireHorse( int age, boolean isPregnant, String name, String favoriteToy) { super(age, isPregnant, name, favoriteToy); System.out.println("ShireHorse created."); } public ShireHorse( int age, String name, String favoriteToy) { this(age, false, name, favoriteToy); } public String getBaby() { return "ShireHorse baby "; } public String getBreed() { return "Shire Horse"; } public String getAsciiArt() { return " ;;n" + " .;;'*\n" + " __ .;;' ' \n" + " /' '\.~~.~' \ /'\.)n" + " ,;( ) / |n" + " ,;' \ /-.,,( )n" + " ) /| ) /|n" + " ||(_\ ||(_\n" + " (_\ (_\n"; } } Finally, we will create the Thoroughbred concrete class. The following lines show the code for this class in Java 9: public class Thoroughbred extends VirtualHorse { public Thoroughbred( int age, boolean isPregnant, String name, String favoriteToy) { super(age, isPregnant, name, favoriteToy); System.out.println("Thoroughbred created."); } public Thoroughbred( int age, String name, String favoriteToy) { this(age, false, name, favoriteToy); } public String getBaby() { return "Thoroughbred baby "; } public String getBreed() { return "Thoroughbred"; } public String getAsciiArt() { return " })\-=--.n" + " // *._.-'n" + " _.-=-...-' /n" + " {{| , |n" + " {{\ | \ /_n" + " }} \ ,'---'\___\n" + " / )/\\ \\ >\n" + " // >\ >\`-n" + " `- `- `-n"; } } We have more than one constructor defined for the three concrete classes. The first constructor that requires four arguments uses the super keyword to call the constructor from the base class or superclass, that is, the constructor defined in the VirtualHorse class. After the constructor defined in the superclass finishes its execution, the code prints a message indicating that an instance of each specific concrete class has been created. The constructor defined in each class prints a different message. The second constructor uses the this keyword to call the previously explained constructor with the received arguments and with false as the value for the isPregnant argument. Each class returns a different String in the implementation of the getBaby and getBreed methods. In addition, each class returns a different ASCII art representation for a virtual horse in the implementation of the getAsciiArt method. Understanding polymorphism We can use the same method, that is, a method with the same name and arguments, to cause different things to happen according to the class on which we invoke the method. In object-oriented programming, this feature is known as polymorphism. Polymorphism is the ability of an object to take on many forms, and we will see it in action by working with instances of the previously coded concrete classes. The following lines create a new instance of the AmericanQuarterHorse class named american and use one of its constructors that doesn’t require the isPregnant argument: AmericanQuarterHorse american = new AmericanQuarterHorse( 8, "American", "Equi-Spirit Ball"); american.printBreed(); The following lines show the messages that the different constructors displayed in JShell after we enter the previous code: VirtualAnimal created. VirtualMammal created. VirtualDomesticMammal created. VirtualHorse created. AmericanQuarterHorse created. The constructor defined in the AmericanQuarterHorse calls the constructor from its superclass, that is, the VirtualHorse class. 
Remember that each constructor calls its superclass constructor and prints a message indicating that an instance of the class is created. We don’t have five different instances; we just have one instance that calls the chained constructors of five different classes to perform all the necessary initialization to create an instance of AmericanQuarterHorse. If we execute the following lines in JShell, all of them will display true as a result, because american belongs to the VirtualAnimal, VirtualMammal, VirtualDomesticMammal, VirtualHorse, and AmericanQuarterHorse classes. System.out.println(american instanceof VirtualAnimal); System.out.println(american instanceof VirtualMammal); System.out.println(american instanceof VirtualDomesticMammal); System.out.println(american instanceof VirtualHorse); System.out.println(american instanceof AmericanQuarterHorse); The results of the previous lines mean that the instance of the AmericanQuarterHorse class, whose reference is saved in the american variable of type AmericanQuarterHorse, can take on the form of an instance of any of the following classes: VirtualAnimal VirtualMammal VirtualDomesticMammal VirtualHorse AmericanQuarterHorse The following screenshot shows the results of executing the previous lines in JShell: We coded the printBreed method within the VirtualHorse class, and we didn’t override this method in any of the subclasses. The following is the code for the printBreed method: public void printBreed() { System.out.println(getBreed()); } The code prints the String returned by the getBreed method, declared in the same class as an abstract method. The three concrete classes that inherit from VirtualHorse implemented the getBreed method and each of them returns a different String. When we called the american.printBreed method, JShell displayed American Quarter Horse. The following lines create an instance of the ShireHorse class named zelda. Note that in this case, we use the constructor that requires the isPregnant argument. As happened when we created an instance of the AmericanQuarterHorse class, JShell will display a message for each constructor that is executed as a result of the chained constructors we coded. ShireHorse zelda = new ShireHorse(9, true, "Zelda", "Tennis Ball"); The next lines call the printAverageNumberOfBabies and printAsciiArt instance methods for american, the instance of AmericanQuarterHorse, and zelda, which is the instance of ShireHorse. american.printAverageNumberOfBabies(); american.printAsciiArt(); zelda.printAverageNumberOfBabies(); zelda.printAsciiArt(); We coded the printAverageNumberOfBabies and printAsciiArt methods in the VirtualAnimal class, and we didn’t override them in any of its subclasses. Hence, when we call these methods for either american or zelda, Java will execute the code defined in the VirtualAnimal class. The printAverageNumberOfBabies method uses the int value returned by the getAverageNumberOfBabies and the String returned by the getBaby method to generate a String that represents the average number of babies for a virtual animal. The VirtualHorse class implemented the inherited getAverageNumberOfBabies abstract method with code that returns 1. The AmericanQuarterHorse and ShireHorse classes implemented the inherited getBaby abstract method with code that returns a String that represents a baby for the virtual horse breed: "AQH baby" and "ShireHorse baby". 
Thus, our call to the printAverageNumberOfBabies method will produce different results in each instance because they belong to a different class. The printAsciiArt method uses the String returned by the getAsciiArt method to print the ASCII art that represents a virtual horse. The AmericanQuarterHorse and ShireHorse classes implemented the inherited getAsciiArt abstract method with code that returns a String with the ASCII art that is appropriate for each virtual horse that the class represents. Thus, our call to the printAsciiArt method will produce different results in each instance because they belong to a different class. The following screenshot shows the results of executing the previous lines in JShell. Both instances run the same code for the two methods that were coded in the VirtualAnimal abstract class. However, each class provided a different implementation for the methods that end up being called to generated the result and cause the differences in the output. The following lines create an instance of the Thoroughbred class named willow, and then call its printAsciiArt method. As happened before, JShell will display a message for each constructor that is executed as a result of the chained constructors we coded. Thoroughbred willow = new Thoroughbred(5, "Willow", "Jolly Ball"); willow.printAsciiArt(); The following screenshot shows the results of executing the previous lines in JShell. The new instance is from a class that provides a different implementation of the getAsciiArt method, and therefore, we will see a different ASCII art than in the previous two calls to the same method for the other instances. The following lines call the neigh method for the instance named willow with a different number of arguments. This way, we take advantage of the neigh method that we overloaded four times with different arguments. Remember that we coded the four neigh methods in the VirtualHorse class and the Thoroughbred class inherits the overloaded methods from this superclass through its hierarchy tree. willow.neigh(); willow.neigh(2); willow.neigh(2, american); willow.neigh(3, zelda, true); american.nicker(); american.nicker(2); american.nicker(2, willow); american.nicker(3, willow, true); The following screenshot shows the results of calling the neigh and nicker methods with the different arguments in JShell: We called the four versions of the neigh method defined in the VirtualHorse class for the Thoroughbred instance named willow. The third and fourth lines that call the neigh method specify a value for the otherDomesticMammal argument of type VirtualDomesticMammal. The third line specifies american as the value for otherDomesticMammal and the fourth line specifies zelda as the value for the same argument. Both the AmericanQuarterHorse and ShireHorse concrete classes are subclasses of VirtualHorse, and VirtualHorse is a subclass or VirtualDomesticMammal. Hence, we can use american and zelda as arguments where a VirtualDomesticMammal instance is required. Then, we called the four versions of the nicker method defined in the VirtualHorse class for the AmericanQuarterHorse instance named american. The third and fourth lines that call the nicker method specify willow as the value for the otherDomesticMammal argument of type VirtualDomesticMammal. The Thoroughbred concrete class is also a subclass of VirtualHorse, and VirtualHorse is a subclass or VirtualDomesticMammal. Hence, we can use willow as an argument where a VirtualDomesticMammal instance is required. 
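As a further illustration of polymorphism, the three instances created above can be handled uniformly through references of their common superclass. The following short snippet is an added example, not code from the book; it assumes the american, zelda, and willow instances and the VirtualAnimal hierarchy defined earlier in this article, and it can be entered directly in JShell.

// Treat the three horses as instances of their common superclass, VirtualAnimal.
// Each call dispatches to the implementation provided by the concrete class.
import java.util.List;

List<VirtualAnimal> animals = List.of(american, zelda, willow);
for (VirtualAnimal animal : animals) {
    animal.printAverageNumberOfBabies(); // uses each class's getBaby() implementation
    animal.printAsciiArt();              // uses each class's getAsciiArt() implementation
}

Running these lines in JShell prints a different average-number-of-babies message and a different ASCII art drawing for each element, even though the loop only knows about the VirtualAnimal type.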
Controlling overridability of members in subclasses We will code the VirtualDomesticCat abstract class and its concrete subclass: MaineCoon. Then, we will code the VirtualBird abstract class, its VirtualDomesticBird abstract subclass and the Cockatiel concrete subclass. Finally, we will code the VirtualDomesticRabbit concrete class. While coding these classes, we will use Java 9 features that allow us to decide whether the subclasses can or cannot override specific members. All the virtual domestic cats must be able to talk, and therefore, we will override the talk method inherited from VirtualDomesticMammal to print the word that represents a cat meowing: "Meow". We also want to provide a method to print "Meow" a specific number of times. Hence, at this point, we realize that we can take advantage of the printSoundInWords method we had declared in the VirtualHorse class. We cannot access this instance method in the VirtualDomesticCat abstract class because it doesn’t inherit from VirtualHorse. Thus, we will move this method from the VirtualHorse class to its superclass: VirtualDomesticMammal. We will use the final keyword before the return type for the methods that we don’t want to be overridden in subclasses. When a method is marked as a final method, the subclasses cannot override the method and the Java 9 compiler shows an error if they try to do so. Not all the birds are able to fly in real-life. However, all our virtual birds are able to fly, and therefore, we will implement the inherited isAbleToFly abstract method as a final method that returns true. This way, we make sure that all the classes that inherit from the VirtualBird abstract class will always run this code for the isAbleToFly method and that they won’t be able to override it. The following UML diagram shows the members for the new abstract and concrete classes that we will code. In addition, the diagram shows the printSoundInWords method moved from the VirtualHorse abstract class to the VirtualDomesticMammal abstract class. First, we will create a new version of the VirtualDomesticMammal abstract class. We will add the printSoundInWords method that we have in the VirtualHorse abstract class and we will use the final keyword to indicate that we don’t want to allow subclasses to override this method. The following lines show the new code for the VirtualDomesticMammal class. public abstract class VirtualDomesticMammal extends VirtualMammal { public final String name; public String favoriteToy; public VirtualDomesticMammal( int age, boolean isPregnant, String name, String favoriteToy) { super(age, isPregnant); this.name = name; this.favoriteToy = favoriteToy; System.out.println("VirtualDomesticMammal created."); } public VirtualDomesticMammal( int age, String name, String favoriteToy) { this(age, false, name, favoriteToy); } protected final void printSoundInWords( String soundInWords, int times, VirtualDomesticMammal otherDomesticMammal, boolean isAngry) { String message = String.format("%s%s: %s%s", name, otherDomesticMammal == null ? "" : String.format(" to %s ", otherDomesticMammal.name), isAngry ? 
"Angry " : "", new String(new char[times]).replace(" ", soundInWords)); System.out.println(message); } public void talk() { System.out.println( String.format("%s: says something", name)); } } After we enter the previous lines, JShell will display the following messages: | update replaced class VirtualHorse which cannot be referenced until this error is corrected: | printSoundInWords(java.lang.String,int,VirtualDomesticMammal,boolean) in VirtualHorse cannot override printSoundInWords(java.lang.String,int,VirtualDomesticMammal,boolean) in VirtualDomesticMammal | overridden method is final | protected void printSoundInWords(String soundInWords, int times, | ^---------------------------------------------------------------... | update replaced class AmericanQuarterHorse which cannot be referenced until class VirtualHorse is declared | update replaced class ShireHorse which cannot be referenced until class VirtualHorse is declared | update replaced class Thoroughbred which cannot be referenced until class VirtualHorse is declared | update replaced variable american which cannot be referenced until class AmericanQuarterHorse is declared | update replaced variable zelda which cannot be referenced until class ShireHorse is declared | update replaced variable willow which cannot be referenced until class Thoroughbred is declared | update overwrote class VirtualDomesticMammal JShell indicates us that the VirtualHorse class and its subclasses cannot be referenced until we correct an error for this class. The class declares the printSoundInWords method and overrides the recently added method with the same name and arguments in the VirtualDomesticMammal. We used the final keyword in the new declaration to make sure that any subclass cannot override it, and therefore, the Java compiler generates the error message that JShell displays. Now, we will create a new version of the VirtualHorse abstract class. The following lines show the new version that removes the printSoundInWords method and uses the final keyword to make sure that many methods cannot be overridden by any of the subclasses. The declarations that use the final keyword to avoid the methods to be overridden are highlighted in the next lines. 
public abstract class VirtualHorse extends VirtualDomesticMammal { public VirtualHorse( int age, boolean isPregnant, String name, String favoriteToy) { super(age, isPregnant, name, favoriteToy); System.out.println("VirtualHorse created."); } public VirtualHorse( int age, String name, String favoriteToy) { this(age, false, name, favoriteToy); } public final boolean isAbleToFly() { return false; } public final boolean isRideable() { return true; } public final boolean isHervibore() { return true; } public final boolean isCarnivore() { return false; } public int getAverageNumberOfBabies() { return 1; } public abstract String getBreed(); public final void printBreed() { System.out.println(getBreed()); } public final void printNeigh( int times, VirtualDomesticMammal otherDomesticMammal, boolean isAngry) { printSoundInWords("Neigh ", times, otherDomesticMammal, isAngry); } public final void neigh() { printNeigh(1, null, false); } public final void neigh(int times) { printNeigh(times, null, false); } public final void neigh(int times, VirtualDomesticMammal otherDomesticMammal) { printNeigh(times, otherDomesticMammal, false); } public final void neigh(int times, VirtualDomesticMammal otherDomesticMammal, boolean isAngry) { printNeigh(times, otherDomesticMammal, isAngry); } public final void printNicker(int times, VirtualDomesticMammal otherDomesticMammal, boolean isAngry) { printSoundInWords("Nicker ", times, otherDomesticMammal, isAngry); } public final void nicker() { printNicker(1, null, false); } public final void nicker(int times) { printNicker(times, null, false); } public final void nicker(int times, VirtualDomesticMammal otherDomesticMammal) { printNicker(times, otherDomesticMammal, false); } public final void nicker(int times, VirtualDomesticMammal otherDomesticMammal, boolean isAngry) { printNicker(times, otherDomesticMammal, isAngry); } @Override public final void talk() { nicker(); } } After we enter the previous lines, JShell will display the following messages: | update replaced class AmericanQuarterHorse | update replaced class ShireHorse | update replaced class Thoroughbred | update replaced variable american, reset to null | update replaced variable zelda, reset to null | update replaced variable willow, reset to null | update overwrote class VirtualHorse We could replace the definition for the VirtualHorse class and the subclasses were also updated. It is important to know that the variables we declared in JShell that held references to instances of subclasses of VirtualHorse were set to null. Summary In this article, we created many abstract and concrete classes. We learned to control whether subclasses can or cannot override members, and whether classes can be subclassed. We worked with instances of many subclasses and we understood that objects can take many forms. We worked with many instances and their methods in JShell to understand how the classes and the methods that we coded are executed. We used methods that performed operations with instances of different classes that had a common superclass. Resources for Article: Further resources on this subject: Getting Started with Sorting Algorithms in Java [article]  Introduction to JavaScript [article]  Using Spring JMX within Java Applications [article]

Replication Solutions in PostgreSQL

Packt
09 Mar 2017
14 min read
In this article by Chitij Chauhan, Dinesh Kumar, the authors of the book PostgreSQL High Performance Cookbook, we will talk about various high availability and replication solutions including some popular third-party replication tool like Slony. (For more resources related to this topic, see here.) Setting up hot streaming replication Here in this recipe we are going to set up a master/slave streaming replication. Getting ready For this exercise you would need two Linux machines each with the latest version of PostgreSQL 9.6 installed. We will be using the following IP addresses for master and slave servers. Master IP address: 192.168.0.4 Slave IP Address: 192.168.0.5 How to do it… Given are the sequence of steps for setting up master/slave streaming replication: Setup password less authentication between master and slave for postgres user. First we are going to create a user ID on the master which will be used by slave server to connect to the PostgreSQL database on the master server: psql -c "CREATE USER repuser REPLICATION LOGIN ENCRYPTED PASSWORD 'charlie';" Next would be to allow the replication user that was created in the previous step to allow access to the master PostgreSQL server. This is done by making the necessary changes as mentioned in the pg_hba.conf file: Vi pg_hba.conf host replication repuser 192.168.0.5/32 md5 In the next step we are going to configure parameters in the postgresql.conf file. These parameters are required to be set in order to get the streaming replication working: Vi /var/lib/pgsql/9.6/data/postgresql.conf listen_addresses = '*' wal_level = hot_standby max_wal_senders = 3 wal_keep_segments = 8 archive_mode = on archive_command = 'cp %p /var/lib/pgsql/archive/%f && scp %p [email protected]:/var/lib/pgsql/archive/%f' Once the parameter changes have been made in the postgresql.conf file in the previous step ,the next step would be restart the PostgreSQL server on the master server in order to get the changes made in the previous step come into effect: pg_ctl -D /var/lib/pgsql/9.6/data restart Before the slave can replicate the master we would need to give it the initial database to build off. For this purpose we will make a base backup by copying the primary server's data directory to the standby: psql -U postgres -h 192.168.0.4 -c "SELECT pg_start_backup('label', true)" rsync -a /var/lib/pgsql/9.6/data/ 192.168.0.5:/var/lib/pgsql/9.6/data/ --exclude postmaster.pid psql -U postgres -h 192.168.0.4 -c "SELECT pg_stop_backup()" Once the data directory in the previous step is populated ,next step is to configure the following mentioned parameters in the postgresql.conf file on the slave server: hot_standby = on Next would be to copy the recovery.conf.sample in the $PGDATA location on the slave server and then configure the following mentioned parameters: cp /usr/pgsql-9.6/share/recovery.conf.sample /var/lib/pgsql/9.6/data/recovery.conf standby_mode = on primary_conninfo = 'host=192.168.0.4 port=5432 user=repuser password=charlie' trigger_file = '/tmp/trigger.replication' restore_command = 'cp /var/lib/pgsql/archive/%f "%p"' Next would be to start the slave server: service postgresql-9.6 start Now that the preceding mentioned replication steps are set up we will now test for replication. 
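The manual psql-based test is shown next; as a complement, once the test database and table from those steps exist, the same check can also be scripted. The following is a minimal, hypothetical JDBC sketch added here for illustration, not part of the original recipe. It assumes the PostgreSQL JDBC driver is on the classpath, the host addresses match the setup above, and the postgres user's password is known (shown here as a placeholder).

// Hypothetical verification: insert a row on the master, then read it back from the standby.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReplicationCheck {
    public static void main(String[] args) throws Exception {
        String master  = "jdbc:postgresql://192.168.0.4:5432/test";
        String standby = "jdbc:postgresql://192.168.0.5:5432/test";
        String user = "postgres";
        String password = "changeme"; // placeholder credential, assumption

        // Write a row on the primary server.
        try (Connection c = DriverManager.getConnection(master, user, password);
             Statement st = c.createStatement()) {
            st.executeUpdate("INSERT INTO testtable VALUES (2, 'Replicated row')");
        }

        Thread.sleep(2000); // crude wait for the WAL record to be streamed and replayed

        // Read it back from the hot standby, which accepts read-only queries.
        try (Connection c = DriverManager.getConnection(standby, user, password);
             Statement st = c.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT testint, testchar FROM testtable WHERE testint = 2")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1) + " | " + rs.getString(2));
            }
        }
    }
}

If the row appears on the standby, streaming replication is working; if the test database or table does not exist yet, run the manual steps below first.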
On the master server, log in and issue the following SQL commands:

psql -h 192.168.0.4 -d postgres -U postgres -W
postgres=# create database test;
postgres=# \c test;
test=# create table testtable ( testint int, testchar varchar(40) );
CREATE TABLE
test=# insert into testtable values ( 1, 'What A Sight.' );
INSERT 0 1

On the slave server, we will now check whether the database and the table created in the previous step have been replicated:

psql -h 192.168.0.5 -d test -U postgres -W
test=# select * from testtable;
 testint |   testchar
---------+----------------
       1 | What A Sight.
(1 row)

The wal_keep_segments parameter determines how many WAL files are retained in the master's pg_xlog in case of network delays. However, if you do not want to assume a value for this, you can create a replication slot, which makes sure the master does not remove WAL files from pg_xlog until they have been received by the standbys. For more information refer to: https://www.postgresql.org/docs/9.6/static/warm-standby.html#STREAMING-REPLICATION-SLOTS.

How it works…

The following is the explanation for the steps performed in the preceding section. In the initial step, we create a user called repuser, which will be used by the slave server to make a connection to the primary server. In step 2, we make the necessary changes in the pg_hba.conf file to allow the master server to be accessed by the slave server using the repuser user ID that was created earlier. We then make the necessary parameter changes on the master in step 4 for configuring streaming replication. The following is a description of these parameters:

listen_addresses: This parameter is used to provide the IP addresses that you want PostgreSQL to listen on. A value of * indicates all available IP addresses.
wal_level: This parameter determines the level of WAL logging done. Specify hot_standby for streaming replication.
wal_keep_segments: This parameter specifies the number of 16 MB WAL files to retain in the pg_xlog directory. The rule of thumb is that more such files may be required to handle a large checkpoint.
archive_mode: Setting this parameter enables completed WAL segments to be sent to archive storage.
archive_command: This parameter is a shell command that is executed whenever a WAL segment is completed. In our case, we copy the file to the local archive directory and then use the secure copy command to send it across to the slave.
max_wal_senders: This parameter specifies the total number of concurrent connections allowed from the slave servers.

Once the necessary configuration changes have been made on the master server, we restart the PostgreSQL server on the master in order to bring the new configuration into effect. This is done in step 5. In step 6, we build the slave by copying the primary's data directory to it. Now, with the data directory available on the slave, the next step is to configure it. We make the necessary replication-related parameter change in the postgresql.conf file on the slave server. We set the following parameter on the slave:

hot_standby: This parameter determines whether we can connect and run queries while the server is in archive recovery or standby mode.

In the next step we are configuring the recovery.conf file.
This is required to be setup so that the slave can start receiving logs from the master. The following mentioned parameters are configured in the recovery.conf file on the slave:    standby_mode: This parameter when enabled causes PostgreSQL to work as a standby in a replication configuration.    primary_conninfo: This parameter specifies the connection information used by the slave to connect to the master. For our scenario the our master server is set as 192.168.0.4 on port 5432 and we are using the user ID repuser with password charlie to make a connection to the master. Remember that the repuser was the user ID which was created in the initial step of the preceding section for this purpose that is, connecting to the  master from the slave.    trigger_file: When slave is configured as a standby it will continue to restore the XLOG records from the master. The trigger_file parameter specifies what is used to trigger a slave to switch over its duties from standby and take over as master or being the primary server. At this stage the slave has been now fully configured and we then start the slave server and then replication process begins. In step 10 and 11 of the preceding section we are simply testing our replication. We first begin by creating a database test and then log into the test database and create a table by the name test table and then begin inserting some records into the test table. Now our purpose is to see whether these changes are replicated across the slave. To test this we then login into slave on the test database and then query the records from the test table as seen in step 10 of the preceding section. The final result that we see is that the all the records which are changed/inserted on the primary are visible on the slave. This completes our streaming replication setup and configuration. Replication using Slony Here in this recipe we are going to setup replication using Slony which is widely used replication engine. It replicates a desired set of tables data from one database to other. This replication approach is based on few event triggers which will be created on the source set of tables which will log the DML and DDL statements into a Slony catalog tables. By using Slony, we can also setup the cascading replication among multiple nodes. Getting ready The steps followed in this recipe are carried out on a CentOS Version 6 machine. We would first need to install Slony. The following mentioned are the steps needed to install Slony: First go to the mentioned web link and download the given software at http://slony.info/downloads/2.2/source/. 
Once you have downloaded the software, the next step is to unpack the tarball and then go to the newly created directory:

tar xvfj slony1-2.2.3.tar.bz2
cd slony1-2.2.3

In the next step we are going to configure, compile, and build the software:

./configure --with-pgconfigdir=/usr/pgsql-9.6/bin/
make
make install

How to do it…

The following is the sequence of steps required to replicate data between two tables using Slony replication:

First, start the PostgreSQL server if it is not already started:

pg_ctl -D $PGDATA start

In the next step we will create two databases, test1 and test2, which will be used as the source and target databases:

createdb test1
createdb test2

In the next step we will create the table t_test on the source database test1 and insert some records into it:

psql -d test1
test1=# create table t_test (id numeric primary key, name varchar);
test1=# insert into t_test values(1,'A'),(2,'B'), (3,'C');

We will now set up the target database by copying the table definitions from the source database test1:

pg_dump -s -p 5432 -h localhost test1 | psql -h localhost -p 5432 test2

We will now connect to the target database test2 and verify that there is no data in the tables of the test2 database:

psql -d test2
test2=# select * from t_test;

We will now set up a slonik script for the master/slave, that is, source/target setup:

vi init_master.slonik

#! /bin/slonik
cluster name = mycluster;
node 1 admin conninfo = 'dbname=test1 host=localhost port=5432 user=postgres password=postgres';
node 2 admin conninfo = 'dbname=test2 host=localhost port=5432 user=postgres password=postgres';
init cluster ( id=1);
create set (id=1, origin=1);
set add table(set id=1, origin=1, id=1, fully qualified name = 'public.t_test');
store node (id=2, event node = 1);
store path (server=1, client=2, conninfo='dbname=test1 host=localhost port=5432 user=postgres password=postgres');
store path (server=2, client=1, conninfo='dbname=test2 host=localhost port=5432 user=postgres password=postgres');
store listen (origin=1, provider = 1, receiver = 2);
store listen (origin=2, provider = 2, receiver = 1);

We will now create a slonik script for subscription on the slave, that is, the target:

vi init_slave.slonik #!
/bin/slonik cluster name = mycluster; node 1 admin conninfo = 'dbname=test1 host=localhost port=5432 user=postgres password=postgres'; node 2 admin conninfo = 'dbname=test2 host=localhost port=5432 user=postgres password=postgres'; subscribe set ( id = 1, provider = 1, receiver = 2, forward = no); We will now run the init_master.slonik script created in step 6 and will run this on the master: cd /usr/pgsql-9.6/bin slonik init_master.slonik We will now run the init_slave.slonik script created in step 7 and will run this on the slave that is, target: cd /usr/pgsql-9.6/bin slonik init_slave.slonik In the next step we will start the master slon daemon: nohup slon mycluster "dbname=test1 host=localhost port=5432 user=postgres password=postgres" & In the next step we will start the slave slon daemon: nohup slon mycluster "dbname=test2 host=localhost port=5432 user=postgres password=postgres" & In the next step we will connect to the master that is, source database test1 and insert some records in the t_test table: psql -d test1 test1=# insert into t_test values (5,'E'); We will now test for replication by logging to the slave that is, target database test2 and see if the inserted records into the t_test table in the previous step are visible: psql -d test2 test2=# select * from t_test; id | name ----+------ 1 | A 2 | B 3 | C 5 | E (4 rows) How it works… We will now discuss about the steps followed in the preceding section: In step 1, we first start the PostgreSQL server if not already started. In step 2 we create two databases namely test1 and test2 that will serve as our source (master) and target (slave) databases. In step 3 of the preceding section we log into the source database test1 and create a table t_test and insert some records into the table. In step 4 of the preceding section we set up the target database test2 by copying the table definitions present in the source database and loading them into the target database test2 by using pg_dump utility. In step 5 of the preceding section we login into the target database test2 and verify that there are no records present in the table t_test because in step 5 we only extracted the table definitions into test2 database from test1 database. In step 6 we setup a slonik script for master/slave replication setup. In the file init_master.slonik we first define the cluster name as mycluster. We then define the nodes in the cluster. Each node will have a number associated to a connection string which contains database connection information. The node entry is defined both for source and target databases. The store_path commands are necessary so that each node knows how to communicate with the other. In step 7 we setup a slonik script for subscription of the slave that is, target database test2. Once again the script contains information such as cluster name, node entries which are designed a unique number related to connect string information. It also contains a subscriber set. In step 8 of the preceding section we run the init_master.slonik on the master. Similarly in step 9 we run the init_slave.slonik on the slave. In step 10 of the preceding section we start the master slon daemon. In step 11 of the preceding section we start the slave slon daemon. Subsequent section from step 12 and 13 of the preceding section are used to test for replication. For this purpose in step 12 of the preceding section we first login into the source database test1 and insert some records into the t_test table. 
To check if the newly inserted records have been replicated to target database test2 we login into the test2 database in step 13 and then result set obtained by the output of the query confirms that the changed/inserted records on the t_test table in the test1 database are successfully replicated across the target database test2. You may refer to the link given for more information regarding Slony replication at http://slony.info/documentation/tutorial.html. Summary We have seen how to setup streaming replication and then we looked at how to install and replicate using one popular third-party replication tool Slony. Resources for Article: Further resources on this subject: Introducing PostgreSQL 9 [article] PostgreSQL Cookbook - High Availability and Replication [article] PostgreSQL in Action [article]
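To complement the manual psql check in the Slony recipe above, the following short sketch compares the rows of t_test in the source (test1) and target (test2) databases after replication. It is illustrative only, not part of the original recipe; it assumes the psycopg2 driver is installed and that both databases accept local connections for the postgres user with the password used in the slonik scripts:

# verify_slony.py: compare t_test rows between the Slony source and target databases.
# Assumes psycopg2 is installed and local connections are allowed for the postgres user.
import psycopg2

def fetch_rows(dbname):
    conn = psycopg2.connect(host="localhost", port=5432, dbname=dbname,
                            user="postgres", password="postgres")
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT id, name FROM t_test ORDER BY id")
            return cur.fetchall()
    finally:
        conn.close()

source_rows = fetch_rows("test1")
target_rows = fetch_rows("test2")
print("source rows:", source_rows)
print("target rows:", target_rows)
print("in sync" if source_rows == target_rows
      else "NOT in sync (replication may still be catching up)")

Because Slony replicates asynchronously, a brief lag between an insert on test1 and its appearance on test2 is normal.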


Getting Started with Salesforce Lightning Experience

Packt
02 Mar 2017
8 min read
In this article by Rakesh Gupta, author of the book Mastering Salesforce CRM Administration, we will start with the overview of the Salesforce Lightning Experience and its benefits, which takes the discussion forward to the various business use cases where it can boost the sales representatives’ productivity. We will also discuss different Sales Cloud and Service Cloud editions offered by Salesforce. (For more resources related to this topic, see here.) Getting started with Lightning Experience Lightning Experience is a new generation productive user interface designed to help your sales team to close more deals and sell quicker and smarter. The upswing in mobile usages is influencing the way people work. Sales representatives are now using mobile to research potential customers, get the details of nearby customer offices, socially connect with their customers, and even more. That's why Salesforce synced the desktop Lightning Experience with mobile Salesforce1. Salesforce Lighting Editions With its Summer'16 release, Salesforce announced the Lightning Editions of Sales Cloud and Service Cloud. The Lightning Editions are a completely reimagined packaging of Sales Cloud and Service Cloud, which offer additional functionality to their customers and increased productivity with a relatively small increase in cost. Sales Cloud Lightning Editions Sales Cloud is a product designed to automate your sales process. By implementing this, an organization can boost its sales process. It includes Campaign,Lead, Account, Contact,OpportunityReport, Dashboard, and many other features as well,. Salesforce offers various Sales Cloud editions, and as per business needs, an organization can buy any of these different editions, which are shown in the following image: Let’s take a closer look at the three Sales Cloud Lightning Editions: Lightning Professional: This edition is for small and medium enterprises (SMEs). It is designed for business needs where a full-featured CRM functionality is required. It provides the CRM functionality for marketing, sales, and service automation. Professional Edition is a perfect fit for small- to mid-sized businesses. After the Summer'16 release, in this edition, you can create a limited number of processes, record types, roles, profiles, and permission sets. For each Professional Edition license, organizations have to pay USD 75 per month. Lightning Enterprise: This edition is for businesses with large and complex business requirements. It includes all the features available in the Professional Edition, plus it provides advanced customization capabilities to automate business processes and web service API access for integration with other systems. Enterprise Editions also include processes, workflow, approval process, profile, page layout, and custom app development. In addition, organizations also get the Salesforce Identity feature with this edition. For each Enterprise Edition license, organizations have to pay USD 150 per month. Lightning Unlimited: This edition includes all Salesforce.com features for an entire enterprise. It provides all the features of Enterprise Edition and a new level of Platform flexibility for managing and sharing all of their information on demand. The key features of Salesforce.com Unlimited Edition (in addition to Enterprise features) are premier support, full mobile access, and increased storage limits. It also includes Work.com, Service Cloud, knowledge base, live agent chat, multiple sandboxes and unlimited custom app development. 
While purchasing Salesforce.com licenses, organizations have to negotiate with Salesforce to get the maximum number of sandboxes. To know more about these license types, please visit the Salesforce website at https://www.salesforce.com/sales-cloud/pricing/. Service Cloud Lightning Editions Service Cloud helps your organization to streamline the customer service process. Users can access it anytime, anywhere, and from any device. It will help your organization to close a case faster. Service agents can connect with customers through the agent console, meaning agents can interact with customers through multiple channels. Service Cloud includes case management, computer telephony integration (CTI), Service Cloud console, knowledge base, Salesforce communities, Salesforce Private AppExchange, premier+ success plan, report, and dashboards, with many other analytics features. The various Service Cloud Lightning Editions are shown in the following image: Let’s take a closer look at the three Service Cloud Lightning Edition: Lightning Professional: This edition is for SMEs. It provides CRM functionality for customer support through various channels. It is a perfect fit for small- to mid-sized businesses. It includes features, such as case management, CTI integration, mobile access, solution management, content library, reports, and analytics, along with Sales features such as opportunity management and forecasting. After the Summer'16 release, in this edition, you can create a limited number of processes, record types, roles, profiles, and permission sets. For each Professional Edition license, organizations have to pay USD 75 per month. Lightning Enterprise: This edition is for businesses with large and complex business requirements. It includes all the features available in the Professional edition, plus it provides advanced customization capabilities to automate business processes and web service API access for integration with other systems. It also includes Service console, Service contract and entitlement management, workflow and approval process, web chat, offline access, and knowledge base. Organizations get Salesforce Identity feature with this edition. For each Enterprise Edition license, organizations have to pay USD 150 per month. Lightning Unlimited: This edition includes all Salesforce.com features for an entire enterprise. It provides all the features of Enterprise Edition and a new level of platform flexibility for managing and sharing all of their information on demand. The key features of Salesforce.com Unlimited edition (in addition to the Enterprise features) are premier support, full mobile access, unlimited custom apps, and increased storage limits. It also includes Work.com, Service Cloud, knowledge base, live agent chat, multiple sandboxes, and unlimited custom app development. While purchasing the licenses, organizations have to negotiate with Salesforce to get the maximum number of sandboxes. To know more about these license types, please visit the Salesforce website at https://www.salesforce.com/service-cloud/pricing/. Creating a Salesforce developer account To get started with the given topics in this, it is recommended to use a Salesforce developer account. Using Salesforce production instance is not essential for practicing. If you currently do not have your developer account, you can create a new Salesforce developer account. 
The Salesforce developer account is completely free and can be used to practice newly learned concepts, but you cannot use this for commercial purposes. To create a Salesforce developer account follow these steps: Visit the website http://developer.force.com/. Click on the Sign Up button. It will open a sign up page; fill it out to create one for you. The signup page will look like the following screenshot: Once you register for the developer account, Salesforce.com will send you login details on the e-mail ID you have provided during the registration. By following the instructions in the e-mail, you are ready to get started with Salesforce. Enabling the Lightning Experience for Users Once you are ready to roll out the Lightning Experience for your users, navigate to the Lightning Setup page, which is available in Setup, by clicking Lightning Experience. The slider button at the bottom of the Lightning Setup page, shown in the following screenshot, enables Lightning Experience for your organization:. Flip that switch, and Lightning Experience will be enabled for your Salesforce organization. The Lightning Experience is now enabled for all standard profiles by default. Granting permission to users through Profile Depending on the number of users for a rollout, you have to decide how to enable the Lightning Experience for them. If you are planning to do a mass rollout, it is better to update Profiles. Business scenario:Helina Jolly is working as a system administrator in Universal Container. She has received a requirement to enable Lightning Experience for a custom profile, Training User. First of all, create a custom profile for the license type, Salesforce, and give it the name, Training User. To enable the Lightning Experience for a custom profile, follow these instructions: In the Lightning Experience user interface, click on page-level action-menu | ADMINISTRATION | Users | Profiles, and then select the Training User profile, as shown in the following screenshot: Then, navigate to theSystem Permission section, and select the Lightning Experience User checkbox. Granting permission to users through permission sets If you want to enable the Lightning Experience for a small group of users, or if you are not sure whether you will keep the Lightning Experience on for a group of users, consider using permission sets. Permission sets are mainly a collection of settings and permissions that give the users access to numerous tools and functions within Salesforce. By creating a permission set, you can grant the Lightning Experience user permission to the users in your organization. Switching between Lightning Experience and Salesforce Classic If you have enabled Lightning Experience for your users, they can use the switcher to switch back and forth between Lightning Experience and Salesforce Classic. The switcher is very smart. Every time a user switches, it remembers that user experience as their new default preference. So, if a user switches to Lightning Experience, it is now their default user experience until they switch back to Salesforce Classic. If you want to restrict your users to switch back to Salesforce Classic, you have to develop an Apex trigger or process with flow. When the UserPreferencesLightningExperiencePreferred field on the user object is true, then it redirects the user to the Lightning Experience interface. Summary In this article, we covered the overview of Salesforce Lightning Experience. We also covered various Salesforce editions available in the market. 
We also went through standard and custom objects. Resources for Article: Further resources on this subject: Configuration in Salesforce CRM [article] Salesforce CRM Functions [article] Introduction to vtiger CRM [article]


Functional Building Blocks

Packt
02 Mar 2017
3 min read
In this article by Atul Khot, the author of the book Learning Functional Data Structures and Algorithms, we get a refresher on some fundamental concepts. How fast could an algorithm run? How does it fare when you have ten input elements versus a million? To answer such questions, we need to be aware of the notion of algorithmic complexity, which is expressed using the Big O notation. An O(1) algorithm is faster than an O(logn) one, for example. What is this notation? It is a way of measuring the efficiency of an algorithm in proportion to the number of data items, N, being processed. (For more resources related to this topic, see here.)

The Big O notation

In simple words, this notation is used to describe how fast an algorithm will run. It describes the growth of the algorithm's running time versus the size of the input data. Here is a simple example. Consider the following Scala snippet, reversing a linked list:

scala> def revList(list: List[Int]): List[Int] = list match {
     |   case x :: xs => revList(xs) ++ List(x)
     |   case Nil => Nil
     | }
revList: (list: List[Int])List[Int]

scala> revList(List(1,2,3,4,5,6))
res0: List[Int] = List(6, 5, 4, 3, 2, 1)

A quick question for you: how many times does the first case clause, namely case x :: xs => revList(xs) ++ List(x), match for a list of six elements? Note that the clause matches when the list is non-empty. When it matches, we reduce the list by one element and recursively call the method. It is easy to see that the clause matches six times. As a result, the list method ++ also gets invoked six times. The ++ method takes time directly proportional to the length of the list on its left-hand side. Here is a plot of the number of iterations against time: To reverse a list of nine elements, we iterate over the elements 45 times (9+8+7+6+5+4+3+2+1). For a list with 20 elements, the number of iterations is 210. Here is a table showing some more example values: The number of iterations is proportional to n²/2. It gives us an idea of how the algorithm runtime grows for a given number of elements. The moment we go from 100 to 1,000 elements, the algorithm needs to do about 100 times more work. Another example of a quadratic runtime algorithm is selection sorting. It keeps finding the minimum and growing the sorted sublist. The algorithm keeps scanning the list for the next minimum and hence always has O(n²) complexity; this is because it does O(n²) comparisons. Refer to http://www.studytonight.com/data-structures/selection-sorting for more information. Binary search is a very popular search algorithm with a complexity of O(logn). The succeeding figure shows the growth table for O(logn). When the number of input elements jumps from 256 to 1,048,576, the growth is only from 8 to 20. Binary search is a blazing fast algorithm, as the runtime grows marginally when the number of input elements jumps from a couple of hundred to over a million (a short Python sketch of binary search follows the summary below). Refer to https://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/ for an excellent description of the O notation. Refer to the following link that has a graphical representation of the various growth functions: https://therecyclebin.files.wordpress.com/2008/05/time-complexity.png

Summary

This article was a whirlwind tour of the basics. We started with a look at the Big O notation, which is used to reason about how fast an algorithm could run. FP programs use collections heavily. We looked at some common collection operations and their complexities.
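The following is a small illustrative sketch in Python (rather than the chapter's Scala) of the binary search just discussed; the element counts mirror the 256 versus 1,048,576 comparison above:

# Iterative binary search: the search space halves on every step, so the loop
# body runs on the order of log2(n) times. For the worst-case searches below
# that is 9 steps for 256 elements and 21 steps for 1,048,576 elements,
# in line with the 8-to-20 growth figures quoted above.
def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    steps = 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid, steps
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, steps

for n in (256, 1048576):
    data = list(range(n))
    index, steps = binary_search(data, n - 1)  # near worst case: the last element
    print("n = {}: found at index {} in {} steps".format(n, index, steps))

Quadrupling the input size by a factor of 4,096 here only raises the step count from 9 to 21, which is exactly the logarithmic growth the chapter describes.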
Resources for Article: Further resources on this subject: Algorithm Analysis [article] Introducing Algorithm Design Paradigms [article] Getting Started with Sorting Algorithms in Java [article]

Cloud Native Applications

Packt
09 Feb 2017
5 min read
In this article by Ranga Rao Karanam, the author of the book Mastering Spring, we will see what are Cloud Native applications and Twelve Factor App. (For more resources related to this topic, see here.) Cloud Native applications Cloud is disrupting the world. A number of possibilities emerge that were never possible before. Organizations are able to provision computing, network and storage devices on demand. This has high potential to reduce costs in a number of industries. Consider the retail industry where there is high demand in pockets (Black Friday, Holiday Season and so on). Why should they pay for hardware round the year when they could provision it on demand? While we would like to be benefit from the possibilities of the cloud, these possibilities are limited by architecture and the nature of applications. How do we build applications that can be easily deployed on the cloud? That's where Cloud Native applications come into picture. Cloud Native applications are those that can easily be deployed on the cloud. These applications share a few common characteristics. We will begin with looking at Twelve Factor App - A combination of common patterns among Cloud Native applications. Twelve Factor App Twelve Factor App evolved from experiences of engineers at Heroku. This is a list of patterns that are typically used in Cloud Native application architectures. It is important to note, that an App here refers to a single deployable unit. Essentially every microservice is an App (because each microservice is independently deployable). One codebase Each App has one codebase in revision control. There can be multiple environments where the App can be deployed. However, all these environments use code from a single codebase. An example for anti-pattern is building a deployable from multiple codebases. Dependencies Explicitly declare and isolate dependencies. Typical Java applications use build management tools like Maven and Gradle to isolate and track dependencies. The following screenshot shows the typical Java applications managing dependencies using Maven: The following screenshot shows the content of the file: Config All applications have configuration that varies from one environment to another environment. Configuration is typically littered at multiple locations - Application code, property files, databases, environment variables, Java Naming and Directory Interface (JNDI) and system variables are a few examples. A Twelve Factor App should store config in the environment. While environment variables are recommended to manage configuration in a Twelve Factor App, other alternatives like having a centralized repository for application configuration should be considered for more complex systems. Irrespective of mechanism used, we recommended to manage configuration outside application code (independent of the application deployable unit). Use one standardized way of configuration Backing services Typically applications depend on other services being available - data-stores, external services among others. Twelve Factor App treats backing services as attached resources. A backing service is typically declared via an external configuration. Loose coupling to a backing service has many advantages including ability to gracefully handle an outage of a backing service. Build, release, run Strictly separate build and run stages. Build: Creates an executable bundle (ear, war or jar) from code and dependencies that can be deployed to multiple environments. 
Release: Combine the executable bundle with specific environment configuration to deploy in an environment. Run: Run the application in an execution environment using a specific release An anti-pattern is to build separate executable bundles specific for each environment. Stateless A Twelve Factor App does not have state. All data that it needs is stored in a persistent store. An anti-pattern is a sticky session. Port binding A Twelve Factor App exposes all services using port binding. While it is possible to have other mechanisms to expose services, these mechanisms are implementation dependent. Port binding gives full control of receiving and handling messages irrespective of where an application is deployed. Concurrency A Twelve Factor App is able to achieve more concurrency by scaling out horizontally. Scaling vertically has its limits. Scaling out horizontally provides opportunities to expand without limits. Disposability A Twelve Factor App should promote elastic scaling. Hence, they should be disposable. They can be started and stopped when needed. A Twelve Factor App should: Have minimum start up time. Long start up times means long delay before an application can take requests. Shutdown gracefully. Handle hardware failures gracefully. Environment parity All the environments - development, test, staging, and production - should be similar. They should use same processes and tools. With continuous deployment, they should have similar code very frequently. This makes finding and fixing problems easier. Logs as event streams Visibility is critical to a Twelve Factor App. Since applications are deployed on the cloud and are automatically scaled, it is important to have a centralized visibility into what's happening across different instances of the applications. Treating all logs as stream enables routing of the log stream to different destinations for viewing and archival. This stream can be used to debug issues, perform analytics and create alerting systems based on error patterns. No distinction of admin processes Twelve Factor Apps treat administrative tasks (migrations, scripts) similar to normal application processes. Summary This article thus explains about Cloud Native applications and what are Twelve Factor Apps. Resources for Article: Further resources on this subject: Cloud and Async Communication [article] Setting up of Software Infrastructure on the Cloud [article] Integrating Accumulo into Various Cloud Platforms [article]
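To make the Config factor described in the article above a bit more concrete, here is a minimal, hypothetical Python sketch that keeps configuration in environment variables instead of in code or property files; the variable names used here (APP_DB_URL, APP_CACHE_TTL, APP_DEBUG) are invented for illustration and are not part of the original article:

# config.py: read configuration from the environment (Twelve Factor, the Config factor).
# The variable names below are hypothetical examples.
import os

class Config:
    def __init__(self, environ=os.environ):
        # Required setting: fail fast if it is missing.
        self.db_url = environ["APP_DB_URL"]
        # Optional settings with sensible defaults.
        self.cache_ttl = int(environ.get("APP_CACHE_TTL", "300"))
        self.debug = environ.get("APP_DEBUG", "false").lower() == "true"

if __name__ == "__main__":
    # Example: APP_DB_URL=postgres://localhost/app python config.py
    cfg = Config()
    print(cfg.db_url, cfg.cache_ttl, cfg.debug)

Because the values come from the environment, the same build can be promoted unchanged from development to staging to production, which is the separation the build, release, run factor asks for.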


Writing Applications that Scale

Packt
08 Feb 2017
13 min read
In this article by Anand Balachandran Pillai, the author of the book Software Architecture with Python, we learn to build complex software architecture for a software application using Python. (For more resources related to this topic, see here.)

Imagine the checkout counter of a supermarket on a Saturday evening, the usual rush hour time. It is common to see long queues of people waiting to check out with their purchases. What would a store manager do to reduce the rush and waiting time? A typical manager would try a few approaches, including telling those manning the checkout counters to pick up their speed, and trying to redistribute people to different queues so that each queue roughly has the same wait time. In other words, the manager would manage the current load with the available resources by optimizing the performance of the existing resources. However, if the store has existing counters which are not in operation and enough people at hand to manage them, the manager could enable those counters and move people to these new counters; in other words, add resources to the store to scale the operation.

Software systems scale in a similar way. An existing software application can be scaled by adding compute resources to it. When the system scales by either adding or making better use of resources inside a compute node, such as CPU or RAM, it is said to scale vertically or scale up. On the contrary, when a system scales by adding more compute nodes to it, such as by creating a load-balanced cluster, it is said to scale horizontally or scale out. The degree to which a software system is able to scale when compute resources are added is called its scalability. Scalability is measured in terms of how much the system's performance characteristics, such as throughput or latency, improve with respect to the addition of resources. For example, if a system doubles its capacity by doubling the number of servers, it is scaling linearly.

Increasing the concurrency of a system often increases its scalability. In the preceding supermarket example, the manager is able to scale out his operations by opening additional counters. In other words, he increases the amount of concurrent processing done in his store. Concurrency is the amount of work that gets done simultaneously in a system.

We look at different techniques of scaling a software application with Python. We start with concurrency techniques within a machine, such as multithreading and multiprocessing, and go on to discuss asynchronous execution. We also look at how to scale out an application across multiple servers, along with some theoretical aspects of scalability and its relation to availability.

Scalability and performance

How do we measure the scalability of a system? Let's take an example and see how this could be done. Let's say our application is a simple report generation system for employees. It is able to load employee data from a database and generate a variety of reports in bulk, such as payslips, tax deduction reports, employee leave reports, and so on. The system is able to generate 120 reports per minute; this is the throughput or capacity of the system, expressed as the number of successfully completed operations in a given unit of time. Let's say the time it takes to generate a report at the server side (the latency) is roughly 2 seconds. Let's say the architect decides to scale up the system by doubling the RAM on its server, thereby scaling up the system.
Once this is done, a test shows that the system is able to increase its throughput to 180 reports per minute. The latency remains the same at 2 seconds. So at this point, the system has scaled close to linearly in terms of the memory added. The scalability of the system expressed in terms of throughput increase is as follows:

Scalability (throughput) = 180/120 = 1.5X

As the second step, the architect decides to double the number of servers on the backend, all with the same memory. After this step, he finds that the system's throughput has now increased to 350 reports per minute. The scalability achieved by this step is as follows:

Scalability (throughput) = 350/180 = 1.9X

The system has now responded much better, with a close to linear increase in scalability. After further analysis, the architect finds that by rewriting the code that processes reports on the server to run in multiple processes instead of a single process, he is able to reduce the processing time at the server, and hence the latency of each request, by roughly 1 second per request at peak time. The latency has now gone down from 2 seconds to 1 second. The system's performance with respect to latency has become better, as follows:

Performance (latency) = 2/1 = 2X

How does this affect the scalability? Since the latency per request has come down, the system overall would be able to respond to similar loads at a faster rate (since the processing time per request is now less) than it was able to earlier. In other words, with the exact same resources, the system's throughput performance, and hence its scalability, would have increased, assuming other factors remain the same.

Let's summarize what we discussed so far in the following list:

First, the architect increased the throughput of a single system by scaling it up with extra memory as a resource, which increased the overall scalability of the system. In other words, he scaled the performance of a single system by scaling up, which boosted the overall performance of the whole system.

Next, he added more nodes to the system, and hence its ability to perform work concurrently, and found that the system responded well by rewarding him with a near linear scalability factor. Simply put, he increased the throughput of the system by scaling its resource capacity. In other words, he increased the scalability of the system by scaling out, that is, by adding more compute nodes.

Finally, he made a critical fix by running a computation in more than one process. In other words, he increased the concurrency of a single system by dividing the computation into more than one part. He found that this increased the performance characteristics of the application by reducing its latency, potentially setting up the application to handle workloads better at high stress. The short sketch that follows simply recomputes these factors from the raw numbers.
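This is only a back-of-the-envelope illustration; the figures are taken directly from the example above:

# Recompute the scalability and performance factors from the example above.
throughput_baseline = 120    # reports per minute on the original system
throughput_more_ram = 180    # after doubling the RAM (scale up)
throughput_more_nodes = 350  # after doubling the number of servers (scale out)
latency_before = 2.0         # seconds per report
latency_after = 1.0          # seconds per report, after the multi-process rewrite

print("Scale up  : {:.2f}X".format(throughput_more_ram / throughput_baseline))    # 1.50X
print("Scale out : {:.2f}X".format(throughput_more_nodes / throughput_more_ram))  # 1.94X
print("Latency   : {:.2f}X".format(latency_before / latency_after))               # 2.00X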
We find that there is a relation between scalability, performance, concurrency, and latency, as follows:

When the performance of a single system goes up, the scalability of the total system goes up.
When an application scales in a single machine by increasing its concurrency, it has the potential to improve performance, and hence the net scalability of the system in deployment.
When a system reduces its performance time at the server, or its latency, it positively contributes to scalability.

We have captured the relationships between concurrency, latency, performance, and scalability in the following table:

Concurrency | Latency | Performance | Scalability
High        | Low     | High        | High
High        | High    | Variable    | Variable
Low         | High    | Poor        | Poor

An ideal system is one which has good concurrency and low latency: a system that has high performance and would respond better to scaling up and/or scaling out. A system with high concurrency but also high latency would have variable characteristics; its performance, and hence scalability, would be potentially very sensitive to other factors such as network load, current system load, geographical distribution of compute resources and requests, and so on. A system with low concurrency and high latency is the worst case: it would be difficult to scale such a system as it has poor performance characteristics. The latency and concurrency issues should be addressed before the architect decides to scale the system either horizontally or vertically. Scalability is always described in terms of variation in performance throughput.

Concurrency

A system's concurrency is the degree to which the system is able to perform work simultaneously instead of sequentially. An application written to be concurrent can, in general, execute more units of work in a given time than one which is written to be sequential or serial. When we make a serial application concurrent, we make the application make better use of the existing compute resources in the system, CPU and/or RAM, at a given time. Concurrency, in other words, is the cheapest way of making an application scale inside a machine in terms of the cost of compute resources. Concurrency can be achieved using different techniques. The common techniques are as follows (a minimal multiprocessing sketch follows this list):

Multithreading: The simplest form of concurrency is to rewrite the application to perform parallel tasks in different threads. A thread is the simplest sequence of programming instructions that can be performed by a CPU. A program can consist of any number of threads. By distributing tasks to multiple threads, a program can execute more work simultaneously. All threads run inside the same process.

Multiprocessing: The next step of concurrency is to scale the program to run in multiple processes instead of a single process. Multiprocessing involves more overhead than multithreading in terms of message passing and shared memory. However, programs that perform a lot of latent operations, such as disk reads, and those which perform a lot of CPU-heavy computation, can benefit more from multiple processes than from multiple threads.

Asynchronous processing: In this technique, operations are performed asynchronously; in other words, there is no ordering of concurrent tasks with respect to time. Asynchronous processing usually picks tasks from a queue of tasks and schedules them to execute at a future time, often receiving the results in callback functions or special future objects. Typically, operations are performed in a single thread.
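The following is that multiprocessing sketch; the workload (summing squares) is an invented example rather than one from the book, the pool size of four assumes a machine with roughly that many cores, and the observed speed-up will vary accordingly:

# cpu_bound_pool.py: a serial loop versus multiprocessing.Pool for a CPU-bound task.
# The workload (sum of squares) is only an illustrative stand-in.
import time
from multiprocessing import Pool

def sum_of_squares(n):
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    jobs = [2000000] * 8

    start = time.time()
    serial_results = [sum_of_squares(n) for n in jobs]
    print('serial  : %.2f seconds' % (time.time() - start))

    start = time.time()
    with Pool(processes=4) as pool:  # assumes about four CPU cores are available
        pool_results = pool.map(sum_of_squares, jobs)
    print('parallel: %.2f seconds' % (time.time() - start))

    assert serial_results == pool_results

On a multi-core machine the Pool version typically finishes in a fraction of the serial time, which is the benefit for CPU-heavy work described above; in CPython, threads would not speed up this particular workload because of the global interpreter lock.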
There are other forms of concurrent computing, but in this article we will focus our attention on only these three; hence, we are not introducing any other types of concurrent computing here.

Python, especially Python 3, has built-in support for all these types of concurrent computing techniques in its standard library. For example, it supports multithreading via its threading module and multiple processes via its multiprocessing module. Asynchronous execution support is available via the asyncio module. A form of concurrent processing which combines asynchronous execution with threads and processes is available via the concurrent.futures module.

Concurrency versus parallelism

We will take a brief look at the concept of concurrency and its close cousin, namely parallelism. Both concurrency and parallelism are about executing work simultaneously rather than sequentially. However, in concurrency, the two tasks need not be executing at the exact same time. Instead, they just need to be scheduled to be executed simultaneously. Parallelism, on the other hand, requires that both the tasks execute together at a given point in time.

To take a real-life example, let's say you are painting two exterior walls of your house. You have employed just one painter, and you find that he is taking a lot more time than you thought. You can solve the problem in two ways:

Instruct the painter to paint a few coats on one wall before switching to the next wall and doing the same there. Assuming he is efficient, he will work on both walls simultaneously (though not at the same time) and achieve the same degree of completion on both walls in a given time. This is a concurrent solution.

Employ one more painter, and instruct the first painter to paint the first wall and the second painter to paint the second wall. This is a truly parallel solution.

For example, two threads performing byte code computations on a single-core CPU are not exactly performing parallel computation, as the CPU can accommodate only one thread at a time. However, from a programmer's perspective, they are concurrent, since the CPU scheduler performs fast switching in and out of the threads, so they look parallel for all appearances and purposes. But they are not truly parallel. However, on a multi-core CPU, two threads can perform parallel computations at any given time in its different cores. This is true parallelism. Parallel computation requires computation resources to increase at least linearly with respect to its scale. Concurrent computation can be achieved using techniques of multitasking, where work is scheduled and executed in batches, making better use of existing resources. In this article, we will use the term concurrent nearly uniformly to indicate both types of execution. In some places, it may indicate concurrent processing in the traditional way, and in some others, it may indicate true parallel processing. Kindly use the context to disambiguate.

Concurrency in Python – multithreading

We will start our discussion of concurrent techniques in Python with multithreading. Python supports multiple threads in programming via its threading module. The threading module exposes a Thread class that encapsulates a thread of execution.
Along with it, it also exposes the following synchronization primitives:

Lock object: This is useful for synchronized, protected access to shared resources, as is its cousin, RLock.
Condition object: This is useful for threads to synchronize while waiting for arbitrary conditions.
Event object: This provides a basic signaling mechanism between threads.
Semaphore object: This allows synchronized access to limited resources.
Barrier object: This allows a fixed number of threads to wait for each other, synchronize to a particular state, and proceed.

The thread objects in Python can be combined with the synchronized Queue class in the queue module for implementing thread-safe producer/consumer workflows.

Thumbnail generator

Let's start our discussion of multithreading in Python with the example of a program which is used to generate thumbnails of image URLs. We use the Python Imaging Library (PIL) for performing the following operation:

# thumbnail_converter.py
from PIL import Image
import urllib.request

def thumbnail_image(url, size=(64, 64), format='.png'):
    """ Save thumbnail of an image URL """
    im = Image.open(urllib.request.urlopen(url))
    # filename is last part of the URL minus extension + '.format'
    pieces = url.split('/')
    filename = ''.join((pieces[-2], '_', pieces[-1].split('.')[0], '_thumb', format))
    im.thumbnail(size, Image.ANTIALIAS)
    im.save(filename)
    print('Saved', filename)

This works very well for a single URL. Let's say we want to convert five image URLs to their thumbnails, as shown in the following code snippet:

img_urls = ['https://dummyimage.com/256x256/000/fff.jpg',
            'https://dummyimage.com/320x240/fff/00.jpg',
            'https://dummyimage.com/640x480/ccc/aaa.jpg',
            'https://dummyimage.com/128x128/ddd/eee.jpg',
            'https://dummyimage.com/720x720/111/222.jpg']

The code for using the preceding function would be as follows:

for url in img_urls:
    thumbnail_image(url)

Let's see how such a function performs with respect to the time taken: Let's now scale the program to multiple threads so that we can perform the conversions concurrently. Here is the rewritten code to run each conversion in its own thread (not showing the function itself as it hasn't changed):

import threading

for url in img_urls:
    t = threading.Thread(target=thumbnail_image, args=(url,))
    t.start()

Take a look at the response time of the threaded thumbnail converter for five URLs, as shown in the following screenshot: With this change, the program returns in 1.76 seconds, almost equal to the time taken by a single URL in the serial execution we saw earlier. In other words, the program has now linearly scaled with respect to the number of threads. Note that we had to make no change to the function itself to get this scalability boost.

Summary

In this article, you learned the importance of writing scalable applications. We also saw the relationships between concurrency, latency, performance, and scalability, and the techniques we can use to achieve concurrency. You also learned how to generate thumbnails of image URLs using PIL.

Resources for Article:

Further resources on this subject:
Putting the Fun in Functional Python [article]
Basics of Jupyter Notebook and Python [article]
Jupyter and Python Scripting [article]


Measuring Geographic Distributions with ArcGIS Tool

Packt
08 Feb 2017
5 min read
In this article by Eric Pimpler, the author of the book Spatial Analytics with ArcGIS, you will be introduced to the use of spatial statistics tool available in ArcGIS to solve complex geographic analysis. Obtaining basic spatial statistics about a dataset is often the first step in the analysis of geographic data. The Measuring Geographic Distributions toolset in the ArcGIS Spatial Statistics Tools toolbox contains a tool that provides descriptive geographic statistics such as the Central Feature tool. In this article, you will learn how to use the central feature tool to obtain basic spatial statistical information about a dataset including the following topics: Preparing for geographic analysis Measuring geographic centrality with the central feature tool (For more resources related to this topic, see here.) Measuring geographic centrality The Central Feature tool in the Measuring Geographic Distributions toolset can all be used to measure the geographic centrality of spatial data. In this exercise, the central feature tool will be used to obtain descriptive spatial statistics about crime data for the city of Denver. Preparation Let's get prepared for the geographic analysis by performing the following steps: In ArcMap, open the C:GeospatialTrainingSpatialStatsDenverCrime.mxd file. You should see a point feature class called Crime, as shown in the following screenshot: The Crime feature class contains point locations for all crimes for the city of Denver in 2013. The first thing we need to do is isolate a type of crime for our analysis. Open the attribute table for the crime feature class. Use the Select by Attributes tool to select all records, where OFFENSE_CATEGORY_ID = 'burglary' as shown in the following screenshot. This will select 25,743 burglaries from the dataset. These are burglaries within the city limits of Denver in 2013: Close the attribute table. In the Table of Contents block, right-click on the Crime layer and select Properties. Go to the Source tab and under Geographic Coordinate System note that the value is GCS_WGS_1984. Data is often stored in this WGS84 Web Mercator coordinate system for display purposes on the Web. The WGS84 Web Mercator coordinate system that is so popular today for online mapping applications is not suitable for use with the Spatial Statistics tools. These tools require accurate distance measurements that aren't possible with WGS84 Web Mercator, so it's important to project your datasets to a coordinate system that supports accurate distance measurements. Close this dialog by clicking on the Cancel button. Now, right-click on the Layers data frame and select Properties… and then Coordinate System. The current coordinate system of the data frame should be set to NAD_1983_UTM_Zone_13N, which is acceptable for our analysis. With the records from the crime layer still selected, right-click on the Layers and go to Data | Export Data. The next dialog is very important. Select the the data frame option as the coordinate system, as shown in the following screenshot. Name the layer Burglary and export it to the crime geodatabase in C:GeospatialTrainingSpatialStatsExercisesDatacrime.gdb and then click on OK. The new Burglary layer will be added to the Layers data frame. Rename the layer to Denver Burglary. You can now remove the Crime layer. Save your map document file. Run the Central Feature tool The Central Feature tool identifies the most centrally located feature from a point, line, or polygon feature class. 
It adds and sums the distances from each feature to every other feature. The one with the shortest distance is the central feature. This tool creates an output feature class containing a single feature that represents the most centrally located feature. For example, if you have a feature class of burglaries, the Central Feature tool will identify the crime location that is the central most location from the group and create a new feature class with a single point feature that represents this location:  If necessary, open ArcToolbox and find the Spatial Statistics Tools toolbox. Open the toolbox and expand the Measuring Geographic Distributions toolset. Double-click on Central Feature to display the tool as shown in the following screenshot: Select Denver Burglary as the Input Feature Class, C:GeospatialTrainingSpatialStatsDatacrime.gdbBurglary_CentralFeature as the Output Feature Class, and EUCLIDEAN_DISTANCE as the Distance Method. Euclidean distance is a straight-line distance between two points. The other distance method is Manhattan distance, which is the distance between two points, measured along axes at right angles and is calculated by summing the difference between the x and y coordinates. There are the following three optional parameters for the Central Feature tool, including Weight Field(optional), Self Potential Weight Field(optional), and Case Field(optional). We won't use any of these optional parameters for this analysis, but they do warrant an explanation: Weight Field(optional): This parameter is a numeric field used to weigh distances in the origin-destination matrix. For example, if you had a dataset containing real-estate sales information each point might contain a sales price. The sales price could be used to weigh the output of the Central Feature tool. Self Potential Weight Field: This is a field representing self-potential or the distance or weight between a feature and itself. Case Field(optional): This parameter is a field used to group feature for separate central feature computations. This field can be an integer, data, or string. Click on the OK button. The most centrally located burglary will be displayed as shown in the following screenshot. The output is a single point feature: Summary This article covered the use of a descriptive spatial statistics tool, Central Feature tool found in the Measuring Geographic Distributions toolset. This central feature tool returns basic spatial statistical information about a dataset. Resources for Article: Further resources on this subject: Introduction to Mobile Web ArcGIS Development [article] Learning to Create and Edit Data in ArcGIS [article] ArcGIS – Advanced ArcObjects [article]
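For readers who prefer to drive the Central Feature workflow above from a script instead of the ArcToolbox dialog, the same step can usually be run through arcpy. The following is a rough sketch only: it assumes an ArcGIS installation whose Spatial Statistics toolbox is exposed to Python as arcpy.CentralFeature_stats, and the geodatabase path shown is inferred from the exercise, so adjust it to wherever you exported the Burglary layer:

# central_feature.py: scripted version of the Central Feature step (sketch only).
# Assumes arcpy is available (ArcGIS) and that the Burglary feature class was
# exported to the crime geodatabase as described in the exercise.
import arcpy

arcpy.env.overwriteOutput = True

in_features = r"C:\GeospatialTraining\SpatialStats\Data\crime.gdb\Burglary"
out_features = r"C:\GeospatialTraining\SpatialStats\Data\crime.gdb\Burglary_CentralFeature"

# EUCLIDEAN_DISTANCE mirrors the choice made in the tool dialog; the optional
# weight, self-potential weight, and case fields are simply left out here.
arcpy.CentralFeature_stats(in_features, out_features, "EUCLIDEAN_DISTANCE")

print(arcpy.GetMessages())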

Encapsulation of Data with Properties

Packt
07 Feb 2017
7 min read
In this article by Gastón C. Hillar, the author of the book Swift 3 Object Oriented Programming - Second Edition, you will learn about all the elements that might compose a class. We will start organizing data in blueprints that generate instances. We will work with examples to understand how to encapsulate and hide data by working with properties combined with access control. In addition, you will learn about properties, methods, and mutable versus immutable classes. (For more resources related to this topic, see here.) Understanding the elements that compose a class So far, we worked with a very simple class and many instances of this class in the Playground, the Swift REPL and the web-based Swift Sandbox. Now, it is time to dive deep into the different members of a class. The following list enumerates the most common element types that you can include in a class definition in Swift and their equivalents in other programming languages. We have already worked with a few of these elements: Initializers: This is equivalent to constructors in other programming languages Deinitializer: This is equivalent to destructors in other programming languages Type properties: This is equivalent to class fields or class attributes in other programming languages Type methods: This is equivalent to class methods in other programming languages Subscripts: This is also known as shortcuts Instance properties: This is equivalent to instance fields or instance attributes in other programming languages Instance methods: This is equivalent to instance functions in other programming languages Nested types: These are types that only exist within the class in which we define them We could access the instance property without any kind of restrictions as a variable within an instance. However, as it happens sometimes in real-world situations, restrictions are necessary to avoid serious problems. Sometimes, we want to restrict access or transform specific instance properties into read-only attributes. We can combine the restrictions with computed properties that can define getters and/or setters. Computed properties can define get and or set methods, also known as getters and setters. Setters allow us to control how values are set, that is, these methods are used to change the values of related properties. Getters allow us to control the values that we return when computed properties are accessed. Getters don't change the values of related properties. Sometimes, all the members of a class share the same attribute, and we don't need to have a specific value for each instance. For example, the superhero types have some profile values, such as the average strength, average running speed, attack power, and defense power. We can define the following type properties to store the values that are shared by all the instances: averageStrength, averageRunningSpeed, attackPower, and defensePower. All the instances have access to the same type properties and their values. However, it is also possible to apply restrictions to their access. It is also possible to define methods that don't require an instance of a specific class to be called; therefore, you can invoke them by specifying both the class and method names. These methods are known as type methods, operate on a class as a whole, and have access to type properties, but they don't have access to any instance members, such as instance properties or methods, because there is no instance at all. 
Declaring stored properties

When we design classes, we want to make sure that all the necessary data is available to the methods that will operate on it; therefore, we encapsulate data. However, we want only relevant information to be visible to the users of our classes, who will create instances, change the values of accessible properties, and call the available methods. Thus, we want to hide or protect the data that is only needed for internal use, so that nobody can make accidental changes to sensitive data. For example, when we create a new instance of any superhero, we can use both its name and birth year as the two parameters for the initializer. The initializer sets the values of two properties: name and birthYear. The following lines show sample code that declares the SuperHero class:

class SuperHero {
    var name: String
    var birthYear: Int

    init(name: String, birthYear: Int) {
        self.name = name
        self.birthYear = birthYear
    }
}

The next lines create two instances that initialize the values of the two properties and then use the print function to display their values in the Playground:

var antMan = SuperHero(name: "Ant-Man", birthYear: 1975)
print(antMan.name)
print(antMan.birthYear)
var ironMan = SuperHero(name: "Iron-Man", birthYear: 1982)
print(ironMan.name)
print(ironMan.birthYear)

In the Playground, we can see the results of declaring the class and executing these lines. The Swift REPL displays details about the two instances we just created, antMan and ironMan, including the values of their name and birthYear properties. The following lines show the output that the Swift REPL displays after we create the two SuperHero instances:

antMan: SuperHero = {
  name = "Ant-Man"
  birthYear = 1975
}
ironMan: SuperHero = {
  name = "Iron-Man"
  birthYear = 1982
}

We can read these lines as follows: the antMan variable holds an instance of SuperHero with its name set to "Ant-Man" and its birthYear set to 1975, and the ironMan variable holds an instance of SuperHero with its name set to "Iron-Man" and its birthYear set to 1982. Running the same code in the web-based IBM Swift Sandbox produces equivalent results.

We don't want a user of our SuperHero class to be able to change a superhero's name after an instance is initialized, because the name is not supposed to change. There is a simple way to achieve this goal in our previously declared class. We can use the let keyword to define an immutable name stored property of type String instead of using the var keyword. We can also replace the var keyword with let when we define the birthYear stored property, because the birth year will never change after we initialize a superhero instance.
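As a brief aside, Swift's access control offers another way to restrict mutation without turning a property into a constant: declaring only the setter as private with private(set). The property stays readable everywhere, but it can only be changed from inside the class. This sketch is not part of the book's example; the CodeNamedHero class, the rebrand(to:) method, and the sample values are assumptions made for illustration:

class CodeNamedHero {
    // Readable from anywhere, but the setter is restricted to this class.
    private(set) var name: String
    let birthYear: Int

    init(name: String, birthYear: Int) {
        self.name = name
        self.birthYear = birthYear
    }

    // The class itself can still change name through its own methods.
    func rebrand(to newName: String) {
        name = newName
    }
}

let wasp = CodeNamedHero(name: "Wasp", birthYear: 1979)
print(wasp.name)          // Reading the property is allowed anywhere
// wasp.name = "Hornet"   // Compile-time error: the setter is private
wasp.rebrand(to: "Red Queen")
print(wasp.name)

In our SuperHero class, however, the name should never change at all, so a constant is the better fit; the next listing applies the let keyword to both stored properties.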
The following lines show the new code that declares the SuperHero class with two immutable stored properties: name and birthYear. Note that the initializer code hasn't changed, and it is possible to initialize both immutable stored properties with the same code:

class SuperHero {
    let name: String
    let birthYear: Int

    init(name: String, birthYear: Int) {
        self.name = name
        self.birthYear = birthYear
    }
}

Stored immutable properties are also known as stored nonmutating properties. The next lines create an instance that initializes the values of the two immutable stored properties and then use the print function to display their values in the Playground. Then, the last two lines of code try to assign a new value to each property and fail to do so because they are immutable properties:

var antMan = SuperHero(name: "Ant-Man", birthYear: 1975)
print(antMan.name)
print(antMan.birthYear)
antMan.name = "Batman"
antMan.birthYear = 1976

The Playground displays the following two error messages for the last two lines; we will see similar error messages in the Swift REPL and in the Swift Sandbox:

Cannot assign to property: 'name' is a 'let' constant
Cannot assign to property: 'birthYear' is a 'let' constant

When we use the let keyword to declare a stored property, we can initialize the property, but it becomes immutable (that is, a constant) after its initialization.

Summary

In this article, you learned about the different members of a class or blueprint. We worked with instance properties, type properties, instance methods, and type methods, as well as stored properties, getters, and setters.

Resources for Article:

Further resources on this subject:

Swift's Core Libraries [article]
The Swift Programming Language [article]
Network Development with Swift [article]


Building a Data Driven Application

Packt
06 Feb 2017
8 min read
In this article by Ashish Srivastav, the author of the book ServiceNow Cookbook, we will learn the following recipes:

Starting a new application
Getting into new modules

(For more resources related to this topic, see here.)

Service-Now, although it is a Java-based application, provides a great platform to developers.

Starting a new application

Out of the box, Service-Now provides many applications for facilitating business operations and other activities in an IT environment, but if you find that the customer's requirements do not fit within the boundaries of the system's applications, then you can think of creating new applications. In these recipes, we will build a small application, which will include a table, a form, business rules, a client script, ACLs, update sets, deployment, and so on.

Getting ready

To step through this recipe, you must have an active Service-Now instance, valid credentials, and an admin role.

How to do it…

Open any standard web browser and type the instance address. Log in to the Service-Now instance with the credentials. On the left-hand side, in the search box, type local update and Service-Now will search the module for you:

Local update set for a new application

Now, you need to click on Local Update Sets to create a new update set so that you can capture all your configuration in the update set:

Create new update set

On clicking, you need to give the name as Book registration application and click on the Submit and Make Current button:

Local update set – book registration application

Now you will be able to see the Book registration application update set next to your profile, which means you are ready to create a new application:

Current update set

On the left-hand side, in the search box, type system definition and click on the Application Menus module:

Application menu to create a new application

To understand this better, you can click on any application menu, where you can see the application and its associated modules. For example, you can click on the Self-Service application, as shown here:

Self-service application

Now, to see the associated modules, you need to scroll down, and if you want to create a new module, you need to click on the New button:

Self-service application's modules

Now you should have a better understanding of how applications look within the Service-Now environment. So, to make a new application, you need to click on the New button on the Application Menus page:

Applications repository

After clicking on the New button, you will be able to see the configuration page. To understand this better, let's consider an example: you are creating a Book Registration application for your customer, with the following configuration:

Title: Book Registration
Role: Leave blank
Application: Global
Active: True
Category: Custom Applications (you can change it as per your requirement)

Book registration configuration

Click on the Submit button. After some time, a new Book Registration application menu will be visible under the application menu:

Book registration

Getting into new modules

A module is a part of an application which contains the actual workable items. As a developer, you will always have the option to add a new module to support business requirements.

Getting ready

To step through this recipe, you should have an active Service-Now instance, valid credentials, and an admin role.

How to do it…

Open any standard web browser and type the instance address. Log into the Service-Now instance with the credentials.
Service-Now gives you many options to create a new module. You can create a new module from the Application Menus module, or you can go through the Tables & Columns module as well. If you have chosen to create a new module from the Application Menus module, then, in order to create the module, click on the Book Registration application menu and scroll down. To create a new module, click on the New button:

Creating a New Module

After clicking on the New button, you will be able to see a configuration screen, where you need to configure the following fields:

Title: Author Registration
Application Menu: Book Registration

The Author Registration module registration under the Book Registration menu

Now, in the Link Type section, you will need to configure the new module; or rather, you will need to define the baseline for what your new module will do. Will the new module show a form to create a record, show a list of records from a table, or execute some reports? That's why this is a critical step:

Link type to create a new module and select a table

Link type gives you many options to decide the behavior of the new module.

Link type options

Now, let's take a different approach to create a new module. On the left-hand side, type column and Service-Now will search the Tables & Columns module for you. Click on the Tables & Columns module:

The Tables & Columns module

Now you will be able to see the configuration page, where you need to click on the Create Table button. Note that by clicking on Create Table, you can create a new table:

Tables & Columns – Create a new table

After clicking on the Create Table button, you will be able to see the configuration page, where you need to configure the following fields:

Label: Author Registration (the module name)
Name: Auto populated
Extends table: Task (by extending, your module will incorporate all the fields of the base table, in this scenario, the Task table)

Module Configuration

Create module: To create a new module through the table, check the Create module check box, and to add the module under your application, you will need to select the application name:

Add module in application

Controls is a critical section because, out of the box, Service-Now gives you the option to auto-number records. For incidents, INC is the prefix, and for change tickets, CHG is the prefix; here, you are also allowed to create your own prefix for the new module's records:

Configure the new module Controls section

Now you will be able to see the auto-numbering configuration, as shown here; your new records will start with the AUT prefix:

New module auto numbering

Click on Submit. After submission of the form, Service-Now will automatically create a role for the new module, as shown here. Only holders of the u_author_registration_user role will be able to view the module. So, whenever a request is generated, you will need to go into that particular user's profile from the user administration module to add the role:

Role created for module

Your module is created, but there are no fields.
So, for rapid development, you can directly add a column to the table by clicking on Insert a new row...:

The Insert field in the form

As an output, you will be able to see that a new Author Registrations module has been added under the Book Registration application:

Search newly created module

Now, if you click on the Author Registrations module, you will be able to see the following page:

The Author Registration Page

On clicking the New button, you will be able to see the form as shown here. Note that you have not added any fields to the u_author_registration table, but the table extends the Task table. That's why you are able to see fields on the form; they are coming from the Task table:

Author registration form without new fields

If you want to add new fields to the form, then you can do so by performing the following steps: right-click on the banner, select Configure, and click on Form Layout. Now, in the Create new field section, you can enter the desired field Name and Type, as shown here:

Form Fields | Field Type
Author Name | String
Author Primary Email Address | String
Author Secondary Email Address | String
Author Mobile Number | String
Author Permanent Address | String
Author Temporary Address | String
Country | India, USA, UK, Australia
Author Experience | String
Book Type | Choice
Book Title | String
Contract Start Date | String
Contract End Date | String

Author registration form fields

Click on Add and Save. After saving the form, the new fields are added to the Author Registration form:

Author registration form

To create a new form section, go to the Form view and section area, click on the New... option, and add the section name:

Create a new section

After creating the new Payment Details section, you can add the following fields under this section:

Form Fields | Field Type
Country | String
Preferred Payment Currency | Currency
Bank Name | String
Branch Name | String
IFSC Code | String
Bank Address | String

Payment Details section fields

As an output, you will be able to see the following screen:

Author registration Payment Details form section

Summary

In this article, we learned how Service-Now provides a great platform to developers, and we worked through starting a new application and getting into new modules.

Resources for Article:

Further resources on this subject:

Client and Server Applications [article]
Modules and Templates [article]
My First Puppet Module [article]