The BSP Layer

Packt
02 Apr 2015
14 min read
In this article by Alex González, author of the book Embedded Linux Projects Using Yocto Project Cookbook, we will see how embedded Linux projects require both custom hardware and software. An early task in the development process is to test different hardware reference boards and to select one to base the design on. We have chosen the Wandboard, a Freescale i.MX6-based platform, as it is an affordable and open board, which makes it perfect for our needs.

On an embedded project, it is usually a good idea to start working on the software as soon as possible, probably before the hardware prototypes are ready, so that it is possible to start working directly with the reference design. At some point, however, the hardware prototypes will be ready and changes will need to be introduced into Yocto to support the new hardware.

This article explains how to create a BSP layer to contain those hardware-specific changes, and shows how to work with the U-Boot bootloader and the Linux kernel, the components that are likely to take most of the customization work.

Creating a custom BSP layer

These custom changes are kept in a separate Yocto layer, called a Board Support Package (BSP) layer. This separation is best for future updates and patches to the system. A BSP layer can support any number of new machines and any new software feature that is linked to the hardware itself.

How to do it...

By convention, Yocto layer names start with meta, short for metadata. A BSP layer may then add a bsp keyword, and finally a unique name. We will call our layer meta-bsp-custom.

There are several ways to create a new layer:

- Manually, once you know what is required
- By copying the meta-skeleton layer included in Poky
- By using the yocto-layer command-line tool

You can have a look at the meta-skeleton layer in Poky and see that it includes the following elements:

- A layer.conf file, where the layer configuration variables are set
- A COPYING.MIT license file
- Several directories named with the recipes prefix, containing example recipes for BusyBox, the Linux kernel and an example module, an example service recipe, an example user management recipe, and a multilib example

How it works...

We will cover some of the use cases that appear in the available examples later on, so for our needs we will use the yocto-layer tool, which allows us to create a minimal layer.

Open a new terminal and change to the fsl-community-bsp directory. Then set up the environment as follows:

$ source setup-environment wandboard-quad

Note that once the build directory has been created, the MACHINE variable has already been configured in the conf/local.conf file and can be omitted from the command line.

Change to the sources directory and run:

$ yocto-layer create bsp-custom

Note that the yocto-layer tool will add the meta prefix to your layer, so you don't need to. It will prompt a few questions:

- The layer priority, which is used to decide the layer precedence in cases where the same recipe (with the same name) exists in several layers simultaneously. It is also used to decide in what order bbappends are applied if several layers append the same recipe. Leave the default value of 6. This will be stored in the layer's conf/layer.conf file as BBFILE_PRIORITY.
- Whether to create example recipes and append files. Let's leave the default no for the time being.

Our new layer has the following structure:

meta-bsp-custom/
   conf/layer.conf
   COPYING.MIT
   README
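For reference, the conf/layer.conf that the yocto-layer tool generates looks broadly like the following minimal sketch; the exact contents can differ slightly between Yocto releases, so treat it as an illustration rather than a verbatim copy. The BBFILES and BBPATH lines are discussed in the next section:

# Add the layer's own directory to BBPATH so conf and classes files are found
BBPATH .= ":${LAYERDIR}"

# Recipes and append files live in recipes-* subdirectories
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
        ${LAYERDIR}/recipes-*/*/*.bbappend"

BBFILE_COLLECTIONS += "bsp-custom"
BBFILE_PATTERN_bsp-custom = "^${LAYERDIR}/"
BBFILE_PRIORITY_bsp-custom = "6"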
There's more...

The first thing to do is to add this new layer to your project's conf/bblayers.conf file. It is a good idea to add it to your template conf directory's bblayers.conf.sample file too, so that it is correctly appended when creating new projects. The last line in the following code shows the addition of the layer to the conf/bblayers.conf file:

LCONF_VERSION = "6"

BBPATH = "${TOPDIR}"
BSPDIR := "${@os.path.abspath(os.path.dirname(d.getVar('FILE', True)) + '/../..')}"

BBFILES ?= ""
BBLAYERS = " \
${BSPDIR}/sources/poky/meta \
${BSPDIR}/sources/poky/meta-yocto \
${BSPDIR}/sources/meta-openembedded/meta-oe \
${BSPDIR}/sources/meta-openembedded/meta-multimedia \
${BSPDIR}/sources/meta-fsl-arm \
${BSPDIR}/sources/meta-fsl-arm-extra \
${BSPDIR}/sources/meta-fsl-demos \
${BSPDIR}/sources/meta-bsp-custom \
"

Now, BitBake will parse the bblayers.conf file and find the conf/layer.conf file from your layer. In it, we find the following line:

BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
        ${LAYERDIR}/recipes-*/*/*.bbappend"

It tells BitBake which directories to parse for recipes and append files. You need to make sure that the directory and file hierarchy in this new layer matches the given pattern, or you will need to modify it.

BitBake will also find the following:

BBPATH .= ":${LAYERDIR}"

The BBPATH variable is used to locate the bbclass files and the configuration and other files included with the include and require directives. The search finishes with the first match, so it is best to keep filenames unique.

Some other variables we might consider defining in our conf/layer.conf file are:

LAYERDEPENDS_bsp-custom = "fsl-arm"
LAYERVERSION_bsp-custom = "1"

The LAYERDEPENDS literal is a space-separated list of other layers your layer depends on, and the LAYERVERSION literal specifies the version of your layer in case other layers want to add a dependency to a specific version.

The COPYING.MIT file specifies the license for the metadata contained in the layer. The Yocto Project is licensed under the MIT license, which is also compatible with the General Public License (GPL). This license applies only to the metadata, as every package included in your build will have its own license.

The README file will need to be modified for your specific layer. It is usual to describe the layer and provide any other layer dependencies and usage instructions.

Adding a new machine

When customizing your BSP, it is usually a good idea to introduce a new machine for your hardware. Machine definitions are kept under the conf/machine directory in your BSP layer. The usual thing to do is to base the new machine on the reference design. For example, wandboard-quad has the following machine configuration file:

include include/wandboard.inc

SOC_FAMILY = "mx6:mx6q:wandboard"

UBOOT_MACHINE = "wandboard_quad_config"

KERNEL_DEVICETREE = "imx6q-wandboard.dtb"

MACHINE_FEATURES += "bluetooth wifi"

MACHINE_EXTRA_RRECOMMENDS += " bcm4329-nvram-config bcm4330-nvram-config "

A machine based on the Wandboard design could define its own machine configuration file, wandboard-quad-custom.conf, as follows:

include conf/machine/include/wandboard.inc

SOC_FAMILY = "mx6:mx6q:wandboard"

UBOOT_MACHINE = "wandboard_quad_custom_config"

KERNEL_DEVICETREE = "imx6q-wandboard-custom.dtb"

MACHINE_FEATURES += "wifi"

The wandboard.inc file now resides in a different layer, so in order for BitBake to find it, we need to specify the full path from the BBPATH variable in the corresponding layer. This machine defines its own U-Boot configuration file and Linux kernel device tree, in addition to defining its own set of machine features.
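As a quick sanity check, not part of the original recipe, you can confirm that BitBake sees both the new layer and the new machine before customizing anything else. Assuming the environment has already been set up with the setup-environment script as shown earlier, the following commands list the configured layers with their priorities and build an image for the new machine; core-image-minimal is just an example image recipe:

$ bitbake-layers show-layers
$ MACHINE=wandboard-quad-custom bitbake core-image-minimal

Overriding MACHINE on the command line avoids editing conf/local.conf while you are still iterating on the new machine definition.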
Adding a custom device tree to the Linux kernel

To add this device tree file to the Linux kernel, we need to add the device tree source to the arch/arm/boot/dts directory under the Linux kernel source, and also modify the Linux build system's arch/arm/boot/dts/Makefile to build it, as follows:

 dtb-$(CONFIG_ARCH_MXC) += imx6q-wandboard.dtb \
+       imx6q-wandboard-custom.dtb

This code uses diff formatting, where the lines with a minus prefix are removed, the ones with a plus sign are added, and the ones without a prefix are left as reference.

Once the patch is prepared, it can be added to the meta-bsp-custom/recipes-kernel/linux/linux-wandboard-3.10.17/ directory, and the Linux kernel recipe appended by adding a meta-bsp-custom/recipes-kernel/linux/linux-wandboard_3.10.17.bbappend file with the following content:

SRC_URI_append = " file://0001-ARM-dts-Add-wandboard-custom-dts-file.patch"

Adding a custom U-Boot machine

In the same way, the U-Boot source may be patched to add a new custom machine. Bootloader modifications are not as likely to be needed as kernel modifications, though, and most custom platforms will leave the bootloader unchanged. The patch would be added to the meta-bsp-custom/recipes-bsp/u-boot/u-boot-fslc-v2014.10/ directory, and the U-Boot recipe appended with a meta-bsp-custom/recipes-bsp/u-boot/u-boot-fslc_2014.10.bbappend file with the following content:

SRC_URI_append = " file://0001-boards-Add-wandboard-custom.patch"

Adding a custom formfactor file

Custom platforms can also define their own formfactor file with information that the build system cannot obtain from other sources, such as whether a touchscreen is available or what the screen orientation is. These are defined in the recipes-bsp/formfactor/ directory in our meta-bsp-custom layer.

For our new machine, we could define a meta-bsp-custom/recipes-bsp/formfactor/formfactor_0.0.bbappend file to include a formfactor file as follows:

FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"

And the machine-specific meta-bsp-custom/recipes-bsp/formfactor/formfactor/wandboard-quad-custom/machconfig file would be as follows:

HAVE_TOUCHSCREEN=1
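The machconfig file can describe more than the touchscreen. The following is an illustrative sketch rather than content from the book: the variable names follow the defaults shipped with Poky's formfactor recipe (see meta/recipes-bsp/formfactor/files/config), and the values assume a hypothetical 800x480 landscape panel without a keyboard:

HAVE_TOUCHSCREEN=1
HAVE_KEYBOARD=0
DISPLAY_CAN_ROTATE=0
DISPLAY_ORIENTATION=0
DISPLAY_WIDTH_PIXELS=800
DISPLAY_HEIGHT_PIXELS=480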
Debugging the Linux kernel booting process

We have seen the most general techniques for debugging the Linux kernel. However, some special scenarios require the use of different methods. One of the most common scenarios in embedded Linux development is the debugging of the booting process. This section will explain some of the techniques used to debug the kernel's booting process.

How to do it...

A kernel crashing on boot usually provides no output whatsoever on the console. As daunting as that may seem, there are techniques we can use to extract debug information. Early crashes usually happen before the serial console has been initialized, so even if there were log messages, we would not see them. The first thing we will show is how to enable early log messages that do not need the serial driver. In case that is not enough, we will also show techniques to access the log buffer in memory.

How it works...

Debugging booting problems has two distinct phases: before and after the serial console is initialized. After the serial console is initialized and we can see serial output from the kernel, debugging can use the techniques described earlier.

Before the serial console is initialized, however, there is basic UART support in ARM kernels that allows you to use the serial port from early boot. This support is compiled in with the CONFIG_DEBUG_LL configuration variable. It adds support for a debug-only series of assembly functions that allow you to output data to a UART. The low-level support is platform specific, and for the i.MX6 it can be found under arch/arm/include/debug/imx.S. The code allows this low-level UART to be configured through the CONFIG_DEBUG_IMX_UART_PORT configuration variable.

We can use this support directly by using the printascii function as follows:

extern void printascii(const char *);
printascii("Literal string\n");

However, it is preferable to use the early_print function, which makes use of the function explained previously and accepts formatted input in printf style; for example:

early_print("%08x\t%s\n", p->nr, p->name);
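Both CONFIG_DEBUG_LL and CONFIG_DEBUG_IMX_UART_PORT are kernel configuration options, so one convenient way to enable them from the Yocto build, not covered in the original text, is to run the kernel's menuconfig task through BitBake; the relevant entries usually live under Kernel hacking, although the exact menu path varies between kernel versions:

$ bitbake -c menuconfig virtual/kernel
$ bitbake virtual/kernel

Keep in mind that configuration changes made this way only live in the kernel's work directory; for a permanent change, carry them over to the kernel configuration maintained in your BSP layer.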
Dumping the kernel's printk buffer from the bootloader

Another useful technique to debug Linux kernel crashes at boot is to analyze the kernel log after the crash. This is only possible if the RAM memory is persistent across reboots and does not get initialized by the bootloader. As U-Boot keeps the memory intact, we can use this method to peek at the kernel log in memory in search of clues.

Looking at the kernel source, we can see how the log ring buffer is set up in kernel/printk/printk.c, and also note that it is stored in __log_buf.

To find the location of the kernel log buffer, we use the System.map file created by the Linux build process, which maps symbols to virtual addresses, using the following command:

$ grep __log_buf System.map
80f450c0 b __log_buf

To convert the virtual address to a physical address, we look at how __virt_to_phys() is defined for ARM:

x - PAGE_OFFSET + PHYS_OFFSET

The PAGE_OFFSET variable is defined in the kernel configuration as:

config PAGE_OFFSET
       hex
       default 0x40000000 if VMSPLIT_1G
       default 0x80000000 if VMSPLIT_2G
       default 0xC0000000

Some ARM platforms, like the i.MX6, will dynamically patch the __virt_to_phys() translation at runtime, so PHYS_OFFSET will depend on where the kernel is loaded into memory. As this can vary, the calculation we just saw is platform specific. For the Wandboard, with PAGE_OFFSET at 0x80000000 and the kernel loaded at a physical offset of 0x10000000, the physical address for 0x80f450c0 is 0x80f450c0 - 0x80000000 + 0x10000000 = 0x10f450c0.

We can then force a reboot using the magic SysRq key, which needs to be enabled in the kernel configuration with CONFIG_MAGIC_SYSRQ, but is enabled in the Wandboard kernel by default:

$ echo b > /proc/sysrq-trigger

We then dump that memory address from U-Boot as follows:

> md.l 0x10f450c0
10f450c0: 00000000 00000000 00210038 c6000000   ........8.!.....
10f450d0: 746f6f42 20676e69 756e694c 6e6f2078   Booting Linux on
10f450e0: 79687020 61636973 5043206c 78302055    physical CPU 0x
10f450f0: 00000030 00000000 00000000 00000000   0...............
10f45100: 009600a8 a6000000 756e694c 65762078   ........Linux ve
10f45110: 6f697372 2e33206e 312e3031 2e312d37   rsion 3.10.17-1.
10f45120: 2d322e30 646e6177 72616f62 62672b64   0.2-wandboard+gb
10f45130: 36643865 62323738 20626535 656c6128   e8d6872b5eb (ale
10f45140: 6f6c4078 696c2d67 2d78756e 612d7068   x@log-linux-hp-a
10f45150: 7a6e6f67 20296c61 63636728 72657620   gonzal) (gcc ver
10f45160: 6e6f6973 392e3420 2820312e 29434347   sion 4.9.1 (GCC)
10f45170: 23202920 4d532031 52502050 504d4545    ) #1 SMP PREEMP
10f45180: 75532054 6546206e 35312062 3a323120   T Sun Feb 15 12:
10f45190: 333a3733 45432037 30322054 00003531   37:37 CET 2015..
10f451a0: 00000000 00000000 00400050 82000000   ........P.@.....
10f451b0: 3a555043 4d524120 50203776 65636f72   CPU: ARMv7 Proce

There's more...

Another method is to store kernel log messages and kernel panic or oops information in persistent storage. The Linux kernel's persistent store support (CONFIG_PSTORE) allows you to log to persistent memory that is kept across reboots. To log panic and oops messages into persistent memory, we need to configure the kernel with the CONFIG_PSTORE_RAM configuration variable, and to log kernel messages, we need to configure the kernel with CONFIG_PSTORE_CONSOLE.

We then need to configure the location of the persistent storage in an unused memory location, keeping the last 1 MB of memory free. For example, we could pass the following kernel command-line arguments to reserve a 128 KB region starting at 0x30000000:

ramoops.mem_address=0x30000000 ramoops.mem_size=0x200000

We would then mount the persistent storage by adding it to /etc/fstab so that it is also available on the next boot:

/etc/fstab:
pstore  /pstore  pstore  defaults  0  0

We then mount it as follows:

# mkdir /pstore
# mount /pstore

Next, we force a reboot with the magic SysRq key:

# echo b > /proc/sysrq-trigger

On reboot, we will see a file inside /pstore:

-r--r--r-- 1 root root 4084 Sep 16 16:24 console-ramoops

This will have contents such as the following:

SysRq : Resetting
CPU3: stopping
CPU: 3 PID: 0 Comm: swapper/3 Not tainted 3.14.0-rc4-1.0.0-wandboard-37774-g1eae
[<80014a30>] (unwind_backtrace) from [<800116cc>] (show_stack+0x10/0x14)
[<800116cc>] (show_stack) from [<806091f4>] (dump_stack+0x7c/0xbc)
[<806091f4>] (dump_stack) from [<80013990>] (handle_IPI+0x144/0x158)
[<80013990>] (handle_IPI) from [<800085c4>] (gic_handle_irq+0x58/0x5c)
[<800085c4>] (gic_handle_irq) from [<80012200>] (__irq_svc+0x40/0x70)
Exception stack(0xee4c1f50 to 0xee4c1f98)

We should move the file out of /pstore or remove it completely so that it doesn't occupy memory.

Summary

This article guides you through the customization of the BSP for your own product. It then explains how to debug the Linux kernel booting process.

Resources for Article:

Further resources on this subject:
- Baking Bits with Yocto Project [article]
- An Introduction to the Terminal [article]
- Linux Shell Scripting – various recipes to help you [article]
Groups and Cohorts in Moodle

Packt
06 Jul 2015
20 min read
In this article by William Rice, author of the book Moodle E-Learning Course Development, Third Edition, you will learn how to use groups to separate the students in a course into teams, and how to use cohorts to mass-enroll students into courses.

Groups versus cohorts

Groups and cohorts are both collections of students. There are several differences between them, which we can sum up in one sentence: cohorts enable administrators to enroll and unenroll students en masse, whereas groups enable teachers to manage students during a class.

Think of a cohort as a group of students working together through the same academic curriculum, for example, a group of students all enrolled in the same course. Think of a group as a subset of the students enrolled in a course. Groups are used to manage various activities within a course, while a cohort is a system-wide or course-category-wide set of students.

There is a small amount of overlap between what you can do with a cohort and a group. However, the differences are large enough that you would not want to substitute one for the other.

Cohorts

In this article, we'll look at how to create and use cohorts. You can perform many operations with cohorts in bulk, affecting many students at once.

Creating a cohort

To create a cohort, perform the following steps:

1. From the main menu, select Site administration | Users | Accounts | Cohorts.
2. On the Cohorts page, click on the Add button. The Add New Cohort page is displayed.
3. Enter a Name for the cohort. This is the name that you will see when you work with the cohort.
4. Enter a Cohort ID for the cohort. If you upload students in bulk to this cohort, you will specify the cohort using this identifier. You can use any characters you want in the Cohort ID; however, keep in mind that the file you upload to the cohort can come from a different computer system. To be safe, consider using only ASCII characters, such as letters, numbers, and some special characters, with no spaces, in the Cohort ID option; for example, Spring_2012_Freshmen.
5. Enter a Description that will help you and other administrators remember the purpose of the cohort.
6. Click on Save changes.

Now that the cohort is created, you can begin adding users to it.

Adding students to a cohort

Students can be added to a cohort manually by searching for and selecting them. They can also be added in bulk by uploading a file to Moodle.

Manually adding and removing students to a cohort

If you add a student to a cohort, that student is enrolled in all the courses to which the cohort is synchronized. If you remove a student from a cohort, that student will be unenrolled from all the courses to which the cohort is synchronized. We will look at how to synchronize cohorts and course enrollments later. For now, here is how to manually add and remove students from a cohort:

1. From the main menu, select Site administration | Users | Accounts | Cohorts.
2. On the Cohorts page, for the cohort to which you want to add students, click on the people icon. The Cohort Assign page is displayed. The left-hand side panel displays users that are already in the cohort, if any. The right-hand side panel displays users that can be added to the cohort.
3. Use the Search field to search for users in each panel. You can search for text that is in the user name and e-mail address fields.
4. Use the Add and Remove buttons to move users from one panel to the other.

Adding students to a cohort in bulk – upload

When you upload students to Moodle, you can add them to a cohort.
After you have all the students in a cohort, you can quickly enroll and unenroll them in courses just by synchronizing the cohort to the course. If you are going to upload students in bulk, consider putting them in a cohort. This makes it easier to manipulate them later. Here is an example of a cohort. Note that there are 1,204 students enrolled in the cohort: These students were uploaded to the cohort under Administration | Site Administration | Users | Upload users: The file that was uploaded contained information about each student in the cohort. In a spreadsheet, this is how the file looks: username,email,firstname,lastname,cohort1 moodler_1,[email protected],Bill,Binky,open-enrollmentmoodlers moodler_2,[email protected],Rose,Krial,open-enrollmentmoodlers moodler_3,[email protected],Jeff,Marco,open-enrollmentmoodlers moodler_4,[email protected],Dave,Gallo,open-enrollmentmoodlers In this example, we have the minimum required information to create new students. These are as follows: The username The e-mail address The first name The last name We also have the cohort ID (the short name of the cohort) in which we want to place a student. During the upload process, you can see a preview of the file that you will upload: Further down on the Upload users preview page, you can choose the Settings option to handle the upload: Usually, when we upload users to Moodle, we will create new users. However, we can also use the upload option to quickly enroll existing users in the cohort. You saw previously (Manually adding and removing students to a cohort) how to search for and then enroll users in a cohort. However, when you want to enroll hundreds of users in the cohort, it's often faster to create a text file and upload it, than to search your existing users. This is because when you create a text file, you can use powerful tools—such as spreadsheets and databases—to quickly create this file. If you want to perform this, you will find options to Update existing users under the Upload type field. In most Moodle systems, a user's profile must include a city and country. When you upload a user to a system, you can specify the city and country in the upload file or omit them from the upload file and assign the city and country to the system while the file is uploaded. This is performed under Default values on the Upload users page: Now that we have examined some of the capabilities and limitations of this process, let's list the steps to upload a cohort to Moodle: Prepare a plain file that has, at minimum, the username, email, firstname, lastname, and cohort1 information. If you were to create this in a spreadsheet, it may look similar to the following screenshot: Under Administration | Site Administration | Users | Upload users, select the text file that you will upload. On this page, choose Settings to describe the text file, such as delimiter (separator) and encoding. Click on the Upload users button. You will see the first few rows of the text file displayed. Also, additional settings become available on this page. In the Settings section, there are settings that affect what happens when you upload information about existing users. You can choose to have the system overwrite information for existing users, ignore information that conflicts with existing users, create passwords, and so on. In the Default values section, you can enter values to be entered into the user profiles. For example, you can select a city, country, and department for all the users. 
Click on the Upload users button to begin the upload.

Cohort sync

Using the cohort sync enrolment method, you can enroll and un-enroll large collections of students at once. Using cohort sync involves several steps:

1. Creating a cohort.
2. Enrolling students in the cohort.
3. Enabling the cohort sync enrollment method.
4. Adding the cohort sync enrollment method to a course.

You have already seen the first two steps: how to create a cohort and how to enroll students in it. We will now cover the last two steps: enabling the cohort sync method and adding cohort sync to a course.

Enabling the cohort sync enrollment method

To enable the cohort sync enrollment method, you will need to log in as an administrator. This cannot be done by someone who has only teacher rights:

1. Select Site administration | Plugins | Enrolments | Manage enrol plugins.
2. Click on the Enable icon located next to Cohort sync.
3. Then, click on the Settings button located next to Cohort sync.
4. On the Settings page, choose the default role for people when you enroll them in a course using Cohort sync. You can change this setting for each course.
5. You will also choose the External unenrol action. This is what happens to a student when they are removed from the cohort. If you choose Unenrol user from course, the user and all his/her grades are removed from the course. The user's grades are purged from Moodle. If you were to re-add this user to the cohort, all of the user's activity in this course would be blank, as if the user had never been in the course. If you choose Disable course enrolment and remove roles, the user and all his/her grades are hidden. You will not see this user in the course's grade book. However, if you were to re-add this user to the cohort or to the course, this user's course records will be restored.

After enabling the cohort sync method, it's time to actually add this method to a course.

Adding the cohort sync enrollment method to a course

To perform this, you will need to log in as an administrator or a teacher in the course:

1. Log in and enter the course to which you want to add the enrolment method.
2. Select Course administration | Users | Enrolment methods.
3. From the Add method drop-down menu, select Cohort sync.
4. In Custom instance name, enter a name for this enrolment method. This will enable you to recognize this method in a list of cohort syncs.
5. For Active, select Yes. This will enroll the users.
6. Select the Cohort option.
7. Select the role that the members of the cohort will be given.
8. Click on the Save changes button.

All the users in the cohort will be given the selected role in the course.

Un-enroll a cohort from a course

There are two ways to un-enroll a cohort from a course. First, you can go to the course's enrollment methods page and delete the enrollment method. Just click on the X button located next to the cohort sync method that you added to the course. However, this will not just remove users from the course, but also delete all their course records.

The second method preserves the student records. Once again, go to the course's enrollment methods page and, next to the Cohort sync method that you added, click on the Settings icon. On the Settings page, select No for Active. This will remove the role that the cohort was given. However, the members of the cohort will still be listed as course participants. So, as the members of the cohort do not have a role in the course, they can no longer access it. However, their grades and activity reports are preserved.
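Because cohort sync follows cohort membership, the bulk upload facility described earlier is also a convenient way to change membership itself in bulk. As a rough sketch, not taken from the original text, a file like the following could add already-existing users to a cohort without creating new accounts, assuming the Upload type is set to update existing users and that these usernames and the cohort ID already exist; the exact columns required can vary between Moodle versions:

username,cohort1
moodler_1,open-enrollmentmoodlers
moodler_2,open-enrollmentmoodlers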
Differences between cohort sync and enrolling a cohort

Cohort sync and enrolling a cohort are two different methods. Each has advantages and limitations.

If you follow the preceding instructions, you can synchronize a cohort's membership to a course's enrollment. As people are added to and removed from the cohort, they are enrolled in and un-enrolled from the course. When working with a large group of users, this can be a great time saver. However, using cohort sync, you cannot un-enroll or change the role of just one person.

Consider a scenario where you have a large group of students who want to enroll in several courses, all at once. You put these students in a cohort, enable the cohort sync enrollment method, and add the cohort sync enrollment method to each of these courses. In a few minutes, you have accomplished your goal. Now suppose you want to un-enroll some users from some of the courses, but not from all of them. If you remove them from the cohort, they are removed from all of the courses. This is how cohort sync works.

Cohort sync is everyone or no one

When a person is added to or removed from the cohort, this person is added to or removed from all the courses to which the cohort is synced. If that's what you want, great. If not, an alternative to cohort sync is to enroll a cohort. That is, you can select all the members of a cohort and enroll them in a course, all at once. However, this is a one-way journey. You cannot un-enroll them all at once; you will need to un-enroll them one at a time.

If you enroll a cohort all at once, then after enrollment, the users are independent entities. You can un-enroll them and change their role (for example, from student to teacher) whenever you wish.

To enroll a cohort in a course, perform the following steps:

1. Enter the course as an administrator or teacher.
2. Select Administration | Course administration | Users | Enrolled users.
3. Click on the Enrol cohort button. A popup window appears. This window lists the cohorts on the site.
4. Click on Enrol users next to the cohort that you want to enroll. The system displays a confirmation message.
5. Now, click on the OK button. You will be taken back to the Enrolled users page.

Note that although you can enroll all the users in a cohort at once, there is no button to un-enroll them all at once. You will need to remove them one at a time from your course.

Managing students with groups

A group is a collection of students in a course. Outside of a course, a group has no meaning. Groups are useful when you want to separate students studying the same course. For example, if your organization is using the same course for several different classes or groups, you can use the group feature to separate students so that each group can see only their peers in the course.

For example, you can create a new group every month for employees hired that month. Then, you can monitor and mentor them together. After you have run a group of people through a course, you may want to reuse this course for another group. You can use the group feature to separate groups so that the current group doesn't see the work done by the previous group. This will be like a new course for the current group. You may also want an activity or resource to be open to just one group of people, without others in the class being able to use that activity or resource.

Course versus activity

You can apply the groups setting to an entire course. If you do this, every activity and resource in the course will be segregated into groups.
You can also apply the groups setting to an individual activity or resource. If you do this, it will override the groups setting for the course. Also, it will segregate just this activity, or resource between groups. The three group modes For a course or activity, there are several ways to apply groups. Here are the three group modes: No groups: There are no groups for a course or activity. If students have been placed in groups, ignore it. Also, give everyone the same access to the course or activity. Separate groups: If students have been placed in groups, allow them to see other students and only the work of other students from their own group. Students and work from other groups are invisible. Visible groups: If students have been placed in groups, allow them to see other students and the work of other students from all groups. However, the work from other groups is read only. You can use the No groups setting on an activity in your course. Here, you want every student who ever took the course to be able to interact with each other. For example, you may use the No groups setting in the news forum so that all students who have ever taken the course can see the latest news. Also, you can use the Separate groups setting in a course. Here, you will run different groups at different times. For each group that runs through the course, it will be like a brand new course. You can use the Visible groups setting in a course. Here, students are part of a large and in-person class; you want them to collaborate in small groups online. Also, be aware that some things will not be affected by the groups setting. For example, no matter what the group setting, students will never see each other's assignment submissions. Creating a group There are three ways to create groups in a course. You can: Manually create and populate each group Automatically create and populate groups based on the characteristics of students Import groups using a text file We'll cover these methods in the following subsections. Manually creating and populating a group Don't be discouraged by the idea of manually populating a group with students. It takes only a few clicks to place a student in a group. To create and populate a group, perform the following steps: Select Course administration | Users | Groups. This takes you to the Groups page. Click on the Create group button. The Create group page is displayed. You must enter a Name for the group. This will be the name that teachers and administrators see when they manage a group. The Group ID number is used to match up this group with a group identifier in another system. If your organization uses a system outside Moodle to manage students and this system categorizes students in groups, you can enter the group ID from the other system in this field. It does not need to be a number. This field is optional. The Group description field is optional. It's good practice to use this to explain the purpose and criteria for belonging to a group. The Enrolment key is a code that you can give to students who self enroll in a course. When the student enrolls, he/she is prompted to enter the enrollment key. On entering this key, the student is enrolled in the course and made a member of the group. If you add a picture to this group, then when members are listed (as in a forum), the member will have the group picture shown next to them. Here is an example of a contributor to a forum on http://www.moodle.org with her group memberships: Click on the Save changes button to save the group. 
On the Groups page, the group appears in the left-hand side column. Select this group. In the right-hand side column, search for and select the students that you want to add to this group: Note the Search fields. These enable you to search for students that meet a specific criteria. You can search the first name, last name, and e-mail address. The other part of the user's profile information is not available in this search box. Automatically creating and populating a group When you automatically create groups, Moodle creates a number of groups that you specify and then takes all the students enrolled in the course and allocates them to these groups. Moodle will put the currently enrolled students in these groups even if they already belong to another group in the course. To automatically create a group, use the following steps: Click on the Auto-create groups button. The Auto-create groups page is displayed. In the Naming scheme field, enter a name for all the groups that will be created. You can enter any characters. If you enter @, it will be converted to sequential letters. If you enter #, it will be converted to sequential numbers. For example, if you enter Group @, Moodle will create Group A, Group B, Group C, and so on. In the Auto-create based on field, you will tell the system to choose either of the following options:     Create a specific number of groups and then fill each group with as many students as needed (Number of groups)     Create as many groups as needed so that each group has a specific number of students (Members per group). In the Group/member count field, you will tell the system to choose either of the following options:     How many groups to create (if you choose the preceding Number of groups option)     How many members to put in each group (if you choose the preceding Members per group option) Under Group members, select who will be put in these groups. You can select everyone with a specific role or everyone in a specific cohort. The setting for Prevent last small group is available if you choose Members per group. It prevents Moodle from creating a group with fewer than the number of students that you specify. For example, if your class has 12 students and you choose to create groups with five members per group, Moodle would normally create two groups of five. Then, it would create another group for the last two members. However, with Prevent last small group selected, it will distribute the remaining two members between the first two groups. Click on the Preview button to preview the results. The preview will not show you the names of the members in groups, but it will show you how many groups and members will be in each group. Importing groups The term importing groups may give you the impression that you will import students into a group. The import groups button does not import students into groups. It imports a text file that you can use to create groups. So, if you need to create a lot of groups at once, you can use this feature to do this. This needs to be done by a site administrator. If you need to import students and put them into groups, use the upload students feature. However, instead of adding students to the cohort, you will add them to a course and group. 
You perform this by specifying the course and group fields in the upload file, as shown in the following code: username,email,firstname,lastname,course1,group1,course2 moodler_1,[email protected],Bill,Binky,history101,odds,science101 moodler_2,[email protected],Rose,Krial,history101,even,science101 moodler_3,[email protected],Jeff,Marco,history101,odds,science101 moodler_4,[email protected],Dave,Gallo,history101,even,science101 In this example, we have the minimum needed information to create new students. These are as follows: The username The e-mail address The first name The last name We have also enrolled all the students in two courses: history101 and science101. In the history101 course, Bill Binky, and Jeff Marco are placed in a group called odds. Rose Krial and Dave Gallo are placed in a group called even. In the science101 course, the students are not placed in any group. Remember that this student upload doesn't happen on the Groups page. It happens under Administration | Site Administration | Users | Upload users. Summary Cohorts and groups give you powerful tools to manage your students. Cohorts are a useful tool to quickly enroll and un-enroll large numbers of students. Groups enable you to separate students who are in the same course and give teachers the ability to quickly see only those students that they are responsible for. Useful Links: What's New in Moodle 2.0 Moodle for Online Communities Understanding Web-based Applications and Other Multimedia Forms
Creating Views 3 Programmatically

Packt
21 Mar 2012
18 min read
(For more resources on Drupal, see here.) Programming a view Creating a view with a module is a convenient way to have a predefined view available with Drupal. As long as the module is installed and enabled, the view will be there to be used. If you have never created a module in Drupal, or even never written a line of Drupal code, you will still be able to create a simple view using this recipe. Getting ready Creating a module involves the creation of the following two files at a minimum: An .info file that gives Drupal the information needed to add the module A .module file that contains the PHP script More complex modules will consist of more files, but those two are all we will need for now. How to do it... Carry out the following steps: Create a new directory named _custom inside your contributed modules directory (so, probably sites/all/modules/_custom). Create a subdirectory inside that directory; we will name it d7vr (Drupal 7 Views Recipes). Open a new file with your editor and add the following lines: ; $Id: name = Programmatic Views description = Provides supplementary resources such as programmatic views package = D7 Views Recipes version = "7.x-1.0" core = "7.x" php = 5.2 Save the file as d7vrpv.info. Open a new file with your editor and add the following lines: Feel free to download this code from the author's web site rather than typing it, at http://theaccidentalcoder.com/ content/drupal-7-views-cookbook <?php /** * Implements hook_views_api(). */ function d7vrpv_views_api() { return array( 'api' => 2, 'path' => drupal_get_path('module', 'd7vrpv'), ); } /** * Implements hook_views_default_views(). */ function d7vrpv_views_default_views() { return d7vrpv_list_all_nodes(); } /** * Begin view */ function d7vrpv_list_all_nodes() { /* * View 'list_all_nodes' */ $view = views_new_view(); $view->name = 'list_all_nodes'; $view->description = 'Provide a list of node titles, creation dates, owner and status'; $view->tag = ''; $view->view_php = ''; $view->base_table = 'node'; $view->is_cacheable = FALSE; $view->api_version = '3.0-alpha1'; $view->disabled = FALSE; /* Edit this to true to make a default view disabled initially */ /* Display: Defaults */ $handler = $view->new_display('default', 'Defaults', 'default'); $handler->display->display_options['title'] = 'List All Nodes'; $handler->display->display_options['access']['type'] = 'role'; $handler->display->display_options['access']['role'] = array( '3' => '3', ); $handler->display->display_options['cache']['type'] = 'none'; $handler->display->display_options['exposed_form']['type'] = 'basic'; $handler->display->display_options['pager']['type'] = 'full'; $handler->display-> display_options['pager']['options']['items_per_page'] = '15'; $handler->display->display_options['pager']['options'] ['offset'] = '0'; $handler->display->display_options['pager']['options'] ['id'] = '0'; $handler->display->display_options['style_plugin'] = 'table'; $handler->display->display_options['style_options'] ['columns'] = array( 'title' => 'title', 'type' => 'type', 'created' => 'created', 'name' => 'name', 'status' => 'status', ); $handler->display->display_options['style_options'] ['default'] = 'created'; $handler->display->display_options['style_options'] ['info'] = array( 'title' => array( 'sortable' => 1, 'align' => 'views-align-left', 'separator' => '', ), 'type' => array( 'sortable' => 1, 'align' => 'views-align-left', 'separator' => '', ), 'created' => array( 'sortable' => 1, 'align' => 'views-align-left', 'separator' => '', ), 'name' => array( 
'sortable' => 1, 'align' => 'views-align-left', 'separator' => '', ), 'status' => array( 'sortable' => 1, 'align' => 'views-align-left', 145 'separator' => '', ), ); $handler->display->display_options['style_options'] ['override'] = 1; $handler->display->display_options['style_options'] ['sticky'] = 0; $handler->display->display_options['style_options'] ['order'] = 'desc'; /* Header: Global: Text area */ $handler->display->display_options['header']['area'] ['id'] = 'area'; $handler->display->display_options['header']['area'] ['table'] = 'views'; $handler->display->display_options['header']['area'] ['field'] = 'area'; $handler->display->display_options['header']['area'] ['empty'] = TRUE; $handler->display->display_options['header']['area'] ['content'] = '<h2>Following is a list of all non-page nodes.</h2>'; $handler->display->display_options['header']['area'] ['format'] = '3'; /* Footer: Global: Text area */ $handler->display->display_options['footer']['area'] ['id'] = 'area'; $handler->display->display_options['footer']['area'] ['table'] = 'views'; $handler->display->display_options['footer']['area'] ['field'] = 'area'; $handler->display->display_options['footer']['area'] ['empty'] = TRUE; $handler->display->display_options['footer']['area'] ['content'] = '<small>This view is brought to you courtesy of the D7 Views Recipes module</small>'; $handler->display->display_options['footer']['area'] ['format'] = '3'; /* Field: Node: Title */ $handler->display->display_options['fields']['title'] ['id'] = 'title'; $handler->display->display_options['fields']['title'] ['table'] = 'node'; $handler->display->display_options['fields']['title'] ['field'] = 'title'; $handler->display-> display_options['fields']['title']['alter']['alter_text'] = 0; $handler->display-> display_options['fields']['title']['alter']['make_link'] = 0; $handler->display-> display_options['fields']['title']['alter']['trim'] = 0; $handler->display-> display_options['fields']['title']['alter'] ['word_boundary'] = 1; $handler->display-> display_options['fields']['title']['alter']['ellipsis'] = 1; $handler->display-> display_options['fields']['title']['alter']['strip_tags'] = 0; $handler->display-> display_options['fields']['title']['alter']['html'] = 0; $handler->display-> display_options['fields']['title']['hide_empty'] = 0; $handler->display-> display_options['fields']['title']['empty_zero'] = 0; $handler->display-> display_options['fields']['title']['link_to_node'] = 0; /* Field: Node: Type */ $handler->display->display_options['fields']['type'] ['id'] = 'type'; $handler->display->display_options['fields']['type'] ['table'] = 'node'; $handler->display->display_options['fields']['type'] ['field'] = 'type'; $handler->display-> display_options['fields']['type']['alter']['alter_text'] = 0; $handler->display-> display_options['fields']['type']['alter']['make_link'] = 0; $handler->display-> display_options['fields']['type']['alter']['trim'] = 0; $handler->display-> display_options['fields']['type']['alter'] ['word_boundary'] = 1; $handler->display-> display_options['fields']['type']['alter']['ellipsis'] = 1; $handler->display-> display_options['fields']['type']['alter']['strip_tags'] = 0; $handler->display-> display_options['fields']['type']['alter']['html'] = 0; $handler->display-> display_options['fields']['type']['hide_empty'] = 0; $handler->display-> display_options['fields']['type']['empty_zero'] = 0; $handler->display-> display_options['fields']['type']['link_to_node'] = 0; $handler->display-> 
display_options['fields']['type']['machine_name'] = 0; /* Field: Node: Post date */ $handler->display->display_options['fields']['created'] ['id'] = 'created'; $handler->display->display_options['fields']['created'] ['table'] = 'node'; $handler->display->display_options['fields']['created'] ['field'] = 'created'; $handler->display-> display_options['fields']['created']['alter'] ['alter_text'] = 0; $handler->display-> display_options['fields']['created']['alter'] ['make_link'] = 0; $handler->display-> display_options['fields']['created']['alter']['trim'] = 0; $handler->display-> display_options['fields']['created']['alter'] ['word_boundary'] = 1; $handler->display-> display_options['fields']['created']['alter']['ellipsis'] = 1; $handler->display-> display_options['fields']['created']['alter'] ['strip_tags'] = 0; $handler->display-> display_options['fields']['created']['alter']['html'] = 0; $handler->display-> display_options['fields']['created']['hide_empty'] = 0; $handler->display-> display_options['fields']['created']['empty_zero'] = 0; $handler->display-> display_options['fields']['created']['date_format'] = 'custom'; $handler->display-> display_options['fields']['created']['custom_date_format'] = 'Y-m-d'; /* Field: User: Name */ $handler->display->display_options['fields']['name'] ['id'] = 'name'; $handler->display->display_options['fields']['name'] ['table'] = 'users'; $handler->display->display_options['fields']['name'] ['field'] = 'name'; $handler->display->display_options['fields']['name'] ['label'] = 'Author'; $handler->display-> display_options['fields']['name']['alter']['alter_text'] = 0; $handler->display-> display_options['fields']['name']['alter']['make_link'] = 0; $handler->display-> display_options['fields']['name']['alter']['trim'] = 0; $handler->display-> display_options['fields']['name']['alter'] ['word_boundary'] = 1; $handler->display-> display_options['fields']['name']['alter']['ellipsis'] = 1; $handler->display-> display_options['fields']['name']['alter']['strip_tags'] = 0; $handler->display-> display_options['fields']['name']['alter']['html'] = 0; $handler->display-> display_options['fields']['name']['hide_empty'] = 0; $handler->display-> display_options['fields']['name']['empty_zero'] = 0; $handler->display-> display_options['fields']['name']['link_to_user'] = 0; $handler->display-> display_options['fields']['name']['overwrite_anonymous'] = 0; /* Field: Node: Published */ $handler->display->display_options['fields']['status'] ['id'] = 'status'; $handler->display->display_options['fields']['status'] ['table'] = 'node'; $handler->display->display_options['fields']['status'] ['field'] = 'status'; $handler->display-> display_options['fields']['status']['alter'] ['alter_text'] = 0; $handler->display-> display_options['fields']['status']['alter']['make_link'] = 0; $handler->display-> display_options['fields']['status']['alter']['trim'] = 0; $handler->display-> display_options['fields']['status']['alter'] ['word_boundary'] = 1; $handler->display-> display_options['fields']['status']['alter']['ellipsis'] = 1; $handler->display-> display_options['fields']['status']['alter'] ['strip_tags'] = 0; $handler->display-> display_options['fields']['status']['alter']['html'] = 0; $handler->display-> display_options['fields']['status']['hide_empty'] = 0; $handler->display-> display_options['fields']['status']['empty_zero'] = 0; $handler->display->display_options['fields']['status'] ['type'] = 'true-false'; $handler->display->display_options['fields']['status'] ['not'] = 0; /* Sort 
criterion: Node: Post date */ $handler->display->display_options['sorts']['created'] ['id'] = 'created'; $handler->display->display_options['sorts']['created'] ['table'] = 'node'; $handler->display->display_options['sorts']['created'] ['field'] = 'created'; $handler->display->display_options['sorts']['created'] ['order'] = 'DESC'; /* Filter: Node: Type */ $handler->display->display_options['filters']['type'] ['id'] = 'type'; $handler->display->display_options['filters']['type'] ['table'] = 'node'; $handler->display->display_options['filters']['type'] ['field'] = 'type'; $handler->display-> display_options['filters']['type']['operator'] = 'not in'; $handler->display->display_options['filters']['type'] ['value'] = array( 'page' => 'page', ); /* Display: Page */ $handler = $view->new_display('page', 'Page', 'page_1'); $handler->display->display_options['path'] = 'list-all-nodes'; $views[$view->name] = $view; return $views; } ?>   Save the file as d7vrpv.module. Navigate to the modules admin page at admin/modules. Scroll down to the new module and activate it, as shown in the following screenshot: Navigate to the Views Admin page (admin/structure/views) to verify that the view appears in the list: Finally, navigate to list-all-nodes to see the view, as shown in the following screenshot: How it works... The module we have just created could have many other features associated with it, beyond simply a view, and enabling the module will make those features and the view available, while disabling it will hide those same features and view. When compiling the list of installed modules, Drupal looks first in its own modules directory for .info files, and then in the site's modules directories. As can be deduced from the fact that we put our .info file in a second-level directory of sites/all/modules and it was found there, Drupal will traverse the modules directory tree looking for .info files. We created a .info file that provided Drupal with the name and description of our module, its version, the version of Drupal it is meant to work with, and a list of files used by the module, in our case just one. We saved the .info file as d7vrpv.info (Drupal 7 Views Recipes programmatic view); the name of the directory in which the module files appear (d7vr) has no bearing on the module itself. The module file contains the code that will be executed, at least initially. Drupal does not "call" the module code in an active way. Instead, there are events that occur during Drupal's creation of a page, and modules can elect to register with Drupal to be notifi ed of such events when they occur, so that the module can provide the code to be executed at that time; for example, you registering with a business to receive an e-mail in the event of a sale. Just like you are free to act or not, but the sales go on regardless, so too Drupal continues whether or not the module decides to do something when given the chance. Our module 'hooks' the views_api and views_default_views events in order to establish the fact that we do have a view to offer. The latter hook instructs the Views module which function in our code executes our view: d7vrpv_list_all_nodes(). The first thing it does is create a view object by calling a function provided by the Views module. Having instantiated the new object, we then proceed to provide the information it needs, such as the name of the view, its description, and all the information that we would have selected through the Views UI had we used it. 
As we are specifying the view options in the code, we need to provide the information that is needed by each handler of the view functionality.

The net effect of the code is that when we have cleared the cache and enabled our module, Drupal then includes it in its list of modules to poll during events. When we navigate to the Views Admin page, an event occurs in which any module wishing to include a view in the list on the admin screen does so, including ours. One of the things our module does is define a path for the page display of our view, which is then used to establish a callback. When that path, list-all-nodes, is requested, it results in the function in our module being invoked, which in turn provides all the information necessary for our view to be rendered and presented.

There's more

The details of the code provided to each handler are outside the scope of this book, but you don't really need to understand it all in order to use it. You can enable the Views Bulk Export module (it comes with Views), create a view using the Views UI in admin, and choose to Bulk Export it. Give the exporter the name of your new module and it will create a file and populate it with nearly all the code necessary for you.

Handling a view field

As you may have noticed in the preceding code that you typed or pasted, Views makes tremendous use of handlers. What is a handler? It is simply a script that performs a special task on one or more elements. Think of a house being built. The person who comes in to tape, mud, and sand the wallboard is a handler.

In Views, one type of handler is the field handler, which handles any number of things, from providing settings options in the field configuration dialog, to facilitating the field being retrieved from the database if it is not part of the primary record, to rendering the data. We will create a field handler in this recipe that will add to the display of a zip code a string showing how many other nodes have the same zip code, and we will add some formatting options to it in the next recipe.

Getting ready

A handler lives inside a module, so we will create one:

1. Create a directory in your contributed modules path for this module.
2. Open a new text file in your editor and paste the following code into it:

; $Id:
name = Zip Code Handler
description = Provides a view handler to format a field as a zip code
package = D7 Views Recipes
; Handler
files[] = d7vrzch_handler_field_zip_code.inc
files[] = d7vrzch_views.inc
version = "7.x-1.0"
core = "7.x"
php = 5.2

3. Save the file as d7vrzch.info.
4. Create another text file and paste the following code into it:

<?php
/**
 * Implements hook_views_data_alter()
 */
function d7vrzch_field_views_data_alter(&$data, $field) {
  if (array_key_exists('field_data_field_zip_code', $data)) {
    $data['field_data_field_zip_code']['field_zip_code']['field']['handler'] = 'd7vrzch_handler_field_zip_code';
  }
}

5. Save the file as d7vrzch.views.inc.
6. Create another text file and paste the following into it:

<?php
/**
 * Implements hook_views_api().
 */
function d7vrzch_views_api() {
  return array(
    'api' => 3,
    'path' => drupal_get_path('module', 'd7vrzch'),
  );
}

7. Save the file as d7vrzch.module.

How to do it...

Carry out the following steps:

1. Create another text file and paste the following into it:

<?php
// $Id: $
/**
 * Field handler to format a zip code.
* * @ingroup views_field_handlers */ class d7vrzch_handler_field_zip_code extends views_handler_field_field { function option_definition() { $options = parent::option_definition(); $options['display_zip_totals'] = array( 'contains' => array( 'display_zip_totals' => array('default' => FALSE), ) ); return $options; } /** * Provide a link to the page being visited. */ function options_form(&$form, &$form_state) { parent::options_form($form, $form_state); $form['display_zip_totals'] = array( '#title' => t('Display Zip total'), '#description' => t('Appends in parentheses the number of nodes containing the same zip code'), '#type' => 'checkbox', '#default_value' => !empty($this-> options['display_zip_totals']), ); } function pre_render(&$values) { if (isset($this->view->build_info['summary']) || empty($values)) { return parent::pre_render($values); } static $entity_type_map; if (!empty($values)) { // Cache the entity type map for repeat usage. if (empty($entity_type_map)) { $entity_type_map = db_query('SELECT etid, type FROM {field_config_entity_type}')->fetchAllKeyed(); } // Create an array mapping the Views values to their object types. $objects_by_type = array(); foreach ($values as $key => $object) { // Derive the entity type. For some field types, etid might be empty. if (isset($object->{$this->aliases['etid']}) && isset($entity_type_map[$object->{$this-> aliases['etid']}])) { $entity_type = $entity_type_map[$object->{$this-> aliases['etid']}]; $entity_id = $object->{$this->field_alias}; $objects_by_type[$entity_type][$key] = $entity_id; } } // Load the objects. foreach ($objects_by_type as $entity_type => $oids) { $objects = entity_load($entity_type, $oids); foreach ($oids as $key => $entity_id) { $values[$key]->_field_cache[$this->field_alias] = array( 'entity_type' => $entity_type, 'object' => $objects[$entity_id], ); } } } } function render($values) { $value = $values->_field_cache[$this->field_alias] ['object']->{$this->definition['field_name']} ['und'][0]['safe_value']; $newvalue = $value; if (!empty($this->options['display_zip_totals'])) { $result = db_query("SELECT count(*) AS recs FROM {field_data_field_zip_code} WHERE field_zip_code_value = :zip",array(':zip' => $value)); foreach ($result as $item) { $newvalue .= ' (' . $item->recs . ')'; } } return $newvalue; } Save the file as d7vrzch_handler_field_zip_code.inc. Navigate to admin/build/modules and enable the new module, which shows as the Zip Code Handler. We will test the handler in a quick view. Navigate to admin/build/views. Click on the +Add new view link , enter test as the View name, check the box for description and enter Zip code handler test; clear the Create a page checkbox , and click on the Continue & edit button . On the Views edit page, click on the add link in the Filter Criteria pane, check the box next to Content: Type, and click on the Add and configure filter criteria button . In the Content: Type configuration box , select Home and click on the Apply button . Click on the add link next to Fields, check the box next to Content: Zip code, and click on the Add and configure fields button. Check the box at the bottom of the Content: Zip code configuration box titled Display Zip total and click on the Apply button. Click on the Save button and see the result of our custom handler in the Live preview: How it works... The Views field handler is simply a set of functions that provide support for populating and formatting a field for Views, much in the way a printer driver does for the operating system. 
We created a module in which our handler resides, and whenever that field is requested within a view, our handler is invoked. We also added a display option to the field's configuration options which, when selected, takes each zip code value to be displayed, determines how many nodes have the same zip code, and appends the parenthesized total to the output. The three functions, two in the .views.inc file and one in the .module file, are essential: together they ensure that our custom handler file is used for field_zip_code instead of the default handler for entity text fields. In the next recipe, we will add zip code formatting options to our custom handler.

Facebook: Accessing Graph API

Packt
18 Jan 2011
8 min read
  Facebook Graph API Development with Flash Build social Flash applications fully integrated with the Facebook Graph API Build your own interactive applications and games that integrate with Facebook Add social features to your AS3 projects without having to build a new social network from scratch Learn how to retrieve information from Facebook's database A hands-on guide with step-by-step instructions and clear explanation that encourages experimentation and play Accessing the Graph API through a Browser We'll dive right in by taking a look at how the Graph API represents the information from a public Page. When I talk about a Page with a capital P, I don't just mean any web page within the Facebook site; I'm referring to a specific type of page, also known as a public profile. Every Facebook user has their own personal profile; you can see yours by logging in to Facebook and clicking on the "Profile" link in the navigation bar at the top of the site. Public profiles look similar, but are designed to be used by businesses, bands, products, organizations, and public figures, as a way of having a presence on Facebook. This means that many people have both a personal profile and a public profile. For example, Mark Zuckerberg, the CEO of Facebook, has a personal profile at http://www.facebook.com/zuck and a public profile (a Page) at http://www.facebook.com/markzuckerberg. This way, he can use his personal profile to keep in touch with his friends and family, while using his public profile to connect with his fans and supporters. There is a second type of Page: a Community Page. Again, these look very similar to personal profiles; the difference is that these are based on topics, experience, and causes, rather than entities. Also, they automatically retrieve information about the topic from Wikipedia, where relevant, and contain a live feed of wall posts talking about the topic. All this can feel a little confusing – don't worry about it! Once you start using it, it all makes sense. Time for action – loading a Page Browse to http://www.facebook.com/PacktPub to load Packt Publishing's Facebook Page. You'll see a list of recent wall posts, an Info tab, some photo albums (mostly containing book covers), a profile picture, and a list of fans and links. That's how website users view the information. How will our code "see" it? Take a look at how the Graph API represents Packt Publishing's Page by pointing your web browser at https://graph.facebook.com/PacktPub. This is called a Graph URL – note that it's the same URL as the Page itself, but with a secure https connection, and using the graph sub domain, rather than www. What you'll see is as follows: { "id": "204603129458", "name": "Packt Publishing", "picture": "http://profile.ak.fbcdn.net/hprofile-ak-snc4/ hs302.ash1/23274_204603129458_7460_s.jpg", "link": "http://www.facebook.com/PacktPub", "category": "Products_other", "username": "PacktPub", "company_overview": "Packt is a modern, IT focused book publisher, specializing in producing cutting-edge books for communities of developers, administrators, and newbies alike.nnPackt published its first book, Mastering phpMyAdmin for MySQL Management in April 2004.", "fan_count": 412 } What just happened? You just fetched the Graph API's representation of the Packt Publishing Page in your browser. The Graph API is designed to be easy to pick up – practically self-documenting – and you can see that it's a success in that respect. 
It's pretty clear that the previous data is a list of fields and their values. The one field that's perhaps not clear is id; this number is what Facebook uses internally to refer to the Page. This means Pages can have two IDs: the numeric one assigned automatically by Facebook, and an alphanumeric one chosen by the Page's owner. The two IDs are equivalent: if you browse to https://graph.facebook.com/204603129458, you'll see exactly the same data as if you browse to https://graph.facebook.com/PacktPub. Have a go hero – exploring other objects Of course, the Packt Publishing Page is not the only Page you can explore with the Graph API in your browser. Find some other Pages through the Facebook website in your browser, then, using the https://graph.facebook.com/id format, take a look at their Graph API representations. Do they have more information, or less? Next, move on to other types of Facebook objects: personal profiles, events, groups. For personal profiles, the id may be alphanumeric (if the person has signed up for a custom Facebook Username at http://www.facebook.com/username/), but in general the id will be numeric, and auto-assigned by Facebook when the user signed up. For certain types of objects (like photo albums), the value of id will not be obvious from the URL within the Facebook website. In some cases, you'll get an error message, like: { "error": { "type": "OAuthAccessTokenException", "message": "An access token is required to request this resource." } } Accessing the Graph API through AS3 Now that you've got an idea of how easy it is to access and read Facebook data in a browser, we'll see how to fetch it in AS3. Time for action – retrieving a Page's information in AS3 Set up the project. Check that the project compiles with no errors (there may be a few warnings, depending on your IDE). You should see a 640 x 480 px SWF, all white, with just three buttons in the top-left corner: Zoom In, Zoom Out, and Reset View: This project is the basis for a Rich Internet Application (RIA) that will be able to explore all of the information on Facebook using the Graph API. All the code for the UI is in place, just waiting for some Graph data to render. Our job is to write code to retrieve the data and pass it on to the renderers. I'm not going to break down the entire project and explain what every class does. What you need to know at the moment is a single instance of the controllers. CustomGraphContainerController class is created when the project is initialized, and it is responsible for directing the flow of data to and from Facebook. It inherits some useful methods for this purpose from the controllers.GCController class; we'll make use of these later on. Open the CustomGraphContainerController class in your IDE. It can be found in srccontrollersCustomGraphContainerController.as, and should look like the listing below: package controllers { import ui.GraphControlContainer; public class CustomGraphContainerController extends GCController { public function CustomGraphContainerController (a_graphControlContainer:GraphControlContainer) { super(a_graphControlContainer); } } } The first thing we'll do is grab the Graph API's representation of Packt Publishing's Page via a Graph URL, like we did using the web browser. For this we can use a URLLoader. The URLLoader and URLRequest classes are used together to download data from a URL. The data can be text, binary data, or URL-encoded variables. 
The download is triggered by passing a URLRequest object, whose url property contains the requested URL, to the load() method of a URLLoader. Once the required data has finished downloading, the URLLoader dispatches a COMPLETE event. The data can then be retrieved from its data property. Modify CustomGraphContainerController.as like so (the highlighted lines are new): package controllers { import flash.events.Event; import flash.net.URLLoader; import flash.net.URLRequest; import ui.GraphControlContainer; public class CustomGraphContainerController extends GCController { public function CustomGraphContainerController (a_graphControlContainer:GraphControlContainer) { super(a_graphControlContainer); var loader:URLLoader = new URLLoader(); var request:URLRequest = new URLRequest(); //Specify which Graph URL to load request.url = "https://graph.facebook.com/PacktPub"; loader.addEventListener(Event.COMPLETE, onGraphDataLoadComplete); //Start the actual loading process loader.load(request); } private function onGraphDataLoadComplete(a_event:Event):void { var loader:URLLoader = a_event.target as URLLoader; //obtain whatever data was loaded, and trace it var graphData:String = loader.data; trace(graphData); } } } All we're doing here is downloading whatever information is at https://graph.facebook.com/PackPub and tracing it to the output window. Test your project, and take a look at your output window. You should see the following data: {"id":"204603129458","name":"Packt Publishing","picture":"http:// profile.ak.fbcdn.net/hprofile-ak-snc4/hs302. ash1/23274_204603129458_7460_s.jpg","link":"http://www.facebook. com/PacktPub","category":"Products_other","username":"PacktPub", "company_overview":"Packt is a modern, IT focused book publisher, specializing in producing cutting-edge books for communities of developers, administrators, and newbies alike.nnPackt published its first book, Mastering phpMyAdmin for MySQL Management in April 2004.","fan_count":412} If you get an error, check that your code matches the previously mentioned code. If you see nothing in your output window, make sure that you are connected to the Internet. If you still don't see anything, it's possible that your security settings prevent you from accessing the Internet via Flash, so check those.  
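The Graph URL is ordinary HTTPS returning JSON, so the same request can be reproduced from any HTTP client, not just a browser or the Flash URLLoader used above. As a rough sketch outside the book's ActionScript focus, here is the equivalent fetch in C++ using libcurl (assuming libcurl is installed; error handling is minimal, and, as noted earlier, some objects will return an OAuth error unless an access token is supplied):

#include <curl/curl.h>
#include <iostream>
#include <string>

// append each chunk libcurl receives to a std::string
static size_t appendChunk(char* data, size_t size, size_t nmemb, void* userdata) {
  static_cast<std::string*>(userdata)->append(data, size * nmemb);
  return size * nmemb;
}

int main() {
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL* curl = curl_easy_init();
  if (!curl) return 1;

  std::string body;
  // the same Graph URL we loaded in the browser and in AS3
  curl_easy_setopt(curl, CURLOPT_URL, "https://graph.facebook.com/PacktPub");
  curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, appendChunk);
  curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

  CURLcode result = curl_easy_perform(curl);
  curl_easy_cleanup(curl);
  curl_global_cleanup();

  if (result != CURLE_OK) {
    std::cerr << curl_easy_strerror(result) << std::endl;
    return 1;
  }
  std::cout << body << std::endl;  // the JSON object shown earlier
  return 0;
}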

Microservices and Service Oriented Architecture

Packt
09 Mar 2017
6 min read
Microservices are an architecture style and an approach for software development to satisfy modern business demands. They are not a new invention as such. They are instead an evolution of previous architecture styles. Many organizations today use them - they can improve organizational agility, speed of delivery, and ability to scale. Microservices give you a way to develop more physically separated modular applications. This tutorial has been taken from Spring 5.0 Microsevices - Second Edition Microservices are similar to conventional service-oriented architectures. In this article, we will see how microservices are related to SOA. The emergence of microservices Many organizations, such as Netflix, Amazon, and eBay, successfully used what is known as the 'divide and conquer' technique to functionally partition their monolithic applications into smaller atomic units. Each one performs a single function - a 'service'. These organizations solved a number of prevailing issues they were experiencing with their monolithic application. Following the success of these organizations, many other organizations started adopting this as a common pattern to refactor their monolithic applications. Later, evangelists termed this pattern as microservices architecture. Microservices originated from the idea of Hexagonal Architecture, coined by Alistair Cockburn back in 2005. Hexagonal Architecture or Hexagonal pattern is also known as the Ports and Adapters pattern. Cockburn defined microservices as: "...an architectural style or an approach for building IT systems as a set of business capabilities that are autonomous, self contained, and loosely coupled." The following diagram depicts a traditional N-tier application architecture having presentation layer, business layer, and database layer: Modules A, B, and C represent three different business capabilities. The layers in the diagram represent separation of architecture concerns. Each layer holds all three business capabilities pertaining to that layer. Presentation layer has web components of all three modules, business layer has business components of all three modules, and database hosts tables of all three modules. In most cases, layers are physically spreadable, whereas modules within a layer are hardwired. Let's now examine a microservice-based architecture: As we can see in the preceding diagram, the boundaries are inversed in the microservices architecture. Each vertical slice represents a microservice. Each microservice will have its own presentation layer, business layer, and database layer. Microservices is aligned toward business capabilities. By doing so, changes to one microservice do not impact the others. There is no standard for communication or transport mechanisms for microservices. In general, microservices communicate with each other using widely adopted lightweight protocols, such as HTTP and REST, or messaging protocols, such as JMS or AMQP. In specific cases, one might choose more optimized communication protocols, such as Thrift, ZeroMQ, Protocol Buffers, or Avro. As microservices is more aligned to the business capabilities and has independently manageable lifecycles, they are the ideal choice for enterprises embarking on DevOps and cloud. DevOps and cloud are two facets of microservices. How do microservices compare to Service Oriented Architectures? One of the common question arises when dealing with microservices architecture is, how is it different from SOA. SOA and microservices follow similar concepts. 
Earlier in this article, we saw that microservices evolved from SOA, and that many service characteristics are common to both approaches. However, are they the same or different? Because microservices evolved from SOA, many of their characteristics are similar to SOA's. Let's first examine the definition of SOA. The Open Group definition of SOA is as follows:

"SOA is an architectural style that supports service-orientation. Service-orientation is a way of thinking in terms of services and service-based development and the outcomes of services.
Is self-contained
May be composed of other services
Is a “black box” to consumers of the service"

You have seen similar aspects in microservices as well. So, in what way are microservices different? The answer is: it depends. They could be the same or different, depending on the organization and how it adopted SOA. SOA is a broader term, and different organizations approached it differently to solve different organizational problems. The difference between microservices and SOA therefore lies in how an organization has approached SOA. To get clarity, a few cases are examined here.

Service oriented integration

Service-oriented integration refers to a service-based integration approach used by many organizations: Many organizations would have used SOA primarily to solve their integration complexities, also known as integration spaghetti. Generally, this is termed Service Oriented Integration (SOI). In such cases, applications communicate with each other through a common integration layer using standard protocols and message formats, such as SOAP/XML-based web services over HTTP or Java Message Service (JMS). These organizations focus on Enterprise Integration Patterns (EIP) to model their integration requirements. This approach relies strongly on a heavyweight Enterprise Service Bus (ESB), such as TIBCO Business Works, WebSphere ESB, Oracle ESB, and the like. Most ESB vendors also packaged a set of related products, such as rules engines and business process management engines, as an SOA suite. Such organizations' integrations are deeply rooted in these products. They either write heavy orchestration logic in the ESB layer or place business logic itself in the service bus. In both cases, all enterprise services are deployed and accessed through the ESB, and these services are managed through an enterprise governance model. For such organizations, microservices are altogether different from SOA.

Legacy modernization

SOA is also used to build service layers on top of legacy applications, as shown in the following diagram: Another category of organizations would have used SOA in transformation or legacy modernization projects. In such cases, the services are built and deployed in the ESB, connecting to backend systems using ESB adapters. For these organizations, microservices are different from SOA.

Service oriented application

Some organizations would have adopted SOA at an application level: In this approach, as shown in the preceding diagram, lightweight integration frameworks, such as Apache Camel or Spring Integration, are embedded within applications to handle service-related cross-cutting capabilities, such as protocol mediation, parallel execution, orchestration, and service integration.
As some of the lightweight integration frameworks had native Java object support, such applications would have even used native Plain Old Java Objects (POJO) services for integration and data exchange between services. As a result, all services have to be packaged as one monolithic web archive. Such organizations could see microservices as the next logical step of their SOA. Monolithic migration using SOA The following diagram represents Logical System Boundaries: The last possibility is transforming a monolithic application into smaller units after hitting the breaking point with the monolithic system. They would have broken the application into smaller physically deployable subsystems, similar to the Y axis scaling approach explained earlier and deployed them as web archives on web servers or as jars deployed on some home grown containers. These subsystems as service would have used web services or other lightweight protocols to exchange data between services. They would have also used SOA and service design principles to achieve this. For such organizations, they may tend to think that microservices is the same old wine in a new bottle. Further resources on this subject: Building Scalable Microservices [article] Breaking into Microservices Architecture [article] A capability model for microservices [article]

Extension functions in Kotlin: everything you need to know

Aaron Lazar
08 Jun 2018
8 min read
Kotlin is a rapidly rising programming language. It offers developers the simplicity and effectiveness to develop robust and lightweight applications. Kotlin offers great functional programming support, and one of the best features of Kotlin in this respect are extension functions, hands down! Extension functions are great, because they let you modify existing types with new functions. This is especially useful when you're working with Android and you want to add extra functions to the framework classes. In this article, we'll see what Extension functions are and how the're a blessing in disguise! This article has been extracted from the book, Functional Kotlin, by Mario Arias and Rivu Chakraborty. The book bridges the language gap for Kotlin developers by showing you how to create and consume functional constructs in Kotlin. fun String.sendToConsole() = println(this) fun main(args: Array<String>) { "Hello world! (from an extension function)".sendToConsole() } To add an extension function to an existing type, you must write the function's name next to the type's name, joined by a dot (.). In our example, we add an extension function (sendToConsole()) to the String type. Inside the function's body, this refers the instance of String type (in this extension function, string is the receiver type). Apart from the dot (.) and this, extension functions have the same syntax rules and features as a normal function. Indeed, behind the scenes, an extension function is a normal function whose first parameter is a value of the receiver type. So, our sendToConsole() extension function is equivalent to the next code: fun sendToConsole(string: String) = println(string) sendToConsole("Hello world! (from a normal function)") So, in reality, we aren't modifying a type with new functions. Extension functions are a very elegant way to write utility functions, easy to write, very fun to use, and nice to read—a win-win. This also means that extension functions have one restriction—they can't access private members of this, in contrast with a proper member function that can access everything inside the instance: class Human(private val name: String) fun Human.speak(): String = "${this.name} makes a noise" //Cannot access 'name': it is private in 'Human' Invoking an extension function is the same as a normal function—with an instance of the receiver type (that will be referenced as this inside the extension), invoke the function by name. Extension functions and inheritance There is a big difference between member functions and extension functions when we talk about inheritance. The open class Canine has a subclass, Dog. A standalone function, printSpeak, receives a parameter of type Canine and prints the content of the result of the function speak(): String: open class Canine { open fun speak() = "<generic canine noise>" } class Dog : Canine() { override fun speak() = "woof!!" } fun printSpeak(canine: Canine) { println(canine.speak()) } Open classes with open methods (member functions) can be extended and alter their behavior. Invoking the speak function will act differently depending on which type is your instance. The printSpeak function can be invoked with any instance of a class that is-a Canine, either Canine itself or any subclass: printSpeak(Canine()) printSpeak(Dog()) If we execute this code, we can see this on the console: Although both are Canine, the behavior of speak is different in both cases, as the subclass overrides the parent implementation. 
But with extension functions, many things are different. As with the previous example, Feline is an open class extended by the Cat class. But speak is now an extension function: open class Feline fun Feline.speak() = "<generic feline noise>" class Cat : Feline() fun Cat.speak() = "meow!!" fun printSpeak(feline: Feline) { println(feline.speak()) } Extension functions don't need to be marked as override, because we aren't overriding anything: printSpeak(Feline()) printSpeak(Cat() If we execute this code, we can see this on the console: In this case, both invocations produce the same result. Although in the beginning, it seems confusing, once you analyze what is happening, it becomes clear. We're invoking the Feline.speak() function twice; this is because each parameter that we pass is a Feline to the printSpeak(Feline) function: open class Primate(val name: String) fun Primate.speak() = "$name: <generic primate noise>" open class GiantApe(name: String) : Primate(name) fun GiantApe.speak() = "${this.name} :<scary 100db roar>" fun printSpeak(primate: Primate) { println(primate.speak()) } printSpeak(Primate("Koko")) printSpeak(GiantApe("Kong")) If we execute this code, we can see this on the console: In this case, it is still the same behavior as with the previous examples, but using the right value for name. Speaking of which, we can reference name with name and this.name; both are valid. Extension functions as members Extension functions can be declared as members of a class. An instance of a class with extension functions declared is called the dispatch receiver. The Caregiver open class internally defines, extension functions for two different classes, Feline and Primate: open class Caregiver(val name: String) { open fun Feline.react() = "PURRR!!!" fun Primate.react() = "*$name plays with ${[email protected]}*" fun takeCare(feline: Feline) { println("Feline reacts: ${feline.react()}") } fun takeCare(primate: Primate){ println("Primate reacts: ${primate.react()}") } } Both extension functions are meant to be used inside an instance of Caregiver. Indeed, it is a good practice to mark member extension functions as private, if they aren't open. In the case of Primate.react(), we are using the name value from Primate and the name value from Caregiver. To access members with a name conflict, the extension receiver (this) takes precedence and to access members of the dispatcher receiver, the qualified this syntax must be used. Other members of the dispatcher receiver that don't have a name conflict can be used without qualified this. Don't get confused by the various means of this that we have already covered: Inside a class, this means the instance of that class Inside an extension function, this means the instance of the receiver type like the first parameter in our utility function with a nice syntax: class Dispatcher { val dispatcher: Dispatcher = this fun Int.extension(){ val receiver: Int = this val dispatcher: Dispatcher = this@Dispatcher } } Going back to our Zoo example, we instantiate a Caregiver, a Cat, and a Primate, and we invoke the function Caregiver.takeCare with both animal instances: val adam = Caregiver("Adam") val fulgencio = Cat() val koko = Primate("Koko") adam.takeCare(fulgencio) adam.takeCare(koko) If we execute this code, we can see this on the console: Any zoo needs a veterinary surgeon. 
The class Vet extends Caregiver: open class Vet(name: String): Caregiver(name) { override fun Feline.react() = "*runs away from $name*" } We override the Feline.react() function with a different implementation. We are also using the Vet class's name directly, as the Feline class doesn't have a property name: val brenda = Vet("Brenda") listOf(adam, brenda).forEach { caregiver -> println("${caregiver.javaClass.simpleName} ${caregiver.name}") caregiver.takeCare(fulgencio) caregiver.takeCare(koko) } After which, we get the following output: Extension functions with conflicting names What happens when an extension function has the same name as a member function? The Worker class has a function work(): String and a private function rest(): String. We also have two extension functions with the same signature, work and rest: class Worker { fun work() = "*working hard*" private fun rest() = "*resting*" } fun Worker.work() = "*not working so hard*" fun <T> Worker.work(t:T) = "*working on $t*" fun Worker.rest() = "*playing video games*" Having extension functions with the same signature isn't a compilation error, but a warning: Extension is shadowed by a member: public final fun work(): String It is legal to declare a function with the same signature as a member function, but the member function always takes precedence, therefore, the extension function is never invoked. This behavior changes when the member function is private, in this case, the extension function takes precedence. It is also possible to overload an existing member function with an extension function: val worker = Worker() println(worker.work()) println(worker.work("refactoring")) println(worker.rest()) On execution, work() invokes the member function and work(String) and rest() are extension functions: Extension functions for objects In Kotlin, objects are a type, therefore they can have functions, including extension functions (among other things, such as extending interfaces and others). We can add a buildBridge extension function to the object, Builder: object Builder { } fun Builder.buildBridge() = "A shinny new bridge" We can include companion objects. The class Designer has two inner objects, the companion object and Desk object: class Designer { companion object { } object Desk { } } fun Designer.Companion.fastPrototype() = "Prototype" fun Designer.Desk.portofolio() = listOf("Project1", "Project2") Calling this functions works like any normal object member function: Designer.fastPrototype() Designer.Desk.portofolio().forEach(::println) So there you have it! You now know how to take advantage of extension functions in Kotlin. If you found this tutorial helpful and would like to learn more, head on over to purchase the full book, Functional Kotlin, by Mario Arias and Rivu Chakraborty. Forget C and Java. Learn Kotlin: the next universal programming language 5 reasons to choose Kotlin over Java Building chat application with Kotlin using Node.js, the powerful Server-side JavaScript platform

NodeJS: Building a Maintainable Codebase

Benjamin Reed
06 May 2015
8 min read
NodeJS has become the most anticipated web development technology since Ruby on Rails. This is not an introduction to Node. First, you must realize that NodeJS is not a direct competitor to Rails or Django. Instead, Node is a collection of libraries that allow JavaScript to run on the v8 runtime. Node powers many tools, and some of the tools have nothing to do with a scaling web application. For instance, GitHub’s Atom editor is built on top of Node. Its web application frameworks, like Express, are the competitors. This article can apply to all environments using Node. Second, Node is designed under the asynchronous ideology. Not all of the operations in Node are asynchronous. Many libraries offer synchronous and asynchronous options. A Node developer must decipher the best operation for his or her needs. Third, you should have a solid understanding of the concept of a callback in Node. Over the course of two weeks, a team attempted to refactor a Rails app to be an Express application. We loved the concepts behind Node, and we truly believed that all we needed was a barebones framework. We transferred our controller logic over to Express routes in a weekend. As a beginning team, I will analyze some of the pitfalls that we came across. Hopefully, this will help you identify strategies to tackle Node with your team. First, attempt to structure callbacks and avoid anonymous functions. As we added more and more logic, we added more and more callbacks. Everything was beautifully asynchronous, and our code would successfully run. However, we soon found ourselves debugging an anonymous function nested inside of other anonymous functions. In other words, the codebase was incredibly difficult to follow. Anyone starting out with Node could potentially notice the novice “spaghetti code.” Here’s a simple example of nested callbacks: router.put('/:id', function(req, res) { console.log("attempt to update bathroom"); models.User.find({ where: {id: req.param('id')} }).success(function (user) { var raw_cell = req.param('cell') ? req.param('cell') : user.cell; var raw_email = req.param('email') ? req.param('email') : user.email; var raw_username = req.param('username') ? req.param('username') : user.username; var raw_digest = req.param('digest') ? req.param('digest') : user.digest; user.cell = raw_cell; user.email = raw_email; user.username = raw_username; user.digest = raw_digest; user.updated_on = new Date(); user.save().success(function () { res.json(user); }).error(function () { res.json({"status": "error"}); }); }) .error(function() { res.json({"status": "error"}); }) }); Notice that there are many success and error callbacks. Locating a specific callback is not difficult if the whitespace is perfect or the developer can count closing brackets back up to the destination. However, this is pretty nasty to any newcomer. And this illegibility will only increase as the application becomes more complex. A developer may get this response: {"status": "error"} Where did this response come from? Did the ORM fail to update the object? Did it fail to find the object in the first place? A developer could add descriptions to the json in the chained error callbacks, but there has to be a better way. Let’s extract some of the callbacks into separate methods: router.put('/:id', function(req, res) { var id = req.param('id'); var query = { where: {id: id} }; // search for user models.User.find(query).success(function (user) { // parse req parameters var raw_cell = req.param('cell') ? 
req.param('cell') : user.cell; var raw_email = req.param('email') ? req.param('email') : user.email; var raw_username = req.param('username') ? req.param('username') : user.username; // set user attributes user.cell = raw_cell; user.email = raw_email; user.username = raw_username; user.updated_on = new Date(); // attempt to save user user.save() .success(SuccessHandler.userSaved(res, user)) .error(ErrorHandler.userNotSaved(res, id)); }) .error(ErrorHandler.userNotFound(res, id)) }); var ErrorHandler = { userNotFound: function(res, user_id) { res.json({"status": "error", "description": "The user with the specified id could not be found.", "user_id": user_id}); }, userNotSaved: function(res, user_id) { res.json({"status": "error", "description": "The update to the user with the specified id could not be completed.", "user_id": user_id}); } }; var SuccessHandler = { userSaved: function(res, user) { res.json(user); } } This seemed to help clean up our minimal sample. There is now only one anonymous function. The code seems to be a lot more readable and independent. However, our code is still cluttered by chaining success and error callbacks. One could make these global mutable variables, or, perhaps we can consider another approach. Futures, also known as promises, are becoming more prominent. Twitter has adopted them in Scala. It is definitely something to consider. Next, do what makes your team comfortable and productive. At the same time, do not compromise the integrity of the project. There are numerous posts that encourage certain styles over others. There are also extensive posts on the subject of CoffeeScript. If you aren’t aware, CoffeeScript is a language with some added syntactic flavor that compiles to JavaScript. Our team was primarily ruby developers, and it definitely appealed to us. When we migrated some of the project over to CoffeeScript, we found that our code was a lot shorter and appeared more legible. GitHub uses CoffeeScript for the Atom text editor to this day, and the Rails community has openly embraced it. The majority of node module documentation will use JavaScript, so CoffeeScript developers will have to become acquainted with translation. There are some problems with CoffeeScript being ES6 ready, and there are some modules that are clearly not meant to be utilized in CoffeeScript. CoffeeScript is an open source project, but it has appears to have a good backbone and a stable community. If your developers are more comfortable with it, utilize it. When it comes to open source projects, everyone tends to trust them. In the purest form, open source projects are absolutely beautiful. They make the lives of all of the developers better. Nobody has to re-implement the wheel unless they choose. Obviously, both Node and CoffeeScript are open source. However, the community is very new, and it is dangerous to assume that any package you find on NPM is stable. For us, the problem occurred when we searched for an ORM. We truly missed ActiveRecord, and we assumed that other projects would work similarly.  We tried several solutions, and none of them interacted the way we wanted. Besides expressing our entire schema in a JavaScript format, we found relations to be a bit of a hack. Settling on one, we ran our server. And our database cleared out. That’s fine in development, but we struggled to find a way to get it into production. We needed more documentation. Also, the module was not designed with CoffeeScript in mind. We practically needed to revert to JavaScript. 
In contrast, the Node community has openly embraced some NoSQL databases, such as MongoDB. They are definitely worth considering.   Either way, make sure that your team’s dependencies are very well documented. There should be a written documentation for each exposed object, function, etc. To sum everything up, this article comes down to two fundamental things learned in any computer science class: write modular code and document everything. Do your research on Node and find a style that is legible for your team and any newcomers. A NodeJS project can only be maintained if developers utilizing the framework recognize the importance of the project in the future. If your code is messy now, it will only become messier. If you cannot find necessary information in a module’s documentation, you probably will miss other information when there is a problem in production. Don’t take shortcuts. A node application can only be as good as its developers and dependencies. About the Author Benjamin Reed began Computer Science classes at a nearby university in Nashville during his sophomore year in high school. Since then, he has become an advocate for open source. He is now pursing degrees in Computer Science and Mathematics fulltime. The Ruby community has intrigued him, and he openly expresses support for the Rails framework. When asked, he believes that studying Rails has led him to some of the best practices and, ultimately, has made him a better programmer. iOS development is one of his hobbies, and he enjoys scouting out new projects on GitHub. On GitHub, he’s appropriately named @codeblooded. On Twitter, he’s @benreedDev.

Connecting Arduino to the Web

Packt
27 Sep 2016
6 min read
In this article by Marco Schwartz, author of Internet of Things with Arduino Cookbook, we will focus on getting you started by connecting an Arduino board to the web. This article will really be the foundation of the rest of the article, so make sure to carefully follow the instructions so you are ready to complete the exciting projects we'll see in the rest of the article. (For more resources related to this topic, see here.) You will first learn how to set up the Arduino IDE development environment, and add Internet connectivity to your Arduino board. After that, we'll see how to connect a sensor and a relay to the Arduino board, for you to understand the basics of the Arduino platform. Then, we are actually going to connect an Arduino board to the web, and use it to grab the content from the web and to store data online. Note that all the projects in this article use the Arduino MKR1000 board. This is an Arduino board released in 2016 that has an on-board Wi-Fi connection. You can make all the projects in the article with other Arduino boards, but you might have to change parts of the code. Setting up the Arduino development environment In this first recipe of the article, we are going to see how to completely set up the Arduino IDE development environment, so that you can later use it to program your Arduino board and build Internet of Things projects. How to do it… The first thing you need to do is to download the latest version of the Arduino IDE from the following address: https://www.arduino.cc/en/Main/Software This is what you should see, and you should be able to select your operating system: You can now install the Arduino IDE, and open it on your computer. The Arduino IDE will be used through the whole article for several tasks. We will use it to write down all the code, but also to configure the Arduino boards and to read debug information back from those boards using the Arduino IDE Serial monitor. What we need to install now is the board definition for the MKR1000 board that we are going to use in this article. To do that, open the Arduino boards manager by going to Tools | Boards | Boards Manager. In there, search for SAMD boards: To install the board definition, just click on the little Install button next to the board definition. You should now be able to select the Arduino/GenuinoMKR1000 board inside the Arduino IDE: You are now completely set to develop Arduino projects using the Arduino IDE and the MKR1000 board. You can, for example, try to open an example sketch inside the IDE: How it works... The Arduino IDE is the best tool to program a wide range of boards, including the MKR1000 board that we are going to use in this article. We will see that it is a great tool to develop Internet of Things projects with Arduino. As we saw in this recipe, the board manager makes it really easy to use new boards inside the IDE. See also These are really the basics of the Arduino framework that we are going to use in the whole article to develop IoT projects. Options for Internet connectivity with Arduino Most of the boards made by Arduino don't come with Internet connectivity, which is something that we really need to build Internet of Things projects with Arduino. We are now going to review all the options that are available to us with the Arduino platform, and see which one is the best to build IoT projects. How to do it… The first option, that has been available since the advent of the Arduino platform, is to use a shield. 
A shield is basically an extension board that can be placed on top of the Arduino board. There are many shields available for Arduino. Inside the official collection of shields, you will find motor shields, prototyping shields, audio shields, and so on. Some shields will add Internet connectivity to the Arduino boards, for example the Ethernet shield or the Wi-Fi shield. This is a picture of the Ethernet shield: The other option is to use an external component, for example a Wi-Fi chip mounted on a breakout board, and then connect this shield to Arduino. There are many Wi-Fi chips available on the market. For example, Texas Instruments has a chip called the CC3000 that is really easy to connect to Arduino. This is a picture of a breakout board for the CC3000 Wi-Fi chip: Finally, there is the possibility of using one of the few Arduino boards that has an onboard Wi-Fi chip or Ethernet connectivity. The first board of this type introduced by Arduino was the Arduino Yun board. It is a really powerful board, with an onboard Linux machine. However, it is also a bit complex to use compared to other Arduino boards. Then, Arduino introduced the MKR1000 board, which is a board that integrates a powerful ARM Cortex M0+ process and a Wi-Fi chip on the same board, all in the small form factor. Here is a picture of this board: What to choose? All the solutions above would work to build powerful IoT projects using Arduino. However, as we want to easily build those projects and possibly integrate them into projects that are battery-powered, I chose to use the MKR1000 board for all the projects in this article. This board is really simple to use, powerful, and doesn't required any connections to hook it up with a Wi-Fi chip. Therefore, I believe this is the perfect board for IoT projects with Arduino. There's more... Of course, there are other options to connect Arduino boards to the Web. One option that's becoming more and more popular is to use 3G or LTE to connect your Arduino projects to the Web, again using either shields or breakout boards. This solution has the advantage of not requiring an active Internet connection like a Wi-Fi router, and can be used anywhere, for example outdoors. See also Now we have chosen a board that we will use in our IoT projects with Arduino, you can move on to the next recipe to actually learn how to use it. Resources for Article: Further resources on this subject: Building a Cloud Spy Camera and Creating a GPS Tracker [article] Internet Connected Smart Water Meter [article] Getting Started with Arduino [article]
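The recipes in this article stop at choosing the MKR1000, so as a preview of what using its onboard Wi-Fi looks like, here is a minimal sketch based on the WiFi101 library (the library generally used for the MKR1000's Wi-Fi chip, installable from the Arduino Library Manager). The network credentials are placeholders, and a real project would add more robust retry and failure handling:

#include <SPI.h>
#include <WiFi101.h>

// placeholder credentials - replace with your own network
const char ssid[] = "my-network";
const char pass[] = "my-password";

void setup() {
  Serial.begin(9600);
  while (!Serial) { }  // wait for the Serial Monitor on native-USB boards

  // try to join the Wi-Fi network, retrying every 5 seconds
  while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
    Serial.println("Connection failed, retrying...");
    delay(5000);
  }

  Serial.print("Connected, IP address: ");
  Serial.println(WiFi.localIP());
}

void loop() {
  // nothing to do yet - the IoT recipes build on this connection
}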

Camera Calibration

Packt
25 Aug 2014
18 min read
This article by Robert Laganière, author of OpenCV Computer Vision Application Programming Cookbook Second Edition, includes that images are generally produced using a digital camera, which captures a scene by projecting light going through its lens onto an image sensor. The fact that an image is formed by the projection of a 3D scene onto a 2D plane implies the existence of important relationships between a scene and its image and between different images of the same scene. Projective geometry is the tool that is used to describe and characterize, in mathematical terms, the process of image formation. In this article, we will introduce you to some of the fundamental projective relations that exist in multiview imagery and explain how these can be used in computer vision programming. You will learn how matching can be made more accurate through the use of projective constraints and how a mosaic from multiple images can be composited using two-view relations. Before we start the recipe, let's explore the basic concepts related to scene projection and image formation. (For more resources related to this topic, see here.) Image formation Fundamentally, the process used to produce images has not changed since the beginning of photography. The light coming from an observed scene is captured by a camera through a frontal aperture; the captured light rays hit an image plane (or an image sensor) located at the back of the camera. Additionally, a lens is used to concentrate the rays coming from the different scene elements. This process is illustrated by the following figure: Here, do is the distance from the lens to the observed object, di is the distance from the lens to the image plane, and f is the focal length of the lens. These quantities are related by the so-called thin lens equation: In computer vision, this camera model can be simplified in a number of ways. First, we can neglect the effect of the lens by considering that we have a camera with an infinitesimal aperture since, in theory, this does not change the image appearance. (However, by doing so, we ignore the focusing effect by creating an image with an infinite depth of field.) In this case, therefore, only the central ray is considered. Second, since most of the time we have do>>di, we can assume that the image plane is located at the focal distance. Finally, we can note from the geometry of the system that the image on the plane is inverted. We can obtain an identical but upright image by simply positioning the image plane in front of the lens. Obviously, this is not physically feasible, but from a mathematical point of view, this is completely equivalent. This simplified model is often referred to as the pin-hole camera model, and it is represented as follows: From this model, and using the law of similar triangles, we can easily derive the basic projective equation that relates a pictured object with its image: The size (hi) of the image of an object (of height ho) is therefore inversely proportional to its distance (do) from the camera, which is naturally true. In general, this relation describes where a 3D scene point will be projected on the image plane given the geometry of the camera. Calibrating a camera From the introduction of this article, we learned that the essential parameters of a camera under the pin-hole model are its focal length and the size of the image plane (which defines the field of view of the camera). 
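The equations referenced in the image formation discussion above are missing from this copy. For reference, the standard thin lens relation and the resulting pin-hole projection (taking the image plane to sit at the focal distance, so di ≈ f) are:

\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f} \qquad\qquad h_i = \frac{f \, h_o}{d_o}

The second relation is exactly the inverse proportionality between image size and object distance noted above, and it is why the focal length and the size of the image plane are the model's essential parameters.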
Also, since we are dealing with digital images, the number of pixels on the image plane (its resolution) is another important characteristic of a camera. Finally, in order to be able to compute the position of an image's scene point in pixel coordinates, we need one additional piece of information. Considering the line coming from the focal point that is orthogonal to the image plane, we need to know at which pixel position this line pierces the image plane. This point is called the principal point. It might be logical to assume that this principal point is at the center of the image plane, but in practice, this point might be off by a few pixels depending on the precision at which the camera has been manufactured. Camera calibration is the process by which the different camera parameters are obtained. One can obviously use the specifications provided by the camera manufacturer, but for some tasks, such as 3D reconstruction, these specifications are not accurate enough. Camera calibration will proceed by showing known patterns to the camera and analyzing the obtained images. An optimization process will then determine the optimal parameter values that explain the observations. This is a complex process that has been made easy by the availability of OpenCV calibration functions. How to do it... To calibrate a camera, the idea is to show it a set of scene points for which their 3D positions are known. Then, you need to observe where these points project on the image. With the knowledge of a sufficient number of 3D points and associated 2D image points, the exact camera parameters can be inferred from the projective equation. Obviously, for accurate results, we need to observe as many points as possible. One way to achieve this would be to take one picture of a scene with many known 3D points, but in practice, this is rarely feasible. A more convenient way is to take several images of a set of some 3D points from different viewpoints. This approach is simpler but requires you to compute the position of each camera view in addition to the computation of the internal camera parameters, which fortunately is feasible. OpenCV proposes that you use a chessboard pattern to generate the set of 3D scene points required for calibration. This pattern creates points at the corners of each square, and since this pattern is flat, we can freely assume that the board is located at Z=0, with the X and Y axes well-aligned with the grid. In this case, the calibration process simply consists of showing the chessboard pattern to the camera from different viewpoints. Here is one example of a 6x4 calibration pattern image: The good thing is that OpenCV has a function that automatically detects the corners of this chessboard pattern. You simply provide an image and the size of the chessboard used (the number of horizontal and vertical inner corner points). The function will return the position of these chessboard corners on the image. If the function fails to find the pattern, then it simply returns false: // output vectors of image points std::vector<cv::Point2f> imageCorners; // number of inner corners on the chessboard cv::Size boardSize(6,4); // Get the chessboard corners bool found = cv::findChessboardCorners(image, boardSize, imageCorners); The output parameter, imageCorners, will simply contain the pixel coordinates of the detected inner corners of the shown pattern. Note that this function accepts additional parameters if you needs to tune the algorithm, which are not discussed here. 
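The fragments above assume an image has already been loaded. Put together as a minimal self-contained sketch (the filename is a placeholder and the 6x4 board size should match your own pattern), corner detection looks like this:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
  // load one calibration image in grayscale (placeholder filename)
  cv::Mat image = cv::imread("chessboard01.jpg", 0);
  if (image.empty()) {
    std::cerr << "Could not open the image file" << std::endl;
    return 1;
  }
  // number of inner corners on the chessboard, as in the pattern above
  cv::Size boardSize(6, 4);
  std::vector<cv::Point2f> imageCorners;
  bool found = cv::findChessboardCorners(image, boardSize, imageCorners);
  std::cout << (found ? "Pattern found, " : "Pattern not found, ")
            << imageCorners.size() << " corners detected" << std::endl;
  return 0;
}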
There is also a special function that draws the detected corners on the chessboard image, with lines connecting them in a sequence: //Draw the corners cv::drawChessboardCorners(image, boardSize, imageCorners, found); // corners have been found The following image is obtained: The lines that connect the points show the order in which the points are listed in the vector of detected image points. To perform a calibration, we now need to specify the corresponding 3D points. You can specify these points in the units of your choice (for example, in centimeters or in inches); however, the simplest is to assume that each square represents one unit. In that case, the coordinates of the first point would be (0,0,0) (assuming that the board is located at a depth of Z=0), the coordinates of the second point would be (1,0,0), and so on, the last point being located at (5,3,0). There are a total of 24 points in this pattern, which is too small to obtain an accurate calibration. To get more points, you need to show more images of the same calibration pattern from various points of view. To do so, you can either move the pattern in front of the camera or move the camera around the board; from a mathematical point of view, this is completely equivalent. The OpenCV calibration function assumes that the reference frame is fixed on the calibration pattern and will calculate the rotation and translation of the camera with respect to the reference frame. Let's now encapsulate the calibration process in a CameraCalibrator class. The attributes of this class are as follows: class CameraCalibrator { // input points: // the points in world coordinates std::vector<std::vector<cv::Point3f>> objectPoints; // the point positions in pixels std::vector<std::vector<cv::Point2f>> imagePoints; // output Matrices cv::Mat cameraMatrix; cv::Mat distCoeffs; // flag to specify how calibration is done int flag; Note that the input vectors of the scene and image points are in fact made of std::vector of point instances; each vector element is a vector of the points from one view. 
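The addChessboardPoints method shown next stores each successful view through a small addPoints helper that is not reproduced in this excerpt. Presumably it does little more than append one view's correspondences to the two member vectors, along these lines:

// assumption: a plausible implementation of the helper used below,
// not the book's actual code
void CameraCalibrator::addPoints(
    const std::vector<cv::Point2f>& imageCorners,
    const std::vector<cv::Point3f>& objectCorners) {
  // 2D image points from one view
  imagePoints.push_back(imageCorners);
  // corresponding 3D scene points
  objectPoints.push_back(objectCorners);
}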
Here, we decided to add the calibration points by specifying a vector of the chessboard image filename as input: // Open chessboard images and extract corner points int CameraCalibrator::addChessboardPoints( const std::vector<std::string>& filelist, cv::Size & boardSize) { // the points on the chessboard std::vector<cv::Point2f> imageCorners; std::vector<cv::Point3f> objectCorners; // 3D Scene Points: // Initialize the chessboard corners // in the chessboard reference frame // The corners are at 3D location (X,Y,Z)= (i,j,0) for (int i=0; i<boardSize.height; i++) { for (int j=0; j<boardSize.width; j++) { objectCorners.push_back(cv::Point3f(i, j, 0.0f)); } } // 2D Image points: cv::Mat image; // to contain chessboard image int successes = 0; // for all viewpoints for (int i=0; i<filelist.size(); i++) { // Open the image image = cv::imread(filelist[i],0); // Get the chessboard corners bool found = cv::findChessboardCorners( image, boardSize, imageCorners); // Get subpixel accuracy on the corners cv::cornerSubPix(image, imageCorners, cv::Size(5,5), cv::Size(-1,-1), cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS, 30, // max number of iterations 0.1)); // min accuracy //If we have a good board, add it to our data if (imageCorners.size() == boardSize.area()) { // Add image and scene points from one view addPoints(imageCorners, objectCorners); successes++; } } return successes; } The first loop inputs the 3D coordinates of the chessboard, and the corresponding image points are the ones provided by the cv::findChessboardCorners function. This is done for all the available viewpoints. Moreover, in order to obtain a more accurate image point location, the cv::cornerSubPix function can be used, and as the name suggests, the image points will then be localized at a subpixel accuracy. The termination criterion that is specified by the cv::TermCriteria object defines the maximum number of iterations and the minimum accuracy in subpixel coordinates. The first of these two conditions that is reached will stop the corner refinement process. When a set of chessboard corners have been successfully detected, these points are added to our vectors of the image and scene points using our addPoints method. Once a sufficient number of chessboard images have been processed (and consequently, a large number of 3D scene point / 2D image point correspondences are available), we can initiate the computation of the calibration parameters as follows: // Calibrate the camera // returns the re-projection error double CameraCalibrator::calibrate(cv::Size &imageSize) { //Output rotations and translations std::vector<cv::Mat> rvecs, tvecs; // start calibration return calibrateCamera(objectPoints, // the 3D points imagePoints, // the image points imageSize, // image size cameraMatrix, // output camera matrix distCoeffs, // output distortion matrix rvecs, tvecs, // Rs, Ts flag); // set options } In practice, 10 to 20 chessboard images are sufficient, but these must be taken from different viewpoints at different depths. The two important outputs of this function are the camera matrix and the distortion parameters. These will be described in the next section. How it works... In order to explain the result of the calibration, we need to go back to the figure in the introduction, which describes the pin-hole camera model. More specifically, we want to demonstrate the relationship between a point in 3D at the position (X,Y,Z) and its image (x,y) on a camera specified in pixel coordinates. 
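Putting the class to work then takes only a few lines inside a main() function that has the class above available (plus <iostream> and <string>). The image filenames and the image size are placeholders to adapt to your own captures:

// collect the calibration images (placeholder filenames)
std::vector<std::string> filelist;
for (int i = 1; i <= 20; i++)
  filelist.push_back("chessboard" + std::to_string(i) + ".jpg");

CameraCalibrator calibrator;
cv::Size boardSize(6, 4);      // inner corners of the printed pattern
cv::Size imageSize(640, 480);  // must match the pictures actually used

int goodViews = calibrator.addChessboardPoints(filelist, boardSize);
double error = calibrator.calibrate(imageSize);
std::cout << goodViews << " usable views, re-projection error: "
          << error << std::endl;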
Let's redraw this figure by adding a reference frame that we position at the center of the projection as seen here: Note that the y axis is pointing downward to get a coordinate system compatible with the usual convention that places the image origin at the upper-left corner. We learned previously that the point (X,Y,Z) will be projected onto the image plane at (fX/Z,fY/Z). Now, if we want to translate this coordinate into pixels, we need to divide the 2D image position by the pixel's width (px) and height (py), respectively. Note that by dividing the focal length given in world units (generally given in millimeters) by px, we obtain the focal length expressed in (horizontal) pixels. Let's then define this term as fx. Similarly, fy =f/py is defined as the focal length expressed in vertical pixel units. Therefore, the complete projective equation is as follows: Recall that (u0,v0) is the principal point that is added to the result in order to move the origin to the upper-left corner of the image. These equations can be rewritten in the matrix form through the introduction of homogeneous coordinates, in which 2D points are represented by 3-vectors and 3D points are represented by 4-vectors (the extra coordinate is simply an arbitrary scale factor, S, that needs to be removed when a 2D coordinate needs to be extracted from a homogeneous 3-vector). Here is the rewritten projective equation: The second matrix is a simple projection matrix. The first matrix includes all of the camera parameters, which are called the intrinsic parameters of the camera. This 3x3 matrix is one of the output matrices returned by the cv::calibrateCamera function. There is also a function called cv::calibrationMatrixValues that returns the value of the intrinsic parameters given by a calibration matrix. More generally, when the reference frame is not at the projection center of the camera, we will need to add a rotation vector (a 3x3 matrix) and a translation vector (a 3x1 matrix). These two matrices describe the rigid transformation that must be applied to the 3D points in order to bring them back to the camera reference frame. Therefore, we can rewrite the projection equation in its most general form: Remember that in our calibration example, the reference frame was placed on the chessboard. Therefore, there is a rigid transformation (made of a rotation component represented by the matrix entries r1 to r9 and a translation represented by t1, t2, and t3) that must be computed for each view. These are in the output parameter list of the cv::calibrateCamera function. The rotation and translation components are often called the extrinsic parameters of the calibration, and they are different for each view. The intrinsic parameters remain constant for a given camera/lens system. The intrinsic parameters of our test camera obtained from a calibration based on 20 chessboard images are fx=167, fy=178, u0=156, and v0=119. These results are obtained by cv::calibrateCamera through an optimization process aimed at finding the intrinsic and extrinsic parameters that will minimize the difference between the predicted image point position, as computed from the projection of the 3D scene points, and the actual image point position, as observed on the image. The sum of this difference for all the points specified during the calibration is called the re-projection error. Let's now turn our attention to the distortion parameters. So far, we have mentioned that under the pin-hole camera model, we can neglect the effect of the lens. 
However, this is only possible if the lens that is used to capture an image does not introduce important optical distortions. Unfortunately, this is not the case with lower quality lenses or with lenses that have a very short focal length. You may have already noted that the chessboard pattern shown in the image that we used for our example is clearly distorted—the edges of the rectangular board are curved in the image. Also, note that this distortion becomes more important as we move away from the center of the image. This is a typical distortion observed with a fish-eye lens, and it is called radial distortion. The lenses used in common digital cameras usually do not exhibit such a high degree of distortion, but in the case of the lens used here, these distortions certainly cannot be ignored. It is possible to compensate for these deformations by introducing an appropriate distortion model. The idea is to represent the distortions induced by a lens by a set of mathematical equations. Once established, these equations can then be reverted in order to undo the distortions visible on the image. Fortunately, the exact parameters of the transformation that will correct the distortions can be obtained together with the other camera parameters during the calibration phase. Once this is done, any image from the newly calibrated camera will be undistorted. Therefore, we have added an additional method to our calibration class: // remove distortion in an image (after calibration) cv::Mat CameraCalibrator::remap(const cv::Mat &image) { cv::Mat undistorted; if (mustInitUndistort) { // called once per calibration cv::initUndistortRectifyMap( cameraMatrix, // computed camera matrix distCoeffs, // computed distortion matrix cv::Mat(), // optional rectification (none) cv::Mat(), // camera matrix to generate undistorted image.size(), // size of undistorted CV_32FC1, // type of output map map1, map2); // the x and y mapping functions mustInitUndistort= false; } // Apply mapping functions cv::remap(image, undistorted, map1, map2, cv::INTER_LINEAR); // interpolation type return undistorted; } Running this code results in the following image: As you can see, once the image is undistorted, we obtain a regular perspective image. To correct the distortion, OpenCV uses a polynomial function that is applied to the image points in order to move them at their undistorted position. By default, five coefficients are used; a model made of eight coefficients is also available. Once these coefficients are obtained, it is possible to compute two cv::Mat mapping functions (one for the x coordinate and one for the y coordinate) that will give the new undistorted position of an image point on a distorted image. This is computed by the cv::initUndistortRectifyMap function, and the cv::remap function remaps all the points of an input image to a new image. Note that because of the nonlinear transformation, some pixels of the input image now fall outside the boundary of the output image. You can expand the size of the output image to compensate for this loss of pixels, but you will now obtain output pixels that have no values in the input image (they will then be displayed as black pixels). There's more... More options are available when it comes to camera calibration. Calibration with known intrinsic parameters When a good estimate of the camera's intrinsic parameters is known, it could be advantageous to input them in the cv::calibrateCamera function. They will then be used as initial values in the optimization process. 
To do so, you just need to add the CV_CALIB_USE_INTRINSIC_GUESS flag and input these values in the calibration matrix parameter. It is also possible to impose a fixed value for the principal point (CV_CALIB_FIX_PRINCIPAL_POINT), which can often be assumed to be the central pixel. You can also impose a fixed ratio for the focal lengths fx and fy (CV_CALIB_FIX_ASPECT_RATIO); in this case, the pixels are assumed to be square. Using a grid of circles for calibration Instead of the usual chessboard pattern, OpenCV also offers the possibility to calibrate a camera by using a grid of circles. In this case, the centers of the circles are used as calibration points. The corresponding function is very similar to the one we used to locate the chessboard corners: cv::Size boardSize(7,7); std::vector<cv::Point2f> centers; bool found = cv::findChessboardCorners( image, boardSize, centers); See also The article A flexible new technique for camera calibration by Z. Zhang, in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, 2000, is a classic paper on the problem of camera calibration. Summary In this article, we explored how to calibrate a camera in order to obtain its intrinsic and distortion parameters, and how to use them to undistort images. Resources for Article: Further resources on this subject: Creating an Application from Scratch [Article] Wrapping OpenCV [Article] New functionality in OpenCV 3.0 [Article]

Buttons, Menus, and Toolbars in Ext JS

Packt
20 Oct 2009
5 min read
The unsung heroes of every application are the simple things like buttons, menus, and toolbars. In this article by Shea Frederick, Steve 'Cutter' Blades, and Colin Ramsay, we will cover how to add these items to our applications. Our example will contain a few different types of buttons, both with and without menus. A button can simply be an icon, or text, or both. Toolbars also have some mechanical elements such as spacers and dividers that can help to organize the buttons on your toolbars items. We will also cover how to make these elements react to user interaction. A toolbar for every occasion Just about every Ext component—panels, windows, grids can accept a toolbar on either the top or the bottom. The option is also available to render the toolbar standalone into any DOM element in our document. The toolbar is an extremely flexible and useful component that will no doubt be used in every application. Ext.Toolbar: The main container for the buttons Ext.Button: The primary handler for button creation and interaction Ext.menu: A menu Toolbars Our first toolbar is going to be rendered standalone in the body of our document. We will add one of each of the main button types, so we can experiment with each: Button—tbbutton: This is the standard button that we are all familiar with. Split Button—tbsplit: A split button is where you have a default button action and an optional menu. These are used in cases where you need to have many options in the same category as your button, of which there is a most commonly used default option. Menu—tbbutton+menu: A menu is just a button with the menu config filled in with options. Ext.onReady(function(){ new Ext.Toolbar({ renderTo: document.body, items: [{ xtype: 'tbbutton', text: 'Button' },{ xtype: 'tbbutton', text: 'Menu Button', menu: [{ text: 'Better' },{ text: 'Good' },{ text: 'Best' }] },{ xtype: 'tbsplit', text: 'Split Button', menu: [{ text: 'Item One' },{ text: 'Item Two' },{ text: 'Item Three' }] }] });}); As usual, everything is inside our onReady event handler. The items config holds all of our toolbars elements—I say elements and not buttons because the toolbar can accept many different types of Ext components including form fields—which we will be implementing later on in this article. The default xtype for each element in the items config is tbbutton. We can leave out the xtype config element if tbbutton is the type we want, but I like to include it just to help me keep track. The button Creating a button is fairly straightforward; the main config option is the text that is displayed on the button. We can also add an icon to be used alongside the text if we want to. Here is a stripped-down button: { xtype: 'tbbutton', text: 'Button'} Menu A menu is just a button with the menu config populated—it's that simple. The menu items work along the same principles as the buttons. They can have icons, classes, and handlers assigned to them. The menu items could also be grouped together to form a set of option buttons, but first let's create a standard menu. This is the config for a typical menu config: { xtype: 'tbbutton', text: 'Button', menu: [{ text: 'Better' },{ text: 'Good' },{ text: 'Best' }]} As we can see, once the menu array config is populated, the menu comes to life. 
To group these menu items together, we would need to set the group config and the boolean checked value for each item: menu: [{ text: 'Better', checked: true, group: 'quality'}, { text: 'Good', checked: false, group: 'quality'}, { text: 'Best', checked: false, group: 'quality'}] Split button The split button sounds like a complex component, but it's just like a button and a menu combined, with a slight twist. By using this type of button, you get to use the functionality of a button while adding the option to select an item from the attached menu. Clicking the left portion of the button that contains the text triggers the button action. However, clicking the right side of the button, which contains a small down arrow, triggers the menu. { xtype: 'tbsplit', text: 'Split Button', menu: [{ text: 'Item One' },{ text: 'Item Two' },{ text: 'Item Three' }]} Toolbar item alignment, dividers, and spacers By default, every toolbar aligns elements to the leftmost side. There is no alignment config for a toolbar, so if we want to align all of the toolbar buttons to the rightmost side, we need to add a fill as the first item in the toolbar. If we want to have items split up between both the left and right sides, we can also use a fill: { xtype: 'tbfill'} Pop this little guy in a tool-bar wherever you want to add space and he will push items on either side of the fill to the ends of the tool bar, as shown below: We also have elements that can add space or vertical dividers, like the one used between the Menu Button and the Split Button. The spacer adds a few pixels of empty space that can be used to space out buttons, or move elements away from the edge of the toolbar: { xtype: 'tbspacer'} A divider can be added in the same way: { xtype: 'tbseparator'} Shortcuts Ext has many shortcuts that can be used to make coding faster. Shortcuts are a character or two that can be used in place of a configuration object. For example, consider the standard toolbar filler configuration: { xtype: 'tbfill'} The shortcut for a toolbar filler is a hyphen and a greater than symbol: '->' Not all of these shortcuts are documented. So be adventurous, poke around the source code, and see what you can find. Here is a list of the commonly-used shortcuts:
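The full table from the original is not reproduced in this excerpt, but the snippet below sketches the shortcuts most commonly seen in Ext JS 2.x/3.x toolbars—'-' for a separator, ' ' for a spacer, '->' for a fill, and a plain string for a text item—together with a simple click handler to show how a button can react to user interaction. Treat the exact shortcut set as version-dependent rather than definitive:

Ext.onReady(function(){
    new Ext.Toolbar({
        renderTo: document.body,
        items: [{
            text: 'Save',
            // react to user interaction with a handler function
            handler: function(button) {
                Ext.Msg.alert('Toolbar', 'You clicked ' + button.text);
            }
        },
        '-',                    // shortcut for xtype: 'tbseparator'
        ' ',                    // shortcut for xtype: 'tbspacer'
        '->',                   // shortcut for xtype: 'tbfill'
        'Right aligned text'    // a plain string becomes a toolbar text item
        ]
    });
});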
Installing Apache Karaf

Packt
31 Oct 2013
7 min read
Before Apache Karaf can provide you with an OSGi-based container runtime, we'll have to set up our environment first. The process is quick, requiring a minimum of normal Java usage integration work. In this article we'll review: The prerequisites for Apache Karaf Obtaining Apache Karaf Installing Apache Karaf and running it for the first time Prerequisites As a lightweight container, Apache Karaf has sparse system requirements. You will need to check that you have all of the below specifications met or exceeded: Operating System: Apache Karaf requires recent versions of Windows, AIX, Solaris, HP-UX, and various Linux distributions (RedHat, Suse, Ubuntu, and so on). Disk space: It requires at least 20 MB free disk space. You will require more free space as additional resources are provisioned into the container. As a rule of thumb, you should plan to allocate 100 to 1000 MB of disk space for logging, bundle cache, and repository. Memory: At least 128 MB memory is required; however, more than 2 GB is recommended. Java Runtime Environment (JRE): The runtime environments such as JRE 1.6 or JRE 1.7 are required. The location of the JRE should be made available via environment setting JAVA_HOME. At the time of writing, Java 1.6 is "end of life". For our demos we'll use Apache Maven 3.0.x and Java SDK 1.7.x; these tools should be obtained for future use. However, they will not be necessary to operate the base Karaf installation. Before attempting to build demos, please set the MAVEN_HOME environment variable to point towards your Apache Maven distribution. After verifying you have the above prerequisite hardware, operating system, JVM, and other software packages, you will have to set up your environment variables for JAVA_HOME and MAVEN_HOME. Both of these will be added to the system PATH. Setting up JAVA_HOME Environment Variable Apache Karaf honors the setting of JAVA_HOME in the system environment; if this is not set, it will pick up and use Java from PATH. For users unfamiliar with setting environment variables, the following batch setup script will set up your windows environment: @echo off REM execute setup.bat to setup environment variables. set JAVA_HOME=C:Program FilesJavajdk1.6.0_31 set MAVEN_HOME=c:x1apache-maven-3.0.4 set PATH=%JAVA_HOME%bin;%MAVEN_HOME%bin;%PATH%echo %PATH% The script creates and sets the JAVA_HOME and MAVEN_HOME variables to point to their local installation directories, and then adds their values to the system PATH. The initial echo off directive reduces console output as the script executes; the final echo command prints the value of PATH. Managing Windows System Environment Variables Windows environment settings can be managed via the Systems Properties control panel. Access to these controls varies according to the Windows release. Conversely, in a Unix-like environment, a script similar to the following one will set up your environment: # execute setup.sh to setup environment variables. JAVA_HOME=/path/to/jdk1.6.0_31 MAVEN_HOME=/path/to/apache-maven-3.0.4 PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$PATH export PATH JAVA_HOME MAVEN_HOME echo $PATH The first two directives create and set the JAVA_HOME and MAVEN_HOME environment variables, respectively. These values are added to the PATH setting, and then made available to the environment via the export command. Obtaining Apache Karaf distribution As an Apache open source project, Apache Karaf is made available in both binary and source distributions. 
The binary distribution comes in a Linux-friendly, GNU-compressed archive and in Windows ZIP format. Your selection of distribution kit will affect which set of scripts is available in Karaf's bin folder. So, if you're using Windows, select the ZIP file; on Unix-like systems choose the tar.gz file. Apache Karaf distributions may be obtained from http://karaf.apache.org/index/community/download.html. The following screenshot shows this link: The primary download site for Apache Karaf provides a list of available mirror sites; it is advisable that you select a server nearer to your location for faster downloads. For the purposes of this article, we will be focusing on Apache Karaf 2.3.x with notes upon the 3.0.x release series. Apache Karaf 2.3.x versus 3.0.x series The major difference between the Apache Karaf 2.3 and 3.0 lines is the core OSGi specification supported. Karaf 2.3 utilizes OSGi rev4.3, while Karaf 3.0 uses rev5.0. Karaf 3 also introduces several command name changes. There are a multitude of other internal differences between the code bases, and wherever appropriate, we'll highlight those changes that impact users throughout this text. Installing Apache Karaf The installation of Apache Karaf only requires you to extract the tar.gz or .zip file in your desired target folder destination. The following command is used in Windows: unzip apache-karaf-.zip The following command is used in Unix: tar -zxf apache-karaf-.tar.gz After extraction, the following folder structure will be present: The LICENSE, NOTICE, README, and RELEASE-NOTES files are plain text artifacts contained in each Karaf distribution. The RELEASE-NOTES file is of particular interest, as upon each major and minor release of Karaf, this file is updated with a list of changes. The bin folder contains the Karaf scripts for the interactive shell (Karaf), starting and stopping the background Karaf service, a client for connecting to running Karaf instances, and additional utilities. The data folder is home to Karaf's logfiles, bundle cache, and various other persistent data. The demos folder contains an assortment of sample projects for Karaf. It is advisable that new users explore these examples to gain familiarity with the system. For the purposes of this book we strived to create new sample projects to augment those existing in the distribution. The instances folder will be created when you use Karaf child instances. It stores the child instance folders and files. The deploy folder is monitored for hot deployment of artifacts into the running container. The etc folder contains the base configuration files of Karaf; it is also monitored for dynamic configuration updates to the configuration admin service in the running container. An HTML and PDF format copy of the Karaf manual is included in each kit. The lib folder contains the core libraries required for Karaf to boot upon a JVM. The system folder contains a simple repository of dependencies Karaf requires for operating at runtime. This repository has each library jar saved under a Maven-style directory structure, consisting of the library Maven group ID, artifact ID, version, artifact ID-version, any classifier, and extension. First boot!
After extracting the Apache Karaf distribution kit and setting our environment variables, we are now ready to start up the container. The container can be started by invoking the Karaf script provided in the bin directory: On Windows, use the following command: bin\karaf.bat On Unix, use the following command: ./bin/karaf The following image shows the first boot screen: Congratulations, you have successfully booted Apache Karaf! To stop the container, issue the following command in the console: karaf@root> shutdown -f The inclusion of the -f or --force flag to the shutdown command instructs Karaf to skip asking for confirmation of container shutdown. Pressing Ctrl + D will shut down Karaf when you are on the shell; however, if you are connected remotely (using SSH), this action will just log off the SSH session, it won't shut down Karaf. Summary We have discovered the prerequisites for installing Karaf, which distribution to obtain, how to install the container, and finally how to start it. Resources for Article: Further resources on this subject: Apache Felix Gogo [Article] WordPress 3 Security: Apache Modules [Article] Configuring Apache and Nginx [Article]

Integrating a D3.js visualization into a simple AngularJS application

Packt
27 Apr 2015
19 min read
In this article by Christoph Körner, author of the book Data Visualization with D3 and AngularJS, we will apply the acquired knowledge to integrate a D3.js visualization into a simple AngularJS application. First, we will set up an AngularJS template that serves as a boilerplate for the examples and the application. We will see a typical directory structure for an AngularJS project and initialize a controller. Similar to the previous example, the controller will generate random data that we want to display in an autoupdating chart. Next, we will wrap D3.js in a factory and create a directive for the visualization. You will learn how to isolate the components from each other. We will create a simple AngularJS directive and write a custom compile function to create and update the chart. (For more resources related to this topic, see here.) Setting up an AngularJS application To get started with this article, I assume that you feel comfortable with the main concepts of AngularJS: the application structure, controllers, directives, services, dependency injection, and scopes. I will use these concepts without introducing them in great detail, so if you do not know about one of these topics, first try an intermediate AngularJS tutorial. Organizing the directory To begin with, we will create a simple AngularJS boilerplate for the examples and the visualization application. We will use this boilerplate during the development of the sample application. Let's create a project root directory that contains the following files and folders: bower_components/: This directory contains all third-party components src/: This directory contains all source files src/app.js: This file contains source of the application src/app.css: CSS layout of the application test/: This directory contains all test files (test/config/ contains all test configurations, test/spec/ contains all unit tests, and test/e2e/ contains all integration tests) index.html: This is the starting point of the application Installing AngularJS In this article, we use the AngularJS version 1.3.14, but different patch versions (~1.3.0) should also work fine with the examples. Let's first install AngularJS with the Bower package manager. Therefore, we execute the following command in the root directory of the project: bower install angular#1.3.14 Now, AngularJS is downloaded and installed to the bower_components/ directory. If you don't want to use Bower, you can also simply download the source files from the AngularJS website and put them in a libs/ directory. Note that—if you develop large AngularJS applications—you most likely want to create a separate bower.json file and keep track of all your third-party dependencies. Bootstrapping the index file We can move on to the next step and code the index.html file that serves as a starting point for the application and all examples of this section. We need to include the JavaScript application files and the corresponding CSS layouts, the same for the chart component. Then, we need to initialize AngularJS by placing an ng-app attribute to the html tag; this will create the root scope of the application. 
Here, we will call the AngularJS application myApp, as shown in the following code: <html ng-app="myApp"> <head>    <!-- Include 3rd party libraries -->    <script src="bower_components/d3/d3.js" charset="UTF-   8"></script>    <script src="bower_components/angular/angular.js"     charset="UTF-8"></script>      <!-- Include the application files -->    <script src="src/app.js"></script>    <link href="src/app.css" rel="stylesheet">      <!-- Include the files of the chart component -->    <script src="src/chart.js"></script>    <link href="src/chart.css" rel="stylesheet">   </head> <body>    <!-- AngularJS example go here --> </body> </html> For all the examples in this section, I will use the exact same setup as the preceding code. I will only change the body of the HTML page or the JavaScript or CSS sources of the application. I will indicate to which file the code belongs to with a comment for each code snippet. If you are not using Bower and previously downloaded D3.js and AngularJS in a libs/ directory, refer to this directory when including the JavaScript files. Adding a module and a controller Next, we initialize the AngularJS module in the app.js file and create a main controller for the application. The controller should create random data (that represent some simple logs) in a fixed interval. Let's generate some random number of visitors every second and store all data points on the scope as follows: /* src/app.js */ // Application Module angular.module('myApp', [])   // Main application controller .controller('MainCtrl', ['$scope', '$interval', function ($scope, $interval) {      var time = new Date('2014-01-01 00:00:00 +0100');      // Random data point generator    var randPoint = function() {      var rand = Math.random;      return { time: time.toString(), visitors: rand()*100 };    }      // We store a list of logs    $scope.logs = [ randPoint() ];      $interval(function() {     time.setSeconds(time.getSeconds() + 1);      $scope.logs.push(randPoint());    }, 1000); }]); In the preceding example, we define an array of logs on the scope that we initialize with a random point. Every second, we will push a new random point to the logs. The points contain a number of visitors and a timestamp—starting with the date 2014-01-01 00:00:00 (timezone GMT+01) and counting up a second on each iteration. I want to keep it simple for now; therefore, we will use just a very basic example of random access log entries. Consider to use the cleaner controller as syntax for larger AngularJS applications because it makes the scopes in HTML templates explicit! However, for compatibility reasons, I will use the standard controller and $scope notation. Integrating D3.js into AngularJS We bootstrapped a simple AngularJS application in the previous section. Now, the goal is to integrate a D3.js component seamlessly into an AngularJS application—in an Angular way. This means that we have to design the AngularJS application and the visualization component such that the modules are fully encapsulated and reusable. In order to do so, we will use a separation on different levels: Code of different components goes into different files Code of the visualization library goes into a separate module Inside a module, we divide logics into controllers, services, and directives Using this clear separation allows you to keep files and modules organized and clean. If at anytime we want to replace the D3.js backend with a canvas pixel graphic, we can just implement it without interfering with the main application. 
This means that we want to use a new module of the visualization component and dependency injection. These modules enable us to have full control of the separate visualization component without touching the main application and they will make the component maintainable, reusable, and testable. Organizing the directory First, we add the new files for the visualization component to the project: src/: This is the default directory to store all the file components for the project src/chart.js: This is the JS source of the chart component src/chart.css: This is the CSS layout for the chart component test/test/config/: This directory contains all test configurations test/spec/test/spec/chart.spec.js: This file contains the unit tests of the chart component test/e2e/chart.e2e.js: This file contains the integration tests of the chart component If you develop large AngularJS applications, this is probably not the folder structure that you are aiming for. Especially in bigger applications, you will most likely want to have components in separate folders and directives and services in separate files. Then, we will encapsulate the visualization from the main application and create the new myChart module for it. This will make it possible to inject the visualization component or parts of it—for example just the chart directive—to the main application. Wrapping D3.js In this module, we will wrap D3.js—which is available via the global d3 variable—in a service; actually, we will use a factory to just return the reference to the d3 variable. This enables us to pass D3.js as a dependency inside the newly created module wherever we need it. The advantage of doing so is that the injectable d3 component—or some parts of it—can be mocked for testing easily. Let's assume we are loading data from a remote resource and do not want to wait for the time to load the resource every time we test the component. Then, the fact that we can mock and override functions without having to modify anything within the component will become very handy. Another great advantage will be defining custom localization configurations directly in the factory. This will guarantee that we have the proper localization wherever we use D3.js in the component. Moreover, in every component, we use the injected d3 variable in a private scope of a function and not in the global scope. This is absolutely necessary for clean and encapsulated components; we should never use any variables from global scope within an AngularJS component. Now, let's create a second module that stores all the visualization-specific code dependent on D3.js. Thus, we want to create an injectable factory for D3.js, as shown in the following code: /* src/chart.js */ // Chart Module   angular.module('myChart', [])   // D3 Factory .factory('d3', function() {   /* We could declare locals or other D3.js      specific configurations here. */   return d3; }); In the preceding example, we returned d3 without modifying it from the global scope. We can also define custom D3.js specific configurations here (such as locals and formatters). We can go one step further and load the complete D3.js code inside this factory so that d3 will not be available in the global scope at all. However, we don't use this approach here to keep things as simple and understandable as possible. We need to make this module or parts of it available to the main application. 
In AngularJS, we can do this by injecting the myChart module into the myApp application as follows: /* src/app.js */   angular.module('myApp', ['myChart']); Usually, we will just inject the directives and services of the visualization module that we want to use in the application, not the whole module. However, for the start and to access all parts of the visualization, we will leave it like this. We can use the components of the chart module now on the AngularJS application by injecting them into the controllers, services, and directives. The boilerplate—with a simple chart.js and chart.css file—is now ready. We can start to design the chart directive. A chart directive Next, we want to create a reusable and testable chart directive. The first question that comes into one's mind is where to put which functionality? Should we create a svg element as parent for the directive or a div element? Should we draw a data point as a circle in svg and use ng-repeat to replicate these points in the chart? Or should we better create and modify all data points with D3.js? I will answer all these question in the following sections. A directive for SVG As a general rule, we can say that different concepts should be encapsulated so that they can be replaced anytime by a new technology. Hence, we will use AngularJS with an element directive as a parent element for the visualization. We will bind the data and the options of the chart to the private scope of the directive. In the directive itself, we will create the complete chart including the parent svg container, the axis, and all data points using D3.js. Let's first add a simple directive for the chart component: /* src/chart.js */ …   // Scatter Chart Directive .directive('myScatterChart', ["d3", function(d3){      return {      restrict: 'E',      scope: {        },      compile: function( element, attrs, transclude ) {                   // Create a SVG root element        var svg = d3.select(element[0]).append('svg');          // Return the link function        return function(scope, element, attrs) { };      }    }; }]); In the preceding example, we first inject d3 to the directive by passing it as an argument to the caller function. Then, we return a directive as an element with a private scope. Next, we define a custom compile function that returns the link function of the directive. This is important because we need to create the svg container for the visualization during the compilation of the directive. Then, during the link phase of the directive, we need to draw the visualization. Let's try to define some of these directives and look at the generated output. We define three directives in the index.html file, as shown in the following code: <!-- index.html --> <div ng-controller="MainCtrl">   <!-- We can use the visualization directives here --> <!-- The first chart --> <my-scatter-chart class="chart"></my-scatter-chart>   <!-- A second chart --> <my-scatter-chart class="chart"></my-scatter-chart>   <!-- Another chart --> <my-scatter-chart class="chart"></my-scatter-chart>   </div> If we look at the output of the html page in the developer tools, we can see that for each base element of the directive, we created a svg parent element for the visualization: Output of the HTML page In the resulting DOM tree, we can see that three svg elements are appended to the directives. We can now start to draw the chart in these directives. Let's fill these elements with some awesome charts. 
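Before doing that, it is worth noting that the injectable d3 factory we just created can already be covered by a small unit test, for example in the test/spec/chart.spec.js file from the directory layout above. The following Jasmine sketch assumes that angular-mocks and a Jasmine test runner are available (they are not part of the Bower setup shown earlier) and is only meant to illustrate the idea:

/* test/spec/chart.spec.js -- a minimal sketch, assuming angular-mocks and Jasmine */
describe('myChart module', function() {

  var d3;

  // load the module under test
  beforeEach(module('myChart'));

  // inject the d3 factory provided by the module
  beforeEach(inject(function(_d3_) {
    d3 = _d3_;
  }));

  it('exposes the D3.js API through the d3 factory', function() {
    expect(d3).toBeDefined();
    expect(typeof d3.select).toBe('function');
  });
});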
Implementing a custom compile function First, let's add a data attribute to the isolated scope of the directive. This gives us access to the dataset, which we will later pass to the directive in the HTML template. Next, we extend the compile function of the directive to create a g group container for the data points and the axis. We will also add a watcher that checks for changes of the scope data array. Every time the data changes, we call a draw() function that redraws the chart of the directive. Let's get started: /* src/capp..js */ ... // Scatter Chart Directive .directive('myScatterChart', ["d3", function(d3){        // we will soon implement this function    var draw = function(svg, width, height, data){ … };      return {      restrict: 'E',      scope: {        data: '='      },      compile: function( element, attrs, transclude ) {          // Create a SVG root element        var svg = d3.select(element[0]).append('svg');          svg.append('g').attr('class', 'data');        svg.append('g').attr('class', 'x-axis axis');        svg.append('g').attr('class', 'y-axis axis');          // Define the dimensions for the chart        var width = 600, height = 300;          // Return the link function        return function(scope, element, attrs) {            // Watch the data attribute of the scope          scope.$watch('data', function(newVal, oldVal, scope) {              // Update the chart            draw(svg, width, height, scope.data);          }, true);        };      }    }; }]); Now, we implement the draw() function in the beginning of the directive. Drawing charts So far, the chart directive should look like the following code. We will now implement the draw() function, draw axis, and time series data. We start with setting the height and width for the svg element as follows: /* src/chart.js */ ...   // Scatter Chart Directive .directive('myScatterChart', ["d3", function(d3){      function draw(svg, width, height, data) {      svg        .attr('width', width)        .attr('height', height);      // code continues here }      return {      restrict: 'E',      scope: {        data: '='      },      compile: function( element, attrs, transclude ) { ... } }]); Axis, scale, range, and domain We first need to create the scales for the data and then the axis for the chart. The implementation looks very similar to the scatter chart. We want to update the axis with the minimum and maximum values of the dataset; therefore, we also add this code to the draw() function: /* src/chart.js --> myScatterChart --> draw() */   function draw(svg, width, height, data) { ... 
// Define a margin var margin = 30;   // Define x-scale var xScale = d3.time.scale()    .domain([      d3.min(data, function(d) { return d.time; }),      d3.max(data, function(d) { return d.time; })    ])    .range([margin, width-margin]);   // Define x-axis var xAxis = d3.svg.axis()    .scale(xScale)    .orient('top')    .tickFormat(d3.time.format('%S'));   // Define y-scale var yScale = d3.time.scale()    .domain([0, d3.max(data, function(d) { return d.visitors; })])    .range([margin, height-margin]);   // Define y-axis var yAxis = d3.svg.axis()    .scale(yScale)    .orient('left')    .tickFormat(d3.format('f'));   // Draw x-axis svg.select('.x-axis')    .attr("transform", "translate(0, " + margin + ")")    .call(xAxis);   // Draw y-axis svg.select('.y-axis')    .attr("transform", "translate(" + margin + ")")    .call(yAxis); } In the preceding code, we create a timescale for the x-axis and a linear scale for the y-axis and adapt the domain of both axes to match the maximum value of the dataset (we can also use the d3.extent() function to return min and max at the same time). Then, we define the pixel range for our chart area. Next, we create two axes objects with the previously defined scales and specify the tick format of the axis. We want to display the number of seconds that have passed on the x-axis and an integer value of the number of visitors on the y-axis. In the end, we draw the axes by calling the axis generator on the axis selection. Joining the data points Now, we will draw the data points and the axis. We finish the draw() function with this code: /* src/chart.js --> myScatterChart --> draw() */ function draw(svg, width, height, data) { ... // Add new the data points svg.select('.data')    .selectAll('circle').data(data)    .enter()    .append('circle');   // Updated all data points svg.select('.data')    .selectAll('circle').data(data)    .attr('r', 2.5)    .attr('cx', function(d) { return xScale(d.time); })    .attr('cy', function(d) { return yScale(d.visitors); }); } In the preceding code, we first create circle elements for the enter join for the data points where no corresponding circle is found in the Selection. Then, we update the attributes of the center point of all circle elements of the chart. Let's look at the generated output of the application: Output of the chart directive We notice that the axes and the whole chart scales as soon as new data points are added to the chart. In fact, this result looks very similar to the previous example with the main difference that we used a directive to draw this chart. This means that the data of the visualization that belongs to the application is stored and updated in the application itself, whereas the directive is completely decoupled from the data. To achieve a nice output like in the previous figure, we need to add some styles to the cart.css file, as shown in the following code: /* src/chart.css */ .axis path, .axis line {    fill: none;    stroke: #999;    shape-rendering: crispEdges; } .tick {    font: 10px sans-serif; } circle {    fill: steelblue; } We need to disable the filling of the axis and enable crisp edges rendering; this will give the whole visualization a much better look. Summary In this article, you learned how to properly integrate a D3.js component into an AngularJS application—the Angular way. All files, modules, and components should be maintainable, testable, and reusable. 
You learned how to set up an AngularJS application and how to structure the folder structure for the visualization component. We put different responsibilities in different files and modules. Every piece that we can separate from the main application can be reused in another application; the goal is to use as much modularization as possible. As a next step, we created the visualization directive by implementing a custom compile function. This gives us access to the first compilation of the element—where we can append the svg element as a parent for the visualization—and other container elements. Resources for Article: Further resources on this subject: AngularJS Performance [article] An introduction to testing AngularJS directives [article] Our App and Tool Stack [article]

How to build a convolution neural network based malware detector using malware visualization [Tutorial]

Savia Lobo
05 Nov 2018
9 min read
Deep Learning (DL), a subfield of machine learning, arose to help build algorithms that work like the human mind and are inspired by its structure. Information security professionals are also intrigued by such techniques, as they have provided promising results in defending against major cyber threats and attacks. One of the best-suited candidates for the implementation of DL is malware analysis. This tutorial is an excerpt taken from the book, Mastering Machine Learning for Penetration Testing written by Chiheb Chebbi. In this book, you will learn to identify ambiguities, extensive techniques to breach an intelligent system, and much more. In this post, we are going to explore artificial network architectures and learn how to use one of them to help malware analysts and information security professionals to detect and classify malicious code. Before diving into the technical details and the steps for the practical implementation of the DL method, it is essential to learn and discover the other different architectures of artificial neural networks. Convolutional Neural Networks (CNNs) Convolutional Neural Networks (CNNs) are a deep learning approach to tackle the image classification problem, or what we call computer vision problems, because classic computer programs face many challenges and difficulties to identify objects for many reasons, including lighting, viewpoint, deformation, and segmentation. This technique is inspired by how the eye works, especially the visual cortex function algorithm in animals. In CNN are arranged in three-dimensional structures with width, height, and depth as characteristics. In the case of images, the height is the image height, the width is the image width, and the depth is RGB channels. To build a CNN, we need three main types of layer: Convolutional layer: A convolutional operation refers to extracting features from the input image and multiplying the values in the filter with the original pixel values Pooling layer: The pooling operation reduces the dimensionality of each feature map Fully-connected layer: The fully-connected layer is a classic multi-layer perceptrons with a softmax activation function in the output layer To implement a CNN with Python, you can use the following Python script: import numpy from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense from keras.layers import Dropout from keras.layers import Flatten from keras.layers.convolutional import Conv2D from keras.layers.convolutional import MaxPooling2D from keras.utils import np_utils from keras import backend backend.set_image_dim_ordering('th') model = Sequential() model.add(Conv2D(32, (5, 5), input_shape=(1, 28, 28), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.2)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dense(num_classes, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) Recurrent Neural Networks (RNNs) Recurrent Neural Networks (RNNs) are artificial neural networks where we can make use of sequential information, such as sentences. In other words, RNNs perform the same task for every element of a sequence, with the output depending on the previous computations. RNNs are widely used in language modeling and text generation (machine translation, speech recognition, and many other applications). RNNs do not remember things for a long time. 
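Unlike the CNN above, the text does not include a code snippet for recurrent networks. Purely as an illustration, a small recurrent classifier can be sketched in Keras as follows; the vocabulary size, layer widths, and the binary output are arbitrary assumptions rather than values from the original text:

from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN, Dense

# A minimal sketch of a recurrent classifier for integer-encoded sequences.
# vocab_size is an arbitrary, illustrative value.
vocab_size = 10000

model = Sequential()
model.add(Embedding(vocab_size, 32))        # map token ids to 32-dimensional vectors
model.add(SimpleRNN(32))                    # recurrent layer keeps a hidden state over the sequence
model.add(Dense(1, activation='sigmoid'))   # binary output (for example, malicious/benign)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])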
Long Short Term Memory networks Long Short Term Memory (LSTM) solves the short memory issue in recurrent neural networks by building a memory block. This block sometimes is called a memory cell. Hopfield networks Hopfield networks were developed by John Hopfield in 1982. The main goal of Hopfield networks is auto-association and optimization. We have two categories of Hopfield network: discrete and continuous. Boltzmann machine networks Boltzmann machine networks use recurrent structures and they use only locally available information. They were developed by Geoffrey Hinton and Terry Sejnowski in 1985. Also, the goal of a Boltzmann machine is optimizing the solutions. Malware detection with CNNs For this new model, we are going to discover how to build a malware classifier with CNNs. But I bet you are wondering how we can do that while CNNs are taking images as inputs. The answer is really simple, the trick here is converting malware into an image. Is this possible? Yes, it is. Malware visualization is one of many research topics during the past few years. One of the proposed solutions has come from a research study called Malware Images: Visualization and Automatic Classification by Lakshmanan Nataraj from the Vision Research Lab, University of California, Santa Barbara. The following diagram details how to convert malware into an image: The following is an image of the Alueron.gen!J malware: This technique also gives us the ability to visualize malware sections in a detailed way: By solving the issue of how to feed malware machine learning classifiers that use CNNs by images, information security professionals can use the power of CNNs to train models. One of the malware datasets most often used to feed CNNs is the Malimg dataset. This malware dataset contains 9,339 malware samples from 25 different malware families. You can download it from Kaggle (a platform for predictive modeling and analytics competitions) by visiting this link: https://www.kaggle.com/afagarap/malimg-dataset/data. These are the malware families: Allaple.L Allaple.A Yuner.A Lolyda.AA 1 Lolyda.AA 2 Lolyda.AA 3 C2Lop.P C2Lop.gen!G Instant access Swizzor.gen!I Swizzor.gen!E VB.AT Fakerean Alueron.gen!J Malex.gen!J Lolyda.AT Adialer.C Wintrim.BX Dialplatform.B Dontovo.A Obfuscator.AD Agent.FYI Autorun.K Rbot!gen Skintrim.N After converting malware into grayscale images, you can get the following malware representation so you can use them later to feed the machine learning model: The conversion of each malware to a grayscale image can be done using the following Python script: import os import scipy import array filename = '<Malware_File_Name_Here>'; f = open(filename,'rb'); ln = os.path.getsize(filename); width = 256; rem = ln%width; a = array.array("B"); a.fromfile(f,ln-rem); f.close(); g = numpy.reshape(a,(len(a)/width,width)); g = numpy.uint8(g); scipy.misc.imsave('<Malware_File_Name_Here>.png',g); For feature selection, you can extract or use any image characteristics, such as the texture pattern, frequencies in image, intensity, or color features, using different techniques such as Euclidean distance, or mean and standard deviation, to generate later feature vectors. In our case, we can use algorithms such as a color layout descriptor, homogeneous texture descriptor, or global image descriptors (GIST). Let's suppose that we selected the GIST; pyleargist is a great Python library to compute it. 
To install it, use PIP as usual: # pip install pyleargist==1.0.1 As a use case, to compute a GIST, you can use the following Python script: import Image Import leargist image = Image.open('<Image_Name_Here>.png'); New_im = image.resize((64,64)); des = leargist.color_gist(New_im); Feature_Vector = des[0:320]; Here, 320 refers to the first 320 values while we are using grayscale images. Don't forget to save them as NumPy arrays to use them later to train the model. After getting the feature vectors, we can train many different models, including SVM, k-means, and artificial neural networks. One of the useful algorithms is that of the CNN. Once the feature selection and engineering is done, we can build a CNN. For our model, for example, we will build a convolutional network with two convolutional layers, with 32 * 32 inputs. To build the model using Python libraries, we can implement it with the previously installed TensorFlow and utils libraries. So the overall CNN architecture will be as in the following diagram: This CNN architecture is not the only proposal to build the model, but at the moment we are going to use it for the implementation. To build the model and CNN in general, I highly recommend Keras. The required imports are the following: import keras from keras.models import Sequential,Input,Model from keras.layers import Dense, Dropout, Flatten from keras.layers import Conv2D, MaxPooling2D from keras.layers.normalization import BatchNormalization from keras.layers.advanced_activations import LeakyReLU As we discussed before, the grayscale image has pixel values that range from 0 to 255, and we need to feed the net with 32 * 32 * 1 dimension images as a result: train_X = train_X.reshape(-1, 32,32, 1) test_X = test_X.reshape(-1, 32,32, 1) We will train our network with these parameters: batch_size = 64 epochs = 20 num_classes = 25 To build the architecture, with regards to its format, use the following: Malware_Model = Sequential() Malware_Model.add(Conv2D(32, kernel_size=(3,3),activation='linear',input_shape=(32,32,1),padding='same')) Malware_Model.add(LeakyReLU(alpha=0.1)) Malware_model.add(MaxPooling2D(pool_size=(2, 2),padding='same')) Malware_Model.add(Conv2D(64, (3, 3), activation='linear',padding='same')) Malware_Model.add(LeakyReLU(alpha=0.1)) Malware_Model.add(Dense(1024, activation='linear')) Malware_Model.add(LeakyReLU(alpha=0.1)) Malware_Model.add(Dropout(0.4)) Malware_Model.add(Dense(num_classes, activation='softmax')) To compile the model, use the following: Malware_Model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(),metrics=['accuracy']) Fit and train the model: Malware_Model.fit(train_X, train_label, batch_size=batch_size,epochs=epochs,verbose=1,validation_data=(valid_X, valid_label)) As you noticed, we are respecting the flow of training a neural network that was discussed in previous chapters. To evaluate the model, use the following code: Malware_Model.evaluate(test_X, test_Y_one_hot, verbose=0) print('The accuracy of the Test is:', test_eval[1]) Thus, in this post, we discovered how to build malware detectors using different machine learning algorithms, especially using the power of deep learning techniques.  
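One practical note: the training and evaluation calls above assume that the data has already been split into training, validation, and test arrays and that the labels are one-hot encoded, which the original text does not show. A minimal sketch of that preparation is given below; the file names and array shapes are hypothetical placeholders (X is assumed to hold the malware grayscale images resized to 32 x 32, and y the integer family labels 0 to 24):

import numpy as np
from sklearn.model_selection import train_test_split
from keras.utils import np_utils

# Hypothetical placeholders for the saved NumPy arrays
X = np.load('features.npy').reshape(-1, 32, 32, 1)   # assumed shape: (n_samples, 32, 32, 1)
y = np.load('labels.npy')                            # integer labels in the range 0..24

# hold out a test set, then carve a validation set out of the training data
train_X, test_X, train_Y, test_Y = train_test_split(X, y, test_size=0.2, random_state=42)
train_X, valid_X, train_Y, valid_Y = train_test_split(train_X, train_Y, test_size=0.2, random_state=42)

# one-hot encode the 25 malware families
train_label = np_utils.to_categorical(train_Y, 25)
valid_label = np_utils.to_categorical(valid_Y, 25)
test_Y_one_hot = np_utils.to_categorical(test_Y, 25)

# when evaluating the Malware_Model built above, keep the returned scores before printing:
# test_eval = Malware_Model.evaluate(test_X, test_Y_one_hot, verbose=0)
# print('The accuracy of the Test is:', test_eval[1])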
If you've enjoyed reading this post, do check out  Mastering Machine Learning for Penetration Testing to find loopholes and surpass a self-learning security system This AI generated animation can dress like humans using deep reinforcement learning DeepCube: A new deep reinforcement learning approach solves the Rubik’s cube with no human help “Deep meta reinforcement learning will be the future of AI where we will be so close to achieving artificial general intelligence (AGI)”, Sudharsan Ravichandiran
Working with Simple Associations using CakePHP

Packt
24 Oct 2009
5 min read
Database relationship is hard to maintain even for a mid-sized PHP/MySQL application, particularly, when multiple levels of relationships are involved because complicated SQL queries are needed. CakePHP offers a simple yet powerful feature called 'object relational mapping' or ORM to handle database relationships with ease.In CakePHP, relations between the database tables are defined through association—a way to represent the database table relationship inside CakePHP. Once the associations are defined in models according to the table relationships, we are ready to use its wonderful functionalities. Using CakePHP's ORM, we can save, retrieve, and delete related data into and from different database tables with simplicity, in a better way—no need to write complex SQL queries with multiple JOINs anymore! In this article by Ahsanul Bari and Anupom Syam, we will have a deep look at various types of associations and their uses. In particular, the purpose of this article is to learn: How to figure out association types from database table relations How to define different types of associations in CakePHP models How to utilize the association for fetching related model data How to relate associated data while saving There are basically 3 types of relationship that can take place between database tables: one-to-one one-to-many many-to-many The first two of them are simple as they don't require any additional table to relate the tables in relationship. In this article, we will first see how to define associations in models for one-to-one and one-to-many relations. Then we will look at how to retrieve and delete related data from, and save data into, database tables using model associations for these simple associations. Defining One-To-Many Relationship in Models To see how to define a one-to-many relationship in models, we will think of a situation where we need to store information about some authors and their books and the relation between authors and books is one-to-many. This means an author can have multiple books but a book belongs to only one author (which is rather absurd, as in real life scenario a book can also have multiple authors). We are now going to define associations in models for this one-to-many relation, so that our models recognize their relations and can deal with them accordingly. Time for Action: Defining One-To-Many Relation Create a new database and put a fresh copy of CakePHP inside the web root. Name the database whatever you like but rename the cake folder to relationship. Configure the database in the new Cake installation. 
3. Execute the following SQL statements in the database to create a table named authors:

CREATE TABLE `authors` (
    `id` int(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
    `name` varchar(127) NOT NULL,
    `email` varchar(127) NOT NULL,
    `website` varchar(127) NOT NULL
);

4. Create a books table in our database by executing the following SQL commands:

CREATE TABLE `books` (
    `id` int(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
    `isbn` varchar(13) NOT NULL,
    `title` varchar(64) NOT NULL,
    `description` text NOT NULL,
    `author_id` int(11) NOT NULL
);

5. Create the Author model using the following code (/app/models/author.php):

<?php
class Author extends AppModel {
    var $name = 'Author';
    var $hasMany = 'Book';
}
?>

6. Use the following code to create the Book model (/app/models/book.php):

<?php
class Book extends AppModel {
    var $name = 'Book';
    var $belongsTo = 'Author';
}
?>

7. Create a controller for the Author model with the following code (/app/controllers/authors_controller.php):

<?php
class AuthorsController extends AppController {
    var $name = 'Authors';
    var $scaffold;
}
?>

8. Use the following code to create a controller for the Book model (/app/controllers/books_controller.php):

<?php
class BooksController extends AppController {
    var $name = 'Books';
    var $scaffold;
}
?>

9. Now, go to the following URLs and add some test data: http://localhost/relationship/authors/ and http://localhost/relationship/books/

What Just Happened?

We have created two tables, authors and books, for storing author and book information. A foreign key named author_id is added to the books table to establish the one-to-many relation between authors and books. Through this foreign key, an author is related to multiple books, and a book is related to one single author. By Cake convention, the name of a foreign key should be the underscored, singular name of the target model, suffixed with _id.

Once the database tables are created and relations are established between them, we can define associations in models. In both of the model classes, Author and Book, we defined associations to represent the one-to-many relationship between the corresponding two tables. CakePHP provides two types of association, hasMany and belongsTo, to define one-to-many relations in models. These associations are very appropriately named:

As an author 'has many' books, the Author model should have a hasMany association to represent its relation with the Book model.
As a book 'belongs to' one author, the Book model should have a belongsTo association to denote its relation with the Author model.

In the Author model, an association attribute $hasMany is defined with the value Book to inform the model that every author can be related to many books. We also added a $belongsTo attribute in the Book model and set its value to Author to let the Book model know that every book is related to only one author. After defining the associations, two controllers were created for both of these models with scaffolding to see how the associations are working.


Prototyping JavaScript

Packt
23 Oct 2009
7 min read
In this article by Stoyan Stefanov, you'll learn about the prototype property of function objects. Understanding how the prototype works is an important part of learning the JavaScript language. After all, JavaScript is classified as having a prototype-based object model. There's nothing particularly difficult about the prototype, but it is a new concept and as such may sometimes take some time to sink in. It's one of those things in JavaScript (closures are another) that, once you "get" them, seem so obvious and make perfect sense. As with the rest of the article, you're strongly encouraged to type in and play around with the examples; this makes it much easier to learn and remember the concepts. The following topics are discussed in this article:

Every function has a prototype property and it contains an object
Adding properties to the prototype object
Using the properties added to the prototype
The difference between own properties and properties of the prototype
__proto__, the secret link every object keeps to its prototype
Methods such as isPrototypeOf(), hasOwnProperty(), and propertyIsEnumerable()

The prototype Property

The functions in JavaScript are objects and they contain methods and properties. Some of the common methods are apply() and call(), and some of the common properties are length and constructor. Another property of function objects is prototype. If you define a simple function foo(), you can access its properties as you would do with any other object:

>>> function foo(a, b){ return a * b; }
>>> foo.length
2
>>> foo.constructor
Function()

prototype is a property that gets created as soon as you define the function. Its initial value is an empty object:

>>> typeof foo.prototype
"object"

It's as if you added this property yourself, like this:

>>> foo.prototype = {}

You can augment this empty object with properties and methods. They won't have any effect on the foo() function itself; they'll only be used when you use foo() as a constructor.

Adding Methods and Properties Using the Prototype

Constructor functions can be used to create (construct) new objects. The main idea is that inside a function invoked with new you have access to the value this, which contains the object to be returned by the constructor. Augmenting (adding methods and properties to) this object is the way to add functionality to the object being created. Let's take a look at the constructor function Gadget(), which uses this to add two properties and one method to the objects it creates:

function Gadget(name, color) {
    this.name = name;
    this.color = color;
    this.whatAreYou = function(){
        return 'I am a ' + this.color + ' ' + this.name;
    };
}

Adding methods and properties to the prototype property of the constructor function is another way to add functionality to the objects this constructor produces. Let's add two more properties, price and rating, and a getInfo() method.
Since prototype contains an object, you can just keep adding to it like this:

Gadget.prototype.price = 100;
Gadget.prototype.rating = 3;
Gadget.prototype.getInfo = function() {
    return 'Rating: ' + this.rating + ', price: ' + this.price;
};

Instead of adding to the prototype object, another way to achieve the above result is to overwrite the prototype completely, replacing it with an object of your choice:

Gadget.prototype = {
    price: 100,
    rating: 3,
    getInfo: function() {
        return 'Rating: ' + this.rating + ', price: ' + this.price;
    }
};

Using the Prototype's Methods and Properties

All the methods and properties you have added to the prototype are directly available as soon as you create a new object using the constructor. If you create a newtoy object using the Gadget() constructor, you can access all the methods and properties already defined:

>>> var newtoy = new Gadget('webcam', 'black');
>>> newtoy.name;
"webcam"
>>> newtoy.color;
"black"
>>> newtoy.whatAreYou();
"I am a black webcam"
>>> newtoy.price;
100
>>> newtoy.rating;
3
>>> newtoy.getInfo();
"Rating: 3, price: 100"

It's important to note that the prototype is "live". Objects are passed by reference in JavaScript, and therefore the prototype is not copied with every new object instance. What does this mean in practice? It means that you can modify the prototype at any time and all objects (even those created before the modification) will inherit the changes. Let's continue the example, adding a new method to the prototype:

Gadget.prototype.get = function(what) {
    return this[what];
};

Even though newtoy was created before the get() method was defined, newtoy will still have access to the new method:

>>> newtoy.get('price');
100
>>> newtoy.get('color');
"black"

Own Properties versus prototype Properties

In the example above, getInfo() used this internally to address the object. It could've also used Gadget.prototype to achieve the same result:

Gadget.prototype.getInfo = function() {
    return 'Rating: ' + Gadget.prototype.rating + ', price: ' + Gadget.prototype.price;
};

What is the difference? To answer this question, let's examine how the prototype works in more detail. Let's again take our newtoy object:

>>> var newtoy = new Gadget('webcam', 'black');

When you try to access a property of newtoy, say newtoy.name, the JavaScript engine will look through all of the properties of the object searching for one called name and, if it finds it, will return its value:

>>> newtoy.name
"webcam"

What if you try to access the rating property? The JavaScript engine will examine all of the properties of newtoy and will not find the one called rating. Then the script engine will identify the prototype of the constructor function used to create this object (the same as if you did newtoy.constructor.prototype). If the property is found in the prototype, this property is used:

>>> newtoy.rating
3

This would be the same as if you accessed the prototype directly. Every object has a constructor property, which is a reference to the function that created the object, so in our case:

>>> newtoy.constructor
Gadget(name, color)
>>> newtoy.constructor.prototype.rating
3

Now let's take this lookup one step further. Every object has a constructor. The prototype is an object, so it must have a constructor too, which in turn has a prototype.
In other words, you can do:

>>> newtoy.constructor.prototype.constructor
Gadget(name, color)
>>> newtoy.constructor.prototype.constructor.prototype
Object price=100 rating=3

This might go on for a while, depending on how long the prototype chain is, but you eventually end up with the built-in Object() object, which is the highest-level parent. In practice, this means that if you try newtoy.toString() and newtoy doesn't have an own toString() method and its prototype doesn't either, in the end you'll get the Object's toString():

>>> newtoy.toString()
"[object Object]"

Overwriting Prototype's Property with Own Property

As the above discussion demonstrates, if one of your objects doesn't have a certain property of its own, it can use one (if it exists) somewhere up the prototype chain. What if the object does have its own property and the prototype also has one with the same name? The own property takes precedence over the prototype's. Let's have a scenario where a property name exists both as an own property and as a property of the prototype object:

>>> function Gadget(name) {
        this.name = name;
    }
>>> Gadget.prototype.name = 'foo';
"foo"

Creating a new object and accessing its name property gives you the object's own name property:

>>> var toy = new Gadget('camera');
>>> toy.name;
"camera"

If you delete this property, the prototype's property with the same name "shines through":

>>> delete toy.name;
true
>>> toy.name;
"foo"

Of course, you can always re-create the object's own property:

>>> toy.name = 'camera';
>>> toy.name;
"camera"