Creating a Spring Application

Packt
25 May 2015
18 min read
In this article by Jérôme Jaglale, author of the book Spring Cookbook, we will cover the following recipes:

- Installing Java, Maven, Tomcat, and Eclipse on Mac OS
- Installing Java, Maven, Tomcat, and Eclipse on Ubuntu
- Installing Java, Maven, Tomcat, and Eclipse on Windows
- Creating a Spring web application
- Running a Spring web application
- Using Spring in a standard Java application

(For more resources related to this topic, see here.)

Introduction

In this article, we will first cover the installation of some of the tools for Spring development:

- Java: Spring is a Java framework.
- Maven: This is a build tool similar to Ant. It makes it easy to add Spring libraries to a project. Gradle is another option as a build tool.
- Tomcat: This is a web server for Java web applications. You can also use JBoss, Jetty, GlassFish, or WebSphere.
- Eclipse: This is an IDE. You can also use NetBeans, IntelliJ IDEA, and so on.

Then, we will build a Spring web application and run it with Tomcat. Finally, we'll see how Spring can also be used in a standard Java application (not a web application).

Installing Java, Maven, Tomcat, and Eclipse on Mac OS

We will first install Java 8 because it's not installed by default on Mac OS 10.9 or higher. Then, we will install Maven 3, a build tool similar to Ant, to manage the external Java libraries that we will use (Spring, Hibernate, and so on). Maven 3 also compiles source files and generates JAR and WAR files. We will also install Tomcat 8, a popular web server for Java web applications, which we will use throughout this book. JBoss, Jetty, GlassFish, or WebSphere could be used instead. Finally, we will install the Eclipse IDE, but you could also use NetBeans, IntelliJ IDEA, and so on.

How to do it…

Install Java first, then Maven, Tomcat, and Eclipse.

Installing Java

1. Download Java from the Oracle website http://oracle.com. In the Java SE downloads section, choose the Java SE 8 SDK. Select Accept the License Agreement and download the Mac OS X x64 package. The direct link to the page is http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html.
2. Open the downloaded file, launch it, and complete the installation.
3. In your ~/.bash_profile file, set the JAVA_HOME environment variable. Change jdk1.8.0_40.jdk to the actual folder name on your system (this depends on the version of Java you are using, which is updated regularly):

   export JAVA_HOME="/Library/Java/JavaVirtualMachines/jdk1.8.0_40.jdk/Contents/Home"

4. Open a new terminal and test whether it's working:

   $ java -version
   java version "1.8.0_40"
   Java(TM) SE Runtime Environment (build 1.8.0_40-b26)
   Java HotSpot(TM) 64-Bit Server VM (build 25.40-b25, mixed mode)

Installing Maven

1. Download Maven from the Apache website http://maven.apache.org/download.cgi. Choose the Binary zip file of the current stable version.
2. Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin).
3. In your ~/.bash_profile file, add a MAVEN_HOME environment variable pointing to that folder. For example:

   export MAVEN_HOME=~/bin/apache-maven-3.3.1

4. Add the bin subfolder to your PATH environment variable:

   export PATH=$PATH:$MAVEN_HOME/bin

5. Open a new terminal and test whether it's working:

   $ mvn -v
   Apache Maven 3.3.1 (12a6b3...
   Maven home: /Users/jerome/bin/apache-maven-3.3.1
   Java version: 1.8.0_40, vendor: Oracle Corporation
   Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_...
   Default locale: en_US, platform encoding: UTF-8
   OS name: "mac os x", version: "10.9.5", arch...
Installing Tomcat

1. Download Tomcat from the Apache website http://tomcat.apache.org/download-80.cgi and choose the Core binary distribution.
2. Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin).
3. Make the scripts in the bin subfolder executable:

   chmod +x bin/*.sh

4. Launch Tomcat using the catalina.sh script:

   $ bin/catalina.sh run
   Using CATALINA_BASE:   /Users/jerome/bin/apache-tomcat-7.0.54
   ...
   INFO: Server startup in 852 ms

5. Tomcat runs on port 8080 by default. In a web browser, go to http://localhost:8080/ to check whether it's working.

Installing Eclipse

1. Download Eclipse from http://www.eclipse.org/downloads/. Choose the Mac OS X 64 Bit version of Eclipse IDE for Java EE Developers.
2. Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin).
3. Launch Eclipse by executing the eclipse binary:

   ./eclipse

There's more…

Tomcat can be run as a background process using these two scripts:

   bin/startup.sh
   bin/shutdown.sh

On a development machine, it's convenient to put Tomcat's folder somewhere in the home directory (for example, ~/bin) so that its contents can be updated without root privileges.

Installing Java, Maven, Tomcat, and Eclipse on Ubuntu

We will first install Java 8. Then, we will install Maven 3, a build tool similar to Ant, to manage the external Java libraries that we will use (Spring, Hibernate, and so on). Maven 3 also compiles source files and generates JAR and WAR files. We will also install Tomcat 8, a popular web server for Java web applications, which we will use throughout this book. JBoss, Jetty, GlassFish, or WebSphere could be used instead. Finally, we will install the Eclipse IDE, but you could also use NetBeans, IntelliJ IDEA, and so on.

How to do it…

Install Java first, then Maven, Tomcat, and Eclipse.

Installing Java

1. Add this PPA (Personal Package Archive):

   sudo add-apt-repository -y ppa:webupd8team/java

2. Refresh the list of the available packages:

   sudo apt-get update

3. Download and install Java 8:

   sudo apt-get install -y oracle-java8-installer

4. Test whether it's working:

   $ java -version
   java version "1.8.0_40"
   Java(TM) SE Runtime Environment (build 1.8.0_40-b25)...
   Java HotSpot(TM) 64-Bit Server VM (build 25.40-b25…

Installing Maven

1. Download Maven from the Apache website http://maven.apache.org/download.cgi. Choose the Binary zip file of the current stable version.
2. Uncompress the downloaded file and move the resulting folder to a convenient location (for example, ~/bin).
3. In your ~/.bash_profile file, add a MAVEN_HOME environment variable pointing to that folder. For example:

   export MAVEN_HOME=~/bin/apache-maven-3.3.1

4. Add the bin subfolder to your PATH environment variable:

   export PATH=$PATH:$MAVEN_HOME/bin

5. Open a new terminal and test whether it's working:

   $ mvn -v
   Apache Maven 3.3.1 (12a6b3...
   Maven home: /home/jerome/bin/apache-maven-3.3.1
   Java version: 1.8.0_40, vendor: Oracle Corporation...

Installing Tomcat

1. Download Tomcat from the Apache website http://tomcat.apache.org/download-80.cgi and choose the Core binary distribution.
2. Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin).
3. Make the scripts in the bin subfolder executable:

   chmod +x bin/*.sh

4. Launch Tomcat using the catalina.sh script:

   $ bin/catalina.sh run
   Using CATALINA_BASE:   /Users/jerome/bin/apache-tomcat-7.0.54
   ...
   INFO: Server startup in 852 ms

5. Tomcat runs on port 8080 by default. Go to http://localhost:8080/ to check whether it's working.
Installing Eclipse

1. Download Eclipse from http://www.eclipse.org/downloads/. Choose the Linux 64 Bit version of Eclipse IDE for Java EE Developers.
2. Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin).
3. Launch Eclipse by executing the eclipse binary:

   ./eclipse

There's more…

Tomcat can be run as a background process using these two scripts:

   bin/startup.sh
   bin/shutdown.sh

On a development machine, it's convenient to put Tomcat's folder somewhere in the home directory (for example, ~/bin) so that its contents can be updated without root privileges.

Installing Java, Maven, Tomcat, and Eclipse on Windows

We will first install Java 8. Then, we will install Maven 3, a build tool similar to Ant, to manage the external Java libraries that we will use (Spring, Hibernate, and so on). Maven 3 also compiles source files and generates JAR and WAR files. We will also install Tomcat 8, a popular web server for Java web applications, which we will use throughout this book. JBoss, Jetty, GlassFish, or WebSphere could be used instead. Finally, we will install the Eclipse IDE, but you could also use NetBeans, IntelliJ IDEA, and so on.

How to do it…

Install Java first, then Maven, Tomcat, and Eclipse.

Installing Java

1. Download Java from the Oracle website http://oracle.com. In the Java SE downloads section, choose the Java SE 8 SDK. Select Accept the License Agreement and download the Windows x64 package. The direct link to the page is http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html.
2. Open the downloaded file, launch it, and complete the installation.
3. Navigate to Control Panel | System and Security | System | Advanced system settings | Environment Variables…. Add a JAVA_HOME system variable with the value C:\Program Files\Java\jdk1.8.0_40. Change jdk1.8.0_40 to the actual folder name on your system (this depends on the version of Java, which is updated regularly).
4. Test whether it's working by opening Command Prompt and entering java -version.

Installing Maven

1. Download Maven from the Apache website http://maven.apache.org/download.cgi. Choose the Binary zip file of the current stable version.
2. Uncompress the downloaded file.
3. Create a Programs folder in your user folder. Move the extracted folder to it.
4. Navigate to Control Panel | System and Security | System | Advanced system settings | Environment Variables…. Add a MAVEN_HOME system variable with the path to the Maven folder. For example, C:\Users\jerome\Programs\apache-maven-3.2.1.
5. Open the Path system variable. Append ;%MAVEN_HOME%\bin to it.
6. Test whether it's working by opening a Command Prompt and entering mvn -v.

Installing Tomcat

1. Download Tomcat from the Apache website http://tomcat.apache.org/download-80.cgi and choose the 32-bit/64-bit Windows Service Installer binary distribution.
2. Launch and complete the installation.
3. Tomcat runs on port 8080 by default. Go to http://localhost:8080/ to check whether it's working.

Installing Eclipse

1. Download Eclipse from http://www.eclipse.org/downloads/. Choose the Windows 64 Bit version of Eclipse IDE for Java EE Developers.
2. Uncompress the downloaded file.
3. Launch the eclipse program.

Creating a Spring web application

In this recipe, we will build a simple Spring web application with Eclipse. We will:

- Create a new Maven project
- Add Spring to it
- Add two Java classes to configure Spring
- Create a "Hello World" web page

In the next recipe, we will compile and run this web application.
How to do it…

In this section, we will create a Spring web application in Eclipse.

Creating a new Maven project in Eclipse

1. In Eclipse, in the File menu, select New | Project….
2. Under Maven, select Maven Project and click on Next >.
3. Select the Create a simple project (skip archetype selection) checkbox and click on Next >.
4. For the Group Id field, enter com.springcookbook. For the Artifact Id field, enter springwebapp. For Packaging, select war and click on Finish.

Adding Spring to the project using Maven

Open Maven's pom.xml configuration file at the root of the project. Select the pom.xml tab to edit the XML source code directly. Under the project XML node, define the versions for Java and Spring. Also add the Servlet API, Spring Core, and Spring MVC dependencies:

   <properties>
     <java.version>1.8</java.version>
     <spring.version>4.1.5.RELEASE</spring.version>
   </properties>

   <dependencies>
     <!-- Servlet API -->
     <dependency>
       <groupId>javax.servlet</groupId>
       <artifactId>javax.servlet-api</artifactId>
       <version>3.1.0</version>
       <scope>provided</scope>
     </dependency>

     <!-- Spring Core -->
     <dependency>
       <groupId>org.springframework</groupId>
       <artifactId>spring-context</artifactId>
       <version>${spring.version}</version>
     </dependency>

     <!-- Spring MVC -->
     <dependency>
       <groupId>org.springframework</groupId>
       <artifactId>spring-webmvc</artifactId>
       <version>${spring.version}</version>
     </dependency>
   </dependencies>

Creating the configuration classes for Spring

1. Create the Java packages com.springcookbook.config and com.springcookbook.controller; in the left-hand side pane Package Explorer, right-click on the project folder and select New | Package….
2. In the com.springcookbook.config package, create the AppConfig class. In the Source menu, select Organize Imports to add the needed import declarations:

   package com.springcookbook.config;

   @Configuration
   @EnableWebMvc
   @ComponentScan(basePackages = {"com.springcookbook.controller"})
   public class AppConfig {
   }

3. Still in the com.springcookbook.config package, create the ServletInitializer class. Add the needed import declarations similarly:

   package com.springcookbook.config;

   public class ServletInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {
       @Override
       protected Class<?>[] getRootConfigClasses() {
           return new Class<?>[0];
       }

       @Override
       protected Class<?>[] getServletConfigClasses() {
           return new Class<?>[]{AppConfig.class};
       }

       @Override
       protected String[] getServletMappings() {
           return new String[]{"/"};
       }
   }

Creating a "Hello World" web page

In the com.springcookbook.controller package, create the HelloController class and its hi() method:

   @Controller
   public class HelloController {
     @RequestMapping("hi")
     @ResponseBody
     public String hi() {
       return "Hello, world.";
     }
   }
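The recipe leaves the import statements to Eclipse's Organize Imports. For reference, here is roughly what the classes look like with the imports written out; this is a sketch, and the package locations assume Spring Framework 4.x and the Servlet 3.1 API declared in pom.xml.

   // AppConfig.java -- import locations assumed for Spring 4.x
   package com.springcookbook.config;

   import org.springframework.context.annotation.ComponentScan;
   import org.springframework.context.annotation.Configuration;
   import org.springframework.web.servlet.config.annotation.EnableWebMvc;

   @Configuration
   @EnableWebMvc
   @ComponentScan(basePackages = {"com.springcookbook.controller"})
   public class AppConfig {
   }

   // ServletInitializer.java additionally needs:
   // import org.springframework.web.servlet.support.AbstractAnnotationConfigDispatcherServletInitializer;

   // HelloController.java additionally needs:
   // import org.springframework.stereotype.Controller;
   // import org.springframework.web.bind.annotation.RequestMapping;
   // import org.springframework.web.bind.annotation.ResponseBody;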
How it works…

This section will give you more details of what happened at every step.

Creating a new Maven project in Eclipse

The generated Maven project is a pom.xml configuration file along with a hierarchy of empty directories:

   pom.xml
   src
   |- main
      |- java
      |- resources
      |- webapp
   |- test
      |- java
      |- resources

Adding Spring to the project using Maven

The declared Maven libraries and their dependencies are automatically downloaded in the background by Eclipse. They are listed under Maven Dependencies in the left-hand side pane Package Explorer. Tomcat provides the Servlet API dependency, but we still declared it because our code needs it to compile. Maven will not include it in the generated .war file because of the <scope>provided</scope> declaration.

Creating the configuration classes for Spring

AppConfig is a Spring configuration class. It is a standard Java class annotated with:

- @Configuration: This declares it as a Spring configuration class
- @EnableWebMvc: This enables Spring's ability to receive and process web requests
- @ComponentScan(basePackages = {"com.springcookbook.controller"}): This scans the com.springcookbook.controller package for Spring components

ServletInitializer is a configuration class for Spring's servlet; it replaces the standard web.xml file. It will be detected automatically by SpringServletContainerInitializer, which is automatically called by any Servlet 3 compatible server. ServletInitializer extends the AbstractAnnotationConfigDispatcherServletInitializer abstract class and implements the required methods:

- getServletMappings(): This declares the servlet root URI.
- getServletConfigClasses(): This declares the Spring configuration classes. Here, we declared the AppConfig class that was previously defined.

Creating a "Hello World" web page

We created a controller class in the com.springcookbook.controller package, which we declared in AppConfig. When navigating to http://localhost:8080/hi, the hi() method will be called and Hello, world. will be displayed in the browser.

Running a Spring web application

In this recipe, we will use the Spring web application from the previous recipe. We will compile it with Maven and run it with Tomcat.

How to do it…

Here are the steps to compile and run a Spring web application:

1. In pom.xml, add this boilerplate code under the project XML node. It will allow Maven to generate .war files without requiring a web.xml file:

   <build>
     <finalName>springwebapp</finalName>
     <plugins>
       <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-war-plugin</artifactId>
         <version>2.5</version>
         <configuration>
           <failOnMissingWebXml>false</failOnMissingWebXml>
         </configuration>
       </plugin>
     </plugins>
   </build>

2. In Eclipse, in the left-hand side pane Package Explorer, select the springwebapp project folder. In the Run menu, select Run and choose Maven install, or execute mvn clean install in a terminal at the root of the project folder. In both cases, a target folder will be generated with the springwebapp.war file in it.
3. Copy the target/springwebapp.war file to Tomcat's webapps folder.
4. Launch Tomcat.
5. In a web browser, go to http://localhost:8080/springwebapp/hi to check whether it's working.

How it works…

In pom.xml, the boilerplate code prevents Maven from throwing an error because there's no web.xml file. A web.xml file used to be required in Java web applications; however, since Servlet specification 3.0 (implemented in Tomcat 7 and higher versions), it's not required anymore.

There's more…

On Mac OS and Linux, you can create a symbolic link in Tomcat's webapps folder pointing to the .war file in your project folder. For example:

   ln -s ~/eclipse_workspace/spring_webapp/target/springwebapp.war ~/bin/apache-tomcat/webapps/springwebapp.war

So, when the .war file is updated in your project folder, Tomcat will detect that it has been modified and will reload the application automatically.
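To sanity-check the whole build-and-deploy loop from the command line, something like the following should work; the Tomcat path is an assumption based on the ~/bin/apache-tomcat location used earlier in this article, and port 8080 is Tomcat's default.

   # Build the .war, deploy it to Tomcat, start Tomcat, and hit the controller.
   mvn clean install
   cp target/springwebapp.war ~/bin/apache-tomcat/webapps/
   ~/bin/apache-tomcat/bin/startup.sh
   curl http://localhost:8080/springwebapp/hi   # should print: Hello, world.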
Using Spring in a standard Java application

In this recipe, we will build a standard Java application (not a web application) using Spring. We will:

- Create a new Maven project
- Add Spring to it
- Add a class to configure Spring
- Add a User class
- Define a User singleton in the Spring configuration class
- Use the User singleton in the main() method

How to do it…

In this section, we will cover the steps to use Spring in a standard (not web) Java application.

Creating a new Maven project in Eclipse

1. In Eclipse, in the File menu, select New | Project....
2. Under Maven, select Maven Project and click on Next >.
3. Select the Create a simple project (skip archetype selection) checkbox and click on Next >.
4. For the Group Id field, enter com.springcookbook. For the Artifact Id field, enter springapp. Click on Finish.

Adding Spring to the project using Maven

Open Maven's pom.xml configuration file at the root of the project. Select the pom.xml tab to edit the XML source code directly. Under the project XML node, define the Java and Spring versions and add the Spring Core dependency:

   <properties>
     <java.version>1.8</java.version>
     <spring.version>4.1.5.RELEASE</spring.version>
   </properties>

   <dependencies>
     <!-- Spring Core -->
     <dependency>
       <groupId>org.springframework</groupId>
       <artifactId>spring-context</artifactId>
       <version>${spring.version}</version>
     </dependency>
   </dependencies>

Creating a configuration class for Spring

1. Create the com.springcookbook.config Java package; in the left-hand side pane Package Explorer, right-click on the project and select New | Package….
2. In the com.springcookbook.config package, create the AppConfig class. In the Source menu, select Organize Imports to add the needed import declarations:

   @Configuration
   public class AppConfig {
   }

Creating the User class

Create a User Java class with two String fields:

   public class User {
     private String name;
     private String skill;

     public String getName() {
       return name;
     }
     public void setName(String name) {
       this.name = name;
     }
     public String getSkill() {
       return skill;
     }
     public void setSkill(String skill) {
       this.skill = skill;
     }
   }

Defining a User singleton in the Spring configuration class

In the AppConfig class, define a User bean:

   @Bean
   public User admin(){
     User u = new User();
     u.setName("Merlin");
     u.setSkill("Magic");
     return u;
   }

Using the User singleton in the main() method

1. Create the com.springcookbook.main package with the Main class containing the main() method:

   package com.springcookbook.main;

   public class Main {
     public static void main(String[] args) {
     }
   }

2. In the main() method, retrieve the User singleton and print its properties:

   AnnotationConfigApplicationContext springContext = new AnnotationConfigApplicationContext(AppConfig.class);

   User admin = (User) springContext.getBean("admin");

   System.out.println("admin name: " + admin.getName());
   System.out.println("admin skill: " + admin.getSkill());

   springContext.close();

3. Test whether it's working; in the Run menu, select Run.

How it works...

We created a Java project to which we added Spring. We defined a User bean called admin (the bean name is by default the bean method name). In the Main class, we created a Spring context object from the AppConfig class and retrieved the admin bean from it. We used the bean and, finally, closed the Spring context.

Summary

In this article, we have learned how to install some of the tools for Spring development. Then, we learned how to build a Spring web application and run it with Tomcat. Finally, we saw how Spring can also be used in a standard Java application.
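As a closing reference for the standalone recipe, here is the Main class in one piece with the import statements filled in. This is a sketch: the Spring import is standard for Spring 4.x, but the article does not say which packages User and AppConfig live in, so the com.springcookbook.model and com.springcookbook.config packages below are assumptions.

   package com.springcookbook.main;

   // Standard Spring 4.x import; the two project imports below assume package
   // names that the article leaves open, so adjust them to your own layout.
   import org.springframework.context.annotation.AnnotationConfigApplicationContext;

   import com.springcookbook.config.AppConfig;
   import com.springcookbook.model.User;

   public class Main {
       public static void main(String[] args) {
           // Build the Spring context from the annotated configuration class.
           AnnotationConfigApplicationContext springContext =
                   new AnnotationConfigApplicationContext(AppConfig.class);

           // Retrieve the bean defined by the admin() @Bean method.
           User admin = (User) springContext.getBean("admin");

           System.out.println("admin name: " + admin.getName());
           System.out.println("admin skill: " + admin.getSkill());

           springContext.close();
       }
   }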

Frontend development with Bootstrap 4

Packt
06 Oct 2016
19 min read
In this article by Bass Jobsen, author of the book Bootstrap 4 Site Blueprints, we will see why Bootstrap's popularity as a frontend web development framework is easy to understand. It provides a palette of user-friendly, cross-browser-tested solutions for the most standard UI conventions. Its ready-made, community-tested combination of HTML markup, CSS styles, and JavaScript plugins greatly speeds up the task of developing a frontend web interface, and it yields a pleasing result out of the gate. With the fundamental elements in place, we can customize the design on top of a solid foundation.

(For more resources related to this topic, see here.)

However, not all that is popular, efficient, and effective is good. Too often, a handy tool can generate and reinforce bad habits; not so with Bootstrap, at least not necessarily so. Those who have followed it from the beginning know that its first release and early updates have occasionally favored pragmatic efficiency over best practices. The fact is that some best practices, including semantic markup, mobile-first design, and performance-optimized assets, require extra time and effort for implementation.

Quantity and quality

If handled well, I feel that Bootstrap is a boon for the web development community in terms of quality and efficiency. Since developers are attracted to the web development framework, they become part of a coding community that draws them increasingly to the current best practices. From the start, Bootstrap has encouraged the implementation of tried, tested, and future-friendly CSS solutions, from Nicolas Gallagher's Normalize.css to CSS3's displacement of image-heavy design elements. It has also supported (if not always modeled) HTML5 semantic markup.

Improving with age

With the release of v2.0, Bootstrap took responsive design into the mainstream, ensuring that its interface elements could travel well across devices, from desktops to tablets to handhelds. With the v3.0 release, Bootstrap stepped up its game again by providing the following features:

- The responsive grid was now mobile-first friendly
- Icons now utilize web fonts and, thus, were mobile- and retina-friendly
- With the drop of the support for IE7, markup and CSS conventions were now leaner and more efficient
- Since version 3.2, autoprefixer was required to build Bootstrap

This article is about the v4.0 release. This release contains many improvements and also some new components, while some other components and plugins are dropped. In the following overview, you will find the most important improvements and changes in Bootstrap 4:

- Less (Leaner CSS) has been replaced with Sass.
- CSS code has been refactored to avoid tag and child selectors.
- There is an improved grid system with a new grid tier to better target mobile devices.
- The navbar has been replaced.
- It has opt-in flexbox support.
- It has a new HTML reset module called Reboot. Reboot extends Nicolas Gallagher's Normalize.css and handles the box-sizing: border-box declarations.
- jQuery plugins are written in ES6 now and come with UMD support.
- There is improved auto-placement of tooltips and popovers, thanks to the help of a library called Tether.
- It has dropped the support for Internet Explorer 8, which enables us to swap pixels with rem and em units.
- It has added the Card component, which replaces the wells, thumbnails, and panels in earlier versions.
- It has dropped the icons in the font format from the Glyphicon Halflings set.
- The Affix plugin is dropped, and it can be replaced with the position: sticky polyfill (https://github.com/filamentgroup/fixed-sticky).

The power of Sass

When working with Bootstrap, there is the power of Sass to consider. Sass is a preprocessor for CSS. It extends the CSS syntax with variables, mixins, and functions and helps you in DRY (Don't Repeat Yourself) coding your CSS code. Sass was originally written in Ruby. Nowadays, a fast port of Sass written in C++, called libSass, is available. Bootstrap uses the modern SCSS syntax for Sass instead of the older indented syntax of Sass.

Using Bootstrap CLI

You will be introduced to Bootstrap CLI. Instead of using Bootstrap's bundled build process, you can also start a new project by running the Bootstrap CLI. Bootstrap CLI is the command-line interface for Bootstrap 4. It includes some built-in example projects, but you can also use it to employ and deliver your own projects. You'll need the following software installed to get started with Bootstrap CLI:

- Node.js 0.12+: Use the installer provided on the NodeJS website, which can be found at http://nodejs.org/. With Node installed, run [sudo] npm install -g grunt bower
- Git: Use the installer for your OS (Windows users can also try the Git installer for Windows)

Gulp is another task runner for the Node.js system. Note that if you prefer Gulp over Grunt, you should install gulp instead of grunt with the following command:

   [sudo] npm install -g gulp bower

The Bootstrap CLI is installed through npm by running the following command in your console:

   npm install -g bootstrap-cli

This will add the bootstrap command to your system.

Preparing a new Bootstrap project

After installing the Bootstrap CLI, you can create a new Bootstrap project by running the following command in your console:

   bootstrap new --template empty-bootstrap-project-gulp

Enter the name of your project for the question "What's the project called? (no spaces)". A new folder with the project name will be created. After the setup process, the directory and file structure of your new project folder should look as shown in the following figure:

The project folder also contains a Gulpfile.js file. Now, you can run the bootstrap watch command in your console and start editing the html/pages/index.html file. The HTML templates are compiled with Panini. Panini is a flat file compiler that helps you to create HTML pages with consistent layouts and reusable partials with ease. You can read more about Panini at http://foundation.zurb.com/sites/docs/panini.html.

Responsive features and breakpoints

Bootstrap has four breakpoints at 544, 768, 992, and 1200 pixels by default. At these breakpoints, your design may adapt to and target specific devices and viewport sizes. Bootstrap's mobile-first and responsive grid(s) also use these breakpoints. You can read more about the grids later on. You can use these breakpoints to specify and name the viewport ranges. The extra small (xs) range is for portrait phones with a viewport smaller than 544 pixels, the small (sm) range is for landscape phones with viewports smaller than 768 pixels, the medium (md) range is for tablets with viewports smaller than 992 pixels, the large (lg) range is for desktops with viewports wider than 992 pixels, and finally, the extra-large (xl) range is for desktops with a viewport wider than 1200 pixels. The breakpoints are in pixel values, as the viewport pixel size does not depend on the font size and modern browsers have already fixed some zooming bugs.
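As a quick illustration of how these named ranges are typically used in your own SCSS, Bootstrap 4's Sass source ships breakpoint mixins keyed to the names above; a minimal sketch follows, in which the .sidebar selector and its rules are invented for the example.

   // Hide a hypothetical .sidebar on phones and show it from the md range up.
   // media-breakpoint-up() is one of Bootstrap 4's breakpoint mixins.
   .sidebar {
     display: none;

     @include media-breakpoint-up(md) {
       display: block;
     }
   }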
Some people claim that em values should be preferred. To learn more about this, check out the following link: http://zellwk.com/blog/media-query-units/. Those who still prefer em values over pixel values can simply change the $grid-breakpoints variable declaration in the scss/includes/_variables.scss file. To use em values for media queries, the SCSS code should be as follows:

   $grid-breakpoints: (
     // Extra small screen / phone
     xs: 0,
     // Small screen / phone
     sm: 34em, // 544px
     // Medium screen / tablet
     md: 48em, // 768px
     // Large screen / desktop
     lg: 62em, // 992px
     // Extra large screen / wide desktop
     xl: 75em // 1200px
   );

Note that you also have to change the $container-max-widths variable declaration. You should change or modify Bootstrap's variables in the local scss/includes/_variables.scss file, as explained at http://bassjobsen.weblogs.fm/preserve_settings_and_customizations_when_updating_bootstrap/. This will ensure that your changes are not overwritten when you update Bootstrap.

The new Reboot module and Normalize.css

When talking about cascade in CSS, there will, no doubt, be a mention of the browser default settings getting a higher precedence than the author's preferred styling. In other words, anything that is not defined by the author will be assigned a default styling set by the browser. The default styling may differ for each browser, and this behavior plays a major role in many cross-browser issues. To prevent these sorts of problems, you can perform a CSS reset. CSS or HTML resets set a default author style for commonly used HTML elements to make sure that browser default styles do not mess up your pages or render your HTML elements differently in other browsers.

Bootstrap uses Normalize.css written by Nicolas Gallagher. Normalize.css is a modern, HTML5-ready alternative to CSS resets and can be downloaded from http://necolas.github.io/normalize.css/. It lets browsers render all elements more consistently and makes them adhere to modern standards. Together with some other styles, Normalize.css forms the new Reboot module of Bootstrap.

Box-sizing

The Reboot module also sets the global box-sizing value from content-box to border-box. The box-sizing property is the one that sets the CSS box model used for calculating the dimensions of an element. In fact, box-sizing is not new in CSS, but nonetheless, switching your code to box-sizing: border-box will make your work a lot easier. When using the border-box setting, the calculation of the width of an element includes border width and padding. So, changing the border width or padding of an element won't break your layouts.

Predefined CSS classes

Bootstrap ships with predefined CSS classes for everything. You can build a mobile-first responsive grid for your project by only using div elements and the right grid classes. CSS classes for styling other elements and components are also available. Consider the styling of a button in the following HTML code:

   <button class="btn btn-warning">Warning!</button>

Now, your button will be as shown in the following screenshot:

You will notice that Bootstrap uses two classes to style a single button. The first is the .btn class that gives the button the general button layout styles. The second class is the .btn-warning class that sets the custom colors of the button.
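To make the idea of predefined classes a little more concrete, here is a small, hypothetical snippet that combines the grid classes with the button above; the copy and the 8/4 column split are arbitrary example choices, not part of the original article.

   <!-- A minimal mobile-first layout sketch using Bootstrap 4's predefined classes. -->
   <div class="container">
     <div class="row">
       <div class="col-md-8">
         <p>Main content: full width on phones, two thirds from the md breakpoint up.</p>
       </div>
       <div class="col-md-4">
         <button class="btn btn-warning">Warning!</button>
       </div>
     </div>
   </div>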
Creating a local Sass structure

Before we can start compiling Bootstrap's Sass code into CSS code, we have to create some local Sass or SCSS files. First, create a new scss subdirectory in your project directory. In the scss directory, create your main project file called app.scss. Then, create a new subdirectory in the new scss directory named includes. Now, you will have to copy bootstrap.scss and _variables.scss from the Bootstrap source code in the bower_components directory to the new scss/includes directory, as follows:

   cp bower_components/bootstrap/scss/bootstrap.scss scss/includes/_bootstrap.scss
   cp bower_components/bootstrap/scss/_variables.scss scss/includes/

You will notice that the bootstrap.scss file has been renamed to _bootstrap.scss, starting with an underscore, and has become a partial file now.

Import the files you have copied in the previous step into the app.scss file, as follows:

   @import "includes/variables";
   @import "includes/bootstrap";

Then, open the scss/includes/_bootstrap.scss file and change the import part for the Bootstrap partial files so that the original code in the bower_components directory will be imported here. Note that we will set the include path for the Sass compiler to the bower_components directory later on. The @import statements should look as shown in the following SCSS code:

   // Core variables and mixins
   @import "bootstrap/scss/variables";
   @import "bootstrap/scss/mixins";
   // Reset and dependencies
   @import "bootstrap/scss/normalize";

You're importing all of Bootstrap's SCSS code into your project now. When preparing your code for production, you can consider commenting out the partials that you do not require for your project. Modification of scss/includes/_variables.scss is not required, but you can consider removing the !default declarations, because the real default values are set in the original _variables.scss file, which is imported after the local one. Note that the local scss/includes/_variables.scss file does not have to contain a copy of all of Bootstrap's variables. Having them all just makes it easier to modify them for customization; it also ensures that your default values do not change when you are updating Bootstrap.

Setting up your project and requirements

For this project, you'll use the Bootstrap CLI again, as it helps you create a setup for your project comfortably. Bootstrap CLI requires you to have Node.js and Gulp already installed on your system. Now, create a new project by running the following command in your console:

   bootstrap new

Enter the name of your project and choose the "An empty new Bootstrap project. Powered by Panini, Sass and Gulp." template. Now your project is ready for you to start your design work. However, before you start, let's first go through the introduction to Sass and the strategies for customization.

The power of Sass in your project

Sass is a preprocessor for CSS code and is an extension of CSS3, which adds nested rules, variables, mixins, functions, selector inheritance, and more.

Creating a local Sass structure

Before we can start compiling Bootstrap's Sass code into CSS code, we have to create some local Sass or SCSS files. First, create a new scss subdirectory in your project directory. In the scss directory, create your main project file and name it app.scss.
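The include path for the Sass compiler mentioned above is wired up by the Gulp-based template, so you do not normally write it yourself; still, a minimal, hypothetical gulp task doing the same thing could look like the following sketch. The gulp-sass options and output paths are assumptions for illustration, not the template's actual Gulpfile.js.

   // Hypothetical sketch: compile scss/app.scss with bower_components on the include path,
   // so that imports such as "bootstrap/scss/variables" resolve to the Bootstrap source.
   var gulp = require('gulp');
   var sass = require('gulp-sass');

   gulp.task('sass', function () {
     return gulp.src('scss/app.scss')
       .pipe(sass({ includePaths: ['bower_components'] }).on('error', sass.logError))
       .pipe(gulp.dest('css'));
   });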
Using the CLI and running the code from GitHub

Install the Bootstrap CLI using the following commands in your console:

   [sudo] npm install -g gulp bower
   npm install bootstrap-cli --global

Then, use the following command to set up a Bootstrap 4 weblog project:

   bootstrap new --repo https://github.com/bassjobsen/bootstrap-weblog.git

The following figure shows the end result of your efforts:

Turning our design into a WordPress theme

WordPress is a very popular CMS (Content Management System); it now powers 25 percent of all sites across the web. WordPress is a free and open source CMS and is based on PHP. To learn more about WordPress, you can also visit Packt Publishing's WordPress Tech Page at https://www.packtpub.com/tech/wordpress.

Now let's turn our design into a WordPress theme. There are many Bootstrap-based themes that we could choose from. We've taken care to integrate Bootstrap's powerful Sass styles and JavaScript plugins with the best practices found for HTML5. It will be to our advantage to use a theme that does the same. We'll use the JBST4 theme for this exercise. JBST4 is a blank WordPress theme built with Bootstrap 4.

Installing the JBST 4 theme

Let's get started by downloading the JBST theme. Navigate to your wordpress/wp-content/themes/ directory and run the following command in your console:

   git clone https://github.com/bassjobsen/jbst-4-sass.git jbst-weblog-theme

Then navigate to the new jbst-weblog-theme directory and run the following commands to confirm whether everything is working:

   npm install
   gulp

Download from GitHub

You can download the newest and updated version of this theme from GitHub too. You will find it at https://github.com/bassjobsen/jbst-weblog-theme.

JavaScript events of the Carousel plugin

Bootstrap provides custom events for most of the plugins' unique actions. The Carousel plugin fires the slide.bs.carousel (at the beginning of the slide transition) and slid.bs.carousel (at the end of the slide transition) events. You can use these events to add custom JavaScript code. You can, for instance, change the background color of the body on the events by adding the following JavaScript into the js/main.js file:

   $('.carousel').on('slide.bs.carousel', function () {
     $('body').css('background-color', '#' + (Math.random() * 0xFFFFFF << 0).toString(16));
   });

You will notice that the gulp watch task is not set for the js/main.js file, so you have to run the gulp or bootstrap watch command manually after you are done with the changes. For more advanced changes to the plugin's behavior, you can overwrite its methods by using, for instance, the following JavaScript code:

   !function($) {
     var number = 0;
     var tmp = $.fn.carousel.Constructor.prototype.cycle;
     $.fn.carousel.Constructor.prototype.cycle = function (relatedTarget) {
       // custom JavaScript code here
       number = (number % 4) + 1;
       $('body').css('transform', 'rotate(' + number * 90 + 'deg)');
       tmp.call(this); // call the original function
     };
   }(jQuery);

The preceding JavaScript sets the transform CSS property without vendor prefixes. The autoprefixer only prefixes your static CSS code. For full browser compatibility, you should add the vendor prefixes in the JavaScript code yourself. Bootstrap exclusively uses CSS3 for its animations, but Internet Explorer 9 doesn't support the necessary CSS properties.

Adding drop-down menus to our navbar

Bootstrap's JavaScript Dropdown plugin enables you to create drop-down menus with ease. You can also add these drop-down menus to your navbar.
Open the html/includes/header.html file in your text editor. You will notice that the Gulp build process uses the Panini HTML compiler to compile our HTML templates into HTML pages. Panini is powered by the Handlebars template language. You can use helpers, iterations, and custom data in your templates. In this example, you'll use the power of Panini to build the navbar items with drop-down menus.

First, create a html/data/productgroups.yml file that contains the titles of the navbar items:

   - Shoes
   - Clothing
   - Accessories
   - Women
   - Men
   - Kids
   - All Departments

The preceding code is written in the YAML format. YAML is a human-readable data serialization language that takes concepts from programming languages and ideas from XML; you can read more about it at http://yaml.org/. Using the data described in the preceding code, you can use the following HTML and template code to build the navbar items:

   <ul class="nav navbar-nav navbar-toggleable-sm collapse" id="collapsiblecontent">
     {{#each productgroups}}
     <li class="nav-item dropdown {{#ifCond this 'Shoes'}}active{{/ifCond}}">
       <a class="nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button" aria-haspopup="true" aria-expanded="false">
         {{ this }}
       </a>
       <div class="dropdown-menu">
         <a class="dropdown-item" href="#">Action</a>
         <a class="dropdown-item" href="#">Another action</a>
         <a class="dropdown-item" href="#">Something else here</a>
         <div class="dropdown-divider"></div>
         <a class="dropdown-item" href="#">Separated link</a>
       </div>
     </li>
     {{/each}}
   </ul>

The preceding code uses a (for) each loop to build the seven navbar items; each item gets the same drop-down menu. The Shoes menu gets the active class. Handlebars, and so Panini, does not support conditional comparisons by default. The if statement can only handle a single value, but you can add a custom helper to enable conditional comparisons. The custom helper, which enables us to use the ifCond statement, can be found in the html/helpers/ifCond.js file. Read my blog post, How to set up Panini for different environments, at http://bassjobsen.weblogs.fm/set-panini-different-environments/, to learn more about Panini and custom helpers.

The HTML code for the drop-down menu is in accordance with the code for drop-down menus as described for the Dropdown plugin at http://getbootstrap.com/components/dropdowns/. The navbar collapses for smaller screen sizes. By default, the drop-down menus look the same on all grids.

Now, you will use your Bootstrap skills to build an Angular 2 app. Angular 2 is the successor of AngularJS. You can read more about Angular 2 at https://angular.io/. It is a toolset for building the framework that is most suited to your application development; it lets you extend HTML vocabulary for your application. The resulting environment is extraordinarily expressive, readable, and quick to develop. Angular is maintained by Google and a community of individuals and corporations. I have also published the source for an Angular 2 with Bootstrap 4 starting point on GitHub. You will find it at the following URL: https://github.com/bassjobsen/angular2-bootstrap4-website-builder. You can install it by simply running the following command in your console:

   git clone https://github.com/bassjobsen/angular2-bootstrap4-website-builder.git yourproject

Next, navigate to the new folder, then run the following commands and verify that it works:

   npm install
   npm start

Other tools to deploy Bootstrap 4

A Brunch skeleton using Bootstrap 4 is available at https://github.com/bassjobsen/brunch-bootstrap4.
Brunch is a frontend web app build tool that builds, lints, compiles, concatenates, and shrinks your HTML5 apps. Read more about Brunch at the official website, which can be found at http://brunch.io/. You can try Brunch by running the following commands in your console:

   npm install -g brunch
   brunch new -s https://github.com/bassjobsen/brunch-bootstrap4

Notice that the first command requires administrator rights to run. After installing the tool, you can run the following command to build your project:

   brunch build

The preceding command will create a new public/index.html file, after which you can open it in your browser. You'll find that it looks like this:

Yeoman

Yeoman is another build tool. It's a command-line utility that allows the creation of projects utilizing scaffolding templates, called generators. A Yeoman generator that scaffolds out a frontend Bootstrap 4 web app can be found at the following URL: https://github.com/bassjobsen/generator-bootstrap4. You can run the Yeoman Bootstrap 4 generator by running the following commands in your console:

   npm install -g yo
   npm install -g generator-bootstrap4
   yo bootstrap4
   grunt serve

Again, note that the first two commands require administrator rights. The grunt serve command runs a local web server at http://localhost:9000. Point your browser to that address and check whether it looks as follows:

Summary

Beyond this, there are a plethora of resources available for pushing further with Bootstrap. The Bootstrap community is an active and exciting one. This is truly an exciting point in the history of frontend web development. Bootstrap has made a mark in history, and for a good reason. Check out my GitHub pages at http://github.com/bassjobsen for new projects and updated sources, or ask me a question on Stack Overflow (http://stackoverflow.com/users/1596547/bass-jobsen).

Resources for Article:

Further resources on this subject:

- Gearing Up for Bootstrap 4 [article]
- Creating a Responsive Magento Theme with Bootstrap 3 [article]
- Responsive Visualizations Using D3.js and Bootstrap [article]

What is OpenLayers?

Packt
13 May 2013
4 min read
(For more resources related to this topic, see here.)

As Christopher Schmidt, one of the main project developers, wrote on the OpenLayers users mailing list:

   OpenLayers is not designed to be usable out of the box. It is a library designed to help you to build applications, so it's your job as an OpenLayers user to build the box.

Don't be scared! Building the box can be very easy and fun! The only two things you actually need to write your code and see it up and running are a text editor and a common web browser. With these tools, you can create your Hello World web map, even without downloading anything and writing no more than a basic HTML template and a dozen lines of JavaScript code (a minimal sketch of such a page appears at the end of this article).

Going forward, step by step, you will realize that OpenLayers is not only easy to learn but also very powerful. So, whether you want to embed a simple web map in your website or you want to develop an advanced mash-up application by importing spatial data from different sources and in different formats, OpenLayers will probably prove to be a very good choice.

The strengths of OpenLayers are many and reside, first of all, in its compliance with the Open Geospatial Consortium (OGC) standards, making it capable of working together with all major and most common spatial data servers. This means you can connect your client application to web services exposed as WMS, WFS, or GeoRSS, add data from a range of raster and vector file formats such as GeoJSON and GML, and organize them in layers to create your original web mapping applications.

From what has been said until now, it is clear that OpenLayers is incredibly flexible in reading spatial data, but another very important characteristic is that it is also very effective in helping you optimize the performance of your web maps, by easily defining the strategies with which spatial data are requested and (for vectors) imported on the client side. Fast maps? OpenLayers makes it possible to obtain them!

As we already said at the beginning, web maps created with OpenLayers are interactive, so users can (and want to) do more than simply look at your creation. To build this interactivity, OpenLayers provides you with a variety of controls that you can make available to your users. Tools to pan, zoom, or query the map give users the possibility to actually explore the content of the map and the spatial data displayed on it. We could say that controls bring maps to life, and you will learn how to take advantage of them in a few easy steps.

Fast loading and interactivity are important, but in many cases a crucial aspect in the process of developing a web map is to make it instantly readable. Is it useful to build web maps if the users they are dedicated to need to spend too much time before understanding what they are looking at? Fortunately, OpenLayers comes with a wide range of possibilities for styling features in vector layers. You can choose between different vector features and rendering strategies, and customize every aspect of their graphics to make your maps expressive, actually "talking" and—why not?—cool!
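To give a flavor of the "dozen lines" claim made earlier, here is a minimal sketch of such a Hello World page. It is an assumption-laden example: it uses the hosted OpenLayers 2.x build (the major version current when this article was written) and simply centers an OpenStreetMap base layer on the world.

   <!DOCTYPE html>
   <html>
     <head>
       <title>OpenLayers Hello World</title>
       <!-- Assumes the hosted OpenLayers 2.x build; point this at a local copy if preferred. -->
       <script src="http://openlayers.org/api/OpenLayers.js"></script>
     </head>
     <body>
       <div id="map" style="width: 100%; height: 400px;"></div>
       <script>
         var map = new OpenLayers.Map('map');            // attach the map to the div above
         map.addLayer(new OpenLayers.Layer.OSM());       // OpenStreetMap base layer
         map.setCenter(new OpenLayers.LonLat(0, 0), 2);  // world view at zoom level 2
       </script>
     </body>
   </html>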
Finally, as you probably remember, OpenLayers is pure JavaScript, and JavaScript is also the language of a lot of fantastic Rich Internet Application (RIA) frameworks. Mixing OpenLayers with one of these frameworks opens a wide range of possibilities for building very advanced and attractive web mapping applications.

Resources for Article:

Further resources on this subject:

- Getting Started with OpenLayers [Article]
- OpenLayers: Overview of Vector Layer [Article]
- Getting Started with OpenStreetMap [Article]

The 5 most popular programming languages in 2018

Fatema Patrawala
19 Jun 2018
11 min read
Whether you're new to software engineering or have years of experience under your belt, knowing what to learn can be difficult. Which programming language should you learn first? Which programming language should you learn next? There are hundreds of programming languages in widespread use. Each one has different features, and many even have very different applications; of course, many do not, and knowing which one to use can be tough. Making those decisions is as important a part of being a developer as writing code.

Just as English is the international language for most businesses and French is the language of love, different programming languages are better suited for different purposes. Let us take a look at what developers have chosen to be the top programming languages for 2018 in this year's Packt Skill Up Survey.

Source: Packt Skill Up Survey 2018

Java reigns supreme

Java, the accessible and ever-present programming language, continues to be widespread year on year. It is a 22-year-old language which, if put into human perspective, is incredible. You would have been old enough to have finished college, have a celebratory alcoholic drink, gamble in Iowa, and get married without parental consent in Mississippi! With time and age, Java has proven its consistency as a reliable programming language for engineers and developers. Our Skill Up 2018 survey data reveals it's still the most popular programming language.

Perhaps one of the reasons for this is Java's flexibility and adaptability. Programs can run on several different types of machines; as long as the computer has a Java Runtime Environment (JRE) installed, a Java program can run on it. Most types of computers will be compatible with a JRE: PCs running on Windows, Macs, Unix or Linux computers, even large mainframe computers and mobile phones.

Another reason for Java's popularity is that it is object oriented. Java code is robust because Java objects contain no references to data external to themselves. This is why Java is so widely used across industry: it's reliable and secure. When it's time for mobile developers to build Android apps, the first and most popular option is Java. Java is the official language for Android development, which means it has great support from Google, and most apps on the Play Store are built with Java.

If one talks about job opportunities in the field of Java, there are plenty, such as Java UI developer and Android developer, among many others. Hence, there are numerous job opportunities available in Java and J2EE combined with other new technologies, and these are among the highest paid jobs in the IT industry today. According to Payscale, the average Java developer salary in the USA is around $102,000, with salaries for job postings nationwide being 77% higher than average salaries. Some of the widely known domains where Java is used extensively are financial services, banking, the stock market, retail, and scientific and research communities.

Finally, it's worth noting that the demand for Java developers is pretty high, given that it's the language required in many engineering roles. It does make sense to start learning it if you don't yet know it.

Start learning Java. Explore Packt's latest Java eBooks and videos.

Java Tutorials: Design a RESTful web API with Java

JavaScript retains the runner-up spot

JavaScript has for years kept featuring in the list of top programming languages, and this time it is 2nd after Java.
JavaScript has continued to be everywhere, from frontend web pages to mobile web apps and everything in between. A web developer can add personality to websites by using JavaScript. It is the native language of the browser. If you want to build single-page web apps, there is really only one language option for building client-side single-page apps, and that is JavaScript. It is supported by all popular browsers, like Microsoft Internet Explorer (beginning with version 3.0), Firefox, Safari, Opera, Google Chrome, and so on.

JavaScript has been the most versatile and popular language among developers because it is simple to learn, gives extended functionality to web pages, and is an inexpensive language. In other words, it does not require any special compilers or text editors to run the script. It is also simple to implement and is relatively fast for end users.

After the release of Node.js in 2009, the "JavaScript everywhere" paradigm has become a reality. This server-side JavaScript framework allows you to unify web application development around a single programming language, rather than rely on a different language. npm, Node.js's package manager, is the largest ecosystem of open source libraries in the world. Node.js is also being adopted in IoT projects due to its speed, number of plugins, and scalability convenience. Another open source JavaScript library, known as React Native, lets you build cross-platform mobile applications in order to make a smooth transition from web to mobile.

According to Daxx, JavaScript developers earn some of the highest tech salaries, with an average of $96,000 in the US. There are plenty more sources that name JavaScript as one of the most sought-after skills in 2017. ITJobsWatch ranked JavaScript as the second most in-demand programming language in the UK, a conclusion based on the number of job ads posted over the last three months.

Read More: 5 JavaScript Machine learning libraries you need to know

Python on the rise

Python is a general-purpose language, often described as utilitarian, a word which makes it sound a little boring. It is in fact anything but that. Why is it considered among the top programming languages? Simple: it is a truly universal language, applicable to a range of problems and areas. If programmers begin working with Python for one job or career, they can easily jump to another, even if it's in an unrelated industry. Google uses Python in a number of applications; additionally, it has a developer portal devoted to Python, featuring free classes so employees can learn more about the language. Here are just a few reasons why Python is so popular:

- Python comes with plenty of documentation, guides, and tutorials, and a developer community which is incredibly active for timely help and support.
- Python has Google as one of its corporate sponsors. Google contributes to the growing list of documentation and support tools, and effectively acts as a high-profile evangelist for Python.
- Python is one of the most popular languages for data science. It's used to build machine learning and AI systems. It comes with excellent sets of task-specific libraries, from NumPy and SciPy for scientific computing to Django for web development.
- Ask any Python developer and they have to agree Python is speedy, reliable, and efficient. You can deploy Python applications in any environment, and there is little to no performance loss no matter which platform.
- Python is easy to learn, probably because it lets you think like a programmer: it is easily readable and almost looks like everyday English. The learning curve is much more gradual than that of other programming languages, which tend to be quite steep.
- Python contains everything from data structures, tools, and support to the Python community and the Python Software Foundation, and everything combined makes it a favorite among developers.

According to Gooroo, a platform that provides tech skill and salary analytics, Python is one of the highest-paying programming languages in the USA. In fact, at $103,492 per year, Python developers are on average the second best-paid in the country. The language is used for system operations, web development, server and administrative tools, deployment, scientific modeling, and now for building AI and machine learning systems, among other things.

Read More: What are professionals planning to learn this year? Python, deep learning, yes. But also...

C# is not left behind

Since the introduction of C# in 2002 by Microsoft, C#'s popularity has remained relatively high. It is a general-purpose language designed for developing apps on the Microsoft platform. C# can be used to create almost anything, but it's particularly strong at building Windows desktop applications and games. It can also be used to develop web applications and has become increasingly popular for mobile development too. Cross-platform tools such as Xamarin allow apps written in C# to be used on almost any mobile device. C# is widely used to create games using the Unity game engine, which is the most popular game engine today. Although C#'s syntax is more logical and consistent than C++'s, it is a complex language; mastering it may take more time than languages like Python.

The average pay for a C# developer, according to Payscale, is $69,006 per year. With C# you have solid prospects, as big finance corporations are using C# as their language of choice.

In the News: Exciting New Features in C# 8.0

C# Tutorial: Behavior Scripting in C# and Javascript for game developers

SQL remains strong

The acronym SQL stands for Structured Query Language. This essentially means "a programming language that is used to communicate with a database in a structured way". Much like how JavaScript is necessary for making websites exciting and more than just a static page, SQL is one of the only two languages widely used to communicate with databases.

SQL is one of the very few languages where you describe what you want, not how to get it. That means the database can be much more intelligent about how it decides to build its response, and is robust to changes in the computing environment it runs on. It is based on set theory, which forces you to think very clearly about what it is you want, and to express that in a precise way. More importantly, it gives you a vocabulary and set of tools for thinking about the problem you're trying to solve without reference to the specific idioms of your application.

SQL is a critical skill for many data-related roles. Data scientists and analysts will need to know SQL, for example. But as data reaches different parts of modern business, such as marketing and product management, it's becoming a useful language for non-developers to learn as well.
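To illustrate the declarative point above, a query states the result you want and leaves the execution plan to the database; the table and column names here are invented for the example.

   -- Hypothetical orders table: the engine decides how to scan, filter, group, and sort.
   SELECT customer_name, SUM(total) AS yearly_spend
   FROM orders
   WHERE order_date >= '2018-01-01'
   GROUP BY customer_name
   ORDER BY yearly_spend DESC;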
It offers a lot of functionality even for a free database engine, a variety of user interfaces can be implemented, and it can be made to work with other databases, including DB2 and Oracle. For organizations that need a robust database management tool but are on a budget, MySQL is a perfect choice. The average salary for a Senior SQL Developer, according to Glassdoor, is $100,271 in the US. Unlike other areas of the IT industry, the job options and growth paths for SQL developers are quite varied: you may work as a database administrator, a systems manager, or an SQL professional, depending entirely on the functional knowledge and experience you have gained.

Read More: 12 most common MySQL errors you should be aware of

C++ also among favorites

C++, the general-purpose language for systems programming, was designed in 1979 by Bjarne Stroustrup. Despite competition from other powerful languages such as Java, Python, and SQL, it is still prevalent among developers. People have been predicting its demise for more than 20 years, but it is still growing. Basically, nothing that can handle complexity runs as fast as C++. C++ is designed for fairly hardcore applications, and it is generally used together with some scripting language or other. It is for building systems that need high performance, high reliability, a small footprint, and low energy consumption, which is why you see telecom and financial applications built on C++. C++ is also considered one of the best solutions for creating applications that process music and film, and there is an extensive list of websites and tools built on C++. Financial analysts predict that earnings in this specialization will reach at least $102,000 per year.

C++ Tutorials: Getting started with C++ Features
Read More: Working with Shaders in C++ to create 3D games

Other programming languages such as C, PHP, Swift, and Go were also on developers' choice lists. The creation of supporting technologies in the last few years has given rise to speculation that programming languages are converging in terms of their capabilities. For example, Node.js enables JavaScript developers to create back-end functionality, removing the need to learn PHP or even to hire a separate back-end developer altogether. As of now, we can only speculate on future tech trends, but it will definitely be worth keeping an eye on which horse to bet on!

20 ways to describe programming in 5 words
A really basic guide to batch file programming
What is Mob Programming?

Building Surveys using Xcode

Packt
19 Jan 2016
14 min read
In this article by Dhanushram Balachandran and Edward Cessna author of book Getting Started with ResearchKit, you can find the Softwareitis.xcodeproj project in the Chapter_3/Softwareitis folder of the RKBook GitHub repository (https://github.com/dhanushram/RKBook/tree/master/Chapter_3/Softwareitis). (For more resources related to this topic, see here.) Now that you have learned about the results of tasks from the previous section, we can modify the Softwareitis project to incorporate processing of the task results. In the TableViewController.swift file, let's update the rows data structure to include the reference for processResultsMethod: as shown in the following: //Array of dictionaries. Each dictionary contains [ rowTitle : (didSelectRowMethod, processResultsMethod) ] var rows : [ [String : ( didSelectRowMethod:()->(), processResultsMethod:(ORKTaskResult?)->() )] ] = [] Update the ORKTaskViewControllerDelegate method taskViewController(taskViewController:, didFinishWithReason:, error:) in TableViewController to call processResultsMethod, as shown in the following: func taskViewController(taskViewController: ORKTaskViewController, didFinishWithReason reason: ORKTaskViewControllerFinishReason, error: NSError?) { if let indexPath = tappedIndexPath { //1 let rowDict = rows[indexPath.row] if let tuple = rowDict.values.first { //2 tuple.processResultsMethod(taskViewController.result) } } dismissViewControllerAnimated(true, completion: nil) } Retrieves the dictionary of the tapped row and its associated tuple containing the didSelectRowMethod and processResultsMethod references from rows. Invokes the processResultsMethod with taskViewController.result as the parameter. Now, we are ready to create our first survey. In Survey.swift, under the Surveys folder, you will find two methods defined in the TableViewController extension: showSurvey() and processSurveyResults(). These are the methods that we will be using to create the survey and process the results. Instruction step Instruction step is used to show instruction or introductory content to the user at the beginning or middle of a task. It does not produce any result as its an informational step. We can create an instruction step using the ORKInstructionStep object. It has title and detailText properties to set the appropriate content. It also has the image property to show an image. The ORKCompletionStep is a special type of ORKInstructionStep used to show the completion of a task. The ORKCompletionStep shows an animation to indicate the completion of the task along with title and detailText, similar to ORKInstructionStep. In creating our first Softwareitis survey, let's use the following two steps to show the information: func showSurvey() { //1 let instStep = ORKInstructionStep(identifier: "Instruction Step") instStep.title = "Softwareitis Survey" instStep.detailText = "This survey demonstrates different question types." //2 let completionStep = ORKCompletionStep(identifier: "Completion Step") completionStep.title = "Thank you for taking this survey!" //3 let task = ORKOrderedTask(identifier: "first survey", steps: [instStep, completionStep]) //4 let taskViewController = ORKTaskViewController(task: task, taskRunUUID: nil) taskViewController.delegate = self presentViewController(taskViewController, animated: true, completion: nil) } The explanation of the preceding code is as follows: Creates an ORKInstructionStep object with an identifier "Instruction Step" and sets its title and detailText properties. 
Creates an ORKCompletionStep object with an identifier "Completion Step" and sets its title property. Creates an ORKOrderedTask object with the instruction and completion step as its parameters. Creates an ORKTaskViewController object with the ordered task that was previously created and presents it to the user. Let's update the processSurveyResults method to process the results of the instruction step and the completion step as shown in the following: func processSurveyResults(taskResult: ORKTaskResult?) { if let taskResultValue = taskResult { //1 print("Task Run UUID : " + taskResultValue.taskRunUUID.UUIDString) print("Survey started at : (taskResultValue.startDate!) Ended at : (taskResultValue.endDate!)") //2 if let instStepResult = taskResultValue.stepResultForStepIdentifier("Instruction Step") { print("Instruction Step started at : (instStepResult.startDate!) Ended at : (instStepResult.endDate!)") } //3 if let compStepResult = taskResultValue.stepResultForStepIdentifier("Completion Step") { print("Completion Step started at : (compStepResult.startDate!) Ended at : (compStepResult.endDate!)") } } } The explanation of the preceding code is given in the following: As mentioned at the beginning, each task run is associated with a UUID. This UUID is available in the taskRunUUID property, which is printed in the first line. The second line prints the start and end date of the task. These are useful user analytics data with regards to how much time the user took to finish the survey. Obtains the ORKStepResult object corresponding to the instruction step using the stepResultForStepIdentifier method of the ORKTaskResult object. Prints the start and end date of the step result, which shows the amount of time for which the instruction step was shown before the user pressed the Get Started or Cancel buttons. Note that, as mentioned earlier, ORKInstructionStep does not produce any results. Therefore, the results property of the ORKStepResult object will be nil. You can use a breakpoint to stop the execution at this line of code and verify it. Obtains the ORKStepResult object corresponding to the completion step. Similar to the instruction step, this prints the start and end date of the step. The preceding code produces screens as shown in the following image: After the Done button is pressed in the completion step, Xcode prints the output that is similar to the following: Task Run UUID : 0A343E5A-A5CD-4E7C-88C6-893E2B10E7F7 Survey started at : 2015-08-11 00:41:03 +0000     Ended at : 2015-08-11 00:41:07 +0000Instruction Step started at : 2015-08-11 00:41:03 +0000   Ended at : 2015-08-11 00:41:05 +0000Completion Step started at : 2015-08-11 00:41:05 +0000   Ended at : 2015-08-11 00:41:07 +0000 Question step Question steps make up the body of a survey. ResearchKit supports question steps with various answer types such as boolean (Yes or No), numeric input, date selection, and so on. Let's first create a question step with the simplest boolean answer type by inserting the following line of code in showSurvey(): let question1 = ORKQuestionStep(identifier: "question 1", title: "Have you ever been diagnosed with Softwareitis?", answer: ORKAnswerFormat.booleanAnswerFormat()) The preceding code creates a ORKQuestionStep object with identifier question 1, title with the question, and an ORKBooleanAnswerFormat object created using the booleanAnswerFormat() class method of ORKAnswerFormat. 
The answer type for a question is determined by the type of the ORKAnswerFormat object that is passed in the answer parameter. The ORKAnswerFormat has several subclasses such as ORKBooleanAnswerFormat, ORKNumericAnswerFormat, and so on. Here, we are using ORKBooleanAnswerFormat. Don't forget to insert the created question step in the ORKOrderedTask steps parameter by updating the following line: let task = ORKOrderedTask(identifier: "first survey", steps: [instStep, question1, completionStep]) When you run the preceding changes in Xcode and start the survey, you will see the question step with the Yes or No options. We have now successfully added a boolean question step to our survey, as shown in the following image: Now, its time to process the results of this question step. The result is produced in an ORKBooleanQuestionResult object. Insert the following lines of code in processSurveyResults(): //1 if let question1Result = taskResultValue.stepResultForStepIdentifier("question 1")?.results?.first as? ORKBooleanQuestionResult { //2 if question1Result.booleanAnswer != nil { let answerString = question1Result.booleanAnswer!.boolValue ? "Yes" : "No" print("Answer to question 1 is (answerString)") } else { print("question 1 was skipped") } } The explanation of the preceding code is as follows: Obtains the ORKBooleanQuestionResult object by first obtaining the step result using the stepResultForStepIdentifier method, accessing its results property, and finally obtaining the only ORKBooleanQuestionResult object available in the results array. The booleanAnswer property of ORKBooleanQuestionResult contains the user's answer. We will print the answer if booleanAnswer is non-nil. If booleanAnswer is nil, it indicates that the user has skipped answering the question by pressing the Skip this question button. You can disable the skipping-of-a-question step by setting its optional property to false. We can add the numeric and scale type question steps using the following lines of code in showSurvey(): //1 let question2 = ORKQuestionStep(identifier: "question 2", title: "How many apps do you download per week?", answer: ORKAnswerFormat.integerAnswerFormatWithUnit("Apps per week")) //2 let answerFormat3 = ORKNumericAnswerFormat.scaleAnswerFormatWithMaximumValue(10, minimumValue: 0, defaultValue: 5, step: 1, vertical: false, maximumValueDescription: nil, minimumValueDescription: nil) let question3 = ORKQuestionStep(identifier: "question 3", title: "How many apps do you download per week (range)?", answer: answerFormat3) The explanation of the preceding code is as follows: Creates ORKQuestionStep with the ORKNumericAnswerFormat object, created using the integerAnswerFormatWithUnit method with Apps per week as the unit. Feel free to refer to the ORKNumericAnswerFormat documentation for decimal answer format and other validation options that you can use. First creates ORKScaleAnswerFormat with minimum and maximum values and step. Note that the number of step increments required to go from minimumValue to maximumValue cannot exceed 10. For example, maximum value of 100 and minimum value of 0 with a step of 1 is not valid and ResearchKit will raise an exception. The step needs to be at least 10. In the second line, ORKScaleAnswerFormat is fed in the ORKQuestionStep object. The following lines in processSurveyResults() process the results from the number and the scale questions: //1 if let question2Result = taskResultValue.stepResultForStepIdentifier("question 2")?.results?.first as? 
ORKNumericQuestionResult { if question2Result.numericAnswer != nil { print("Answer to question 2 is (question2Result.numericAnswer!)") } else { print("question 2 was skipped") } } //2 if let question3Result = taskResultValue.stepResultForStepIdentifier("question 3")?.results?.first as? ORKScaleQuestionResult { if question3Result.scaleAnswer != nil { print("Answer to question 3 is (question3Result.scaleAnswer!)") } else { print("question 3 was skipped") } } The explanation of the preceding code is as follows: Question step with ORKNumericAnswerFormat generates the result with the ORKNumericQuestionResult object. The numericAnswer property of ORKNumericQuestionResult contains the answer value if the question is not skipped by the user. The scaleAnswer property of ORKScaleQuestionResult contains the answer for a scale question. As you can see in the following image, the numeric type question generates a free form text field to enter the value, while scale type generates a slider: Let's look at a slightly complicated question type with ORKTextChoiceAnswerFormat. In order to use this answer format, we need to create the ORKTextChoice objects before hand. Each text choice object provides the necessary data to act as a choice in a single choice or multiple choice question. The following lines in showSurvey() create a single choice question with three options: //1 let textChoice1 = ORKTextChoice(text: "Games", detailText: nil, value: 1, exclusive: false) let textChoice2 = ORKTextChoice(text: "Lifestyle", detailText: nil, value: 2, exclusive: false) let textChoice3 = ORKTextChoice(text: "Utility", detailText: nil, value: 3, exclusive: false) //2 let answerFormat4 = ORKNumericAnswerFormat.choiceAnswerFormatWithStyle(ORKChoiceAnswerStyle.SingleChoice, textChoices: [textChoice1, textChoice2, textChoice3]) let question4 = ORKQuestionStep(identifier: "question 4", title: "Which category of apps do you download the most?", answer: answerFormat4) The explanation of the preceding code is as follows: Creates text choice objects with text and value. When a choice is selected, the object in the value property is returned in the corresponding ORKChoiceQuestionResult object. The exclusive property is used in multiple choice questions context. Refer to the documentation for its use. First, creates an ORKChoiceAnswerFormat object with the text choices that were previously created and specifies a single choice type using the ORKChoiceAnswerStyle enum. You can easily change this question to multiple choice question by changing the ORKChoiceAnswerStyle enum to multiple choice. Then, an ORKQuestionStep object is created using the answer format object. Processing the results from a single or multiple choice question is shown in the following. Needless to say, this code goes in the processSurveyResults() method: //1 if let question4Result = taskResultValue.stepResultForStepIdentifier("question 4")?.results?.first as? ORKChoiceQuestionResult { //2 if question4Result.choiceAnswers != nil { print("Answer to question 4 is (question4Result.choiceAnswers!)") } else { print("question 4 was skipped") } } The explanation of the preceding code is as follows: The result for a single or multiple choice question is returned in an ORKChoiceQuestionResult object. The choiceAnswers property holds the array of values for the chosen options. The following image shows the generated choice question UI for the preceding code: There are several other question types, which operate in a very similar manner like the ones we discussed so far. 
You can find them in the documentations of ORKAnswerFormat and ORKResult classes. The Softwareitis project has implementation of two additional types: date format and time interval format. Using custom tasks, you can create surveys that can skip the display of certain questions based on the answers that the users have provided so far. For example, in a smoking habits survey, if the user chooses "I do not smoke" option, then the ability to not display the "How many cigarettes per day?" question. Form step A form step allows you to combine several related questions in a single scrollable page and reduces the number of the Next button taps for the user. The ORKFormStep object is used to create the form step. The questions in the form are represented using the ORKFormItem objects. The ORKFormItem is similar to ORKQuestionStep, in which it takes the same parameters (title and answer format). Let's create a new survey with a form step by creating a form.swift extension file and adding the form entry to the rows array in TableViewController.swift, as shown in the following: func setupTableViewRows() { rows += [ ["Survey" : (didSelectRowMethod: self.showSurvey, processResultsMethod: self.processSurveyResults)], //1 ["Form" : (didSelectRowMethod: self.showForm, processResultsMethod: self.processFormResults)] ] } The explanation of the preceding code is as follows: The "Form" entry added to the rows array to create a new form survey with the showForm() method to show the form survey and the processFormResults() method to process the results from the form. The following code shows the showForm() method in Form.swift file: func showForm() { //1 let instStep = ORKInstructionStep(identifier: "Instruction Step") instStep.title = "Softwareitis Form Type Survey" instStep.detailText = "This survey demonstrates a form type step." //2 let question1 = ORKFormItem(identifier: "question 1", text: "Have you ever been diagnosed with Softwareitis?", answerFormat: ORKAnswerFormat.booleanAnswerFormat()) let question2 = ORKFormItem(identifier: "question 2", text: "How many apps do you download per week?", answerFormat: ORKAnswerFormat.integerAnswerFormatWithUnit("Apps per week")) //3 let formStep = ORKFormStep(identifier: "form step", title: "Softwareitis Survey", text: nil) formStep.formItems = [question1, question2] //1 let completionStep = ORKCompletionStep(identifier: "Completion Step") completionStep.title = "Thank you for taking this survey!" //4 let task = ORKOrderedTask(identifier: "survey with form", steps: [instStep, formStep, completionStep]) let taskViewController = ORKTaskViewController(task: task, taskRunUUID: nil) taskViewController.delegate = self presentViewController(taskViewController, animated: true, completion: nil) } The explanation of the preceding code is as follows: Creates an instruction and a completion step, similar to the earlier survey. Creates two ORKFormItem objects using the questions from the earlier survey. Notice the similarity with the ORKQuestionStep constructors. Creates ORKFormStep object with an identifier form step and sets the formItems property of the ORKFormStep object with the ORKFormItem objects that are created earlier. Creates an ordered task using the instruction, form, and completion steps and presents it to the user using a new ORKTaskViewController object. The results are processed using the following processFormResults() method: func processFormResults(taskResult: ORKTaskResult?) 
{ if let taskResultValue = taskResult { //1 if let formStepResult = taskResultValue.stepResultForStepIdentifier("form step"), formItemResults = formStepResult.results { //2 for result in formItemResults { //3 switch result { case let booleanResult as ORKBooleanQuestionResult: if booleanResult.booleanAnswer != nil { let answerString = booleanResult.booleanAnswer!.boolValue ? "Yes" : "No" print("Answer to (booleanResult.identifier) is (answerString)") } else { print("(booleanResult.identifier) was skipped") } case let numericResult as ORKNumericQuestionResult: if numericResult.numericAnswer != nil { print("Answer to (numericResult.identifier) is (numericResult.numericAnswer!)") } else { print("(numericResult.identifier) was skipped") } default: break } } } } } The explanation of the preceding code is as follows: Obtains the ORKStepResult object of the form step and unwraps the form item results from the results property. Iterates through each of the formItemResults, each of which will be the result for a question in the form. The switch statement detects the different types of question results and accesses the appropriate property that contains the answer. The following image shows the form step: Considerations for real world surveys Many clinical research studies that are conducted using a pen and paper tend to have well established surveys. When you try to convert these surveys to ResearchKit, they may not convert perfectly. Some questions and answer choices may have to be reworded so that they can fit on a phone screen. You are advised to work closely with the clinical researchers so that the changes in the surveys still produce comparable results with their pen and paper counterparts. Another aspect to consider is to eliminate some of the survey questions if the answers can be found elsewhere in the user's device. For example, age, blood type, and so on, can be obtained from HealthKit if the user has already set them. This will help in improving the user experience of your app. Summary Here we have learned to build surveys using Xcode. Resources for Article: Further resources on this subject: Signing up to be an iOS developer[article] Code Sharing Between iOS and Android[article] Creating a New iOS Social Project[article]

Introduction to BlueStacks

Packt
05 Sep 2013
4 min read
(For more resources related to this topic, see here.) So, what is BlueStacks? BlueStacks is a suite of tools designed to allow you to run Android apps easily on a Windows or Mac computer. The following screenshot shows how it looks: At the time of writing, there are two elements to the BlueStacks suite, which are listed as follows: App Player: This is the engine, which runs the Android apps Cloud Connect: This is a synchronization tool As the BlueStacks tools can be freely downloaded, anyone with a PC running on Windows or Mac can download them and start experimenting with their capabilities. This article will walk you through the process of running BlueStacks on a computer and show you some of the ways in which you can make the most out of this emerging technology. There are other ways by which you can run an emulation of Android on your computer. You can, for instance, run a virtual machine or install the Android Software Development Kit (SDK). These assume a degree of technical understanding that isn't necessarily required with BlueStacks, making BlueStacks the quickest and easiest way of running apps on your computer. BlueStacks is particularly interesting for users of Windows 8 tablets, as it opens up a whole library of mature software designed for a touch interface. This is particularly useful for those wanting to use many, free, or cheap Android apps on their laptop or tablet. It is worth noting that, at the time of writing this article, these tools are beta releases, so it is important that you take time to report the bugs that you may find to the developers through their website. The ongoing development and success of the software depends upon this feedback and results in a better product. If you become reliant on a particular feature, it is a good idea to highlight your love to the developers too. This can help influence which features are to be kept and improved upon as the product matures. App Player BlueStacks App Player allows a Windows or Mac user to run Android apps on their desktop or laptop. It does this by running an emulated version of Android within a window that you can interact with using your keyboard and mouse. The App Player can be downloaded and installed for free from the BlueStacks website, http://www.bluestacks.com. Currently, there are two main versions available for different operating systems that are enlisted as follows: Mac OS X Windows XP, Vista, 7, and 8 Once you have installed the software, an Android emulator runs on your machine. This is a light version of Android that can access app stores so that you can download and run free and paid apps and content. Most apps are compatible with App Player; however, there are some which are not (for technical reasons) and some which have been prevented by the App developers from running. If you are running any another operating system on your computer, the more computing power you can make available to the App Player the better. Otherwise, you might experience slow loading apps or worse still ones that do not function properly. To increase your chances of success, first try running App Player without running any other applications (for example, Word). Cloud Connect Cloud Connect provides a means to synchronize the apps running on an existing phone or tablet with the App Player. This means that you do not have to manually install lots of apps. Instead, you install an app on your device and sign up so that your App Player has exactly the same apps as your device. 
Summary

Thus we learned the basics of BlueStacks and took a brief look at its App Player and Cloud Connect features.

Resources for Article:
Further resources on this subject:
So, what is Spring for Android? [Article]
Animating Properties and Tweening Pages in Android 3-0 [Article]
New Connectivity APIs – Android Beam [Article]

Getting Modular with Moodle

Packt
22 Nov 2010
9 min read
  Moodle 1.9 Top Extensions Cookbook Over 60 simple and incredibly effective recipes for harnessing the power of the best Moodle modules to create effective online learning sites Packed with recipes to help you get the most out of Moodle modules Improve education outcomes by situating learning in a real-world context using Moodle Organize your content and customize your courses Reviews of the best Moodle modules—out of the available 600 modules Installation and configuration guides Written in a conversational and easy-to-follow manner         Read more about this book       (For more resources on Moodle, see here.) Changing site-wide settings Activity modules and blocks can have site-wide settings that you can adjust. These settings allow consistent changes in the use of the module across an entire site, but even during testing you might want to change such settings. It may be that you just want to see what settings can be changed globally for a module. Getting ready To achieve this you must have your web server running with Moodle installed. You need to be able to log in as the administrator, or get the help of someone who can. You should have installed the modules that you want to change settings for. The following steps assume you have installed the Progress Bar block, which has global settings that can be changed. How to do it... Log in as the site administrator and visit the root page of the site. To get to the global settings of a module, on the Site Administration menu, select Modules, then Activities or Blocks, whichever is appropriate. The Progress Bar block is a block, so select Blocks to reach its global settings. The next step is to select the name of the module. For our test, the module name is Progress Bar. The settings for the module should appear in a form. Not all activity modules or blocks have global settings. For many modules, this is not necessary. Changes to the global settings affect the configuration of the module, including any instances that may already exist, and any that are added in future, across the site. There's more... Be a little careful when changing global settings on a live site. If the module is currently in use, changing global settings can affect the experience of students and teachers. Accidentally using invalid global settings can detrimentally affect the running of the module on the site. See also Adding modules to Moodle Getting rid of modules Getting modules to speak your language Another feature of Moodle is its capacity for internationalization. This means that the same software can be used by people speaking different languages. While translations for over 80 languages are available for the core functionality of Moodle, most modules only offer translations for a smaller number of languages, and the language you are teaching in may not be one of them. Adding a translation for a module is simple to do. If you give your translation to the author of the module, your efforts could also benefit others who speak your language. Getting ready It is assumed that you have set the default language for your site. If not, there is more information about adding a language pack and setting the language for your site later. In order to create a translation for a module, you don't need any real programming experience; it's actually quite simple. Some understanding of HTML tags can be an advantage. You will need a text editor that can create and edit Unicode files. 
Word processors are not appropriate for this task, and a simple editor, such as Windows Notepad, is not up to the job. There are many free editors available that will allow you to create and edit Unicode files. One example available for Windows is Notepad++, which is a free editor and is also available as a portable application. The steps that follow provide an example that assumes the Progress Bar block has been installed.

How to do it...

Where the module was installed, there will usually be a /lang folder. For the Progress Bar block this is located at moodle/blocks/progress/lang. Within this folder, there are folders for different languages, most of them contributed by users around the world. If you are reading this, it is assumed you have an understanding of English, so have a look inside the en_utf8/ folder. You will see a file called block_progress.php and another directory called help/.

The block_progress.php file contains the strings of text used in the module, each with a code and the string displayed on screen. Open this file in your editor to see the contents. Inside the lang/help/progress/ directory there are a number of HTML files, each relating to a help topic. These appear when a help icon (usually appearing as a question mark) is clicked. Opening these files in your web browser will show you the rendered version of these files, and opening them in your editor will show you the HTML source of the documents.

To add a new language, you first need to find out the two-letter code for your language. To see the list of supported languages, along with the code letters for each language, refer to http://download.moodle.org/lang16/. You need to use the same code.

Return to the lang/ folder. For the Progress Bar block this is at moodle/blocks/progress/lang/. Assuming that you know English as a second language, copy the en_utf8/ folder and all of its content. Rename the folder with the two-letter code for your language; for example, the folder for Afrikaans would be af_utf8/. Be sure to preserve the filenames and folder names within (they do not need translation, only the contents).

Open the block_progress.php file in your Unicode editor. You need to translate the string on the right of the = symbol, within the quotes. Do not translate the code value for the string on the left. You may need to see the string in use to get a sense of what the string is intended for, in order to make an accurate translation. If you include any apostrophes within a string, escape the quote with a backslash, as shown in the following example; otherwise, the string will be seen as coming to an end earlier than it should:

$string['owners'] = 'Owner\'s';

If there is code within the strings, or HTML tags that you are unsure about, leave these and just translate any text around them. You can also translate the HTML files in moodle/blocks/progress/lang/help/progress/ to produce help files in your language. Open these in your editor and translate the text within the files. Again, avoid changing any HTML or code you don't understand. Some help files also include PHP code segments within <?php and ?> tags; avoid changing this content.

Be sure to test your translated files. If, after changing a translation file, nothing appears on the course page, it may be that you have inadvertently created an error. Typically this comes from mismatched quotes around strings. Be sure each starting quote is matched with a closing quote, and any enclosed quotes are escaped.
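To make the escaping rule concrete, here is a minimal sketch of what a few translated entries in a copied af_utf8/block_progress.php file might look like. The string codes and Afrikaans wording below are illustrative assumptions only, not strings taken from the actual Progress Bar language file; keep whichever codes the original en_utf8/block_progress.php defines and translate only the text on the right-hand side:

<?php
// Illustrative entries only -- the codes and Afrikaans wording are assumptions,
// not taken from the real Progress Bar language file.
$string['owners'] = 'Eienaars';                        // translate only the text on the right
$string['progress'] = 'Vordering';                     // the code on the left is never translated
$string['mouseover_prompt'] = 'Wat\'s hierdie blok?';  // an apostrophe inside a string is escaped with \'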
Test that your translated text items are appearing correctly and have an appropriate meaning in your language. Once created, you can use this translation throughout your site. The final step is to send your translation to the author of the module. You should be able to find their contact details on the Moodle Modules and plugins database entry page for the module. If you have translated the language strings but not translated the help files, this is still a helpful contribution that can be shared. Zip up the files you have translated and e-mail them to the author who will usually be more than happy to include your contribution within the module for future downloaders. How it works... Each time the module is loaded, its code is interpreted by the web server and HTML is produced to form the part of the page where the module will appear. Within the code, instead of writing text strings in the author's language, there are calls to functions that check the language and draw the appropriate strings from the language files. This way, all that is needed to change from one language to another is a different language file. There's more... If you want to use another language throughout your Moodle site, the following sections are a basic guide for installing and selecting the language. Adding a language pack Visit the following site to find if a language pack is available for your language: http://download.moodle.org/lang16/. If your language is available, download the appropriate zip file and extract its contents to the directory moodle/lang/. If your language is Afrikaans, for example, the language files should be contained in moodle/lang/af_utf8/. Ensure you do not introduce additional unnecessary directory levels. Selecting a language for your site and courses A language can be set as a default for courses on the site. This can be overridden at the course level if desired, or by students individually. To set the default language, log in as administrator and go to the site root page. On the Site Administration menu, select Language, then Language Settings. The default language can be set on the page that appears. Individual users can set a preferred language in their profile settings. For individual courses a language can be set. This will "force" students to use that particular language rather than their preferred language. See also If it's not quite what you want...

Neural Style Transfer: Creating artificial art with deep learning and transfer learning

Bhagyashree R
23 Nov 2018
14 min read
Paintings require a special skill that only a few have mastered. Paintings present a complex interplay of content and style. Photographs, on the other hand, are a combination of perspectives and light. When the two are combined, the results are spectacular and surprising. This process is called artistic style transfer. In this tutorial, we will be focusing on leveraging deep learning along with transfer learning for building a neural style transfer system. This article will walk you through the theoretical concepts around neural style transfer, loss functions, and optimization. Besides this, we will use a hands-on approach to implement our own neural style transfer model.

This article is an excerpt from a book written by Dipanjan Sarkar, Raghav Bali, and Tamoghna Ghosh titled Hands-On Transfer Learning with Python. To follow along with the article, you can find the code in the book's GitHub repository.

Understanding neural style transfer

Neural style transfer is the process of applying the style of a reference image to a specific target image, such that the original content of the target image remains unchanged. Here, style is defined as the colours, patterns, and textures present in the reference image, while content is defined as the overall structure and higher-level components of the image. The main objective is to retain the content of the original target image, while superimposing or adopting the style of the reference image on the target image.

To define this concept mathematically, consider three images: the original content (c), the reference style (s), and the generated image (g). We need a way to measure how different images c and g are in terms of their content, and how different images s and g are in terms of their style. Formally, the objective function for neural style transfer can be formulated as follows:

Ltotal(c, s, g) = α · Lcontent(c, g) + β · Lstyle(s, g)

Here, α and β are weights used to control the impact of the content and style components on the overall loss. This depiction can be simplified further and represented as follows:

Ltotal = α · dist(content(Ic), content(Ig)) + β · dist(style(Is), style(Ig))

Here, we can define the following components from the preceding formula:

dist is a norm function; for example, the L2 norm distance
style(...) is a function to compute representations of style for the reference style and generated images
content(...) is a function to compute representations of content for the original content and generated images
Ic, Is, and Ig are the content, style, and generated images respectively

Thus, minimizing this loss causes style(Ig) to be close to style(Is), and content(Ig) to be close to content(Ic). This helps us achieve the necessary stipulations for effective style transfer. The loss function we will try to minimize consists of three parts; namely, the content loss, the style loss, and the total variation loss, which we will be talking about soon.

The main steps for performing neural style transfer are as follows:

Leverage VGG-16 to help compute layer activations for the style, content, and generated image
Use these activations to define the specific loss functions mentioned earlier
Finally, use gradient descent to minimize the overall loss

Image preprocessing methodology

The first and foremost step towards implementation of such a network is to preprocess the data, or images in this case.
The following code snippet shows some quick utilities to preprocess and post-process images for size and channel adjustments: import numpy as np from keras.applications import vgg16 from keras.preprocessing.image import load_img, img_to_array def preprocess_image(image_path, height=None, width=None): height = 400 if not height else height width = width if width else int(width * height / height) img = load_img(image_path, target_size=(height, width)) img = img_to_array(img) img = np.expand_dims(img, axis=0) img = vgg16.preprocess_input(img) return img def deprocess_image(x): # Remove zero-center by mean pixel x[:, :, 0] += 103.939 x[:, :, 1] += 116.779 x[:, :, 2] += 123.68 # 'BGR'->'RGB' x = x[:, :, ::-1] x = np.clip(x, 0, 255).astype('uint8') return x As we would be writing custom loss functions and manipulation routines, we would need to define certain placeholders. Remember that keras is a high-level library that utilizes tensor manipulation backends (like tensorflow, theano, and CNTK) to perform the heavy lifting. Thus, these placeholders provide high-level abstractions to work with the underlying tensor object. The following snippet prepares placeholders for style, content, and generated images, along with the input tensor for the neural network: from keras import backend as K # This is the path to the image you want to transform. TARGET_IMG = 'lotr.jpg' # This is the path to the style image. REFERENCE_STYLE_IMG = 'pattern1.jpg' width, height = load_img(TARGET_IMG).size img_height = 480 img_width = int(width * img_height / height) target_image = K.constant(preprocess_image(TARGET_IMG, height=img_height, width=img_width)) style_image = K.constant(preprocess_image(REFERENCE_STYLE_IMG, height=img_height, width=img_width)) # Placeholder for our generated image generated_image = K.placeholder((1, img_height, img_width, 3)) # Combine the 3 images into a single batch input_tensor = K.concatenate([target_image, style_image, generated_image], axis=0) We will load the pre-trained VGG-16 model; that is, without the top fully-connected layers. The only difference here is that we would be providing the size dimensions of the input tensor for the model input. The following snippet helps us build the pre-trained model: model = vgg16.VGG16(input_tensor=input_tensor, weights='imagenet', include_top=False) Building loss functions In the Understanding neural style transfer section, we discussed that the problem with neural style transfer revolves around loss functions of content and style. In this section, we will define these loss functions. Content loss In any CNN-based model, activations from top layers contain more global and abstract information, and bottom layers will contain local information about the image. We would want to leverage the top layers of a CNN for capturing the right representations for the content of an image.  Hence, for the content loss, considering we will be using the pre-trained VGG-16 model, we can define our loss function as the L2 norm (scaled and squared Euclidean distance) between the activations of a top layer (giving feature representations) computed over the target image, and the activations of the same layer computed over the generated image. Assuming we usually get feature representations relevant to the content of images from the top layers of a CNN, the generated image is expected to look similar to the base target image. 
The following snippet shows the function to compute the content loss: def content_loss(base, combination): return K.sum(K.square(combination - base)) Style loss As per the A Neural Algorithm of Artistic Style, by Gatys et al, we will be leveraging the Gram matrix and computing the same over the feature representations generated by the convolution layers. The Gram matrix computes the inner product between the feature maps produced in any given conv layer. The inner product's terms are proportional to the co-variances of corresponding feature sets, and hence, captures patterns of correlations between the features of a layer that tends to activate together. These feature correlations help capture relevant aggregate statistics of the patterns of a particular spatial scale, which correspond to the style, texture, and appearance, and not the components and objects present in an image. The style loss is thus defined as the scaled and squared Frobenius norm (Euclidean norm on a matrix) of the difference between the Gram matrices of the reference style and the generated images. Minimizing this loss helps ensure that the textures found at different spatial scales in the reference style image will be similar in the generated image. Thus, the following snippet defines a style loss function based on a Gram matrix calculation: def style_loss(style, combination, height, width): def build_gram_matrix(x): features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1))) gram_matrix = K.dot(features, K.transpose(features)) return gram_matrix S = build_gram_matrix(style) C = build_gram_matrix(combination) channels = 3 size = height * width return K.sum(K.square(S - C))/(4. * (channels ** 2) * (size ** 2)) Total variation loss It was observed that optimization to reduce only the style and content losses led to highly pixelated and noisy outputs. To cover the same, total variation loss was introduced. The total variation loss is analogous to regularization loss. This is introduced for ensuring spatial continuity and smoothness in the generated image to avoid noisy and overly pixelated results. The same is defined in the function as follows: def total_variation_loss(x): a = K.square( x[:, :img_height - 1, :img_width - 1, :] - x[:, 1:, :img_width - 1, :]) b = K.square( x[:, :img_height - 1, :img_width - 1, :] - x[:, :img_height - 1, 1:, :]) return K.sum(K.pow(a + b, 1.25)) Overall loss function Having defined the components of the overall loss function for neural style transfer, the next step is to stitch together these building blocks. Since content and style information is captured by the CNNs at different depths in the network, we need to apply and calculate loss at appropriate layers for each type of loss. We will be taking the conv layers one to five for the style loss and setting appropriate weights for each layer. Here is the code snippet to build the overall loss function: # weights for the weighted average loss function content_weight = 0.05 total_variation_weight = 1e-4 content_layer = 'block4_conv2' style_layers = ['block1_conv2', 'block2_conv2', 'block3_conv3','block4_conv3', 'block5_conv3'] style_weights = [0.1, 0.15, 0.2, 0.25, 0.3] # initialize total loss loss = K.variable(0.) 
# add content loss layer_features = layers[content_layer] target_image_features = layer_features[0, :, :, :] combination_features = layer_features[2, :, :, :] loss += content_weight * content_loss(target_image_features, combination_features) # add style loss for layer_name, sw in zip(style_layers, style_weights): layer_features = layers[layer_name] style_reference_features = layer_features[1, :, :, :] combination_features = layer_features[2, :, :, :] sl = style_loss(style_reference_features, combination_features, height=img_height, width=img_width) loss += (sl*sw) # add total variation loss loss += total_variation_weight * total_variation_loss(generated_image) Constructing a custom optimizer The objective is to iteratively minimize the overall loss with the help of an optimization algorithm. In the paper by Gatys et al., optimization was done using the L-BFGS algorithm, which is an optimization algorithm based on Quasi-Newton methods, which are popularly used for solving non-linear optimization problems and parameter estimation. This method usually converges faster than standard gradient descent.  We build an Evaluator class based on patterns, followed by keras creator François Chollet, to compute both loss and gradient values in one pass instead of independent and separate computations. This will return the loss value when called the first time and will cache the gradients for the next call. Thus, it would be more efficient than computing both independently. The following snippet defines the Evaluator class: class Evaluator(object): def __init__(self, height=None, width=None): self.loss_value = None self.grads_values = None self.height = height self.width = width def loss(self, x): assert self.loss_value is None x = x.reshape((1, self.height, self.width, 3)) outs = fetch_loss_and_grads([x]) loss_value = outs[0] grad_values = outs[1].flatten().astype('float64') self.loss_value = loss_value self.grad_values = grad_values return self.loss_value def grads(self, x): assert self.loss_value is not None grad_values = np.copy(self.grad_values) self.loss_value = None self.grad_values = None return grad_values evaluator = Evaluator(height=img_height, width=img_width) Style transfer in action The final piece of the puzzle is to use all the building blocks and perform style transfer in action! The following snippet outlines how loss and gradients are evaluated. We also write back outputs after regular intervals/iterations (5, 10, and so on) to understand how the process of neural style transfer transforms the images in consideration after a certain number of iterations as depicted in the following snippet: from scipy.optimize import fmin_l_bfgs_b from scipy.misc import imsave from imageio import imwrite import time result_prefix = 'st_res_'+TARGET_IMG.split('.')[0] iterations = 20 # Run scipy-based optimization (L-BFGS) over the pixels of the # generated image # so as to minimize the neural style loss. # This is our initial state: the target image. # Note that `scipy.optimize.fmin_l_bfgs_b` can only process flat # vectors. 
x = preprocess_image(TARGET_IMG, height=img_height, width=img_width) x = x.flatten() for i in range(iterations): print('Start of iteration', (i+1)) start_time = time.time() x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x, fprime=evaluator.grads, maxfun=20) print('Current loss value:', min_val) if (i+1) % 5 == 0 or i == 0: # Save current generated image only every 5 iterations img = x.copy().reshape((img_height, img_width, 3)) img = deprocess_image(img) fname = result_prefix + '_iter%d.png' %(i+1) imwrite(fname, img) print('Image saved as', fname) end_time = time.time() print('Iteration %d completed in %ds' % (i+1, end_time - start_time)) It must be pretty evident by now that neural style transfer is a computationally expensive task. For the set of images in consideration, each iteration took between 500-1,000 seconds on an Intel i5 CPU with 8GB RAM (much faster on i7 or Xeon processors though!). The following code snippet shows the speedup we are getting using GPUs on a p2.x instance on AWS, where each iteration takes a mere 25 seconds! The following code snippet also shows the output of some of the iterations. We print the loss and time taken for each iteration, and save the generated image after every five iterations: Start of iteration 1 Current loss value: 10028529000.0 Image saved as st_res_lotr_iter1.png Iteration 1 completed in 28s Start of iteration 2 Current loss value: 5671338500.0 Iteration 2 completed in 24s Start of iteration 3 Current loss value: 4681865700.0 Iteration 3 completed in 25s Start of iteration 4 Current loss value: 4249350400.0 . . . Start of iteration 20 Current loss value: 3458219000.0 Image saved as st_res_lotr_iter20.png Iteration 20 completed in 25s Now you'll learn how the neural style transfer model has performed style transfer for the content images in consideration. Remember that we performed checkpoint outputs after certain iterations for every pair of style and content images. We utilize matplotlib and skimage to load and understand the style transfer magic performed by our system! We have used the following image from the very popular Lord of the Rings movie as our content image, and a nice floral pattern-based artwork as our style image: In the following code snippet, we are loading the generated styled images after various iterations: from skimage import io from glob import glob from matplotlib import pyplot as plt %matplotlib inline content_image = io.imread('lotr.jpg') style_image = io.imread('pattern1.jpg') iter1 = io.imread('st_res_lotr_iter1.png') iter5 = io.imread('st_res_lotr_iter5.png') iter10 = io.imread('st_res_lotr_iter10.png') iter15 = io.imread('st_res_lotr_iter15.png') iter20 = io.imread('st_res_lotr_iter20.png') fig = plt.figure(figsize = (15, 15)) ax1 = fig.add_subplot(6,3, 1) ax1.imshow(content_image) t1 = ax1.set_title('Original') gen_images = [iter1,iter5, iter10, iter15, iter20] for i, img in enumerate(gen_images): ax1 = fig.add_subplot(6,3,i+1) ax1.imshow(content_image) t1 = ax1.set_title('Iteration {}'.format(i+5)) plt.tight_layout() fig.subplots_adjust(top=0.95) t = fig.suptitle('LOTR Scene after Style Transfer') Following is the output showcasing the original image and the generated styled images after every five iterations: Following is the final styled image at a higher resolution. 
You can clearly see how the floral pattern textures and styles have slowly started propagating in the original Lord of the Rings movie image, giving it a nice vintage look: This chapter presented a very novel technique in the deep learning landscape, leveraging the power of deep learning to create art!  We covered the core concepts of neural style transfer, how to represent and formulate the problem using an effective loss function, and how to leverage the power of transfer learning and pretrained models like VGG-16 to extract the right feature representations. If you found this post useful, do check out the book, Hands-On Transfer Learning with Python, which covers deep learning and transfer learning in detail. It also focuses on real-world examples and research problems using TensorFlow, Keras, and the Python ecosystem with hands-on examples. Generative Models in action: How to create a Van Gogh with Neural Artistic Style Transfer “Deep learning is not an optimum solution for every problem faced”: An interview with Valentino Zocca OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners

Face Detection and Tracking Using ROS, Open-CV and Dynamixel Servos

Packt
14 Nov 2016
13 min read
In this article by Lentin Joseph, the author of the book ROS Robotic Projects, we learn how one of the capability in most of the service and social robots is face detection and tracking. The robot can identify faces and it can move its head according to the human face move around it. There are numerous implementation of face detection and tracking system in web. Most of the trackers are having a pan and tilt mechanism and a camera is mounted on the top of the servos. In this article, we are going to see a simple tracker which is having only pan mechanism. We are going to use a USB webcam which is mounted on AX-12 Dynamixel servo. (For more resources related to this topic, see here.) You can see following topics on this article: Overview of the project Hardware and software prerequisites Overview of the project The aim of the project is to build a simple face tracker which can track face only in the horizontal axis of camera. The tracker is having a webcam, Dynamixel servo called AX-12 and a supporting bracket to mount camera on the servo. The servo tracker will follow the face until it align to the center of the image which is getting from webcam. Once it reaches the center, it will stop and wait for the face movement. The face detection is done using OpenCV and ROS interface, and controlling the servo is done using Dynamixel motor driver in ROS. We are going to create two ROS packages for this complete tracking system, one is for face detection and finding centroid of face and next is for sending commands to servo to track the face using the centroid values. Ok!! Let's start discussing the hardware and software prerequisites of this project. Hardware and software prerequisites Following table of hardware components which can be used for building this project. You can also see a rough price of each component and purchase link of the same. List of hardware components: No Component name Estimated price (USD) Purchase link 1 Webcam 32 https://amzn.com/B003LVZO8S 2 Dynamixel AX -12 A servo with mounting bracket 76 https://amzn.com/B0051OXJXU 3 USB To Dynamixel Adapter 50 http://www.robotshop.com/en/robotis-usb-to-dynamixel-adapter.html 4 Extra 3 pin cables for AX-12 servos 12 http://www.trossenrobotics.com/p/100mm-3-Pin-DYNAMIXEL-Compatible-Cable-10-Pack 5 Power adapter 5 https://amzn.com/B005JRGOCM 6 6 Port AX/MX Power Hub 5 http://www.trossenrobotics.com/6-port-ax-mx-power-hub 7 USB extension cable 1 https://amzn.com/B00YBKA5Z0   Total Cost + Shipping + Tax ~ 190 - 200   The URLs and price can vary. If the links are not available, you can do a google search may do the job. The shipping charges and tax are excluded from the price. If you are thinking that, the total cost is not affordable for you, then there are cheap alternatives to do this project too. The main heart of this project is Dynamixel servo. We may can replace this servo with RC servos which only cost around $10 and using an Arduino board cost around $20 can be used to control the servo too, so you may can think about porting the face tracker project work using Arduino and RC servo Ok, let's look on to the software prerequisites of the project. 
The software prerequisites include the ROS framework, the OS versions, and several ROS packages:

1. Ubuntu 16.04 LTS: Free (http://releases.ubuntu.com/16.04/)
2. ROS Kinetic LTS: Free (http://wiki.ros.org/kinetic/Installation/Ubuntu)
3. ROS usb_cam package: Free (http://wiki.ros.org/usb_cam)
4. ROS cv_bridge package: Free (http://wiki.ros.org/cv_bridge)
5. ROS Dynamixel controller: Free (https://github.com/arebgun/dynamixel_motor)
6. Windows 7 or higher: ~$120 (https://www.microsoft.com/en-in/software-download/windows7)
7. RoboPlus (Windows application): Free (http://www.robotis.com/download/software/RoboPlusWeb%28v1.1.3.0%29.exe)

This list gives you an idea of the software we are going to use for this project. We need both Windows and Ubuntu, so it helps to have both operating systems installed on your computer. Let's see how to install this software first.

Installing dependent ROS packages

We have already installed and configured Ubuntu 16.04 and ROS Kinetic. Here are the dependent packages we need to install for this project.

Installing the usb_cam ROS package

Let's first look at what the usb_cam package does. The usb_cam package is the ROS driver for Video4Linux (V4L) USB cameras. V4L is a collection of device drivers in Linux for real-time video capture from webcams. The usb_cam ROS package works with V4L devices and publishes the video stream from the device as ROS image messages, which we can subscribe to and process ourselves. The official ROS page of this package is given in the list above; check it for the different settings and configuration options the package offers.

Creating a ROS workspace for dependencies

Before installing the usb_cam package, let's create a ROS workspace to keep the dependencies of all the projects mentioned in the book. We can create another workspace for the project code itself. Create a ROS workspace called ros_project_dependencies_ws in the home folder and clone the usb_cam package into its src folder:

$ git clone https://github.com/bosch-ros-pkg/usb_cam.git

Build the workspace using catkin_make. After building the package, install the v4l-utils Ubuntu package, a collection of command-line V4L utilities used by the usb_cam package:

$ sudo apt-get install v4l-utils

Configuring the webcam on Ubuntu 16.04

After installing these two, connect the webcam to the PC to check that it is detected properly. Open a terminal and execute the dmesg command to check the kernel logs. If your camera is detected in Linux, you will see kernel log messages for the webcam device:

$ dmesg

You can use any webcam that has driver support in Linux. In this project, an iBall Face2Face webcam (http://www.iball.co.in/Product/Face2Face-C8-0--Rev-3-0-/90) is used for tracking. You can also go for the popular webcam mentioned in the hardware prerequisites for better performance and tracking. If the webcam is supported in Ubuntu, you can open the video device using a tool called Cheese, which is simply a webcam viewer. Enter the command cheese in a terminal; if it is not available, you can install it using the following command:

$ sudo apt-get install cheese

If the driver and device are working properly, you will get the video stream from the webcam in Cheese. Congratulations, your webcam is working well in Ubuntu! But are we done with everything? No. The next thing is to test the ROS usb_cam package and make sure it works well in ROS.
Interfacing the webcam with ROS

Let's test the webcam using the usb_cam package. The following command launches the usb_cam nodes to display images from the webcam and publish ROS image topics at the same time:

$ roslaunch usb_cam usb_cam-test.launch

If everything works fine, you will get the image stream and logs in the terminal. The image is displayed using the image_view package in ROS, which subscribes to the topic called /usb_cam/image_raw. The usb_cam node publishes a number of topics (Figure 4: The topics published by the usb_cam node).

We are now done with interfacing a webcam in ROS. So what's next? We have to interface the AX-12 Dynamixel servo with ROS. Before proceeding to interfacing, we have to configure this servo. Next, we are going to see how to configure a Dynamixel AX-12A servo.

Configuring a Dynamixel servo using RoboPlus

The Dynamixel servo can be configured using a tool called RoboPlus, provided by ROBOTIS INC (http://en.robotis.com/index/), the manufacturer of Dynamixel servos. RoboPlus runs on Windows, so for configuring the Dynamixel you have to switch your operating system to Windows; in this project, we configure the servo in Windows 7. Here is the link to download RoboPlus: http://www.robotis.com/download/software/RoboPlusWeb%28v1.1.3.0%29.exe. If the link is not working, a Google search for RoboPlus 1.1.3 should find it. After installing the software, navigate to the Expert tab to find the Dynamixel Manager application used for configuring the servo.

Before starting the Dynamixel Wizard and doing the configuration, we have to connect the Dynamixel and power it properly. Unlike ordinary RC servos, the AX-12 is an intelligent actuator with an onboard microcontroller that can monitor every parameter of the servo and lets you customize all of them. It has a geared drive, and the output of the servo is connected to a servo horn to which we can attach links. There are two connection ports behind each servo; each port has VCC, GND, and Data pins. The ports of the Dynamixel are daisy chained, so we can connect another servo from one servo.

The main hardware component for interfacing the Dynamixel with a PC is called USB to Dynamixel. This is a USB-to-serial adapter that can convert USB to RS-232, RS-485, and TTL. AX-12 motors communicate using TTL. As the connection diagram shows, there are three pins in each port: the data pin is used to send data to and receive data from the AX-12, and the power pins are used to power the servo. The input voltage range of the AX-12A Dynamixel is from 9V to 12V. The second port on each Dynamixel can be used for daisy chaining, and up to 254 servos can be connected this way.

Official links for the AX-12A servo and USB to Dynamixel:

AX-12A: http://www.trossenrobotics.com/dynamixel-ax-12-robot-actuator.aspx
USB to Dynamixel: http://www.trossenrobotics.com/robotis-bioloid-usb2dynamixel.aspx

Before working with the Dynamixel, we should know a few more things. Let's look at some of the important specifications of the AX-12A servo, taken from the servo manual.
Figure 8 shows the AX-12A specifications. Dynamixel servos can communicate with a PC at a maximum speed of 1 Mbps. They can also give feedback on various parameters such as position, temperature, and current load. Unlike RC servos, the AX-12A can rotate up to 300 degrees, and communication is done mainly using digital packets.

Powering and connecting the Dynamixel to a PC

Now we are going to connect the Dynamixel to a PC in the standard way. Connect the three-pin cable to either port of the AX-12 and the other end to the 6-port power hub. From the 6-port power hub, connect another cable to the USB to Dynamixel adapter and set its switch to TTL mode. Power can be supplied either through a 12V adapter or through a battery. The 12V adapter has a 2.1 x 5.5 mm female barrel jack, so check the specification of the male adapter plug when purchasing.

Setting up the USB to Dynamixel driver on the PC

As already discussed, the USB to Dynamixel adapter is a USB-to-serial converter with an FTDI chip (http://www.ftdichip.com/) on it, so we have to install the proper FTDI driver on the PC for the device to be detected. The driver is needed for Windows but not for Linux, because FTDI drivers are built into the Linux kernel. If you install the RoboPlus software, the driver may already be installed along with it; if not, you can install it manually from the RoboPlus installation folder. Plug the USB to Dynamixel into the Windows PC and check Device Manager (right-click on My Computer | Properties | Device Manager). If the device is detected properly, you will see a COM port listed for the USB to Dynamixel (Figure 10).

If you are getting a COM port for the USB to Dynamixel, you can start the Dynamixel Manager from RoboPlus. Select the COM port from the list and connect to it (marked as 1 in the tool). After connecting to the COM port, select the default baud rate of 1 Mbps and click the Start searching button. If a list of servos appears in the left-side panel, it means that your PC has detected a Dynamixel servo. If the servo is not detected, do the following steps to debug:

Make sure that the supply and the connections are proper, using a multimeter.
Make sure that the servo LED on the back blinks when powered on. If it does not, there may be a problem with the servo or the power supply.
Upgrade the firmware of the servo using the Dynamixel recovery wizard in Dynamixel Manager (the option marked as 6). During the wizard, you may need to power the supply off and on again for the servo to be detected. After the servo is detected, select the servo model and install the new firmware. This can help Dynamixel Manager detect the servo if its existing firmware is outdated.

If the servos are listed in Dynamixel Manager, click on a servo to see its complete configuration. We have to modify some values in the configuration for our face tracker project. Here are the parameters:

ID: 1
Baud rate: 1
Moving Speed: 100
Goal Position: 512

After applying these settings, you can check whether the servo is working by changing its Goal Position.
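Before moving on to the ROS side, here is a rough idea of what the tracking controller described in this article could look like as a ROS node. This is only a minimal sketch under our own assumptions, not the book's implementation: we assume the servo is brought up through dynamixel_controllers as pan_controller (so it listens on /pan_controller/command, a std_msgs/Float64 position in radians), and that the face-detection node publishes the horizontal offset of the face centroid from the image centre, in pixels, on a hypothetical /face_centroid_offset topic.

#!/usr/bin/env python
# Minimal, hypothetical sketch of a pan-only face-tracking controller.
# Topic names, the proportional gain, and the message choice are our own
# assumptions; the book's package layout may differ.
import rospy
from std_msgs.msg import Float64

class FaceTracker(object):
    def __init__(self):
        self.position = 0.0                          # current pan command (radians)
        self.gain = rospy.get_param("~gain", 0.001)  # radians per pixel of error
        self.pub = rospy.Publisher("/pan_controller/command", Float64, queue_size=1)
        rospy.Subscriber("/face_centroid_offset", Float64, self.on_offset)

    def on_offset(self, msg):
        # Nudge the servo in the direction that re-centres the face,
        # keeping the command within the AX-12A's +/-150 degree range.
        self.position += self.gain * msg.data
        self.position = max(-2.6, min(2.6, self.position))
        self.pub.publish(Float64(self.position))

if __name__ == "__main__":
    rospy.init_node("face_tracker_controller")
    FaceTracker()
    rospy.spin()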
With that, the Dynamixel configuration is done. Congratulations! What's next? We want to interface the Dynamixel with ROS.

Summary

This article was about building a face tracker using a webcam and a Dynamixel motor, with ROS and OpenCV as the software. We first saw how to configure the webcam and the Dynamixel motor; after configuring them, we set about building two packages for tracking: one package does the face detection, and the second is a controller that sends position commands to the Dynamixel to track the face. We discussed the use of all the files inside the packages and did a final run to show the complete working of the system.

Resources for Article:

Further resources on this subject:

Using ROS with UAVs [article]
Hardware Overview [article]
Arduino Development [article]


Implementing gradient descent algorithm to solve optimization problems

Sunith Shetty
22 Feb 2018
7 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book written by Rajdeep Dua and Manpreet Singh Ghotra titled Neural Network Programming with Tensorflow. In this book, you will learn to leverage the power of TensorFlow to train neural networks of varying complexities, without any hassle.[/box] Today we will focus on the gradient descent algorithm and its different variants. We will take a simple example of linear regression to solve the optimization problem. Gradient descent is the most successful optimization algorithm. As mentioned earlier, it is used to do weights updates in a neural network so that we minimize the loss function. Let's now talk about an important neural network method called backpropagation, in which we firstly propagate forward and calculate the dot product of inputs with their corresponding weights, and then apply an activation function to the sum of products which transforms the input to an output and adds non linearities to the model, which enables the model to learn almost any arbitrary functional mappings. Later, we back propagate in the neural network, carrying error terms and updating weights values using gradient descent, as shown in the following graph: Different variants of gradient descent Standard gradient descent, also known as batch gradient descent, will calculate the gradient of the whole dataset but will perform only one update. Therefore, it can be quite slow and tough to control for datasets which are extremely large and don't fit in the memory. Let's now look at algorithms that can solve this problem. Stochastic gradient descent (SGD) performs parameter updates on each training example, whereas mini batch performs an update with n number of training examples in each batch. The issue with SGD is that, due to the frequent updates and fluctuations, it eventually complicates the convergence to the accurate minimum and will keep exceeding due to regular fluctuations. Mini-batch gradient descent comes to the rescue here, which reduces the variance in the parameter update, leading to a much better and stable convergence. SGD and mini-batch are used interchangeably. Overall problems with gradient descent include choosing a proper learning rate so that we avoid slow convergence at small values, or divergence at larger values and applying the same learning rate to all parameter updates wherein if the data is sparse we might not want to update all of them to the same extent. Lastly, is dealing with saddle points. Algorithms to optimize gradient descent We will now be looking at various methods for optimizing gradient descent in order to calculate different learning rates for each parameter, calculate momentum, and prevent decaying learning rates. To solve the problem of high variance oscillation of the SGD, a method called momentum was discovered; this accelerates the SGD by navigating along the appropriate direction and softening the oscillations in irrelevant directions. Basically, it adds a fraction of the update vector of the past step to the current update vector. Momentum value is usually set to .9. Momentum leads to a faster and stable convergence with reduced oscillations. Nesterov accelerated gradient explains that as we reach the minima, that is, the lowest point on the curve, momentum is quite high and it doesn't know to slow down at that point due to the large momentum which could cause it to miss the minima entirely and continue moving up. 
Nesterov proposed that we first make a long jump based on the previous momentum, then calculate the gradient, and then make a correction, which results in the parameter update. This update prevents us from going too fast and missing the minimum, and makes the method more responsive to changes. Adagrad allows the learning rate to adapt based on the parameters: it performs large updates for infrequent parameters and small updates for frequent parameters, so it is very well suited for dealing with sparse data. Its main flaw is that its learning rate is always decreasing and decaying. The problem of decaying learning rates is solved by AdaDelta. In AdaGrad, the learning rate is computed by dividing by the square root of the accumulated sum of squared past gradients. At each step, another squared gradient is added to the sum, which causes the denominator to keep growing and the effective learning rate to keep shrinking. Instead of summing all prior squared gradients, AdaDelta uses a sliding window, which allows the accumulated sum to decrease. Adaptive Moment Estimation (Adam) computes adaptive learning rates for each parameter. Like AdaDelta, Adam stores the decaying average of past squared gradients, but it additionally stores the momentum change for each parameter. Adam works well in practice and is one of the most used optimization methods today.

The following two images (image credit: Alec Radford) show the optimization behavior of the algorithms described earlier on the contours of a loss surface over time. Adagrad, RMSprop, and Adadelta quickly head off in the right direction and converge fast, whereas momentum and NAG initially head off-track. NAG is soon able to correct its course due to its improved responsiveness, looking ahead and heading to the minimum. The second image shows the behavior of the algorithms at a saddle point: SGD, momentum, and NAG find it challenging to break symmetry, but they slowly manage to escape the saddle point, whereas Adagrad, Adadelta, and RMSprop head down the negative slope.

Which optimizer to choose

If the input data is sparse, or if we want fast convergence while training complex neural networks, we get the best results using adaptive learning rate methods; we also don't need to tune the learning rate. For most cases, Adam is usually a good choice.

Optimization with an example

Let's take the example of linear regression, where we try to find the best fit of a straight line through a number of data points by minimizing the squares of the distances from the line to each data point. This is why we call it least squares regression. Essentially, we are formulating the problem as an optimization problem, where we are trying to minimize a loss function.
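Before handing the problem to TensorFlow, it can help to see the batch gradient descent update rule written out by hand. The following is a minimal NumPy sketch for the least squares line fit; the function and variable names are our own, and the learning rate is just an illustrative value.

import numpy as np

def fit_line(x, y, lr=1e-4, steps=1000):
    # Fit y ~ w*x + b by batch gradient descent on the mean squared error.
    w, b = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        error = (w * x + b) - y
        grad_w = (2.0 / n) * np.dot(error, x)   # dL/dw
        grad_b = (2.0 / n) * error.sum()        # dL/db
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

Each of the optimizers discussed above changes only how those last two update lines use the gradients.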
Let's set up the input data and look at the scatter plot:

# input data
xData = np.arange(100, step=.1)
yData = xData + 20 * np.sin(xData/10)

Define the data size and batch size:

# define the data size and batch size
nSamples = 1000
batchSize = 100

We will need to resize the data to meet the TensorFlow input format, as follows:

# resize input for tensorflow
xData = np.reshape(xData, (nSamples, 1))
yData = np.reshape(yData, (nSamples, 1))

Note that the excerpt assumes the usual imports (numpy as np, tensorflow as tf, matplotlib.pyplot as plt) and that the placeholders X and y have been defined, for example as X = tf.placeholder(tf.float32, shape=(None, 1)) and y = tf.placeholder(tf.float32, shape=(None, 1)). The following scope initializes the weights and bias, and describes the linear model and loss function:

with tf.variable_scope("linear-regression-pipeline"):
    W = tf.get_variable("weights", (1, 1), initializer=tf.random_normal_initializer())
    b = tf.get_variable("bias", (1,), initializer=tf.constant_initializer(0.0))
    # model
    yPred = tf.matmul(X, W) + b
    # loss function
    loss = tf.reduce_sum((y - yPred)**2 / nSamples)

We then set an optimizer for minimizing the loss:

# set the optimizer
#optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)
#optimizer = tf.train.AdamOptimizer(learning_rate=.001).minimize(loss)
#optimizer = tf.train.AdadeltaOptimizer(learning_rate=.001).minimize(loss)
#optimizer = tf.train.AdagradOptimizer(learning_rate=.001).minimize(loss)
#optimizer = tf.train.MomentumOptimizer(learning_rate=.001, momentum=0.9).minimize(loss)
#optimizer = tf.train.FtrlOptimizer(learning_rate=.001).minimize(loss)
optimizer = tf.train.RMSPropOptimizer(learning_rate=.001).minimize(loss)

We then select a mini batch in each iteration and run the optimizer:

errors = []
with tf.Session() as sess:
    # init variables
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        # select mini batch
        indices = np.random.choice(nSamples, batchSize)
        xBatch, yBatch = xData[indices], yData[indices]
        # run optimizer
        _, lossVal = sess.run([optimizer, loss], feed_dict={X: xBatch, y: yBatch})
        errors.append(lossVal)

plt.plot([np.mean(errors[i-50:i]) for i in range(len(errors))])
plt.show()
plt.savefig("errors.png")

The preceding code produces a scatter plot of the input data and a plot of the smoothed training error, which slides downward as training progresses. We learned that optimization is a complicated subject and a lot depends on the nature and size of our data. Optimization also depends on the weight matrices. Many of these optimizers are trained and tuned for tasks such as image classification or prediction; however, for custom or new use cases, we need to perform trial and error to determine the best solution. To know more about how to build and optimize neural networks using TensorFlow, do check out the book Neural Network Programming with Tensorflow.

Search Using Beautiful Soup

Packt
20 Jan 2014
6 min read
(For more resources related to this topic, see here.)

Searching with find_all()

The find() method was used to find the first result matching a particular search criterion applied on a BeautifulSoup object. As the name implies, find_all() will give us all the items matching the search criteria we define. The different filters that we saw with find() can be used in the find_all() method. In fact, these filters can be used in any searching method, such as find_parents() and find_siblings(). Let us consider an example of using find_all().

Finding all tertiary consumers

We saw how to find the first and second primary consumers. If we need to find all the tertiary consumers, we can't use find(); in this case, find_all() comes in handy.

all_tertiaryconsumers = soup.find_all(class_="tertiaryconsumerslist")

The preceding code line finds all the tags with the tertiaryconsumerslist class. A type check on this variable shows that it is nothing but a list of tag objects, as follows:

print(type(all_tertiaryconsumers))
#output
<class 'list'>

We can iterate through this list to display all tertiary consumer names by using the following code:

for tertiaryconsumer in all_tertiaryconsumers:
    print(tertiaryconsumer.div.string)
#output
lion
tiger

Understanding parameters used with find_all()

Like find(), the find_all() method also has a similar set of parameters, with an extra parameter, limit, as shown in the following code line:

find_all(name,attrs,recursive,text,limit,**kwargs)

The limit parameter is used to specify a limit on the number of results that we get. For example, from the e-mail ID sample we saw, we can use find_all() to get all the e-mail IDs. Refer to the following code:

email_ids = soup.find_all(text=emailid_regexp)
print(email_ids)
#output
[u'[email protected]',u'[email protected]',u'[email protected]']

Here, if we pass limit, it will restrict the result set to the limit we impose, as shown in the following example:

email_ids_limited = soup.find_all(text=emailid_regexp,limit=2)
print(email_ids_limited)
#output
[u'[email protected]',u'[email protected]']

From the output, we can see that the result is limited to two. The find() method is simply find_all() with limit=1.

We can also pass True or False values to the find methods. If we pass True to find_all(), it will return all tags in the soup object. In the case of find(), it will be the first tag within the object. The print(soup.find_all(True)) line of code will print out all the tags associated with the soup object. In the case of searching for text, passing True will return all text within the document, as follows:

all_texts = soup.find_all(text=True)
print(all_texts)
#output
[u'\n', u'\n', u'\n', u'\n', u'\n', u'plants', u'\n', u'100000', u'\n', u'\n', u'\n', u'algae', u'\n', u'100000', u'\n', u'\n', u'\n', u'\n', u'\n', u'deer', u'\n', u'1000', u'\n', u'\n', u'\n', u'rabbit', u'\n', u'2000', u'\n', u'\n', u'\n', u'\n', u'\n', u'fox', u'\n', u'100', u'\n', u'\n', u'\n', u'bear', u'\n', u'100', u'\n', u'\n', u'\n', u'\n', u'\n', u'lion', u'\n', u'80', u'\n', u'\n', u'\n', u'tiger', u'\n', u'50', u'\n', u'\n', u'\n', u'\n', u'\n']

The preceding output prints every text content within the soup object, including the newline characters. Also, in the case of text, we can pass a list of strings and find_all() will find every string defined in the list:

all_texts_in_list = soup.find_all(text=["plants","algae"])
print(all_texts_in_list)
#output
[u'plants', u'algae']

The same applies when searching for tags, attribute values of tags, custom attributes, and CSS classes.
For finding all the div and li tags, we can use the following code line:

div_li_tags = soup.find_all(["div","li"])

Similarly, for finding tags with the producerlist and primaryconsumerlist classes, we can use the following code line:

all_css_class = soup.find_all(class_=["producerlist","primaryconsumerlist"])

Both find() and find_all() search an object's descendants (that is, all children coming after it in the tree), their children, and so on. We can control this behavior by using the recursive parameter. If recursive = False, the search happens only on an object's direct children. For example, in the following code, the search for div and li tags happens only among the direct children. Since the direct child of the soup object is html, the following code will give an empty list:

div_li_tags = soup.find_all(["div","li"],recursive=False)
print(div_li_tags)
#output
[]

If find_all() can't find results, it returns an empty list, whereas find() returns None.

Navigation using Beautiful Soup

Navigation in Beautiful Soup is almost the same as searching, but instead of methods, there are certain attributes that facilitate it. Each Tag or NavigableString object is a member of the resulting tree, with the BeautifulSoup object placed at the top and the other objects as the nodes of the tree. The following code snippet is an example of an HTML tree:

html_markup = """<div class="ecopyramid">
<ul id="producers">
<li class="producerlist">
<div class="name">plants</div>
<div class="number">100000</div>
</li>
<li class="producerlist">
<div class="name">algae</div>
<div class="number">100000</div>
</li>
</ul>
</div>"""

For the previous code snippet, the corresponding HTML tree has Beautiful Soup as the root, the Tag objects as the different nodes, and the NavigableString objects as the leaves. Navigation in Beautiful Soup is intended to help us visit the nodes of this HTML/XML tree. From a particular node, it is possible to:

Navigate down to the children
Navigate up to the parent
Navigate sideways to the siblings
Navigate to the next and previous objects parsed

We will be using the previous html_markup as an example to discuss the different navigations using Beautiful Soup; a short sketch showing these navigation attributes in action appears at the end of this article.

Summary

In this article, we discussed in detail the different search methods in Beautiful Soup, namely find(), find_all(), find_next(), and find_parents(); code examples for a scraper that uses the search methods to get information from a website; and the application of the search methods in combination. We also discussed in detail the different navigation methods provided by Beautiful Soup: methods specific to navigating downwards, upwards, and sideways, and to the previous and next elements of the HTML tree.

Resources for Article:

Further resources on this subject:

Web Services Testing and soapUI [article]
Web Scraping with Python [article]
Plotting data using Matplotlib: Part 1 [article]
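As promised above, here is a minimal sketch of those navigation attributes in action, using the html_markup snippet from this article. The expected output values assume the markup shown earlier; the choice of parser is ours.

from bs4 import BeautifulSoup

soup = BeautifulSoup(html_markup, "html.parser")

first_producer = soup.find("li", class_="producerlist")
print(first_producer.div.string)                           # down to a child: plants
print(first_producer.parent["id"])                         # up to the parent: producers
print(first_producer.find_next_sibling("li").div.string)   # sideways to a sibling: algae
print(first_producer.find_next("div").string)              # next parsed object: plants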


How to Build 12 Factor Microservices on Docker - Part 2

Cody A.
29 Jun 2015
14 min read
Welcome back to our how-to on Building and Running 12 Factor Microservices on Docker. In Part 1, we introduced a very simple python flask application which displayed a list of users from a relational database. Then we walked through the first four of these factors, reworking the example application to follow these guidelines. In Part 2, we'll be introducing a multi-container Docker setup as the execution environment for our application. We’ll continue from where we left off with the next factor, number five. Build, Release, Run. A 12-factor app strictly separates the process for transforming a codebase into a deploy into distinct build, release, and run stages. The build stage creates an executable bundle from a code repo, including vendoring dependencies and compiling binaries and asset packages. The release stage combines the executable bundle created in the build with the deploy’s current config. Releases are immutable and form an append-only ledger; consequently, each release must have a unique release ID. The run stage runs the app in the execution environment by launching the app’s processes against the release. This is where your operations meet your development and where a PaaS can really shine. For now, we’re assuming that we’ll be using a Docker-based containerized deploy strategy. We’ll start by writing a simple Dockerfile. The Dockerfile starts with an ubuntu base image and then I add myself as the maintainer of this app. FROM ubuntu:14.04.2 MAINTAINER codyaray Before installing anything, let’s make sure that apt has the latest versions of all the packages. RUN echo "deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -sc) main universe" >> /etc/apt/sources.list RUN apt-get update Install some basic tools and the requirements for running a python webapp RUN apt-get install -y tar curl wget dialog net-tools build-essential RUN apt-get install -y python python-dev python-distribute python-pip RUN apt-get install -y libmysqlclient-dev Copy over the application to the container. ADD /. /src Install the dependencies. RUN pip install -r /src/requirements.txt Finally, set the current working directory, expose the port, and set the default command. EXPOSE 5000 WORKDIR /src CMD python app.py Now, the build phase consists of building a docker image. You can build and store locally with docker build -t codyaray/12factor:0.1.0 . If you look at your local repository, you should see the new image present. $ docker images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE codyaray/12factor 0.1.0 bfb61d2bbb17 1 hour ago 454.8 MB The release phase really depends on details of the execution environment. You’ll notice that none of the configuration is stored in the image produced from the build stage; however, we need a way to build a versioned release with the full configuration as well. Ideally, the execution environment would be responsible for creating releases from the source code and configuration specific to that environment. However, if we’re working from first principles with Docker rather than a full-featured PaaS, one possibility is to build a new docker image using the one we just built as a base. Each environment would have its own set of configuration parameters and thus its own Dockerfile. It could be something as simple as FROM codyaray/12factor:0.1.0 MAINTAINER codyaray ENV DATABASE_URL mysql://sa:[email protected]/mydb This is simple enough to be programmatically generated given the environment-specific configuration and the new container version to be deployed. 
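As a throwaway illustration of that point, the release Dockerfile could be generated with a few lines of Python. This is only a sketch under our own assumptions; the config values and the output file name are placeholders for illustration, not part of the article's tooling.

# Hypothetical generator for an environment-specific release Dockerfile.
# The env_config mapping below is a placeholder, not a real connection string.
base_image = "codyaray/12factor:0.1.0"
env_config = {
    "DATABASE_URL": "mysql://user:password@dbhost/mydb",
}

lines = ["FROM %s" % base_image, "MAINTAINER codyaray"]
lines += ["ENV %s %s" % (key, value) for key, value in sorted(env_config.items())]

with open("Dockerfile-release", "w") as dockerfile:
    dockerfile.write("\n".join(lines) + "\n")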
For the demonstration purposes, though, we’ll call the above file Dockerfile-release so it doesn’t conflict with the main application’s Dockerfile. Then we can build it with docker build -f Dockerfile-release -t codyaray/12factor-release:0.1.0.0 . The resulting built image could be stored in the environment’s registry as codyaray/12factor-release:0.1.0.0. The images in this registry would serve as the immutable ledger of releases. Notice that the version has been extended to include a fourth level which, in this instance, could represent configuration version “0” applied to source version “0.1.0”. The key here is that these configuration parameters aren’t collated into named groups (sometimes called “environments”). For example, these aren’t static files named like Dockerfile.staging or Dockerfile.dev in a centralized repo. Rather, the set of parameters is distributed so that each environment maintains its own environment mapping in some fashion. The deployment system would be setup such that a new release to the environment automatically applies the environment variables it has stored to create a new Docker image. As always, the final deploy stage depends on whether you’re using a cluster manager, scheduler, etc. If you’re using standalone Docker, then it would boil down to docker run -P -t codyaray/12factor-release:0.1.0.0 Processes. A 12-factor app is executed as one or more stateless processes which share nothing and are horizontally partitionable. All data which needs to be stored must use a stateful backing service, usually a database. This means no sticky sessions and no in-memory or local disk-based caches. These processes should never daemonize or write their own PID files; rather, they should rely on the execution environment’s process manager (such as Upstart). This factor must be considered up-front, in line with the discussions on antifragility, horizontal scaling, and overall application design. As the example app delegates all stateful persistence to a database, we’ve already succeeded on this point. However, it is good to note that a number of issues have been found using the standard ubuntu base image for Docker, one of which is its process management (or lack thereof). If you would like to use a process manager to automatically restart crashed daemons, or to notify a service registry or operations team, check out baseimage-docker. This image adds runit for process supervision and management, amongst other improvements to base ubuntu for use in Docker such as obsoleting the need for pid files. To use this new image, we have to update the Dockerfile to set the new base image and use its init system instead of running our application as the root process in the container. FROM phusion/baseimage:0.9.16 MAINTAINER codyaray RUN echo "deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -sc) main universe" >> /etc/apt/sources.list RUN apt-get update RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential RUN apt-get install -y python python-dev python-distribute python-pip RUN apt-get install -y libmysqlclient-dev ADD /. /src RUN pip install -r /src/requirements.txt EXPOSE 5000 WORKDIR /src RUN mkdir /etc/service/12factor ADD 12factor.sh /etc/service/12factor/run # Use baseimage-docker's init system. CMD ["/sbin/my_init"]  Notice the file 12factor.sh that we’re now adding to /etc/service. This is how we instruct runit to run our application as a service. Let’s add the new 12factor.sh file. 
#!/bin/sh
python /src/app.py

Now the new containers we deploy will attempt to be a little more fault-tolerant by using an OS-level process manager. Port Binding. A 12-factor app must be self-contained and bind to a port specified as an environment variable. It can't rely on the injection of a web container such as tomcat or unicorn; instead it must embed a server such as jetty or thin. The execution environment is responsible for routing requests from a public-facing hostname to the port-bound web process. This is trivial with most embedded web servers. If you're currently using an external web server, this may require more effort to support an embedded server within your application. For the example python app (which uses the built-in flask web server), it boils down to

port = int(os.environ.get("PORT", 5000))
app.run(host='0.0.0.0', port=port)

Now the execution environment is free to instruct the application to listen on whatever port is available. This obviates the need for the application to tell the environment what ports must be exposed, as we've been required to do with Docker. Concurrency. Because a 12-factor app exclusively uses stateless processes, it can scale out by adding processes. A 12-factor app can have multiple process types, such as web processes, background worker processes, or clock processes (for cron-like scheduled jobs). As each process type is scaled independently, each logical process would become its own Docker container as well. We've already seen building a web process; other processes are very similar. In most cases, scaling out simply means launching more instances of the container. (It's usually not desirable to scale out the clock processes, though, as they often generate events that you want to be scheduled singletons within your infrastructure.) Disposability. A 12-factor app's processes can be started or stopped (with a SIGTERM) anytime. Thus, minimizing startup time and gracefully shutting down is very important. For example, when a web service receives a SIGTERM, it should stop listening on the HTTP port, allow in-flight requests to finish, and then exit. Similarly, processes should be robust against sudden death; for example, worker processes should use a robust queuing backend. You want to ensure the web server you select can gracefully shut down. This is one of the trickier parts of selecting a web server, at least for many of the common python http servers that I've tried. In theory, shutting down based on receiving a SIGTERM should be as simple as follows.

import signal
signal.signal(signal.SIGTERM, lambda *args: server.stop(timeout=60))

But oftentimes, you'll find that this will immediately kill the in-flight requests as well as closing the listening socket. You'll want to test this thoroughly if dependable graceful shutdown is critical to your application. Dev/Prod Parity. A 12-factor app is designed to keep the gap between development and production small. Continuous deployment shrinks the amount of time that code lives in development but not production. A self-serve platform allows developers to deploy their own code in production, just like they do in their local development environments. Using the same backing services (databases, caches, queues, etc.) in development as in production reduces the number of subtle bugs that arise from inconsistencies between technologies or integrations. As we're deploying this solution using fully Dockerized containers and third-party backing services, we've effectively achieved dev/prod parity.
For local development, I use boot2docker on my Mac which provides a Docker-compatible VM to host my containers. Using boot2docker, you can start the VM and setup all the env variables automatically with boot2docker up $(boot2docker shellinit) Once you’ve initialized this VM and set the DOCKER_HOST variable to its IP address with shellinit, the docker commands given above work exactly the same for development as they do for production. Logs. Consider logs as a stream of time-ordered events collected from all running processes and backing services. A 12-factor app doesn’t concern itself with how its output is handled. Instead, it just writes its output to its `stdout` stream. The execution environment is responsible for collecting, collating, and routing this output to its final destination(s). Most logging frameworks either support logging to stderr/stdout by default or easily switching from file-based logging to one of these streams. In a 12-factor app, the execution environment is expected to capture these streams and handle them however the platform dictates. Because our app doesn’t have specific logging yet, and the only logs are from flask and already to stderr, we don’t have any application changes to make.  However, we can show how an execution environment which could be used handle the logs. We’ll setup a Docker container which collects the logs from all the other docker containers on the same host. Ideally, this would then forward the logs to a centralized service such as Elasticsearch. Here we’ll demo using Fluentd to capture and collect the logs inside the log collection container; a simple configuration change would allow us to switch from writing these logs to disk as we demo here and instead send them from Fluentd to a local Elasticsearch cluster. We’ll create a Dockerfile for our new logcollector container type. For more detail, you can find a Docker fluent tutorial here. We can call this file Dockerfile-logcollector. FROM kiyoto/fluentd:0.10.56-2.1.1 MAINTAINER [email protected] RUN mkdir /etc/fluent ADD fluent.conf /etc/fluent/ CMD "/usr/local/bin/fluentd -c /etc/fluent/fluent.conf" We use an existing fluentd base image with a specific fluentd configuration. Notably this tails all the log files in /var/lib/docker/containers/<container-id>/<container-id>-json.log, adds the container ID to the log message, and then writes to JSON-formatted files inside /var/log/docker. <source> type tail path /var/lib/docker/containers/*/*-json.log pos_file /var/log/fluentd-docker.pos time_format %Y-%m-%dT%H:%M:%S tag docker.* format json </source> <match docker.var.lib.docker.containers.*.*.log> type record_reformer container_id ${tag_parts[5]} tag docker.all </match> <match docker.all> type file path /var/log/docker/*.log format json include_time_key true </match> As usual, we create a Docker image. Don’t forget to specify the logcollector Dockerfile. docker build -f Dockerfile-logcollector -t codyaray/docker-fluentd . We’ll need to mount two directories from the Docker host into this container when we launch it. Specifically, we’ll mount the directory containing the logs from all the other containers as well as the directory to which we’ll be writing the consolidated JSON logs. docker run -d -v /var/lib/docker/containers:/var/lib/docker/containers -v /var/log/docker:/var/log/docker codyaray/docker-fluentd Now if you check in the /var/log/docker directory, you’ll see the collated JSON log files. 
Note that this is on the docker host rather than in any container; if you're using boot2docker, you can ssh into the docker host with boot2docker ssh and then check /var/log/docker. Admin Processes. Any admin or management tasks for a 12-factor app should be run as one-off processes within a deploy's execution environment. This process runs against a release using the same codebase and configs as any process in that release and uses the same dependency isolation techniques as the long-running processes. This is really a feature of your app's execution environment. If you're running a Docker-like containerized solution, this may be pretty trivial.

docker run -i -t --entrypoint /bin/bash codyaray/12factor-release:0.1.0.0

The -i flag instructs docker to provide an interactive session, that is, to keep the input and output ttys attached. Then we instruct docker to run the /bin/bash command instead of another 12factor app instance. This creates a new container based on the same docker image, which means we have access to all the code and configs for this release. This will drop us into a bash terminal to do whatever we want. But let's say we want to add a new "friends" table to our database, so we wrote a migration script add_friends_table.py. We could run it as follows:

docker run -i -t --entrypoint python codyaray/12factor-release:0.1.0.0 /src/add_friends_table.py

As you can see, following the few simple rules specified in the 12 Factor manifesto really allows your execution environment to manage and scale your application. While this may not be the most feature-rich integration within a PaaS, it is certainly very portable, with a clean separation of responsibilities between your app and its environment. Much of the tooling and integration demonstrated here was a do-it-yourself container approach to the environment, which would be subsumed by an external vertically integrated PaaS such as Deis. If you're not familiar with Deis, it's one of several competitors in the open source platform-as-a-service space which allows you to run your own PaaS on a public or private cloud. Like many, Deis is inspired by Heroku. So instead of Dockerfiles, Deis uses a buildpack to transform a code repository into an executable image and a Procfile to specify an app's processes. Finally, by default you can use a specialized git receiver to complete a deploy. Instead of having to manage separate build, release, and deploy stages yourself like we described above, deploying an app to Deis could be as simple as

git push deis-prod

While it can't get much easier than this, you're certainly trading control for simplicity. It's up to you to determine which works best for your business. Find more Docker tutorials alongside our latest releases on our dedicated Docker page. About the Author Cody A. Ray is an inquisitive, tech-savvy, entrepreneurially-spirited dude. Currently, he is a software engineer at Signal, an amazing startup in downtown Chicago, where he gets to work with a dream team that's changing the service model underlying the Internet.


Getting Started with GameSalad

Packt
20 Mar 2012
10 min read
Let's get to it shall we?   System requirements In order for you to run GameSalad and create amazingly awesome games, you must meet the minimum system requirements, which are as follows:   Intel Mac (Any Mac from 2006 and above) Mac OS X 10.6 or higher At least 1GB RAM A device running iOS (iPad, iPhone 3G and up, or iPod Touch) If your computer exceeds these requirements, perfect! If not, you will need to upgrade your computer. Keep in mind, these are the minimum requirements, having a computer with better specs is recommended.   Let's get into GameSalad Let's start by downloading GameSalad and registering for an account. Let's go to GameSalad's website, www.gamesalad.com. Click the "Download Free App – GameSalad Creator" button.   While you are waiting for GameSalad to download, you should sign up for a free account. At the top of the page click Sign Up, enter your email address and create a username and password. You have two options for GameSalad membership, you can keep the Basic Pricing, which is completely free or select Professional Pricing. The difference is when you publish your App, you will have a Created with GameSalad splash screen, not a big deal right? Especially, not when you can get this awesome program for free! The Professional Pricing, which is $499 (USD) per year gives you all the features of the free version of GameSalad, plus it allows you to use iAds, Game Center, Promotional Links, your own Custom Splash Screen, and Priority Technical Support.   This does not include your Apple developer cost, which is $99 a year Other tools that are recommended for game development:   Adobe Photoshop (http://www.adobe.com/products/photoshop.html) or a free equivalent, Inkscape, Gimp, and Pixelmator Drawing Tablet (Makes creating sprites much easier but not required) Getting familiar with GameSalad's interface Once you open GameSalad, you are presented with several options on the screen.   Following are the options:   Home: It shows you the latest GameSalad links (Success stories, their latest game release, and so on...). News: It is self-explanatory, this shows you the latest update notes, and what is new in the GS community. Start: The getting started screen, here you have video tutorials, Wiki Docs, Blog, and more. Profile: This shows you, your GameSalad's profile page, messages, followers, and likes. New: These are all your new projects, Blank templates, and various bare bone templates to get you started. Recent: This shows you all of your recently saved projects. Portfolio: This shows all your published Apps through GameSalad.   Back/Forward buttons: Used when navigating back and forth between windows Web Preview: Allows you to see what your game will look like within the browser (HTML5) Home: This takes you right back to the project's main window Publish: Brings up the Publish window, here you can chose to deploy your game to the web, iPhone, iPad, Mac, or Android Scenes: Gives you a drop-down menu of all your scenes Feedback: Have some thoughts about GameSalad? Click this to send them to the Creators! Preview: At the main menu, or while editing an actor this starts your game from the beginning. If you are in a level, it will preview the level Help: Brings up the GameSalad documentation, which lists many help topics. 
Target Platform and Orientation: This drop-down menu gives you, your device options, iPhone Landscape, iPhone Portrait, GameSalad.com, iPad Landscape, iPad Portrait, and 720p HD Enable Resolution Independence (only when iPhone and iPad device is set): Check this option if you are creating a game specifically for the iPhone 4, 4S, iPad, or Kindles and Nooks. This takes your high resolution images and converts them for iPhone 3GS, 3G, and iPhone (1st Gen) Scenes Tab: Switch to this to see all your wonderful levels! Actors Tab: Select this tab to see all your actors in the game project. From this tab, you can group different types of actors, such as platforms and enemies. This comes in handy when an actor has to collide with numerous other actors (enemies or platforms) + button: Adds a Level - button (when a level is selected): Deletes a level Inspector (with Game selected) Actors: Here, you will see all your in-game items (Players, platforms, collectables, and so on) Attributes: Here, you can edit all the attributes of the game such as the display size. Devices: Here, you can edit all the settings for the mouse, touch screen, accelerometer, screen, and audio. Inspector (with Scene selected) o Attributes: Here, you can edit all the attributes of the current level, such as the size of the level, screen wrap (X,Y), Gravity, background color, camera settings, and autorotate. o Layers: Here, you can create numerous layers with scrollable on or off. For example, a layer called UI with scrollable deselected will have all your user interface items, and they will stay on the screen. Library (with Behaviors selected) o Standard: These are all the standard GameSalad behaviors (Movements, change actor attributes, and more) o Custom: These are your own custom behaviors. Let's say, you needed the same behavior throughout numerous actors but you didn't want to keep re-adding and changing the behavior for each actor. Once, you create the Behavior, drag it into this box and you can use it as much as you want. o Pro: These are all the professional behaviors (only available when you have paid for the professional membership). These include Game Center Connect, iAd, and Open URL Library (with Images selected) Project: This shows all your imported images into this project. Purchased: This is a new feature that shows the images you have purchased through GameSalad's Marketplace. (When you click Purchase Images..., this will take you to the GameSalad Marketplace where you will have a plethora of Content packs and more to purchase and import into your game) When you click the "+" button, you can import images from your hard drive, alternately, you can drag them directly into the Library from the Finder Library (with Sounds selected): This shows you all of your sound effects and music that you have imported into your project. As with images, when you click the "+" button you can import sound effects or music from your hard drive, or drag them directly in from the Finder. Actor Mode: This involves normal mouse functions; it allows you to select actors within the level. Following is the screenshot of the icon: Camera Mode: It allows you to edit the camera, position, and scrolling hot spots for characters that control the camera. Following is the screenshot of the icon: Reset Scene: While previewing your level and if this button is pressed, everything will go back to its initial state. Following is the screenshot of the icon: Play: This will start a preview of the current level. 
This is different from the green Project Preview button, as this will only preview the current level, and not the whole project. When you complete the level, an alert will appear telling you the scene has ended, and you can either select to preview the next level, or reset the current scene. Following is the screebshot of the icon: Show Initial State: If you have run a preview, and want to see the initial state without ending the preview, then pause the preview, click on the following icon and the initial state is seen. Following is the screenshot of the icon: For now, let's click New | My Great Project This is a fresh project; everything is empty. You can see that you have one level so far, but you can add more at a later time. See the Scenes and Actors Tabs? Currently, Scenes is selected, this shows you all of your levels, but if you click the Actors tab, you will be able to see all your actors (or game objects, characters, collectables, and so on.) in the game. You can also rearrange all of the actors in Actor Tags, to give you an idea of what these are useful for. Take for example, if you have 30 different enemies, when you are setting up your collisions within behaviors, you won't have to set up 30 different collisions. Rather, when you set up all the enemies within a tag named Enemies you can do a single collision behavior for all actors of the tag! This reduces a lot of time when coding. We will get into more detail about actor tags, when we get into creating some games later in the book. If you double-click on the Initial Scene, you will be taken to the level editor. Before we do that, let's go through the buttons shown in the following screenshot: The descriptions of the buttons in the previous screenshot are as as follows: Seems pretty easy, right? It is! GameSalad's user interface is simple. Even if you don't know what a certain button does, just hover your mouse over the button and a tooltip appears and tells you what the button does. Even though it's a very simple user interface, it is very powerful. Take for example, something as simple as the Enable Resolution Independence option. Simply selecting this takes out a lot of time from having to create two sets of images, a high resolution retina-friendly image, and a lower quality set for non-retina display images. With this option, all you have to do is create a high resolution set. Choose this option and GameSalad automatically creates a lower quality set of images for non-retina devices. How great is that? Such a simple option and yet it saves so much time and effort, and isn't simplicity what everyone wants? Now let's get into the scene editor Double-click our initial scene and you will see the Scene Editor, yes it may be a little daunting, but once you get used to the user interface, it is really quite simple. Let's break down all the buttons and see what they do: What do all these buttons mean? Following is a description of all the buttons and boxes: There we go! The GameSalad interface really is that easy to navigate! In this article, you set up an account with GameSalad, you downloaded and installed it and now you know how to use the interface. GameSalad has such a simple interface, but it is really powerful. As we looked at earlier, an option as simple as Resolution Independence is so easy to select and yet one click takes off so much time from creating different sets of images that can be used for developing. This is what makes GameSalad so great; it's such a simple user interface and yet it is so powerful. 
What is so amazing about all of it, is that there's no programming involved whatsoever! For those who don't have the smartness to program a full game, this is what everyone else wants, simple, quick, and super powerful.

Tinkering with ticks in Matplotlib 2.0

Sugandha Lahoti
13 Dec 2017
6 min read
[box type="note" align="" class="" width=""]This is an excerpt from the book titled Matplotlib 2.x By Example written by Allen Chi Shing Yu, Claire Yik Lok Chung, and Aldrin Kay Yuen Yim,. The book covers basic know-how on how to create and customize plots by Matplotlib. It will help you learn to visualize geographical data on maps and implement interactive charts. [/box] The article talks about how you can manipulate ticks in Matplotlib 2.0. It includes steps to adjust tick spacing, customizing tick formats, trying out the ticker locator and formatter, and rotating tick labels. What are Ticks Ticks are dividers on an axis that help readers locate the coordinates. Tick labels allow estimation of values or, sometimes, labeling of a data series as in bar charts and box plots. Adjusting tick spacing Tick spacing can be adjusted by calling the locator methods: ax.xaxis.set_major_locator(xmajorLocator) ax.xaxis.set_minor_locator(xminorLocator) ax.yaxis.set_major_locator(ymajorLocator) ax.yaxis.set_minor_locator(yminorLocator) Here, ax refers to axes in a Matplotlib figure. Since set_major_locator() or set_minor_locator() cannot be called from the pyplot interface but requires an axis, we call pyplot.gca() to get the current axes. We can also store a figure and axes as variables at initiation, which is especially useful when we want multiple axes. Removing ticks NullLocator: No ticks Drawing ticks in multiples Spacing ticks in multiples of a given number is the most intuitive way. This can be done by using MultipleLocator space ticks in multiples of a given value. Automatic tick settings MaxNLocator: This finds the maximum number of ticks that will display nicely AutoLocator: MaxNLocator with simple defaults AutoMinorLocator: Adds minor ticks uniformly when the axis is linear Setting ticks by the number of data points IndexLocator: Sets ticks by index (x = range(len(y)) Set scaling of ticks by mathematical functions LinearLocator: Linear scale LogLocator: Log scale SymmetricalLogLocator: Symmetrical log scale, log with a range of linearity LogitLocator: Logit scaling Locating ticks by datetime There is a series of locators dedicated to displaying date and time: MinuteLocator: Locate minutes HourLocator: Locate hours DayLocator: Locate days of the month WeekdayLocator: Locate days of the week MonthLocator: Locate months, for example, 8 for August YearLocator: Locate years that in multiples RRuleLocator: Locate using matplotlib.dates.rrulewrapper The rrulewrapper is a simple wrapper around a dateutil.rrule (dateutil) that allows almost arbitrary date tick specifications AutoDateLocator: On autoscale, this class picks the best MultipleDateLocator to set the view limits and the tick locations Customizing tick formats Tick formatters control the style of tick labels. 
Customizing tick formats

Tick formatters control the style of tick labels. They can be called to set the major and minor tick formats on the x and y axes as follows:

ax.xaxis.set_major_formatter(xmajorFormatter)
ax.xaxis.set_minor_formatter(xminorFormatter)
ax.yaxis.set_major_formatter(ymajorFormatter)
ax.yaxis.set_minor_formatter(yminorFormatter)

Removing tick labels
NullFormatter: No tick labels

Fixing labels
FixedFormatter: Labels are set manually

Setting labels with strings
IndexFormatter: Take labels from a list of strings
StrMethodFormatter: Use the string format method

Setting labels with user-defined functions
FuncFormatter: Labels are set by a user-defined function

Formatting axes by numerical values
ScalarFormatter: The format string is automatically selected for scalars by default

The following formatters set values for log axes:
LogFormatter: Basic log axis
LogFormatterExponent: Log axis using exponent = log_base(value)
LogFormatterMathtext: Log axis using exponent = log_base(value), rendered with Math text
LogFormatterSciNotation: Log axis with scientific notation
LogitFormatter: Probability formatter

Trying out the ticker locator and formatter

To demonstrate the ticker locator and formatter, here we use Netflix subscriber data as an example. Business performance is often measured seasonally, and television shows are even more "seasonal". Can we show this better on the timeline?

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

"""
Number of Netflix streaming subscribers from 2012-2017
Data were obtained from Statista on
https://www.statista.com/statistics/250934/quarterly-number-of-netflix-streaming-subscribers-worldwide/
on May 10, 2017. The data were originally published by Netflix in April 2017.
"""

# Prepare the data set
x = range(2011,2018)
y = [26.48,27.56,29.41,33.27,36.32,37.55,40.28,44.35,
     48.36,50.05,53.06,57.39,62.27,65.55,69.17,74.76,81.5,
     83.18,86.74,93.8,98.75]   # quarterly subscriber count in millions

# Plot lines with different line styles
plt.plot(y,'^',label = 'Netflix subscribers',ls='-')

# get current axes and store it to ax
ax = plt.gca()

# set ticks in multiples for both labels
ax.xaxis.set_major_locator(ticker.MultipleLocator(4))   # set major marks every 4 quarters, i.e. once a year
ax.xaxis.set_minor_locator(ticker.MultipleLocator(1))   # set minor marks for each quarter
ax.yaxis.set_major_locator(ticker.MultipleLocator(10))
ax.yaxis.set_minor_locator(ticker.MultipleLocator(2))

# label the start of each year by FixedFormatter
ax.get_xaxis().set_major_formatter(ticker.FixedFormatter(x))

plt.legend()
plt.show()

From this plot, we see that Netflix shows fairly linear subscriber growth from 2012 to 2017. We can see the seasonal growth better after formatting the x axis in a quarterly manner. In 2016, Netflix was doing better in the latter half of the year. Any TV shows you watched in each season?
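As a small variation on the Netflix example above (this snippet is not part of the original excerpt), FuncFormatter can turn the raw y values into more readable labels, for example by appending an 'M' for millions:

import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

# Same quarterly subscriber counts (in millions) as in the example above
y = [26.48,27.56,29.41,33.27,36.32,37.55,40.28,44.35,
     48.36,50.05,53.06,57.39,62.27,65.55,69.17,74.76,81.5,
     83.18,86.74,93.8,98.75]

plt.plot(y, '^', label='Netflix subscribers', ls='-')
ax = plt.gca()

# FuncFormatter passes each tick value and its position to the function,
# which returns the label string
ax.yaxis.set_major_formatter(
    ticker.FuncFormatter(lambda value, pos: '{:.0f}M'.format(value)))

plt.legend()
plt.show()

StrMethodFormatter('{x:.0f}M') would achieve the same result here; it is a convenient choice when a format string is enough and no custom logic is needed.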
Rotating tick labels

A figure can get too crowded, or some tick labels may get skipped, when we have too many tick labels or the label strings are too long. We can solve this by rotating the ticks, for example, with pyplot.xticks(rotation=60):

import matplotlib.pyplot as plt
import numpy as np
import matplotlib as mpl

mpl.style.use('seaborn')

techs = ['Google Adsense','DoubleClick.Net','Facebook Custom Audiences','Google Publisher Tag','App Nexus']
y_pos = np.arange(len(techs))

# Number of websites using the advertising technologies
# Data were quoted from builtwith.com on May 8th 2017
websites = [14409195,1821385,948344,176310,283766]

plt.bar(y_pos, websites, align='center', alpha=0.5)

# set x-axis tick rotation
plt.xticks(y_pos, techs, rotation=25)

plt.ylabel('Live site count')
plt.title('Online advertising technologies usage')
plt.show()

Use pyplot.tight_layout() to avoid image clipping. Using rotated labels can sometimes result in image clipping if you save the figure with pyplot.savefig(). You can call pyplot.tight_layout() before pyplot.savefig() to ensure a complete image output.

We saw how ticks can be adjusted, customized, rotated, and formatted in Matplotlib 2.0 for easy readability, labelling, and estimation of values. To become well-versed with Matplotlib for your day-to-day work, check out the book Matplotlib 2.x By Example.
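For reference, here is a minimal sketch of the tight_layout-before-savefig pattern mentioned in the tip above; it is not part of the original excerpt, and the output filename is just an illustrative placeholder:

import numpy as np
import matplotlib.pyplot as plt

techs = ['Google Adsense','DoubleClick.Net','Facebook Custom Audiences','Google Publisher Tag','App Nexus']
websites = [14409195,1821385,948344,176310,283766]
y_pos = np.arange(len(techs))

plt.bar(y_pos, websites, align='center', alpha=0.5)
plt.xticks(y_pos, techs, rotation=25)
plt.ylabel('Live site count')

# Recompute the layout so the rotated labels are not clipped,
# then write the figure to disk
plt.tight_layout()
plt.savefig('ad_technologies.png')   # hypothetical filename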
Delphi Cookbook

Packt
07 Jul 2016
6 min read
In this article by Daniele Teti, author of the book Delphi Cookbook - Second Edition, we will study multithreading. Multithreading can be your biggest problem if you do not handle it with care. One of the fathers of the Delphi compiler used to say:

"New programmers are drawn to multithreading like moths to flame, with similar results." – Danny Thorpe

(For more resources related to this topic, see here.)

In this chapter, we will discuss some of the main techniques for handling single or multiple background threads. We'll talk about shared resource synchronization and about thread-safe queues and events. The last three recipes cover the Parallel Programming Library introduced in Delphi XE7, and I hope that you will love it as much as I do. Multithreaded programming is a huge topic, so after reading this chapter you will not be a master of it, but you will be able to approach multithreaded programming with confidence and will have the basics needed to jump on to more specific material when (and if) you require it.

Talking with the main thread using a thread-safe queue

Using a background thread and working with its private data is not difficult, but safely bringing information retrieved or elaborated by the thread back to the main thread to show it to the user (as you know, only the main thread can handle the GUI, in VCL as well as in FireMonkey) can be a daunting task. An even more complex task is establishing generic communication between two or more background threads. In this recipe, you'll see how a background thread can talk to the main thread in a safe manner using the TThreadedQueue<T> class. The same concepts are valid for communication between two or more background threads.

Getting ready

Let's talk about a scenario. You have to show data generated from some sort of device or subsystem, let's say a serial port, a USB device, a polling query on a database, or a TCP socket. You cannot simply wait for data using TTimer because this would freeze your GUI during the wait, and the wait can be long. You have tried it, but your interface became sluggish… you need another solution!

In the Delphi RTL, there is a very useful class called TThreadedQueue<T> that is, as the name suggests, a parametric queue (a FIFO data structure) that can be safely used from different threads. How do you use it? In programming there is rarely a single solution that is valid for all situations, but the following approach is very popular. Feel free to change it if necessary; however, this is the approach used in the recipe code:

1. Create the queue within the main form.
2. Create a thread and inject the form's queue into it.
3. In the thread's Execute method, append all generated data to the queue.
4. In the main form, use a timer or some other mechanism to periodically read from the queue and display the data on the form.

How to do it…

Open the recipe project called ThreadingQueueSample.dproj. This project contains the main form with all the GUI-related code and another unit with the thread code. The FormCreate event creates the shared queue with the following parameters that influence the behavior of the queue:

QueueDepth = 100: This is the maximum queue size. If the queue reaches this limit, all push operations block for a maximum of PushTimeout, after which the Push call fails with a timeout.

PushTimeout = 1000: This is the timeout in milliseconds that affects the thread, which in this recipe is the producer in a producer/consumer pattern.
PopTimeout = 1: This is the timeout in milliseconds that affects the timer when the queue is empty. This timeout must be very short because the pop call is blocking in nature, and you are in the main thread, which should never be blocked for long.

The button labeled Start Thread creates a TReaderThread instance, passing the already created queue to its constructor (this is a particular type of dependency injection called constructor injection). The thread declaration is really simple and is as follows:

type
  TReaderThread = class(TThread)
  private
    FQueue: TThreadedQueue<Byte>;
  protected
    procedure Execute; override;
  public
    constructor Create(AQueue: TThreadedQueue<Byte>);
  end;

The Execute method simply appends randomly generated data to the queue. Note that the Terminated property must be checked often so that the application can terminate the thread and wait a reasonable time for its actual termination. In the following example, as long as the queue is not full, the termination check happens at least approximately every 700 ms:

procedure TReaderThread.Execute;
begin
  while not Terminated do
  begin
    TThread.Sleep(200 + Trunc(Random(500))); // e.g. reading from an actual device
    FQueue.PushItem(Random(256));
  end;
end;

So far, you've filled the queue. Now, you have to read from the queue and do something useful with the data. This is the job of a timer. The following is the code of the timer event on the main form:

procedure TMainForm.Timer1Timer(Sender: TObject);
var
  Value: Byte;
begin
  while FQueue.PopItem(Value) = TWaitResult.wrSignaled do
  begin
    ListBox1.Items.Add(Format('[%3.3d]', [Value]));
  end;
  ListBox1.ItemIndex := ListBox1.Count - 1;
end;

That's it! Run the application and see how the data coming from the thread is read and shown on the main form. The following is a screenshot:

The main form showing data generated by the background thread

There's more…

TThreadedQueue<T> is very powerful and can be used to communicate between two or more background threads in a producer/consumer schema as well. You can use multiple producers, multiple consumers, or both. The following screenshot shows a popular schema used when data is generated faster than it can be handled. In this case, you can usually gain speed on the processing side by using multiple consumers.

Single producer, multiple consumers

Summary

In this article we had a look at how to talk to the main thread using a thread-safe queue.

Resources for Article:

Further resources on this subject:
Exploring the Usages of Delphi [article]
Adding Graphics to the Map [article]
Application Performance [article]