How-To Tutorials

An Overview of the Tcl Shell

Packt · 15 Feb 2011 · 10 min read
Tcl/Tk 8.5 Programming Cookbook: over 100 great recipes to effectively learn Tcl/Tk 8.5.

- The quickest way to solve your problems with Tcl/Tk 8.5
- Understand the basics and fundamentals of the Tcl/Tk 8.5 programming language
- Learn graphical user interface development with the Tcl/Tk 8.5 widget set
- Get a thorough and detailed understanding of the concepts with a real-world address book application
- Each recipe is a carefully organized sequence of instructions to efficiently learn the features and capabilities of the Tcl/Tk 8.5 language

Introduction

So, you've installed Tcl, written some scripts, and now you're ready to gain a deeper understanding of Tcl and all that it has to offer. So why start with the shell, the most basic tool in the Tcl toolbox?

When I started using Tcl, I needed to rapidly deliver a graphical user interface (GUI) to display video from IP-based network cameras. The solution had to run on Windows and Linux, and it could not be browser-based due to the end user's security concerns. The client needed it quickly, and our sales team had, as usual, committed to a delivery date without speaking to a developer in advance. So, with the requirements document in hand, I researched the open source tools available at the time, and Tcl/Tk was the only language that met the challenge.

The original solution quickly evolved into a full-featured IP video security system with the ability to record and display historic video as well as attach to live video feeds from the cameras. Next, search capabilities were added to review the stored video, along with a method to navigate to specific dates and times. The final version included configuring advanced recording settings such as resolution, color levels, frame rate, and variable-speed playback. All of this was accomplished with Tcl.

Due to the time constraints, I was not able to gain a full appreciation of the capabilities of the shell. I saw it as a basic tool to interact with the interpreter, run commands, and access the file system. When I had the time, I returned to the shell and realized just how valuable a tool it is, and how many of its capabilities I had failed to make use of. Used to its fullest, the shell provides much more than an interface to the Tcl interpreter, especially in the early stages of the development process. Need to isolate and test a procedure in a program? Need a quick debugging tool? Need real-time notification of the values stored in a variable? The Tcl shell is the place to go.

Since then, I have learned countless uses for the shell that would not only have sped up the development process, but also saved me several headaches in debugging the GUI and video collection. I relied on numerous dialog boxes to pop up values, or turned to writing debugging information to error logs. While this was an excellent way to get what I needed, I could have minimized the coding overhead by simply relying on the shell to display the desired information in the early stages. While dialog windows and error logs are irreplaceable, I now add quick debugging using the commands the shell has to offer. If something isn't proceeding as expected, I drop in a command to write to standard out and, voila, I have my answer. The shell continues to provide me with a reliable method to isolate issues with a minimum investment of time.

The Tcl shell

The Tcl shell (tclsh) provides an interface to the Tcl interpreter that accepts commands from both standard input and text files.
Much like the Windows command line or a Linux terminal, the Tcl shell allows a developer to rapidly invoke a command and observe the return value or error messages on standard output. The shell differs based on the operating system in use: on Unix/Linux systems, it runs in the standard terminal console, while on Windows the shell is launched separately via an executable.

If invoked with no arguments, the shell runs interactively, accepting commands from the native command line. The input line is marked with a percent sign (%), with the prompt located at the start position. If the shell is invoked from the command line (Windows DOS or a Unix/Linux terminal) and arguments are passed, the interpreter accepts the first as the name of a file to read; any additional arguments are made available as variables. The shell runs until the exit command is invoked or it reaches the end of the text file.

When invoked with arguments, the shell sets several Tcl variables that may be accessed within your program, much like the C family of languages. These variables are:

| Variable | Explanation |
|----------|-------------|
| argc | The number of arguments passed in, excluding the script file name. A value of 0 is returned if no arguments were passed in. |
| argv | A Tcl list whose elements are the arguments passed in. An empty string is returned if no arguments were provided. |
| argv0 | The filename (if specified) or the name used to invoke the Tcl shell. |
| tcl_interactive | Contains 1 if tclsh is running in interactive mode, otherwise 0. |
| env | Maintained automatically as a Tcl array; created at startup to hold the environment variables on your system. |

Writing to the Tcl console

The following recipe illustrates a basic command invocation. In this example, we will use the puts command to output a "Hello World" message to the console.

Getting ready

To complete the following example, launch your Tcl shell as appropriate for your operating platform. For example, on Windows you would launch the executable located in the bin directory of the Tcl installation, while on a Unix/Linux installation you would enter tclsh at the command line, provided this is the executable name for your particular system. To check the name, locate the executable in the bin directory of your installation.

How to do it…

Enter the following command:

```tcl
% puts "Hello World"
Hello World
```

How it works…

As you can see, the puts command writes whatever it was passed as an argument to standard out. Although this is a basic "Hello World" recipe, you can easily see how this 'simple' command can be used to rapidly track the location within a procedure where a problem may have arisen. Add in variable values and some error handling, and you can rapidly isolate issues and correct them without the additional effort of creating a dialog window or writing to an error log.
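To tie the startup variables from the preceding table and puts-based tracing together, here is a minimal script sketch (our own illustration, not from the book; the averaging procedure and its trace messages are hypothetical):

```tcl
#!/usr/bin/env tclsh
# trace.tcl: echo the startup variables, then trace a procedure with puts.

puts "argc:  $argc"   ;# number of arguments, excluding the script name
puts "argv:  $argv"   ;# the arguments themselves, as a Tcl list
puts "argv0: $argv0"  ;# the script (or shell) name

proc average {values} {
    puts "average called with: $values"  ;# quick trace to standard out
    if {[llength $values] == 0} {
        puts "average: empty list, returning 0"
        return 0
    }
    set total 0
    foreach v $values {
        set total [expr {$total + $v}]
    }
    return [expr {double($total) / [llength $values]}]
}

puts "result: [average $argv]"
```

Running it as tclsh trace.tcl 3 5 10 prints each trace line on standard out before the result, which is exactly the kind of lightweight debugging described above.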
Mathematical expressions

The expr command is used to evaluate mathematical expressions. It can address everything from simple addition and subtraction to advanced computations such as sine and cosine, which eliminates the need to make system calls to perform advanced mathematical functions. The expr command evaluates its input and arguments, and returns an integer or floating-point value. A Tcl expression consists of a combination of operators, operands, and parenthetical containers (parentheses, braces, or brackets). There are no strict typing requirements, and any white space is stripped by the command automatically. Tcl supports non-numeric and string comparisons as well as Tcl-specific operators.

Tcl expr operands

Tcl operands are treated as integers where feasible. They may be specified as decimal, binary (the first two characters must be 0b), hexadecimal (the first two characters must be 0x), or octal (the first two characters must be 0o). Care should be taken when passing integers with a leading 0, for example 08, as the interpreter would evaluate 08 as an illegal octal value. If no integer format applies, the command evaluates the operand as a floating-point numeric value; for scientific notation, the character e (or E) is inserted as appropriate. If no numeric interpretation is feasible, the value is evaluated as a string, in which case it must be enclosed within double quotes or braces. Please note that not all operands are accepted by all operators.

To avoid inadvertent variable substitution, it is always best to enclose expressions within braces. Operator precedence also matters; for example:

```tcl
% expr 1+1*3
4
% expr (1+1)*3
6
```

Operands may be presented in any of the following forms:

| Operand | Explanation |
|---------|-------------|
| Numeric | Integer and floating-point values may be passed directly to the command. |
| Boolean | All standard Boolean values (true, false, yes, no, 0, or 1) are supported. |
| Tcl variable | Any referenced variable. In Tcl, a variable is referenced using the $ notation; for example, myVariable is a named variable, whereas $myVariable is the referenced variable. |
| Strings (in double quotes) | Strings contained within double quotes may be passed with no need to include backslash, variable, or command substitution, as these are handled automatically. |
| Strings (in braces) | Strings contained within braces are used with no substitution. |
| Tcl commands | Tcl commands must be enclosed within square brackets. The command is executed and the mathematical function is performed on the return value. |
| Named functions | Functions such as sine, cosine, and so on. |

Tcl supports a subset of the math operators of the C programming language and treats them with the same meaning and precedence. If a named function (such as sine) is encountered, expr automatically makes a call to the mathfunc namespace to minimize the syntax required to obtain the value. The expr operators are listed in the following table, in descending order of precedence:

| Operator | Explanation |
|----------|-------------|
| - + ~ ! | Unary minus, unary plus, bitwise NOT, and logical NOT. Cannot be applied to string operands; bitwise NOT may be applied only to integers. |
| ** | Exponentiation. Numeric operands only. |
| * / % | Multiply, divide, and remainder. Numeric operands only. |
| + - | Add and subtract. Numeric operands only. |
| << >> | Left shift and right shift. Integer operands only. A right shift always propagates the sign bit. |
| < > <= >= | Boolean less, greater, less than or equal to, and greater than or equal to (1 is returned if the condition is true, otherwise 0). If applied to strings, string comparison is used. |
| == != | Boolean equal and not equal (1 if the condition is true, otherwise 0). |
| eq ne | Boolean string equal and string not equal (1 if the condition is true, otherwise 0). Any operand provided is interpreted as a string. |
| in ni | List containment and negated list containment (1 if the condition is true, otherwise 0). The first operand is treated as a string value, the second as a list. |
| & | Bitwise AND. Integers only. |
| ^ | Bitwise exclusive OR. Integers only. |
| \| | Bitwise OR. Integers only. |
| && | Logical AND (1 is returned if both operands are non-zero, otherwise 0). Boolean and numeric (integer and floating-point) operands only. |
| \|\| | Logical OR (1 is returned if either operand is non-zero, otherwise 0). Boolean and numeric operands only. |
| x?y:z | If-then-else (if x evaluates to non-zero, the value of y is returned, otherwise the value of z). The x operand must have a Boolean or numeric value. |
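Tying the operand and operator rules together, here is a short interactive sketch (the variable names are ours, not the book's):

```tcl
% set price 100.0
100.0
% set taxRate 0.25
0.25
% expr {$price * (1 + $taxRate)}   ;# braces defer substitution to expr: safer and faster
125.0
% expr {sqrt(3**2 + 4**2)}         ;# named functions resolve via the mathfunc namespace
5.0
% expr {"apple" eq "Apple"}        ;# string comparison is case-sensitive
0
% expr {3 in {1 2 3}}              ;# list containment
1
```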


Applying Spring Security using JSON Web Token (JWT)

Vijin Boricha · 10 Apr 2018 · 9 min read
Today, we will learn about Spring Security and how it can be applied in various forms using powerful libraries like JSON Web Token (JWT). Spring Security is a powerful authentication and authorization framework that helps us build a secure application. By using Spring Security, we can keep all of our REST APIs secured and accessible only through authenticated and authorized calls.

Authentication and authorization

Let's look at an example to explain this. Assume you have a library with many books. Authentication provides a key to enter the library, whereas authorization gives you permission to take a book. Without a key, you can't even enter the library, and even with a key, you will be allowed to take only certain books.

JSON Web Token (JWT)

Spring Security can be applied in many forms, including XML configuration and powerful libraries such as JSON Web Token. As most companies use JWT in their security, we will focus more on JWT-based security than on plain Spring Security, which can be configured in XML. JWT tokens are URL-safe and web browser-compatible, which makes them especially suitable for Single Sign-On (SSO) contexts.

A JWT has three parts:

- Header
- Payload
- Signature

The header determines which algorithm is used to generate the token. While authenticating, the client has to save the JWT that is returned by the server. Unlike traditional session-creation approaches, this process doesn't need to store any cookies on the client side. JWT authentication is stateless, as the client state is never saved on the server.

JWT dependency

To use JWT in our application, we need to add the following Maven dependency to the pom.xml file. You can get it from https://mvnrepository.com/artifact/javax.xml.bind. We have used version 2.3.0 in our application:

```xml
<dependency>
    <groupId>javax.xml.bind</groupId>
    <artifactId>jaxb-api</artifactId>
    <version>2.3.0</version>
</dependency>
```

Note: As Java 9 doesn't include DatatypeConverter by default, we need to add the preceding dependency to work with DatatypeConverter. We will cover DatatypeConverter in the following section.

Creating a JSON Web Token

To create a token, we add an abstract method called createToken to our SecurityService interface. This interface tells the implementing class that it has to provide a concrete createToken method. In createToken, we use only the subject and expiry time, as these two options are the important ones when creating a token.

First, we create the abstract method in the SecurityService interface; the concrete class (whichever class implements SecurityService) has to implement it:

```java
public interface SecurityService {
    String createToken(String subject, long ttlMillis);
    // other methods
}
```

In the preceding code, we defined the method for token creation in the interface. SecurityServiceImpl is the concrete class that implements this abstract method by applying the business logic.
The following code shows how the JWT is created using the subject and expiry time:

```java
private static final String secretKey = "4C8kum4LxyKWYLM78sKdXrzbBjDCFyfX";

@Override
public String createToken(String subject, long ttlMillis) {
    if (ttlMillis <= 0) {
        throw new RuntimeException("Expiry time must be greater than Zero : [" + ttlMillis + "]");
    }
    // The JWT signature algorithm we will be using to sign the token
    SignatureAlgorithm signatureAlgorithm = SignatureAlgorithm.HS256;
    byte[] apiKeySecretBytes = DatatypeConverter.parseBase64Binary(secretKey);
    Key signingKey = new SecretKeySpec(apiKeySecretBytes, signatureAlgorithm.getJcaName());
    JwtBuilder builder = Jwts.builder()
            .setSubject(subject)
            .signWith(signatureAlgorithm, signingKey);
    long nowMillis = System.currentTimeMillis();
    builder.setExpiration(new Date(nowMillis + ttlMillis));
    return builder.compact();
}
```

The preceding code creates the token for the subject. Here, we have hardcoded the secret key "4C8kum4LxyKWYLM78sKdXrzbBjDCFyfX" to simplify the token creation process; if needed, we could keep the secret key in a properties file to avoid hardcoding it in the Java code.

First, we verify whether the time-to-live is greater than zero; if not, we throw an exception right away. We are using the HS256 algorithm (HMAC with SHA-256), as it is used in most applications.

Note: Secure Hash Algorithm (SHA) is a family of cryptographic hash functions. A cryptographic hash acts as a text-representable fingerprint of a data file. The SHA-256 algorithm generates an almost-unique, fixed-size 256-bit hash, and it is one of the more reliable hash functions.

We have hardcoded the secret key in this class. We could also store the key in the application.properties file; however, to simplify the process, we have hardcoded it:

```java
private static final String secretKey = "4C8kum4LxyKWYLM78sKdXrzbBjDCFyfX";
```

We convert the string key to a byte array and then pass it to the Java class SecretKeySpec to get a signingKey, which is then used in the token builder. While creating the signing key, we also pass the JCA name of our signature algorithm.

Note: Java Cryptography Architecture (JCA) was introduced by Java to support modern cryptographic techniques.

We use the JwtBuilder class to create the token and set its expiration time:

```java
JwtBuilder builder = Jwts.builder()
        .setSubject(subject)
        .signWith(signatureAlgorithm, signingKey);
long nowMillis = System.currentTimeMillis();
builder.setExpiration(new Date(nowMillis + ttlMillis));
```

We have to pass the time in milliseconds when calling this method, as setExpiration takes only milliseconds.

Finally, we call the createToken method in our HomeController. Before calling the method, we autowire the SecurityService. The createToken call takes the subject as a parameter; to simplify the process, we have hardcoded the expiry time as 2 * 1000 * 60 (two minutes).

HomeController.java:

```java
@Autowired
SecurityService securityService;

@ResponseBody
@RequestMapping("/security/generate/token")
public Map<String, Object> generateToken(@RequestParam(value = "subject") String subject) {
    String token = securityService.createToken(subject, (2 * 1000 * 60));
    Map<String, Object> map = new LinkedHashMap<>();
    map.put("result", token);
    return map;
}
```
Generating a token

We can test token creation by calling the API in a browser or any REST client. The resulting token will be used for user authentication and similar purposes. A sample API call for creating a token is as follows:

http://localhost:8080/security/generate/token?subject=one

Here we have used one as the subject, and we can see the token in the result. This is how a token is generated for any subject we pass to the API:

```json
{
  "result": "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJvbmUiLCJleHAiOjE1MDk5MzY2ODF9.GknKcywiIG4-R2bRmBOsjomujP0MxZqdawrB8TO3P4"
}
```

Note: A JWT is a string with three parts, each separated by a dot (.), and each section base-64 encoded. The first section is the header, which gives a clue about the algorithm used to sign the JWT; the second section is the body; and the final section is the signature.

Getting a subject from a JSON Web Token

So far, we have created a JWT. Here, we are going to decode the token and get the subject back from it. As usual, we first define the method: an abstract getSubject method in the SecurityService interface, which we will then implement in our concrete class:

```java
String getSubject(String token);
```

In our concrete class, SecurityServiceImpl, we implement getSubject with the following code:

```java
@Override
public String getSubject(String token) {
    Claims claims = Jwts.parser()
            .setSigningKey(DatatypeConverter.parseBase64Binary(secretKey))
            .parseClaimsJws(token).getBody();
    return claims.getSubject();
}
```

In the preceding method, we use Jwts.parser to get the claims: we set a signing key by converting the secret key to binary and passing it to the parser. Once we have the Claims, we can simply get the subject by calling getSubject. Finally, we can call the method in our controller and pass the generated token to get the subject. You can check the following code, where the controller calls getSubject and returns the subject, in the HomeController.java file:

```java
@ResponseBody
@RequestMapping("/security/get/subject")
public Map<String, Object> getSubject(@RequestParam(value = "token") String token) {
    String subject = securityService.getSubject(token);
    Map<String, Object> map = new LinkedHashMap<>();
    map.put("result", subject);
    return map;
}
```

Getting a subject from a token

Previously, we created the code to generate a token. Here we test the getSubject method by calling the get subject API, which returns the subject we passed in earlier. A sample API call:

http://localhost:8080/security/get/subject?token=eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJvbmUiLCJleHAiOjE1MDk5MzY2ODF9.GknKcywiI-G4-R2bRmBOsjomujP0MxZqdawrB8TO3P4

Since we used one as the subject when creating the token via the generateToken method, we get "one" back:

```json
{
  "result": "one"
}
```

Note: Usually, we attach the token in the request headers; however, to avoid complexity here, we have passed it as a parameter and returned the result directly. You may not need to do it the same way in a real application; this is only for demo purposes.
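For instance, in a more realistic setup the token would travel in a header rather than a query parameter. A minimal sketch with curl (the Bearer scheme and the /api/whoami endpoint are our assumptions for illustration, not part of the book's code):

```bash
# 1. Obtain a token for subject "one"
curl "http://localhost:8080/security/generate/token?subject=one"

# 2. Call a protected endpoint, sending the token in the Authorization header
#    (the endpoint name and Bearer scheme are hypothetical here)
curl -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJzdWIi..." \
     "http://localhost:8080/api/whoami"
```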
This article is an excerpt from the book Building RESTful Web Services with Spring 5 - Second Edition, written by Raja CSP Raman. The book covers techniques for dealing with security in Spring and shows how to implement unit-test and integration-test strategies.

You may also like How to develop RESTful web services in Spring, another tutorial from this book. Check out other posts on Spring Security:

- Spring Security 3: Tips and Tricks
- Opening up to OpenID with Spring Security
- Migration to Spring Security 3


Introduction to Machine Learning with R

Packt · 18 Feb 2016 · 7 min read
If science fiction stories are to be believed, the invention of artificial intelligence inevitably leads to apocalyptic wars between machines and their makers. In the early stages, computers are taught to play simple games of tic-tac-toe and chess. Later, machines are given control of traffic lights and communications, followed by military drones and missiles. The machines' evolution takes an ominous turn once the computers become sentient and learn how to teach themselves. Having no more need for human programmers, humankind is then deleted.

Thankfully, at the time of writing this, machines still require user input. Though your impressions of machine learning may be colored by these mass-media depictions, today's algorithms are too application-specific to pose any danger of becoming self-aware. The goal of today's machine learning is not to create an artificial brain, but rather to assist us in making sense of the world's massive data stores.

Putting popular misconceptions aside, in this article we will cover the following topics:

- Installing R packages
- Loading and unloading R packages

Machine learning with R

Many of the algorithms needed for machine learning with R are not included in the base installation. Instead, they are made available by a large community of experts who share their work freely, and they must be installed on top of base R manually. Thanks to R's status as free open source software, there is no additional charge for this functionality.

A collection of R functions that can be shared among users is called a package. Free packages exist for each of the machine learning algorithms covered in this book; in fact, the book covers only a small portion of all of R's machine learning packages. If you are interested in the breadth of R packages, you can view a list at the Comprehensive R Archive Network (CRAN), a collection of web and FTP sites located around the world that provide the most up-to-date versions of R software and packages. If you obtained the R software via download, it was most likely from CRAN, at http://cran.r-project.org/index.html. If you do not already have R, the CRAN website also provides installation instructions and information on where to find help if you have trouble.

The Packages link on the left side of the page will take you to a page where you can browse packages in alphabetical order or sorted by publication date. At the time of writing, a total of 6,779 packages were available, a jump of over 60 percent in the time since the first edition was written, and this trend shows no sign of slowing! The Task Views link on the left side of the CRAN page provides a curated list of packages by subject area. The task view for machine learning, which lists the packages covered in this book (and many more), is available at http://cran.r-project.org/web/views/MachineLearning.html.

Installing R packages

Despite the vast set of available R add-ons, the package format makes installation and use a virtually effortless process. To demonstrate the use of packages, we will install and load the RWeka package, which was developed by Kurt Hornik, Christian Buchta, and Achim Zeileis (see Open-Source Machine Learning: R Meets Weka in Computational Statistics 24: 225-232 for more information). The RWeka package provides a collection of functions that give R access to the machine learning algorithms in the Java-based Weka software package by Ian H. Witten and Eibe Frank.
More information on Weka is available at http://www.cs.waikato.ac.nz/~ml/weka/.

To use the RWeka package, you will need to have Java installed (many computers come with Java preinstalled). Java is a set of programming tools, available for free, that allow for the use of cross-platform applications such as Weka. For more information, and to download Java for your system, visit http://java.com.

The most direct way to install a package is via the install.packages() function. To install the RWeka package, at the R command prompt simply type:

```r
> install.packages("RWeka")
```

R will then connect to CRAN and download the package in the correct format for your OS. Some packages, such as RWeka, require additional packages to be installed before they can be used (these are called dependencies). By default, the installer automatically downloads and installs any dependencies.

The first time you install a package, R may ask you to choose a CRAN mirror. If this happens, choose the mirror residing at a location close to you; this will generally provide the fastest download speed.

The default installation options are appropriate for most systems. However, in some cases you may want to install a package to another location. For example, if you do not have root or administrator privileges on your system, you may need to specify an alternative installation path. This can be accomplished using the lib option, as follows:

```r
> install.packages("RWeka", lib = "/path/to/library")
```

The installation function also provides additional options for installing from a local file, installing from source, or using experimental versions. You can read about these options in the help file with the following command:

```r
> ?install.packages
```

More generally, the question mark operator can be used to obtain help on any R function: simply type ? before the name of the function.

Loading and unloading R packages

In order to conserve memory, R does not load every installed package by default. Instead, packages are loaded by users as they are needed, using the library() function. The name of this function leads some people to incorrectly use the terms library and package interchangeably; to be precise, a library refers to the location where packages are installed, never to a package itself. To load the RWeka package we installed previously, type the following:

```r
> library(RWeka)
```

The same commands work for any other installed R package. To unload a package, use the detach() function. For example, to unload the RWeka package, use the following command:

```r
> detach("package:RWeka", unload = TRUE)
```

This will free up any resources used by the package.
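Once RWeka is loaded, using one of its learners looks like the following minimal sketch (our own illustration, not an example from the book): J48() is RWeka's interface to Weka's C4.5 decision tree, applied here to R's built-in iris data set.

```r
library(RWeka)

# Train Weka's C4.5 (J48) decision tree on the built-in iris data
model <- J48(Species ~ ., data = iris)

# Inspect the tree and its resubstitution accuracy
print(model)
summary(model)

# Predict the species of the first few flowers
predict(model, head(iris))
```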
Summary

Machine learning originated at the intersection of statistics, database science, and computer science. It is a powerful tool, capable of finding actionable insight in large quantities of data; still, caution must be used to avoid common abuses of machine learning in the real world. Conceptually, learning involves the abstraction of data into a structured representation and the generalization of this structure into action that can be evaluated for utility. In practical terms, a machine learner uses data containing examples and features of the concept to be learned, and summarizes this data in the form of a model, which is then used for predictive or descriptive purposes. These purposes can be grouped into tasks, including classification, numeric prediction, pattern detection, and clustering. Among the many options, machine learning algorithms are chosen on the basis of the input data and the learning task.

R provides support for machine learning in the form of community-authored packages. These powerful tools are free to download, but they need to be installed before they can be used. To learn more about R, you can refer to the following books published by Packt Publishing (https://www.packtpub.com/):

- Machine Learning with R - Second Edition (https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-r-second-edition)
- R for Data Science (https://www.packtpub.com/big-data-and-business-intelligence/r-data-science)
- R Data Science Essentials (https://www.packtpub.com/big-data-and-business-intelligence/r-data-science-essentials)
- R Graphs Cookbook Second Edition (https://www.packtpub.com/big-data-and-business-intelligence/r-graph-cookbook-%E2%80%93-second-edition)

Further resources on this subject:

- Machine Learning [article]
- Introducing Test-driven Machine Learning [article]
- Machine Learning with R [article]


Bootstrap 3.0 is Mobile First

Packt · 16 Dec 2013 · 10 min read
But why Mobile First? Why did Bootstrap completely change its course from Desktop First to Mobile First, adopting this new way of developing more suitable websites and web applications? Why did the most popular frontend framework embrace this change at a time when responsive web design was continuously growing, with better-suited and standard techniques such as media queries, fluid layouts, and JavaScript on demand?

Mobile browsers are increasing support for the brand new HTML5 and CSS3, with the philosophy of offering older browsers a less stylized but fully functional component, and capable browsers a rich, full experience that spans from mobiles to larger screens such as TVs. For older browsers (such as IE 8 and IE 9), Bootstrap has functional support, but enhanced features such as rounded corners and the placeholder attribute for hints in input fields are not supported. For the full details on browser support, check the Getting started section of the Bootstrap documentation (http://getbootstrap.com/getting-started/#browsers).

We are living at a time when mobile use is increasing at a pace that will soon surpass desktop usage (http://www.businessinsider.com/mobile-will-eclipse-desktop-by-2014-2012-6). Statistics aside, one thing we can presume is that the web scenario is changing so fast that we have to embrace the certainty of devices getting better and smarter.

In this article, we will explore the main changes in Bootstrap 3. If you are already familiar with Bootstrap 2, check the migration guide (http://getbootstrap.com/getting-started/#migration) for a practical overview of what has changed. If you're not familiar with Bootstrap, nothing in this article about the new version should be too difficult to follow. The only thing you need to keep in mind is the Mobile First approach, which is covered well in this article. You will be guided to design with Mobile First, discover why Mobile First is so important, and learn how to make Bootstrap a powerful frontend platform that keeps your site friendly for a wider range of devices.

We can take a step further and build on your previous Bootstrap knowledge by thinking of the design process as a continuous layering of capabilities, embracing the constraints rather than fighting them. Mobile First with Bootstrap is an elegant solution for frontend development; combined with server-side techniques, we get a full bag of solutions for making your product better suited to different users and their needs on different platforms.

This article will cover the following topics:

- Bootstrap reviewed
- Desktop to responsive

Bootstrap reviewed

In the third era of Bootstrap that is coming, the developers have redesigned the whole framework with a different approach: start by building interface components for small and simple screens, instead of adapting existing UI components to fit a constrained environment. From mobile, we then go to desktop. However, we will not adapt the experience as we usually do with responsive design going from desktop to mobile; with Mobile First, we enhance it progressively as the device screens get larger.

Why should I do this if my target audience will be using desktops? Going mobile indirectly benefits desktop users. But how? To better understand this, let's recap Bootstrap's history for a while.
In 2011, Bootstrap was launched to serve as a living, agnostic style guide used by Twitter to create their products, and it became an open source framework at that time. It was a time when we worked on pixel-perfect layouts and explored CSS3 animations, and we found in Bootstrap a well-documented and standardized set of features. Bootstrap creates a new design for the browsers: you don't need to define basic interface elements, such as buttons, from scratch, and at the same time you get utility elements such as badges to cover the most common interface needs.

Bootstrap does what a framework is supposed to do: bootstrapping! The term means the act of kicking off a new project; it's like saying, "give me the tools that I will need to start developing my application for different needs". Bootstrap is a toolkit belt with standard conventions, from well-defined classes with clean and practical documentation to live code that is ready to use and to be customized for your needs. It's not a magic solution to the interface-element reuse problem, but it's a kick-start. It fits so many scenarios that developers keep extending it with their own tools.

"CSS moved beyond type, forms and grids. People get tired to create the same stuffs" (Mark Otto, one of Bootstrap's creators, in the Desktop First to Mobile First Bootstrap presentation, https://speakerdeck.com/mdo/desktop-first-to-with-bootstrap)

A must-have from this breeding ground of possibilities is the Bootstrap extension Font Awesome (http://fortawesome.github.io/Font-Awesome/). It uses font-face, which is widely supported and flexible, instead of sprites for icons. With a single CSS file and the font resources used to render the custom fonts, you have a tool that can handle all your icons. This shows the flexibility of Bootstrap tooling: Font Awesome is independent, works as a standalone project, and is a great fit with Bootstrap.

There are a lot of ways to use Bootstrap. You can customize and extend components, from editing the source code in LESS variables to customizing via the Bootstrap download page (http://getbootstrap.com/customize/). At the time of this writing, Bootstrap is the most popular project on GitHub, which is one more reason to consider its importance. There's now an official Bootstrap Expo (http://expo.getbootstrap.com/), one of the changes in this new version: Bootstrap Expo is the official directory for websites and web applications developed using the framework.

A lot of developers get their first touch of the capabilities of HTML5 and CSS3 through this framework. Bootstrap has amazing capabilities, such as a responsive grid, dozens of JavaScript components, and a customizer, available through a web interface or through the LESS variables if you're an experienced developer. It's suitable for developers and designers at any level, because it has solutions that suit both scenarios. This is the second of Bootstrap's main philosophies: it's made for everyone.

Desktop to responsive

With the rise of smartphones came the need for responsive content to meet the growing demand. By adding an optional file with media queries and a bunch of CSS code, a desktop website could be adapted to mobile needs. Media queries, a CSS3 module introduced in June 2012, provide a namespace with a set of CSS rules and declarations applied according to the user's resolution, density, and screen capabilities. So, with CSS files alone, it became possible to manage the ongoing rise of smartphones.
It was possible, with just one well-supported stylesheet file, to adapt to the device and make a website mobile-friendly. In Bootstrap version 2, we used to have an optional file (responsive.less) that contained all the media queries necessary for Bootstrap to work well on mobiles. As a bonus, it adapted to tablets as well.

We have breakpoints for the most common mobile resolutions, which means we have a range of widths (768 px to 979 px) that can represent tablet devices. A breakpoint is the extreme point (minimum and/or maximum) at which you can define CSS rules specific to that range and change your layout. This can be achieved with a simple media query declaration in your CSS:

```css
@media (min-width: 768px) and (max-width: 979px) {
  ...
}
```
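Bootstrap 3 bakes breakpoints like this into its grid classes. As a minimal sketch (our own illustration, not from the article), the two columns below stack vertically on phones by default and only sit side by side from the small breakpoint (768 px and up):

```html
<div class="container">
  <div class="row">
    <!-- Full width on phones; half width from the "sm" breakpoint up -->
    <div class="col-sm-6">Main content</div>
    <div class="col-sm-6">Sidebar</div>
  </div>
</div>
```

This is the Mobile First inversion in practice: the stacked phone layout is the default, and wider layouts are layered on top.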
But sometimes it's indispensable to rethink some elements, especially those originally developed only for desktops in a pixel-perfect scenario. There's no flexibility in a pixel-width accommodation: no matter how different the screen is, when we work with fixed units the website behaves as if it were on a desktop. This is where a bunch of media queries can add flexibility. Yet even with this solution, redefining dimensions and CSS rules per device solves screen flexibility issues, but not performance issues on mobiles.

Performance is one of the main concerns when we go mobile. We have to consider scenarios where the Internet connection is slow, and this is a recurrent issue. You will have to perform reverse engineering to optimize your JavaScript loading, and combine it with server-side solutions. A worse solution would be to just hide whatever content you consider painful for your page load; images, for example, have a deep impact on final performance. Lower page response time is equivalent to more money spent, as we can see in this article about page loading versus user patience (http://blog.kissmetrics.com/loading-time/). One of the curious things this research points to is that mobile Internet users expect their browsing experience on phones to be comparable to what they get on their desktops.

We are living at a time when the web is filled with rich content and we have faster Internet connections. We have to be prepared to offer the closest thing to fast, optimized loading, at least for our most important content. This does not involve just using CSS to hide and show content depending on the device, as we can do with media queries. It's about keeping the concepts simple and focused, and developing each interface component thoughtfully from scratch: the primary use first, with its constraints, and then its enhanced capabilities. It's not just about adapting; it's about exploring the device's capabilities and delivering the best user experience across platforms.

Sounds familiar? Yes, for sure: the same concept as progressive enhancement, you might think. You're not wrong. Progressive enhancement was a term widely used at a time when we talked about HTML pages depending on JavaScript to be functional; it is a strategy for web design that relies on semantic markup and technologies such as JavaScript. Nowadays, progressive enhancement is a broader idea than the "works with JavaScript disabled" framing it was mostly given before, when hundreds of articles tried to show its benefits in no-JavaScript scenarios. Now, progressive enhancement is about being faster (http://coding.smashingmagazine.com/2013/09/03/progressive-enhancement-is-faster/).

Progressive enhancement is one of the three keys of Mobile First, together with responsive design and giving priority to content over navigation. These three rationales are in the background of every detail of Bootstrap 3, from the CSS components to the grid structure.

Summary

In this article, we looked at Mobile First in Twitter Bootstrap's latest version and at how the developers rebuilt the framework around it. The growing world of smartphones has driven the need for Mobile First.

Further resources on this subject:

- Downloading and setting up Bootstrap [Article]
- Introduction to RWD frameworks [Article]
- Getting started with using Chef [Article]


How to Stream Live Video With Raspberry Pi

Jakub Mandula · 07 Dec 2015 · 5 min read
Say you want to build a remote-controlled robot or a surveillance camera using your Raspberry Pi. What is the best method of transmitting the live footage to your screen? If only there were a program that could do this in a simple way while not frying your Pi. Fortunately, there is: mjpg-streamer. At its core, it grabs images from one input device and forwards them to one or more output devices. We are going to feed it the video from our web camera and pass it to a self-hosted HTTP server that lets us display the images in the browser.

Dependencies

Let's get started! mjpg-streamer does not come as a standard package and must be compiled manually, but do not let that be a reason for giving up on it. In order to compile mjpg-streamer, we have to install several dependencies:

```bash
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential libjpeg8-dev imagemagick libv4l-dev
```

Recently, the videodev.h file has been replaced by a newer version, videodev2.h. In order to fix the path references, just create a quick symbolic link:

```bash
sudo ln -s /usr/include/linux/videodev2.h /usr/include/linux/videodev.h
```

Downloading

Now that we have all the dependencies, we can download the mjpg-streamer repository. I am using a GitHub fork by jacksonliam:

```bash
git clone git@github.com:jacksonliam/mjpg-streamer.git
cd mjpg-streamer/mjpg-streamer-experimental
```

Compilation

A number of plugins come with mjpg-streamer. We are going to compile only these:

- input_file.so: provides a file as input
- input_uvc.so: provides input from USB web cameras
- input_raspicam.so: provides input from the Raspicam camera module
- output_http.so: our HTTP streaming output

Feel free to look through the rest of the plugins folder and explore the other inputs/outputs; just add their names to the command below:

```bash
make mjpg_streamer input_file.so input_uvc.so input_raspicam.so output_http.so
```

Moving mjpg-streamer

I recommend moving mjpg-streamer to a more permanent location in your file system. I personally use the /usr/local directory, but feel free to use any other path as long as you adjust the following steps in the setup process accordingly:

```bash
sudo cp mjpg_streamer /usr/local/bin
sudo cp input_file.so input_uvc.so input_raspicam.so output_http.so /usr/local/lib/
sudo cp -R www /usr/local/www
```

Finally, reference the plugins in your bashrc file. Simply open it with your favorite text editor:

```bash
vim ~/.bashrc
```

And append the following line to the file:

```bash
export LD_LIBRARY_PATH=/usr/local/lib/
```

Now source your bashrc and you are good to go:

```bash
source ~/.bashrc
```

Running mjpg-streamer

Running mjpg-streamer is very simple. If you have followed all the steps up to now, all it takes is one command:

```bash
mjpg_streamer -i input_uvc.so -o "output_http.so -w /usr/local/www"
```

The flags mean the following:

- -i: input to mjpg-streamer (our USB camera)
- -o: output from mjpg-streamer (our HTTP server)
- -w: a flag to the HTTP server giving the location of the HTML and CSS that we moved to /usr/local/www

There is a vast number of other flags that you can explore. Here are a few of them:

- -f: frame rate
- -c: protect the HTTP server with a username:password
- -b: run in the background
- -p: use another port
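Putting a few of these flags together, a more locked-down invocation might look like the following sketch (the credentials, port, and frame rate are placeholder values; as far as we can tell, -f belongs to the input plugin and -c and -p to the HTTP output plugin, while -b is a global flag):

```bash
# Run in the background, capture at ~15 fps, serve on port 8081
# behind basic username:password protection
mjpg_streamer -b \
  -i "input_uvc.so -f 15" \
  -o "output_http.so -w /usr/local/www -p 8081 -c pi:secret"
```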
Testing the stream

Now that you have mjpg-streamer running, go to http://ip-of-your-raspberry-pi:8080 and watch the live stream. To grab just the stream, paste the following image tag into your HTML:

```html
<img src="http://ip-of-your-raspberry-pi:8080/?action=stream">
```

This should work in most modern browsers. I found there to be a problem with Google Chrome on iOS devices, which is strange, because it does work in Safari, which is basically identical to Chrome.

Securing the stream

You could leave it at that. However, as of now your stream is insecure, and anyone with the IP address of your Raspberry Pi can access it. We have talked about the -c flag, which can be passed to the output_http.so plugin; however, this does not prevent people from eavesdropping on your connection. We need to use HTTPS.

The easiest way to secure your MJPG stream is a utility called stunnel, a very lightweight HTTPS proxy. You give it the keys, the certificates, and two ports; stunnel then forwards the traffic from one port to the other while silently encrypting it. The installation is very simple:

```bash
sudo apt-get install stunnel4
```

Next, you have to generate an RSA key and certificate. This is very easy with OpenSSL:

```bash
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 720 -nodes
```

Just fill in the prompts with your information. This creates a 2048-bit RSA key pair and a self-signed certificate valid for 720 days. Now create the following configuration file, stunnel.conf:

```ini
; Paths to the key and certificate you generated in the previous step
key = ./key.pem
cert = ./cert.pem
debug = 7

[https]
client = no
; Secure port stunnel listens on
accept = 1234
; Port mjpg-streamer is listening on
connect = 8080
sslVersion = all
```

The last thing to do is start stunnel:

```bash
sudo stunnel4 stunnel.conf
```

Go to https://ip-of-your-raspberry-pi:1234 and confirm that you trust the certificate (it is self-signed, so most browsers will complain about the security).

Conclusion

Now you can enjoy a live and secure stream directly from your Raspberry Pi. You can integrate it into your home security system or a robot, or grab the stream and use OpenCV to implement some cool computer vision abilities. I used this in my PiNet project to build a robot that can be controlled over the Internet using a webcam. I am curious what you can come up with!

About the author

Jakub Mandula is a student interested in anything to do with technology, computers, mathematics, or science.


Extending ElasticSearch with Scripting

Packt · 06 Feb 2015 · 21 min read
In this article by Alberto Paro, the author of ElasticSearch Cookbook Second Edition, we will cover the following recipes:

- Installing additional script plugins
- Managing scripts
- Sorting data using scripts
- Computing return fields with scripting
- Filtering a search via scripting

Introduction

ElasticSearch has a powerful way of extending its capabilities with custom scripts, which can be written in several programming languages; the most common ones are Groovy, MVEL, JavaScript, and Python. In this article, we will see how it's possible to create custom scoring algorithms, special processed return fields, custom sorting, and complex update operations on records. The scripting concept of ElasticSearch can be seen as an advanced stored-procedure system in the NoSQL world, so for advanced usage of ElasticSearch, it is very important to master it.

Installing additional script plugins

ElasticSearch provides native scripting (Java code compiled in a JAR) and Groovy, but a lot of other interesting languages are also available, such as JavaScript and Python. In older ElasticSearch releases, prior to version 1.4, the official scripting language was MVEL, but because it was not well maintained by the MVEL developers, and because it could not be sandboxed to prevent security issues, MVEL was replaced with Groovy. Groovy scripting is now provided by default in ElasticSearch; the other scripting languages can be installed as plugins.

Getting ready

You will need a working ElasticSearch cluster.

How to do it...

In order to install JavaScript language support for ElasticSearch (1.3.x), perform the following steps. From the command line, simply enter the following command:

```bash
bin/plugin --install elasticsearch/elasticsearch-lang-javascript/2.3.0
```

This will print the following result:

```
-> Installing elasticsearch/elasticsearch-lang-javascript/2.3.0...
Trying http://download.elasticsearch.org/elasticsearch/elasticsearch-lang-javascript/elasticsearch-lang-javascript-2.3.0.zip...
Downloading ....DONE
Installed lang-javascript
```

If the installation is successful, the output will end with Installed; otherwise, an error is returned. To install Python language support for ElasticSearch, just enter the following command:

```bash
bin/plugin -install elasticsearch/elasticsearch-lang-python/2.3.0
```

The version number depends on the ElasticSearch version; take a look at the plugin's web page to choose the correct version.

How it works...

Language plugins extend the set of languages that can be used for scripting. During the ElasticSearch startup, an internal service called PluginService loads all the installed language plugins. In order to install or upgrade a plugin, you need to restart the node.

The ElasticSearch community provides common scripting languages (a list of the supported scripting languages is available on the ElasticSearch site plugin page at http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-plugins.html), and others are available in GitHub repositories (a simple search on GitHub allows you to find them). The following are the most commonly used languages for scripting:
- Groovy (http://groovy.codehaus.org/): This language is embedded in ElasticSearch by default. It is a simple language that provides scripting functionality and is one of the fastest available language extensions. Groovy is a dynamic, object-oriented programming language with features similar to those of Python, Ruby, Perl, and Smalltalk, and it also supports writing functional code.
- JavaScript (https://github.com/elasticsearch/elasticsearch-lang-javascript): This is available as an external plugin. The JavaScript implementation is based on Java Rhino (https://developer.mozilla.org/en-US/docs/Rhino) and is really fast.
- Python (https://github.com/elasticsearch/elasticsearch-lang-python): This is available as an external plugin, based on Jython (http://jython.org). It allows Python to be used as a script engine. Considering several benchmark results, it's slower than the other languages.

There's more...

Groovy is preferred if the script is not too complex; otherwise, a native plugin provides a better environment for implementing complex logic and data management. The performance of every language is different: the fastest is native Java, and among the dynamic scripting languages, Groovy is faster than JavaScript and Python.

In order to access document properties in Groovy scripts, the same approach works as in the other scripting languages:

- doc.score: This stores the document's score.
- doc['field_name'].value: This extracts the value of the field_name field from the document. If the value is an array, or if you want to extract the value as an array, you can use doc['field_name'].values.
- doc['field_name'].empty: This returns true if the field_name field has no value in the document.
- doc['field_name'].multivalue: This returns true if the field_name field contains multiple values.

If the field contains a geopoint value, additional methods are available:

- doc['field_name'].lat: This returns the latitude of a geopoint. If you need the value as an array, you can use doc['field_name'].lats.
- doc['field_name'].lon: This returns the longitude of a geopoint. If you need the value as an array, you can use doc['field_name'].lons.
- doc['field_name'].distance(lat, lon): This returns the plane distance, in miles, from a latitude/longitude point. For the distance in kilometers, use doc['field_name'].distanceInKm(lat, lon).
- doc['field_name'].arcDistance(lat, lon): This returns the arc distance, in miles, from a latitude/longitude point. For the arc distance in kilometers, use doc['field_name'].arcDistanceInKm(lat, lon).
- doc['field_name'].geohashDistance(geohash): This returns the distance, in miles, from a geohash value. For the distance in kilometers, use doc['field_name'].geohashDistanceInKm(geohash).

Using these helper methods, it is possible to create advanced scripts that boost a document by distance, which can be very handy when developing geolocalized applications.
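For example, a score or sort script combining these accessors might look like the following sketch (the location and popularity fields are hypothetical names for illustration, not from the book's test data):

```groovy
// Rank popular documents higher, damped by their distance (in km)
// from a reference point; documents without a location sink to 0.
doc['location'].empty ? 0 : doc['popularity'].value / (1 + doc['location'].arcDistanceInKm(40.7143, -74.0060))
```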
Managing scripts

Depending on your scripting usage, there are several ways to customize ElasticSearch to use your script extensions. In this recipe, we will see how to provide scripts to ElasticSearch via files, indexes, or inline.

Getting ready

You will need a working ElasticSearch cluster populated with the populate script (chapter_06/populate_aggregations.sh), available at https://github.com/aparo/elasticsearch-cookbook-second-edition.

How to do it...

To manage scripting, perform the following steps:

Dynamic scripting is disabled by default for security reasons; we need to activate it in order to use dynamic scripting languages such as JavaScript or Python. To do this, turn off the disable flag (script.disable_dynamic: false) in the ElasticSearch configuration file (config/elasticsearch.yml) and restart the cluster. To increase security, ElasticSearch does not allow you to specify scripts for non-sandboxed languages.

Scripts can be placed in the scripts directory inside the configuration directory. To provide a script in a file, we'll put a my_script.groovy script in the config/scripts location with the following content:

```groovy
doc["price"].value * factor
```

If dynamic scripting is enabled (as done in the first step), ElasticSearch also allows you to store scripts in a special index, .scripts. To put my_script in the index, execute the following command in the terminal:

```bash
curl -XPOST localhost:9200/_scripts/groovy/my_script -d '{
  "script": "doc[\"price\"].value * factor"
}'
```

The script can then be used by simply referencing it in the script_id field:

```bash
curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?&pretty=true&size=3' -d '{
  "query": {
    "match_all": {}
  },
  "sort": {
    "_script": {
      "script_id": "my_script",
      "lang": "groovy",
      "type": "number",
      "ignore_unmapped": true,
      "params": {
        "factor": 1.1
      },
      "order": "asc"
    }
  }
}'
```

How it works...

ElasticSearch allows you to load scripts in different ways, each with its pros and cons. The most secure way to load or import scripts is to provide them as files in the config/scripts directory. This directory is continuously scanned for new files (by default, every 60 seconds). The scripting language is automatically detected by the file extension, and the script name depends on the filename. If the file is put in subdirectories, the directory path becomes part of the script name; for example, config/scripts/mysub1/mysub2/my_script.groovy yields the script name mysub1_mysub2_my_script. A script provided via the filesystem can be referenced in code via the "script": "script_name" parameter.

Scripts can also live in the special .scripts index. These are the REST endpoints:

- To retrieve a script: GET http://<server>/_scripts/<language>/<id>
- To store a script: PUT http://<server>/_scripts/<language>/<id>
- To delete a script: DELETE http://<server>/_scripts/<language>/<id>

An indexed script can be referenced in code via the "script_id": "id_of_the_script" parameter.

The recipes that follow use inline scripting because it's easier during the development and testing phases. In general, a good practice is to develop using inline dynamic scripting in the request, because it's faster to prototype. Once the script is ready and no longer changes, it can be stored in the index, since it is simpler to call and manage. In production, the best practice is to disable dynamic scripting and store the scripts on disk (generally, dumping the indexed scripts to disk).

See also

- The scripting page on the ElasticSearch website at http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html
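Using the endpoints listed above, the full lifecycle of an indexed script can be exercised from the shell. A quick sketch (calc_price is a hypothetical script id of our choosing):

```bash
# Store (or overwrite) an indexed Groovy script
curl -XPUT 'localhost:9200/_scripts/groovy/calc_price' -d '{
  "script": "doc[\"price\"].value * factor"
}'

# Retrieve it
curl -XGET 'localhost:9200/_scripts/groovy/calc_price'

# Delete it when it is no longer needed
curl -XDELETE 'localhost:9200/_scripts/groovy/calc_price'
```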
In real-world applications, there is often a need to modify the default sorting by match score, using an algorithm that depends on the context and some external variables. Some common scenarios are as follows:

Sorting places near a point
Sorting by most-read articles
Sorting items by custom user logic
Sorting items by revenue

Getting ready

You will need a working ElasticSearch cluster and an index populated with the script (chapter_06/populate_aggregations.sh), which is available at https://github.com/aparo/elasticsearch-cookbook-second-edition.

How to do it...

In order to sort using scripting, perform the following steps:

If you want to order your documents by the price field multiplied by a factor parameter (for example, a sales tax), the search will be as shown in the following code:

curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?&pretty=true&size=3' -d '{
  "query" : {
    "match_all" : {}
  },
  "sort" : {
    "_script" : {
      "script" : "doc[\"price\"].value * factor",
      "lang" : "groovy",
      "type" : "number",
      "ignore_unmapped" : true,
      "params" : {
        "factor" : 1.1
      },
      "order" : "asc"
    }
  }
}'

In this case, we have used a match_all query and a sort script. If everything is correct, the result returned by ElasticSearch should be as shown in the following code:

{
  "took" : 7,
  "timed_out" : false,
  "_shards" : { "total" : 5, "successful" : 5, "failed" : 0 },
  "hits" : {
    "total" : 1000,
    "max_score" : null,
    "hits" : [ {
      "_index" : "test-index", "_type" : "test-type", "_id" : "161",
      "_score" : null, "_source" : … truncated …,
      "sort" : [ 0.0278578661440021 ]
    }, {
      "_index" : "test-index", "_type" : "test-type", "_id" : "634",
      "_score" : null, "_source" : … truncated …,
      "sort" : [ 0.08131364254827411 ]
    }, {
      "_index" : "test-index", "_type" : "test-type", "_id" : "465",
      "_score" : null, "_source" : … truncated …,
      "sort" : [ 0.1094966959069832 ]
    } ]
  }
}

How it works...

The sort scripting allows you to define several parameters, as follows:

order (default "asc") ("asc" or "desc"): This determines whether the order must be ascending or descending.
script: This contains the code to be executed.
type: This defines the type to which the value is converted.
params (optional, a JSON object): This defines the parameters that need to be passed.
lang (by default, groovy): This defines the scripting language to be used.
ignore_unmapped (optional): This ignores unmapped fields in a sort. This flag allows you to avoid errors due to missing fields in shards.

Extending the sort with scripting allows the use of a broader approach to scoring your hits. ElasticSearch scripting permits the use of any code that you want. You can create custom, complex algorithms to score your documents.

There's more...
Groovy provides a lot of built-in functions (mainly taken from Java's Math class) that can be used in scripts, as shown in the following list:

time(): The current time in milliseconds
sin(a): Returns the trigonometric sine of an angle
cos(a): Returns the trigonometric cosine of an angle
tan(a): Returns the trigonometric tangent of an angle
asin(a): Returns the arc sine of a value
acos(a): Returns the arc cosine of a value
atan(a): Returns the arc tangent of a value
toRadians(angdeg): Converts an angle measured in degrees to an approximately equivalent angle measured in radians
toDegrees(angrad): Converts an angle measured in radians to an approximately equivalent angle measured in degrees
exp(a): Returns Euler's number raised to the power of a value
log(a): Returns the natural logarithm (base e) of a value
log10(a): Returns the base 10 logarithm of a value
sqrt(a): Returns the correctly rounded positive square root of a value
cbrt(a): Returns the cube root of a double value
IEEEremainder(f1, f2): Computes the remainder operation on two arguments, as prescribed by the IEEE 754 standard
ceil(a): Returns the smallest (closest to negative infinity) value that is greater than or equal to the argument and is equal to a mathematical integer
floor(a): Returns the largest (closest to positive infinity) value that is less than or equal to the argument and is equal to a mathematical integer
rint(a): Returns the value that is closest in value to the argument and is equal to a mathematical integer
atan2(y, x): Returns the angle theta from the conversion of rectangular coordinates (x, y) to polar coordinates (r, theta)
pow(a, b): Returns the value of the first argument raised to the power of the second argument
round(a): Returns the closest integer to the argument
random(): Returns a random double value
abs(a): Returns the absolute value of a value
max(a, b): Returns the greater of the two values
min(a, b): Returns the smaller of the two values
ulp(d): Returns the size of the unit in the last place of the argument
signum(d): Returns the signum function of the argument
sinh(x): Returns the hyperbolic sine of a value
cosh(x): Returns the hyperbolic cosine of a value
tanh(x): Returns the hyperbolic tangent of a value
hypot(x, y): Returns sqrt(x^2+y^2) without an intermediate overflow or underflow

If you want to retrieve records in a random order, you can use a script with a random method, as shown in the following code:

curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?&pretty=true&size=3' -d '{
  "query" : {
    "match_all" : {}
  },
  "sort" : {
    "_script" : {
      "script" : "Math.random()",
      "lang" : "groovy",
      "type" : "number",
      "params" : {}
    }
  }
}'

In this example, for every hit, the new sort value is computed by executing the Math.random() scripting function.

See also

The official ElasticSearch documentation at http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html

Computing return fields with scripting

ElasticSearch allows you to define complex expressions that can be used to return a new calculated field value. These special fields are called script_fields, and they can be expressed with a script in every available ElasticSearch scripting language.
Getting ready

You will need a working ElasticSearch cluster and an index populated with the script (chapter_06/populate_aggregations.sh), which is available at https://github.com/aparo/elasticsearch-cookbook-second-edition.

How to do it...

In order to compute return fields with scripting, perform the following steps:

Return the following script fields:
"my_calc_field": This concatenates the text of the "name" and "description" fields
"my_calc_field2": This multiplies the "price" value by the "discount" parameter

From the command line, execute the following code:

curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?&pretty=true&size=3' -d '{
  "query" : {
    "match_all" : {}
  },
  "script_fields" : {
    "my_calc_field" : {
      "script" : "doc[\"name\"].value + \" -- \" + doc[\"description\"].value"
    },
    "my_calc_field2" : {
      "script" : "doc[\"price\"].value * discount",
      "params" : {
        "discount" : 0.8
      }
    }
  }
}'

If everything works correctly, this is how the result returned by ElasticSearch should look:

{
  "took" : 4,
  "timed_out" : false,
  "_shards" : { "total" : 5, "successful" : 5, "failed" : 0 },
  "hits" : {
    "total" : 1000,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "test-index", "_type" : "test-type", "_id" : "4", "_score" : 1.0,
      "fields" : {
        "my_calc_field" : "entropic -- accusantium",
        "my_calc_field2" : 5.480038242170081
      }
    }, {
      "_index" : "test-index", "_type" : "test-type", "_id" : "9", "_score" : 1.0,
      "fields" : {
        "my_calc_field" : "frankie -- accusantium",
        "my_calc_field2" : 34.79852410178313
      }
    }, {
      "_index" : "test-index", "_type" : "test-type", "_id" : "11", "_score" : 1.0,
      "fields" : {
        "my_calc_field" : "johansson -- accusamus",
        "my_calc_field2" : 11.824173084636591
      }
    } ]
  }
}

How it works...

The script fields are similar to executing an SQL function on a field during a select operation. In ElasticSearch, after the search phase is executed and the hits to be returned are calculated, if some fields (standard or script) are defined, they are calculated and returned. The script field, which can be defined in all the supported languages, is processed by passing a value to the source of the document and, if some other parameters are defined in the script (as in the discount factor example), they are passed to the script function. The script function is a code snippet; it can contain everything that the language allows you to write, but it must evaluate to a value (or a list of values).

See also

The Installing additional script plugins recipe in this article to install additional languages for scripting
The Sorting data using script recipe in this article for a reference to the extra built-in functions available in Groovy scripts

Filtering a search via scripting

ElasticSearch scripting allows you to extend the traditional filter with custom scripts. Using scripting to create a custom filter is a convenient way to write scripting rules that are not provided by Lucene or ElasticSearch, and to implement business logic that is not available in the query DSL.

Getting ready

You will need a working ElasticSearch cluster and an index populated with the chapter_06/populate_aggregations.sh script, which is available at https://github.com/aparo/elasticsearch-cookbook-second-edition.

How to do it...
In order to filter a search using a script, perform the following steps:

Write a search with a filter that excludes documents whose age value is not greater than the parameter value:

curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?&pretty=true&size=3' -d '{
  "query" : {
    "filtered" : {
      "filter" : {
        "script" : {
          "script" : "doc[\"age\"].value > param1",
          "params" : {
            "param1" : 80
          }
        }
      },
      "query" : {
        "match_all" : {}
      }
    }
  }
}'

In this example, all the documents in which the value of age is greater than param1 qualify to be returned. If everything works correctly, the result returned by ElasticSearch should be as shown here:

{
  "took" : 30,
  "timed_out" : false,
  "_shards" : { "total" : 5, "successful" : 5, "failed" : 0 },
  "hits" : {
    "total" : 237,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "test-index", "_type" : "test-type", "_id" : "9",
      "_score" : 1.0, "_source" : { … "age": 83, … }
    }, {
      "_index" : "test-index", "_type" : "test-type", "_id" : "23",
      "_score" : 1.0, "_source" : { … "age": 87, … }
    }, {
      "_index" : "test-index", "_type" : "test-type", "_id" : "47",
      "_score" : 1.0, "_source" : { … "age": 98, … }
    } ]
  }
}

How it works...

The script filter is a language script that returns a Boolean value (true/false). For every hit, the script is evaluated, and if it returns true, the hit passes the filter. This type of scripting can only be used as a Lucene filter, not as a query, because it doesn't affect the search (the exceptions are constant_score and custom_filters_score). These are the scripting fields:

script: This contains the code to be executed
params: These are optional parameters to be passed to the script
lang (defaults to groovy): This defines the language of the script

The script code can be any code in your preferred and supported scripting language that returns a Boolean value.

There's more...

Other languages are used in the same way as Groovy. For the current example, I have chosen a standard comparison that works in several languages. To execute the same script using the JavaScript language, use the following code:

curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?&pretty=true&size=3' -d '{
  "query" : {
    "filtered" : {
      "filter" : {
        "script" : {
          "script" : "doc[\"age\"].value > param1",
          "lang" : "javascript",
          "params" : {
            "param1" : 80
          }
        }
      },
      "query" : {
        "match_all" : {}
      }
    }
  }
}'

For Python, use the following code:

curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?&pretty=true&size=3' -d '{
  "query" : {
    "filtered" : {
      "filter" : {
        "script" : {
          "script" : "doc[\"age\"].value > param1",
          "lang" : "python",
          "params" : {
            "param1" : 80
          }
        }
      },
      "query" : {
        "match_all" : {}
      }
    }
  }
}'

See also

The Installing additional script plugins recipe in this article to install additional languages for scripting
The Sorting data using script recipe in this article for a reference to the extra built-in functions available in Groovy scripts

Summary

In this article, you have learned how to use scripting to extend ElasticSearch's functional capabilities using different programming languages.
Resources for Article: Further resources on this subject: Indexing the Data [Article] Low-Level Index Control [Article] Designing Puppet Architectures [Article]
User Input Validation in Tapestry 5

Packt
16 Oct 2009
9 min read
Adding Validation to Components

The Start page of the web application Celebrity Collector has a login form that expects the user to enter some values into its two fields. But what if the user didn't enter anything and still clicked on the Log In button? Currently, the application will decide that the credentials are wrong, and the user will be redirected to the Registration page and receive an invitation to register. This logic does make some sense, but it isn't the best line of action, as the button might have been pressed by mistake. These two fields, User Name and Password, are actually mandatory, and if no value was entered into them, it should be considered an error. All we need to do is add a required validator to every field, as seen in the following code:

<tr>
  <td>
    <t:label t:for="userName">Label for the first text box</t:label>:
  </td>
  <td>
    <input type="text" t:id="userName" t:type="TextField" t:label="User Name" t:validate="required"/>
  </td>
</tr>
<tr>
  <td>
    <t:label t:for="password">The second label</t:label>:
  </td>
  <td>
    <input type="text" t:id="password" t:label="Password" t:type="PasswordField" t:validate="required"/>
  </td>
</tr>

Just one additional attribute for each component; let's see how this works now. Run the application, leave both fields empty, and click on the Log In button. Here is what you should see:

Both fields, including their labels, are now clearly marked as an error. We even have some kind of graphical marker for the problematic fields. However, one thing is missing—a clear explanation of what exactly went wrong. To display such a message, one more component needs to be added to the page. Modify the page template, as done here:

<t:form t:id="loginForm">
  <t:errors/>
  <table>

The Errors component is very simple, but one important thing to remember is that it should be placed inside the Form component, which in turn surrounds the validated components. Let's run the application again and try to submit an empty form. Now the result should look like this:

This kind of feedback doesn't leave any space for doubt, does it? If you see that the error messages are strongly misplaced to the left, it means that an error in the default.css file that comes with the Tapestry distribution still hasn't been fixed. To override the faulty style, define it in our application's styles.css file like this:

DIV.t-error LI {
  margin-left: 20px;
}

Do not forget to make the stylesheet available to the page. I hope you will agree that the efforts we had to make to get user input validated are close to zero. But let's see what Tapestry has done in response to them:

Every form component has a ValidationTracker object associated with it. It is provided automatically; we do not need to care about it. Basically, ValidationTracker is the place where any validation problems, if they happen, are recorded.
As soon as we use the t:validate attribute for a component in the form, Tapestry will assign one or more validators to that component; the number and type of them will depend on the value of the t:validate attribute (more about this later).
As soon as a validator decides that the value entered for the component is not valid, it records an error in the ValidationTracker. Again, this happens automatically.
If there are any errors recorded in ValidationTracker, Tapestry will redisplay the form, decorating the fields with erroneous input and their labels appropriately.
If there is an Errors component in the form, it will automatically display error messages for all the errors in ValidationTracker. The error messages for standard validators are provided by Tapestry, while the name of the component to be mentioned in the message is taken from its label.

A lot of very useful functionality comes with the framework and works for us "out of the box", without any configuration or setup! Tapestry comes with a set of validators that should be sufficient for most needs. Let's have a more detailed look at how to use them.

Validators

The following validators come with the current distribution of Tapestry 5:

Required—checks if the value of the validated component is not null or an empty string.
MinLength—checks if the string (the value of the validated component) is not shorter than the specified length. You will see how to pass the length parameter to this validator shortly.
MaxLength—same as above, but checks if the string is not too long.
Min—ensures that the numeric value of the validated component is not less than the specified value, passed to the validator as a parameter.
Max—as above, but ensures that the value does not exceed the specified limit.
Regexp—checks if the string value fits the specified pattern.

We can use several validators for one component. Let's see how all this works together. First of all, let's add another component to the Registration page template:

<tr>
  <td><t:label t:for="age"/>:</td>
  <td><input type="text" t:type="textfield" t:id="age"/></td>
</tr>

Also, add the corresponding property, age, of type double, to the Registration page class. It could indeed be an int, but I want to show that the Min and Max validators can work with fractional numbers too. Besides, someone might decide to enter their age as 23.4567. This would be weird, but not against the law. Finally, add an Errors component to the form on the Registration page, so that we can see error messages:

<t:form t:id="registrationForm">
  <t:errors/>
  <table>

Now we can test all the available validators on one page. Let's specify the validation rules first:

Both User Name and Password are required. Also, they should not be shorter than three characters and not longer than eight characters.
Age is required, and it should not be less than five (change this number if you've got a prodigy in your family) and not more than 120 (as that would probably be a mistake).
Email address is not required but, if entered, should match a common pattern.

Here are the changes to the Registration page template that will implement the specified validation rules:

<td>
  <input type="text" t:type="textfield" t:id="userName" t:validate="required,minlength=3,maxlength=8"/>
</td>
...
<td>
  <input type="text" t:type="passwordfield" t:id="password" t:validate="required,minlength=3,maxlength=8"/>
</td>
...
<td>
  <input type="text" t:type="textfield" t:id="age" t:validate="required,min=5,max=120"/>
</td>
...
<input type="text" t:type="textfield" t:id="email" t:validate="regexp"/>

As you can see, it is very easy to pass a parameter to a validator, like min=5 or maxlength=8. But where do we specify a pattern for the Regexp validator? The answer is: in the message catalog. Let's add the following line to the app.properties file:

email-regexp=^([a-zA-Z0-9_.-])+@(([a-zA-Z0-9-])+\.)+([a-zA-Z0-9]{2,4})+$

This will serve as a regular expression for all Regexp validators applied to components with the ID email throughout the application. Run the application, go to the Registration page, and try to submit the empty form.
Here is what you should see:

Looks all right, but the message for the age could be more sensible, something like You are too young! You should be at least 5 years old. We'll deal with this later. For now, however, enter a very long username, only two characters for the password, and an age that is above the upper limit, and see how the messages change:

Again, this looks good, except for the message about age. Next, enter some valid values for User Name, Password, and Age. Then click on the checkbox to subscribe to the newsletter. In the text box for email, enter some invalid value and click on Submit. Here is the result:

Yes! The validation worked properly, but the error message is absolutely unacceptable. Let's deal with this, but first make sure that any valid email address will pass the validation.

Providing Custom Error Messages

We can provide custom messages for validators in the application's (or page's) message catalog. For such messages, we use keys that are made up of the validated component's ID, the name of the validator, and the "message" postfix. Here is an example of what we could add to the app.properties file to change the error messages for the Min and Max validators of the Age component, as well as the message used for the email validation:

email-regexp-message=Email address is not valid.
age-min-message=You are too young! You should be at least 5 years old.
age-max-message=People do not live that long!

Still better, instead of hardcoding the required minimal age into the message, we could insert into the message the parameter that was passed to the Min validator (following the rules of java.util.Formatter), like this:

age-min-message=You are too young! You should be at least %s years old.

If you run the application now and submit an invalid value for age, the error message will be much better:

You might want to make sure that the other error messages have changed too. We can now successfully validate values entered into separate fields, but what if the validity of the input depends on how two or more different values relate to each other? For example, on the Registration page, we want the two versions of the password to be the same, and if they are not, this should be considered invalid input and reported appropriately. Before dealing with this problem, however, we need to look more thoroughly at the different events generated by the Form component.
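As a preview of where that event handling leads, here is a minimal sketch of how a page class could perform such a cross-field check. This is not code from the book: the field names are assumptions, @InjectComponent requires Tapestry 5.1 or later (use @Component on 5.0), and the exact event-handler name depends on the Tapestry 5.x release (the form's validation event is "validate" in later releases and "validateForm" in early ones).

import org.apache.tapestry5.annotations.InjectComponent;
import org.apache.tapestry5.annotations.Property;
import org.apache.tapestry5.corelib.components.Form;
import org.apache.tapestry5.corelib.components.PasswordField;

public class Registration
{
    @InjectComponent
    private Form registrationForm;

    // The first password field, injected so the error can be attached to it
    @InjectComponent
    private PasswordField password;

    @Property
    private String passwordValue;

    @Property
    private String verifyPasswordValue;

    // Invoked when registrationForm fires its validation event; the handler
    // name assumes the event is called "validate" (later Tapestry 5 releases)
    void onValidateFromRegistrationForm()
    {
        if (passwordValue != null && !passwordValue.equals(verifyPasswordValue))
        {
            registrationForm.recordError(password, "The two passwords do not match.");
        }
    }
}

The key idea is that Form's recordError(Field, String) method puts the error into the same ValidationTracker used by the built-in validators, so the Errors component displays it like any other validation message.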
Common performance issues

Packt
19 Jun 2014
16 min read
(For more resources related to this topic, see here.)

Threading performance issues

Threading performance issues are issues related to concurrency, as follows:

Lack of threading or excessive threading
Threads blocking each other to the point of starvation (usually from competing for shared resources)
Deadlocks that hang the complete application (threads waiting for each other)

Memory performance issues

Memory performance issues are issues related to application memory management, as follows:

Memory leakage: This issue is an explicit leakage or an implicit leakage, as seen in improper hashing
Improper caching: This issue is due to overcaching, an inadequate cache size, or missing essential caching
Insufficient memory allocation: This issue is due to missing JVM memory tuning

Algorithmic performance issues

Implementing the application logic requires two important, related parameters: correctness and optimization. If the logic is not optimized, we have algorithmic issues, as follows:

Costly algorithmic logic
Unnecessary logic

Work as designed performance issues

The work as designed performance issue is a group of issues related to the application design. The application behaves exactly as designed, but if the design has issues, it will lead to performance issues. Some examples of such performance issues are as follows:

Using synchronous calls when asynchronous calls should be used
Neglecting remoteness, that is, using remote calls as if they were local calls
Improper loading technique, that is, eager versus lazy loading techniques
Wrong selection of the size of the object
Excessive serialization layers
Web services granularity
Too much synchronization
Non-scalable architecture, especially in the integration layer or middleware
Saturated hardware on a shared infrastructure

Interfacing performance issues

Whenever the application deals with external resources, we may face the following interfacing issues, which can impact our application's performance:

Using an old driver/library
Missing frequent database housekeeping
Database issues, such as missing database indexes
A low-performing JMS or integration service bus
Logging issues (excessive logging or not following logging best practices)
Network component issues, that is, load balancer, proxy, firewall, and so on

Miscellaneous performance issues

Miscellaneous performance issues include the following:

Inconsistent performance of application components; for example, slow components can cause the whole application to slow down
Introduced performance issues that delay the processing speed
Improper configuration tuning of different components, for example, the JVM, the application server, and so on
Application-specific performance issues, such as excessive validations, applying many business rules, and so on

Fake performance issues

Fake performance issues could be temporary issues or not issues at all. Famous examples are as follows:

Temporary networking issues
Scheduled running jobs (detected from the associated pattern)
Software automatic updates (this must be disabled in production)
Non-reproducible issues

In the following sections, we will go through some of the listed issues.

Threading performance issues

Multithreading has the advantage of maximizing hardware utilization. In particular, it maximizes the processing power by executing multiple tasks concurrently. But it has different side effects, especially if not used wisely inside the application.
For example, in order to distribute tasks among different concurrent threads, there should be no or minimal data dependency, so each thread can complete its task without waiting for other threads to finish. Also, they shouldn't compete over different shared resources or they will be blocked, waiting for each other. We will discuss some of the common threading issues in the next section. Blocking threads A common issue where threads are blocked is waiting to obtain the monitor(s) of certain shared resources (objects), that is, holding by other threads. If most of the application server threads are consumed in a certain blocked status, the application becomes gradually unresponsive to user requests. In the Weblogic application server, if a thread keeps executing for more than a configurable period of time (not idle), it gets promoted to the Stuck thread. The more the threads are in the stuck status, the more the server status becomes critical. Configuring the stuck thread parameters is part of the Weblogic performance tuning. Performance symptoms The following symptoms are the performance symptoms that usually appear in cases of thread blocking: Slow application response (increased single request latency and pending user requests) Application server logs might show some stuck threads. The server's healthy status becomes critical on monitoring tools (application server console or different monitoring tools) Frequent application server restarts either manually or automatically Thread dump shows a lot of threads in the blocked status waiting for different resources Application profiling shows a lot of thread blocking An example of thread blocking To understand the effect of thread blocking on application execution, open the HighCPU project and measure the time it takes for execution by adding the following additional lines: long start= new Date().getTime(); .. .. long duration= new Date().getTime()-start; System.err.println("total time = "+duration); Now, try to execute the code with a different number of the thread pool size. We can try using the thread pool size as 50 and 5, and compare the results. In our results, the execution of the application with 5 threads is much faster than 50 threads! Let's now compare the NetBeans profiling results of both the executions to understand the reason behind this unexpected difference. The following screenshot shows the profiling of 50 threads; we can see a lot of blocking for the monitor in the column and the percentage of Monitor to the left waiting around at 75 percent: To get the preceding profiling screen, click on the Profile menu inside NetBeans, and then click on Profile Project (HighCPU). From the pop-up options, select Monitor and check all the available options, and then click on Run. The following screenshot shows the profiling of 5 threads, where there is almost no blocking, that is, less threads compete on these resources: Try to remove the System.out statement from inside the run() method, re-execute the tests, and compare the results. Another factor that also affects the selection of the pool size, especially when the thread execution takes long time, is the context switching overhead. This overhead requires the selection of the optimal pool size, usually related to the number of available processors for our application. Context switching is the CPU switching from one process (or thread) to another, which requires restoration of the execution data (different CPU registers and program counters). 
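To make the pool-size experiment concrete, here is a minimal, self-contained sketch of the kind of contention being profiled. It is not the book's HighCPU project; the class name, task count, and loop sizes are illustrative assumptions:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolSizeDemo {

    // A shared monitor all tasks compete for, similar in spirit to the
    // System.out contention mentioned in the HighCPU example
    private static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        for (int poolSize : new int[] {5, 50}) {
            long start = System.currentTimeMillis();
            ExecutorService pool = Executors.newFixedThreadPool(poolSize);
            for (int i = 0; i < 1_000; i++) {
                pool.submit(() -> {
                    synchronized (LOCK) {   // all threads block here
                        doWork();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.MINUTES);
            System.err.println("pool size " + poolSize + ", total time = "
                    + (System.currentTimeMillis() - start));
        }
    }

    private static void doWork() {
        // Simulate a short critical section; contention, not work,
        // dominates when the pool is oversized
        long dummy = 0;
        for (int i = 0; i < 100_000; i++) {
            dummy += i;
        }
    }
}

Because every task serializes on the same monitor, adding more threads adds blocking and context-switching overhead rather than throughput, which is why the larger pool typically finishes no faster, and often slower.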
The context switching includes suspension of the current executing process, storing its current data, picking up the next process for execution according to its priority, and restoring its data. Although it's supported on the hardware level and is faster, most operating systems do this on the level of software context switching to improve the performance. The main reason behind this is the ability of the software context switching to selectively choose the required registers to save. Thread deadlock When many threads hold the monitor for objects that they need, this will result in a deadlock unless the implementation uses the new explicit Lock interface. In the example, we had a deadlock caused by two different threads waiting to obtain the monitor that the other thread held. The thread profiling will show these threads in a continuous blocking status, waiting for the monitors. All threads that go into the deadlock status become out of service for the user's requests, as shown in the following screenshot: Usually, this happens if the order of obtaining the locks is not planned. For example, if we need to have a quick and easy fix for a multidirectional thread deadlock, we can always lock the smallest or the largest bank account first, regardless of the transfer direction. This will prevent any deadlock from happening in our simple two-threaded mode. But if we have more threads, we need to have a much more mature way to handle this by using the Lock interface or some other technique. Memory performance issues In spite of all this great effort put into the allocated and free memory in an optimized way, we still see memory issues in Java Enterprise applications mainly due to the way people are dealing with memory in these applications. We will discuss mainly three types of memory issues: memory leakage, memory allocation, and application data caching. Memory leakage Memory leakage is a common performance issue where the garbage collector is not at fault; it is mainly the design/coding issues where the object is no longer required but it remains referenced in the heap, so the garbage collector can't reclaim its space. If this is repeated with different objects over a long period (according to object size and involved scenarios), it may lead to an out of memory error. The most common example of memory leakage is adding objects to the static collections (or an instance collection of long living objects, such as a servlet) and forgetting to clean collections totally or partially. Performance symptoms The following symptoms are some of the expected performance symptoms during a memory leakage in our application: The application uses heap memory increased by time The response slows down gradually due to memory congestion OutOfMemoryError occurs frequently in the logs and sometimes an application server restart is required Aggressive execution of garbage collection activities Heap dump shows a lot of objects retained (from the leakage types) A sudden increase of memory paging as reported by the operating system monitoring tools An example of memory leakage We have a sample application ExampleTwo; this is a product catalog where users can select products and add them to the basket. The application is written in spaghetti code, so it has a lot of issues, including bad design, improper object scopes, bad caching, and memory leakage. 
The following screenshot shows the product catalog browser page: One of the bad issues is the usage of the servlet instance (or static members), as it causes a lot of issues in multiple threads and has a common location for unnoticed memory leakages. We have added the following instance variable as a leakage location: private final HashMap<String, HashMap> cachingAllUsersCollection = new HashMap(); We will add some collections to the preceding code to cause memory leakage. We also used the caching in the session scope, which causes implicit leakage. The session scope leakage is difficult to diagnose, as it follows the session life cycle. Once the session is destroyed, the leakage stops, so we can say it is less severe but more difficult to catch. Adding global elements, such as a catalog or stock levels, to the session scope has no meaning. The session scope should only be restricted to the user-specific data. Also, forgetting to remove data that is not required from a session makes the memory utilization worse. Refer to the following code: @Stateful public class CacheSessionBean Instead of using a singleton class here or stateless bean with a static member, we used the Stateful bean, so it is instantiated per user session. We used JPA beans in the application layers instead of using View Objects. We also used loops over collections instead of querying or retrieving the required object directly, and so on. It would be good to troubleshoot this application with different profiling aspects to fix all these issues. All these factors are enough to describe such a project as spaghetti. We can use our knowledge in Apache JMeter to develop simple testing scenarios. As shown in the following screenshot, the scenario consists of catalog navigations and details of adding some products to the basket: Executing the test plan using many concurrent users over many iterations will show the bad behavior of our application, where the used memory is increased by time. There is no justification as the catalog is the same for all users and there's no specific user data, except for the IDs of the selected products. Actually, it needs to be saved inside the user session, which won't take any remarkable memory space. In our example, we intend to save a lot of objects in the session, implement a wrong session level, cache, and implement meaningless servlet level caching. All this will contribute to memory leakage. This gradual increase in the memory consumption is what we need to spot in our environment as early as possible (as we can see in the following screenshot, the memory consumption in our application is approaching 200 MB!): Improper data caching Caching is one of the critical components in the enterprise application architecture. It increases the application performance by decreasing the time required to query the object again from its data store, but it also complicates the application design and causes a lot of other secondary issues. The main concerns in the cache implementation are caching refresh rate, caching invalidation policy, data inconsistency in a distributed environment, locking issues while waiting to obtain the cached object's lock, and so on. Improper caching issue types The improper caching issue can take a lot of different variants. We will pick some of them and discuss them in the following sections. No caching (disabled caching) Disabled caching will definitely cause a big load over the interfacing resources (for example, database) by hitting it in with almost every interaction. 
This should be avoided while designing an enterprise application; otherwise, the application won't be usable. Fortunately, this has less impact than using a wrong caching implementation! Most application components, such as the database, JPA, and application servers, already have out-of-the-box caching support.

Too small caching size

A too-small cache size is a common performance issue, where the cache size is initially determined but doesn't get reviewed as the application data grows. Cache sizing is affected by many factors, such as the available memory (if it allows more caching) and the type of the data: lookup data should be cached entirely when possible, while transactional data shouldn't be cached unless required, and then only under a very strict locking mechanism. Also, the cache replacement policy and invalidation play an important role and should be tailored according to the application's needs, for example, least frequently used, least recently used, most frequently used, and so on.

As a general rule, the bigger the cache size, the higher the cache hit rate and the lower the cache miss ratio. The proper replacement policy contributes here as well; if we are working—as in our example—on an online product catalog, we may use the least recently used policy so that all the old products will be removed, which makes sense, as users usually look for the new products.

Monitoring the cache utilization periodically is an essential proactive measure to catch any deviations early and adjust the cache size according to the monitoring results. For example, if the cache saturation is more than 90 percent and the cache miss ratio is high, resizing the cache is required. Missed cache hits are very costly, as they hit the cache first and then the resource itself (for example, the database) to get the required object, and then add this loaded object into the cache again by evicting another object (if the cache is 100 percent full), according to the used cache replacement policy.

Too big caching size

A too-big cache size might cause memory issues. If there is no control over the cache size and it keeps growing, and if it is a Java cache, the garbage collector will consume a lot of time trying to garbage collect that huge memory, aiming to free some of it. This will increase the garbage collection pause time and decrease the cache throughput. If the cache throughput decreases, the latency to get objects from the cache will increase, causing the cache retrieval cost to be so high that it might be slower than hitting the actual resource (for example, the database).

Using the wrong caching policy

Each application's cache implementation should be tailored according to the application's needs and data types (transactional versus lookup data). If the selection of the caching policy is wrong, the cache will hurt the application's performance rather than improve it.
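Before moving on to the symptoms, here is a minimal sketch of the periodic monitoring recommended above: a cache wrapped so that its hit ratio is observable. The MonitoredCache class is hypothetical and not part of the book's ExampleTwo project:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class MonitoredCache<K, V> {

    private final Map<K, V> delegate = new ConcurrentHashMap<K, V>();
    private final AtomicLong hits = new AtomicLong();
    private final AtomicLong misses = new AtomicLong();

    public V get(K key) {
        V value = delegate.get(key);
        if (value != null) {
            hits.incrementAndGet();   // found in the cache
        } else {
            misses.incrementAndGet(); // caller must now hit the real resource
        }
        return value;
    }

    public void put(K key, V value) {
        delegate.put(key, value);
    }

    // Hit ratio in [0, 1]; log this on a schedule to catch deviations early
    public double hitRatio() {
        long h = hits.get();
        long m = misses.get();
        return (h + m) == 0 ? 0.0 : (double) h / (h + m);
    }
}

Logging hitRatio() on a schedule makes deviations, such as a rising miss ratio after the application data grows, visible early enough to resize the cache.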
Performance symptoms According to the cache issue type and different cache configurations, we will see the following symptoms: Decreased cache hit rate (and increased cache missed ratio) Increased cache loading because of the improper size Increased cache latency with a huge caching size Spiky pattern in the performance testing response time, in case the cache size is not correct, causes continuous invalidation and reloading of the cached objects An example of improper caching techniques In our example, ExampleTwo, we have demonstrated many caching issues, such as no policy defined, global cache is wrong, local cache is improper, and no cache invalidation is implemented. So, we can have stale objects inside the cache. Cache invalidation is the process of refreshing or updating the existing object inside the cache or simply removing it from the cache. So in the next load, it reflects its recent values. This is to keep the cached objects always updated. Cache hit rate is the rate or ratio in which cache hits match (finds) the required cached object. It is the main measure for cache effectiveness together with the retrieval cost. Cache miss rate is the rate or ratio at which the cache hits the required object that is not found in the cache. Last access time is the timestamp of the last access (successful hit) to the cached objects. Caching replacement policies or algorithms are algorithms implemented by a cache to replace the existing cached objects with other new objects when there are no rooms available for any additional objects. This follows missed cache hits for these objects. Some examples of these policies are as follows: First-in-first-out (FIFO): In this policy, the cached object is aged and the oldest object is removed in favor of the new added ones. Least frequently used (LFU): In this policy, the cache picks the less frequently used object to free the memory, which means the cache will record statistics against each cached object. Least recently used (LRU): In this policy, the cache replaces the least recently accessed or used items; this means the cache will keep information like the last access time of all cached objects. Most recently used (MRU): This policy is the opposite of the previous one; it removes the most recently used items. This policy fits the application where items are no longer needed after the access, such as used exam vouchers. Aging policy: Every object in the cache will have an age limit, and once it exceeds this limit, it will be removed from the cache in the simple type. In the advanced type, it will also consider the invalidation of the cache according to predefined configuration rules, for example, every three hours, and so on. It is important for us to understand that caching is not our magic bullet and it has a lot of related issues and drawbacks. Sometimes, it causes overhead if not correctly tailored according to real application needs.
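To make the least recently used policy above concrete, here is a minimal LRU cache sketch based on LinkedHashMap's access-order mode. It is an illustrative example rather than production cache code (no thread safety, no statistics):

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {

    private final int capacity;

    public LruCache(int capacity) {
        // accessOrder = true makes iteration order follow access recency
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    // Called by LinkedHashMap after every put; evicts the least
    // recently used entry once the configured capacity is exceeded
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}

Passing true as the third constructor argument switches iteration to access order, so the eldest entry handed to removeEldestEntry is always the least recently used one.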
Creating a JSF composite component

Packt
22 Oct 2014
9 min read
This article by David Salter, author of the book NetBeans IDE 8 Cookbook, explains how to create a JSF composite component in NetBeans. (For more resources related to this topic, see here.)

JSF is a rich, component-based framework that provides many components developers can use to enrich their applications. JSF 2 also allows composite components to be created easily, which can then be inserted into other JSF pages in a similar way to any other JSF components, such as buttons and labels. In this article, we'll see how to create a custom component that displays an input label and asks for corresponding input. If the input is not validated by the JSF runtime, we'll show an error message. The component is going to look like this:

The custom component is built up from three different standard JSF components. On the left, we have a <h:outputText/> component that displays the label. Next, we have a <h:inputText /> component. Finally, we have a <h:message /> component. Putting these three components together like this is a very useful pattern when designing input forms within JSF.

Getting ready

To create a JSF composite component, you will need a working installation of WildFly that has been configured within NetBeans. We will be using the Enterprise download bundle of NetBeans, as this includes all of the tools we need without having to download any additional plugins.

How to do it…

First of all, we need to create a web application and then create a JSF composite component within it. Perform the following steps:

Click on File and then New Project….
Select Java Web from the list of Categories and Web Application from the list of Projects. Click on Next.
Enter the Project Name value as CompositeComp. Click on Next.
Ensure that Add to Enterprise Application is set to <None>, Server is set to WildFly Application Server, Java EE Version is set to Java EE 7 Web, and Context Path is set to /CompositeComp. Click on Next.
Click on the checkbox next to JavaServer Faces, as we are using this framework. All of the default JSF configurations are correct, so click on the Finish button to create the project.
Right-click on the CompositeComp project within the Projects explorer and click on New and then Other….
In the New File dialog, select JavaServer Faces from the list of Categories and JSF Composite Component from the list of File Types. Click on Next.
On the New JSF Composite Component dialog, enter the File Name value as inputWithLabel and change the folder to resources/cookbook. Click on Finish to create the custom component.

In JSF, composite components are created as Facelets files that are stored within the resources folder of the web application. Within the resources folder, multiple subfolders can exist, each representing a namespace of a custom component. Within each namespace folder, individual composite components are stored with filenames that match the composite component names. We have just created a composite component within the cookbook namespace called inputWithLabel.

Within each composite component file, there are two sections: an interface and an implementation. The interface lists all of the attributes that are required by the composite component, and the implementation provides the XHTML code to represent the component. Let's now define our component by specifying the interface and the implementation. Perform the following steps:

The inputWithLabel.xhtml file should be open for editing. If not, double-click on it within the Projects explorer to open it.
For our composite component, we need two attributes to be passed into the component: the text for the label and the expression language to bind the input box to. Change the interface section of the file to read:

<cc:interface>
  <cc:attribute name="labelValue" />
  <cc:attribute name="editValue" />
</cc:interface>

To render the component, we need to instantiate a <h:outputText /> tag to display the label, a <h:inputText /> tag to receive the input from the user, and a <h:message /> tag to display any errors for the input field. Change the implementation section of the file to read:

<cc:implementation>
  <style>
    .outputText{width: 100px; }
    .inputText{width: 100px; }
    .errorText{width: 200px; color: red; }
  </style>
  <h:panelGrid id="panel" columns="3" columnClasses="outputText, inputText, errorText">
    <h:outputText value="#{cc.attrs.labelValue}" />
    <h:inputText value="#{cc.attrs.editValue}" id="inputText" />
    <h:message for="inputText" />
  </h:panelGrid>
</cc:implementation>

Click on the lightbulb on the left-hand side of the editor window and accept the fix to add the missing h namespace declaration (xmlns:h="http://java.sun.com/jsf/html") to the <html> tag.

We can now reference the composite component from within the Facelets page. Add the following code inside the <h:body> tag of the page:

<h:form id="inputForm">
  <cookbook:inputWithLabel labelValue="Forename" editValue="#{personController.person.foreName}"/>
  <cookbook:inputWithLabel labelValue="Last Name" editValue="#{personController.person.lastName}"/>
  <h:commandButton type="submit" value="Submit" action="#{personController.submit}"/>
</h:form>

This code instantiates two instances of our inputWithLabel composite control and binds them to personController. We haven't got one of those yet, so let's create one, along with a class to represent a person. Perform the following steps:

Create a new Java class within the project. Enter Class Name as Person and Package as com.davidsalter.cookbook.compositecomp. Click on Finish.
Add members to the class to represent foreName and lastName:

private String foreName;
private String lastName;

Use the Encapsulate Fields refactoring to generate getters and setters for these members.
To allow error messages to be displayed if the foreName and lastName values are inputted incorrectly, we will add some Bean Validation annotations to the attributes of the class. Annotate the foreName member of the class as follows:

@NotNull
@Size(min=1, max=25)
private String foreName;

Annotate the lastName member of the class as follows:

@NotNull
@Size(min=1, max=50)
private String lastName;

Use the Fix Imports tool to add the required imports for the Bean Validation annotations.
Create a new Java class within the project. Enter Class Name as PersonController and Package as com.davidsalter.cookbook.compositecomp. Click on Finish.
We need to make the PersonController class an @Named bean so that it can be referenced via expression language from within JSF pages. Annotate the PersonController class as follows:

@Named
@RequestScoped
public class PersonController {

We need to add a Person instance to PersonController that will be used to transfer data from the JSF page to the named bean. We will also need to add a method to the bean that will redirect JSF to an output page after the names have been entered.
Add the following to the PersonController class:

private Person person = new Person();

public Person getPerson() {
    return person;
}

public void setPerson(Person person) {
    this.person = person;
}

public String submit() {
    return "results.xhtml";
}

The final task before completing our application is to add a results page so we can see what input the user entered. This output page will simply display the values of foreName and lastName that have been entered.

Create a new JSF page called results that uses the Facelets syntax. Change the <h:body> tag of this page to read:

<h:body>
  You Entered:
  <h:outputText value="#{personController.person.foreName}" />&nbsp;
  <h:outputText value="#{personController.person.lastName}" />
</h:body>

The application is now complete. Deploy and run the application by right-clicking on the project within the Projects explorer and selecting Run. Note that two instances of the composite component have been created and displayed within the browser. Click on the Submit button without entering any information and note how the error messages are displayed:

Enter some valid information and click on Submit, and note how the information entered is echoed back on a second page.

How it works…

Creating composite components was a new feature added in JSF 2. Creating custom components was a very tedious job in JSF 1.x, and the designers of JSF 2 thought that the majority of custom components created in JSF could probably be built by adding existing components together. As we have seen, we've added together three different existing JSF components and made a very useful composite component.

It's useful to distinguish between custom components and composite components. Custom components are entirely new components that did not exist before. They are created entirely in Java code and are built into frameworks such as PrimeFaces and RichFaces. Composite components are built from existing components, and their graphical view is designed in .xhtml files.

There's more...

When creating composite components, it may be necessary to specify attributes. By default, attributes are not mandatory when creating a composite component. They can, however, be made mandatory by adding the required="true" attribute to their definition, as follows:

<cc:attribute name="labelValue" required="true" />

If an attribute is specified as required but is not present, a JSF error will be produced, as follows:

/index.xhtml @11,88 <cookbook:inputWithLabel> The following attribute(s) are required, but no values have been supplied for them: labelValue.

Sometimes, it can be useful to specify a default value for an attribute. This is achieved by adding the default="…" attribute to the definition:

<cc:attribute name="labelValue" default="Please enter a value" />

Summary

In this article, we have learned to create a JSF composite component using NetBeans.

Resources for Article:

Further resources on this subject:

Creating a Lazarus Component [article]
Top Geany features you need to know about [article]
Getting to know NetBeans [article]
New functionality in OpenCV 3.0

Packt
25 Aug 2014
5 min read
In this article by Oscar Deniz Suarez, coauthor of the book OpenCV Essentials, we will cover the forthcoming Version 3.0, which represents a major evolution of the OpenCV library for Computer Vision. OpenCV already includes several new techniques that are not available in the latest official release (2.4.9). The new functionality can already be used by downloading and compiling the latest development version from the official repository. This article provides an overview of some of the new techniques implemented. Other numerous lower-level changes in the forthcoming Version 3.0 (updated module structure, C++ API changes, transparent API for GPU acceleration, and so on) are not discussed. (For more resources related to this topic, see here.)

Line Segment Detector

OpenCV users have had the Hough transform-based straight line detector available in previous versions. An improved method called Line Segment Detector (LSD) is now available. LSD is based on the algorithm described at http://dx.doi.org/10.5201/ipol.2012.gjmr-lsd. This method has been shown to be more robust and faster than the best previous Hough-based detector (the Progressive Probabilistic Hough Transform). The detector is now part of the imgproc module. OpenCV provides a short sample code ([opencv_source_code]/samples/cpp/lsd_lines.cpp), which shows how to use the LineSegmentDetector class. The following list shows the main components of the class:

<constructor>: The constructor allows you to enter the parameters of the algorithm, particularly the level of refinement we want in the result
detect: This method detects line segments in the image
drawSegments: This method draws the segments in a given image
compareSegments: This method draws two sets of segments in a given image; the two sets are drawn with blue and red color lines

Connected components

Previous versions of OpenCV have included functions for working with image contours. Contours are the external limits of connected components (that is, regions of connected pixels in a binary image). The new functions, connectedComponents and connectedComponentsWithStats, retrieve connected components as such. The connected components are retrieved as a labeled image with the same dimensions as the input image. This allows drawing the components on the original image easily. The connectedComponentsWithStats function retrieves useful statistics about each component, shown in the following list:

CC_STAT_LEFT: The leftmost (x) coordinate, which is the inclusive start of the bounding box in the horizontal direction
CC_STAT_TOP: The topmost (y) coordinate, which is the inclusive start of the bounding box in the vertical direction
CC_STAT_WIDTH: The horizontal size of the bounding box
CC_STAT_HEIGHT: The vertical size of the bounding box
CC_STAT_AREA: The total area (in pixels) of the connected component

Scene text detection

Text recognition is a classic problem in Computer Vision, and Optical Character Recognition (OCR) is now routinely used in our society. In OCR, the input image is expected to depict typewriter black text over a white background. In recent years, researchers have aimed at the more challenging problem of recognizing text "in the wild": on street signs and indoor signs, with diverse backgrounds, fonts, colors, and so on. The following figure shows an example of the difference between the two scenarios. In the latter scenario, OCR cannot be applied directly to the input images. Consequently, text recognition is actually accomplished in two steps.
The text is first localized in the image, and then character or word recognition is performed on the cropped region. OpenCV now provides a scene text detector based on the algorithm described in Neumann L., Matas J.: Real-Time Scene Text Localization and Recognition, CVPR 2012 (Providence, Rhode Island, USA). The OpenCV implementation makes use of additional improvements found at http://158.109.8.37/files/GoK2013.pdf. OpenCV includes an example ([opencv_source_code]/samples/cpp/textdetection.cpp) that detects and draws text regions in an input image.

The KAZE and AKAZE features

Several 2D features have been proposed in the computer vision literature. Generally, the two most important aspects of feature extraction algorithms are computational efficiency and robustness. Among the latest contenders are the KAZE (a Japanese word meaning "wind") and Accelerated-KAZE (AKAZE) detectors. There are reports that show that KAZE features are both robust and efficient compared with other widely known features (BRISK, FREAK, and so on). The underlying algorithm is described in KAZE Features, Pablo F. Alcantarilla, Adrien Bartoli, and Andrew J. Davison, in European Conference on Computer Vision (ECCV), Florence, Italy, October 2012. As with other keypoint detectors in OpenCV, the KAZE implementation allows retrieving both keypoints and descriptors (that is, a feature vector computed around the keypoint neighborhood). The detector follows the same framework used in OpenCV for other detectors, so drawing methods are also available.

Computational photography

One of the modules with the most improvements in the forthcoming Version 3.0 is the computational photography module (photo). The new techniques include the functionalities mentioned in the following list:

HDR imaging: Functions for handling High-Dynamic Range images (tonemapping, exposure alignment, camera calibration with multiple exposures, and exposure fusion)
Seamless cloning: Functions for realistically inserting one image into another image, with an arbitrarily shaped region of interest
Non-photorealistic rendering: This technique includes non-photorealistic filters (such as a pencil-like drawing effect) and edge-preserving smoothing filters (similar to the bilateral filter)

New modules

Finally, we provide a list of the new modules in development for Version 3.0:

videostab: Global motion estimation, Fast Marching method
softcascade: Implements a stageless variant of the cascade detector, which is considered more accurate
shape: Shape matching and retrieval; Shape Context descriptor and matching algorithm, Hausdorff distance, and Thin-Plate Splines
cuda<X>: Several modules with CUDA-accelerated implementations of other functions in the library

Summary

In this article, we learned about the different functionalities in OpenCV 3.0 and its different components.

Resources for Article:

Further resources on this subject:

Wrapping OpenCV [article]
A quick start – OpenCV fundamentals [article]
Linking OpenCV to an iOS project [article]

Using Frameworks

Packt
24 Dec 2014
24 min read
In this article by Alex Libby, author of the book Responsive Media in HTML5, we will cover the following topics:

- Adding responsive media to a CMS
- Implementing responsive media in frameworks such as Twitter Bootstrap
- Using the Less CSS preprocessor to create CSS media queries

Ready? Let's make a start!

Introducing our three examples

Throughout this article, we've covered a number of simple, practical techniques to make media responsive within our sites. These are good, but nothing beats seeing the principles used in a real-world context, right? Absolutely! To prove this, we're going to look at three examples throughout this article, using technologies that you are likely to be familiar with: WordPress, Bootstrap, and Less CSS. Each demo assumes a certain level of prior knowledge, so it may be worth reading up a little first. In all three cases, we should see that, with little effort, we can easily add responsive media to each of these technologies. Let's kick off with a look at working with WordPress.

Adding responsive media to a CMS

We will begin the first of our three examples with a look at the ever popular WordPress system. Created back in 2003, WordPress has been used to host sites by small independent traders all the way up to Fortune 500 companies, including some of the biggest names in business, such as eBay, UPS, and Ford. WordPress comes in two flavors; the one we're interested in is the self-install version available at http://www.wordpress.org. This example assumes you have a local installation of WordPress installed and working; if not, head over to http://codex.wordpress.org/Installing_WordPress and follow the tutorial to get started. We will also need a DOM inspector such as Firebug; it can be downloaded from http://www.getfirebug.com if you don't already have it. If you only have access to WordPress.com (the other flavor of WordPress), some of the tips in this section may not work, due to limitations in that version of WordPress. Okay, assuming we have WordPress set up and running, let's make a start on making uploaded media responsive.

Adding responsive media manually

It's at this point that you're probably thinking we have to do something complex when working in WordPress, right? Wrong! As long as you use the Twenty Fourteen core theme, the work has already been done for you. For this exercise, and the following sections, I will assume you have installed and/or activated WordPress' Twenty Fourteen theme. Don't believe me? It's easy to verify: try uploading an image to a post or page in WordPress, then resize the browser. You should see the image shrink or grow in size as the browser window changes size. If we take a look at the code using Firebug, we can also see height: auto set against a number of the img tags; this is frequently done for responsive images to ensure they maintain the correct proportions. The responsive style works well in the Twenty Fourteen theme; if you are using an older theme, we can easily apply the same style rule to images stored in WordPress, as the sketch below shows.
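A minimal sketch of that rule, assuming an older theme whose post images sit inside an .entry-content container (the selector is an assumption; match it to your theme's markup):

    .entry-content img {
        max-width: 100%; /* never overflow the parent container */
        height: auto;    /* preserve the image's proportions */
    }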
Fixing a responsive issue

So far, so good. Now we have the Twenty Fourteen theme in place, we've uploaded images of various sizes, and we try resizing the browser window... only to find that the images don't seem to grow in size above a certain point. At least not well. What gives? Well, it's a classic trap: we've talked about using percentage values to dynamically resize images, only to find that we've shot ourselves in the foot (proverbially speaking, of course!). The reason? Let's dive in and find out using the following steps:

1. Browse to your WordPress installation and activate Firebug using F12.
2. Switch to the HTML tab and select your preferred image.
3. In Firebug, look for the <header class="entry-header"> line, then look for the following rule in the rendered styles on the right-hand side of the window:

    .site-content .entry-header, .site-content .entry-content,
    .site-content .entry-summary, .site-content .entry-meta,
    .page-content {
        margin: 0 auto; max-width: 474px;
    }

4. The keen-eyed amongst you should spot the issue straightaway: we're using percentages to make the sizes dynamic for each image, yet we're constraining its parent container! To fix this, change the highlighted rule as indicated:

    .site-content .entry-header, .site-content .entry-content,
    .site-content .entry-summary, .site-content .entry-meta,
    .page-content {
        margin: 0 auto; max-width: 100%;
    }

5. To balance the content, we need to make the same change to the comments area, so go ahead and change max-width to 100% as follows:

    .comments-area {
        margin: 48px auto; max-width: 100%;
        padding: 0 10px;
    }

6. If we try resizing the browser window now, we should see the image size adjust automatically. At this stage, the change is not permanent. To fix this, log in to WordPress' admin area, go to Appearance | Editor, and add the adjusted styles at the foot of the Stylesheet (style.css) file.

Let's move on. Did anyone notice two rather critical issues with the approach used here? Hopefully, you will have spotted that if a large image is used and then resized to a smaller size, we're still working with large files, and that our alteration has a big impact on the theme, even though it is only a small change. It proves that we can make images truly responsive, but it is the kind of change we would not want to make without careful consideration and plenty of testing. Making changes directly to the CSS style sheet is also not ideal, as they could be lost when upgrading to a newer version of the theme. We can improve on this by either using a custom CSS plugin to manage these changes or (better) using a plugin that tells WordPress to swap an existing image for a smaller one automatically when we resize the window to a smaller size.

Using plugins to add responsive images

A drawback of using a theme such as Twenty Fourteen is the resizing of images. While we can grow or shrink an image when resizing the browser window, we are still technically altering the size of what could be an unnecessarily large image! This is considered bad practice (and also bad manners!): browsing on a desktop with a fast Internet connection might not be affected much, but the same cannot be said for mobile devices, where we have less choice. To overcome this, we need to take a different approach: get WordPress to automatically swap in smaller images when we reach a particular size or breakpoint. Instead of doing this manually using code, we can take advantage of one of the many plugins available that offer responsive capabilities in some format. I feel a demo coming on. Now's a good time to take a look at one such plugin in action:
1. Let's start by downloading our plugin. For this exercise, we'll use the PictureFill.WP plugin by Kyle Ricks, which is available at https://wordpress.org/plugins/picturefillwp/. We're going to use the version that uses Picturefill.js version 2, available from https://github.com/kylereicks/picturefill.js.wp/tree/master. Click on Download ZIP to get the latest version.
2. Log in to the admin area of your WordPress installation and click on Settings and then Media. Make sure your image settings for the Thumbnail, Medium, and Large sizes are set to values that work with useful breakpoints in your design.
3. We then need to install the plugin. In the admin area, go to Plugins | Add New to install the plugin, and activate it in the normal manner. At this point, we will have installed responsive capabilities in WordPress; everything is managed automatically by the plugin, and there is no need to change any settings (except maybe the image sizes we talked about in step 2).
4. Switch back to your WordPress frontend and try resizing the screen to a smaller size.
5. Press F12 to activate Firebug and switch to the HTML tab.
6. Press Ctrl + Shift + C (or Cmd + Shift + C for Mac users) to toggle the element inspector, then move your mouse over your resized image. If we've set the right image sizes in WordPress' admin area and the window is resized correctly, we can expect to see markup like the sketch below.
7. To confirm we are indeed using a smaller image, right-click on the image and select View Image Info; it will display the dimensions and file size of the image actually being served.

We should now have a fully functioning plugin within our WordPress installation. A good tip is to test this thoroughly, if only to ensure we've set the right sizes for our breakpoints in WordPress!
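The exact markup depends on the plugin version and your Media settings, but with Picturefill.js 2 it will be along these lines (the file names and widths here are placeholders, not guaranteed plugin output):

    <img srcset="holiday-300x150.jpg 300w,
                 holiday-768x384.jpg 768w,
                 holiday-1024x512.jpg 1024w"
         sizes="(max-width: 640px) 100vw, 640px"
         alt="Holiday photo">

The browser (or the Picturefill polyfill, on browsers without native srcset support) picks the smallest candidate that still fills the slot at the current viewport width.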
What happens if WordPress doesn't refresh my thumbnail images properly? This can happen; if it does, install the Regenerate Thumbnails plugin to resolve the issue. It's available at https://wordpress.org/plugins/regenerate-thumbnails/.

Adding responsive videos using plugins

Now that we can add responsive images to WordPress, let's turn our attention to videos. The process of adding them is a little more complex; we need to use code to achieve the best effect. Let's examine our options. If you are hosting your own videos, the simplest way is to add some additional CSS style rules. Although this method removes any reliance on JavaScript or jQuery, the result isn't perfect and will need additional styles to handle the repositioning of the play button overlay. Although we are working locally, we should remember the note from earlier in this article: changes to the CSS style sheet may be lost when upgrading, so a custom CSS plugin should be used, if possible, to retain any changes. A CSS-only solution requires only a couple of steps:

1. Browse to your WordPress theme folder and open a copy of styles.css in your text editor of choice.
2. Add the following lines at the end of the file and save it:

    video { width: 100%; height: 100%; max-width: 100%; }
    .wp-video { width: 100% !important; }
    .wp-video-shortcode { width: 100% !important; }

3. Close the file. You now have the basics in place for responsive videos.

At this stage, you're probably thinking, "great, my videos are now responsive; I can handle the repositioning of the play button overlay myself, no problem"; sounds about right? Thought so, and therein lies the main drawback of this method! Repositioning the overlay shouldn't be too difficult. The real problem is the high cost of the hardware and bandwidth needed to host videos of any reasonable quality; even if we were to spend time repositioning the overlay, those costs would outweigh any benefit of using a CSS-only solution. A far better option is to let a service such as YouTube do all the hard work for you and to simply embed your chosen video directly from YouTube into your pages. The main benefit of this is that YouTube's servers do all the hard work: you can take advantage of an increased audience, and YouTube will automatically optimize the video for the best resolution possible for the Internet connections being used by your visitors. Although aimed at beginners, wpbeginner.com has a useful article at http://www.wpbeginner.com/beginners-guide/why-you-should-never-upload-a-video-to-wordpress/ on the pros and cons of self-hosting videos and why using an external service is preferable.

Using plugins to embed videos

Embedding videos from an external service into WordPress is, ironically, far simpler than using the CSS method. There are dozens of plugins available to achieve this, but one of the simplest to use (and my personal favorite) is FluidVids, by Todd Motto, available at http://github.com/toddmotto/fluidvids/. To get it working in WordPress, follow these steps, using a video from YouTube as the basis for our example:

1. Browse to your WordPress theme folder and open a copy of functions.php in your usual text editor.
2. At the bottom, add the following lines:

    add_action( 'wp_enqueue_scripts', 'add_fluidvid' );

    function add_fluidvid() {
        wp_enqueue_script( 'fluidvids',
            get_stylesheet_directory_uri() . '/lib/js/fluidvids.js',
            array(), false, true );
    }

3. Save the file, then log in to the admin area of your WordPress installation.
4. Navigate to Posts | Add New to add a post, switch to the Text tab of your post editor, then add http://www.youtube.com/watch?v=Vpg9yizPP_g&hd=1 to the editor on the page.
5. Click on Update to save your post, then click on View post to see the video in action.

There is no need to configure WordPress any further: any video added from services such as YouTube or Vimeo will automatically be made responsive by the FluidVids script. At this point, try resizing the browser window. If all is well, we should see the video shrink or grow in size as the window is resized, and we can take a peek at the compiled results within Firebug to prove that the code is working. For those of us who are not feeling quite so brave (!), there is fortunately a WordPress plugin available that achieves the same results without configuration. It's available at https://wordpress.org/plugins/fluidvids/ and can be downloaded and installed using the normal process for WordPress plugins.
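One caveat when enqueuing fluidvids.js by hand, as we did in functions.php: version 2 of the library needs to be initialized once it has loaded. A minimal sketch, using the selectors and player domains from the library's documentation (treat both as assumptions to adjust for your setup):

    // For example, in /lib/js/fluidvids-init.js, enqueued after fluidvids.js
    fluidvids.init({
        selector: ['iframe', 'object'], // elements to make fluid
        players: ['www.youtube.com', 'player.vimeo.com'] // allowed embed hosts
    });

The packaged WordPress plugin mentioned above takes care of this step for you.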
Let's change track and move on to our next demo. I feel a need to get stuck into some coding, so let's take a look at how we can implement responsive images in frameworks such as Bootstrap. To see what I mean, we'll use Bootstrap for our second demo, where we'll see just how easy it is to add images and video to a Bootstrap-enabled site.

Implementing responsive media in Bootstrap

A question: as developers, hands up if you have not heard of Bootstrap? Good, not too many hands going up! Why have I asked this question, I hear you say? Easy: it's to illustrate that in popular frameworks (such as Bootstrap), it is easy to add basic responsive capabilities to media such as images or video. The exact process may differ from framework to framework, but the result is likely to be very similar. If you would like to explore some of the free Bootstrap templates that are available, then http://www.startbootstrap.com/ is well worth a visit!

Using Bootstrap's CSS classes

Making images and videos responsive in Bootstrap uses a slightly different approach to what we've examined so far, only because we don't have to define each style property explicitly; instead, we simply add the appropriate class to the media HTML for it to render responsively. For the purposes of this demo, we'll use an edited version of the Blog Page example, available at http://www.getbootstrap.com/getting-started/#examples; a copy of the edited version is available in the code download that accompanies this article. Before we begin, go ahead and download a copy of the Bootstrap Example folder that is in the code download. Inside, you'll find the CSS, image, and JavaScript files needed, along with our HTML markup file. Let's make a start on our example using the following steps:

1. Open up bootstrap.html and look for the following lines (around lines 34 to 35):

    <p class="blog-post-meta">January 1, 2014 by <a href="#">Mark</a></p>
    <p>This blog post shows a few different types of content that's
      supported and styled with Bootstrap. Basic typography, images,
      and code are all supported.</p>

2. Immediately below, add the following code; this contains the markup for our embedded video, using Bootstrap's responsive CSS styling:

    <div class="bs-example">
      <div class="embed-responsive embed-responsive-16by9">
        <iframe allowfullscreen=""
          src="http://www.youtube.com/embed/zpOULjyy-n8?rel=0"
          class="embed-responsive-item"></iframe>
      </div>
    </div>

3. With the video now styled, let's go ahead and add an image; this will go in the About section on the right. Look for these lines, on or around lines 74 and 75:

    <h4>About</h4>
    <p>Etiam porta <em>sem malesuada magna</em> mollis euismod. Cras
      mattis consectetur purus sit amet fermentum. Aenean lacinia
      bibendum nulla sed consectetur.</p>

4. Immediately below, add the following markup for our image:

    <a href="#" class="thumbnail">
      <img src="http://placehold.it/350x150" class="img-responsive">
    </a>

5. Save the file and preview the results in a browser. If all is well, we should see our video and image appear.

At this point, try resizing the browser: you should see the video and placeholder image shrink or grow as the window is resized. The great thing about Bootstrap is that the right styles have already been set for each class. All we need to do is apply the correct class to the appropriate media file (.embed-responsive embed-responsive-16by9 for videos, or .img-responsive for images) for that image or video to behave responsively within our site.
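For the curious, here is roughly what those two classes resolve to in Bootstrap 3's stylesheet (paraphrased and trimmed; check bootstrap.css in your download for the exact rules):

    .img-responsive { display: block; max-width: 100%; height: auto; }

    .embed-responsive { position: relative; display: block;
                        height: 0; padding: 0; overflow: hidden; }
    .embed-responsive-16by9 { padding-bottom: 56.25%; }
    .embed-responsive .embed-responsive-item { position: absolute;
                        top: 0; left: 0; bottom: 0;
                        width: 100%; height: 100%; border: 0; }

The 56.25% bottom padding is simply 9/16 expressed as a percentage, which is what keeps the video container at a 16:9 aspect ratio as it scales.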
In this example, we used Bootstrap's .img-responsive class in the code; if we have a lot of images, we could consider using img { max-width: 100%; height: auto; } instead. So far, we've worked with two popular frameworks in the form of WordPress and Bootstrap. This is great, but it can mean getting stuck into a lot of CSS styling, particularly if we're working with media queries, as we saw earlier in the article! Can we do anything about this? Absolutely! It's time for a brief look at CSS preprocessing and how it can help with adding responsive media to our pages.

Using Less CSS to create responsive content

Working with frameworks often means getting stuck into a lot of CSS styling; this can become awkward to manage if we're not careful! To help with this, and for our third scenario, we're going back to basics to work on an alternative way of rendering CSS, using the Less CSS preprocessing language. Why? Well, as a superset (or extension) of CSS, Less allows us to write our styles more efficiently; it then compiles them into valid CSS. The aim of this example is to show that if you're already using Less, we can still apply the same principles covered throughout this article to make our content responsive. It should be noted that this exercise assumes a certain level of prior experience with Less; if this is your first time, you may like to peruse my article, Learning Less, by Packt Publishing. There will be a few steps involved in making the changes, and the finished page will look identical to the one we started with. If you're thinking there should be no visible difference, you would be right: working with Less is all about writing CSS more efficiently, not changing the design. Let's see what is involved:

1. We'll start by extracting copies of the Less CSS example from the code download that accompanies this article; inside it, we'll find our HTML markup, reset style sheet, images, and the video needed for our demo. Save the folder locally to your PC.
2. Next, add the following styles in a new file, saving it as responsive.less in the css subfolder; we'll start with some of the styling for the base elements, such as the video and banner:

    #wrapper { width: 96%; max-width: 45rem; margin: auto; padding: 2% }
    #main { width: 60%; margin-right: 5%; float: left }
    #video-wrapper video { max-width: 100%; }
    #banner { background-image: url('../img/abstract-banner-large.jpg');
      height: 15.31rem; width: 45.5rem; max-width: 100%; float: left;
      margin-bottom: 15px; }
    #skipTo { display: none; li { background: #197a8a }; }
    p { font-family: "Droid Sans", sans-serif; }
    aside { width: 35%; float: right; }
    footer { border-top: 1px solid #ccc; clear: both; height: 30px;
      padding-top: 5px; }

3. We need to add some basic formatting styles for images and links, so go ahead and add the following, immediately below the #skipTo rule:

    a { text-decoration: none; text-transform: uppercase }
    a, img { border: medium none; color: #000; font-weight: bold;
      outline: medium none; }

4. Next up comes the navigation for our page. These styles control the main navigation and the Skip To... link that appears when viewed on smaller devices. Go ahead and add these style rules immediately below the rules for a and img:

    header {
      font-family: 'Droid Sans', sans-serif;
      h1 { height: 70px; float: left; display: block; font-weight: 700;
        font-size: 2rem; }
      nav {
        float: right; margin-top: 40px; height: 22px; border-radius: 4px;
        li { display: inline; margin-left: 15px; }
        ul { font-weight: 400; font-size: 1.1rem; }
        a {
          padding: 5px 5px 5px 5px;
          &:hover { background-color: #27a7bd; color: #fff;
            border-radius: 4px; }
        }
      }
    }

5. We need to add the media query that controls the display for smaller devices, so go ahead and add the following to a new file and save it as media.less in the css subfolder.
We'll start with setting the screen size for our media query:

    @smallscreen: ~"screen and (max-width: 30rem)";

    @media @smallscreen {
      p { font-family: "Droid Sans", sans-serif; }
      #main, aside { margin: 0 0 10px; width: 100%; }
      #banner { margin-top: 150px; height: 4.85rem; max-width: 100%;
        background-image: url('../img/abstract-banner-medium.jpg');
        width: 45.5rem; }

Next up comes the media query rule that will handle the Skip To... link at the top of our resized window:

      #skipTo {
        display: block; height: 18px;
        a {
          display: block; text-align: center; color: #fff;
          font-size: 0.8rem;
          &:hover { background-color: #27a7bd; border-radius: 0;
            height: 20px }
        }
      }

We can't forget the main navigation, so go ahead and add the following lines of code immediately below the block for #skipTo:

      header {
        h1 { margin-top: 20px }
        nav {
          float: left; clear: left; margin: 0 0 10px; width: 100%;
          li { margin: 0; background: #efefef; display: block;
            margin-bottom: 3px; height: 40px; }
          a {
            display: block; padding: 10px; text-align: center;
            color: #000;
            &:hover { background-color: #27a7bd; border-radius: 0;
              padding: 10px; height: 20px; }
          }
        }
      }
    }

At this point, we should compile the Less style sheet before previewing the results of our work. If we launch responsive.html in a browser, we'll see our mocked-up portfolio page appear, as we saw at the beginning of the exercise. If we resize the screen to its minimum width, the responsive design kicks in to reorder and resize elements on screen, as we would expect to see. Okay, so we now have a responsive page that uses Less CSS for styling; it still seems like a lot of code, right?

Working through the code in detail

Although this seems like a lot of code for a simple page, the principles we've used are in fact very simple and are the ones we already used earlier in the article. Not convinced? Well, let's look at it in more detail. The focus of this article is on responsive images and video, so we'll start with video. Open the compiled responsive.css style sheet and look for the #video-wrapper video rule:

    #video-wrapper video { max-width: 100%; }

Notice how it's set to a max-width value of 100%? Granted, we don't want to resize a large video to a really small size; we would use a media query to replace it with a smaller version. But for most purposes, max-width should be sufficient. Now, for the image, this is a little more complicated. Let's start with the code from responsive.less:

    #banner { background-image: url('../img/abstract-banner-large.jpg');
      height: 15.31rem; width: 45.5rem; max-width: 100%; float: left;
      margin-bottom: 15px; }

Here, we used the max-width value again. In both instances, we can style the element directly, unlike videos, where we have to add a container in order to style it. The theme continues in the media query setup in media.less:

    @smallscreen: ~"screen and (max-width: 30rem)";
    @media @smallscreen {
      ...
      #banner { margin-top: 150px;
        background-image: url('../img/abstract-banner-medium.jpg');
        height: 4.85rem; width: 45.5rem; max-width: 100%; }
      ...
    }

In this instance, we're styling the element to cover the width of the viewport.
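To make the compile step concrete, here is a minimal sketch of how the escaped media-query variable we defined resolves when Less compiles to CSS (trimmed to the #banner rule, using the values from above):

    /* Less source */
    @smallscreen: ~"screen and (max-width: 30rem)";
    @media @smallscreen {
      #banner { height: 4.85rem; max-width: 100%; }
    }

    /* Compiled CSS */
    @media screen and (max-width: 30rem) {
      #banner { height: 4.85rem; max-width: 100%; }
    }

The escaped string (~"...") passes the media-query text through untouched, so the same variable can be reused by every rule that targets small screens.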
A small point of note: you might ask why we are using rem values instead of percentage values when styling our image. This is a good question; the key to it is that pixel values do not scale well in responsive designs, whereas rem values scale beautifully. We could use percentage values if we're so inclined, although they are best suited to instances where we need to fill a container that only covers part of the screen (as we did with the video for this demo). An interesting article extolling the virtues of rem units is available at http://techtime.getharvest.com/blog/in-defense-of-rem-units; it's worth a read. Of particular note is a known bug with using rem values in Mobile Safari, which should be considered when developing for mobile platforms; given the number of iPhones in use, Mobile Safari's share could be said to be higher than Firefox's! For more details, head over to http://wtfhtmlcss.com/#rems-mobile-safari.

Transferring to production use

Throughout this exercise, we used Less to compile our styles on the fly each time. This is okay for development purposes, but is not recommended for production use. Once we've worked out the requisite styles for our site, we should always precompile them into valid CSS before uploading the results to the site. There are a number of options available for this purpose; two of my personal favorites are Crunch!, available at http://www.crunchapp.net, and the Less2CSS plugin for Sublime Text, available at https://github.com/timdouglas/sublime-less2css. You can learn more about precompiling Less code from my new article, Learning Less.js, by Packt Publishing.

Summary

Wow! We've certainly covered a lot; it shows that adding basic responsive capabilities to media need not be difficult. Let's take a moment to recap what you learned. We kicked off this article with an introduction to three real-world scenarios that we would then cover. Our first scenario looked at using WordPress. We covered how, although we can add simple CSS styling to make images and videos responsive, the preferred method is to use one of the several plugins available to achieve the same result. Our next scenario visited the all-too-familiar framework known as Twitter Bootstrap. In comparison, we saw that this is a much easier framework to work with, in that styles have been predefined and all we needed to do was add the right class to the right selector. Our third and final scenario went completely the opposite way, with a look at using the Less CSS preprocessor to handle the styles that we would otherwise have created manually. We saw how easy it was to rework the styles we originally created earlier in the article to produce a more concise and efficient version that compiled into valid CSS with no apparent change in design. Well, we've now reached the end of the book; all good things must come to an end at some point! Nonetheless, I hope you've enjoyed reading it as much as I have enjoyed writing it. Hopefully, I've shown that adding responsive media to your sites need not be as complicated as it might first look, and that this gives you a good grounding to develop something more complex using responsive media.

Create a data model in Splunk to enable interactive reports and dashboards

Pravin Dhandre
26 Jun 2018
8 min read
Data models enable you to create Splunk reports and dashboards without having to develop Splunk searches. Typically, data models are designed by those who understand the specifics of the format and semantics of certain data, and the manner in which users may expect to work with that data. In building a typical data model, knowledge managers use knowledge object types (such as lookups, transactions, search-time field extractions, and calculated fields). Today we are going to learn how to create a Splunk data model and how to describe that model with various fields and lookup attributes. This article is an excerpt from a book written by James D. Miller titled Implementing Splunk 7 - Third Edition.

Creating a data model

So now that we have a general idea of what a Splunk data model is, let's go ahead and create one. Before we can get started, we need to verify that our user ID is set up with the proper access required to create a data model. By default, only users with an admin or power role can create data models. For other users, the ability to create a data model depends on whether their roles have write access to an app. To begin (once you have verified that you do have access to create a data model), click on Settings and then on Data models (under KNOWLEDGE). This takes you to the Data Models management page, where a list of data models is displayed. From here, you can manage permissions, acceleration, cloning, and removal of existing data models. You can also use this page to upload a data model or create new data models, using the Upload Data Model and New Data Model buttons in the upper-right corner, respectively. Since this is a new data model, click on the button labeled New Data Model. This opens the New Data Model dialog box, where we can fill in the required information.

Filling in the new data model dialog

You have four fields to fill in to describe your new Splunk data model (Title, ID, App, and Description):

- Title: Here you must enter a title for your data model. This field accepts any character, as well as spaces. The value you enter here is what will appear on the data model listing page.
- ID: This is an optional field. It gets prepopulated with what you entered for your data model title (with any spaces replaced by underscores). Take a minute to make sure you have a good one, since once you enter the data model ID, you can't change it.
- App: Here you select (from a drop-down list) the Splunk app that your data model will serve.
- Description: The description is also an optional field, but I recommend adding something descriptive to later identify your data model.

Once you have filled in these fields, click on the button labeled Create. This opens the data model (in our example, Aviation Games) in the Splunk Edit Objects page. The next step in defining a data model is to add the first object. As we have already stated, data models are typically composed of object hierarchies built on root event objects. Each root event object represents a set of data that is defined by a constraint, which is a simple search that filters out events that are not relevant to the object. Getting back to our example, let's create an object for our data model to track purchase requests on our Aviation Games website. To define our first event-based object, click on Add Dataset. Our data model's first object can be either a Root Event or a Root Search. We're going to add a Root Event, so select Root Event; this takes you to the Add Event Dataset editor. Our example object will expose events that contain the phrase error, which represents processing errors that have occurred within our data source. For Dataset Name, we will enter Processing Errors. The Dataset ID will automatically populate when you type in the Dataset Name (you can edit it if you want to change it). For our object's constraint, we'll enter sourcetype=tm1* error. This constraint defines the events that will be reported on (all events that contain the phrase error that are indexed in the data sources whose sourcetype starts with tm1). After providing the constraint for the event-based object, you can click on Preview to test whether the constraint returns the kind of events that you want. After reviewing the output, click on Save.
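Under the covers, the root event object is just this constraint search; once the model is saved, the same events can be reached from the search bar with the datamodel command. A sketch, assuming the model and dataset IDs created above (Aviation_Games and Processing_Errors; adjust them if you chose different names):

    | datamodel Aviation_Games Processing_Errors search
    | stats count by host, sourcetype

This should return the same events as running the raw constraint, sourcetype=tm1* error, directly.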
The list of attributes for our root object is then displayed: host, source, sourcetype, and _time. If you want to add child objects for client and server errors, you need to edit the attributes list to include additional attributes.

Editing fields (attributes)

Let's add an auto-extracted attribute to our data model. Remember, auto-extracted attributes are derived by Splunk at search time. To start, click on Add Field and select Auto-Extracted. The Add Auto-Extracted Field window opens. You can scroll through the list of automatically extracted fields and check the fields that you want to include. Since my data model example deals with errors that occurred, I've selected date_mday, date_month, and date_year. Notice that, to the right of the field list, you have the opportunity to rename and type-set each of the fields that you selected. Rename is self-explanatory; for Type, Splunk allows you to select String, Number, Boolean, or IPV4, and to indicate whether the attribute is Required, Optional, Hidden, or Hidden & Required. Optional means that the attribute doesn't have to appear in every event represented by the object; it may appear in some of the object's events and not others. Once you have reviewed your selected field types, click on Save.

Lookup attributes

Let's discuss lookup attributes now. Splunk can use existing lookup definitions to match the values of an attribute that you select to the values of a field in the specified lookup table. It then returns the corresponding field/value combinations and applies them to your object as (lookup) attributes. Once again, if you click on Add Field and select Lookup, Splunk opens the Add Fields with a Lookup page, where you can select from your currently defined lookup definitions. For this example, we select dnslookup. The dnslookup converts clienthost to clientip. We can configure a lookup attribute using this lookup to add that result to the Processing Errors object. Under Input, select clienthost for Field in Lookup and Field in Dataset. Field in Lookup is the field to be used in the lookup table; Field in Dataset is the name of the field used in the event data. In our simple example, Splunk will match the field clienthost with the field host. Under Output, I have selected host as the output field to be matched with the lookup.
You can provide a Display Name for the selected field; this display name is the name used for the field in your events. I simply typed AviationLookupName for my display name. Again, Splunk allows you to click on Preview to review the fields that you want to add; you can use the tabs to view the Events in a table, or to view the values of each of the fields you selected in Output, such as AviationLookupName. Finally, we can click on Save.

Add Child object to our model

We have just added a root (or parent) object to our data model. The next step is to add some children. Although a child object inherits all the constraints and attributes from its parent, when you create a child you give it additional constraints, with the intention of further filtering the dataset that the object represents. To add a child object to our data model, click on Add Field and select Child. Splunk then opens the editor window, Add Child Dataset. On this page, follow these steps:

1. Enter the Object Name: Dimensional Errors.
2. Leave the Object ID as it is: Dimensional_Errors.
3. Under Inherit From, select Processing Errors. This means that this child object will inherit all the attributes from the parent object, Processing Errors.
4. Add the additional constraint dimension. The search that the data model runs for the events in this object, when expanded, will look something like sourcetype=tm1* error dimension.
5. Finally, click on Save to save your changes.

Following the previously outlined steps, you can add more objects, each continuing to filter the results until you have the results that you need. With this, we have learned to create data models and to manage permissions, cloning, and acceleration for operational data models with ease. If you found this tutorial useful, do check out the book Implementing Splunk 7 - Third Edition and start transforming machine-generated data into valuable and actionable business insights.

SQL Query Basics in SAP Business One

Packt
18 May 2011
7 min read
Mastering SQL Queries for SAP Business One
Utilize the power of SQL queries to bring Business Intelligence to your small to medium-sized business

Who can benefit from using SQL queries in SAP Business One?

There are many different groups of SAP Business One users who may need this tool. To my knowledge, there is no standard organization chart for small and midsized enterprises; most of them are different, and you may often find one person handling more than one role. Check the following list to see if anything applies to you:

- Do you need to check specific sales results over certain time periods, for certain areas, or for certain customers?
- Do you want to know who the top vendors from certain locations for certain materials are?
- Do you have a dynamically updated view of your sales force's performance in real time?
- Do you often check whether approval procedures exactly match your expectations?
- Have you tried to start building your SQL query but could not get it done properly?
- Have you written SQL queries whose results were not always correct or up to your expectations?

Consultant

If you are an SAP Business One consultant, you have probably mastered SQL queries already. However, if that is not the case, this will be a great help in extending your consulting power. Being able to use SQL queries will probably become a mandatory skill for any SAP Business One consultant in the future.

Developer

If you are an SAP Business One add-on developer, these skills will be a good addition to your capabilities. You may find them useful in other development work, such as coding or programming, and very often you will need to embed SQL queries in your code to complete a Software Development Kit (SDK) project.

SAP Business One end user

If you are simply a normal SAP Business One end user, you may need this even more, because SQL queries are best applied to live SAP Business One data. Only you, as the end user, know better than anyone else what you are looking for to make Business Intelligence a daily routine. It is very important for you to be able to create a query report, so that you can map your requirements to a query in a timely manner.

SQL query and related terms

Before going into the details of SQL queries, I would like to briefly introduce some basic database concepts, because SQL is a database language for managing data in Relational Database Management Systems (RDBMS).

RDBMS

An RDBMS is a Database Management System based on the relational model. Relational is the key word here: in an RDBMS, data is stored in the form of tables, and the relationships among the data are also stored in the form of tables.

Table

A table is a key component within a database. One table, or a group of tables, represents one kind of data. For example, the table OSLP within SAP Business One holds all Sales Employee data. Tables are two-dimensional data storage placeholders. You need to be familiar with their usage and their relationships with each other. If you are familiar with Microsoft Excel, a worksheet in Excel is a kind of two-dimensional table. The relationships between tables may be more important than the tables themselves, because without relationships, nothing could be of any value. One important feature of SAP Business One is that it allows User Defined Tables (UDTs). All UDT names start with "@".

Field

A field is the lowest unit holding data within a table. A table can have many fields; a field is also called a column.
Field and column are interchangeable. A table is comprised of records, and all records have the same structure, with specific fields. One important concept in SAP Business One is the User Defined Field (UDF). All UDF names start with U_.

SQL

SQL is often referred to as Structured Query Language. It is pronounced as S-Q-L, or as the word "Sequel". There are many different revisions and extensions of SQL. The current revision is SQL:2008, and the first major revision was SQL-92; most SQL extensions are built on top of SQL-92.

T-SQL

Since SAP Business One is built on the Microsoft SQL Server database, SQL here means Transact-SQL, or T-SQL for short. It is Microsoft's/Sybase's extension of SQL in its general meaning.

Subsets of SQL

There are three main subsets of the SQL language:

- Data Control Language (DCL)
- Data Definition Language (DDL)
- Data Manipulation Language (DML)

Each subset of the SQL language has a special purpose:

- DCL is used to control access to data in a database, such as to grant or revoke specified users' rights to perform specified tasks.
- DDL is used to define data structures, such as to create, alter, or drop tables.
- DML is used to retrieve and manipulate data in a table, such as to insert, delete, and update data. SELECT, however, is a special statement belonging to this subset, even though it is a read-only command that does not manipulate data at all.

Query

Query is the most common operation in SQL and can refer to all three SQL subsets. You have to understand the risks of running any add, delete, or update queries, which could potentially alter system tables, even ones holding User Defined Fields. Only the SELECT query is legitimate against SAP Business One system tables; a sketch of such a query follows.
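Here is a harmless example against the OSLP table mentioned earlier. The column names SlpCode and SlpName are standard SAP Business One fields, but verify them against the data dictionary described next; the aliases are our own:

    SELECT T0.SlpCode AS [Employee No.],
           T0.SlpName AS [Sales Employee]
    FROM OSLP T0
    ORDER BY T0.SlpName

The T0 alias style mirrors the convention SAP Business One's own query generator uses, which makes hand-written queries easier to compare against generated ones.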
Data dictionary

In order to create working SQL queries, you not only need to know how to write them, but you also need a clear view of the relationships between tables and where to find the information required. As you know, SAP Business One is built on Microsoft SQL Server, and a good data dictionary is essential for working with the database. Fortunately, there is a very good reference called SAP Business One Database Tables Reference, readily available through the SAP Business One SDK Help Centre. You can find the details in the following section.

SAP Business One—Database tables reference

The database tables reference file, named REFDB.CHM, is the one we are looking for. The SDK is usually installed on the same server as the SAP Business One database server. Normally, the file path is X:\Program Files\SAP\SAP Business One SDK\Help, where "X" is the drive on which your SAP Business One SDK is installed. In this help file, we find the same categories as in the SAP Business One menu, with all 11 modules. The tables related to each module are listed one by one, with tree structures in the help file where header tables have row tables. Each table provides a list of all the fields in the table, along with their description, type, size, related tables, default value, and constraints.

Naming convention of tables for SAP Business One

To help you understand the data dictionary quickly, we will go through the naming conventions for tables in SAP Business One.

Three letter words

Most tables in SAP Business One have four-letter names. The only exceptions are number-ending tables; if the number is greater than nine, those tables have five-letter names. To make table names easy to understand, SAP Business One uses three-letter abbreviations. Some of the commonly used abbreviations are listed as follows, with a worked example after the list:

- ADM: Administration
- ATC: Attachments
- CPR: Contact Persons
- CRD: Business Partners
- DLN: Delivery Notes
- HEM: Employees
- INV: Sales Invoices
- ITM: Items
- ITT: Product Trees (Bill of Materials)
- OPR: Sales Opportunities
- PCH: Purchase Invoices
- PDN: Goods Receipt PO
- POR: Purchase Orders
- QUT: Sales Quotations
- RDR: Sales Orders
- RIN: Sales Credit Notes
- RPC: Purchase Credit Notes
- SLP: Sales Employees
- USR: Users
- WOR: Production Orders
- WTR: Stock Transfers
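Putting the convention to work: header tables prefix the three-letter code with O (so RDR, Sales Orders, becomes the header table ORDR), and their row tables append a number (RDR1 holds the order lines). This pairing is the standard SAP Business One pattern, but do verify the column names below against REFDB.CHM:

    SELECT T0.DocNum, T0.CardCode,
           T1.ItemCode, T1.Quantity, T1.LineTotal
    FROM ORDR T0
    INNER JOIN RDR1 T1 ON T0.DocEntry = T1.DocEntry

Each header row (ORDR) joins to its line items (RDR1) through the shared DocEntry key.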

Configuring and Deploying the EJB 3.0 Entity in WebLogic Server

Packt
27 Aug 2010
8 min read
Creating a Persistence Configuration file

An EJB 3.0 entity bean is required to have a persistence.xml configuration file, which defines the database persistence properties. A persistence.xml file gets added to the META-INF folder when a JPA project is defined. Copy the following listing to the persistence.xml file in Eclipse:

    <?xml version="1.0" encoding="UTF-8"?>
    <persistence xmlns="http://java.sun.com/xml/ns/persistence"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
            http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
        version="1.0">
      <persistence-unit name="em">
        <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
        <jta-data-source>jdbc/MySQLDS</jta-data-source>
        <class>ejb3.Catalog</class>
        <properties>
          <property name="eclipselink.target-server" value="WebLogic_10"/>
          <property name="javax.persistence.jtaDataSource"
              value="jdbc/MySQLDS"/>
          <property name="eclipselink.ddl-generation" value="create-tables"/>
          <property name="eclipselink.target-database" value="MySQL"/>
        </properties>
      </persistence-unit>
    </persistence>

The persistence unit is required to be named and may be given any name. We had configured a JDBC data source with the JNDI name jdbc/MySQLDS in WebLogic Server; specify this JNDI name in the jta-data-source element. The properties element specifies vendor-specific properties. The eclipselink.ddl-generation property is set to create-tables, which means that the required database tables will be created unless they have already been created.

Creating a session bean

For better performance, one of the best practices in developing EJBs is to access entity beans from session beans. Wrapping an entity bean with a session bean reduces the number of remote method calls, as a session bean may invoke an entity bean locally. If a client accesses an entity bean directly, each method invocation is a remote method call and incurs the overhead of additional network resources. We shall use a stateless session bean, which consumes fewer resources than a stateful session bean, to invoke the entity bean methods. In this section, we create a session bean in Eclipse. A stateless session bean class is just a Java class annotated with the @Stateless annotation, so we create Java classes for the session bean and the session bean remote interface in Eclipse. To create a Java class, select File | New; in the New window, select Java | Class and click on Next. In the New Java Class window, select the Source folder as EJB3JPA/src (EJB3JPA being the project name), specify the Class Name as CatalogTestBean, and click on Finish. Similarly, create a CatalogTestBeanRemote interface by selecting Java | Interface in the New window. The session bean class and the remote interface get added to the EJB3JPA project.

The session bean class

The stateless session bean class, CatalogTestBean, implements the CatalogTestBeanRemote interface. We shall use the EntityManager API to create, find, query, and remove entity instances. Inject an EntityManager using the @PersistenceContext annotation, specifying unitName as the persistence-unit name from the persistence.xml configuration file:

    @PersistenceContext(unitName = "em")
    EntityManager em;

Next, create a test() method, which we shall invoke from a test client.
In the test() method, we shall create and persist entity instances, query an entity instance, and delete an entity instance, all using the EntityManager object that we injected earlier into the session bean class. Injecting an EntityManager implies that an instance of EntityManager is made available to the session bean. Create an instance of the entity bean class:

    Catalog catalog = new Catalog(new Integer(1), "Oracle Magazine",
        "Oracle Publishing", "September-October 2009",
        "Put Your Arrays in a Bind", "Mark Williams");

Persist the entity instance to the database using the persist() method:

    em.persist(catalog);

Similarly, persist two more entity instances. Next, create a query using the createQuery() method of the EntityManager object. The query string may be specified as an EJB-QL query; unlike HQL, the SELECT clause is not optional in EJB-QL. Execute the query and return the query result as a List using the getResultList() method. As an example, select the catalog entries corresponding to the author David Baum. The FROM clause of a query is directed at the mapped entity bean class, not the underlying database:

    List catalogEntry = em.createQuery(
        "SELECT c from Catalog c where c.author=:name")
        .setParameter("name", "David Baum").getResultList();

Iterate over the result list to output the properties of each entity instance:

    for (Iterator iter = catalogEntry.iterator(); iter.hasNext(); ) {
        Catalog element = (Catalog) iter.next();
        retValue = retValue + "<br/>" + element.getJournal() + "<br/>" +
            element.getPublisher() + "<br/>" + element.getDate() +
            "<br/>" + element.getTitle() + "<br/>" +
            element.getAuthor() + "<br/>";
    }

The variable retValue is a String that is returned by the test() method. Similarly, create and run an EJB-QL query to return all titles in the Catalog database:

    List allTitles = em.createQuery("SELECT c from Catalog c")
        .getResultList();

An entity instance may be removed using the remove() method:

    em.remove(catalog2);

The corresponding database row gets deleted from the Catalog table. Subsequently, create and run a query to list all the entity instances mapped to the database. The session bean class, CatalogTestBean, is listed next:

    package ejb3;

    import java.util.Iterator;
    import java.util.List;
    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    /**
     * Session Bean implementation class CatalogTestBean
     */
    @Stateless(mappedName = "EJB3-SessionEJB")
    public class CatalogTestBean implements CatalogTestBeanRemote {

        @PersistenceContext(unitName = "em")
        EntityManager em;

        /**
         * Default constructor.
         */
        public CatalogTestBean() {
            // TODO Auto-generated constructor stub
        }

        public String test() {
            Catalog catalog = new Catalog(new Integer(1), "Oracle Magazine",
                "Oracle Publishing", "September-October 2009",
                "Put Your Arrays in a Bind", "Mark Williams");
            em.persist(catalog);

            Catalog catalog2 = new Catalog(new Integer(2), "Oracle Magazine",
                "Oracle Publishing", "September-October 2009",
                "Oracle Fusion Middleware 11g: The Foundation for Innovation",
                "David Baum");
            em.persist(catalog2);

            Catalog catalog3 = new Catalog(new Integer(3), "Oracle Magazine",
                "Oracle Publishing", "September-October 2009",
                "Integrating Information", "David Baum");
            em.persist(catalog3);

            String retValue = "<b>Catalog Entries: </b>";
            List catalogEntry = em.createQuery(
                "SELECT c from Catalog c where c.author=:name")
                .setParameter("name", "David Baum").getResultList();
            for (Iterator iter = catalogEntry.iterator(); iter.hasNext(); ) {
                Catalog element = (Catalog) iter.next();
                retValue = retValue + "<br/>" + element.getJournal() +
                    "<br/>" + element.getPublisher() + "<br/>" +
                    element.getDate() + "<br/>" + element.getTitle() +
                    "<br/>" + element.getAuthor() + "<br/>";
            }

            retValue = retValue + "<b>All Titles: </b>";
            List allTitles = em.createQuery(
                "SELECT c from Catalog c").getResultList();
            for (Iterator iter = allTitles.iterator(); iter.hasNext(); ) {
                Catalog element = (Catalog) iter.next();
                retValue = retValue + "<br/>" + element.getTitle() + "<br/>";
            }

            em.remove(catalog2);

            retValue = retValue + "<b>All Entries after removing an entry: </b>";
            List allCatalogEntries = em.createQuery(
                "SELECT c from Catalog c").getResultList();
            for (Iterator iter = allCatalogEntries.iterator();
                    iter.hasNext(); ) {
                Catalog element = (Catalog) iter.next();
                retValue = retValue + "<br/>" + element + "<br/>";
            }
            return retValue;
        }
    }

We also need to add a remote (or local) interface for the session bean:

    package ejb3;

    import javax.ejb.Remote;

    @Remote
    public interface CatalogTestBeanRemote {
        public String test();
    }

We shall package the entity bean and the session bean in an EJB JAR file, and package that JAR file, together with a WAR file for the EJB 3.0 client, into an EAR file, as shown next:

    EAR file
    |
    |-- WAR file
    |     |-- EJB 3.0 client
    |
    |-- JAR file
          |-- EJB 3.0 entity bean
          |-- EJB 3.0 session bean

Next, we create an application.xml for the EAR file, which requires a META-INF folder. Right-click on the EJB3JPA project in Project Explorer and select New | Folder. In the New Folder window, select the EJB3JPA folder, specify the new folder name as META-INF, and click on Finish. Right-click on the META-INF folder and select New | Other; in the New window, select XML | XML and click on Next. In the New XML File window, select the META-INF folder, specify the File name as application.xml, click on Next, and then click on Finish. An application.xml file gets created. Copy the following listing to application.xml:

    <?xml version="1.0" encoding="windows-1252"?>
    <application>
      <display-name></display-name>
      <module>
        <ejb>ejb3.jar</ejb>
      </module>
      <module>
        <web>
          <web-uri>weblogic.war</web-uri>
          <context-root>weblogic</context-root>
        </web>
      </module>
    </application>
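To exercise the deployed session bean, a standalone client can look up the remote interface through JNDI. A minimal sketch, assuming WebLogic's default listen port, the mappedName used above, and wlfullclient.jar (or weblogic.jar) on the classpath; the mappedName#remote-interface JNDI pattern is WebLogic's convention for EJB 3.0 remote beans, and the host and port should be adjusted to your domain:

    package ejb3;

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class CatalogClient {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://localhost:7001");
            Context ctx = new InitialContext(env);

            // Global JNDI name derived from mappedName = "EJB3-SessionEJB"
            CatalogTestBeanRemote bean = (CatalogTestBeanRemote)
                ctx.lookup("EJB3-SessionEJB#ejb3.CatalogTestBeanRemote");
            System.out.println(bean.test());
        }
    }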

Virtual Machine Concepts

Packt
19 Mar 2014
18 min read
The multiple instances of Windows or Linux systems running on an ESXi host are commonly referred to as virtual machines (VMs). Any reference to a guest operating system (OS) means an instance of Linux, Windows, or any other supported operating system installed on a VM.

vSphere virtual machines

At the heart of virtualization lies the virtual machine. A virtual machine is a set of virtual hardware whose characteristics are determined by a set of files; it is this virtual hardware that a guest operating system is installed on. A virtual machine runs an operating system and a set of applications just like a physical server. A virtual machine comprises a set of configuration files and is backed by the physical resources of an ESXi host. An ESXi host is the physical server on which the VMware hypervisor, known as ESXi, is installed. The sketch below shows what a few of those configuration entries look like.
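As a concrete illustration of those configuration files, here is a minimal sketch of entries from a VM's .vmx file (the values are illustrative, not taken from any particular host):

    # Excerpt from an example virtual machine's .vmx configuration file
    memsize = "4096"                  # virtual memory, in MB
    numvcpus = "2"                    # number of virtual CPUs
    scsi0:0.fileName = "myvm.vmdk"    # backing file for the virtual disk
    ethernet0.virtualDev = "vmxnet3"  # virtual network adapter type
    guestOS = "windows8srv-64"        # guest operating system identifier

Together with the virtual disk (.vmdk) and a handful of other files, entries like these fully describe the virtual hardware presented to the guest.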
Virtual machine components

When a virtual machine is created, a default set of virtual hardware is assigned to it. VMware provides devices and resources that can be added to and configured on the virtual machine. Not all virtual hardware devices are available to every virtual machine; both the physical hardware of the ESXi host and the VM's guest OS must support a given configuration. For example, a virtual machine cannot be configured with more vCPUs than the ESXi host has logical CPU cores.

The virtual hardware available includes the following (a short sketch after the list shows how to enumerate a VM's devices programmatically):

- BIOS: A Phoenix Technologies 6.00 BIOS that functions like a physical server's BIOS. Virtual machine administrators are able to enable or disable I/O devices, configure the boot order, and so on.
- DVD/CD-ROM: An NEC VMware IDE CDR10 that is installed by default in new virtual machines created in vSphere. The DVD/CD-ROM can be configured to connect to the client workstation's DVD/CD-ROM, an ESXi host DVD/CD-ROM, or an .iso file located on a datastore. DVD/CD-ROM devices can be added to or removed from a virtual machine.
- Floppy drive: This is installed by default in new virtual machines created in vSphere. The floppy drive can be configured to connect to the client device's floppy drive, a floppy device located on the ESXi host, or a floppy image (.flp) located on a datastore. Floppy devices can be added to or removed from a virtual machine.
- Hard disk: This stores the guest operating system, program files, and any other data associated with a virtual machine. The virtual disk is a large file, or potentially a set of files, that can be easily copied, moved, and backed up.
- IDE controller: An Intel 82371 AB/EB PCI Bus Master IDE Controller that presents two Integrated Drive Electronics (IDE) interfaces to the virtual machine by default. This IDE controller is a standard way for storage devices, such as hard disks and CD-ROM drives, to connect to the virtual machine.
- Keyboard: This mirrors the keyboard that is first connected to the virtual machine console upon the initial console connection.
- Memory: This is the virtual memory size configured for the virtual machine, which determines the guest operating system's memory size.
- Motherboard/Chipset: The motherboard uses VMware proprietary devices based on the following chips:
  - Intel 440BX AGPset 82443BX Host Bridge/Controller
  - Intel 82093 AA I/O Advanced Programmable Interrupt Controller
  - Intel 82371 AB (PIIX4) PCI ISA IDE Xcelerator
  - National Semiconductor PC87338 ACPI 1.0 and PC98/99 Compliant Super I/O
- Network adapter: ESXi networking features provide communication between virtual machines residing on the same ESXi host, between VMs residing on different ESXi hosts, and between VMs and physical machines. When configuring a VM, network adapters (NICs) can be added and the adapter type can be specified.
- Parallel port: This is an interface for connecting peripherals to the virtual machine. Virtual parallel ports can be added to or removed from the virtual machine.
- PCI controller: This is a bus located on the virtual machine motherboard that communicates with components such as a hard disk. A single PCI controller is presented to the virtual machine; it cannot be configured or removed.
- PCI device: DirectPath devices can be added to a virtual machine. The devices must be reserved for PCI pass-through on the ESXi host that the virtual machine runs on. Keep in mind that snapshots are not supported with the DirectPath I/O pass-through device configuration. For more information on virtual machine snapshots, see http://vmware.com/kb/1015180.
- Pointing device: This mirrors the pointing device that is first connected to the virtual machine console upon the initial console connection.
- Processor: This specifies the number of sockets and cores for the virtual processor. It will appear as AMD or Intel to the virtual machine guest operating system, depending on the physical hardware.
- Serial port: This is an interface for connecting peripherals to the virtual machine. The virtual machine can be configured to connect to a physical serial port, a file on the host, or over the network. The serial port can also be used to establish a direct connection between two VMs. Virtual serial ports can be added to or removed from the virtual machine.
- SCSI controller: This provides access to virtual disks. The virtual SCSI controller may appear as one of several different types of controllers to a virtual machine, depending on the guest operating system of the VM. Editing the VM configuration can modify the SCSI controller type, a SCSI controller can be added, and a virtual controller can be configured to allow bus sharing.
- SCSI device: A SCSI device interface is available to the virtual machine by default. This interface is a typical way to connect storage devices (hard drives, floppy drives, CD-ROMs, and so on) to a VM. SCSI devices can be added to or removed from a virtual machine.
- SIO controller: The Super I/O controller provides serial and parallel ports and floppy devices, and performs system management activities. A single SIO controller is presented to the virtual machine; it cannot be configured or removed.
- USB controller: This provides USB functionality to the USB ports it manages. The virtual USB controller is a software virtualization of the USB host controller function in a VM.
- USB device: Multiple USB devices, such as mass storage devices or security dongles, may be added to a virtual machine. The USB devices can be connected to a client workstation or to an ESXi host.
- Video controller: This is a VMware Standard VGA II Graphics Adapter with 128 MB of video memory.
- VMCI: The Virtual Machine Communication Interface provides high-speed communication between the hypervisor and a virtual machine. VMCI can also be enabled for communication between VMs. VMCI devices cannot be added or removed.
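As promised above, here is a brief sketch that enumerates the virtual devices of the vm object from the earlier connection sketch; the device labels correspond closely to the component list above (controllers, disks, NICs, and so on). CPU and memory are reported separately because they are resources rather than devices.

    # Assumes the vm object from the earlier connection sketch.
    hw = vm.config.hardware
    print(f"vCPUs: {hw.numCPU}, memory: {hw.memoryMB} MB")
    for device in hw.device:
        # deviceInfo.label is a human-readable name, e.g. "SCSI controller 0".
        print(f"{device.deviceInfo.label}: {type(device).__name__}")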
Uses of virtual machines

In any infrastructure, there are many business processes with applications supporting them. These applications typically have certain requirements, such as security or performance requirements, which may limit an application to being the only thing installed on a given machine. Without virtualization, there is typically a 1:1:1 ratio of server hardware to operating system to application. This type of architecture is inflexible and inefficient, because many applications use only a small percentage of the physical resources dedicated to them, leaving the physical servers vastly underutilized. As hardware continues to improve, the gap between abundant physical resources and often modest application requirements widens. Also consider the overhead needed to support the entire infrastructure, such as power, cooling, cabling, manpower, and provisioning time. A large server sprawl costs more money for the space and power needed to house and cool these systems.

Virtual infrastructures are able to do more with less: fewer physical servers are needed because of higher consolidation ratios. Virtualization provides a safe way of putting more than one operating system (or virtual machine) on a single piece of server hardware by isolating each VM running on the ESXi host from any other. Migrating physical servers to virtual machines and consolidating onto far fewer physical servers means lower monthly power and cooling costs in the datacenter. Fewer physical servers also reduce the datacenter footprint: less networking equipment, fewer server racks, and eventually less datacenter floor space required.

Virtualization changes the way a server is provisioned. Where it once took hours to rack and cable a server and install the OS, it now takes only seconds to deploy a new virtual machine using templates and cloning. VMware also offers a number of advanced features that aren't found in a strictly physical infrastructure. These features, such as High Availability, Fault Tolerance, and Distributed Resource Scheduler, help with increased uptime and overall availability. These technologies keep the VMs running or provide the ability to quickly recover from unplanned outages. The ability to quickly and easily relocate a VM from one ESXi host to another is one of the greatest benefits of using vSphere virtual machines.

In the end, virtualizing the infrastructure and using virtual machines will help save time, space, and money. However, keep in mind that there are some upfront costs to be aware of. Server hardware may need to be upgraded, or new hardware purchased, to ensure compliance with the VMware Hardware Compatibility List (HCL). Another cost to take into account is licensing for VMware and the guest operating systems; each tier of licensing allows for more features but drives up the price to license all of the server hardware.

The primary virtual machine resources

Virtualization decouples physical hardware from an operating system. Each virtual machine contains its own set of virtual hardware, and there are four primary resources that a virtual machine needs in order to function correctly: CPU, memory, network, and hard disk. These four resources look like physical hardware to the guest operating system and its applications. The virtual machine is granted access to a portion of the resources at creation and can be reconfigured at any time thereafter. If a virtual machine experiences resource constraints, the bottleneck will generally occur in one of these four resources.

In a traditional architecture, the operating system interacts directly with the server's physical hardware without virtualization. It is the operating system that allocates memory to applications, schedules processes to run, reads from and writes to attached storage, and sends and receives data on the network. In a virtualized environment, the virtual machine's guest operating system still performs these tasks, but it does so against the virtual hardware presented by the hypervisor: the VM interacts with the physical hardware through a thin layer of software known as the virtualization layer or the hypervisor, in this case ESXi. The hypervisor allows the VM to function with a degree of independence from the underlying physical hardware. This independence is what allows vMotion and Storage vMotion functionality.

The following diagram demonstrates a virtual machine and its four primary resources:

This section provides an overview of each of the "primary four" resources.

CPU

The virtualization layer runs CPU instructions so that the virtual machines run as though they were accessing the physical processor on the ESXi host directly. Performance is paramount in CPU virtualization, so the ESXi host's physical resources are used whenever possible. The following image displays a representation of a virtual machine's CPU:

A virtual machine can be configured with up to 64 virtual CPUs (vCPUs) as of vSphere 5.5. The maximum number of vCPUs that can be allocated also depends on the number of logical cores the underlying physical hardware provides. Another factor in the maximum vCPUs is the tier of vSphere licensing; only Enterprise Plus licensing allows for 64 vCPUs. The VMkernel includes a CPU scheduler that dynamically schedules vCPUs on the ESXi host's physical processors.

The VMkernel scheduler considers socket-core-thread topology when making scheduling decisions. A socket is a single, integrated circuit package with one or more physical processor cores. Each core has one or more logical processors, also known as threads. If hyperthreading is enabled on the host, then ESXi is capable of executing two threads, or sets of instructions, simultaneously. Effectively, hyperthreading provides more logical CPUs on which vCPUs can be scheduled, giving the scheduler more throughput; keep in mind, however, that hyperthreading does not double a core's power. During times of CPU contention, when VMs are competing for resources, the VMkernel timeslices the physical processors across all virtual machines so that each VM runs as if it had its specified number of vCPUs.

VMware vSphere Virtual Symmetric Multiprocessing (SMP) is what allows virtual machines to be configured with up to 64 virtual CPUs, allowing larger CPU workloads to run on an ESXi host. Though most supported guest operating systems are multiprocessor aware, many guest OSes and applications do not need, and are not enhanced by, multiple vCPUs. Check vendor documentation for operating system and application requirements before configuring SMP virtual machines.
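As a concrete illustration of vCPU configuration, the following sketch changes the vCPU count and socket/core layout of the vm object from the earlier connection sketch. The specific values are arbitrary, and CPU changes generally require the VM to be powered off unless CPU hot-add is enabled.

    from pyVmomi import vim

    # Present 4 vCPUs to the guest as 2 sockets x 2 cores each.
    spec = vim.vm.ConfigSpec()
    spec.numCPUs = 4
    spec.numCoresPerSocket = 2
    vm.ReconfigVM_Task(spec=spec)   # returns a task; poll it for completion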
Memory

In a physical architecture, an operating system assumes that it owns all of the physical memory in the server, which is a correct assumption. A guest operating system in a virtual architecture also makes this assumption, but it does not, in fact, own all of the physical memory. A guest operating system in a virtual machine uses a contiguous virtual address space, created by ESXi, as its configured memory. The following image displays a representation of a virtual machine's memory:

Virtual memory is a well-known technique that creates this contiguous virtual address space, allowing the hardware and operating system to handle the address translation between the physical and virtual address spaces. Since each virtual machine has its own contiguous virtual address space, ESXi can run more than one virtual machine at the same time while protecting each virtual machine's memory against access from the others. This effectively results in three layers of memory in ESXi: host physical memory, guest operating system physical memory, and guest operating system virtual memory. The VMkernel presents a portion of physical host memory to the virtual machine as its guest operating system physical memory, and the guest operating system presents virtual memory to the applications.

The virtual machine is configured with a memory size; this is the amount that the guest OS is told it has available. A virtual machine will not necessarily use the entire memory size; it only uses what the guest OS and applications need at the time. However, a VM cannot access more memory than its configured memory size. A default memory size is provided by vSphere when creating the virtual machine. It is important to know the memory needs of the application and guest operating system being virtualized so that the virtual machine's memory can be sized accordingly.
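The gap between configured memory and what the guest actually uses can be observed with a short sketch against the vm object from the earlier connection sketch; the quickStats fields report megabytes.

    # Configured size vs. active guest usage vs. host memory backing the VM.
    summary = vm.summary
    print(f"configured: {summary.config.memorySizeMB} MB")
    print(f"guest active: {summary.quickStats.guestMemoryUsage} MB")
    print(f"host backed: {summary.quickStats.hostMemoryUsage} MB")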
Network

There are two key components of virtual networking: the virtual switch and virtual Ethernet adapters. A virtual machine can be configured with up to ten virtual Ethernet adapters, called vNICs. The following image displays a representation of a virtual machine's vNIC:

Virtual network switching happens in software: frames are passed between virtual machines at the vSwitch level until they hit an uplink (a physical adapter) and exit the ESXi host to enter the physical network. Virtual networks exist for virtual devices; all communication between virtual machines and the external world (the physical network) goes through vNetwork standard switches or vNetwork distributed switches. Virtual networks operate at layer 2, the data link layer, of the OSI model.

A virtual switch is similar to a physical Ethernet switch in many ways. For example, virtual switches support the standard VLAN (802.1Q) implementation and have a forwarding table, like a physical switch. An ESXi host may contain more than one virtual switch. Each virtual switch is capable of binding multiple vmnics together in a network interface card (NIC) team, which offers greater availability to the virtual machines using the virtual switch.

There are two connection types available on a virtual switch: a port group and a VMkernel port. Virtual machines are connected to port groups on a virtual switch, allowing access to network resources. VMkernel ports provide network services to the ESXi host itself, including IP storage, management, vMotion, and so on. Each VMkernel port must be configured with its own IP address and network mask. The port groups and VMkernel ports reside on a virtual switch and connect to the physical network through the physical Ethernet adapters known as vmnics. If uplinks (vmnics) are associated with a virtual switch, then the virtual machines connected to a port group on this virtual switch will be able to access the physical network.
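To make the port group connection concrete, here is a sketch that attaches a new vmxnet3 vNIC on the vm object from the earlier connection sketch to a standard-switch port group; the port group name "VM Network" is a placeholder.

    from pyVmomi import vim

    nic = vim.vm.device.VirtualDeviceSpec()
    nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    nic.device = vim.vm.device.VirtualVmxnet3()
    # Back the vNIC with a standard-switch port group, referenced by name.
    nic.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
    nic.device.backing.deviceName = "VM Network"
    nic.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        startConnected=True, connected=True)
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[nic]))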
Disk

In a non-virtualized environment, physical servers connect directly to storage, either to an external storage array or to internal hard disk arrays in the server chassis. The issue with this configuration is that a single server expects total ownership of the physical device, tying an entire disk drive to one server. Sharing storage resources in non-virtualized environments can require complex filesystems and migration to file-based Network Attached Storage (NAS) or Storage Area Networks (SANs). The following image displays a representation of a virtual disk:

Shared storage is a foundational technology that enables many vSphere features (High Availability, Distributed Resource Scheduler, and so on). Virtual machines are encapsulated in a set of discrete files stored on a datastore. This encapsulation makes VMs portable and easy to clone or back up. For each virtual machine, there is a directory on the datastore that contains all of the VM's files.

A datastore is a generic term for a container that holds files, such as VM files, .iso images, and floppy images. It can be formatted with VMware's Virtual Machine File System (VMFS) or can use NFS, and both datastore types can be accessed across multiple ESXi hosts. VMFS is a high-performance, clustered filesystem devised for virtual machines that allows multiple physical servers in a virtualization-based architecture to read and write to the same storage simultaneously. VMFS is designed, constructed, and optimized for virtualization. The newest version, VMFS-5, exclusively uses a 1 MB block size, which is good for large files, while also having an 8 KB sub-block allocation for writing small files such as logs. VMFS-5 datastores can be as large as 64 TB.

ESXi hosts use a locking mechanism to prevent other ESXi hosts that access the same storage from writing to a VM's files; this helps prevent corruption. Several storage protocols can be used to access and interface with VMFS datastores, including Fibre Channel, Fibre Channel over Ethernet, iSCSI, and direct attached storage; NFS can also be used to create a datastore. A VMFS datastore can be dynamically expanded, allowing the shared storage pool to grow with no downtime.

vSphere significantly simplifies storage access from the guest OS of the VM. The virtual hardware presented to the guest operating system includes a set of familiar SCSI and IDE controllers, so the guest OS sees a simple physical disk attached via a common controller. Presenting a virtualized storage view to the virtual machine's guest OS has advantages such as expanded support and access, improved efficiency, and easier storage management.
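Finally, a sketch that lists the datastores visible to the si connection from the first sketch, showing the VMFS or NFS type and the sizes discussed above; the summary values are reported in bytes.

    from pyVmomi import vim

    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        print(f"{s.name}: {s.type}, "
              f"{s.capacity / 2**30:.0f} GiB capacity, "
              f"{s.freeSpace / 2**30:.0f} GiB free")
    view.DestroyView()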