
How-To Tutorials


Exploring Scala Performance

Packt
19 May 2016
19 min read
In this article by Michael Diamant and Vincent Theron, authors of the book Scala High Performance Programming, we look at how Scala features get compiled into bytecode. (For more resources related to this topic, see here.)

Value classes

The domain model of the order book application included two classes, Price and OrderId. We pointed out that we created domain classes for Price and OrderId to provide contextual meaning to the wrapped BigDecimal and Long. While providing us with readable code and compile-time safety, this practice also increases the number of instances created by our application. Allocating memory and generating class instances create more work for the garbage collector by increasing the frequency of collections and by potentially introducing additional long-lived objects. The garbage collector will have to work harder to collect them, and this process may severely impact our latency.

Luckily, as of Scala 2.10, the AnyVal abstract class is available for developers to define their own value classes to solve this problem. The AnyVal class is defined in the Scala doc (http://www.scala-lang.org/api/current/#scala.AnyVal) as "the root class of all value types, which describe values not implemented as objects in the underlying host system." The AnyVal class can be used to define a value class, which receives special treatment from the compiler. Value classes are optimized at compile time to avoid the allocation of an instance, and instead, they use the wrapped type.

Bytecode representation

As an example, to improve the performance of our order book, we can define Price and OrderId as value classes:

case class Price(value: BigDecimal) extends AnyVal
case class OrderId(value: Long) extends AnyVal

To illustrate the special treatment of value classes, we define a dummy function taking a Price value class and an OrderId value class as arguments:

def printInfo(p: Price, oId: OrderId): Unit =
  println(s"Price: ${p.value}, ID: ${oId.value}")

From this definition, the compiler produces the following method signature:

public void printInfo(scala.math.BigDecimal, long);

We see that the generated signature takes a BigDecimal object and a long, even though the Scala code only lets us call printInfo with the types defined in our model. In other words, we cannot pass a plain BigDecimal or Long when calling printInfo, because the compiler will throw an error. An interesting thing to notice is that the second parameter of printInfo is not compiled as Long (an object), but as long (a primitive type; note the lowercase 'l'). Long and other types corresponding to primitives, such as Int, Float, or Short, are specially handled by the compiler to be represented by their primitive types at runtime.
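The following minimal sketch (our own, not from the book's source) shows the compile-time safety in action: only the wrapper types are accepted at the call site, even though they compile down to BigDecimal and long:

val price = Price(BigDecimal(42.0))
val id = OrderId(123L)

printInfo(price, id) // compiles, and no Price or OrderId instance is allocated at runtime

// printInfo(BigDecimal(42.0), 123L) // does not compile: type mismatch
//                                   // (found BigDecimal/Long, required Price/OrderId)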
Value classes can also define methods. Let's enrich our Price class, as follows:

case class Price(value: BigDecimal) extends AnyVal {
  def lowerThan(p: Price): Boolean = this.value < p.value
}

// Example usage
val p1 = Price(BigDecimal(1.23))
val p2 = Price(BigDecimal(2.03))
p1.lowerThan(p2) // returns true

Our new method allows us to compare two instances of Price. At compile time, a companion object is created for Price. This companion object defines a lowerThan method that takes two BigDecimal objects as parameters. In reality, when we call lowerThan on an instance of Price, the code is transformed by the compiler from an instance method call into a static method call that is defined in the companion object:

public final boolean lowerThan$extension(scala.math.BigDecimal, scala.math.BigDecimal);
  Code:
    0: aload_1
    1: aload_2
    2: invokevirtual #56 // Method scala/math/BigDecimal.$less:(Lscala/math/BigDecimal;)Z
    5: ireturn

If we were to write the pseudo-code equivalent of the preceding Scala code, it would look something like the following:

val p1 = BigDecimal(1.23)
val p2 = BigDecimal(2.03)
Price.lowerThan(p1, p2) // returns true

Performance considerations

Value classes are a great addition to our developer toolbox. They help us reduce the number of instances and spare some work for the garbage collector, while allowing us to rely on meaningful types that reflect our business abstractions. However, extending AnyVal comes with a certain set of conditions that the class must fulfill. For example, a value class may only have one primary constructor, which takes a single public val as its only parameter. Furthermore, this parameter cannot be a value class. We saw that value classes can define methods via def, but neither val nor var members are allowed inside a value class. Nested class or object definitions are also impossible. Another limitation prevents value classes from extending anything other than a universal trait, that is, a trait that extends Any, only has defs as members, and performs no initialization. If any of these conditions is not fulfilled, the compiler generates an error.

In addition to the preceding constraints, there are special cases in which a value class has to be instantiated by the JVM. Such cases include performing pattern matching or a runtime type test, or assigning a value class to an array. An example of the latter looks like the following snippet:

def newPriceArray(count: Int): Array[Price] = {
  val a = new Array[Price](count)
  for(i <- 0 until count){
    a(i) = Price(BigDecimal(Random.nextInt()))
  }
  a
}

The generated bytecode is as follows:

public highperfscala.anyval.ValueClasses$$anonfun$newPriceArray$1(highperfscala.anyval.ValueClasses$Price[]);
  Code:
    0: aload_0
    1: aload_1
    2: putfield #29 // Field a$1:[Lhighperfscala/anyval/ValueClasses$Price;
    5: aload_0
    6: invokespecial #80 // Method scala/runtime/AbstractFunction1$mcVI$sp."<init>":()V
    9: return

public void apply$mcVI$sp(int);
  Code:
    0: aload_0
    1: getfield #29 // Field a$1:[Lhighperfscala/anyval/ValueClasses$Price;
    4: iload_1
    5: new #31 // class highperfscala/anyval/ValueClasses$Price
    // omitted for brevity
    21: invokevirtual #55 // Method scala/math/BigDecimal$.apply:(I)Lscala/math/BigDecimal;
    24: invokespecial #59 // Method highperfscala/anyval/ValueClasses$Price."<init>":(Lscala/math/BigDecimal;)V
    27: aastore
    28: return

Notice how apply$mcVI$sp is invoked from newPriceArray, and how it creates a new instance of ValueClasses$Price at instruction 5. As turning a single-field case class into a value class is as trivial as extending AnyVal, we recommend that you always use AnyVal wherever possible. The overhead is quite low, and it generates high benefits in terms of garbage collection performance. To learn more about value classes, their limitations and use cases, you can find detailed descriptions at http://docs.scala-lang.org/overviews/core/value-classes.html.
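Before moving on, here is a quick code-level recap of those rules; each of the following hypothetical definitions (our own, not from the book) is rejected by the compiler:

// Rejected: a value class must have exactly one constructor parameter.
// case class Quote(bid: BigDecimal, ask: BigDecimal) extends AnyVal

// Rejected: vals and vars (other than the constructor parameter) are not allowed.
// case class Price(value: BigDecimal) extends AnyVal {
//   var lastUpdated: Long = 0L
// }

// Rejected: nested class or object definitions are not allowed.
// case class OrderId(value: Long) extends AnyVal {
//   object Parser
// }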
Tagged types – an alternative to value classes

Value classes are an easy-to-use tool, and they can yield great improvements in terms of performance. However, they come with a constraining set of conditions, which can make them impossible to use in certain cases. We will conclude this section with a glance at an interesting alternative: leveraging the tagged type feature that is implemented by the Scalaz library. The Scalaz implementation of tagged types is inspired by another Scala library, named shapeless. The shapeless library provides tools to write type-safe, generic code with minimal boilerplate. While we will not explore shapeless, we encourage you to learn more about the project at https://github.com/milessabin/shapeless.

Tagged types are another way to enforce compile-time checking without incurring the cost of instance instantiation. They rely on the Tagged structural type and the @@ type alias that is defined in the Scalaz library, as follows:

type Tagged[U] = { type Tag = U }
type @@[T, U] = T with Tagged[U]

Let's rewrite part of our code to leverage tagged types with our Price object:

object TaggedTypes {

  sealed trait PriceTag
  type Price = BigDecimal @@ PriceTag

  object Price {
    def newPrice(p: BigDecimal): Price =
      Tag[BigDecimal, PriceTag](p)

    def lowerThan(a: Price, b: Price): Boolean =
      Tag.unwrap(a) < Tag.unwrap(b)
  }
}

Let's perform a short walkthrough of the code snippet. We define a PriceTag sealed trait that we will use to tag our instances. A Price type alias is created and defined as a BigDecimal object tagged with PriceTag. The Price object defines useful functions, including the newPrice factory function that is used to tag a given BigDecimal object and return a Price object (that is, a tagged BigDecimal object). We also implement an equivalent to the lowerThan method. This function takes two Price objects (that is, two tagged BigDecimal objects), extracts the contents of the tags, which are two BigDecimal objects, and compares them.

Using our new Price type, we rewrite the same newPriceArray function that we previously looked at (the code is omitted for brevity, but you can refer to it in the attached source code) and print the following generated bytecode:

public void apply$mcVI$sp(int);
  Code:
    0: aload_0
    1: getfield #29 // Field a$1:[Ljava/lang/Object;
    4: iload_1
    5: getstatic #35 // Field highperfscala/anyval/TaggedTypes$Price$.MODULE$:Lhighperfscala/anyval/TaggedTypes$Price$;
    8: getstatic #40 // Field scala/package$.MODULE$:Lscala/package$;
    11: invokevirtual #44 // Method scala/package$.BigDecimal:()Lscala/math/BigDecimal$;
    14: getstatic #49 // Field scala/util/Random$.MODULE$:Lscala/util/Random$;
    17: invokevirtual #53 // Method scala/util/Random$.nextInt:()I
    20: invokevirtual #58 // Method scala/math/BigDecimal$.apply:(I)Lscala/math/BigDecimal;
    23: invokevirtual #62 // Method highperfscala/anyval/TaggedTypes$Price$.newPrice:(Lscala/math/BigDecimal;)Ljava/lang/Object;
    26: aastore
    27: return

In this version, we no longer see an instantiation of Price, even though we are assigning them to an array. The tagged Price implementation involves a runtime cast, but we anticipate that the cost of this cast will be less than the instance allocations (and garbage collection) that were observed in the previous value class Price strategy.
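Assuming the scalaz Tag utilities used above are on the classpath, a short usage sketch of ours for the tagged Price might look like this:

import TaggedTypes._

val a: Price = Price.newPrice(BigDecimal(1.23))
val b: Price = Price.newPrice(BigDecimal(2.03))

Price.lowerThan(a, b) // returns true

// At runtime, a and b are plain BigDecimal instances; the tag exists only at
// compile time, so no wrapper object is allocated.
// Price.lowerThan(BigDecimal(1.23), b) // does not compile: an untagged
//                                      // BigDecimal is not a Price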
Specialization

To understand the significance of specialization, it is important to first grasp the concept of object boxing. The JVM defines primitive types (boolean, byte, char, float, int, long, short, and double) that are stack allocated rather than heap allocated. When a generic type is introduced, for example, scala.collection.immutable.List, the JVM references an object equivalent instead of a primitive type. In this example, an instantiated list of integers would contain heap-allocated Integer objects rather than integer primitives. The process of converting a primitive to its object equivalent is called boxing, and the reverse process is called unboxing.

Boxing is a relevant concern for performance-sensitive programming because boxing involves heap allocation. In performance-sensitive code that performs numerical computations, the cost of boxing and unboxing can create an order of magnitude or larger performance slowdown. Consider the following example to illustrate boxing overhead:

List.fill(10000)(2).map(_ * 2)

Creating the list via fill yields 10,000 heap allocations of the Integer object. Performing the multiplication in map requires 10,000 unboxings to perform the multiplication and then 10,000 boxings to add the multiplication results into the new list. From this simple example, you can imagine how critical-section arithmetic is slowed down by boxing and unboxing operations. As shown in Oracle's tutorial on boxing at https://docs.oracle.com/javase/tutorial/java/data/autoboxing.html, boxing in Java, and also in Scala, happens transparently. This means that without careful profiling or bytecode analysis, it is difficult to discern where you are paying the cost of object boxing.
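To make the overhead easier to picture, here is a small sketch of ours (not from the book) that contrasts the boxed list computation with an equivalent loop over a primitive array:

object BoxingSketch {
  // Every element of the List is a heap-allocated java.lang.Integer; map unboxes
  // and re-boxes each one.
  def boxedSum: Int = List.fill(10000)(2).map(_ * 2).sum

  // Array[Int] is backed by a primitive int[] array, so the loop below performs
  // no boxing at all.
  def primitiveSum: Int = {
    val xs = Array.fill(10000)(2)
    var acc = 0
    var i = 0
    while (i < xs.length) {
      acc += xs(i) * 2
      i += 1
    }
    acc
  }
}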
To ameliorate this problem, Scala provides a feature named specialization. Specialization refers to the compile-time process of generating duplicate versions of a generic trait or class that refer directly to a primitive type instead of the associated object wrapper. At runtime, the compiler-generated version of the generic class, or, as it is commonly referred to, the specialized version of the class, is instantiated. This process eliminates the runtime cost of boxing primitives, which means that you can define generic abstractions while retaining the performance of a handwritten, specialized implementation.

Bytecode representation

Let's look at a concrete example to better understand how the specialization process works. Consider a naive, generic representation of the number of shares purchased, as follows:

case class ShareCount[T](value: T)

For this example, let's assume that the intended usage is to swap between an integer or long representation of ShareCount. With this definition, instantiating a long-based ShareCount instance incurs the cost of boxing, as follows:

def newShareCount(l: Long): ShareCount[Long] = ShareCount(l)

This definition translates to the following bytecode:

public highperfscala.specialization.Specialization$ShareCount<java.lang.Object> newShareCount(long);
  Code:
    0: new #21 // class orderbook/Specialization$ShareCount
    3: dup
    4: lload_1
    5: invokestatic #27 // Method scala/runtime/BoxesRunTime.boxToLong:(J)Ljava/lang/Long;
    8: invokespecial #30 // Method orderbook/Specialization$ShareCount."<init>":(Ljava/lang/Object;)V
    11: areturn

In the preceding bytecode, it is clear at instruction 5 that the primitive long value is boxed before instantiating the ShareCount instance. By introducing the @specialized annotation, we are able to eliminate the boxing by having the compiler provide an implementation of ShareCount that works with primitive long values. It is possible to specify which types you wish to specialize by supplying a set of types. As defined in the Specializable trait (http://www.scala-lang.org/api/current/index.html#scala.Specializable), you are able to specialize for all JVM primitives, as well as Unit and AnyRef. For our example, let's specialize ShareCount for integers and longs, as follows:

case class ShareCount[@specialized(Long, Int) T](value: T)

With this definition, the bytecode now becomes the following:

public highperfscala.specialization.Specialization$ShareCount<java.lang.Object> newShareCount(long);
  Code:
    0: new #21 // class highperfscala.specialization/Specialization$ShareCount$mcJ$sp
    3: dup
    4: lload_1
    5: invokespecial #24 // Method highperfscala.specialization/Specialization$ShareCount$mcJ$sp."<init>":(J)V
    8: areturn

The boxing disappears and is curiously replaced with a different class name, ShareCount$mcJ$sp. This is because we are invoking the compiler-generated version of ShareCount that is specialized for long values. By inspecting the output of javap, we see that the specialized class generated by the compiler is a subclass of ShareCount:

public class highperfscala.specialization.Specialization$ShareCount$mcI$sp extends highperfscala.specialization.Specialization$ShareCount<java.lang.Object>

Bear this specialization implementation detail in mind as we turn to the Performance considerations section. The use of inheritance forces tradeoffs to be made in more complex use cases.

Performance considerations

At first glance, specialization appears to be a simple panacea for JVM boxing. However, there are several caveats to consider when using specialization. A liberal use of specialization leads to significant increases in compile time and resulting code size. Consider specializing Function3, which accepts three arguments as input and produces one result. Specializing its four type parameters across all types (that is, Byte, Short, Int, Long, Char, Float, Double, Boolean, Unit, and AnyRef) yields 10^4, or 10,000, possible permutations. For this reason, the standard library applies specialization conservatively. In your own use cases, consider carefully which types you wish to specialize. If we specialize Function3 only for Int and Long, the number of generated classes shrinks to 2^4, or 16.

Specialization involving inheritance requires extra attention because it is trivial to lose specialization when extending a generic class. Consider the following example:

class ParentFoo[@specialized T](t: T)
class ChildFoo[T](t: T) extends ParentFoo[T](t)

def newChildFoo(i: Int): ChildFoo[Int] = new ChildFoo[Int](i)

In this scenario, you likely expect that ChildFoo is defined with a primitive integer. However, as ChildFoo does not mark its type parameter with the @specialized annotation, zero specialized classes are created. Here is the bytecode to prove it:

public highperfscala.specialization.Inheritance$ChildFoo<java.lang.Object> newChildFoo(int);
  Code:
    0: new #16 // class highperfscala/specialization/Inheritance$ChildFoo
    3: dup
    4: iload_1
    5: invokestatic #22 // Method scala/runtime/BoxesRunTime.boxToInteger:(I)Ljava/lang/Integer;
    8: invokespecial #25 // Method highperfscala/specialization/Inheritance$ChildFoo."<init>":(Ljava/lang/Object;)V
    11: areturn

The next logical step is to add the @specialized annotation to the definition of ChildFoo. In doing so, we stumble across a scenario where the compiler warns about the use of specialization, as follows:

class ParentFoo must be a trait.
Specialized version of class ChildFoo will inherit generic
highperfscala.specialization.Inheritance.ParentFoo[Boolean]
class ChildFoo[@specialized T](t: T) extends ParentFoo[T](t)

The compiler indicates that you have created a diamond inheritance problem, where the specialized versions of ChildFoo extend both ChildFoo and the associated specialized version of ParentFoo. This issue can be resolved by modeling the problem with a trait, as follows:

trait ParentBar[@specialized T] {
  def t(): T
}

class ChildBar[@specialized T](val t: T) extends ParentBar[T]

def newChildBar(i: Int): ChildBar[Int] = new ChildBar(i)

This definition compiles using a specialized version of ChildBar, as we originally hoped for, as seen in the following code:

public highperfscala.specialization.Inheritance$ChildBar<java.lang.Object> newChildBar(int);
  Code:
    0: new #32 // class highperfscala/specialization/Inheritance$ChildBar$mcI$sp
    3: dup
    4: iload_1
    5: invokespecial #35 // Method highperfscala/specialization/Inheritance$ChildBar$mcI$sp."<init>":(I)V
    8: areturn

An analogous and equally error-prone scenario is when a generic function is defined around a specialized type. Consider the following definition:

class Foo[T](t: T)

object Foo {
  def create[T](t: T): Foo[T] = new Foo(t)
}

def boxed: Foo[Int] = Foo.create(1)

Here, the definition of create is analogous to the child class from the inheritance example. Instances of Foo wrapping a primitive that are instantiated from the create method will be boxed. The following bytecode demonstrates how boxed leads to heap allocations:

public highperfscala.specialization.MethodReturnTypes$Foo<java.lang.Object> boxed();
  Code:
    0: getstatic #19 // Field highperfscala/specialization/MethodReturnTypes$Foo$.MODULE$:Lhighperfscala/specialization/MethodReturnTypes$Foo$;
    3: iconst_1
    4: invokestatic #25 // Method scala/runtime/BoxesRunTime.boxToInteger:(I)Ljava/lang/Integer;
    7: invokevirtual #29 // Method highperfscala/specialization/MethodReturnTypes$Foo$.create:(Ljava/lang/Object;)Lhighperfscala/specialization/MethodReturnTypes$Foo;
    10: areturn

The solution is to apply the @specialized annotation at the call site, as follows:

def createSpecialized[@specialized T](t: T): Foo[T] = new Foo(t)

One final interesting scenario is when specialization is used with multiple types and one of the types extends AnyRef or is a value class. To illustrate this scenario, consider the following example:

case class ShareCount(value: Int) extends AnyVal
case class ExecutionCount(value: Int)

class Container2[@specialized X, @specialized Y](x: X, y: Y)

def shareCount = new Container2(ShareCount(1), 1)
def executionCount = new Container2(ExecutionCount(1), 1)
def ints = new Container2(1, 1)

In this example, which methods do you expect to box the second argument to Container2? For brevity, we omit the bytecode, but you can easily inspect it yourself. As it turns out, shareCount and executionCount box the integer. The compiler does not generate a specialized version of Container2 that accepts a primitive integer and a value extending AnyRef (for example, ExecutionCount). The shareCount method also causes boxing due to the order in which the compiler removes the value class type information from the source code. In both scenarios, the workaround is to define a case class that is specific to a set of types (for example, ShareCount and Int). Removing the generics allows the compiler to select the primitive types.
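A sketch of that workaround, using hypothetical names of our own, might look like the following; with no type parameters left, the second field compiles straight to a primitive int:

// Hypothetical illustration of the workaround: a container written directly
// against the concrete types, so there is no generic parameter left to erase and box.
case class ShareCountContainer(x: ShareCount, y: Int)

def shareCountFixed = ShareCountContainer(ShareCount(1), 1) // the Int is no longer boxed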
The conclusion to draw from these examples is that using specialization throughout an application without boxing requires extra focus. As the compiler is unable to infer scenarios where you accidentally forgot to apply the @specialized annotation, it does not raise a warning. This places the onus on you to be vigilant about profiling and inspecting bytecode to detect scenarios where specialization is unintentionally dropped.

To combat some of the shortcomings of specialization, there is a compiler plugin under active development, named miniboxing, at http://scala-miniboxing.org/. This compiler plugin applies a different strategy that involves encoding all primitive types into a long value and carrying metadata to recall the original type. For example, a boolean can be represented in a long using a single bit to signal true or false. With this approach, performance is qualitatively similar to specialization while producing orders of magnitude fewer classes for large permutations. Additionally, miniboxing is able to more robustly handle inheritance scenarios and can warn when boxing will occur. While the implementations of specialization and miniboxing differ, the end-user usage is quite similar. Like specialization, you must add appropriate annotations to activate the miniboxing plugin. To learn more about the plugin, you can view the tutorials on the miniboxing project site.

The extra focus required to ensure that specialization produces heap-allocation-free code is worthwhile because of the performance wins in performance-sensitive code. To drive home the value of specialization, consider the following microbenchmark that computes the cost of a trade by multiplying the share count with the execution price. For simplicity, primitive types are used directly instead of value classes. Of course, in production code this would never happen:

@BenchmarkMode(Array(Throughput))
@OutputTimeUnit(TimeUnit.SECONDS)
@Warmup(iterations = 3, time = 5, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 30, time = 10, timeUnit = TimeUnit.SECONDS)
@Fork(value = 1, warmups = 1, jvmArgs = Array("-Xms1G", "-Xmx1G"))
class SpecializationBenchmark {

  @Benchmark
  def specialized(): Double =
    specializedExecution.shareCount.toDouble * specializedExecution.price

  @Benchmark
  def boxed(): Double =
    boxedExecution.shareCount.toDouble * boxedExecution.price
}

object SpecializationBenchmark {

  class SpecializedExecution[@specialized(Int) T1, @specialized(Double) T2](
    val shareCount: Long, val price: Double)

  class BoxingExecution[T1, T2](val shareCount: T1, val price: T2)

  val specializedExecution: SpecializedExecution[Int, Double] =
    new SpecializedExecution(10l, 2d)

  val boxedExecution: BoxingExecution[Long, Double] = new BoxingExecution(10l, 2d)
}

In this benchmark, two versions of a generic execution class are defined. SpecializedExecution incurs zero boxing when computing the total cost because of specialization, while BoxingExecution requires object boxing and unboxing to perform the arithmetic. The microbenchmark is invoked with the following parameterization:

sbt 'project chapter3' 'jmh:run SpecializationBenchmark -foe true'

We configure this JMH benchmark via annotations that are placed at the class level in the code. Annotations have the advantage of setting proper defaults for your benchmark and simplifying the command-line invocation. It is still possible to override the values in the annotation with command-line arguments.
We use the -foe command-line argument to enable failure on error because there is no annotation to control this behavior. In the rest of this book, we will parameterize JMH with annotations and omit the annotations in the code samples because we always use the same values. The results are summarized in the following table:

Benchmark     Throughput (ops per second)   Error as percentage of throughput
boxed         251,534,293.11                ±2.23
specialized   302,371,879.84                ±0.87

This microbenchmark indicates that the specialized implementation yields approximately 17% higher throughput. By eliminating boxing in a critical section of the code, a significant performance improvement is available through judicious usage of specialization. For performance-sensitive arithmetic, this benchmark provides justification for the extra effort that is required to ensure that specialization is applied properly.

Summary

This article talked about different Scala constructs and features, and explained how they get compiled into bytecode.

Resources for Article:

Further resources on this subject:

Differences in style between Java and Scala code [article]
Integrating Scala, Groovy, and Flex Development with Apache Maven [article]
Cluster Computing Using Scala [article]


CISSP: Security Measures for Access Control

Packt
27 Nov 2009
4 min read
Knowledge requirements

A candidate appearing for the CISSP exam should have knowledge in the following areas that relate to access control:

Control access by applying concepts, methodologies, and techniques
Identify, evaluate, and respond to access control attacks such as brute force, dictionary, spoofing, and denial of service attacks
Design, coordinate, and evaluate penetration test(s)
Design, coordinate, and evaluate vulnerability test(s)

The approach

In accordance with the knowledge expected in the CISSP exam, this domain is broadly grouped under the following five sections:

Section 1: The Access Control domain consists of many concepts, methodologies, and some specific techniques that are used as best practices. This section covers some of the basic concepts, access control models, and a few examples of access control techniques.

Section 2: Authentication processes are critical for controlling access to facilities and systems. This section looks into important concepts that establish the relationship between access control mechanisms and authentication processes.

Section 3: A system or facility becomes compromised primarily through unauthorized access, either through the front door or the back door. We'll see some of the common and popular attacks on access control mechanisms, and also learn about the prevalent countermeasures to such attacks.

Section 4: An IT system consists of operating system software, applications, and embedded software in devices, to name a few. Vulnerabilities in such software are nothing but holes or errors. In this section we see some of the common vulnerabilities in IT systems, vulnerability assessment techniques, and vulnerability management principles.

Section 5: Vulnerabilities are exploitable, in the sense that IT systems can be compromised and unauthorized access can be gained by exploiting them. Penetration testing, or ethical hacking, is an activity that tests the exploitability of vulnerabilities for gaining unauthorized access to an IT system.

Today, we'll quickly review some of the important concepts in Sections 1, 2, and 3.

Access control concepts, methodologies, and techniques

Controlling access to information systems and information processing facilities by means of administrative, physical, and technical safeguards is the primary goal of the access control domain. The following topics provide insight into some of the important access control related concepts, methodologies, and techniques.

Basic concepts

One of the primary concepts in access control is to understand the subject and the object. A subject may be a person, a process, or a technology component that either seeks access or controls access. For example, an employee trying to access his business email account is a subject. Similarly, the system that verifies the credentials, such as username and password, is also termed a subject. An object can be a file, data, physical equipment, or premises which need controlled access. For example, the email stored in the mailbox is an object that a subject is trying to access. Controlling access to an object by a subject is the core requirement of an access control process and its associated mechanisms. In a nutshell, a subject either seeks or controls access to an object.
An access control mechanism can be classified broadly into the following two types:

Context-dependent access control: If access to an object is controlled based on certain contextual parameters, such as location, time, sequence of responses, access history, and so on, then it is known as context-dependent access control. In this type of control, the value of the asset being accessed is not a primary consideration. Providing a username and password combination followed by a challenge-and-response mechanism such as CAPTCHA, filtering access based on MAC addresses in wireless connections, or a firewall filtering data based on packet analysis are all examples of context-dependent access control mechanisms. Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is a challenge-response test to ensure that the input to an access control system is supplied by humans and not by machines. This mechanism is predominantly used by web sites to prevent web robots (WebBots) from accessing the controlled sections of a site by brute force methods.

Content-dependent access control: If access is provided based on the attributes or content of an object, then it is known as content-dependent access control. In this type of control, the value and attributes of the content being accessed determine the control requirements. For example, hiding or showing menus in an application, views in databases, and access to confidential information are all content-dependent.


Node.js Fundamentals

Packt
22 May 2015
17 min read
This article is written by Krasimir Tsonev, the author of Node.js By Example. Node.js is one of the most popular JavaScript-driven technologies nowadays. It was created in 2009 by Ryan Dahl and since then, the framework has evolved into a well-developed ecosystem. Its package manager is full of useful modules and developers around the world have started using Node.js in their production environments. In this article, we will learn about the following:

Node.js building blocks
The main capabilities of the environment
The package management of Node.js

(For more resources related to this topic, see here.)

Understanding the Node.js architecture

Back in the day, Ryan was interested in developing network applications. He found out that most high performance servers followed similar concepts. Their architecture was similar to that of an event loop and they worked with nonblocking input/output operations. These operations permit other processing activities to continue before an ongoing task is finished. This characteristic is very important if we want to handle thousands of simultaneous requests.

Most of the servers written in Java or C use multithreading. They process every request in a new thread. Ryan decided to try something different: a single-threaded architecture. In other words, all the requests that come to the server are processed by a single thread. This may sound like a nonscalable solution, but Node.js is definitely scalable. We just have to run different Node.js processes and use a load balancer that distributes the requests between them. Ryan needed something that is event-loop-based and works fast. As he pointed out in one of his presentations, big companies such as Google, Apple, and Microsoft invest a lot of time in developing high performance JavaScript engines, and they have become faster and faster every year. It is there that the event-loop architecture is implemented. JavaScript has become really popular in recent years. The community and the hundreds of thousands of developers who are ready to contribute made Ryan think about using JavaScript. In general, Node.js is made up of three things:

V8, Google's JavaScript engine that is used in the Chrome web browser (https://developers.google.com/v8/)
A thread pool, the part that handles the file input/output operations. All the blocking system calls are executed here (http://software.schmorp.de/pkg/libeio.html)
The event loop library (http://software.schmorp.de/pkg/libev.html)

On top of these three blocks, we have several bindings that expose low-level interfaces. The rest of Node.js is written in JavaScript. Almost all the APIs that we see as built-in modules, and which are present in the documentation, are written in JavaScript.

Installing Node.js

A fast and easy way to install Node.js is to visit the official Node.js site and download the appropriate installer for your operating system. For OS X and Windows users, the installer provides a nice, easy-to-use interface. For developers that use Linux as an operating system, Node.js is available in the APT package manager. The following commands will set up Node.js and Node Package Manager (NPM):

sudo apt-get update
sudo apt-get install nodejs
sudo apt-get install npm

Running a Node.js server

Node.js is a command-line tool. After installing it, the node command will be available on our terminal. The node command accepts several arguments, but the most important one is the file that contains our JavaScript.
Let's create a file called server.js and put the following code inside:

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(9000, '127.0.0.1');
console.log('Server running at http://127.0.0.1:9000/');

If you run node ./server.js in your console, you will have the Node.js server running. It listens for incoming requests at localhost (127.0.0.1) on port 9000. The very first line of the preceding code requires the built-in http module. In Node.js, we have the require global function that provides the mechanism to use external modules. We will see how to define our own modules in a bit. After that, the script continues with the createServer and listen methods on the http module. In this case, the API of the module is designed in such a way that we can chain these two methods like in jQuery. The first one (createServer) accepts a function that is also known as a callback, which is called every time a new request comes to the server. The second one makes the server listen. If we open http://127.0.0.1:9000/ in a browser, we will see the Hello World message.

Defining and using modules

JavaScript as a language does not have mechanisms to define real classes. In fact, everything in JavaScript is an object. We normally inherit properties and functions from one object to another. Thankfully, Node.js adopts the concepts defined by CommonJS, a project that specifies an ecosystem for JavaScript. We encapsulate logic in modules. Every module is defined in its own file. Let's illustrate how everything works with a simple example. Let's say that we have a module that represents this book and we save it in a file called book.js:

// book.js
exports.name = 'Node.js by example';
exports.read = function() {
  console.log('I am reading ' + exports.name);
}

We defined a public property and a public function. Now, we will create another file named script.js and use require to access them:

// script.js
var book = require('./book.js');
console.log('Name: ' + book.name);
book.read();

To test our code, we will run node ./script.js. The terminal shows the book's name, followed by the I am reading Node.js by example message.

Along with exports, we also have module.exports available. There is a difference between the two. Look at the following pseudocode. It illustrates how Node.js constructs our modules:

var module = { exports: {} };
var exports = module.exports;
// our code
return module.exports;

So, in the end, module.exports is returned and this is what require produces. We should be careful because if at some point we apply a value directly to exports or module.exports, we may not receive what we need. Like at the end of the following snippet, we set a function as a value and that function is exposed to the outside world:

exports.name = 'Node.js by example';
exports.read = function() {
  console.log('I am reading ' + exports.name);
}
module.exports = function() { ... }

In this case, we do not have access to .name and .read. If we try to execute node ./script.js again, we will get an error, because book.name is now undefined and book.read is no longer a function. To avoid such issues, we should stick to one of the two options, exports or module.exports, but make sure that we do not use both. We should also keep in mind that by default, require caches the object that is returned. So, if we need two different instances, we should export a function.
Here is a version of the book class that provides API methods to rate the book, and that does not work properly:

// book.js
var ratePoints = 0;
exports.rate = function(points) {
  ratePoints = points;
}
exports.getPoints = function() {
  return ratePoints;
}

Let's create two instances and rate the books with different points values:

// script.js
var bookA = require('./book.js');
var bookB = require('./book.js');
bookA.rate(10);
bookB.rate(20);
console.log(bookA.getPoints(), bookB.getPoints());

The logical response should be 10 20, but we get 20 20. This is why it is a common practice to export a function that produces a different object every time:

// book.js
module.exports = function() {
  var ratePoints = 0;
  return {
    rate: function(points) {
      ratePoints = points;
    },
    getPoints: function() {
      return ratePoints;
    }
  }
}

Now, we should call require('./book.js')() because require returns a function and not an object anymore.

Managing and distributing packages

Once we understand the idea of require and exports, we should start thinking about grouping our logic into building blocks. In the Node.js world, these blocks are called modules (or packages). One of the reasons behind the popularity of Node.js is its package management. Node.js normally comes with two executables, node and npm. NPM is a command-line tool that downloads and uploads Node.js packages. The official NPM site acts as a central registry. When we create a package via the npm command, we store it there so that every other developer may use it.

Creating a module

Every module should live in its own directory, which also contains a metadata file called package.json. In this file, we set at least two properties, name and version:

{
  "name": "my-awesome-nodejs-module",
  "version": "0.0.1"
}

We can place whatever code we like in the same directory. Once we publish the module to the NPM registry and someone installs it, he/she will get the same files. For example, let's add an index.js file so that we have two files in the package:

// index.js
console.log('Hello, this is my awesome Node.js module!');

Our module does only one thing: it displays a simple message to the console. Now, to upload the module, we need to navigate to the directory containing the package.json file and execute npm publish. Once the command succeeds, our little module is listed on the Node.js package manager's site and everyone is able to download it.

Using modules

In general, there are three ways to use the modules that are already created. All three ways involve the package manager:

We may install a specific module manually. Let's say that we have a folder called project. We open the folder and run the following:

npm install my-awesome-nodejs-module

The manager automatically downloads the latest version of the module and puts it in a folder called node_modules. If we want to use it, we do not need to reference the exact path. By default, Node.js checks the node_modules folder before requiring something. So, just require('my-awesome-nodejs-module') will be enough.

The installation of modules globally is a common practice, especially if we talk about command-line tools made with Node.js. It has become an easy-to-use technology to develop such tools. The little module that we created is not made as a command-line program, but we can still install it globally by running the following code:

npm install my-awesome-nodejs-module -g

Note the -g flag at the end.
This is how we tell the manager that we want this module to be a global one. When the process finishes, we do not have a node_modules directory. The my-awesome-nodejs-module folder is stored in another place on our system. To be able to use it, we have to add another property to package.json, but we'll talk more about this in the next section.

The resolving of dependencies is one of the key features of the package manager of Node.js. Every module can have as many dependencies as you want. These dependencies are nothing but other Node.js modules that were uploaded to the registry. All we have to do is list the needed packages in the package.json file:

{
  "name": "another-module",
  "version": "0.0.1",
  "dependencies": {
    "my-awesome-nodejs-module": "0.0.1"
  }
}

Now we don't have to specify the module explicitly and we can simply execute npm install to install our dependencies. The manager reads the package.json file and saves our module again in the node_modules directory. It is good to use this technique because we may add several dependencies and install them at once. It also makes our module transferable and self-documented. There is no need to explain to other programmers what our module is made up of.

Updating our module

Let's transform our module into a command-line tool. Once we do this, users will have a my-awesome-nodejs-module command available in their terminals. There are two changes in the package.json file that we have to make:

{
  "name": "my-awesome-nodejs-module",
  "version": "0.0.2",
  "bin": "index.js"
}

A new bin property is added. It points to the entry point of our application. We have a really simple example and only one file, index.js. The other change that we have to make is to update the version property. In Node.js, the version of the module plays an important role. If we look back, we will see that while describing dependencies in the package.json file, we pointed out the exact version. This ensures that in the future, we will get the same module with the same APIs. Every number in the version property means something. The package manager uses Semantic Versioning 2.0.0 (http://semver.org/). Its format is MAJOR.MINOR.PATCH. So, we as developers should increment the following:

MAJOR number if we make incompatible API changes
MINOR number if we add new functions/features in a backwards-compatible manner
PATCH number if we have bug fixes

Sometimes, we may see a version like 2.12.*. This means that the developer is interested in using the exact MAJOR and MINOR version, but he/she agrees that there may be bug fixes in the future. It's also possible to use values like >=1.2.7 to match any equal-or-greater version, for example, 1.2.7, 1.2.8, or 2.5.3.

We updated our package.json file. The next step is to send the changes to the registry. This can be done again with npm publish in the directory that holds the JSON file. We will see the new 0.0.2 version number on the screen. Just after this, we may run npm install my-awesome-nodejs-module -g and the new version of the module will be installed on our machine. The difference is that now we have the my-awesome-nodejs-module command available and if you run it, it displays the message written in the index.js file.

Introducing built-in modules

Node.js is considered a technology that you can use to write backend applications. As such, we need to perform various tasks. Thankfully, we have a bunch of helpful built-in modules at our disposal.
Creating a server with the HTTP module

We already used the HTTP module. It's perhaps the most important one for web development because it starts a server that listens on a particular port:

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(9000, '127.0.0.1');
console.log('Server running at http://127.0.0.1:9000/');

We have a createServer method that returns a new web server object. In most cases, we run the listen method. If needed, there is close, which stops the server from accepting new connections. The callback function that we pass always accepts the request (req) and response (res) objects. We can use the first one to retrieve information about the incoming request, such as GET or POST parameters.

Reading and writing to files

The module that is responsible for the read and write processes is called fs (it is derived from filesystem). Here is a simple example that illustrates how to write data to a file:

var fs = require('fs');
fs.writeFile('data.txt', 'Hello world!', function (err) {
  if(err) { throw err; }
  console.log('It is saved!');
});

Most of the API functions have synchronous versions. The preceding script could be written with writeFileSync, as follows:

fs.writeFileSync('data.txt', 'Hello world!');

However, the usage of the synchronous versions of the functions in this module blocks the event loop. This means that while operating with the filesystem, our JavaScript code is paused. Therefore, it is a best practice with Node to use asynchronous versions of methods wherever possible. The reading of the file is almost the same. We should use the readFile method in the following way:

fs.readFile('data.txt', function(err, data) {
  if (err) throw err;
  console.log(data.toString());
});

Working with events

The observer design pattern is widely used in the world of JavaScript. This is where the objects in our system subscribe to the changes happening in other objects. Node.js has a built-in module to manage events. Here is a simple example:

var events = require('events');
var eventEmitter = new events.EventEmitter();
var somethingHappen = function() {
  console.log('Something happen!');
}
eventEmitter
  .on('something-happen', somethingHappen)
  .emit('something-happen');

The eventEmitter object is the object that we subscribed to. We did this with the help of the on method. The emit function fires the event and the somethingHappen handler is executed. The events module provides the necessary functionality, but we need to use it in our own classes. Let's take the book idea from the previous section and make it work with events. Once someone rates the book, we will dispatch an event in the following manner:

// book.js
var util = require("util");
var events = require("events");
var Class = function() { };
util.inherits(Class, events.EventEmitter);
Class.prototype.ratePoints = 0;
Class.prototype.rate = function(points) {
  ratePoints = points;
  this.emit('rated');
};
Class.prototype.getPoints = function() {
  return ratePoints;
}
module.exports = Class;

We want to inherit the behavior of the EventEmitter object. The easiest way to achieve this in Node.js is by using the utility module (util) and its inherits method. The defined class could be used like this:

var BookClass = require('./book.js');
var book = new BookClass();
book.on('rated', function() {
  console.log('Rated with ' + book.getPoints());
});
book.rate(10);

We again used the on method to subscribe to the rated event.
The book class dispatches that event once we set the points, and the terminal then shows the Rated with 10 text.

Managing child processes

There are some things that we can't do with Node.js, so we need to use external programs. The good news is that we can execute shell commands from within a Node.js script. For example, let's say that we want to list the files in the current directory. The file system APIs do provide methods for that, but it would be nice if we could get the output of the ls command:

// exec.js
var exec = require('child_process').exec;
exec('ls -l', function(error, stdout, stderr) {
  console.log('stdout: ' + stdout);
  console.log('stderr: ' + stderr);
  if (error !== null) {
    console.log('exec error: ' + error);
  }
});

The module that we used is called child_process. Its exec method accepts the desired command as a string and a callback. The stdout item is the output of the command. If we want to process the errors (if any), we may use the error object or the stderr buffer data.

Along with the exec method, we have spawn. It's a bit different and really interesting. Imagine that we have a command that not only does its job, but also outputs results as it runs. For example, git push may take a few seconds and may send messages to the console continuously. In such cases, spawn is a good variant because we get access to a stream:

var spawn = require('child_process').spawn;
var command = spawn('git', ['push', 'origin', 'master']);
command.stdout.on('data', function (data) {
  console.log('stdout: ' + data);
});
command.stderr.on('data', function (data) {
  console.log('stderr: ' + data);
});
command.on('close', function (code) {
  console.log('child process exited with code ' + code);
});

Here, stdout and stderr are streams. They dispatch events and if we subscribe to these events, we will get the exact output of the command as it was produced. In the preceding example, we run git push origin master and send the full command responses to the console.

Summary

Node.js is used by many companies nowadays. This proves that it is mature enough to work in a production environment. In this article, we saw what the fundamentals of this technology are and covered some of the commonly used cases.

Resources for Article:

Further resources on this subject:

AngularJS Project [article]
Exploring streams [article]
Getting Started with NW.js [article]


Creating a Continuous Integration commit pipeline using Docker [Tutorial]

Savia Lobo
04 Oct 2018
10 min read
The most basic Continuous Integration process is called a commit pipeline. This classic phase, as its name says, starts with a commit (or push in Git) to the main repository and results in a report about the build's success or failure. Since it runs after each change in the code, the build should take no more than 5 minutes and should consume a reasonable amount of resources. This tutorial is an excerpt taken from the book Continuous Delivery with Docker and Jenkins, written by Rafał Leszko. The book provides steps to build applications with Dockerfiles and integrate them with Jenkins using continuous delivery processes such as continuous integration, automated acceptance testing, and configuration management. In this article, you will learn how to create a Continuous Integration commit pipeline using Docker.

The commit phase is always the starting point of the Continuous Delivery process, and it provides the most important feedback cycle in the development process: constant information on whether the code is in a healthy state. A developer checks in the code to the repository, the Continuous Integration server detects the change, and the build starts. The most fundamental commit pipeline contains three stages:

Checkout: This stage downloads the source code from the repository
Compile: This stage compiles the source code
Unit test: This stage runs a suite of unit tests

Let's create a sample project and see how to implement the commit pipeline. This is an example of a pipeline for a project that uses technologies such as Git, Java, Gradle, and Spring Boot. Nevertheless, the same principles apply to any other technology.

Checkout

Checking out code from the repository is always the first operation in any pipeline. In order to see this, we need to have a repository. Then, we will be able to create a pipeline.

Creating a GitHub repository

Creating a repository on the GitHub server takes just a few steps:

Go to the https://github.com/ page.
Create an account if you don't have one yet.
Click on New repository.
Give it a name, calculator.
Tick Initialize this repository with a README.
Click on Create repository.

Now, you should see the address of the repository, for example, https://github.com/leszko/calculator.git.

Creating a checkout stage

We can create a new pipeline called calculator and, as Pipeline script, put the code with a stage called Checkout:

pipeline {
  agent any
  stages {
    stage("Checkout") {
      steps {
        git url: 'https://github.com/leszko/calculator.git'
      }
    }
  }
}

The pipeline can be executed on any of the agents, and its only step does nothing more than download code from the repository. We can click on Build Now and see if it was executed successfully. Note that the Git toolkit needs to be installed on the node where the build is executed. When we have the checkout, we're ready for the second stage.

Compile

In order to compile a project, we need to:

Create a project with the source code.
Push it to the repository.
Add the Compile stage to the pipeline.

Creating a Java Spring Boot project

Let's create a very simple Java project using the Spring Boot framework, built by Gradle. Spring Boot is a Java framework that simplifies building enterprise applications. Gradle is a build automation system that is based on the concepts of Apache Maven. The simplest way to create a Spring Boot project is to perform the following steps:

Go to the http://start.spring.io/ page.
Select Gradle project instead of Maven project (you can also stick with Maven if you prefer it to Gradle).
Fill in Group and Artifact (for example, com.leszko and calculator).
Add Web to Dependencies.
Click on Generate Project.

The generated skeleton project should be downloaded (the calculator.zip file).

Pushing code to GitHub

We will use the Git tool to perform the commit and push operations. In order to run the git command, you need to have the Git toolkit installed (it can be downloaded from https://git-scm.com/downloads).

Let's first clone the repository to the filesystem:

$ git clone https://github.com/leszko/calculator.git

Extract the project downloaded from http://start.spring.io/ into the directory created by Git. If you prefer, you can import the project into IntelliJ, Eclipse, or your favorite IDE tool. As a result, the calculator directory should contain the following files:

$ ls -a
. .. build.gradle .git .gitignore gradle gradlew gradlew.bat README.md src

In order to perform the Gradle operations locally, you need to have the Java JDK installed (on Ubuntu, you can do it by executing sudo apt-get install -y default-jdk). We can compile the project locally using the following code:

$ ./gradlew compileJava

In the case of Maven, you can run ./mvnw compile. Both Gradle and Maven compile the Java classes located in the src directory. You can find all possible Gradle instructions (for the Java project) at https://docs.gradle.org/current/userguide/java_plugin.html.

Now, we can commit and push to the GitHub repository:

$ git add .
$ git commit -m "Add Spring Boot skeleton"
$ git push -u origin master

After running the git push command, you will be prompted to enter your GitHub credentials (username and password). The code is now in the GitHub repository. If you want to check it, you can go to the GitHub page and see the files.

Creating a compile stage

We can add a Compile stage to the pipeline using the following code:

stage("Compile") {
  steps {
    sh "./gradlew compileJava"
  }
}

Note that we used exactly the same command locally and in the Jenkins pipeline, which is a very good sign because the local development process is consistent with the Continuous Integration environment. After running the build, you should see two green boxes. You can also check in the console log that the project was compiled correctly.

Unit test

It's time to add the last stage, Unit test, which checks whether our code does what we expect it to do. We have to:

Add the source code for the calculator logic
Write a unit test for the code
Add a stage to execute the unit test

Creating business logic

The first version of the calculator will be able to add two numbers.
Let's add the business logic as a class in the src/main/java/com/leszko/calculator/Calculator.java file: package com.leszko.calculator; import org.springframework.stereotype.Service; @Service public class Calculator { int sum(int a, int b) { return a + b; } } To execute the business logic, we also need to add the web service controller in a separate file src/main/java/com/leszko/calculator/CalculatorController.java: package com.leszko.calculator; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RequestParam; import org.springframework.web.bind.annotation.RestController; @RestController class CalculatorController { @Autowired private Calculator calculator; @RequestMapping("/sum") String sum(@RequestParam("a") Integer a, @RequestParam("b") Integer b) { return String.valueOf(calculator.sum(a, b)); } } This class exposes the business logic as a web service. We can run the application and see how it works: $ ./gradlew bootRun It should start our web service and we can check that it works by navigating to the browser and opening the page http://localhost:8080/sum?a=1&b=2. This should sum two numbers ( 1 and 2) and show 3 in the browser. Writing a unit test We already have the working application. How can we ensure that the logic works as expected? We have tried it once, but in order to know constantly, we need a unit test. In our case, it will be trivial, maybe even unnecessary; however, in real projects, unit tests can save from bugs and system failures. Let's create a unit test in the file src/test/java/com/leszko/calculator/CalculatorTest.java: package com.leszko.calculator; import org.junit.Test; import static org.junit.Assert.assertEquals; public class CalculatorTest { private Calculator calculator = new Calculator(); @Test public void testSum() { assertEquals(5, calculator.sum(2, 3)); } } We can run the test locally using the ./gradlew test command. Then, let's commit the code and push it to the repository: $ git add . $ git commit -m "Add sum logic, controller and unit test" $ git push Creating a unit test stage Now, we can add a Unit test stage to the pipeline: stage("Unit test") { steps { sh "./gradlew test" } } In the case of Maven, we would have to use ./mvnw test. When we build the pipeline again, we should see three boxes, which means that we've completed the Continuous Integration pipeline: Placing the pipeline definition inside Jenkinsfile All the time, so far, we created the pipeline code directly in Jenkins. This is, however, not the only option. We can also put the pipeline definition inside a file called Jenkinsfile and commit it to the repository together with the source code. This method is even more consistent because the way your pipeline looks is strictly related to the project itself. For example, if you don't need the code compilation because your programming language is interpreted (and not compiled), then you won't have the Compile stage. The tools you use also differ depending on the environment. We used Gradle/Maven because we've built the Java project; however, in the case of a project written in Python, you could use PyBuilder. It leads to the idea that the pipelines should be created by the same people who write the code, developers. Also, the pipeline definition should be put together with the code, in the repository. 
This approach brings immediate benefits, as follows: In case of Jenkins' failure, the pipeline definition is not lost (because it's stored in the code repository, not in Jenkins) The history of the pipeline changes is stored Pipeline changes go through the standard code development process (for example, they are subjected to code reviews) Access to the pipeline changes is restricted exactly in the same way as the access to the source code Creating Jenkinsfile We can create the Jenkinsfile and push it to our GitHub repository. Its content is almost the same as the commit pipeline we wrote. The only difference is that the checkout stage becomes redundant because Jenkins has to checkout the code (together with Jenkinsfile) first and then read the pipeline structure (from Jenkinsfile). This is why Jenkins needs to know the repository address before it reads Jenkinsfile. Let's create a file called Jenkinsfile in the root directory of our project: pipeline { agent any stages { stage("Compile") { steps { sh "./gradlew compileJava" } } stage("Unit test") { steps { sh "./gradlew test" } } } } We can now commit the added files and push to the GitHub repository: $ git add . $ git commit -m "Add sum Jenkinsfile" $ git push Running pipeline from Jenkinsfile When Jenkinsfile is in the repository, then all we have to do is to open the pipeline configuration and in the Pipeline section: Change Definition from Pipeline script to Pipeline script from SCM Select Git in SCM Put https://github.com/leszko/calculator.git in Repository URL After saving, the build will always run from the current version of Jenkinsfile into the repository. We have successfully created the first complete commit pipeline. It can be treated as a minimum viable product, and actually, in many cases, it's sufficient as the Continuous Integration process. In the next sections, we will see what improvements can be done to make the commit pipeline even better. To summarize, we covered some aspects of the Continuous Integration pipeline, which is always the first step for Continuous Delivery. If you've enjoyed reading this post, do check out the book,  Continuous Delivery with Docker and Jenkins to know more on how to deploy applications using Docker images and testing them with Jenkins. Gremlin makes chaos engineering with Docker easier with new container discovery feature Docker faces public outcry as Docker for Mac and Windows can be downloaded only via Docker Store login Is your Enterprise Measuring the Right DevOps Metrics?
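Returning to the pipeline built in this article: for quick reference, the three stages (Checkout, Compile, Unit test) can be combined into a single inline pipeline script, which is the variant you would paste directly into the Jenkins pipeline configuration rather than keep in Jenkinsfile. The repository URL is the sample calculator repository used throughout, so substitute your own; this is a sketch assembled from the snippets above, not an additional feature.

pipeline {
    agent any
    stages {
        stage("Checkout") {
            steps {
                // Only needed for an inline pipeline script; with "Pipeline script from SCM",
                // Jenkins checks out the repository (and Jenkinsfile) itself.
                git url: 'https://github.com/leszko/calculator.git'
            }
        }
        stage("Compile") {
            steps {
                sh "./gradlew compileJava"
            }
        }
        stage("Unit test") {
            steps {
                sh "./gradlew test"
            }
        }
    }
}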

#Reactgate forces React leaders to confront community's toxic culture head on

Sugandha Lahoti
27 Aug 2019
7 min read
On Thursday last week, Twitter account @heydonworks posted a tweet that “Vue developers like cooking/quiet activities and React developers like trump, guns, weightlifting and being "bros". He also talked about the rising number of super conservative React dev accounts. https://twitter.com/heydonworks/status/1164506235518910464 This was met with disapproval from people within both the React and Vue communities. “Front end development isn’t a competition,” remarked one user. https://twitter.com/mattisadev/status/1164633489305739267 https://twitter.com/nsantos_pessoal/status/1164629726499102720 @heydonworks responded to the chorus of disapproval by saying that his intention was to  highlight how a broad and diverse community of thousands of people can be eclipsed by an aggressive and vocal toxic minority. He then went on to ask Dan Abramov, React co-founder, “Perhaps a public disowning of the neocon / supremacist contingent on your part would land better than my crappy joke?” https://twitter.com/heydonworks/status/1164653560598093824 He also clarified how his original tweet was supposed to paint a picture of what React would be like if it was taken over by hypermasculine conservatives. “I admit it's not obvious”, he tweeted, “but I am on your side. I don't want that to happen and the joke was meant as a warning.” @heydonworks also accused a well known React Developer of playing "the circle game" at a React conference. The “circle game” is a school prank that has more recently come to be associated with white supremacism in the U.S. @heydonworks later deleted this tweet and issued  an apology admitting that he was wrong to accuse the person of making the gesture. https://twitter.com/heydonworks/status/1165439718512824320 This conversation then developed into a wider argument about how toxicity is enabled and allowed in the React community - and, indeed, other tech communities as well. The crucial point that many will have to reckon with is what behaviors people allow and overlook. Indeed, to a certain extent, the ability to be comfortable with certain behaviors is related to an individual’s privilege - what may seem merely an aspect or a quirk of someone’s persona to one person, might be threatening and a cause of discomfort to another person. This was the point made by web developer Nat Alison (@tesseralis): “Remember that fascists and abusers can often seem like normal people to everyone but the people that they're harming.” Alison’s thread highlights that associating with people without challenging toxic behaviors or attitudes is a way of enabling and tacitly supporting them. https://twitter.com/tesseralis/status/1165111494062641152 Web designer Tatiana Mac quits the tech industry following the React controversy Web designer Tatiana Mac’s talk at Clarity Conf (you can see the slides here) in San Francisco last week (21 August) took place just a few hours before @heydonworks sent the first of his tweets mentioned above. The talk was a powerful statement on how systems can be built in ways that can either reinforce power or challenge it. Although it was well-received by many present at the event and online, it also was met with hostility, with one Twitter user (now locked) tweeting in response to an image of Mac’s talk that it “most definitely wasn't a tech conference… Looks to be some kind of SJW (Social justice warrior) conference.” This only added an extra layer of toxicity to the furore that has been engulfing the React community. 
Following the talk, Mac offered her thoughts, criticizing those she described as being more interested in “protecting the reputation of a framework than listening to multiple marginalized people.” https://twitter.com/TatianaTMac/status/1164912554876891137 She adds, “I don’t perceive this problem in the other JS framework communities as intensively.  Do White Supremacists exist in other frameworks? Likely. But there is a multiplier/feeder here that is systemically baked. That’s what I want analysed by the most ardent supporters of the community.” She says that even after bringing this issue multiple times, she has been consistently ignored. Her tweet reads, “I'm disappointed by repeatedly bringing this shit up and getting ignored/gaslit, then having a white woman bring it up and her getting praised for it? White supremacy might as well be an opiate—some people take it without ever knowing, others microdose it to get ahead.” “Why is no one like, ‘Tatiana had good intentions in bringing up the rampant racism problem in our community?’ Instead, it’s all, ‘Look at all the impact it had on two white guys!’ Is cuz y’all finally realise intent doesn’t erase impact?”, she adds. She has since decided to quit the tech industry following these developments. In a tweet, she wrote that she is “incredibly sad, disappointed, and not at all surprised by *so* many people.” Mac has described in detail the emotional and financial toll the situation is having on her. She has said she is committed to all contracts through to 2020, but also revealed that she may need to sell belongings to support herself. This highlights the potential cost involved in challenging the status quo. To provide clarity on what has happened, Tatiana approached her friend, designer Carlos Eriksson, who put together a timeline of the Reactgate controversy. Dan Abramov and Ken Wheeler quit and then rejoin Twitter Following the furore, both Dan Abramov and Ken Wheeler quit Twitter over the weekend. They have now rejoined. After he deactivated, Abramov talked about his disappearance from Twitter on Reddit: “Hey all. I'm fine, and I plan to be back soon. This isn't a ‘shut a door in your face’ kind of situation.  The real answer is that I've bit off more social media than I can chew. I've been feeling anxious for the past few days and I need a clean break from checking it every ten minutes. Deactivating is a barrier to logging in that I needed. I plan to be back soon.” Abramov returned to Twitter on August 27. He apologized for his sudden disappearance. He apologized, calling deactivating his account “a desperate and petty thing.” He also thanked Tatiana Mac for highlighting issues in the React community. “I am deeply thankful to @TatianaTMac for highlighting issues in the React community,” Abramov wrote. “She engaged in a dialog despite being on the receiving end of abuse and brigading. I admire her bravery and her kindness in doing the emotional labor that should have fallen on us instead.” Wheeler also returned to Twitter. “Moving forward, I will be working to do better. To educate myself. To lift up minoritized folks. And to be a better member of the community. And if you are out there attacking and harassing people, you are not on my side,” he said. Mac acknowledged  Abramov and Wheeler’s apologies, writing that, “it is unfair and preemptive to call Dan and Ken fragile. Both committed to facing the white supremacist capitalist patriarchy head on. 
I support the promise and will be watching from the sidelines supporting positive influence.” What can the React community do to grow from this experience? This news has shaken the React community to the core. At such distressing times, the React community needs to come together as a whole and offer constructive criticism to tackle the issue of unhealthy tribalism, while making minority groups feel safe and heard. Tatiana puts forward a few points to tackle the toxic culture. “Pay attention to your biggest proponents and how they reject all discussion of the injustices of tech. It’s subtle like that, and, it’s as overt as throwing white supremacist hand gestures at conferences on stage. Neither is necessarily more dangerous than the other, but instead shows the journey and spectrum of radicalization—it’s a process.” She urges, “If you want to clean up the community, you’ve got to see what systemic forces allow these hateful dingdongs to sit so comfortably in your space.  I’m here to help and hope I have today already, as a member of tech, but I need you to do the work there.” “Developers don’t belong on a pedestal, they’re doing a job like everyone else” – April Wensel on toxic tech culture and Compassionate Coding [Interview] Github Sponsors: Could corporate strategy eat FOSS culture for dinner? Microsoft’s #MeToo reckoning: female employees speak out against workplace harassment and discrimination

Deploying HTML5 Applications with GNOME

Packt
28 May 2013
10 min read
(For more resources related to this topic, see here.) Before we start Most of the discussions in this article require a moderate knowledge of HTML5, JSON, and common client-side JavaScript programming. One particular exercise uses JQuery and JQuery Mobile to show how a real HTML5 application will be implemented. Embedding WebKit What we need to learn first is how to embed a WebKit layout engine inside our GTK+ application. Embedding WebKit means we can use HTML and CSS as our user interface instead of GTK+ or Clutter. Time for action – embedding WebKit With WebKitGTK+, this is a very easy task to do; just follow these steps: Create an empty Vala project without GtkBuilder and no license. Name it hello-webkit. Modify configure.ac to include WebKitGTK+ into the project. Find the following line of code in the file: PKG_CHECK_MODULES(HELLO_WEBKIT, [gtk+-3.0]) Remove the previous line and replace it with the following one: PKG_CHECK_MODULES(HELLO_WEBKIT, [gtk+-3.0 webkitgtk-3.0]) Modify Makefile.am inside the src folder to include WebKitGTK into the Vala compilation pipeline. Find the following lines of code in the file: hello_webkit_VALAFLAGS = --pkg gtk+-3.0 Remove it and replace it completely with the following lines: hello_webkit_VALAFLAGS = --vapidir . --pkg gtk+-3.0 --pkg webkit-1.0 --pkglibsoup-2.4 Fill the hello_webkit.vala file inside the src folder with the following lines: using GLib;using Gtk;using WebKit;public class Main : WebView{public Main (){load_html_string("<h1>Hello</h1>","/");}static int main (string[] args){Gtk.init (ref args);var webView = new Main ();var window = new Gtk.Window();window.add(webView);window.show_all ();Gtk.main ();return 0;}} Copy the accompanying webkit-1.0.vapi file into the src folder. We need to do this, unfortunately, because the webkit-1.0.vapi file distributed with many distributions is still using GTK+ Version 2. Run it, you will see a window with the message Hello, as shown in the following screenshot: What just happened? What we need to do first is to include WebKit into our namespace, so we can use all the functions and classes from it. using WebKit; Our class is derived from the WebView widget. It is an important widget in WebKit, which is capable of showing a web page. Showing it means not only parsing and displaying the DOM properly, but that it's capable to run the scripts and handle the styles referred to by the document. The derivation declaration is put in the class declaration as shown next: public class Main : WebView In our constructor, we only load a string and parse it as an HTML document. The string is Hello, styled with level 1 heading. After the execution of the following line, WebKit will parse and display the presentation of the HTML5 code inside its body: public Main (){load_html_string("<h1>Hello</h1>","/");} In our main function, what we need to do is create a window to put our WebView widget into. After adding the widget, we need to call the show_all() function in order to display both the window and the widget. static int main (string[] args){Gtk.init (ref args);var webView = new Main ();var window = new Gtk.Window();window.add(webView); The window content now only has a WebView widget as its sole displaying widget. At this point, we no longer use GTK+ to show our UI, but it is all written in HTML5. Runtime with JavaScriptCore An HTML5 application is, most of the time, accompanied by client-side scripts that are written in JavaScript and a set of styling definition written in CSS3. 
WebKit already provides the feature of running client-side JavaScript (running the script inside the web page) with a component called JavaScriptCore, so we don't need to worry about it. But how about the connection with the GNOME platform? How to make the client-side script access the GNOME objects? One approach is that we can expose our objects, which are written in Vala so that they can be used by the client-side JavaScript. This is where we will utilize JavaScriptCore. We can think of this as a frontend and backend architecture pattern. All of the code of business process which touch GNOME will reside in the backend. They are all written in Vala and run by the main process. On the opposite side, the frontend, the code is written in JavaScript and HTML5, and is run by WebKit internally. The frontend is what the user sees while the backend is what is going on behind the scene. Consider the following diagram of our application. The backend part is grouped inside a grey bordered box and run in the main process. The frontend is outside the box and run and displayed by WebKit. From the diagram, we can see that the frontend creates an object and calls a function in the created object. The object we create is not defined in the client side, but is actually created at the backend. We ask JavaScriptCore to act as a bridge to connect the object created at the backend to be made accessible by the frontend code. To do this, we wrap the backend objects with JavaScriptCore class and function definitions. For each object we want to make available to frontend, we need to create a mapping in the JavaScriptCore side. In the following diagram, we first map the MyClass object, then the helloFromVala function, then the intFromVala, and so on: Time for action – calling the Vala object from the frontend Now let's try and create a simple client-side JavaScript code and call an object defined at the backend: Create an empty Vala project, without GtkBuilder and no license. Name it hello-jscore. Modify configure.ac to include WebKitGTK+ exactly like our previous experiment. Modify Makefile.am inside the src folder to include WebKitGTK+ and JSCore into the Vala compilation pipeline. Find the following lines of code in the file: hello_jscore_VALAFLAGS = --pkg gtk+-3.0 Remove it and replace it completely with the following lines: hello_jscore_VALAFLAGS = --vapidir . 
--pkg gtk+-3.0 --pkg webkit-1.0 --pkglibsoup-2.4 --pkg javascriptcore Fill the hello_jscore.vala file inside the src folder with the following lines of code: using GLib;using Gtk;using WebKit;using JSCore;public class Main : WebView{public Main (){load_html_string("<h1>Hello</h1>" +"<script>alert(HelloJSCore.hello())</script>","/");window_object_cleared.connect ((frame, context) => {setup_js_class ((JSCore.GlobalContext) context);});}public static JSCore.Value helloFromVala (Context ctx,JSCore.Object function,JSCore.Object thisObject,JSCore.Value[] arguments,out JSCore.Value exception) {exception = null;var text = new String.with_utf8_c_string ("Hello fromJSCore");return new JSCore.Value.string (ctx, text);}static const JSCore.StaticFunction[] js_funcs = {{ "hello", helloFromVala, PropertyAttribute.ReadOnly },{ null, null, 0 }};static const ClassDefinition js_class = {0, // versionClassAttribute.None, // attribute"HelloJSCore", // classNamenull, // parentClassnull, // static valuesjs_funcs, // static functionsnull, // initializenull, // finalizenull, // hasPropertynull, // getPropertynull, // setPropertynull, // deletePropertynull, // getPropertyNamesnull, // callAsFunctionnull, // callAsConstructornull, // hasInstancenull // convertToType};void setup_js_class (GlobalContext context) {var theClass = new Class (js_class);var theObject = new JSCore.Object (context, theClass,context);var theGlobal = context.get_global_object ();var id = new String.with_utf8_c_string ("HelloJSCore");theGlobal.set_property (context, id, theObject,PropertyAttribute.None, null);}static int main (string[] args){Gtk.init (ref args);var webView = new Main ();var window = new Gtk.Window();window.add(webView);window.show_all ();Gtk.main ();return 0;}} Copy the accompanied webkit-1.0.vapi and javascriptcore.vapi files into the src folder. The javascriptcore.vapi file is needed because some distributions do not have this .vapi file in their repositories. Run the application. The following output will be displayed: What just happened? The first thing we do is include the WebKit and JavaScriptCore namespaces. Note, in the following code snippet, that the JavaScriptCore namespace is abbreviated as JSCore: using WebKit;using JSCore; In the Main function, we load HTML content into the WebView widget. We display a level 1 heading and then call the alert function. The alert function displays a string returned by the hello function inside the HelloJSCore class, as shown in the following code: public Main (){load_html_string("<h1>Hello</h1>" +"<script>alert(HelloJSCore.hello())</script>","/"); In the preceding code snippet, we can see that the client-side JavaScript code is as follows: alert(HelloJSCore.hello()) And we can also see that we call the hello function from the HelloJSCore class as a static function. It means that we don't instantiate the HelloJSCore object before calling the hello function. In WebView, we initialize the class defined in the Vala class when we get the window_object_cleared signal. This signal is emitted whenever a page is cleared. The initialization is done in setup_js_class and this is also where we pass the JSCore global context into. The global context is where JSCore keeps the global variables and functions. It is accessible by every code. window_object_cleared.connect ((frame, context) => {setup_js_class ((JSCore.GlobalContext)context);}); The following snippet of code contains the function, which we want to expose to the clientside JavaScript. 
The function just returns a Hello from JSCore string message: public static JSCore.Value helloFromVala (Context ctx,JSCore.Object function,JSCore.Object thisObject,JSCore.Value[] arguments,out JSCore.Value exception) {exception = null;var text = new String.with_utf8_c_string ("Hello from JSCore");return new JSCore.Value.string (ctx, text);} Then we need to put a boilerplate code that is needed to expose the function and other members of the class. The first part of the code is the static function index. This is the mapping between the exposed function and the name of the function defined in the wrapper. In the following example, we map the hello function, which can be used in the client side, with the helloFromVala function defined in the code. The index is then ended with null to mark the end of the array: static const JSCore.StaticFunction[] js_funcs = {{ "hello", helloFromVala, PropertyAttribute.ReadOnly },{ null, null, 0 }}; The next part of the code is the class definition. It is about the structure that we have to fill, so that JSCore would know about the class. All of the fields are filled with null, except for those we want to make use of. In this example, we use the static function for the hello function. So we fill the static function field with js_funcs, which we defined in the preceding code snippet: static const ClassDefinition js_class = {0, // versionClassAttribute.None, // attribute"HelloJSCore", // classNamenull, // parentClassnull, // static valuesjs_funcs, // static functionsnull, // initializenull, // finalizenull, // hasPropertynull, // getPropertynull, // setPropertynull, // deletePropertynull, // getPropertyNamesnull, // callAsFunctionnull, // callAsConstructornull, // hasInstancenull // convertToType}; After that, in the the setup_js_class function, we set up the class to be made available in the JSCore global context. First, we create JSCore.Class with the class definition structure we filled previously. Then, we create an object of the class, which is created in the global context. Last but not least, we assign the object with a string identifier, which is HelloJSCore. After executing the following code, we will be able to refer HelloJSCore on the client side: void setup_js_class (GlobalContext context) {var theClass = new Class (js_class);var theObject = new JSCore.Object (context, theClass,context);var theGlobal = context.get_global_object ();var id = new String.with_utf8_c_string ("HelloJSCore");theGlobal.set_property (context, id, theObject,PropertyAttribute.None, null);}
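To round off the example from the frontend's point of view, here is a minimal sketch of the markup that could be passed to load_html_string once HelloJSCore has been registered. Everything in it is ordinary client-side HTML and JavaScript; the only bridge-provided name is HelloJSCore.hello(), which maps to the helloFromVala function shown above. The DOM manipulation is illustrative and not part of the original example.

<h1>Hello</h1>
<script>
  // HelloJSCore is registered by setup_js_class() on the Vala side;
  // hello() maps to helloFromVala() and returns "Hello from JSCore".
  var greeting = HelloJSCore.hello();
  var p = document.createElement('p');
  p.textContent = greeting;
  document.body.appendChild(p);
</script>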

Hosting the service in IIS using the TCP protocol

Packt
30 Oct 2014
8 min read
In this article by Mike Liu, the author of WCF Multi-layer Services Development with Entity Framework, Fourth Edtion, we will learn how to create and host a service in IIS using the TCP protocol. (For more resources related to this topic, see here.) Hosting WCF services in IIS using the HTTP protocol gives the best interoperability to the service, because the HTTP protocol is supported everywhere today. However, sometimes interoperability might not be an issue. For example, the service may be invoked only within your network with all Microsoft clients only. In this case, hosting the service by using the TCP protocol might be a better solution. Benefits of hosting a WCF service using the TCP protocol Compared to HTTP, there are a few benefits in hosting a WCF service using the TCP protocol: It supports connection-based, stream-oriented delivery services with end-to-end error detection and correction It is the fastest WCF binding for scenarios that involve communication between different machines It supports duplex communication, so it can be used to implement duplex contracts It has a reliable data delivery capability (this is applied between two TCP/IP nodes and is not the same thing as WS-ReliableMessaging, which applies between endpoints) Preparing the folders and files First, we need to prepare the folders and files for the host application, just as we did for hosting the service using the HTTP protocol. We will use the previous HTTP hosting application as the base to create the new TCP hosting application: Create the folders: In Windows Explorer, create a new folder called HostIISTcp under C:SOAwithWCFandEFProjectsHelloWorld and a new subfolder called bin under the HostIISTcp folder. You should now have the following new folders: C:SOAwithWCFandEFProjectsHelloWorld HostIISTcp and a bin folder inside the HostIISTcp folder. Copy the files: Now, copy all the files from the HostIIS hosting application folder at C:SOAwithWCFandEFProjectsHelloWorldHostIIS to the new folder that we created at C:SOAwithWCFandEFProjectsHelloWorldHostIISTcp. Create the Visual Studio solution folder: To make it easier to be viewed and managed from the Visual Studio Solution Explorer, you can add a new solution folder, HostIISTcp, to the solution and add the Web.config file to this folder. Add another new solution folder, bin, under HostIISTcp and add the HelloWorldService.dll and HelloWorldService.pdb files under this bin folder. Add the following post-build events to the HelloWorldService project, so next time, all the files will be copied automatically when the service project is built: xcopy "$(AssemblyName).dll" "C:SOAwithWCFandEFProjectsHelloWorldHostIISTcpbin" /Y xcopy "$(AssemblyName).pdb" "C:SOAwithWCFandEFProjectsHelloWorldHostIISTcpbin" /Y Modify the Web.config file: The Web.config file that we have copied from HostIIS is using the default basicHttpBinding as the service binding. To make our service use the TCP binding, we need to change the binding to TCP and add a TCP base address. Open the Web.config file and add the following node to it under the <system.serviceModel> node: <services> <service name="HelloWorldService.HelloWorldService">    <endpoint address="" binding="netTcpBinding"    contract="HelloWorldService.IHelloWorldService"/>    <host>      <baseAddresses>        <add baseAddress=        "net.tcp://localhost/HelloWorldServiceTcp/"/>      </baseAddresses>    </host> </service> </services> In this new services node, we have defined one service called HelloWorldService.HelloWorldService. 
The base address of this service is net.tcp://localhost/HelloWorldServiceTcp/. Remember, we have defined the host activation relative address as ./HelloWorldService.svc, so we can invoke this service from the client application with the following URL: http://localhost/HelloWorldServiceTcp/HelloWorldService.svc. For the file-less WCF activation, if no endpoint is defined explicitly, HTTP and HTTPS endpoints will be defined by default. In this example, we would like to expose only one TCP endpoint, so we have added an endpoint explicitly (as soon as this endpoint is added explicitly, the default endpoints will not be added). If you don't add this TCP endpoint explicitly here, the TCP client that we will create in the next section will still work, but on the client config file you will see three endpoints instead of one and you will have to specify which endpoint you are using in the client program. The following is the full content of the Web.config file: <?xml version="1.0"?> <!-- For more information on how to configure your ASP.NET application, please visit http://go.microsoft.com/fwlink/?LinkId=169433 --> <configuration> <system.web>    <compilation debug="true" targetFramework="4.5"/>    <httpRuntime targetFramework="4.5" /> </system.web>   <system.serviceModel>    <serviceHostingEnvironment >      <serviceActivations>        <add factory="System.ServiceModel.Activation.ServiceHostFactory"          relativeAddress="./HelloWorldService.svc"          service="HelloWorldService.HelloWorldService"/>      </serviceActivations>    </serviceHostingEnvironment>      <behaviors>      <serviceBehaviors>        <behavior>          <serviceMetadata httpGetEnabled="true"/>        </behavior>      </serviceBehaviors>    </behaviors>    <services>      <service name="HelloWorldService.HelloWorldService">        <endpoint address="" binding="netTcpBinding"         contract="HelloWorldService.IHelloWorldService"/>        <host>          <baseAddresses>            <add baseAddress=            "net.tcp://localhost/HelloWorldServiceTcp/"/>          </baseAddresses>        </host>      </service>    </services> </system.serviceModel>   </configuration> Enabling the TCP WCF activation for the host machine By default, the TCP WCF activation service is not enabled on your machine. This means your IIS server won't be able to host a WCF service with the TCP protocol. You can follow these steps to enable the TCP activation for WCF services: Go to Control Panel | Programs | Turn Windows features on or off. Expand the Microsoft .Net Framework 3.5.1 node on Windows 7 or .Net Framework 4.5 Advanced Services on Windows 8. Check the checkbox for Windows Communication Foundation Non-HTTP Activation on Windows 7 or TCP Activation on Windows 8. The following screenshot depicts the options required to enable WCF activation on Windows 7: The following screenshot depicts the options required to enable TCP WCF activation on Windows 8: Repair the .NET Framework: After you have turned on the TCP WCF activation, you have to repair .NET. Just go to Control Panel, click on Uninstall a Program, select Microsoft .NET Framework 4.5.1, and then click on Repair. Creating the IIS application Next, we need to create an IIS application named HelloWorldServiceTcp to host the WCF service, using the TCP protocol. Follow these steps to create this application in IIS: Open IIS Manager. Add a new IIS application, HelloWorldServiceTcp, pointing to the HostIISTcp physical folder under your project's folder. 
Choose DefaultAppPool as the application pool for the new application. Again, make sure your default app pool is a .NET 4.0.30319 application pool. Enable the TCP protocol for the application. Right-click on HelloWorldServiceTcp, select Manage Application | Advanced Settings, and then add net.tcp to Enabled Protocols. Make sure you use all lowercase letters and separate it from the existing HTTP protocol with a comma. Now the service is hosted in IIS using the TCP protocol. To view the WSDL of the service, browse to http://localhost/HelloWorldServiceTcp/HelloWorldService.svc and you should see the service description and a link to the WSDL of the service. Testing the WCF service hosted in IIS using the TCP protocol Now, we have the service hosted in IIS using the TCP protocol; let's create a new test client to test it: Add a new console application project to the solution, named HelloWorldClientTcp. Add a reference to System.ServiceModel in the new project. Add a service reference to the WCF service in the new project, naming the reference HelloWorldServiceRef and use the URL http://localhost/HelloWorldServiceTcp/HelloWorldService.svc?wsdl. You can still use the SvcUtil.exe command-line tool to generate the proxy and config files for the service hosted with TCP, just as we did in previous sections. Actually, behind the scenes Visual Studio is also calling SvcUtil.exe to generate the proxy and config files. Add the following code to the Main method of the new project: var client = new HelloWorldServiceRef.HelloWorldServiceClient (); Console.WriteLine(client.GetMessage("Mike Liu")); Finally, set the new project as the startup project. Now, if you run the program, you will get the same result as before; however, this time the service is hosted in IIS using the TCP protocol. Summary In this article, we created and tested an IIS application to host the service with the TCP protocol. Resources for Article: Further resources on this subject: Microsoft WCF Hosting and Configuration [Article] Testing and Debugging Windows Workflow Foundation 4.0 (WF) Program [Article] Applying LINQ to Entities to a WCF Service [Article]
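As a final reference for this example, adding the service reference in the TCP test client typically produces an app.config entry with a netTcpBinding client endpoint along the following lines. This is a sketch only: the endpoint name and any binding configuration that Visual Studio or SvcUtil.exe generates will differ, so treat the names below as illustrative.

<system.serviceModel>
  <client>
    <!-- Generated by "Add Service Reference"; the name attribute is illustrative -->
    <endpoint address="net.tcp://localhost/HelloWorldServiceTcp/HelloWorldService.svc"
              binding="netTcpBinding"
              contract="HelloWorldServiceRef.IHelloWorldService"
              name="NetTcpBinding_IHelloWorldService" />
  </client>
</system.serviceModel>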

Introduction to Odoo

Packt
04 Sep 2015
12 min read
 In this article by Greg Moss, author of Working with Odoo, he explains that Odoo is a very feature-filled business application framework with literally hundreds of applications and modules available. We have done our best to cover the most essential features of the Odoo applications that you are most likely to use in your business. Setting up an Odoo system is no easy task. Many companies get into trouble believing that they can just install the software and throw in some data. Inevitably, the scope of the project grows and what was supposed to be a simple system ends up being a confusing mess. Fortunately, Odoo's modular design will allow you to take a systematic approach to implementing Odoo for your business. (For more resources related to this topic, see here.) What is an ERP system? An Enterprise Resource Planning (ERP) system is essentially a suite of business applications that are integrated together to assist a company in collecting, managing, and reporting information throughout core business processes. These business applications, typically called modules, can often be independently installed and configured based on the specific needs of the business. As the needs of the business change and grow, additional modules can be incorporated into an existing ERP system to better handle the new business requirements. This modular design of most ERP systems gives companies great flexibility in how they implement the system. In the past, ERP systems were primarily utilized in manufacturing operations. Over the years, the scope of ERP systems have grown to encompass a wide range of business-related functions. Recently, ERP systems have started to include more sophisticated communication and social networking features. Common ERP modules The core applications of an ERP system typically include: Sales Orders Purchase Orders Accounting and Finance Manufacturing Resource Planning (MRP) Customer Relationship Management (CRM) Human Resources (HR) Let's take a brief look at each of these modules and how they address specific business needs. Selling products to your customer Sales Orders, commonly abbreviated as SO, are documents that a business generates when they sell products and services to a customer. In an ERP system, the Sales Order module will usually allow management of customers and products to optimize efficiency for data entry of the sales order. Many sales orders begin as customer quotes. Quotes allow a salesperson to collect order information that may change as the customer makes decisions on what they want in their final order. Once a customer has decided exactly what they wish to purchase, the quote is turned into a sales order and is confirmed for processing. Depending on the requirements of the business, there are a variety of methods to determine when a customer is invoiced or billed for the order. This preceding screenshot shows a sample sales order in Odoo. Purchasing products from suppliers Purchase Orders, often known as PO, are documents that a business generates when they purchase products from a vendor. The Purchase Order module in an ERP system will typically include management of vendors (also called suppliers) as well as management of the products that the vendor carries. Much like sales order quotes, a purchase order system will allow a purchasing department to create draft purchase orders before they are finalized into a specific purchasing request. 
Often, a business will configure the Sales Order and Purchase Order modules to work together to streamline business operations. When a valid sales order is entered, most ERP systems will allow you to configure the system so that a purchase order can be automatically generated if the required products are not in stock to fulfill the sales order. ERP systems will allow you to set minimum quantities on-hand or order limits that will automatically generate purchase orders when inventory falls below a predetermined level. When properly configured, a purchase order system can save a significant amount of time in purchasing operations and assist in preventing supply shortages. Managing your accounts and financing in Odoo Accounting and finance modules integrate with an ERP system to organize and report business transactions. In many ERP systems, the accounting and finance module is known as GL for General Ledger. All accounting and finance modules are built around a structure known as the chart of accounts. The chart of accounts organizes groups of transactions into categories such as assets, liabilities, income, and expenses. ERP systems provide a lot of flexibility in defining the structure of your chart of accounts to meet the specific requirements for your business. Accounting transactions are grouped by date into periods (typically by month) for reporting purposes. These reports are most often known as financial statements. Common financial statements include balance sheets, income statements, cash flow statements, and statements of owner's equity. Handling your manufacturing operations The Manufacturing Resource Planning (MRP) module manages all the various business operations that go into the manufacturing of products. The fundamental transaction of an MRP module is a manufacturing order, which is also known as a production order in some ERP systems. A manufacturing order describes the raw products or subcomponents, steps, and routings required to produce a finished product. The raw products or subcomponents required to produce the finished product are typically broken down into a detailed list called a bill of materials or BOM. A BOM describes the exact quantities required of each component and are often used to define the raw material costs that go into manufacturing the final products for a company. Often an MRP module will incorporate several submodules that are necessary to define all the required operations. Warehouse management is used to define locations and sublocations to store materials and products as they move through the various manufacturing operations. For example, you may receive raw materials in one warehouse location, assemble those raw materials into subcomponents and store them in another location, then ultimately manufacture the end products and store them in a final location before delivering them to the customer. Managing customer relations in Odoo In today's business environment, quality customer service is essential to being competitive in most markets. A Customer Relationship Management (CRM) module assists a business in better handling the interactions they may have with each customer. Most CRM systems also incorporate a presales component that will manage opportunities, leads, and various marketing campaigns. Typically, a CRM system is utilized the most by the sales and marketing departments within a company. For this reason, CRM systems are often considered to be sales force automation tools or SFA tools. 
Sales personnel can set up appointments, schedule call backs, and employ tools to manage their communication. More modern CRM systems have started to incorporate social networking features to assist sales personnel in utilizing these newly emerging technologies. Configuring human resource applications in Odoo Human Resource modules, commonly known as HR, manage the workforce- or employee-related information in a business. Some of the processes ordinarily covered by HR systems are payroll, time and attendance, benefits administration, recruitment, and knowledge management. Increased regulations and complexities in payroll and benefits have led to HR modules becoming a major component of most ERP systems. Modern HR modules typically include employee kiosk functions to allow employees to self-administer many tasks such as putting in a leave request or checking on their available vacation time. Finding additional modules for your business requirements In addition to core ERP modules, Odoo has many more official and community-developed modules available. At the time of this article's publication, the Odoo application repository had 1,348 modules listed for version 7! Many of these modules provide small enhancements to improve usability like adding payment type to a sales order. Other modules offer e-commerce integration or complete application solutions, such as managing a school or hospital. Here is a short list of the more common modules you may wish to include in an Odoo installation: Point of Sale Project Management Analytic Accounting Document Management System Outlook Plug-in Country-Specific Accounting Templates OpenOffice Report Designer You will be introduced to various Odoo modules that extend the functionality of the base Odoo system. You can find a complete list of Odoo modules at http://apps.Odoo.com/. This preceding screenshot shows the module selection page in Odoo. Getting quickly into Odoo Do you want to jump in right now and get a look at Odoo 7 without any complex installations? Well, you are lucky! You can access an online installation of Odoo, where you can get a peek at many of the core modules right from your web browser. The installation is shared publicly, so you will not want to use this for any sensitive information. It is ideal, however, to get a quick overview of the software and to get an idea for how the interface functions. You can access a trial version of Odoo at https://www.Odoo.com/start. Odoo – an open source ERP solution Odoo is a collection of business applications that are available under an open source license. For this reason, Odoo can be used without paying license fees and can be customized to suit the specific needs of a business. There are many advantages to open source software solutions. We will discuss some of these advantages shortly. Free your company from expensive software license fees One of the primary downsides of most ERP systems is they often involve expensive license fees. Increasingly, companies must pay these license fees on an annual basis just to receive bug fixes and product updates. Because ERP systems can require companies to devote great amounts of time and money for setup, data conversion, integration, and training, it can be very expensive, often prohibitively so, to change ERP systems. For this reason, many companies feel trapped as their current ERP vendors increase license fees. Choosing open source software solutions, frees a company from the real possibility that a vendor will increase license fees in the years ahead. 
Modify the software to meet your business needs With proprietary ERP solutions, you are often forced to accept the software solution the vendor provides chiefly "as is". While you may have customization options and can sometimes pay the company to make specific changes, you rarely have the freedom to make changes directly to the source code yourself. The advantages to having the source code available to enterprise companies can be very significant. In a highly competitive market, being able to develop solutions that improve business processes and give your company the flexibility to meet future demands can make all the difference. Collaborative development Open source software does not rely on a group of developers who work secretly to write proprietary code. Instead, developers from all around the world work together transparently to develop modules, prepare bug fixes, and increase software usability. In the case of Odoo, the entire source code is available on Launchpad.net. Here, developers submit their code changes through a structure called branches. Changes can be peer reviewed, and once the changes are approved, they are incorporated into the final source code product. Odoo – AGPL open source license The term open source covers a wide range of open source licenses that have their own specific rights and limitations. Odoo and all of its modules are released under the Affero General Public License (AGPL) version 3. One key feature of this license is that any custom-developed module running under Odoo must be released with the source code. This stipulation protects the Odoo community as a whole from developers who may have a desire to hide their code from everyone else. This may have changed or has been appended recently with an alternative license. You can find the full AGPL license at http://www.gnu.org/licenses/agpl-3.0.html. A real-world case study using Odoo The goal is to do more than just walk through the various screens and reports of Odoo. Instead, we want to give you a solid understanding of how you would implement Odoo to solve real-world business problems. For this reason, this article will present a real-life case study in which Odoo was actually utilized to improve specific business processes. Silkworm, Inc. – a mid-sized screen printing company Silkworm, Inc. is a highly respected mid-sized silkscreen printer in the Midwest that manufactures and sells a variety of custom apparel products. They have been kind enough to allow us to include some basic aspects of their business processes as a set of real-world examples implementing Odoo into a manufacturing operation. Using Odoo, we will set up the company records (or system) from scratch and begin by walking through their most basic sales order process, selling T-shirts. From there, we will move on to manufacturing operations, where custom art designs are developed and then screen printed onto raw materials for shipment to customers. We will come back to this real-world example so that you can see how Odoo can be used to solve real-world business solutions. Although Silkworm is actively implementing Odoo, Silkworm, Inc. does not directly endorse or recommend Odoo for any specific business solution. Every company must do their own research to determine whether Odoo is a good fit for their operation. Summary In this article, we have learned about the ERP system and common ERP modules. An introduction about Odoo and features of it. 
Resources for Article: Further resources on this subject: Getting Started with Odoo Development[article] Machine Learning in IPython with scikit-learn [article] Making Goods with Manufacturing Resource Planning [article]

How to build and deploy Microservices using Payara Micro

Gebin George
28 Mar 2018
9 min read
Payara Micro offers a new way to run Java EE or microservice applications. It is based on the Web profile of GlassFish and bundles a few additional APIs. The distribution is designed with modern containerized environments in mind. Payara Micro is available to download as a standalone executable JAR, as well as a Docker image. It's an open source, MicroProfile-compatible runtime. Today, we will learn to use Payara Micro to build and deploy microservices. Here's a list of APIs that are supported in Payara Micro:

- Servlets, JSTL, EL, and JSPs
- WebSockets
- JSF
- JAX-RS
- EJB lite
- JTA
- JPA
- Bean Validation
- CDI
- Interceptors
- JBatch
- Concurrency
- JCache

We will be exploring how to build our services using Payara Micro in the next section.

Building services with Payara Micro

Let's start building parts of our Issue Management System (IMS), which is going to be a one-stop destination for collaboration among teams. As the name implies, this system will be used for managing issues that are raised as tickets and get assigned to users for resolution. To begin the project, we will identify our microservice candidates based on the business model of IMS. Here, let's define three functional services, which will be hosted in their own independent Git repositories:

- ims-micro-users
- ims-micro-tasks
- ims-micro-notify

You might wonder why these three, and why separate repositories? We could create much more fine-grained services, and perhaps it wouldn't be wrong to do so. The answer lies in understanding the following points:

- Isolating what varies: We need to be able to independently develop and deploy each unit. Changes to one business capability or domain shouldn't require changes in other services more often than desired.
- Organisation or team structure: If you define teams by business capability, then they can work independently of others and release features with greater agility. The tasks team should be able to evolve independently of the teams that are handling users or notifications. The functional boundaries should allow independent version and release cycle management.
- Transactional boundaries for consistency: Distributed transactions are not easy, so creating services for related features that are too fine-grained leads to more complexity than desired. You would need to become familiar with concepts like eventual consistency, which are not easy to achieve in practice.
- Source repository per service: Setting up a single repository that hosts all the services is ideal when it's the same team that works on these services and the project is relatively small. But we are building our fictional IMS, which is a large, complex system with many moving parts. Separate teams would get tightly coupled by sharing a repository. Moreover, versioning and tagging of releases will be yet another problem to solve.

The projects are created as standard Java EE projects, which are Skinny WARs, that will be deployed using the Payara Micro server. Payara Micro allows us to delay the decision of using a Fat JAR or Skinny WAR. This gives us flexibility in picking the deployment choice at a later stage. 
As Maven is a widely adopted build tool among developers, we will use the same to create our example projects, using the following steps: mvn archetype:generate -DgroupId=org.jee8ng -DartifactId=ims-micro-users - DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false mvn archetype:generate -DgroupId=org.jee8ng -DartifactId=ims-micro-tasks - DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false mvn archetype:generate -DgroupId=org.jee8ng -DartifactId=ims-micro-notify - DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false Once the structure is generated, update the properties and dependencies section of pom.xml with the following contents, for all three projects: <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> <failOnMissingWebXml>false</failOnMissingWebXml> </properties> <dependencies> <dependency> <groupId>javax</groupId> <artifactId>javaee-api</artifactId> <version>8.0</version> <scope>provided</scope> </dependency> Chapter 4 [ 93 ] <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.12</version> <scope>test</scope> </dependency> </dependencies> Next, create a beans.xml file under WEB-INF folder for all three projects: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://xmlns.jcp.org/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_2_0.xsd" bean-discovery-mode="all"> </beans> You can delete the index.jsp and web.xml files, as we won't be needing them. The following is the project structure of ims-micro-users. The same structure will be used for ims-micro-tasks and ims-micro-notify: The package name for users, tasks, and notify service will be as shown as the following: org.jee8ng.ims.users (inside ims-micro-users) org.jee8ng.ims.tasks (inside ims-micro-tasks) org.jee8ng.ims.notify (inside ims-micro-notify) Each of the above will in turn have sub-packages called boundary, control, and entity. The structure follows the Boundary-Control-Entity (BCE)/Entity-Control-Boundary (ECB) pattern. The JaxrsActivator shown as follows is required to enable the JAX-RS API and thus needs to be placed in each of the projects: import javax.ws.rs.ApplicationPath; import javax.ws.rs.core.Application; @ApplicationPath("resources") public class JaxrsActivator extends Application {} All three projects will have REST endpoints that we can invoke over HTTP. When doing RESTful API design, a popular convention is to use plural names for resources, especially if the resource could represent a collection. For example: /users /tasks The resource class names in the projects use the plural form, as it's consistent with the resource URL naming used. This avoids confusions such as a resource URL being called a users resource, while the class is named UserResource. Given that this is an opinionated approach, feel free to use singular class names if desired. Here's the relevant code for ims-micro-users, ims-micro-tasks, and ims-micronotify projects respectively. 
Under ims-micro-users, define the UsersResource endpoint:

package org.jee8ng.ims.users.boundary;

import javax.ws.rs.*;
import javax.ws.rs.core.*;

@Path("users")
public class UsersResource {

  @GET
  @Produces(MediaType.APPLICATION_JSON)
  public Response get() {
    return Response.ok("user works").build();
  }
}

Under ims-micro-tasks, define the TasksResource endpoint:

package org.jee8ng.ims.tasks.boundary;

import javax.ws.rs.*;
import javax.ws.rs.core.*;

@Path("tasks")
public class TasksResource {

  @GET
  @Produces(MediaType.APPLICATION_JSON)
  public Response get() {
    return Response.ok("task works").build();
  }
}

Under ims-micro-notify, define the NotificationsResource endpoint:

package org.jee8ng.ims.notify.boundary;

import javax.ws.rs.*;
import javax.ws.rs.core.*;

@Path("notifications")
public class NotificationsResource {

  @GET
  @Produces(MediaType.APPLICATION_JSON)
  public Response get() {
    return Response.ok("notification works").build();
  }
}

Once you build all three projects using mvn clean install, you will get your Skinny WAR files generated in the target directory, which can be deployed on the Payara Micro server.

Running services with Payara Micro

Download the Payara Micro server, if you haven't already, from this link: https://www.payara.fish/downloads. The micro server will have the name payara-micro-xxx.jar, where xxx is the version number, which might be different when you download the file. Here's how you can start Payara Micro with our services deployed locally. When doing so, we need to ensure that the instances start on different ports, to avoid any port conflicts:

>java -jar payara-micro-xxx.jar --deploy ims-micro-users/target/ims-micro-users.war --port 8081
>java -jar payara-micro-xxx.jar --deploy ims-micro-tasks/target/ims-micro-tasks.war --port 8082
>java -jar payara-micro-xxx.jar --deploy ims-micro-notify/target/ims-micro-notify.war --port 8083

This will start three instances of Payara Micro running on the specified ports. This makes our applications available under these URLs:

http://localhost:8081/ims-micro-users/resources/users/
http://localhost:8082/ims-micro-tasks/resources/tasks/
http://localhost:8083/ims-micro-notify/resources/notifications/

Payara Micro can be started on a non-default port by using the --port parameter, as we did earlier. This is useful when running multiple instances on the same machine. Another option is to use the --autoBindHttp parameter, which will attempt to bind on 8080 as the default port, and if that port is unavailable, it will try to bind on the next port up, repeating until it finds an available port.

Uber JAR option: There's one more feature that Payara Micro provides. We can generate an Uber JAR as well, which would be the Fat JAR approach that we learnt about in the Fat JAR section. To package our ims-micro-users project as an Uber JAR, we can run the following command:

java -jar payara-micro-xxx.jar --deploy ims-micro-users/target/ims-micro-users.war --outputUberJar users.jar

This will generate the users.jar file in the directory where you run this command. The size of this JAR will naturally be larger than our WAR file, since it also bundles the Payara Micro runtime in it. Here's how you can start the application using the generated JAR:

java -jar users.jar

The server parameters that we used earlier can be passed to this runnable JAR file too. Apart from the two choices we saw for running our microservice projects, there's a third option as well.
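That third option is to bootstrap the server from your own Java code. The following is only a minimal sketch of the idea, not code from the book: it assumes the Payara Micro artifact is on the classpath, the launcher class name is made up for illustration, and the exact class and method names should be verified against the Payara Micro version you are using.

import fish.payara.micro.BootstrapException;
import fish.payara.micro.PayaraMicro;

public class UsersLauncher {

    public static void main(String[] args) throws BootstrapException {
        // Configure an embedded Payara Micro instance and deploy our Skinny WAR,
        // mirroring what the --port and --deploy command-line options did earlier
        PayaraMicro.getInstance()
                   .setHttpPort(8081)
                   .addDeployment("ims-micro-users/target/ims-micro-users.war")
                   .bootStrap();
    }
}

Starting the server this way can be handy for integration tests, or when you want full control over startup order, because the deployment happens inside the same JVM as your own bootstrap logic.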
Payara Micro provides this API-based approach precisely so that the embedded server can be started programmatically, as sketched above. We will expand upon these three services as we progress further into the realm of cloud-based Java EE. In this article, we saw how to leverage the power of Payara Micro to run Java EE or microservice applications. You read an excerpt from the book Java EE 8 and Angular, written by Prashant Padmanabhan. This book helps you build high-performing enterprise applications using Java EE, powered by Angular at the frontend.
Ubuntu Server and WordPress in 15 Minutes Flat
Packt
21 Sep 2010
6 min read
(For more resources on WordPress, see here.)

Introduction

Ubuntu Server is a robust, powerful, and user-friendly distribution engineered by a dedicated team at Canonical as well as hundreds (if not thousands) of volunteers around the world. It powers thousands of server installations, both public and private, and is becoming a very popular and trusted solution for all types of server needs. In this article I will outline how to install Ubuntu Server toward the goal of running and publishing your own blog, using the WordPress blogging software. This can be used to run a personal blog out of your home, or even run a corporate blog in a workplace. Hundreds of companies use WordPress as their blogging software of choice; I've even deployed it at my office. I personally maintain about a dozen WordPress installations, all at varying levels of popularity and traffic. WordPress scales well, is easy to maintain, and is very intuitive to use. If you're not familiar with the WordPress blogging software, I'd invite you to go check it out at http://www.wordpress.com.

Requirements

In order to get this whole process started you'll only need a few simple things. First, a copy of Ubuntu Server. At the time of this writing, the latest release is 10.04.1 LTS (Long Term Support), which will be supported and provide security and errata updates for five years. You can download a free copy of Ubuntu Server here: http://www.ubuntu.com/server

In addition to a copy of Ubuntu Server you'll need, of course, a platform to install it on. This could be a physical server or a virtual machine. Your time (the 15-minute goal) may vary based on your physical hardware speeds. I based this article on the following platform and specifications:

Dell D630 Core 2 Duo 2.10 GHz
2 GB RAM
VirtualBox 3.2.8 Open Source Edition

Again, your mileage may vary depending on your hardware and network, but overall this article will quickly get you from zero to blogger in no time! The last requirement, which I mentioned only briefly in the last paragraph, is network access. If you're installing this on a physical machine, make sure that you'll have local network access to that machine. If you're planning on installing this on a virtual machine, make sure that you configure the virtual machine to use bridged networking, making it accessible to your local area network. To recap, your requirements are:

Ubuntu Server 10.04.1 LTS .iso (or burned CD)
Physical or virtual machine to provision
Local network access to said machine

Getting started

Once you have everything prepared we can jump right in and get started. Start up your virtual machine, or drop in your CD-ROM, and we'll start the installation. I've taken screenshots of each step in the process so you should be able to follow along closely. In most situations I chose the default configuration. If you are unsure about the configuration requirements during installation, it is generally safe to select the default. Again, just follow my lead and you should be fine!

This is the initial installer screen. You'll notice there are a number of options available. The highlighted option (also the default) of "Install Ubuntu Server" is what you'll want to select here. Next, the installer will prompt you for your preferred or native language. The default here is English, and was my selection. You'll notice that there is a huge number of available languages here. This is one of the goals and strengths of Ubuntu, "that software tools should be usable by people in their local language."
Select your preferred language and move on to the next step. The next step is to select your country. If you selected English as your primary language you'll then need to select your region. The default is United States, and was also my selection.

The Ubuntu installer can automatically detect your keyboard layout if you ask it to. The default prompt is no, which then allows you to select your keyboard from a list. I prefer to use the auto-detection, which I find a bit faster. You can use your own preference here, but be sure you select the correct layout. Nothing is more frustrating than not being able to type properly on your keyboard!

Next you'll need to assign a hostname to your machine. This is an enjoyable part of the process for me, as I get to assign a unique name to the machine I'll be working with. This always seems to personalize the process for me, and I've chosen a number of creative names for my machines. Select whatever you like here, just make sure it is unique compared to the other machines on your current network.

To help ensure that your clock is set properly, the Ubuntu installer will auto-detect or prompt you for your time zone. I've found that, when installing on physical hardware, the auto-detection is usually pretty accurate. When installing on virtual hardware it has a more difficult time. The screenshot above was taken on virtual hardware, which required me to select my time zone manually. If this is the case for you, find your time zone and hit ENTER.

The next step in the installation process is partitioning the disks. Unless you have specific needs here, I'd suggest safely selecting the defaults. If you're wondering whether or not you do have specific needs, you probably don't. For our intentions here, toward the goal of setting up a web server to run WordPress, the default is just fine. Select "Guided – use entire disk and set up LVM" and hit ENTER.

The installer will prompt you with a confirmation dialog before writing partitioning changes to the disk. Because making changes to partitions and filesystems will destroy any existing data on the disk(s), this requires secondary confirmation. If you are installing on a newly created virtual machine you should have nothing to worry about here. If you are installing on physical hardware, please note that it will destroy any existing data, and you should be OK with that action. You also have the option of defining the size of the disk made available to your installation. Again, I selected the default here, which is to use 100% of the available space. If you have more specific requirements, make them here.

Lastly, in regard to the partitioning, there is one final confirmation. This screen outlines the partitions that will be created or changed and the filesystems and formatting that will be done on those partitions. Each of these filesystem-related screens used the default values. If you've done the same, and you're OK with losing any existing data that might be on the machine, finalize this change by selecting YES.

At this point the installer will install the base system within the newly created partitions. This will take a few minutes (again, your mileage may vary depending on hardware type). There are no prompts during this process, just a progress bar and a readout of the packages that are being installed and configured.
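This excerpt ends with the base system install, but the article's goal is a running WordPress blog. As a rough sketch of where the setup typically goes next (these commands are not from the article; package names reflect the Ubuntu 10.04 era, and the database name, user, and paths are illustrative), the remaining work on the server usually looks something like this:

# Install the web, database, and PHP stack
sudo apt-get update
sudo apt-get install apache2 mysql-server php5 php5-mysql libapache2-mod-php5

# Download and unpack WordPress into the web root
wget http://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz
sudo cp -r wordpress/* /var/www/

# Create a database and user for WordPress, then finish the
# configuration through the web installer in a browser
mysql -u root -p -e "CREATE DATABASE wordpress; GRANT ALL ON wordpress.* TO 'wpuser'@'localhost' IDENTIFIED BY 'choose-a-password';"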
Getting started with Leaflet
Packt
14 Jun 2013
9 min read
(For more resources related to this topic, see here.)

Getting ready

First, we need to get an Internet browser, if we don't have one already installed. Leaflet is tested with modern desktop browsers: Chrome, Firefox, Safari 5+, Opera 11.11+, and Internet Explorer 7-10. Internet Explorer 6 support is stated as not perfect but accessible. We can pick one of them, or all of them if we want to be thorough.

Then, we need an editor. Editors come in many shapes and flavors: free or not free, with or without syntax highlighting, or remote file editing. A quick search on the Internet will provide thousands of capable editors. Notepad++ (http://notepad-plus-plus.org/) for Windows, Komodo Edit (http://www.activestate.com/komodo-edit) for Mac OS, or Vim (http://www.vim.org/) for Linux are among them.

We can download Leaflet's latest stable release (v0.5.1 at the time of writing) and extract the content of the ZIP file somewhere appropriate. The ZIP file contains the sources as well as a prebuilt version of the library that can be found in the dist directory. Optionally, we can build from the sources included in the ZIP file; see this article's Building Leaflet from source section. Finally, let's create a new project directory on our hard drive and copy the dist folder from the extracted Leaflet package to it, ensuring we rename it to leaflet.

How to do it...

Note that the following code will constitute our code base throughout the rest of the article. Create a blank HTML file called index.html in the root of our project directory. Add the code given here and use the browser installed previously to execute it:

<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" type="text/css" href="leaflet/leaflet.css" />
  <!--[if lte IE 8]>
  <link rel="stylesheet" type="text/css" href="leaflet/leaflet.ie.css" />
  <![endif]-->
  <script src="leaflet/leaflet.js"></script>
  <style>
    html, body, #map { height: 100%; }
    body { padding: 0; margin: 0; }
  </style>
  <title>Getting Started with Leaflet</title>
</head>
<body>
  <div id="map"></div>
  <script type="text/javascript">
    var map = L.map('map', {
      center: [52.48626, -1.89042],
      zoom: 14
    });
    L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
      attribution: '© OpenStreetMap contributors'
    }).addTo(map);
  </script>
</body>
</html>

The following screenshot is of the first map we have created:

How it works...

The index.html file we created is a standardized file that all Internet browsers can read and display. Our file is based on the HTML doctype standard produced by the World Wide Web Consortium (W3C), which is only one of many that can be used, as seen at http://www.w3.org/QA/2002/04/valid-dtd-list.html. Our index file specifies the doctype on the first line of code as required by the W3C, using the <!DOCTYPE html> markup.

We added a link to Leaflet's main CSS file in the head section of our code:

<link rel="stylesheet" type="text/css" href="leaflet/leaflet.css" />

We also added a conditional statement to link an Internet Explorer 8 or lower only stylesheet when these browsers interpret the HTML code:

<!--[if lte IE 8]>
<link rel="stylesheet" type="text/css" href="leaflet/leaflet.ie.css" />
<![endif]-->

This stylesheet mainly addresses Internet Explorer specific issues with borders and margins. Leaflet's JavaScript file is then referred to using a script tag:

<script src="leaflet/leaflet.js"></script>

We are using the compressed JavaScript file that is appropriate for production but very inefficient for debugging.
In the compressed version, every white space character has been removed, as shown in the following comparison, which is a straight copy-paste of the _onMouseClick function from the source of both files:

compressed:
_onMouseClick:function(t){!this._loaded||this.dragging&&this.dragging.moved()||(this.fire("preclick"),this._fireMouseEvent(t))},

uncompressed:
_onMouseClick: function (e) {
  if (!this._loaded || (this.dragging && this.dragging.moved())) { return; }
  this.fire('preclick');
  this._fireMouseEvent(e);
},

To make things easier, we can replace leaflet.js with leaflet-src.js, an uncompressed version of the library. We also added styles to our document to make the map fit nicely in our browser window:

html, body, #map { height: 100%; }
body { padding: 0; margin: 0; }

The <div> tag with the id attribute map in the document's body is the container of our map. It must be given a height, otherwise the map won't be displayed:

<div id="map" style="height: 100%;"></div>

Finally, we added a script section enclosing the map's initialization code, instantiating a Map object using the L.map(…) constructor and a TileLayer object using the L.tileLayer(…) constructor. The script section must be placed after the map container declaration, otherwise Leaflet will be referencing an element that does not yet exist when the page loads.

When instantiating a Map object, we pass the id of the container of our map and an array of Map options:

var map = L.map('map', {
  center: [52.48626, -1.89042],
  zoom: 14
});

There are a number of Map options affecting the state, the interactions, the navigation, and the controls of the map. See the documentation to explore those in detail at http://leafletjs.com/reference.html#map-options. Next, we instantiated a TileLayer object using the L.tileLayer(…) constructor and added it to the map using the TileLayer.addTo(…) method:

L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '© OpenStreetMap contributors'
}).addTo(map);

Here, the first parameter is the URL template of our tile provider—that is, OpenStreetMap—and the second a noncompulsory array of TileLayer options, including the recommended attribution text for our map tile's source. The TileLayer options are also numerous. Refer to the documentation for the exhaustive list at http://leafletjs.com/reference.html#tilelayer-options.

There's more...

Let's have a look at some of the Map options, as well as how to build Leaflet from source or use different tile providers.

More on Map options

We have encountered a few Map options in the code for this recipe, namely center and zoom. We could have instantiated our OpenStreetMap TileLayer object before our Map object and passed it as a Map option using the layers option. We also could have specified a minimum and maximum zoom or bounds to our map, using minZoom and maxZoom (integers) and maxBounds, respectively. The latter must be an instance of LatLngBounds:

var bounds = L.latLngBounds([
  L.latLng([52.312, -2.186]),
  L.latLng([52.663, -1.594])
]);

We also came across the TileLayer URL template that will be used to fetch the tile images, replacing {s} by a subdomain and {x}, {y}, and {z} by the tile coordinates and zoom. The subdomains can be configured by setting the subdomains property of a TileLayer object instance. Finally, the attribution property was set to display the owner of the copyright of the data and/or a description.

Building Leaflet from source

A Leaflet release comes with the source code that we can build using Node.js.
This will be a necessity if we want to fix annoying bugs or add awesome new features. The source code itself can be found in the src directory of the extracted release ZIP file. Feel free to explore and look at how things get done within Leaflet. First things first, go to http://nodejs.org and get the install file for your platform. It will install Node.js along with npm, a command line utility that will download and install Node Packaged Modules and resolve their dependencies for us. Following is the list of modules we are going to install:

Jake: A JavaScript build program similar to make
JSHint: It will detect potential problems and errors in JavaScript code
UglifyJS: A mangler and compressor library for JavaScript

Hopefully, we won't need to delve into the specifics of these tools to build Leaflet from source. So let's open a command line interpreter—cmd.exe on Windows, or a terminal on Mac OS X or Linux—and navigate to Leaflet's src directory using the cd command, then use npm to install Jake, JSHint, and UglifyJS:

cd leaflet/src
npm install -g jake
npm install jshint
npm install uglify-js

We can now run Jake in Leaflet's directory:

jake

What about tile providers?

We could have chosen a different tile provider, as OpenStreetMap is free of charge but has its limitations in regard to a production environment. A number of web services provide tiles but might come at a price depending on your usage: CloudMade and MapQuest are among them. These providers serve tiles using the OpenStreetMap tile scheme described at http://wiki.openstreetmap.org/wiki/Slippy_map_tilenames. Remember the way we added the OpenStreetMap layer to the map? Here is how the same is done for these providers.

CloudMade:

L.tileLayer('http://{s}.tile.cloudmade.com/API-key/997/256/{z}/{x}/{y}.png', {
  attribution: 'Map data © <a href="http://openstreetmap.org">OpenStreetMap</a> contributors, <a href="http://creativecommons.org/licenses/by-sa/2.0/">CC-BY-SA</a>, Imagery © <a href="http://cloudmade.com">CloudMade</a>'
}).addTo(map);

MapQuest:

L.tileLayer('http://{s}.mqcdn.com/tiles/1.0.0/map/{z}/{x}/{y}.png', {
  attribution: 'Tiles Courtesy of <a href="http://www.mapquest.com/" target="_blank">MapQuest</a> <img src="http://developer.mapquest.com/content/osm/mq_logo.png">',
  subdomains: ['otile1', 'otile2', 'otile3', 'otile4']
}).addTo(map);

You will learn more about the Layer URL template and the subdomains option in the documentation at http://leafletjs.com/reference.html#tilelayer. Leaflet also supports Web Map Service (WMS) tile layers—read more about it at http://leafletjs.com/reference.html#tilelayer-wms—and GeoJSON layers, covered in the documentation at http://leafletjs.com/reference.html#geojson.

Summary

In this article we learned how to create a map using Leaflet and created our first map. We learned about different map options and also how to build Leaflet from source.

Resources for Article:

Further resources on this subject:
Using JavaScript Effects with Joomla! [Article]
Getting Started with OpenStreetMap [Article]
Quick start [Article]
Testing Single Page Applications (SPAs) using Vue.js developer tools
Pravin Dhandre
25 May 2018
8 min read
Testing, especially for big applications, is paramount – especially when deploying your application to a development environment. Whether you choose unit testing or browser automation, there are a host of articles and books available on the subject. In this tutorial, we cover the usage of the Vue developer tools to test Single Page Applications. We will also touch upon other alternative tools like Nightwatch.js, Selenium, and TestCafe for testing. This article is an excerpt from a book written by Mike Street, titled Vue.js 2.x by Example.

Using the Vue.js developer tools

The Vue developer tools are available for Chrome and Firefox and can be downloaded from GitHub. Once installed, they become an extension of the browser developer tools. For example, in Chrome, they appear after the Audits tab. The Vue developer tools will only work when you are using Vue in development mode. By default, the un-minified version of Vue has the development mode enabled. However, if you are using the production version of the code, the development tools can be enabled by setting the devtools variable to true in your code:

Vue.config.devtools = true

We've been using the development version of Vue, so the dev tools should work with all three of the SPAs we have developed. Open the Dropbox example and open the Vue developer tools.

Inspecting Vue component data and computed values

The Vue developer tools give a great overview of the components in use on the page. You can also drill down into the components and preview the data in use on that particular instance. This is perfect for inspecting the properties of each component on the page at any given time. For example, if we inspect the Dropbox app and navigate to the Components tab, we can see the <Root> Vue instance and the <DropboxViewer> component. Clicking this will reveal all of the data properties of the component – along with any computed properties. This lets us validate whether the structure is constructed correctly, along with the computed path property.

Drilling down into each component, we can access individual data objects and computed properties. Using the Vue developer tools for inspecting your application is a much more efficient way of validating data while creating your app, as it saves having to place several console.log() statements.

Viewing Vuex mutations and time-travel

Navigating to the next tab, Vuex, allows us to watch store mutations taking place in real time. Every time a mutation is fired, a new line is created in the left-hand panel. This panel allows us to view what data is being sent, and what the Vuex store looked like before and after the data had been committed. It also gives you several options to revert, commit, and time-travel to any point.

Loading the Dropbox app, several structure mutations immediately populate within the left-hand panel, listing the mutation name and the time they occurred. This is the code pre-caching the folders in action. Clicking on each one will reveal the Vuex store state – along with the mutation containing the payload sent. The state display reflects the store after the payload has been sent and the mutation committed. To preview what the state looked like before that mutation, select the preceding entry.

On each entry, next to the mutation name, you will notice three symbols that allow you to carry out several actions and directly mutate the store in your browser:

Commit this mutation: This allows you to commit all the data up to that point.
This will remove all of the mutations from the dev tools and update the Base State to this point. This is handy if there are several mutations occurring that you wish to keep track of.

Revert this mutation: This will undo the mutation and all mutations after this point. This allows you to carry out the same actions again and again without pressing refresh or losing your current place. For example, when adding a product to the basket in our shop app, a mutation occurs. Using this would allow you to remove the product from the basket and undo any following mutations without navigating away from the product page.

Time-travel to this state: This allows you to preview the app and state at that particular mutation, without reverting any mutations that occur after the selected point.

The mutations tab also allows you to commit or revert all mutations at the top of the left-hand panel. Within the right-hand panel, you can also import and export a JSON-encoded version of the store's state. This is particularly handy when you want to re-test several circumstances and instances without having to reproduce several steps.

Previewing event data

The Events tab of the Vue developer tools works in a similar way to the Vuex tab, allowing you to inspect any events emitted throughout your app. Changing the filters in this app emits an event each time the filter type is updated, along with the filter query. The left-hand panel again lists the name of the event and the time it occurred. The right panel contains information about the event, including its component origin and payload. This data allows you to ensure the event data is as you expected it to be and, if not, helps you locate where the event is being triggered.

The Vue dev tools are invaluable, especially as your JavaScript application gets bigger and more complex. Open the shop SPA we developed and inspect the various components and Vuex data to get an idea of how this tool can help you create applications that only commit the mutations they need to and emit only the events they have to.

Testing your Single Page Application

The majority of Vue testing suites revolve around having command-line knowledge and creating a Vue application using the CLI (command-line interface). Along with creating applications in frontend-compatible JavaScript, Vue also has a CLI that allows you to create applications using component-based files. These are files with a .vue extension and contain the template HTML along with the JavaScript required for the component. They also allow you to create scoped CSS – styles that only apply to that component. If you chose to create your app using the CLI, all of the theory and a lot of the practical knowledge you have learned in this book can easily be ported across.

Command-line unit testing

Along with component files, the Vue CLI allows you to integrate more easily with command-line unit testing tools, such as Jest, Mocha, Chai, and TestCafe (https://testcafe.devexpress.com/). For example, TestCafe allows you to specify several different tests, from checking whether content exists to clicking buttons to test functionality. An example of a TestCafe test checking whether the filtering component in our first app contains the word "Filter" would be:

test('The filtering contains the word "filter"', async testController => {
  const filterSelector = await new Selector('body > #app > form > label:nth-child(1)');
  await testController.expect(filterSelector.innerText).eql('Filter');
});

This test would then equate to true or false.
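TestCafe is not the only route; a component-level unit test with Jest and the official Vue test utilities is another option. The snippet below is only a sketch and is not from the book: the FilterComponent import path and its rendered markup are assumptions for illustration, and the package name and API (@vue/test-utils with its shallowMount helper are used here) should be checked against the version of the tooling you install.

// filter.spec.js – a hypothetical Jest spec for a filtering component
import { shallowMount } from '@vue/test-utils';
import FilterComponent from '../src/components/Filter.vue'; // assumed path

describe('FilterComponent', () => {
  test('renders a label containing the word "Filter"', () => {
    // Mount the component on its own, without rendering child components
    const wrapper = shallowMount(FilterComponent);
    expect(wrapper.find('label').text()).toContain('Filter');
  });
});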
Unit tests are generally written in conjunction with components themselves, allowing components to be reused and tested in isolation. This allows you to check that external factors have no bearing on the output of your tests. Most command-line JavaScript testing libraries will integrate with Vue.js; there is a great list available in the awesome Vue GitHub repository (https://github.com/vuejs/awesome-vue#test).

Browser automation

The alternative to command-line unit testing is to automate your browser with a testing suite. This kind of testing is still triggered via the command line, but rather than integrating directly with your Vue application, it opens the page in the browser and interacts with it like a user would. A popular tool for doing this is Nightwatch.js (http://nightwatchjs.org/). You may use this suite for opening your shop and interacting with the filtering component or product list ordering and comparing the result. The tests are written in very colloquial English and are not restricted to being on the same domain name or file network as the site to be tested. The library is also language agnostic – working for any website regardless of what it is built with. The example Nightwatch.js gives on their website is for opening Google and ensuring the first result of a Google search for rembrandt van rijn is the Wikipedia entry:

module.exports = {
  'Demo test Google' : function (client) {
    client
      .url('http://www.google.com')
      .waitForElementVisible('body', 1000)
      .assert.title('Google')
      .assert.visible('input[type=text]')
      .setValue('input[type=text]', 'rembrandt van rijn')
      .waitForElementVisible('button[name=btnG]', 1000)
      .click('button[name=btnG]')
      .pause(1000)
      .assert.containsText('ol#rso li:first-child', 'Rembrandt - Wikipedia')
      .end();
  }
};

An alternative to Nightwatch is Selenium (http://www.seleniumhq.org/). Selenium has the advantage of having a Firefox extension that allows you to visually create tests and commands.

We covered usage of the Vue.js dev tools and learned to build automated tests for your web applications. If you found this tutorial useful, do check out the book Vue.js 2.x by Example for a complete resource on building single-page applications with Vue.js.

Building your first Vue.js 2 Web application
5 web development tools will matter in 2018
5 Key skills for web and app developers to learn in 2020
Richard Gall
20 Dec 2019
5 min read
Web and application development can change quickly. Much of this is driven by user behavior and user needs - and if you can't keep up with it, it's going to be impossible to keep your products and projects relevant. The only way to do that, of course, is to ensure your skills are up to date and constantly looking forward to what might be coming. You can't predict the future, but you can prepare yourself. Here are 5 key skill areas that we think web and app developers should focus on in 2020.

Artificial intelligence

It's impossible to overstate the importance of AI in application development at the moment. Yes, it's massively hyped, but that's largely because it's so ubiquitous. Indeed, to a certain extent many users won't even realise they're interacting with AI or machine learning systems. You might even say that that's when it's used best. The ways in which AI can be used by web and app developers are extensive and constantly growing. Perhaps the most obvious is personal recommendations, but it's chatbots and augmented reality that are really pushing the boundaries of what's possible with AI in the development field.

Artificial intelligence might sound daunting if you're primarily a web developer. But it shouldn't - you don't need a computer science or math degree to use it effectively. There are now many platforms and tools available to use machine learning technology out of the box, from Azure's Cognitive Services and Amazon's Rekognition to ML Kit, built by Google for mobile developers. Learn how to build smart, AI-backed applications with Azure Cognitive Services in Azure Cognitive Services for Developers [Video].

New programming languages

Earlier this year I wrote about how polyglot programming (being able to use more than one language) "allows developers to choose the right language to solve tough engineering problems." For web and app developers, who are responsible for building increasingly complex applications and websites in as elegant and as clean a manner as possible, this is particularly true. The emergence of languages like TypeScript and Kotlin attests to the importance of keeping your programming proficiency up to date. Moreover, you could even say that they highlight that, however popular core languages like JavaScript and Java are, there are now some tasks that they're just not capable of dealing with. So, this doesn't mean you should just ditch your favored programming languages in 2020. But it does mean that learning a new language is a great way to build your skill set. Explore new programming languages with eBook and video bundles here.

Accessibility

Web accessibility is a topic that has been overlooked for too long. That needs to change in 2020. It's not hard to see how it gets ignored. When the pressure to deliver software is high, thinking about the consequences of specific design decisions on different types of users is almost certainly going to be pushed to the bottom of developers' priorities. But if anything this means we need a two-pronged approach - on the one hand, developers need to commit to learning web accessibility themselves, but they also need to be evangelists in communicating its importance to non-technical team members. The benefits of this will be significant: it could be a big step towards a more inclusive digital world, but from a personal perspective, it will also help developers to become more aware and well-rounded in their design decisions.
And insofar as no one's taking real leadership for this at the moment, it's the perfect opportunity for developers to prove their leadership chops. Read next: It's a win for Web accessibility as courts can now order companies to make their sites WCAG 2.0 compliant

JAMstack and (sort of) static websites

Traditional CMSes like WordPress can be a pain for developers if you want to build something that is more customized than what you get out of the box. This is one of the reasons why JAMstack (a term coined by Netlify) is so popular - combining JavaScript, APIs, and markup, it offers web developers a way to build secure, performant websites very quickly. To a certain extent, JAMstack is the next generation of static websites; but JAMstack sites aren't exactly static, as they call data from the server side through APIs. Developers then call on the help of templated markup - usually in the form of static site generators (like Gatsby.js) or build tools - to act as a pre-built frontend. The benefits of JAMstack as an approach are well documented. Perhaps the most important, though, is that it offers a really great developer experience. It allows you to build with the tools that you want to use, to integrate with services you might already be using, and it minimizes the level of complexity that can come with some development approaches. Get started with Gatsby.js and find out how to use it in JAMstack with The Gatsby Masterclass video.

State management

We've talked about state management recently - in fact, lots of people have been talking about it. We won't go into detail about what it involves, but the issue has grown as increasing app complexity has made it harder to gain a single source of truth on what's actually happening inside our applications. If you haven't already, it's essential to learn some of the design patterns and approaches for managing application state that have emerged over the last couple of years. The two most popular - Flux and Redux - are very closely associated with React, but for Vue developers Vuex is well worth learning. Thinking about state management can feel brain-wrenching at times. However, getting to grips with it can really help you to feel more in control of your projects. Get up and running with Redux quickly with the Redux Quick Start Guide.
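Before diving into that guide, it helps to see how small the core Redux idea really is. The sketch below is generic and not tied to any particular project; the action names and state shape are made up purely for illustration.

// A minimal Redux-style store: state only changes via dispatched actions
import { createStore } from 'redux';

const initialState = { count: 0 };

// The reducer is a pure function: (previous state, action) -> next state
function counter(state = initialState, action) {
  switch (action.type) {
    case 'INCREMENT':
      return { count: state.count + 1 };
    case 'DECREMENT':
      return { count: state.count - 1 };
    default:
      return state;
  }
}

const store = createStore(counter);
store.subscribe(() => console.log(store.getState())); // single source of truth
store.dispatch({ type: 'INCREMENT' }); // logs { count: 1 }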
What we can learn from attacks on the WEP Protocol
Packt
12 Aug 2015
4 min read
Over the past years, many types of attacks on the WEP protocol have been undertaken. Being successful with such an attack is an important milestone for anyone who wants to undertake penetration tests of wireless networks. In this article by Marco Alamanni, the author of Kali Linux Wireless Penetration Testing Essentials, we will take a look at the basics of the WEP protocol and the most common types of attacks against it.

What is the WEP protocol?

The WEP protocol was introduced with the original 802.11 standard as a means to provide authentication and encryption to wireless LAN implementations. It is based on the Rivest Cipher 4 (RC4) stream cypher with a Pre-shared Secret Key (PSK) of 40 or 104 bits, depending on the implementation. A 24-bit pseudorandom Initialization Vector (IV) is concatenated with the pre-shared key to form the per-packet key that seeds RC4, which then generates the keystream used for the actual encryption and decryption. Thus, the resulting per-packet key is 64 or 128 bits long. In the encryption phase, the keystream is XOR-ed with the plaintext data to obtain the encrypted data; in the decryption phase, the encrypted data is XOR-ed with the same keystream to recover the plaintext data.

Attacks against WEP and why they occur

WEP is an insecure protocol and has been deprecated by the Wi-Fi Alliance. It suffers from various vulnerabilities related to the generation of the keystreams, to the use of IVs (initialization vectors), and to the length of the keys. The IV is used to add randomness to the keystream, trying to avoid the reuse of the same keystream to encrypt different packets. This purpose has not been accomplished in the design of WEP, because the IV is only 24 bits long (with 2^24 = 16,777,216 possible values) and it is transmitted in clear text within each frame. Thus, after a certain period of time (depending on the network traffic), the same IV, and consequently the same keystream, will be reused, allowing the attacker to collect the relative cypher texts and perform statistical attacks to recover plain texts and the key.

FMS attacks on WEP

The first well-known attack against WEP was the Fluhrer, Mantin, and Shamir (FMS) attack, back in 2001. The FMS attack relies on the way WEP generates the keystreams and on the fact that it also uses weak IVs to generate weak keystreams, making it possible for an attacker to collect a sufficient number of packets encrypted with these keys, analyze them, and recover the key. The number of IVs to be collected to complete the FMS attack is about 250,000 for 40-bit keys and 1,500,000 for 104-bit keys. The FMS attack has been enhanced by KoreK, improving its performance. Andreas Klein found more correlations between the RC4 keystream and the key than the ones discovered by Fluhrer, Mantin, and Shamir, which can be used to crack the WEP key.

PTW attacks on WEP

In 2007, Pyshkin, Tews, and Weinmann (PTW) extended Andreas Klein's research and improved the FMS attack, significantly reducing the number of IVs needed to successfully recover the WEP key. Indeed, the PTW attack does not rely on weak IVs as the FMS attack does, and it is very fast and effective. It is able to recover a 104-bit WEP key with a success probability of 50% using less than 40,000 frames, and with a probability of 95% using 85,000 frames. The PTW attack is the default method used by Aircrack-ng to crack WEP keys.
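Before moving on to the replay technique, here is a small, illustrative Python sketch of WEP-style per-packet encryption, to make the keystream-and-XOR relationship concrete. It is not taken from the book and omits real-world details such as the CRC-32 integrity value and the 802.11 frame format; the IV and key bytes are made-up values.

def rc4(key, data):
    # Key-scheduling algorithm (KSA)
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    # Pseudo-random generation algorithm (PRGA): XOR the keystream with the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(byte ^ s[(s[i] + s[j]) % 256])
    return bytes(out)

iv = bytes([0x0A, 0x0B, 0x0C])                 # 24-bit IV, sent in clear text
psk = bytes([0x01, 0x02, 0x03, 0x04, 0x05])    # 40-bit pre-shared key
per_packet_key = iv + psk                      # IV || PSK seeds RC4 for this packet

plaintext = b"hello, wireless"
ciphertext = rc4(per_packet_key, plaintext)
# XOR is symmetric: applying the same keystream again recovers the plaintext
assert rc4(per_packet_key, ciphertext) == plaintext

Because the IV travels in the clear, an observer who captures enough frames with repeated or weak IVs can start correlating keystreams, which is exactly what the FMS and PTW attacks exploit.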
ARP Request replay attacks on WEP

Both FMS and PTW attacks need to collect quite a large number of frames to succeed and can be conducted passively, sniffing the wireless traffic on the same channel of the target AP and capturing frames. The problem is that, in normal conditions, we will have to spend quite a long time to passively collect all the necessary packets for the attacks, especially with the FMS attack.

To accelerate the process, the idea is to reinject frames in the network to generate traffic in response so that we can collect the necessary IVs more quickly. A type of frame that is suitable for this purpose is the ARP request, because the AP broadcasts it, each time with a new IV. As we are not associated with the AP, if we send frames to it directly, they are discarded and a de-authentication frame is sent. Instead, we can capture ARP requests from associated clients and retransmit them to the AP. This technique is called the ARP Request Replay attack and is also adopted by Aircrack-ng for the implementation of the PTW attack.

Find out more to become a master penetration tester by reading Kali Linux Wireless Penetration Testing Essentials.
Building Objects in Inkscape
Packt
22 May 2012
9 min read
(For more resources on Inkscape, see here.)

Working with objects

Objects in Inkscape are any shapes that make up your overall drawing. This means that any text, path, or shape that you create is essentially an object. Let's start by making a simple object and then changing some of its attributes.

Time for action – creating a simple object

Inkscape can create predefined shapes that are part of the SVG standard. These include rectangles/squares, circles/ellipses/arcs, stars, polygons, and spirals. To create any of these shapes, you can select items from the toolbar. However, you can also create more freehand-based objects as well. Let's look at how we can create a simple freehand triangle:

Select the Bezier tool. Click once where you want the first corner and then move the mouse/pointer to the next corner. A node appears with the click, followed by a freehand line. When you have the length of the first side of the triangle estimated, click for the second corner. Move the mouse to form the second side and click for the third corner. Move the mouse back to the first corner node and click it to close the triangle.

Now save the file. From the main menu, select File and then Save. We will use this triangle to build a graphic later in this book, so choose a location to save so that you will know where to find the file.

Now that the basic triangle is saved, let's also experiment with how we can manipulate the shape itself and/or the shape's position on the canvas. Let's start with manipulating the triangle. Select the triangle and drag a handle to a new location; you have essentially skewed the triangle. To change the overall shape of the triangle, select the triangle, then click the Edit path by Nodes tool (or press F2). The nodes of the triangle are now displayed.

Nodes are points on a path that define the path's shape. Click a node and you can drag it to another location to manipulate the triangle's overall shape. Double-click between two nodes to add another node and change the shape. If you decide that you don't want the extra node, click it (the node turns red), press Delete on your keyboard, and it disappears. You can also use the control bar to add, delete, or manipulate the path/shape and nodes.

If you want to change the position of the shape on the canvas, choose the Select tool in the toolbox, then click and drag the shape to where you need it to be. Change the size of the shape by also choosing the Select tool from the toolbox, clicking and holding the edge of the shape at a handle (the small squares or circles at the edges), and dragging it outward to grow larger or inward to shrink until the shape is of the desired size. You can also rotate an object. Choose the Select tool from the toolbox and single-click the shape until the nodes turn to arrows with curves (this might require you to click the object a couple of times). When you see the curved arrow nodes, click and drag a corner node to rotate the object until it is rotated and positioned correctly. There is no need to save this file again after we have manipulated it—unless you want to reference this new version of the triangle for future projects.

What just happened?

We created a free-form triangle and saved it for a future project. We also manipulated the shape in a number of ways—we used the nodes to change the skew of the overall shape, added nodes to change the shape completely, and moved the shape around on the canvas.
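Because Inkscape saves its drawings as SVG, the triangle built above ends up as a path element in the saved file. The snippet below is a hand-written approximation rather than the exact markup Inkscape generates (Inkscape adds its own namespaces and metadata), and the coordinates and colors are made up for illustration:

<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">
  <!-- A three-node closed path: M moves to the first corner, L draws the
       sides, and Z closes the path back to the starting node -->
  <path d="M 20,180 L 100,20 L 180,180 Z"
        fill="#2e8b57" stroke="#000000" stroke-width="2" />
</svg>

Dragging nodes with the Edit path by Nodes tool simply rewrites the coordinates inside that d attribute, which is why node edits never degrade the drawing the way pixel edits would.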
Fill and Stroke

As you've already noticed, when creating objects in Inkscape they have color associated with them. You can fill an object with a color as well as give the object an outline or stroke. This section will explain how to change these characteristics of an object in Inkscape.

Fill and Stroke dialog

You can use the Fill and Stroke dialog from the main menu to change the fill colors of an object.

Time for action – using the Fill and Stroke dialog

Let's open the dialog and get started. Open your triangle Inkscape file again and select the triangle. From the main menu, choose Object | Fill and Stroke (or use the Shift + Ctrl + F keyboard shortcut). The Fill and Stroke dialog appears on the right-hand side of your screen. Notice it has three tabs: Fill, Stroke paint, and Stroke style.

Select the Fill tab (if not already selected). Here are the options for fill:

Type of fill: The buttons below the Fill tab allow you to select the type of fill you would like to use: no fill (the button with the X), flat color, or a linear or radial gradient. In the previous example screenshot, the flat fill button is selected.
Color picker: Another set of tabs below the fill type area is presented: RGB, CMYK, HSL, and Wheel. You can use any of these to choose a color. The most intuitive option is Wheel, as it allows you to visually see all the colors and rotate a triangle to the color of your choice. Once a color is chosen, the exact color can be seen in various values on the other color picker tabs.
Blur: Below the color area, you also have an option to blur the object's fill. If you move the sliding lever to the right, the blur of the fill will move outward.
Opacity: Lastly, there is the opacity slider. Moving this slider to the right gives the object an alpha (opacity) setting, making it a bit more transparent.

In the Fill and Stroke dialog, if you select the Stroke paint tab, you will notice it looks very much like the Fill tab. You can remove the stroke (outline) of the object, set the color, and determine whether it is a flat color or gradient. The last tab, Stroke style, is where you can most notably set the width of the stroke. You can also use this tab to determine what types of corners or joins an object has (round or square corners) and what the end caps of the border look like. The Dashes field gives options for the stroke line type. Start, Mid, and End Markers allow you to add end points to your strokes.

For our triangle object, use the Fill tab and choose a green color, no stroke, and 100 percent opacity.

What just happened?

You learned where to open the Fill and Stroke dialog, how to adjust the fill of an object, how to use blur and opacity, and how to change the stroke color and the weight of the stroke line. Next, let's learn other ways to change the fill and stroke options.
Color palette bar

You can also use the color palette bar to change the fill color.

Time for action – using the color palette

Let's learn all the tips and tricks for using the color palette bar. From the palette bar, click a color and drag it from the palette onto the object to change its fill. You can also change an object's fill and stroke color in a number of other ways:

Select an object on the canvas and then click a color box in the palette to immediately set the fill of the object.
Select an object on the canvas and then right-click a color box in the palette. A popup menu appears with options to set the fill (and stroke).
If you hold the Shift key and drag a color box onto an object, it changes the stroke color.
Shift + left-click a color box to immediately set the stroke color.

Note that you can use the scroll bar just below the viewable color swatches on the color palette to scroll right and see even more color choices.

What just happened?

You learned how to change the fill and stroke color of an object by using the color swatches on the color palette bar on the main screen of Inkscape.

Dropper

Yet another way to change the fill or stroke of an object is to use the dropper. Let's learn how to use it.

Time for action – using the dropper tool

Open an Inkscape file with objects on the canvas, or create a quick object to try this out. Select an object on the canvas. Select the dropper tool from the toolbar or use the shortcut key F7. Then click anywhere in the drawing that has the color you want to pick up. The chosen color will be assigned to the selected object's fill. Alternatively, use Shift + click to set the stroke color. Be aware of the tool control bar and the dropper tool controls:

The two buttons affect the opacity of the object, especially if it is different than the 100% setting.
If Pick is disabled, then the color chosen by the dropper looks exactly like it is on screen.
If Pick is enabled and Assign is disabled, then the color picked by the dropper is the one that the object would have if its opacity was 100%.
If Pick is enabled and Assign is enabled, then the color and opacity are both copied from the picked object.

What just happened?

By using the dropper tool, you learned how to change an object's color by picking up a color from elsewhere on the screen.