
How-To Tutorials - Programming

1081 Articles

Gain Practical Expertise with the Latest Edition of Software Architecture with C# 9 and .NET 5 

Expert Network
08 Jul 2021
3 min read
Software architecture is one of the most discussed topics in the software industry today, and its importance will only grow in the future. The speed at which new features are added to software solutions keeps increasing, and new architectural opportunities keep emerging. To strengthen your command of software architecture, Packt brings you the Second Edition of Software Architecture with C# 9 and .NET 5 by Gabriel Baptista and Francesco Abbruzzese – a fully revised and expanded guide featuring the latest features of .NET 5 and C# 9.

This book covers the most common design patterns and frameworks involved in modern cloud-based and distributed software architectures. It discusses when and how to use each pattern by providing practical, real-world scenarios. It also presents techniques and processes such as DevOps, microservices, Kubernetes, continuous integration, and cloud computing, so that you can develop and deliver a best-in-class software solution for your customers.

This book will help you to understand the product that your customer wants from you. It will guide you to deliver it and to solve the biggest problems you may face during development. It also covers the do's and don'ts that you need to follow when you manage your application in a cloud-based environment. You will learn about different architectural approaches, such as layered architectures, service-oriented architecture, microservices, Single Page Applications, and cloud architecture, and understand how to apply them to specific business requirements.

Finally, you will deploy code in remote environments or on the cloud using Azure. All the concepts in this book are explained with the help of real-world practical use cases where design principles make the difference when creating safe and robust applications. By the end of the book, you will be able to develop and deliver highly scalable and secure enterprise-ready applications that meet the end customers' business needs.

It is worth mentioning that Software Architecture with C# 9 and .NET 5, Second Edition not only covers the best practices that a software architect should follow for developing C# and .NET Core solutions, but also discusses all the environments that we need to master in order to develop a software product according to the latest trends.

This second edition improves on the code and adapts it to the new opportunities offered by C# 9 and .NET 5. New frameworks and technologies such as gRPC and Blazor have been added, and Kubernetes is described in more detail in a dedicated chapter.

To get the most out of this book, treat it as a guide that you may want to revisit many times for different circumstances. Make sure you have Visual Studio Community 2019 or higher installed, and be sure that you understand C# and .NET principles.


Setting up GlassFish for JMS and Working with Message Queues

Packt
30 Jul 2010
4 min read
Setting up GlassFish for JMS

Before we start writing code to take advantage of the JMS API, we need to configure some GlassFish resources. Specifically, we need to set up a JMS connection factory, a message queue, and a message topic.

Setting up a JMS connection factory

The easiest way to set up a JMS connection factory is via GlassFish's web console. The web console can be accessed by starting our domain with the following command:

    asadmin start-domain domain1

Then point the browser to http://localhost:4848 and log in. A connection factory can be added by expanding the Resources node in the tree at the left-hand side of the web console, expanding the JMS Resources node and clicking on the Connection Factories node, then clicking on the New... button in the main area of the web console.

For our purposes, we can take most of the defaults. The only thing we need to do is enter a Pool Name and pick a Resource Type for our connection factory. It is always a good idea to use a Pool Name starting with "jms/" when picking a name for JMS resources; this way, JMS resources can be easily identified when browsing a JNDI tree. In the text field labeled Pool Name, enter jms/GlassFishBookConnectionFactory. Our code examples later in this article will use this JNDI name to obtain a reference to this connection factory.

The Resource Type drop-down menu has three options:

- javax.jms.TopicConnectionFactory: used to create a connection factory that creates JMS topics for JMS clients using the pub/sub messaging domain
- javax.jms.QueueConnectionFactory: used to create a connection factory that creates JMS queues for JMS clients using the PTP messaging domain
- javax.jms.ConnectionFactory: used to create a connection factory that creates either JMS topics or JMS queues

For our example, we will select javax.jms.ConnectionFactory. This way, we can use the same connection factory for all our examples, both those using the PTP messaging domain and those using the pub/sub messaging domain. After entering the Pool Name for our connection factory, selecting a connection factory type, and optionally entering a description, we must click on the OK button for the changes to take effect. We should then see our newly created connection factory listed in the main area of the GlassFish web console.

Setting up a JMS message queue

A JMS message queue can be added by expanding the Resources node in the tree at the left-hand side of the web console, expanding the JMS Resources node and clicking on the Destination Resources node, then clicking on the New... button in the main area of the web console. In our example, the JNDI name of the message queue is jms/GlassFishBookQueue. The resource type for message queues must be javax.jms.Queue. Additionally, a Physical Destination Name must be entered; in this example, we use GlassFishBookQueue as the value for this field. After clicking on the New... button, entering the appropriate information for our message queue, and clicking on the OK button, we should see the newly created queue.

Setting up a JMS message topic

Setting up a JMS message topic in GlassFish is very similar to setting up a message queue. In the GlassFish web console, expand the Resources node in the tree at the left-hand side, then expand the JMS Resources node, click on the Destination Resources node, and click on the New... button in the main area of the web console. Our examples will use a JNDI Name of jms/GlassFishBookTopic. As this is a message topic, Resource Type must be javax.jms.Topic. The Description field is optional. The Physical Destination Name property is required; for our example, we will use GlassFishBookTopic as the value for this property. After clicking on the OK button, we can see our newly created message topic.

Now that we have set up a connection factory, a message queue, and a message topic, we are ready to start writing code using the JMS API.
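As a first taste of that code, here is a minimal sketch of a message producer that uses the JNDI names configured above. It assumes a Java EE container or application client that can inject the resources; the class name and message text are illustrative, not from the article:

```java
import javax.annotation.Resource;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class MessageSender {

    // Injected by the container using the JNDI names we configured above
    @Resource(mappedName = "jms/GlassFishBookConnectionFactory")
    private static ConnectionFactory connectionFactory;

    @Resource(mappedName = "jms/GlassFishBookQueue")
    private static Queue queue;

    public void produceMessage() throws JMSException {
        Connection connection = connectionFactory.createConnection();
        // Non-transacted session with automatic message acknowledgement
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(queue);

        TextMessage message = session.createTextMessage("Hello, queue!");
        producer.send(message);

        session.close();
        connection.close();
    }
}
```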


PostGIS extension: pgRouting for calculating driving distance [Tutorial]

Pravin Dhandre
19 Jul 2018
5 min read
pgRouting is an extension of the PostGIS/PostgreSQL geospatial database that adds routing and other network analysis functionality. In this tutorial we will learn to work with the pgRouting tool to estimate the driving distance to all nearby nodes, which can be very useful in supply chain, logistics, and transportation applications. This tutorial is an excerpt from PostGIS Cookbook - Second Edition, written by Mayra Zurbaran, Pedro Wightman, Paolo Corti, Stephen Mather, Thomas Kraft and Bborie Park.

Driving distance is useful when user sheds are needed that give realistic driving-distance estimates, for example, all customers within five miles' driving, biking, or walking distance. These estimates can be contrasted with buffering techniques, which assume no barrier to travelling and are useful for revealing the underlying structures of our transportation networks relative to individual locations.

Driving distance (pgr_drivingDistance) is a query that calculates all nodes within the specified driving distance of a starting node. This is an optional function compiled with pgRouting; so if you compile pgRouting yourself, make sure that you enable it and include the CGAL library, an optional dependency for pgr_drivingDistance.

We will start by loading a test dataset. You can get some really basic sample data from https://docs.pgrouting.org/latest/en/sampledata.html. In the following example, we will look at all users within a distance of three units from our starting point—that is, a proposed bike shop at node 2:

    SELECT * FROM pgr_drivingDistance(
      'SELECT id, source, target, cost FROM chp06.edge_table',
      2, 3
    );

As usual, we just get a list from the pgr_drivingDistance table that, in this case, comprises sequence, node, edge cost, and aggregate cost. pgRouting, like PostGIS, gives us low-level functionality; we need to reconstruct the geometries we need from that low-level functionality. We can use the node ID to extract the geometries of all of our nodes by executing the following script:

    WITH DD AS (
      SELECT * FROM pgr_drivingDistance(
        'SELECT id, source, target, cost FROM chp06.edge_table',
        2, 3
      )
    )
    SELECT ST_AsText(the_geom)
    FROM chp06.edge_table_vertices_pgr w, DD d
    WHERE w.id = d.node;

But the output of this query is just a cluster of points. Normally, when we think of driving distance, we visualize a polygon. Fortunately, we have the pgr_alphaShape function that provides that functionality. This function expects id, x, and y values for input, so we will first change our previous query to convert to x and y from the geometries in edge_table_vertices_pgr:

    WITH DD AS (
      SELECT * FROM pgr_drivingDistance(
        'SELECT id, source, target, cost FROM chp06.edge_table',
        2, 3
      )
    )
    SELECT id::integer,
           ST_X(the_geom)::float AS x,
           ST_Y(the_geom)::float AS y
    FROM chp06.edge_table_vertices_pgr w, DD d
    WHERE w.id = d.node;

Now we can wrap the preceding script up in the alphashape function:

    WITH alphashape AS (
      SELECT pgr_alphaShape('
        WITH DD AS (
          SELECT * FROM pgr_drivingDistance(
            ''SELECT id, source, target, cost FROM chp06.edge_table'',
            2, 3
          )
        ),
        dd_points AS (
          SELECT id::integer,
                 ST_X(the_geom)::float AS x,
                 ST_Y(the_geom)::float AS y
          FROM chp06.edge_table_vertices_pgr w, DD d
          WHERE w.id = d.node
        )
        SELECT * FROM dd_points
      ')
    ),

So first, we will get our cluster of points. As we did earlier, we will explicitly convert the text to geometric points:

    alphapoints AS (
      SELECT ST_MakePoint((pgr_alphashape).x, (pgr_alphashape).y)
      FROM alphashape
    ),

Now that we have points, we can create a line by connecting them:

    alphaline AS (
      SELECT ST_Makeline(ST_MakePoint) FROM alphapoints
    )
    SELECT ST_MakePolygon(ST_AddPoint(ST_Makeline, ST_StartPoint(ST_Makeline)))
    FROM alphaline;

Finally, we construct the line as a polygon using ST_MakePolygon. This requires adding the start point with ST_AddPoint and ST_StartPoint in order to properly close the polygon. The complete code is as follows:

    WITH alphashape AS (
      SELECT pgr_alphaShape('
        WITH DD AS (
          SELECT * FROM pgr_drivingDistance(
            ''SELECT id, source, target, cost FROM chp06.edge_table'',
            2, 3
          )
        ),
        dd_points AS (
          SELECT id::integer,
                 ST_X(the_geom)::float AS x,
                 ST_Y(the_geom)::float AS y
          FROM chp06.edge_table_vertices_pgr w, DD d
          WHERE w.id = d.node
        )
        SELECT * FROM dd_points
      ')
    ),
    alphapoints AS (
      SELECT ST_MakePoint((pgr_alphashape).x, (pgr_alphashape).y)
      FROM alphashape
    ),
    alphaline AS (
      SELECT ST_Makeline(ST_MakePoint) FROM alphapoints
    )
    SELECT ST_MakePolygon(
      ST_AddPoint(ST_Makeline, ST_StartPoint(ST_Makeline))
    )
    FROM alphaline;

Our first driving distance calculation can be better understood in the context of the network diagram, where we can reach nodes 9, 11, and 13 from node 2 with a driving distance of 3. With this, you can calculate the most optimistic distance route across different nodes in your transportation network.

Want to explore more with PostGIS? Check out PostGIS Cookbook - Second Edition to get access to a complete range of PostGIS techniques and related extensions for better analytics on your spatial information.

- Top 7 libraries for geospatial analysis
- Using R to implement Kriging - A Spatial Interpolation technique for Geostatistics data
- Learning R for Geospatial Analysis


Inheritance in Python

Packt
30 Dec 2010
8 min read
Python 3 Object Oriented Programming: harness the power of Python 3 objects.

- Learn how to do Object Oriented Programming in Python using this step-by-step tutorial
- Design public interfaces using abstraction, encapsulation, and information hiding
- Turn your designs into working software by studying the Python syntax
- Raise, handle, define, and manipulate exceptions using special error objects
- Implement Object Oriented Programming in Python using practical examples

Basic inheritance

Technically, every class we create uses inheritance. All Python classes are subclasses of the special class named object. This class provides very little in terms of data and behaviors (those behaviors it does provide are all double-underscore methods intended for internal use only), but it does allow Python to treat all objects in the same way.

If we don't explicitly inherit from a different class, our classes will automatically inherit from object. However, we can openly state that our class derives from object using the following syntax:

    class MySubClass(object):
        pass

This is inheritance! Python 3 automatically inherits from object if we don't explicitly provide a different superclass. A superclass, or parent class, is a class that is being inherited from. A subclass is a class that is inheriting from a superclass. In this case, the superclass is object, and MySubClass is the subclass. A subclass is also said to be derived from its parent class, or the subclass is said to extend the parent.

As you've probably figured out from the example, inheritance requires a minimal amount of extra syntax over a basic class definition. Simply include the name of the parent class inside a pair of parentheses after the class name, but before the colon terminating the class definition. This is all we have to do to tell Python that the new class should be derived from the given superclass.

How do we apply inheritance in practice? The simplest and most obvious use of inheritance is to add functionality to an existing class. Let's start with a simple contact manager that tracks the name and e-mail address of several people. The Contact class is responsible for maintaining a list of all contacts in a class variable, and for initializing the name and address:

    class Contact:
        all_contacts = []

        def __init__(self, name, email):
            self.name = name
            self.email = email
            Contact.all_contacts.append(self)

This example introduces us to class variables. The all_contacts list, because it is part of the class definition, is shared by all instances of this class. This means that there is only one Contact.all_contacts list, and if we call self.all_contacts on any one object, it will refer to that single list. The code in the initializer ensures that whenever we create a new contact, the list will automatically have the new object added. Be careful with this syntax: if you ever set the variable using self.all_contacts, you will actually be creating a new instance variable on the object; the class variable will still be unchanged and accessible as Contact.all_contacts.

This is a very simple class that allows us to track a couple of pieces of data about our contacts. But what if some of our contacts are also suppliers that we need to order supplies from? We could add an order method to the Contact class, but that would allow people to accidentally order things from contacts who are customers or family friends. Instead, let's create a new Supplier class that acts like a Contact, but has an additional order method:

    class Supplier(Contact):
        def order(self, order):
            print("If this were a real system we would send "
                  "{} order to {}".format(order, self.name))

Now, if we test this class in our trusty interpreter, we see that all contacts, including suppliers, accept a name and e-mail address in their __init__, but only suppliers have a functional order method:

    >>> c = Contact("Some Body", "somebody@example.net")
    >>> s = Supplier("Sup Plier", "supplier@example.net")
    >>> print(c.name, c.email, s.name, s.email)
    Some Body somebody@example.net Sup Plier supplier@example.net
    >>> c.all_contacts
    [<__main__.Contact object at 0xb7375ecc>,
     <__main__.Supplier object at 0xb7375f8c>]
    >>> c.order("I need pliers")
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: 'Contact' object has no attribute 'order'
    >>> s.order("I need pliers")
    If this were a real system we would send I need pliers order to Sup Plier
    >>>

So now our Supplier class can do everything a Contact can do (including adding itself to the list of all_contacts), plus all the special things it needs to handle as a supplier. This is the beauty of inheritance.

Extending built-ins

One of the most interesting uses of this kind of inheritance is adding functionality to built-in classes. In the Contact class seen earlier, we are adding contacts to a list of all contacts. What if we also wanted to search that list by name? Well, we could add a method on the Contact class to search it, but it feels like this method actually belongs on the list itself. We can do this using inheritance:

    class ContactList(list):
        def search(self, name):
            '''Return all contacts that contain the search
            value in their name.'''
            matching_contacts = []
            for contact in self:
                if name in contact.name:
                    matching_contacts.append(contact)
            return matching_contacts

    class Contact:
        all_contacts = ContactList()

        def __init__(self, name, email):
            self.name = name
            self.email = email
            self.all_contacts.append(self)

Instead of instantiating a normal list as our class variable, we create a new ContactList class that extends the built-in list. Then we instantiate this subclass as our all_contacts list. We can test the new search functionality as follows:

    >>> c1 = Contact("John A", "johna@example.net")
    >>> c2 = Contact("John B", "johnb@example.net")
    >>> c3 = Contact("Jenna C", "jennac@example.net")
    >>> [c.name for c in Contact.all_contacts.search('John')]
    ['John A', 'John B']
    >>>

Are you wondering how we changed the built-in syntax [] into something we can inherit from? Creating an empty list with [] is actually shorthand for creating an empty list using list(); the two syntaxes behave identically:

    >>> [] == list()
    True

So, the list data type is like a class that we can extend, not unlike object. As a second example, we can extend the dict class, the long way of creating a dictionary (the shorthand being the {} syntax):

    class LongNameDict(dict):
        def longest_key(self):
            longest = None
            for key in self:
                if not longest or len(key) > len(longest):
                    longest = key
            return longest

This is easy to test in the interactive interpreter:

    >>> longkeys = LongNameDict()
    >>> longkeys['hello'] = 1
    >>> longkeys['longest yet'] = 5
    >>> longkeys['hello2'] = 'world'
    >>> longkeys.longest_key()
    'longest yet'

Most built-in types can be similarly extended. Commonly extended built-ins are object, list, set, dict, file, and str. Numerical types such as int and float are also occasionally inherited from.

Overriding and super

So inheritance is great for adding new behavior to existing classes, but what about changing behavior? Our Contact class allows only a name and an e-mail address. This may be sufficient for most contacts, but what if we want to add a phone number for our close friends? We can do this easily by just setting a phone attribute on the contact after it is constructed. But if we want to make this third variable available on initialization, we have to override __init__. Overriding is altering or replacing a method of the superclass with a new method (with the same name) in the subclass. No special syntax is needed to do this; the subclass's newly created method is automatically called instead of the superclass's method. For example:

    class Friend(Contact):
        def __init__(self, name, email, phone):
            self.name = name
            self.email = email
            self.phone = phone

Any method can be overridden, not just __init__. Before we go on, however, we need to correct some problems in this example. Our Contact and Friend classes have duplicate code to set up the name and email properties; this can make maintenance complicated, as we have to update the code in two or more places. More alarmingly, our Friend class is neglecting to add itself to the all_contacts list we have created on the Contact class.

What we really need is a way to call code on the parent class. This is what the super function does; it returns the object as an instance of the parent class, allowing us to call the parent method directly:

    class Friend(Contact):
        def __init__(self, name, email, phone):
            super().__init__(name, email)
            self.phone = phone

This example first gets the instance of the parent object using super, and calls __init__ on that object, passing in the expected arguments. It then does its own initialization, namely setting the phone attribute.

A super() call can be made inside any method, not just __init__. This means all methods can be modified via overriding and calls to super. The call to super can also be made at any point in the method; we don't have to make it the first line. For example, we may need to manipulate the incoming parameters before forwarding them to the superclass, as the short sketch below shows.
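Here is a minimal sketch of that idea. It assumes the Contact class from earlier; NormalizedFriend and the e-mail normalization are illustrative, not from the book:

```python
class Contact:
    all_contacts = []

    def __init__(self, name, email):
        self.name = name
        self.email = email
        Contact.all_contacts.append(self)


class NormalizedFriend(Contact):
    """Hypothetical subclass: cleans up the e-mail before delegating."""

    def __init__(self, name, email, phone):
        email = email.strip().lower()   # manipulate the parameter first...
        super().__init__(name, email)   # ...then forward it to the superclass
        self.phone = phone


f = NormalizedFriend("Jenna C", "  Jenna.C@Example.NET ", "555-1234")
print(f.email)   # jenna.c@example.net
```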


What is domain driven design?

Packt Editorial Staff
03 Apr 2018
18 min read
Domain driven design exists because all software exists for a purpose. It does something. For example, you can't provide a software solution for a financial system such as online stock trading if you don't understand the stock exchanges and their functioning. Having domain knowledge is essential to solving problems with software. Domain driven design is simply designing software with the specific domain - whether that's finance, medicine, law, or eCommerce - in mind. This has been taken from Mastering Microservices with Java 9 - Second Edition.

Central to Domain Driven Design is the concept of a model. A model is an abstraction, or a blueprint, of the domain.

Domain driven design is a collaborative activity

Designing this model is not rocket science, but it does take a lot of effort, refining, and input from domain experts. It is the collective job of software designers, domain experts, and developers. They organize information, divide it into smaller parts, group them logically, and create modules. Each module can be taken up individually, and can be divided using a similar approach. This process can be followed until we reach the unit level, or until we cannot divide it any further. A complex project may have many such iterations; similarly, a simple project could have just a single iteration.

Once a model is defined and well documented, it can move onto the next stage - code design. So, here we have a software design: a domain model, a code design, and a code implementation of the domain model. The domain model provides a high-level view of the architecture of a solution (software/application), and the code implementation gives the domain model a life, as a working model.

Domain Driven Design makes design and development work together. It provides the ability to develop software continuously, while keeping the design up to date based on feedback received from the development. It solves one of the limitations of Agile and Waterfall methodologies, making software maintainable, including design and code, as well as keeping the application minimum viable. It gives developers the right platform to understand the domain, and provides the opportunity to share early feedback on the domain model implementation. It removes the bottleneck that appears in later stages when stakeholders wait for deliverables.

The fundamental components of Domain Driven Design

To understand domain driven design, you can break it down into three fundamental concepts:

- Ubiquitous language and Unified Modeling Language (UML)
- Multilayered architecture
- Artifacts (components)

Ubiquitous language

Ubiquitous language is a common language used to communicate within a project. Because designing a model is a collaborative effort of software designers, domain experts, and developers, it requires a common language to communicate with. It removes misunderstandings and misinterpretations. Communication gaps so often lead to bad software - ubiquitous language minimizes these gaps. It does, however, need to be used everywhere on a project.

Unified Modeling Language (UML) is widely used and very popular when creating models. It also has a few limitations; for example, when you have thousands of classes drawn on paper, it's difficult to represent class relationships and simultaneously understand their abstraction while taking meaning from it. Also, UML diagrams do not represent the concepts of a model and what objects are supposed to do. Therefore, UML should always be used with other documents, code, or any other references for effective communication.

Multilayered architecture

Multilayered architecture is a common solution for Domain Driven Design. It contains four layers:

- Presentation layer (UI)
- Application layer - responsible for application logic. It maintains and coordinates the overall flow of the product/service. It does not contain business logic or UI. It may hold the state of application objects, like tasks in progress.
- Domain layer - contains the domain information and business logic. It holds the state of the business objects.
- Infrastructure layer - provides support to all the other layers and is responsible for communication between them.

To understand the interaction of the different layers, take the example of table booking at a restaurant. The end user places a request for a table booking using the UI. The UI passes the request to the application layer. The application layer fetches the domain objects, such as the restaurant, the table, a date, and so on, from the domain layer. The domain layer fetches these existing persisted objects from the infrastructure, and invokes relevant methods to make the booking and persist them back to the infrastructure layer. Once domain objects are persisted, the application layer shows the booking confirmation to the end user.

Artifacts used in Domain Driven Design

There are seven different artifacts used in Domain Driven Design to express, create, and retrieve domain models:

- Entities
- Value objects
- Services
- Aggregates
- Repository
- Factory
- Module

Entities

Entities are certain types of objects that are identifiable and remain the same throughout the states of the product/service. These objects are not identified by their attributes, but by their identity and thread of continuity. These types of objects are known as entities.

It sounds pretty simple, but it carries complexity. You need to understand how we can define the entities. Let's take the example of a table booking system, where we have a restaurant class with attributes such as restaurant name, address, phone number, establishment date, and so on. We can take two instances of the restaurant class that are not identifiable using the restaurant name, as there could be other restaurants with the same name. Similarly, if we go by any other single attribute, we will not find any attribute that can singularly identify a unique restaurant. If two restaurants have all the same attribute values, they are therefore the same and are interchangeable with each other. Still, they are not the same entities, as both have different references (memory addresses).

Conversely, let's take a class of U.S. citizens. Every U.S. citizen has his or her own social security number. This number is not only unique, but remains unchanged throughout the life of the citizen and assures continuity. This citizen object would exist in the memory, would be serialized, and would be removed from the memory and stored in the database. It even exists after the person is deceased. It will be kept in the system for as long as the system exists. A citizen's social security number remains the same irrespective of its representation.

Therefore, creating entities in a product means creating an identity. So, to give an identity to any restaurant in the previous example, either use a combination of attributes such as restaurant name, establishment date, and street, or add an identifier such as restaurant_id to identify it, as in the sketch below.
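A minimal sketch of such an entity, written here in Java to match the book's stack; the class shape and field names are illustrative, not taken from the book. Equality and hashing are based solely on the identifier, never on the mutable attributes:

```java
import java.util.Objects;

// Hypothetical Restaurant entity: identity comes from restaurantId,
// not from attributes such as name or address, which may change.
public class Restaurant {
    private final String restaurantId; // the identity, assigned once
    private String name;               // attributes may change over time
    private String address;

    public Restaurant(String restaurantId, String name, String address) {
        this.restaurantId = restaurantId;
        this.name = name;
        this.address = address;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Restaurant)) return false;
        // Two Restaurant objects are the same entity only if their
        // identifiers match, even when all other attributes differ.
        return restaurantId.equals(((Restaurant) o).restaurantId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(restaurantId);
    }
}
```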
The basic rule is that two identifiers cannot be the same. Therefore, when we introduce an identifier for an entity, we need to be sure of it. There are different ways to create a unique identity for objects, described as follows:

- Using the primary key in a table.
- Using an automatically generated ID from a domain module. A domain program generates the identifier and assigns it to objects that are being persisted among different layers.
- A few real-life objects carry user-defined identifiers themselves. For example, each country has its own country codes for dialing ISD calls.
- Using a composite key. This is a combination of attributes that can also be used for creating an identifier, as explained for the preceding restaurant object.

Value objects

Value objects (VOs) simplify the design. In contrast to entities, value objects have only attributes and no conceptual identity. A best practice is to keep value objects as immutable objects. If possible, you should even keep entity objects immutable too.

You might want to keep all objects as entities, but you're likely to run into problems if you do this; there has to be one instance for each object. Let's say you are creating customers as entity objects. Each customer object would represent one restaurant guest, and could not be reused for booking orders for other guests. This may create millions of customer entity objects in the memory if millions of customers are using the system. Not only do millions of uniquely identifiable objects exist in the system, but each object is being tracked. Tracking, like creating an identity, is complex. A highly credible system is required to create and track these objects, which is not only very complex, but also resource heavy, and it may result in system performance degradation. Therefore, it is important to use value objects instead of entities where possible. The reasons are explained in the next few paragraphs.

Applications don't always need trackable and identifiable customer objects. There are cases when you just need to have some or all attributes of the domain element. These are the cases when value objects can be used by the application. It makes things simple and improves the performance.

Value objects can easily be created and destroyed, owing to the absence of identity. This simplifies the design: it makes value objects available for garbage collection if no other object has referenced them.

Value objects should be designed and coded as immutable. Once they are created, they should never be modified during their life cycle. If you need a different value of the VO, or of any of its objects, then simply create a new value object; don't modify the original one. Here, immutability carries all the significance from object-oriented programming (OOP). A value object can be shared and used without impacting its integrity if, and only if, it is immutable.

Services

While creating the domain model, you may come across situations where some behavior is not related to any one object. These behaviors can be accommodated in service objects. Service objects are part of the domain layer and do not have any internal state. The sole purpose of service objects is to provide behavior to the domain that does not belong to a single entity or value object.

Ubiquitous language helps you to identify different objects, identities, or value objects with different attributes and behaviors during the process of domain driven design and domain modelling. During the course of creating the domain model, you may find different behaviors or methods that do not belong to any specific object. Such behaviors are important, and so cannot be neglected. Neither can you add them to entities or value objects; it would spoil the object to add behavior that does not belong to it. Keep in mind that such behavior may impact various objects. Object-oriented programming makes it possible to attach this behavior to a dedicated object; this is known as a service.

Services are common in technical frameworks, and they are also used in domain layers in domain driven design. A service object does not have any internal state; its only purpose is to provide a behavior to the domain. Service objects provide behaviors that cannot be related to specific entities or value objects. Service objects may provide one or more related behaviors to one or more entities or value objects. It is good practice to define the services explicitly in the domain model.

While creating the services, you need to tick all of the following points:

- Service objects' behavior performs on entities and value objects, but it does not belong to entities or value objects
- Service objects' behavior state is not maintained; hence, they are stateless
- Services are part of the domain model

Services may also exist in other layers. It is very important to keep domain-layer services isolated. It removes the complexities and keeps the design decoupled.

Let's take an example where a restaurant owner wants to see a report of his monthly table bookings. In this case, he will log in as an admin and click the Display Report button after providing the required input fields, such as duration. The application layer passes the request to the domain layer that owns the report and template objects, with some parameters such as the report ID. Reports get created using the template, and data is fetched from either the database or other sources. Then the application layer passes all the parameters, including the report ID, to the business layer. Here, a template needs to be fetched from the database or another source to generate the report based on the ID. This operation does not belong to either the report object or the template object. Therefore, a service object is used to perform this operation of retrieving the required template from the database.

Aggregates

The aggregate domain pattern is related to the object's life cycle. It defines ownership and boundaries, which is crucial in Domain Driven Design.

When you reserve a table at your favorite restaurant online using an application, you don't need to worry about the internal system and process that takes place to book your reservation, including searching for available restaurants, then for available tables on the given date, time, and so on. Therefore, you can say that a reservation application is an aggregate of several other objects, and works as a root for all the other objects for a table reservation system. This root should be an entity that binds collections of objects together; it is also called the aggregate root. This root object does not pass any reference of inside objects to the external world, and protects the changes performed within the internal objects.

We need to understand why aggregates are required. A domain model can contain large numbers of domain objects. The bigger the application's functionality and size, and the more complex its design, the greater the number of objects present. Relationships exist between these objects. Some may have a many-to-many relationship, a few may have a one-to-many relationship, and others may have a one-to-one relationship. These relationships are enforced by the model implementation in the code, or in the database, which ensures that the relationships among the objects are kept intact. Relationships are not just unidirectional; they can also be bidirectional, and they can also increase in complexity.

The designer's job is to simplify these relationships in the model. Some relationships may exist in the real domain, but may not be required in the domain model; designers need to ensure that such relationships do not exist in the domain model. Similarly, multiplicity can be reduced by constraints: one constraint may do the job where many objects satisfy the relationship. It is also possible that a bidirectional relationship could be converted into a unidirectional relationship.

No matter how much simplification you put in, you may still end up with relationships in the model. These relationships need to be maintained in the code. When one object is removed, the code should remove all the references to this object from other places. For example, a record removal from one table needs to be addressed wherever it has references in the form of foreign keys and such, to keep the data consistent and maintain its integrity. Also, invariants (rules) need to be enforced and maintained whenever data changes.

Relationships, constraints, and invariants bring a complexity that requires efficient handling in code. We find the solution by using the aggregate, represented by a single entity known as the root, which is associated with the group of objects that maintains consistency with regard to data changes.

This root is the only object that is accessible from outside, so the root element works as a boundary gate that separates the internal objects from the external world. Roots can refer to one or more inside objects, and these inside objects can have references to other inside objects that may or may not have relationships with the root. However, outside objects can only refer to the root, and not to any inside objects.

An aggregate ensures data integrity and enforces the invariants. Outside objects cannot make any change to inside objects; they can only change the root. However, they can use the root to make a change inside the object by calling exposed operations. The root should pass the values of inside objects to outside objects if required.

If an aggregate object is stored in the database, then queries should only return the aggregate object. Traversal associations should be used to return any object that is internally linked to the aggregate root. These internal objects may also have references to other aggregates. An aggregate root entity holds its global identity, and holds local identities inside its entities.

A simple example of an aggregate in the table booking system is the customer. Customers can be exposed to external objects, and their root object contains their internal object address and contact information. When requested, the value object of internal objects, such as the address, can be passed to external objects.

Repository

In a domain model, at a given point in time, many domain objects may exist. Each object may have its own life cycle, from creation to removal or persistence. Whenever any domain operation needs a domain object, it should retrieve the reference of the requested object efficiently. It would be very difficult if you didn't maintain all of the available domain objects in a central object. A central object carries the references of all the objects, and is responsible for returning the requested object reference. This central object is known as the repository.

The repository is the point that interacts with infrastructure such as the database or file system. A repository object is the part of the domain model that interacts with storage, such as the database or external sources, to retrieve the persisted objects. When the repository receives a request for an object's reference, it returns the existing object's reference. If the requested object does not exist in the repository, then it retrieves the object from storage. For example, if you need a customer, you would query the repository object to provide the customer with ID 31. The repository would provide the requested customer object if it is already available in the repository; if not, it would query the persisted stores, such as the database, fetch it, and provide its reference.

The main advantage of using the repository is having a consistent way to retrieve objects, where the requestor does not need to interact directly with the storage, such as the database. A repository may query objects from various storage types, such as one or more databases, filesystems, or factory repositories. In such cases, a repository may have strategies that point to different sources for different object types. As shown in the repository object flow diagram from the book, the repository interacts with the infrastructure layer, and this interface is part of the domain layer. The requestor may belong to a domain layer or an application layer. The repository helps the system to manage the life cycle of domain objects.

Factory

A factory is required when a simple constructor is not enough to create an object. It helps to create complex objects, or an aggregate that involves the creation of other related objects. A factory is also a part of the life cycle of domain objects, as it is responsible for creating them. Factories and repositories are in some way related to each other, as both refer to domain objects: the factory refers to newly created objects, whereas the repository returns already existing objects, either from the memory or from external storage.

Let's see how control flows, using a user creation process as an example. Let's say that a user signs up with the username user1. This user creation first interacts with the factory, which creates the name user1 and then caches it in the domain using the repository, which also stores it in the storage for persistence. When the same user logs in again, the call moves to the repository for a reference. This uses the storage to load the reference and pass it to the requestor. The requestor may then use this user1 object to book a table in a specified restaurant at a specified time. These values are passed as parameters, and a table booking record is created in storage using the repository. The factory may use one of the object-oriented programming patterns, such as the factory or abstract factory pattern, for object creation.

Modules

Modules are the best way to separate related business objects. They are best suited to large projects where the number of domain objects is greater. For the end user, it makes sense to divide the domain model into modules and to set the relationships between these modules. Once you understand the modules and their relationships, you start to see the bigger picture of the domain model, and it's thus easier to drill down further and understand the model.

Modules also help you to write code that is highly cohesive and maintains low coupling. Ubiquitous language can be used to name these modules. For the table booking system, we could have different modules, such as user-management, restaurants and tables, analytics and reports, and reviews.

This introduction to domain driven design should give you a strong foundation for using it when you build software. Its principles are useful - in particular, making sure you collaborate and use the same language as different stakeholders is one of domain driven design's most valuable contributions to the way we approach software development.


Python Multimedia: Enhancing Images

Packt
20 Jan 2011
5 min read
Adjusting brightness and contrast

One often needs to tweak the brightness and contrast level of an image. For example, you may have a photograph that was taken with a basic camera when there was insufficient light. How would you correct that digitally? The brightness adjustment helps make the image brighter or darker, whereas the contrast adjustment emphasizes differences between the color and brightness levels within the image data. An image can be made lighter or darker using the ImageEnhance module in PIL. The same module provides a class that can auto-contrast an image.

Time for action – adjusting brightness and contrast

Let's learn how to modify the image brightness and contrast. First, we will write code to adjust brightness. The ImageEnhance module makes our job easier by providing the Brightness class. Download the image 0165_3_12_Before_BRIGHTENING.png and rename it to Before_BRIGHTENING.png. Use the following code:

    1 import Image
    2 import ImageEnhance
    3
    4 brightness = 3.0
    5 peak = Image.open("C:\images\Before_BRIGHTENING.png")
    6 enhancer = ImageEnhance.Brightness(peak)
    7 bright = enhancer.enhance(brightness)
    8 bright.save("C:\images\BRIGHTENED.png")
    9 bright.show()

On line 6 in the code snippet, we created an instance of the Brightness class. It takes an Image instance as an argument. Line 7 creates a new image, bright, using the specified brightness value. A value between 0.0 and 1.0 gives a darker image, whereas a value greater than 1.0 makes it brighter. A value of 1.0 keeps the brightness of the image unchanged. The original and resultant images are compared in the accompanying illustration (before and after brightening).

Let's move on and adjust the contrast of the brightened image. We will append the following lines of code to the snippet that brightened the image:

    10 contrast = 1.3
    11 enhancer = ImageEnhance.Contrast(bright)
    12 con = enhancer.enhance(contrast)
    13 con.save("C:\images\CONTRAST.png")
    14 con.show()

Thus, similar to what we did to brighten the image, the image contrast was tweaked by using the ImageEnhance.Contrast class. A contrast value of 0.0 creates a black image, while a value of 1.0 keeps the current contrast. The resultant image is compared with the original in the accompanying illustration (original versus increased contrast).

In the preceding code snippet, we were required to specify a contrast value. If you prefer PIL to decide an appropriate contrast level, there is a way to do this. The ImageOps.autocontrast functionality sets an appropriate contrast level; this function normalizes the image contrast. Let's use this functionality now:

    import ImageOps
    bright = Image.open("C:\images\BRIGHTENED.png")
    con = ImageOps.autocontrast(bright, cutoff=0)
    con.show()

The call to ImageOps.autocontrast is where the contrast is automatically set. The autocontrast function computes a histogram of the input image. The cutoff argument represents the percentage of lightest and darkest pixels to be trimmed from this histogram. The image is then remapped.

What just happened?

Using the classes and functionality in the ImageEnhance module, we learned how to increase or decrease the brightness and the contrast of an image. We also wrote code to auto-contrast an image using functionality provided in the ImageOps module.

Tweaking colors

Another useful operation performed on an image is adjusting the colors within it. An image may contain one or more bands of image data. The image mode contains information about the depth and type of the image pixel data. The most common modes we will use are RGB (true color, 3x8-bit pixel data), RGBA (true color with transparency mask, 4x8-bit), and L (black and white, 8-bit).

In PIL, you can easily get information about the band data within an image. To get the names and number of bands, the getbands() method of the Image class can be used. Here, img is an instance of the Image class:

    >>> img.getbands()
    ('R', 'G', 'B', 'A')

Time for action – swap colors within an image!

To understand some basic concepts, let's write code that just swaps the image band data. Download the image 0165_3_15_COLOR_TWEAK.png and rename it to COLOR_TWEAK.png. Type the following code:

    1 import Image
    2
    3 img = Image.open("C:\images\COLOR_TWEAK.png")
    4 img = img.convert('RGBA')
    5 r, g, b, alpha = img.split()
    6 img = Image.merge("RGBA", (g, r, b, alpha))
    7 img.show()

Let's analyze this code now. On line 3, the Image instance is created as usual. Then, we change the mode of the image to RGBA. Here we should check whether the image already has that mode, or whether this conversion is possible; you can add that check as an exercise! Next, the call to Image.split() creates separate instances of the Image class, each containing a single band's data. Thus, we have four Image instances—r, g, b, and alpha—corresponding to the red, green, and blue bands, and the alpha channel, respectively.

The code on line 6 does the main image processing. Image.merge takes the mode as the first argument, whereas the second argument is a tuple of Image instances containing the band information. All bands must have the same size. As you can notice, we have swapped the order of the band data in Image instances r and g while specifying the second argument. The original and resultant images are compared in the accompanying illustration: the color of the flower now has a shade of green, and the grass behind the flower is rendered with a shade of red. Please download and refer to the supplementary PDF file Chapter 3 Supplementary Material.pdf, where the color images are provided to help you see the difference (original on the left, color-swapped image on the right).

What just happened?

We accomplished creating an image with its band data swapped. We learned how to use PIL's Image.split() and Image.merge() to achieve this. However, this operation was performed on the whole image. In the next section, we will learn how to apply color changes to a specific color region.
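The snippets above use the original PIL module layout. As a minimal sketch of the same brightness/contrast workflow in the modern Pillow fork, which exposes these modules under the PIL package, assuming an input file named input.png (the file names are illustrative):

```python
# A minimal sketch using the Pillow fork of PIL; modern Pillow imports
# live under the PIL package instead of the top-level Image/ImageEnhance
# modules used above.
from PIL import Image, ImageEnhance, ImageOps

img = Image.open("input.png")
brighter = ImageEnhance.Brightness(img).enhance(1.5)    # > 1.0 brightens
contrasted = ImageEnhance.Contrast(brighter).enhance(1.3)
auto = ImageOps.autocontrast(contrasted, cutoff=0)      # normalize contrast
auto.save("output.png")
```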

Dependency Management in SBT

Packt
07 Oct 2013
17 min read
In the early days of Java, when projects were small and didn't have many external dependencies, developers tended to manage dependencies manually, by copying the required JAR files into the lib folder and checking them into their SCM/VCS along with their code. This is still followed by a lot of developers, even today. But due to the aforementioned issues, this is not an option for larger projects. In many enterprises, there are central servers, FTP servers, shared drives, and so on, which store the approved libraries for use, as well as internally released libraries. But managing and tracking them manually is never easy, and teams end up relying on scripts and build files.

Maven came along and standardized this process. Maven defines standards for the project format to define its dependencies, formats for repositories to store libraries, an automated process to fetch transitive dependencies, and much more. Most systems today either back onto Maven's dependency management system or onto Ivy's, which can function in the same way and also provides its own standards, heavily inspired by Maven. SBT uses Ivy in the backend for dependency management, but uses a custom DSL to specify the dependencies.

Quick introduction to Maven or Ivy dependency management

Apache Maven is not a dependency management tool; it is a project management and comprehension tool. Maven is configured using a Project Object Model (POM), which is represented in an XML file. A POM has all the details related to the project, right from basic ones such as groupId, artifactId, and version, to environment settings such as prerequisites and repositories.

Apache Ivy is a dependency management tool and a subproject of Apache Ant. Ivy integrates publicly available artifact repositories automatically. The project dependencies are declared using XML in a file called ivy.xml, commonly known as the Ivy file. Ivy is configured using a settings file. The settings file (ivysettings.xml) defines a set of dependency resolvers. Each resolver points to an Ivy file and/or artifacts. So, the configuration essentially indicates which resource should be used to resolve a module.

How Ivy works

Ivy modules cycle between different locations through three tasks—resolve, retrieve, and publish—which are explained in detail in the following sections.

Resolve

Resolve is the phase where Ivy resolves the dependencies of a module by accessing the Ivy file defined for that module. For each dependency in the Ivy file, Ivy finds the module using the configuration. A module could be an Ivy file or an artifact. Once a module is found, its Ivy file is downloaded to the Ivy cache. Then, Ivy checks the dependencies of that module. If the module has dependencies on other modules, Ivy recursively traverses the graph of dependencies, handling conflicts simultaneously. After traversing the whole graph, Ivy downloads all the dependencies that are not already in the cache and have not been evicted by conflict management. Ivy uses a filesystem-based cache to avoid loading dependencies already available in the cache. In the end, an XML report of the dependencies of the module is generated in the cache.

Retrieve

Retrieve is the act of copying artifacts from the cache to another directory structure. The destination for the files to be copied is specified using a pattern. Before copying, Ivy checks that the files are not already copied, to maximize performance. After dependencies have been copied, the build becomes independent of Ivy.

Publish

Ivy can then be used to publish the module to a repository. This can be done by manually running a task or from a continuous integration server.

Dependency management in SBT

In SBT, library dependencies can be managed in the following two ways:

- By specifying the libraries in the build definition
- By manually adding the JAR files of the library

Manual addition of JAR files may seem simple at the beginning of a project. But as the project grows, it may depend on a lot of other projects, or the projects it depends on may have newer versions. These situations make handling dependencies manually a cumbersome task. Hence, most developers prefer to automate dependency management.

Automatic dependency management

SBT uses Apache Ivy to handle automatic dependency management. When dependencies are configured in this manner, SBT handles the retrieval and update of the dependencies. An update does not happen every time there is a change, since that would slow down all the processes; to update the dependencies, you need to execute the update task. Other tasks depend on the output generated through the update, so whenever dependencies are modified, an update should be run for these changes to be reflected.

Project dependencies can be specified in the following ways:

- Declarations within the build definition
- Maven dependency files, that is, POM files
- Configuration and settings files used for Ivy
- Adding JAR files manually

Declaring dependencies in the build definition

The setting key libraryDependencies is used to configure the dependencies of a project. The following are some of the possible syntaxes for libraryDependencies:

    libraryDependencies += groupID % artifactID % revision
    libraryDependencies += groupID %% artifactID % revision
    libraryDependencies += groupID % artifactID % revision % configuration
    libraryDependencies ++= Seq(
      groupID %% artifactID % revision,
      groupID %% otherID % otherRevision
    )

Let's explain the terms in more detail:

- groupID: the ID of the organization/group that published the artifact
- artifactID: the name of the project on which there is a dependency
- revision: the Ivy revision of the project on which there is a dependency
- configuration: the Ivy configuration for which we want to specify the dependency

Notice that the first and second syntaxes are not the same. The second one has a %% symbol after groupID. This tells SBT to append the project's Scala version to artifactID. So, in a project with Scala version 2.9.1,

    libraryDependencies ++= Seq("mysql" %% "mysql-connector-java" % "5.1.18")

is equivalent to

    libraryDependencies ++= Seq("mysql" % "mysql-connector-java_2.9.1" % "5.1.18")

The %% symbol is very helpful for cross-building a project. Cross-building is the process of building a project for multiple Scala versions; SBT uses the crossScalaVersions key's value to configure dependencies for multiple versions of Scala. Cross-building is possible only for Scala version 2.8.0 or higher. The %% symbol simply appends the current Scala version, so it should not be used when you know that there is no dependency published for a given Scala version, even though an artifact built against an older version is compatible. In such cases, you have to hardcode the version using the first syntax.

Using the third syntax, we can add a dependency only for a specific configuration.
This is very useful, as some dependencies are not required by all configurations. For example, the dependency on a testing library is needed only for the test configuration. We could declare this as follows:

libraryDependencies ++= Seq("org.specs2" % "specs2_2.9.1" % "1.12.3" % "test")

We could also specify a dependency for the provided scope (where the JDK or container provides the dependency at runtime). This scope is only available on the compilation and test classpaths, and is not transitive. Generally, servlet-api dependencies are declared in this scope:

libraryDependencies += "javax.servlet" % "javax.servlet-api" % "3.0.1" % "provided"

The revision does not have to be a single fixed version; it can be set with constraints, and Ivy will select the version that matches best. For example, it could be latest.integration, 12.0 or higher, or even a range of versions.

A URL for the dependency JAR

If the dependency is not published to a repository, you can also specify a direct URL to the JAR file:

libraryDependencies += groupID %% artifactID % revision from directURL

directURL is used only if the dependency cannot be found in the specified repositories and is not included in published metadata. For example:

libraryDependencies += "slinky" % "slinky" % "2.1" from "http://slinky2.googlecode.com/svn/artifacts/2.1/slinky.jar"

Extra attributes

SBT also supports Ivy's extra attributes. To specify extra attributes, one could use the extra method. Consider that the project has a dependency on the following Ivy module:

<ivy-module version="2.0">
  <info organization="packt" module="introduction"
        e:media="screen" status="integration" e:codeWord="PP1872"/>
</ivy-module>

A dependency on this can be declared by using the following:

libraryDependencies += "packt" % "introduction" % "latest.integration" extra("media" -> "screen", "codeWord" -> "PP1872")

The extra method can also be used to specify extra attributes for the current project, so that when it is published to the repository its Ivy file will also have extra attributes. An example of this is as follows:

projectID <<= projectID { id => id extra("codeWord" -> "PP1952") }

Classifiers

Classifiers ensure that the dependency being loaded is compatible with the platform for which the project is written. For example, to fetch the dependency variant built for JDK 1.5, use the following:

libraryDependencies += "org.testng" % "testng" % "5.7" classifier "jdk15"

We could also have multiple classifiers, as follows:

libraryDependencies += "org.lwjgl.lwjgl" % "lwjgl-platform" % lwjglVersion classifier "natives-windows" classifier "natives-linux" classifier "natives-osx"

Transitivity

In logic and mathematics, a relation between elements is said to be transitive if, whenever it holds between a first and a second element and between that second and a third element, it also holds between the first and the third element. Relating this to the dependencies of a project, imagine that you have a project that depends on the project Foo for some of its functionality, and Foo in turn depends on another project, Bar. If a change in the project Bar can affect your project's functionality, then your project indirectly depends on Bar; that is, your project has a transitive dependency on the project Bar. If, on the other hand, a change in the project Bar cannot affect your project's functionality, then your project does not depend on the project Bar.
SBT cannot know whether your project actually has such a transitive dependency or not, so to avoid dependency issues it loads library dependencies transitively by default. In situations where this is not required for your project, you can disable it using intransitive() or notTransitive(). A common case where a dependency's own artifact dependencies are not required is in projects using the Felix OSGI framework (only its main JAR is required). The dependency can be declared as follows:

libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" intransitive()

Or, it can be declared as follows:

libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" notTransitive()

If we need to exclude certain transitive dependencies of a dependency, we could use the exclude or excludeAll method:

libraryDependencies += "log4j" % "log4j" % "1.2.15" exclude("javax.jms", "jms")

libraryDependencies += "log4j" % "log4j" % "1.2.15" excludeAll(
  ExclusionRule(organization = "com.sun.jdmk"),
  ExclusionRule(organization = "com.sun.jmx"),
  ExclusionRule(organization = "javax.jms")
)

Although excludeAll provides more flexibility, it should not be used in projects that will be published in the Maven style, as it cannot be represented in a pom.xml file. The exclude method, which requires both the organization ID and the module name to exclude, is the more useful one in projects that require a pom.xml when being published.

Download documentation

Generally, an IDE plugin is used to download the source and API documentation JAR files. However, one can configure SBT to download the documentation without using an IDE plugin. To download the dependency's sources, add withSources() to the dependency definition. For example:

libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" withSources()

To download API documentation JAR files, add withJavadoc() to the dependency definition. For example:

libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" withSources() withJavadoc()

Documentation downloaded in this way is not transitive; to fetch sources and documentation for transitive dependencies as well, you must use the update-classifiers task.

Dependencies using Maven files

SBT can be configured to use a Maven POM file to handle the project dependencies by using the externalPom method. The following statements can be used in the build definition:

- externalPom(): This will set pom.xml in the project's base directory as the source for project dependencies
- externalPom(baseDirectory{base => base/"myProjectPom"}): This will set the custom-named POM file myProjectPom.xml in the project's base directory as the source for project dependencies

There are a few restrictions on using a POM file, as follows:

- It can be used only for configuring dependencies.
- The repositories mentioned in the POM will not be considered. They need to be specified explicitly in the build definition or in an Ivy settings file.
- There is no support for relativePath in the parent element of the POM, and its existence will result in an error.

Dependencies using Ivy files or Ivy XML

Both Ivy settings and dependencies can be used to configure project dependencies in SBT through the build definition. They can either be loaded from a file or given inline in the build definition.
The Ivy XML can be declared as follows:

ivyXML :=
  <dependencies>
    <dependency org="org.specs2" name="specs2" rev="1.12.3"></dependency>
  </dependencies>

The commands to load from a file are as follows:

- externalIvySettings(): This will set ivysettings.xml in the project's base directory as the source for dependency settings.
- externalIvySettings(baseDirectory{base => base/"myIvySettings"}): This will set the custom-named settings file myIvySettings.xml in the project's base directory as the source for dependency settings.
- externalIvySettingsURL(url("settingsURL")): This will set the settings file at settingsURL as the source for dependency settings.
- externalIvyFile(): This will set ivy.xml in the project's base directory as the source for dependencies.
- externalIvyFile(baseDirectory(_/"myIvy")): This will set the custom-named file myIvy.xml in the project's base directory as the source for project dependencies.

When using Ivy settings and configuration files, the configurations need to be mapped, because Ivy files specify their own configurations. So, classpathConfiguration must be set for the three main configurations. For example:

classpathConfiguration in Compile := Compile
classpathConfiguration in Test := Test
classpathConfiguration in Runtime := Runtime

Adding JAR files manually

To handle dependencies manually in SBT, you need to create a lib folder in the project and add the JAR files to it. That is the default location where SBT looks for unmanaged dependencies. If you have the JAR files located in some other folder, you can specify that in the build definition. The key used to specify the source for manually added JAR files is unmanagedBase. For example, if the JAR files your project depends on are in project/extras/dependencies instead of project/lib, modify the value of unmanagedBase as follows:

unmanagedBase <<= baseDirectory { base => base/"extras/dependencies" }

Here, baseDirectory is the project's root directory. unmanagedJars is a task which lists the JAR files from the unmanagedBase directory. To see the list of JAR files in the interactive shell, type the following:

> show unmanaged-jars
[info] ArrayBuffer()

Or, in the project folder, type:

$ sbt show unmanaged-jars

If you add a Spring JAR (org.springframework.aop-3.0.1.jar) to the dependencies folder, then the result of the previous command would be:

> show unmanaged-jars
[info] ArrayBuffer(Attributed(/home/introduction/extras/dependencies/org.springframework.aop-3.0.1.jar))

It is also possible to specify the path(s) of JAR files for different configurations using unmanagedJars. In the build definition, the unmanagedJars task may need to be replaced when the JARs are in multiple directories and in other complex cases:

unmanagedJars in Compile += file("/home/downloads/org.springframework.aop-3.0.1.jar")

Resolvers

Resolvers are alternate resources provided for the projects on which there is a dependency. If the specified project's JAR is not found in the default repositories, these are tried. The default repositories used by SBT are Maven2 and the local Ivy repository. The simplest ways of adding a repository are as follows:

- resolvers += name at location. For example:

resolvers += "releases" at "http://oss.sonatype.org/content/repositories/releases"

- resolvers ++= Seq(name1 at location1, name2 at location2).
For example:

resolvers ++= Seq("snapshots" at "http://oss.sonatype.org/content/repositories/snapshots",
  "releases" at "http://oss.sonatype.org/content/repositories/releases")

- resolvers := Seq(name1 at location1, name2 at location2). For example:

resolvers := Seq("sgodbillon" at "https://bitbucket.org/sgodbillon/repository/raw/master/snapshots/",
  "Typesafe backup repo" at "http://repo.typesafe.com/typesafe/repo/",
  "Maven repo1" at "http://repo1.maven.org/")

You can also add your own local Maven repository as a resource using the following syntax:

resolvers += "Local Maven Repository" at "file://" + Path.userHome.absolutePath + "/.m2/repository"

An Ivy repository of type filesystem, URL, SSH, or SFTP can also be added as a resource using sbt.Resolver. Note that sbt.Resolver is a class with factories for interfaces to Ivy repositories that require a hostname, port, and patterns. Let's see how to use the Resolver class.

For filesystem repositories, the following line defines an atomic filesystem repository in the test directory of the current working directory:

resolvers += Resolver.file("my-test-repo", file("test")) transactional()

For URL repositories, the following line defines a URL repository at http://example.org/repo-releases/:

resolvers += Resolver.url("my-test-repo", url("http://example.org/repo-releases/"))

The following line defines an Ivy-style repository at http://joscha.github.com/play-easymail/repo/releases/:

resolvers += Resolver.url("my-test-repo", url("http://joscha.github.com/play-easymail/repo/releases/"))(Resolver.ivyStylePatterns)

For SFTP repositories, the following line defines a repository that is served by SFTP from the host example.org:

resolvers += Resolver.sftp("my-sftp-repo", "example.org")

The following line defines a repository that is served by SFTP from the host example.org at port 22:

resolvers += Resolver.sftp("my-sftp-repo", "example.org", 22)

The following line defines a repository that is served by SFTP from the host example.org with maven2/repo-releases/ as the base path:

resolvers += Resolver.sftp("my-sftp-repo", "example.org", "maven2/repo-releases/")

For SSH repositories, the following line defines an SSH repository with user-password authentication:

resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user", "password")

The following line defines an SSH repository where the given user will be prompted for the password to complete the download:

resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user")

The following line defines an SSH repository using key authentication:

resolvers += {
  val keyFile: File = ...
  Resolver.ssh("my-ssh-repo", "example.org") as("user", keyFile, "keyFilePassword")
}

The next line defines an SSH repository using key authentication where no keyFile password needs to be prompted for before the download:

resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user", keyFile)

The following line defines an SSH repository with permissions, given as a mode specification similar to chmod:

resolvers += Resolver.ssh("my-ssh-repo", "example.org") withPermissions("0644")

SFTP authentication can be handled in the same way as shown for SSH in the previous examples.

Ivy patterns can also be given to the factory methods. Each factory method takes a Patterns instance which defines the patterns to be used. The default pattern passed to the factory methods gives the Maven-style layout. To use a different layout, provide a Patterns object describing it.
The following are some examples that specify custom repository layouts using patterns:

resolvers += Resolver.url("my-test-repo", url)(Patterns("[organisation]/[module]/[revision]/[artifact].[ext]"))

You can specify multiple patterns, or separate patterns for the metadata and the artifacts. For filesystem and URL repositories, you can specify absolute patterns by omitting the base URL, passing an empty Patterns instance, and using the ivys and artifacts methods:

resolvers += Resolver.url("my-test-repo") artifacts "http://example.org/[organisation]/[module]/[revision]/[artifact].[ext]"

When you do not need the default repositories, you must override externalResolvers, which is the combination of resolvers and the default repositories. To use the local Ivy repository without the Maven repository, define externalResolvers as follows:

externalResolvers <<= resolvers map { rs =>
  Resolver.withDefaultResolvers(rs, mavenCentral = false)
}

Summary

In this article, we have seen how dependency management tools such as Maven and Ivy work and how SBT handles project dependencies. We also covered the different options that SBT provides to handle your project dependencies, and how to configure resolvers for the modules on which your project depends.

Resources for Article:

Further resources on this subject:
- So, what is Play? [Article]
- Play! Framework 2 – Dealing with Content [Article]
- Integrating Scala, Groovy, and Flex Development with Apache Maven [Article]

Configuring JDBC in Oracle JDeveloper

Packt
21 Oct 2009
14 min read
Introduction

Unlike the Eclipse IDE, which requires a plugin, JDeveloper has a built-in provision to establish a JDBC connection with a database. JDeveloper is the only Java IDE with an embedded application server, the Oracle Containers for J2EE (OC4J), so a database-backed web application may run in JDeveloper without requiring a third-party application server. However, JDeveloper also supports third-party application servers: starting with JDeveloper 11, application developers may point the IDE to an application server instance (or OC4J instance), including third-party application servers, that they want to use for testing during development.

JDeveloper provides connection pooling for the efficient use of database connections. A database connection may be used in an ADF BC application, or in a JavaEE application. A database connection in JDeveloper may be configured in the Connections Navigator. A Connections Navigator connection is available as a DataSource registered with a JNDI naming service. The database connection in JDeveloper is a reusable named connection that developers configure once and then use in as many of their projects as they want. Depending on the nature of the project and the database connection, the connection is configured in the bc4j.xcfg file or as a JavaEE data source.

Here, it is necessary to distinguish between data source and DataSource. A data source is a source of data; for example, an RDBMS database is a data source. A DataSource is an interface that represents a factory for JDBC Connection objects. JDeveloper uses the term Data Source or data source to refer to a factory for connections. We will also use the term Data Source or data source to refer to a factory for connections, which in the javax.sql package is represented by the DataSource interface. A DataSource object may be created from a data source registered with the JNDI (Java Naming and Directory) naming service using a JNDI lookup, and a JDBC Connection object may then be obtained from the DataSource object using the getConnection method. As an alternative to configuring a connection in the Connections Navigator, a data source may also be specified directly in the data source configuration file, data-sources.xml.

In this article we will discuss the procedure to configure a JDBC connection and a JDBC data source in the JDeveloper 10g IDE. We will use the MySQL 5.0 database server and the MySQL Connector/J 5.1 JDBC driver, which supports the JDBC 4.0 specification. In this article you will learn the following:

- Creating a database connection in the JDeveloper Connections Navigator
- Configuring the Data Source and Connection Pool associated with the connection configured in the Connections Navigator
- The common JDBC connection errors

Before we create a JDBC connection and a data source, we will discuss connection pooling and DataSource.

Connection Pooling and DataSource

The javax.sql package provides the API for server-side database access. The main interfaces in the javax.sql package are DataSource, ConnectionPoolDataSource, and PooledConnection. The DataSource interface represents a factory for connections to a database. DataSource is the preferred method of obtaining a JDBC connection. An object that implements the DataSource interface is typically registered with a Java Naming and Directory API-based naming service. The DataSource interface implementation is driver-vendor specific.
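To make this concrete before we look at the implementation types, here is a minimal sketch of server-side code that obtains a connection through a DataSource bound in JNDI and releases it when done. The JNDI name jdbc/OracleDS matches the example used below; the CatalogDao class name is our own, and the Catalog table is the one created later in this article:

// Minimal sketch: obtain a connection from a container-managed DataSource.
// Assumes a data source is bound under "jdbc/OracleDS" in the server's JNDI tree.
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CatalogDao {
    public void listJournals() throws NamingException, SQLException {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/OracleDS");
        Connection conn = ds.getConnection(); // handle to a pooled connection
        try {
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT Journal FROM Catalog");
            while (rs.next()) {
                System.out.println(rs.getString("Journal"));
            }
            rs.close();
            stmt.close();
        } finally {
            conn.close(); // returns the underlying PooledConnection to the pool
        }
    }
}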
The DataSource interface has three types of implementations:

- Basic implementation: In a basic implementation there is a 1:1 correspondence between a client's Connection object and the connection with the database. This implies that for every Connection object there is a connection with the database, so the overhead of opening, initiating, and closing a connection is incurred for each client session.
- Connection pooling implementation: A pool of Connection objects is available, from which connections are assigned to the different client sessions. A connection pooling manager implements the connection pooling. When a client session does not require a connection, the connection is returned to the connection pool and becomes available to other clients. Thus, the overheads of opening, initiating, and closing connections are reduced.
- Distributed transaction implementation: A distributed transaction implementation produces a Connection object that is mostly used for distributed transactions and is always connection pooled. A transaction manager implements the distributed transactions.

An advantage of using a data source is that code accessing a data source does not have to be modified when an application is migrated to a different application server; only the data source properties need to be modified. A JDBC driver that is accessed through a DataSource does not register itself with a DriverManager. A DataSource object is created using a JNDI lookup, and subsequently a Connection object is created from the DataSource object. For example, if a data source's JNDI name is jdbc/OracleDS, a DataSource object may be created using a JNDI lookup: first, create an InitialContext object, then create a DataSource object using the InitialContext lookup method (note the cast, since lookup returns Object), and finally create a Connection object from the DataSource object using the getConnection() method:

InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("jdbc/OracleDS");
Connection conn = ds.getConnection();

The JNDI naming service, which we used to create a DataSource object, is provided by J2EE application servers such as the Oracle Application Server Containers for J2EE (OC4J) embedded in the JDeveloper IDE.

A connection in a pool of connections is represented by the PooledConnection interface, not the Connection interface. The connection pool manager, typically the application server, maintains a pool of PooledConnection objects. When an application requests a connection using the DataSource.getConnection() method, as we did in the jdbc/OracleDS example, the connection pool manager returns a Connection object, which is actually a handle to an object that implements the PooledConnection interface.

A ConnectionPoolDataSource object, which is typically registered with a JNDI naming service, represents a collection of PooledConnection objects. The JDBC driver provides an implementation of ConnectionPoolDataSource, which is used by the application server to build and manage a connection pool. When an application requests a connection, if a suitable PooledConnection object is available in the connection pool, the connection pool manager returns a handle to the PooledConnection object as a Connection object. If a suitable PooledConnection object is not available, the connection pool manager invokes the getPooledConnection() method of the ConnectionPoolDataSource to create a new PooledConnection object.
For example, if connectionPoolDataSource is a ConnectionPoolDataSource object, a new PooledConnection gets created as follows:

PooledConnection pooledConnection = connectionPoolDataSource.getPooledConnection();

The application does not have to invoke the getPooledConnection() method itself, though; the connection pool manager invokes getPooledConnection(), and the JDBC driver implementing the ConnectionPoolDataSource creates a new PooledConnection and returns a handle to it. The connection pool manager then returns a Connection object, which is a handle to a PooledConnection object, to the application requesting a connection. When an application closes a Connection object using the close() method, as follows, the connection does not actually get closed:

conn.close();

Instead, the connection handle is deactivated when an application closes a Connection object with the close() method; the connection pool manager does the deactivation. When an application closes a Connection object with the close() method, any client info properties that were set using the setClientInfo method are cleared.

The connection pool manager registers itself with a PooledConnection object using the addConnectionEventListener() method. When a connection is closed, the connection pool manager is notified, deactivates the handle to the PooledConnection object, and returns the PooledConnection object to the connection pool to be used by another application. The connection pool manager is also notified if a connection has an error. A PooledConnection object is not closed until the connection pool is being reinitialized, the server is shut down, or a connection becomes unusable.

In addition to connections being pooled, PreparedStatement objects are also pooled by default if the database supports statement pooling. Whether a database supports statement pooling can be discovered using the supportsStatementPooling() method of the DatabaseMetaData interface. PreparedStatement pooling is also managed by the connection pool manager. To be notified of PreparedStatement events, such as a PreparedStatement getting closed or becoming unusable, a connection pool manager registers itself with a PooledConnection object using the addStatementEventListener() method, and deregisters using the removeStatementEventListener() method. The addStatementEventListener and removeStatementEventListener methods are new in the PooledConnection interface in JDBC 4.0.

Pooling of Statement objects is another new feature in JDBC 4.0. The Statement interface has two new methods in JDBC 4.0 for statement pooling: isPoolable() and setPoolable(). The isPoolable method checks if a Statement object is poolable, and the setPoolable method sets the Statement object to poolable. When an application closes a PreparedStatement object using the close() method, the PreparedStatement object is not actually closed; it is returned to the pool of PreparedStatements. When the connection pool manager closes a PooledConnection object by invoking the close() method of PooledConnection, all the associated statements also get closed. Pooling of PreparedStatements provides significant optimization, but if a large number of statements are left open, it may not be an optimal use of resources.

Thus, the following procedure is followed to obtain a connection in an application server using a data source:

1. Create a data source with a JNDI name binding to the JNDI naming service.
2. Create an InitialContext object and look up the JNDI name of the data source using the lookup method to create a DataSource object. If the JDBC driver implements the DataSource as a connection pool, a connection pool becomes available.
3. Request a connection from the connection pool. The connection pool manager checks whether a suitable PooledConnection object is available. If one is available, the connection pool manager returns a handle to it as a Connection object to the application requesting a connection.
4. If a suitable PooledConnection object is not available, the connection pool manager invokes the getPooledConnection() method of the ConnectionPoolDataSource, which is implemented by the JDBC driver. The JDBC driver creates a PooledConnection object and returns a handle to it.
5. The connection pool manager returns a handle to the PooledConnection object as a Connection object to the application requesting a connection.
6. When the application closes the connection, the connection pool manager deactivates the handle to the PooledConnection object and returns the PooledConnection object to the connection pool.

ConnectionPoolDataSource provides some configuration properties to configure a connection pool. The pool configuration properties are not set by the JDBC client, but are implemented or augmented by the connection pool. The properties can be set in a data source configuration; therefore, it is not for the application itself to change the settings, but for the administrator of the pool, who sometimes also happens to be the developer. The connection pool properties supported by ConnectionPoolDataSource, all of type int, are as follows:

- maxStatements: The maximum number of statements the pool should keep open. 0 (zero) indicates that statement caching is not enabled.
- initialPoolSize: The number of connections the pool should have at the time of creation.
- minPoolSize: The minimum number of connections in the pool. 0 (zero) indicates that connections are created as required.
- maxPoolSize: The maximum number of connections in the connection pool. 0 (zero) indicates that there is no maximum limit.
- maxIdleTime: The maximum duration (in seconds) a connection can be kept open without being used before it is closed. 0 (zero) indicates that there is no limit.
- propertyCycle: The interval, in seconds, the pool should wait before enforcing the current policy defined by the connection pool properties.

Setting the Environment

Before getting started, we have to install the JDeveloper 10.1.3 IDE and the MySQL 5.0 database. Download JDeveloper from http://www.oracle.com/technology/software/products/jdev/index.html. Download MySQL Connector/J 5.1, the MySQL JDBC driver that supports the JDBC 4.0 specification. To install JDeveloper, extract the JDeveloper ZIP file to a directory. Log in to the MySQL database and set the database to test. Create a database table, Catalog, which we will use in a web application.
The SQL script to create the database table is listed below:

CREATE TABLE Catalog(CatalogId VARCHAR(25) PRIMARY KEY, Journal VARCHAR(25), Publisher VARCHAR(25), Edition VARCHAR(25), Title VARCHAR(45), Author VARCHAR(25));
INSERT INTO Catalog VALUES('catalog1', 'Oracle Magazine', 'Oracle Publishing', 'Nov-Dec 2004', 'Database Resource Manager', 'Kimberly Floss');
INSERT INTO Catalog VALUES('catalog2', 'Oracle Magazine', 'Oracle Publishing', 'Nov-Dec 2004', 'From ADF UIX to JSF', 'Jonas Jacobi');

MySQL does not support ROWID, for which support has been added in JDBC 4.0.

Having installed the JDeveloper IDE, next we will configure a JDBC connection in the Connections Navigator:

1. Select the Connections tab and right-click on the Database node to select New Database Connection. Click on Next in the Create Database Connection Wizard.
2. In the Create Database Connection Type window, specify a Connection Name (MySQLConnection, for example) and set Connection Type to Third Party JDBC Driver, because we will be using the MySQL database, which is a third-party database for Oracle JDeveloper, and click on Next. (If a connection is to be configured with an Oracle database, select Oracle (JDBC) as the Connection Type instead.)
3. In the Authentication window, specify Username as root (a password is not required for the root user by default), and click on Next.
4. In the Connection window, we will specify the connection parameters, such as the driver name and connection URL. Click on New to specify a Driver Class. In the Register JDBC Driver window, specify Driver Class as com.mysql.jdbc.Driver and click on Browse to select a library for the driver class.
5. In the Select Library window, click on New to create a new library for the MySQL Connector/J 5.1 JAR file. In the Create Library window, specify Library Name as MySQL and click on Add Entry to add a JAR file entry for the MySQL library. In the Select Path Entry window, select mysql-connector-java-5.1.3-rc-bin.jar and click on Select.
6. In the Create Library window, after a Class Path entry gets added to the MySQL library, click on OK. In the Select Library window, select the MySQL library and click on OK. In the Register JDBC Driver window, the MySQL library now appears in the Library field and mysql-connector-java-5.1.3-rc-bin.jar in the Classpath field. Click on OK.
7. The Driver Class, Library, and Classpath fields are now filled in the Connection window. Specify URL as jdbc:mysql://localhost:3306/test, and click on Next.
8. In the Test window, click on Test Connection to test the connection that we have configured. A connection is established and a success message is output in the Status text area. Click on Finish in the Test window.

A connection configuration, MySQLConnection, gets added to the Connections Navigator, and the connection parameters are displayed in the structure view. To modify any of the connection settings, double-click on the connection node; the Edit Database Connection window gets displayed, in which the connection Username, Password, Driver Class, and URL may be modified.

A database connection configured in the Connections Navigator has a JNDI name binding in the JNDI naming service provided by OC4J. Using the JNDI name binding, a DataSource object may be created in a J2EE application. To view or modify the configuration settings of the JDBC connection, select Tools | Embedded OC4J Server Preferences in JDeveloper.
In the window displayed, select the Global | Data Sources node and, to update the data-sources.xml file with the connection defined in the Connections Navigator, click on the Refresh Now button. Checkboxes may be selected to create data-source elements where not defined, and to update existing data-source elements. The connection pool and data source associated with the connection configured in the Connections Navigator get listed. Select the jdev-connection-pool-MySQLConnection node to list the connection pool properties as Property Set A and Property Set B. The tuning properties of the JDBC connection pool, such as those discussed in the table earlier, may be set in the Connection Pool window.
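For reference, a data source entry in data-sources.xml has roughly the following shape. This sketch is illustrative only: the element names follow the OC4J 10.1.3 managed data source format, but the exact attribute set and the JNDI name JDeveloper generates may differ from what is shown here:

<data-sources>
  <!-- Illustrative sketch; attribute values are assumptions. -->
  <connection-pool name="jdev-connection-pool-MySQLConnection">
    <connection-factory factory-class="com.mysql.jdbc.Driver"
                        user="root" password=""
                        url="jdbc:mysql://localhost:3306/test"/>
  </connection-pool>
  <managed-data-source name="MySQLConnectionDS"
                       connection-pool-name="jdev-connection-pool-MySQLConnection"
                       jndi-name="jdbc/MySQLConnectionDS"/>
</data-sources>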

Creating a Lazarus Component

Packt
22 Apr 2013
14 min read
Creating a new component package

We are going to create a custom logging component and add it to the Misc tab of the component palette. To do this, we first need to create a new package and add our component to that package, along with any other required resources, such as an icon for the component. To create a new package, do the following:

1. Select Package from the main menu.
2. Select New Package... from the submenu.
3. In the Save dialog, create a new directory called MyComponents and select it.
4. Enter MyComponents as the filename and press the Save button.

Now you have a new package that is ready to have components added to it. Follow these steps:

1. On the Package dialog window, click on the add (+) button.
2. Select the New Component tab.
3. Select TComponent as Ancestor Type.
4. Set New class name to TMessageLog.
5. Set Palette Page to Misc.
6. Leave all the other settings as they are.

You should now have something similar to the screenshot in the original article. If so, click on the Create New Component button. You should see messagelog.pas listed under the Files node in the Package dialog window. Let's open this file and see what the auto-generated code contains. Double-click on the file or choose Open File from the More menu in the Package dialog.

Do not name your component the same as the package. This will cause you problems when you compile the package later. If you were to do this, the .pas file would be overwritten, because the compile procedure creates a .pas file for the package automatically.

The code in the Source Editor window is given as follows:

unit TMessageLog;

{$mode objfpc}{$H+}

interface

uses
  Classes, SysUtils, LResources, Forms, Controls, Graphics, Dialogs, StdCtrls;

type
  TMessageLog = class(TComponent)
  private
    { Private declarations }
  protected
    { Protected declarations }
  public
    { Public declarations }
  published
    { Published declarations }
  end;

procedure Register;

implementation

procedure Register;
begin
  RegisterComponents('Misc', [TMessageLog]);
end;

end.

What should stand out in the auto-generated code is the global procedure RegisterComponents. RegisterComponents is contained in the Classes unit. The procedure registers the component (or components, if you create more than one in the unit) to the component page that is passed to it as the first parameter of the procedure.

Since everything is in order, we can now compile the package and install the component:

1. Click the Compile button on the toolbar.
2. Once the compile procedure has completed, select Install, which is located in the menu under the Use button.
3. You will be presented with a dialog telling you that Lazarus needs to be rebuilt. Click on the Yes button.

The Lazarus rebuilding process will take some time. When it is complete, Lazarus will need to be restarted; if this does not happen automatically, restart Lazarus yourself. On restarting Lazarus, select the Misc tab on the component palette. You should see the new component as the last component on the tab.

You have now successfully created and installed a new component, and you can create a new application and add this component to a Lazarus form. The component in its current state does not perform any action, so let us now look at adding properties and events to the component that will be accessible in the Object Inspector window at design time.
Adding properties

Properties of a component that you would like to be visible in the Object Inspector window must be declared as published. Properties are attributes that determine an object's status and behavior. A property is a name that is mapped to read and write methods or that accesses data directly; this means that when you read or write a property, you are accessing a field or calling a method of the object.

For example, let us add a FileName property to TMessageLog, which is the name of the file that messages will be written to. The actual field of the object that will store this data will be named fFileName.

To the TMessageLog private declaration section, add:

fFileName: String;

To the TMessageLog published declaration section, add:

property FileName: String read fFileName write fFileName;

With these changes, when the package is compiled and installed, the property FileName will be visible in the Object Inspector window when a TMessageLog component is added to a form in a project. You can do this now if you would like to verify it.

Adding events

Any interaction that a user has with a component, such as clicking it, generates an event. Events are also generated by the system in response to a method call or to a change in a component's property, or even in a different component's property; for example, setting the focus on one component causes the component currently in focus to lose it, which triggers an event.

Event handlers are methods of the form containing the component; this technique is referred to as delegation. You will notice that when you double-click on a component's event in the Object Inspector, it creates a new procedure of the form. Events are properties, and methods are assigned to event properties just as values are assigned to normal properties, as we just saw. Because events are properties and use delegation, multiple events can share the same event handler.

The simplest way to create an event is to define a property of the type TNotifyEvent. For example, if we want to add an OnChange event to TMessageLog, we could add the following code:

...
private
  FOnChange: TNotifyEvent;
...
public
  property OnChange: TNotifyEvent read FOnChange write FOnChange;
...
end;

When you double-click on the OnChange event in the Object Inspector, the following method stub would be created in the form containing the TMessageLog component:

procedure TForm.MessageLogChange(Sender: TObject);
begin
end;

Some events, such as OnChange or OnFocus, are fired on a change of value of a component's property or on the firing of another event. Traditionally, in such cases, a method with the prefix Do and the suffix of the event name is called. So, in the case of our OnChange event, it would be called from a DoChange method (as called by some other method). Let us assume that, when a filename is set for the TMessageLog component, the procedure SetFileName is called, and that it calls DoChange. The code would look as follows:

procedure SetFileName(Name: String);
begin
  fFileName := Name;
  // fire the event
  DoChange;
end;

procedure DoChange;
begin
  if Assigned(FOnChange) then
    FOnChange(Self);
end;

The DoChange procedure checks whether anything has been assigned to the FOnChange field; if it is assigned, it executes what is assigned to it. What this means is that if you double-click on the OnChange event in the Object Inspector, the method name you enter is assigned to FOnChange, and that is the method called by DoChange.
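Putting the fragments above together, a consolidated sketch of the class could look as follows. Note one wiring change relative to the earlier fragments, which is our own choice rather than the article's verbatim listing: the published FileName property now writes through SetFileName instead of directly to fFileName, so that assigning the property fires OnChange:

type
  TMessageLog = class(TComponent)
  private
    fFileName: String;
    FOnChange: TNotifyEvent;
    procedure SetFileName(Name: String);
    procedure DoChange;
  published
    // writing the property goes through the setter, which fires the event
    property FileName: String read fFileName write SetFileName;
    property OnChange: TNotifyEvent read FOnChange write FOnChange;
  end;

implementation

procedure TMessageLog.SetFileName(Name: String);
begin
  fFileName := Name;
  DoChange; // fire the event
end;

procedure TMessageLog.DoChange;
begin
  if Assigned(FOnChange) then
    FOnChange(Self); // call whatever handler the form assigned
end;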
Events with more parameters

You probably noticed that the OnChange event has only one parameter, Sender, of type TObject. Most of the time this is adequate, but there may be times when we want to pass other parameters into an event. In those cases, TNotifyEvent is not an adequate type, and we will need to define a new type. The new type will need to be a method pointer type, which is similar to a procedural type but has the keyword "of object" at the end of the declaration.

In the case of TMessageLog, we may need to perform some action before or after a message is written to the file. To do this, we will declare two method pointer types, TBeforeWriteMsgEvent and TAfterWriteMsgEvent, both of which will be triggered in another method, named WriteMessage. The modified code will look as follows:

type
  TBeforeWriteMsgEvent = procedure(var Msg: String; var OKToWrite: Boolean) of Object;
  TAfterWriteMsgEvent = procedure(Msg: String) of Object;

  TMessageLog = class(TComponent)
  ...
  public
    function WriteMessage(Msg: String): Boolean;
  ...
  published
    property OnBeforeWriteMsg: TBeforeWriteMsgEvent read fBeforeWriteMsg write fBeforeWriteMsg;
    property OnAfterWriteMsg: TAfterWriteMsgEvent read fAfterWriteMsg write fAfterWriteMsg;
  end;

implementation

function TMessageLog.WriteMessage(Msg: String): Boolean;
var
  OKToWrite: Boolean;
begin
  Result := FALSE;
  OKToWrite := TRUE;
  if Assigned(fBeforeWriteMsg) then
    fBeforeWriteMsg(Msg, OKToWrite);
  if OKToWrite then
  begin
    try
      AssignFile(fLogFile, fFileName);
      if FileExists(fFileName) then
        Append(fLogFile)
      else
        ReWrite(fLogFile);
      WriteLn(fLogFile, DateTimeToStr(Now()) + ' - ' + Msg);
      if Assigned(fAfterWriteMsg) then
        fAfterWriteMsg(Msg);
      Result := TRUE;
      CloseFile(fLogFile);
    except
      MessageDlg('Cannot write to log file, ' + fFileName + '!', mtError, [mbOK], 0);
      CloseFile(fLogFile);
    end; // try...except
  end; // if
end; // WriteMessage

Examining the WriteMessage function, we see that before the Msg parameter is written to the file, the fBeforeWriteMsg field is checked, and, if anything is assigned to it, the handler assigned to that field is called with the parameters Msg and OKToWrite. The method pointer type TBeforeWriteMsgEvent declares both of these parameters as var parameters, so any changes made to them in the handler are passed back to the WriteMessage function. If the Msg parameter is successfully written to the file, the fAfterWriteMsg field is likewise checked and its handler executed if assigned; the file is then closed and the function's result is set to True. If the Msg value cannot be written to the file, an error dialog is shown, the file is closed, and the function's result is set to False.

With the changes that we have made to the TMessageLog unit, we now have a functional component. You can now save the changes, recompile, reinstall the package, and try out the new component by creating a small application that uses TMessageLog.

Property editors

Property editors are custom dialogs for editing special properties of a component. The standard property types, such as strings, images, or enumerated types, have default property editors, but special property types may require you to write custom property editors. Custom property editors must extend from the class TPropertyEditor or one of its descendant classes. Property editors must be registered in the Register procedure using the function RegisterPropertyEditor from the unit PropEdits.
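As a sketch of what such a registration could look like for our component, the Register procedure might be extended as follows; the editor class TFileNamePropertyEditor is hypothetical and would have to be written first:

uses
  PropEdits, TypInfo;

procedure Register;
begin
  RegisterComponents('Misc', [TMessageLog]);
  // Hypothetical: attach a custom editor to TMessageLog's FileName property.
  RegisterPropertyEditor(TypeInfo(String), TMessageLog, 'FileName',
    TFileNamePropertyEditor);
end;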
An example of a property editor class declaration is given as follows:

TPropertyEditor = class
public
  function AutoFill: Boolean; Virtual;
  procedure Edit; Virtual; // double-clicking the property value to activate
  procedure ShowValue; Virtual; // control-clicking the property value to activate
  function GetAttributes: TPropertyAttributes; Virtual;
  function GetEditLimit: Integer; Virtual;
  function GetName: ShortString; Virtual;
  function GetHint(HintType: TPropEditHint; x, y: integer): String; Virtual;
  function GetDefaultValue: AnsiString; Virtual;
  function SubPropertiesNeedsUpdate: Boolean; Virtual;
  function IsDefaultValue: Boolean; Virtual;
  function IsNotDefaultValue: Boolean; Virtual;
  procedure GetProperties(Proc: TGetPropEditProc); Virtual;
  procedure GetValues(Proc: TGetStrProc); Virtual;
  procedure SetValue(const NewValue: AnsiString); Virtual;
  procedure UpdateSubProperties; Virtual;
end;

Having a class as the type of a component property is a good example of a property that needs a custom property editor. Because a class has many fields with different formats, it is not possible for Lazarus to have the Object Inspector make these fields available for editing without a property editor created for the class property, as it does for standard-type properties. For such properties, Lazarus shows the property name in parentheses followed by a button with an ellipsis (...) that activates the property editor. This functionality is handled by the standard property editor called TClassPropertyEditor, which can then be inherited to create a custom property editor, as given in the following code:

TClassPropertyEditor = class(TPropertyEditor)
public
  constructor Create(Hook: TPropertyEditorHook; APropCount: Integer); Override;
  function GetAttributes: TPropertyAttributes; Override;
  procedure GetProperties(Proc: TGetPropEditProc); Override;
  function GetValue: AnsiString; Override;
  property SubPropsTypeFilter: TTypeKinds Read FSubPropsTypeFilter
    Write SetSubPropsTypeFilter Default tkAny;
end;

Using the preceding class as a base class, all you need to do to complete a property editor is add a dialog in the Edit method, as follows:

TMyPropertyEditor = class(TClassPropertyEditor)
public
  procedure Edit; Override;
  function GetAttributes: TPropertyAttributes; Override;
end;

procedure TMyPropertyEditor.Edit;
var
  MyDialog: TCommonDialog;
begin
  MyDialog := TCommonDialog.Create(NIL);
  try
    ...
    // Here you can set attributes of the dialog
    MyDialog.Options := MyDialog.Options + [fdShowHelp];
    ...
  finally
    MyDialog.Free;
  end;
end;

Component editors

Component editors control the behavior of a component when it is double-clicked or right-clicked in the form designer. Classes that define a component editor must descend from TComponentEditor or one of its descendant classes, and the class should be registered in the Register procedure using the function RegisterComponentEditor. Most of the methods of TComponentEditor are inherited from its ancestor, TBaseComponentEditor, and, if you are going to write a component editor, you need to be aware of this class and its methods.
The declaration of TBaseComponentEditor is as follows:

TBaseComponentEditor = class
protected
public
  constructor Create(AComponent: TComponent;
    ADesigner: TComponentEditorDesigner); Virtual;
  procedure Edit; Virtual; Abstract;
  procedure ExecuteVerb(Index: Integer); Virtual; Abstract;
  function GetVerb(Index: Integer): String; Virtual; Abstract;
  function GetVerbCount: Integer; Virtual; Abstract;
  procedure PrepareItem(Index: Integer; const AnItem: TMenuItem); Virtual; Abstract;
  procedure Copy; Virtual; Abstract;
  function IsInInlined: Boolean; Virtual; Abstract;
  function GetComponent: TComponent; Virtual; Abstract;
  function GetDesigner: TComponentEditorDesigner; Virtual; Abstract;
  function GetHook(out Hook: TPropertyEditorHook): Boolean; Virtual; Abstract;
  procedure Modified; Virtual; Abstract;
end;

Let us look at some of the more important methods of the class. The Edit method is called on double-clicking a component in the form designer. GetVerbCount and GetVerb are called to build the context menu that is invoked by right-clicking on the component; a verb is a menu item, GetVerb returns the name of a menu item, and GetVerbCount returns the total number of items on the context menu. The PrepareItem method is called for each menu item after the menu is created, and it allows the menu item to be customized, such as adding a submenu or hiding the item by setting its visibility to False. ExecuteVerb executes the menu item. The Copy method is called when the component is copied to the clipboard.

A good example is the component editor for TCheckGroup. It descends from TComponentEditor, so not all the methods of TBaseComponentEditor need to be implemented; TComponentEditor provides empty implementations for most methods and sets defaults for others, and only the methods the editor needs are overridden. An example of the component editor's code (the original listing mixed TCheckListBox and TCheckGroup naming; the names have been made consistent here) is given as follows:

TCheckGroupComponentEditor = class(TComponentEditor)
protected
  procedure DoShowEditor;
public
  procedure ExecuteVerb(Index: Integer); override;
  function GetVerb(Index: Integer): String; override;
  function GetVerbCount: Integer; override;
end;

procedure TCheckGroupComponentEditor.DoShowEditor;
var
  Dlg: TCheckGroupEditorDlg;
begin
  Dlg := TCheckGroupEditorDlg.Create(NIL);
  try
    // .. shortened
    Dlg.ShowModal;
    // .. shortened
  finally
    Dlg.Free;
  end;
end;

procedure TCheckGroupComponentEditor.ExecuteVerb(Index: Integer);
begin
  case Index of
    0: DoShowEditor;
  end;
end;

function TCheckGroupComponentEditor.GetVerb(Index: Integer): String;
begin
  Result := 'CheckBox Editor...';
end;

function TCheckGroupComponentEditor.GetVerbCount: Integer;
begin
  Result := 1;
end;

Summary

In this article, we learned how to create a new Lazarus package and add a new component to it using the New Package dialog window, creating our own custom component, TMessageLog. We also learned about compiling and installing a new component into the IDE, which requires Lazarus to rebuild itself. Moreover, we discussed component properties. Then, we became acquainted with events, which are triggered by any interaction that a user has with a component, such as clicking it, or by a system response, which could be caused by a change in one component of a form that affects another component. We studied that events are properties, and that they are handled through a technique called delegation.
We discovered that the simplest way to create an event is to declare a property of type TNotifyEvent; if you need to send more parameters to an event than the single Sender parameter that TNotifyEvent provides, you need to declare your own method pointer type. We learned that property editors are custom dialogs for editing special properties of a component that aren't of a standard type, such as string or integer, and that they must extend from TPropertyEditor. Then, we discussed component editors, which control the behavior of a component when it is right-clicked or double-clicked in the form designer, and noted that a component editor must descend from TComponentEditor or a descendant class of it. Finally, we looked at an example of a component editor for TCheckGroup.

Resources for Article:

Further resources on this subject:
- User Extensions and Add-ons in Selenium 1.0 Testing Tools [Article]
- 10 Minute Guide to the Enterprise Service Bus and the NetBeans SOA Pack [Article]
- Support for Developers of Spring Web Flow 2 [Article]

Linux programmers opposed to new Code of Conduct threaten to pull code from project

Melisha Dsouza
25 Sep 2018
6 min read
Facts of the controversy at hand

To "help make the kernel community a welcoming environment to participate in", Linux accepted a new Code of Conduct earlier this month. This created conflict in the developer community because of the clauses in the CoC. The CoC is derived from the Contributor Covenant, created by Coraline Ada Ehmke, a software developer, an open-source advocate, and an LGBT activist.

Just 30 minutes after signing this CoC, principal kernel contributor Linus Torvalds sent a mail apologizing for his past behavior and announced a temporary break to improve upon it:

"This week people in our community confronted me about my lifetime of not understanding emotions. My flippant attacks in emails have been both unprofessional and uncalled for. Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry."

This decision to take a break is speculated by many to be a precautionary measure to prevent Torvalds from violating the newly created Code of Conduct. The controversy took better shape after Ehmke's sarcastic tweet (embedded in the original post; source: Twitter).

The new Linux Code of Conduct is causing huge conflict

Linux's move from its Code of Conflict to the new Code of Conduct has not been received well by many of its developers. Some have threatened to pull their blocks of code important to the project in revolt against the change. This could have serious consequences, because Linux is one of the most important pieces of open source software in the world. If the threats are put into action, large parts of the internet would be left vulnerable to exploits, and applications that use Linux would be like an incomplete Jenga stack that could collapse any minute.

Why some Linux developers are opposed to the new code of conduct

Here is a summary of the views of developers who say the Code of Conduct justifies their decision:

- Amending to the CoC could mean good contributors are removed over trivial matters or even events that happened a long time ago, like Larry Garfield, a prominent Drupal contributor who was asked to step down after his sex fetish was made public.
- There is a lack of proper definitions for punishments, time frames, and what constitutes abuse, harassment, or inappropriate behavior, which leaves the Code of Conduct wide open for exploitation.
- It gives the people charged with enforcement immense power.
- It could force acceptance of contributions that wouldn't make the cut if made by cis white males. Developers are concerned that Linux will start accepting patches from anyone and everyone just to keep par with the Code of Conduct, so that acceptance no longer depends on a person's ability to code, but on social parameters like race, color, sex, and gender.

Why some developers believe in the new Linux Code of Conduct

On the other side of the argument, here are some potential reasons why the CoC will foster social justice:

- It encourages an inclusive and safe space for women, LGBTQIA+ people, and People of Color, who in the absence of the CoC are excluded, harassed, and sometimes even raped by cis white males.
- The CoC aims to overcome meritocracy, which in many organizations has consistently shown itself to mainly benefit those with privilege, to the exclusion of underrepresented people in technology.
- A vast majority of Linux contributors are cis white males.
CC’s Code of Conduct would enable the building of a more diverse demographic array of contributors What does this mean to the developer community? Linux includes programmers who are always free to contribute to its open source platform. Contributing good code would help them climb up the ladder and become a ‘maintainer’. The greatest strength of Linux was its flexibility. Developers would contribute to the kernel and be concerned about only a single entity- their code patch. The Linux community would judge the code based on its quality. However, with the new Code of Conduct, critics say this could make passing judgement on code more challenging, For them, the Code of Conduct is a set of rules that expects everyone to be at equal levels in the community. It could mean that certain patches are accepted for fear of contravening the Code of Conduct. Here is what Caroline Ada Ehmke was forthright in her criticism of this view:   Source: Twitter   Clearly, many of the fears of the Code of Conduct’s critics haven’t yet come to pass. What they’re ultimately worried about is that there could be negative consequences. Google Developer Ted Ts’o next on the hit list Earlier this week, activist Sage Sharp, tweeted about Ted Ts'o:                                                      Source: Twitter This perhaps needs some context - the beginning of this argument dates all the way back to 2011 when Ts’o when was a member of the Linux Foundation's technical advisory board-participated in a discussion on the mailing list for the Australian national Linux conference that year, making comments that were later interpreted by Aurora as rape apologism. Using Aurora's piece as a fuse, Google employee Matthew Garrett slammed Ts'o on his beliefs. In 2017, yielding to the demands of SJWs, Google threw out James Damore, an engineer who circulated an internal creed about reverse discrimination in hiring practices. The SJW’s are coming for him and the best way to go forward would be to “take a break”, just like Linus did. As claimed by Caroline, the underlying aim of the CoC was to guide people in behaving in a respectable way and create a positive environment for people irrespective of their race, ethnicity, religion, nationality and political views. However, overlooking this aim, developers are concerned with the loopholes in the CoC. Gain more insights on this news as well as views from members of the Linux community at itfloss. Linux drops Code of Conflict and adopts new Code of Conduct Linus Torvalds is sorry for his ‘hurtful behavior’, is taking ‘a break (from the Linux community) to get help’ NSA researchers present security improvements for Zephyr and Fucshia at Linux Security Summit 2018
How to Build 12 Factor Microservices on Docker - Part 1

Cody A.
26 Jun 2015
9 min read
As companies continue to reap the benefits of the cloud beyond cost savings, DevOps teams are gradually transforming their infrastructure into a self-serve platform. Critical to this effort is designing applications to be cloud-native and antifragile. In this post series, we will examine the 12 factor methodology for application design, see how this design approach interfaces with some of the more popular Platform-as-a-Service (PaaS) providers, and demonstrate how to run such microservices on the Deis PaaS.

What began as Service Oriented Architecture in the data center is realizing its full potential as microservices in the cloud, led by innovators such as Netflix and Heroku. Netflix was arguably the first to design their applications to be not only resilient but antifragile; that is, by intentionally introducing chaos into their systems, their applications become more stable, scalable, and graceful in the presence of errors. Similarly, by helping thousands of clients build cloud applications, Heroku recognized a set of common patterns emerging and set forth the 12 factor methodology.

ANTIFRAGILITY

You may have never heard of antifragility. This concept was introduced by Nassim Taleb, the author of Fooled by Randomness and The Black Swan. Essentially, an antifragile system is one that gains from volatility and uncertainty (up to a point). Think of the MySQL server that everyone is afraid to touch lest it crash versus the Cassandra ring, which can handle the loss of multiple servers without a problem. In terms more familiar to the tech crowd, a “pet” is fragile while “cattle” are antifragile (or at least robust, that is, they neither gain nor lose from volatility).

Adrian Cockcroft seems to have discovered this concept with his team at Netflix. During their transition from a data center to Amazon Web Services, they claimed that “the best way to avoid failure is to fail constantly.” (http://techblog.netflix.com/2010/12/5-lessons-weve-learned-using-aws.html) To facilitate this process, one of the first tools Netflix built was Chaos Monkey, the now-infamous tool which kills your Amazon instances to see if and how well your application responds. By constantly injecting failure, their engineers were forced to design their applications to be more fault tolerant, to degrade gracefully, and to be better distributed so as to avoid any Single Points Of Failure (SPOF). As a result, Netflix has a whole suite of tools which form the Netflix PaaS. Many of these have been released as part of the Netflix OSS ecosystem.

12 FACTOR APPS

Because many companies want to avoid relying too heavily on tools from any single third party, it may be more beneficial to look at the concepts underlying such a cloud-native design. This will also help you evaluate and compare multiple options for solving the core issues at hand. Heroku, being a platform on which thousands or millions of applications are deployed, has had to isolate the core design patterns for applications which operate in the cloud and provide an environment which makes such applications easy to build and maintain. These are described in a manifesto entitled the 12-Factor App.

The first part of this post walks through the first four factors and reworks a simple Python webapp with them in mind. Part 2 continues with the remaining eight factors, demonstrating how this design allows easier integration with cloud-native containerization technologies like Docker and Deis.
Let's say we're starting with a minimal Python application which simply provides a way to view some content from a relational database. We'll start with a single-file application, app.py:

from flask import Flask
import mysql.connector as db
import json

app = Flask(__name__)

def execute(query):
    con = None
    try:
        con = db.connect(host='localhost', user='testdb', password='t123', database='testdb')
        cur = con.cursor()
        cur.execute(query)
        return cur.fetchall()
    except db.Error, e:
        print "Error %d: %s" % (e.args[0], e.args[1])
        return None
    finally:
        if con:
            con.close()

def list_users():
    users = execute("SELECT id, username, email FROM users") or []
    return [{"id": user_id, "username": username, "email": email}
            for (user_id, username, email) in users]

@app.route("/users")
def users_index():
    return json.dumps(list_users())

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000, debug=True)

We can assume you have a simple MySQL database set up already:

CREATE DATABASE testdb;
CREATE TABLE users (
    id INT NOT NULL AUTO_INCREMENT,
    username VARCHAR(80) NOT NULL,
    email VARCHAR(120) NOT NULL,
    PRIMARY KEY (id),
    UNIQUE INDEX (username),
    UNIQUE INDEX (email)
);
INSERT INTO users VALUES (1, "admin", "[email protected]");
INSERT INTO users VALUES (2, "guest", "[email protected]");

As you can see, the application is currently implemented with about the most naive approach possible and contained within this single file. We'll now walk step-by-step through the 12 Factors and apply them to this simple application.

THE 12 FACTORS: STEP BY STEP

Codebase. A 12-factor app is always tracked in a version control system, such as Git, Mercurial, or Subversion. If there are multiple codebases, it's a distributed system in which each component may be a 12-factor app. There are many deploys, or running instances, of each application, including production, staging, and developers' local environments.

Since many people are familiar with git today, let's choose that as our version control system. We can initialize a git repo for our new project. First ensure we're in the app directory which, at this point, only contains the single app.py file:

cd 12factor
git init .

After adding the single app.py file, we can commit to the repo:

git add app.py
git commit -m "Initial commit"

Dependencies. All dependencies must be explicitly declared and isolated. A 12-factor app never depends on packages being installed system-wide and uses a dependency isolation tool during execution to stop any system-wide packages from “leaking in.” Good examples are Gem Bundler for Ruby (the Gemfile provides declaration and `bundle exec` provides isolation) and pip/requirements.txt plus virtualenv for Python (where pip/requirements.txt provides declaration and `virtualenv --no-site-packages` provides isolation).

We can create and use (source) a virtualenv environment which explicitly isolates the local app's environment from the global “site-packages” installations:

virtualenv env --no-site-packages
source env/bin/activate

A quick glance at the code shows that we're only using two dependencies currently, flask and mysql-connector-python (which provides the mysql.connector module imported above), so we'll add them to the requirements file:

echo flask==0.10.1 >> requirements.txt
echo mysql-connector-python >> requirements.txt

Let's use the requirements file to install all the dependencies into our isolated virtualenv:

pip install -r requirements.txt

Config. An app's config must be stored in environment variables. This config is what may vary between deploys in developer environments, staging, and production.
The most common example is the database credentials or resource handle. We currently have the host, user, password, and database name hardcoded. Hopefully you've at least already extracted these to a configuration file; either way, we'll be moving them to environment variables instead:

import os

DATABASE_CREDENTIALS = {
    'host': os.environ['DATABASE_HOST'],
    'user': os.environ['DATABASE_USER'],
    'password': os.environ['DATABASE_PASSWORD'],
    'database': os.environ['DATABASE_NAME']
}

Don't forget to update the actual connection to use the new credentials object:

con = db.connect(**DATABASE_CREDENTIALS)

Backing Services. A 12-factor app must make no distinction between a service running locally or as a third party. For example, a deploy should be able to swap out a local MySQL database with a third-party replacement such as Amazon RDS without any code changes, just by updating a URL or other handle/credentials inside the config.

Using a database abstraction layer such as SQLAlchemy (or your own adapter) lets you treat many backing services similarly, so that you can switch between them with a single configuration parameter. In this case, it has the added advantage of serving as an Object Relational Mapper to better encapsulate our database access logic. We can replace the hand-rolled execute function and SELECT query with a model object:

from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ['DATABASE_URL']
db = SQLAlchemy(app)

class User(db.Model):
    __tablename__ = 'users'
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True)
    email = db.Column(db.String(120), unique=True)

    def __init__(self, username, email):
        self.username = username
        self.email = email

    def __repr__(self):
        return '<User %r>' % self.username

@app.route("/users")
def users_index():
    to_json = lambda user: {"id": user.id, "name": user.username, "email": user.email}
    return json.dumps([to_json(user) for user in User.query.all()])

Now we set the DATABASE_URL environment property to something like:

export DATABASE_URL=mysql://testdb:t123@localhost/testdb

But it should be easy to switch to Postgres or Amazon RDS (still backed by MySQL):

DATABASE_URL=postgresql://testdb:t123@localhost/testdb

We'll continue this demo using a MySQL cluster provided by Amazon RDS:

DATABASE_URL=mysql://sa:[email protected]/mydb

As you can see, this makes attaching and detaching from different backing services trivial from a code perspective, allowing you to focus on more challenging issues. This is important during the early stages of development because it allows you to performance test multiple databases and third-party providers against one another, and in general keeps with the notion of avoiding vendor lock-in.
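To make this concrete, here is a quick smoke test for the refactored app. This is our own hedged sketch, not part of the original article: the module name app, the throwaway sqlite URL, and the sample user are illustrative assumptions, while db.create_all() and the session calls are the standard Flask-SQLAlchemy ones:

import os
# Point the app at a throwaway local database before importing it;
# any SQLAlchemy-compatible URL from the config would work equally well.
os.environ.setdefault('DATABASE_URL', 'sqlite:///test.db')

from app import db, User   # assumes the refactored file above is app.py

db.create_all()            # create the users table if it does not exist yet
db.session.add(User('admin', 'admin@localhost'))
db.session.commit()
print User.query.all()     # prints the repr of each stored user

Swapping the sqlite URL for the MySQL or Postgres URLs shown above exercises exactly the same code path, which is the point of this factor.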
In Part 2, we'll continue reworking this application so that it fully conforms to the 12 Factors. The remaining eight factors concern the overall application design and how it interacts with the execution environment in which it's operated. We'll assume that we're operating the app in a multi-container Docker environment. This container-up approach provides the most flexibility and control over your execution environment. We'll then conclude the article by deploying our application to Deis, a vertically integrated Docker-based PaaS, to demonstrate the tradeoff of configuration versus convention in selecting your own PaaS.

About the author: Cody A. Ray is an inquisitive, tech-savvy, entrepreneurially-spirited dude. Currently, he is a software engineer at Signal, an amazing startup in downtown Chicago, where he gets to work with a dream team that's changing the service model underlying the Internet.
Web Scraping with Python

Packt
17 Feb 2010
5 min read
To perform this task, usually three basic steps are followed:

1. Explore the website to find out where the desired information is located in the HTML DOM tree
2. Download as many web pages as needed
3. Parse the downloaded web pages and extract the information from the places found in the exploration step

The exploration step is performed manually with the aid of some tools that make it easier to locate the information and reduce the development time in the next steps. The download and parsing steps are usually performed in an iterative cycle, since they are interrelated. This is because the next page to download may depend on a link or similar in the current page, so not every web page can be downloaded without previously looking into the earlier one.

This article will show an example covering the three steps mentioned and how this could be done using Python with some development. The code that will be displayed is guaranteed to work at the time of writing; however, it should be taken into account that it may stop working in the future if the presentation format changes. The reason is that web scraping depends on the DOM tree being stable enough; that is to say, as happens with regular expressions, it will work fine for slight changes in the information being parsed. However, when the presentation format is completely changed, the web scraping scripts have to be modified to match the new DOM tree.

Explore

Let's say you are a fan of the Packt Publishing article network and that you want to keep a list of the titles of all the articles that have been published until now, along with the links to them. First of all, you will need to connect to the main article network page (http://www.packtpub.com/article-network) and start exploring the web page to get an idea about where the information that you want to extract is located. Many ways are available to perform this task, such as viewing the source code directly in your browser, or downloading it and inspecting it with your favorite editor. However, HTML pages often contain auto-generated code and are not as readable as they should be, so using a specialized tool might be quite helpful. In my opinion, the best one for this task is the Firebug add-on for the Firefox browser. With this add-on, instead of looking carefully through the code searching for some string, all you have to do is press the Inspect button, move the pointer to the area in which you are interested, and click. After that, the HTML code for the marked area and the location of the tag in the DOM tree will be clearly displayed.

For example, the links to the different pages containing all the articles are located inside a right tag and, on every page, the links to the articles are contained as list items in an unnumbered list. In addition to this, the link URLs, as you have probably noticed while reading other articles, start with http://www.packtpub.com/article/

So, our scraping strategy will be:

1. Get the list of links to all pages containing articles
2. Follow all the links so as to extract the article information from all pages

One small optimization here is that the main article network page is the same as the one pointed to by the first page link, so we will take this into account to avoid loading the same page twice when we develop the code.

Download

Before parsing any web page, the contents of that page must be downloaded.
As usual, there are many ways to do this:

- Creating your own HTTP requests using the urllib2 standard Python library
- Using a more advanced library that provides the capability to navigate through a website simulating a browser, such as mechanize

In this article mechanize will be covered, as it is the easiest choice. mechanize is a library that provides a Browser class that lets the developer interact with a website in a similar way a real browser would. In particular, it provides methods to open pages, follow links, change form data, and submit forms.

Recalling our scraping strategy, the first thing we would like to do is to download the main article network web page. To do that, we will create a Browser class instance and then open the main article network page:

>>> import mechanize
>>> BASE_URL = "http://www.packtpub.com/article-network"
>>> br = mechanize.Browser()
>>> data = br.open(BASE_URL).get_data()
>>> links = scrape_links(BASE_URL, data)

The result of the open method is an HTTP response object, and the get_data method returns the contents of the web page. The scrape_links function will be explained later. For now, as pointed out in the introduction section, bear in mind that the downloading and parsing steps are usually performed iteratively, since some of the contents to be downloaded depend on the parsing done on previously downloaded contents, as in this case.
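Since scrape_links is only explained later in the full article, here is a minimal sketch of what such a helper might look like, purely as our own illustration and not the article's implementation. It assumes that pulling href attributes with a regular expression is good enough for this page, whereas a real implementation would parse the DOM:

import re
from urlparse import urljoin

ARTICLE_PREFIX = "http://www.packtpub.com/article/"

def scrape_links(base_url, html):
    # Collect every href in the page and resolve relative URLs
    hrefs = re.findall(r'href="([^"]+)"', html)
    absolute_urls = [urljoin(base_url, href) for href in hrefs]
    # Keep only the links that point to individual articles
    return [url for url in absolute_urls if url.startswith(ARTICLE_PREFIX)]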
Handling the DOM in Dart

Packt
24 Dec 2013
15 min read
(For more resources related to this topic, see here.)

A Dart web application runs inside the browser (HTML) page that hosts the app; a single-page web app is more and more common. This page may already contain some HTML elements or nodes, such as <div> and <input>, and your Dart code will manipulate and change them, but it can also create new elements. The user interface may even be entirely built up through code. Besides that, Dart is responsible for implementing interactivity with the user (the handling of events, such as button-clicks) and the dynamic behavior of the program, for example, fetching data from a server and showing it on the screen. We explored some simple examples of these techniques. Compared to JavaScript, Dart has simplified the way in which code interacts with the collection of elements on a web page (called the DOM tree). This article teaches you this new method using a number of simple examples, culminating with a Ping Pong game. The following are the topics:

- Finding elements and changing their attributes
- Creating and removing elements
- Handling events
- Manipulating the style of page elements
- Animating a game
- Ping Pong using style(s)
- How to draw on a canvas – Ping Pong revisited

Finding elements and changing their attributes

All web apps import the Dart library dart:html; this is a huge collection of functions and classes needed to program the DOM (look it up at api.dartlang.org). Let's discuss the base classes, which are as follows:

The Navigator class contains info about the browser running the app, such as the product (the name of the browser), its vendor, the MIME types supported by the installed plugins, and also the geolocation object.

Every browser window corresponds to an object of the Window class, which contains, amongst many others, a navigator object, the close, print, scroll, and moveTo methods, and a whole bunch of event handlers, such as onLoad, onClick, onKeyUp, onMouseOver, onTouchStart, and onSubmit. Use an alert to get a pop-up message in the web page, such as in todo_v2.dart:

window.onLoad.listen( (e) => window.alert("I am at your disposal") );

If your browser has tabs, each tab opens in a separate window. From the Window class, you can access local storage or IndexedDB to store app data on the client. The Window object also contains an object document of the Document class, which corresponds to the HTML document. It is used to query for, create, and manipulate elements within the document. The document also has a list of stylesheets (objects of the StyleSheet class); we will use this in our first version of the Ping Pong game.

Everything that appears on a web page can be represented by an object of the Node class; so, not only are tags and their attributes nodes, but also text, comments, and so on. The Document object in a Window class contains a List<Node> element of the nodes in the document tree (DOM) called childNodes.

The Element class, being a subclass of Node, represents web page elements (tags, such as <p>, <div>, and so on); it has subclasses, such as ButtonElement, InputElement, TableElement, and so on, each corresponding to a specific HTML tag, such as <button>, <input>, <table>, and so on. Every element can have embedded tags, so it contains a List<Element> element called children.
Let us make this more concrete by looking at todo_v2.dart, solely for didactic purposes. The HTML file contains an <input> tag with the id value task, and a <ul> tag with the id value list:

<div>
  <input id="task" type="text" placeholder="What do you want to do?"/>
  <p id="para">Initial paragraph text</p>
</div>
<div id="btns">
  <button class="backgr">Toggle background color of header</button>
  <button class="backgr">Change text of paragraph</button>
  <button class="backgr">Change text of placeholder in input field and the background color of the buttons</button>
</div>
<div><ul id="list"/></div>

In our Dart code, we declare the following objects representing them:

InputElement task;
UListElement list;

The following list object contains objects of the LIElement class, which are made in addItem():

var newTask = new LIElement();

You can see the different elements and their layout in the following screenshot (the screen of todo_v2).

Finding elements

Now we must bind these objects to the corresponding HTML elements. For that, we use the top-level functions querySelector and querySelectorAll; for example, the InputElement task is bound to the <input> tag with the id value task using:

task = querySelector('#task');

Both functions take a string (a CSS selector) that identifies the element, where the id value task is preceded by #. CSS selectors are patterns that are used in .css files to select elements that you want to style. There are a number of them but, generally, we only need a few basic selectors (for an overview, visit http://www.w3schools.com/cssref/css_selectors.asp):

- If the element has an id attribute with the value abc, use querySelector('#abc')
- If the element has a class attribute with the value abc, use querySelector('.abc')
- To get a list of all elements with the tag <button>, use querySelectorAll('button')
- To get a list of all text elements, use querySelectorAll('input[type="text"]')

All sorts of combinations of selectors are possible; for example, querySelectorAll('#btns .backgr') will get a list of all elements with the backgr class that are inside a tag with the id value btns. These functions are defined on the document object of the web page, so in code you will also see document.querySelector() and document.querySelectorAll().

Changing the attributes of elements

All objects of the Element class have properties in common, such as classes, hidden, id, innerHtml, style, text, and title; specialized subclasses have additional properties, such as value for a ProgressElement. Changing the value of a property in an element makes the browser re-render the page to show the changed user interface.
Experiment with todo_v2.dart:

import 'dart:html';

InputElement task;
UListElement list;
Element header;
List<ButtonElement> btns;
Element el;  // declared at library level so that replacePar can see it

main() {
  task = querySelector('#task');
  list = querySelector('#list');
  task.onChange.listen( (e) => addItem() );
  // find the h2 header element:
  header = querySelector('.header');                             (1)
  // find the buttons:
  btns = querySelectorAll('button');                             (2)
  // attach event handlers to the 1st and 2nd buttons:
  btns[0].onClick.listen( (e) => changeColorHeader() );          (3)
  btns[1].onDoubleClick.listen( (e) => changeTextPara() );       (4)
  // another way to get the same buttons with class backgr:
  var btns2 = querySelectorAll('#btns .backgr');                 (5)
  btns2[2].onMouseOver.listen( (e) => changePlaceHolder() );     (6)
  btns2[2].onClick.listen( (e) => changeBtnsBackColor() );       (7)
  addElements();
}

changeColorHeader() => header.classes.toggle('header2');         (8)
changeTextPara() => querySelector('#para').text = "You changed my text!";     (9)
changePlaceHolder() => task.placeholder = 'Come on, type something in!';      (10)
changeBtnsBackColor() => btns.forEach( (b) => b.classes.add('btns_backgr') ); (11)

void addItem() {
  var newTask = new LIElement();                                 (12)
  newTask.text = task.value;                                     (13)
  newTask.onClick.listen( (e) => newTask.remove() );
  task.value = '';
  list.children.add(newTask);                                    (14)
}

addElements() {
  var ch1 = new CheckboxInputElement();                          (15)
  ch1.checked = true;
  document.body.children.add(ch1);                               (16)
  var par = new Element.tag('p');                                (17)
  par.text = 'I am a newly created paragraph!';
  document.body.children.add(par);
  el = new Element.html('<div><h4><b>A small div section</b></h4></div>');    (18)
  document.body.children.add(el);
  var btn = new ButtonElement();
  btn.text = 'Replace';
  btn.onClick.listen(replacePar);
  document.body.children.add(btn);
  var btn2 = new ButtonElement();
  btn2.text = 'Delete all list items';
  btn2.onClick.listen( (e) => list.children.clear() );           (19)
  document.body.children.add(btn2);
}

replacePar(Event e) {
  var el2 = new Element.html('<div><h4><b>I replaced this div!</b></h4></div>');
  el.replaceWith(el2);                                           (20)
}

Comments for the numbered lines are as follows:

(1) We find the <h2> element via its class.
(2) We get a list of all the buttons via their tag.
(3) We attach an event handler to the Click event of the first button, which toggles the class of the <h2> element, changing its background color at each click (line (8)).
(4) We attach an event handler to the DoubleClick event of the second button, which changes the text in the <p> element (line (9)).
(5) We get the same list of buttons via a combination of CSS selectors.
(6) We attach an event handler to the MouseOver event of the third button, which changes the placeholder in the input field (line (10)).
(7) We attach a second event handler to the third button; clicking on it changes the background color of all buttons by adding a new CSS class to their classes collection (line (11)).

Every HTML element also has an attributes Map where the keys are the attribute names; you can use this Map to change an attribute, for example:

btn.attributes['disabled'] = 'true';

Please refer to the following document to see which attributes apply to which element: https://developer.mozilla.org/en-US/docs/HTML/Attributes

Creating and removing elements

The structure of a web page is represented as a tree of nodes in the Document Object Model (DOM).
A web page can start its life with an initial DOM tree, marked up in its HTML file, and then the tree can be changed using code; or, it can start off with an empty tree, which is then entirely created using code in the app, that is, every element is created through a constructor and its properties are set in code.

Elements are subclasses of Node; they take up a rectangular space on the web page (with a width and height). They have, at most, one parent Element in which they are enclosed and can contain a list of Elements, their children (you can check this with the function hasChildNodes(), which returns a bool). Furthermore, they can receive events. Elements must first be created before they can be added to the list of a parent element. Elements can also be removed from a node. When elements are added or removed, the DOM tree is changed and the browser has to re-render the web page.

An Element object is either bound to an existing node with the querySelector method of the document object, or it can be created with its specific constructor, such as that in line (12) (where newTask belongs to the class LIElement, List Item element) or line (15). If useful, we could also specify the id in the code, such as in newTask.id = 'newTask';

If you need a DOM element in different places in your code, you can improve the performance of your app by querying it only once, binding it to a variable, and then working with that variable. After being created, the element properties can be given a value, such as that in line (13). Then, the object (let's name it elem) is added to an existing node, for example, to the body node with document.body.children.add(elem), as in line (16), or to an existing node such as list, as in line (14).

Elements can also be created with two named constructors from the Element class:

- Element.tag('tagName'), as in line (17), where tagName is any valid HTML tag, such as <p>, <div>, <input>, <select>, and so on.
- Element.html('htmlSnippet'), as in line (18), where htmlSnippet is any valid combination of HTML tags.

Use the first constructor if you want to create everything dynamically at runtime; use the second constructor when you know what the HTML for that element will be like and you won't need to reference its child elements in your code (but by using the querySelector method, you can always find them if needed). You can leave the type of the created object open using var, or use the type Element, or use the specific class name (such as InputElement); use the latter if you want your IDE to give you more specific code completion and warnings/errors against the possible misuse of types.

When hovering over a list item, the item changes color and the cursor becomes a hand icon; this could be done in code (try it), but it is easier to do in the CSS file:

#list li:hover {
  color: aqua;
  font-size: 20px;
  font-weight: bold;
  cursor: pointer;
}

To delete an Element elem from the DOM tree, use elem.remove(). We can delete list items by clicking on them, which is coded with only one line:

newTask.onClick.listen( (e) => newTask.remove() );

To remove all the list items, use the List function clear(), as in line (19). Replace elem with another element elem2 using elem.replaceWith(elem2), as in line (20).

Handling events

When the user interacts with the web form, such as when clicking on a button or filling in a text field, an event fires; any element on the page can have events.
The DOM contains hooks for these events, and the developer can write code (an event handler) that the browser must execute when the event fires. How do we add an event handler to an element (this is also called registering an event handler)? The general format is:

element.onEvent.listen( event_handler )

(The spaces are not needed, but can be used to make the code more readable.) Examples of events are Click, Change, Focus, Drag, MouseDown, Load, KeyUp, and so on. View this as the browser listening for events on elements and, when they occur, executing the indicated event handler.

The argument that is passed to the listen() method is a callback function and has to be of the type EventListener; it has the signature:

void EventListener(Event e)

The event handler gets passed an Event parameter, succinctly called e or ev, that contains more specific info on the event, such as which mouse button was pressed in case of a mouse event, on which object the event took place using e.target, and so on. If an event is not handled on the target object itself, you can still write the event handler in its parent, or its parent's parent, and so on up the DOM tree, where it will also get executed; in such a situation, the target property can be useful in determining the original event object.

In todo_v2.dart, we examine the various event-coding styles. Using the general format, the Click event on btns2[2] can be handled using the following code:

btns2[2].onClick.listen( changeBtnsBackColor );

where changeBtnsBackColor is the event handler or callback function. This function is written as:

changeBtnsBackColor(Event e) => btns.forEach( (b) => b.classes.add('btns_backgr'));

Another, shorter way to write this (as in line (7)) is:

btns2[2].onClick.listen( (e) => changeBtnsBackColor() );
changeBtnsBackColor() => btns.forEach( (b) => b.classes.add('btns_backgr'));

When a Click event occurs on btns2[2], the handler changeBtnsBackColor is called. In case the event handler needs more code lines, use the brace syntax as follows:

changeBtnsBackColor(Event e) {
  btns.forEach( (b) => b.classes.add('btns_backgr'));
  // possibly other code
}

Familiarize yourself with these different ways of writing event handlers. Of course, when the handler needs only one line of code, there is no need for a separate method, as in the following code:

newTask.onClick.listen( (e) => newTask.remove() );

For clarity, we use the function expression syntax => whenever possible, but you can inline the event handler and use the brace syntax along with an anonymous function, thus avoiding a separate method. So instead of executing the following code:

task.onChange.listen( (e) => addItem() );

we could have executed:

task.onChange.listen( (e) {
  var newTask = new LIElement();
  newTask.text = task.value;
  newTask.onClick.listen( (e) => newTask.remove());
  task.value = '';
  list.children.add(newTask);
});

JavaScript developers will find the preceding code very familiar, but it is also used frequently in Dart code, so make yourself acquainted with the code pattern ( (e) {...} );.

The following is an example of how you can respond to key events (in this case, on the window object) using the keyCode and ctrlKey properties of the event e:

window.onKeyPress.listen( (e) {
  if (e.keyCode == KeyCode.ENTER) {
    window.alert("You pressed ENTER");
  }
  if (e.ctrlKey && e.keyCode == CTRL_ENTER) {
    window.alert("You pressed CTRL + ENTER");
  }
});

In this code, the constant const int CTRL_ENTER = 10; is used.
(The list of keyCodes can be found at http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes.)

Manipulating the style of page elements

CSS style properties can be changed in code as well: every element elem has a classes property, which is a set of CSS classes. You can add a CSS class as follows:

elem.classes.add('cssclass');

as we did in changeBtnsBackColor (line (11)); by adding this class, the new style is immediately applied to the element. Or, we can remove it to take away the style:

elem.classes.remove('cssclass');

The toggle method (line (8)):

elem.classes.toggle('cssclass');

is a combination of both: first the cssclass is applied (added), the next time it is removed, the time after that it is applied again, and so on.

Working with CSS classes is the best way to change the style, because the CSS definition is separated from the HTML markup. If you do want to change the style of an element directly, use its style property elem.style, where the cascade style of coding is very appropriate, for example:

newTask.style
  ..fontWeight = 'bold'
  ..fontSize = '3em'
  ..color = 'red';
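As a short recap, and purely as our own illustrative sketch rather than code from the article, the following fragment combines the techniques covered above: creating an element, registering an event handler, and restyling through a CSS class (the highlight class is a hypothetical one, assumed to exist in the page's stylesheet):

import 'dart:html';

main() {
  var btn = new ButtonElement()..text = 'Highlight me';
  // toggle a (hypothetical) CSS class on every click:
  btn.onClick.listen( (e) => btn.classes.toggle('highlight') );
  document.body.children.add(btn);
}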
Creating Customized Dialog Boxes with WiX

Packt
22 Oct 2010
5 min read
The WiX toolset ships with several user interface wizards that are ready to use out of the box. We'll briefly discuss each of the available sets and then move on to learning how to create your own from scratch. In this article by Nick Ramirez, author of the book WiX: A Developer's Guide to Windows Installer XML, you'll learn about:

- Adding dialogs into the InstallUISequence
- Linking one dialog to another to form a complete wizard
- Getting basic text and window styling working
- Including necessary dialogs like those needed to display errors

(For more resources on WiX, see here.)

WiX standard dialog sets

The wizards that come prebuilt with WiX won't fit every need, but they're a good place to get your feet wet. To add any one of them, you first have to add a project reference to WixUIExtension.dll, which can be found in the bin directory of your WiX program files. Adding this reference is sort of like adding a new source file: this one contains dialogs. To use one, you'll need to use a UIRef element to pull the dialog into the scope of your project. For example, this line, anywhere inside the Product element, will add the "Minimal" wizard to your installer:

<UIRef Id="WixUI_Minimal" />

It's definitely minimal, containing just one screen. It gives you a license agreement, which you can change by adding a WixVariable element with an Id of WixUILicenseRtf and a Value attribute that points to a Rich Text Format (.rtf) file containing your new license agreement:

<WixVariable Id="WixUILicenseRtf" Value="newLicense.rtf" />

You can also override the background image (red wheel on the left, white box on the right) by setting another WixVariable called WixUIDialogBmp to a new image. The dimensions used are 493x312. The other available wizards offer more, and we'll cover them in the following sections.

WixUI_Advanced

The "Advanced" dialog set offers more: it has a screen that lets the user choose to install for just the current user or for all users, another where the end user can change the folder that files are installed to, and a screen with a feature tree where features can be turned on or off, as in the following screenshot.

You'll need to change your UIRef element to use WixUI_Advanced. This can be done by adding the following line:

<UIRef Id="WixUI_Advanced" />

You'll also have to make sure that your install directory has an Id of APPLICATIONFOLDER, as in this example:

<Directory Id="TARGETDIR" Name="SourceDir">
  <Directory Id="ProgramFilesFolder">
    <Directory Id="APPLICATIONFOLDER" Name="My Program" />
  </Directory>
</Directory>

Next, set two properties: ApplicationFolderName and WixAppFolder. The first sets the name of the install directory as it will be displayed in the UI. The second sets whether this install should default to being per user or per machine. It can be either WixPerMachineFolder or WixPerUserFolder.

<Property Id="ApplicationFolderName" Value="My Program" />
<Property Id="WixAppFolder" Value="WixPerMachineFolder" />

This dialog uses a bitmap that the Minimal installer doesn't: the white banner at the top. You can replace it with your own image by setting the WixUIBannerBmp variable. Its dimensions are 493x58. It would look something like this:

<WixVariable Id="WixUIBannerBmp" Value="myBanner.bmp" />

WixUI_FeatureTree

The WixUI_FeatureTree wizard shows a feature tree like the Advanced wizard, but it doesn't have a dialog that lets the user change the install path.
To use it, you only need to set the UIRef to WixUI_FeatureTree, like so:

<UIRef Id="WixUI_FeatureTree" />

This produces a window that allows you to choose features, as shown in the following screenshot. Notice that in the image, the Browse button is disabled. If any of your Feature elements have the ConfigurableDirectory attribute set to the Id of a Directory element, then this button will allow you to change where that feature is installed to. The Directory element's Id must be all uppercase.

WixUI_InstallDir

WixUI_InstallDir shows a dialog where the user can change the installation path. Change the UIRef to WixUI_InstallDir, like so:

<UIRef Id="WixUI_InstallDir" />

Here, the user can choose the installation path. You'll have to set a property called WIXUI_INSTALLDIR to the Id you gave your install directory. So, if your directory structure used INSTALLDIR for the Id of the main install folder, use that as the value of the property:

<Directory Id="TARGETDIR" Name="SourceDir">
  <Directory Id="ProgramFilesFolder">
    <Directory Id="INSTALLDIR" Name="My Program" />
  </Directory>
</Directory>

<Property Id="WIXUI_INSTALLDIR" Value="INSTALLDIR" />

WixUI_Mondo

The WixUI_Mondo wizard gives the user the option of a "Typical", "Complete", or "Custom" install. Typical sets the INSTALLLEVEL property to 3 while Complete sets it to 1000. You can set the Level attribute of your Feature elements accordingly to include them in one group or the other. Selecting a Custom install will display a feature tree dialog where the user can choose exactly what they want. To use this wizard, change your UIRef element to WixUI_Mondo:

<UIRef Id="WixUI_Mondo" />

This results in a window offering the Typical, Complete, and Custom choices.
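To make the Typical/Complete split concrete, here is a hedged sketch of leveled Feature elements; the feature and component Ids are illustrative assumptions, not from the article. A feature is installed when its Level is at or below the current INSTALLLEVEL, so the first feature below ships with both Typical and Complete, while the second only ships with Complete or when hand-picked in a Custom install:

<!-- hypothetical features; the ComponentRef Ids are placeholders -->
<Feature Id="MainProgram" Title="Main Program" Level="1">
  <ComponentRef Id="MainExecutable" />
</Feature>
<Feature Id="ExtraTools" Title="Extra Tools" Level="1000">
  <ComponentRef Id="ExtraToolsComponent" />
</Feature>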
Introduction to Logging in Tomcat 7

Packt
21 Mar 2012
9 min read
(For more resources on Apache, see here.)

JULI

Previous versions of Tomcat (up to 5.x) used Apache Commons Logging for generating logs. A major disadvantage of that logging mechanism is that it can handle only a single configuration per JVM, which makes it difficult to configure separate logging for each class loader, and hence for each independent application. In order to resolve this issue, the Tomcat developers introduced a separate API in Tomcat 6 that comes with the capability of capturing each class loader's activity in the Tomcat logs. It is based on the java.util.logging framework.

By default, Tomcat 7 uses its own Java logging API to implement logging services. This is also called JULI. This API can be found in TOMCAT_HOME/bin of the Tomcat 7 directory structure (tomcat-juli.jar). JULI also provides the feature of custom logging for each web application, and it supports private per-application logging configurations. With the enhanced feature of separate class loader logging, it also helps in detecting memory issues while unloading classes at runtime. For more information on JULI and the class loading issue, please refer to http://tomcat.apache.org/tomcat-7.0-doc/logging.html and http://tomcat.apache.org/tomcat-7.0-doc/class-loader-howto.html respectively.

Loggers, appenders, and layouts

There are some important components of logging which we use when implementing the logging mechanism for applications. Each term has its individual importance in tracking the events of the application. Let's discuss each term individually to find out its usage:

Loggers: A logger is the logical name for the log file. This logical name is written in the application code, and we can configure an independent logger for each application.

Appenders: The generation of logs is handled by appenders. There are many types of appenders, such as FileAppender, ConsoleAppender, SocketAppender, and so on, which are available in log4j. The following are some example appender definitions for log4j:

log4j.appender.CATALINA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.CATALINA.File=${catalina.base}/logs/catalina.out
log4j.appender.CATALINA.Append=true
log4j.appender.CATALINA.Encoding=UTF-8

The previous four lines define a DailyRollingFileAppender in log4j, where the filename is catalina.out. These logs will have UTF-8 encoding enabled. If log4j.appender.CATALINA.Append=false, then logs will not get appended to the log files.

# Roll over the log once per day
log4j.appender.CATALINA.DatePattern='.'dd-MM-yyyy'.log'
log4j.appender.CATALINA.layout = org.apache.log4j.PatternLayout
log4j.appender.CATALINA.layout.ConversionPattern = %d [%t] %-5p %c- %m%n

The previous three lines configure the roll-over of the log once per day.

Layout: The layout is the format of the log entries displayed in the log file. The appender uses a layout (also called a pattern) to format the log entries. The highlighted code shows the pattern for access logs:

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log." suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/>

Loggers, appenders, and layouts together help the developer to capture the log message for the application event.
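The excerpt above only defines the appender; for events to actually reach it, a logger has to reference the appender by name. As a hedged sketch based on the standard Tomcat-with-log4j setup (this line is not shown in the excerpt above), log4j.properties might bind the root logger like this:

# route everything at INFO and above to the CATALINA appender defined above
log4j.rootLogger=INFO, CATALINA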
Types of logging in Tomcat 7

We can enable logging in Tomcat 7 in different ways based on the requirement. There are a total of five types of logging that we can configure in Tomcat: application, server, console, access, and host manager logs. These methods are used in combination with each other based on environment needs. For example, if you have issues where the Tomcat services are not coming up, then console logs are very helpful in identifying the issue, as we can verify the real-time boot sequence. Let's discuss each logging method briefly.

Application log

These logs are used to capture application events while running application transactions. They are very useful in identifying application-level issues. For example, suppose your application's performance is slow on a particular transaction; then the details of that transaction can only be traced in the application log. The biggest advantage of application logs is that we can configure separate log levels and log files for each application, making it very easy for administrators to troubleshoot the application. Log4j is used in 90 percent of the cases for application log generation.

Server log

Server logs are identical to console logs. The only advantage of server logs is that they can be retrieved anytime, whereas console logs are not available after we log out from the console.

Console log

This log gives you the complete information on the Tomcat 7 startup and loader sequence. The log file is named catalina.out and is found in TOMCAT_HOME/logs. This log file is very useful in checking application deployment and server startup testing for any environment. This log is configured in the Tomcat file catalina.sh, which can be found in TOMCAT_HOME/bin. By default, the console logs are configured in INFO mode. There are different levels of logging in Tomcat, such as WARNING, INFORMATION, CONFIG, and FINE. The console log shows the Tomcat log file location after the start of the Tomcat services; in the sample run, Tomcat services started in 1903 ms.

Access log

Access logs are customized logs, which give information about the following:

- Who has accessed the application
- What components of the application are accessed
- Source IP, and so on

These logs play a vital role in the traffic analysis of many applications, helping to analyze bandwidth requirements, and also help in troubleshooting the application under heavy load. These logs are configured in server.xml in TOMCAT_HOME/conf. You can customize them according to the environment and your auditing requirements. Let's discuss the pattern format of the access logs and understand how we can customize the logging format:

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log." suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/>

Class Name: This parameter defines the class name used for the generation of logs. By default, Apache Tomcat 7 uses the org.apache.catalina.valves.AccessLogValve class for access logs.

Directory: This parameter defines the directory location for the log file. All the log files are generated in the log directory (TOMCAT_HOME/logs), but we can customize the log location based on our environment setup and then update the directory path in the definition of the access logs.

Prefix: This parameter defines the prefix of the access log filename; by default, access log files are generated with the name localhost_access_log.yy-mm-dd.txt.

Suffix: This parameter defines the file extension of the log file. Currently it is in .txt format.

Pattern: This parameter defines the format of the log entries. The pattern is a combination of values defined by the administrator, for example, %h = remote host address. In the default format, access logs show the remote host address, the date/time of the request, the request method and URI mapping, and the HTTP status code. In case you have installed a web traffic analysis tool for the application, you may have to change the access logs to a different format.
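As a hedged illustration of customizing the pattern (this snippet is our own, not from the article), a traffic-analysis setup might also record %D, the time taken to process the request in milliseconds, which AccessLogValve supports alongside the codes shown above:

<!-- custom pattern: host, timestamp, request line, status, bytes, time taken -->
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log." suffix=".txt"
       pattern="%h %t &quot;%r&quot; %s %b %D" resolveHosts="false"/>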
Host manager

These logs record the activity performed using the Tomcat Manager, such as the various tasks performed, the status of applications, the deployment of applications, and the lifecycle of Tomcat. These configurations are done in logging.properties, which can be found in TOMCAT_HOME/conf. The definitions for the host, manager, and host-manager logs specify the log location, the log level, and the prefix of the filename. In logging.properties, we are defining file handlers and appenders using JULI. The log file for the manager looks similar to the following:

28 Jun, 2011 3:36:23 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:37:13 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:37:42 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: undeploy: Undeploying web application at '/sample'
28 Jun, 2011 3:37:43 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:42:59 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:43:01 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:53:44 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'

Types of log levels in Tomcat 7

There are seven levels defined for Tomcat logging services (JULI). They can be set based on the application's requirements. Every log level in JULI has its own functionality, as the following table shows:

Log level          Description
SEVERE (highest)   Captures exceptions and errors
WARNING            Warning messages
INFO               Informational messages related to server activity
CONFIG             Configuration messages
FINE               Detailed activity of server transactions (similar to debug)
FINER              More detailed logs than FINE
FINEST (least)     Entire flow of events (similar to trace)

For example, let's take an appender from logging.properties and find out the log level used; the first log appender for localhost is using FINE as the log level, as shown in the following code snippet:

localhost.org.apache.juli.FileHandler.level = FINE
localhost.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
localhost.org.apache.juli.FileHandler.prefix = localhost.

The following code shows the default file handler configuration for logging in Tomcat 7 using JULI.
The properties are as follows, with the log levels called out:

############################################################
# Facility specific properties.
# Provides extra control for each logger.
############################################################
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].handlers = 2localhost.org.apache.juli.FileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].handlers = 3manager.org.apache.juli.FileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].handlers = 4host-manager.org.apache.juli.FileHandler
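Since JULI supports the private per-application configurations mentioned earlier, a web application can also ship its own WEB-INF/classes/logging.properties. The following is a hedged sketch of such a file; the myapp. prefix is an illustrative assumption:

# per-application JULI configuration (WEB-INF/classes/logging.properties)
handlers = org.apache.juli.FileHandler, java.util.logging.ConsoleHandler

# write this application's records to logs/myapp.yyyy-mm-dd.log
org.apache.juli.FileHandler.level = FINE
org.apache.juli.FileHandler.directory = ${catalina.base}/logs
org.apache.juli.FileHandler.prefix = myapp.

# anything INFO and above also goes to the console (catalina.out)
java.util.logging.ConsoleHandler.level = INFO
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter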