
How-To Tutorials - Servers

95 Articles

Squid Proxy Server: Fine Tuning to Achieve Better Performance

Packt
25 Apr 2011
12 min read
Whether you run a single site or are in charge of a whole network, Squid is an invaluable tool that can improve performance considerably. Caching and performance optimization usually require a lot of work on the developer's part, but Squid does all of that for you. In this article we will learn to fine-tune our cache to achieve a better HIT ratio, to save bandwidth and reduce the average page load time.

In this article by Kulbir Saini, author of Squid Proxy Server 3.1: Beginner's Guide, we will take a look at the following:

Cache peers or neighbors
Caching the web documents in the main memory and hard disk
Tuning Squid to enhance bandwidth savings and reduce latency

Cache peers or neighbors

Cache peers or neighbors are other proxy servers that our Squid proxy server can:

Share its cache with, to reduce bandwidth usage and access time
Use as a parent or sibling proxy server to satisfy its clients' requests

We normally deploy more than one proxy server in the same network to share the load of a single server for better performance. The proxy servers can use each other's cache to retrieve cached web documents locally, improving performance. Let's have a brief look at the directives provided by Squid for communication among different cache peers.

Declaring cache peers

The directive cache_peer is used to tell Squid about proxy servers in our neighborhood. Let's have a quick look at the syntax for this directive:

cache_peer HOSTNAME_OR_IP_ADDRESS TYPE PROXY_PORT ICP_PORT [OPTIONS]

In this code, HOSTNAME_OR_IP_ADDRESS is the hostname or IP address of the target proxy server or cache peer. TYPE specifies the type of the proxy server, which in turn determines how that proxy server will be used by our proxy server. The other proxy servers can be used as a parent, sibling, or a member of a multicast group.

Time for action – adding a cache peer

Let's add a proxy server (parent.example.com) that will act as a parent proxy to our proxy server:

cache_peer parent.example.com parent 3128 3130 default proxy-only

3130 is the standard ICP port. If the other proxy server is not using the standard ICP port, we should change the code accordingly. This line directs Squid to use parent.example.com as a proxy server to satisfy client requests in case it's not able to do so itself. The option default specifies that this cache peer should be used as a last resort in the scenario where other peers can't be contacted. The option proxy-only specifies that the content fetched using this peer should not be cached locally. This is helpful when we don't want to replicate cached web documents, especially when the two peers are connected with a high-bandwidth backbone.

What just happened?

We added parent.example.com as a cache peer, or parent proxy, to our Squid proxy server. We also used the option proxy-only, which means the requests fetched using this cache peer will not be cached on our proxy server. There are several other options with which you can add cache peers for various purposes, such as a hierarchy.

Quickly restricting access to domains using peers

If we have added a few proxy servers as cache peers to our Squid server, we may want a little control over the requests being forwarded to the peers. The directive cache_peer_domain is a quick way to achieve that control. The syntax of this directive is quite simple:

cache_peer_domain CACHE_PEER_HOSTNAME [!]DOMAIN1 [[!]DOMAIN2 ...]

In the code, CACHE_PEER_HOSTNAME is the hostname or IP address of the cache peer, as used when declaring it with the cache_peer directive. We can specify any number of domains which may be fetched through this cache peer. Adding a bang (!) as a prefix to a domain name will prevent the use of this cache peer for that particular domain.

Let's say we want to use the videoproxy.example.com cache peer for browsing video portals like YouTube, Netflix, Metacafe, and so on:

cache_peer_domain videoproxy.example.com .youtube.com .netflix.com
cache_peer_domain videoproxy.example.com .metacafe.com

These two lines configure Squid to use the videoproxy.example.com cache peer for requests to the domains youtube.com, netflix.com, and metacafe.com only. Requests to other domains will not be forwarded using this peer.

Advanced control on access using peers

We just learned about cache_peer_domain, which provides a way to control access using cache peers. However, it's not really flexible in granting or revoking access. That's where cache_peer_access comes into the picture; it provides a very flexible way to control access using cache peers by means of ACLs. The syntax and implications are similar to other access directives such as http_access:

cache_peer_access CACHE_PEER_HOSTNAME allow|deny [!]ACL_NAME

Let's write the following configuration lines, which will allow only the clients on the network 192.0.2.0/24 to use the cache peer acadproxy.example.com for accessing YouTube, Netflix, and Metacafe:

acl my_network src 192.0.2.0/24
acl video_sites dstdomain .youtube.com .netflix.com .metacafe.com
cache_peer_access acadproxy.example.com allow my_network video_sites
cache_peer_access acadproxy.example.com deny all

In the same way, we can use other ACL types to achieve finer control over access to various websites using cache peers.
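Once peers are configured and Squid has been reloaded, one quick way to confirm that Squid can actually see them is to query the cache manager's peer statistics. The following is a minimal sketch; it assumes the squidclient utility that ships with Squid is installed and that Squid is listening on its default port:

$ squidclient mgr:server_list

The output lists each configured peer along with its type and ICP statistics, which makes it easy to spot a peer that is unreachable or mistyped.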
Caching web documents

All this time, we have been talking about the caching of web documents and how it helps in saving bandwidth and improving the end user experience; now it's time to learn how and where Squid actually keeps these cached documents so that they can be served on demand. Squid uses main memory (RAM) and hard disks for storing or caching web documents. Caching is a complex process, but Squid handles it beautifully and exposes the controls through squid.conf, so that we can decide how much should be cached and what should be given the highest priority while caching. Let's have a brief look at the caching-related directives provided by Squid.

Using main memory (RAM) for caching

Web documents cached in main memory can be served very quickly, as the read/write speeds of RAM are very high compared to hard disks with mechanical parts. However, as the amount of space available in RAM for caching is very low compared to the cache space available on hard disks, only very popular objects, or documents with a very high probability of being requested again, are stored there. As the cache space in memory is precious, documents are stored on a priority basis. Let's have a look at the different types of objects that can be cached.

In-transit objects or current requests

These are the objects related to the current requests, and they have the highest priority for the cache space in RAM. These objects must be kept in RAM, and if there is a situation where the incoming request rate is quite high and we are about to overflow the cache space in RAM, Squid will try to keep the served part (the part which has already been sent to the client) on disk to create free space in RAM.

Hot or popular objects

These objects or web documents are popular and are requested quite frequently compared to others. They are stored in the cache space left after storing the in-transit objects, as they have a lower priority. These objects are generally pushed to disk when there is a need to free up RAM cache space for storing in-transit objects.

Negatively cached objects

Negatively cached objects are error messages which Squid has encountered while fetching a page or web document on behalf of a client. For example, if a request for a web page has resulted in an HTTP error 404 (page not found) and Squid receives a subsequent request for the same web page, then Squid will check whether the response is still fresh and will return a reply from the cache itself. If there is a request for the same page after the negatively cached object corresponding to that page has expired, Squid will check again whether the page is available. Negatively cached objects have the same priority as hot or popular objects, and they can be pushed to disk at any time in favor of in-transit objects.
Specifying cache space in RAM

So far we have learned how the available cache space is utilized for storing or caching different types of objects with different priorities. Now it's time to learn about specifying the amount of RAM we want to dedicate to caching. While deciding the RAM space for caching, we should be neither greedy nor paranoid. If we dedicate a large percentage of RAM to caching, overall system performance will suffer, as the system will start swapping processes when there is no free RAM left for other processes. If we use a very low percentage of RAM for caching, we won't be able to take full advantage of Squid's caching mechanism. The default size of the memory cache is 256 MB.

Time for action – specifying space for memory caching

We can use the extra RAM available on a running system, after sparing a chunk of memory that running processes may need under heavy load. To find out the amount of free RAM available on our system, we can use either the top or free command. To find the free RAM in megabytes, we can use the free command as follows:

$ free -m

For more details, please check the top(1) and free(1) man pages.

Now, let's say we have 4 GB of total RAM on the server and all the processes are running comfortably in 1 GB of RAM. After securing another 512 MB for emergency situations where running processes may take extra memory, we can safely allocate 2.5 GB of RAM for caching.

To specify the cache size in main memory, we use the directive cache_mem. It has a very simple format. As we have learned before, we can specify the memory size in bytes, KB, MB, or GB. Let's specify the cache memory size for the previous example:

cache_mem 2500 MB

The value specified with cache_mem above is in megabytes.

What just happened?

We learned how to calculate the approximate space in main memory that can be used to cache web documents and thereby enhance the performance of the Squid server by a significant margin.
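To put the calculation on a repeatable footing, the total and free figures can be pulled straight out of free. A small sketch, assuming the procps version of free found on most Linux distributions:

$ free -m | awk '/^Mem:/ {print "total:", $2, "MB  free:", $4, "MB"}'

Subtract your working-set and emergency reserves from the free figure, as in the example above, to arrive at a safe cache_mem value.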
Have a go hero – calculating cache_mem for your machine

Note down the total RAM on your machine and calculate the approximate space in megabytes that you can allocate for memory caching.

Maximum object size in memory

As we have limited space in memory available for caching objects, we need to use that space in an optimized way. We should plan to keep this value somewhat low, as setting it too large would mean fewer cached objects in memory, and the HIT rate (the fraction of requests found in the cache) would suffer significantly. The default maximum size used by Squid is 512 KB, but we can change it depending on our value of cache_mem. So, if we want to set it to 1 MB because we have a lot of RAM available for caching (as in the previous example), we can use the maximum_object_size_in_memory directive as follows:

maximum_object_size_in_memory 1 MB

This sets the maximum allowed object size in the memory cache to 1 MB.

Memory cache mode

With newer versions of Squid, we can control which objects we want to keep in the memory cache to optimize performance. Squid offers the directive memory_cache_mode to set the mode it should use for the space available in the memory cache. There are three different modes available:

always: keeps all the most recently fetched objects that can fit in the available space. This is the default mode used by Squid.
disk: only objects which are already cached on a hard disk and have received a HIT (meaning they were requested again after being cached) will be stored in the memory cache.
network: only objects which have been fetched from the network (including neighbors) are kept in the memory cache.

Setting the mode is easy and can be done using the memory_cache_mode directive as shown:

memory_cache_mode always

This configuration line sets the memory cache mode to always, meaning that the most recently fetched objects will be kept in memory.
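Pulling the directives from this article together, a memory-cache section of squid.conf for the 4 GB server discussed above might look like the following sketch (the values are the ones worked out in the examples, not universal recommendations):

cache_mem 2500 MB
maximum_object_size_in_memory 1 MB
memory_cache_mode always

After editing squid.conf, run squid -k reconfigure (or restart the service) so the new values take effect.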


How to Configure Squid Proxy Server

Packt
25 Apr 2011
8 min read
In this article by Kulbir Saini, author of Squid Proxy Server 3.1: Beginner's Guide, we are going to learn to configure Squid according to the requirements of a given network. We will learn about the general syntax used for a Squid configuration file. Specifically, we will cover the following:

Quick exposure to Squid
Syntax of the configuration file
HTTP port, the most important configuration directive
Access Control Lists (ACLs)
Controlling access to various components of Squid

Quick start

Let's have a look at the minimal configuration that you will need to get started. Get ready with the configuration file located at /opt/squid/etc/squid.conf, as we are going to make the changes and additions necessary to quickly set up a minimal proxy server:

cache_dir ufs /opt/squid/var/cache/ 500 16 256
acl my_machine src 192.0.2.21 # Replace with your IP address
http_access allow my_machine

We should add the previous lines at the top of our current configuration file (ensuring that we change the IP address accordingly). Now, we need to create the cache directories. We can do that by using the following command:

$ /opt/squid/sbin/squid -z

We are now ready to run our proxy server, and this can be done by running the following command:

$ /opt/squid/sbin/squid

Squid will start listening on port 3128 (the default) on all network interfaces on our machine. Now we can configure our browser to use Squid as an HTTP proxy server, with the host set to the IP address of our machine and the port set to 3128.

Once the browser is configured, try browsing to http://www.example.com/. That's it! We have configured Squid as an HTTP proxy server! Now try to browse to http://www.example.com:897/ and observe the message you receive. The message shown is an access denied message sent to you by Squid.

Now, let's move on to understanding the configuration file in detail.

Syntax of the configuration file

Squid's configuration file can normally be found at /etc/squid/squid.conf, /usr/local/squid/etc/squid.conf, or ${prefix}/etc/squid.conf, where ${prefix} is the value passed to the --prefix option of the configure command before compiling Squid. In newer versions of Squid, a documented version of squid.conf, known as squid.conf.documented, can be found alongside squid.conf. In this article, we'll cover some of the important directives available in the configuration file. For a detailed description of all the directives used in the configuration file, please check http://www.squid-cache.org/Doc/config/.

The syntax of Squid's configuration file is similar to that of many other programs for Linux/Unix. Generally, there are a few lines of comments containing useful documentation before every directive used in the configuration file. This makes it easier to understand and configure directives, even for people who are not familiar with configuring applications using configuration files. Normally, we just need to read the comments and use the appropriate options available for a particular directive.

Lines beginning with the character # are treated as comments and are completely ignored by Squid while parsing the configuration file. Additionally, any blank lines are also ignored.

# Test comment. This and the above blank line will be ignored by Squid.
Let's see a snippet from the documented configuration file (squid.conf.documented):

# TAG: cache_effective_user
# If you start Squid as root, it will change its effective/real
# UID/GID to the user specified below. The default is to change
# to UID of nobody.
# see also; cache_effective_group
#Default:
# cache_effective_user nobody

In the previous snippet, the first line mentions the name of the directive, in this case cache_effective_user. The lines following the tag line provide brief information about the usage of the directive. The last line shows the default value for the directive, if none is specified.

Types of directives

Now, let's have a brief look at the different types of directives and the values that can be specified.

Single-valued directives

These are directives which take only one value. They should not be used multiple times in the configuration file, because the last occurrence of the directive will override all previous declarations. For example, logfile_rotate should be specified only once:

logfile_rotate 10
# Few lines containing other configuration directives
logfile_rotate 5

In this case, five logfile rotations will be made when we trigger Squid to rotate logfiles.

Boolean-valued or toggle directives

These are also single-valued directives, but they are generally used to toggle features on or off:

query_icmp on
log_icp_queries off
url_rewrite_bypass off

We use these directives when we need to change the default behavior.

Multi-valued directives

Directives of this type generally take one or more values. We can either specify all the values on a single line after the directive, or we can write them on multiple lines with the directive repeated every time. All the values for a directive are aggregated from the different lines:

hostname_aliases proxy.example.com squid.example.com

Optionally, we can pass them on separate lines as follows:

hostname_aliases proxy.example.com
hostname_aliases squid.example.com

Both of the previous code snippets instruct Squid to use proxy.example.com and squid.example.com as aliases for the hostname of our proxy server.

Directives with time as a value

There are a few directives which take values with time as the unit. Squid understands the words seconds, minutes, hours, and so on, and these can be suffixed to numerical values to specify actual values. For example:

request_timeout 3 hours
persistent_request_timeout 2 minutes

Directives with file or memory size as values

The values passed to these directives are generally suffixed with file or memory size units like bytes, KB, MB, or GB. For example:

reply_body_max_size 10 MB
cache_mem 512 MB
maximum_object_size_in_memory 8192 KB

As we are now familiar with the configuration file syntax, let's open the squid.conf file and learn about the frequently used directives.

Have a go hero – categorize the directives

Open the documented Squid configuration file and find at least three directives of each type that we discussed before. Don't use the directives already used in the examples.
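The value types above cover most of what appears in a real configuration file. As a quick illustration (a sketch only, not a recommended configuration), here is a fragment that uses one directive of each kind discussed:

logfile_rotate 10                                       # single-valued
log_icp_queries off                                     # toggle
hostname_aliases proxy.example.com squid.example.com    # multi-valued
persistent_request_timeout 2 minutes                    # time value
cache_mem 512 MB                                        # size value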
HTTP port

This directive is used to specify the port where Squid will listen for client connections. The default behavior is to listen on port 3128 on all the available interfaces on a machine.

Time for action – setting the HTTP port

Now we'll see the various ways to set the HTTP port in the squid.conf file.

In its simplest form, we just specify the port on which we want Squid to listen:

http_port 8080

We can also specify the IP address and port combination on which we want Squid to listen. We normally use this approach when we have multiple interfaces on our machine and we want Squid to listen only on the interface connected to the local area network (LAN):

http_port 192.0.2.25:3128

This will instruct Squid to listen on port 3128 on the interface with the IP address 192.0.2.25.

Another form in which we can specify http_port is a hostname and port combination:

http_port myproxy.example.com:8080

The hostname will be translated to an IP address by Squid, and then Squid will listen on port 8080 on that particular IP address.

Another aspect of this directive is that it can take multiple values on separate lines. Let's see what the following lines will do:

http_port 192.0.2.25:8080
http_port lan1.example.com:3128
http_port lan2.example.com:8081

These lines will trigger Squid to listen on three different IP address and port combinations. This is generally helpful when we have clients in different LANs which are configured to use different ports for the proxy server.

In newer versions of Squid, we may also specify the mode of operation, such as intercept, tproxy, accel, and so on. Intercept mode supports the interception of requests without needing to configure the client machines:

http_port 3128 intercept

tproxy mode is used to enable Linux Transparent Proxy support for spoofing outgoing connections using the client's IP address:

http_port 8080 tproxy

We should note that enabling intercept or tproxy mode disables any configured authentication mechanism. Also, IPv6 is supported for tproxy but requires very recent kernel versions. IPv6 is not supported in intercept mode.

Accelerator mode is enabled using the mode accel. It's a good idea to listen on port 80 if we are configuring Squid in accelerator mode. This mode can't be used as-is; we must specify at least one website we want to accelerate:

http_port 80 accel defaultsite=website.example.com

We should set the HTTP port carefully, as standard ports like 3128 or 8080 can pose a security risk if we don't secure the port properly. If we don't want to spend time on securing the port, we can use an arbitrary port number above 10000.

What just happened?

In this section, we learned about the usage of one of the most important directives, namely http_port. We have learned about the various ways in which we can specify the HTTP port, depending on the requirement. We can force Squid to listen on multiple interfaces and on different ports on different interfaces.
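After changing http_port and restarting Squid, it is worth confirming that the daemon is listening where you expect. A small sketch using the ss utility available on most modern Linux systems (older systems may have netstat instead):

$ sudo ss -lntp | grep squid

Each line of output shows one listening address and port, so a typo in an http_port line usually shows up here immediately.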


Deploying your Applications on WebSphere Application Server 7.0 (Part 1)

Packt
30 Sep 2009
10 min read
Inside the Application Server

Before we look at deploying an application, we will quickly run over the internals of WebSphere Application Server (WAS). The anatomy of WebSphere Application Server is quite detailed, so for now we will briefly explain its important parts. The figure below shows the basic architecture model for a WebSphere Application Server JVM. An important thing to remember is that the WebSphere product code base is the same for all operating systems (platforms). The Java applications that are deployed are written once and can be deployed to all versions of a given WebSphere release without any code changes.

JVM

All WebSphere Application Servers are essentially Java Virtual Machines (JVMs). IBM has implemented the J2EE application server model in a way which maximizes the J2EE specification and also provides many enhancements, creating specific features for WAS. J2EE applications are deployed to an Application Server.

Web container

A common type of business application is a web application. The WAS web container is essentially a Java-based web server contained within an application server's JVM, which serves the web component of an application to the client browser.

Virtual hosts

A virtual host is a configuration element which is required for the web container to receive HTTP requests. As in most web server technologies, a single machine may be required to host multiple applications and appear to the outside world as multiple machines. Resources that are associated with a particular virtual host are designed not to share data with resources belonging to another virtual host, even if the virtual hosts share the same physical machine. Each virtual host is given a logical name and assigned one or more DNS aliases by which it is known. A DNS alias is the TCP/IP host name and port number used to request a web resource, for example: <hostname>:9080/<servlet>.

By default, two virtual host aliases are created during installation: one for the administration console, called admin_host, and another called default_host, which is assigned as the default virtual host alias for all application deployments unless overridden during the deployment phase. All web applications must be mapped to a virtual host; otherwise, web browser clients cannot access the application being served by the web container.

Environment settings

WebSphere uses Java environment variables to control settings and properties relating to the server environment. WebSphere variables are used to configure product path names, such as the location of a database driver (for example, ORACLE_JDBC_DRIVER_PATH), and environmental values required by internal WebSphere services and/or applications.

Resources

Configuration data is stored in XML files in the underlying configuration repository of the WebSphere Application Server. Resource definitions are a fundamental part of J2EE administration. Application logic can vary depending on the business requirement, and there are several resource types that can be used by an application. Below is a list of some of the most commonly used resource types:

JDBC (Java Database Connectivity): used to define providers and data sources.
URL Providers: used to define endpoints for external services, for example web services.
JMS Providers: used to define messaging configurations for the Java Message Service, MQ connection factories, queue destinations, and so on.
Mail Providers: enable applications to send and receive mail, typically using the SMTP protocol.

JNDI

The Java Naming and Directory Interface (JNDI) is employed to make applications more portable. JNDI is essentially an API for a directory service which allows Java applications to look up data and objects via a name. JNDI is a lookup service where each resource can be given a unique name. Naming operations, such as lookups and binds, are performed on contexts. All naming operations begin with obtaining an initial context. You can view the initial context as a starting point in the namespace. Applications use JNDI lookups to find a resource using a known naming convention. Administrators can override the resource the application actually connects to without requiring a reconfiguration or code change in the application. This level of abstraction using JNDI is fundamental to, and required for, the proper use of WebSphere by applications.
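To make the lookup idea concrete, the following is a minimal sketch of how application code typically obtains a data source through JNDI. The JNDI name jdbc/mydatasource is illustrative only; the actual name is whatever the administrator bound the resource under.

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class DataSourceLookup {
    public static Connection open() throws NamingException, SQLException {
        // Obtain the initial context: the starting point for all naming operations
        InitialContext ctx = new InitialContext();
        // Look up the resource by its JNDI name; the administrator controls what it maps to
        DataSource ds = (DataSource) ctx.lookup("jdbc/mydatasource");
        // The application never needs to know the real database host or driver
        return ds.getConnection();
    }
}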
Application file types

There are three file types we work with in Java applications. Two can be installed via the WebSphere deployment process: one is known as an EAR file and the other is a WAR file. The third is a JAR file (often reusable common code), which is contained in either the WAR or EAR file. These file types are explained below:

JAR file: A JAR file (Java ARchive) is used for organising many files into one; the internal physical layout is much like a ZIP file. A JAR is generally used to distribute Java classes and associated metadata. In J2EE applications, the JAR file often contains utility code, shared libraries, and EJBs. An EJB is a server-side model that encapsulates the business logic of an application and is one of several Java APIs in the Java Platform, Enterprise Edition, with its own specification. You can visit http://java.sun.com/products/ejb/ for information on EJBs.

EAR file: An Enterprise ARchive file represents a J2EE application that can be deployed in a WebSphere application server. EAR files are standard Java archive files (JAR) and have the file extension .ear. An EAR file can consist of one or more web modules packaged in WAR files, one or more EJB modules packaged in JAR files, one or more application client modules, additional JAR files required by the application, or any combination of the above. The modules that make up the EAR file are themselves packaged in archive files specific to their types; for example, a web module contains web archive files and an EJB module contains Java archive files. EAR files also contain a deployment descriptor (an XML file called application.xml) that describes the contents of the application and contains instructions for the entire application, such as security settings to be used in the run-time environment.

WAR file: A WAR file (Web ARchive) is essentially a JAR file used to encapsulate a collection of JavaServer Pages (JSP), servlets, Java classes, HTML, and other related files, which may include XML and other file types depending on the web technology used. For information on JSP and servlets, you can visit http://java.sun.com/products/jsp/. Servlets can support dynamic web page content; they provide dynamic server-side processing and can connect to databases. JavaServer Pages (JSP) files can be used to separate HTML code from the business logic in web pages. Essentially they too can generate dynamic pages; however, they employ Java beans (classes) which contain specific, detailed server-side logic.
A WAR file also has its own deployment descriptor, called web.xml, which is used to configure the WAR file and can contain instructions for resource mapping and security. When an EJB module or web module is installed as a standalone application, it is automatically wrapped in an Enterprise Archive (EAR) file by the WebSphere deployment process and is managed on disk by WebSphere as an EAR file structure. So, if a WAR file is deployed, WebSphere will convert it into an EAR file.

Deploying an application

As WebSphere administrators, we are asked to deploy applications. These applications may be written in-house or delivered by a third-party vendor. Either way, they will most often be provided as an EAR file for deployment into WebSphere. For the purpose of understanding a manual deployment, we are now going to install a default application, which can be located in the <was_root>/installableApps folder. The following steps show how we deploy the EAR file.

Open the administration console, navigate to the Applications section, and click on New Application as shown below. You will now see the option to create one of the following three types of applications:

Enterprise Application: an EAR file on a server, configured to hold installable web applications (WAR), Java archives, library files, and other resource files.
Business-Level Application: an administration model similar to a server or cluster; however, it lends itself to the configuration of applications as a single grouping of modules.
Asset: an asset represents one or more application binary files that are stored in an asset repository, such as Java archives, library files, and other resource files. Assets can be shared between applications.

Click on New Enterprise Application. As seen in the following screenshot, you will be presented with the option to either browse locally on your machine for the file, or remotely on the application server's file system. Since the EAR file we wish to install is on the server, we will choose the Remote file system option. It can sometimes be quicker to deploy large applications by first using Secure File Transfer Protocol (SFTP) to move the file to the application server's file system and then deploying from the remote file system, as opposed to a local browse, which performs an HTTP file transfer that takes more resources and can be slower. The following screenshot depicts the path to the new application.

Click Browse.... You will see the name of the application server node. If there is more than one profile, select the appropriate instance. You will then be able to navigate through a web-based version of the Linux file system, as seen in the following screenshot. Locate the DefaultApplication.ear file. It will be in a folder called installableApps located in the root WebSphere install folder, for example <was_root>/installableApps, as shown in the previous screenshot.

Click Next to begin installing the EAR file. On the Preparing for the application installation page, choose the Fast Path option. There are two options to choose from:

Fast Path: the deployment wizard will skip advanced settings and only prompt for the absolute minimum settings required for the deployment.
Detailed: the wizard will allow, at each stage of the installation, for the user to override any of the J2EE properties and configurations available to an EAR file.
The Choose to generate default bindings and mappings setting allows the user to accept the default settings for resource mappings or override them with specific values. Resource mappings will exist depending on the complexity of the EAR. Bindings are JNDI-to-resource mappings. Each EAR file has pre-configured XML descriptors which specify the JNDI name that the application resource uses to map to a matching resource provided by the application server. An example would be a JDBC data source name referred to as jdbc/mydatasource, whereas the actual data source created in the application server might be called jdbc/datasource1. By choosing the Detailed option, you are prompted by the wizard to decide how you want to map the resource bindings. By choosing the Fast Path option, you are allowing the application to use its pre-configured default JNDI names. We will select Fast Path, as demonstrated in the following screenshot.

Click on Next. In the next screen, we are given the ability to fill out some specific deployment options. Below is a list of the options presented on this page.
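For administrators who prefer scripting over the console, the same installation can be performed with the wsadmin tool. The following is a rough sketch using wsadmin in Jython mode; the node and server names are placeholders that would need to match your own cell topology, and /opt/IBM/WebSphere/AppServer is an assumed <was_root>:

$ <was_root>/bin/wsadmin.sh -lang jython
wsadmin>AdminApp.install('/opt/IBM/WebSphere/AppServer/installableApps/DefaultApplication.ear', '[-node myNode01 -server server1]')
wsadmin>AdminConfig.save()

Scripted installs become valuable once deployments need to be repeated across environments.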


How to use XmlHttpRequests to Send POST to Server

Antonio Cucciniello
03 Apr 2017
5 min read
So, you need to send some bits of information from your browser to the server in order to complete some processing. Maybe you need the information to search for something in a database, or just to update something on your server. Today I am going to show you how to send some data to your server from the client through a POST request using XmlHttpRequest. First, we need to set up our environment!

Set up

The first thing to make sure of is that you have Node and npm installed. Create a new directory for your project; here we will call it xhr-post:

$ mkdir xhr-post
$ cd xhr-post

Then we would like to install express.js and body-parser:

$ npm install express
$ npm install body-parser

Express makes it easy for us to handle HTTP requests, and body-parser allows us to parse incoming request bodies. Let's create two files: one for our server, called server.js, and one for our front end code, called index.html. Then initialize your repo with a package.json file by doing:

$ npm init

Client

Now it's time to start with some front end work. Open and edit your index.html file with:

<!doctype html>
<html>
  <body>
    <h1> XHR POST to Server </h1>
    <input type='text' id='num' />
    <script>
      function send () {
        var number = { value: document.getElementById('num').value }
        var xhr = new window.XMLHttpRequest()
        xhr.open('POST', '/num', true)
        xhr.setRequestHeader('Content-Type', 'application/json;charset=UTF-8')
        xhr.send(JSON.stringify(number))
      }
    </script>
    <button type='button' value='Send' name='Send' onclick='send()'> Send </button>
  </body>
</html>

This file simply has an input field to allow users to enter some information, and a button to then send the information entered to the server. What we should focus on here is the button's onclick method send(). This is the function that is called once the button is clicked. We create a JSON object to hold the value from the text field. Then we create a new instance of an XMLHttpRequest with xhr. We call xhr.open() to initialize our request by giving it a request method (POST), the URL we would like to open the request with ('/num'), and a flag that determines whether it should be asynchronous or not (set to true for asynchronous). We then call xhr.setRequestHeader() to declare that the request body is JSON encoded as UTF-8. As a last step, we send the request with xhr.send(). We pass the value of the text box and stringify it to send the data as raw text to our server, where it can be manipulated.

Server

Here our server is supposed to handle the POST request, and we are simply going to log the request received from the client:

const express = require('express')
const app = express()
const path = require('path')
var bodyParser = require('body-parser')
var port = 3000

app.listen(port, function () {
  console.log('We are listening on port ' + port)
})

app.use(bodyParser.urlencoded({extended: false}))
app.use(bodyParser.json())

app.get('*', function (req, res) {
  res.sendFile(path.join(__dirname, '/index.html'))
})

app.post('/num', function (req, res) {
  var num = req.body.value
  console.log(num)
  return res.end('done')
})

At the top, we declare our variables, obtaining an instance of express, path, and body-parser. Then we set our server to listen on port 3000. Next, we use the bodyParser object to decide what kind of information we would like to parse; we set it to JSON because we sent a JSON object from our client, if you recall the previous section. This is done with:

app.use(bodyParser.json())

Then we serve our HTML file, in order to see the front end created in the last section, with:

app.get('*', function (req, res) {
  res.sendFile(path.join(__dirname, '/index.html'))
})

The last part of server.js is where we handle the POST request from the client. We access the value sent over by checking for the corresponding property on the body object, which is part of the request object. Then, as a last step to verify we have the correct information, we log the data received to the console and send a response to the client.

Test

Let's test what we have done. In the project directory, we can run:

$ node server.js

Open your web browser and go to the URL localhost:3000. Enter a value, such as 5, in the input field and click Send; the value should appear in the server's console output.
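The endpoint can also be exercised without the browser. A quick sketch using curl, assuming the server is running on port 3000 as above:

$ curl -X POST -H 'Content-Type: application/json' -d '{"value":"5"}' http://localhost:3000/num

The server should print 5 to its console and reply with done, confirming that the POST route and JSON parsing work end to end.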
Conclusion

You are all done! You now have a web page that sends some JSON data to your server using XmlHttpRequest! Here is a summary of what we went over:

Created a front end with an input field and button
Created a function for our button to send an XmlHttpRequest
Created our server to listen on port 3000
Served our HTML file
Handled our POST request at route '/num'
Logged the value to our console

If you enjoyed this post, share it on Twitter. Check out the code for this tutorial on GitHub.

Possible resources

Check out my GitHub
View my personal blog
Information on XmlHttpRequest
GitHub pages for: express, body-parser

About the author

Antonio Cucciniello is a software engineer with a background in C, C++, and JavaScript (Node.js). He is from New Jersey, USA. His most recent project, called Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, and reading books on self-help and improvement, finance, and entrepreneurship. To contact Antonio, e-mail him at [email protected], follow him on Twitter at @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello.


Creating a Web Page for Displaying Data from SQL Server 2008

Packt
23 Oct 2009
5 min read
This article by Jayaram Krishnaswamy describes how you may connect to SQL Server 2008 and display the retrieved data in a GridView control on a web page. Establishing a native connection to SQL Server 2008 is not possible in Visual Studio 2008, as you will soon see in the tutorial. One way to get around this, as shown here, is to create an ODBC connection to the SQL Server and then use that ODBC connection to retrieve the data. Visual Studio 2008 version 9.0.21022.8 RTM, Microsoft Windows XP Professional Media Center Edition, and SQL Server 'Katmai' were used for this tutorial.

Connecting to SQL Server 2008 is not natively supported in the Microsoft Visual Studio 2008 designer

In the Visual Studio 2008 IDE, right-click the Data Connections node in the Server Explorer. This will open the Add Connection window, where the default connection displayed is MS SQL Server Compact. Click on the Change... button, which opens the Change Data Source window shown in the next figure. Highlight Microsoft SQL Server as shown and click on the OK button. This once again opens the Add Connection window, in this case showing the SQL Server 2008 instance on the machine, Hodentek, as shown in the next figure. The connection is set for Windows authentication, and should you test the connectivity you would get 'Success' as a reply. However, when you click on the handle for the database name to retrieve a list of databases on this server, you get a message as shown.

Creating an ODBC DSN

You will be using the ODBC Data Source Administrator on your desktop to create an ODBC DSN. You access the ODBC Data Source Administrator from Start | All Programs | Control Panel | Administrative Tools | Data Sources (ODBC). This opens the ODBC Data Source Administrator window as shown in the next figure. Click on the System DSN tab and click on the Add... button. This opens the Create New Data Source window, where you scroll down to SQL Server Native Client 10.0. Click on the Finish button. This will bring up the Create a New Data Source to SQL Server window. You must provide a name in the Name box. You also provide a description, and click on the drop-down handle for the question, "Which SQL Server do you want to connect to?", to reveal a number of accessible servers as shown. Highlight SQL Server 2008. Click on the Next button, which opens a window where you provide the authentication information. This server uses Windows authentication; if your server uses SQL Server authentication, you will have to be ready to provide the login ID and password. You may accept the defaults for the other configurable options. Click on the Next button, which opens a window where you choose the default database to which you want to establish a connection. Click on the Next button, which opens a window where you accept the defaults and click on the Finish button. This brings up the final screen, the ODBC SQL Server Setup, which summarizes the options made as shown. By clicking on the Test Data Source... button you can verify the connectivity. When you click on the OK button you will be taken back to the ODBC Data Source Administrator window, where the DSN you created is now added to the list of DSNs on your machine as shown.

Retrieving data from the server to a web page

You will be creating an ASP.NET website project. As this version of Visual Studio supports projects targeting different framework versions, choose the .NET Framework 2.0 as shown.
Onto the Default.aspx page, drag and drop a GridView control from the Toolbox, as shown in this design view. Click on the smart task handle to reveal the tasks you need to complete this control. Click on the drop-down handle for the Choose Data Source: task as shown in the previous figure. Now click on the <New data source...> item. This opens the Data Source Configuration Wizard window, which displays the various sources from which you may get your data. Click on the Database icon; the OK button now becomes visible. Click on the OK button. The wizard's next task is to guide you to the connection information, as in the next figure. Click on the New Connection... button. This takes you back to the Add Connection window. Click on the Change... button as shown earlier in the tutorial. In the Change Data Source window, you now highlight Microsoft ODBC Data Source, as shown in the next figure. Click on the OK button. This opens the Add Connection window, where you can now point to the ODBC source you created earlier, using the drop-down handle for "Use user or system data source name". You may also test your connection by hitting the Test Connection button. Click on the OK button. This brings the connection information to the wizard's screen, as shown in the next figure. Click on the Next button, which opens a window with the option to save your connection information to the configuration node of your web.config file. Make sure you read the information on this page. The default connection name has been changed to Conn2k8, as shown. Click on the Next button. This brings up the screen where you provide a SQL SELECT statement to retrieve the columns you want. You have three options, and here the "Specify a custom SQL statement or stored procedure" option is chosen.
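When the wizard saves the connection, it writes an entry to web.config. A sketch of what that entry typically looks like, where the DSN name MyOdbcDsn stands in for whatever name you gave the System DSN earlier:

<connectionStrings>
  <add name="Conn2k8"
       connectionString="Dsn=MyOdbcDsn"
       providerName="System.Data.Odbc" />
</connectionStrings>

Storing the connection here, rather than in the page, lets you repoint the GridView at a different server by editing one line of configuration.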


Learning Essential Linux Commands for Navigating the Shell Effectively 

Expert Network
16 Aug 2021
9 min read
Once we have learned how to deploy an Ubuntu server, how to manage users, and how to manage software packages, we should take a moment to learn some important concepts and commands that will build more of the foundational knowledge that will serve us well while understanding advanced concepts and treading the path of expertise. These foundational concepts include core Linux commands for navigating the shell.

This article is an excerpt from the book Mastering Ubuntu Server, Third Edition by Jeremy “Jay” La Croix – a hands-on book that will teach you how to deploy, maintain, and troubleshoot Ubuntu Server.

Learning essential Linux commands

Building a solid competency on the command line is essential and effectively gives any system administrator or engineer superpowers. Our new abilities won't allow us to leap tall buildings in a single bound, but will definitely enable us to execute terminal commands as if we're ninjas. While we won't master the art of using the command line in this section (that can only come with years and experience), we will definitely become more confident.

First, let's talk about moving from one place to another within the Linux filesystem. Specifically, by "Linux filesystem" I'm referring to the default structure of the various folders (also referred to as "directories") contained within your Ubuntu installation. The Linux filesystem contains many important directories, each with their own designated purpose, which we'll talk about in more detail in the book. Before we can explore that further, we'll need to learn how to navigate from one directory to another.

The first command we'll cover in this section, relative to navigating the filesystem, will clarify the directory you're currently working from. For that, we have the pwd command.

The pwd command

pwd stands for print working directory, and shows you where you currently are in the filesystem. If you run it, you may see output such as this:

Figure 4.1: Viewing the current working directory

In this example, when I ran pwd, the output informed me that my current working directory is /home/jay. This is known as your home directory and, by default, every user has one. This is where all the files for your user account will reside by default. Sure, you can create files anywhere you'd like, even outside your home directory, if you have permission to do so or you use sudo. But just because you can doesn't mean you should. As you'll learn in this article, the Linux filesystem has a designated place for just about everything. But your home directory, located at /home/<username>, is yours. You own it, you control it—it's your home on the server. In the early 2000s, Linux installations with a graphical user interface even depicted your home directory with an icon of a house.

Typically, files that you create in your home directory will have a permission string similar to this:

-rw-rw-r-- 1 jay  jay      0 Jul  5 14:10 testfile.txt

You can see that, by default, files you create in your home directory are owned by your user and your group, and are readable by all three categories (user, group, and other).

The cd command

To change our current directory and navigate to another, we can use the cd command along with a path we'd like to move to:

cd /etc

Now, I haven't gone over the file and directory layout yet, so I just randomly picked the etc directory. The forward slash at the beginning designates the beginning of the filesystem. More on that later.
Now we're in the /etc directory, and our command prompt has even changed as well:

Figure 4.2: Command prompt and pwd command after changing a directory

As you could probably guess, the cd command stands for change directory, and it's how you move your working directory from one place to another while navigating around. You can use the following command, for example, to return to the home directory:

cd /home/<user>

In fact, there are several ways to return home, a few of which are demonstrated in the following screenshot:

Figure 4.3: Other ways of navigating to the home directory

The first command, cd -, doesn't actually have anything to do with your home directory specifically. It's a neat trick to return you to whatever directory you were in most recently. For me, the cd - command took me to the previous directory I was just in, which just so happened to be /home/jay. The second command, cd /home/jay, took me directly to my home directory, since I called out the entire path. The last command, cd ~, also took me to my home directory. This is because ~ is shorthand for the full path to your home directory, so you don't ever really have to type out the entire path to /home/<user>. You can just refer to that path simply as ~.

The ls command

Another essential command is ls. The ls command lists the contents of the current working directory. We probably don't have any contents in our home directory yet. But if we navigate to /etc by running cd /etc, as we did earlier, and then execute ls, we'll see that the /etc directory has a number of files in it. Go ahead and try it yourself and see:

cd /etc
ls

We didn't actually have to change our working directory to /etc just to list its contents. We could've just executed the following command:

ls /etc

Even better, we can run:

ls -l /etc

This gives us the contents in a long list, which I think is much easier to understand. It will show each directory or file entry on its own line, along with the permission string. But you are probably already familiar with ls as well as ls -l, so I won't go into too much more detail here. The -l portion of the ls command in that example is known as an argument. I'm not referring to an argument such as the ever-ensuing debate in the Linux community over which command-line text editor is the best between vim and emacs (it's clearly vim). Instead, I'm referring to the concept of an argument in shell commands that allows you to override the defaults, or feed options to the command in some way, such as in this example, where we format the output of ls as a long list.

The rm command

The rm command is another one that we touched on earlier, when we were discussing manually removing the home directory of a user that was removed from the system. So, at this point, you're probably well aware of that command and what it does (it removes files and directories). It's a potentially dangerous command, as you could use it to accidentally remove something that you shouldn't have. We used the following command to remove the home directory of user dscully:

rm -r /home/dscully

As you can see, we're using the -r argument to alter the behavior of the rm command, which, by default, doesn't remove directories but only files. The -r argument instructs rm to remove everything recursively, even if it's a directory. The -r argument will also remove subdirectories of the path, so you'll definitely want to be careful with this command. As I've mentioned earlier in the book, if you use sudo with rm, you can hypothetically delete your entire Ubuntu installation!

Another option offered by rm is the -f argument, which is short for force; it tells rm not to prompt before removing things. This argument won't be needed as often, and use cases for it are outside the scope of this article. But keep in mind that it exists, should you need it.
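A couple of defensive habits go a long way with rm. One sketch of a safer workflow, using only standard GNU rm options:

ls -l /home/dscully    # inspect the target first
rm -rvi /home/dscully  # -v reports each file removed, -i asks before each removal

Both -v and -i are standard rm flags; combining them trades speed for a chance to catch a mistyped path before it does damage.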
The touch command

Another foundational command that's good to know is touch, which actually serves two purposes. First, assuming you have permission to do so in your current working directory, the touch command will create an empty file if it doesn't already exist. Second, the touch command will update the modification time of a file or directory if it does already exist:

Figure 4.4: Experimenting with the touch command

To illustrate this, in the related screenshot I ran several commands. First, I ran the following command to create an empty file:

touch testfile.txt

That file didn't exist before, so when I ran ls -l afterward, it showed the newly created file with a size of 0 bytes. Next, I ran the touch testfile.txt command again a minute later, and you can see in the screenshot that the modification time went from 15:12 to 15:13.

When it comes to viewing the contents of a file, we'll get to that later on in the book, Mastering Ubuntu Server, Third Edition. And there are definitely more commands that we'll need to learn to build the basis of our foundation. But for now, let's take a break from the foundational concepts to understand the Linux filesystem layout better.

Summary

There are more Linux commands than you will ever be able to memorize. Most of us just memorize our favorite commands and variations of commands. You'll develop your own menu of these commands as you learn and expand your knowledge. In this article, we covered many of the foundational commands that are, for the most part, essential: pwd, cd, ls, rm, touch, and others were explored this time around.

About the author

Jeremy “Jay” La Croix is a technologist and open-source enthusiast, specializing in Linux. Jay is currently the director of Cloud Services at Adaptavist. He has a net field experience of 20 years across different firms as a Solutions Architect and holds a master's degree in Information Systems Technology Management from Capella University.

In addition, Jay also has an active Linux-focused YouTube channel with over 186K followers and 15.9M views, available at LearnLinux.tv, where he posts instructional tutorial videos and other Linux-related content.

Squid Proxy Server: debugging problems

Packt
29 Jul 2011
4 min read
Mostly, we encounter problems that are well known and are a result of configuration glitches or operating system limitations, so they can be fixed easily by tweaking configuration files. However, sometimes we may face problems that cannot be solved directly, or that we may not even be able to identify by simply looking at the log files.

By default, Squid only logs essential information to cache.log. To inspect or debug problems, we need to increase the verbosity of the logs so that Squid can tell us more about the actions it's taking, which may help us find the source of the problem. We can extract information from Squid about its actions at our convenience by using the debug_options directive in the Squid configuration file. Let's have a look at the format of the debug_options directive:

debug_options rotate=N section,verbosity [section,verbosity]...

The parameter rotate (rotate=N) specifies the number of cache.log files that will be maintained when Squid logs are rotated. The default value of N is 1. The rotate option helps prevent disk space from being wasted due to excessive log messages when the verbosity level is high.

The parameter section is an integer identifying a particular component of Squid. It can have the special value ALL, which represents all components of Squid. The verbosity parameter is also an integer, representing the verbosity level for each section. The verbosity levels have the following meanings:

0: only critical or fatal messages will be logged.
1: warnings and important problems will be logged.
2: minor problems, recovery, and regular high-level actions will be logged.
3-5: almost everything useful is covered by verbosity level 5.
6-9: extremely verbose; individual events, signals, and so on are described in detail.

The following is the default configuration:

debug_options rotate=1 ALL,1

The preceding configuration line sets the verbosity level for all sections of Squid to 1, which means that Squid will try to log the minimum amount of information possible.

The section number can be determined by looking at the source files. In most source files, we can locate a comment, as shown in the following example from access_log.cc:

/*
 * DEBUG: section 46 Access Log
 */

The previous comment tells us that the section number for the Access Log is 46. A list of section numbers and corresponding Squid components can be found at doc/debug-sections.txt in Squid's source code.
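Putting this together, suppose access control rules are misbehaving. A sketch of a targeted debugging configuration (access control is section 28, as the table below shows) that keeps everything else quiet:

debug_options rotate=2 ALL,1 28,5

After reconfiguring Squid, watch the log while reproducing the problem, adjusting the path if your cache.log lives elsewhere:

$ tail -f /var/log/squid/cache.log

Raising the verbosity of one section at a time keeps cache.log readable and the disk usage manageable.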
The following table represents some of the important section numbers for Squid version 3.1.10:

Section number | Squid components
0  | Announcement Server, Client Database, Debug Routines, DNS Resolver Daemon, UFS Store Dump Tool
1  | Main Loop, Startup
2  | Unlink Daemon
3  | Configuration File Parsing, Configuration Settings
4  | Error Generation
6  | Disk I/O Routines
9  | File Transfer Protocol (FTP)
11 | Hypertext Transfer Protocol (HTTP)
12 | Internet Cache Protocol (ICP)
14 | IP Cache, IP Storage, and Handling
15 | Neighbor Routines
16 | Cache Manager Objects
17 | Request Forwarding
18 | Cache Manager Statistics
20 | Storage Manager, Storage Manager Heap-based replacement, Storage Manager Logging Functions, Storage Manager MD5 Cache Keys, Storage Manager Swapfile Metadata, Storage Manager Swapfile Unpacker, Storage Manager Swapin Functions, Storage Manager Swapout Functions, Store Rebuild Routines, Swap Dir base object
23 | URL Parsing, URL Scheme parsing
28 | Access Control
29 | Authenticator, Negotiate Authenticator, NTLM Authenticator
31 | Hypertext Caching Protocol
32 | Asynchronous Disk I/O
34 | Dnsserver interface
35 | FQDN Cache
44 | Peer Selection Algorithm
46 | Access Log
50 | Log file handling
51 | Filedescriptor Functions
55 | HTTP Header
56 | HTTP Message Body
57 | HTTP Status-line
58 | HTTP Reply (Response)
61 | Redirector
64 | HTTP Range Header
65 | HTTP Cache Control Header
66 | HTTP Header Tools
67 | String
68 | HTTP Content-Range Header
70 | Cache Digest
71 | Store Digest Manager
72 | Peer Digest Routines
73 | HTTP Request
74 | HTTP Message
76 | Internal Squid Object handling
78 | DNS lookups, DNS lookups; interacts with lib/rfc1035.c
79 | Disk I/O Routines, Squid-side DISKD I/O functions, Squid-side Disk I/O functions, Storage Manager COSS Interface, Storage Manager UFS Interface
84 | Helper process maintenance
89 | NAT / IP Interception
90 | HTTP Cache Control Header, Storage Manager Client-Side Interface
92 | Storage File System

Summary
In this article we took a look at some debugging problems which we may come across while configuring or running Squid.

Further resources on this subject:
Squid Proxy Server: Tips and Tricks [Article]
Squid Proxy Server 3: Getting Started [Article]
Configuring Squid to Use DNS Servers [Article]
Different Ways of Running Squid Proxy Server [Article]
Squid Proxy Server: Fine Tuning to Achieve Better Performance [Article]


Moving a Database from SQL Server 2005 to SQL Server 2008 in Three Steps

Packt
23 Oct 2009
3 min read
(For more resources on Microsoft, see here.)

Introduction
There are several options if one wishes to move a database from a SQL Server 2005 to a SQL Server 2008 server. First of all, there is a 'Copy Database Wizard' in SQL 2008 Server which is meant for transferring a database from any version of SQL Server 2000 and above to the 2008 version. This Wizard can operate in two ways. In the first option, it can attach a database (even one on the network) and uses the SQL 2008 SQL Server Agent. The copying of the database is implemented by an Integration Services package that runs as a SQL Server Agent job, scheduled to run immediately or according to some configurable schedule. This will therefore depend on correctly configuring the SQL Server Agent. In order to use the attach/detach process, the remote server will be stopped, and if the database/log files are on a shared drive, they are correctly brought in by the wizard. In the other option, the database will be copied using the SQL Server Management Program, for which the source database need not be stopped. However, this is slower than the previous method and would also require the SQL Server Agent, since a package has to be run.

An option which works without too much hassle is manually detaching and attaching the database/log files. In this step-by-step (really two steps) tutorial, this simple procedure is described. If you are just interested in taking a small database from a 2005 to a 2008 server, the author strongly recommends this procedure. Interested readers may also want to read my other popular article Moving Data from SQL Server 2000 to SQL Server 2005.

Step 1: Detaching the Database
Highlight the database you want to transfer in the Databases node in the SQL Server Management Studio as shown in the next figure. Right click this database as shown and click on Detach... Make sure the database is running (notice the green arrow for HodentekSQL Express, which is a junior version of SQL 2005). This brings up the Detach Database window as shown. Place a check mark for 'Drop' as shown and click on OK. This removes the 'Pubs' node from the Databases folder in the SQL Server Management Studio (you may need to attach it again). With this accomplished, you can physically move the files or do what you want with them.

Step 2: Copy the DATA / LOG Files
Copy the pubs.mdf and pubs.ldf files to a location on the C: drive of the machine on which SQL 2008 Server is installed.
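If you prefer to script these steps rather than clicking through SQL Server Management Studio, the same detach/attach procedure can be driven from the command line with the sqlcmd utility. The following is only a sketch: the instance names (OLDSERVER and NEWSERVER), Windows authentication (-E), and the destination path are assumptions you will need to adapt to your environment:

REM detach the database on the SQL Server 2005 instance
sqlcmd -S OLDSERVER -E -Q "EXEC sp_detach_db 'pubs';"

REM after copying pubs.mdf and pubs.ldf to the new machine,
REM attach them on the SQL Server 2008 instance
sqlcmd -S NEWSERVER -E -Q "CREATE DATABASE pubs ON (FILENAME = 'C:\Data\pubs.mdf'), (FILENAME = 'C:\Data\pubs.ldf') FOR ATTACH;"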


Different Ways of Running Squid Proxy Server

Packt
24 Feb 2011
10 min read
Squid Proxy Server 3.1: Beginner's Guide
Improve the performance of your network using the caching and access control capabilities of Squid
Get the most out of your network connection by customizing Squid's access control lists and helpers
Set up and configure Squid to get your website working quicker and more efficiently
No previous knowledge of Squid or proxy servers is required
Part of Packt's Beginner's Guide series: lots of practical, easy-to-follow examples accompanied by screenshots

Command line options
Normally, all of the Squid configuration options reside within the squid.conf file (the main Squid configuration file). To tweak the Squid functionality, the preferred method is to change the options in the squid.conf file. However, there are some options which can also be controlled using additional command line options while running Squid. These options are not very popular and are rarely used, but they are very useful for debugging problems with the Squid proxy server.

Before exploring the command line options, let's see how Squid is run from the command line. The location of the Squid binary file depends on the --prefix option passed to the configure command before compiling. So, depending upon the value of the --prefix option, the location of the Squid executable may be one of /usr/local/sbin/squid or ${prefix}/sbin/squid, where ${prefix} is the value of the option --prefix passed to the configure command. Therefore, to run Squid, we need to run one of the following commands on the terminal:

When the --prefix option was not used with the configure command, the default location of the Squid executable will be /usr/local/sbin/squid.
When the --prefix option was used and was set to a directory, then the location of the Squid executable will be ${prefix}/sbin/squid.

It's painful to type the absolute path for Squid to run. So, to avoid typing the absolute path, we can include the path to the Squid executable in our PATH shell variable, using the export command as shown in the following example:

$ export PATH=$PATH:/usr/local/sbin/

Alternatively, we can use the following command:

$ export PATH=$PATH:/opt/squid/sbin/

We can also add the preceding command to our ~/.bashrc or ~/.bash_profile file to avoid running the export command every time we enter a new shell.

After setting the PATH shell variable appropriately, we can run Squid by simply typing the following command on the shell:

$ squid

This command will run Squid after loading the configuration options from the squid.conf file. We'll be using the squid command without an absolute path for running the Squid process. Please use the appropriate path according to the installation prefix which you have chosen.

Now that we know how to run Squid from the command line, let's have a look at the various command line options.

Getting a list of available options
Before actually moving forward, we should first check the available set of options for our Squid installation.

Time for action – listing the options
Like a lot of other Linux programs, Squid also provides the option -h which can be used to retrieve a list of options:

squid -h

The previous command will result in the following output:

Usage: squid [-cdhvzCFNRVYX] [-s | -l facility] [-f config-file] [-[au] port] [-k signal]
 -a port   Specify HTTP port number (default: 3128).
 -d level  Write debugging to stderr also.
 -f file   Use given config-file instead of /opt/squid/etc/squid.conf.
 -h        Print help message.
 -k reconfigure|rotate|shutdown|interrupt|kill|debug|check|parse
           Parse configuration file, then send signal to running copy (except -k parse) and exit.
 -s | -l facility
           Enable logging to syslog.
 -u port   Specify ICP port number (default: 3130), disable with 0.
 -v        Print version.
 -z        Create swap directories.
 -C        Do not catch fatal signals.
 -F        Don't serve any requests until store is rebuilt.
 -N        No daemon mode.
 -R        Do not set REUSEADDR on port.
 -S        Double-check swap during rebuild.
 ...

We will now have a look at a few important options from the preceding list. We will also have a look at the squid(8) man page or http://linux.die.net/man/8/squid for more details.

What just happened?
We have just used the squid command to list the available options which we can use on the command line.

Getting information about our Squid installation
Various features may vary across different versions of Squid. Before proceeding any further, it's a good idea to know the version of Squid installed on our machine.

Time for action – finding out the Squid version
Just in case we want to check which version of Squid we are using or the options we used with the configure command before compiling, we can use the option -v on the command line. Let's run Squid with this option:

squid -v

If we try to run the preceding command in the terminal, it will produce an output similar to the following:

configure options: '--config-cache' '--prefix=/opt/squid/' '--enable-storeio=ufs,aufs' '--enable-removal-policies=lru,heap' '--enable-icmp' '--enable-useragent-log' '--enable-referer-log' '--enable-cache-digests' '--with-large-files' --enable-ltdl-convenience

What just happened?
We used the squid command with the -v option to find out the version of Squid installed on our machine, and the options used with the configure command before compiling Squid.

Creating cache or swap directories
The cache directories specified using the cache_dir directive in the squid.conf file must already exist before Squid can actually use them.

Time for action – creating cache directories
Squid provides the -z command line option to create the swap directories. Let's see an example:

squid -z

If this option is used and the cache directories don't exist already, the output will look similar to the following:

2010/07/20 21:48:35| Creating Swap Directories
2010/07/20 21:48:35| Making directories in /squid_cache/00
2010/07/20 21:48:35| Making directories in /squid_cache/01
2010/07/20 21:48:35| Making directories in /squid_cache/02
2010/07/20 21:48:35| Making directories in /squid_cache/03
...

We should use this option whenever we add new cache directories in the Squid configuration file.

What just happened?
When the squid command is run with the option -z, Squid reads all the cache directories from the configuration file and checks if they already exist. It will then create the directory structure for all the cache directories that don't exist.

Have a go hero – adding cache directories
Add two or three test cache directories with different values of level 1 (8, 16, and 32) and level 2 (64, 256, and 512) to the configuration file. Then try creating them using the squid command. Now study the difference in the directory structure.

Using a different configuration file
The default location for Squid's main configuration file is ${prefix}/etc/squid/squid.conf. Whenever we run Squid, the main configuration is read from the default location.
While testing or deploying a new configuration, we may want to use a different configuration file just to check whether it will work or not. We can achieve this by using the option -f, which allows us to specify a custom location for the configuration file. Let's see an example:

squid -f /etc/squid.minimal.conf
# OR
squid -f /etc/squid.alternate.conf

If Squid is run this way, Squid will try to load the configuration from /etc/squid.minimal.conf or /etc/squid.alternate.conf, and it will completely ignore the squid.conf from the default location.

Getting verbose output
When we run Squid from the terminal without any additional command line options, only warnings and errors are displayed on the terminal (or stderr). However, while testing, we would like to get a verbose output on the terminal, to see what is happening when Squid starts up.

Time for action – debugging output in the console
To get more information on the terminal, we can use the option -d. The following is an example:

squid -d 2

We must specify an integer with the option -d to indicate the verbosity level. Let's have a look at the meaning of the different levels:

Only critical and fatal errors are logged when level 0 (zero) is used.
Level 1 includes the logging of important problems.
Level 2 and higher includes the logging of informative details and other actions.
Higher levels result in more output on the terminal.

A sample output on the terminal with level 2 would look similar to the following:

2010/07/20 21:40:53| Starting Squid Cache version 3.1.10 for i686-pc-linux-gnu...
2010/07/20 21:40:53| Process ID 15861
2010/07/20 21:40:53| With 1024 file descriptors available
2010/07/20 21:40:53| Initializing IP Cache...
2010/07/20 21:40:53| DNS Socket created at [::], FD 7
2010/07/20 21:40:53| Adding nameserver 192.168.36.222 from /etc/resolv.conf
2010/07/20 21:40:53| User-Agent logging is disabled.
2010/07/20 21:40:53| Referer logging is disabled.
2010/07/20 21:40:53| Unlinkd pipe opened on FD 13
2010/07/20 21:40:53| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
2010/07/20 21:40:53| Store logging disabled
2010/07/20 21:40:53| Swap maxSize 0 + 262144 KB, estimated 20164 objects
2010/07/20 21:40:53| Target number of buckets: 1008
2010/07/20 21:40:53| Using 8192 Store buckets
2010/07/20 21:40:53| Max Mem size: 262144 KB
2010/07/20 21:40:53| Max Swap size: 0 KB
2010/07/20 21:40:53| Using Least Load store dir selection
2010/07/20 21:40:53| Current Directory is /opt/squid/sbin
2010/07/20 21:40:53| Loaded Icons.

As we can see, Squid is trying to dump a log of actions that it is performing. The messages shown are mostly startup messages and there will be fewer messages when Squid starts accepting connections. Starting Squid in debug mode is quite helpful when Squid is up and running and users complain about poor speeds or being unable to connect. We can have a look at the debugging output and the appropriate actions to take.

What just happened?
We started Squid in debugging mode and can now see Squid writing an output on the command line, which is basically a log of the actions which Squid is performing. If Squid is not working, we'll be able to see the reasons on the command line and we'll be able to take actions accordingly.

Full debugging output on the terminal
The option -d specifies the verbosity level of the output dumped by Squid on the terminal. If we require all of the debugging information on the terminal, we can use the option -X, which will force Squid to write debugging information at every single step.
If the option -X is used, we'll see information about parsing the squid.conf file and the actions taken by Squid, based on the configuration directives encountered. Let's see a sample output produced when option -X is used:

...
2010/07/21 21:50:51.515| Processing: 'acl my_machines src 172.17.8.175 10.2.44.46 127.0.0.1 172.17.11.68 192.168.1.3'
2010/07/21 21:50:51.515| ACL::Prototype::Registered: invoked for type src
2010/07/21 21:50:51.515| ACL::Prototype::Registered: yes
2010/07/21 21:50:51.515| ACL::FindByName 'my_machines'
2010/07/21 21:50:51.515| ACL::FindByName found no match
2010/07/21 21:50:51.515| aclParseAclLine: Creating ACL 'my_machines'
2010/07/21 21:50:51.515| ACL::Prototype::Factory: cloning an object for type 'src'
2010/07/21 21:50:51.515| aclParseIpData: 172.17.8.175
2010/07/21 21:50:51.515| aclParseIpData: 10.2.44.46
2010/07/21 21:50:51.515| aclParseIpData: 127.0.0.1
2010/07/21 21:50:51.515| aclParseIpData: 172.17.11.68
2010/07/21 21:50:51.515| aclParseIpData: 192.168.1.3
...

Let's see what this output means. In the first line, Squid encountered a line defining an ACL my_machines. The next few lines in the output describe Squid invoking different methods to parse, creating a new ACL, and then assigning values to it. This option can be very helpful while debugging ambiguous ACLs.

Running as a normal process
Sometimes during testing, we may not want Squid to run as a daemon. Instead, we may want it to run as a normal process which we can interrupt easily by pressing CTRL-C. To achieve this, we can use the option -N. When this option is used, Squid will not run in the background; it will run in the current shell instead.

Parsing the Squid configuration file for errors or warnings
It's a good idea to parse or check the configuration file (squid.conf) for any errors or warnings before we actually try to run Squid, or reload a Squid process which is already running in a production deployment. Squid provides an option -k with an argument parse, which, if supplied, will force Squid to parse the current Squid configuration file and report any errors and warnings. The squid -k parse option is also used to check and report directive and option changes when we upgrade our Squid version.
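Putting these options to work, a typical pre-flight check before reloading a production proxy could look like the following sketch (it assumes the squid binary is on your PATH, as set up earlier):

squid -k parse        # parse squid.conf and report any errors or warnings
squid -k reconfigure  # if the parse was clean, ask the running Squid to reload its configuration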


Choosing the right flavor of Debian (Simple)

Packt
08 Oct 2013
7 min read
(For more resources related to this topic, see here.)

Getting ready
At any point in time, Debian has three different branches available for use: stable, testing, and unstable. Think of unstable as the cutting edge of free software; it has reasonably modern software packages, and sometimes those packages introduce changes or features that may break the user experience. After an amount of time has passed (usually 10 days, but it depends on the package's upload priority), the new software is considered to be relatively safe to use and is moved to testing. Testing can provide a good balance between modern software and relatively reliable software. Testing goes through several iterations during the course of several years, and eventually it's frozen for a new stable release. This stable release is supported by the Debian Project for a number of years, including feature and security updates.

Chances are you are building something that has an interesting team of people to back it up. In such scenarios, web development teams have chosen to go with testing, or even unstable, in order to get the latest software available. In other cases, conservative teams or groups with less savvy staff have resorted to stable because it's consistent for years. It is up to you to choose among them, but this book will get you started with stable. You can change your Advanced Packaging Tool (APT) configuration later and upgrade to testing and unstable, but the initial installation media that we will use will be stable.

Also, it is important that developers target the production environment as closely as possible. If you use stable for production, using stable for development will save a lot of time debugging mismatches. You should know which versions of programming languages, modules, libraries, frameworks, and databases your application will be targeting, as this will influence the selection of your branch. You can go to packages.debian.org to check the versions available for a specific package across different branches. Choosing testing (outside a freeze period) or unstable will also mean that you'll need to have an upgrade strategy where you continuously check for new updates (with tools such as cron-apt) and install them if you want to take advantage of new bug fixes and so on.

How to do it…
Debian offers a plethora of installation methods for the operating system. Besides standard CDs and DVDs, Debian also offers reduced-size installation media, bootable USB images, network boot, and other methods. The complexity of installation is a relative factor that usually is of no concern for DevOps, since installation only happens once while configuration and administration are continuously happening. Before you start considering replication methods (such as precooked images, network distribution, configuration management, and software delivery), you and your team can choose from the following installation methods:

If you are installing Debian on a third-party provider (such as a cloud vendor), they will either provide a Debian image for you, or you can prepare your own in virtualization software and upload the disk later.
If you are installing on your own hardware (including virtualized environments), it's advisable to get either the netinst ISO or the full first DVD ISO.
It all depends on whether you are installing several servers over the course of several months (thus making the DVD obsolete as new updates come out) or have a good Internet connection (or proxies and caching facilities, nearby CDNs, and so on) for downloading any additional packages that the netinst disk might not contain. In general, if you are only deploying a handful of servers and have a good Internet connection at hand, I'd suggest you choose the amd64 netinst ISO, which we will use in this book.

There's more…
There are several other points that you need to consider while choosing the right flavor of Debian. One of them is the architecture you're using and targeting for development.

Architectures
There are tens of computer architectures available in the market. ARM, Intel, AMD, SPARC, and Alpha are all different types of architectures. Debian uses the architecture codenames i386 and amd64 for historical reasons. i386 actually means an Intel or Intel-compatible, 32-bit processor (x86), while amd64 means an Intel or Intel-compatible, 64-bit processor (x86_64). The brand of the processor is irrelevant.

A few years ago, choosing between the two was tricky, as some binary-only, non-free libraries and software were not always available for 64-bit processors, and architecture mismatches happened. While there were workarounds (such as running 32-bit-only software using special libraries), it was basically a matter of time until popular software such as Flash caught up with 64-bit versions—thus, the concern was mainly about laptops and desktops. Nowadays, if your CPU (and/or your hypervisor) has 64-bit capabilities (most Intel do), it's considered a good practice to use the amd64 architecture. We will use amd64 in this book. And since Debian 7.0, the multiarch feature has been included, allowing more than one architecture to be installed and active on the same hardware.

While the market seems to settle around 64-bit Intel processors, the choice of an architecture is still important because it determines the future availability of software that you can choose from Debian. There might be some software that is not compiled for or not compatible with your specific architecture, but there is also software that is independent of the architecture. DevOps are usually pragmatic when it comes to choosing architectures, so the following two questions aim to help you understand what to expect:

Will you run your web applications on your own hardware? If so, do you already have this hardware or will you procure it? If you need to procure hardware, take a look at the existing server hardware in your datacenter. Factors such as a preferred vendor, hardware standardization, and so on are all important when choosing the right architecture. From the most popular 32- or 64-bit Intel and AMD processors, the growing ARM ecosystem, and also the more venerable but declining SPARC or Itanium, Debian is available for lots of architectures. If you are out in the market for new hardware, your options are most likely based on an Intel- or AMD-compatible, 32- or 64-bit, server-grade processor. Your decisions will be influenced by factors such as the I/O capacity (throughput and speed), memory, disk, and so on, and the architecture will most likely be covered by Debian.

Will you run your web applications on third-party hardware, such as a Virtual Private Server (VPS) provider or a cloud Infrastructure as a Service (IaaS) provider? Most providers will provide you with prebuilt images for Debian.
They are either 32- or 64-bit, x86 images that have some sort of community support—but be aware that they might have no vendor support, or in some cases waive warranties and/or other commitments such as the SLA. You should be able to prepare your own Debian installation using virtualization software (such as KVM, VirtualBox, or Hyper-V) and then upload the virtual disk (VHD, VDI, and so on) to your provider.

Summary
In this article, we learned about selecting the right flavor of Debian for our system. We also learned about the different architectures available in the market that we can use for Debian.

Resources for Article:
Further resources on this subject:
Installation of OpenSIPS 1.6 [Article]
Installing and customizing Redmine [Article]
Installing and Using Openfire [Article]

Messaging with WebSphere Application Server 7.0 (Part 1)

Packt
01 Oct 2009
6 min read
Messaging in a large enterprise is common, and a WebSphere administrator needs to understand what WebSphere Application Server can do for Java Messaging and/or WebSphere Message Queuing (WMQ) based messaging. Here, we will learn how to create Queue Connection Factories (QCF) and Queue Destinations (QD), which we will use in a demonstration application to demonstrate the Java Message Service (JMS) and to show how WMQ can be used as part of a messaging implementation. In this two-part article by Steven Charles Robinson, we will cover the following topics:

Java messaging
Java Messaging Service (JMS)
WebSphere messaging
Service integration bus (SIB)
WebSphere MQ
Message providers
Queue connection factories
Queue destinations

Java messaging
Messaging is a method of communication between software components or applications. A messaging system is often peer-to-peer, meaning that a messaging client can send messages to, and receive messages from, any other client. Each client connects to a messaging service that provides a system for creating, sending, receiving, and reading messages.

So why do we have Java messaging? Messaging enables distributed communication that is loosely coupled. What this means is that a client sends a message to a destination, and the recipient can retrieve the message from the destination. A key point of Java messaging is that the sender and the receiver do not have to be available at the same time in order to communicate. The term communication can be understood as an exchange of messages between software components. In fact, the sender does not need to know anything about the receiver; nor does the receiver need to know anything about the sender. The sender and the receiver need to know only what message format and what destination to use. Messaging also differs from electronic mail (email), which is a method of communication between people or between software applications and people. Messaging is used for communication between software applications or software components. Java messaging attempts to relax tightly-coupled communication (such as TCP network sockets, CORBA, or RMI), allowing software components to communicate indirectly with each other.

Java Message Service
Java Message Service (JMS) is an application program interface (API) from Sun. JMS provides a common interface to standard messaging protocols and also to special messaging services in support of Java programs. Messages can involve the exchange of crucial data between systems and contain information such as event notifications and service requests. Messaging is often used to coordinate programs in dissimilar systems or written in different programming languages. By using the JMS interface, a programmer can invoke messaging services like IBM's WebSphere MQ (WMQ), formerly known as MQSeries, and other popular messaging products. In addition, JMS supports messages that contain serialized Java objects and messages that contain XML-based data.

A JMS application is made up of the following parts, as shown in the following diagram:

A JMS provider is a messaging system that implements the JMS interfaces and provides administrative and control features.
JMS clients are the programs or components, written in the Java programming language, that produce and consume messages.
Messages are the objects that communicate information between JMS clients.
Administered objects are preconfigured JMS objects created by an administrator for the use of clients.
The two kinds of administered objects are destinations and Connection Factories (CF). As shown in the diagram above, administrative tools allow you to create destination and connection factory resources and bind them into a Java Naming and Directory Interface (JNDI) API namespace. A JMS client can then look up the administered objects in the namespace and establish a logical connection to the same objects through the JMS provider.

JMS features
Application clients, Enterprise Java Beans (EJB), and Web components can send or synchronously receive JMS messages. Application clients can, in addition, receive JMS messages asynchronously. A special kind of enterprise bean, the message-driven bean, enables the asynchronous consumption of messages. A JMS message can also participate in distributed transactions.

JMS concepts
The JMS API supports two models:

Point-to-point or queuing model
As shown below, in the point-to-point or queuing model, the sender posts messages to a particular queue and a receiver reads messages from the queue. Here, the sender knows the destination of the message and posts the message directly to the receiver's queue. Only one consumer gets the message. The producer does not have to be running at the time the consumer consumes the message, nor does the consumer need to be running at the time the message is sent. Every message successfully processed is acknowledged by the consumer. Multiple queue senders and queue receivers can be associated with a single queue, but an individual message can be delivered to only one queue receiver. If multiple queue receivers are listening for messages on a queue, Java Message Service determines which one will receive the next message on a first-come, first-served basis. If no queue receivers are listening on the queue, messages remain in the queue until a queue receiver attaches to the queue.

Publish and subscribe model
As shown by the above diagram, the publish/subscribe model supports publishing messages to a particular message topic. Unlike the point-to-point messaging model, the publish/subscribe messaging model allows multiple topic subscribers to receive the same message. JMS retains the message until all topic subscribers have received it. The publish/subscribe messaging model supports durable subscribers, allowing you to assign a name to a topic subscriber and associate it with a user or application. Subscribers may register interest in receiving messages on a particular message topic. In this model, neither the publisher nor the subscriber knows about the other.

By using Java, JMS provides a way of separating the application from the transport layer that provides the data. The same Java classes can be used to communicate with different JMS providers by using the JNDI information for the desired provider. The classes first use a connection factory to connect to the queue or topic, and then populate and send or publish the messages. On the receiving side, the clients then receive or subscribe to the messages.


Hyper-V Architecture and Components

Packt
04 Jan 2017
15 min read
In this article by Charbel Nemnom and Patrick Lownds, the authors of the book Windows Server 2016 Hyper-V Cookbook, Second Edition, we will look at the Hyper-V architecture along with the most important components in Hyper-V, and also at the differences between Windows Server 2016 Hyper-V, Nano Server, Hyper-V Server, Hyper-V Client, and VMware.

Virtualization is not a new feature or technology that everyone decided to have in their environment overnight. Actually, it's quite old. There were a couple of computers in the mid-60s that were using virtualization already, such as the IBM M44/44X, where you could run multiple VMs using hardware and software abstraction. It is known as the first virtualization system and the origin of the term virtual machine.

Although Hyper-V is in its fifth version, Microsoft virtualization technology is very mature. Everything started in 1988 with a company named Connectix. It had innovative products such as Connectix Virtual PC and Virtual Server, an x86 software emulation for Mac, Windows, and OS/2. In 2003, Microsoft acquired Connectix and a year later released Microsoft Virtual PC and Microsoft Virtual Server 2005. After lots of improvements in the architecture during the project Viridian, Microsoft released Hyper-V in 2008, the second version in 2009 (Windows Server 2008 R2), the third version in 2012 (Windows Server 2012), the fourth version a year later in 2013 (Windows Server 2012 R2), and the current, fifth version in 2016 (Windows Server 2016). In the past years, Microsoft has proven that Hyper-V is a strong and competitive solution for server virtualization and provides scalability, flexible infrastructure, high availability, and resiliency.

To better understand the different virtualization models, and how the VMs are created and managed by Hyper-V, it is very important to know its core, architecture, and components. By doing so, you will understand how it works, you will be able to compare it with other solutions, and you will troubleshoot problems more easily. Microsoft has long told customers that Azure datacenters are powered by Microsoft Hyper-V, and the forthcoming Azure Stack will actually allow us to run Azure in our own datacenters on top of Windows Server 2016 Hyper-V as well. For more information about Azure Stack, please refer to the following link: https://azure.microsoft.com/en-us/overview/azure-stack/

Microsoft Hyper-V has proven over the years that it's a very scalable platform to virtualize any and every workload without exception. This appendix includes well-explained topics on the most important Hyper-V architecture components, compared with other versions.

(For more resources related to this topic, see here.)

Understanding Hypervisors
The Virtual Machine Manager (VMM), also known as the Hypervisor, is the software application responsible for running multiple VMs in a single system. It is also responsible for creation, preservation, division, system access, and VM management on the Hypervisor layer. These are the types of Hypervisors:

VMM Type 2
VMM Hybrid
VMM Type 1

VMM Type 2
This type runs the Hypervisor on top of an OS. As shown in the following diagram, we have the hardware at the bottom, the OS, and then the Hypervisor running on top. Microsoft Virtual PC and VMware Workstation are examples of software that uses VMM Type 2. VMs pass hardware requests to the Hypervisor, then to the host OS, and finally to the hardware. That leads to performance and management limitations imposed by the host OS.
Type 2 is common for test environments—VMs with hardware restrictions—running on software applications that are installed in the host OS.

VMM Hybrid
When using the VMM Hybrid type, the Hypervisor runs on the same level as the OS, as shown in the following diagram. As both the Hypervisor and the OS share the same access to the hardware with the same priority, it is not as fast and safe as it could be. This is the type used by the Hyper-V predecessor named Microsoft Virtual Server 2005:

VMM Type 1
VMM Type 1 is a type that has the Hypervisor running in a tiny software layer between the hardware and the partitions, managing and orchestrating the hardware access. The host OS, known as the Parent Partition, runs on the same level as the Child Partitions, known as VMs, as shown in the next diagram. Due to the privileged access that the Hypervisor has to the hardware, it provides more security, performance, and control over the partitions. This is the type used by Hyper-V since its first release:

Hyper-V architecture
Knowing how Hyper-V works and how its architecture is constructed will make it easier to understand its concepts and operations. The following sections will explore the most important components in Hyper-V.

Windows before Hyper-V
Before we dive into the Hyper-V architecture details, it will be easier to understand what happens after Hyper-V is installed by looking at Windows without Hyper-V, as shown in the following diagram:

In a normal Windows installation, instruction access is divided into four privilege levels in the processor, called Rings. The most privileged level is Ring 0, with direct access to the hardware; this is where the Windows Kernel sits. Ring 3 is responsible for hosting the user level, where most common applications run and with the least privileged access.

Windows after Hyper-V
When Hyper-V is installed, it needs a higher privilege than Ring 0. Also, it must have dedicated access to the hardware. This is possible due to the capabilities of the newer processors created by Intel and AMD, called Intel-VT and AMD-V respectively, which allow the creation of a fifth ring called Ring -1. Hyper-V uses this ring to add its Hypervisor, which has a higher privilege and runs under Ring 0, controlling all the access to the physical components, as shown in the following diagram:

The OS architecture undergoes several changes after Hyper-V installation. Right after the first boot, the Operating System Boot Loader file (winload.exe) checks the processor that is being used and loads the Hypervisor image on Ring -1 (using the files Hvix64.exe for Intel processors and Hvax64.exe for AMD processors). Then, Windows Server is initiated, running on top of the Hypervisor, as does every VM that runs beside it. After Hyper-V installation, Windows Server has the same privilege level as a VM and is responsible for managing VMs using several components.

Differences between Windows Server 2016 Hyper-V, Nano Server, Hyper-V Server, Hyper-V Client, and VMware
There are four different versions of Hyper-V—the role that is installed on Windows Server 2016 (Core or Full Server), the role that can be installed on a Nano Server, its free version called Hyper-V Server, and the Hyper-V that comes in Windows 10, called Hyper-V Client. The following sections will explain the differences between all the versions, along with a comparison between Hyper-V and its competitor, VMware.

Windows Server 2016 Hyper-V
Hyper-V is one of the most fascinating and improved roles in Windows Server 2016.
Its fifth version goes beyond virtualization and helps us deliver the correct infrastructure to host your cloud environment. Hyper-V can be installed as a role in both Windows Server Standard and Datacenter editions. In Windows Server 2012 and 2012 R2, the only difference was that the Standard edition licensed two free Windows Server OSes, whereas the Datacenter edition had unlimited licenses. However, in Windows Server 2016 there are significant changes between the two editions. The following table shows the difference between Windows Server 2016 Standard and Datacenter editions:

Resource | Windows Server 2016 Datacenter edition | Windows Server 2016 Standard edition
Core functionality of Windows Server | Yes | Yes
OSes/Hyper-V Containers | Unlimited | 2
Windows Server Containers | Unlimited | Unlimited
Nano Server | Yes | Yes
Storage features for software-defined datacenter, including Storage Spaces Direct and Storage Replica | Yes | N/A
Shielded VMs | Yes | N/A
Networking stack for software-defined datacenter | Yes | N/A
Licensing Model | Core + CAL | Core + CAL

As you can see in the preceding table, the Datacenter edition is designed for highly virtualized private and hybrid cloud environments, and the Standard edition is for low density or non-virtualized (physical) environments. In Windows Server 2016, Microsoft is also changing the licensing model from per-processor to per-core licensing for the Standard and Datacenter editions. The following points will guide you in licensing Windows Server 2016 Standard and Datacenter editions:

All physical cores in the server must be licensed. In other words, servers are licensed based on the number of processor cores in the physical server.
You need a minimum of 16 core licenses for each server.
You need a minimum of 8 core licenses for each physical processor.
The core licenses will be sold in packs of two. Eight 2-core packs will be the minimum required to license each physical server.
The 2-core pack for each edition is one-eighth the price of a 2-processor license for the corresponding Windows Server 2012 R2 editions.
The Standard edition provides rights for up to two OSEs or Hyper-V containers when all physical cores in the server are licensed. For every two additional VMs, all the cores in the server have to be licensed again.
The price of 16-core licenses of Windows Server 2016 Datacenter and Standard edition will be the same price as the 2-processor license of the corresponding editions of the Windows Server 2012 R2 version.
Existing customers' servers under a Software Assurance agreement will receive core grants as required, with documentation.

As a worked example of these rules: a two-processor server with 12 cores per processor must license all 24 cores (twelve 2-core packs), while a two-processor server with only 4 cores per processor still needs the 16-core minimum (eight 2-core packs).

Table: The new licensing model based on the number of 2-core pack licenses (gray cells represent licensing costs; white cells represent where additional licensing is required). Windows Server 2016 Standard edition may need additional licensing.

Nano Server
Nano Server is a new headless, 64-bit only installation option that installs "just enough OS", resulting in a dramatically smaller footprint with more uptime and a smaller attack surface. Users can choose to add server roles as needed, including the Hyper-V, Scale out File Server, DNS Server, and IIS server roles. Users can also choose to install features, including Container support, Defender, Clustering, Desired State Configuration (DSC), and Shielded VM support.
Nano Server is available in Windows Server 2016 for:

Physical Machines
Virtual Machines
Hyper-V Containers
Windows Server Containers

It supports the following inbox optional roles and features:

Hyper-V, including container and shielded VM support
Datacenter Bridging
Defender
DNS Server
Desired State Configuration
Clustering
IIS
Network Performance Diagnostics Service (NPDS)
System Center Virtual Machine Manager and System Center Operations Manager
Secure Startup
Scale out File Server, including Storage Replica, MPIO, iSCSI initiator, Data Deduplication

The Windows Server 2016 Hyper-V role can be installed on a Nano Server; this is a key Nano Server role, shrinking the OS footprint and minimizing the reboots required when Hyper-V is used to run virtualization hosts. Nano Server can be clustered, including Hyper-V failover clusters. Hyper-V works the same on Nano Server, including all features, as it does in Windows Server 2016, aside from a few caveats:

All management must be performed remotely, using another Windows Server 2016 computer. Remote management consoles such as Hyper-V Manager, Failover Cluster Manager, PowerShell remoting, and management tools like System Center Virtual Machine Manager, as well as the new Azure web-based Server Management Tool (SMT), can all be used to manage a Nano Server environment.
RemoteFX is not available.

Microsoft Hyper-V Server 2016
Hyper-V Server 2016, the free virtualization solution from Microsoft, has all the features included in Windows Server 2016 Hyper-V. The only difference is that Microsoft Hyper-V Server does not include VM licenses or a graphical interface. Management can be done remotely using PowerShell or Hyper-V Manager from another Windows Server 2016 or Windows 10 machine. All the other Hyper-V features and limits in Windows Server 2016, including Failover Cluster, Shared Nothing Live Migration, RemoteFX, Discrete Device Assignment, and Hyper-V Replica, are included in the Hyper-V free version.

Hyper-V Client
In Windows 8, Microsoft introduced the first Hyper-V Client version, and it is now in its third version with Windows 10. Users can have the same experience from Windows Server 2016 Hyper-V on their desktops or tablets, making their test and development virtualized scenarios much easier. Hyper-V Client in Windows 10 goes beyond virtualization alone and helps Windows developers to use containers by bringing Hyper-V Containers natively into Windows 10. This will further empower developers to build amazing cloud applications benefiting from native container capabilities right in Windows. Since Hyper-V Containers utilize their own instance of the Windows kernel, the container is truly a server container all the way down to the kernel. Plus, with the flexibility of Windows container runtimes (Windows Server Containers or Hyper-V Containers), containers built on Windows 10 can be run on Windows Server 2016 as either Windows Server Containers or Hyper-V Containers. Because Windows 10 only supports Hyper-V containers, the Hyper-V feature must also be enabled. Hyper-V Client is present only in the Windows 10 Pro or Enterprise versions and requires the same CPU feature as Windows Server 2016, called Second Level Address Translation (SLAT). Although Hyper-V Client is very similar to the server version, there are some components that are only present on Windows Server 2016 Hyper-V.
Here is a list of components you will find only on the server version:

Hyper-V Replica
RemoteFX capability to virtualize GPUs
Discrete Device Assignment (DDA)
Live Migration and Shared Nothing Live Migration
ReFS Accelerated VHDX Operations
SR-IOV Networks
Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET)
Virtual Fibre Channel
Network Virtualization
Failover Clustering
Shielded VMs
VM Monitoring

Even with these limitations, Hyper-V Client has very interesting features such as Storage Migration, VHDX, VMs running on SMB 3.1 File Shares, PowerShell integration, Hyper-V Manager, Hyper-V Extensible Switch, Quality of Services, Production Checkpoints, the same VM hardware limits as Windows Server 2016 Hyper-V, Dynamic Memory, Runtime Memory Resize, Nested Virtualization, DHCP Guard, Port Mirroring, NIC Device Naming, and much more.

Windows Server 2016 Hyper-V vs VMware vSphere 6.0
VMware is the existing competitor of Hyper-V, and the current version 6.0 offers VMware vSphere as a free, standalone Hypervisor, as well as vSphere Standard, Enterprise, and Enterprise Plus.
The following table compares the features of Windows Server 2012 R2 and Windows Server 2016 Hyper-V with VMware vSphere 6.0 and vSphere 6.0 Enterprise Plus:

Feature | Windows Server 2012 R2 | Windows Server 2016 | VMware vSphere 6.0 | VMware vSphere 6.0 Enterprise Plus
Logical Processors | 320 | 512 | 480 | 480
Physical Memory | 4TB | 24TB | 6TB | 6TB/12TB
Virtual CPUs per Host | 2,048 | 2,048 | 4,096 | 4,096
Virtual CPUs per VM | 64 | 240 | 8 | 128
Memory per VM | 1TB | 12TB | 4TB | 4TB
Active VMs per Host | 1,024 | 1,024 | 1,024 | 1,024
Guest NUMA | Yes | Yes | Yes | Yes
Maximum Nodes | 64 | 64 | N/A | 64
Maximum VMs per Cluster | 8,000 | 8,000 | N/A | 8,000
VM Live Migration | Yes | Yes | No | Yes
VM Live Migration with Compression | Yes | Yes | N/A | No
VM Live Migration using RDMA | Yes | Yes | N/A | No
1GB Simultaneous Live Migrations | Unlimited | Unlimited | N/A | 4
10GB Simultaneous Live Migrations | Unlimited | Unlimited | N/A | 8
Live Storage Migration | Yes | Yes | No | Yes
Shared Nothing Live Migration | Yes | Yes | No | Yes
Cluster Rolling Upgrades | Yes | Yes | N/A | Yes
VM Replica |  |  |  |
Hot/Add Virtual Disk | Yes | Yes | Yes | Yes
Native 4-KB Disk Support | Yes | Yes | No | No
Maximum Virtual Disk Size | 64TB | 64TB | 2TB | 62TB
Maximum Pass Through Disk Size | 256TB or more | 256TB or more | 64TB | 64TB
Extensible Network Switch | Yes | Yes | No | Third party vendors
Network Virtualization | Yes | Yes | No | Requires vCloud networking and security
IPsec Task Offload | Yes | Yes | No | No
SR-IOV | Yes | Yes | N/A | Yes
Virtual NICs per VM | 12 | 12 | 10 | 10
VM NIC Device Naming | No | Yes | N/A | No
Guest OS Application Monitoring | Yes | Yes | No | No
Guest Clustering with Live Migration | Yes | Yes | N/A | No
Guest Clustering with Dynamic Memory | Yes | Yes | N/A | No
Shielded VMs | No | Yes | N/A | No

Summary
In this article, we have covered the Hyper-V architecture along with the most important components in Hyper-V, and also the differences between Windows Server 2016 Hyper-V, Nano Server, Hyper-V Server, Hyper-V Client, and VMware.

Resources for Article:
Further resources on this subject:
Storage Practices and Migration to Hyper-V 2016 [article]
Proxmox VE Fundamentals [article]
Designing and Building a vRealize Automation 6.2 Infrastructure [article]


Building a portable Minecraft server for LAN parties in the park

Andrew Fisher
01 Jun 2015
14 min read
Minecraft is a lot of fun, especially when you play with friends. Minecraft servers are great, but they aren't very portable and rely on a good Internet connection. What if you could take your own portable server with you - say, to the park - and it would fit inside a lunchbox? This post is about doing just that: building a small, portable minecraft server that you can use to host pop up crafting sessions no matter where you are when the mood strikes.

Where shell instructions are provided in this document, they are presented assuming you have relevant permissions to execute them. If you run into permission denied errors, then execute using sudo or switch user to elevate your permissions.

Bill of Materials
The following components are needed.

Item | QTY | Notes
Raspberry Pi 2 Model B | 1 | Older versions will be too laggy. Get a new one.
4GB MicroSD card with raspbian installed on it | 1 | The faster the class of SD card, the better.
WiPi wireless USB dongle | 1 | Note that "cheap" USB dongles often won't support hostmode so can't create a network access point. The "official" ones cost more but are known to work.
USB Powerbank | 1 | Make sure it's designed for charging tablets (ie 2.1A) and the higher capacity the better (5000mAh or better is good).

Prerequisites
I am assuming you've done the following with regards to getting your Raspberry Pi operational:

Latest Raspbian is installed and is up to date - run 'apt-get update && apt-get upgrade' if unsure.
Using raspi-config, you have set the correct timezone for your location and you have expanded the file system to fill the SD card.
You have wireless configured and you can access your network using wpa_supplicant.
You've configured the Pi to be accessible over SSH and you have a client that allows you to do this (eg ssh, putty etc).

Setup
I'll break the setup into a few parts, each focussing on one aspect of what we're trying to do. These are:

Getting the base dependencies you need to install everything on the RPi
Installing and configuring a minecraft server that will run on the Pi
Configuring the wireless access point
Automating everything to happen at boot time

Configure your Raspberry Pi
Before running a minecraft server on the RPi, you will need a few additional packages beyond those you have probably installed by default. From a command line, install the following:

sudo apt-get install oracle-java8-jdk git avahi-daemon avahi-utils hostapd dnsmasq screen

Java is required for minecraft and building the minecraft packages
git will allow you to install various source packages
avahi (also known as ZeroConf) will allow other machines to talk to your machine by name rather than IP address (which means you can connect to minecraft.local rather than 192.168.0.1 or similar)
dnsmasq allows you to run a DNS server and assign IP addresses to the machines that connect to your minecraft box
hostapd uses your wifi module to create a wireless access point (so you can play minecraft in a tent with your friends)

Now that you have the various components we need, it's time to start building your minecraft server.

Download the script repo
To make this as fast as possible, I've created a repository on Github that has all of the config files in it. Download this using the following commands:

mkdir ~/tmp
cd ~/tmp
git clone https://gist.github.com/f61c89733340cd5351a4.git

This will place a folder called 'mc-config' inside your ~/tmp directory. Everything will be referenced from there.

Get a spigot build
It is possible to run Minecraft using the Vanilla Minecraft server; however, it's a little laggy.
Spigot is a fork of CraftBukkit that seems to be a bit more performance oriented and a lot more stable. Using the Vanilla minecraft server I was experiencing lots of lag issues and crashes; with Spigot these disappeared entirely. The challenge with Spigot is that you have to build the server from scratch, as it can't be distributed. This takes a little while on an RPi but is mostly automated. Run the following commands:

mkdir ~/tmp/mc-build
cd ~/tmp/mc-build
wget https://hub.spigotmc.org/jenkins/job/BuildTools/lastSuccessfulBuild/artifact/target/BuildTools.jar
java -jar BuildTools.jar --rev 1.8

If you have a dev environment set up on your computer, you can do this step locally and it will be a lot faster. The key thing is at the end to put the spigot-1.8.jar and the craftbukkit-1.8.jar files on the RPi in the ~/tmp/mc-build/ directory. You can do this with scp.

Now wait a while. If you're impatient, open up another SSH connection to your server and configure your access point while the build process is happening.

//time passes

After about 45 minutes, you should have your own spigot build. Time to configure that with the following commands:

cd ~/tmp/mc-config
./configuremc.sh

This will run a helper script which will then set up some baseline properties for your server and some plugins to help it be more stable. It will also move the server files to the right spot, configure a minecraft user, and set minecraft to run as a service when you boot up. Once that is complete, you can start your server:

service minecraft-server start

The first time you do this it will take a long time, as it has to build out the whole world, create config files, and do all sorts of setup things. After this is done the first time, however, it will usually only take 10 to 20 seconds to start.

Administer your server
We are using a tool called "screen" to run our minecraft server. Screen is a useful utility that allows you to create a shell, execute things within it, and connect and detach from it as you please. This is a really handy utility, say, when you are running something for a long time and you want to detach your ssh session and reconnect to it later - perhaps you have a flaky connection.

When the minecraft service starts up, it creates a new screen session, gives it the name "minecraft_server", and runs the spigot server command. The nice thing with this is that once the spigot server stops, the screen will close too. Now if you want to connect to your minecraft server, the way you do it is like this:

sudo screen -r minecraft_server

To leave your server running, just hit <CTRL+a> then hit the "d" key. CTRL+a sends an "action" and then "d" sends "detach". You can keep resuming and detaching like this as much as you like. To stop the server, you can do it two ways. The first is to do it manually: once you've connected to the screen session, type "stop". This is good as it means you can watch the minecraft server come down and ensure there are no errors. Alternatively, just type:

service minecraft-server stop

And this actually simulates doing exactly the same thing.

Figure: Spigot server command line

Connect to your server
Once you've got your minecraft server running, attempt to connect to it from your normal computer using multiplayer and then direct connect. The machine address will be minecraft.local (unless you changed it to something else).
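If the game client can't find the server, it's worth a quick reachability check from your laptop first. This is only a sketch; it assumes your client machine resolves .local names via avahi/Bonjour and that the server is listening on the default Minecraft port, 25565:

ping -c 3 minecraft.local     # is the name resolving and the Pi reachable?
nc -zv minecraft.local 25565  # is something listening on the Minecraft port?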
Figure: Server selection

Now you have the Minecraft server complete, you can simply SSH in, run 'service minecraft-server start', and your server will come up for you and your friends to play on. The next sections will get you portable and automated.

Setting up the WiFi host

The way I'm going to show you to set up the Raspberry Pi is a little different from other tutorials you'll see. The objective is that if the RPi can discover an SSID that it is allowed to join (for example, your home network), it should join that. If the RPi can't discover a network it knows, it should create its own and allow other machines to join it. This means that when your Minecraft server is sitting in your kitchen you can update the OS, download new mods, and use it on your existing network. When you take it to the park for a sunny crafting session with your friends, the server will create its own network that you can all jump onto.

Turn off auto wlan management

By default, the OS will try to do the "right thing" when you plug in a network interface, so when you go to create your access point, it will actually try to put it on the wireless network again - not the desired result. To change this, modify /etc/default/ifplugd, changing the lines:

INTERFACES="all"
HOTPLUG_INTERFACES="ALL"

to:

INTERFACES="eth0"
HOTPLUG_INTERFACES="eth0"

Configure hostapd

Now, stop hostapd and dnsmasq from running at boot. They should only come up when needed, so the following commands make them manual:

update-rc.d -f hostapd remove
update-rc.d -f dnsmasq remove

Next, modify the hostapd daemon file to read the hostapd config from a file. Change the /etc/default/hostapd file to have the line:

DAEMON_CONF="/etc/hostapd/hostapd.conf"

Now create the /etc/hostapd/hostapd.conf file from the one in the repo:

cd ~/tmp/mc-config
cp hostapd.conf /etc/hostapd/hostapd.conf

If you look at this file, you can see we've set the SSID of the access point to "minecraft", the password to "qwertyuiop", and it has been set to use wlan0. If you want to change any of these things, feel free to do so.

Now you'll probably want to bring down your wireless device:

ifdown wlan0

Make sure all your other processes are finished if you're doing this while compiling Spigot at the same time (or make sure you're connected via the wired Ethernet as well). Now run hostapd to check for any config errors:

hostapd -d /etc/hostapd/hostapd.conf

If there are no errors, background the task (CTRL+z, then type 'bg 1') and look at ifconfig. You should now have the wlan0 interface up as well as a wlan0.mon interface. If this is all good, you know your hostapd config is working. Foreground the task ('fg 1') and stop the hostapd process (CTRL+c).

Configure dnsmasq

Now to get dnsmasq running - this is pretty easy. Back up the default config and put the one from the repo in its place:

cd ~/tmp/mc-config
mv /etc/dnsmasq.conf /etc/dnsmasq.conf.backup
cp dnsmasq.conf /etc/dnsmasq.conf

dnsmasq is set to listen on wlan0 and allocate IP addresses in the range 192.168.40.5 - 192.168.40.40. The default IP address of the server will be 192.168.40.1. That's pretty much all you need to get dnsmasq configured.
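Once clients start joining your access point (you'll test it in a moment), you may want to see who dnsmasq has handed addresses to. Here is a minimal Python sketch; /var/lib/misc/dnsmasq.leases is the default lease file location on Raspbian, which is an assumption worth checking on other distributions.

# who_is_here.py -- list the devices dnsmasq has leased addresses to,
# handy for seeing who has joined your pop-up network at the park.
LEASE_FILE = "/var/lib/misc/dnsmasq.leases"  # Raspbian default, adjust if needed

with open(LEASE_FILE) as leases:
    for line in leases:
        # each lease line: expiry-epoch MAC IP hostname client-id
        fields = line.split()
        if len(fields) >= 4:
            print("%s -> %s (%s)" % (fields[1], fields[2], fields[3]))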
Testing the config

Now it's time to test all this configuration. It is probably useful to have your RPi connected to a keyboard and monitor, or available over eth0, as this is the point where things are most likely to need debugging. The following commands bring down wlan0, start hostapd, configure the wlan0 interface with a static IP address, and then start up dnsmasq:

ifdown wlan0
service hostapd start
ifconfig wlan0 192.168.40.1
service dnsmasq start

Assuming you had no errors, you can now connect to your wireless access point from your laptop or phone using the SSID "minecraft" and password "qwertyuiop". Once you are connected, you should be given an IP address, and you should be able to ping 192.168.40.1 and shell onto minecraft.local.

Congratulations - you've now got a Raspberry Pi wireless access point. As an aside, were you to write some iptables rules, you could now route traffic from the wlan0 interface to the eth0 interface and out onto the rest of your wired network - thus turning your RPi into a router.

Running everything at boot

The final setup task is to make the WiFi detection happen at boot time. Most flavours of Linux have a boot script called rc.local, which is pretty much the final thing to run before giving you a login prompt at a terminal. Put the rc.local file from the repo in place using the following commands:

mv /etc/rc.local /etc/rc.local.backup
cp rc.local /etc/rc.local
chmod a+x /etc/rc.local

This script checks whether the RPi came up on the network. If not, it waits a couple of seconds, then starts setting up hostapd and dnsmasq. To test that this is all working, modify your /etc/wpa_supplicant/wpa_supplicant.conf file and change the SSID so it's clearly incorrect. For example, if your SSID was "home", change it to "home2". This way, when the RPi boots, it won't find the network and the access point will be created.

Park crafting

Now that your RPi Minecraft server can detect networks and make good choices about when to create its own, the next thing you need to do is make it portable. The new version 2 RPi is more energy efficient, though running both Minecraft and a WiFi access point is going to use some power.

The easiest way to go mobile is to get a high-capacity USB powerbank that is designed for charging tablets. They can be expensive, but a large-capacity one will keep you going for hours. The powerbank shown here is 5000mAh and can deliver 2.1A over its USB connection - plenty for several hours of crafting in the park. Setting this up couldn't be easier: plug a USB cable into the powerbank and then into the RPi. When you're done, simply plug the powerbank into your USB charger or computer and charge it up again.

If you want something a little more custom, then a 7.4V LiPo battery with a step-down voltage regulator (such as this one from Pololu: https://www.pololu.com/product/2850) connected to the power and ground pins on the RPi works very well. The challenge here is charging the LiPo again; however, if you have the means to balance-charge LiPos, this will probably be a very cheap option.

If you connect more than 5V to the RPi you WILL destroy it. There are no protection circuits when you use the GPIO pins.

To protect your setup, simply put it in a little plastic container like a lunchbox. This is how mine travels with me. A plastic lunchbox will protect your server from spilled drinks, enthusiastic dogs, and toddlers attracted to the blinking lights. Take your RPi and power supply to the park, along with your laptop and some friends, then create your own little WiFi hotspot and play some Minecraft together in the sun.
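If you're wondering how long a charge will last, a rough estimate is easy to compute. The figures below are assumptions rather than measurements - a Pi 2 with a WiFi dongle under load draws somewhere in the region of 400-700mA - so treat the answer as a ballpark only.

# runtime_estimate.py -- back-of-envelope runtime for a powerbank.
CAPACITY_MAH = 5000   # powerbank capacity from the bill of materials
AVG_DRAW_MA = 600     # assumed average draw: Pi 2 + WiPi under load
EFFICIENCY = 0.85     # assumed conversion losses in the powerbank

hours = CAPACITY_MAH * EFFICIENCY / AVG_DRAW_MA
print("Estimated runtime: about %.1f hours" % hours)

With these numbers that works out to roughly seven hours - comfortably an afternoon in the park.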
Going further

If you want to take things a little further, here are some ideas:

- Build a custom enclosure - maybe something that looks like your favorite Minecraft block.
- Using WiringPi (C) or RPi.GPIO (Python), attach an LCD screen that shows the status of your server and who is connected to the Minecraft world.
- Add some custom LEDs to your enclosure, then use RaspberryJuice and the Python interface to the API to create objects in your game world that can switch the LEDs on and off (a sketch of this idea follows the list).
- Add a big red button to your enclosure that, when pressed, will "nuke" the game world, removing all blocks and leaving only water. Or go a little easier and simply use the button to kick everyone off the server when it's time to go or your battery is getting low.
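To give a flavour of the LED idea, here is a minimal sketch. It assumes the RaspberryJuice plugin is installed on your Spigot server (it exposes the same API as Minecraft: Pi Edition), that the mcpi Python library is available on the Pi, and that an LED with a resistor is wired to GPIO pin 18 - the pin number and polling interval are arbitrary choices.

# led_events.py -- flash an LED on the enclosure whenever someone hits a
# block with a sword. Run on the Pi itself, as root (GPIO access).
import time
import RPi.GPIO as GPIO
from mcpi import minecraft  # assumes the mcpi library is installed

LED_PIN = 18  # assumption: LED + resistor wired to BCM pin 18

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

mc = minecraft.Minecraft.create()  # connects to the local API port

try:
    while True:
        if mc.events.pollBlockHits():   # any block hits since last poll?
            GPIO.output(LED_PIN, True)
            time.sleep(0.2)
            GPIO.output(LED_PIN, False)
        time.sleep(0.1)
finally:
    GPIO.cleanup()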
About the Author

Andrew Fisher is a creator and destroyer of things that combine mobile web, ubicomp and lots of data. He is a sometime programmer, interaction researcher and CTO at JBA, a data consultancy in Melbourne, Australia. He can be found on Twitter @ajfisher.

Coding with Minecraft

Packt
17 Sep 2013
7 min read
(For more resources related to this topic, see here.)

Getting ready

Before you begin, you will need a running copy of Minecraft: Pi Edition. Start a game in a new or existing world, and wait for the game world to load.

How to do it...

Follow these steps to connect to the running Minecraft game:

1. Open a fresh terminal by double-clicking on the LXTerminal icon on the desktop.
2. Type cd ~/mcpi/api/python/mcpi into the terminal.
3. Type python to begin the Python interpreter.
4. Enter the following Python code:

import minecraft
mc = minecraft.Minecraft.create()
mc.postToChat("Hello, world!")

In the Minecraft window, you should see a message appear!

How it works...

First, we used the cd command, which we have seen previously, to move to the location where the Python API is located. The application programming interface (API) consists of code provided by the Minecraft developers that handles some of the more basic functionality you might need. We used the ~ character as a shortcut for your home directory (/home/pi). Typing in cd /home/pi/mcpi/api/python/mcpi would have exactly the same effect, but requires more typing.

We then start the Python interpreter. An interpreter is a program that executes code line by line as it is being typed. This allows us to get instant feedback on the code we are writing. You may like to explore the IDLE interpreter by typing idle into the terminal instead of python. IDLE is more advanced: it is able to color your code based on its meaning (so you can spot errors more easily), and can graphically suggest functions available for use.

Then we started writing real Python code. The first line, import minecraft, gives us access to the necessary parts of the API by loading the minecraft module. There are several Python code files inside the directory we moved to, each containing a different code module; one of them is called minecraft.py. The main module we want access to is called minecraft.

We then create a connection to the game using mc = minecraft.Minecraft.create(). mc is the name we have given to the connection, which allows us to use the same connection in any future code. minecraft. tells Python to look in the minecraft module. Minecraft is the name of a class in the minecraft module that groups together related data and functions. create() is a function of the Minecraft class that creates a connection to the game.

Finally, we use the connection we have created, and its postToChat method, to display a message in Minecraft.

The way that our code interacts with the game is completely hidden from us to increase flexibility: we can use almost exactly the same code to interact with any game of Minecraft: Pi Edition, and it is possible to use many different programming languages. If the developers want to change the way the communication works, they can do so, and it won't affect any of the code we have written. Behind the scenes, some text describing our command is sent across a network connection to the game, where the command is interpreted and performed. By default, the connection is to the very Raspberry Pi that we are running the code on, but it is also possible to send these commands over the Internet from any computer to any network-connected Raspberry Pi running Minecraft. A description of all of these text-based messages can be found in ~/mcpi/api/spec: the message sent to the game when we wrote mc.postToChat("Hello, world!") was chat.post("Hello, world!"). This way of doing things allows any programming language to communicate with the running Minecraft game.
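To make this concrete, here is a minimal sketch that bypasses the library entirely and writes one of these text messages straight to the API socket. Port 4711 is the default for the Minecraft: Pi Edition API; the exact message syntax is documented in the spec files in ~/mcpi/api/spec, so treat the string here as illustrative.

# raw_api.py -- talk to the game without the helper library, to show
# what the API connection does under the hood.
import socket

sock = socket.create_connection(("localhost", 4711))  # default API port
# roughly what the library sends for mc.postToChat("Hello, world!");
# see ~/mcpi/api/spec for the exact message format
sock.sendall(b"chat.post(Hello, world!)\n")
sock.close()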
As well as Python, a Java API is included that is capable of all the same tasks, and the community has created versions of the API in several other languages.

There's more...

There are many more functions provided in the Python API; some of the main ones are described here. You can explore the available commands using Python's help function: after importing the minecraft module, help(minecraft) will list the contents of that module, along with any text descriptions provided by the writer of the module. You can also use help to provide information on classes and functions.

It is also possible to create your own API by building on top of the existing functions. For example, if you find yourself wanting to create a lot of spheres, you could write your own function that makes use of those provided, and import your module wherever you need it.

The minecraft module

The following code assumes that you have imported the minecraft module and created a connection to the running Minecraft game using mc = minecraft.Minecraft.create(). Whenever x, y, and z coordinates are used, x and z are both different directions that follow the ground, and y is the height, with 0 at sea level.

- mc.getBlock(x,y,z): gets the type of the block at a particular location as a number. These numbers are all provided in the block module.
- mc.setBlock(x,y,z, type): sets the block at a particular position to a particular type. There is also a setBlocks function that allows a cuboid to be filled - this will be faster than setting blocks individually.
- mc.getHeight(x,z): gets the height of the world at the given location.
- mc.getPlayerEntityIds(): gets a list of IDs of all connected players.
- mc.saveCheckpoint(): saves the current state of the world.
- mc.restoreCheckpoint(): restores the state of the world from the saved checkpoint.
- mc.postToChat(message): posts a message to the game chat.
- mc.setting(setting, status): changes a game setting (such as "world_immutable" or "nametags_visible") to True or False.
- mc.camera.setPos(x,y,z): moves the game camera to a particular location. Other options are setNormal(player_id), setFixed(), and setFollow(player_id).
- mc.player.getPos(): gets the position of the host player.
- mc.player.setPos(x,y,z): moves the host player.
- mc.events.pollBlockHits(): gets a list of all blocks that have been hit since the last time the events were requested. Each event describes the position of the block that was hit.
- mc.events.clearAll(): clears the list of events. At the time of writing, only block hits are recorded, but more event types may be included in the future.

The block module

Another useful module is the block module: use import block to gain access to its contents. The block module has a list of all available blocks and the numbers used to represent them. For example, a block of dirt is represented by the number 3. You can use 3 directly in your code if you like, or you can use the helpful name block.DIRT, which will help to make your code more readable.

Some blocks, such as wool, have additional information to describe their color. This data can be provided after the block's ID in all functions. For example, to create a block of red wool, where 14 is the data value representing "red":

mc.setBlock(x, y, z, block.WOOL, 14)

Full information on the additional data values can be found online at http://www.minecraftwiki.net/wiki/Data_values(Pocket_Edition).
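To tie a few of these functions together, here is a small sketch that finds the player and raises a stone pillar just in front of them. It assumes you run it from the ~/mcpi/api/python/mcpi directory, as in the recipe above; the two-block offset and the choice of stone are arbitrary.

# pillar.py -- combine getPos, getHeight, and setBlocks from the tables
# above to plant a small stone pillar near the player.
import minecraft
import block

mc = minecraft.Minecraft.create()

pos = mc.player.getPos()          # the player's position (x, y, z)
gx, gz = pos.x + 2, pos.z         # a spot two blocks away along x
ground = mc.getHeight(gx, gz)     # y coordinate of the surface there
# fill a one-wide, three-high column of stone standing on the surface
mc.setBlocks(gx, ground, gz, gx, ground + 2, gz, block.STONE)
mc.postToChat("A pillar appears!")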
Summary

This article gave us some simple code to interact with the game. It also explained how Python communicates with the game, and gave an overview of other API functions that you can use to build useful functions of your own on top of the existing ones.

Resources for Article:

Further resources on this subject:

- Creating a file server (Samba) [Article]
- Webcam and Video Wizardry [Article]
- Instant Minecraft Designs – Building a Tudor-style house [Article]

Creating TFS Scheduled Jobs

Packt
28 Sep 2015
12 min read
In this article by Gordon Beeming, the author of the book Team Foundation Server 2015 Customization, we are going to cover TFS scheduled jobs. The topics that we are going to cover include:

- Writing a TFS job
- Deploying a TFS job
- Removing a TFS job

You would want to write a scheduled job for any logic that needs to run at specific times, whether at certain intervals or at specific times of the day. A scheduled job is not the place for logic that should run as soon as some other event, such as a check-in or a work item change, occurs. The job we build in this article will automatically link changesets to work items based on check-in comments.

(For more resources related to this topic, see here.)

The project setup

First off, we'll start with our project setup. This time, we'll create a Windows console application.

Creating a new Windows console application

The references that we'll need this time around are:

- Microsoft.VisualStudio.Services.WebApi.dll
- Microsoft.TeamFoundation.Common.dll
- Microsoft.TeamFoundation.Framework.Server.dll

All of these can be found in C:\Program Files\Microsoft Team Foundation Server 14.0\Application Tier\TFSJobAgent on the TFS server. That's all the setup that is required for your TFS job project. Any class that inherits from ITeamFoundationJobExtension can be used for a TFS job.

Writing the TFS job

So, as mentioned, we are going to need a class that inherits from ITeamFoundationJobExtension. Let's create a class called TfsCommentsToChangeSetLinksJob and inherit from ITeamFoundationJobExtension. As part of this, we will need to implement the Run method, which is part of the interface, like this:

public class TfsCommentsToChangeSetLinksJob : ITeamFoundationJobExtension
{
    public TeamFoundationJobExecutionResult Run(
        TeamFoundationRequestContext requestContext,
        TeamFoundationJobDefinition jobDefinition,
        DateTime queueTime, out string resultMessage)
    {
        throw new NotImplementedException();
    }
}

Then, we also add the using statement:

using Microsoft.TeamFoundation.Framework.Server;

Now, for this specific extension, we'll need to add references to the following:

- Microsoft.TeamFoundation.Client.dll
- Microsoft.TeamFoundation.VersionControl.Client.dll
- Microsoft.TeamFoundation.WorkItemTracking.Client.dll

All of these can also be found in C:\Program Files\Microsoft Team Foundation Server 14.0\Application Tier\TFSJobAgent.

Now, for the logic of our plugin, we use the following code inside the Run method as a basic shell, into which we'll then place the specific logic for this plugin. The shell adds a try catch block; the end of the try block reports a successful job run, while the catch appends the thrown exception to the job message and reports that the job failed:

resultMessage = string.Empty;
try
{
    // place logic here

    return TeamFoundationJobExecutionResult.Succeeded;
}
catch (Exception ex)
{
    resultMessage += "Job Failed: " + ex.ToString();
    return TeamFoundationJobExecutionResult.Failed;
}

Along with this code, you will need the following using statements:

using Microsoft.TeamFoundation;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;
using System.Linq;
using System.Text.RegularExpressions;

So next, we need to place some logic specific to this job in the try block.
First, let's create a connection to TFS for version control:

TfsTeamProjectCollection tfsTPC =
    TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
        new Uri("http://localhost:8080/tfs"));
VersionControlServer vcs = tfsTPC.GetService<VersionControlServer>();

Then, we will get the work item store and query the version control history for the last 25 check-ins:

WorkItemStore wis = tfsTPC.GetService<WorkItemStore>();
// get the last 25 check-ins
foreach (Changeset changeSet in
    vcs.QueryHistory("$/", RecursionType.Full, 25))
{
    // place the next logic here
}

Now that we have the changeset history, we check the comments for any references to work items, using a simple regex:

// try to match the regex for a hash number in the comment
foreach (Match match in Regex.Matches(
    (changeSet.Comment ?? string.Empty), @"#\d{1,}"))
{
    // place the next logic here
}

Inside this loop, we know that we have found a valid number in the comment and that we should attempt to link the check-in to that work item. But the fact that we have found a number doesn't mean that the work item exists, so let's try to find a work item with the found number:

int workItemId = Convert.ToInt32(match.Value.TrimStart('#'));
var workItem = wis.GetWorkItem(workItemId);
if (workItem != null)
{
    // place the next logic here
}

If the workItem variable is not null, the work item exists, and we proceed to check whether a link between this changeset and work item already exists:

// now create the link
ExternalLink changesetLink = new ExternalLink(
    wis.RegisteredLinkTypes[ArtifactLinkIds.Changeset],
    changeSet.ArtifactUri.AbsoluteUri);
// you should verify whether such a link already exists
if (!workItem.Links.OfType<ExternalLink>()
    .Any(l => l.LinkedArtifactUri ==
        changeSet.ArtifactUri.AbsoluteUri))
{
    // place the next logic here
}

If a link does not exist, then we can add a new link:

changesetLink.Comment = "Change set " +
    $"'{changeSet.ChangesetId}'" +
    " auto linked by a server plugin";
workItem.Links.Add(changesetLink);
workItem.Save();
resultMessage += $"Linked CS:{changeSet.ChangesetId} " +
    $"to WI:{workItem.Id}";

We simply take the last 25 changesets here. If you were using this in production, you would probably want to store the last changeset you processed and then fetch the history up to that point, but that isn't needed to illustrate this sample.

After getting the list of changesets, we process each one as described: we check whether there is a comment and whether that comment contains a hash number that we can try linking to a changeset. We then check whether a work item exists for the number we found, and add a link to the work item from the changeset. For each link we add, we append to the overall resultMessage string, so that when we look at the results of a job run, we can see which links were added automatically for us. As you can see, with this approach we don't interfere with the check-in itself, but rather process this out-of-band way of linking changesets to work items at a later stage.
Deploying our TFS job

Deploying the code is very simple. Change the project's Output type to Class Library: go to the project properties, and in the Application tab you will see an Output type drop-down list. Now build your project. Then, copy the TfsJobSample.dll and TfsJobSample.pdb output files to the scheduled job plugins folder, which is C:\Program Files\Microsoft Team Foundation Server 14.0\Application Tier\TFSJobAgent\Plugins.

Unfortunately, simply copying the files into this folder won't install your scheduled job. The reason for this is that the scheduled job interface doesn't specify when to run your job; instead, you register the job as a separate step. Change Output type back to Console Application for the next step. You can, and should, split the TFS job and its installer into different projects, but in our sample, we'll use the same one.

Registering, queueing, and deregistering a TFS job

If you try to install the job the way you used to in TFS 2013, you will now get the TF400444 error:

TF400444: The creation and deletion of jobs is no longer supported. You may only update the EnabledState or Schedule of a job. Failed to create, delete or update job id 5a7a01e0-fff1-44ee-88c3-b33589d8d3b3

This is because changes were made to the job service, for security reasons, and these changes prevent you from using the Client Object Model. You are now forced to use the Server Object Model. The code that you have to write is slightly more complicated, and it requires you to copy your executable to multiple locations to get it working properly.

Place all of the following code in your program.cs file, inside the Main method. We start off by reading the arguments passed to the application; if we don't get at least one argument, we don't continue:

#region Collect commands from the args
if (args.Length != 1 && args.Length != 2)
{
    Console.WriteLine("Usage: TfsJobSample.exe <command " +
        "(/r, /i, /u, /q)> [job id]");
    return;
}
string command = args[0];
Guid jobid = Guid.Empty;
if (args.Length > 1)
{
    if (!Guid.TryParse(args[1], out jobid))
    {
        Console.WriteLine("Job Id not a valid Guid");
        return;
    }
}
#endregion

We then wrap all our logic in a try catch block; in the catch, we only write out the exception that occurred:

try
{
    // place logic here
}
catch (Exception ex)
{
    Console.WriteLine(ex.ToString());
}

Place the next steps inside the try block, unless asked to do otherwise. As part of using the Server Object Model, you'll need to create a DeploymentServiceHost. This requires a connection string to the TFS configuration database, so make sure that the connection string set in the following code is valid for you.
We also need some other generic path information, so we'll mimic what we'd expect the job agent's paths to be:

#region Build a DeploymentServiceHost
string databaseServerDnsName = "localhost";
string connectionString = $"Data Source={databaseServerDnsName};" +
    "Initial Catalog=TFS_Configuration;Integrated Security=true;";
TeamFoundationServiceHostProperties deploymentHostProperties =
    new TeamFoundationServiceHostProperties();
deploymentHostProperties.HostType =
    TeamFoundationHostType.Deployment |
    TeamFoundationHostType.Application;
deploymentHostProperties.Id = Guid.Empty;
deploymentHostProperties.PhysicalDirectory =
    @"C:\Program Files\Microsoft Team Foundation Server 14.0\" +
    @"Application Tier\TFSJobAgent";
deploymentHostProperties.PlugInDirectory =
    $@"{deploymentHostProperties.PhysicalDirectory}\Plugins";
deploymentHostProperties.VirtualDirectory = "/";
ISqlConnectionInfo connInfo =
    SqlConnectionInfoFactory.Create(connectionString, null, null);
DeploymentServiceHost host =
    new DeploymentServiceHost(deploymentHostProperties, connInfo, true);
#endregion

Now that we have a TeamFoundationServiceHost, we are able to create a TeamFoundationRequestContext. We'll need it to call methods such as UpdateJobDefinitions, which adds and/or removes our job, and QueryJobDefinition, which is used to queue our job outside of any schedule:

using (TeamFoundationRequestContext requestContext =
    host.CreateSystemContext())
{
    TeamFoundationJobService jobService =
        requestContext.GetService<TeamFoundationJobService>();
    // place next logic here
}

We then create a new TeamFoundationJobDefinition instance with all of the information that we want for our TFS job, including the name, schedule, and enabled state:

var jobDefinition = new TeamFoundationJobDefinition(
    "Comments to Change Set Links Job",
    "TfsJobSample.TfsCommentsToChangeSetLinksJob");
jobDefinition.EnabledState = TeamFoundationJobEnabledState.Enabled;
jobDefinition.Schedule.Add(new TeamFoundationJobSchedule
{
    ScheduledTime = DateTime.Now,
    PriorityLevel = JobPriorityLevel.Normal,
    Interval = 300,
});

Once we have the job definition, we check what the command was and then execute the code relating to that command. For the /r command, we run our TFS job outside of the TFS job agent:

if (command == "/r")
{
    string resultMessage;
    new TfsCommentsToChangeSetLinksJob().Run(requestContext,
        jobDefinition, DateTime.Now, out resultMessage);
}

For the /i command, we install the TFS job:

else if (command == "/i")
{
    jobService.UpdateJobDefinitions(requestContext, null,
        new[] { jobDefinition });
}

For the /u command, we uninstall the TFS job:

else if (command == "/u")
{
    jobService.UpdateJobDefinitions(requestContext,
        new[] { jobid }, null);
}

Finally, with the /q command, we queue the TFS job to run inside the TFS job agent, outside of its schedule:

else if (command == "/q")
{
    jobService.QueryJobDefinition(requestContext, jobid);
}

Now that we have this code in the program.cs file, we need to compile the project and then copy TfsJobSample.exe and TfsJobSample.pdb to the TFS Tools folder, which is C:\Program Files\Microsoft Team Foundation Server 14.0\Tools. Now open a cmd window as an administrator, change the directory to the Tools folder, and run your application with the /i command, as follows:

Installing the TFS Job

Now you have successfully installed the TFS job. To uninstall it or force it to be queued, you will need the job ID.
To uninstall, run /u with the job ID, like this:

Uninstalling the TFS Job

Queueing follows the same approach: simply specify the /q command and the job ID.

How do I know whether my TFS job is running?

The easiest way to check whether your TFS job is running is to look at the job history table in the configuration database. To do this, you will need the job ID (which we spoke about earlier). You can obtain it by running the following query against the TFS_Configuration database:

SELECT JobId
FROM Tfs_Configuration.dbo.tbl_JobDefinition WITH (NOLOCK)
WHERE JobName = 'Comments to Change Set Links Job'

With this JobId, we can then query the job history:

SELECT *
FROM Tfs_Configuration.dbo.tbl_JobHistory WITH (NOLOCK)
WHERE JobId = '<place the JobId from the previous query here>'

This returns a list of results describing the previous runs of the job. If you see that your job has a Result of 6, which means "extension not found", you will need to stop and restart the TFS job agent. You can do this by running the following commands in an administrator cmd window:

net stop TfsJobAgent
net start TfsJobAgent

Note that when you stop the TFS job agent, any jobs that are currently running will be terminated. They will also not get a chance to save their state which, depending on how they were written, could lead to some unexpected situations when they start again. After the agent has started again, you will see that the Result field is now different, as the job agent will know about your job.

If you prefer browsing the web to see the status of your jobs, you can browse to the job monitoring page (_oi/_jobMonitoring#_a=history), for example, http://gordon-lappy:8080/tfs/_oi/_jobMonitoring#_a=history. This will give you all the data that you can normally query, but with nice graphs and grids.
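If you find yourself checking the job history often, the two queries above are easy to automate. The following is a minimal sketch using Python and pyodbc; the driver name, server, and trusted-connection settings are assumptions about your environment, so adjust them to match your SQL Server setup.

# job_history.py -- run the two queries above in one go: look up the
# job id by name, then print its recent run history.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=localhost;"      # assumed server/driver
    "DATABASE=Tfs_Configuration;Trusted_Connection=yes")
cursor = conn.cursor()

cursor.execute(
    "SELECT JobId FROM dbo.tbl_JobDefinition WITH (NOLOCK) "
    "WHERE JobName = ?", "Comments to Change Set Links Job")
row = cursor.fetchone()
if row is None:
    raise SystemExit("Job not found -- is it installed?")

# same SELECT * as the query above, so all history columns are printed
cursor.execute(
    "SELECT * FROM dbo.tbl_JobHistory WITH (NOLOCK) WHERE JobId = ?",
    row.JobId)
for history in cursor.fetchall():
    print(history)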
Summary

In this article, we looked at how to write, install, uninstall, and queue a TFS job. You learned that the way we used to install TFS jobs will no longer work for TFS 2015, because of a security-driven change to the Client Object Model.

Resources for Article:

Further resources on this subject:

- Getting Started with TeamCity [article]
- Planning for a successful integration [article]
- Work Item Querying [article]