
How-To Tutorials - Servers

95 Articles

Installing Apache Karaf

Packt
31 Oct 2013
7 min read
Before Apache Karaf can provide you with an OSGi-based container runtime, we'll have to set up our environment first. The process is quick, requiring little more than a standard Java installation. In this article we'll review:

The prerequisites for Apache Karaf
Obtaining Apache Karaf
Installing Apache Karaf and running it for the first time

Prerequisites

As a lightweight container, Apache Karaf has sparse system requirements. Check that all of the following specifications are met or exceeded:

Operating system: Apache Karaf runs on recent versions of Windows, AIX, Solaris, HP-UX, and various Linux distributions (Red Hat, SUSE, Ubuntu, and so on).
Disk space: At least 20 MB of free disk space is required. You will need more free space as additional resources are provisioned into the container; as a rule of thumb, plan to allocate 100 to 1000 MB of disk space for logging, the bundle cache, and the repository.
Memory: At least 128 MB of memory is required; 2 GB or more is recommended.
Java Runtime Environment (JRE): JRE 1.6 or JRE 1.7 is required. The location of the JRE should be made available via the JAVA_HOME environment variable. At the time of writing, Java 1.6 is "end of life".

For our demos we'll use Apache Maven 3.0.x and Java SDK 1.7.x; these tools should be obtained for future use, although they are not necessary to operate the base Karaf installation. Before attempting to build the demos, please set the MAVEN_HOME environment variable to point to your Apache Maven distribution.

After verifying that you have the prerequisite hardware, operating system, JVM, and other software packages, you will have to set up the JAVA_HOME and MAVEN_HOME environment variables. Both of these will be added to the system PATH.

Setting up the JAVA_HOME environment variable

Apache Karaf honors the JAVA_HOME setting in the system environment; if it is not set, Karaf will pick up and use the Java found on PATH. For users unfamiliar with setting environment variables, the following batch script will set up your Windows environment:

    @echo off
    REM execute setup.bat to set up environment variables.
    set JAVA_HOME=C:\Program Files\Java\jdk1.6.0_31
    set MAVEN_HOME=C:\x1\apache-maven-3.0.4
    set PATH=%JAVA_HOME%\bin;%MAVEN_HOME%\bin;%PATH%
    echo %PATH%

The script creates and sets the JAVA_HOME and MAVEN_HOME variables to point to their local installation directories, and then adds their values to the system PATH. The initial echo off directive reduces console output as the script executes; the final echo command prints the value of PATH.

Managing Windows system environment variables

Windows environment settings can also be managed via the System Properties control panel. Access to these controls varies according to the Windows release.

Conversely, in a Unix-like environment, a script similar to the following one will set up your environment:

    # execute setup.sh to set up environment variables.
    JAVA_HOME=/path/to/jdk1.6.0_31
    MAVEN_HOME=/path/to/apache-maven-3.0.4
    PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$PATH
    export PATH JAVA_HOME MAVEN_HOME
    echo $PATH

The first two directives create and set the JAVA_HOME and MAVEN_HOME environment variables, respectively. These values are added to the PATH setting, and then made available to the environment via the export command.
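Once the variables are set, it is worth confirming that the environment resolves the tools you expect. A quick check on a Unix-like system might look like the following sketch (the version numbers in the comments are only examples):

    # confirm the environment variables point where you intend
    echo $JAVA_HOME
    echo $MAVEN_HOME
    # confirm the JVM and Maven found on PATH come from those locations
    java -version     # should report a 1.6.x or 1.7.x JVM
    mvn -version      # should report Maven 3.0.x and the JAVA_HOME it detected

On Windows, the equivalent checks are echo %JAVA_HOME%, java -version, and mvn -version, run from a new command prompt so that the updated PATH is picked up.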
Obtaining the Apache Karaf distribution

As an Apache open source project, Apache Karaf is made available in both binary and source distributions. The binary distribution comes in a Linux-friendly, GNU-compressed archive and in a Windows ZIP format. Your choice of distribution kit will affect which set of scripts is available in Karaf's bin folder, so if you're using Windows, select the ZIP file; on Unix-like systems, choose the tar.gz file.

Apache Karaf distributions may be obtained from http://karaf.apache.org/index/community/download.html. The primary download site provides a list of available mirror sites; it is advisable to select a server near your location for faster downloads. For the purposes of this article, we will focus on Apache Karaf 2.3.x, with notes on the 3.0.x release series.

Apache Karaf 2.3.x versus 3.0.x series

The major difference between the Apache Karaf 2.3 and 3.0 lines is the core OSGi specification supported: Karaf 2.3 utilizes OSGi rev4.3, while Karaf 3.0 uses rev5.0. Karaf 3 also introduces several command name changes. There are a multitude of other internal differences between the code bases, and wherever appropriate, we'll highlight the changes that impact users throughout this text.

Installing Apache Karaf

The installation of Apache Karaf only requires you to extract the tar.gz or .zip file into your desired target folder. The following command is used on Windows:

    unzip apache-karaf-<version>.zip

The following command is used on Unix:

    tar -zxf apache-karaf-<version>.tar.gz

After extraction, the following files and folders will be present:

The LICENSE, NOTICE, README, and RELEASE-NOTES files are plain text artifacts contained in each Karaf distribution. The RELEASE-NOTES file is of particular interest: upon each major and minor release of Karaf, it is updated with a list of changes.

The bin folder contains the Karaf scripts for the interactive shell (karaf), for starting and stopping the background Karaf service, a client for connecting to running Karaf instances, and additional utilities.

The data folder is home to Karaf's log files, bundle cache, and various other persistent data.

The demos folder contains an assortment of sample projects for Karaf. New users are advised to explore these examples to gain familiarity with the system. For the purposes of this book, we strived to create new sample projects to augment those existing in the distribution.

The instances folder is created when you use Karaf child instances; it stores the child instance folders and files.

The deploy folder is monitored for hot deployment of artifacts into the running container.

The etc folder contains the base configuration files of Karaf; it is also monitored for dynamic configuration updates to the Configuration Admin service in the running container.

An HTML and PDF copy of the Karaf manual is included in each kit.

The lib folder contains the core libraries required for Karaf to boot on a JVM.

The system folder contains a simple repository of the dependencies Karaf requires at runtime. This repository has each library JAR saved under a Maven-style directory structure, consisting of the library's Maven group ID, artifact ID, version, artifact ID-version, any classifier, and extension.
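For reference, the top-level layout of an extracted 2.3.x kit looks roughly like the following sketch (exact file and folder names vary slightly between versions, and the instances folder only appears once child instances are used):

    apache-karaf-<version>/
        LICENSE, NOTICE, README, RELEASE-NOTES   plain-text artifacts
        bin/       karaf, start/stop, and client scripts
        data/      log files, bundle cache, and other persistent state (populated at runtime)
        demos/     sample projects
        deploy/    hot-deployment folder
        etc/       base configuration files
        lib/       core boot libraries
        system/    Maven-style repository of runtime dependencies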
First boot!

After extracting the Apache Karaf distribution kit and setting our environment variables, we are now ready to start the container. The container can be started by invoking the karaf script provided in the bin directory.

On Windows, use the following command:

    bin\karaf.bat

On Unix, use the following command:

    ./bin/karaf

Congratulations, you have successfully booted Apache Karaf! To stop the container, issue the following command in the console:

    karaf@root> shutdown -f

Including the -f or --force flag with the shutdown command instructs Karaf to skip asking for confirmation of container shutdown. Pressing Ctrl + D will also shut down Karaf when you are on the shell; however, if you are connected remotely (using SSH), this action will just log off the SSH session, it won't shut down Karaf.

Summary

We have covered the prerequisites for installing Karaf, which distribution to obtain, how to install the container, and finally how to start and stop it.

Further resources on this subject: Apache Felix Gogo [Article], WordPress 3 Security: Apache Modules [Article], Configuring Apache and Nginx [Article]


WebLogic Security Realm

Packt
03 Jan 2013
7 min read
Configuration of the local LDAP server: users/roles/lockout

The simplest way to configure your security realm is through the WebLogic Administration Console; everything related to security can be found under Security Realms in the main tree, where the default configuration, called myrealm, is placed. Under Security Realms, we have a preconfigured subset of Users, Groups, Authentication methods, Role Mapping, Credential Mapping providers, and some other security settings. You can configure many security realms, but only one will be active.

In the myrealm section, we find all the security parameters of the internal LDAP server configuration, including users and groups. Note that Oracle states the embedded WebLogic LDAP server works well with fewer than 10,000 users; for more users, consider a different LDAP server and Authentication Provider, for example, an Active Directory server.

Users and groups

Here you can create and configure internal users and internal groups. A user is an entity that can be authenticated and used to protect our application resources. A group is an aggregation of users who usually have something in common, such as a subset of permissions and authorizations.

Users section

The console path for the Users section is as follows: click on Security Realms | myrealm | Users and Groups | Users.

In this section, by default, you will find your administrator account, used to log in to the WebLogic Administration Console and configured in the wizard during the installation phase. You can also create other users (note that the names are case insensitive) and set the following:

User Description: An internal string description tag
User Password: The user password, subject to some rules
View User Attributes: Some user attributes
Associate groups: Predefined in the Groups section

Take care to preserve the integrity of the administrative user created in the installation configuration wizard; this user is vital to the WebLogic Server startup process. Don't remove this user unless you have advanced knowledge of what you are doing and how to roll back the change.

Also take care when changing the admin user's password after the installation phase: if you use the automatic startup process without providing a user and password interactively (required when the admin server is started as an OS service, without prompting any interactive request), you will need to reconfigure the credentials file used to start up the admin server at boot. The following file needs to be changed:

    $DOMAIN_HOME/servers/AdminServer/security/boot.properties

    username=weblogic
    password=weblogicpassword

After the first boot, the WebLogic admin server will encrypt this file with its internal encryption method.
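Users can also be created without the console. The following WLST sketch illustrates the idea against the default authentication provider; the credentials and URL are placeholders, and the method names follow the standard WebLogic security MBeans, so verify them against your WebLogic version before relying on this:

    # create-user.py - run with: java weblogic.WLST create-user.py (sketch)
    connect('weblogic', 'weblogicpassword', 't3://localhost:7001')   # placeholder admin credentials/URL
    realm = cmo.getSecurityConfiguration().getDefaultRealm()
    authenticator = realm.lookupAuthenticationProvider('DefaultAuthenticator')
    # create a user in the embedded LDAP and add it to an existing group
    authenticator.createUser('appuser', 'Welcome1%', 'Sample application user')   # placeholder user/password
    authenticator.addMemberToGroup('Monitors', 'appuser')
    disconnect()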
Groups section

The console path for the Groups section is as follows: Security Realms | myrealm | Users and Groups | Groups.

In this section, by default, you will find some groups used to profile user grants (only the Administrators and Oracle System groups are populated); the names are case insensitive. Define new groups before creating the users to associate with them. The most important groups are as follows:

Administrators: This is the most powerful group, which can do everything in the WebLogic environment. Do not add plenty of people to it, otherwise you will have too many users with the power to modify your server configuration.
Deployers: This group can manage applications and resources (for example, JDBC and web services) and is very appropriate for an operations team that needs to deploy and update different versions of applications often during the day.
Monitors: This group provides read-only access to WebLogic and is convenient for monitoring WebLogic resources and status.
Operators: This group provides the privilege to stop, start, and resume WebLogic nodes.

All users without an associated group are assigned the Anonymous role. In this case, the implicit group (not present in the list) is the everyone group.

Security role conditions

The console paths for Roles and Policies are as follows:

Go to Security Realms | myrealm | Roles and Policies | Realm Roles
Go to Security Realms | myrealm | Roles and Policies | Realm Policies

In WebLogic, you can configure advanced collections of rules to grant or deny access over the role security configuration dynamically; all conditions need to be true if you want to grant a security role. The available conditions in WebLogic role mapping are explored in the next sections.

Basic

The available options are as follows:

User: This option adds the user to a specific role if his username matches the specified string
Group: This option adds the specified group to the role in the same way as the previous rule
Server is in development mode: This option adds the user or group to a role if the server is started in development mode
Allow access to everyone: This option adds all users and groups to the role
Deny access to everyone: This option excludes all users from the role

Date and time-based

When used, this role condition can configure a rule based on a date or on a time basis (between, after, before, and specified) to grant a role assignment.

Context element

The server retrieves information from the ContextHandler object and allows you to define role conditions based on the values of HTTP servlet request attributes, HTTP session attributes, and EJB method parameters.

User lockout

The console path for User Lockout is Security Realms | myrealm | User Lockout.

User Lockout is enabled by default; it protects against user intrusion and dictionary attacks and improves server security, and you can configure policies to lock out your locally configured users. This option is applied globally to any configured security provider. In this section, you can define the maximum number of consecutive invalid login attempts that can occur before a user's account is locked out, and how long the lock lasts. After that period, the account is automatically re-enabled. If you are using an Authentication Provider that has its own mechanism for protecting user accounts, disable the Lockout Enabled option.

When a user is locked out, you can find a message similar to the following in the server logs:

    <Apr 6, 2012 11:10:00 AM CEST> <Notice> <Security> <BEA-090078> <User Test in security realm myrealm has had 5 invalid login attempts, locking account for 30 minutes.>
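The same lockout policy can be inspected and adjusted from WLST. The snippet below is a sketch: the attribute names follow the standard UserLockoutManager MBean, the defaults shown in the comments match the log message above, and changing a value is a configuration edit, so it needs an edit session:

    # lockout-policy.py - run with: java weblogic.WLST lockout-policy.py (sketch)
    connect('weblogic', 'weblogicpassword', 't3://localhost:7001')   # placeholder credentials/URL
    # read the current policy from the configuration tree
    lockout = cmo.getSecurityConfiguration().getDefaultRealm().getUserLockoutManager()
    print lockout.getLockoutThreshold()   # default: 5 invalid attempts
    print lockout.getLockoutDuration()    # default: 30 minutes
    # changing the policy is a configuration change, so wrap it in an edit session
    edit()
    startEdit()
    cmo.getSecurityConfiguration().getDefaultRealm().getUserLockoutManager().setLockoutThreshold(3)
    save()
    activate()
    disconnect()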
Unlocking a user

The result of the lockout settings is a blocked user. If you need to unlock the user immediately, go to the section named after your domain (created in the installation wizard phase) in the left pane, under the Security section. There you can view the Unlock User tab, where you can specify the username to be re-enabled. Remember to click on the Lock & Edit button before you make any changes.

When you manually unlock a user, you can find a message similar to the following in the server logs:

    ... <1333703687507> <BEA-090022> <Explicitly unlocked, user Test.>

Summary

In this article, we focused on the key steps for configuring the WebLogic security realm and protecting application resources in a fast and easy way.

Further resources on this subject: Oracle Enterprise Manager Key Concepts and Subsystems [Article], Configuring and Deploying the EJB 3.0 Entity in WebLogic Server [Article], Developing an EJB 3.0 entity in WebLogic Server [Article]


Lync 2013 Hybrid and Lync Online

Packt
06 Feb 2015
27 min read
In this article, by the authors Fabrizio Volpe, Alessio Giombini, Lasse Nordvik Wedø, and António Vargas of the book Lync Server Cookbook, we will cover the following recipes:

Introducing Lync Online
Administering with the Lync Admin Center
Using Lync Online Remote PowerShell
Using Lync Online cmdlets
Introducing Lync in a hybrid scenario
Planning and configuring a hybrid deployment
Moving users to the cloud
Moving users back on-premises
Debugging Lync Online issues

Introducing Lync Online

Lync Online is part of the Office 365 offering and provides online users with the same Instant Messaging (IM), presence, and conferencing features that we would expect from an on-premises deployment of Lync Server 2013. Enterprise Voice, however, is not available on Office 365 tenants (or at least, it is available only with limitations regarding both specific Office 365 plans and geographical locations). There is no doubt that forthcoming versions of Lync and Office 365 will add what is needed to also support all the Enterprise Voice features in the cloud. Right now, the best we are able to achieve is to move workloads, homing a part of our Lync users (the ones with no telephony requirements) in Office 365 while the remaining Lync users are homed on-premises.

This solution might be interesting for several reasons, including the fact that we can avoid the cost of expanding our existing on-premises resources by moving a part of our Lync-enabled users to Office 365. The previously mentioned configuration, which involves different kinds of Lync tenants, is called a hybrid deployment of Lync, and we will see how to configure it and how to move our users from online to on-premises and vice versa. In this article, every time we talk about Lync Online and Office 365, we will assume that we have already configured an Office 365 tenant.

Administering with the Lync Admin Center

Lync Online provides the Lync Admin Center (LAC), a dedicated control panel, to manage Lync settings. To open it, access the Office 365 portal and select Service settings, Lync, and Manage settings in the Lync admin center. Compared with the on-premises Lync Control Panel (or with the Lync Management Shell), LAC offers few options; for example, it is not possible to create or delete users directly inside Lync. We will see some of the tasks we are able to perform in LAC, and then we will move to the (more powerful) Remote PowerShell.

There is an alternative path to open LAC: from the Office 365 portal, navigate to Users & Groups | Active Users, select a user, and you will see a Quick Steps area with an Edit Lync Properties link that opens the user-editable part of LAC.

How to do it...

LAC is divided into five areas: users, organization, dial-in conferencing, meeting invitation, and tools. The Users panel shows the configuration of the Lync Online enabled users.
It is possible to modify the settings with the Edit option (the small pencil icon on the right). The available options sit inside the general, external communications, and dial-in conferencing tabs.

Some of the user settings are worth a mention. In the General tab, we have the following:

The Record conversations and meetings option enables the Start recording option in the Lync client
The Allow anonymous attendees to dial-out option controls whether anonymous users dialing in to a conference are required to call the conferencing service directly or are authorized for callback
The For compliance, turn off non-archived features option disables Lync features that are not recorded by In-Place Hold for Exchange

When you place an Exchange 2013 mailbox on In-Place Hold or Litigation Hold, the Microsoft Lync 2013 content (instant messaging conversations and files shared in an online meeting) is archived in the mailbox.

In the dial-in conferencing tab, we have the configuration required for dial-in conferencing. The provider's drop-down menu shows a list of third parties that are able to deliver this kind of feature.

The Organization tab manages privacy for presence information, push services, and external access (the equivalent of the Lync federation on-premises). If we enable external access, we will have the option to turn on Skype federation.

The Dial-In Conferencing option is dedicated to the configuration of the external providers. The Meeting Invitation option allows the user to customize the Lync Meeting invitation. The Tools option offers a collection of troubleshooting resources.

See also

For details about Exchange In-Place Hold, see the TechNet post In-Place Hold and Litigation Hold at http://technet.microsoft.com/en-us/library/ff637980(v=exchg.150).aspx.

Using Lync Online Remote PowerShell

The possibility to manage Lync using Remote PowerShell on a distant deployment has been available since Lync 2010. This feature has always required a direct connection from the management station to the remote Lync, and a series of steps that is not always simple to set up. Lync Online supports Remote PowerShell using a dedicated (64-bit only) PowerShell module, the Lync Online Connector. It is used to manage online users, and it is interesting because there are many settings and automation options that are available only through PowerShell.

Getting ready

The Lync Online Connector requires one of the following operating systems: Windows 7 (with Service Pack 1), Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, Windows 8, or Windows 8.1. At least PowerShell 3.0 is needed; to check the version, we can use the $PSVersionTable variable.

How to do it...

Download the Windows PowerShell Module for Lync Online from the Microsoft site at http://www.microsoft.com/en-us/download/details.aspx?id=39366 and install it.

It is useful to store our Office 365 credentials in an object (it is possible to launch the cmdlets at step 3 anyway, and we will be asked for the Office 365 administrator credentials, but with this method we avoid inserting the authentication information again every time it is required). We can use the $credential = Get-Credential cmdlet in a PowerShell session.
We will be prompted for our username and password for Lync Online.

To use the Online Connector, open a PowerShell session and use the New-CsOnlineSession cmdlet. One way to start a remote PowerShell session is $session = New-CsOnlineSession -Credential $credential.

Now, we need to import the session that we have created with Lync Online into PowerShell, with the Import-PSSession $session cmdlet. A temporary Windows PowerShell module will be created, which contains all the Lync Online cmdlets. Once it is imported, we will have the cmdlets of the Lync Online module loaded in memory, in addition to any command that we already have available in PowerShell.

How it works...

The feature is based on a PowerShell module, the LyncOnlineConnector. It contains only two cmdlets, Set-WinRMNetworkDelayMS and New-CsOnlineSession; the latter loads the required cmdlets into memory. As we have seen in the previous steps, the Online Connector adds the Lync Online PowerShell cmdlets to the ones already available. This is something we will use when talking about hybrid deployments, where we will start from the Lync Management Shell and then import the module for Lync Online.

It is a good habit to verify (and close) your previous remote sessions. This can be done by selecting a specific session (using Get-PSSession and then pointing to a specific session with the Remove-PSSession statement) or by closing all the existing ones with the Get-PSSession | Remove-PSSession cmdlet.

In previous versions of the module, the Microsoft Online Services Sign-In Assistant was required; this prerequisite was removed from the latest version.

There's more...

There are some checks that we are able to perform when using the PowerShell module for Lync Online. By launching the New-CsOnlineSession cmdlet with the -Verbose switch, we will see all the messages related to the opening of the session. Another verification comes from the Get-Command -Module tmp_gffrkflr.ufz command, where the module name (in this example, tmp_gffrkflr.ufz) is the temporary module created during the Import-PSSession step. The output of the command shows all the Lync Online cmdlets that we have loaded in memory.

The Import-PSSession cmdlet imports all commands except the ones that have the same name as a cmdlet that already exists in the current PowerShell session. To overwrite the existing cmdlets, we can use the -AllowClobber parameter.

See also

During the introduction of this section, we also discussed the possibility to administer an on-premises, remote Lync Server 2013 deployment with a remote PowerShell session. John Weber has written a great post about it on his blog, Lync 2013 Remote Admin with PowerShell, at http://tsoorad.blogspot.it/2013/10/lync-2013-remote-admin-with-powershell.html, which is helpful if you want to use the previously mentioned feature.
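Putting the steps of this recipe together, a complete connection sequence looks like the following sketch (only cmdlets described above are used; run it from a 64-bit PowerShell 3.0 or later session with the Lync Online Connector installed):

    # store the Office 365 administrator credentials
    $credential = Get-Credential
    # create and import the remote session (a temporary module with the Lync Online cmdlets is generated)
    $session = New-CsOnlineSession -Credential $credential
    Import-PSSession $session
    # ... work with the Lync Online cmdlets ...
    # clean up when finished
    Get-PSSession | Remove-PSSession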
Using Lync Online cmdlets

In the previous recipe, we outlined the steps required to establish a remote PowerShell session with Lync Online. We have fewer than 50 cmdlets available, as shown in the result of the Get-Command -Module command. Some of them are specific to Lync Online, such as the following:

Get-CsAudioConferencingProvider
Get-CsOnlineUser
Get-CsTenant
Get-CsTenantFederationConfiguration
Get-CsTenantHybridConfiguration
Get-CsTenantLicensingConfiguration
Get-CsTenantPublicProvider
New-CsEdgeAllowAllKnownDomains
New-CsEdgeAllowList
New-CsEdgeDomainPattern
Set-CsTenantFederationConfiguration
Set-CsTenantHybridConfiguration
Set-CsTenantPublicProvider
Update-CsTenantMeetingUrl

All the remaining cmdlets can be used either with Lync Online or with the on-premises version of Lync Server 2013. We will see the use of some of the previously mentioned cmdlets.

How to do it...

The Get-CsTenant cmdlet lists the Lync Online tenants configured for use in our organization. The output of the command includes information such as the preferred language, registrar pool, domains, and assigned plan.

The Get-CsTenantHybridConfiguration cmdlet gathers information about the hybrid configuration of Lync.

Management of the federation capability for Lync Online (the feature that enables Instant Messaging and presence information exchange with users of other domains) is based on allowed-domain and blocked-domain lists, as we can see in the organization and external communications screen of LAC. There are similar ways to manage federation from the Lync Online PowerShell, but it requires putting different statements together:

We can use an "accept all domains excluding the ones in the exceptions list" approach. To do this, we put the New-CsEdgeAllowAllKnownDomains cmdlet inside a variable, and then use the Set-CsTenantFederationConfiguration cmdlet to allow all the domains (except the ones in the block list) for one of the domains on our tenant. We can use the example on TechNet (http://technet.microsoft.com/en-us/library/jj994088.aspx) and integrate it with Get-CsTenant; a sketch of this approach appears after the next example.

If we prefer, we can use a "block all domains but permit the ones in the allow list" approach. We define a domain name pattern for every domain to allow with the New-CsEdgeDomainPattern cmdlet, saving each one in a variable. Then, the New-CsEdgeAllowList cmdlet creates a list of allowed domains from those variables, and finally the Set-CsTenantFederationConfiguration cmdlet applies it. The tenant we will work on will be (again) cc3b6a4e-3b6b-4ad4-90be-6faa45d05642, using the example on TechNet (http://technet.microsoft.com/en-us/library/jj994023.aspx):

    $x = New-CsEdgeDomainPattern -Domain "contoso.com"
    $y = New-CsEdgeDomainPattern -Domain "fabrikam.com"
    $newAllowList = New-CsEdgeAllowList -AllowedDomain $x,$y
    Set-CsTenantFederationConfiguration -Tenant "cc3b6a4e-3b6b-4ad4-90be-6faa45d05642" -AllowedDomains $newAllowList
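The allow-all-known-domains approach described above can be sketched as follows, combining the same cmdlets; the tenant GUID is the placeholder used in the previous example, so substitute your own tenant identity (retrievable with Get-CsTenant):

    # allow federation with all known domains, except those explicitly blocked
    $allKnown = New-CsEdgeAllowAllKnownDomains
    Set-CsTenantFederationConfiguration -Tenant "cc3b6a4e-3b6b-4ad4-90be-6faa45d05642" -AllowedDomains $allKnown
    # confirm the resulting federation configuration
    Get-CsTenantFederationConfiguration -Tenant "cc3b6a4e-3b6b-4ad4-90be-6faa45d05642"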
The Get-CsOnlineUser cmdlet provides information about users enabled on Office 365. The result shows both users synced with Active Directory and users homed in the cloud. The command supports filters to limit the output; for example, Get-CsOnlineUser -Identity fab gathers information about the user that has the alias fab. This is an account synced from the on-premises Directory Services, so the value of its DirSyncEnabled parameter will be True.

See also

All the cmdlets of the Remote PowerShell for Lync Online are listed in the TechNet post Lync Online cmdlets at http://technet.microsoft.com/en-us/library/jj994021.aspx. This is the main source of details on the individual statements.

Introducing Lync in a hybrid scenario

In a Lync hybrid deployment, we have the following:

User accounts and related information homed in the on-premises Directory Services and replicated to Office 365
A part of our Lync users that consume on-premises resources and a part that use online (Office 365 / Lync Online) resources
The same (public) domain name used both online and on-premises (Lync split DNS)
Other Office 365 services and integration with other applications available to all our users, irrespective of where their Lync is provisioned

One way to define the Lync hybrid configuration is as an on-premises Lync deployment federated with an Office 365 / Lync Online tenant subscription. While it is not a perfect explanation, it gives us an idea of the scenario we are talking about. Not all the features of Lync Server 2013 (especially the ones related to Enterprise Voice) are available to Lync Online users. The previously mentioned motivations, along with others (company policies, compliance requirements, and so on), might recommend a hybrid deployment of Lync as the best available solution. What we have to clarify now is how to make users on different deployments talk to each other, see each other's presence status, and so on. This section gives a high-level overview of the required steps; the Planning and configuring a hybrid deployment recipe will provide more details about the individual steps. The list of steps here is the one required to configure a hybrid deployment starting from Lync on-premises; in the following sections, we will also see the opposite scenario (with our initial deployment in the cloud).

How to do it...

It is required to have an available Office 365 tenant configuration, and our subscription has to include Lync Online.

We have to configure an Active Directory Federation Services (AD FS) server in our domain and make it available to the Internet using a public FQDN and an SSL certificate released from a third-party certification authority.

Office 365 must be enabled to synchronize with our company's Directory Services, using Active Directory Sync.

Our Office 365 tenant must be federated.

The last step is to configure Lync for a hybrid deployment.

There's more...

One of the requirements for a hybrid deployment of Lync is an on-premises deployment of Lync Server 2013 or Lync Server 2010. For Lync Server 2010, it is required to have the latest available updates installed, both on the Front Ends and on the Edge servers; it is also required to have the Lync Server 2013 administrative tools installed on a separate server. More details about supported configurations are available in the TechNet post Planning for Lync Server 2013 hybrid deployments at http://technet.microsoft.com/en-us/library/jj205403.aspx.

The DNS SRV records for hybrid deployments, _sipfederationtls._tcp.<domain> and _sip._tls.<domain>, should point to the on-premises deployment. The lyncdiscover.<domain> record will point to the FQDN of the on-premises reverse proxy server. The _sip._tls.<domain> SRV record will resolve to the public IP of the Access Edge service of Lync on-premises.

Depending on the kind of service we are using for Lync, Exchange, and SharePoint, only a part of the features related to the integration with the additional services might be available; for example, skills search is available only if we are using Lync and SharePoint on-premises. The TechNet post Supported Lync Server 2013 hybrid configurations at http://technet.microsoft.com/en-us/library/jj945633.aspx offers a matrix of feature / service deployment combinations.
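To make the record requirements above concrete, a zone-file style sketch for a fictional shared SIP domain might look like the following; the host names, and the reverse proxy FQDN in particular, are placeholders for your own infrastructure:

    ; hybrid DNS records for the shared SIP domain (placeholder values)
    _sipfederationtls._tcp.contoso.com.   IN SRV    0 0 5061  sip.contoso.com.
    _sip._tls.contoso.com.                IN SRV    0 0 443   sip.contoso.com.
    lyncdiscover.contoso.com.             IN CNAME  reverseproxy.contoso.com.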
See also

Interesting information about Lync hybrid configuration is presented in sessions available on Channel 9, from the Lync Conference 2014 (Lync Online Hybrid Deep Dive at http://channel9.msdn.com/Events/Lync-Conference/Lync-Conference-2014/ONLI302) and from TechEd North America 2014 (Microsoft Lync Online Hybrid Deep Dive at http://channel9.msdn.com/Events/TechEd/NorthAmerica/2014/OFC-B341#fbid=).

Planning and configuring a hybrid deployment

The planning phase for a hybrid deployment starts from a simple consideration: do we have an on-premises deployment of Lync Server? If so, do we want to move users to the cloud or vice versa? Although the first situation is by far the most common, we have to also consider the case in which our first deployment is in the cloud.

How to do it...

The following step is all that is required for the scenario that starts from Lync Online: completely deploy Lync on-premises, establish a remote PowerShell session with Office 365, and use the Set-CsTenantFederationConfiguration -SharedSipAddressSpace $True cmdlet to enable Office 365 to use a shared Session Initiation Protocol (SIP) address space with our on-premises deployment. To verify it, we can use the Get-CsTenantFederationConfiguration command; the SharedSipAddressSpace value should be set to True.

All the following steps are for the scenario that starts from the on-premises Lync deployment.

After we have subscribed with a tenant, the first step is to add the public domain we use for our Lync users to Office 365 (so that we can split it across the two deployments). Access the Office 365 portal, select Domains, and then Specify a domain name and confirm ownership; we will be required to type a domain name. If our domain is hosted by certain providers (such as GoDaddy), the verification process can be automated; otherwise, we have to proceed manually. The process requires adding one DNS record (TXT or MX).

If we need to check our Office 365 and on-premises deployments before continuing with the hybrid deployment, we can use the Setup Assistant for Office 365. The tool is available inside the Office 365 portal, but we have to launch it from a domain-joined computer (the login must be performed with domain administrative credentials). In the Setup menu, we have a Quick Start and an Extend Your Setup option (we have to select the second one). The process can continue by installing an app or without software installation. The app (which makes the assessment of the existing deployment easier) is installed by selecting Next in the previous screen; it requires at least Windows 7 with Service Pack 1, .NET Framework 3.5, and PowerShell 2.0. Synchronization with the on-premises Active Directory is required.

The last step federates Lync Server 2013 with Lync Online to allow communication between our users. The first cmdlet to use is Set-CsAccessEdgeConfiguration -AllowOutsideUsers 1 -AllowFederatedUsers 1 -UseDnsSrvRouting -EnablePartnerDiscovery 1. Note that the -EnablePartnerDiscovery parameter is required; setting it to 1 enables automatic discovery of federated partner domains (it is possible to set it to 0).
The second required cmdlet is:

    New-CsHostingProvider -Identity LyncOnline -ProxyFqdn "sipfed.online.lync.com" -Enabled $true -EnabledSharedAddressSpace $true -HostsOCSUsers $true -VerificationLevel UseSourceVerification -IsLocal $false -AutodiscoverUrl https://webdir.online.lync.com/Autodiscover/AutodiscoverService.svc/root

If Lync Online is already defined, we have to use the Set-CsHostingProvider cmdlet instead, or we can remove the existing entry (Remove-CsHostingProvider -Identity LyncOnline) and then recreate it using the previously mentioned cmdlet.

There's more...

In the Lync hybrid scenario, users created in the on-premises directory are replicated to the cloud, while users created in the cloud are not replicated on-premises. Lync Online users are managed using the Office 365 portal, while the users on-premises are managed using the usual tools (Lync Control Panel and Lync Management Shell).

Moving users to the cloud

When moving users from Lync on-premises to the cloud, we will lose some of their parameters. The operation requires the Lync administrative tools and the PowerShell module for Lync Online to be installed on the same computer. If we install the module for Lync Online before the administrative tools for Lync Server 2013, the OCSCore.msi file overwrites the LyncOnlineConnector.ps1 file, and New-CsOnlineSession will require a -TargetServer parameter. In this situation, we have to reinstall the Lync Online module (see the following post on the Microsoft support site: http://support.microsoft.com/kb/2955287).

Getting ready

Remember that to move a user to Lync Online, they must be enabled for both Lync Server on-premises and Lync Online (so we have to assign the user a license for Lync Online by using the Office 365 portal). Users with no assigned license will show an error like Move-CsUser : HostedMigration fault: Error=(507), Description=(User must have an assigned license to use Lync Online). For more details, refer to the Microsoft support site at http://support.microsoft.com/kb/2829501.

How to do it...

Open a new Lync Management Shell session and launch the remote session on Office 365 with the cmdlet sequence we saw earlier. We have to add the -AllowClobber parameter so that the Lync Online module's cmdlets are able to overwrite the corresponding Lync Management Shell cmdlets:

    $credential = Get-Credential
    $session = New-CsOnlineSession -Credential $credential
    Import-PSSession $session -AllowClobber

Open the Lync Admin Center (as we have seen in the dedicated section) by going to Service settings | Lync | Manage settings in the Lync Admin Center, and copy the first part of the URL, for example, https://admin0e.online.lync.com. Add the string /HostedMigration/hostedmigrationservice.svc to the previous URL (in our example, the result will be https://admin0e.online.lync.com/HostedMigration/hostedmigrationservice.svc).

The following cmdlet will move users from Lync on-premises to Lync Online. The required parameters are the identity of the Lync user and the URL that we prepared in the previous step.
The user identity is [email protected]:

    Move-CsUser -Identity [email protected] -Target sipfed.online.lync.com -Credential $creds -HostedMigrationOverrideUrl https://admin0e.online.lync.com/HostedMigration/hostedmigrationservice.svc

Usually, we are required to insert (again) the Office 365 administrative credentials, after which we will receive a warning about the fact that we are moving our user to a different version of the service. See the There's more... section of this recipe for details about the user information that is migrated to Lync Online.

We are able to quickly verify whether the user has moved to Lync Online by using the Get-CsUser | fl DisplayName,HostingProvider,RegistrarPool,SipAddress command. For an on-premises user, HostingProvider is equal to SRV: and RegistrarPool is madhatter.wonderland.lab (the name of the internal Lync Front End). For a Lync Online user, HostingProvider is sipfed.online.lync.com and RegistrarPool is empty.

There's more...

If we plan to move more than one user, we have to add a selection and pipe it to the cmdlet we have already used, removing the -Identity parameter. For example, to move all users from an Organizational Unit (OU), such as LyncUsers in the wonderland.lab domain, to Lync Online, we can use:

    Get-CsUser -OU "OU=LyncUsers,DC=wonderland,DC=lab" | Move-CsUser -Target sipfed.online.lync.com -Credential $creds -HostedMigrationOverrideUrl https://admin0e.online.lync.com/HostedMigration/hostedmigrationservice.svc

We are also able to move users that match a parameter by using the Get-CsUser -Filter cmdlet.

As we mentioned earlier, not all the user information is migrated to Lync Online. Contact lists, groups, and access control lists are migrated, while meetings, contents, and schedules are lost. We can use the Lync Meeting Update Tool to update the meeting links (which change when our user's home server changes) and automatically send updated meeting invitations to participants. There is a 64-bit version (http://www.microsoft.com/en-us/download/details.aspx?id=41656) and a 32-bit version (http://www.microsoft.com/en-us/download/details.aspx?id=41657) of the previously mentioned tool.
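After a bulk move like the one above, the single-user verification can be run across the whole OU; the following is a sketch reusing the placeholder OU and the property names from this recipe:

    # confirm where each user in the OU is now homed
    Get-CsUser -OU "OU=LyncUsers,DC=wonderland,DC=lab" | Select-Object DisplayName, HostingProvider, RegistrarPool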
Moving users back on-premises

It is possible to move back users that were moved from the on-premises Lync deployment to the cloud, and it is also possible to move on-premises users that were defined and enabled directly in Office 365. In the latter scenario, it is important to also create the user in the on-premises domain (Directory Service).

How to do it...

The Lync Online user must be created in Active Directory (for example, we will define the BornOnCloud user that already exists in Office 365).

The user must be enabled in the on-premises Lync deployment, for example, using the Lync Management Shell with the following cmdlet:

    Enable-CsUser -Identity "BornOnCloud" -SipAddress "SIP:[email protected]" -HostingProviderProxyFqdn "sipfed.online.lync.com"

Sync the Directory Services.

Now, we have to save our Office 365 administrative credentials in a $cred = Get-Credential variable and then move the user from Lync Online to the on-premises Front End using the Lync Management Shell (the -HostedMigrationOverrideUrl parameter has the same value that we used in the previous section):

    Move-CsUser -Identity [email protected] -Target madhatter.wonderland.lab -Credential $cred -HostedMigrationOverrideUrl https://admin0e.online.lync.com/HostedMigration/hostedmigrationservice.svc

The Get-CsUser | fl DisplayName,HostingProvider,RegistrarPool,SipAddress cmdlet is used to verify whether the user has moved as expected.

See also

Guy Bachar has published an interesting post on his blog, Moving Users back to Lync on-premises from Lync Online (http://guybachar.wordpress.com/2014/03/31/moving-users-back-to-lync-on-premises-from-lync-online/), where he shows how he solved some errors related to the user move by modifying the HostedMigrationOverrideUrl parameter.

Debugging Lync Online issues

Getting ready

When moving from an on-premises solution to a cloud tenant, the first aspect we have to accept is that we will not have the same level of control over the deployment that we had before. The tools we will list are helpful in resolving issues related to Lync Online, but the level of understanding of an issue they give to a system administrator is not the same as with tools such as Snooper or OCSLogger. Knowing this, the more users we move to the cloud, the more we will have to use the online instruments.

How to do it...

The Set up Lync Online external communications site on Microsoft Support (http://support.microsoft.com/common/survey.aspx?scid=sw;en;3592&showpage=1) is a guided walk-through that helps in setting up communication between our Lync Online users and external domains. The tool provides guidelines to assist in the setup of Lync Online for small to enterprise businesses, and every single task is well explained.

The Remote Connectivity Analyzer (RCA) (https://testconnectivity.microsoft.com/) is an outstanding tool to troubleshoot both Lync on-premises and Lync Online. The web page includes tests to analyze common errors and misconfigurations related to Microsoft services such as Exchange, Lync, and Office 365. Testing different scenarios requires various network protocols and ports; if we are working on a firewall-protected network, the RCA also lets us test services that are not directly reachable from our location. For Lync Online, some tests are especially interesting: in the Office 365 tab, the Office 365 General Tests section includes the Office 365 Lync Domain Name Server (DNS) Connectivity Test and the Office 365 Single Sign-On Test.

The Single Sign-On test is really useful in a hybrid scenario. The test requires our domain username and password, both synced with the on-premises Directory Services. The steps include searching for the FQDN of our AD FS server on an Internet DNS, verifying the certificate and connectivity, and then validating the token that contains the credentials.

The Client tab offers downloads of the Microsoft Connectivity Analyzer Tool and the Microsoft Lync Connectivity Analyzer Tool, which we will see in the following two dedicated steps.

The Microsoft Connectivity Analyzer Tool makes many of the tests we see in the RCA available on our desktop.
The list of prerequisites is provided in the article Microsoft Connectivity Analyzer Tool (http://technet.microsoft.com/library/jj851141(v=exchg.80).aspx) and includes Windows Vista / Windows Server 2008 or later versions of the operating system, .NET Framework 4.5, and an Internet browser such as Internet Explorer, Chrome, or Firefox. For the Lync tests, a 64-bit operating system is mandatory, and the UCMA runtime 4.0 is also required (it is part of the Lync Server 2013 setup, and is also available for download at http://www.microsoft.com/en-us/download/details.aspx?id=34992). The tool proposes ways to solve different issues, and then runs the same tests available on the RCA site. We are able to save the results in an HTML file.

The Microsoft Lync Connectivity Analyzer Tool is dedicated to troubleshooting the clients for mobile devices (the Lync Windows Store app and the Lync apps). It tests all the required configurations, including the autodiscover and webticket services. The 32-bit version is available at http://www.microsoft.com/en-us/download/details.aspx?id=36536, while the 64-bit version can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=36535; .NET Framework 4.5 is required. The tool itself requires a few configuration parameters: we have to insert the user information that we usually add in the Lync app, and we have to use a couple of drop-down menus to describe the scenario we are testing (on-premises or Internet, and the kind of client we are going to test). The Show drop-down menu lets us look not only at a summary of the test results but also at the detailed information. The detailed view includes all the information and requests sent and received during the test, the FQDN included in the answer ticket from our services, and so on.

The Troubleshooting Lync Online sign-in post is a support page, available in two different versions (for admins and for users), and is a walk-through to help admins (or users) troubleshoot login issues. The admin version is available at http://support.microsoft.com/common/survey.aspx?scid=sw;en;3695&showpage=1, while the user version is available at http://support.microsoft.com/common/survey.aspx?scid=sw;en;3719&showpage=1. Based on our answers to the different scenario questions, the site proposes information or solution steps; for example, part of the resolution path covers the log-in issues of a company that has an enterprise subscription with a custom domain.

The Office 365 portal includes some information to help us monitor our Lync subscription. The Service Health menu gives a list of all the incidents and service issues of the past days, and the Reports menu provides statistics about our Office 365 consumption, including Lync.

There's more...

One interesting aspect of the Microsoft Lync Connectivity Analyzer Tool is that it enables testing for on-premises or Office 365 accounts, both from inside our network and from the Internet. This capability makes it a great tool to troubleshoot the configuration for Lync on the mobile devices that we have deployed in our internal network. This setup is usually complex, including hair-pinning and split DNS, so the diagnostics are important to quickly find misconfigured services.
See also

The Troubleshooting Lync Sign-in Errors (Administrators) page on Office.com at http://office.microsoft.com/en-001/communicator-help/troubleshooting-lync-sign-in-errors-administrators-HA102759022.aspx contains a list of messages related to sign-in errors, each with a suggested solution or a link to additional external resources.

Summary

In this article, we have learned about managing Lync 2013 and Lync Online, and about using Lync Online Remote PowerShell and the Lync Online cmdlets.

Further resources on this subject: Adding Dialogs [article], Innovation of Communication and Information Technologies [article], Choosing Lync 2013 Clients [article]


Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity

Richard Gall
17 Dec 2018
10 min read
Software infrastructure has, over the last decade or so, become a key concern for developers of all stripes. Long gone are narrowly defined job roles; thanks to DevOps, accountability for how code runs is now shared between teams on both the development and deployment sides. For anyone that's ever been involved in the messy frustration of internal code wars, this has been a welcome change. But as developers who have traditionally sat higher up the software stack dive deeper into the mechanics of deploying and maintaining software, for those of us working in system administration, DevOps, SRE, and security (the list is endless, apologies if I've forgotten you), the rise of distributed systems only brings further challenges.

Increased complexity not only opens up new points of failure and potential vulnerability; at a really basic level, it makes understanding what's actually going on difficult. And, essentially, this is what it will mean to work in software delivery and maintenance in 2019. Understanding what's happening, minimizing downtime, taking steps to mitigate security threats - it's a cliche, but finding strategies to become more responsive rather than reactive will be vital. Indeed, many responses to these kinds of questions have emerged this year. Chaos engineering and observability, for example, have both been gaining traction within the SRE world, and are slowly beginning to make an impact beyond that particular job role. But let's take a deeper look at what is really going to matter in the world of software infrastructure and architecture in 2019.

Observability and the rise of the service mesh

Before we decide what to actually do, it's essential to know what's actually going on. That seems obvious, but with increasing architectural complexity, it's getting harder. Observability is a term that's being widely thrown around as a response to this - but it has been met with some cynicism. For some developers, observability is just a sexed up way of talking about good old fashioned monitoring. But although the two concepts have a lot in common, observability is more of an approach, a design pattern maybe, rather than a specific activity.

This post from The New Stack explains the difference between monitoring and observability incredibly well: observability is "a measure of how well internal states of a system can be inferred from knowledge of its external outputs", which means observability is a property of a system rather than an activity.

There are a range of tools available to help you move towards better observability. Application management and logging tools like Splunk, Datadog, New Relic and Honeycomb can all be put to good use and are a good first step towards developing a more observable system.

Want to learn how to put monitoring tools to work? Check out some of these titles:

AWS Application Architecture and Management [Video]
Hands on Microservices Monitoring and Testing
Software Architecture with Spring 5.0

As well as those tools, if you're working with containers, Kubernetes has some really useful features that can help you more effectively monitor your container deployments. In May, Google announced Stackdriver Kubernetes Monitoring, which has seen much popularity across the community.

Master monitoring with Kubernetes.
Explore these titles:

Google Cloud Platform Administration
Mastering Kubernetes
Kubernetes in 7 Days [Video]

But there's something else emerging alongside observability which only appears to confirm its importance: the notion of a service mesh. The service mesh is essentially a tool that allows you to monitor all the various facets of your software infrastructure, helping you to manage everything from performance to security to reliability. There are a number of different options out there when it comes to service meshes - Istio, Linkerd, Conduit and Tetrate being the four definitive tools out there at the moment.

Learn more about service meshes inside these titles:

Microservices Development Cookbook
The Ultimate Openshift Bootcamp [Video]
Cloud Native Application Development with Java EE [Video]

Why is observability important?

Observability is important because it sets the foundations for many aspects of software management and design in various domains. Whether you're an SRE or a security engineer, having visibility on the way in which your software is working will be essential in 2019.

Chaos engineering

Observability lays the groundwork for many interesting new developments, chaos engineering being one of them. Based on the principle that modern, distributed software is inherently unreliable, chaos engineering 'stress tests' software systems. Using something called chaos experiments - adding something unexpected into your system, or pulling a piece of it out like a game of Jenga - chaos engineering helps you to better understand the way it will act in various situations. In turn, this allows you to make the necessary changes that can help ensure resiliency.

Chaos engineering is particularly important today simply because so many people, indeed, so many things, depend on software to actually work. From an eCommerce site to a self-driving car, if something isn't working properly there could be terrible consequences. It's not hard to see how chaos engineering fits alongside something like observability. To a certain extent, it's really another way of achieving observability: by running chaos experiments, you can draw out issues that may not be visible in usual scenarios.

However, the caveat is that chaos engineering isn't an easy thing to do. It requires a lot of confidence and engineering intelligence. Running experiments shouldn't be done carelessly - in many ways, the word 'chaos' is a bit of a misnomer, and all testing and experimentation on your software should follow a rigorous and almost scientific structure.

While chaos engineering isn't straightforward, there are tools and platforms available to make it more manageable. Gremlin is perhaps the best example, offering what they describe as 'resiliency-as-a-service'. But if you're not ready to go in for a fully fledged platform, it's worth looking at open source tools like Chaos Monkey and ChaosToolkit. A minimal sketch of what a chaos experiment looks like in practice follows the reading list below.

Want to learn how to put the principles of chaos engineering into practice? Check out this title:

Microservice Patterns and Best Practices

Learn the principles behind resiliency with these SRE titles:

Real-World SRE
Practical Site Reliability Engineering
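To make the structure of a chaos experiment concrete, here is a minimal sketch in Python. It is not tied to Gremlin, Chaos Monkey, or ChaosToolkit; the health endpoint and the container name are hypothetical placeholders, and the point is the three phases: check the steady state, inject a failure, verify recovery.

    # chaos_experiment.py - a minimal, tool-agnostic chaos experiment sketch
    import subprocess
    import time
    import urllib.request

    HEALTH_URL = "http://localhost:8080/health"   # hypothetical health endpoint

    def steady_state_ok():
        """The steady-state hypothesis: the service answers its health check."""
        try:
            return urllib.request.urlopen(HEALTH_URL, timeout=2).status == 200
        except Exception:
            return False

    # 1. confirm the hypothesis holds before doing anything disruptive
    assert steady_state_ok(), "system unhealthy before the experiment - abort"

    # 2. inject a controlled failure (here: restart a hypothetical dependency container)
    subprocess.run(["docker", "restart", "orders-db"], check=True)

    # 3. give the system time to recover, then verify the hypothesis again
    time.sleep(30)
    assert steady_state_ok(), "steady state not restored - a resiliency gap to investigate"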
And this shouldn't be surprising: testing is to be expected in a world where people are accountable for unpredictable systems. But what's particularly important is how testing is integrated. Whether it's for security or simply performance, we're gradually moving towards a world where testing is part of the build and deploy process, not completely isolated from it.

A diverse range of tools all hint at this move. Archery, for example, is a tool designed for both developers and security testers to better identify and assess security vulnerabilities at various stages of the development lifecycle. With a useful dashboard, it neatly ties into the wider trend of observability. ArchUnit (sounds similar, but completely unrelated) is a Java testing library that allows you to test a variety of different architectural components.

Similarly on the testing front, headless browsers continue to dominate. We've seen some of the major browsers bringing out headless modes, which will no doubt delight many developers. Headless browsers allow developers to run front-end tests on their code as if it were live and running in the browser. If this sounds a lot like PhantomJS, that's because it is actually quite a bit like PhantomJS. However, headless browsers do make the testing process much faster.

Smarter software purchasing and the move to hybrid cloud

The key trends we've seen in software architecture are about better understanding your software. But this level of insight and understanding doesn't matter if there's no alignment between key decision makers and purchasers. This can manifest itself in various ways. Essentially, it's a symptom of decision makers being disconnected from engineers buried deep in their software. This is by no means a new problem, but with cloud coming to define just about every aspect of software, it's now much easier for confusion to take hold.

The best thing about cloud is also the worst thing - the huge scope of opportunities it opens up. It makes decision making a minefield - which provider should we use? What parts of it do we need? What's going to be most cost effective? Of course, with hybrid cloud, there's a clear way of meeting those issues. But it's by no means a silver bullet. Whatever cloud architecture you have, strong leadership and stakeholder management are essential.

This is something that ThoughtWorks references in its most recent edition of Radar (November 2018). Identifying two trends it calls 'bounded buy' and 'risk commensurate vendor strategy', ThoughtWorks highlights how organizations can find their SaaS of choice shaping their strategy in its own image (bounded buy) or look to outsource business-critical applications, functions, or services. ThoughtWorks explains: "This trade-off has become apparent as the major cloud providers have expanded their range of service offerings. For example, using AWS Secret Management Service can speed up initial development and has the benefit of ecosystem integration, but it will also add more inertia if you ever need to migrate to a different cloud provider than it would if you had implemented, for example, Vault".

Relatedly, ThoughtWorks also identifies a problem with how organizations manage cost. In the report they discuss what they call 'run cost as architecture fitness function', which is really an elaborate way of saying: make sure you look at how much things cost. So, for example, don't use serverless blindly.
While it might look like a cheap option for smaller projects, your costs could quickly spiral and leave you spending more than you would if you ran it on a typical cloud server.

Get to grips with hybrid cloud:
Hybrid Cloud for Architects
Building Hybrid Clouds with Azure Stack

Become an effective software and solutions architect in 2019:
AWS Certified Solutions Architect - Associate Guide
Architecting Cloud Computing Solutions
Hands-On Cloud Solutions with Azure

Software complexity needs to be communicated in a simple language: money

In practice, this takes us all the way back to the beginning - it's simply the financial underbelly of observability. Performance, visibility, resilience - these matter because they directly impact the bottom line. That might sound obvious, but if you're trying to make the case, say, for implementing chaos engineering, or for using any other particular facet of a SaaS offering, communicating with other stakeholders in financial terms can give you buy-in and help to guarantee alignment. If 2019 should be about anything, it's getting closer to this fantasy of alignment. In the end, it will keep everyone happy - engineers and businesses alike.
Using Nginx as a Reverse Proxy
Packt
23 May 2011
7 min read
Nginx 1 Web Server Implementation Cookbook: over 100 recipes to master using the Nginx HTTP server and reverse proxy. (For more resources on Nginx, see here.)

Introduction

Nginx finds one of its most common applications acting as a reverse proxy for many sites. A reverse proxy is a type of proxy server that retrieves resources for a client from one or more servers. These resources are returned to the client as though they originated from the proxy server itself. Due to its event-driven architecture and C codebase, Nginx consumes significantly less CPU power and memory than many other better-known solutions out there. This article will deal with the usage of Nginx as a reverse proxy in various common scenarios. We will have a look at how we can set up a Rails application, set up load balancing, and also look at a caching setup using Nginx, which will potentially enhance the performance of your existing site without any codebase changes.

Using Nginx as a simple reverse proxy

Nginx in its simplest form can be used as a reverse proxy for any site; it acts as an intermediary layer for security, load distribution, caching, and compression purposes. In effect, it can potentially enhance the overall quality of the site for the end user without any change to the application source code, by distributing the load from incoming requests to multiple backend servers and by caching static as well as dynamic content.

How to do it...

You will need to first define proxy.conf, which will later be included in the main configuration of the reverse proxy that we are setting up:

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;

To use Nginx as a reverse proxy for a site running on a local port of the server, the following configuration will suffice:

server {
    listen 80;
    server_name example1.com;
    access_log /var/www/example1.com/log/nginx.access.log;
    error_log /var/www/example1.com/log/nginx_error.log debug;
    location / {
        include proxy.conf;
        proxy_pass http://127.0.0.1:8080;
    }
}

How it works...

In this recipe, Nginx simply acts as a proxy for the defined backend server, which is running on port 8080 of the server and can be any HTTP web application. Later in this article, other advanced recipes will have a look at how one can define more backend servers, and how we can set them up to respond to requests.

Setting up a Rails site using Nginx as a reverse proxy

In this recipe, we will set up a working Rails site and set up Nginx working on top of the application. This will assume that the reader has some knowledge of Rails and Thin. There are other ways of running Nginx and Rails as well, such as using Phusion Passenger.

How to do it...

This will require you to set up Thin first, then configure Thin for your application, and then configure Nginx. If you already have gems installed, the following command will install Thin; otherwise you will need to install it from source:

sudo gem install thin

Now you need to generate the Thin configuration. This will create a configuration in the /etc/thin directory:

sudo thin config -C /etc/thin/myapp.yml -c /var/rails/myapp --servers 5 -e production

Now you can start the Thin service. Depending on your operating system, the start-up command will vary.
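As a rough illustration only (the paths below and the presence of an init script are assumptions about a typical Linux install, not part of the original recipe), starting the Thin cluster might look like this:

# Start every Thin application configured under /etc/thin
sudo thin start --all /etc/thin

# Or, if "thin install" has generated an init script for your distribution:
sudo /etc/init.d/thin start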
Assuming that you have Nginx installed, you will need to add the following to the configuration file:

upstream thin_cluster {
    server unix:/tmp/thin.0.sock;
    server unix:/tmp/thin.1.sock;
    server unix:/tmp/thin.2.sock;
    server unix:/tmp/thin.3.sock;
    server unix:/tmp/thin.4.sock;
}

server {
    listen 80;
    server_name www.example1.com;
    root /var/www.example1.com/public;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        try_files $uri $uri/index.html $uri.html @thin;
    }

    location @thin {
        include proxy.conf;
        proxy_pass http://thin_cluster;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

How it works...

This is a fairly simple Rails stack, where we basically configure and run five upstream Thin servers which interact with Nginx through socket connections. There are a few rewrites that ensure that Nginx serves the static files and that all dynamic requests are processed by the Rails backend. It can also be seen how we set the proxy headers correctly to ensure that the client IP is forwarded correctly to the Rails application. It is important for a lot of applications to be able to access the client IP to show geo-located information, and logging this IP can be useful in identifying if geography is a problem when the site is not working properly for specific clients.

Setting up correct reverse proxy timeouts

In this section we will set up correct reverse proxy timeouts, which will affect your users' interaction when your backend application is unable to respond to the client's request. In such a case, it is advisable to set up some sensible timeout pages so that the user can understand that further refreshing may only aggravate the issues on the web application.

How to do it...

You will first need to set up proxy.conf, which will later be included in the configuration:

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;

Reverse proxy timeouts are some fairly simple flags that we need to set up in the Nginx configuration, as in the following example:

server {
    listen 80;
    server_name example1.com;
    access_log /var/www/example1.com/log/nginx.access.log;
    error_log /var/www/example1.com/log/nginx_error.log debug;

    # set your default location
    location / {
        include proxy.conf;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        proxy_pass http://127.0.0.1:8080;
    }
}

How it works...

In the preceding configuration we have set two variables: proxy_connect_timeout defines how long Nginx waits while establishing a connection to the backend server, and proxy_read_timeout defines how long Nginx waits between two successive read operations on the backend's response. Both are raised to 120 seconds here, so a slow backend is tolerated for longer before the client is shown an error page.

Setting up caching on the reverse proxy

In a setup where Nginx acts as the layer between the client and the backend web application, it is clear that caching can be one of the benefits that can be achieved. In this recipe, we will have a look at setting up caching for any site to which Nginx is acting as a reverse proxy. Due to its extremely small footprint and modular architecture, Nginx has become quite the Swiss Army knife of the modern web stack.

How to do it...
This example configuration shows how we can use caching when utilizing Nginx as a reverse proxy web server:

http {
    proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
    proxy_temp_path /var/www/cache/tmp;
    ...
    server {
        listen 80;
        server_name example1.com;
        access_log /var/www/example1.com/log/nginx.access.log;
        error_log /var/www/example1.com/log/nginx_error.log debug;

        # set your default location
        location / {
            include proxy.conf;
            proxy_pass http://127.0.0.1:8080/;
            proxy_cache my-cache;
            proxy_cache_valid 200 302 60m;
            proxy_cache_valid 404 1m;
        }
    }
}

How it works...

This configuration implements a simple cache with a maximum size of 1000 MB; it keeps all 200 and 302 responses in the cache for 60 minutes and 404 responses for 1 minute. There is an initial directive that sets up the cache path on initialization; in the further directives we basically configure the location that is going to be cached. It is possible to actually set up more than one cache path for multiple locations.

There's more...

This was a relatively small show of what can be achieved with the caching aspect of the proxy module. Here are some more directives that can be really useful in optimizing and making your stack faster and more efficient:
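The list of additional directives did not survive the formatting of this article, so as a hedged illustration only, the following are real proxy-module directives that are commonly combined with the cache defined above (the values shown are assumptions, not recommendations from the original recipe):

location / {
    include proxy.conf;
    proxy_pass http://127.0.0.1:8080/;
    proxy_cache my-cache;

    # Serve a stale cached copy while the backend is failing or timing out
    proxy_cache_use_stale error timeout http_500 http_502 http_503;

    # Control the key under which responses are stored
    proxy_cache_key "$scheme$host$request_uri";

    # Skip the cache entirely for some requests (the cookie name is an assumption)
    proxy_cache_bypass $cookie_nocache;
}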
Setting Up a Network Backup Server with Bacula
Packt
19 Sep 2016
12 min read
In this article by Timothy Boronczyk,the author of the book CentOS 7 Server Management Cookbook,we'll discuss how to set up a network backup server with Bacula. The fact of the matter is that we are living in a world that is becoming increasingly dependent on data. Also, from accidental deletion to a catastrophic hard drive failure, there are many threats to the safety of your data. The more important your data is and the more difficult it is to recreate if it were lost, the more important it is to have backups. So, this article shows you how you can set up a backup server using Bacula and how to configure other systems on your network to back up their data to it. (For more resources related to this topic, see here.) Getting ready This article requires at least two CentOS systems with working network connections. The first system is the local system which we'll assume has the hostname benito and the IP address 192.168.56.41. The second system is the backup server. You'll need administrative access on both systems, either by logging in with the root account or through the use of sudo. How to do it… Perform the following steps on your local system to install and configure the Bacula file daemon: Install the bacula-client package. yum install bacula-client Open the file daemon's configuration file with your text editor. vi /etc/bacula/bacula-fd.conf In the FileDaemon resource, update the value of the Name directive to reflect the system's hostname with the suffix -fd. FileDaemon {   Name = benito-fd ... } Save the changes and close the file. Start the file daemon and enable it to start when the system reboots. systemctl start bacula-fd.service systemctl enable bacula-fd.service Open the firewall to allow TCP traffic through to port 9102. firewall-cmd --zone=public --permanent --add-port=9102/tcp firewall-cmd --reload Repeat steps 1-6 on each system that will be backed up. Install the bacula-console, bacula-director, bacula-storage, and bacula-client packages. yum install bacula-console bacula-director bacula-storage bacula-client Re-link the catalog library to use SQLite database storage. alternatives --config libbaccats.so Type 2 when asked to provide the selection number. Create the SQLite database file and import the table schema. /usr/libexec/bacula/create_sqlite3_database /usr/libexec/bacula/make_sqlite3_tables Open the director's configuration file with your text editor. vi /etc/bacula/bacula-dir.conf In the Job resource where Name has the value BackupClient1, change the value of the Name directive to reflect one of the local systems. Then add a Client directive with a value that matches that system's FileDaemonName. Job {   Name = "BackupBenito"   Client = benito-fd   JobDefs = "DefaultJob" } Duplicate the Job resource and update its directive values as necessary so that there is a Job resource defined for each system to be backed up. For each system that will be backed up, duplicate the Client resource where the Name directive is set to bacula-fd. In the copied resource, update the Name and Address directives to identify that system. Client {   Name = bacula-fd   Address = localhost   ... } Client {   Name = benito-fd   Address = 192.168.56.41   ... } Client {   Name = javier-fd   Address = 192.168.56.42   ... } Save your changes and close the file. Open the storage daemon's configuration file. vi /etc/bacula/bacula-sd.conf In the Device resource where Name has the value FileStorage, change the value of the Archive Device directive to /bacula. 
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /bacula
  ...

Save the update and close the file. Create the /bacula directory and assign it the proper ownership.

mkdir /bacula
chown bacula:bacula /bacula

If you have SELinux enabled, reset the security context on the new directory.

restorecon -Rv /bacula

Start the director and storage daemons and enable them to start when the system reboots.

systemctl start bacula-dir.service bacula-sd.service bacula-fd.service
systemctl enable bacula-dir.service bacula-sd.service bacula-fd.service

Open the firewall to allow TCP traffic through to ports 9101-9103.

firewall-cmd --zone=public --permanent --add-port=9101-9103/tcp
firewall-cmd --reload

Launch Bacula's console interface.

bconsole

Enter label to create a destination for the backup. When prompted for the volume name, use Volume0001 or a similar value. When prompted for the pool, select the File pool.

label

Enter quit to leave the console interface.

How it works…

The suite's distributed architecture and the amount of flexibility it offers us can make configuring Bacula a daunting task. However, once you have everything up and running, you'll be able to rest easy knowing that your data is safe from disasters and accidents.

Bacula is broken up into several components. In this article, our efforts centered on the following three daemons: the director, the file daemon, and the storage daemon. The file daemon is installed on each local system to be backed up and listens for connections from the director. The director connects to each file daemon as scheduled and tells it which files to back up and where to copy them to (the storage daemon). This allows us to perform all scheduling at a central location. The storage daemon then receives the data and writes it to the backup medium, for example, a disk or tape drive.

On the local system, we installed the file daemon with the bacula-client package and edited the file daemon's configuration file at /etc/bacula/bacula-fd.conf to specify the name of the process. The convention is to add the suffix -fd to the system's hostname.

FileDaemon {
  Name = benito-fd
  FDPort = 9102
  WorkingDirectory = /var/spool/bacula
  Pid Directory = /var/run
  Maximum Concurrent Jobs = 20
}

On the backup server, we installed the bacula-console, bacula-director, bacula-storage, and bacula-client packages. This gives us the director and storage daemon, and another file daemon. This file daemon's purpose is to back up Bacula's catalog. Bacula maintains a database of metadata about previous backup jobs called the catalog, which can be managed by MySQL, PostgreSQL, or SQLite. To support multiple databases, Bacula is written so that all of its database access routines are contained in shared libraries, with a different library for each database. When Bacula wants to interact with a database, it does so through libbaccats.so, a fake library that is nothing more than a symbolic link pointing to one of the specific database libraries. This lets Bacula support different databases without requiring us to recompile its source code. To create the symbolic link, we used alternatives and selected the real library that we want to use. I chose SQLite since it's an embedded database library and doesn't require additional services.

Next, we needed to initialize the database schema using scripts that come with Bacula. If you want to use MySQL, you'll need to create a dedicated MySQL user for Bacula to use and then initialize the schema with the following scripts instead.
You'll also need to review Bacula's configuration files to provide Bacula with the required MySQL credentials.

/usr/libexec/bacula/grant_mysql_privileges
/usr/libexec/bacula/create_mysql_database
/usr/libexec/bacula/make_mysql_tables

Different resources are defined in the director's configuration file at /etc/bacula/bacula-dir.conf, many of which consist not only of their own values but also of references to other resources. For example, the FileSet resource specifies which files are included or excluded in backups and restores, while a Schedule resource specifies when backups should be made. A JobDef resource can contain various configuration directives that are common to multiple backup jobs and also reference particular FileSet and Schedule resources. Client resources identify the names and addresses of systems running file daemons, and a Job resource will pull together a JobDef and Client resource to define the backup or restore task for a particular system. Some resources define things at a more granular level and are used as building blocks to define other resources. This allows us to create complex definitions in a flexible manner.

The default resource definitions outline basic backup and restore jobs that are sufficient for this article (you'll want to study the configuration and see how the different resources fit together so that you can tweak them to better suit your needs). We customized the existing backup Job resource by changing its name and client. Then, we customized the Client resource by changing its name and address to point to a specific system running a file daemon. A pair of Job and Client resources can be duplicated for each additional system you want to back up. However, notice that I left the default Client resource that defines bacula-fd for the localhost. This is for the file daemon that's local to the backup server and will be the target for things such as restore jobs and catalog backups.

Job {
  Name = "BackupBenito"
  Client = benito-fd
  JobDefs = "DefaultJob"
}

Job {
  Name = "BackupJavier"
  Client = javier-fd
  JobDefs = "DefaultJob"
}

Client {
  Name = bacula-fd
  Address = localhost
  ...
}

Client {
  Name = benito-fd
  Address = 192.168.56.41
  ...
}

Client {
  Name = javier-fd
  Address = 192.168.56.42
  ...
}

To complete the setup, we labeled a backup volume. This task, as with most others, is performed through bconsole, a console interface to the Bacula director. We used the label command to specify a label for the backup volume, and when prompted for the pool, we assigned the labeled volume to the File pool. In a way very similar to how LVM works, an individual device or storage unit is allocated as a volume, and the volumes are grouped into storage pools. If a pool contains two volumes backed by tape drives, for example, and one of the drives is full, the storage daemon will write the data to the tape that has space available. Even though in our configuration we're storing the backup to disk, we still need to create a volume as the destination for data to be written to.

There's more...

At this point, you should consider which backup strategy works best for you. A full backup is a complete copy of your data, a differential backup captures only the files that have changed since the last full backup, and an incremental backup copies the files that have changed since the last backup (regardless of the type of backup).
Commonly, administrators employ a previous combination, perhaps making a full backup at the start of the week and then differential or incremental backups each day thereafter. This saves storage space because the differential and incremental backups are not only smaller but also convenient when the need to restore a file arises because a limited number of backups need to be searched for the file. Another consideration is the expected size of each backup and how long it will take for the backup to run to completion. Full backups obviously take longer to run, and in an office with 9-5 working hours, Monday through Friday and it may not be possible to run a full backup during the evenings. Performing a full backup on Fridays gives the backup time over the weekend to run. Smaller, incremental backups can be performed on the other days when time is lesser. Yet another point that is important in your backup strategy is, how long the backups will be kept and where they will be kept. A year's worth of backups is of no use if your office burns down and they were sitting in the office's IT closet. At one employer, we kept the last full back up and last day's incremental on site;they were then duplicated to tape and stored off site. Regardless of the strategy you choose to implement, your backups are only as good as your ability to restore data from them. You should periodically test your backups to make sure you can restore your files. To run a backup job on demand, enter run in bconsole. You'll be prompted with a menu to select one of the current configured jobs. You'll then be presented with the job's options, such as what level of backup will be performed (full, incremental, or differential), it's priority, and when it will run. You can type yes or no to accept or cancel it or mod to modify a parameter. Once accepted, the job will be queued and assigned a job ID. To restore files from a backup, use the restore command. You'll be presented with a list of options allowing you to specify which backup the desired files will be retrieved from. Depending on your selection, the prompts will be different. Bacula's prompts are rather clear, so read them carefully and they will guide you through the process. Apart from the run and restore commands, another useful command is status. It allows you to see the current status of the Bacula components, if there are any jobs currently running, and which jobs have completed. A full list of commands can be retrieved by typing help in bconsole. See also For more information on working with Bacula, refer to the following resources: Bacula documentation (http://blog.bacula.org/documentation/) How to use Bacula on CentOS 7 (http://www.digitalocean.com/community/tutorial_series/how-to-use-bacula-on-centos-7) Bacula Web (a web-based reporting and monitoring tool for Bacula) (http://www.bacula-web.org/) Summary In this article, we discussed how we can set up a backup server using Bacula and how to configure other systems on our network to back up our data to it. Resources for Article: Further resources on this subject: Jenkins 2.0: The impetus for DevOps Movement [article] Gearing Up for Bootstrap 4 [article] Introducing Penetration Testing [article]
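Returning to the console commands described in the There's more... section above, an interactive bconsole session might look roughly like the following sketch (prompts and output are abbreviated; the job and volume names are the ones configured earlier in this article):

bconsole
*status director        # shows scheduled, running, and completed jobs
*run                    # pick "BackupBenito" from the menu, review the options, answer "yes"
*restore                # choose a backup, mark the files you need with "mark", then "done"
*quit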
Creating a VB.NET application with EnterpriseDB
Packt
27 Oct 2009
5 min read
Overview of the tutorial You will begin by creating an ODBC datasource for accessing data on the Postgres server. Using the User DSN created you will be connecting to the Postgres server data. You will derive a dataset from the table which you will be using to display in a datagrid view on a form in a windows application. We start with the Categories table that was migrated from MS SQL Server 2008. This table with all of its columns is shown in the Postgres studio in the next figure. Creating the ODBC DSN Navigate to Start | Control Panel | Administrative Tools | Data Sources (ODBC) to bring up the ODBC Database Manager window. Click on Add.... In the Create New Data Source scroll down to EnterpriseDB 8.2 under the list heading Name as shown. Click Finish. The EnterpriseDB ODBC Driver page gets displayed as shown. Accept the default name for the Data Source(DSN) or, if you prefer, change the name. Here the default is accepted. The Database, Server, User Name, Port and the Password should all be available to you [Read article 1]. If you click on the option button Datasource you display a window with two pages as shown. Make no changes to the pages and accept defaults but make sure you review the pages. Click OK and you will be back in the EnterpriseDB Driver window. If you click on the button Global the Global Settings window gets displayed (not shown). These are logging options as the page describes. Click Cancel to the Global Settings window. Click on the Test button and verify that the connection was successful. Click on the Save button and save the DSN under the list heading User DSN. The DSN EnterpriseDB enters the list of DSN's created as shown here. Create a Windows Forms application and Establish a connection to Postgres Open Visual Studio 2008 from its shortcut. Click File | New | Project... and open the New Project window. Choose a windows forms project for Framework 2.0. Besides Framework 2.0 you can also create projects in other versions in Visual Studio 2008. In Server Explorer window double click the Connection icon as shown. This brings up the Add Connection window as shown. Click on Change... button to display the Change Data Source window. Scroll up and select Microsoft ODBC Data Source as shown. Click OK. Click on the drop-down handle for the option Use user or system data source name and choose EnterpriseDB you created earlier as shown. Insert User Name and Password and click on the Test Connection button. You should get a connection succeeded message as shown. Click OK on the message screen as well as to the add connection window. The connection appears in the Visual Studio 2008 in the Server Explorer as shown.     Displaying data from the table Drag and drop a DataGridView under Data in the Toolbox onto the form as shown (shown with SmartTasks handle clicked) Click on Choose Data Source handle to display a drop-down menu as shown below. Click on Add Project Data Source at the bottom. This displays the Choose a Data Source Type page of the Data Source Configuration Wizard. Accept the default datasource type and click Next. In the Choose Your Data Connection page of the wizard choose the ODBC.localhost.PGNorthwind as shown in the drop-down list. Click Next in the page that gets displayed and accept the default to save the connection string to the application configuration file as shown. Click Next. In the Choose Your Database Objects page, expand Tables and choose the categories table as shown. The default Dataset name can be changed. Herein the default is accepted. 
Click Finish. The DatagridView on Form1 gets displayed with two columns and a row but can be extended to the right by using drag handles to reveal all the four columns as shown. Three other objects PGNorthwindDataSet, CategoriesBindingSource, and CategoriesTableAdapter are also added to the control tray as shown. The PGNorthwindDataset.xsd file gets added to the project. Now build the project and run. The Form 1 gets displayed with the data from the PGNorthwind database as shown. In the design view of the form few more tasks have been added as shown. Here you can Add Query... to filter the data displayed; Edit the details of the columns and you can choose to add a column if you had chosen fewer columns from the original table. For example, Edit Column brings up its editor as shown where you can make changes to the styles if you desire to do so. The next figure shows slightly modified form by editing the columns and resizing the cell heights as shown. Summary A step-by-step procedure was described to display the data stored in a table in the Postgres database in a Windows Forms application. Procedure to create an ODBC DSN was also described. Using this ODBC DSN a connection was established to the Postgres server in Visual Studio 2008.
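As a rough, code-only alternative to the designer-driven steps above (the DSN and table name are the ones assumed earlier in this article, while the user name and password are placeholders you should replace), the same grid could be filled by hand roughly like this:

Imports System.Data
Imports System.Data.Odbc

Public Class Form1
    Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) _
            Handles MyBase.Load
        ' Connect through the EnterpriseDB ODBC DSN created earlier
        Using conn As New OdbcConnection("DSN=EnterpriseDB;UID=postgres;PWD=secret")
            Dim adapter As New OdbcDataAdapter("SELECT * FROM categories", conn)
            Dim table As New DataTable("categories")
            adapter.Fill(table)              ' opens and closes the connection automatically
            DataGridView1.DataSource = table ' show the rows in the grid on the form
        End Using
    End Sub
End Class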
Migration from Apache to Lighttpd
Packt
22 Oct 2009
7 min read
Now, starting from a working Apache installation, what can Lighttpd offer us?

Improved performance for most cases (as in more hits per second)
Reduced CPU time and memory usage
Improved security

Of course, the move to Lighttpd is not a small one, especially if our Apache configuration makes use of its many features. Systems tied into Apache as a module may make the move hard or even impossible without porting the module to a Lighttpd module or moving the functionality into CGI programs, if possible. We can ease the pain by moving in small steps. The following descriptions assume that we have one Apache instance running on one hardware instance. But we can scale the method by repeating it for every hardware instance.

When not to migrate: Before we start this journey, we need to know that our hardware and operating systems support Lighttpd, that we have root access (or access to someone who has), and that the system has enough space for another Lighttpd installation (yes, I know, Lighttpd should reduce space concerns, but I have seen Apache installations munching away entire RAID arrays). Probably, this only makes sense if we plan on moving a big percentage of traffic to Lighttpd. We also might make extensive use of Apache modules, which means a complete migration would involve finding or writing suitable substitutes for Lighttpd.

Adding Lighttpd to the Mix

Install Lighttpd on the system that Apache runs on. Find an unused port (refer to a port scanner if needed) to set server.port to. For example, if port 4080 is unused on our system, we would look for server.port in our Lighttpd configuration and change it to:

server.port = 4080

If we want to use SSL, we should change all occurrences of the port 443 to another free port, say 4443. We assume our Apache is answering requests on HTTP port 80. Now let's use this Lighttpd instance as a proxy for our Apache by adding the following configuration:

server.modules = (
  # ...
  "mod_proxy",
  # ...
)
# ...
proxy.server = ( "" =>            # proxy everything
  ( ( "host" => "127.0.0.1",      # localhost
      "port" => 80 ) )
)

This tells our Lighttpd to proxy all requests to the server that answers on localhost, port 80, which happens to be our Apache server. Now, when we start our Lighttpd and point our browser to http://localhost:4080/, we should be able to see the same thing that our Apache is returning.

What is a proxy? A proxy stands in front of another object, simulating the object by relaying all requests to it. A proxy can change requests on the fly, filter requests, and so on. In our case, Lighttpd is the web server to the outside, whilst Apache will still get all requests as usual.

Excursion: mod_proxy

mod_proxy is the module that allows Lighttpd to relay requests to another web server. It is not to be confused with mod_proxy_core (of Lighttpd 1.5.0), which provides a basis for other interfaces such as CGI. Usually, we want to proxy only a specific subset of requests; for example, we might want to proxy requests for Java server pages to a Tomcat server. This could be done with the following proxy directive:

proxy.server = ( ".jsp" =>
  ( ( "host" => "127.0.0.1",
      "port" => 8080 ) )          # given our Tomcat is on port 8080
)

Thus the Tomcat server only serves JSPs, which is what it was built to do, whilst our Lighttpd does the rest. Or we might have another server which we want to include in our web presence at some given directory:

proxy.server = ( "/somepath" =>
  ( ( "host" => "127.0.0.1",
      "port" => 8080 ) )
)

Assuming the server is on port 8080, this will do the trick.
Now http://localhost/somepath/index.html will be the same as http://localhost:8080/index.html.

Reducing Apache Load

Note that, as with most Lighttpd directives, proxy.server can be moved into a selector, thereby reducing its reach. This way, we can reduce the set of files Apache will have to touch in a phased manner. For example, YouTube™ uses Lighttpd to serve its videos. Usually, we want to make Lighttpd serve static files such as images, CSS, and JavaScript, leaving Apache to serve the dynamically generated pages. Now, we have two options: we can either filter the extensions we want Apache to handle, or we can filter the addresses we want Lighttpd to serve without asking Apache. Actually, the first can be done in two ways. Assuming we want to give all addresses ending with .cgi and .php to Apache, we could either use the matching of proxy.server:

proxy.server = (
  ".cgi" => ( ( "host" => "127.0.0.1", "port" => 8080 ) ),
  ".php" => ( ( "host" => "127.0.0.1", "port" => 8080 ) )
)

or match by selector:

$HTTP["url"] =~ "\.(cgi|php)$" {
  proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => 8080 ) ) )
}

The second way also allows negative filtering and filtering by regexp — just use !~ instead of =~.

mod_perl, mod_php, and mod_python

There are no Lighttpd modules to embed scripting languages into Lighttpd (with the exception of mod_magnet, which embeds Lua) because this is simply not the Lighttpd way of doing things. Instead, we have the CGI, SCGI, and FastCGI interfaces to outsource this work to the respective interpreters. Most mod_perl scripts are easily converted to FastCGI using CGI::Fast. Usually, our mod_perl script will look a lot like the following script:

use CGI;
my $q = CGI->new;
initialize();        # this might need to be done only once
process_query($q);   # this should be done per request
print response($q);  # this, too

Using the easiest way to convert to FastCGI:

use CGI::Fast;                     # instead of CGI
while (my $q = CGI::Fast->new) {   # get requests in a while-loop
    initialize();
    process_query($q);
    print response($q);
}

If this runs, we may try to put the initialize() call outside of the loop to make our script run even faster than under mod_perl. However, this is just the basic case. There are mod_perl scripts that manipulate the Apache core or use special hooks, so these scripts can get a little more complicated to migrate.

Migrating from mod_php to php-fcgi is easier — we do not need to change the scripts, just the configuration. This means that we do not get the benefits of an obvious request loop, but we can work around that by setting some global variables only if they are not already set. The security benefit is obvious. Even for Apache, there are some alternatives to mod_php which try to provide more security, often with bad performance implications.

mod_python can be a little more complicated, because Apache calls out to the Python functions directly, converting form fields to function arguments on the fly. If we are lucky, our Python scripts could implement the WSGI (Web Server Gateway Interface). In this case, we can just use a WSGI-FastCGI wrapper. Looking on the Web, I already found two: one standalone (http://svn.saddi.com/py-lib/trunk/fcgi.py), and one that is a part of the PEAK project (http://peak.telecommunity.com/DevCenter/FrontPage). Otherwise, Python usually has excellent support for SCGI. As with mod_perl, there are some internals that have to be moved into the configuration (for example, dynamic 404 pages; the directive for this is server.error-handler-404, which can also point to a CGI script).
However, for basic scripts, we can use SCGI (either from http://www.mems-exchange.org/software/scgi/ or as a python-only version from http://www.cherokee-project.com/download/pyscgi/). We also need to change import cgi to import scgi and change CGIHandler and CGIServer to SCGIHandler and SCGIServer, respectively.
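To make the mod_php to php-fcgi migration mentioned above a little more concrete, here is a minimal sketch of the Lighttpd side (the php-cgi path and socket location are assumptions for a typical Linux install, not values taken from the article):

server.modules += ( "mod_fastcgi" )

fastcgi.server = ( ".php" =>
  ( ( "bin-path"  => "/usr/bin/php-cgi",   # assumed location of the PHP CGI binary
      "socket"    => "/tmp/php.socket",
      "max-procs" => 2 ) )
)

The existing PHP scripts stay untouched; only this configuration changes, which is exactly the point made above.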
Configuring Apache and Nginx
Packt
19 Jul 2010
8 min read
(For more resources on Nginx, see here.) There are basically two main parts involved in the configuration, one relating to Apache and one relating to Nginx. Note that while we have chosen to describe the process for Apache in particular, this method can be applied to any other HTTP server. The only point that differs is the exact configuration sections and directives that you will have to edit. Otherwise, the principle of reverse-proxy can be applied, regardless of the server software you are using. Reconfiguring Apache There are two main aspects of your Apache configuration that will need to be edited in order to allow both Apache and Nginx to work together at the same time. But let us first clarify where we are coming from, and what we are going towards. Configuration overview At this point, you probably have the following architecture set up on your server: A web server application running on port 80, such as Apache A dynamic server-side script processing application such as PHP, communicating with your web server via CGI, FastCGI, or as a server module The new configuration that we are going towards will resemble the following: Nginx running on port 80 Apache or another web server running on a different port, accepting requests coming from local sockets only The script processing application configuration will remain unchanged As you can tell, only two main configuration changes will be applied to Apache as well as the other web server that you are running. Firstly, change the port number in order to avoid conflicts with Nginx, which will then be running as the frontend server. Secondly, (although this is optional) you may want to disallow requests coming from the outside and only allow requests forwarded by Nginx. Both configuration steps are detailed in the next sections. Resetting the port number Depending on how your web server was set up (manual build, automatic configuration from server panel managers such as cPanel, Plesk, and so on) you may find yourself with a lot of configuration files to edit. The main configuration file is often found in /etc/httpd/conf/ or /etc/apache2/, and there might be more depending on how your configuration is structured. Some server panel managers create extra configuration files for each virtual host. There are three main elements you need to replace in your Apache configuration: The Listen directive is set to listen on port 80 by default. You will have to replace that port by another such as 8080. This directive is usually found in the main configuration file. You must make sure that the following configuration directive is present in the main configuration file: NameVirtualHost A.B.C.D:8080, where A.B.C.D is the IP address of the main network interface on which server communications go through. The port you just selected needs to be reported in all your virtual host configuration sections, as described below. The virtual host sections must be transformed from the following template <VirtualHost A.B.C.D:80> ServerName example.com ServerAlias www.example.com [...]</VirtualHost> to the following: <VirtualHost A.B.C.D:8080> ServerName example.com:8080 ServerAlias www.example.com [...]</VirtualHost> In this example, A.B.C.D is the IP address of the virtual host and example.com is the virtual host's name. The port must be edited on the first two lines. Accepting local requests only There are many ways you can restrict Apache to accept only local requests, denying access to the outside world. But first, why would you want to do that? 
As an extra layer positioned between the client and Apache, Nginx provides a certain comfort in terms of security. Visitors no longer have direct access to Apache, which decreases the potential risk regarding all the security issues the web server may have. Globally, it's not necessarily a bad idea to only allow access to your frontend server.

The first method consists of changing the listening network interface in the main configuration file. The Listen directive of Apache lets you specify a port, but also an IP address, although, by default, no IP address is selected, resulting in communications coming from all interfaces. All you have to do is replace the Listen 8080 directive by Listen 127.0.0.1:8080; Apache should then only listen on the local IP address. If you do not host Apache on the same server, you will need to specify the IP address of the network interface that can communicate with the server hosting Nginx.

The second alternative is to establish per-virtual-host restrictions:

<VirtualHost A.B.C.D:8080>
    ServerName example.com:8080
    ServerAlias www.example.com
    [...]
    Order deny,allow
    Allow from 127.0.0.1
    Allow from 192.168.0.1
    Deny from all
</VirtualHost>

Using the allow and deny Apache directives, you are able to restrict the allowed IP addresses accessing your virtual hosts. This allows for a finer configuration, which can be useful in case some of your websites cannot be fully served by Nginx. Once all your changes are done, don't forget to reload the server to make sure the new configuration is applied, for example with service httpd reload or /etc/init.d/httpd reload.

Configuring Nginx

There are only a couple of simple steps to establish a working configuration of Nginx, although it can be tweaked more accurately as seen in the next section.

Enabling proxy options

The first step is to enable proxying of requests from your location blocks. Since the proxy_pass directive cannot be placed at the http or server level, you need to include it in every single place that you want to be forwarded. Usually, a location / { ... } fallback block suffices, since it encompasses all requests except those that match location blocks containing a break statement. Here is a simple example using a single static backend hosted on the same server:

server {
    server_name .example.com;
    root /home/example.com/www;
    [...]
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

In the following example, we make use of an upstream block, allowing us to specify multiple servers:

upstream apache {
    server 192.168.0.1:80;
    server 192.168.0.2:80;
    server 192.168.0.3:80 weight=2;
    server 192.168.0.4:80 backup;
}

server {
    server_name .example.com;
    root /home/example.com/www;
    [...]
    location / {
        proxy_pass http://apache;
    }
}

So far, with such a configuration, all requests are proxied to the backend server; we are now going to separate the content into two categories:

Dynamic files: Files that require processing before being sent to the client, such as PHP, Perl, and Ruby scripts, will be served by Apache
Static files: All other content that does not require additional processing, such as images, CSS files, static HTML files, and media, will be served directly by Nginx

We thus have to separate the content somehow, to be provided by either server.

Separating content

In order to establish this separation, we can simply use two different location blocks: one that will match the dynamic file extensions, and another one encompassing all the other files.
This example passes requests for .php files to the proxy: server { server_name .example.com; root /home/example.com/www; [...] location ~* .php.$ { # Proxy all requests with an URI ending with .php* # (includes PHP, PHP3, PHP4, PHP5...) proxy_pass http://127.0.0.1:8080; } location / { # Your other options here for static content # for example cache control, alias... expires 30d; }} This method, although simple, will cause trouble with websites using URL rewriting. Most Web 2.0 websites now use links that hide file extensions such as http://example.com/articles/us-economy-strengthens/; some even replace file extensions with links resembling the following: http://example.com/useconomy- strengthens.html. When building a reverse-proxy configuration, you have two options: Port your Apache rewrite rules to Nginx (usually found in the .htaccess file at the root of the website), in order for Nginx to know the actual file extension of the request and proxy it to Apache correctly. If you do not wish to port your Apache rewrite rules, the default behavior shown by Nginx is to return 404 errors for such requests. However, you can alter this behavior in multiple ways, for example, by handling 404 requests with the error_page directive or by testing the existence of files before serving them. Both solutions are detailed below. Here is an implementation of this mechanism, using the error_page directive : server { server_name .example.com; root /home/example.com/www; [...] location / { # Your static files are served here expires 30d; [...] # For 404 errors, submit the query to the @proxy # named location block error_page 404 @proxy; } location @proxy { proxy_pass http://127.0.0.1:8080; }} Alternatively, making use of the if directive from the Rewrite module: server { server_name .example.com; root /home/example.com/www; [...] location / { # If the requested file extension ends with .php, # forward the query to Apache if ($request_filename ~* .php.$) { break; # prevents further rewrites proxy_pass http://127.0.0.1:8080; } # If the requested file does not exist, # forward the query to Apache if (!-f $request_filename) { break; # prevents further rewrites proxy_pass http://127.0.0.1:8080; } # Your static files are served here expires 30d; }} There is no real performance difference between both solutions, as they will transfer the same amount of requests to the backend server. You should work on porting your Apache rewrite rules to Nginx if you are looking to get optimal performance.
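As a hedged illustration of what porting a rewrite rule can look like (the pretty-URL pattern below is an invented example, not one taken from the article), an Apache rule and its Nginx counterpart might be:

# Apache .htaccess (example rule, shown here as a comment)
#   RewriteRule ^articles/([a-z0-9-]+)/?$ article.php?slug=$1 [L]

# Equivalent rewrite in the Nginx server block; the rewritten URI ends in .php,
# so it is then matched by the proxy location and forwarded to Apache
rewrite ^/articles/([a-z0-9-]+)/?$ /article.php?slug=$1 last;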
Copying a Database from SQL Server 2005 to SQL Server 2008 using the Copy Database Wizard
Packt
24 Oct 2009
3 min read
(For more resources on Microsoft, see here.)

Using the Copy Database Wizard, you will be creating an SQL Server Integration Services package which will be executed by an SQL Server Agent job. It is therefore necessary to set up the SQL Server Agent with a proxy, which you need to create, that can execute the package. Since the proxy needs a credential to work outside the SQL Server 2008 boundary, you need to create a Credential and a Principal who has the permissions. Creating a credential has been described elsewhere. The main steps in migration using this route are:

Create a Credential
Create an SQL Server Agent Proxy to work with SSIS Package execution
Create the job using the Copy Database Wizard

Creating the Proxy

In SQL Server 2008 Management Studio, expand the SQL Server Agent node and then expand the Proxies node. You can create proxies for various actions that you may undertake. In the present case the Copy Database Wizard creates an Integration Services package, and therefore a proxy is needed for this. Right-click the SSIS Package Execution folder as shown in the next figure. Click on New Proxy.... This opens the New Proxy Account window as shown. Here Proxy name is the one you provide, which will be needed in the Copy Database Wizard. Credential name is the one you created earlier, which uses a database login name and password. Description is optional information to keep track of the proxy. As seen in the previous figure, you can create different proxies to deal with different activities. In the present case a proxy will be created for Integration Services package execution as shown in the next figure. The name CopyPubx has been created as shown.

Now click on the ellipsis button next to the Credential name; this brings up the Select Credential window as shown. Now click on the Browse... button. This brings up the Browse for Objects window displaying the credential you created earlier. Place a checkmark as shown and click on the OK button. The [mysorian] credential is entered into the Select Credential window. Click on the OK button on the Select Credential window. The credential name gets entered into the New Proxy Account's Credential name. The optional description can be anything suitable, as shown. Place a checkmark on SQL Server Integration Services Package as shown and click on Principals. Since the present proxy is going to be used by the sysadmin, there is no need to add it specifically. Click on the OK button to close the New Proxy Account window. You can now expand the SSIS Package Execution node of the Proxies and verify that CopyPubx has been added. There are two other proxies created in the same way in this folder.

Since the SQL Server Agent is needed for this process to succeed, make sure the SQL Server Agent is running. If it has not started yet, you can start this service from the Control Panel.
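For readers who prefer scripting the same setup, a rough T-SQL equivalent of the dialogs above is sketched below (the Windows identity and password are placeholders, and the credential and proxy names are the ones used in this article):

-- Create the credential the proxy will run under (identity and secret are placeholders)
CREATE CREDENTIAL [mysorian]
    WITH IDENTITY = N'YOURDOMAIN\mysorian', SECRET = N'StrongPassword!';

-- Create the SQL Server Agent proxy and tie it to the credential
EXEC msdb.dbo.sp_add_proxy
     @proxy_name = N'CopyPubx',
     @credential_name = N'mysorian',
     @enabled = 1;

-- Allow the proxy to run SSIS package execution job steps (subsystem 11 = SSIS)
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
     @proxy_name = N'CopyPubx',
     @subsystem_id = 11;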
Squid Proxy Server: Tips and Tricks
Packt
16 Mar 2011
6 min read
Rotating log files frequently Tip: For better performance, it is good practice to rotate log files frequently instead of going with large files. --sysconfdir=/etc/squid/ option Tip: It's a good idea to use the --sysconfdir=/etc/squid/ option with configure, so that you can share the configuration across different Squid installations while testing. tproxy mode Tip: We should note that enabling intercept or tproxy mode disables any configured authentication mechanism. Also, IPv6 is supported for tproxy but requires very recent kernel versions. IPv6 is not supported in the intercept mode. Securing the port Tip: We should set the HTTP port carefully as the standard ports like 3128 or 8080 can pose a security risk if we don't secure the port properly. If we don't want to spend time on securing the port, we can use any arbitrary port number above 10000. ACL naming Tip: We should carefully note that one ACL name can't be used with more than one ACL type. acl destination dstdomain example.com acl destination dst 192.0.2.24 The above code is invalid as it uses ACL name destination across two different ACL types. HTTP access control Tip: The default behavior of HTTP access control is a bit tricky if access for a client can't be identified by any of the access rules. In such cases, the default behavior is to do the opposite of the last access rule. If last access rule is deny, then the action will be to allow access and vice-versa. Therefore, to avoid any confusion or undesired behavior, it's a good practice to add a deny all line after the access rules. Using the http_reply_access directive Tip: We should be really careful while using the http_reply_access directive. When a request is allowed by http_access, Squid will contact the original server, even if a rule with the http_reply_access directive denies the response. This may lead to serious security issues. For example, consider a client receiving a malicious URL, which can submit a client's critical private information using the HTTP POST method. If the client's request passes through http_access rules but the response is denied by an http_reply_access rule, then the client will be under the impression that nothing happened but a hacker will have cleverly stolen our client's private information. refresh_pattern directive Tip: Using refresh_pattern to cache the non-cacheable responses or to alter the lifetime of the cached objects, may lead to unexpected behavior or responses from the web servers. We should use this directive very carefully. Expires HTTP header Tip: We should note that the Expires HTTP header overrides min and max values. Overriding directives Tip: Please note that the directive never_direct overrides hierarchy_stoplist. Path of the PID file Tip: Setting the path of the PID file to none will prevent regular management operations like automatic log rotation or restarting Squid. The operating system will not be able to stop Squid at the time of a shutdown or restart. Parsing the configuration file Tip: It's good practice to parse the configuration file for any errors or warning using the -k parse option before issuing the reconfigure signal. Squid signals Tip: Please note that shutdown, interrupt, and kill are Squid signals and not the system kill signals which are emulated. Squid process in debug mode Tip: The Squid process running in debug mode may write a log of debugging output to the cache.log file and may quickly consume a lot of disk space. 
Access Control List (ACL) elements with dst Tip: ACL elements configured with dst as a ACL type works slower compared to ACLs with the src ACL type, as Squid will have to resolve the destination domain name before evaluating the ACL, which will involve a DNS query. ACL elements with srcdomain Tip: ACL elements with srcdomain as ACL types works slower, compared to ACLs with the dstdomain ACL type because Squid will have to perform a reverse DNS lookup before evaluating ACL. This will introduce significant latency. Moreover, the reverse DNS lookup may not work properly with local IP addresses. Adding port numbers Tip: We should note that the port numbers we add to the SSL ports list should be added to the safe ports list as well. Take care while using the ident protocol Tip: The ident protocol is not really secure and it's very easy to spoof an ident server. So, it should be used carefully. ident lookups Tip: Please note that the ident lookups are blocking calls and Squid will wait for the reply before it can proceed with processing the request, and that may increase the delays by a significant margin. Denied access by the http_access Tip: If a client is denied access by the http_access rule, it'll never match an http_reply_access rule. This is because, if a client's request is denied then Squid will not fetch a reply. Authentication helpers Tip: Configuring authentication helpers is of no use unless we use the proxy_auth ACL type to control access. basic_pop3_auth helper Tip: The basic_pop3_auth helper uses the Net::POP3 Perl package. So, we should make sure that we have this package installed before using the authentication helper.   --enable-ssl option Tip: : Please note that we should use the --enable-ssl option with the configure program before compiling, if we want Squid to accept HTTPS requests. Also note that several operating systems don't provide packages capable of HTTPS reverse-proxy due to GPL and policy constraints.   URL redirector programs Tip: We should be careful while using URL redirector programs because Squid passes the entire URL along with query parameters to the URL redirector program. This may lead to leakage of sensitive client information as some websites use HTTP GET methods for passing clients' private information.   Using the url_rewrite_access directive to block request types Tip: Please note that certain request types such as POST and CONNECT must not be rewritten as they may lead to errors and unexpected behavior. It's a good idea to block them using the url_rewrite_access directive. In this article we saw some tips and tricks on Squid Proxy server to enhance the performance of your network. Further resources on this subject: Configuring Apache and Nginx [Article] Different Ways of Running Squid Proxy Server [Article] Lighttpd [Book] VirtualBox 3.1: Beginner's Guide [Book] Squid Proxy Server 3.1: Beginner's Guide [Book]
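Pulling a few of these tips together, the following is a minimal, hedged squid.conf fragment (the network range and the high port number are assumptions, not values from the article); remember to run squid -k parse after editing, as suggested above:

# Arbitrary port above 10000 instead of the well-known 3128 (see the port tip)
http_port 10080

# One ACL name per ACL type, as the naming tip requires
acl localnet src 192.168.0.0/24
acl SSL_ports port 443
acl Safe_ports port 80 443 10080

# End with an explicit deny so the default "opposite of the last rule" behavior never applies
http_access deny !Safe_ports
http_access allow localnet
http_access deny all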

Best Practices for Microsoft SQL Server 2008 R2 Administration

Packt
30 Jun 2011
12 min read
Microsoft SQL Server 2008 R2 Administration Cookbook
Over 70 practical recipes for administering a high-performance SQL Server 2008 R2 system with this book and eBook
The reader would benefit by referring to the previous article on Managing the Core Database Engine, since the following recipes are related to it.

Implementing Utility & Non-utility collection sets
The Utility information data collection set is installed and automatically started on each instance of SQL Server 2008 R2 when you complete the Utility Control Point (UCP), as we have seen in the previous article. The data is stored in the UMDW database, which is created during the UCP creation. The SQL Server utility collection set is supported side by side with non-SQL Server (non-utility) collection sets. In this recipe, we will go through the implementation tasks to set up the UCP data collection sets for the utility and non-utility categories.
SQL Server 2008 R2 introduces the Utility Control Point (UCP) with a set of pre-defined utility collection sets that are managed by UMDW. Similarly, SQL Server 2008 manages the data collection to monitor CPU, disk, and memory resources of an instance using a Data Collector that is managed by the Management Data Warehouse (MDW). For this recipe, it is necessary to introduce the MDW feature, which stands as a non-utility collection set. The Management Data Warehouse is a relational database that contains all the data that is retained. This database can be on the same system as the data collector, or it can be on another computer. The MDW collection set is run in one of the following collection and upload modes:
Non-cached mode: Data collection and upload are on the same schedule. The packages start, collect, and upload data at their configured frequency, and run until they are finished. After the packages finish, they are unloaded from memory.
Cached mode: Data collection and upload are on different schedules. The packages collect and cache data until they receive a signal to exit from a loop control-flow task. This ensures that the data flow can be executed repeatedly, which enables continuous data collection.
Getting ready
The new feature of SQL Server 2008 R2—Utility Control Point (UCP)—allows DBAs to set up and collect the utility collection sets. Once the instances are enrolled, the default capacity policies of utilization across the instances or applications are set. It is essential to check that you are using a SQL Server 2008 R2 instance to register the UCP for the multi-server management feature.
How to do it...
Using SQL Server Management Studio, these are the steps to implement the utility and non-utility data collection sets:
To implement the utility data collection sets, connect to the Utility Explorer where the UCP is registered.
Right-click on Managed Instances and choose Enroll instance (refer to the next screenshot).
Specify the instance name of SQL Server to enroll.
Specify the service account to run the utility collection set. To specify the account to collect data, you can choose the SQL Server Agent service account, but for security precautions, it is recommended to propose a new account or an existing domain user account with the required privileges.
Review prerequisite validation results and selections.
Enroll the instance.
After completing the Enroll Instance wizard, click on the Managed Instances node in the Utility Explorer navigation pane. On the right-hand side of the Utility Explorer content pane, the enrolled SQL Server instances are displayed.
Next, to implement the non-utility collection sets, from the SSMS tool, use the Configure Management Data Warehouse wizard to configure storage for collected data.
Create the management data warehouse. You can install the management data warehouse on the same instance of SQL Server that runs the data collector for the utility collection set.
Select the configuration task to install the predefined System Data collection sets.
Configure the MDW storage by selecting the SQL Server instance to host and collect the non-utility collection sets.
Map logins to management data warehouse roles.
Once you have completed the MDW wizard, the data collection information for the utility and non-utility collection sets is displayed under the Management folder, as shown in the next screenshot:
Before we proceed to enable the data collection, it is essential to restart and upload the non-utility collection sets to the Data Collection. To upload and pass validation of the non-utility collection sets, execute the following TSQL from Query Editor:
exec msdb.dbo.sp_syscollector_set_warehouse_database_name NULL
exec msdb.dbo.sp_syscollector_set_warehouse_instance_name NULL
Under the Management folder, right-click on Data Collection and choose Enable the data collector from SSMS, which is shown in the following screenshot:
Once we have completed the MDW wizard, the data collection information will be stored in the data warehouse databases. To ensure that both the utility and non-utility collection sets exist, review the Data Collection option from SSMS, as shown in the preceding screenshot, which completes the successful implementation of utility and non-utility collection sets on the same instance.
How it works...
The utility data collection sets are installed and automatically started on each instance of SQL Server 2008 R2 when they are configured using Utility Control Point. The UMDW database is created on the instance where UCP is configured, and the following collection set items are stored:
Utility Information—DAC Information
Utility Information—SMO Information
Utility Information—Utility Allocated CPU Info
Utility Information—Utility CPU-Memory Related Info
Utility Information—Utility Database FilesInfo
Utility Information—Utility Performance Counters Items
Utility Information—Utility Performance Counters Items1
Utility Information—Utility Volumes Information
The non-utility data collection sets are installed when the MDW wizard is completed, but they are not started until they are enabled. The required schemas and their objects for the predefined system collection sets are created when MDW is configured. The various UCP and MDW jobs are created under the SQL Server Agent | Jobs folder as follows:
collection_set_1_noncached_collect_and_upload
collection_set_2_collection
collection_set_2_upload
collection_set_3_collection
collection_set_3_upload
collection_set_4_noncached_collect_and_upload
mdw_purge_data_[MDW]
sysutility_get_cache_tables_data_into_aggregate_tables_daily
sysutility_get_views_data_into_cache_tables
sysutility_mi_collect_performance
sysutility_get_cache_tables_data_into_aggregate_tables_hourly
syspolicy_purge_history
sysutility_mi_collect_and_upload
mdw_purge_data_[sysutility_mdw]
The core schema is prefixed by 'core', which describes the tables, stored procedures, and views that are used to manage and identify the collected data. These objects are locked and can only be modified by the owner of the MDW database.
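If you want to confirm from Query Editor which collection sets now exist on the instance and whether they are running, you can query the data collector catalog views in msdb. The following is a minimal sketch; the view, column, and procedure names are the standard msdb data collector objects, so verify them against your build before relying on them:

USE msdb;
GO
-- List every collection set (utility and non-utility) with its current state
SELECT name, is_running, collection_mode
FROM dbo.syscollector_collection_sets;
GO
-- A stopped set can be started by its collection_set_id, for example:
-- EXEC dbo.sp_syscollector_start_collection_set @collection_set_id = 1;

The collection_mode column distinguishes cached from non-cached sets, matching the upload modes described at the start of this recipe.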
The parallel management of SQL Server Utility collection sets (utility and non-utility) requires preparation on the instance where the UCP information is stored, and the best practice is to customize the data-collection frequency to avoid any overlap with the MDW data collection schedule. The data collection store contains server activity for all the configured instances and the host operating system, such as percent CPU, memory usage, disk I/O usage, network usage, SQL Server waits, and SQL Server activity.

Designing and refreshing a Scalable Shared database
The Scalable Shared Database (SSD) feature in SQL Server 2008 R2 allows DBAs to scale out a read-only database (a reporting database), which is a copy of a production database built exclusively for reporting purposes. The SSD feature has been part of SQL Server since the 2005 Enterprise Edition, has been enhanced since SQL Server 2008, and is supported in the Enterprise and Data Center editions only. To host this reporting database, the disk volumes must be dedicated and read-only, and the scalable shared database feature permits a smooth update process from the production database to the reporting database. The internals behind such a process of building or refreshing a reporting database are known as the build phase or refresh phase, depending on whether a new reporting database is being built or a stale reporting database is being refreshed. The life cycle of a scalable shared database begins with building a reporting database on a set of reporting volumes; that reporting data eventually becomes too outdated to be useful, which means that the stale database requires a data refresh as part of each update cycle. Refreshing a stale reporting database involves either updating its data or building a completely new, fresh version of the database. This recipe will cover how to design and refresh a reporting database that is intended for use as a scalable shared database.
Getting ready
Keeping the reporting database refreshed is a prerequisite as part of each update cycle. An updated reporting database can be achieved by using a data-copy method, which requires one of the following:
Create or copy the database by designing an SSIS package that uses the Execute SQL Task or the Transfer Database task.
From SSMS, use the SQL Server Import and Export wizard to copy the objects required for reporting.
Restore a backup of the production database into the reporting volume, which requires a full database backup file.
The essential components, such as SAN storage hardware, the processing environment, and the data access environment, must be used. The reporting database must have the same layout as the production database, so we need to use the same drive letter for the reporting volume and the same directory path for the database. Additionally, verify that the reporting servers and the associated reporting database are running on identical platforms.
How to do it...
To design and refresh a reporting database, you will need to complete the following steps on the production SQL Server instance:
Unmask the Logical Unit Number (LUN) on the disks where the production database is stored (refer to the hardware vendor's manual).
Mount each reporting volume and mark it as read-write.
Obtain the disk volume information.
Log on remotely to the server, open a command prompt window, and start the DiskPart utility to list the volumes:
DiskPart
DISKPART> list volume
Use the DiskPart utility to mount the volumes; at the DISKPART> prompt, enter the following commands:
DISKPART> select volume=<drive-number>
DISKPART> assign letter=<drive-letter>
DISKPART> attribute clear readonly
DISKPART> exit
The <drive-number> is the volume number assigned by the Windows operating system. The <drive-letter> is the letter assigned to the reporting volume.
To ensure that the data files are accessible and the disks are correctly mounted, list the contents of the directory using the following command from the command prompt:
DIR <drive-letter>:\<database directory>
As we are refreshing an existing reporting database, attach the database to that server instance using SSMS. In Query Editor, enter the following TSQL statements:
ALTER DATABASE AdventureWorks2008R2 SET READ_WRITE
GO
ALTER DATABASE AdventureWorks2008R2 SET RECOVERY FULL, PAGE_VERIFY CHECKSUM;
GO
Detach the database from that server instance using the sp_detach_db statement from Query Editor.
Now, we have to mark each reporting volume as read-only and dismount it from the server. Go to the command prompt window and enter the following commands:
DiskPart
DISKPART> select volume=<drive-number>
DISKPART> attribute set readonly
DISKPART> remove
DISKPART> exit
To ensure that the reporting volume is read-only, you should attempt to create a file on the volume. This attempt must return an error.
Next, go to the command prompt window and enter the following commands:
DiskPart
DISKPART> select volume=<drive-number>
DISKPART> assign letter=<drive-letter>
DISKPART> exit
The <drive-letter> is the letter assigned to the reporting volume.
Attach the database to one or more server instances on each of the reporting servers using the sp_attach_db statement or the SSMS tool (a short T-SQL sketch follows the There's more section below). Now, the reporting database is made available as a scalable shared database to process the queries from the application.
How it works...
Using the available hardware vendor-specific servers and disk volumes, the scalable shared database feature allows the application to scale out a read-only database built exclusively for reporting purposes.
The 'build' phase is the process of mounting the reporting volume on the production server and building the reporting database. After the reporting database is built on the volume, the data is updated using the defined data-copy methods. Once that is completed, each reporting volume is set to read-only and dismounted.
The 'attach' phase is the process of making the reporting database available as a scalable shared database. After the reporting database is built on a set of reporting volumes, the volumes are marked as read-only and mounted across multiple reporting servers. The individual reporting server instance will use the reporting database that is attached.
There's more...
The Scalable Shared Database feature's best practice recommendations:
On the basis of hardware, there is no limit on the number of server instances per database; however, for the shared database configuration, ensure that a maximum of eight servers per database are hosted.
The SQL Server instance collation and sort order must be the same across all the instances.
If the relational or reporting database is spread across the shared servers, ensure that you test and deploy a synchronized update rather than a rolling update of the scalable shared database.
Also, scaling out this solution is possible in SQL Server 2008 Analysis Services with the Read-Only Database capability.  
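As a concrete illustration of the attach step and the collation check recommended above, the following Query Editor sketch shows what you might run on each reporting server once the read-only volume is mounted. It is only a sketch: the R: drive letter and file paths are placeholder values, while the database name is the AdventureWorks2008R2 sample used earlier in this recipe:

-- Confirm the instance collation matches the other reporting instances
SELECT SERVERPROPERTY('Collation') AS InstanceCollation;
GO
-- Attach the reporting database from the read-only reporting volume
EXEC sp_attach_db
    @dbname    = N'AdventureWorks2008R2',
    @filename1 = N'R:\Data\AdventureWorks2008R2.mdf',
    @filename2 = N'R:\Data\AdventureWorks2008R2_log.ldf';
GO

Because the volume is mounted read-only, the attached database serves queries but cannot be modified, which is exactly the behavior the scalable shared database setup relies on.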

Connecting to a database

Packt
28 Nov 2014
20 min read
In this article by Christopher Ritchie, the author of WildFly Configuration, Deployment, and Administration, Second Edition, you will learn to configure enterprise services and components, such as transactions, connection pools, and Enterprise JavaBeans. (For more resources related to this topic, see here.)
To allow your application to connect to a database, you will need to configure your server by adding a datasource. Upon server startup, each datasource is prepopulated with a pool of database connections. Applications acquire a database connection from the pool by doing a JNDI lookup and then calling getConnection(). Take a look at the following code:
Connection result = null;
try {
    Context initialContext = new InitialContext();
    DataSource datasource = (DataSource) initialContext.lookup("java:/MySqlDS");
    result = datasource.getConnection();
} catch (Exception ex) {
    log("Cannot get connection: " + ex);
}
After the connection has been used, you should always call connection.close() as soon as possible. This frees the connection and allows it to be returned to the connection pool—ready for other applications or processes to use.
Releases prior to JBoss AS 7 required a datasource configuration file (ds.xml) to be deployed with the application. Since the release of JBoss AS 7, this approach is no longer mandatory due to the modular nature of the application server.
Out of the box, the application server ships with the H2 open source database engine (http://www.h2database.com), which, because of its small footprint and browser-based console, is ideal for testing purposes. However, a real-world application requires an industry-standard database, such as the Oracle database or MySQL. In the following section, we will show you how to configure a datasource for the MySQL database.
Any database configuration requires a two-step procedure, which is as follows:
Installing the JDBC driver
Adding the datasource to your configuration
Let's look at each section in detail.
Installing the JDBC driver
In WildFly's modular server architecture, you have a couple of ways to install your JDBC driver. You can install it either as a module or as a deployment unit. The first and recommended approach is to install the driver as a module. The second approach, deploying the driver like any other application package, is faster; however, it has various limitations, which we will cover shortly.
The first step to install a new module is to create the directory structure under the modules folder. The actual path for the module is JBOSS_HOME/modules/<module>/main. The main folder is where all the key module components are installed, namely, the driver and the module.xml file. So, next we need to add the following units:
JBOSS_HOME/modules/com/mysql/main/mysql-connector-java-5.1.30-bin.jar
JBOSS_HOME/modules/com/mysql/main/module.xml
The MySQL JDBC driver used in this example, also known as Connector/J, can be downloaded for free from the MySQL site (http://dev.mysql.com/downloads/connector/j/). At the time of writing, the latest version is 5.1.30.
The last thing to do is to create the module.xml file. This file contains the actual module definition. It is important to make sure that the module name (com.mysql) corresponds to the module attribute defined in your datasource.
You must also state the path to the JDBC driver resource and finally add the module dependencies, as shown in the following code:
<module name="com.mysql">
    <resources>
        <resource-root path="mysql-connector-java-5.1.30-bin.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>
Here is a diagram showing the final directory structure of this new module:
You will notice that there is a directory structure already within the modules folder. All the system libraries are housed inside the system/layers/base directory. Your custom modules should be placed directly inside the modules folder and not with the system modules.
Adding a local datasource
Once the JDBC driver is installed, you need to configure the datasource within the application server's configuration file. In WildFly, you can configure two kinds of datasources, local datasources and xa-datasources, which are distinguishable by the element name in the configuration file. A local datasource does not support two-phase commit and uses a java.sql.Driver. An xa-datasource, on the other hand, supports two-phase commit and uses a javax.sql.XADataSource.
Adding a datasource definition can be completed by adding the definition within the server configuration file or by using the management interfaces. The management interfaces are the recommended way, as they will accurately update the configuration for you, which means that you do not need to worry about getting the correct syntax. In this article, we are going to add the datasource by modifying the server configuration file directly. Although this is not the recommended approach, it will allow you to get used to the syntax and layout of the file. Later on, we will show you how to add a datasource using the management tools.
Here is a sample MySQL datasource configuration that you can copy into your datasources subsystem section within the standalone.xml configuration file:
<datasources>
  <datasource jndi-name="java:/MySqlDS" pool-name="MySqlDS_Pool"
    enabled="true" jta="true" use-java-context="true" use-ccm="true">
    <connection-url>
      jdbc:mysql://localhost:3306/MyDB
    </connection-url>
    <driver>mysql</driver>
    <pool />
    <security>
      <user-name>jboss</user-name>
      <password>jboss</password>
    </security>
    <statement/>
    <timeout>
      <idle-timeout-minutes>0</idle-timeout-minutes>
      <query-timeout>600</query-timeout>
    </timeout>
  </datasource>
  <drivers>
    <driver name="mysql" module="com.mysql"/>
  </drivers>
</datasources>
As you can see, the configuration file uses the same XML schema definition as the earlier -*.ds.xml file, so it will not be difficult to migrate to WildFly from previous releases. In WildFly, it's mandatory that the datasource is bound into the java:/ or java:jboss/ JNDI namespace.
Let's take a look at the various elements of this file:
connection-url: This element is used to define the connection path to the database.
driver: This element is used to define the JDBC driver class.
pool: This element is used to define the JDBC connection pool properties. In this case, we are going to leave the default values.
security: This element is used to configure the connection credentials.
statement: This element is added just as a placeholder for statement-caching options.
timeout: This element is optional and contains a set of other elements, such as query-timeout, which is a static configuration of the maximum number of seconds before a query times out. Also, the included idle-timeout-minutes element indicates the maximum time a connection may be idle before being closed; setting it to 0 disables it, and the default is 15 minutes.
Configuring the connection pool
One key aspect of the datasource configuration is the pool element. You can use connection pooling without modifying any of the existing WildFly configuration, as, without modification, WildFly will use default settings. If you want to customize the pooling configuration, for example, to change the pool size or the types of connections that are pooled, you will need to learn how to modify the configuration file. Here's an example of a pool configuration, which can be added to your datasource configuration:
<pool>
    <min-pool-size>5</min-pool-size>
    <max-pool-size>10</max-pool-size>
    <prefill>true</prefill>
    <use-strict-min>true</use-strict-min>
    <flush-strategy>FailingConnectionOnly</flush-strategy>
</pool>
The attributes included in the pool configuration are actually borrowed from earlier releases, so we include them here for your reference:
initial-pool-size: This is the initial number of connections a pool should hold (the default is 0 (zero)).
min-pool-size: This is the minimum number of connections in the pool (the default is 0 (zero)).
max-pool-size: This is the maximum number of connections in the pool (the default is 20).
prefill: This attempts to prefill the connection pool to the minimum number of connections.
use-strict-min: This determines whether idle connections below min-pool-size should be closed.
allow-multiple-users: This determines whether multiple users can access the datasource through the getConnection method. This has changed slightly in WildFly: the line <allow-multiple-users>true</allow-multiple-users> is required, whereas in JBoss AS 7 the empty element <allow-multiple-users/> was used.
capacity: This specifies the capacity policies for the pool—either incrementer or decrementer.
connection-listener: Here, you can specify an org.jboss.jca.adapters.jdbc.spi.listener.ConnectionListener that allows you to listen for connection callbacks, such as activation and passivation.
flush-strategy: This specifies how the pool should be flushed in the event of an error (the default is FailingConnectionOnly).
Configuring the statement cache
For each connection within a connection pool, the WildFly server is able to create a statement cache. When a prepared statement or callable statement is used, WildFly will cache the statement so that it can be reused. In order to activate the statement cache, you have to specify a value greater than 0 within the prepared-statement-cache-size element. Take a look at the following code:
<statement>
    <track-statements>true</track-statements>
    <prepared-statement-cache-size>10</prepared-statement-cache-size>
    <share-prepared-statements/>
</statement>
Notice that we have also set track-statements to true. This will enable automatic closing of statements and ResultSets. This is important if you want to use prepared statement caching and/or want to prevent cursor leaks.
The last element, share-prepared-statements, can only be used when the prepared statement cache is enabled. This property determines whether two requests in the same transaction should return the same statement (the default is false).
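To see how an application benefits from the cache, the sketch below acquires a connection from the pooled MySqlDS datasource defined earlier and runs a PreparedStatement; with prepared-statement-cache-size set, repeated calls can reuse the cached statement rather than re-preparing it. This is only an illustrative sketch: the customer table, its columns, and the class name are assumptions, not part of the article.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class CustomerLookup {

    public String findCustomerName(long id) throws Exception {
        // java:/MySqlDS is the pooled datasource configured earlier in this article
        DataSource ds = (DataSource) new InitialContext().lookup("java:/MySqlDS");
        String name = null;
        // try-with-resources returns the connection to the pool on exit and
        // leaves the prepared statement available for reuse from the cache
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "SELECT name FROM customer WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    name = rs.getString("name");
                }
            }
        }
        return name;
    }
}

Because track-statements is enabled, any statement or ResultSet accidentally left open would be closed when the connection goes back to the pool, which is how the cursor-leak protection mentioned above shows up in practice.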
Adding an xa-datasource
Adding an xa-datasource requires some modification to the datasource configuration. The xa-datasource is configured within its own element, that is, within the datasource. You will also need to specify the xa-datasource class within the driver element. In the following code, we will add a configuration for our MySQL JDBC driver, which will be used to set up an xa-datasource:
<datasources>
  <xa-datasource jndi-name="java:/XAMySqlDS" pool-name="MySqlDS_Pool"
    enabled="true" use-java-context="true" use-ccm="true">
    <xa-datasource-property name="URL">
      jdbc:mysql://localhost:3306/MyDB
    </xa-datasource-property>
    <xa-datasource-property name="User">jboss</xa-datasource-property>
    <xa-datasource-property name="Password">jboss</xa-datasource-property>
    <driver>mysql-xa</driver>
  </xa-datasource>
  <drivers>
    <driver name="mysql-xa" module="com.mysql">
      <xa-datasource-class>
        com.mysql.jdbc.jdbc2.optional.MysqlXADataSource
      </xa-datasource-class>
    </driver>
  </drivers>
</datasources>
Datasource versus xa-datasource
You should use an xa-datasource in cases where a single transaction spans multiple datasources, for example, if a method consumes a Java Message Service (JMS) message and updates a Java Persistence API (JPA) entity.
Installing the driver as a deployment unit
In the WildFly application server, every library is a module. Thus, simply deploying the JDBC driver to the application server will trigger its installation. If the JDBC driver consists of more than a single JAR file, you will not be able to install the driver as a deployment unit. In this case, you will have to install the driver as a core module.
So, to install the database driver as a deployment unit, simply copy the mysql-connector-java-5.1.30-bin.jar driver into the JBOSS_HOME/standalone/deployments folder of your installation, as shown in the following image:
Once you have deployed your JDBC driver, you still need to add the datasource to your server configuration file. The simplest way to do this is to paste the following datasource definition into the configuration file, as follows:
<datasource jndi-name="java:/MySqlDS" pool-name="MySqlDS_Pool"
  enabled="true" jta="true" use-java-context="true" use-ccm="true">
  <connection-url>
    jdbc:mysql://localhost:3306/MyDB
  </connection-url>
  <driver>mysql-connector-java-5.1.30-bin.jar</driver>
  <pool />
  <security>
    <user-name>jboss</user-name>
    <password>jboss</password>
  </security>
</datasource>
Alternatively, you can use the command-line interface (CLI) or the web administration console to achieve the same result.
What about domain deployment?
In this article, we are discussing the configuration of standalone servers. The services can also be configured in domain servers. Domain servers, however, don't have a specified folder scanned for deployment. Rather, the management interfaces are used to inject resources into the domain.
Choosing the right driver deployment strategy
At this point, you might wonder about a best practice for deploying the JDBC driver. Installing the driver as a deployment unit is a handy shortcut; however, it can limit its usage. Firstly, it requires a JDBC 4-compliant driver. Deploying a non-JDBC-4-compliant driver is possible, but it requires a simple patching procedure. To do this, create a META-INF/services structure containing the java.sql.Driver file. The content of the file will be the driver name.
For example, let's suppose you have to patch a MySQL driver—the content will be com.mysql.jdbc.Driver. Once you have created your structure, you can package your JDBC driver with any zipping utility or the jar command: jar -uf <your-jdbc-driver.jar> META-INF/services/java.sql.Driver.
The most current JDBC drivers are compliant with JDBC 4 although, curiously, not all are recognized as such by the application server. The following table describes some of the most used drivers and their JDBC compliance:
MySQL: mysql-connector-java-5.1.30-bin.jar; JDBC 4 compliant: Yes, though not recognized as compliant by WildFly; Contains java.sql.Driver: Yes
PostgreSQL: postgresql-9.3-1101.jdbc4.jar; JDBC 4 compliant: Yes, though not recognized as compliant by WildFly; Contains java.sql.Driver: Yes
Oracle: ojdbc6.jar/ojdbc5.jar; JDBC 4 compliant: Yes; Contains java.sql.Driver: Yes
Oracle: ojdbc4.jar; JDBC 4 compliant: No; Contains java.sql.Driver: No
As you can see, the most notable exception in the list of drivers is the older Oracle ojdbc4.jar, which is not compliant with JDBC 4 and does not contain the driver information in META-INF/services/java.sql.Driver.
The second issue with driver deployment is related to the specific case of xa-datasources. Installing the driver as a deployment means that the application server by itself cannot deduce the information about the xa-datasource class used in the driver. Since this information is not contained inside META-INF/services, you are forced to specify information about the xa-datasource class for each xa-datasource you are going to create. When you install a driver as a module, the xa-datasource class information can be shared for all the installed datasources.
<driver name="mysql-xa" module="com.mysql">
  <xa-datasource-class>
    com.mysql.jdbc.jdbc2.optional.MysqlXADataSource
  </xa-datasource-class>
</driver>
So, if you are not too limited by these issues, installing the driver as a deployment is a handy shortcut that can be used in your development environment. For a production environment, it is recommended that you install the driver as a static module.
Configuring a datasource programmatically
After installing your driver, you may want to limit the amount of application configuration in the server file. This can be done by configuring your datasource programmatically. This option requires zero modification to your configuration file, which means greater application portability. The support for configuring a datasource programmatically is one of the cool features of Java EE, and it can be achieved by using the @DataSourceDefinition annotation, as follows:
@DataSourceDefinition(name = "java:/OracleDS",
    className = "oracle.jdbc.OracleDriver",
    portNumber = 1521,
    serverName = "192.168.1.1",
    databaseName = "OracleSID",
    user = "scott",
    password = "tiger",
    properties = {"createDatabase=create"})
@Singleton
public class DataSourceEJB {
    @Resource(lookup = "java:/OracleDS")
    private DataSource ds;
}
In this example, we defined a datasource for an Oracle database. It's important to note that, when configuring a datasource programmatically, you will actually bypass JCA, which proxies requests between the client and the connection pool.
The obvious advantage of this approach is that you can move your application from one application server to another without the need for reconfiguring its datasources. On the other hand, by modifying the datasource within the configuration file, you will be able to utilize the full benefits of the application server, many of which are required for enterprise applications.
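As a quick usage sketch, another bean in the same deployment can inject the java:/OracleDS definition created above and use it like any other pooled datasource. The class below is an illustrative assumption (the EMP table and the bean name are not from the article), but the injection pattern mirrors the @Resource lookup shown in DataSourceEJB:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.sql.DataSource;

@Stateless
public class ReportBean {

    // Resolves the datasource declared with @DataSourceDefinition above
    @Resource(lookup = "java:/OracleDS")
    private DataSource ds;

    // The EMP table is only a placeholder used for this illustration
    public int countEmployees() throws SQLException {
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM EMP")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}

Because the definition travels with the application, the same deployment can be dropped onto another Java EE server without touching that server's datasource configuration, which is exactly the portability benefit described above.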
Configuring the Enterprise JavaBeans container
The Enterprise JavaBeans (EJB) container is a fundamental part of the Java Enterprise architecture. The EJB container provides the environment used to host and manage the EJB components deployed in the container. The container is responsible for providing a standard set of services, including caching, concurrency, persistence, security, transaction management, and locking services. The container also provides distributed access and lookup functions for hosted components, and it intercepts all method invocations on hosted components to enforce declarative security and transaction contexts. Take a look at the following figure:
As depicted in this image, you will be able to deploy the full set of EJB components within WildFly:
Stateless session bean (SLSB): SLSBs are objects whose instances have no conversational state. This means that all bean instances are equivalent when they are not servicing a client.
Stateful session bean (SFSB): SFSBs support conversational services with tightly coupled clients. A stateful session bean accomplishes a task for a particular client. It maintains state for the duration of a client session. After session completion, the state is not retained.
Message-driven bean (MDB): MDBs are a kind of enterprise bean that is able to asynchronously process messages sent by any JMS producer.
Singleton EJB: This is essentially similar to a stateless session bean; however, it uses a single instance to serve the client requests. Thus, you are guaranteed to use the same instance across invocations. Singletons can use a set of events with a richer life cycle and a stricter locking policy to control concurrent access to the instance.
No-interface EJB: This is just another view of the standard session bean, except that local clients do not require a separate interface, that is, all public methods of the bean class are automatically exposed to the caller. Interfaces should only be used in EJB 3.x if you have multiple implementations.
Asynchronous EJB: These are able to process client requests asynchronously just like MDBs, except that they expose a typed interface and follow a more complex approach to processing client requests, which is composed of:
The fire-and-forget asynchronous void methods, which are invoked by the client
The retrieve-result-later asynchronous methods having a Future<?> return type
EJB components that don't keep conversational state (SLSB and MDB) can be optionally configured to emit timed notifications.
Configuring the EJB components
Now that we have briefly outlined the basic types of EJB, we will look at the specific details of the application server configuration. This comprises the following components:
The SLSB configuration
The SFSB configuration
The MDB configuration
The Timer service configuration
Let's see them all in detail.
Configuring the stateless session beans
EJBs are configured within the ejb3 subsystem. By default, no stateless session bean instances exist in WildFly at startup time. As individual beans are invoked, the EJB container initializes new SLSB instances. These instances are then kept in a pool that will be used to service future EJB method calls. The EJB remains active for the duration of the client's method call. After the method call is complete, the EJB instance is returned to the pool.
Because the EJB container unbinds stateless session beans from clients after each method call, the actual bean class instance that a client uses can be different from invocation to invocation. Have a look at the following diagram:
If all instances of an EJB class are active and the pool's maximum pool size has been reached, new clients requesting the EJB class will be blocked until an active EJB completes a method call. Depending on how you have configured your stateless pool, an acquisition timeout can be triggered if you are not able to acquire an instance from the pool within a maximum time.
You can configure your session pool either through your main configuration file or programmatically. Let's look at both approaches, starting with the main configuration file. In order to configure your pool, you can operate on two parameters: the maximum size of the pool (max-pool-size) and the instance acquisition timeout (instance-acquisition-timeout). Let's see an example:
<subsystem >
  <session-bean>
    <stateless>
      <bean-instance-pool-ref pool-name="slsb-strict-max-pool"/>
    </stateless>
    ...
  </session-bean>
  ...
  <pools>
    <bean-instance-pools>
      <strict-max-pool name="slsb-strict-max-pool" max-pool-size="25"
        instance-acquisition-timeout="5"
        instance-acquisition-timeout-unit="MINUTES"/>
    </bean-instance-pools>
  </pools>
  ...
</subsystem>
In this example, we have configured the SLSB pool with a strict upper limit of 25 elements. The strict maximum pool is the only available pool instance implementation; it allows a fixed number of concurrent requests to run at one time. If there are more requests running than the pool's strict maximum size, those requests will get blocked until an instance becomes available. Within the pool configuration, we have also set an instance-acquisition-timeout value of 5 minutes, which will come into play if the number of concurrent requests exceeds the pool size.
You can configure as many pools as you like. The pool used by the EJB container is indicated by the pool-name attribute on the bean-instance-pool-ref element. For example, here we have added one more pool configuration, large-pool, and set it as the EJB container's pool implementation. Have a look at the following code:
<subsystem >
  <session-bean>
    <stateless>
      <bean-instance-pool-ref pool-name="large-pool"/>
    </stateless>
  </session-bean>
  <pools>
    <bean-instance-pools>
      <strict-max-pool name="large-pool" max-pool-size="100"
        instance-acquisition-timeout="5"
        instance-acquisition-timeout-unit="MINUTES"/>
      <strict-max-pool name="slsb-strict-max-pool" max-pool-size="25"
        instance-acquisition-timeout="5"
        instance-acquisition-timeout-unit="MINUTES"/>
    </bean-instance-pools>
  </pools>
</subsystem>
Using the CLI to configure the stateless pool size
We have detailed the steps necessary to configure the SLSB pool size through the main configuration file. However, the suggested best practice is to use the CLI to alter the server model.
Here's how you can add a new pool named large-pool to your EJB 3 subsystem:
/subsystem=ejb3/strict-max-bean-instance-pool=large-pool:add(max-pool-size=100)
Now, you can set this pool as the default to be used by the EJB container, as follows:
/subsystem=ejb3:write-attribute(name=default-slsb-instance-pool, value=large-pool)
Finally, you can, at any time, change the pool size property by operating on the max-pool-size attribute, as follows:
/subsystem=ejb3/strict-max-bean-instance-pool=large-pool:write-attribute(name="max-pool-size",value=50)
Summary
In this article, we continued the analysis of the application server configuration by looking at Java's enterprise services. We first learned how to configure datasources, which can be used to add database connectivity to your applications. Installing a datasource in WildFly 8 requires two simple steps: installing the JDBC driver and adding the datasource to the server configuration.
We then looked at the Enterprise JavaBeans subsystem, which allows you to configure and tune your EJB container. We looked at the basic EJB component configuration of SLSBs.
Resources for Article:
Further resources on this subject:
Dart with JavaScript [article]
Creating Java EE Applications [article]
OpenShift for Java Developers [article]

Announcing Linux 5.0!

Melisha Dsouza
04 Mar 2019
2 min read
Yesterday, Linus Torvalds announced the stable release of Linux 5.0. This release comes with AMDGPU FreeSync support, Raspberry Pi touch screen support, and much more. According to Torvalds, "I'd like to point out (yet again) that we don't do feature-based releases, and that '5.0' doesn't mean anything more than that the 4.x numbers started getting big enough that I ran out of fingers and toes."
Features of Linux 5.0
AMDGPU FreeSync support, which will improve the display of fast-moving images and will prove advantageous especially for gamers. According to CRN, this will also make Linux a better platform for dense data visualizations and support "a dynamic refresh rate, aimed at providing a low monitor latency and a smooth, virtually stutter-free viewing experience."
Support for the Raspberry Pi's official touch screen. All information is copied into a memory-mapped area by the RPi's firmware, instead of using a conventional bus.
An energy-aware scheduling feature that lets the task scheduler make scheduling decisions resulting in lower power usage on asymmetric SMP platforms. This feature will use Arm's big.LITTLE CPUs and help achieve better power management in phones.
Adiantum file system encryption for low-power devices.
Btrfs can support swap files, but the swap file must be fully allocated as "nocow" with no compression on one device.
Support for binderfs, a binder filesystem that will help run multiple instances of Android and is backward compatible.
Improvements that reduce fragmentation by over 90%, resulting in better transparent hugepage (THP) usage.
Support for the Speculation Barrier (SB) instruction, introduced as part of the fallout from Spectre and Meltdown.
The merge window for 5.1 is now open. Read Linux's official documentation for the detailed list of upgraded features in Linux 5.0.
Remote Code Execution Flaw in APT Linux Package Manager allows man-in-the-middle attack
Intel releases patches to add Linux Kernel support for upcoming dedicated GPU releases
Undetected Linux Backdoor 'SpeakUp' infects Linux, MacOS with cryptominers

It's Black Friday: But what's the business (and developer) cost of downtime?

Richard Gall
23 Nov 2018
4 min read
Black Friday is back and, as you've probably already noticed, with a considerable vengeance. According to Adobe Analytics data, online spending is predicted to hit $3.7 billion over this holiday season in the U.S., up from $2.9 billion in 2017. But while consumers clamour for deals and businesses reap the rewards, it's important to remember there's a largely hidden plane of software engineering labour. Without this army of developers, consumers will most likely be hitting their devices in frustration, while business leaders will be missing tough revenue targets - so, as we enter into Black Friday, let's pour one out for all those engineers on call and trying their best to keep eCommerce sites on their feet.
Here's to the software engineers keeping things running on Black Friday
Of course, the pain that hits on days like Black Friday and Cyber Monday can be minimised with smart planning and effective decision making long before those sales begin. However, for engineering teams under-resourced and lacking the right tools, that is simply impossible. This means that software engineers are left in a position where they're treading water, knowing that they're going to be sinking once those big days come around.
It doesn't have to be like this. With smarter leadership and, indeed, more respect for the intensive work engineers put in to make websites and apps actually work, revenue-driving platforms can become more secure, resilient, and stable.
Chaos engineering platform Gremlin publishes the 'true cost of downtime'
This is the central argument of chaos engineering platform Gremlin, who we've covered a number of times this year. To coincide with Black Friday, the team has put together what they believe is the 'true cost of downtime'. On the one hand this is a good marketing hook for their chaos engineering platform, but, cynicism aside, it's also a good explanation of why the principles of chaos engineering can be so valuable from both a business and developer perspective.
Estimating the annual revenue of some of the biggest companies in the world, Gremlin has created an interactive table to demonstrate what the cost of downtime for each of those businesses would be, for the length of time you are on the page. For 20 minutes of downtime, Amazon.com would have lost a staggering $4.4 million. For Walgreens it's more than $80,000.
Gremlin provide some context to all this, saying:
"Enterprise commerce businesses typically rely on a complex microservices architecture, from fulfillment, to website security, ability to scale with holiday traffic, and payment processing - there is a lot that can go wrong and impact revenue, damage customer trust, and consume engineering time. If an ecommerce site isn't 100% online and performant, it's losing revenue."
"The holiday season is especially demanding for SREs working in ecommerce. Even the most skilled engineering teams can struggle to keep up with the demands of peak holiday traffic (i.e. Black Friday and Cyber Monday). Just going down for a few seconds can mean thousands in lost revenue, but for some sites, downtime can be exponentially more expensive."
For Gremlin, chaos engineering is clearly the answer to many of the problems days like Black Friday pose. While it might not work for every single organization, it's nevertheless true that failing to pay attention to the value of your applications and websites at an hour-by-hour level could be incredibly damaging.
With outages on Facebook, WhatsApp, and Instagram happening earlier this week, these problems aren't hidden away - they're in full view of the public. What does remain hidden, however, is the work and stress that goes into tackling these issues and ensuring things are working as they should be. Perhaps it's time to start learning the lessons of Black Friday - business revenues will be that little bit healthier, and engineers will also be that little bit happier.