
How-To Tutorials - CMS & E-Commerce

830 Articles

Installation of FreeNAS

Packt
28 Oct 2009
28 min read
Downloading FreeNAS

Before you can install the FreeNAS server, you will need to download the latest version from the FreeNAS website (http://www.freenas.org). Go to the download section and find the latest "LiveCD" version. The LiveCD version is what is known as an ISO image file and will have the .iso file extension. An ISO image is an exact copy of the structure and data of a CD or DVD disk. Using a CD burning program, you can create a bootable FreeNAS CD. We will look at this in more detail later on.

What Hardware Do I Need?

In this tutorial, we will start exploring FreeNAS, so you will need a machine on which to install the FreeNAS software. At this point, it doesn't have to be the final machine you are going to use as the FreeNAS server. You can use a "test" machine now and, having learnt all about FreeNAS, build, install, and deploy a production machine (or machines) later.

What we need now is a PC with at least 96MB of RAM (128MB or more is recommended), a bootable CD-ROM drive, a network card, one or more hard disks, and either a floppy disk drive (and a blank formatted disk) or a USB flash disk (MS-DOS formatted and empty). The hard disk will hold the data that you want to store, and the floppy disk or USB flash disk will store the configuration information. For the installation and initialization stages, you will also need a monitor and keyboard (but not a mouse) attached to the PC. You can remove the monitor later, once FreeNAS is up and running.

Warning: FreeNAS boots as a LiveCD, which means that it does not use the disks on the host machine during boot up. However, when you start to configure storage on the FreeNAS server (specifically, when you format drives) all the data on the disk will be LOST. Do NOT use a machine that contains important data or an operating system that you will need afterwards.

Virtualization & VMware

The average PC runs just one operating system, and inside that operating system you run your applications, such as word processing and email. There is a technology called virtualization which allows a PC to run more than one operating system or, to be more precise, allows a guest virtual PC to run inside your actual PC. This virtual PC is an independent software box that can run its own OS and applications as if it were a physical computer. A virtual PC behaves exactly like a physical PC and has its own virtual CPU, RAM, hard disk, and network interface card (NIC). You can install FreeNAS on a virtual PC; FreeNAS can't tell the difference between the virtual PC and any other physical machine, and it appears on the network just as a real PC running FreeNAS would. There are lots of virtualization products available for Windows, Linux, and Apple OS X today. You can learn more at Wikipedia: http://en.wikipedia.org/wiki/Virtualization. A very popular virtualization solution comes from VMware (http://www.vmware.com). VMware has both commercial and freeware offerings, and there are pre-configured FreeNAS images available for the VMware range of products. This makes it an ideal environment for testing the FreeNAS server.

Quick Start Guide For the Impatient

If you are comfortable with burning ISO images to CDs, setting your computer's BIOS to boot from CD-ROM, disk partitions, and TCP/IP networking, then this little guide should help you get a simple version of the FreeNAS server up and running in just a few minutes.
If, however, some of these things sound daunting, then skip this section and go on to the next one, where we shall go through the installation process one step at a time. For this example, we will use a USB flash disk to store the configuration information. You can use a floppy, but be careful that during the boot process the PC doesn't try to boot from the floppy before it boots from the CD-ROM.

Burning and Booting

Once you have downloaded the ISO image file from the FreeNAS website, you need to burn it to a CD. Having done that, put the CD into the PC as well as the flash disk and switch it on. Make sure that the BIOS is set to boot from CD. If it isn't, you need to enter the BIOS and configure it to boot from CD. On many modern PCs, it is possible to select the boot device at start-up by pressing a special key (often F8 or F12) to show a boot device menu. You can then select the CD as the boot device.

The boot process is in four distinct parts. First, the PC will go through its POST (Power On Self Test) sequence. Here, the PC will check the amount of memory installed (which you can often see being counted on the screen) and which devices are connected (like hard drives and CD-ROMs). It should then start to boot from the CD. Here, FreeBSD (the underlying OS of FreeNAS) will start to boot; this is recognizable by the simple spinning wheel (made up of simple text characters like |, -, /, and \, which are animated to give the appearance of spinning). The third step is the FreeNAS boot menu. This will appear for just a few seconds and you should just let it boot normally, which is the default. The final stage is when the FreeNAS logo appears and the system boots as a FreeNAS server. You can tell when the system is fully loaded because the PC speaker will make some short but melodious beeps.

To enable access to the web interface, the network of the FreeNAS server must be configured. Press the SPACE bar on the keyboard and the FreeNAS logo will disappear and a simple text menu will appear. There are two aspects to configuring the network: first, you need to choose which network card to use, and second, you need to assign it an address. If you have only one network card in your machine, then the FreeNAS server should have found it and automatically assigned it to be the LAN (Local Area Network) interface.

What If My Network Card Isn't Found? This probably means that the network card in your machine isn't supported by FreeNAS or, more specifically, by FreeBSD. You will need to replace the card with one supported by FreeBSD. Check the FreeBSD hardware compatibility page for more information: http://www.freebsd.org/releases/6.2R/hardware-i386.html

If you see something like this, then the network card has been recognized and assigned automatically by FreeNAS. The default IPv4 address for FreeNAS is 192.168.1.250; if this is good for your network, then you can just leave it unchanged. However, if you need to change it, press 2 followed by ENTER. If you want the machine to get its address from DHCP (Dynamic Host Configuration Protocol), answer yes (y) to the IPv4 DHCP question; otherwise answer no (n). If you are not using DHCP, you can now enter the desired IP address. Next, you need to enter the subnet mask. For 255.255.255.0, enter 24; for 255.255.0.0, enter 16; and for 255.0.0.0, enter 8. At this point, you can skip the default gateway and DNS questions (by just pressing ENTER).
If you do want to enter a default gateway and DNS server at this point, they will usually both be the IP address of your Internet router. We won't be using IPv6, so the simplest thing to do now is just answer yes to the "Do you want to use AutoConfiguration for IPv6?" question. This will cause a small delay while FreeNAS tries (and probably fails) to get an IPv6 address, but it is simpler than trying to enter an IPv6 address manually! You are now ready to access the web interface.

The FreeNAS web interface can be accessed from any machine on the network with a web browser (including Windows, Linux, and OS X machines). On this client machine, type the address of the FreeNAS server with http:// in front of it into your web browser. For example: http://192.168.1.250

Configuring

The first time you access the FreeNAS web interface, you will be asked for a username and password. The default username is admin and the default password is freenas. You should now be in the web interface. To configure some storage space, you need to work with "Disks". The logical order of working is that disks must be added, then formatted (if need be), then mounted. Finally, access is given to the various mounted disks by configuring different system services like CIFS and FTP.

So, to add a disk, go to Disks: Management. There is a + sign in a circle on the right-hand side of the page (it can be easy to miss the first time); click on it to add a disk. On the next page, select the disk you want to add. If you click on the drop-down menu, you should see the hard disks of the machine, the CD-ROM, and the USB flash disk.

Disk Names in FreeBSD
The disk naming convention in FreeBSD is:
/dev/ad0: the IDE/ATA Primary Master
/dev/ad1: the IDE/ATA Primary Slave
/dev/ad2: the IDE/ATA Secondary Master
/dev/ad3: the IDE/ATA Secondary Slave
/dev/acd0: the first ATA CD/DVD drive detected
/dev/da0: the first SCSI hard drive, /dev/da1 the second, and so on
USB flash disks are controlled using the SCSI driver, so they will appear as /dev/daN drives as well.

Make sure ad0 is selected (which it should be by default). The rest of the page you can leave alone. Click Add to add the disk to the system. You then need to click Apply in order for the changes to take effect. You will now have a table showing the disk you have added, including its size and a description.

Apply
In FreeNAS, the majority of steps need to be applied (which saves the configuration file to disk) by clicking the Apply button. It is normally found near the top of the page, before any tables or configuration information. If you do not apply the changes, the interface will, on the whole, remember your changes, but they will not be enacted in the system. After a reboot, unapplied changes will disappear. It is possible on some pages to make multiple operations and apply them all at the end.

Next, the disk needs to be formatted. In Disks: Format, select the disk ad0 (which you just added above). Leave everything else unchanged and click Format disk. The disk will then be formatted, and the low-level output of the format command will be displayed in a box. It should end with Done!

Now the disk needs to be mounted. Go to Disks: Mount Point. Click on the + in the circle (which I shall refer to as the "add circle" from now on). Leave the Type as Disk and select the disk ad0 again. You need to type in a name; store is as good a name as any, but feel free to use whichever descriptive name you want.
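As an aside to the disk naming box above, the FreeBSD convention is regular enough to decode programmatically. A small Python sketch (an illustration for this tutorial, not part of FreeNAS) that turns a device name into a human-readable description:

    import re

    # Positions of IDE/ATA devices by unit number, per the convention above.
    IDE_POSITIONS = ["Primary Master", "Primary Slave",
                     "Secondary Master", "Secondary Slave"]

    def describe_device(dev):
        """Describe a FreeBSD disk device name such as /dev/ad0 or /dev/da1."""
        m = re.fullmatch(r"/dev/(ad|acd|da)(\d+)", dev)
        if not m:
            return dev + ": unrecognized device name"
        driver, unit = m.group(1), int(m.group(2))
        if driver == "ad":
            pos = IDE_POSITIONS[unit] if unit < 4 else "unit %d" % unit
            return dev + ": IDE/ATA " + pos
        if driver == "acd":
            return dev + ": ATA CD/DVD drive number %d" % (unit + 1)
        # da covers SCSI hard drives and USB flash disks (SCSI driver).
        return dev + ": SCSI/USB disk number %d" % (unit + 1)

    for name in ("/dev/ad0", "/dev/acd0", "/dev/da1"):
        print(describe_device(name))

Running this prints, for example, "/dev/ad0: IDE/ATA Primary Master", matching the table above.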
Be Descriptive
In setting up and configuring your FreeNAS server, you will be called upon to invent various names for mount points, share names, and so on. Try to be as descriptive as you can without being long-winded. Temp, scratch, blob, and even zob are OK for testing, but try more meaningful names like storage1, storage60gb, or backupstorage. Don't use spaces in the names; use underscores instead. In general, the names should be no longer than 15 characters. Although filling in the description isn't mandatory in the web interface, it is worth doing.

Once you have completed the form, click Add and then apply the changes.

Sharing with Windows Machines

Now that the disk has been added, formatted, and mounted, it is time to share it on the network and give other users the ability to read and write to it. FreeNAS supports many different access protocols; for this quick start guide, we will only look at Microsoft's CIFS protocol, which primarily allows Windows machines (but also Apple OS X and Linux machines) to access the storage.

In Services: CIFS/SMB, tick the enable box (in the title of the configuration data table). At this point, you can leave just about everything else as is, with the exception of the workgroup name. We will leave the authentication method as "Anonymous" here, as this is the easiest to get working and provides unrestricted read/write access to everyone. To make sure that the Windows machines are able to find the shared storage, we need to set the workgroup name on the FreeNAS server to be the same as the workgroup name of the Windows PC that will access the share. The default workgroup name for Windows Vista is WORKGROUP, but note that the default for Windows XP Home Edition was MSHOME. Now click Save and Restart. This will save the changes you have made and restart the CIFS service.

Go to the Shares tab and click on the add circle. Enter a name for the share. Repeating the name of the mount point is probably the safest policy, so in this case store, and also add a comment. Then click ... in the Path section. This will bring up a simple file system browser. The files you are seeing are on the FreeNAS server and NOT on your local PC. Click store and /mnt/store/ will appear in the little edit box. Click OK and you will be taken back to the shares page. Now /mnt/store/ has been added as the path. Leave everything else as it is, click Add, and then apply the changes. So now the first hard disk of the computer is formatted, mounted, and shared with the rest of the network. Next, we will access the share from a Windows Vista machine.

Testing the Share

You can perform this test from any machine that supports the CIFS protocol, including Windows 95/98/ME, Windows 2000/XP, Apple OS X, and Linux. Here, we are going to use Windows Vista. Open the Network and Sharing Center by clicking Network on the Start menu. When the window appears, Vista will automatically scan the network for any shared network resources. When it has finished, you will see the available machines on the network, including FREENAS. Open up the FREENAS computer and you will see store, the storage area that you configured. Double-click on that and you are now "inside" the FreeNAS server from within your Windows machine. Try dragging and dropping a few files into the store area. Then try deleting them again. To access the FreeNAS server without using the Network and Sharing Center, click Start, type freenas, and then press Enter.
This will bring up the shares available on the FreeNAS server directly.

Detailed Overview of Installation

It is time to get your hands on a working FreeNAS server, and to do that, we need to boot it up on a PC. There are several steps to this. First, you must burn a CD of the ISO image file you have downloaded. Then, you need to boot the PC from the CD; this may involve changing your computer's BIOS to make it boot from the optical drive. Then, you can configure the FreeNAS server to make some storage space available on the network.

When using the LiveCD to boot FreeNAS, there are two types of storage on FreeNAS: data and configuration information. The data will be held on the hard drive of the PC, but the configuration needs to be held on a floppy disk or a USB flash disk. For this example, we will use a USB flash disk to store the configuration information.

Making the FreeNAS CD

To boot the PC into FreeNAS, you need a CD. The ISO image file you have downloaded contains all the information needed for the CD, but it needs to be written onto a physical CD. This process is often known as burning the CD, as the laser writes to the disk by heating it and marking or scorching the surface layer. You need to use a PC with a CD-RW drive and a blank CD-R disk (I recommend using a good brand name CD-R for best results). Download the FreeNAS ISO image onto that machine. The PC with the CD writer should have some CD writing software on it (for example, Roxio Easy CD or Nero). If you are familiar with the CD writing software, go ahead and burn the ISO file to the CD-R disk. If you aren't familiar with the CD writing software, or the PC doesn't have any, then I recommend ISO Recorder. You can download it from http://isorecorder.alexfeinman.com/isorecorder.htm.

Booting from CD

Put your newly made FreeNAS CD into the CD drive of the machine on which you want to install FreeNAS, and also put the USB flash disk into a USB port. The flash disk will be used to store the configuration data. (You can also use a floppy disk. If you have both a USB flash disk and a floppy inserted, FreeNAS will save the configuration on the USB device.) Now, you need to switch on the PC.

When a PC starts, it goes through what is known as the Power On Self Test sequence. Here, the PC will check the amount of memory installed in the PC and find the installed hard drives. After the checks, the PC will try to boot from one of the hard drives, the CD-ROM, the floppy disk, or even a USB flash disk. Which device the PC chooses first as its boot device can be changed by a built-in setup program. The setup program lets you modify basic system configuration settings. These settings are stored in a special battery-backed area of the computer's memory that retains the settings even when the power is switched off. During the POST sequence, there is normally a message telling you how to enter the built-in setup program. It is normally either the DEL key or F2; on some systems it is F10. You need to enter the setup to check and/or change the first boot device to be the CD-ROM so that the computer will boot into FreeNAS. Each PC has a slightly different setup program, so you will need to search around until you find what you need. The three most popular types of setup programs (also known as the BIOS, Basic Input/Output System) are the Phoenix setup program, the Phoenix-Award setup program, and the AMI setup program.
There are many types of BIOS setup programs, and each PC manufacturer modifies the setup program for their own use. The information below is really only a "rough guide" to help you feel your way around. Your BIOS setup program may be significantly different from the examples below. The best source of information is the manual that came with your PC or your motherboard. If you don't have one, most PC manufacturers have them available for download on their websites.

Phoenix BIOS

If your machine has a Phoenix BIOS, then normally you need to press F2 to enter the setup program. The top of the setup program has a menu that you can navigate with the left and right arrow keys; you need to select the Boot menu. On the Boot menu page, you can move up and down the available boot devices using the up and down arrow keys. You can expand and collapse sections marked with + or - signs using the ENTER key. To change the boot order, you use the + and - keys. You want to make sure that the CD-ROM is the first device in the list. After you have changed the boot order list, you need to go to the Exit menu (by pressing the right arrow key) and select Exit Saving Changes. The PC will then reboot and, after the POST, it will start to boot from the FreeNAS CD.

Phoenix-Award BIOS

If your PC has a Phoenix-Award BIOS, then normally you need to press DEL to enter the setup program. Once inside, you can use the up, down, left, and right keys to navigate around the menus. Go into Advanced BIOS Features and set the First Boot Device to be CDROM by using the + and - keys. You now need to save your changes and exit. Pressing ESC will bring you back to the main menu; then select Save & Exit Setup. Often, pressing F10 will have the same effect. The PC will then reboot and, if you have made the changes correctly, it will boot from the FreeNAS CD.

AMI BIOS

The American Megatrends, Inc. (AMI) BIOS normally displays a message telling you to Hit <DEL> if you want to run setup. Once inside, it is quite different from the setup programs of Phoenix or Award. Here, the Tab key is used to navigate and the arrow keys are used to change values. To go from one page to the next, press the ALT+P keys. This information should also be printed at the bottom of the BIOS setup page. You need to find the variable Boot Sequence and make sure that it is set to boot from the CD-ROM first.

First Look at FreeNAS

The boot process is in four distinct parts. First, the PC will go through its POST (Power On Self Test) sequence. Here, the PC will check the amount of memory installed (which you can often see being counted on the screen) and which devices are connected (like hard drives and CD-ROMs). It should then start to boot from the CD. Here, FreeBSD (the underlying OS of FreeNAS) will start to boot; this is recognizable by the simple spinning wheel (made up of simple text characters like |, -, /, and \, which are animated to give the appearance of spinning). The third step is the FreeNAS boot menu. This will appear for just five seconds and you should just let it boot normally, which is the default. The final stage is when the FreeNAS logo appears and the system boots as a FreeNAS server. You can tell when the system is fully loaded because the PC speaker will make some short but melodious beeps.

Configuring the Network

The majority of the configuration for FreeNAS is done via a web interface, but before you can use the web interface, the FreeNAS server needs to be configured for your network.
This is done via a simple text menu system using the keyboard and monitor attached to the PC running FreeNAS. You probably only need to do this once; after that, the new network information will be saved on the USB flash disk (or floppy disk) and the server will boot into this configuration every time. If you press the SPACE bar on the FreeNAS machine, the FreeNAS logo will disappear and a simple menu will appear.

Here, you have a number of options, including options to reboot or power off the system. The first two options are about configuring the network, and they reflect the two parts of configuring the network: first, you need to choose which network card to use (option 1), and second, you need to assign it an address (option 2). If you have only one network card in your machine, then the FreeNAS server should have found it and automatically assigned it to be the LAN (Local Area Network) interface.

What If My Network Card Isn't Found? This probably means that the network card in your machine isn't supported by FreeNAS or, more specifically, by FreeBSD. You will need to replace the card with one supported by FreeBSD. Check the FreeBSD hardware compatibility page for more information: http://www.freebsd.org/releases/6.2R/hardware-i386.html

If you see something like the following screenshot, then the network card has been recognized and assigned automatically by FreeNAS.

What is a LAN IP Address?

IP stands for Internet Protocol, and it is the basic low-level language that computers use to talk to each other on the Internet. It is also used on private networks (in the office or at home) to connect different PCs and even printers to each other. An IPv4 address is made up of 4 numbers (0 to 255) and is expressed in what is known as dot notation (meaning that each number has a dot between it and the next). So 192.168.1.250 is an IP address; it also happens to be the default IP address for the FreeNAS server. Like email, the postal service, and the telephone, each destination (email account, mailbox, or handset) needs a unique way of being identified. This is what IP addresses do: they allow each piece of equipment on the network to have a unique identifier so that messages can be addressed to the right place on the network.

Pronouncing IP Addresses
If you need to speak to someone about an IP address, the simplest way is to say each digit separately, so 192.168.1.250 isn't "one hundred and ninety-two dot" but rather "one nine two dot one six eight dot one dot two five zero".

There are two ways in which you can obtain an IP address for the FreeNAS server. The first is to have the address assigned automatically via the DHCP service (Dynamic Host Configuration Protocol), and the second is to assign it manually.

What is DHCP?
The Dynamic Host Configuration Protocol (DHCP) automates the assignment of IP addresses and other IP parameters (like subnet masks and default gateways). A computer that needs an IP address will send a request to the DHCP server, and the server will reply with an IP address from a pool of addresses that have been set aside for this purpose. A DHCP server can be a PC or server (running Windows, OS X, or Linux) as well as a small device like a modern DSL modem or firewall. The advantage of the DHCP method is that the IP address assignment all happens in the background, and you don't need to worry about setting it yourself.
The disadvantages are that, first, you need to have an already configured and running DHCP server on your network; and second, DHCP assigns addresses from a pool of available addresses. This means that every time the FreeNAS server boots, it is not guaranteed to have the same address as it had previously. This isn't a problem when using the CIFS protocol; however, for accessing the web interface or using protocols like FTP, it is desirable to have a stable IP address to refer to. For testing the FreeNAS server and learning how it works, a DHCP-assigned address could be acceptable for now. It is actually possible to assign a fixed, permanent IP address to a particular piece of hardware, including a FreeNAS server, over DHCP, but that requires extra advanced configuration changes in the DHCP server that cannot be covered in this tutorial.

Opting for a manual IP address, you now need to obtain two pieces of information. The first is the actual IP address for the FreeNAS server, and the second is what is known as the subnet mask. The subnet mask will also be expressed in dot notation and is normally something like 255.255.255.0. If you are in an office environment, speak to the network administrator, who will be able to give you the information you need. If you are administering your own network, you need to choose an IP address that isn't currently allocated to any other machine on your network (and also isn't part of the address pool of any DHCP server on your network).

Having obtained the IP address and subnet mask, you can now configure the FreeNAS server for your network. Select option 2 on the console menu. If you have chosen to have DHCP assign the address, answer yes (y) to the first question about using DHCP for IPv4; otherwise answer no (n). If you are setting the address manually, you can now enter the address in dot notation, for example 192.168.1.240. Next comes the subnet mask. If your subnet mask is 255.255.255.0, enter 24; for 255.255.0.0, enter 16; and for 255.0.0.0, enter 8. At this point, you can skip the default gateway and DNS questions (by just pressing ENTER). We won't be using IPv6, so the simplest thing to do now is just answer yes to the "Do you want to use AutoConfiguration for IPv6?" question. This will cause a small delay while FreeNAS tries (and probably fails) to get an IPv6 address, but it is simpler than trying to enter an IPv6 address manually!

After you have successfully set the IP address, there will be a small message on the screen inviting you to access the web interface by opening the listed URL in your web browser. If you have used DHCP, note down the URL listed. If you set the IP address manually, check that the URL listed is the same as the IP address you set, with http:// in front of it. You are now ready to access the web interface.

What is IPv4 and IPv6?
The Internet Protocol has been around since the mid 1980s, and when it was designed, the popularity of the Internet was not envisaged. The number of computers connected to the Internet is quickly growing beyond the addressing capabilities of the original protocol. As an answer to this, a new version of the IP protocol has been designed and given the name IP version 6, or IPv6 for short; the older version has taken the name IP version 4, or IPv4 for short. FreeNAS supports both versions of the Internet Protocol. In this tutorial, we will concentrate just on IPv4, as it remains the more popular of the two protocols.
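Incidentally, the mask-to-number mapping used by the console menu (255.255.255.0 is 24, and so on) is simply the count of leading 1-bits in the mask, known as the prefix length. A minimal Python sketch (an illustration for this tutorial, not part of FreeNAS) that performs the conversion:

    import ipaddress

    def mask_to_prefix(mask):
        """Convert a dotted subnet mask (e.g. 255.255.255.0) to a prefix length."""
        # ip_network validates that the mask is a proper contiguous netmask.
        return ipaddress.ip_network("0.0.0.0/" + mask).prefixlen

    for mask in ("255.255.255.0", "255.255.0.0", "255.0.0.0"):
        print("%s -> enter %d" % (mask, mask_to_prefix(mask)))

This prints "255.255.255.0 -> enter 24", "255.255.0.0 -> enter 16", and "255.0.0.0 -> enter 8", matching the values given above.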
Basic Configuration

With your FreeNAS server now up and running, it is time to access the web interface. Open a web browser on a computer on the same network as the FreeNAS server and enter the URL of the FreeNAS server. This should be the same as the IP address of the server with http:// in front. The default URL is http://192.168.1.250

The first time you access the FreeNAS web interface, you will be asked for a username and password. The default username is admin and the default password is freenas.

FreeNAS Web Interface

You should now have the web interface in your browser. The interface is split into two main sections. Down the left-hand side are the menus, and the right-hand side contains the pages for configuration. The menus are split into various sections: System, Interfaces, Disks, Services, Access, Status, Diagnostics, and Advanced. When talking about a particular menu item, we shall use the notation Subsection: Menu Item to help you find the right menu option easily. So, the Management option, which is in the Disks subsection, will be referred to as Disks: Management.

System: This section is for system-level configuration and operations. Here, for example, you can change the username and password, back up and restore the configuration data, and shut down or reboot the server.

Interfaces: Here, you can configure the network of the FreeNAS server, much like you did via the console menu. You can change the network card that is used for the web interface and assign permanent or automatic IP addresses. Be careful when you change things here, as some changes won't take effect until you reboot. If you have changed any of the addressing, you will need to access the web interface with the new IP address.

Disks: This section of the menu is for administering the disks on the server. Here, you can set up disk redundancy (RAID), control encryption, format disks, and mount the disks on the server.

Services: The various access protocols like CIFS, NFS, and FTP are controlled from here. Each service is administered individually, and by default NONE of the services are enabled, so before you can access files stored on the FreeNAS server, you need to enable at least one of these services.

Access: Most of the services offered by FreeNAS use some form of user list to control who has access and who does not. This section is for defining these users and the groups they belong to, as well as connecting the FreeNAS server to other directory services.

Status: The status menu has several reporting tools for you to see the current state of your FreeNAS server, including a general overview, memory usage, disk usage, and network usage. You can also configure emails to be sent periodically about the status of the server.

Diagnostics: The diagnostics menu contains different tools to help diagnose any problems with the FreeNAS server, including logs of all the important services and diagnostic information from the hard disks and other system modules.

Advanced: The advanced section provides some simple tools for executing commands at the operating system level and should not be used by those unfamiliar with FreeBSD.
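As a quick sanity check that the web interface is reachable from a client machine, a small Python sketch using the requests library is shown below. The URL and the admin/freenas credentials are the defaults mentioned above; that the login prompt maps to HTTP Basic authentication is an assumption made for the sake of the example, so treat this as an illustration only:

    import requests

    FREENAS_URL = "http://192.168.1.250"      # default FreeNAS address
    USERNAME, PASSWORD = "admin", "freenas"   # default credentials

    try:
        # Assumes the login prompt maps to HTTP Basic authentication.
        resp = requests.get(FREENAS_URL, auth=(USERNAME, PASSWORD), timeout=5)
        print("Web interface answered with HTTP %d" % resp.status_code)
    except requests.ConnectionError:
        print("Could not reach the server - check the IP address and network")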


Modifying an Existing Theme in Drupal 6: Part 1

Packt
20 Oct 2009
10 min read
Setting up the workspace

There are several software tools that can make your theme modification work more efficient. Though no specific tools are required to work with Drupal themes, there are a couple of applications that you might want to consider adding to your tool kit.

I work with Firefox as my primary browser, principally because I can add various extensions to Firefox that make my life easier. The Web Developer extension, for example, is hugely helpful when dealing with CSS and related issues. I recommend the combination of Firefox and the Web Developer extension to anyone working with Drupal themes. Another extension popular with many developers is Firebug, which is very similar to the Web Developer extension, and indeed more powerful in several regards. Pick up Web Developer, Firebug, and other popular Firefox add-ons at https://addons.mozilla.org/en-US/firefox/

When it comes to working with PHP files and the various theme files, you will need an editor. The most popular application is probably Dreamweaver, from Adobe, although any editor that has syntax highlighting would work well too. I use Dreamweaver as it helps me manage multiple projects and provides a number of features that make working with code easier (particularly for designers). If you choose to use Dreamweaver, you will want to tailor the program a little to make it easier to work with Drupal theme files. Specifically, you should configure the application preferences to open and edit the various types of files common to PHPTemplate themes. To set this up, open Dreamweaver, then: go to the Preferences dialogue, open File Types / Editors, add the file types .engine, .info, .module, .install, and .theme to Dreamweaver's "open in code view" field, then save the changes and exit. With these changes, your Dreamweaver application should be able to open and edit all the various PHPTemplate theme files.

Previewing your work

Note that, as a practical matter, previewing Drupal themes requires the use of a server. Themes are really difficult to preview (with any accuracy) without a server environment. A quick solution to this problem is the XAMPP package. XAMPP provides a one-step installer containing everything you need to set up a server environment on your local machine (Apache, MySQL, PHP, phpMyAdmin, and more). Visit http://www.ApacheFriends.org to download XAMPP and you can have your own dev server quickly and easily.

Another tool that should be at the top of your list is the Theme developer extension for the popular Drupal Devel module. Theme developer can save you untold hours of digging around trying to find the right function or template. When the module is active, all you need to do is click on an element, and the Theme developer pop-up window will show you what is generating the element, along with other useful information. In the example later in this article, we will also use another feature of the Devel module, namely the ability to automatically generate sample content for your site. You can download Theme developer as part of the Devel project at Drupal.org: http://drupal.org/project/devel

Note that Theme developer only works on Drupal 6 and, due to the way it functions, is only suitable for use in a development environment—you don't want this installed on a client's public site! Visit http://drupal.org/node/209561 for more information on the Theme developer aspects of the Devel module.
The article includes links to a screencast showing the module in action—a good quick start and a solid help in grasping what this useful tool can do.

Planning the modifications

We're going to base our work on the popular Zen theme. We'll take Zen, create a new subtheme, and then modify the subtheme until we reach our final goal. Let's call our new theme "Tao". The Zen theme was chosen for this exercise because it has a great deal of flexibility. It is a good, solid place to start if you wish to build a CSS-based theme. The present version of Zen even comes with a generic subtheme (named "STARTERKIT") designed specifically for themers who wish to take a basic theme and customize it. We'll use the Starterkit subtheme as the way forward in the steps that follow.

The Zen theme is one of the most active theme development projects, and updated versions are released regularly. We used version 6.x-1.0-beta2 for the examples in this article. Though that version was current at the time this text was prepared, it is unlikely to be current at the time you read this. To avoid difficulties, we have placed a copy of the files used in this article in the software archive that is provided on the Packt website. Download the files used in this article at http://www.packtpub.com/files/code/5661_Code.zip. You can download the current version of Zen at http://drupal.org/project/zen.

Any time you set off down the path of transforming an existing theme into something new, you need to spend some time planning. The principle here is the same as in many other areas of life: a little time spent planning at the front end of a project can pay off big in savings later. A proper dissertation on site planning and usability is beyond the scope of this article, so for our purposes let us focus on defining some loose goals and then work towards satisfying a specific wish list for the final site functionality. Our goal is to create a two-column blog-type theme with solid usability and good branding. Our hypothetical client for this project needs space for advertising and a top banner. The theme must also integrate a forum and user comments functionality. Specific changes we want to implement include:

- Main navigation menu in the right column
- Secondary navigation mirrored at the top and bottom of each page
- A top banner space below the top nav but above the branding area
- Color scheme and fonts to match brand identity
- Enable and integrate the Drupal blog, forum, and comments modules

In order to make the example easier to follow, and to avoid the need to install a variety of third-party extensions, the modifications we make in this article will be done using only the default components—excepting only the theme itself, Zen. Arguably, were you building a site like this for deployment in the real world (rather than simply for skills development), you might wish to consider implementing one or more specialized third-party extensions to handle certain tasks.

Creating a new subtheme

Install the Zen theme if you have not done so already; once that is done, we're ready to create a new subtheme. First, make a copy of the directory named STARTERKIT and place the copied files into the directory sites/all/themes. Rename the directory "tao". Note that in Drupal 5.x, subthemes were kept in the same directory as the parent theme, but for Drupal 6.x this is no longer the case. Subthemes should now be placed in their own directory inside the sites/all/themes directory. (The copy and rename steps here, and the file renames detailed in the next section, can also be scripted, as shown in the sketch below.)
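Here is such a Python sketch. The location of STARTERKIT inside the Zen directory is an assumption that may differ between Zen releases, and the script should be run from the Drupal root; treat it as an illustration for this article, not an official Drupal tool:

    import shutil
    from pathlib import Path

    SRC = Path("sites/all/themes/zen/STARTERKIT")  # location may vary per release
    DST = Path("sites/all/themes/tao")

    # 1. Copy the STARTERKIT directory to sites/all/themes/tao.
    shutil.copytree(SRC, DST)

    # 2. Rename STARTERKIT.info to tao.info.
    (DST / "STARTERKIT.info").rename(DST / "tao.info")

    # 3. Replace all occurrences of STARTERKIT with tao in the key files.
    for name in ("tao.info", "template.php", "theme-settings.php"):
        path = DST / name
        if path.exists():
            path.write_text(path.read_text().replace("STARTERKIT", "tao"))

    print("Subtheme 'tao' created; enable it under Administer | Site building | Themes")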
Note that the authors of Zen have chosen to vary from the default stylesheet naming. Most themes use a file named style.css for their primary CSS. In Zen, however, the file is named zen.css. We need to grab that file and incorporate it into Tao. Copy the Zen CSS file (zen/zen/zen.css), rename it tao.css, and place it in the Tao directory (tao/tao.css). When you look in the zen/zen directory, in addition to the key zen.css file, you will note the presence of a number of other CSS files. We need not concern ourselves with these. The styles contained in those stylesheets will remain available to us (we inherit them as Zen is our base theme), and if we need to alter them, we can override the selectors as needed via our new tao.css file.

In addition to renaming the theme directory, we also need to rename any other theme-name-specific files or functions. Do the following:

1. Rename the STARTERKIT.info file to tao.info.
2. Edit the tao.info file to replace all occurrences of STARTERKIT with tao.
3. Open the tao.info file and find this copy:

; The name and description of the theme used on the admin/build/themes page.
name = Zen Themer's StarterKit
description = Read the <a href="http://drupal.org/node/226507">online docs</a> on how to create a sub-theme.

Replace that text with this copy:

; The name and description of the theme used on the admin/build/themes page.
name = Tao
description = A 2-column fixed-width sub-theme based on Zen.

Make sure the name = and description = lines are not commented out, or they will not register.

4. Edit the template.php file to replace all occurrences of STARTERKIT with tao.
5. Edit the theme-settings.php file to replace all occurrences of STARTERKIT with tao.
6. Copy the file zen/layout-fixed.css and place it in the tao directory, creating tao/layout-fixed.css.
7. Include the new layout-fixed.css by modifying the tao.info file: change stylesheets[all][] = layout.css to stylesheets[all][] = layout-fixed.css.

The .info file functions similarly to a .ini file: it provides configuration information, in this case for your theme. A good discussion of the options available within the .info file can be found on the Drupal.org site at http://drupal.org/node/171205

Making the transition from Zen to Tao

The process of transforming an existing theme into something new consists of a set of tasks that can be categorized into three groups: configuring the theme, adapting the CSS, and adapting the templates and themable functions.

Configuring the theme

As stated previously, the goal of this redesign is to create a blog theme with solid usability and a clean look and feel. The resulting site will need to support forums and comments, and will need advertising space. Let's start by enabling the functionality we need; then we can drop in some sample content. Technically speaking, adding sample content is not 100% necessary, but practically speaking it is extremely useful for seeing the impact of our work on the CSS, the templates, and the themable functions.

Before we begin, enable your new theme if you have not done so already. Log in as the administrator, then go to the themes manager (Administer | Site building | Themes) and enable the theme Tao. Set it to be the default theme and save the changes. Now we're set to begin customizing this theme, first through the Drupal system's default configuration options, and then through our custom styling.
Enabling Modules

To meet the client's functional requirements, we need to activate several features of Drupal which, although contained in the default distro, are not activated by default. Accordingly, we need to identify the necessary modules and enable them. Let's do that now. Access the module manager screen (Administer | Site building | Modules) and enable the following modules:

- Blog (enables blog-type presentation of content)
- Contact (enables the site contact forms)
- Forum (enables the threaded discussion forum)
- Search (enables users to search the site)

Save your changes and let's move on to the next step in the configuration process.

article-image-transferring-data-ms-access-2003-sql-server-2008
Packt
07 Oct 2009
3 min read
Save for later

Transferring Data from MS Access 2003 to SQL Server 2008

Packt
07 Oct 2009
3 min read
Launching the Import and Export Wizard

Click Start | All Programs | SQL Server 2008 and click on Import and Export Data (32 bit). This brings up the "Welcome to the SQL Server Import and Export Wizard" page. Make sure you read the explanation on this window.

Connecting to Source and Destination databases

You will connect to the source and the destination. In this example, the data to be transferred is in an MS Access 2003 database: a table in the Northwind.mdb file available with MS Access will be used as the source. The destination is the tempdb database (or the master database). These are chosen for no particular reason, and you can even create a new database on the server at this point.

Connecting to the Source

Click Next. The default window gets displayed, with the data source set to SQL Server Native Client 10.0. As we are transferring from MS Access 2003 to SQL Server 2008, we need to use an appropriate data source. Click on the drop-down handle for the data source and choose Microsoft Access. The SQL Server Import and Export Wizard page changes accordingly. Click on the Browse... button to locate the Northwind.mdb file on your machine. For the username, use the default 'Admin'. Click the Advanced button: it opens the Data Link Properties window, where you can test the connection as well as look at other connection-related properties using the Connection and Advanced tabs. Click OK on the Microsoft Data Link message as well as on the Data Link Properties window. Click Next.

Choose Destination

The Choose a Destination page of the wizard gets displayed. The destination source (SQL Server Native Client 10.0) and the server name that automatically show up are defaults; these may be different in your case. Accept them. The authentication information is also correct (the SQL Server is configured for Windows Authentication). Click on the drop-down handle for the Database field, presently displaying <default>, choose tempdb, and click on the Next button.

Choosing a table or query for the data to be transferred

You can transfer data from a table or tables, or a view or views, by copying them over, or you may create a query to collect the data you want transferred. In this article, both options will be described.

Copying a table over to the destination

The Specify Table Copy or Query page of the wizard gets displayed. First we will copy data from a table; later we will see how to write a query to do the data transfer. Click Next. The Select Source Tables and Views page gets displayed. When you place a check mark against, say, Customers, the Destination column gets an entry for the selection you made. We assume that just this table is transferred; however, you can see that you could transfer all of them in one shot. This window also helps you with two more important things: first, you can Preview... the data you are transferring; second, you can edit the mappings, the scheme of how source columns go over to destination columns.
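The same table copy can also be done programmatically rather than through the wizard. Below is a minimal Python sketch using pyodbc; the ODBC driver names, file path, server name, and the two example columns are assumptions you will need to adjust, and it only loosely mirrors what the wizard does:

    import pyodbc

    # Connection strings are assumptions - adjust drivers, paths, and server name.
    access = pyodbc.connect(
        r"DRIVER={Microsoft Access Driver (*.mdb)};DBQ=C:\Data\Northwind.mdb"
    )
    sqlserver = pyodbc.connect(
        "DRIVER={SQL Server Native Client 10.0};SERVER=localhost;"
        "DATABASE=tempdb;Trusted_Connection=yes"
    )

    src, dst = access.cursor(), sqlserver.cursor()

    # Copy two columns of the Customers table, as in the wizard example above.
    rows = src.execute("SELECT CustomerID, CompanyName FROM Customers").fetchall()

    dst.execute(
        "IF OBJECT_ID('Customers') IS NULL "
        "CREATE TABLE Customers (CustomerID NVARCHAR(5), CompanyName NVARCHAR(40))"
    )
    dst.executemany(
        "INSERT INTO Customers (CustomerID, CompanyName) VALUES (?, ?)", rows
    )
    sqlserver.commit()
    print("Copied %d rows" % len(rows))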


Shopify announces Fulfillment network, video and 3D model assets, custom storefront tools and more!

Vincy Davis
20 Jun 2019
6 min read
At the ongoing Shopify Unite 2019 conference, Shopify announced a number of new products, including a fulfillment network, video and 3D model assets, custom storefront tools, and a new online store design experience. Most of these products will launch later this year. Here are the major highlights:

Shopify Fulfillment Network

The Shopify Fulfillment Network will be a dispersed network of fulfillment centers that uses machine learning to automatically select the optimal inventory quantities per location, as well as the closest fulfillment option for each customer's shipment. Once available, it will be possible to simply install the app, select the products, get a quote, and begin selling. Entrepreneurs will be able to provide fast, low-cost delivery to their customers while maintaining ownership of customer data and a branded shipping experience.

A single back office: A merchant's order, inventory, and customer data will stay synced and up to date across all warehouse locations and channels.

Recommended warehouse locations: To save on shipping costs, it will be possible to find the best locations based on where the sales are coming from.

Low stock alerts: When inventory runs low, merchants will know when to replenish to continue meeting demand.

99.5% order accuracy: The correct package will be chosen and out the door on time, with good accuracy.

Hands-on warehouse help: A dedicated account manager will assist the merchant in finding the best path to reach their customers, so that costs can be kept low.

The Shopify Fulfillment Network is currently available in the United States, and interested merchants can apply for early access.

Native support for video and 3D model assets

Shopify's products will natively support video and 3D model assets, adding a new dimension to products and providing a richer purchase experience for customers. This feature is expected to be released later this year.

Manage media through a single location: It will now be possible to upload, access, and store video and 3D models from the same place where images are currently managed.

Deploy through the new Shopify video player: Users can use one of the 10 starter themes to easily display video and 3D models using the new Shopify player for video or the viewer for Shopify AR.

New editor apps: Shopify is inviting partners and developers to create additional apps and custom integrations to open up new ways to create and modify images, videos, and AR experiences.

New Online Store Design Experience

The new online store design will provide entrepreneurs with more options for customization, giving them more control over the layout and aesthetic of their store. This feature is expected to be released later this year.

Easier customization at the page and store level: Any page can now be customized using sections, just like the homepage. Users can also save time by setting content on multiple pages using master pages.

Portable content that moves with you: From now on, users will not have to duplicate their theme or move content over manually. The shop's content will follow the owner, so they can make changes like downloading a new version or trying out a new one.

A new workspace to update your store: It will now be possible to edit and preview updates before publishing.
Any minor tweak or major change can be drafted in this new space before going live.

Custom storefront tools

Shopify's Storefront API allows customers to use the custom storefront tools to free their storefront from certain back-end dependencies. This gives customers the flexibility to sell anywhere and in any way they want. This feature is most exciting for complex or niche businesses that use the web, mobile, gaming, and other interactive mediums as storefronts to reach their customers. Shopify merchants have already started to use the Storefront API; for example, NTWRK hosted a live-stream shopping show using it.

Connect microservices to create personalized experiences: Third-party shipping services can be used for blogs, storefronts, and product pages, or for accurate shipment alerts.

Turn the world into your storefront: Entrepreneurs can engage with their customers through vending machines, live streams, smart mirrors, voice shopping, and more.

Speedy and scalable, so development teams can work in parallel: The flexible architecture will enable development teams to create the experiences and storefronts they envision.

Interested merchants can create custom experiences via the custom storefront tools website.

Customer loyalty with retail shoppers

With the new Shopify Point of Sale cart app extensions, users can apply and edit loyalty and promotional details directly from the customer cart.

Apply discounts lightning-quick: The number of clicks needed to apply a discount has been reduced to one, saving users 10 seconds per sale on average.

Important information at your fingertips: Key customer details, from birthdays to reward milestones, will surface automatically and in context, so merchants and their staff won't have to navigate to the apps to get alerts.

More flexibility: Shopify's app partners will give merchants the flexibility to pick a program that works best for their customer experience, like rewarding a long-time customer, online or in-store.

Interested merchants can learn more on the POS Loyalty and Promotion Apps website.

Shopify Payments in multiple currencies and languages

From this year onwards, merchants can run their business in their preferred language. Shopify is already available in French, German, Japanese, Italian, Brazilian Portuguese, and Spanish, and will become available in 11 additional languages, including Dutch and Simplified Chinese. Shopify Payments will also enable selling in multiple currencies and will be globally available to all Shopify merchants later this year. Displayed prices will use simple rounding rules and automatically adjust based on current foreign-exchange rates. Soon, shoppers will be able to convert between nine major currencies (GBP, AUD, CAD, EUR, HKD, JPY, NZD, SGD, and USD) and use their preferred way to pay.

Users of Shopify are clearly elated with all the announcements, and many have touted this as Shopify's way to combat Amazon, its main competitor in the market. Visit the Shopify blog for more details.
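To give a flavour of what building on the Storefront API looks like, here is a minimal Python sketch that queries a shop's products over GraphQL. The shop domain and access token are placeholders (a Storefront access token is issued from a shop's own admin), and the API version in the URL is an assumption; treat this as an illustration rather than official Shopify sample code:

    import requests

    SHOP = "example-shop.myshopify.com"       # placeholder shop domain
    TOKEN = "YOUR_STOREFRONT_ACCESS_TOKEN"    # placeholder token
    URL = "https://%s/api/2019-07/graphql.json" % SHOP  # version is an assumption

    # Ask for the first three products and their titles.
    query = """
    {
      products(first: 3) {
        edges { node { title } }
      }
    }
    """

    resp = requests.post(
        URL,
        json={"query": query},
        headers={"X-Shopify-Storefront-Access-Token": TOKEN},
    )
    resp.raise_for_status()
    for edge in resp.json()["data"]["products"]["edges"]:
        print(edge["node"]["title"])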


The Spotfire Architecture Overview

Packt
21 Oct 2013
6 min read
The companies of today face innumerable market challenges due to the ferocious competition of a globalized economy. Providing excellent service and earning customer loyalty are therefore priorities for their survival. In order to achieve both goals and gain a competitive edge, companies can draw on the information generated by their digitalized systems, their IT. All the recorded events, from Human Resources (HR) to Customer Relationship Management (CRM), billing, and so on, can be leveraged to better understand the health of a business. The purpose of this article is to present a tool that acts as a digital event analysis enabler: the TIBCO Spotfire platform. In this article, we will list the main characteristics of this platform, while also presenting its architecture and describing its components.

TIBCO Spotfire

Spotfire is a visual analytics and business intelligence platform from TIBCO Software. It is part of a new breed of tools created to bridge the gap between the massive amount of data that corporations produce today and the business users who need to interpret this data in order to have the best foundation for the decisions they make. In my opinion, there is no better description of what TIBCO Spotfire delivers than the definition of visual analytics given in the publication Illuminating the Path: The Research and Development Agenda for Visual Analytics:

"Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces." –Illuminating the Path: The Research and Development Agenda for Visual Analytics, James J. Thomas and Kristin A. Cook, IEEE Computer Society Press

The TIBCO Spotfire platform offers the possibility of creating very powerful (yet easy to interpret and interact with) data visualizations. From real-time Business Activity Monitoring (BAM) to Big Data, data-based decision making becomes easy: the what, the why, and the how become evident. Spotfire definitely allowed TIBCO to establish itself in an area where, until recently, it had very little experience and no sought-after products. The main features of the platform are:

- More than 30 different data sources to choose from: several databases (including Big Data Teradata), web services, files, and legacy applications.
- Big Data analysis: Spotfire delivers the power of MapReduce to regular users.
- Database analysis: data visualizations can be built on top of databases using information links. There is no need to pull the analyzed data into the platform, as a live link is maintained between Spotfire and the database.
- Visual join: the capability of merging data from several distinct sources into a single visualization.
- Rule-based visualizations: the platform enables the creation and tailoring of rules, and the filtering of data. These features facilitate emphasizing outliers and foster management by exception. It is also possible to highlight other important features, such as commonalities and anomalies.
- Data drill-down: for data visualizations, it is possible to create one (or many) detail visualization(s). This drill-down can be performed in multiple steps, as a drill-down of a drill-down of a drill-down, and so on.
- Real-time integration with other TIBCO tools.

Spotfire platform 5.x

The platform is composed of several intercommunicating components, each with its own responsibilities (and a clear separation of concerns), enabling a clustered deployment.
As this is an introductory article, we will not dive deep into all the components, but we will identify the main ones and the underlying architecture. A depiction of the platform's components is shown in the following diagram. The descriptions of each of the components in the preceding diagram are as follows:

- TIBCO Spotfire Server: The Spotfire server makes a set of services available to the analytics clients (TIBCO Spotfire Professional and TIBCO Spotfire Web Player Server): user services (responsible for authentication and authorization), deployment services (handling the consistent upgrade of Spotfire clients), library services (managing the repository of analysis files), and information services (persisting information links to external data sources). The server component is available for several operating systems, such as Linux, Solaris, and Windows.
- TIBCO Spotfire Professional: This is a client application (fat client) that focuses on the creation of data visualizations, taking advantage of all of the platform's features. As the main client application, it has all the data manipulation functionalities enabled, such as the use of data filters, drill-down, working online and offline (working offline allows embedding data in the visualizations for use in limited-connectivity environments), and exporting visualizations to MS PowerPoint, PDF, and HTML. It is only available for Windows environments.
- TIBCO Spotfire Web Player Server: This offers users the possibility of accessing and interacting with visualizations created in TIBCO Spotfire Professional. The existence of this web application enables the usage of an Internet browser as a client, allowing for thin client access where no software has to be installed on the user's machine. Please be aware that the visualizations cannot be created or altered this way. They can only be accessed in a read-only mode, where all rules are enabled, as is data drill-down. Since it is developed in ASP.NET, this server must be deployed in a Microsoft IIS server, and so it is restricted to Microsoft Windows environments.
- Server Database: This database is accessed by the TIBCO Spotfire Server for storage of server information. It should not be confused with the data stores that the platform can access to fetch data from and build visualizations. Only two vendor databases are supported for this role: Oracle Database and Microsoft SQL Server.
- TIBCO Spotfire Web Player Client: These are thin clients to the Web Player Server. Several Internet browsers can be used on various operating systems (Microsoft Internet Explorer on Windows, Mozilla Firefox on Windows and Mac OS, Google Chrome on Windows and Android, and so on). TIBCO has also made available an application for iPad, which is available in iTunes. For more details on the iPad client application, please navigate to: https://itunes.apple.com/en/app/spotfire-analytics/id417436823?mt=8

Summary

In this article, we introduced the main attributes of the Spotfire platform in the scope of visual analytics, and we detailed the platform's underlying architecture.

Resources for Article:

Further resources on this subject:
- Core Data iOS: Designing a Data Model and Building Data Objects [Article]
- Database/Data Model Round-Trip Engineering with MySQL [Article]
- Drilling Back to Source Data in Dynamics GP 2013 using Dashboards [Article]


Setting up the Basics for a Drupal Multilingual site: Languages and UI Translation

Packt
25 Jun 2012
10 min read
(For more resources on Drupal, see here.)

Getting up and running

Before we get started, we obviously need a Drupal 7 website to work on. This section gives you two options, namely, roll your own or install the demo.

Using your own site

You can use your own Drupal 7 site. It can be an existing site or one you create from scratch. If you are creating a brand new site and weren't planning on using a particular installation profile, you can get a head start by using the Localized Drupal Distribution install profile at drupal.org/project/l10n_install. It is probably obvious, but it's best to run the site on a development machine and not in a production environment. Once all the basic Drupal core modules are configured, you will also want to set up the following additional modules to get the most out of the exercises:

- Panels: A tool for creating pages with custom layouts
- Pathauto: Settings for creating path aliases automatically
- Views: A tool for creating custom pages and blocks

Using the demo site

If you'd prefer a jump-start, a full demo website can be created using a special install profile. Instructions for downloading and installing the demo website are included on the Drupal project page available at drupal.org/project/multilingual_book_demo. The demo site contains additional modules, including the modules listed previously as well as the following:

- Administration Menu: Toolbar for quick access to the site configuration
- Views Bulk Operations: Extra functionality for Views forms
- Views Slideshow: Slideshows of content coming from Views

These modules provide us with a starting point. As more modules are needed for particular exercises, they will be listed so you can add them.

Roles, users, and permissions

Although you might already have multiple users on your test site, for simplicity it will be assumed that you are logged in as the super admin (user ID 1).

Working with languages

If we want a multilingual site, the logical first step is to add more languages! In this section, we will add languages to our site, configure how our languages are detected, and set up ways to go between these languages.

Adding languages with the Locale module

Drupal has language support built into the core, but it's not fully turned on by default. If you go to your site right now and navigate to Configuration | Regional and language, you will see the Regional settings and Date and time configuration pages for setting the default country, time zone, and date/time formats. To get our languages hooked in, let's enable the core module, Locale. Now go back to Configuration | Regional and language to see more options.

Click on Languages and you'll see we only have English in our list so far. Now let's add a language by clicking on the Add language link. You can add a predefined language such as German, or you can create a custom language. For our purposes, we will work with predefined languages. So choose a language and click on the Add language button. Drupal will then redirect you to the main language admin page and your new language will be added to the list. Now you can simply repeat the process for each language. In my case, I've added three new languages, namely, Arabic, German, and Polish.

The overview table shows the language's name (English and native), its code, and its directionality. The language's direction can be Left to right (LTR) or Right to left (RTL), with most languages using the former. 'Right to left' just means that you start at the right side of the page and move towards the left side when you are writing. RTL languages include Arabic, Hebrew, and Syriac, which are written in their own alphabets.

If you prefer to add languages in code rather than through the UI, see the sketch below.
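The following is a minimal sketch of scripting this step (for example, from an install profile or update hook). It assumes the Locale module is enabled and uses locale_add_language(); verify the function's exact signature against your Drupal 7 release before relying on it:

    <?php
    // Sketch: programmatically add the same languages we added through the UI.
    // Assumes the Locale module is enabled.
    $languages = array(
      // langcode => array(English name, native name, direction).
      'ar' => array('Arabic', 'العربية', LANGUAGE_RTL),
      'de' => array('German', 'Deutsch', LANGUAGE_LTR),
      'pl' => array('Polish', 'Polski', LANGUAGE_LTR),
    );
    foreach ($languages as $langcode => $info) {
      list($name, $native, $direction) = $info;
      // Use the langcode as the path prefix; enable the language,
      // but do not make it the site default.
      locale_add_language($langcode, $name, $native, $direction, '', $langcode, TRUE, FALSE);
    }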
You can choose which languages to enable, order them, and set the site default. Links are provided to edit and delete each language. English only has an edit link since it is the system language and cannot be deleted, but English can be disabled if you use a non-English default. If we edit a language, we can modify all the information from the overview table except for the language's code, since we need that as a consistent reference string.

Do not change the default language once you have started translating or translations might break. Install String Translation from the Internationalization package (drupal.org/project/i18n), go to Configuration | Regional and language | Multilingual settings | Strings, select the Source language, and click on Save configuration. Do not change this setting once it's configured.

Detecting languages

We have our languages, so now what? If you click around your site, nothing looks different. That's because we are looking at the English version of the site and we haven't told Drupal how we want to deal with the other languages. We'll do that now. Navigate to Configuration | Regional and language | Languages | Detection and selection and you'll see we have a number of choices available to us. The Default detection method is enabled for us, but we can also enable the URL, Session, User, and Browser options. If you want a cookie-based option, check out the Language Cookie and Locale Cookie modules. Let's go over the core options in more detail.

URL

If you enable this method, users can navigate to URLs such as example.com/de/news or example.com/deutsch/news (when using the path prefix option) and example.de/news, deutschexample.com/news, or deutsch.example.com/news (when using the domain option). Configuring domains requires web server changes, but using path prefixes does not. This is a common configuration for multilingual sites, and one we'll use shortly. The language's path prefix can be changed when editing the language.

If you want to use path-prefixed URLs, then you should decide on your path prefixes before translating content, as changing path prefixes might break links (unless you set up proper redirects).

If desired, you can choose one language that does not have any path prefix. This is common for the site's default language. For example, if German is the default language and no path prefix is used, the news page would be accessed as example.com/news, whereas other languages would be accessed using a path prefix (for example, example.com/en/news).

Session

The Session option is available if you want to store a user's language preference inside their user session. It was actually proposed by some Drupal community members that this method be removed from the set of choices, as it caused a number of issues in other code. One reason you may not want to use this option is the possible inconsistency between the content and the URL language. For example, you could enable both the URL and Session methods and order them so that the Session method is first. If a user is at example.com/de and the session is set to French, then the user will see French content even though the URL corresponds with German. My advice is to just skip this one, or, if you need it, at least make sure that it's ordered below the URL option.
User

Once the Locale module is enabled, users can specify their preferred language when they edit their account profile. If you enable the User method in the detection settings, the user's profile language will be checked when deciding what language to display. Note that the user profile language defaults to the site's default language.

Browser

Users can configure their browsers to specify which languages they prefer. If the Browser method is enabled, Drupal will check the browser's request to find out the language setting and use it for the language choice. This option may or may not be useful depending on your site audience.

Default

The default site language is configured on the Configuration | Regional and language | Languages settings page, and is used for the Default detection method. Although you can't disable this method, you can reorder it if you choose. But it makes the most sense to keep it at the bottom of the list to use it as the fallback language.

Detection method order

It is important to note that the detection method order is critical to how detection works. If you were to drag the Default method to the top of the list, then none of the other methods would be used and the site would only use the default language. Similarly, if you allow a user profile language and drag User to the top of the list, then the URL method would not matter even if it's enabled. Also, if URL detection is ordered below the Session, User, and Browser options, the user might see a UI language that does not match up with the URL language, which could be confusing.

Make sure to think carefully about the order of these settings. If you use the URL method, it's likely you will want it first. The Default method should be last. The other detection method positions depend on your preference.

When using path-prefixed URLs, if one language does not have a prefix, then detection for that language will work differently. For example, if the URL method is first, then no other detection methods will trigger for any URLs with no path prefix, such as example.com/news or example.com/about-us.

Our choice

For our purposes, let's stick with URL detection and use the path-prefix option, as this is the easiest to configure (it doesn't require extra domains). This choice will keep our URLs in sync with our interface language, which is also user and SEO-friendly. Check Enabled for the URL method and press the Save settings button. Now click on Configure for that method and you'll see options for Path prefix and Domain. We'll use the default option, that is, Path prefix (for example, example.com/de).

Don't panic on the next step. You won't see anything different in the UI until we finish our interface translation process later in the article. Now change the URL in your browser to include the path prefix for one of your languages. In my case, I'll try German and go to example.com/de. You should be able to use the path prefixes for each of your configured languages.

Switching between languages

Most likely you don't want your users to have to manually type in a different URL to switch between languages. Drupal core provides a language switcher block that you can put somewhere convenient for your users. To use the block, navigate to Structure | Blocks, find the Language switcher (User interface text) block, position it where you'd like, and save your block settings. The order of the languages in the block is based on the order configured at Configuration | Regional and language | Languages.
Once enabled, the language switcher block looks like the following screenshot. You can now easily switch between your site languages, and the language chosen is highlighted. The UI won't look different when switching until we finish the next section.

Two alternatives to the core language switcher block are provided by the Language Switcher and Language Switcher Drop-down modules. Also, if you want country flag icons added next to each language, you can install the Language Icons module.
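If you need language links outside the block system (say, directly in a custom theme region), the core API behind the block can be called. This is a hedged sketch for a stock Drupal 7 install; verify that language_negotiation_get_switch_links() is available and behaves this way in your release:

    <?php
    // Sketch: build language switch links for the current page without the block.
    // Assumes the Locale module is enabled and URL detection is configured.
    $links = language_negotiation_get_switch_links(LANGUAGE_TYPE_INTERFACE, $_GET['q']);
    if (isset($links->links)) {
      // theme_links() renders them as a simple list of anchors.
      print theme('links', array('links' => $links->links));
    }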

Using Drupal 7 for Module Development

Packt
14 Dec 2010
15 min read
The focus of this article by Matt Butcher, author of Drupal 7 Module Development, is module creation. We are going to begin coding in this article. Here are some of the important topics that we will cover in this article:

- Starting a new module
- Creating .info files to provide Drupal with module information
- Creating .module files to store Drupal code
- Adding new blocks using the Block Subsystem
- Using common Drupal functions
- Formatting code according to the Drupal coding standards

(For more resources on this subject, see here.)

Our goal: a module with a block

In this article we are going to build a simple module. The module will use the Block Subsystem to add a new custom block. The block that we add will simply display a list of all of the currently enabled modules on our Drupal installation. We are going to divide this task of building a new module into three parts:

- Create a new module folder and module files
- Work with the Block Subsystem
- Write automated tests using the SimpleTest framework included in Drupal

We are going to proceed in that order for the sake of simplicity. One might object that, following agile development processes, we ought to begin by writing our tests. This approach is called Test-driven Development (TDD), and is a justly popular methodology.

Agile software development is a particular methodology designed to help teams of developers effectively and efficiently build software. While Drupal itself has not been developed using an agile process, it does facilitate many of the agile practices. To learn more about agile, visit http://agilemanifesto.org/

However, our goal here is not to exemplify a particular methodology, but to discover how to write modules. It is easier to learn module development by first writing the module, and then learning how to write unit tests. It is easier for two reasons:

- SimpleTest (in spite of its name) is the least simple part of this article. It will have double the code-weight of our actual module.
- We will need to become acquainted with the APIs we are going to use in development before we attempt to write tests that assume knowledge of those APIs.

In regular module development, though, you may certainly choose to follow the TDD approach of writing tests first, and then writing the module. Let's now move on to the first step of creating a new module.

Creating a new module

Creating Drupal modules is easy. How easy? Easy enough that over 5,000 modules have been developed, and many Drupal developers are even PHP novices! In fact, the code in this article is an illustration of how easy module coding can be. We are going to create our first module with only one directory and two small files.

Module names

It goes without saying that building a new module requires naming the module. However, there is one minor ambiguity that ought to be cleared up at the outset: a Drupal module has two names.

- A human-readable name: This name is designed to be read by humans, and should be one or a couple of words long. The words should be capitalized and separated by spaces. For example, one of the most popular Drupal modules has the human-readable name Views.
  A less-popular (but perhaps more creatively named) Drupal 6 module has the human-readable name Eldorado Superfly.
- A machine-readable name: This name is used internally by Drupal. It can be composed of lower-case and upper-case letters, digits, and the underscore character (using upper-case letters in machine names is frowned upon, though). No other characters are allowed. The machine names of the above two modules are views and eldorado_superfly, respectively.

By convention, the two names ought to be as similar as possible. Spaces should be replaced by underscores. Upper-case letters should generally be changed to lower-case. Because of the convention of similar naming, the two names can usually be used interchangeably, and most of the time it is not necessary to specifically declare which of the two names we are referring to. In cases where the difference needs to be made (as in the next section), the authors will be careful to make it.

Where does our module go?

One of the less intuitive aspects of Drupal development is the filesystem layout. Where do we put a new module? The obvious answer would be to put it in the /modules directory alongside all of the core modules. As obvious as this may seem, the /modules folder is not the right place for your modules. In fact, you should never change anything in that directory. It is reserved for core Drupal modules only, and will be overwritten during upgrades.

The second, far less obvious place to put modules is in /sites/all/modules. This is the location where all unmodified add-on modules ought to go, and tools like Drush (a Drupal command line tool) will download modules to this directory. In some sense, it is okay to put modules here. They will not be automatically overwritten during core upgrades. However, as of this writing, /sites/all/modules is not the recommended place to put custom modules unless you are running a multi-site configuration and the custom module needs to be accessible on all sites.

The current recommendation is to put custom modules in the /sites/default/modules directory, which does not exist by default. This has a few advantages. One is that standard add-on modules are stored elsewhere, and this separation makes it easier for us to find our own code without sorting through clutter. There are other benefits (such as the loading order of module directories), but none will have a direct impact on us.

We will always be putting our custom modules in /sites/default/modules. This follows Drupal best practices, and also makes it easy to find our modules as opposed to all of the other add-on modules.

The one disadvantage of storing all custom modules in /sites/default/modules appears only under a specific set of circumstances. If you have Drupal configured to serve multiple sites off of one single instance, then the /sites/default folder is only used for the default site. What this means, in practice, is that modules stored there will not be loaded at all for other sites. In such cases, it is generally advised to move your custom modules into /sites/all/modules/custom.

Other module directories

Drupal does look in a few other places for modules. However, those places are reserved for special purposes.

Creating the module directory

Now that we know that our modules should go in /sites/default/modules, we can create a new module there.
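If you like working from a shell, the following commands are a small sketch of that step. They assume a Unix-like system with the Drupal root as the current directory:

    # Create the module directory and the two (initially empty) module files.
    mkdir -p sites/default/modules/first
    touch sites/default/modules/first/first.info
    touch sites/default/modules/first/first.module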
Modules can be organized in a variety of ways, but the best practice is to create a module directory in /sites/default/modules, and then place at least two files inside the directory: a .info (pronounced "dot-info") file and a .module ("dot-module") file. The directory should be named with the machine-readable name of the module. Similarly, both the .info and .module files should use the machine-readable name.

We are going to name our first module with the machine-readable name first, since it is our first module. Thus, we will create a new directory, /sites/default/modules/first, and then create a first.info file and a first.module file. Those are the only files we will need for our module.

For permissions, make sure that your webserver can read both the .info and .module files. It should not be able to write to either file, though.

In some sense, the only file absolutely necessary for a module is the .info file located at a proper place in the system. However, since the .info file simply provides information about the module, no interesting module can be built with just this file. Next, we will write the contents of the .info file.

Writing the .info file

The purpose of the .info file is to provide Drupal with information about a module: information such as the human-readable name, what other modules this module requires, and what code files this module provides.

A .info file is a plain text file in a format similar to the standard INI configuration file. A directive in the .info file is composed of a name, an equals sign, and a value:

    name = value

By Drupal's coding conventions, there should always be one space on each side of the equals sign.

Some directives use an array-like syntax to declare that one name has multiple values. The array-like format looks like this:

    name[] = value1
    name[] = value2

Note that there is no blank space between the opening square bracket and the closing square bracket.

If a value spans more than one line, it should be enclosed in quotation marks. Any line that begins with a ; (semi-colon) is treated as a comment, and is ignored by the Drupal INI parser. Drupal does not support INI-style section headers such as those found in the php.ini file.

To begin, let's take a look at a complete first.info file for our first module:

    ;$Id$
    name = First
    description = A first module.
    package = Drupal 7 Development
    core = 7.x
    files[] = first.module
    ;dependencies[] = autoload
    ;php = 5.2

This ten-line file is about as complex as a module's .info file ever gets. The first line is a standard. Every .info file should begin with ;$Id$. What is this? It is the placeholder for the version control system to store information about the file. When the file is checked into Drupal's CVS repository, the line will be automatically expanded to something like this:

    ;$Id: first.info,v 1.1 2009/03/18 20:27:12 mbutcher Exp $

This information indicates when the file was last checked into CVS, and who checked it in.

CVS is going away, and so is $Id$. While Drupal has been developed in CVS from the early days through Drupal 7, it is now being migrated to a Git repository. Git does not use $Id$, so it is likely that between the release of Drupal 7 and the release of Drupal 8, $Id$ tags will be removed. You will see all PHP and .info files beginning with the $Id$ marker. Once Drupal uses Git, those tags may go away.

The next couple of lines of interest in first.info are these:

    name = First
    description = A first module.
    package = Drupal 7 Development

The first two are required in every .info file.
The name directive is used to declare what the module's human-readable name is. The description provides a one or two-sentence description of what this module provides or is used for. Among other places, this information is displayed on the module configuration section of the administration interface in Modules. In the screenshot, the values of the name and description fields are displayed in their respective columns.

The third item, package, identifies which family (package) of modules this module is related to. Core modules, for example, all have the package Core. In the screenshot above, you can see the grouping package Core in the upper-left corner. Our module will be grouped under the package Drupal 7 Development to represent its relationship. As you may notice, package names are written as human-readable values. When choosing a human-readable module name, remember to adhere to the specifications mentioned earlier in this section.

The next directive is the core directive: core = 7.x. This simply declares which main-line version of Drupal is required by the module. All Drupal 7 modules will have the line core = 7.x.

Along with the core version, a .info file can also specify what version of PHP it requires. By default, Drupal 7 requires PHP 5.2 or newer. However, if one were to use, say, closures (a feature introduced in PHP 5.3), then the following line would need to be added:

    php = 5.3

Next, every .info file must declare which files in the module contain PHP functions, classes, or interfaces. This is done using the files[] directive. Our small initial module will only have one file, first.module. So we need only one files[] directive:

    files[] = first.module

More complex modules will often have several files[] directives, each declaring a separate PHP source code file. JavaScript, CSS, image files, and PHP files (like templates) that do not contain functions that the module needs to know about needn't be included in files[] directives. The point of the directive is simply to indicate to Drupal that these files should be examined by Drupal.

One directive that we will not use for this module, but which plays a very important role, is the dependencies[] directive. This is used to list the other modules that must be installed and active for this module to function correctly. Drupal will not allow a module to be enabled unless its dependencies have been satisfied. Drupal does not contain a directive to indicate that another module is recommended or is optional. It is the task of the developer to appropriately document this fact and make it known. There is currently no recommended best practice to provide such information.

Now we have created our first.info file. As soon as Drupal reads this file, the module will appear on our Modules page. In the screenshot, notice that the module appears in the DRUPAL 7 DEVELOPMENT package, and has the NAME and DESCRIPTION as assigned in the .info file. With our .info file completed, we can now move on and code our .module file.

Modules checked into Drupal's version control system will automatically have a version directive added to the .info file. This should typically not be altered.

Creating a module file

The .module file is a PHP file that conventionally contains all of the major hook implementations for a module. We will gain some practical knowledge of them. A hook implementation is a function that follows a certain naming pattern in order to indicate to Drupal that it should be used as a callback for a particular event in the Drupal system.
For Object-oriented programmers, it may be helpful to think of a hook as similar to the Observer design pattern. When Drupal encounters an event for which there is a hook (and there are hundreds of such events), Drupal will look through all of the modules for matching hook implementations. It will then execute each hook implementation, one after another. Once all hook implementations have been executed, Drupal will continue its processing.

In the past, all Drupal hook implementations had to reside in the .module file. Drupal 7's requirements are more lenient, but in most moderately sized modules, it is still preferable to store most hook implementations in the .module file. There are cases where hook implementations belong in other files. In such cases, the reasons for organizing the module in such a way will be explained.

To begin, we will create a simple .module file that contains a single hook implementation, one that provides help information:

    <?php
    // $Id$

    /**
     * @file
     * A module exemplifying Drupal coding practices and APIs.
     *
     * This module provides a block that lists all of the
     * installed modules. It illustrates coding standards,
     * practices, and API use for Drupal 7.
     */

    /**
     * Implements hook_help().
     */
    function first_help($path, $arg) {
      if ($path == 'admin/help#first') {
        return t('A demonstration module.');
      }
    }

Before we get to the code itself, we will talk about a few stylistic items. To begin, notice that this file, like the .info file, contains an $Id$ marker that CVS will replace when the file is checked in. All PHP files should have this marker following a double-slash-style comment: // $Id$.

Next, the preceding code illustrates a few of the important coding standards for Drupal.

Source code standards

Drupal has a thorough and strictly enforced set of coding standards. All core code adheres to these standards. Most add-on modules do, too. (Those that don't generally receive bug reports for not conforming.) Before you begin coding, it is a good idea to familiarize yourself with the standards as documented here: http://drupal.org/coding-standards. The Coder module can evaluate your code and alert you to any infringement upon the coding standards.

We will adhere to the Drupal coding standards. In many cases, we will explain the standards as we go along. Still, the definitive source for standards is the URL listed above, not our code here. We will not re-iterate the coding standards. The details can be found online. However, several prominent standards deserve immediate mention. I will just mention them here, and we will see examples in action as we work through the code.

- Indenting: All PHP and JavaScript files use two spaces to indent. Tabs are never used for code formatting.
- The <?php ?> processor instruction: Files that are completely PHP should begin with <?php, but should omit the closing ?>. This is done for several reasons, most notably to prevent the inclusion of whitespace from breaking HTTP headers.
- Comments: Drupal uses Doxygen-style (/** */) doc-blocks to comment functions, classes, interfaces, constants, files, and globals. All other comments should use the double-slash (//) comment. The pound sign (#) should not be used for commenting.
- Spaces around operators: Most operators should have a whitespace character on each side.
- Spacing in control structures: Control structures should have spaces after the name and before the curly brace. The bodies of all control structures should be surrounded by curly braces, even those of if statements with one-line bodies.
- Functions: Functions should be named in lowercase letters, using underscores to separate words. Later we will see how class method names differ from this.
- Variables: Variable names should be in all lowercase letters, using underscores to separate words. Member variables in objects are named differently.

As we work through examples, we will see these and other standards in action.
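Since the stated goal of this article's module is a block that lists the enabled modules, here is a minimal, hedged sketch of the two block hooks that work involves, written to follow the standards above and added to first.module below the help hook. Treat it as an illustration of hook naming, not as the chapter's finished code:

    /**
     * Implements hook_block_info().
     */
    function first_block_info() {
      $blocks = array();
      $blocks['list_modules'] = array(
        'info' => t('A listing of all of the enabled modules.'),
        'cache' => DRUPAL_NO_CACHE,
      );
      return $blocks;
    }

    /**
     * Implements hook_block_view().
     */
    function first_block_view($delta = '') {
      if ($delta == 'list_modules') {
        // module_list() returns the machine names of all enabled modules.
        $list = module_list();
        return array(
          'subject' => t('Enabled Modules'),
          'content' => theme('item_list', array('items' => $list)),
        );
      }
    }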


Building tiny Web-applications in Ruby using Sinatra

Packt
03 Sep 2009
5 min read
What's Sinatra?

Sinatra is not a framework but a library, i.e. a set of classes that allows you to build almost any kind of web-based solution (no matter what the complexity) in a very simple manner, on top of the abstracted HTTP layer it implements from Rack. When you code in Sinatra you're bound only by HTTP and your Ruby knowledge. Sinatra doesn't force anything on you, which can lead to awesome or evil code, in equal measures.

Sinatra apps are typically written in a single file. It starts up and shuts down nearly instantaneously. It doesn't use much memory and it serves requests very quickly. But it also offers nearly every major feature you expect from a full web framework: RESTful resources, templating (ERB, Haml/Sass, and Builder), mime types, file streaming, etags, development/production mode, and exception rendering. It's fully testable with your choice of test or spec framework. It's multithreaded by default, though you can pass an option to wrap actions in a mutex. You can add in a database by requiring ActiveRecord or DataMapper. And it uses Rack, running on Mongrel by default.

Blake Mizerany, the creator of Sinatra, says that it is better to learn Sinatra before Ruby on Rails:

"When you learn a large framework first, you're introduced to an abundance of ideas, constraints, and magic. Worst of all, they start you with a pattern. In the case of Rails, that's MVC. MVC doesn't fit most web-applications from the start or at all. You're doing yourself a disservice starting with it. Back into patterns, never start with them." – Reference here

Installing Sinatra

The simplest way to obtain Sinatra is through Rubygems. Open a command window in Windows and type:

    c:\> gem install sinatra

On Linux/OS X the command would be:

    sudo gem install sinatra

Installing its Dependencies

Sinatra depends on the Rack gem, which gets installed along with Sinatra. Installing Mongrel (a fast HTTP library and server for Ruby that is intended for hosting Ruby web applications of any kind using plain HTTP - http://mongrel.rubyforge.org/) is quite simple. In the already open command window, type:

    c:\> gem install mongrel

What are Routes?

The main feature of Sinatra is defining 'routes' as an HTTP verb for a path that executes an arbitrary block of Ruby code. Something like:

    verb 'path' do
      ... # return/render something
    end

Sinatra's routes are designed to respond to the HTTP request methods (GET, POST, PUT, DELETE). In Sinatra, a route is an HTTP method paired with a URL matching pattern. These URL handlers (also called "routing") can be used to match anything from a static string (such as /hello) to a string with parameters (/hello/:name) or anything you can imagine using wildcards and regular expressions. Each route is associated with a block. Let us look at an example:

    get '/' do
      .. show something ..
    end

    get '/hello/:name' do
      # The /hello portion matches that portion of the URL from the
      # request you made, and :name will absorb any other text you
      # give it and put it in the params hash under the key :name
    end

    post '/' do
      .. create something ..
    end

    put '/' do
      .. update something ..
    end

    delete '/' do
      .. delete something ..
    end

Routes are matched in the order they are defined. When a new request comes in, the first route that matches the request is invoked, i.e. the handler (the code block) attached to that route gets executed. For this reason, you should put your most specific handlers on top and your most vague handlers on the bottom.

A tiny web-application

Here's an example of a simple Sinatra application.
Write a Ruby program myapp1.rb and store it in the folder c:\sinatra_programs. Though the folder is named sinatra_programs, we are going to have only one Sinatra program in it. The program is:

    # myapp1.rb
    require 'sinatra'

Sinatra applications can be run directly:

    ruby myapp1.rb [-h] [-x] [-e ENVIRONMENT] [-p PORT] [-s HANDLER]

The above options are:

    -h   # help
    -p   # set the port (default is 4567)
    -e   # set the environment (default is development)
    -s   # specify rack server/handler (default is thin)
    -x   # turn on the mutex lock (default off; currently not used)

The article at http://gist.github.com/54177 states that using require 'rubygems' is wrong: it is an environmental issue and not an app issue. The article mentions that you might use:

    ruby -rubygems myapp1.rb

Another way is to use RUBYOPT (refer to the article http://rubygems.org/read/chapter/3). By setting the RUBYOPT environment variable to the value rubygems, you tell Ruby to load RubyGems every time it starts up. This is similar to the -rubygems option above, but you only have to specify this once (rather than each time you run a Ruby script).
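The two-line myapp1.rb above starts a server but defines no routes yet. A slightly fuller sketch, tying together the routes described earlier (the route bodies here are illustrative):

    # myapp1.rb
    require 'sinatra'

    get '/' do
      'Hello from Sinatra!'
    end

    get '/hello/:name' do
      # :name is available in the params hash
      "Hello, #{params[:name]}!"
    end

Run it with ruby myapp1.rb and point your browser at http://localhost:4567/ or http://localhost:4567/hello/world.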


Working with the Report Builder in Microsoft SQL Server 2008: Part 1

Packt
28 Oct 2009
16 min read
The Microsoft SQL Server 2008 Reporting Services Report Builder 2.0 tool can be installed from a standalone installer available at this Microsoft site, http://download.microsoft.com/download/a/f/6/af64f194-8b7e-4118-b040-4c515a7dbc46/ReportBuilder.msi. The same file is also available from a collection of download files when you access the Microsoft SQL Server 2008 Feature Pack, October 2008 at http://www.microsoft.com/downloads/details.aspx?FamilyId=228DE03F-3B5A-428A-923F-58A033D316E1&displaylang=en.

Report Builder overview

In the present version of SQL Server 2008 [Enterprise Evaluation edition] there are two Report Builders available: Report Builder 1.0, which has remained as a program that can be launched from the Report Manager, and the new Report Builder 2.0, which is a standalone report authoring tool that needs to be independently launched. Although Report Builder 1.0 can access Report Models built with Visual Studio 2008 and the Report Manager, it cannot be used to create reports using those models. It also does not work with reports generated by Visual Studio 2008/BIDS/Report Builder 2.0. The errors can be summarized as follows:

- When you try to access the Report Server 2008 from the link provided on the Report Builder 1.0 interface, you get the following error message:

    Specifying credentials in a URL is not supported

- When you try to open a report created using VS2008/BIDS/Report Builder 2.0 using the Open Report… and Open File… navigational items in Report Builder 1.0, you get the following error message:

    System.IO.StreamReader: The Report element was not found

- Report Builder 1.0 allows you to access Report Models created with VS2008/BIDS/Report Manager and even allows you to create a report in design view, but this report cannot be processed on the Report Server. If you try to do so, you get the following error message:

    MemoryStream length must be non-negative and less than 2^31-1-origin. Parameter name: offset; Remote GDI stream version: ?. Expected version: 11.0.1

In this article, the Report Builder 2.0 interface will be described along with the new features that are incorporated into this version. Report Builder 2.0 is admirably suited to address all items in the Report Definition Language of 2008. One of the important features of Report Builder 2.0 is the empowerment it provides business users to create ad hoc reports using the Report Models built on the databases they use.

In this article you will be learning mostly about the Report Builder 2.0 interface details and working with it to create reports or modify them. It may be noted that Report Builder generates 2008-compliant RDL files as described in http://download.microsoft.com/download/6/5/7/6575f1c8-4607-48d2-941d-c69622e11c32/RDL_spec_08.pdf and therefore cannot work with reports generated using 2005 technology.

Report Builder 2.0 user interface description

Report Builder is a report authoring tool, and the basic procedure for authoring a report consists of the following steps:

- Report planning
- Connecting to a source of data
- Extracting a dataset from the source
- Designing the report and data binding
- Previewing the report

Although deploying the report is not included in the above, Report Builder can deploy the report as well. It is not always necessary to deploy a completed report, as any part of a report definition file can be deployed. This makes modifying a report on the server very flexible.
In the following sections, the various parts of the Report Builder interface will be described, starting at the very top and going to the bottom.

The menu for file operations

Report Builder 2.0 can be accessed from Start | All Programs | Microsoft SQL Server 2008 Report Builder | Report Builder 2.0. This brings up the Report Builder 2.0 interface as shown, with the design area containing two icons: Table or Matrix and Chart. Each of these will launch a related wizard which will step you through the various tasks. The Report Builder 2.0 interface is very similar to Office 2007. More than one instance of Report Builder can be launched.

At the very top of the screen you have the undo and redo controls as well as a save icon. When you click on the save icon, the Save as Report window gets displayed. Here you provide a name for the report. The default save extension is *.rdl and the report will be saved to the report server. It may also be persisted to a folder on your machine.

Clicking on the Office Button (top left) opens a drop-down window shown in the following screenshot. In this window, you can carry out a number of tasks such as creating a new report, opening an existing report, saving a report, and saving a report with a different name. The Save button saves it to the default location seen earlier, and Save as invokes the same window to save the report with a different name, displaying the report server instance as the Save to location. The Recent Documents pane shows the more recent reports created with this tool. New allows you to create a new report.

When you click on Open, the Open Report window gets displayed with the default location http://Hodentek2:8080/ReportServer_SANGAM/My Reports. You will also notice the message: This folder is not available because the My Reports feature is not enabled on the computer. The Open Report window allows you to look for reports with the extension .rdl. Therefore, unless the My Reports feature is enabled, this window is unusable. This is supposed to be possible from Report Manager, but there are no controls in Report Manager that would do this. An alternative was suggested by one of the MSDN forum moderators (see http://social.msdn.microsoft.com/forums/en-US/sqlreportingservices/thread/6c695160-29e8-4185-be6d-5fe027a6975c/). Hands-on exercise (Part 2) will describe how you may enable My Reports. The idea of My Reports is similar to My Documents, where each user can keep his own reports.

When the Options button (in the previous screenshot) is clicked, it opens the Report Builder Options window with two tabbed pages, Settings and Resource, shown as follows. Here you can view, as well as modify, Report Builder settings. The defaults are more than adequate to work with the examples in this book. Clicking on the Resources button brings up an interesting window which enables you to interact with Microsoft regarding SSRS activities, concerns, community, and so on. If you are serious about Reporting Services, these are very valuable links. The About button, when clicked, provides you with Report Builder version information.

The ribbon

The main menu consists of the Home, Insert, and View menu items, which are part of the "ribbon". The ribbon, introduced by Microsoft in Office 2007, is actually a container for other toolbar items. The ribbon is the replacement for the classic menus and toolbars, and is supposed to be more efficient and discoverable by the user.
In fact, you see a lot more on the "ribbon" than in the classic menu.

Home

The next figure shows the Home menu with its toolbar arranged from left to right and divided into sections. The Run toolbar item with the title Views, when clicked, runs the report open in the design view. (In fact, even without a report open in the design view, the report can be run. The result would be the current date and time getting displayed in the center of the screen of an untitled report, which has just ExecutionTime as the only item in the report.) The Font, Paragraph, Border, and Number toolbar sections become enabled if parts of a report need editing. The formatting of textboxes in the report, the formatting of numbers in the report, and the alignment of components in the layout can all be independently managed using these toolbar items.

Insert

When you click on the Insert menu item on the "ribbon", the tabbed page for this item is displayed as shown in the following screenshot. It has four sections: Data Regions, Report Items, Subreports, and Header & Footer. These are all the normal items that are used either individually or together to make up a report. There can be more than one data region in a report.

Data Regions

In the Data Regions section you have both the Tablix (Table, Matrix, and List) and the graphic controls that can be bound to data: the Chart and the Gauge. Gauge is new in SQL Server Reporting Services 2008. The chart and gauge implementations are the offshoot of collaboration with Dundas (http://www.dundas.com/). Report Builder is built in such a way that the dataset must be defined before any of the data regions are added to the report body. For the purpose of describing the various data regions in this section, it is assumed (in order to get the screenshots shown here) that a dataset has been defined and the default wizards on the design surface have been removed.

Table

The Table is meant for displaying data retrieved from a database, either all data detailed, in groups, or a combination (some grouped and some detailed) of both. It has a fixed number of columns, which can be adjusted at design time. The table length expands to accommodate the rows. Data can be grouped by a single field or by multiple fields. The Expression designer can be used in grouping as well. The grouping is carried out by creating row groups. Static rows can be added for row headings (labels) and totals. Aggregates for groups can be added. Both detailed data as well as grouped data can be hidden initially, and the user can interactively reveal the data needed by drill-downs.

When you click on Insert | Table | Insert Table and then click on the design surface, you can add a table to the design area. The table appears as shown, with handles to adjust its dimensions. The table can be dragged to any other location on the design surface (the body of the report) as well. After placing the table, which by default has three columns and two rows, when you click on any other part of the design area you will see the table as shown. When you hover over the cell marked Data on the table, you will see a little icon. This icon is a minimized version of the dataset fields. The grayed-out feature that surrounds the table indicates the position of the rows and columns of the table. It also shows such other features as whether it is a detail or whether it is a group. In the case of a group, the feature would indicate the nesting schematically as well.
When you want to increase the size of a column or a row, you can drag the double-headed arrow that gets displayed when your cursor is placed between two columns or between two cells, as shown. When you click on the dataset icon in the cell Data, you get a drop-down list containing the fields in the dataset, as shown. You can choose any of the fields to occupy the cell you clicked, and the corresponding header will be added to the table. In this particular dataset there are nine fields, and you can choose any of them to occupy the cell.

When you right-click on a cell, a drop-down menu will be available. It can be used for the following:

- Work with the highlighted textbox (each cell of the table is a textbox), including to copy, cut, delete, and paste contents.
- Work with the properties of the textbox.
- Populate the textbox with an expression using the expression builder. The expression builder gets displayed when fx Expression is clicked.
- Use Select to select the body or the Tablix.
- Insert a new column or a new row. Columns can be added to the right or the left of the clicked cell, and rows can be added above or below the clicked cell.
- Delete columns and rows.
- Add a group. Both row and column groups can be added.

When you click on the properties of the textbox, the Text Box Properties window is displayed. The textbox has several properties, which are arranged on the left as a list with each item having its own page, as shown. The Help button on any of the pages will take you directly to the definition of the properties and is extremely useful.

In the General page, you can make changes to the elements in the Name, Value, and Sizing options page as shown. The Value is one which you choose among the column values (from the drop-down) from the dataset. You may also add text for the ToolTip, which will be displayed when the report is generated and this cell is accessed by hovering over it in the report. Alternatively, you can set the Value and ToolTip using fx, the button that brings up the Expression window.

In the Number page you can set the number and date data type formatting options for a cell that contains a number or a date. This is what you normally would find in most Microsoft products such as Excel and Access.

In the Alignment page you can choose the vertical and horizontal alignments as well as the padding of the textbox content from the edges of the cell. Similarly, the Font and Border properties are the same ones you find in most Microsoft products.

The Fill property lets you add or change the background color as well as add a graphic element. The graphic element can be embedded, external, or originate from a database (being one of the fields accessed). Expressions can be developed to set a desired color for the Fill.

The Visibility of the textbox can be any of Show, Hide, or Show or Hide based on an expression. In each of these cases, the visibility can be toggled when another table cell (which can be chosen) is clicked. This page also gives access to the Expression window, which is similar to the MS Access expression builder.

The Interactive Sorting page allows you to define interactive sorting options on the textbox.

Matrix

Matrix provides a functionality similar (roughly speaking, rows against columns) to cross-tab reports in MS Access (http://aspalliance.com/1041_Creating_a_Crosstab_Report_in_Visual_Studio_2005_Using_Crystal_Reports.all) and Pivot Table dynamic views (http://www.aspfree.com/c/a/MS-SQL-Server/On-Accessing-Data-From-An-OLAP-Server-Using-MS-Excel/3/).
The matrix should have at least one row group and one column group. The matrix can expand both ways to accommodate the data: horizontally for column groups and vertically for row groups. The matrix cells (intersections of rows and columns) display summary information (aggregates). When you click on Insert Matrix in the Insert menu and drop it on the design area of Report Builder 2.0, it gets displayed as shown in the following figure.

Now if you click inside the boundary of the (2x2) empty matrix, you will see more features of the matrix, as shown in the following screenshot. The basic elements are the ColumnGroup (Column Groups), the RowGroup (Row Groups), and the Data. The group information is also displayed, as shown by overlaid lines pointing to them. There needs to be a minimum of one group and one column for the matrix, and there could be a hierarchy of column and row groups.

The row and column group cells have their own properties, which can be displayed when you right-click on them, as shown in the next screenshot for the row group. When you right-click on the cell marked Rows, a drop-down menu pops up. In addition to the properties that you can set for the textbox in that cell, you have additional submenu items that work with the grouping and totaling. These are part of representing data in a matrix. Each of the Tablix cells for the Rows and Columns has the additional submenu items, which are shown here for the Rows; similar ones apply for the Columns as well. These are useful when you want to create nested groups. With the Matrix design interface in SQL Server 2005, this would not have been possible.

    Add Group
        Row Group
            Parent Group...
            Child Group...
            --------------------
            Adjacent Above
            Adjacent Below
    Row Group
        Delete Group
        Group Properties
    Add Total
        Before
        After

In addition to the above, each of the Rows and Columns cells has the following items as well. These specify how new columns and rows are inserted with reference to the current cell. The differences are due to the geometrical positions that are allowed for the new columns or rows.

For the "Columns" cell:

    Insert Column
        Inside Group-Left
        Inside Group-Right
        ------------------
        Outside Group-Left
        Outside Group-Right
    Insert Row
        Inside Group-Above
        Inside Group-Below
        ------------------
        Outside Group-Above

For the "Rows" cell:

    Insert Column
        Inside Group-Left
        Inside Group-Right
        ------------------
        Outside Group-Left
    Insert Row
        Inside Group-Above
        Inside Group-Below
        ------------------
        Outside Group-Above
        Outside Group-Below

Besides using a cell as a starting point, one could also use the rows as a whole or a column as a whole to add further structure, as shown in the next figure. Of course, you need to use the proper submenu option to arrive at a particular matrix structure. Clicking at the indicated points would let you choose the structure you want for your matrix. If you click at the location shown for the Tablix, you could choose to delete the whole matrix. The Tablix graphical arrangement gives you the maximum flexibility in extending the matrix in two dimensions.

List

The list data region repeats for each row of data. The List element provides a single container for the data, which can be used to generate what are called free-form reports. In this kind of report there is no rigid structure, such as a table, for the data. You can also place a list inside another list, or even a chart inside a list. You can drag a column from a dataset and drop it into the list.
You can work with the list using the properties of the Rectangle it contains as well as its Tablix properties. As described earlier, the design interface is very flexible, and you can leverage all the features provided by the Tablix structure, like displaying details and adding groups, either independent or nested. The properties pages described earlier allow you to sort and filter grouped data.

When you drop a List on the design surface, you will see just a single cell, as shown. You can change its dimensions to suit your needs. When you click on the List, you can access its handles, as shown.

When you add a List, there is one column and one row (just one cell). This can be extended in both directions by choosing the appropriate submenu items. These can be displayed by right-clicking on the handles, as shown.
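Several of the property pages described above (Value, ToolTip, Fill, Visibility) accept expressions built in the fx Expression window. A few representative SSRS expressions are shown below; the field names (OrderDate, LineTotal) are hypothetical and must be replaced with fields from your own dataset:

    =Fields!OrderDate.Value
    =Sum(Fields!LineTotal.Value)
    =Format(Fields!OrderDate.Value, "dd MMM yyyy")
    =IIf(Sum(Fields!LineTotal.Value) < 0, "Red", "Black")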


Magento Theme Development

Packt
24 Feb 2016
7 min read
In this article by Fernando J. Miguel, author of the book Magento 2 Development Essentials, we will learn the basics of theme development. Magento can be customized as per your needs because it is based on the Zend framework, adopting the Model-View-Controller (MVC) architecture as a software design pattern. When planning to create your own theme, the Magento theme process flow becomes a subject that needs to be carefully studied. Let's focus on the concepts that help you create your own theme.

(For more resources related to this topic, see here.)

The Magento base theme

The Magento Community Edition (CE) version 1.9 comes with a new theme named rwd that implements Responsive Web Design (RWD) practices. Magento CE's responsive theme uses a number of new technologies as follows:

- Sass/Compass: This is a CSS precompiler that provides reusable CSS that can even be organized well.
- jQuery: This is used for customization of JavaScript in the responsive theme. jQuery operates in the noConflict() mode, so it doesn't conflict with Magento's existing JavaScript library.

Basically, the folders that contain this theme are as follows:

- app/design/frontend/rwd
- skin/frontend/rwd

As you can see, all the files of the rwd theme are included in the app/design/frontend and skin/frontend folders:

- app/design/frontend: This folder stores all the .phtml visual files and .xml configuration files of all the themes.
- skin/frontend: This folder stores all the JavaScript, CSS, and image files for their respective app/design/frontend theme folders.

Inside these folders, you can see another important folder called base. The rwd theme uses some base theme features to be functional. How is this possible? Logically, Magento has distinct folders for every theme, but Magento is very smart in reusing code. Magento takes advantage of a fall-back system. Let's check how it works.

The fall-back system

The frontend of Magento allows designers to create new themes based on the basic theme, reusing the main code without changing its structure. The fall-back system allows us to create only the files that are necessary for the customization. To create the customization files, we have the following options:

- Create a new theme directory and write the entire new code
- Copy the files from the base theme and edit them as you wish

The second option could be more productive for study purposes. You will learn the basic structure by exercising the code edit. For example, let's say you want to change the header.phtml file. You can copy the header.phtml file from the app/design/frontend/base/default/template/page/html path to the app/design/frontend/custom_package/custom_theme/template/page/html path. In this example, if you activate custom_theme in the Magento admin panel, your custom_theme inherits all the structure from the base theme and applies your custom header.phtml to the theme.

Magento packages and design themes

Magento has the option to create design packages and themes, as you saw in the previous example of custom_theme. This is a smart functionality because in the same package you can create more than one theme. Now, let's take a deep look at the main folders that manage the theme structure in Magento.

The app/design structure

In the app/design structure, we have the following folders:

- adminhtml: In this folder, Magento keeps all the layout configuration files and .phtml structure of the admin area.
This piece of code belongs to the page.xml layout file. We can see the <default> handle defining the .css and .js files that will be loaded on the page. The page.xml file has the same name as its respective folder in app/design/frontend/rwd/default/template/page. This is an internal Magento control. Please keep this in mind: Magento works with a predefined file naming pattern, and keeping this in mind can help you avoid unnecessary errors.

The template folder

The template folder, taking rwd as a reference, is located at app/design/frontend/rwd/default/template. Every subdirectory of template controls a specific page of Magento. The template files are the .phtml files, a mix of HTML and PHP, and they are the layout structure files; page/1column.phtml is a typical example.

The locale folder

The locale folder holds all the theme-specific translations. Let's imagine that you want to create a specific translation file for the rwd theme. You can create a locale file at app/design/frontend/rwd/locale/en_US/translate.csv. The locale folder structure basically has one folder per language (en_US), and always uses the translate.csv filename. The app/locale folder in Magento is the main translation folder; you can take a look at it to understand it better. However, the locale folder inside the theme folder takes priority when Magento loads translations. For example, if you want to create a Brazilian Portuguese version of the theme, you have to duplicate the translate.csv file from app/design/frontend/rwd/locale/en_US/ to app/design/frontend/rwd/locale/pt_BR/. This will be very useful to those who use the theme and have to translate it in the future.

Creating new entries in translate

If you want to create a new entry in your translate.csv, first of all put this code in your PHTML file:

<?php echo $this->__('Translate test'); ?>

In the CSV file, you can then put the translation in this format: "Translate test","Translate test".

The SKIN structure

The skin folder basically holds the CSS and JavaScript files and images of the theme, and is located in skin/frontend/rwd/default. Remember that Magento works with a filename/folder naming pattern: the skin folder named rwd will work with the rwd theme folder. If Magento has rwd as the main theme and is looking for an image that is not in the skin folder, Magento will search for this image in the base skin folder. Remember also that Magento has a fall-back system: it keeps searching through the parent theme folders until it finds the correct file. Take advantage of this!
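As a quick illustration of the fall-back in practice, the header.phtml override described earlier can be prepared from a Unix-like shell as follows (custom_package and custom_theme are the placeholder names used above):

mkdir -p app/design/frontend/custom_package/custom_theme/template/page/html
cp app/design/frontend/base/default/template/page/html/header.phtml \
   app/design/frontend/custom_package/custom_theme/template/page/html/

Only the copied file needs to exist in the custom theme; every other template keeps falling back to base.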
CMS blocks and pages

Magento has a flexible theme system. Beyond Magento code customization, the admin can create blocks and content in the Magento admin panel. CMS (Content Management System) pages and blocks in Magento give you the power to embed HTML code in your pages.

Summary

In this article, we covered the basic concepts of Magento themes. These may be used to change the display of the website or its functionality, and themes are interchangeable between Magento installations.

Resources for Article:

Further resources on this subject: Preparing and Configuring Your Magento Website [article] Introducing Magento Extension Development [article] Installing Magento [article]

Creating efficient reports with Visual Studio

Packt
21 Oct 2009
5 min read
Reporting Services, Analysis Services, and Integration Services are the three pillars of Business Intelligence in Microsoft's vision, which continues to evolve. Reporting is a basic, albeit one of the most important, activities of an organization because it provides a specialized and customized view of the various forms of data (relational, text, XML, and so on) that live in data stores. The report is useful in making business decisions, scheduling business campaigns, or assessing the competition. The report itself may be required in hard copy in several document formats, such as DOC, HTML, PDF, and so on. Many times it is also required to be retrieved in an interactive form from the data store and viewed on a suitable interface, including a web browser. Microsoft SQL Server 2005 Reporting Services, popularly known by its acronym SSRS, provides all that is necessary to create and manage reports and deploy them on a report server, with output available in several document formats. The reader will greatly benefit from reading the several articles detailed in the author's Hodentek Blog. The content for those articles was developed using VS 2003, VS 2005, SQL 2000, and SQL 2005.

The content for the present tutorial uses Visual Studio 2008 Professional and a Microsoft SQL Server Compact 3.5 embeddable database for its data. In Visual Studio, a Report Design Wizard guides you through fashioning a report from your choices.

Create a Windows Project in VS 2008

Create a new project from File | New | Project. Provide a name instead of the default name (WindowsApplication1); it is changed to ReportDesign for this tutorial, as shown in the next figure. VS 2008 supports multi-version targeting. In the top right of the New Project window, you can see that this project targets the .NET Framework 2.0 and can be published to a .NET 2.0 website.

Slightly enlarge Form1. Drag and drop the Microsoft ReportViewer control, shown in the next figure, onto the form from the Toolbox. This has the same functionality as the ReportViewer control in VS 2005. The control will be housed on the form as shown in the next figure. You can display the tasks needed to configure the ReportViewer by clicking on the Smart Task, as shown in the same figure. The report will have all the usual functionality: print, save to different formats, navigating through pages, and so on.

Working with the Report Wizard

Now click on the Design a new report task. This opens the Report Wizard window as shown in the figure. Read the instructions on this page carefully. Click on the Next button. This displays the Data Source Configuration Wizard shown in the next figure.

Choosing a Data Source

The application can obtain data from several different resources. Click on the Database icon and then click on the Next button. This displays the window where you need to select a connection to the data source. If there are existing connections, you should be able to see them in the drop-down list box.

Making a Connection to Get Data

Click on the New Connection button. This brings up the Add Connection window showing a default connection to the Microsoft SQL Server Compact 3.5 .NET Framework Data Provider. It also shows the location to be My Computer. This source can be changed by clicking on the Change... button, which brings up the Change Data Source window where you can choose.
As found in this version, you have the following options:

Microsoft SQL Server: This lets you connect to SQL Server 2000 or 2005 using the .NET Framework Data Provider for SQL Server.
Microsoft SQL Server Compact 3.5: This lets you connect to a database file.
Microsoft SQL Server Database File: This lets you connect to a local Microsoft SQL Server instance, including SQL Server Express.

It is not explicitly stated, however, exactly which versions these are. For this tutorial, Compact 3.5 will be used (which also uses the .NET Framework Data Provider for Compact 3.5). Click on the OK button in the Change Data Source window.

The VS 2008 installation also installs a database file on the computer for SQL Server Compact 3.5. Click on the Browse button (you could also create a new file if you like; herein, an existing one is browsed). This brings up the Select SQL Server Compact 3.5 Database File window with the default location where the database file is parked, as shown in the next figure. Click on the Northwind icon in the window and click on the Open button. This updates the Add Connection window with this information, as shown in the next figure. You may test the connection by hitting the Test Connection button, which should display a successful outcome, as shown in the next figure. There is no need for a password, as you are the owner. Click OK twice and this will take you back to the Data Source Configuration Wizard, updating the connection information, which you may review as shown in the next figure. Click on the Next button. This brings up a Microsoft Visual Studio message window giving you the option to bring this data source into your project.
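For reference, what the wizard stores behind the scenes is a small connection string; a sketch of what it looks like for this file (the path shown is the usual default sample location for SQL Server Compact 3.5 and may differ on your machine) is:

Data Source=C:\Program Files\Microsoft SQL Server Compact Edition\v3.5\Samples\Northwind.sdf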

EAV model

Packt
10 Aug 2015
11 min read
In this article by Allan MacGregor, author of the book Magento PHP Developer's Guide - Second Edition, we cover details about EAV models, their usefulness in retrieving data, and the advantages they provide to merchants and developers. EAV stands for entity, attribute, and value, and it is probably the most difficult concept for new Magento developers to grasp. While the EAV concept is not unique to Magento, it is rarely implemented in modern systems. Additionally, the Magento implementation is not a simple one.

What is EAV?

In order to understand what EAV is and what its role within Magento is, we need to break down the parts of the EAV model:

Entity: This represents the data items (objects) inside Magento: products, customers, categories, and orders. Each entity is stored in the database with a unique ID.
Attribute: These are our object properties. Instead of having one column per attribute on the product table, attributes are stored in separate sets of tables.
Value: As the name implies, it is simply the value linked to a particular attribute.

This data model is the secret behind Magento's flexibility and power, allowing entities to add and remove new properties without having to make any changes to the code, templates, or the database schema. This model can be seen as a vertical way of growing our database (new attributes mean more rows), while the traditional model involves a horizontal growth pattern (new attributes mean more columns), which would result in a schema redesign every time new attributes are added. The EAV model not only allows for the fast evolution of our database, but is also more effective because it only works with non-empty attributes, avoiding the need to reserve additional space in the database for null values.

If you are interested in exploring and learning more about the Magento database structure, I highly recommend visiting www.magereverse.com.

Adding a new product attribute is as simple as going to the Magento backend and specifying the new attribute type, be it color, size, brand, or anything else. The opposite is true as well: we can get rid of unused attributes on our product or customer models. For more information on managing attributes, visit http://www.magentocommerce.com/knowledge-base/entry/how-do-attributes-work-in-magento.

The Magento Community Edition currently has eight different types of EAV objects:

Customer
Customer Address
Products
Product Categories
Orders
Invoices
Credit Memos
Shipments

The Magento Enterprise Edition has one additional type called RMA item, which is part of the Return Merchandise Authorization (RMA) system.

All this flexibility and power is not free; there is a price to pay. Implementing the EAV model results in having our entity data distributed across a large number of tables. For example, just the product model is distributed across around 40 different tables. The following diagram only shows a few of the tables involved in saving the information of Magento products.

Other major downsides of EAV are the loss of performance while retrieving large collections of EAV objects and an increase in database query complexity. As the data is more fragmented (stored in more tables), selecting a single record involves several joins. One way Magento works around this downside of EAV is by making use of indexes and flat tables. For example, Magento can save all the product information into a flat catalog table for easier and faster access.
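As an aside, here is a sketch (my example, not from the article) of what querying such a flat table looks like; in Magento CE 1.x the generated table is typically named catalog_product_flat_<store_id>, the flat catalog must be enabled and reindexed first, and the available columns depend on which attributes are included:

SELECT entity_id, name, price
FROM catalog_product_flat_1
ORDER BY name;

One simple SELECT with no joins, which is precisely what the EAV queries below cannot offer.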
Let's continue using Magento products as our example and manually build the query to retrieve a single product. If you have phpMyAdmin or MySQL Workbench installed on your development environment, you can experiment with the following queries. Each can be downloaded from the phpMyAdmin website at http://www.phpmyadmin.net/ and the MySQL Workbench website at http://www.mysql.com/products/workbench/, respectively.

The first table that we need to use is the catalog_product_entity table. We can consider this our main product EAV table, since it contains the main entity records for our products. Let's query the table by running the following SQL query:

SELECT * FROM `catalog_product_entity`;

The table contains the following fields:

entity_id: This is our product's unique identifier, used internally by Magento.
entity_type_id: Magento has several different types of EAV models; products, customers, and orders are just some of them. Identifying each of these by type allows Magento to retrieve the attributes and values from the appropriate tables.
attribute_set_id: Product attributes can be grouped locally into attribute sets. Attribute sets allow even further flexibility in the product structure, as products are not forced to use all available attributes.
type_id: There are several different types of products in Magento: simple, configurable, bundled, downloadable, and grouped products, each with unique settings and functionality.
sku: This stands for Stock Keeping Unit and is a number or code used to identify each unique product or item for sale in a store. This is a user-defined value.
has_options: This is used to identify if a product has custom options.
required_options: This is used to identify if any of the custom options are required.
created_at: This is the row creation date.
updated_at: This is the last time the row was modified.

Now we have a basic understanding of the product entity table. Each record represents a single product in our Magento store, but we don't have much information about that product beyond the SKU and the product type. So, where are the attributes stored? And how does Magento know the difference between a product attribute and a customer attribute?

For this, we need to take a look into the eav_attribute table by running the following SQL query:

SELECT * FROM `eav_attribute`;

As a result, we will see not only the product attributes, but also the attributes corresponding to the customer model, the order model, and so on. Fortunately, we already have a key to filter the attributes from this table. Let's run the following query:

SELECT * FROM `eav_attribute` WHERE entity_type_id = 4;

This query tells the database to retrieve only the attributes where the entity_type_id column is equal to the product entity_type_id (4). Before moving on, let's analyze the most important fields inside the eav_attribute table:

attribute_id: This is the unique identifier for each attribute and the primary key of the table.
entity_type_id: This relates each attribute to a specific EAV model type.
attribute_code: This is the name or key of our attribute, and is used to generate the getters and setters for our magic methods.
backend_model: This manages loading and storing data into the database.
backend_type: This specifies the type of value stored in the backend (database).
backend_table: This is used to specify if the attribute should be stored in a special table instead of the default EAV table.
frontend_model: This handles the rendering of the attribute element in a web browser.
frontend_input: Similar to the frontend model, the frontend input specifies the type of input field the web browser should render.
frontend_label: This is the label/name of the attribute as it should be rendered by the browser.
source_model: These are used to populate an attribute with possible values. Magento comes with several predefined source models for countries, yes or no values, regions, and so on.

Retrieving the data

At this point, we have successfully retrieved a product entity and the specific attributes that apply to that entity. Now it's time to start retrieving the actual values. In order to simplify the example (and the query) a little, we will only try to retrieve the name attribute of our products.

How do we know which table our attribute values are stored in? Well, thankfully, Magento follows a naming convention for its tables. If we inspect our database structure, we will notice that there are several tables using the catalog_product_entity prefix:

catalog_product_entity
catalog_product_entity_datetime
catalog_product_entity_decimal
catalog_product_entity_int
catalog_product_entity_text
catalog_product_entity_varchar
catalog_product_entity_gallery
catalog_product_entity_media_gallery
catalog_product_entity_tier_price

Wait! How do we know which is the right table to query for our name attribute values? If you were paying attention, I already gave you the answer. Remember that the eav_attribute table had a column called backend_type? Magento EAV stores each attribute in a different table based on the backend type of that attribute. If we want to confirm the backend type of our name attribute, we can do so by running the following code:

SELECT * FROM `eav_attribute` WHERE `entity_type_id` = 4 AND `attribute_code` = 'name';

As a result, we should see that the backend type is varchar and that the values for this attribute are stored in the catalog_product_entity_varchar table. Let's inspect this table. The catalog_product_entity_varchar table is formed by only six columns:

value_id: This is the attribute value's unique identifier and primary key.
entity_type_id: This is the entity type ID to which this value belongs.
attribute_id: This is the foreign key that relates the value to our eav_attribute table.
store_id: This is the foreign key matching an attribute value with a store view.
entity_id: This is the foreign key relating to the corresponding entity table, in this case, catalog_product_entity.
value: This is the actual value that we want to retrieve.

Depending on the attribute configuration, we can have it as a global value, meaning that it applies across all store views, or as a value per store view.

Now that we finally have all the tables that we need to retrieve the product information, we can build our query:

SELECT p.entity_id AS product_id, var.value AS product_name, p.sku AS product_sku
FROM catalog_product_entity p, eav_attribute eav, catalog_product_entity_varchar var
WHERE p.entity_type_id = eav.entity_type_id
  AND var.entity_id = p.entity_id
  AND eav.attribute_code = 'name'
  AND eav.attribute_id = var.attribute_id;

From our query, we should see a result set with three columns: product_id, product_name, and product_sku. Let's step back for a second: in order to get product names and SKUs with raw SQL, we had to write a five-line query, and it only retrieved two values from our products, from one single EAV value table. If we wanted to retrieve a numeric field such as price, or a text value such as the product description, we would need additional joins to other value tables.
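To make that concrete, here is a sketch (assuming a stock Magento CE 1.x schema; this is not a query from the article) of the extra joins needed to also pull the price attribute, which has a decimal backend type:

SELECT
    p.entity_id   AS product_id,
    var.value     AS product_name,
    dec_val.value AS product_price
FROM catalog_product_entity p
JOIN eav_attribute name_att
  ON name_att.entity_type_id = p.entity_type_id
 AND name_att.attribute_code = 'name'
JOIN catalog_product_entity_varchar var
  ON var.entity_id = p.entity_id
 AND var.attribute_id = name_att.attribute_id
JOIN eav_attribute price_att
  ON price_att.entity_type_id = p.entity_type_id
 AND price_att.attribute_code = 'price'
JOIN catalog_product_entity_decimal dec_val
  ON dec_val.entity_id = p.entity_id
 AND dec_val.attribute_id = price_att.attribute_id;

Every additional attribute means another pair of joins, which is exactly why the query complexity grows so quickly.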
If we didn't have an ORM in place, maintaining Magento would be almost impossible. Fortunately, we do have an ORM in place, and most likely, you will never need to deal with raw SQL to work with Magento. That said, let's see how we can retrieve the same product information by using the Magento ORM:

Our first step is going to be to instantiate a product collection:

$collection = Mage::getModel('catalog/product')->getCollection();

Then we will specifically tell Magento to select the name attribute:

$collection->addAttributeToSelect('name');

Then, we will ask it to sort the collection by name:

$collection->setOrder('name', 'asc');

Finally, we will tell Magento to load the collection:

$collection->load();

The end result is a collection of all products in the store, sorted by name. We can inspect the actual SQL query by running the following code:

echo $collection->getSelect()->__toString();

In just a few lines of code, we are telling Magento to grab all the products in the store, to specifically select the name, and finally to order the products by name. The last line, $collection->getSelect()->__toString(), allows us to see the actual query that Magento is executing on our behalf. The actual query being generated by Magento is as follows:

SELECT `e`.*, IF(at_name.value_id > 0, at_name.value, at_name_default.value) AS `name`
FROM `catalog_product_entity` AS `e`
LEFT JOIN `catalog_product_entity_varchar` AS `at_name_default`
  ON (`at_name_default`.`entity_id` = `e`.`entity_id`)
  AND (`at_name_default`.`attribute_id` = '65')
  AND `at_name_default`.`store_id` = 0
LEFT JOIN `catalog_product_entity_varchar` AS `at_name`
  ON (`at_name`.`entity_id` = `e`.`entity_id`)
  AND (`at_name`.`attribute_id` = '65')
  AND (`at_name`.`store_id` = 1)
ORDER BY `name` ASC

As we can see, the ORM and the EAV models are wonderful tools that not only put a lot of power and flexibility in the hands of developers, but also do it in a way that is comprehensive and easy to use.

Summary

In this article, we learned about EAV models and how they are structured to provide Magento with the data flexibility and extensibility that both merchants and developers can take advantage of.

Resources for Article:

Further resources on this subject: Creating a Shipping Module [article] Preparing and Configuring Your Magento Website [article] Optimizing Magento Performance — Using HHVM [article]

Exporting data from MS Access 2003 to MySQL

Packt
07 Oct 2009
4 min read
Introduction

It is assumed that you have a working copy of MySQL that you can use to work with this article. The MySQL version used in this article came with the XAMPP download. XAMPP is an easy to install (and use) Apache distribution containing MySQL, PHP, and Perl. The distribution used in this article is XAMPP for Windows; you can find its documentation on the XAMPP website. Here is a screenshot of the XAMPP control panel, where you can turn the services on and off and carry out other administrative tasks.

You need to follow the steps indicated here:

Create a database in MySQL to which you will export a table from Microsoft Access 2003
Create an ODBC DSN that helps you connect Microsoft Access to MySQL
Export the table or tables
Verify the exported items

Creating a database in MySQL

You can create a database in MySQL by using the CREATE DATABASE command, or by using a suitable graphical user interface such as MySQL Workbench. You will have to refer to the documentation that matches your version of MySQL; herein, the following version was used. The next listing shows how a database named TestMove was created in MySQL, starting from the bin folder of the MySQL program folder. Follow the commands and the responses from the computer. Listing 1 and the folders reflect my computer; you will find the equivalents in your own installation directory. The databases you will be seeing will be different from what you see here, except for those created by the installation.

Listing 1: Login and create a database

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\Documents and Settings\Jayaram Krishnaswamy>cd\
C:\>cd xampp\mysql\bin
C:\xampp\mysql\bin>mysql -h localhost -u root -p
Enter password: *********
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.30-community MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cdcol              |
| expacc             |
| mengerie           |
| mydb               |
| mysql              |
| phpmyadmin         |
| test               |
| testdemo           |
| webauth            |
+--------------------+
10 rows in set (0.23 sec)

mysql> create database TestMove;
Query OK, 1 row affected (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cdcol              |
| expacc             |
| mengerie           |
| mydb               |
| mysql              |
| phpmyadmin         |
| test               |
| testdemo           |
| testmove           |
| webauth            |
+--------------------+
11 rows in set (0.00 sec)

mysql>

The login details that work error-free are shown above. localhost is preferred as the host name over either the machine name (in this case, Hodentek2) or the IP address. The first show databases command does not display the TestMove database we just created; it appears in the response to the second show databases command. In Windows, the commands are not case-sensitive.

Creating an ODBC DSN to connect to MySQL

When you install from XAMPP, you will also be installing an ODBC driver for MySQL for the version of MySQL included in the bundle. In the MySQL version used for this article, the driver version is MySQL ODBC 5.1 and the filename is MyODBC5.dll. Click Start | Control Panel | Administrative Tools | Data Sources (ODBC) and open the ODBC Data Source Administrator window as shown. The default tab is User DSN; change to System DSN as shown here. Click the Add... button to open the Create New Data Source window as shown. Scroll down, choose MySQL ODBC 5.1 Driver as the driver, and click Finish. The MySQL Connector/ODBC Data Source Configuration window shows up.
You will have to provide a Data Source Name (DSN) and a description. The server is localhost. You must have your user name and password information to proceed further. The database is the name of the database you created earlier (TestMove), and it should show up in the drop-down list if the rest of the information is correct. Accept the default port. If all the information is correct, the Test button gets enabled. Test the connection using the Test button; you should get a response as shown. Click the OK button on the Test Result window, then click OK on the MySQL Connector/ODBC Data Source Configuration window. There are a number of other flags that you can set using the Details button; the defaults are acceptable for this article. You have now successfully created a System DSN, AccMySQL, as shown in the next window. Click OK.

Verify the contents of TestMove

TestMove is a newly created database in MySQL and, as such, it is empty, as you can verify in the following listing.

Listing 2: Database TestMove is empty

mysql> use testmove;
Database changed
mysql> show tables;
Empty set (0.00 sec)

mysql>
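Once you have exported a table from Access through the AccMySQL DSN, the same prompt can be used to confirm the transfer; the table name below is just a placeholder for whichever table you export:

mysql> show tables;
mysql> describe your_exported_table;
mysql> select count(*) from your_exported_table;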

Shipping and Tax Calculations with PHP 5 Ecommerce

Packt
20 Jan 2010
8 min read
Shipping

Shipping is a very important aspect of an e-commerce system; without it, customers will not accurately know the cost of their order. The only situation where we wouldn't want to include shipping costs is where we always offer free shipping. However, in that situation, we could either add provisions to ignore shipping costs, or we could set all values to zero and remove references to shipping costs from the user interface.

Shipping methods

The first requirement for calculating shipping costs is a shipping method. We may wish to offer a number of different shipping methods to our customers, such as standard shipping, next-day shipping, international shipping, and so on. The system will require a default shipping method, so that when the customer visits their basket, they see shipping costs calculated based on the default method. There should be a suitable drop-down list on the basket page containing the list of shipping methods; when this is changed, the costs in the basket should be updated to reflect the selected method.

We should store the following details for each shipping method:

An ID number
A name for the shipping method
If the shipping method is active or not, indicating if it should be selectable by customers
If the shipping method is the default method for the store
A default shipping cost, which would:
  - Be pre-populated in a suitable field when creating new products; when the product is created through the administration interface, the shipping cost for the product is stored with the product.
  - Automatically be assigned to existing products in a store when a new shipping method is created in a store that already contains products.

This could be suitably stored in our database as the following fields:

ID: Integer, primary key, auto increment. The ID number for the shipping method.
Name: Varchar. The name of the shipping method.
Active: Boolean. Indicates if the shipping method is active.
Is_default: Boolean. Indicates if the shipping method is the store's default.
Default_cost: Float. The default cost for products for this shipping method.

This can be represented in the database using the following SQL:

CREATE TABLE `shipping_methods` (
  `ID` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `name` VARCHAR(50) NOT NULL,
  `active` BOOL NOT NULL,
  `is_default` BOOL NOT NULL,
  `default_cost` DOUBLE NOT NULL,
  INDEX (`active`, `is_default`)
) ENGINE = INNODB COMMENT = 'Shipping methods';

Shipping costs

There are several different ways to calculate the cost of shipping products to customers:

We could associate a cost with each product for each shipping method we have in our store
We could associate costs for each shipping method with ranges of weights, and either charge the customer based on the weight-based shipping cost of each product combined, or based on the combined weight of the order
We could base the cost on the customer's delivery address

The exact methods used, and the way they are used, depend on the exact nature of the store, as there are implications to each. If we were to use location-based shipping cost calculations, then the customer would not be aware of the total cost of their order until they entered their delivery address. There are a few ways this can be avoided: the system could assume a default delivery location and associated costs, and then update the customer's delivery cost at a later stage.
Alternatively, if we enabled delivery methods for different locations or countries, we could associate the appropriate costs to these methods, although this does, of course, rely on the customer selecting the correct shipping method for their order; appropriate notifications to the customer would be required to ensure that they select the correct one.

For this article we will implement:

Weight-based shipping costs: Here the cost of shipping is based on the weight of the products.
Product-based shipping costs: Here the cost of shipping is set on a per-product basis for each product in the customer's basket.

We will also discuss location-based shipping costs, and look at how we might implement them. To account for international or long-distance shipping, we will use varying shipping methods; perhaps we could use:

Shipping within state X
Shipping outside of state X
International shipping (this could be broken down per continent if we wanted, without imposing on the customer too much)

Product-based shipping costs

Product-based shipping costs simply require each product to have a shipping cost associated with it for each shipping method in the store. As discussed earlier, when a new method is added to an existing store, a default value will initially be used, so in theory the administrator only needs to alter products whose shipping costs shouldn't be the default cost; when creating new products, the relevant text box for the shipping cost for that method will have the default cost pre-populated.

To facilitate these costs, we need a new table in our database storing:

Product IDs
Shipping method IDs
Shipping costs

The following SQL represents this table in our database:

CREATE TABLE `shipping_costs_product` (
  `shipping_id` int(11) NOT NULL,
  `product_id` int(11) NOT NULL,
  `cost` float NOT NULL,
  PRIMARY KEY (`shipping_id`, `product_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

Weight-based shipping costs

Depending on the store being operated from our framework, we may need to base shipping costs on the weights of products. If a particular courier for a particular shipping method charges based on weight, then there isn't any point in creating costs for each product for that shipping method. Our framework can calculate the shipping costs based on the weight ranges and costs for the method, and the weight of the product. Within our database we would need to store:

The shipping method in question
A lower bound for the product weight, so that we know which cost to apply to a product
A cost associated with anything between this and the next weight bound

The table below illustrates these fields in our database:

ID: Integer, primary key, auto increment. A unique reference for the weight range.
Shipping_id: Integer. The shipping method the range applies to.
Lower_weight: Float. Used for working out which products this weight range cost applies to.
Cost: Float. The shipping cost for a product of this weight.

The following SQL represents this table (a cost lookup sketch follows below):

CREATE TABLE `shipping_costs_weight` (
  `ID` int(11) NOT NULL auto_increment,
  `shipping_id` int(11) NOT NULL,
  `lower_weight` float NOT NULL,
  `cost` float NOT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1;
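Here is a minimal sketch of how these weight bands could be consumed (the helper name and the use of PDO are my assumptions; the framework in the book would go through its own database layer). It selects the band whose lower bound sits closest below the given weight:

<?php
// Sketch: look up a weight-based shipping cost from the
// shipping_costs_weight table defined above.
function getWeightBasedCost(PDO $db, $shippingMethodId, $weight)
{
    $sql = 'SELECT cost FROM shipping_costs_weight
            WHERE shipping_id = :method AND lower_weight <= :weight
            ORDER BY lower_weight DESC
            LIMIT 1';
    $statement = $db->prepare($sql);
    $statement->execute(array(
        ':method' => $shippingMethodId,
        ':weight' => $weight,
    ));
    $cost = $statement->fetchColumn();

    // No matching band: treat the shipping as free (or flag it as a
    // configuration error, depending on store policy).
    return ($cost === false) ? 0.0 : (float) $cost;
}
?>

Calling this once per product, or once with the combined basket weight, covers the two weight-based charging options described earlier.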
To think about: location-based shipping costs

One thing we should still think about is location-based shipping costs, and how we might implement them. There are two primary ways in which we can do this:

Assign shipping costs, or cost surpluses/reductions, to delivery addresses (either countries or states) and shipping methods
Calculate costs using third-party service APIs

These two methods share one issue, which is why we are not going to implement them: the costs are calculated late in the checkout process. We want our customers to be well informed and aware of all of their costs as early as possible. As mentioned earlier, however, we could get round this by assuming a default delivery location and providing customers with a guideline shipping cost, which would be subject to change based on their delivery address. Alternatively, we could allow customers to select their delivery region from a drop-down list on the main "shopping basket" page. This way, they would know the costs right away.

Regional shipping costs

We could look at storing:

Shipping method IDs
Region types (states or countries)
Region values (an ID corresponding to a list of states or countries)
A priority (in some cases, we may need to consider only the state delivery costs and not the country costs; in other cases, it may be the other way around)
The associated cost changes (this could be a positive or negative value to be added to a product's delivery cost, as calculated by the other shipping systems already; a sketch of this lookup follows below)

By doing this, we can then combine the delivery address with the products and look up a price alteration, which is applied to the product's already calculated delivery cost. Ideally, we would use all of the shipping cost calculation systems discussed, to make something as flexible as possible, based on the needs of a particular product, a particular shipping method or courier, or a particular store or business.

Third-party APIs

The most accurate method of charging delivery costs, encompassing weights and delivery addresses, is via APIs provided by the couriers themselves, such as UPS. The following web pages may be of reference:

http://www.ups.com/onlinetools
http://answers.google.com/answers/threadview/id/429083.html

Using such an API means that our shipping costs would be accurate, assuming our weight values were correct for our products, and we would not overcharge or undercharge customers for shipping. One additional consideration that third-party APIs may require would be the dimensions of products, if their costs are also based on product sizes.
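As a sketch of the regional adjustment lookup referenced above (the shipping_costs_regional table here is hypothetical; it simply mirrors the bullet list and is not defined in this article):

<?php
// Hypothetical sketch: apply a regional cost change on top of a cost
// already produced by the product- or weight-based calculations.
function applyRegionalAdjustment(PDO $db, $shippingMethodId, $regionType, $regionValue, $baseCost)
{
    $sql = 'SELECT cost_change FROM shipping_costs_regional
            WHERE shipping_id = :method
              AND region_type = :type
              AND region_value = :value
            ORDER BY priority ASC
            LIMIT 1';
    $statement = $db->prepare($sql);
    $statement->execute(array(
        ':method' => $shippingMethodId,
        ':type'   => $regionType,   // e.g. 'state' or 'country'
        ':value'  => $regionValue,  // ID of the state or country
    ));
    $change = $statement->fetchColumn();

    // No regional rule found: the base cost stands unchanged.
    return $baseCost + (($change === false) ? 0.0 : (float) $change);
}
?>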

Personalizing Vim

Packt
14 May 2010
12 min read
(Read more interesting articles on Hacking Vim 7.2 here.)

Some of these tasks contain more than one recipe, because there are different aspects of personalizing Vim for that particular task. It is you, the reader, who decides which recipes (or parts of them) you would like to read and use.

Before we start working with Vim, there are some things that you need to know about your Vim installation, such as where to find the configuration files.

Where are the configuration files?

When working with Vim, you need to know a range of different configuration files. The location of these files depends on where you have installed Vim and the operating system that you are using. In general, there are three configuration files that you must know where to find:

vimrc
gvimrc
exrc

The vimrc file is the main configuration file for Vim. It exists in two versions: global and personal. The global vimrc file is placed in the folder where all of your Vim system files are installed. You can find out the location of this folder by opening Vim and executing the following command in normal mode:

:echo $VIM

Examples could be:

Linux: /usr/share/vim/vimrc
Windows: c:\program files\vim\vimrc

The personal vimrc file is placed in your home directory. The location of the home directory is dependent on your operating system. Vim was originally meant for Unix systems, so the personal vimrc file is set to be hidden by adding a dot as the first character of the filename. This normally hides files on Unix, but not on Microsoft Windows; instead, the vimrc file is prepended with an underscore on those systems. So, examples would be:

Linux: /home/kim/.vimrc
Windows: c:\documents and settings\kim\_vimrc

Whatever you change in the personal vimrc file will overrule any previous setting made in the global vimrc file. This way, you can modify the entire configuration without ever having access to the global vimrc file. You can find out what Vim considers to be the home directory on your system by executing the following command in normal mode:

:echo $HOME

Another way of finding out exactly which vimrc file you use as your personal file is by executing the following command in normal mode:

:echo $MYVIMRC

The vimrc file contains ex (the vi predecessor) commands, one on each line, and is the default place to add modifications to the Vim setup. In the rest of the article, this file is just called vimrc. Your vimrc can use other files as an external source for configurations. In the vimrc file, you use the source command like this:

source /path/to/external/file

Use this to keep the vimrc file clean and your settings more structured. (Learn more about how to keep your vimrc clean in Appendix B, Vim Configuration Alternatives.)
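For instance, a minimal personal vimrc split across topic files might look like this (the file names and paths are only examples, not part of the original recipe):

" keep Vim defaults rather than strict vi compatibility
set nocompatible
" turn on syntax highlighting
syntax on
" font settings, kept in their own file
source ~/.vim/fonts.vim
" color scheme and highlighting, kept separately
source ~/.vim/colors.vim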
The gvimrc file is a configuration file specifically for Gvim. It resembles the vimrc file previously described, and it exists as a personal version as well as a global version, placed in the same locations. For example:

Linux: /home/kim/.gvimrc and /usr/share/vim/gvimrc
Windows: c:\documents and settings\kim\_gvimrc and c:\program files\vim\gvimrc

This file is used for GUI-specific settings that only Gvim will be able to use. In the rest of the article, this file is called gvimrc. The gvimrc file does not replace the vimrc file, but is simply used for configurations that only apply to the GUI version of Vim. In other words, there is no need to have your configurations duplicated in both the vimrc file and the gvimrc file.

The exrc file is a configuration file that is only used for backwards compatibility with the old vi/ex editor. It is placed in the same locations (both global and personal) as vimrc, and is used in the same way. However, it is hardly used anymore, except if you want to use Vim in a vi-compatible mode.

Changing the fonts

In regular Vim, there is not much to do when it comes to changing the font, because the font simply follows that of the terminal. In Gvim, however, you are given the ability to change the font as much as you like. The main command for changing the font in Linux is:

:set guifont=Courier\ 14

Here, Courier can be exchanged with the name of any font that you have, and 14 with any font size you like (size in points). For changing the font in Windows, use the following command:

:set guifont=Courier:h14

If you are not sure whether a particular font is available on the computer, you can add an alternative font at the end of the command by placing a comma between the two fonts. For example:

:set guifont=Courier\ New\ 12,Arial\ 10

If the font name contains a whitespace or a comma, you will need to escape it with a backslash. For example:

:set guifont=Courier\ New\ 12

This command sets the font to Courier New size 12, but only for this session. If you want to have this font every time you edit a file, the same command has to be added to your gvimrc file (without the : in front of set). In Gvim on Windows, Linux (using GTK+), Mac OS, or Photon, you can get a font selection window shown if you use this command: :set guifont=*

If you tend to use a lot of different fonts depending on what you are currently working on (code, text, logfiles, and so on), you can set up Vim to use the correct font according to the file type. For example, if you want to set the font to Arial size 12 every time a normal text file (.txt) is opened, this can be achieved by adding the following line to your vimrc file:

autocmd BufEnter *.txt set guifont=Arial\ 12

The window of Gvim will resize itself every time the font is changed. This means that if you use a smaller font, you will also (by default) have a smaller window. You will notice this right away if you add several different file type commands like the one previously mentioned, and then open files of different types. Whenever you switch to a buffer with another file type, the font will change, and hence the window size too. You can find more information about changing fonts in the Vim help system under :help 'guifont'.

Changing the color scheme

Often, when working in a console environment, you have only a black background and white text in the foreground. This is, however, both dull and dark to look at; some colors would be desirable. By default, you have the same colors in console Vim as in the console you opened it from. However, Vim gives its users the opportunity to change the colors it uses. This is mostly done with a color scheme file. These files are usually placed in a directory called colors wherever you have installed Vim. You can easily switch between the installed color schemes with the command:

:colorscheme mycolors

Here, mycolors is the name of one of the installed color schemes. If you don't know the names of the installed color schemes, you can place the cursor after writing:

:colorscheme

Now you can browse through the names by pressing the Tab key. When you find the color scheme you want, press the Enter key to apply it. The color scheme applies not only to the foreground and background colors, but also to the way code is highlighted, how errors are marked, and other visual markings in the text.
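If you settle on a scheme, you can make it permanent in your vimrc or gvimrc; here is a small sketch (my own example, using the desert scheme that ships with standard Vim installs) that falls back gracefully if the chosen scheme is missing:

try
    colorscheme desert
catch /E185/
    " E185: cannot find color scheme
    colorscheme default
endtry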
You will find that some color schemes are very alike, with only minor differences between them. The reason for this is that the color schemes are user-supplied: if a user did not like one of the color settings in a scheme, he or she could just change that single setting and re-release the color scheme under a different name. Play around with the different color schemes and find the one you like. Then test it in the situations where you would normally use it, and see if you still like all the color settings. While learning Basic Vim Scripting, we will get back to how you can change a color scheme to fit your needs perfectly.

Personal highlighting

In Vim, the feature of highlighting things is called matching. With matching, you can make Vim mark almost any combination of letters, words, numbers, sentences, and lines. You can even select how it should be marked (errors in red, important words in green, and so on). Matching is done with the following command:

:match Group /pattern/

The command takes two arguments. The first one is the name of the color group that you will use in the highlight. Compared to a color scheme, which affects the entire color setup, a color group is a rather small combination of background (or foreground) colors that you can use for things such as matches. When Vim is started, a wide range of color groups are set to default colors, depending on the color scheme you have selected. To see a complete list of color groups, use the command: :so $VIMRUNTIME/syntax/hitest.vim

The second argument is the actual pattern you want to match. This pattern is a regular expression, and can vary from very simple to extremely complex, depending on what you want to match. A simple example of the match command in use would be:

:match ErrorMsg /^Error/

This command looks for the word Error at the beginning of all lines (anchored with ^). If a match is found, it will be marked with the colors of the ErrorMsg color group (typically white text on a red background). If you don't like any of the available color groups, you can always define your own. The command to do this is as follows:

:highlight MyGroup ctermbg=red guibg=red ctermfg=yellow guifg=yellow term=bold

This command creates a color group called MyGroup with a red background and yellow text, in both the console (Vim) and the GUI (Gvim). You can change the following options according to your preferences:

ctermbg: background color in the console
guibg: background color in Gvim
ctermfg: text color in the console
guifg: text color in Gvim
gui: font formatting in Gvim
term: font formatting in the console (for example, bold)

If you use the name of an existing color group, you will alter that group for the rest of the session.
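Putting the two commands together, a quick sketch (my own example) that defines a group and immediately uses it:

:highlight MyGroup ctermbg=red ctermfg=yellow guibg=red guifg=yellow
:match MyGroup /FIXME/

Every occurrence of FIXME in the buffer is now shown in yellow on red until the match is replaced or cleared.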
When using the match command, the given pattern will be matched until you perform a new match or execute the following command:

:match NONE

The match command can only match one pattern at a time, so Vim provides you with two extra commands to match up to three patterns at a time. The commands are easy to remember, because their names resemble that of the match command:

:2match
:3match

You might wonder what all this matching is good for, as it can often seem quite useless. Here are a few examples to show the strength of matching.

Example 1: Mark characters after a certain column

In mails, it is a common rule that you do not write lines longer than 74 characters (a rule that also applies to some older programming languages, such as Fortran 77). In a case like this, it would be nice if Vim could warn you when you reached this specific number of characters. This can simply be done with the following command:

:match ErrorMsg /\%>73v.\+/

Here, every character after the 73rd column will be marked as an error. This match is a regular expression that, when broken down, consists of:

\%>: match after the column given by the number that follows
73: the column number
v: tells it to work on virtual columns only
.\+: match one or more of any character

Example 2: Mark tabs not used for indentation in code

When coding, it is generally a good rule of thumb to use tabs only to indent code, and not anywhere else. However, for some, it can be hard to obey this rule. Now, with the help of a simple match command, this can easily be checked. The following command will mark any tabs that are not at the beginning of the line (indentation) as an error:

:match errorMsg /[^\t]\zs\t\+/

Now you can see if you have forgotten the rule and used the Tab key inside the code. Broken down, the match consists of the following parts:

[^: begin a group of characters that should not be matched
\t: the tab character
]: end of the character group
\zs: a zero-width atom that sets the start of the match here, so the preceding character is not highlighted
\t\+: one or more tabs in a row

This command says: don't match all the tab characters; match only the ones that are not at the beginning of the line. If, instead, you use the space character for indentation, you can change the command to mark every tab character:

:match errorMsg /[\t]/

This command simply says: match all the tab characters.

Example 3: Preventing errors caused by IP addresses

If you write a lot of IP addresses in your text, sometimes you tend to enter a wrong value in one (such as 123.123.123.256). To prevent this kind of error, you can add the following match to your vimrc file:

match errorMsg /\(2[5][6-9]\|2[6-9][0-9]\|[3-9][0-9][0-9]\)[.][0-9]\{1,3}[.][0-9]\{1,3}[.][0-9]\{1,3}\|[0-9]\{1,3}[.]\(2[5][6-9]\|2[6-9][0-9]\|[3-9][0-9][0-9]\)[.][0-9]\{1,3}[.][0-9]\{1,3}\|[0-9]\{1,3}[.][0-9]\{1,3}[.]\(2[5][6-9]\|2[6-9][0-9]\|[3-9][0-9][0-9]\)[.][0-9]\{1,3}\|[0-9]\{1,3}[.][0-9]\{1,3}[.][0-9]\{1,3}[.]\(2[5][6-9]\|2[6-9][0-9]\|[3-9][0-9][0-9]\)/

Even though this seems a bit too complex for catching a small possible error, you have to remember that even if it helps you just once, it is worth adding. If you want to match valid IP addresses instead, you can use this much simpler command:

match todo /\(\(25[0-5]\|2[0-4][0-9]\|[01]\?[0-9][0-9]\?\)\.\)\{3}\(25[0-5]\|2[0-4][0-9]\|[01]\?[0-9][0-9]\?\)/
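These matches can also be scoped using the autocmd mechanism shown earlier for fonts; for example, the following line (my own combination of the recipes above) enables the 74-character warning only in mail buffers:

autocmd FileType mail match ErrorMsg /\%>73v.\+/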