
How-To Tutorials - Application Development

357 Articles
Games of Fortune with Scratch 1.4

Packt
15 Oct 2009
4 min read
Fortune-teller

Most of us enjoy a good circus, carnival, or county fair. There's fun, food, and fortunes. Aah, yes, what would a fair be without the fortune-teller's tent? By the end of this article, you'll know everything you need to spin your fortunes and amaze your friends with your wisdom.

Before we start the first exercise, create a new project and add two sprites. The first sprite will be the seeker. The second sprite will be the teller. Choose any sprites you want. My seeker will be a clam and my teller will be a snowman. If you want to add a background, go ahead.

Time for action – create a list of questions

In order to have a successful fortune-telling, we need two things: a question and an answer. Let's start by defining some questions and answers:

1. Select the seeker from the list of sprites.
2. From the Variables palette, click the Make a list button.
3. In the list name dialog box, type questions and select For this sprite only. Click OK to create the list. Several new blocks display in the Variables palette, and an empty block titled seeker questions displays on the stage.
4. Let's think about a couple of questions we may be tempted to ask, such as the following: Will my hair fall out? How many children will I have?
5. Let's add our proposed questions to the questions list. Click the plus sign located in the bottom-left corner of the seeker questions box (on the stage) to display a text input field. Type Will my hair fall out?
6. Press the plus sign again and enter the second question: How many children will I have? We now have two questions in our list. To automatically add the next item in the list, press Enter.
7. Let's add a say for 2 secs block to the scripts area of the seeker sprite so that we can start the dialog.
8. From the Variables palette, drag the item of questions block to the input value of the say for 2 secs block. Double-click on the block and the seeker asks, "Will my hair fall out?"
9. Change the value on the item block to last and double-click the block again. This time the seeker asks, "How many children will I have?"

What just happened?

I'm certain you could come up with a hundred different questions to ask a fortune-teller. Don't worry, you'll get your chance to ask more questions later.

Did you notice that the new list we created behaved a lot like a variable? We were able to make the questions list private; we don't want our teller to peek at our questions, after all. Also, the list became visible on the screen, allowing us to edit its contents. The most notable difference is that we added more than one item, and each item corresponds to a number. We essentially created a numbered list. If you work with other programming languages, you might refer to lists as arrays.

Because the seeker's questions were contained in a list, we used the item block to provide special instructions to the say block in order to ask the question. The first value of the item block was the position, which defaulted to one. The second value was the name of the list, which defaulted to questions. In contrast, if we had used a variable to store a question, we would only need to supply the name of the variable to the say block.

Have a go hero

Create an answers list for the teller sprite, and add several items to the list. Remember, there are no wrong answers in this exercise.

Work with an item in a list

We can use lists to group related items, but accessing the items in a list requires an extra level of specificity. We need to know the name of the list and the position of the item within the list before we can do anything with the values. The following table shows the available ways to access a specific item in a list.
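The table itself is cut off in this excerpt. Based on the choices the item block offers in Scratch 1.4, it would cover roughly the following (reconstructed here for reference):

- A position number (1, 2, 3, ...): returns the item at that position in the list
- last: returns the final item in the list
- any: returns a randomly selected item from the list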

Data Modeling with ERWin

Packt
14 Oct 2009
3 min read
Depending on your data modeling needs and what you already have, there are two other ways to create a data model: derive it from an existing model, or reverse engineer an existing database. Let's start with creating a new model by clicking the Create model button. We'd like to create both logical and physical models, so select Logical/Physical. You can see in the Model Explorer that our new model gets ERWin's default name, Model_1. Let's rename our model to Packt Model, and confirm in the Model Explorer that it has been renamed correctly. Next, we need to choose the ER notation. ERWin offers two notations: IDEF1X and IE. We'll use IE for our logical and physical models. It's good practice during model development to save our work from time to time. Our model is still empty, so let's next create a logical model in it.

Logical Model

A logical model in ERWin is basically an ER model. An ER model consists of entities, their attributes, and their relationships. Let's start by creating our first entity, CUSTOMER, and its attributes:

1. To add an entity, load (click) the Entity button on the toolbar, and drop it on the diagramming canvas by clicking your mouse on the canvas.
2. Rename the entity's default name, E/1, to CUSTOMER by clicking on the name (E/1) and typing the new name over it.
3. To add an attribute to the entity, right-click the entity and select Attributes. Click the New button. Type in our first attribute name (CUSTOMER_NO), select Number as its data type, and click OK. We want this attribute as the entity's primary key, so check the Primary Key box.
4. In the same way, add an attribute CUSTOMER_NAME with a String data type.

Our CUSTOMER entity now has two attributes, as seen in its ER diagram. Notice that the CUSTOMER_NO attribute is in the key area, the upper part of the entity box, while CUSTOMER_NAME is in the common attribute area, the bottom part. Similarly, add the rest of the entities and their attributes to our model. When you're done, we'll have six entities in our ER diagram. Our entities are not related yet. To rearrange the entities in the diagram, you can move them around by clicking and dragging them.
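For reference, when this model is eventually forward-engineered from the physical side, the CUSTOMER entity would translate into DDL along these lines. This is a hedged sketch only: ERWin's actual output depends on the target DBMS and its data type mappings, and the column length here is an assumption.

```sql
-- Hypothetical DDL for the CUSTOMER entity above (Oracle-style types).
CREATE TABLE CUSTOMER (
    CUSTOMER_NO   NUMBER        NOT NULL,  -- modeled as Number, primary key
    CUSTOMER_NAME VARCHAR2(50)  NOT NULL,  -- modeled as String; length assumed
    CONSTRAINT PK_CUSTOMER PRIMARY KEY (CUSTOMER_NO)
);
```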

Customizing and Extending the ASP.NET MVC Framework

Packt
12 Oct 2009
5 min read
(For more resources on .NET, see here.)

Creating a control

When building applications, you probably also build controls. Controls are re-usable components that contain functionality which can be re-used in different locations. In ASP.NET Webforms, a control is much like an ASP.NET web page: you can add existing web server controls and markup to a custom control and define properties and methods for it. When, for example, a button on the control is clicked, the page is posted back to the server, which performs the actions required by the control. The ASP.NET MVC framework does not support ViewState and postbacks and, therefore, cannot handle events that occur in the control.

In ASP.NET MVC, controls are mainly re-usable portions of a view, called partial views, which can be used to display static HTML and generated content based on ViewData received from a controller. In this topic, we will create a control to display employee details. We will start by creating a new ASP.NET MVC application using File | New | Project... in Visual Studio, and selecting ASP.NET MVC Application under Visual C# - Web.

First of all, we will create a new Employee class inside the Models folder. The code for this Employee class is:

```csharp
public class Employee
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
    public string Department { get; set; }
}
```

On the home page of our web application, we will list all of our employees. In order to do this, modify the Index action method of the HomeController to pass a list of employees to the view in the ViewData dictionary. Here's an example that creates a list of two employees and passes it to the view:

```csharp
public ActionResult Index()
{
    ViewData["Title"] = "Home Page";
    ViewData["Message"] = "Our employees welcome you to our site!";

    List<Employee> employees = new List<Employee>
    {
        new Employee
        {
            FirstName = "Maarten",
            LastName = "Balliauw",
            Email = "[email protected]",
            Department = "Development"
        },
        new Employee
        {
            FirstName = "John",
            LastName = "Kimble",
            Email = "[email protected]",
            Department = "Development"
        }
    };

    return View(employees);
}
```

The corresponding view, Index.aspx in the Views | Home folder of our ASP.NET MVC application, should be modified to accept a List<Employee> as its model. To do this, edit the code-behind file Index.aspx.cs and modify its contents as follows:

```csharp
using System.Collections.Generic;
using System.Web.Mvc;
using ControlExample.Models;

namespace ControlExample.Views.Home
{
    public partial class Index : ViewPage<List<Employee>>
    {
    }
}
```

In the Index.aspx view, we can now use this list of employees. Because we will display details of more than one employee somewhere else in our ASP.NET MVC web application, let's make this a partial view. Right-click the Views | Shared folder, click on Add | New Item... and select the MVC View User Control item template under Visual C# | Web | MVC. Name the partial view DisplayEmployee.ascx.

The ASP.NET MVC framework provides the flexibility to use a strongly typed version of the ViewUserControl class, just as the ViewPage class does. The key difference between ViewUserControl and ViewUserControl<T> is that with the latter, the type of the view data is explicitly passed in, whereas the non-generic version contains only a dictionary of objects. Because the DisplayEmployee.ascx partial view will be used to render items of the type Employee, we can modify its code-behind file, DisplayEmployee.ascx.cs, and make it strongly typed:

```csharp
using ControlExample.Models;

namespace ControlExample.Views.Shared
{
    public partial class DisplayEmployee : System.Web.Mvc.ViewUserControl<Employee>
    {
    }
}
```

In the view markup of our partial view, the model can now be easily referenced. Just as with a regular ViewPage, the ViewUserControl has a ViewData property containing a Model property of the type Employee. Add the following code to DisplayEmployee.ascx:

```aspx
<%@ Control Language="C#" AutoEventWireup="true"
    CodeBehind="DisplayEmployee.ascx.cs"
    Inherits="ControlExample.Views.Shared.DisplayEmployee" %>
<%= Html.Encode(Model.LastName) %>, <%= Html.Encode(Model.FirstName) %><br/>
<em><%= Html.Encode(Model.Department) %></em>
```

The control can now be used on any view or control in the application. In the Views | Home | Index.aspx view, use the Model property (which is a List<Employee>) and render the control that we have just created for each employee:

```aspx
<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
    AutoEventWireup="true" CodeBehind="Index.aspx.cs"
    Inherits="ControlExample.Views.Home.Index" %>
<asp:Content ID="indexContent" ContentPlaceHolderID="MainContent" runat="server">
    <h2><%= Html.Encode(ViewData["Message"]) %></h2>
    <p>Here are our employees:</p>
    <ul>
    <% foreach (var employee in Model) { %>
        <li>
            <% Html.RenderPartial("DisplayEmployee", employee); %>
        </li>
    <% } %>
    </ul>
</asp:Content>
```

In case the control's ViewData type is equal to the view page's ViewData type, another method of rendering can also be used. This method is similar to ASP.NET Webforms controls, and allows you to specify a control as a tag. Optionally, a ViewDataKey can be specified; the control will then fetch its data from the ViewData dictionary entry having this key.

```aspx
<uc1:EmployeeDetails ID="EmployeeDetails1" runat="server" ViewDataKey="...." />
```

For example, if the ViewData contains a key emp that is filled with an Employee instance, the user control could be rendered using the following markup:

```aspx
<uc1:EmployeeDetails ID="EmployeeDetails1" runat="server" ViewDataKey="emp" />
```

After running the ASP.NET MVC web application, the result will appear as shown in the following screenshot:
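Note that the excerpt does not show how the uc1:EmployeeDetails tag prefix is registered. In WebForms-style views this is typically done with a Register directive at the top of the page; a minimal sketch, where the Src path, TagPrefix, and TagName are assumptions chosen to match the tags above:

```aspx
<%@ Register Src="~/Views/Shared/DisplayEmployee.ascx"
    TagPrefix="uc1" TagName="EmployeeDetails" %>
```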
Mixing ASP.NET Webforms and ASP.NET MVC

Packt
12 Oct 2009
6 min read
Ever since Microsoft started working on the ASP.NET MVC framework, one of the primary concerns was the framework's ability to re-use as many features as possible from ASP.NET Webforms. In this article by Maarten Balliauw, we will see how we can mix ASP.NET Webforms and ASP.NET MVC in one application and how data is shared between both these technologies. (For more resources on .NET, see here.)

Not every ASP.NET MVC web application will be built from scratch. Several projects will probably end up migrating from classic ASP.NET to ASP.NET MVC. The question of how to combine both technologies in one application arises: is it possible to combine both ASP.NET Webforms and ASP.NET MVC in one web application? Luckily, the answer is yes.

Combining ASP.NET Webforms and ASP.NET MVC in one application is possible; in fact, it is quite easy. The reason for this is that the ASP.NET MVC framework has been built on top of ASP.NET. There's actually only one crucial difference: ASP.NET lives in System.Web, whereas ASP.NET MVC lives in System.Web, System.Web.Routing, System.Web.Abstractions, and System.Web.Mvc. This means that adding these assemblies as references in an existing ASP.NET application should give you a good start on combining the two technologies. Another advantage of the fact that ASP.NET MVC is built on top of ASP.NET is that data can be easily shared between both of these technologies. For example, the Session state object is available in both technologies, effectively enabling data to be shared via the Session state.

Plugging ASP.NET MVC into an existing ASP.NET application

An ASP.NET Webforms application can become ASP.NET MVC enabled by following some simple steps. First of all, add references to the following three assemblies to your existing ASP.NET application:

- System.Web.Routing
- System.Web.Abstractions
- System.Web.Mvc

After adding these assembly references, the ASP.NET MVC folder structure should be created. Because the ASP.NET MVC framework is based on some conventions (for example, controllers are located in Controllers), these conventions should be respected. Add the folders Controllers, Views, and Views | Shared to your existing ASP.NET application.

The next step in enabling ASP.NET MVC in an ASP.NET Webforms application is to update the web.config file with the following code:

```xml
<?xml version="1.0"?>
<configuration>
  <system.web>
    <compilation debug="false">
      <assemblies>
        <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
        <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        <add assembly="System.Web.Abstractions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        <add assembly="System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
      </assemblies>
    </compilation>
    <pages>
      <namespaces>
        <add namespace="System.Web.Mvc"/>
        <add namespace="System.Web.Mvc.Ajax"/>
        <add namespace="System.Web.Mvc.Html"/>
        <add namespace="System.Web.Routing"/>
        <add namespace="System.Linq"/>
        <add namespace="System.Collections.Generic"/>
      </namespaces>
    </pages>
    <httpModules>
      <add name="UrlRoutingModule"
           type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
    </httpModules>
  </system.web>
</configuration>
```

Note that your existing ASP.NET Webforms web.config should not be replaced by the above web.config! The configured sections should be inserted into the existing web.config file in order to enable ASP.NET MVC.

There's one thing left to do: configure routing. This can easily be done by adding the default ASP.NET MVC global application class contents into an existing (or new) global application class, Global.asax:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

namespace MixingBothWorldsExample
{
    public class Global : System.Web.HttpApplication
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
            routes.IgnoreRoute("{resource}.aspx/{*pathInfo}");

            routes.MapRoute(
                "Default",                                               // Route name
                "{controller}/{action}/{id}",                            // URL with parameters
                new { controller = "Home", action = "Index", id = "" }   // Parameter defaults
            );
        }

        protected void Application_Start()
        {
            RegisterRoutes(RouteTable.Routes);
        }
    }
}
```

This code registers a default ASP.NET MVC route, which will map any URL of the form /Controller/Action/Id onto a controller instance and action method. There's one difference from a standard ASP.NET MVC application that needs to be noted: a catch-all route is defined in order to prevent requests for ASP.NET Webforms pages from being routed into ASP.NET MVC. This catch-all route looks like this:

```csharp
routes.IgnoreRoute("{resource}.aspx/{*pathInfo}");
```

This is triggered on every request ending in .aspx. It tells the routing engine to ignore the request and leave it to ASP.NET Webforms to handle things.

With the ASP.NET MVC assemblies referenced, the folder structure created, and the necessary configurations in place, we can now start adding controllers and views. Add a new controller in the Controllers folder, for example, the following simple HomeController:

```csharp
using System.Web.Mvc;

namespace MixingBothWorldsExample.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            ViewData["Message"] = "This is ASP.NET MVC!";
            return View();
        }
    }
}
```

The above controller will simply render a view and pass it a message through the ViewData dictionary. This view, located in Views | Home | Index.aspx, would look like this:

```aspx
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Index.aspx.cs"
    Inherits="MixingBothWorldsExample.Views.Home.Index" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head id="Head1" runat="server">
    <title></title>
</head>
<body>
    <div>
        <h1><%= Html.Encode(ViewData["Message"]) %></h1>
    </div>
</body>
</html>
```

The above view renders a simple HTML page with the ViewData dictionary's message as a heading.

Securing Your trixbox Server

Packt
12 Oct 2009
6 min read
Start with a good firewall

Never have your trixbox system exposed completely on the open Internet; always make sure it is behind a good firewall. Many people think that because trixbox runs on Linux, it is totally secure. Linux, like anything else, has its share of vulnerabilities, and if things are not configured properly, it is fairly simple for hackers to get in. There are really good open-source firewalls available, such as pfSense, Vyatta, and m0n0wall. Any access to system services, such as HTTP or SSH, should only be done via a VPN or using a pseudo-VPN such as Hamachi. The best designed security starts with being exposed to the outside world as little as possible. If we have remote extensions that cannot use VPNs, then we will be forced to leave SIP ports open, and the next step will be to secure those as well.

Stopping unneeded services

Since trixbox CE is basically a stock installation of CentOS Linux, very little hardening has been done to the system to secure it. This lack of hardening is intentional, as the first level of defence should always be a good firewall. Since there will be people who still insist on putting the system in a data center with no firewall, some care will need to be taken to ensure that the system is as secure as possible. The first step is to disable any running services that could be potential security vulnerabilities. We can see the list of services with the chkconfig --list command:

```
[trixbox1.localdomain rules]# chkconfig --list
anacron         0:off 1:off 2:on  3:on  4:on  5:on  6:off
asterisk        0:off 1:off 2:off 3:off 4:off 5:off 6:off
avahi-daemon    0:off 1:off 2:off 3:off 4:off 5:off 6:off
avahi-dnsconfd  0:off 1:off 2:off 3:off 4:off 5:off 6:off
bgpd            0:off 1:off 2:off 3:off 4:off 5:off 6:off
capi            0:off 1:off 2:off 3:off 4:off 5:off 6:off
crond           0:off 1:off 2:on  3:on  4:on  5:on  6:off
dc_client       0:off 1:off 2:off 3:off 4:off 5:off 6:off
dc_server       0:off 1:off 2:off 3:off 4:off 5:off 6:off
dhcpd           0:off 1:off 2:off 3:off 4:off 5:off 6:off
dhcrelay        0:off 1:off 2:off 3:off 4:off 5:off 6:off
ez-ipupdate     0:off 1:off 2:off 3:off 4:off 5:off 6:off
haldaemon       0:off 1:off 2:off 3:on  4:on  5:on  6:off
httpd           0:off 1:off 2:off 3:on  4:on  5:on  6:off
ip6tables       0:off 1:off 2:off 3:off 4:off 5:off 6:off
iptables        0:off 1:off 2:off 3:off 4:off 5:off 6:off
isdn            0:off 1:off 2:off 3:off 4:off 5:off 6:off
kudzu           0:off 1:off 2:off 3:on  4:on  5:on  6:off
lm_sensors      0:off 1:off 2:on  3:on  4:on  5:on  6:off
lvm2-monitor    0:off 1:on  2:on  3:on  4:on  5:on  6:off
mDNSResponder   0:off 1:off 2:off 3:on  4:on  5:on  6:off
mcstrans        0:off 1:off 2:off 3:off 4:off 5:off 6:off
mdmonitor       0:off 1:off 2:on  3:on  4:on  5:on  6:off
mdmpd           0:off 1:off 2:off 3:off 4:off 5:off 6:off
memcached       0:off 1:off 2:on  3:on  4:on  5:on  6:off
messagebus      0:off 1:off 2:off 3:on  4:on  5:on  6:off
multipathd      0:off 1:off 2:off 3:off 4:off 5:off 6:off
mysqld          0:off 1:off 2:off 3:on  4:on  5:on  6:off
named           0:off 1:off 2:off 3:off 4:off 5:off 6:off
netconsole      0:off 1:off 2:off 3:off 4:off 5:off 6:off
netfs           0:off 1:off 2:off 3:on  4:on  5:on  6:off
netplugd        0:off 1:off 2:off 3:off 4:off 5:off 6:off
network         0:off 1:off 2:on  3:on  4:on  5:on  6:off
nfs             0:off 1:off 2:off 3:off 4:off 5:off 6:off
nfslock         0:off 1:off 2:off 3:on  4:on  5:on  6:off
ntpd            0:off 1:off 2:off 3:on  4:on  5:on  6:off
ospf6d          0:off 1:off 2:off 3:off 4:off 5:off 6:off
ospfd           0:off 1:off 2:off 3:off 4:off 5:off 6:off
portmap         0:off 1:off 2:off 3:on  4:on  5:on  6:off
postfix         0:off 1:off 2:on  3:on  4:on  5:on  6:off
rdisc           0:off 1:off 2:off 3:off 4:off 5:off 6:off
restorecond     0:off 1:off 2:on  3:on  4:on  5:on  6:off
ripd            0:off 1:off 2:off 3:off 4:off 5:off 6:off
ripngd          0:off 1:off 2:off 3:off 4:off 5:off 6:off
rpcgssd         0:off 1:off 2:off 3:on  4:on  5:on  6:off
rpcidmapd       0:off 1:off 2:off 3:on  4:on  5:on  6:off
rpcsvcgssd      0:off 1:off 2:off 3:off 4:off 5:off 6:off
saslauthd       0:off 1:off 2:off 3:off 4:off 5:off 6:off
snmpd           0:off 1:off 2:off 3:off 4:off 5:off 6:off
snmptrapd       0:off 1:off 2:off 3:off 4:off 5:off 6:off
sshd            0:off 1:off 2:on  3:on  4:on  5:on  6:off
syslog          0:off 1:off 2:on  3:on  4:on  5:on  6:off
vsftpd          0:off 1:off 2:off 3:off 4:off 5:off 6:off
xinetd          0:off 1:off 2:off 3:on  4:on  5:on  6:off
zaptel          0:off 1:off 2:on  3:on  4:on  5:on  6:off
zebra           0:off 1:off 2:off 3:off 4:off 5:off 6:off
```

The services with runlevels set to on are started automatically on system startup. The following services are required by trixbox CE and should not be disabled: anacron, crond, haldaemon, httpd, kudzu, lm_sensors, lvm2-monitor, mDNSResponder, mdmonitor, memcached, messagebus, mysqld, network, ntpd, postfix, sshd, syslog, xinetd, and zaptel.

To disable a service, we use the command chkconfig <servicename> off. We can now turn off some of the services that are not needed:

```
chkconfig ircd off
chkconfig netfs off
chkconfig nfslock off
chkconfig openibd off
chkconfig portmap off
chkconfig restorecond off
chkconfig rpcgssd off
chkconfig rpcidmapd off
chkconfig vsftpd off
```

We can also stop the services immediately without having to reboot:

```
service ircd stop
service netfs stop
service nfslock stop
service openibd stop
service portmap stop
service restorecond stop
service rpcgssd stop
service rpcidmapd stop
service vsftpd stop
```

Securing SSH

A very common misconception is that by using SSH to access your system, you are safe from outside attacks. SSH access is only as secure as the credentials and configuration protecting it. Far too often, we see systems that have been hacked because their root password was very simple to guess (things like password or trixbox are not safe passwords). Any dictionary word is unsafe, and substituting numbers for letters is very poor practice as well. As long as SSH is exposed to the outside, it is vulnerable. If you absolutely have to have SSH running on the open Internet, the best thing to do is to change the port number used to access SSH. This section will detail the best methods of securing your SSH connections.

Create a remote login account

First off, we should create a user on the system and only allow SSH connections from it. The username should be something that only you know and that is not easily guessed. Here, we will create a user called trixuser and assign a password to it. The password should contain letters, numbers, and symbols, and not be based on a dictionary word. Try stringing it into a sentence, making sure to use letters, numbers, and symbols. Spaces in passwords work well too, and are hard to reproduce in scripts that might try to break into your server. A nice and simple tool for creating hard-to-guess passwords can be found at http://www.pctools.com/guides/password/.

```
[trixbox1.localdomain init.d]# useradd trixuser
[trixbox1.localdomain init.d]# passwd trixuser
```

Now, ensure that the new account works by using SSH to log in to the trixbox CE server with it. If it does not let you in, make sure the password is correct or try to reset it. If it works, continue on. Only allowing one account access to the system over SSH is a great way to lock out most brute-force attacks. To do this, we need to edit the file /etc/ssh/sshd_config and add the following line:

```
AllowUsers trixuser
```

The PermitRootLogin setting can be edited so that root can't log in over SSH. Remove the # from in front of the setting and change the yes to no:

```
PermitRootLogin no
```
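If you also take the earlier advice and move SSH off its default port, that change is made in the same file. A minimal sketch follows; the port number 2222 is an arbitrary choice, not from the original article:

```
# Listen on a non-standard port instead of the default 22
Port 2222
```

After any change to sshd_config, apply it with service sshd restart, and keep your current session open until you have confirmed you can log in under the new settings.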

Preparing Your Forms Conversion Using Oracle Application Express (APEX)

Packt
09 Oct 2009
9 min read
When we participate in a Forms Conversion project, we take the source files of our application, turn them into XML files, and upload them into the Forms Conversion part of APEX. This article describes what we do before uploading the XML files and starting our actual Forms Conversion project.

Get your stuff!

When we talk about source files, it comes in very handy if we have the right versions of these files. In order to do the Conversion project, we need the same components that are used in the production environment, so we have to get the source files of the components we want to convert. This means we have no use for the runtime files (Oracle Forms runtime files have the FMX extension). In other words, for Forms components we don't need the FMX files, but the FMB source files. These are a few ground rules we have to take into account:

- Make sure that there's no more development on the components we are about to use in our Conversion project. We are going to freeze our sources, and new developments won't be taken into the Conversion project at all, so there will be no changes in our project.
- Put all the source files in a safe place. In other words, copy the latest version of your files into a new directory to which only you, and perhaps your teammates, have access.
- If the development team of your organization uses Oracle Designer for the development of its applications, it's a good idea to generate all the modules from scratch. Use the source on which the runtime files were created only if there are post-generation adjustments to be made in the modules.

We need the following files for our Conversion project:

- Forms Modules: with the FMB extension
- Object Libraries: with the OLB extension
- Forms Menus: with the MMB extension
- PL/SQL Libraries: with the PLL extension
- Report Files: with the RDF, REX, or JSP extensions

With these source files, we will be able to create all the necessary XML files that we need for the Forms Conversion project.

Creating XML files

To create XML files, we need three parts of the Oracle Developer Suite, all of which come with a normal 10g or 9i installation of the Developer Suite: the Forms Builder, the Reports Builder, and the Forms2XML conversion tool. The Forms2XML conversion tool is the most extensive to understand and is used to create XML files from Forms Modules, Object Libraries, and Forms Menus, so we will discuss the possibilities of this tool first.

The Forms2XML conversion tool

This tool can be used both from the command line and as a Java applet. As the command line gives us all the possibilities we need and is as easy as the Java applet, we will only use the command-line possibilities. The frmf2xml command comes with some options. The following syntax is used while converting Forms Modules, Object Libraries, and Forms Menus to an XML structure:

```
frmf2xml [option] file [file]
```

In other words, we first type frmf2xml, optionally give one of the options with it, and then tell the command which file we want to convert; we can address more than one file for conversion to XML. We probably want to give the OVERWRITE=YES option with our command. This property ensures that the newly created XML file overwrites any file with the same name in the directory we are working in. If another file with the same name already exists in this directory and we don't give the OVERWRITE option the value YES (the default is NO), the file will not be generated, as we see in the following screenshot:

If any images are used in modules (Forms or Object Libraries), the Forms2XML tool will refer to the image in the XML file created, and will create a TIF file of the image in the directory. The XML files that are created will be stored in the same directory from which we call the command. The following naming convention is used for the XML files:

- formname.fmb will become formname_fmb.xml
- libraryname.olb will become libraryname_olb.xml
- menuname.mmb will become menuname_mmb.xml

To convert the .FMB, .OLB, and .MMB files to XML, we perform the following steps in the command prompt.

Forms Modules

The following steps convert a .FMB file to XML:

1. Change the working directory to the directory that has the FMB file. In my example, I have stored all the files in a directory called summit directly under the C drive:
   C:\>cd C:\summit
2. Call the frmf2xml command to convert one of our Forms Modules to an XML file. In this example, we convert the orders.fmb module:
   C:\summit>frmf2xml OVERWRITE=YES orders.fmb

As we see in the following screenshot, this command creates an XML file called orders_fmb.xml in the working directory:

Object Libraries

To convert an .OLB file to XML, the following steps are needed:

1. Change the working directory to the directory containing the OLB file:
   C:\>cd C:\summit
2. Call the frmf2xml command to convert one of our Object Libraries to an XML file. In this example, we convert the Form_Builder_II.olb library:
   C:\summit>frmf2xml OVERWRITE=YES Form_Builder_II.olb

As we see in the following screenshot, the command creates an XML file called Form_Builder_II_olb.xml and two images as .tif files in the working directory:

Forms Menus

To convert an MMB file to XML, we follow these steps:

1. Change the working directory to the directory containing the .MMB file:
   C:\>cd C:\summit
2. Call the frmf2xml command to convert one of our Forms Menus to an XML file. In this example, we convert the customers.mmb menu:
   C:\summit>frmf2xml OVERWRITE=YES customers.mmb

As we can see in the following screenshot, the command creates an XML file called customers_mmb.xml in the working directory:

Report Files

In our example, we will convert the Customers Report from an RDF file to an XML file. To do this, we follow the steps given here:

1. Open the Employees.rdf file with Reports Builder. Open Reports Builder from your Start menu.
2. Once Reports Builder has opened, cancel the wizard that asks us if we want to create a new report.
3. Use Ctrl+O (or File | Open in the menu) to open the Report File we want to convert to XML, as we see in the following screenshot.
4. Use Shift+Ctrl+S (or the File | Save As menu) to save the Report. Choose to save the report as a Reports XML (*.xml) file and click on the Save button, as shown in the following screenshot.

PL/SQL Libraries

To convert PL/SQL Libraries to an XML format, it's easiest to use the convert command that comes with the Reports Builder. With this command, called rwconverter, we define the source type, call the source, and define the destination type and the destination. In this way, we have control over the way we convert the original .pll file to a .pld flat file that we can upload into the APEX Forms converter. It is possible to convert the PL/SQL Libraries with the convert option in Forms Builder but, personally, I think this option works better.

The rwconverter command takes a few parameters:

- stype: the type of source file we need to convert. In our situation, this is a .pll file, so the value we need to set is pllfile.
- source: the name of the source file, including the extension. In our case, it is wizard.pll.
- dtype: the file type we want to convert our source file to. In our case, it is a .pld file, so the value becomes pldfile.
- dest: the name, including the extension, of the destination file. In our case, it is wizard.pld.

In our example, we use the wizard.pll file that's in our summit files directory. This PL/SQL Library would normally be used to create a PL/SQL Library in the Oracle Database, but this time we will use it to create a .pld flat file that we will upload to APEX. First, we change to the working directory that has the original .pll file; in our case, the summit directory directly under the C drive:

```
C:\>cd C:\summit
```

After this, we call rwconverter in the command prompt:

```
C:\summit>rwconverter stype=pllfile source=wizard.pll dtype=pldfile dest=wizard.pld
```

When you press the Enter key, a screen opens that is used to do the conversion. We will see that the types and names of the files are the same as we entered them on the command line. We need to click on the OK button to convert the file from .pll to .pld. The conversion may take a few seconds, but when the file has been converted we will see a confirmation that the conversion was successful. After this, we can look in the C:\summit directory and we will see that a wizard.pld file has been created.
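If you have many modules to convert, the per-file frmf2xml calls can be scripted. A minimal Windows batch sketch, assuming frmf2xml is on the PATH and you run it from the directory holding the source files:

```bat
@echo off
rem Convert every Forms Module, Object Library, and Forms Menu in the
rem current directory to XML, overwriting any existing output files.
for %%f in (*.fmb *.olb *.mmb) do frmf2xml OVERWRITE=YES %%f
```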
Getting Your APEX Components Logic Right

Packt
09 Oct 2009
10 min read
Pre-generation editing

After reading this article, we will understand our project a lot better. Also, to a certain level, we will be able to control the way our application will be generated. Generation is often performed more than once as you refine the definitions and settings between iterations. In this article we will learn many ways to edit the project in order to generate optimally, but we must understand that we will not cover all the exceptions in the generation process. If we want to do a real Forms to APEX conversion project, it is very wise to carefully read the help texts in the Migration Documentation provided by Oracle in every APEX instance, especially the appendix called Oracle Forms Generation Capabilities and Workarounds, which will help you understand the choices that can be made in the generation process. The information in these migration help texts tells us how the different components in Oracle Forms will be converted in APEX and how to implement business logic in the APEX application. For example, when we take a look at the Block to Page Region Mappings, we learn how APEX converts certain blocks to APEX regions during conversion.

Investigating

When we take a look at our conversion project, we must understand what will be generated. For generation, the most important parts are the blocks in our Forms modules. These are, quite literally, the building blocks our pages in APEX will be based upon. Of course, we have our program units, triggers, and much more; but the pages defined in the APEX application (which we put into production after the project is finished) will be based on Blocks, Reports, and Menus. This is why we need to adjust them before we generate anything. This might seem like a small part of the project when we look at the count of all the components on our project page, but that doesn't make it less important. We can't adjust reports, as they are defined by the query they are built upon, but we can alter the blocks. That's why we focus on those components first.

Data blocks

The building blocks of our APEX pages are the blocks and, of course, the reports. The blocks we can generate in our project are the ones that are based on a database block. Non-database blocks, such as those that hold menus and buttons, are not generated by default, as they would be generated as blank pages. On the block overview page, we get the basic information about the blocks in our project. The way the blocks will be generated is determined by APEX based on the contents, the number of items on the block and, most importantly, the number of records displayed. For further details on the generation rules, refer to the Migration Guide, Appendix A: Forms Generation Capabilities and Workarounds.

On the Blocks overview page in our conversion project, we notice that not all the blocks are included; in other words, they aren't checked to be included in the project. This is because they don't originate from a database block. To include or exclude a block during generation, we need to check or uncheck the specific block. Don't confuse this with the applicability of a block.

We might also notice that some of the blocks are already set to complete. In our example, we see that the S_CUSTOMER1 and S_CUSTOMER blocks are set to complete. If we take a look inside these components and check the annotations, they are indeed set to complete. There's also a note set for us. As we see in the following screenshot, it states Incorporating Enhanced Query. The Enhanced Query is something that we will use later in this article. But beware of the statement that a component is Complete, as we will see that we might want to alter the query on which the customer's block is based.

If we look at a block that is not yet set to complete on the overview page (such as the Orders block) and we look at the Application Express Page Query region in the details screen, we see that only the Original Query is present. This is the query from the original Forms XML file we uploaded earlier. Although we have the Original Query present on our page, we can also alter it and customize the query on which this block is based; this will be done later in the article. In this way, we have better control over the way we will generate our application. We can't alter this query as it stands, because it is to be implemented as a Master-Detail Form.

Block items

Each block contains a number of items. These items define the fields in our application and are derived from our Forms XML files. On the block details pages, we can also find the details of the items on the particular block. Here we see the most basic information about the items, namely their Type, Prompt, Column Name, and the Triggers on that particular item. We can also see the Name of the item, whether it is a Database Item, whether the item is complete, and whether or not it is Applicable. When a block is set to complete, it is assumed that we have all the information required about the items, as we see in the example shown here:

But there are also cases where we don't get all the information about the items we want. In our case, we might want to customize the query the block is based on, or define the items further. We will cover this later in the article. In the above screenshot, we notice that the Column Name is not known for any of the items. This is an indication that the items will not be generated properly, and we need to take a further look into the query and, maybe, some of the triggers.

When we want to alter the completeness and applicability of the items in our block, there's a useful function available at the upper-right of the Block Details page. In the Block Tasks section, we find a link that states: Set All Block Items Completeness and Applicability. This function is used to make bulk changes to the items in the block we are in. It can be useful to change the completeness of all items when we are not sure what more needs to be done. To set the completeness or the applicability with a bulk change on all the items, we click on the link in the Block Tasks region, and this takes us to the following screen:

In the Set Block Item & Trigger Status page, we can select the Attribute (Items, Block Triggers, or Item Triggers), the Set Tracking Attribute (Complete or Applicable), and the Set Value (Yes or No). To make changes, set the correct attribute, tracking attribute, and value, and then click on Apply Changes.

Original versus Enhanced Query

As mentioned earlier, we can encounter both Original and Enhanced Queries in the blocks of our Forms. The Original Query is taken directly from the XML file, as it is stated in the source of the block we are looking at. So where does the Enhanced Query originate from? It is one of the automatically generated parts of the Forms Conversion tool in APEX: if a block contains a POST-QUERY trigger, the Forms Conversion tool generates an Enhanced Query for us. In the following screenshot, we see both the Enhanced Query and the Original Query in the S_CUSTOMER block. We can clearly notice the additional lines at the bottom of the Enhanced Query.

The query in the Enhanced Query section still looks a lot like the one in the Original Query section, but it is slightly altered. The code is generated automatically by combining the code from the Original Query and the POST-QUERY trigger on this block. Please note that the query is generated by APEX by adding a WHERE clause to the SQL query. This means that we will still need to check it and, probably, optimize it to work properly. The following screenshot shows us the POST-QUERY trigger. Notice that it's set to both applicable and complete. This is because the code is now embedded in the Enhanced Query, and so the trigger is taken care of for our project.

Triggers

Besides items, blocks also contain triggers. These define the actions in our blocks and are, therefore, equally important. Most of the triggers are very Forms-specific, but it's nice to be the judge of that ourselves. In the Orders block, we have the Block Triggers region that contains the triggers in our orders block. The region tells us the name, applicability, and completeness of each trigger, gives us a snippet of the code inside it, and tells us the level it is set to (ITEM or BLOCK). A lot of the triggers in our project need to be implemented post-generation, which will be discussed later in this article. But as mentioned above, there is one trigger that we need in the pre-generation stage of our project: the POST-QUERY trigger.

In this example, the applicability of the POST-QUERY trigger in the orders block is set to No. This is also the reason why we have no Enhanced Query to choose from in this block. The reasons behind setting the trigger to not applicable can be many, and you can learn more about them if you read the migration help texts carefully. We probably want to change the applicability of the trigger ourselves, because the POST-QUERY trigger contains some necessary information on how we need to define our block. If we click on the edit link (the pencil icon) for the POST-QUERY trigger, we can alter the applicability. Set the value for Applicable to Yes and click on Apply Changes. This takes us back to the Block Details screen. In the Triggers region, we can see that the applicability of the POST-QUERY trigger is now set to Yes.

Now, if we scroll up to the Application Express Page Query region, we can also see that the Enhanced Query is in place. As shown in the following screenshot, we automatically generated an extended version of the Original Query, embedding the logic of the POST-QUERY trigger. The developers among us will see that the query produced by the conversion tool in APEX is not very optimal. We can rewrite the query in the Custom Query section, which we will describe later in this article.

We are able to set the values for our triggers in the same way we set the applicability and completeness of the items in our blocks. In the upper-right corner of our Block Details screen, we find the Block Tasks region. Here we find the link to the tasks for items as well as triggers. Click on Set All Block Triggers Completeness and Applicability to navigate to the screen where we can set the values. In the Attribute section, we can choose from both the block-level triggers and the item-level triggers. We can't adjust them all at once, so we may need to adjust them twice.
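To make the Original versus Enhanced Query distinction concrete, here is a purely hypothetical illustration; the table, columns, and derived value are invented for this sketch and are not taken from the article's project, and the exact shape APEX generates depends on the trigger's code:

```sql
-- Original Query, as taken from the Forms XML (hypothetical):
SELECT id, name, region_id
  FROM s_customer;

-- Enhanced Query, roughly as the conversion tool might generate it by
-- folding a POST-QUERY lookup of the region name into the statement:
SELECT c.id,
       c.name,
       c.region_id,
       (SELECT r.name FROM s_region r WHERE r.id = c.region_id) AS region_name
  FROM s_customer c;
```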
 

Java Server Faces (JSF) Tools

Packt
07 Oct 2009
5 min read
Throughout this article, we will follow the "learning by example" technique, and we will develop a completely functional JSF application representing a JSF registration form, as you can see in the following screenshot. These kinds of forms can be seen on many sites, so it will be very useful to know how to develop them with JSF Tools. The example consists of a simple JSP page, named register.jsp, containing a JSF form with four fields (these fields will be mapped into a managed bean), as follows:

- personName – a text field for the user's name
- personAge – a text field for the user's age
- personPhone – a text field for the user's phone number
- personBirthDate – a text field for the user's birth date

The information provided by the user will be properly converted and validated using JSF capabilities. Afterwards, the information will be displayed on a second JSP page, named success.jsp.

Overview of JSF

Java Server Faces (JSF) is a popular framework used to develop User Interfaces (UI) for server-based applications (it is also known as JSR 127 and is a part of J2EE 1.5). It contains a set of UI components totally managed through JSF support, covering event handling, validation, navigation rules, internationalization, accessibility, customizability, and so on. In addition, it contains a tag library for expressing UI components within a JSP page. Among its features, JSF provides:

- A set of base UI components
- Extension of the base UI components to create additional UI component libraries
- Custom UI components
- Reusable UI components
- Reading and writing application data to and from UI components
- Managing UI state across server requests
- Wiring client-generated events to server-side application code
- Multiple rendering capabilities that enable JSF UI components to render themselves differently depending on the client type

In addition, JSF is tool friendly, implementation agnostic, and abstracts away from HTTP, HTML, and JSP.

Speaking of the JSF life cycle, you should know that every JSF request that renders a JSP involves a JSF component tree, also called a view. Each request is made up of phases. By standard, we have the following phases (shown in the following figure):

1. The restore view is built.
2. Request values are applied.
3. Validations are processed.
4. Model values are updated.
5. The application is invoked.
6. A response is rendered.

During the above phases, events are notified to event listeners.

We end this overview with a few bullets regarding JSF UI components:

- A UI component can be anything from a simple to a complex user interface (for example, from a simple text field to an entire page)
- A UI component can be associated with model data objects through value binding
- A UI component can use helper objects, like converters and validators
- A UI component can be rendered in different ways, based on invocation
- A UI component can be invoked directly or from JSP pages using JSF tag libraries

Creating a JSF project stub

In this section, you will see how to create a JSF project stub with JSF Tools. This is a straightforward task based on the following steps:

1. From the File menu, select the New | Project option.
2. In the New Project window, expand the JBoss Tools Web node and select the JSF Project option (as shown in the following screenshot). After that, click the Next button.
3. In the next window, it is mandatory to specify a name for the new project (Project Name field), a location (Location field, only if you don't use the default path), a JSF implementation (JSF Environment field), and a template (Template field). As you can see, JSF Tools offers a set of predefined templates, as follows:
   - JSFBlankWithLibs – a blank JSF project with complete JSF support.
   - JSFKickStartWithLibs – a demo JSF project with complete JSF support.
   - JSFKickStartWithoutLibs – a demo JSF project without the JSF libraries. The libraries are omitted to avoid potential conflicts with servers that already offer JSF support.
   - JSFBlankWithoutLibs – a blank JSF project, likewise without the JSF libraries to avoid potential conflicts with servers that already offer JSF support (for example, JBoss AS includes JSF support).
   The next screenshot is an example of how to configure our JSF project at this step. At the end, just click the Next button.
4. This step allows you to set the servlet version (Servlet Version field), the runtime (Runtime field) used for building and compiling the application, and the server where the application will be deployed (Target Server field). Note that this server is directly related to the selected runtime. The following screenshot shows an example of how to complete this step (click on the Finish button).

After a few seconds, the new JSF project stub is created and ready to take shape! You can see the new project in the Package Explorer view.
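As a reference point for the form described at the start of this article, the four fields would typically map onto a managed bean along these lines. This is a hedged sketch: the property names follow the field names above, while the class name, types, and registration details are assumptions.

```java
import java.util.Date;

// Hypothetical managed bean backing register.jsp; in JSF 1.x it would be
// registered in faces-config.xml under a name such as "person" (assumed).
public class Person {
    private String personName;    // mapped from the personName text field
    private int personAge;        // converted from the personAge text field
    private String personPhone;   // validated against a phone pattern
    private Date personBirthDate; // converted with a date-time converter

    public String getPersonName() { return personName; }
    public void setPersonName(String personName) { this.personName = personName; }

    public int getPersonAge() { return personAge; }
    public void setPersonAge(int personAge) { this.personAge = personAge; }

    public String getPersonPhone() { return personPhone; }
    public void setPersonPhone(String personPhone) { this.personPhone = personPhone; }

    public Date getPersonBirthDate() { return personBirthDate; }
    public void setPersonBirthDate(Date personBirthDate) { this.personBirthDate = personBirthDate; }
}
```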

Data Access Methods in Flex 3

Packt
06 Oct 2009
4 min read
Flex provides a range of data access components to work with server-side remote data. These components are also commonly called Remote Procedure Call, or RPC, services. A Remote Procedure Call allows you to execute a remote or server-side procedure or method, either locally or remotely in another address space, without having to write the details of that method explicitly. (For more information on RPC, visit http://en.wikipedia.org/wiki/Remote_procedure_call.) The Flex data access components are based on Service Oriented Architecture (SOA). They allow you to call server-side business logic built in Java, PHP, ColdFusion, .NET, or any other server-side technology to send and receive remote data. In this article by Satish Kore, we will learn how to interact with a server environment (specifically built with Java). We will look at the various data access components, which include the HTTPService class and the WebService class. This article focuses on providing in-depth information on the various data access methods available in Flex.

Flex data access components

Flex data access components provide a call-and-response model to access remote data. There are three methods for accessing remote data in Flex: via the HTTPService class, the WebService class (compliant with the Simple Object Access Protocol, or SOAP), or the RemoteObject class.

The HTTPService class

The HTTPService class is a basic method to send and receive remote data over the HTTP protocol. It allows you to perform traditional HTTP GET and POST requests to a URL. The HTTPService class can be used with any kind of server-side technology, such as JSP, PHP, ColdFusion, ASP, and so on. The HTTP service is also known as a REST-style web service. REST stands for Representational State Transfer. It is an approach for getting content from a web server via web pages: pages written in languages such as JSP, PHP, ColdFusion, ASP, and so on receive POST or GET requests, and their output can be formatted as XML instead of HTML, which can then be used in Flex applications for manipulating and displaying content. For example, RSS feeds output standard XML content when accessed via a URL. For more information about REST, see www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm.

Using the HTTPService tag in MXML

The HTTPService class can be used to load both static and dynamic content from remote URLs. For example, you can use HTTPService to load an XML file that is located on a server, or you can use HTTPService in conjunction with server-side technologies that return dynamic results to the Flex application. You can use both the HTTPS and the HTTP protocols to access secure and non-secure remote content. The following is the basic syntax of the HTTPService tag:

```xml
<mx:HTTPService
    id="instanceName"
    method="GET|POST|HEAD|OPTIONS|PUT|TRACE|DELETE"
    resultFormat="object|array|xml|e4x|flashvars|text"
    url="http://www.mydomain.com/myFile.jsp"
    fault="faultHandler(event);"
    result="resultHandler(event);"/>
```

The following are the properties and events:

- id: specifies the instance identifier name
- method: specifies the HTTP method; the default value is GET
- resultFormat: specifies the format of the result data
- url: specifies the complete remote URL that will be called
- fault: specifies the event handler method called when a fault or error occurs while connecting to the URL
- result: specifies the event handler method called when the HTTPService call completes successfully and returns results

When you call the HTTPService.send() method, Flex makes the HTTP request to the specified URL. The HTTP response is returned in the result property of the ResultEvent class in the result handler of the HTTPService class. You can also pass an optional parameter to the send() method by passing in an object; any attributes of the object will be passed in as a URL-encoded query string along with the URL request. For example:

```actionscript
var param:Object = {title:"Book Title", isbn:"1234567890"};
myHS.send(param);
```

In the above code example, we have created an object called param and added two string attributes to it: title and isbn. When you pass the param object to the send() method, these two string attributes are converted to URL-encoded HTTP parameters, and your HTTP request URL will look like this: http://www.domain.com/myjsp.jsp?title=Book%20Title&isbn=1234567890. This way, you can pass any number of HTTP parameters to the URL, and these parameters can be accessed in your server-side code. In a JSP or servlet, for example, you could use request.getParameter("title") to get the HTTP request parameter.
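To complete the picture, a minimal sketch of the result and fault handlers referenced in the syntax above might look like this; the handler names and the use of resultFormat="e4x" are assumptions, not taken from the article:

```actionscript
import mx.rpc.events.ResultEvent;
import mx.rpc.events.FaultEvent;
import mx.controls.Alert;

// Called when the HTTPService request completes successfully;
// event.result holds the response in the configured resultFormat.
private function resultHandler(event:ResultEvent):void {
    var data:XML = event.result as XML; // assumes resultFormat="e4x"
    trace(data);
}

// Called when the request fails; event.fault describes the error.
private function faultHandler(event:FaultEvent):void {
    Alert.show(event.fault.faultString, "HTTPService error");
}
```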
Create a Local Ubuntu Repository using Apt-Mirror and Apt-Cacher

Packt
05 Oct 2009
7 min read
How can a company or organization minimize bandwidth costs when maintaining multiple Ubuntu installations? With bandwidth becoming the currency of the new millennium, being responsible with the bandwidth you have can be a real concern. In this article by Christer Edwards, we will learn how to create, maintain and make available a local Ubuntu repository mirror, allowing you to save bandwidth and improve network efficiency with each machine you add to your network. (For more resources on Ubuntu, see here.) Background Before I begin, let me give you some background into what prompted me to initially come up with these solutions. I used to volunteer at local university user groups whenever they held "installfests". I always enjoyed offering my support and expertise with new Linux users. I have to say, the distribution that we helped deploy the most was Ubuntu. The fact that Ubuntu's parent company, Canonical, provided us with pressed CDs didn't hurt! Despite having more CDs than we could handle we still regularly ran into the same problem. Bandwidth. Fetching updates and security errata was always our bottleneck, consistently slowing down our deployments. Imagine you are in a conference room with dozens of other Linux users, each of them trying to use the internet to fetch updates. Imagine how slow and bogged down that internet connection would be. If you've ever been to a large tech conference you've probably experienced this first hand! After a few of these "installfests" I started looking for a solution to our bandwidth issue. I knew that all of these machines were downloading the same updates and security errata, therefore duplicating work that another user had already done. It just seemed so inefficient. There had to be a better way. Eventually I came across two solutions, which I will outline below. Apt-Mirror The first method makes use of a tool called “apt-mirror”. It is a perl-based utility for downloading and mirroring the entire contents of a public repository. This may likely include packages that you don't use and will not use, but anything stored in a public repository will also be stored in your mirror. To configure apt-mirror you will need the following: apt-mirror package (sudo aptitude install apt-mirror) apache2 package (sudo aptitude install apache2) roughly 15G of storage per release, per architecture Once these requirements are met you can begin configuring the apt-mirror tool. The main items that you'll need to define are: storage location (base_path) number of download threads (nthreads) release(s) and architecture(s) you want The configuration is done in the /etc/apt/mirror.list file. A default config was put in place when you installed the package, but it is generic. You'll need to update the values as mentioned above. Below I have included a complete configuration which will mirror Ubuntu 8.04 LTS for both 32 and 64 bit installations. It will require nearly 30G of storage space, which I will be putting on a removable drive. The drive will be mounted at /media/STORAGE/. 
# apt-mirror configuration file
##
## The default configuration options (uncomment and change to override)
##
set base_path /media/STORAGE/
# set mirror_path $base_path/mirror
# set skel_path $base_path/skel
# set var_path $base_path/var
## set defaultarch <running host architecture>
set nthreads 20

## 8.04 "hardy" i386 mirror
deb-i386 http://us.archive.ubuntu.com/ubuntu hardy main restricted universe multiverse
deb-i386 http://us.archive.ubuntu.com/ubuntu hardy-updates main restricted universe multiverse
deb-i386 http://us.archive.ubuntu.com/ubuntu hardy-security main restricted universe multiverse
deb-i386 http://us.archive.ubuntu.com/ubuntu hardy-backports main restricted universe multiverse
deb-i386 http://us.archive.ubuntu.com/ubuntu hardy-proposed main restricted universe multiverse
deb-i386 http://us.archive.ubuntu.com/ubuntu hardy main/debian-installer restricted/debian-installer universe/debian-installer multiverse/debian-installer
deb-i386 http://packages.medibuntu.org/ hardy free non-free

# 8.04 "hardy" amd64 mirror
deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy main restricted universe multiverse
deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy-updates main restricted universe multiverse
deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy-security main restricted universe multiverse
deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy-backports main restricted universe multiverse
deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy-proposed main restricted universe multiverse
deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy main/debian-installer restricted/debian-installer universe/debian-installer multiverse/debian-installer
deb-amd64 http://packages.medibuntu.org/ hardy free non-free

# Cleaning section
clean http://us.archive.ubuntu.com/
clean http://packages.medibuntu.org/

It should be noted that each of the repository lines within the file should begin with deb-i386 or deb-amd64. Once your configuration is saved you can begin to populate your mirror by running the command:

apt-mirror

Be warned that, based on your internet connection speeds, this could take quite a long time: hours, if not more than a day, for the initial 30G to download. Each run after the initial download will be much faster, as only incremental updates are fetched. You may be wondering if it is possible to cancel the transfer before it is finished and begin again where it left off. Yes, I have done this countless times and I have not run into any issues. You should now have a copy of the repository stored locally on your machine. In order for this to be available to other clients you'll need to share the contents over HTTP. This is why we listed Apache as a requirement above. In my example configuration above I mirrored the repository on a removable drive mounted at /media/STORAGE. What I need to do now is make that content available over the web. This can be done by way of a symbolic link:

cd /var/www/
sudo ln -s /media/STORAGE/mirror/us.archive.ubuntu.com/ubuntu/ ubuntu

The commands above will tell the filesystem to follow any requests for "ubuntu" back up to the mounted external drive, where it will find your mirrored contents. If you have any problems with this linking, double-check your paths (as compared to those suggested here) and make sure your link points to the ubuntu directory, which is a subdirectory of the mirror you pulled from.
If you point anywhere below this point your clients will not properly be able to find the contents.

The additional task of keeping this newly downloaded mirror updated with the latest packages can be automated by way of a cron job. By activating the cron job your machine will automatically run the apt-mirror command on a regular daily basis, keeping your mirror up to date without any additional effort on your part. To activate the automated cron job, edit the file /etc/cron.d/apt-mirror. There will be a sample entry in the file. Simply uncomment the line and (optionally) change the "4" to an hour more convenient for you. If left with its defaults it will run the apt-mirror command, updating and synchronizing your mirror, every morning at 4:00am.

The final step, now that you have your repository created, shared over HTTP, and configured to update automatically, is to configure your clients to use it. Ubuntu clients define where they should grab errata and security updates in the /etc/apt/sources.list file. This will usually point to the default or regional mirror. To update your clients to use your local mirror instead, edit /etc/apt/sources.list and comment out the existing entries (you may want to revert to them later!). Once you've commented out the entries you can create new ones, this time pointing to your mirror. A sample entry pointing to a local mirror might look something like this:

deb http://192.168.0.10/ubuntu hardy main restricted universe multiverse
deb http://192.168.0.10/ubuntu hardy-updates main restricted universe multiverse
deb http://192.168.0.10/ubuntu hardy-security main restricted universe multiverse

Basically what you are doing is recreating your existing entries, but replacing us.archive.ubuntu.com with the IP of your local mirror. As long as the mirrored contents are made available over HTTP on the mirror server itself you should have no problems. If you do run into problems, check your Apache logs for details. By following these simple steps you'll have your own private (even portable!) Ubuntu repository available within your LAN, saving you future bandwidth and avoiding redundant downloads.
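For reference, the sample line in /etc/cron.d/apt-mirror generally looks something like the following once uncommented. This is a sketch based on the stock packaged file, so the exact log path may differ on your system; if you changed base_path as shown above, you may also want to redirect the log under your own mirror directory:

# m  h  dom mon dow  user        command
0    4  *   *   *    apt-mirror  /usr/bin/apt-mirror > /var/spool/apt-mirror/var/cron.log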
Ubuntu User Interface Tweaks
Packt
01 Oct 2009
10 min read
I have spent time on all of the major desktop operating systems, and Linux is by far the most customizable. The GNOME desktop environment, the default environment of Ubuntu and many other Linux distributions, is a very simple yet very customizable interface. I have spent a lot of time around a lot of Linux users and rarely do I find two desktops that look the same. Whether it is a simple desktop background customization or a much more complex UI alteration, GNOME allows you to make your desktop your own. Just like any other environment that you're going to find yourself in for an extended period of time, you're going to want to make it your own. The GNOME desktop offers this ability in a number of ways. First of all I'll cover perhaps the more obvious methods, and then I'll move to the more complex. As mentioned in the introduction, by the end of this article you'll know how to automate (script) the customization of your desktop down to the very last detail. This is perfect for those that find themselves reinstalling their machines on a regular basis.

Appearance

GNOME offers a number of basic customizations within the Applications menu. To use the "Appearance Preferences" tool simply navigate to:

System > Preferences > Appearance

You'll find that the main screen allows you to change your basic theme. The theme includes the environment color scheme, icon set, and window bordering. This is often one of the very first things that users will change on a new installation. Of the default theme selections I generally prefer "Clearlooks" over the default Ubuntu brown color.

The next tab allows you to set your background. This is the graphic, color, or gradient that you want to appear on your desktop. This is also a very common customization. More often than not users will find third-party graphics for this section. A great place to find user-generated desktop content is the http://gnome-look.org website. It is dedicated to user-generated artwork for the GNOME and Ubuntu desktop.

On the third tab you'll find Fonts. I have found that fonts play a very important role in the look of your desktop. For the longest time I didn't bother with customizing my fonts, but after being introduced to a few that I like, it is a must-have in my desktop customization list. My personal preference is to use the "Droid Sans" font, at 10pt, for all settings. I think this is a very clean, crisp font design that really changes the desktop look and feel. If you'd like to try out this font set you'll have to install it. This can be done using:

sudo aptitude install ttf-droid

Another noticeable customization to your desktop on the Fonts tab is the Rendering option. For laptops you'll definitely want to select the bottom-right option, "Subpixel Smoothing (LCDs)". You should notice a change right away when you check the box. Finally, the Details button on the Fonts tab can make great improvements to the overall look. This is where you can set your font resolution. I highly suggest setting this value to 96 dots per inch (dpi). Recent versions of Ubuntu try to dynamically detect the preferred dpi. Unfortunately I haven't had the best of luck with this new feature, so I've continued to set it manually. I think you'll notice a change if your system is on something other than 96. Setting the font to "Droid Sans" 10pt and the resolution to 96 dpi is one of the biggest visual changes that I make to my system! The final tab in the Appearances tool is the Interface.
This tab allows you to customize simple things like whether or not your Applications menu should display icons. Personally, I have found that I like the default settings, but I would suggest trying a few customizations and finding out what you like. If you've followed the suggestions so far, I'm sure your desktop already looks a lot different than it did out of the box. By changing the theme, desktop background, font, and dpi you may have already made drastic changes. I'd also like to share some of the additional changes that I make, which will help demonstrate some of the more advanced, little-known features of the GNOME desktop.

gconf-editor

A default Ubuntu system comes with a behind-the-scenes tool called gconf-editor. This is basically a graphical editor for your entire set of GNOME configuration settings. At first use it can be a bit confusing, but once you figure out where and how to find your preferred settings it becomes much easier. To launch the gconf-editor, press the key combination ALT-F2 and type:

gconf-editor

I have heard people compare this tool to Microsoft's registry tool, but I assure you that it is far less complicated! It simply stores GNOME configuration and application settings. It even includes the changes that you made above! Anytime you make a change to the graphical interface it gets stored, and this tool is a graphical way to view those changes. Let's change something else, this time using the gconf-editor. Another of my favorite interface customizations involves the panels. By default you have two panels, one at the top and one at the bottom of your screen. I prefer to have both panels at the top of my screen, and I like them to be a bit smaller than they are out of the box. Here is how we would make that change using the gconf-editor.

Navigate to Edit > Search and search for bottom_panel or top_panel. I will start with bottom_panel. You should come up with a few results, the first one being /apps/panel/toplevels/bottom_panel_screen0. You can now customize the color, size, auto-hide feature, and much more of your panel. If you find orientation, double-click the entry and change the value to "top"; you'll find that your panel instantly moves to the top of the screen. You may want to alter the size entry while you're in there as well. Make a note of the Key name that you see for each item. These will come in handy a little bit later.

A few other settings that you might find interesting are Nautilus desktop settings such as:

computer_icon_visible
home_icon_visible
network_icon_visible
trash_icon_visible
volumes_visible

These are simple check-box settings, activating or deactivating an option upon click. Basically these allow you to toggle the computer, home, network, or trash icons on your desktop. I prefer to make sure each of these is turned off. The only one that I do like to keep on is volumes_visible. Try this out yourself and see what you prefer.

Automation

Earlier I mentioned that you'll want to make note of the Key Name for the settings that you're playing with. It is these names that allow us to automate, or script, the customization of our desktop environment. After putting a little bit of time into finding the key names for each of the customizations that I like, I am now able to completely customize every aspect of my desktop by running a simple script! Let me give you a few examples.
Above we found that the key name for the bottom panel was:

/apps/panel/toplevels/bottom_panel_screen0

The key name specifically for the orientation was:

/apps/panel/toplevels/bottom_panel_screen0/orientation

The value we changed was top or bottom. We can now make this change from the command line by typing:

gconftool-2 -s --type string /apps/panel/toplevels/bottom_panel_screen0/orientation top

Let us see a few more examples; these will change the font settings for each entry that we saw in the Appearances menu:

gconftool-2 -s --type string /apps/nautilus/preferences/desktop_font "Droid Sans 10"
gconftool-2 -s --type string /apps/metacity/general/titlebar_font "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/monospace_font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/document_font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/font_name "Droid Sans 10"

You may or may not have made these changes manually, but just think about the time you could save on your next Ubuntu installation by pasting in these five commands instead! I will warn you though, once you start making a list of gconftool-2 commands it's hard to stop. Considering how simple it is to make environment changes using simple commands, why not list everything! I'd like to share the script that I use to make my preferred changes. You'll likely want to edit the values to match your preferences.

#!/bin/bash
#
# customize GNOME interface
# ([email protected])
#
gconftool-2 -s --type string /apps/nautilus/preferences/desktop_font "Droid Sans 10"
gconftool-2 -s --type string /apps/metacity/general/titlebar_font "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/monospace_font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/document_font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/icon_theme "gnome-brave"
gconftool-2 -s --type bool /apps/nautilus/preferences/always_use_browser true
gconftool-2 -s --type bool /apps/nautilus/desktop/computer_icon_visible false
gconftool-2 -s --type bool /apps/nautilus/desktop/home_icon_visible false
gconftool-2 -s --type bool /apps/nautilus/desktop/network_icon_visible false
gconftool-2 -s --type bool /apps/nautilus/desktop/trash_icon_visible false
gconftool-2 -s --type bool /apps/nautilus/desktop/volumes_visible true
gconftool-2 -s --type bool /apps/nautilus-open-terminal/desktop_opens_home_dir true
gconftool-2 -s --type bool /apps/gnome-do/preferences/Do/Platform/Linux/TrayIconPreferences/StatusIconVisible true
gconftool-2 -s --type bool /apps/gnome-do/preferences/Do/CorePreferences/QuietStart true
gconftool-2 -s --type bool /apps/gnome-terminal/profiles/Default/default_show_menubar false
gconftool-2 -s --type string /apps/gnome-terminal/profiles/Default/font "Droid Sans Mono 10"
gconftool-2 -s --type string /apps/gnome-terminal/profiles/Default/scrollbar_position "hidden"
gconftool-2 -s --type string /desktop/gnome/interface/gtk_theme "Shiki-Brave"
gconftool-2 -s --type int /apps/panel/toplevels/bottom_panel_screen0/size 23
gconftool-2 -s --type int /apps/panel/toplevels/top_panel_screen0/size 23
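Assuming you save the script as gnome-setup in your home directory, putting it to work is just a matter of making it executable and running it:

chmod +x ~/gnome-setup   # make the script executable
~/gnome-setup            # apply all of the settings in one shot

Most of these gconf changes take effect immediately, though a few (the gnome-terminal profile settings, for example) may only show up in newly opened windows.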
Summary

By saving the above script into a file called "gnome-setup" and running it after a fresh installation, I'm able to update my theme, fonts, visible and non-visible icons, gnome-do preferences, gnome-terminal preferences, and much more within seconds. My desktop actually feels like my desktop again! I find that maintaining a simple file like this greatly eases the customization of my desktop environment and lets me focus on getting things done. I no longer spend an hour tweaking each little setting to make my machine my home again. I install, run my script, and get to work!

If you have read this article you may be interested to view:

Compiling and Running Handbrake in Ubuntu
Control of File Types in Ubuntu
Ubuntu 9.10: How To Upgrade
Install GNOME-Shell on Ubuntu 9.10 "Karmic Koala"
Five Years of Ubuntu
What's New In Ubuntu 9.10 "Karmic Koala"
Create a Local Ubuntu Repository using Apt-Mirror and Apt-Cacher
Binding MS Chart Control to LINQ Data Source Control
Packt
03 Sep 2009
4 min read
Introduction

LINQ, short for Language Integrated Query, provides an object-oriented approach to querying not only relational databases but also any other kind of source, such as XML or collections of objects. LINQ to SQL provides object-relational (O/R) mapping, and the Visual Studio 2008 IDE provides an (O/R) designer. Visual Studio 2008 also has a web server control called the LinqDataSource control. This control requires a DataContext, which is provided by the LINQ-to-SQL classes, a class generator that maps SQL objects to the object model. Without the designer, one may have to generate the classes from scratch, or by using the SQLMetal.exe utility, which generates the tables and columns. The readers may benefit by reading my previous article on Displaying SQL Server Data using a Linq data source. We will continue to use the PrincetonTemp table used in the previous article on MySQL Data Transfer using Sql Server Integration Services. We will map this table to LINQ via the LinqDataSource control and then use it as the source of data for the chart.

Create a Framework 3.5 Web Site project

Open Visual Studio 2008 from its shortcut on the desktop. Click File | New | Web Site... (or Shift+Alt+N) to open the New Web Site window. Change the default name of the site to a name of your choice (herein PieChartWithLinq) on your local web server as shown. Make sure you are creating a .NET Framework 3.5 web site as shown here. This is very similar to creating a web site that resides on the file system.

Add a LinqDataSource control and provide the data context

Drag and drop a LinqDataSource control from Toolbox | Data (shown in the next figure) on to the Default.aspx page. This creates an instance of the control, LinqDataSource1, as shown. The source page Default.aspx will display the control as shown in the next figure. When you try to configure a data source for this control using the smart tasks (see Figure 3), the program takes you to a window which does not allow you to create a new data context. If there are existing items, however, the program allows you to choose one.

Create a data context for the LinqDataSource control

We will add a new data context. Right-click the web site in Solution Explorer and pick Add New Item... from the drop-down menu. This opens the Add New Item window as shown. Highlight the item Linq to SQL Classes in the Add New Item window. Replace the default name DataClasses.dbml with one of your own; herein it is replaced with Princeton.dbml. When you click OK after renaming, you will get a Microsoft Visual Studio warning as shown. Read the message on this window. Click Yes to add Princeton.dbml, consisting of two items, to the App_Code folder of the web site project as shown. One is a layout designer and the other a code page. The (O/R) designer containing two panes is shown in the next figure. In the left pane you drag and drop table(s) from the Server Explorer, assuming you made a connection with a database. To the right you can add a stored procedure from the Server Explorer. Read the instructions on this page. Click View | Server Explorer to display the connections in the Server Explorer as shown. In the next figure you can see the expanded node of TestNorthwind displaying the table PrincetonTemp in the SQL Server 2008 instance Hodentek2Mysorian. The database and server names are the ones used for this article; you may have different names on your machine. The connection used is already present, but you can create a new connection in the Server Explorer window if you choose to do so.
Drag and drop the PrincetonTemp table on to the left pane of the (O/R) designer as shown. Build the project. This will make the generated objects available to LinqDataSource1 on the default page.
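As a point of reference, once the data context is in place and the control is configured, the markup in Default.aspx should look roughly like the sketch below. The ContextTypeName and TableName values here are assumptions based on the Princeton.dbml name and the PrincetonTemp table (the designer typically generates a PrincetonDataContext class and may pluralize the table property), so check the generated names on your own machine:

<asp:LinqDataSource ID="LinqDataSource1" runat="server"
    ContextTypeName="PrincetonDataContext"
    TableName="PrincetonTemps">
</asp:LinqDataSource>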