
How-To Tutorials - Web Development

1797 Articles

APEX Plug-ins

Packt
30 Oct 2013
17 min read
(For more resources related to this topic, see here.)

In APEX 4.0, Oracle introduced the plug-in feature. A plug-in is an extension to the existing functionality of APEX. The idea behind plug-ins is to make life easier for developers. Plug-ins are reusable and can be exported and imported, so it is possible to create functionality that is available to all APEX developers, who can install and use it without needing to know what is inside the plug-in.

APEX translates settings from the APEX Builder to HTML and JavaScript. For example, if you created a text item in the APEX Builder, APEX converts this to the following code (simplified):

    <input type="text" id="P12_NAME" name="P12_NAME" value="your name">

When you create an item type plug-in, you take over this conversion task from APEX and generate the HTML and JavaScript code yourself using PL/SQL procedures. That offers a lot of flexibility, because you can make the code generic so that it can be used for more items.

The same goes for region type plug-ins. A region is a container for forms, reports, and so on; it can be a div or an HTML table. By creating a region type plug-in, you create the region yourself, with the possibility of adding more functionality to it.

Plug-ins are very useful because they are reusable in every application. To make a plug-in available, go to Shared Components | Plug-ins and click on the Export Plug-in link on the right-hand side of the page. Select the desired plug-in and file format and click on the Export Plug-in button. The plug-in can then be imported into another application.

Following are the six types of plug-in:

- Item type plug-ins
- Region type plug-ins
- Dynamic action plug-ins
- Process type plug-ins
- Authorization scheme type plug-ins
- Authentication scheme type plug-ins

In this article we will discuss the first five types of plug-in.

Creating an item type plug-in

With an item type plug-in you create an item with the possibility of extending its functionality. To demonstrate this, we will make a text field with a tooltip. This functionality is already available in APEX 4.0 by adding the following code to the HTML form element attributes text field in the Element section of the text field:

    onmouseover="toolTip_enable(event,this,'A tooltip')"

But you have to do this for every item that should show a tooltip. This can be done more easily by creating an item type plug-in with a built-in tooltip: whenever you create an item based on the plug-in, you are simply asked to enter the text for the tooltip.

Getting ready

For this recipe you can use an existing page with a region in which you can put some text items.

How to do it...

Follow these steps:

1. Go to Shared Components | User Interface | Plug-ins.
2. Click on the Create button.
3. In the Name section, enter a name in the Name text field. In this case we enter tooltip.
4. In the Internal Name text field, enter an internal name. It is advisable to use your company's domain name reversed to ensure the name is unique if you decide to share this plug-in, for example com.packtpub.apex.tooltip.
5. In the Source section, enter the following code in the PL/SQL Code text area:

    function render_simple_tooltip
      ( p_item                in apex_plugin.t_page_item
      , p_plugin              in apex_plugin.t_plugin
      , p_value               in varchar2
      , p_is_readonly         in boolean
      , p_is_printer_friendly in boolean
      ) return apex_plugin.t_page_item_render_result
    is
      l_result apex_plugin.t_page_item_render_result;
    begin
      if apex_application.g_debug then
        apex_plugin_util.debug_page_item
          ( p_plugin              => p_plugin
          , p_page_item           => p_item
          , p_value               => p_value
          , p_is_readonly         => p_is_readonly
          , p_is_printer_friendly => p_is_printer_friendly);
      end if;
      --
      sys.htp.p('<input type="text" id="'||p_item.name||'" name="'||p_item.name||'" class="text_field" onmouseover="toolTip_enable(event,this,'||''''||p_item.attribute_01||''''||')">');
      --
      return l_result;
    end render_simple_tooltip;

This function uses sys.htp.p to put a text item (<input type="text") on the screen. On the text item, the onmouseover event calls the function toolTip_enable(). This is an APEX function that can be used to put a tooltip on an item. The arguments of the render function are mandatory.

The function starts with the option to show debug information. This can be very useful when you create a plug-in and it doesn't work. After the debug information, the htp.p call puts the text item on the screen, including the call to toolTip_enable. Note that the call to toolTip_enable uses p_item.attribute_01; this is a parameter that you can use to pass a value to the plug-in, and we will define it in the following steps. The function ends by returning l_result, a variable of type apex_plugin.t_page_item_render_result. The other plug-in types have dedicated return types as well, for example t_region_render_result.

6. Click on the Create Plug-in button.
7. The next step is to define the parameter (attribute) for this plug-in. In the Custom Attributes section, click on the Add Attribute button.
8. In the Name section, enter a name in the Label text field, for example tooltip. Ensure that the Attribute text field contains the value 1.
9. In the Settings section, set the Type field to Text.
10. Click on the Create button.
11. In the Callbacks section, enter render_simple_tooltip into the Render Function Name text field.
12. In the Standard Attributes section, check the Is Visible Widget checkbox.
13. Click on the Apply Changes button.

The plug-in is now ready. The next step is to create an item that uses it:

1. Go to a page with a region where you want to use an item with a tooltip.
2. In the Items section, click on the add icon to create a new item.
3. Select Plug-ins. You will get a list of the available plug-ins; select the one we just created, tooltip, and click on Next.
4. In the Item Name text field, enter a name for the item, for example tt_item.
5. In the Region drop-down list, select the region you want to put the item in, and click on Next.
6. In the next step you will see a new option: the attribute you created with the plug-in. Enter the tooltip text here, for example This is tooltip text, and click on Next.
7. In the last step, leave everything as it is and click on the Create Item button.

You are now ready. Run the page. When you move your mouse pointer over the new item, you will see the tooltip.

How it works...

As stated before, this plug-in uses htp.p to put an item on the screen. Together with the call to the JavaScript function toolTip_enable on the onmouseover event, this makes it a text item with a tooltip, replacing the normal text item.
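For illustration, if the item is named P1_TT_ITEM and its first attribute holds the text This is tooltip text, the render function above would emit markup along these lines (the item name is hypothetical; the markup is read straight off the htp.p call):

    <input type="text" id="P1_TT_ITEM" name="P1_TT_ITEM" class="text_field"
           onmouseover="toolTip_enable(event,this,'This is tooltip text')">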
There's more...

The tooltips shown in this recipe are rather simple. You could make them look better, for example by using the Beautytips tooltips. Beautytips is an extension to jQuery that can show configurable help balloons. Visit http://plugins.jquery.com to download Beautytips; we downloaded Version 0.9.5-rc1 to use in this recipe.

1. Go to Shared Components and click on the Plug-ins link.
2. Click on the tooltip plug-in you just created.
3. In the Source section, replace the code with the following:

    function render_simple_tooltip
      ( p_item                in apex_plugin.t_page_item
      , p_plugin              in apex_plugin.t_plugin
      , p_value               in varchar2
      , p_is_readonly         in boolean
      , p_is_printer_friendly in boolean
      ) return apex_plugin.t_page_item_render_result
    is
      l_result apex_plugin.t_page_item_render_result;
    begin
      if apex_application.g_debug then
        apex_plugin_util.debug_page_item
          ( p_plugin              => p_plugin
          , p_page_item           => p_item
          , p_value               => p_value
          , p_is_readonly         => p_is_readonly
          , p_is_printer_friendly => p_is_printer_friendly);
      end if;

The function again starts with the debug option so you can see what happens when something goes wrong.

      --
      -- Register the JavaScript and CSS libraries the plug-in uses.
      --
      apex_javascript.add_library
        ( p_name      => 'jquery.bgiframe.min'
        , p_directory => p_plugin.file_prefix
        , p_version   => null );
      apex_javascript.add_library
        ( p_name      => 'jquery.bt.min'
        , p_directory => p_plugin.file_prefix
        , p_version   => null );
      apex_javascript.add_library
        ( p_name      => 'jquery.easing.1.3'
        , p_directory => p_plugin.file_prefix
        , p_version   => null );
      apex_javascript.add_library
        ( p_name      => 'jquery.hoverintent.minified'
        , p_directory => p_plugin.file_prefix
        , p_version   => null );
      apex_javascript.add_library
        ( p_name      => 'excanvas'
        , p_directory => p_plugin.file_prefix
        , p_version   => null );

After that you see a number of calls to the function apex_javascript.add_library. These libraries are necessary to enable the nicer tooltips. Using apex_javascript.add_library ensures that a JavaScript library is included in the final HTML of a page only once, regardless of how many plug-in items appear on that page.

      sys.htp.p('<input type="text" id="'||p_item.name||'" class="text_field" title="'||p_item.attribute_01||'">');
      --
      apex_javascript.add_onload_code (p_code =>
        '$("#'||p_item.name||'").bt({
             padding: 20
           , width: 100
           , spikeLength: 40
           , spikeGirth: 40
           , cornerRadius: 50
           , fill: '||''''||'rgba(200, 50, 50, .8)'||''''||'
           , strokeWidth: 4
           , strokeStyle: '||''''||'#E30'||''''||'
           , cssStyles: {color: '||''''||'#FFF'||''''||', fontWeight: '||''''||'bold'||''''||'}
         });');
      --
      return l_result;
    end render_simple_tooltip;

Another difference from the first version is the call to the Beautytips library. In this call you can customize the text balloon with colors and other options. The onmouseover event is no longer necessary, as the call to $().bt in apex_javascript.add_onload_code takes over this task. The $().bt function is a jQuery JavaScript function that references the generated HTML of the plug-in item by ID and converts it dynamically to show a tooltip using the Beautytips plug-in. You can of course always create extra plug-in item type parameters to support different colors and so on per item.

To add the other libraries, do the following:

1. In the Files section, click on the Upload new file button.
2. Enter the path and the name of the library. You can use the file button to locate the libraries on your filesystem.
3. Once you have selected the file, click on the Upload button.
The files and their locations can be found in the following table:

    Library                          Location
    jquery.bgiframe.min.js           bt-0.9.5-rc1\other_libs\bgiframe_2.1.1
    jquery.bt.min.js                 bt-0.9.5-rc1
    jquery.easing.1.3.js             bt-0.9.5-rc1\other_libs
    jquery.hoverintent.minified.js   bt-0.9.5-rc1\other_libs
    excanvas.js                      bt-0.9.5-rc1\other_libs\excanvas_r3

Once all libraries have been uploaded, the plug-in is ready. The tooltip now looks quite different, as shown in the following screenshot:

In the plug-in settings, you can enable some item-specific settings. For example, if you want to put a label in front of the text item, check the Is Visible Widget checkbox in the Standard Attributes section. For more information on this tooltip, go to http://plugins.jquery.com/project/bt.

Creating a region type plug-in

As you may know, a region is actually a div. With a region type plug-in you can customize this div, and because it is a plug-in, you can reuse it on other pages. You also have the possibility of making the div look better by using JavaScript libraries. In this recipe we will make a carousel with switching panels. The panels can contain images, but they can also contain data from a table. We will make use of another jQuery extension, Step Carousel.

Getting ready

You can download stepcarousel.js from http://www.dynamicdrive.com/dynamicindex4/stepcarousel.htm. However, in order to get this recipe to work in APEX, we needed to make a slight modification to it, so stepcarousel.js, arrowl.gif, and arrowr.gif are included with this book.

How to do it...

Follow the given steps to create the plug-in:

1. Go to Shared Components and click on the Plug-ins link.
2. Click on the Create button.
3. In the Name section, enter a name for the plug-in in the Name field. We will use Carousel.
4. In the Internal Name text field, enter a unique internal name. It is advisable to use your domain reversed, for example com.packtpub.carousel.
5. In the Type listbox, select Region.
6. In the Source section, enter the following code in the PL/SQL Code text area:

    function render_stepcarousel
      ( p_region              in apex_plugin.t_region
      , p_plugin              in apex_plugin.t_plugin
      , p_is_printer_friendly in boolean
      ) return apex_plugin.t_region_render_result
    is
      cursor c_crl is
        select id
        ,      panel_title
        ,      panel_text
        ,      panel_text_date
        from   app_carousel
        order by id;
      --
      l_code varchar2(32767);
    begin

The function starts with a number of arguments. These arguments are mandatory, but have a default value. In the declare section there is a cursor with a query on the table APP_CAROUSEL. This table contains the data that will appear in the panels of the carousel.

      --
      -- add the libraries and stylesheets
      --
      apex_javascript.add_library
        ( p_name      => 'stepcarousel'
        , p_directory => p_plugin.file_prefix
        , p_version   => null );
      --
      -- output the placeholder for the region, which is used by
      -- the JavaScript code

The actual code starts with the declaration of stepcarousel.js. The function APEX_JAVASCRIPT.ADD_LIBRARY loads this library. This declaration is necessary, but the file also needs to be uploaded in a later step. You don't have to use the .js extension here in the code.
      --
      sys.htp.p('<style type="text/css">');
      --
      sys.htp.p('.stepcarousel{');
      sys.htp.p('position: relative;');
      sys.htp.p('border: 10px solid black;');
      sys.htp.p('overflow: scroll;');
      sys.htp.p('width: '||p_region.attribute_01||'px;');
      sys.htp.p('height: '||p_region.attribute_02||'px;');
      sys.htp.p('}');
      --
      sys.htp.p('.stepcarousel .belt{');
      sys.htp.p('position: absolute;');
      sys.htp.p('left: 0;');
      sys.htp.p('top: 0;');
      sys.htp.p('}');
      --
      sys.htp.p('.stepcarousel .panel{');
      sys.htp.p('float: left;');
      sys.htp.p('overflow: hidden;');
      sys.htp.p('margin: 10px;');
      sys.htp.p('width: 250px;');
      sys.htp.p('}');
      --
      sys.htp.p('</style>');

After loading the JavaScript library, some style elements are put on the screen. These style elements could have been put in a Cascading Style Sheet (CSS), but since we want to be able to adjust the size of the carousel, we use two plug-in attributes to set the height and the width, and these are part of the style elements.

      --
      sys.htp.p('<div id="mygallery" class="stepcarousel" style="overflow:hidden"><div class="belt">');
      --
      for r_crl in c_crl loop
        sys.htp.p('<div class="panel">');
        sys.htp.p('<b>'||to_char(r_crl.panel_text_date,'DD-MON-YYYY')||'</b>');
        sys.htp.p('<br>');
        sys.htp.p('<b>'||r_crl.panel_title||'</b>');
        sys.htp.p('<hr>');
        sys.htp.p(r_crl.panel_text);
        sys.htp.p('</div>');
      end loop;
      --
      sys.htp.p('</div></div>');

The next command in the script is the actual creation of a div. Important here are the ID of the div and the class: Step Carousel searches for these identifiers and replaces the div with the carousel. The next step in the function is fetching the rows from the cursor's query. For every row found, the formatted text is placed between div tags, so that Step Carousel recognizes that the text should be placed on the panels.

      --
      -- add the onload code to show the carousel
      --
      l_code := 'stepcarousel.setup({
          galleryid: "mygallery"
        , beltclass: "belt"
        , panelclass: "panel"
        , autostep: {enable:true, moveby:1, pause:3000}
        , panelbehavior: {speed:500, wraparound:true, persist:true}
        , defaultbuttons: {enable: true, moveby: 1
            , leftnav:  ["'||p_plugin.file_prefix||'arrowl.gif", -5, 80]
            , rightnav: ["'||p_plugin.file_prefix||'arrowr.gif", -20, 80]}
        , statusvars: ["statusA", "statusB", "statusC"]
        , contenttype: ["inline"]})';
      --
      apex_javascript.add_onload_code (p_code => l_code);
      --
      return null;
    end render_stepcarousel;

The function ends with the call to apex_javascript.add_onload_code. This is where the actual Step Carousel code starts and where you can customize the carousel: its size, rotation speed, and so on.

7. In the Callbacks section, enter the name of the function, render_stepcarousel, in the Return Function Name text field.
8. Click on the Create Plug-in button.
9. In the Files section, upload the stepcarousel.js, arrowl.gif, and arrowr.gif files.

For this purpose, the file stepcarousel.js needs a small modification. In its last section (setup:function), document.write is used to add some style to the div tag. Unfortunately, this does not work in APEX, as document.write somehow destroys the rest of the output, so APEX has nothing left to show, resulting in an empty page. The document.write call needs to be removed, and the style it added is instead included in the code of the plug-in:

    sys.htp.p('<div id="mygallery" class="stepcarousel" style="overflow: hidden;"><div class="belt">');

In this line of code you see style="overflow: hidden;". That is the style that had to be moved out of stepcarousel.js; it hides the scrollbars.
After you have uploaded the files, click on the Apply Changes button. The plug-in is ready and can now be used on a page:

1. Go to the page where you want the carousel to be shown.
2. In the Regions section, click on the add icon.
3. In the next step, select Plug-ins.
4. Select Carousel and click on Next.
5. Enter a title for this region, for example Newscarousel, and click on Next.
6. In the next step, enter the height and the width of the carousel. To show a carousel with three panels, enter 800 in the Width text field and 100 in the Height text field. Click on Next.
7. Click on the Create Region button.

The plug-in is ready. Run the page to see the result.

How it works...

The step carousel is actually a div. The region type plug-in uses the function sys.htp.p to put this div on the screen. In this example a div is used for the region, but an HTML table could be used as well. An APEX region can contain any HTML output, but for positioning, mostly an HTML table or a div is used, especially when layout is important within the region.

The apex_javascript.add_onload_code call starts the animation of the carousel. The carousel switches panels every 3 seconds; this can be adjusted with the pause option (pause: 3000).

See also

For more information on this jQuery extension, go to http://www.dynamicdrive.com/dynamicindex4/stepcarousel.htm.


The Dialog Widget

Packt
30 Oct 2013
14 min read
(For more resources related to this topic, see here.)

Wijmo additions to the dialog widget at a glance

By default, the dialog window includes the pin, toggle, minimize, maximize, and close buttons. Pinning the dialog to a location on the screen disables the dragging feature on the title bar; the dialog can still be resized. Maximizing the dialog makes it take up the area inside the browser window. Toggling it expands or collapses it so that the dialog contents are shown or hidden while the title bar remains visible. If these buttons cramp your style, they can be turned off with the captionButtons option. You can see how the dialog is presented in the browser in the following screenshot:

Wijmo features additional API compared to jQuery UI for changing the behavior of the dialog. The new API is mostly for the buttons in the title bar and for managing window stacking. Window stacking determines which windows are drawn on top of other ones; clicking on a dialog raises it above other dialogs and changes their window stacking settings. The following list shows the options, events, and methods added in Wijmo:

- Options: captionButtons, contentUrl, disabled, expandingAnimation, stack, zIndex
- Events: blur, buttonCreating, stateChanged
- Methods: disable, enable, getState, maximize, minimize, pin, refresh, reset, restore, toggle, widget

The contentUrl option allows you to specify a URL to load within the window. The expandingAnimation option is applied when the dialog is toggled from a collapsed state to an expanded state. The stack and zIndex options determine whether the dialog sits on top of other dialogs. Similar to the blur event on input elements, the blur event for the dialog is fired when the dialog loses focus. The buttonCreating method is called when buttons are created and can modify the buttons on the title bar. The disable method disables the event handlers for the dialog: it prevents the default button actions and disables dragging and resizing. The widget method returns the dialog HTML element.

The methods maximize, minimize, pin, refresh, reset, restore, and toggle are available as buttons on the title bar; the best way to see what they do is to play around with them. In addition, the getState method is used to find the dialog state and returns either maximized, minimized, or normal. Similarly, the stateChanged event is fired when the state of the dialog changes.

The methods are called by passing the method name as a parameter to the wijdialog method. To disable button interactions, pass the string disable:

    $("#dialog").wijdialog("disable");

Many of the methods come in pairs, and enable and disable are one of them; calling enable enables the buttons again. Another pair is restore/minimize: minimize hides the dialog in a tray on the bottom left of the screen, and restore sets the dialog back to its normal size and displays it again.
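A quick sketch of these method pairs in action (assuming a #dialog element that has already been initialized with wijdialog, as in the examples below):

    // Hide the dialog into the tray at the bottom left, then bring it back.
    $("#dialog").wijdialog("minimize");
    $("#dialog").wijdialog("restore");

    // Disable all button interactions, dragging, and resizing; then re-enable.
    $("#dialog").wijdialog("disable");
    $("#dialog").wijdialog("enable");

    // Query the current state: "maximized", "minimized", or "normal".
    var state = $("#dialog").wijdialog("getState");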
The most important option for usability is the captionButtons option. Although users are likely familiar with the minimize, resize, and close buttons, the pin and toggle buttons are not featured in common desktop environments. Therefore, you will want to choose the buttons that are visible depending on your use of the dialog box in your project. To turn off a button on the title bar, set its visible option to false. A default jQuery UI dialog window with only the close button can be created with:

    $("#dialog").wijdialog({captionButtons: {
        pin:      { visible: false },
        refresh:  { visible: false },
        toggle:   { visible: false },
        minimize: { visible: false },
        maximize: { visible: false }
    }});

The other options for each button are click, iconClassOff, and iconClassOn. The click option specifies an event handler for the button. Nevertheless, the buttons come with default actions, and you will want to use different icons for custom actions; that's where iconClass comes in. iconClassOn defines the CSS class for the button when it is loaded; iconClassOff is the class for the button icon after clicking. For a list of available jQuery UI icons and their classes, see http://jquery-ui.googlecode.com/svn/tags/1.6rc5/tests/static/icons.html. Our next example uses ui-icon-zoomin, ui-icon-zoomout, and ui-icon-lightbulb. They can be found by toggling the text for the icons on the web page as shown in the preceding screenshot.

Adding custom buttons

jQuery UI's dialog API lacks an option for configuring the buttons shown on the title bar. Wijmo not only comes with useful default buttons, but also lets you override them easily.

    <!DOCTYPE HTML>
    <html>
    <head>
    ...
    <style>
      .plus { font-size: 150%; }
    </style>
    <script id="scriptInit" type="text/javascript">
      $(document).ready(function () {
        $('#dialog').wijdialog({
          autoOpen: true,
          captionButtons: {
            pin:      { visible: false },
            refresh:  { visible: false },
            toggle:   { visible: true,
                        click: function () { $('#dialog').toggleClass('plus') },
                        iconClassOn: 'ui-icon-zoomin',
                        iconClassOff: 'ui-icon-zoomout' },
            minimize: { visible: false },
            maximize: { visible: true,
                        click: function () { alert('To enlarge text, click the zoom icon.') },
                        iconClassOn: 'ui-icon-lightbulb' },
            close:    { visible: true, click: self.close, iconClassOn: 'ui-icon-close' }
          }
        });
      });
    </script>
    </head>
    <body>
      <div id="dialog" title="Basic dialog">
        <p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo
        ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis
        parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec,
        pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec
        pede justo, fringilla vel, aliquet nec, vulputate</p>
      </div>
    </body>
    </html>

We create a dialog window passing in the captionButtons option. The pin, refresh, and minimize buttons have visible set to false, so the title bar is initialized without them. The final output looks as shown in the following screenshot:

In addition, the toggle and maximize buttons are modified and given custom behaviors. The toggle button toggles the font size of the text by applying or removing a CSS class. Its default icon, set with iconClassOn, indicates that clicking on it will zoom in on the text. Once clicked, the icon changes to a zoom-out icon. Likewise, the behavior and appearance of the maximize button have been changed: in the position where the maximize icon was previously displayed in the title bar, there is now a lightbulb icon with a tip. Although this method of adding new buttons to the title bar seems clumsy, it is the only option that Wijmo currently offers.

Adding buttons in the content area is much simpler. The buttons option specifies the buttons to be displayed in the dialog window content area below the title bar.
For example, to display a simple confirmation button:

    $('#dialog').wijdialog({
      buttons: {
        ok: function () { $(this).wijdialog('close') }
      }
    });

The text displayed on the button is ok, and clicking on the button hides the dialog. Calling $('#dialog').wijdialog('open') will show the dialog again.

Configuring the dialog widget's appearance

Wijmo offers several options that change the dialog's appearance, including title, height, width, and position. The title of the dialog can be changed either by setting the title attribute of the dialog's div element or by using the title option. To change the dialog's theme, you can use CSS styling on the wijmo-wijdialog and wijmo-wijdialog-captionbutton classes:

    <!DOCTYPE HTML>
    <html>
    <head>
    ...
    <style>
      .wijmo-wijdialog {
        /* rounded corners */
        -webkit-border-radius: 12px;
        border-radius: 12px;
        background-clip: padding-box;

        /* shadow behind dialog window */
        -moz-box-shadow: 3px 3px 5px 6px #ccc;
        -webkit-box-shadow: 3px 3px 5px 6px #ccc;
        box-shadow: 3px 3px 5px 6px #ccc;

        /* fade contents from dark gray to gray */
        background-image: -webkit-gradient(linear, left top, left bottom, from(#444444), to(#999999));
        background-image: -webkit-linear-gradient(top, #444444, #999999);
        background-image: -moz-linear-gradient(top, #444444, #999999);
        background-image: -o-linear-gradient(top, #444444, #999999);
        background-image: linear-gradient(to bottom, #444444, #999999);
        background-color: transparent;
        text-shadow: 1px 1px 3px #888;
      }
    </style>
    <script id="scriptInit" type="text/javascript">
      $(document).ready(function () {
        $('#dialog').wijdialog({ width: 350 });
      });
    </script>
    </head>
    <body>
      <div id="dialog" title="Subtle gradients">
        <p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo
        ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis
        parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec,
        pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec
        pede justo, fringilla vel, aliquet nec, vulputate</p>
      </div>
    </body>
    </html>

We now add rounded corners, a box shadow, and a text shadow to the dialog box. This is done with the .wijmo-wijdialog class. Since many of the CSS3 properties have different names on different browsers, the browser-specific properties are used; for example, -webkit-box-shadow is necessary on WebKit-based browsers. The dialog width is set to 350 px on initialization so that the title text and buttons all fit on one line.

Loading external content

Wijmo makes it easy to load content in an iFrame. Simply pass a URL with the contentUrl option:

    $(document).ready(function () {
      $("#dialog").wijdialog({
        captionButtons: {
          pin:      { visible: false },
          refresh:  { visible: true },
          toggle:   { visible: false },
          minimize: { visible: false },
          maximize: { visible: true },
          close:    { visible: false }
        },
        contentUrl: "http://wijmo.com/demo/themes/"
      });
    });

This will load the Wijmo theme explorer in a dialog window with refresh and maximize/restore buttons. The output can be seen in the following screenshot:

The refresh button reloads the content in the iFrame, which is useful for dynamic content. The maximize button resizes the dialog window.

Form Components

Wijmo form decorator widgets for radio button, checkbox, dropdown, and textbox elements give forms a consistent visual style across all platforms. There are separate libraries for decorating the dropdown and the other form elements, but Wijmo gives them a consistent theme. jQuery UI lacks form decorators, leaving the styling of form components to the designer.
Using Wijmo form components saves time during development and presents a consistent interface across all browsers.

Checkbox

The checkbox widget is an excellent example of the style enhancements that Wijmo provides over default form controls. The checkbox is used when multiple choices are allowed. The following screenshot shows the different checkbox states:

Wijmo adds rounded corners, gradients, and hover highlighting to the checkbox. Also, the increased size makes it more usable. Wijmo checkboxes can be initialized as checked:

    <!DOCTYPE HTML>
    <html>
    <head>
    ...
    <script id="scriptInit" type="text/javascript">
      $(document).ready(function () {
        $("#checkbox3").wijcheckbox({ checked: true });
        $(":input[type='checkbox']:not(:checked)").wijcheckbox();
      });
    </script>
    <style>
      div { display: block; margin-top: 2em; }
    </style>
    </head>
    <body>
      <div><input type='checkbox' id='checkbox1' /><label for='checkbox1'>Unchecked</label></div>
      <div><input type='checkbox' id='checkbox2' /><label for='checkbox2'>Hover</label></div>
      <div><input type='checkbox' id='checkbox3' /><label for='checkbox3'>Checked</label></div>
    </body>
    </html>

In this instance, checkbox3 is set to checked as it is initialized. You will not get the same result if one of the checkboxes is initialized twice; here we avoid that by selecting only the checkboxes that are not yet checked after checkbox3 has been set.

Radio buttons

Radio buttons, in contrast with checkboxes, allow only one of several options to be selected. In addition, they are customized through the HTML markup rather than a JavaScript API. To illustrate, the checked option is set by the checked attribute:

    <input type="radio" checked />

jQuery UI offers a button widget for radio buttons, as shown in the following screenshot, which in my experience causes confusion, as users think that they can select multiple options:

The Wijmo radio buttons are closer in appearance to regular radio buttons, so users expect the same behavior, as shown in the following screenshot:

Wijmo radio buttons are initialized by calling the wijradio method on radio button elements:

    <!DOCTYPE html>
    <html>
    <head>
    ...
    <script id="scriptInit" type="text/javascript">
      $(document).ready(function () {
        $(":input[type='radio']").wijradio({
          changed: function (e, data) {
            if (data.checked) {
              alert($(this).attr('id') + ' is checked')
            }
          }
        });
      });
    </script>
    </head>
    <body>
      <div id="radio">
        <input type="radio" id="radio1" name="radio"/><label for="radio1">Choice 1</label>
        <input type="radio" id="radio2" name="radio" checked="checked"/><label for="radio2">Choice 2</label>
        <input type="radio" id="radio3" name="radio"/><label for="radio3">Choice 3</label>
      </div>
    </body>
    </html>

In this example, the changed option, which is also available for checkboxes, is set to a handler. The handler is passed a jQuery.Event object as its first argument; this is just a JavaScript event object normalized for consistency across browsers. The second argument exposes the state of the widget. For both checkboxes and radio buttons, it is an object with only the checked property.

Dropdown

Styling a dropdown to be consistent across all browsers is notoriously difficult. Wijmo offers two options for styling the HTML select and option elements. When there are no option groups, the ComboBox is the better widget to use. For a dropdown with nested options under option groups, only the wijdropdown widget will work.
As an example, consider a country selector categorized by continent:

    <!DOCTYPE HTML>
    <html>
    <head>
    ...
    <script id="scriptInit" type="text/javascript">
      $(document).ready(function () {
        $('select[name=country]').wijdropdown();
        $('#reset').button().click(function () {
          $('select[name=country]').wijdropdown('destroy')
        });
        $('#refresh').button().click(function () {
          $('select[name=country]').wijdropdown('refresh')
        })
      });
    </script>
    </head>
    <body>
      <button id="reset">Reset</button>
      <button id="refresh">Refresh</button>
      <select name="country" style="width:170px">
        <optgroup label="Africa">
          <option value="gam">Gambia</option>
          <option value="mad">Madagascar</option>
          <option value="nam">Namibia</option>
        </optgroup>
        <optgroup label="Europe">
          <option value="fra">France</option>
          <option value="rus">Russia</option>
        </optgroup>
        <optgroup label="North America">
          <option value="can">Canada</option>
          <option value="mex">Mexico</option>
          <option selected="selected" value="usa">United States</option>
        </optgroup>
      </select>
    </body>
    </html>

The select element's width is set to 170 pixels so that when the dropdown is initialized, both the dropdown menu and its items have a width of 170 pixels. This allows the North America option category to be displayed on a single line, as shown in the following screenshot. Although the dropdown widget lacks a width option, it takes the select element's width when it is initialized. To initialize the dropdown, call the wijdropdown method on the select element:

    $('select[name=country]').wijdropdown();

The dropdown uses the blind animation to show the items when the menu is toggled. It also applies the same click animation as on buttons to the slider and menu.

To reset the dropdown to a plain select box, I've added a reset button that calls the destroy method. If you have JavaScript code that dynamically changes the styling of the dropdown, the refresh method applies the Wijmo styles again.

Summary

The Wijmo dialog widget is an extension of the jQuery UI dialog. In this article, the features unique to Wijmo's dialog widget were explored and given emphasis. I showed you how to add custom buttons, how to change the dialog's appearance, and how to load content from other URLs in the dialog. We also learned about Wijmo's form components. A checkbox is used when multiple items can be selected; Wijmo's checkbox widget has style enhancements over the default checkboxes. Radio buttons are used when only one item is to be selected; while jQuery UI only supports button sets on radio buttons, Wijmo's radio buttons are much more intuitive. Wijmo's dropdown widget should only be used when there are nested or categorized <select> options; the ComboBox comes with more features when the structure of the options is flat.

Resources for Article:

Further resources on this subject: Wijmo Widgets [Article] jQuery Animation: Tips and Tricks [Article] Building a Custom Version of jQuery [Article]


Creating an image gallery

Packt
30 Oct 2013
5 min read
(For more resources related to this topic, see here.)

Getting ready

Before we get started, we need to find a handful of images that we can use for the gallery. Find four to five images and put them in the images folder.

How to do it...

1. Add the following links to the images to the index.html file:

    <a class="fancybox" href="images/waterfall.png">Waterfall</a>
    <a class="fancybox" href="images/frozen-lake.png">Frozen Lake</a>
    <a class="fancybox" href="images/road-in-forest.png">Road in Forest</a>
    <a class="fancybox" href="images/boston.png">Boston</a>

The anchor tags no longer have an ID, but a class. It is important that they all have the same class so that Fancybox knows about them.

2. Change our call to the Fancybox plugin in the scripts.js file to use the class that all of the links have instead of the show-fancybox ID:

    $(function() {
      // Using the fancybox class instead of the show-fancybox ID
      $('.fancybox').fancybox();
    });

3. Fancybox will now work on all of the images, but they will not yet be part of the same gallery. To make images part of a gallery, we use the rel attribute of the anchor tags. Add rel="gallery" to all of the anchor tags, as follows:

    <a class="fancybox" rel="gallery" href="images/waterfall.png">Waterfall</a>
    <a class="fancybox" rel="gallery" href="images/frozen-lake.png">Frozen Lake</a>
    <a class="fancybox" rel="gallery" href="images/road-in-forest.png">Road in Forest</a>
    <a class="fancybox" rel="gallery" href="images/boston.png">Boston</a>

Now that we have added rel="gallery" to each of our anchor tags, you should see left and right arrows when you hover over the left-hand or right-hand side of Fancybox. These arrows allow you to navigate between images, as shown in the following screenshot:

How it works...

Fancybox determines that an image is part of a gallery using the rel attribute of the anchor tags. The order of the images is based on the order of the anchor tags on the page. This is important because the slideshow order is exactly the same as a gallery of thumbnails, without any additional work on our end. We changed the ID of our single image to a class for the gallery because we wanted to call Fancybox on all of the links instead of just one. If we wanted to add more image links to the page, it would just be a matter of adding more anchor tags with the proper href values and the same class.

There's more...

So, what else can we do with the gallery functionality of Fancybox? Let's take a look at some of the other things we could do with the gallery we currently have.

Captions and thumbnails

All of the functionality that we discussed for single images applies to galleries as well. If we wanted to add a thumbnail, it would just be a matter of adding an img tag inside the anchor tag instead of the text. If we wanted to add a caption, we can do so by adding the title attribute to our anchor tags.
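For example, a link with both a thumbnail and a caption might look like this (the thumbnail path and caption text are made up for illustration):

    <a class="fancybox" rel="gallery" title="Waterfall in early spring"
       href="images/waterfall.png">
      <img src="images/waterfall-thumb.png" alt="Waterfall" />
    </a>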
Showing slideshow from one link

Let's say that we wanted just one link that opens our gallery slideshow. This can easily be achieved by hiding the other links via CSS, with the help of the following steps:

1. We start by adding this style tag to the <head> tag, just under the <script> tag for our scripts.js file:

    <style type="text/css">
      .hidden { display: none; }
    </style>

2. Now, we update the HTML file so that all but one of our anchor tags are inside an element with the hidden class. When we reload the page, we will see only one link; when you click on it, you should still be able to navigate through the gallery just as if all of the links were on the page:

    <a class="fancybox" rel="gallery" href="images/waterfall.png">Image Gallery</a>
    <div class="hidden">
      <a class="fancybox" rel="gallery" href="images/frozen-lake.png">Frozen Lake</a>
      <a class="fancybox" rel="gallery" href="images/road-in-forest.png">Road in Forest</a>
      <a class="fancybox" rel="gallery" href="images/boston.png">Boston</a>
    </div>

Summary

In this article we saw that Fancybox provides very strong image-handling functionality, and we saw how an image gallery is created with Fancybox. We can also display images as thumbnails and show the gallery as a slideshow from just one link.

Resources for Article:

Further resources on this subject: Getting started with your first jQuery plugin [Article] OpenCart Themes: Styling Effects of jQuery Plugins [Article] The Basics of WordPress and jQuery Plugin [Article]


The DHTMLX Grid

Packt
30 Oct 2013
7 min read
(For more resources related to this topic, see here.)

The DHTMLX grid component is one of the more widely used components of the library. It has a vast number of settings and abilities, so robust that we could probably write an entire book on them. But since we have an application to build, we will touch on some of the main methods and get on with utilizing it. Some of the cool features that the grid supports are filtering, spanning rows and columns, multiple headers, dynamic scroll loading, paging, inline editing, cookie state, dragging/ordering columns, images, multi-selection, and events. By the end of this article, we will have a functional grid where we control the editing, viewing, adding, and removing of users.

The grid methods and events

When creating a DHTMLX grid, we first create the object, then add all the settings, and then call a method to initialize it. After the grid is initialized, data can be added. The order of steps to create a grid is as follows:

1. Create the grid object
2. Apply settings
3. Initialize
4. Add data

Now we will go over initializing a grid.

Initialization choices

We can initialize a DHTMLX grid in two ways, similar to the other DHTMLX objects. The first way is to attach it to a DOM element, and the second way is to attach it to an existing DHTMLX layout cell or layout. A grid can be constructed either by passing in a JavaScript object with all the settings or by building it through individual method calls.

Initialization on a DOM element

Let's attach the grid to a DOM element. First we must clear the page and add a div element using JavaScript. Type and run the following line in the developer tools console:

    document.body.innerHTML = "<div id='myGridCont'></div>";

We just cleared all of the body tag's content and replaced it with a div tag having the id attribute value of myGridCont. Now, create a grid object on the div tag, add some settings, and initialize it. Type and run the following code in the developer tools console:

    var myGrid = new dhtmlXGridObject("myGridCont");
    myGrid.setImagePath(config.imagePath);
    myGrid.setHeader(["Column1", "Column2", "Column3"]);
    myGrid.init();

You should see the page showing just the grid header with three columns. Next, we will create a grid on an existing cell object.

Initialization on a cell object

Refresh the page and add a grid to the appLayout cell. Type and run the following code in the developer tools console:

    var myGrid = appLayout.cells("a").attachGrid();
    myGrid.setImagePath(config.imagePath);
    myGrid.setHeader(["Column1","Column2","Column3"]);
    myGrid.init();

You will now see the grid columns just below the toolbar.

Grid methods

Now let's go over some of the available grid methods; then we can add rows and call events on this grid. For these exercises we will be using the global appLayout variable. Refresh the page.

attachGrid

We will begin by attaching a grid to a cell. The attachGrid method creates and attaches a grid object to a cell. This is the first step in creating a grid. Type and run the following line in the console:

    var myGrid = appLayout.cells("a").attachGrid();

setImagePath

The setImagePath method tells the grid where the images referenced in the design are located. We have the application image path set in the config object. Type and run the following line in the console:

    myGrid.setImagePath(config.imagePath);

setHeader

The setHeader method sets the column headers and determines how many columns we will have. The argument is a JavaScript array.
Type and run the following line in the console:

    myGrid.setHeader(["Column1", "Column2", "Column3"]);

setInitWidths

The setInitWidths method sets the initial width of each column. An asterisk (*) sets the width automatically. Type and run the following line in the console:

    myGrid.setInitWidths("125,95,*");

setColAlign

The setColAlign method lets us align each column's content. Type and run the following line in the console:

    myGrid.setColAlign("right,center,left");

init

Up until this point, we haven't seen much going on; it was all happening behind the scenes. To see these changes, the grid must be initialized. Type and run the following line in the console:

    myGrid.init();

Now you see the columns that we provided.

addRow

Now that we have a grid created, let's add a couple of rows and start interacting. The addRow method adds a row to the grid. The parameters are the row ID and the column values. Type and run the following code in the console:

    myGrid.addRow(1,["test1","test2","test3"]);
    myGrid.addRow(2,["test1","test2","test3"]);

We just created two rows inside the grid.

setColTypes

The setColTypes method sets what types of data a column will contain. The available type options are:

- ro (read-only)
- ed (editor)
- txt (textarea)
- ch (checkbox)
- ra (radio button)
- co (combobox)

Currently, the grid allows inline editing if you double-click on a grid cell. We do not want this for the application, so we will set the column types to read-only. Type and run the following code in the console:

    myGrid.setColTypes("ro,ro,ro");

Now the cells are no longer editable inside the grid.

getSelectedRowId

The getSelectedRowId method returns the ID of the selected row. If nothing is selected it returns null. Type and run the following line in the console:

    myGrid.getSelectedRowId();

clearSelection

The clearSelection method clears all selections in the grid. Type and run the following line in the console:

    myGrid.clearSelection();

Now any previous selections are cleared.

clearAll

The clearAll method removes all the grid rows. Before adding more data to the grid we first must clear it; otherwise we will end up with duplicated data. Type and run the following line in the console:

    myGrid.clearAll();

Now the grid is empty.

parse

The parse method loads data into a grid in the format of an XML string, CSV string, XML island, XML object, JSON object, or JavaScript array. We will use the parse method with a JSON object when creating the grid for the application. Here is what the parse method syntax looks like (do not run this in the console):

    myGrid.parse(data, "json");

Grid events

The DHTMLX grid component has a vast number of events; you can view them in their entirety in the documentation. We will cover the onRowDblClicked and onRowSelect events.

onRowDblClicked

The onRowDblClicked event is triggered when a grid row is double-clicked. The handler receives the ID of the row that was double-clicked. Type and run the following code in the console:

    myGrid.attachEvent("onRowDblClicked", function(rowId){
      console.log(rowId);
    });

Double-click one of the rows and the console will log the ID of that row.

onRowSelect

The onRowSelect event triggers upon selection of a row. Type and run the following code in the console:

    myGrid.attachEvent("onRowSelect", function(rowId){
      console.log(rowId);
    });

Now, when you select a row, the console will log the ID of that row. This can be treated as a single click.
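Putting these methods together, a minimal sketch of the create-configure-initialize-load sequence might look like this (the column names and rows are made up for illustration, and the JSON shape follows dhtmlxGrid's native rows/data format as far as we use it here):

    // Assumes the global appLayout and config objects from earlier.
    var myGrid = appLayout.cells("a").attachGrid();
    myGrid.setImagePath(config.imagePath);
    myGrid.setHeader(["Name", "Email", "Role"]);
    myGrid.setInitWidths("150,200,*");
    myGrid.setColAlign("left,left,left");
    myGrid.setColTypes("ro,ro,ro");   // read-only cells
    myGrid.init();

    // Hypothetical user data in dhtmlxGrid JSON format.
    var users = {
      rows: [
        { id: 1, data: ["Jane Doe", "jane@example.com", "Admin"] },
        { id: 2, data: ["John Roe", "john@example.com", "User"] }
      ]
    };

    myGrid.clearAll();           // avoid duplicated rows on reload
    myGrid.parse(users, "json");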
Summary

In this article, we learned about the DHTMLX grid component. We also added the user grid to the application and tested it with the storage and callbacks methods.

Resources for Article:

Further resources on this subject: HTML5 Presentations - creating our initial presentation [Article] HTML5: Generic Containers [Article] HTML5 Canvas [Article]


Creating and Using Composer Packages

Packt
29 Oct 2013
7 min read
(For more resources related to this topic, see here.)

Using Bundles

One of the great features of Laravel is the ease with which we can include the class libraries that others have made using bundles. On the Laravel site, there are already many useful bundles, some of which automate certain tasks while others easily integrate with third-party APIs. A recent addition to the PHP world is Composer, which allows us to use libraries (or packages) that aren't specific to Laravel. In this article, we'll get up and running with using bundles, and we'll even create our own bundle that others can download. We'll also see how to incorporate Composer into our Laravel installation to open up a wide range of PHP libraries that we can use in our application.

Downloading and installing packages

One of the best features of Laravel is how modular it is. Most of the framework is built using libraries, or packages, that are well tested and widely used in other projects. By using Composer for dependency management, we can easily include other packages and seamlessly integrate them into our Laravel app. For this recipe, we'll be installing two popular packages into our app: Jeffrey Way's Laravel 4 Generators and the Imagine image processing package.

Getting ready

For this recipe, we need a standard installation of Laravel using Composer.

How to do it...

For this recipe, we will follow these steps:

1. Go to https://packagist.org/.
2. In the search box, search for way generator as shown in the following screenshot:
3. Click on the link for way/generators.
4. View the details at https://packagist.org/packages/way/generators and take notice of the require line to get the package's version. For our purposes, we'll use "way/generators": "1.0.*".
5. In our application's root directory, open up the composer.json file and add the package to the require section so it looks like this:

    "require": {
        "laravel/framework": "4.0.*",
        "way/generators": "1.0.*"
    },

6. Go back to http://packagist.org and perform a search for imagine as shown in the following screenshot:
7. Click on the link to imagine/imagine and copy the require code for dev-master.
8. Go back to our composer.json file and update the require section to include the imagine package. It should now look similar to the following code:

    "require": {
        "laravel/framework": "4.0.*",
        "way/generators": "1.0.*",
        "imagine/imagine": "dev-master"
    },

9. Open the command line and, in the root of our application, run the Composer update as follows:

    php composer.phar update

10. Finally, we'll add the Generators service provider. Open the app/config/app.php file and add the following line to the providers array:

    'Way\Generators\GeneratorsServiceProvider'
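In context, the providers array in app/config/app.php would end up looking something like this (abridged; the other entries are Laravel's defaults):

    'providers' => array(
        // ... Laravel's default service providers ...
        'Way\Generators\GeneratorsServiceProvider',
    ),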
How it works...

To get our package, we first go to packagist.org and search for the package we want. We could also click on the Browse packages link, which displays a list of the most recent packages as well as the most popular ones. After clicking on the package we want, we are taken to the detail page, which lists various links including the package's repository and home page. We could also click on the package's maintainer link to see other packages they have released.

Underneath, we'll see the various versions of the package. If we open a version's detail page, we'll find the code we need to use in our composer.json file. We can either use a strict version number, add a wildcard to the version, or use dev-master, which installs whatever is updated on the package's master branch. For the Generators package, we'll only use Version 1.0, but allow any minor fixes to that version. For the imagine package, we'll use dev-master, so whatever is in the repository's master branch will be downloaded, regardless of version number.

We then run update on Composer and it automatically downloads and installs all of the packages we chose. Finally, to use Generators in our app, we register its service provider in our app's config file.

Using the Generators package to set up an app

Generators is a popular Laravel package that automates quite a bit of file creation. In addition to controllers and models, it can also generate views, migrations, seeds, and more, all through a command-line interface.

Getting ready

For this recipe, we'll be using the Laravel 4 Generators package maintained by Jeffrey Way that was installed in the Downloading and installing packages recipe. We'll also need a properly configured MySQL database.

How to do it...

Follow these steps for this recipe:

1. Open the command line in the root of our app and, using the generator, create a scaffold for our cities:

    php artisan generate:scaffold cities --fields="city:string"

2. In the command line, create a scaffold for our superheroes:

    php artisan generate:scaffold superheroes --fields="name:string, city_id:integer:unsigned"

3. In our project, look in the app/database/seeds directory and find a file named CitiesTableSeeder.php. Open it and add some data to the $cities array:

    <?php

    class CitiesTableSeeder extends Seeder {

        public function run()
        {
            DB::table('cities')->delete();

            $cities = array(
                array(
                    'id'         => 1,
                    'city'       => 'New York',
                    'created_at' => date('Y-m-d g:i:s', time())
                ),
                array(
                    'id'         => 2,
                    'city'       => 'Metropolis',
                    'created_at' => date('Y-m-d g:i:s', time())
                ),
                array(
                    'id'         => 3,
                    'city'       => 'Gotham',
                    'created_at' => date('Y-m-d g:i:s', time())
                )
            );

            DB::table('cities')->insert($cities);
        }
    }

4. In the app/database/seeds directory, open SuperheroesTableSeeder.php and add some data to it:

    <?php

    class SuperheroesTableSeeder extends Seeder {

        public function run()
        {
            DB::table('superheroes')->delete();

            $superheroes = array(
                array(
                    'name'       => 'Spiderman',
                    'city_id'    => 1,
                    'created_at' => date('Y-m-d g:i:s', time())
                ),
                array(
                    'name'       => 'Superman',
                    'city_id'    => 2,
                    'created_at' => date('Y-m-d g:i:s', time())
                ),
                array(
                    'name'       => 'Batman',
                    'city_id'    => 3,
                    'created_at' => date('Y-m-d g:i:s', time())
                ),
                array(
                    'name'       => 'The Thing',
                    'city_id'    => 1,
                    'created_at' => date('Y-m-d g:i:s', time())
                )
            );

            DB::table('superheroes')->insert($superheroes);
        }
    }

5. In the command line, run the migrations, then seed the database:

    php artisan migrate
    php artisan db:seed

6. Open up a web browser and go to http://{your-server}/cities. We will see our data as shown in the following screenshot:
7. Now navigate to http://{your-server}/superheroes and we will see our data as shown in the following screenshot:

How it works...

We begin by running the scaffold generator for our cities and superheroes tables. Using the --fields tag, we can determine which columns we want in the tables and set options such as the data type. For our cities table, we only need the name of the city. For our superheroes table, we want the name of the hero as well as the ID of the city where they live.

When we run the generator, many files are automatically created for us.
For example, with cities, we get City.php in our models directory, CitiesController.php in controllers, and a cities directory in our views with the index, show, create, and edit views. We also get a migration named Create_cities_table.php, a CitiesTableSeeder.php seed file, and CitiesTest.php in our tests directory. Our DatabaseSeeder.php file and our routes.php file are updated to include everything we need.

To add some data to our tables, we opened the CitiesTableSeeder.php file and updated the $cities array with arrays that represent each row we want to add. We did the same thing for our SuperheroesTableSeeder.php file. Finally, we ran the migrations and the seeder, and our database was created with all the data inserted.

The Generators package has already created the views and controllers we need to manipulate the data, so we can easily go to our browser and see all of our data. We can also create new rows, update existing rows, and delete rows.
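As a quick illustration of what the scaffold gives us, the generated City model is a plain Eloquent model, so the usual Eloquent calls should work against it (a sketch, assuming the seeded data above; the 'Star City' row is made up for illustration):

    <?php

    // Fetch a seeded row through the generated model (City.php).
    $city = City::find(1);
    echo $city->city; // "New York"

    // Adding a row works like any other Eloquent model.
    $star = new City;
    $star->city = 'Star City';
    $star->save();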


Understanding WebSockets and Server-sent Events in Detail

Packt
29 Oct 2013
10 min read
(For more resources related to this topic, see here.)

Encoders and decoders in the Java API for WebSockets

As seen in the previous chapter, the class-level annotation @ServerEndpoint indicates that a Java class is a WebSocket endpoint at runtime. The value attribute is used to specify a URI mapping for the endpoint. Additionally, the user can add encoders and decoders attributes to encode application objects into WebSocket messages and decode WebSocket messages into application objects. The following list summarizes the @ServerEndpoint annotation and its attributes:

@ServerEndpoint: this class-level annotation signifies that the Java class is a WebSockets server endpoint.

- value: the URI, with a leading '/'.
- encoders: a list of Java classes that act as encoders for the endpoint. The classes must implement the Encoder interface.
- decoders: a list of Java classes that act as decoders for the endpoint. The classes must implement the Decoder interface.
- configurator: allows the developer to plug in their own implementation of ServerEndpoint.Configurator, which is used when configuring the server endpoint.
- subprotocols: a list of sub-protocols that the endpoint can support.

In this section we shall look at providing encoder and decoder implementations for our WebSockets endpoint. Encoders take an application object and convert it into a WebSockets message; decoders take a WebSockets message and convert it into an application object (the preceding diagram shows this flow).

Here is a simple example in which a client sends a WebSockets message to a Java endpoint that is annotated with @ServerEndpoint and decorated with encoder and decoder classes. The decoder decodes the WebSockets message, and the endpoint sends the same message back to the client; the encoder converts it back into a WebSockets message. This sample is also included in the code bundle for the book.

Here is the code defining the server endpoint with values for encoders and decoders:

    @ServerEndpoint(value = "/book",
                    encoders = { MyEncoder.class },
                    decoders = { MyDecoder.class })
    public class BookCollection {

        @OnMessage
        public void onMessage(Book book, Session session) {
            try {
                session.getBasicRemote().sendObject(book);
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }

        @OnOpen
        public void onOpen(Session session) {
            System.out.println("Opening socket " + session.getBasicRemote());
        }

        @OnClose
        public void onClose(Session session) {
            System.out.println("Closing socket " + session.getBasicRemote());
        }
    }

In the preceding code snippet, the class BookCollection is annotated with @ServerEndpoint. The value="/book" attribute provides the URI mapping for the endpoint. The @ServerEndpoint annotation also takes the encoders and decoders to be used during WebSocket transmission. Once a WebSocket connection has been established, a session is created and the method annotated with @OnOpen is called. When the WebSocket endpoint receives a message, the method annotated with @OnMessage is called. In our sample, the method simply sends the book object back using Session.getBasicRemote(), which gets a reference to the RemoteEndpoint and sends the message synchronously.

Encoders can be used to convert a custom user-defined object into a text message, TextStream, BinaryStream, or BinaryMessage format.
An implementation of an encoder class for text messages is as follows:

public class MyEncoder implements Encoder.Text<Book> {
    @Override
    public String encode(Book book) throws EncodeException {
        return book.getJson().toString();
    }
}

As shown in the preceding code, the encoder class implements Encoder.Text<Book>. There is an encode method that is overridden and which converts a Book and sends it as a JSON string. (More on JSON APIs is covered in detail in the next chapter.)

Decoders can be used to decode WebSockets messages into custom user-defined objects. They can decode Text, TextStream, Binary, or BinaryStream formats. Here is the code for a decoder class:

public class MyDecoder implements Decoder.Text<Book> {
    @Override
    public Book decode(String string) throws DecodeException {
        javax.json.JsonObject jsonObject =
            javax.json.Json.createReader(new StringReader(string)).readObject();
        return new Book(jsonObject);
    }

    @Override
    public boolean willDecode(String string) {
        try {
            javax.json.Json.createReader(new StringReader(string)).readObject();
            return true;
        } catch (Exception ex) {
        }
        return false;
    }
}

In the preceding code snippet, Decoder.Text needs two methods to be overridden. The willDecode() method checks if it can handle this object and decode it. The decode() method decodes the string into an object of type Book by using the JSON-P API javax.json.Json.createReader().

The following code snippet shows the user-defined class Book:

public class Book {
    JsonObject jsonObject;

    public Book() {}

    public Book(JsonObject json) {
        this.jsonObject = json;
    }

    public Book(String message) {
        jsonObject = Json.createReader(new StringReader(message)).readObject();
    }

    public JsonObject getJson() {
        return jsonObject;
    }

    public void setJson(JsonObject json) {
        this.jsonObject = json;
    }

    public String toString() {
        StringWriter writer = new StringWriter();
        Json.createWriter(writer).write(jsonObject);
        return writer.toString();
    }
}

The Book class is a user-defined class that takes the JSON object sent by the client. Here is an example of how the JSON details are sent to the WebSockets endpoint from JavaScript:

var json = JSON.stringify({
    "name": "Java 7 JAX-WS Web Services",
    "author": "Deepak Vohra",
    "isbn": "123456789"
});

function addBook() {
    websocket.send(json);
}

The client sends the message using websocket.send(), which will cause the onMessage() method of BookCollection.java to be invoked. BookCollection.java will return the same book to the client. In the process, the decoder will decode the WebSockets message when it is received. To send back the same Book object, the encoder will first encode the Book object to a WebSockets message and send it to the client.

The Java WebSocket Client API

WebSockets and Server-sent Events covered the Java WebSockets client API. Any POJO can be transformed into a WebSockets client by annotating it with @ClientEndpoint. Additionally, the user can add encoders and decoders attributes to the @ClientEndpoint annotation to encode application objects into WebSockets messages and WebSockets messages into application objects. The following list shows the @ClientEndpoint annotation and its attributes:

@ClientEndpoint: This class-level annotation signifies that the Java class is a WebSockets client that will connect to a WebSockets server endpoint. Its attributes are:
value: The URI, with a leading '/'.
encoders: A list of Java classes that act as encoders for the endpoint. The classes must implement the Encoder interface.
decoders: A list of Java classes that act as decoders for the endpoint. The classes must implement the Decoder interface.
configurator: Allows the developer to plug in their implementation of ClientEndpoint.Configurator, which is used when configuring the client endpoint.
subprotocols: A list of sub protocols that the endpoint can support.
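While @ClientEndpoint turns a Java class into a WebSockets client, the JavaScript snippets in this article rely on the browser's standard WebSocket object. The earlier addBook() function assumes a connection created along these lines; this is a minimal sketch, and the host, port, and context root in the URI are assumptions for illustration:

var wsUri = 'ws://localhost:8080/helloworld-ws/book'; // host, port, and context root assumed
var websocket = new WebSocket(wsUri);

websocket.onopen = function () {
    // The connection is ready, so it is now safe to send the book
    addBook();
};

websocket.onmessage = function (msg) {
    // The server echoes the same book back as a JSON string
    console.log('Received: ' + msg.data);
};

websocket.onerror = function (evt) {
    console.log('WebSocket error');
};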
Sending different kinds of message data: blob/binary

Using JavaScript we can traditionally send JSON or XML as strings. However, HTML5 allows applications to work with binary data to improve performance. WebSockets supports two kinds of binary data:

Binary Large Objects (blob)
arraybuffer

A WebSocket can work with only one of the formats at any given time. Using the binaryType property of a WebSocket, you can switch between using blob or arraybuffer:

websocket.binaryType = "blob";
// receive some blob data
websocket.binaryType = "arraybuffer";
// now receive ArrayBuffer data

The following code snippets show how to display images sent by a server using WebSockets. First, the binaryType property of the WebSocket is set to arraybuffer:

websocket.binaryType = 'arraybuffer';

Then the incoming bytes are turned into an image:

websocket.onmessage = function(msg) {
    var arrayBuffer = msg.data;
    var bytes = new Uint8Array(arrayBuffer);
    var image = document.getElementById('image');
    image.src = 'data:image/png;base64,' + encode(bytes);
}

When onmessage is called, the arrayBuffer is initialized to msg.data. The Uint8Array type represents an array of 8-bit unsigned integers. The image.src value is set inline using the data URI scheme.
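The encode() function used above is not part of the WebSocket API; it is a helper that converts the byte array to a base64 string. A minimal sketch of such a helper, assuming the browser provides the standard window.btoa() function, might look like this:

function encode(bytes) {
    var binary = '';
    // Build a binary string one byte at a time, then base64-encode it
    for (var i = 0; i < bytes.length; i++) {
        binary += String.fromCharCode(bytes[i]);
    }
    return window.btoa(binary);
}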
Security and WebSockets

WebSockets are secured using the web container security model. A WebSockets developer can declare whether access to the WebSocket server endpoint needs to be authenticated, who can access it, or if it needs an encrypted connection. A WebSockets endpoint that is mapped to a ws:// URI is protected in the deployment descriptor under the http:// URI with the same hostname, port, and path, since the initial handshake is made over an HTTP connection. So, WebSockets developers can assign an authentication scheme, user roles, and a transport guarantee to any WebSockets endpoint. We will take the same sample as we saw in WebSockets and Server-sent Events, and make it a secure WebSockets application. Here is the web.xml for a secure WebSocket endpoint:

<web-app version="3.0"
         xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
  <security-constraint>
    <web-resource-collection>
      <web-resource-name>BookCollection</web-resource-name>
      <url-pattern>/index.jsp</url-pattern>
      <http-method>PUT</http-method>
      <http-method>POST</http-method>
      <http-method>DELETE</http-method>
      <http-method>GET</http-method>
    </web-resource-collection>
    <user-data-constraint>
      <description>SSL</description>
      <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
  </security-constraint>
</web-app>

As you can see in the preceding snippet, we used <transport-guarantee>CONFIDENTIAL</transport-guarantee>. The Java EE specification, followed by application servers, provides different levels of transport guarantee for the communication between clients and the application server. The three levels are:

Data Confidentiality (CONFIDENTIAL): We use this level to guarantee that all communication between client and server goes through the SSL layer, and connections won't be accepted over a non-secure channel.
Data Integrity (INTEGRAL): We can use this level when full encryption is not required but we want our data to be transmitted to and from a client in such a way that, if anyone changed the data, we could detect the change.
Any type of connection (NONE): We can use this level to force the container to accept connections on both HTTP and HTTPS.

The following steps should be followed to set up SSL and run our sample as a secure WebSockets application deployed in GlassFish.

Generate the server certificate:

keytool -genkey -alias server-alias -keyalg RSA -keypass changeit -storepass changeit -keystore keystore.jks

Export the generated server certificate in keystore.jks into the file server.cer:

keytool -export -alias server-alias -storepass changeit -file server.cer -keystore keystore.jks

Create the trust-store file cacerts.jks and add the server certificate to the trust store:

keytool -import -v -trustcacerts -alias server-alias -file server.cer -keystore cacerts.jks -keypass changeit -storepass changeit

Change the following JVM options so that they point to the location and name of the new keystore. Add this in domain.xml under java-config:

<jvm-options>-Djavax.net.ssl.keyStore=${com.sun.aas.instanceRoot}/config/keystore.jks</jvm-options>
<jvm-options>-Djavax.net.ssl.trustStore=${com.sun.aas.instanceRoot}/config/cacerts.jks</jvm-options>

Restart GlassFish. If you go to https://localhost:8181/helloworld-ws/, you can see the secure WebSocket application. To inspect the headers under Chrome Developer Tools, open the Chrome browser, click on View and then on Developer Tools, click on Network, select book under the element name, and click on Frames. Since the application is secured using SSL, the WebSockets URI contains wss://, which means WebSockets over SSL.

So far we have seen the encoders and decoders for WebSockets messages. We also covered how to send binary data using WebSockets. Additionally, we have demonstrated a sample of how to secure a WebSockets-based application. We shall now cover the best practices for WebSockets-based applications.

RESS – The idea and the Controversies

Packt
29 Oct 2013
10 min read
(For more resources related to this topic, see here.)

The RWD concept appeared first in 2010 in an article by Ethan Marcotte (available at http://alistapart.com/article/responsive-web-design). He presented an approach that allows us to progressively enhance page design within different viewing contexts with the help of fluid grids, flexible images, and media queries. This approach was opposed to the one that separates websites geared toward specific devices. Instead of two or more websites (desktop and mobile), we could have one that adapts to all devices. The technical foundation of RWD (as proposed in Marcotte's article) consists of three things: fluid grids, flexible images, and media queries.

Illustration: Fluid (and responsive) grid adapts to device using both column width and column count

A fluid grid is basically nothing more than a concept of dividing the monitor width into modular columns, often accompanied by some kind of CSS framework (some of the best-known examples were the 960 grid system, Blueprint, Pure, the 1140px grid, and Elastic), that is, a base stylesheet that simplifies and standardizes writing website-specific CSS. What makes it fluid is the use of relative measurements like %, em, or rem. When the screen (or the window) changes, the number of these columns changes (thanks to CSS statements enclosed in media queries). This allows us to adjust the design layout to device capabilities (screen width and pixel density in particular). Images in such a layout become fluid by using the simple technique of setting width: x% or max-width: 100% in CSS, which causes the image to scale proportionally.

With those two methods and a little help from media queries, one can radically change the page layout and handle the enormous, up to 800 percent, difference between the thinnest and the widest screens (WQXGA's 2560px versus the iPhone's 320px). This is a big step forward and a good base to start creating One Web, that is, to use one URL to deliver content to all devices. Unfortunately, that is not enough to achieve results that would provide an equally great experience and fast-loading websites for everybody.

The RESS idea

Besides screen width, we may need to take into account other things such as bandwidth and pay-per-bandwidth plans, processor speed, available memory, level of HTML/CSS compatibility, monitor color depth, and possible navigation methods (touch screen, buttons, and keyboard). On a practical level, it means we may have to optimize images and navigation patterns, and reduce page complexity for some devices. To make this possible, some Server Side solutions need to be engaged. We may use the Server Side just for optimizing images, since Server Side optimization lets us send pages with just some elements adjusted or a completely changed page; or we can rethink the application structure to build a RESTful web interface and turn our Server Side application into a web service. The more we need to place responsibility for device optimization on the Server Side, the closer we get to the old way of disparate desktop and mobile webs with separate mobile domains, or separate iPhone, Android, or Windows applications.

There are many ways to build responsive websites, but there is no golden rule to tell you which way is the best. It depends on the target audience, technical contexts, money, and time. Ultimately, the way to be chosen depends on the business decisions of the website owner.
When we decide to employ Server Side logic to optimize components of a web page designed in a responsive way, we are going the RESS (Responsive Web Design with Server Side components) way. RESS was proposed by Luke Wroblewski on his blog as a result of his experience with extending RWD with Server Side components. Essentially, the idea was based on storing IDs of resources (such as images) and serving different versions of the same resource, optimized for some defined classes of devices. Device detection, and the assignment of devices to their respective classes, can be based on libraries such as WURFL or YABFDL.

Controversies

It is worth noting that both of these approaches raised many controversies. Introducing RWD has broken some long-established rules and habits, such as the standard screen width (the famous 960px maximum page width limit). It has put in question long-practiced ways of dealing with the mobile web (such as separate desktop and mobile websites). It is no surprise that it raises both delight and rage. One can easily find people calling it fool's gold, useless, too difficult, a fad, amazing, future proof, and so on. Each of those opinions has a reason behind it, for better or worse. A glimpse of the following opinions may help us understand some of the key benefits and issues related to RWD.

"Separate mobile websites are a good thing"

You may have heard this line in an article by Jason Grigsby, CSS Media Query for Mobile is Fool's Gold, available at http://blog.cloudfour.com/css-media-query-for-mobile-is-fools-gold/. Separate mobile websites allow reduction of bandwidth, provide pages that are less CPU and memory intensive, and at the same time allow us to use some mobile-only features such as geolocation. Also, not all mobile browsers are wise enough to understand media queries. That is generally true, and media queries are not enough in most scenarios, but with some JavaScript (Peter-Paul Koch's blog, available at http://www.quirksmode.org/blog/archives/2010/08/combining_media.html#more, describes a method to exclude some page elements or change the page structure via JS paired with media queries), it is possible to overcome many of those problems. At the same time, making a separate mobile website introduces its own problems and requires significant additional investment that can easily get to tens or hundreds of times more than the RWD solution (detecting devices, changing application logic, writing separate templates, integrating, and testing the whole thing). Also, at the end of the day, your visitors may prefer the mobile version, but this doesn't have to be the case. Users often access the same content via various devices, and providing a consistent experience across all of them becomes more and more important.

The preceding controversy is just a part of a wider discussion on channels for providing content on the Internet. RWD and RESS are relatively new kids on the block. For years, technologies to provide content for mobile devices were being built and used, from device-detection libraries to platform-specific applications (such as iStore, Google Play, and MS). When, in 2010, US smartphone users started to spend more time using their mobile apps than browsers (Mobile App Usage Further Dominates Web, Spurred by Facebook, at http://blog.flurry.com/bid/80241/Mobile-App-Usage-Further-Dominates-Web-Spurred-by-Facebook), some hailed it as dangerous for the Web (Apps: The Web Is The Platform, available at http://blog.mozilla.org/webdev/2012/09/14/apps-the-web-is-the-platform/).
A closer look at the stats reveals, though, that most of this time was spent on playing games. No matter how much time kids can spend playing Angry Birds, now, more than two years later, people still prefer to read the news via a browser rather than via native mobile applications. The Future of Mobile News report from October 2012 reveals that for accessing news, 61 percent of mobile users prefer a browser while 28 percent would rather use apps (Future of Mobile News, http://www.journalism.org/analysis_report/future_mobile_news). The British government is not keen on apps either, as they say, "Our position is that native apps are rarely justified" (UK Digital Cabinet Office blog, at http://digital.cabinetoffice.gov.uk/2013/03/12/were-not-appy-not-appy-at-all/). Recently, Tim Berners-Lee, the inventor of the Web, criticized closed-world apps such as those released by Apple for threatening the openness and universality that the architects of the Internet saw as central to its design. He explains it the following way: "When you make a link, you can link to anything. That means people must be able to put anything on the Web, no matter what computer they have, what software they use, or which human language they speak and regardless of whether they have a wired or a wireless Internet connection." This kind of thinking goes in line with the RWD/RESS philosophy of having one URL for the same content, no matter which way you'd like to access it. Nonetheless, it is just one of the reasons why RWD became so popular during the last year.

"RWD is too difficult"

CSS coupled with JS can get really complex (some would say messy) and requires a lot of testing on all target browsers/platforms. That is, or was, true. Building RWD websites requires good CSS knowledge and some battlefield experience in this field. But hey, learning is the most important skill in this industry. It actually gets easier and easier, with new tools released nearly every week.

"RWD means degrading design"

Fluid layouts break the composition of the page; Mobile First and Progressive Enhancement mean, in fact, reducing design to a few simplistic and naive patterns. Actually, the Mobile First concept contains two concepts. One is design direction and the second is the structure of CSS stylesheets, in particular the order of media queries.

With regard to design direction, the Mobile First concept is meant to describe the sequence of designs. First the design for mobile should be created, and then the one for the desktop. While there are several good reasons for using this approach, one should never forget the basic truth that at the end of the day only the quality of designs matters, not the order they were created in.

With regard to the stylesheet structure, Mobile First means that we first write statements for small screens and then add statements for wider screens, such as @media screen and (min-width: 480px). It is a design principle meant to simplify the whole thing. It is assumed here that the CSS for small screens is the simplest version, which will be progressively enhanced for larger screens. The idea is smart and helps to maintain well-structured CSS, but sometimes the opposite, Desktop First, approach seems natural. Typical examples are tables with many columns. The Mobile First principle is not a religious dogma and should not be treated as such. As a side note, it remains an open question why this is still named Mobile First, when the new iPad-related statements (min-width: 2000px) would come at the end.
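Media queries such as the min-width statements above can also be read from JavaScript, which is one way to implement the JS-paired-with-media-queries approach mentioned earlier. A minimal sketch using the standard matchMedia API follows; the 480px breakpoint and the class names are assumptions for illustration, and older mobile browsers without matchMedia would need a fallback:

// Swap a class on the body whenever the 480px breakpoint is crossed,
// so that both styles and scripts can react to the layout change
var mql = window.matchMedia('(min-width: 480px)');

function applyLayout() {
    document.body.className = mql.matches ? 'wide' : 'narrow';
}

mql.addListener(applyLayout); // re-run whenever the breakpoint is crossed
applyLayout();                // run once on page load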
There are some examples of rather poor designs made by RWD celebrities. But there are also examples of great designs that happened thanks to the freedom that RWD gave to the web design world.

The rapid increase in Internet access via mobile devices during 2012 made RWD one of the hottest topics in web design. The numbers vary across countries and websites, but no matter what numbers you look at, one thing is certain: mobile is already big and will soon get even bigger (valuable stats on mobile use are available at http://www.thinkwithgoogle.com/mobileplanet/en/). Statistics are not the only reason why Responsive Web Design became popular. Equally important are the benefits for web designers, users, website owners, and developers.

Summary

This article, as discussed, covered the RESS idea, as well as the controversies associated with it.

Resources for Article:

Further resources on this subject:
Introduction to RWD frameworks [Article]
Getting started with Modernizr using PHP IDE [Article]
Understanding Express Routes [Article]

Getting into the Store

Packt
28 Oct 2013
21 min read
(For more resources related to this topic, see here.)

This all starts by visiting https://appdev.microsoft.com/StorePortals, which will get you to the store dashboard that you use to submit and manage your applications. If you already have an account, you'll just log in here and proceed. If not, we'll take a look at ways of getting it set up. There are a couple of ways to get a store account, which you will need before you can submit any game or application to the store. There are also two different types of accounts:

Individual accounts
Company accounts

In most cases you will only need the first option. It's cheaper and easier to get, and you won't require the enterprise features provided by the company account for a game. For this reason we'll focus on the individual account. To register you'll need a credit card for verification, even if you gain a free account another way. Just follow the registration instructions, pay the fee, and complete verification, after which you'll be ready to go.

Free accounts

Students and developers with MSDN subscriptions can access registration codes that waive the fee for a minimum of one year. If you meet either of these requirements, you can gain a code using the respective method, and use that code during the registration process to set the fee to zero.

Students can access their free accounts using the DreamSpark service that Microsoft runs. To access this, create an account on www.dreamspark.com. From there, follow the steps to verify your student status and visit https://www.dreamspark.com/Student/Windows-Store-Access.aspx to get your registration code.

If you have access to an MSDN subscription, you can use it to gain a store account for free. Just log in to your account, and in your account benefits overview you should be able to generate your registration code.

Submitting your game

So your game is polished and ready to go. What do you need to do to get it in the store? First log in to the dashboard and select Submit an App from the menu on the left. Here you can see the steps required to submit the app. This may look like a lot to do, but don't worry: most of these steps are very simple to resolve and can be done before you even start working on the game.

The first step is to choose a name for your game, and this can be done whenever you want. By reserving a name and creating the application entry, you have a year to submit your application, giving you plenty of time to complete it. This is why it's a good idea to jump in and register your application once you have a name for it. If you change your mind later and want a different name, you can always change it.

The next step is to choose how and where you will sell your game. The other thing you need to choose here are the markets you want to sell your game in. This is an area you need to be careful with, because the markets you choose here define the localization or content you need to watch for in your game. Certain markets are restrictive, and including content that isn't appropriate for a market you say you want to sell in can cause you to fail the certification process.

Once that is done, you need to choose when you want to release your game: either as soon as certification finishes, or on a specific date. Then you choose the app category, which in this case will be Games. Don't forget to specify the genre of your game as the sub-category so players can find it.
The final option on the Selling Details page that applies to us is the Hardware requirements section. Here we define the DirectX feature level required for the game, and the minimum RAM required to run it. This is important because the store can help ensure that players don't try to play your game on systems that cannot run it. The next section allows you to define the in-app offers that will be made available to players.

The Age rating and rating certificates section allows you to define the minimum age required to play the game, as well as submit official rating certificates from ratings boards so that they may be displayed in the store to meet legal requirements. The latter part is optional in some cases, and may affect where you can submit your game depending on local laws. Aside from official ratings, all applications and games submitted to the store require a voluntary rating, chosen from one of the following age options:

3+
7+
12+
16+
18+

While all content is checked, the 7+ and 3+ ratings both have extra checks because of the extra requirements for those age ranges. The 3+ rating is especially restrictive, as apps submitted with that age limit may not contain features that could connect to online services, collect personal information, or use the webcam or microphone. To play it safe, the 12+ rating is recommended, and if you're still uncertain, higher is safer.

GDF Certificates

The other entry required here, if you have official rating certificates, is a GDF file. This is a Game Definition File, which defines the different ratings in a single location and provides the necessary information to display the rating and inform any parental settings. To create one, you use the GDFMAKER.exe utility that ships with the Windows 8 SDK to generate a GDF file, which you can submit to the store. Alongside that, you need to create a DLL containing that file (as a resource) without any entry point, to include in the application package. For full details on how to create the GDF as well as the DLL, view the following MSDN article: http://msdn.microsoft.com/en-us/library/windows/apps/hh465153.aspx

The final section before you need to submit your compiled application package is the cryptography declaration. For most games you should be able to declare that you aren't using any cryptography within the game and quickly move through this step. If you are using cryptography, including encrypting game saves or data files, you will need to declare that here and follow the instructions to either complete the step or provide an Export Control Classification Number (ECCN).

Now you need to upload the compiled app package before you can continue, so we'll take a look at what it takes to do that.

App packages

To submit your game to the store, you need to package it up in a format that makes it easy to upload, and easy for the store to distribute. This is done by compiling the application as an .appx file. But before that happens, we need to ensure we have defined all of the required metadata and fulfilled the certification requirements, otherwise we'll be uploading a package only to fail soon after. Part of this is done through the application manifest editor, which is accessible in Visual Studio by double-clicking on the Package.appxmanifest file in Solution Explorer. This editor is where you specify the name that will be seen in the start menu, as well as the icons used by the application.
To pass certification, all icons have to be provided at 100 percent DPI, which is referred to as Scale 100 in the editor:

Standard: 150 x 150 px (required)
Wide: 310 x 150 px (required if the wide tile is enabled)
Small: 30 x 30 px (required)
Store: 50 x 50 px (required)
Badge: 24 x 24 px (required if toasts are enabled)
Splash: 620 x 300 px (required)

If you wish to provide higher quality images for people running on high DPI setups, you can do so with a simple filename change. If you add scale-XXX to your filename, just before the extension, and replace XXX with one of the following values, Windows will automatically make use of it at the appropriate DPI:

scale-100
scale-140
scale-180

The visual assets editable in the manifest all apply to the start menu and application start-up experience, including the splash screen and toast notifications.

Toast Notifications in Windows 8 are pop-up notifications that slide in from the edge of the screen and show the user some information for a short period of time. The user can click on the toast to open the application. Alongside Live Tiles, Toast Notifications allow you to give the user information when the application is not running (although they also work when the application is running).

The preceding list shows the different images required for a Windows 8 application, and whether each is mandatory or only required in certain situations. Note that this does not include the imagery required for the store, which includes some screenshots of the application and optional promotional art in case you want your application to be featured.

You must replace all of the required icons with your own. Automated checks during certification will detect the use of the default "box" icon and automatically fail the submission.

Capabilities

Once you have the visual aspects in place, you need to declare the capabilities that the application will receive. Your game may not need any; however, you should still only specify what you need to run, as some of these capabilities come with extra implications and non-obvious requirements.

Adding a privacy policy

One of those requirements is the privacy policy. Even if you are creating a game, there may be situations where you are collecting private information, which requires you to have a privacy policy. The biggest issue here is connecting to the Internet. If your game marks any of the Internet capabilities in the manifest, you automatically trigger a check for a privacy policy, as private information (in this case, an IP address) is being shared.

To avoid failing certification for this, you need to put together a privacy policy if you collect private information, or if you use any of the capabilities that would indicate you collect information. These include the Internet capabilities as well as the location, webcam, and microphone capabilities. The privacy policy just needs to describe what you will do with the information, and directly mention your game and publisher name. Once you have the policy written, it needs to be posted in two locations. The first is a publicly accessible website, which you will provide a link to when filling out the description after uploading your game. The second is within the game itself. It is recommended you place this policy in the Windows 8 provided settings menu, which you can build using XAML or your own code.
If you're going with a completely native Windows 8 application, you may want to display the policy in your own way and link to it from options within your game.

Declarations

Once you've indicated the capabilities you want, you need to declare any operating system integration you've done. For most games you won't use this; however, if you're taking advantage of Windows 8 features such as share targets (the destination for data shared using the Share Charm), or you have a Game Definition File, you will need to declare it here and provide the required information for the operating system. In the case of the GDF, you need to provide the file so that the parental controls system can make use of the ratings to appropriately control access.

Certification kit

The next step is to make sure you aren't going to fail the automated tests during certification. Microsoft provides the same automated tests used when you submit your app in the Windows Application Certification Kit (WACK). WACK is installed by default with Visual Studio 2012 or higher. There are two ways to run the test: after you build your application package, or by running the kit directly against an installed app. We'll look at the latter first, as you might want to run the test on your deployed test game well before you build anything for the store. This is also the only way to run the WACK on a WinRT device, if you want to cover all bases.

If you haven't already deployed or tested your app, deploy it using the Build menu in Visual Studio and then search for the Windows App Cert Kit using the start menu (just start typing). When you run this, you will be given an option to choose which type of application you want to validate. In this case we want to select the Windows Store App option, which will then give you access to the list of apps installed on your machine. From there it's just a matter of selecting the app you want and starting the test.

At this point you will want to leave your machine alone until the automated tests are complete. Any interference could lead to an incorrect failure of the certification tests. The results will indicate ways you can fix any issues; however, you should be fine for most of the tests. The biggest issues will arise from third-party libraries that haven't been developed for or ported to Windows 8. In this case the only option is to fix them yourself (if they're open source) or find an alternative.

Once you have the test passing, or you feel confident that it won't be an issue, you need to create app packages that are compatible with the store. At this point your game will be associated with the submission you have created in the Windows Store dashboard so that it is prepared for upload.

Creating your app packages

To do this, right-click on your game project in Visual Studio and click on Create App Packages inside the Store menu. Once you do that, you'll be asked if you want to create a package for the store. The difference between the two options comes down to how the package is signed. If you choose No here, you can create a package with your test certificate, which can be distributed for testing. These packages must be manually installed and cannot be submitted to the store. You can, however, use this type of package on other machines to install your game for testers to try out. Choosing No will give you a folder with a .ps1 file (PowerShell), which you can run to execute the install script.
Choosing Yes at this option will take you to a login screen where you can enter your Windows Store developer account details. Once you've logged in, you will be presented with a list of applications that you have registered with the store. If you haven't yet reserved the name of your application, you can click on the Reserve Name link, which will take you directly to the appropriate page in the store dashboard. Otherwise, select the name of the game you're trying to build and click on Next.

The next screen will allow you to specify which architectures to build for, and the version number of the built package. As this is a C++ game, we need to provide separate packages for the ARM, x86, and x64 builds, depending on what you want to support. Simply providing an x86 and ARM build will cover the entire market; a 64-bit build can be nice to have if you need a lot of memory, but ultimately it is optional, and some users may not even be able to run x64 code.

When you're ready, click on Create and Visual Studio will proceed to build your game and compile the requested packages, placing them in the directory specified. If you've built for the store, you will need the .appxupload files from this directory when you proceed to upload your game. Once the build has completed, you will be asked if you want to launch the Windows Application Certification Kit. As mentioned previously, this will test your game for certification failures, and if you're submitting to the store it's strongly recommended you run this. Doing so at this screen will automatically deploy the built package and run the test, so ensure you have a little bit of time to let it run.

Uploading and submitting

Now that you have a built app package, you can return to the store dashboard to submit your game. Just edit the submission you made previously and enter the Packages section, which will take you to the page where you can upload the .appxupload file. Once you have successfully uploaded your game, you will gain access to the next section, the Description. This is where you define the details that will be displayed in the store. This is also where your marketing skills come into play as you prepare the content that will hopefully get players to buy your game.

You start with the description of your game, and any big feature bullet points you want to emphasize. This is the best place to mention any reviews or praise, as well as give a quick description that will help the players decide if they want to try your game. You can have a number of app features listed; however, like any "back of the box" bullet points, keep them short and exciting.

Along with the description, the store requires at least one screenshot to display to the potential player. These screenshots need to be of the entire screen, and that means they need to be at least 1366x768, which is the minimum resolution of Windows 8. These are also one of the best ways to promote your game, so ensure you take some great screenshots that show off the fun and appeal of your game.

There are a few ways to take a screenshot of your game. If you're testing in the simulator, you can use the screenshot icon on the right toolbar of the simulator to take the screenshot. If not, you can use Windows Key + Prt Scr SysRq to take a screenshot of your entire screen, and then use that (or edit it if you have multiple monitors). Screenshots taken with either of these tools can be found in the Screenshots folder within your Pictures library.
There are two other small pieces of information required during this stage: Copyright Info and Support contact info. For the support info, an e-mail address will usually suffice. At this point you can also include your website and, if applicable to your game, a link to the privacy policy included in your game. Note that if you require a privacy policy, it must be included in two places: your game, and the privacy policy field on this form.

The last items you may want to add here are promotional images. These images are intended for use in store promotions and allow Microsoft to easily feature your game with larger promotional imagery in prominent locations within the store. If you are serious about maximizing the reach of your game, you will want to include these images. If you don't, the number of places your game can be featured will be reduced. At a minimum, the 414x180 px image should be included if you want some form of promotion.

Now you're almost done! The next section allows you to leave notes for the testing team. This is where you would leave test account details for any features in your game that require an account, so that those features can be tested. This is also the place to leave any other notes about testing, for example to point out features that might not be obvious. In certain situations you may have an exemption from Microsoft for a certification requirement; this is where you would include that exemption.

When every step has been completed and you have tick marks in all of the stages, the Submit for Certification button will unlock, allowing you to complete your submission and send it off for certification. At this stage a number of automated tests will run before human testers try your game on a variety of devices to ensure it fits the requirements for the store. If all goes well, you will receive an email notifying you of your successful certification and, if you set the release date as ASAP, you will find your game in the store a few hours later (it may take a few hours for the game to appear in the store after you receive the email).

Certification tips

Your first stop should be the certification requirements page, which lists all of the current requirements your game will be tested for: http://msdn.microsoft.com/en-us/library/windows/apps/hh694083.aspx. There are some requirements that you should take note of, and in this section we'll take a look at ways to help ensure you pass them.

Privacy

The first, of course, is the privacy policy. As mentioned before, if your game collects any sort of personal information, you will need that policy in two places:

In full text within the game
Accessible through an Internet link

The default app template generated by Visual Studio automatically enables the Internet capability, and by simply having that enabled you require a privacy policy. If you aren't connecting to the Internet at all in your game, you should always ensure that none of the Internet options are enabled before you package your game. If you share any personal information, then you need to provide players with a method of opting in to the sharing. This could be done by gating the functionality behind a login screen. Note that this functionality can be locked away; the requirement doesn't demand that you find a way to remain fully functional if the user opts out.

Features

One requirement is that your game support both touch input and keyboard/mouse input.
You can easily support this by using an input system like the one described in this article; however, by supporting touch input you get mouse input for free and technically fulfill this requirement. It's all about how much effort you want to put into the experience your player will have, and that's why including gamepad input is recommended, as some players may want to use a connected Xbox 360 gamepad as their input device.

Legacy APIs

Although your game might run while using legacy APIs, it won't pass certification. This is checked through an automated test that also occurs during the WACK testing process, so you can easily check whether you have used any illegal APIs. This often arises in third-party libraries which make use of parts of the standard I/O library, such as the console, or the insecure versions of functions such as strcpy or fopen. Some of these APIs don't exist in WinRT for good reason; the console, for example, just doesn't exist, so calling APIs that work directly with the console makes no sense, and isn't allowed.

Debug

Another issue that may arise through the use of third-party libraries is that some of them may be compiled in debug mode. This could present issues at runtime for your app, and the packaging system will happily include these when compiling your game, unless it has to compile them itself. This is detected by the WACK and can be resolved by finding a release mode version, or recompiling the library.

WACK

The final tip is: run WACK. This kit quickly and easily finds most of the issues you may encounter during certification, and you see the issues immediately rather than waiting for them to fail your game during the certification process. Your final step before submitting to the store should be to run WACK, and even while developing it's a good idea to compile in release mode and run the tests just to make sure nothing is broken.

Summary

By now you should know how to submit your game to the store, and get through certification with little to no issue. We've looked at what you require for the store, including imagery and metadata, as well as how to make use of the Windows Application Certification Kit to find problems early on and fix them without waiting hours or days for certification to fail your game. One area unique to games that we have covered in this article is game ratings. If you're developing your game for certain markets where ratings are required, or if you are developing children's games, you may need to get a rating certificate, and hopefully you now have an idea of where to look to do this.

Resources for Article:

Further resources on this subject:
Introduction to Game Development Using Unity 3D [Article]
HTML5 Games Development: Using Local Storage to Store Game Data [Article]
Unity Game Development: Interactions (Part 1) [Article]

Getting Started with JSON

Packt
28 Oct 2013
6 min read
(For more resources related to this topic, see here.)

JSON was developed by Douglas Crockford. It is a text-based, lightweight, and human-readable format for data exchange between clients and servers. JSON is derived from JavaScript and bears a close resemblance to JavaScript objects, but it is not dependent on JavaScript. JSON is language-independent, and support for the JSON data format is available in all the popular languages, some of which are C#, PHP, Java, C++, Python, and Ruby. JSON is a format and not a language.

Prior to JSON, XML was considered to be the chosen data interchange format. XML parsing required an XML DOM implementation on the client side that would ingest the XML response, and then XPath was used to query the response in order to access and retrieve the data. That made life tedious, as querying for data had to be performed at two levels: first on the server side, where the data was being queried from a database, and a second time on the client side using XPath. JSON does not need any specific implementations; the JavaScript engine in the browser handles JSON parsing.

XML messages often tend to be heavy and verbose, and take up a lot of bandwidth while sending the data over a network connection. Once the XML message is retrieved, it has to be loaded into memory to parse it. Consider a students data feed rendered first as XML and then as JSON (the two examples appeared as screenshots in the original article). The size of the XML message is bigger when compared to its JSON counterpart, and this is just for two records; a real-time feed will begin with a few thousand records and go upwards. Another point to note is that the amount of data that has to be generated by the server and then transmitted over the Internet is already big, and XML, being verbose, makes it bigger.

Given that we are in the age of mobile devices, where smartphones and tablets are getting more and more popular by the day, transmitting such large volumes of data over a slower network causes slow page loads, hang-ups, and a poor user experience, driving users away from the site. JSON has come to be the preferred Internet data interchange format, to avoid the issues mentioned earlier.

Since JSON is used to transmit serialized data over the Internet, we will need to make a note of its MIME type. A MIME (Multipurpose Internet Mail Extensions) type is an Internet media type, which is a two-part identifier for content that is being transferred over the Internet. MIME types are passed through the HTTP headers of an HTTP Request and an HTTP Response. The MIME type is the communication of content type between the server and the browser. In general, a MIME type will have two or more parts that give the browser information about the type of data that is being sent either in the HTTP Request or in the HTTP Response. The MIME type for JSON data is application/json. If the MIME type headers are not sent across, the browser treats the incoming JSON as plain text.

The Hello World program with JSON

Now that we have a basic understanding of JSON, let us work on our Hello World program, which alerts the string World onto the screen when it is invoked from a browser. Let us pay close attention to the script between the <script> tags. In the first step, we are creating a JavaScript variable and initializing it with a JavaScript object.
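The program itself appeared as a screenshot in the original article; the following is a minimal sketch of what such a page's script might contain, with the variable name and key name assumed for illustration:

// A JSON-style object literal; both the key and the value are in double quotes
var hello_world = { "hello": "World" };

// Retrieve the value via its key and alert it to the screen
alert(hello_world.hello);    // alerts "World"
alert(hello_world["hello"]); // associative-array style access gives the same result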
Similar to how we retrieve data from a JavaScript object, we use the key-value pair to retrieve the value. Simply put, JSON is a collection of key and value pairs, where every key is a reference to the memory location where the value is stored on the computer. Now let us take a step back and analyze why we need JSON, if all we are doing is assigning JavaScript objects that are readily available. The answer is that JSON is a different format altogether, unlike JavaScript, which is a language.

JSON keys and values have to be enclosed in double quotes; if either is enclosed in single quotes, we will receive an error. Now, let us take a quick look at the similarities and differences between JSON and a normal JavaScript object. If we were to create a JavaScript object similar to our hello_world JSON variable from the earlier example, the big difference would be that the key is not wrapped in double quotes.

Since a JSON key is a string, we can use any valid string for a key. We can use spaces, special characters, and hyphens in our keys, which is not valid in a normal JavaScript object. When we use special characters, hyphens, or spaces in our keys, we have to be careful while accessing them, because JavaScript's dot notation doesn't accept keys with special characters, hyphens, or spaces. So we have to retrieve the data using a method where we handle the JSON object as an associative array with a string key.

Another difference between the two is that a JavaScript object can carry functions within it, while a JSON object cannot carry any functions. For example, a JavaScript object could have a property getName holding a function that alerts the name John Doe when it is invoked; such a property has no JSON equivalent.

Finally, the biggest difference is that a JavaScript object was never intended to be a data interchange format, while the sole purpose of JSON is to serve as a data interchange format.

Summary

This article introduced us to JSON, took us through its history and its advantages over XML. It focused on how JSON can be used in web applications for data transfer.

Resources for Article:

Further resources on this subject:
Syntax Validation in JavaScript Testing [Article]
Enhancing Page Elements with Moodle and JavaScript [Article]
Making a Better Form using JavaScript [Article]

Advanced System Management

Packt
25 Oct 2013
11 min read
(For more resources related to this topic, see here.)

Beyond backups

Of course, backups are not the only issue with managing multiple, remote systems. In particular, managing multiple configurations using a centralized application is often desirable.

Configuration management

One of the issues frequently faced by administrators is that of having multiple, remote systems, all with similar software for the most part, but with minor differences in what is installed or running. Debian provides several packages that can help manage such an environment in a unified manner. Two of the more popular packages, both available in Debian, are FAI and Puppet. While we don't have the space to go into details, both applications are described briefly here.

Fully Automated Installation

Fully Automated Installation (FAI) focuses on managing Linux installations, and is developed using Debian, although it works with many different distributions, not just Debian. FAI uses a class concept for categorizing similar systems, and provides a good deal of flexibility and customization via hooks. FAI provides for unattended, automatic installation, as well as tools for monitoring and updating groups of systems. FAI is frequently used for creating and maintaining clusters. More information is available at http://fai-project.org/.

Puppet

Probably the best-known application for distributed management is Puppet, developed by Puppet Labs. Unlike FAI, only the Open Source edition is free; the Enterprise edition, which has many additional features, is not. Puppet does include support for environments other than Linux. The desired configuration is described in a custom, high-level definition language, and distributed to systems with installed clients. Unlike FAI, Puppet does not provide its own bare-metal remote installation method, but does use existing methods (such as kickstart) to provide this function. A number of companies that make heavy use of distributed and clustered systems use Puppet to manage their environments. More information is available at http://puppetlabs.com/.

Other packages

There are other packages that can be used to manage a distributed environment, such as Chef and BCFG2. While simpler than Puppet or FAI, they support similar functions and have been used in some distributed and clustered environments. The use of FAI, Puppet, and others in cluster management warrants a brief look at clustering next, and at which packages in Debian support clustering.

Clusters

A cluster is a group of systems that work together in such a way that the whole functions as a single unit. Such clusters can be loosely coupled or tightly coupled. In a loosely coupled environment, each system is complete in itself, and can handle all of the tasks any of the other systems can handle. The environment provides mechanisms for redundancy, load sharing, and fail-over between systems, and is often called a High Availability (HA) cluster. In a tightly coupled environment, the systems involved are highly dependent on one another, often sharing memory and disk storage, and all work on the same task together. The environment provides mechanisms for data sharing, avoiding storage conflicts, keeping the systems in synchronization, and splitting up tasks appropriately. This design is often used in super-computing environments.

Clustering is an advanced technique that involves more than just installing and configuring software. It also involves hardware integration, and systems and network design and implementation.
Along with the URLs mentioned below, a good text on the subject is Building Clustered Linux Systems, by Robert W. Lucke, Prentice Hall. Here we will only touch the very basics, along with what tools Debian provides. Let's take a brief look at each environment, and some of the tools used to create them.

High Availability clusters

Two primary functions are required to implement a high availability cluster:

A way to handle load balancing and individual host fail-over
A way to synchronize storage so that all servers provide the same view of the data they serve

Debian includes meta packages that bring together software from the Linux High Availability project, including cluster-agents and resource-agents, two of the higher-level meta packages. These packages install various agents that are useful in coordinating and managing load balancing and fail-over. In some cases, a master server is designated to distribute the processing load among other servers. Data synchronization is handled by using shared storage and any of the filesystems that provide for multiple accesses and shared files, such as NFS or AFS. High Availability clusters generally use standard software, along with software that is readily available to manage the dynamics of such environments.

Beowulf clusters

In addition to the considerations for High Availability clusters, more tightly coupled environments such as Beowulf clusters also require an infrastructure to manage and distribute computing tasks. There are several web pages devoted to creating a Beowulf cluster using Debian, as well as packages that aid in creating such a cluster. One such page is https://wiki.debian.org/StartaBeowulf, a Debian Wiki page on Beowulf basics. The manual for FAI also has a section on creating a Beowulf cluster. Books are available as well. Debian provides several packages that are helpful in building such a cluster, such as the OpenMPI libraries for message passing, and various utilities that run commands on multiple systems, such as those in the kadif package. There are even projects that have released scripts and live CDs that allow you to set up a cluster quickly (one such project is the PelicanHPC project, developed for Debian Lenny, hosted at http://www.pelicanhpc.org/).

This type of cluster is not something that you can set up and go. Beowulf and other tightly coupled clusters are intended for highly parallel computing, and the programs that do the actual computing must be designed specifically for such an environment. That said, some packages for specific parallel computations do exist in Debian, such as nwchem, which provides several applications for computational chemistry that take advantage of parallelism.

Common tools

Some common components of clusters have already been mentioned, such as the OpenMPI libraries. Aside from the meta packages already mentioned, the redhat-cluster suite of tools is available in Debian, as well as many useful libraries, scheduling tools, and fail-over tools such as booth. All of these can be found using apt-cache or Synaptic by searching for "cluster".

Webmin

Many administrators will never have to administer a cluster, and many won't be responsible for a large number of systems requiring central backup solutions. However, even administering a single system using command-line tools and text editors can be a chore. Even clusters sometimes require administrative tasks on individual systems.
Webmin

Many administrators will never have to administer a cluster, and many won't be responsible for a large number of systems requiring central backup solutions. However, even administering a single system using command-line tools and text editors can be a chore. Even clusters sometimes require administrative tasks on individual systems. Fortunately, there is an application that can ease many administrative tasks, is easy to use, and can handle many aspects of Linux administration. It is called Webmin.

Up until Debian Sarge, Webmin was a part of Debian distributions. However, the Debian developer in charge of packaging it had difficulty keeping up with the frequent releases, and it was eventually dropped from Debian. The upstream Webmin developers, though, maintain current packages that install cleanly. Some users have reported issues because Webmin does not always handle configuration files exactly as Debian intends, but it most certainly attempts to handle them in a compatible manner. While some users have experienced problems with upgrades, many administrators are quite happy with Webmin. As long as you are willing to deal with conflicts during upgrades, or to restrict use of modules that have major configuration impacts, you will find Webmin quite useful.

Installing Webmin

Webmin may be installed by adding the following lines to your apt sources file:

deb http://download.webmin.com/download/repository sarge contrib
deb http://webmin.mirror.somersettechsolutions.co.uk/repository sarge contrib

Usually, this is added to a separate webmin.list file in /etc/apt/sources.list.d. The use of 'sarge' for the release name in the configuration is not a mistake. Since Webmin was dropped after the Sarge release (Debian 3.1), the developers update the repository as it is and haven't bothered changing it to keep up with the Debian code names. However, the versions available in the repository are compatible with any Debian release since 3.1.

After updating your cache file, Webmin can be installed and maintained using apt-get, aptitude, or Synaptic. Also, if you request a Webmin upgrade from within Webmin itself on a Debian system, it will use the proper Debian package to upgrade.
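Put together, the setup looks something like the following shell session (run as root; the signing-key URL is the one published by the Webmin developers at the time of writing, so treat it as an assumption and verify it before use):

echo 'deb http://download.webmin.com/download/repository sarge contrib' \
    > /etc/apt/sources.list.d/webmin.list
wget -qO - http://www.webmin.com/jcameron-key.asc | apt-key add -
apt-get update
apt-get install webmin

Importing the developers' signing key first avoids apt warnings about unauthenticated packages.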
Using Webmin

Webmin runs in the background, and provides an HTTP or HTTPS server on localhost port 10000. You can use any web browser to connect to http://localhost:10000/ to access Webmin. Upon first installation, only the root user, or those in a group allowed to use sudo to access the root account, may log in, but Webmin users can be managed separately or in conjunction with local users.

Webmin provides extensive and easy-to-understand menus and icons for various configuration tasks. Webmin is also highly modular and extensible, and an extensive list of standard modules is included with the base package. It is not possible to cover Webmin here as fully as it deserves, but a short list of some of its capabilities includes:

- Configuration of Webmin itself (the server, users, modules, and security)
- Local system user and password management
- Filesystem management
- Bootup and service management
- CRON job management
- Software updates
- Basic filesystem backups
- Authentication and security configuration
- Apache, DNS, SSH, and FTP (if you're using ProFTPD) configuration
- User mail management
- Qmail or sendmail configuration
- Network and firewall configuration and management
- Bandwidth monitoring
- Printer management

There are even modules that apply to clusters. Also, Webmin can search for and allow access to other Webmin servers on the local network, or you can define remote servers manually. This allows a central Webmin server, installed on a particular system, to be the gateway to all of the other servers in your environment, essentially providing a single point of access to manage all Webmin-enabled servers.

Webmin and Debian

Webmin understands the configuration file layout of many distributions. The main problem is when a particular module does not handle certain types of configuration in the way the Debian developers prefer, which can make package upgrades somewhat difficult.

This can be handled in a couple of ways. Most modules provide a means to edit configuration files directly, so if you have read the Debian documentation you can modify the configuration appropriately to use Debian-specific configuration techniques. Or, you may choose to allow Webmin to modify files as it sees fit, and handle any conflicts manually when you upgrade the software involved. Finally, you can avoid those modules involved with specific software that are more likely to cause problems. One such module is the Apache module, which doesn't manage Debian's convention of linking files in sites-enabled to those in sites-available; rather, it writes configuration directly in the sites-enabled directory. Some administrators create the configuration in Webmin, and then move and link the files. Others prefer to configure Apache manually, outside of Webmin.

Webmin modules are constantly changing, and some actually recognize the Debian file layouts well, so it is not possible to give a comprehensive list of modules to avoid at this time. Best practice when using Webmin is to read the documentation and check the configuration files for specific software prior to using Webmin. Then, after configuring with Webmin, check the files again to determine whether changes may be required to work within the particular package's Debian configuration framework. Based upon this, you can decide whether to continue configuring with Webmin or to switch back to manual configuration of that particular software.

Webmin security

Security is always a concern when remote access to a system is involved. Webmin handles this by requiring authentication and providing detailed access restrictions that add a layer of control beyond the firewall. Webmin users can be defined separately, or certain local users can be designated. Access to the various modules in Webmin can be restricted to certain users or groups of users, and detailed logs of Webmin actions are kept.

Usermin

In addition to Webmin, there is a server called Usermin which may be installed from the same repository as Webmin. It allows individual users to perform a number of functions more easily, such as changing their password, accessing their files, reading and managing their email, and managing some aspects of their user profile. It is also modular and has the same security features as Webmin.

Summary

Several powerful and flexible central backup solutions exist that help manage backups for multiple remote servers and sites. Debian provides packages that assist in building High Availability and Beowulf style multiprocessing clusters as well. And, whether or not you are involved in managing clusters, or even a single system, Webmin can ease an administrator's tasks.

Resources for Article:

Further resources on this subject:
Customizing a Linux kernel [Article]
Microsoft SharePoint 2010 Administration: Farm Governance [Article]
Testing Workflows for Microsoft Dynamics AX 2009 Administration [Article]

Authenticating Your Application with Devise

Packt
25 Oct 2013
11 min read
(For more resources related to this topic, see here.)

Signing in using authentication other than e-mails

By default, Devise only allows e-mails to be used for authentication. For some people, this condition will lead to the question, "What if I want to use some other field besides e-mail? Does Devise allow that?" The answer is yes; Devise allows other attributes to be used to perform the sign-in process. For example, I will use username as a replacement for e-mail, and you can change it later to whatever you like, including userlogin, adminlogin, and so on.

We are going to start by modifying our user model. Create a migration file by executing the following command inside your project folder:

$ rails generate migration add_username_to_users username:string

This command will produce a file, which is depicted by the following screenshot:

The generated migration file
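The screenshot simply shows the generated file's contents, which for this command will look something like the following sketch (the superclass shown is the Rails 4-era ActiveRecord::Migration; newer Rails versions add a version suffix):

class AddUsernameToUsers < ActiveRecord::Migration
  def change
    # Adds the new username column used for authentication
    add_column :users, :username, :string
  end
end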
Execute the migrate (rake db:migrate) command to alter your users table, and it will add a new column named username. Next, you need to open Devise's main configuration file at config/initializers/devise.rb and modify the code:

config.authentication_keys = [:username]
config.case_insensitive_keys = [:username]
config.strip_whitespace_keys = [:username]

You have done enough modification to your Devise configuration, and now you have to modify the Devise views to add a username field to your sign-in and sign-up pages. By default, Devise loads its views from the gem's code; the only way to modify them is to generate copies of its views, which will automatically override the defaults. To do this, you can execute the following command:

$ rails generate devise:views

It will generate some files, which are shown in the following screenshot:

Devise views files

As I have previously mentioned, these files can be used to customize other views as well, but we are going to talk about that a little later in this article. Now you have the views, and you can modify some files to insert the username field. These files are listed as follows:

app/views/devise/sessions/new.html.erb: This is the view file for the sign-in page. Basically, all you need to do is change the email field into the username field.

#app/views/devise/sessions/new.html.erb
<h2>Sign in</h2>
<%= notice %>
<%= alert %>
<%= form_for(resource, :as => resource_name, :url => session_path(resource_name)) do |f| %>
  <div><%= f.label :username %><br />
  <%= f.text_field :username, :autofocus => true %></div>
  <div><%= f.label :password %><br />
  <%= f.password_field :password %></div>
  <% if devise_mapping.rememberable? -%>
    <div><%= f.check_box :remember_me %> <%= f.label :remember_me %></div>
  <% end -%>
  <div><%= f.submit "Sign in" %></div>
<% end %>
<%= render "devise/shared/links" %>

You are now allowed to sign in with your username. The modification is depicted in the following screenshot:

The sign-in page with username

app/views/devise/registrations/new.html.erb: This is the view file for the registration page. It is a bit different from the sign-in page; in this file, you need to add the username field so that users can fill in their username when they register.

#app/views/devise/registrations/new.html.erb
<h2>Sign Up</h2>
<%= form_for(resource, :as => resource_name, :url => registration_path(resource_name)) do |f| %>
  <%= devise_error_messages! %>
  <div><%= f.label :email %><br />
  <%= f.email_field :email, :autofocus => true %></div>
  <div><%= f.label :username %><br />
  <%= f.text_field :username %></div>
  <div><%= f.label :password %><br />
  <%= f.password_field :password %></div>
  <div><%= f.label :password_confirmation %><br />
  <%= f.password_field :password_confirmation %></div>
  <div><%= f.submit "Sign up" %></div>
<% end %>
<%= render "devise/shared/links" %>

Especially for registration, you need to perform an extra modification. Mass assignment rules are written in the app/controllers/application_controller.rb file, and now we are going to modify them a little. Add username to the sanitizer for sign-in and sign-up, and you will have something as follows:

# this code is written inside the configure_permitted_parameters function
devise_parameter_sanitizer.for(:sign_in) {|u| u.permit(:email, :username)}
devise_parameter_sanitizer.for(:sign_up) {|u| u.permit(:email, :username, :password, :password_confirmation)}

These changes will allow you to perform a sign-up along with the username data. The result of the preceding example is shown in the following screenshot:

The sign-up page with username

I want to add a new case for your sign-in: a single field for both username and e-mail. This means that you can sign in either with your e-mail ID or with your username, like in Twitter's sign-in form. Based on what we have done before, you already have username and email columns; now, open /app/models/user.rb and add the following line:

attr_accessor :signin

Next, you need to change the authentication keys for Devise. Open /config/initializers/devise.rb and change the value of config.authentication_keys, as shown in the following code snippet:

config.authentication_keys = [ :signin ]

Let's go back to our user model. You have to override the lookup function that Devise uses when performing a sign-in. To do this, add the following method inside your model class (note that the sign-in value has to be pulled out of the conditions hash before it is used in the query):

def self.find_first_by_auth_conditions(warden_conditions)
  conditions = warden_conditions.dup
  signin = conditions.delete(:signin)
  where(conditions).where(["lower(username) = :value OR lower(email) = :value",
    { :value => signin.downcase }]).first
end

As an addition, you can add a validation for your username so that its uniqueness check is case insensitive. Add the following validation code to your user model:

validates :username, :uniqueness => {:case_sensitive => false}
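A quick way to exercise the new lookup from the Rails console (illustrative; it assumes a user with the username alice already exists):

User.find_first_by_auth_conditions(:signin => "alice")

This should return the user whether "alice" matches the username or the e-mail, mirroring the lookup Devise performs during sign-in.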
Please open /app/controllers/application_controller.rb and make sure you have this code to perform parameter filtering:

before_filter :configure_permitted_parameters, if: :devise_controller?

protected

def configure_permitted_parameters
  devise_parameter_sanitizer.for(:sign_in) {|u| u.permit(:signin)}
  devise_parameter_sanitizer.for(:sign_up) {|u| u.permit(:email, :username, :password, :password_confirmation)}
end

We're almost there! Currently, I assume that you've already stored an account that contains the e-mail ID and username. So, you just need to make a simple change in your sign-in view file (/app/views/devise/sessions/new.html.erb). Make sure that the file contains this code:

<h2>Sign in</h2>
<%= notice %>
<%= alert %>
<%= form_for(resource, :as => resource_name, :url => session_path(resource_name)) do |f| %>
  <div><%= f.label "Username or Email" %><br />
  <%= f.text_field :signin, :autofocus => true %></div>
  <div><%= f.label :password %><br />
  <%= f.password_field :password %></div>
  <% if devise_mapping.rememberable? -%>
    <div><%= f.check_box :remember_me %> <%= f.label :remember_me %></div>
  <% end -%>
  <div><%= f.submit "Sign in" %></div>
<% end %>
<%= render "devise/shared/links" %>

You can see that you don't have a username or email field anymore. The field is now replaced by a single field named :signin that will accept either the e-mail ID or the username. It's efficient, isn't it?

Updating the user account

Basically, you are already allowed to access your user account when you activate the registerable module in the model. To access the page, you need to log in first and then go to /users/edit. The page is shown in the following screenshot:

The edit account page

But what if you want to edit your username or e-mail ID? How will you do that? What if you have extra information in your users table, such as addresses, birth dates, and bios, as well as passwords? How will you edit those? Let me show you how to edit your user data either including or excluding your password.

Editing your data, including the password: To perform this action, the first thing that you need to do is modify your view. Your view should contain the following code:

<div><%= f.label :username %><br />
<%= f.text_field :username %></div>

Now, we are going to overwrite Devise's logic. To do this, you have to create a new controller named registrations_controller. Please use the rails command to generate the controller, as shown:

$ rails generate controller registrations update

It will produce a file located at app/controllers/. Open the file and make sure you write this code within the controller class:

class RegistrationsController < Devise::RegistrationsController
  def update
    new_params = params.require(:user).permit(:email, :username,
      :current_password, :password, :password_confirmation)
    @user = User.find(current_user.id)
    if @user.update_with_password(new_params)
      set_flash_message :notice, :updated
      sign_in @user, :bypass => true
      redirect_to after_update_path_for(@user)
    else
      render "edit"
    end
  end
end

Let's look at the code. Rails 4 has a new way of organizing whitelisted attributes (strong parameters), so before performing mass assignment you have to prepare your data; this is done in the first line of the update method. Next, notice the method defined by Devise named update_with_password. This method applies mass assignment with the provided data, and since we prepared the parameters beforehand, it works safely.

Next, you have to edit your routes file a bit. You should modify the rule defined by Devise so that, instead of using the original controller, Devise uses the controller you created before. The modification should look as follows:

devise_for :users, :controllers => {:registrations => "registrations"}

Now you have modified the original user edit page, and it will look a little different. You can start your Rails server and see it in action. The view is depicted in the following screenshot:

The modified account edit page

Now, try filling in these fields one by one. If you fill them with different values, you will be updating all the data (e-mail, username, and password), and this sounds dangerous. You can modify the controller for better update security; it all depends on your application's workflows and rules.

Editing your data, excluding the password: Actually, you already have what it takes to update data without changing your password. All you need to do is modify your registrations_controller.rb file.
Your update function should be as follows:

class RegistrationsController < Devise::RegistrationsController
  def update
    new_params = params.require(:user).permit(:email, :username,
      :current_password, :password, :password_confirmation)
    change_password = true
    if params[:user][:password].blank?
      params[:user].delete("password")
      params[:user].delete("password_confirmation")
      new_params = params.require(:user).permit(:email, :username)
      change_password = false
    end
    @user = User.find(current_user.id)
    if change_password
      is_valid = @user.update_with_password(new_params)
    else
      is_valid = @user.update_without_password(new_params)
    end
    if is_valid
      set_flash_message :notice, :updated
      sign_in @user, :bypass => true
      redirect_to after_update_path_for(@user)
    else
      render "edit"
    end
  end
end

The main difference from the previous code is that you now have logic that checks whether the user intends to update their data with a password or not. If not, the code calls the update_without_password method and drops the password fields from the permitted parameters. Note that the return value of whichever update method runs is captured in is_valid, so a failed update correctly re-renders the edit form. Now, refresh your browser and try editing with or without a password; it won't be a problem anymore.

Summary

Now, I believe that you will be able to make your own Rails application with Devise. You should be able to make your own customizations based on your needs.

Resources for Article:

Further resources on this subject:
Integrating typeahead.js into WordPress and Ruby on Rails [Article]
Facebook Application Development with Ruby on Rails [Article]
Designing and Creating Database Tables in Ruby on Rails [Article]

Implementing OpenCart Modules

Packt
24 Oct 2013
6 min read
(For more resources related to this topic, see here.)

OpenCart is an e-commerce cart application built with its own in-house framework, which uses the Model-View-Controller-Language (MVC-L) pattern; each module in OpenCart follows the same pattern. The controller holds the logic: it gathers data from the model and passes it to the view for display. OpenCart modules have admin/ and catalog/ folders: the files in the admin/ folder control the module's settings, while the files in the catalog/ folder handle the presentation layer (frontend).

Learning how to clone and write code for OpenCart modules

We assume that you already know PHP, have already installed OpenCart, and are familiar with the OpenCart backend and frontend. You are going to create a Hello World module which has just one input box in the module's admin settings, and the same content is shown at the frontend.

The first step in module creation is choosing a unique name, so there will be no conflict with other modules. The same unique name is used to create the file names and the class names that extend Controller and Model. There are generally 6-8 files that need to be created for each module, and they follow a similar structure. If there is interaction with database tables, we have to create two extra models. The following screenshot shows the hierarchy of files and folders of an OpenCart module.

So now you know the basic directory structure of an OpenCart module. The file structure is divided into two sections: admin and catalog. The admin folders and files deal with the settings of the module and data handling, while the catalog folders and files handle the frontend.

Let's start with an easy way to make the module. You are going to duplicate the default google_talk module of OpenCart and change it into the Hello World module. We are using Dreamweaver to work with the files.

Changes made at the admin folder

Go to admin/controller/module/, copy google_talk.php, paste it in the same folder, rename it to helloworld.php, and open it in your favorite text editor, then find the following line:

class ControllerModuleGoogleTalk extends Controller {

Change the class name as follows:

class ControllerModuleHelloworld extends Controller {

Now look for google_talk and replace all occurrences with helloworld, as shown in the following screenshot. Then save the file.

Go to admin/language/english/module/, copy google_talk.php, paste it in the same folder, rename it to helloworld.php, and open it. Then look for the following line of code:

$_['entry_code'] = 'Google Talk Code:<br /><span class="help">Goto <a href="http://www.google.com/talk/service/badge/New" target="_blank"><u>Create a Google Talk chatback badge</u></a> and copy &amp; paste the generated code into the text box.</span>';

Replace it with the following line of code:

$_['entry_code'] = 'Hello World Content';

Then again find Google Talk, replace all occurrences with Hello World, and save the file.

Go to admin/view/template/module/, copy the google_talk.tpl file, paste it in the same folder, rename it to helloworld.tpl, open it, find google_talk, replace it with helloworld, and save it.

Changes made at the catalog folder

Go to catalog/controller/module/, copy the google_talk.php file, paste it in the same folder, rename it to helloworld.php, open it, and look for the following line:

class ControllerModuleGoogleTalk extends Controller {

Change the class name as follows:

class ControllerModuleHelloworld extends Controller {

Now look for google_talk, replace all occurrences with helloworld, and save it.

Go to catalog/language/english/module/, copy the google_talk.php file, paste it in the same folder, rename it to helloworld.php, open it, look for Live Chat, replace it with Hello World, and save it.

Go to catalog/view/theme/default/template/module/, copy the google_talk.tpl file, paste it in the same folder, and rename it to helloworld.tpl.

With the preceding file and code changes, our Hello World module is ready to be installed. Now log in to the admin section and go to Extensions | Module, look for Hello World, click on [Install], and then click on [Edit]. Then type the content that you would like to show at the frontend in the Hello World Content field. After that, click on the Add Module button, provide the settings as per your needs, and click on Save.

Understanding the global library methods used in OpenCart

OpenCart has many predefined methods which can be called anywhere as needed: in the controller, in the model, as well as in the view template files. You can find the system-level library files at system/library/. We have described all the library functions so that they are easy for programmers to use. For example:

$this->customer->login($email, $password, $override = false)

This logs a customer in. It checks the customer's username and password if $override is passed as false; otherwise it checks only the current logged-in status and the e-mail. If it finds a matching entry, then the cart and wishlist entries are retrieved, and the customer ID, first name, last name, e-mail, telephone, fax, newsletter subscription status, customer group ID, and address ID become globally accessible for the customer. It also updates the customer's IP address from where they log in.
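As a quick illustration, a frontend controller action might use this method as follows (a sketch only; the request keys, redirect target, and error handling are assumptions based on a typical OpenCart 1.5 login flow, not code from this article):

<?php
// Somewhere in a catalog controller action handling a login form post
$email = $this->request->post['email'];
$password = $this->request->post['password'];

if ($this->customer->login($email, $password)) {
    // Credentials matched; send the customer to their account page
    $this->redirect($this->url->link('account/account', '', 'SSL'));
} else {
    // Login failed; set an error for the view to display
    $this->error['warning'] = 'Login failure';
}
?>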
Developing and customizing modules, pages, order totals, shipping, and payment extensions in OpenCart

We describe the featured module of OpenCart, create a feedback module and a tips module, and show how the code works and is managed. This helps you learn how to make pages and modules in OpenCart, and lets you visualize the database structure: how data is saved per language and per store, which helps OpenCart programmers understand and follow the patterns of the OpenCart coding style. We describe how forms work, how to list data from the database, how editing works in a module, and how settings are saved. We also show how to code shipping modules, order total modules, and payment modules in OpenCart, and we outline how templates, models, and controllers work for extensions.

Summary

In this way, we have learned how to clone and write code for OpenCart modules, and the changes made in the admin and catalog folders. We also learned about the global library methods used in OpenCart, and covered the ways to code OpenCart extensions.

Resources for Article:

Further resources on this subject:
Upgrading OpenCart [Article]
OpenCart FAQs [Article]
OpenCart: Layout Structure [Article]

Improving Your Development Speed

Packt
23 Oct 2013
7 min read
(For more resources related to this topic, see here.)

What all developers want is to do their job as fast as they can without sacrificing the quality of their work. IntelliJ has a large range of features that will reduce the time spent in development. But, to achieve the best performance IntelliJ can offer, it is important that you understand the IDE and adapt some of your habits. In this article, we will navigate through the features that can help you do your job even faster. You will understand IntelliJ's main elements and how they work and, beyond this, learn how IntelliJ can help you organize your activities and the files you are working on. To further harness IntelliJ's abilities, you will also learn how to manage plugins and see a short list of plugins that can help you.

Identifying and understanding window elements

Before we start showing you techniques you can use to improve your performance using IntelliJ, you need to identify and understand the visual elements present in the main window of the IDE. Knowing these elements will help you find what you want faster. The following screenshot shows the IntelliJ main window:

The main window can be divided into seven parts, as shown in the previous screenshot:

1. The main menu contains options that you can use to do tasks such as creating projects, refactoring, managing files in version control, and more.
2. The main toolbar contains some essential options. Some buttons are shown or hidden depending on the configuration of the project; version control buttons are an example of this.
3. The Navigation Bar is sometimes a quick and good alternative for navigating easily and quickly through the project files.
4. Tool tabs are shown on both sides of the screen and at the bottom of IntelliJ. They represent the tools that are available for the project. Some tabs are available only when facets are enabled in the project (for example, the Persistence tab).
5. When the developer clicks on a tool tab, a tool window appears. These windows present the project from different perspectives, and the options available in each tool window provide the developer with a wide range of development tasks.
6. The editor is where you write your code.
7. The Status Bar indicates the current IDE state and provides some options to manipulate the environment. For example, you can hide the tool tabs by clicking on the icon at the bottom-left corner of the window.

In almost all elements, there are context menus available. These menus provide extra options that may complement and ease your work. For example, the context menu available in the toolbar provides an option to hide itself and another to customize the menus and toolbars.

You will notice that some tool tabs have numbers. These numbers are used in conjunction with the Alt key to quickly access the tool window you want; Alt + 1, for example, will open the Project tool window.

Each tool window has different options; some present search facilities, others show specific options. They use a common structure: a title bar, a toolbar, and the content pane. Some tool windows don't have a toolbar and, in others, the options in the title bar may vary. However, all of them have at least two buttons in the rightmost part of the title bar: a gear, and a small bar with an arrow. The first button is used to configure some properties of the tool and the second just minimizes the window.
The following screenshot shows some options in the Database tool:

The options available under the gear button icon generally differ from tool to tool. However, in the drop-down list, you will find four common options: Pinned, Docked, Floating, and Split modes. As you may have already imagined, these options are used to define how the tool window is shown. The Pinned mode is very useful when it is unmarked; that way, when you focus on the code editor, you don't lose time minimizing the tool window.

Identifying and understanding code editor elements

The editor provides some elements that can facilitate navigation through the code and help identify problems in it. In the following screenshot, you can see how the editor is divided:

The editor area, as you probably know, is where you edit your source code.

The gutter area is where different types of information about the code are shown, using icons or special marks like breakpoints and ranges. The indicators used here aren't just for displaying information; you can perform some actions depending on the indicator, such as reverting changes or navigating through the code.

The smart completion popup, as you've already seen, provides assistance to the developer in accordance with the current context.

The document tabs area is where the tabs of each opened document are available. The type of document is identified by an icon, and the color of the file name shows its status in version control: blue stands for "modified", green for "new", red for "not in VCS", and black for "not changed". This component has a context menu that provides some other facilities as well.

The marker bar is positioned on the right-hand side of the IDE, and its goal is to show the current status of the code. At the top, the square mark can be green when your code is OK, yellow for warnings that are not critical, and red for compilation errors. Below that square, the bar can have other colored marks that help the developer jump directly to the desired part of the code.

Sometimes, while you are coding, you may notice a small icon floating near the cursor. This icon indicates that there are some intentions available that could help you; depending on the icon shown:

- IntelliJ proposes a code modification that isn't strictly necessary; this covers anything from warning corrections to code improvements.
- An intention action can be used, but it doesn't provide any improvement or code correction.
- A quick fix is available to correct an evident code error.
- The alert for the intention is disabled, but the intention is still available.

The following figure shows a working intention:

Intention actions can be grouped into four categories, listed as follows:

Create from usage is the kind of intention action that proposes the creation of code depending on the context. For example, if you enter a method name that doesn't exist, this intention will recognize it and propose the creation of the method.

Quick fixes is the type of intention that responds to code mistakes, such as wrong type usage or missing resources.

Micro refactoring is the kind of intention that is shown when the code is syntactically correct but could be improved (for readability, for example).

Fragment action is the type of intention used when there are string literals of an injected language; this type of injection can be used to let you edit the corresponding sentence in another editor.
Intention actions can be enabled or disabled on the fly or in the Intentions section of the configuration dialog; by default, all intentions come activated. Adding intentions is possible only by installing plugins that provide them or by creating your own plugin. If you prefer, you can use the Alt + Enter shortcut to invoke the intentions popup.

Summary

As you have seen in this article, IntelliJ provides a wide range of functionalities that will improve your development speed. More important than knowing all the shortcuts IntelliJ offers is understanding what is possible to do with them and when to use each feature.

Resources for Article:

Further resources on this subject:
NetBeans IDE 7: Building an EJB Application [Article]
JBI Binding Components in NetBeans IDE 6 [Article]
Smart Processes Using Rules [Article]

Working with Audio

Packt
23 Oct 2013
9 min read
(For more resources related to this topic, see here.)

Planning the audio

In Camtasia Studio, we can stack multiple audio tracks on top of each other. While this is a useful and powerful way to build a soundtrack, it can lead to cluttered audio output if we do not plan ahead. Audio tracks can be used for a wide range of purposes, so it's best to storyboard audio to avoid creating a confusing mix. If we consider how each audio track will be used before we begin to overlay each file on the timeline, we can visualize the end result and resist the temptation to layer too many audio effects on top of each other.

The importance of consistency

Producing professional video in Camtasia Studio comes down to consistency and detail: the more consistent we are, and the more we pay attention to detail, the more professional the result will be. By being consistent in our use of audio effects, we can avoid creating unintentional distractions or misleading the viewer. For example, if we choose to use a ping sound to represent a mouse click, we should make sure that all mouse clicks use the same ping sound so that the viewer understands and associates the sound with the action.

A note on background music

When deciding what audio we want in our video, we should always think about our target audience and the type of message we are trying to deliver. Never use background music unless it adds to the video content. For example, background music can be a useful way of engaging our viewer, but if we are delivering an important health and safety message, or delivering a quiz, a backing track may be distracting. If our audience is the staff in customer-facing departments, we may not want to include audio tracks at all; we wouldn't want the sound from our videos to be audible to a customer.

Types of audio

There are three main types of audio we can add to our video:

- Voice-over tracks
- Background music
- Sound effects

Preparing to record a voice-over

Various factors affect the quality and consistency of voice-over recordings. In Camtasia Studio, we can add effects, but it's best to get the source audio right in the first instance. The factors are as follows:

- We often don't pay attention to the qualities and tones in our own voices, but they can and do change. From day to day, your tone of voice can subtly change; air temperature, illness, or mood can all affect the way your voice sounds in a recording.
- The environment we use to record a voice-over can have a dramatic effect on the end result. Some rooms will give your voice natural reverb; others will sound very dead.
- The equipment we use will affect the recording. For example, different microphones will produce different results.

When we prepare for a voice-over recording, we must aim to keep our voice, environment, and equipment as stable and consistent as possible. That means we should aim to record the voice-over in one session so that we can control all these factors. We may choose a different person to provide the voice-over; again, we should take a consistent approach to how we use their voice.

Voice-over recording is always a long process and involves trial, error, and multiple takes. We should allow more time than we feel is strictly necessary, as many recordings inevitably overrun. If any sections of the recording are questionable, we should aim to record all of the alternatives in the same session for a seamless result.
The studio environment

Most Camtasia Studio users do not have access to a professional recording studio. This need not be a problem: we can use practically any quiet room to record our voice-over, although there are some basic pointers that will improve the result. When choosing a studio location, consider the following:

- Ambient noise: Try to record in a quiet environment. An empty room with no passers-by or noisy devices will make our recording clearer. Choose a room away from potential sources of noise (busy corridors, main roads, and so on).
- Noise leakage: Ensure that any doors and windows are closed to minimize noise pollution from outside the room and outside the building.
- Equipment noise: Ensure that all unnecessary programs on the PC are closed to prevent any unwanted sounds or alerts. End any background tasks, such as email checkers or task schedulers, and ensure any instant messaging software is closed or in offline mode.
- Positioning: Experiment with placing the microphone in different places around the room. The acoustics of a room can greatly affect the quality of a recording, and taking time to find the best place for the microphone will help. For efficiency, we can test the audio quality quickly by wearing headphones while speaking into the microphone.
- Posture: Standing up opens up the diaphragm and improves the sound of our voice when we record. Avoid recording while seated, and hold any notes or papers at eye level to maintain a constant tone.

Using scripts

When it comes to voice-over recording, a well-prepared script is the most important piece of preparation we can do. Working from a script is far simpler than attempting to make up our narration as we go along. It helps to maintain a good pace in the video and greatly reduces the need for multiple takes, making recording far more efficient. Creating a script need not be time-consuming; if we have already planned out and recorded our video track, writing a script will be far simpler.

Writing an effective script

The script you write should support the action in the video and maintain a healthy pace. There are a number of tips we can bear in mind to do this:

- Sync audio with video: Plan the script to coincide with any actions we take in the video. This may mean incorporating pauses into the script to allow a certain on-screen action to complete.
- Be flexible: We may need to go back and lengthen a section of video to incorporate the voice-over and explanation. It is better to do this than rush the voice-over and attempt to force it to fit.
- Use basic copywriting techniques: We should consider the message in the video and use the appropriate style. For example, if we are describing a process, we would want to use the active voice. In an internal company update, we may want to adopt a more conversational tone.
- Be direct and concise: A short and simple statement is far easier to process than a long, drawn-out argument.

We should always test our script prior to the recording session, and be prepared to rewrite and hone the content. Reading a script aloud is a useful way of estimating its length and picking out any awkward phrases that do not flow. We will save time if we perfect the script before we sit down in front of the microphone.

Recording equipment

Most laptop computers have a built-in microphone, as do some desktop computers.
While these microphones are perfectly adequate for video or audio chats and other casual uses, we should not use them to create Camtasia Studio recordings. Although the quality may be good, and the audio may be clear, these microphones often pick up a large amount of ambient noise, such as the fans inside the computer. Additionally, the audio captured using built-in microphones often requires processing and amplification, which can degrade its quality.

Camtasia Studio has a range of editing tools that can help you to tweak your audio recording. However, processing should always be a last resort: the more we use a tool to process our voice-over, the more the source material is prone to distortion. If we have better quality source material, we will not need to rely on these features, which will make the editing process much simpler. When working in Camtasia Studio, it is preferable to invest in a good quality external microphone. Basic microphones are inexpensive and offer considerably better audio recording than built-in microphones.

Choosing a microphone

External microphones are very affordable. Unless you have a specific need for a professional-standard microphone, we recommend a USB microphone. Many of these microphones are sold as podcasting microphones and are perfectly adequate for use in Camtasia Studio. There are two main types of external microphone:

- Consider a lapel microphone if you plan to operate the computer as you record or present to the camera while you are speaking. Lapel microphones clip on to your clothing and leave your hands free.
- If you are more comfortable working at a desk, a microphone with a sturdy tripod stand will be a good investment. A good stand will give us a greater degree of flexibility when it comes to microphone placement.

An external microphone with built-in noise cancellation also gives us a degree of control at the recording stage, rather than having to edit out noise later.

How to set up an external microphone

We can set up the external microphone before we begin recording by following these steps:

1. Navigate to Tools | Voice Narration. The Voice Narration screen is displayed.
2. Click on Audio setup wizard.... The Audio Setup Wizard screen is displayed.
3. Select the Audio device, as shown in the following screenshot.

Summary

In this article, we have looked at a range of ways to improve the quality of the audio in our Camtasia Studio projects. We have considered voice-over recording techniques, equipment, editing, sound effects, and background music.

Resources for Article:

Further resources on this subject:
Editing attributes [Article]
Basic Editing [Article]
Video Editing in Blender using Video Sequence Editor: Part 1 [Article]

Taking Control of Reactivity, Inputs, and Outputs

Packt
23 Oct 2013
7 min read
(For more resources related to this topic, see here.)

Showing and hiding elements of the UI

We'll start easy with a simple function that you are certainly going to need if you build even a moderately complex application. Those of you who have been doing the extra credit exercises and/or experimenting with your own applications will probably have already wished for this or, indeed, have already found it. conditionalPanel() allows you to show/hide UI elements based on other selections within the UI. The function takes a condition (in JavaScript, but the form and syntax will be familiar from many languages) and a UI element, and displays the UI element only when the condition is true. This is actually used a couple of times in the advanced GA application, and indeed in all the applications I've ever written of even moderate complexity. The following is a simpler example (from ui.R, of course, in the first section, within sidebarPanel()), which allows users who request a smoothing line to decide what type they want:

conditionalPanel(condition = "input.smoother == true",
  selectInput("linearModel", "Linear or smoothed",
              list("lm", "loess")))

As you can see, the condition appears very R/Shiny-like, except with the "." operator familiar to JavaScript users in place of "$", and with "true" in lower case. This is a very simple but powerful way of making sure that your UI is not cluttered with irrelevant material.
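To see the behavior in isolation, the following self-contained sketch can be pasted into an R session (it uses the single-file shinyApp() style available in newer versions of Shiny; the widget IDs are taken from the fragment above):

library(shiny)

shinyApp(
  ui = fluidPage(
    checkboxInput("smoother", "Add smoother"),
    # Shown only while the checkbox above is ticked
    conditionalPanel(condition = "input.smoother == true",
      selectInput("linearModel", "Linear or smoothed",
                  list("lm", "loess")))
  ),
  server = function(input, output) {}
)

Ticking and unticking the checkbox shows and hides the select input with no server-side code at all, since the condition is evaluated in JavaScript in the browser.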
Giving names to tabPanel elements

In order to further streamline the UI, we're going to hide the hour selector when the monthly graph is displayed, and the date selector when the hourly graph is displayed. The difference is illustrated in the following screenshot with side-by-side pictures, the hourly figures UI on the left-hand side and the monthly figures on the right-hand side:

In order to do this, we first have to give the tabs of the tabbed output names. This is done as follows (the new code being the id and value arguments):

tabsetPanel(id = "theTabs",
  tabPanel("Summary", textOutput("textDisplay"),
           value = "summary"),
  tabPanel("Monthly figures", plotOutput("monthGraph"),
           value = "monthly"),
  tabPanel("Hourly figures", plotOutput("hourGraph"),
           value = "hourly")
)

As you can see, the whole panel is given an ID (theTabs), and then each tabPanel is also given a name (summary, monthly, and hourly). They are referred to in the server.R file very simply as input$theTabs. Let's have a quick look at a chunk of code in server.R that references the tab names; this code makes sure that we subset based on date only when the date selector is actually visible, and by hour only when the hour selector is actually visible. Our function to calculate and pass data now looks like the following (the new parts are the two if blocks keyed on input$theTabs):

passData <- reactive({
  if(input$theTabs != "hourly"){
    analytics <- analytics[analytics$Date %in%
      seq.Date(input$dateRange[1], input$dateRange[2],
               by = "days"),]
  }
  if(input$theTabs != "monthly"){
    analytics <- analytics[analytics$Hour %in%
      as.numeric(input$minimumTime):as.numeric(input$maximumTime),]
  }
  analytics <- analytics[analytics$Domain %in%
    unlist(input$domainShow),]
  analytics
})

As you can see, subsetting by month is carried out only when the date display is visible (that is, when the hourly tab is not shown), and vice versa. Finally, we can make our changes to ui.R to remove parts of the UI based on tab selection:

conditionalPanel(condition = "input.theTabs != 'hourly'",
  dateRangeInput(inputId = "dateRange",
                 label = "Date range",
                 start = "2013-04-01",
                 max = Sys.Date())
),
conditionalPanel(condition = "input.theTabs != 'monthly'",
  sliderInput(inputId = "minimumTime",
              label = "Hours of interest- minimum",
              min = 0, max = 23, value = 0, step = 1),
  sliderInput(inputId = "maximumTime",
              label = "Hours of interest- maximum",
              min = 0, max = 23, value = 23, step = 1)
)

Note the use, in the latter example, of two UI elements within the same conditionalPanel() call; it helps you keep your code clean and easy to debug.

Reactive user interfaces

Another trick you will definitely want up your sleeve at some point is a reactive user interface. This enables you to change your UI (for example, the number or content of radio buttons) based on reactive functions. For example, consider an application that I wrote related to survey responses across a broad range of health services in different areas. The services are related to each other in quite a complex hierarchy, and over time, different areas and services respond (or cease to exist, or merge, or change their name...), which means that for each time period the user might be interested in, there would be a totally different set of areas and services. The only sensible solution to this problem is to have the user tell you which area and date range they are interested in and then give them back the correct list of services that have survey responses within that area and date range.

The example we're going to look at is a little simpler than this, just to keep from getting bogged down in too much detail, but the principle is exactly the same and you should not find this idea too difficult to adapt to your own UI. We are going to imagine that your users are interested in the individual domains from which people are accessing the site, rather than just having them lumped together as the NHS domain and all others. To this end, we will have a combo box with each individual domain listed. This combo box is likely to contain a very high number of domains across the whole time range, so we will let users constrain the data by date and only return the domains that feature in that range. Not the most realistic example, but it will illustrate the principle for our purposes.

Reactive user interface example – server.R

The big difference is that instead of writing your UI definition in your ui.R file, you place it in server.R and wrap it in renderUI(). Then all you do is point to it from your ui.R file. Let's have a look at the relevant bit of the server.R file:

output$reacDomains <- renderUI({
  domainList = unique(as.character(passData()$networkDomain))
  selectInput("subDomains", "Choose subdomain", domainList)
})

The first line takes the reactive dataset that contains only the data between the dates selected by the user and gives all the unique values of domains within it. The second line is a widget type we have not used yet, which generates a combo box. The usual id and label arguments are given, followed by the values that the combo box can take. This is taken from the variable defined in the first line.
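The same pattern can be tried out in a minimal, self-contained sketch (again using the single-file shinyApp() style; the dataset names and widget IDs here are illustrative, not from the GA application):

library(shiny)

shinyApp(
  ui = fluidPage(
    selectInput("dataset", "Dataset", c("mtcars", "iris")),
    # Placeholder that the renderUI() below will fill in
    uiOutput("reacColumns")
  ),
  server = function(input, output) {
    output$reacColumns <- renderUI({
      # Rebuild the combo box whenever the chosen dataset changes;
      # the datasets package is attached by default, so get() finds them
      columnList <- names(get(input$dataset))
      selectInput("column", "Choose column", columnList)
    })
  }
)

Switching between mtcars and iris regenerates the second combo box with the columns of whichever dataset is selected, which is exactly the mechanism used for the domain list above.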
Reactive user interface example – ui.R

The ui.R file merely needs to point to the reactive definition, as shown in the following line of code (just add it to the list of widgets within sidebarPanel()):

uiOutput("reacDomains")

You can now point to the value of the widget in the usual way, as input$subDomains. Note that you do not use the name as defined in the call to renderUI(), that is, reacDomains, but rather the name as defined within it, that is, subDomains.

Summary

It's a relatively small but powerful toolbox with which you can build a vast array of useful and intuitive applications with comparatively little effort. This article looked at fine-tuning the UI using conditionalPanel() and observe(), and changing our UI reactively.

Resources for Article:

Further resources on this subject:
Fine Tune the View layer of your Fusion Web Application [Article]
Building tiny Web-applications in Ruby using Sinatra [Article]
Spring Roo 1.1: Working with Roo-generated Web Applications [Article]