
How-To Tutorials - Front-End Web Development

341 Articles

npm JavaScript predictions for 2019: React, GraphQL, and TypeScript are three technologies to learn

Bhagyashree R
10 Dec 2018
3 min read
Based on Laurie Voss's talk at Node+JS Interactive 2018, npm has shared some insights and predictions about JavaScript for 2019. These predictions are aimed at helping developers make better technical choices in 2019. Here are the four predictions npm has made:

"You will abandon one of your current tools."

In JavaScript, frameworks and tools don't last; they generally enjoy a phase of peak popularity of 3-5 years, followed by a slow decline as developers maintain legacy applications but move to newer frameworks for new work. Mr. Voss said in his talk, "Nothing lasts forever!.. Any framework that we see today will have its hay days and then it will have an after-life where it will slowly slowly degrade." For developers, this essentially means that it is better to keep learning new frameworks than to hold on to their current tools too tightly.

"Despite a slowdown in growth, React will be the dominant framework in 2019."

Though React's growth slowed in 2018 compared to 2017, it still dominates the web scene: 60% of npm survey respondents said they are using React. In 2019, npm expects even more people to use React for building web applications, and as its user base grows, there will be more tutorials, advice, and bug fixes.

"You'll need to learn GraphQL."

GraphQL client libraries are showing tremendous popularity, and as per npm, GraphQL is going to be a "technical force to reckon with in 2019." It was first publicly released in 2015, and npm considers it still too early to put into production, but given its growing popularity, developers are recommended to learn its concepts in 2019. npm also predicts that developers will find themselves using GraphQL in new projects later in the year and in 2020.

"Somebody on your team will bring in TypeScript."

npm's survey uncovered that 46% of the respondents were using Microsoft's TypeScript, a typed superset of JavaScript that compiles to plain JavaScript. One of the reasons for this major adoption by enthusiasts could be the extra safety TypeScript provides through type checking. Adopting TypeScript in 2019 could prove really useful, especially if you're a member of a larger team.

Read the detailed report and predictions on npm's website.

Further reading:
  • 4 key findings from The State of JavaScript 2018 developer survey
  • TypeScript 3.2 released with configuration inheritance and more
  • 7 reasons to choose GraphQL APIs over REST for building your APIs


Tuning Solr JVM and Container

Packt
22 Jul 2014
6 min read
Some JVMs are commercially optimized for production usage; you can find comparison studies at http://dior.ics.muni.cz/~makub/java/speed.html. Some JVM implementations also provide server versions, which are more appropriate for Solr than the standard ones. Since Solr runs in a JVM, all the standard JVM optimizations for server applications apply to it.

Optimization starts with choosing the right heap size for your JVM. The heap size depends upon the following aspects:

  • Use of facets and sorting options
  • Size of the Solr index
  • Update frequencies on Solr
  • Solr cache

The heap size for the JVM is controlled by the following parameters:

  • -Xms: the minimum heap size allocated when the JVM (that is, the container) initializes
  • -Xmx: the maximum heap size up to which the JVM or J2EE container can grow

Deciding heap size

The heap is a major factor in optimizing the performance of any JVM-based system. The JVM uses the heap to store its objects as well as its own content. Poor allocation of the JVM heap results in a "Java heap space" OutOfMemoryError thrown at runtime, crashing the application. When the heap is allocated too little memory, the application takes longer to initialize and the Java process executes more slowly at runtime. Conversely, an oversized heap may underutilize expensive memory that could otherwise have been used by other applications.

The JVM starts with the initial heap size and, as demand grows, tries to resize the heap to accommodate new space requirements. If the demand for memory crosses the maximum limit, the JVM throws an OutOfMemoryError. Objects that have expired or are unused consume memory unnecessarily; this memory can be reclaimed by releasing those objects through a process called garbage collection (GC). Although it's tricky to work out whether you should increase or reduce the heap size, there are simple ways to help you decide.
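As a minimal sketch (the heap sizes and start command are illustrative assumptions, not from the article), the -Xms and -Xmx parameters described above could be passed when launching the example Jetty that ships with Solr:

```
# Illustrative only: pin the heap to 2 GB so the JVM neither shrinks
# nor grows it at runtime (-Xms and -Xmx set to the same value).
java -Xms2g -Xmx2g -jar start.jar
```

Setting the minimum and maximum to the same value avoids the cost of heap resizing at runtime, at the price of committing that memory up front.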
In a memory graph, typically, when you start the Solr server and run your first query, memory usage increases; based on subsequent queries and memory size, the graph may then keep increasing or remain constant. When garbage collection is run automatically by the JVM, it sharply brings usage down. If it's difficult to trace GC execution from the memory graph, you can run Solr with the following additional parameters:

-Xloggc:<some file> -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails

If you monitor heap usage continuously, you will see a graph that rises and falls (a sawtooth): the increases come from queries consistently demanding more memory for your Solr cache, and the decreases come from GC runs. In a healthy running environment, the average heap size should not grow over time, and the number of GC runs should be less than the number of queries executed on Solr. If that's not the case, you need more memory.

Features such as Solr faceting and sorting require more memory on top of traditional search. If memory is unavailable, the operating system has to swap against the storage media, increasing response times, so users experience high latency when searching large indexes. Many operating systems let you control how aggressively programs are swapped.

How can we optimize the JVM?

Whenever a facet query is run in Solr, memory is used to store each unique element in the index for each faceted field. So, for example, faceting over a small set of values (a year from 1980 to 2014) consumes less memory than faceting over a larger set of values, such as people's names (which vary from person to person).
To reduce memory usage, you may set the term index divisor to 2 (the default is 4) with the following in solrconfig.xml:

```xml
<indexReaderFactory name="IndexReaderFactory" class="solr.StandardIndexReaderFactory">
  <int name="setTermIndexDivisor">2</int>
</indexReaderFactory>
```

(From Solr 4.x onwards, the ability to set the minimum and maximum term index block sizes is no longer available.) This halves the memory used for storing all the terms; however, it doubles the seek time for terms and will have a small impact on your search runtime.

One cause of a large heap is the size of the index, so one solution is to introduce SolrCloud and distribute a large index across multiple shards. This does not reduce your total memory requirement, but it spreads it across the cluster. You can find some optimized GC parameters described at http://wiki.apache.org/solr/ShawnHeisey#GC_Tuning. Similarly, Oracle provides a GC tuning guide for advanced development stages at http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html. Additionally, you can read about common Solr performance problems at http://wiki.apache.org/solr/SolrPerformanceProblems.

Optimizing the JVM container

JVM containers serve each request in a thread, which enables the JVM to support concurrent sessions created for different users connecting at the same time. This concurrency can, however, be capped to reduce the load on the search server. If you are using Apache Tomcat, you can modify the connector entries in server.xml to change the number of concurrent connections; similarly, in Jetty, you can control the number of connections held by modifying jetty.xml. Other containers have their own equivalent configuration files. Many containers also provide a cache on top of the application to avoid server hits; this cache can be utilized for static pages such as the search page.
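As a hedged sketch of the kind of server.xml connector entry described above (the attribute values are illustrative assumptions, not from the article), Tomcat's concurrency can be capped like so:

```xml
<!-- Illustrative values only: cap request-processing threads at 200 and
     queue up to 100 further connections before refusing new ones. -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="200"
           acceptCount="100"
           connectionTimeout="20000" />
```

Lowering maxThreads reduces peak load on the search server at the cost of queueing requests during bursts.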
Containers such as WebLogic provide a development mode versus a production mode. Typically, development mode runs with 15 threads and a limited JDBC pool size by default, whereas for production mode these limits can be increased. For tuning containers, besides the standard optimizations, the container-specific performance-tuning guidelines should be followed:

  • Jetty: http://wiki.eclipse.org/Jetty/Howto/High_Load
  • Tomcat: http://www.mulesoft.com/tcat/tomcat-performance and http://javamaster.wordpress.com/2013/03/13/apache-tomcat-tuning-guide/
  • JBoss: https://access.redhat.com/site/documentation/en-US/JBoss_Enterprise_Application_Platform/5/pdf/Performance_Tuning_Guide/JBoss_Enterprise_Application_Platform-5-Performance_Tuning_Guide-en-US.pdf
  • WebLogic: http://docs.oracle.com/cd/E13222_01/wls/docs92/perform/WLSTuning.html
  • WebSphere: http://www.ibm.com/developerworks/websphere/techjournal/0909_blythe/0909_blythe.html

Apache Solr works best with the default container it ships with, Jetty, which has a smaller footprint than containers such as JBoss and Tomcat, whose memory requirements are a little higher.

Summary

In this article, we have learned how Apache Solr runs on the underlying JVM in a J2EE container, and how to tune both the JVM and the container.

Further resources on this subject:
  • Apache Solr: Spellchecker, Statistics, and Grouping Mechanism
  • Getting Started with Apache Solr
  • Apache Solr PHP Integration


Angular Zen

Packt
19 Sep 2013
5 min read
Meet AngularJS

AngularJS is a client-side MVC framework written in JavaScript. It runs in a web browser and greatly helps us (developers) write modern, single-page, AJAX-style web applications. It is a general-purpose framework, but it shines when used to write CRUD (Create, Read, Update, Delete) type web applications.

Getting familiar with the framework

AngularJS is a recent addition to the list of client-side MVC frameworks, yet it has managed to attract a lot of attention, mostly due to its innovative templating system, ease of development, and very solid engineering practices. Indeed, its templating system is unique in many respects:

  • It uses HTML as the templating language
  • It doesn't require an explicit DOM refresh, as AngularJS is capable of tracking user actions, browser events, and model changes to figure out when and which templates to refresh
  • It has a very interesting and extensible components subsystem, making it possible to teach a browser how to interpret new HTML tags and attributes

The templating subsystem might be the most visible part of AngularJS, but make no mistake: AngularJS is a complete framework, packed with several utilities and services typically needed in single-page web applications. AngularJS also has some hidden treasures: dependency injection (DI) and a strong focus on testability. The built-in support for DI makes it easy to assemble a web application from smaller, thoroughly tested services, and the design of the framework and the tooling around it promote testing practices at each stage of the development process.

Finding your way in the project

AngularJS is a relatively new actor on the client-side MVC framework scene; its 1.0 version was released only in June 2012. In reality, the work on this framework started in 2009 as a personal project of Miško Hevery, a Google employee.
The initial idea turned out to be so good that, at the time of writing, the project is officially backed by Google Inc., and there is a whole team at Google working full-time on the framework. AngularJS is an open source project hosted on GitHub (https://github.com/angular/angular.js) and licensed by Google, Inc. under the terms of the MIT license.

The community

At the end of the day, no project would survive without people standing behind it. Fortunately, AngularJS has a great, supportive community. The following are some of the communication channels where one can discuss design issues and request help:

  • The [email protected] mailing list (Google group)
  • The Google+ community at https://plus.google.com/u/0/communities/115368820700870330756
  • The #angularjs IRC channel
  • The [angularjs] tag at http://stackoverflow.com

The AngularJS team stays in touch with the community by maintaining a blog (http://blog.angularjs.org/) and by being present on social media: Google+ (+AngularJS) and Twitter (@angularjs). There are also community meetups organized around the world; if one happens to be hosted near where you live, it is definitely worth attending!

Online learning resources

AngularJS has its own dedicated website (http://www.angularjs.org) where we can find everything one would expect from a respectable framework: a conceptual overview, tutorials, a developer's guide, an API reference, and so on. Source code for all released AngularJS versions can be downloaded from http://code.angularjs.org. People looking for code examples won't be disappointed, as the AngularJS documentation itself has plenty of code snippets. On top of this, we can browse a gallery of applications built with AngularJS (http://builtwith.angularjs.org). A dedicated YouTube channel (http://www.youtube.com/user/angularjs) has recordings from many past events as well as some very useful video tutorials.
Libraries and extensions

While the AngularJS core is packed with functionality, the active community keeps adding new extensions almost every day. Many of those are listed on a dedicated website: http://ngmodules.org.

Tools

AngularJS is built on top of HTML and JavaScript, two technologies that we've been using in web development for years. Thanks to this, we can continue using our favorite editors and IDEs, browser extensions, and so on without any issues. Additionally, the AngularJS community has contributed several interesting additions to the existing HTML/JavaScript toolbox.

Batarang

Batarang is a Chrome developer tool extension for inspecting AngularJS web applications. It is very handy for visualizing and examining the runtime characteristics of AngularJS applications, and we are going to use it extensively in this article to peek under the hood of a running application. Batarang can be installed from the Chrome Web Store (AngularJS Batarang) like any other Chrome extension.

Plunker and jsFiddle

Both Plunker (http://plnkr.co) and jsFiddle (http://jsfiddle.net) make it very easy to share live code snippets (JavaScript, CSS, and HTML). While these tools are not strictly reserved for use with AngularJS, they were quickly adopted by the AngularJS community to share small code examples, scenarios that reproduce bugs, and so on. Plunker deserves a special mention, as it was written in AngularJS and is a very popular tool in the community.

IDE extensions and plugins

Each of us has a favorite IDE or editor. The good news is that there are existing plugins/extensions for several popular IDEs and editors, such as Sublime Text 2 (https://github.com/angular-ui/AngularJS-sublime-package), JetBrains' products (http://plugins.jetbrains.com/plugin?pr=idea&pluginId=6971), and so on.


Component Communication in React.js

Richard Feldman
30 Jun 2014
5 min read
You can get a long way in React.js solely by having parent components create child components with varying props, and having each component deal only with its own state. But what happens when a child wants to affect its parent's state or props? Or when a child wants to inspect that parent's state or props? Or when a parent wants to inspect its child's state? With the right techniques, you can handle communication between React components without introducing unnecessary coupling.

Child Elements Altering Parents

Suppose you have a list of buttons, and when you click one, a label elsewhere on the page updates to reflect which button was most recently clicked. Although any button's click handler can alter that button's state, the handler has no intrinsic knowledge of the label that we need to update. So how can we give it access to do what we need? The idiomatic approach is to pass a function through props. Like so:

```javascript
var ExampleParent = React.createClass({
  getInitialState: function() {
    return {lastLabelClicked: "none"};
  },
  render: function() {
    var me = this;
    var setLastLabel = function(label) {
      me.setState({lastLabelClicked: label});
    };

    return <div>
      <p>Last clicked: {this.state.lastLabelClicked}</p>
      <LabeledButton label="Alpha Button" setLastLabel={setLastLabel}/>
      <LabeledButton label="Beta Button" setLastLabel={setLastLabel}/>
      <LabeledButton label="Delta Button" setLastLabel={setLastLabel}/>
    </div>;
  }
});

var LabeledButton = React.createClass({
  handleClick: function() {
    this.props.setLastLabel(this.props.label);
  },
  render: function() {
    return <button onClick={this.handleClick}>{this.props.label}</button>;
  }
});
```

Note that this does not actually affect the label's state directly; rather, it affects the parent component's state, and doing so will cause the parent to re-render the label as appropriate. What if we wanted to avoid using state here, and instead modify the parent's props? Since props are externally specified, this would be a lot of extra work.
Rather than telling the parent to change, the child would necessarily have to tell its parent's parent (its grandparent, in other words) to change that grandparent's child. This is not a route worth pursuing; besides being less idiomatic, there is no real benefit to changing the parent's props when you could change its state instead.

Inspecting Props

Once created, the only way for a child's props to "change" is for the child to be recreated when the parent's render method is called again. This helpfully guarantees that the parent's render method has all the information needed to determine the child's props, not only in the present, but for the indefinite future as well. Thus if another of the parent's methods needs to know the child's props, for example a click handler, it's simply a matter of making sure that data is available outside the parent's render method. An easy way to do this is to record it in the parent's state:

```javascript
var ExampleComponent = React.createClass({
  handleClick: function() {
    var buttonStatus = this.state.buttonStatus;
    // ...do something based on buttonStatus
  },
  render: function() {
    // Pretend it took some effort to determine this value
    var buttonStatus = "btn-disabled";
    this.setState({buttonStatus: buttonStatus});

    return <button className={buttonStatus} onClick={this.handleClick}>
      Click this button!
    </button>;
  }
});
```

It's even easier to let a child know about its parent's props: simply have the parent pass along whatever information is necessary when it creates the child. It's cleaner to pass along only what the child needs to know, but if all else fails you can go as far as to pass in the parent's entire set of props:

```javascript
var ParentComponent = React.createClass({
  render: function() {
    return <ChildComponent parentProps={this.props} />;
  }
});
```

Inspecting State

State is trickier to inspect, because it can change on the fly. But is it ever strictly necessary for components to inspect each other's states, or might there be a universal workaround?
Suppose you have a child whose click handler cares about its parent's state. Is there any way we could refactor things such that the child could always know that value, without having to ask the parent directly? Absolutely! Simply have the parent pass the current value of its state to the child as a prop. Whenever the parent's state changes, it will re-run its render method, so the child (including its click handler) will automatically be recreated with the new prop. Now the child's click handler will always have up-to-date knowledge of the parent's state, just as we wanted.

Suppose instead that we have a parent that cares about its child's state. As we saw earlier with the buttons-and-labels example, children can affect their parents' states, so we can use that technique again here to refactor our way into a solution. Simply include in the child's props a function that updates the parent's state, and have the child incorporate that function into its relevant state changes. With the child thus keeping the parent's state up to speed on relevant changes to the child's state, the parent can obtain whatever information it needs simply by inspecting its own state.

Takeaways

Idiomatic communication between parent and child components can easily be accomplished by passing state-altering functions through props. When it comes to inspecting props and state, a combination of passing props on a need-to-know basis and refactoring state changes can ensure the relevant parties have all the information they need, whenever they need it.

About the Author

Richard Feldman is a functional programmer who specializes in pushing the limits of browser-based UIs. He's built a framework that performantly renders hundreds of thousands of shapes in HTML5 canvas, a writing web app that functions like a desktop app in the absence of an Internet connection, and much more in between.
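The state-synchronization technique from "Inspecting State" above can be sketched framework-free. In this sketch, plain objects stand in for React components, and the names (`makeParent`, `makeChild`, `onChildValueChange`) are hypothetical, not part of any React API: the parent hands the child a callback through props, and the child calls it from its own state changes, so the parent's state always mirrors the child's.

```javascript
function makeParent() {
  var parent = {
    state: { childValue: null },
    setState: function (patch) {
      // Stand-in for React's setState: merge the patch into state.
      Object.assign(this.state, patch);
    }
  };
  // The function the parent will pass down through props.
  parent.onChildValueChange = function (value) {
    parent.setState({ childValue: value });
  };
  return parent;
}

function makeChild(props) {
  return {
    state: { value: 0 },
    setValue: function (value) {
      this.state.value = value;
      // Keep the parent's state up to speed on this change.
      props.onChildValueChange(value);
    }
  };
}

var parent = makeParent();
var child = makeChild({ onChildValueChange: parent.onChildValueChange });
child.setValue(42);
console.log(parent.state.childValue); // 42
```

The parent never reaches into the child; it only inspects its own state, which the child keeps current via the callback it received as a prop.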


Layout with Ext.NET

Packt
30 Jan 2013
16 min read
Border layout

The Border layout is perhaps one of the most popular layouts. While quite complex at first glance, it is popular because it turns out to be quite flexible to design with and to use. It offers the common elements often seen in complex web applications, such as an area for header content, footer content, a main content area, plus areas to either side, all separately scrollable and resizable if needed, among other benefits.

In Ext speak, these areas are called regions, and they are given the names North, South, Center, East, and West. Only the Center region is mandatory. It is also the one without any given dimensions; it resizes to fit the remaining area after all the other regions have been set. A West or East region must have a width defined, and a North or South region must have a height defined. These can be defined using the Width or Height property (in pixels), or using the Flex property, which helps provide ratios.

Each region can be any Ext.NET component; a very common option is Panel or a subclass of Panel. There are limits, however: for example, a Window is intended to be floating, so it cannot be one of the regions. This offers a lot of flexibility and can help avoid nesting too many Panels in order to show other components such as GridPanels or TabPanels.

Here is a screenshot showing a simple Border layout being applied to the entire page (that is, the viewport) using a 2-column style layout:

We have configured a Border layout with two regions: a West region and a Center region. The Border layout is applied to the whole page (this is an example of using it with a Viewport).
Here is the code:

```aspx
<%@ Page Language="C#" %>
<!DOCTYPE html>
<html>
<head runat="server">
    <title>Border Layout Example</title>
</head>
<body>
    <ext:ResourceManager runat="server" Theme="Gray" />
    <ext:Viewport runat="server" Layout="border">
        <Items>
            <ext:Panel Region="West" Split="true" Title="West" Width="200" Collapsible="true" />
            <ext:Panel Region="Center" Title="Center content" />
        </Items>
    </ext:Viewport>
</body>
</html>
```

The code has a Viewport configured with a Border layout via the Layout property. Then, two Panels are added to the Items collection, for the West and Center regions. The value of the Layout property is case insensitive and can take variations such as Border, border, borderlayout, BorderLayout, and so on.

As regions of a Border layout, we can also configure options such as whether we want split bars, whether Panels are collapsible, and more. Our example uses the following:

  • The West region Panel has been configured to be collapsible (using Collapsible="true"). This creates a small button in the title area which, when clicked, smoothly animates the collapse of that region (and can be clicked again to open it). When collapsed, the title area itself can also be clicked, which floats the region into appearance rather than permanently opening it (allowing the user to glimpse the content and mouse away to close the region). This floating capability can be turned off by using Floatable="false" on the Panel.
  • Split="true" gives a split bar with a collapse button between the regions.
This next example shows a more complex Border layout where all regions are used. The markup is very similar to the first example, so only the Viewport portion is shown:

```aspx
<ext:Viewport runat="server" Layout="border">
    <Items>
        <ext:Panel Region="North" Split="true" Title="North" Height="75" Collapsible="true" />
        <ext:Panel Region="West" Split="true" Title="West" Width="150" Collapsible="true" />
        <ext:Panel runat="server" Region="Center" Title="Center content" />
        <ext:Panel Region="East" Split="true" Title="East" Width="150" Collapsible="true" />
        <ext:Panel Region="South" Split="true" Title="South" Height="75" Collapsible="true" />
    </Items>
</ext:Viewport>
```

Although each Panel has a title set via the Title property, it is optional. For example, you may want to omit the title from the North region if you want an application header or banner bar, where a title bar could be superfluous.

Different ways to create the same components

The previous examples were shown using the specific Layout="Border" markup. However, there are a number of ways this can be marked up or written in code.
For example:

  • You can code these entirely in markup, as we have seen
  • You can create these entirely in code
  • You can use a mixture of markup and code to suit your needs

Here are some quick examples.

Border layout from code

This is the code version of the first two-panel Border layout example:

```aspx
<%@ Page Language="C#" %>
<script runat="server">
    protected void Page_Load(object sender, EventArgs e)
    {
        var viewport = new Viewport
        {
            Layout = "border",
            Items =
            {
                new Ext.Net.Panel
                {
                    Region = Region.West,
                    Title = "West",
                    Width = 200,
                    Collapsible = true,
                    Split = true
                },
                new Ext.Net.Panel
                {
                    Region = Region.Center,
                    Title = "Center content"
                }
            }
        };

        this.Form.Controls.Add(viewport);
    }
</script>
<!DOCTYPE html>
<html>
<head runat="server">
    <title>Border Layout Example</title>
</head>
<body>
    <form runat="server">
        <ext:ResourceManager runat="server" Theme="Gray" />
    </form>
</body>
</html>
```

There are a number of things going on here worth mentioning:

  • The appropriate Panels have been added to the Viewport's Items collection
  • Because the Viewport is the outermost control, it is added to the page via the form's Controls collection

If you are used to programming with ASP.NET, you normally add a control to the Controls collection of an ASP.NET control. However, when Ext.NET controls add themselves to each other, it is usually done via the Items collection. This helps create a more optimal initialization script, and it also means that only Ext.NET components participate in the layout logic. There is also the Content property in markup (or the ContentControls property in code-behind), which can be used to add non-Ext.NET controls or raw HTML, though they will not take part in the layout.

It is important to note that configuring Items and Content together should be avoided, especially if a layout is set on the parent container. This is because the parent container will only use the Items collection; some layouts may hide the Content section altogether or produce other undesired results. In general, use only one at a time, not both.

Another important thing to bear in mind is that the Viewport must be the only top-level visible control. That means it cannot be placed inside a div, for example; it must be added directly to the body or to the <form runat="server"> only. In addition, there should not be any sibling controls (except floating widgets, like Window).

Mixing markup and code

The same two-panel Border layout can also be mixed in various ways. For example:

```aspx
<%@ Page Language="C#" %>
<script runat="server">
    protected void Page_Load(object sender, EventArgs e)
    {
        this.WestPanel.Title = "West";
        this.WestPanel.Split = true;
        this.WestPanel.Collapsible = true;

        this.Viewport1.Items.Add(new Ext.Net.Panel
        {
            Region = Region.Center,
            Title = "Center content"
        });
    }
</script>
<!DOCTYPE html>
<html>
<head runat="server">
    <title>Border Layout Example</title>
</head>
<body>
    <ext:ResourceManager runat="server" />
    <ext:Viewport ID="Viewport1" runat="server" Layout="Border">
        <Items>
            <ext:Panel ID="WestPanel" runat="server" Region="West" Width="200" />
        </Items>
    </ext:Viewport>
</body>
</html>
```

In this example, the Viewport and the initial part of the West region have been defined in markup. The Center region Panel has been added in code, and the rest of the West Panel's properties have been set in code-behind. As with most ASP.NET controls, you can mix and match these as you need.

Loading layout items via User Controls

A powerful capability that Ext.NET provides is the ability to load layout components from User Controls. This is achieved by using the UserControlLoader component.
Consider this example:

```aspx
<ext:Viewport runat="server" Layout="Border">
    <Items>
        <ext:UserControlLoader Path="WestPanel.ascx" />
        <ext:Panel Region="Center" />
    </Items>
</ext:Viewport>
```

In this code, we have replaced the West region Panel used in earlier examples with a UserControlLoader component, setting its Path property to load a user control from the same directory as this page. That user control is very simple for our example:

```aspx
<%@ Control Language="C#" %>
<ext:Panel runat="server" Region="West" Split="true" Title="West" Width="200" Collapsible="true" />
```

In other words, we have simply moved the Panel from our earlier example into a user control and loaded that instead. Though a small example, this demonstrates some useful reuse capability. Also note that although we used the UserControlLoader in this Border layout example, it can be used anywhere else as needed, as it is an Ext.NET component.

The containing component does not have to be a Viewport

The containing component does not have to be a Viewport; it can be any other appropriate container, such as another Panel or a Window. Let's do just that:

```aspx
<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:Panel Region="West" Split="true" Title="West" Width="150" Collapsible="true" />
        <ext:Panel Region="Center" Title="Center content" />
    </Items>
</ext:Window>
```

The container has changed from a Viewport to a Window (with dimensions).

More than one item with the same region

In previous versions of Ext JS and Ext.NET, you could have only one component in a given region: for example, only one North region Panel, one West region Panel, and so on. New to Ext.NET 2 is the ability to have more than one item in the same region. This can be very flexible and can improve performance slightly.
This is because, in the past, if you wanted the appearance of, say, multiple West columns, you would need to create nested Border layouts (which is still an option, of course). But now, you can simply add two components to a Border layout and give them the same region value. Nested Border layouts are still possible in case the flexibility is needed (and this helps make porting from an earlier version easier). First, here is an example using nested Border layouts to achieve three vertical columns:

```aspx
<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" />
        <ext:Panel Region="Center" Layout="Border" Border="false">
            <Items>
                <ext:Panel Region="West" Split="true" Title="Inner West" Width="100" Collapsible="true" />
                <ext:Panel Region="Center" Title="Inner Center" />
            </Items>
        </ext:Panel>
    </Items>
</ext:Window>
```

This code will produce the following output:

The previous code is only a slight variation of the example preceding it, but it has a few notable changes:

- The Center region Panel has itself been given a Layout of Border. This means that although this is the Center region of the window it belongs to, the Panel is itself another Border layout.
- The nested Border layout then has two further Panels: an additional West region and an additional Center region.
- The Title has also been removed from the outer Center region so that, when rendered, the panels line up to look like three Panels next to each other.
Here is the same example, but without using a nested border Panel; instead, we just add another West region Panel to the containing Window:

```aspx
<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" />
        <ext:Panel Region="West" Split="true" Title="Inner West" Width="100" Collapsible="true" />
        <ext:Panel Region="Center" Title="Center content" Border="false" />
    </Items>
</ext:Window>
```

Regions are not limited to Panels only

A common problem with layouts is starting off by creating more deeply nested controls than needed, and the example earlier shows that this is not always necessary. Multiple items with the same region help to prevent nesting Border layouts unnecessarily. Another inefficiency typical of Border layout usage is using too many containing Panels in each region. For example, there may be a Center region Panel which then contains a TabPanel. However, as TabPanel is a subclass of Panel, it can be given a region directly, thereby avoiding an unnecessary Panel to contain the TabPanel:

```aspx
<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" />
        <ext:TabPanel Region="Center">
            <Items>
                <ext:Panel Title="First Tab" />
                <ext:Panel Title="Second Tab" />
            </Items>
        </ext:TabPanel>
    </Items>
</ext:Window>
```

This code will produce the following output:

The differences from the nested Border layout example shown earlier are:

- The outer Center region has been changed from a Panel to a TabPanel.
- TabPanels manage their own items' layout, so Layout="Border" is removed.
- The TabPanel also has Border="false" taken out (so it is true by default).
- The inner Panels have had their regions, Split, and other border-related attributes taken out. This is because they are not inside a nested Border layout now; they are tabs.
Other Panels, such as TreePanel or GridPanel, can also be used, as we will see. Something that can be fiddly from time to time is knowing which borders to take off and which ones to keep when you have nested layouts and controls like this. There is a logic to it, but sometimes a quick bit of trial and error can also help to figure it out! As a programmer this sounds minor and unimportant, but usually you want to prevent the borders from becoming too thick, as aesthetically it can be off-putting, whereas just the right amount of borders can help make the application look clean and professional. You can always give components a class via the Cls property and then fine-tune the borders (and other styles, of course) in CSS as you need.

Weighted regions

Another feature new to Ext.NET 2 is that regions can be given a weighting to influence how they are rendered and spaced out. Prior versions would require nested Border layouts to achieve this. To see how this works, consider this example, which puts a South region only inside the Center Panel. To achieve this output the old way, with nested Border layouts, we would do something like this:

```aspx
<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" />
        <ext:Panel Region="Center" Layout="Border" Border="false">
            <Items>
                <ext:Panel Region="Center" Title="Center" />
                <ext:Panel Region="South" Split="true" Title="South" Height="100" Collapsible="true" />
            </Items>
        </ext:Panel>
    </Items>
</ext:Window>
```

In the preceding code, we make the Center region itself a Border layout with an inner Center region and a South region. This way, the outer West region takes up all the space on the left. If the South region were part of the outer Border layout, it would span the entire bottom area of the window. But the same effect can be achieved using weighting.
This means that you do not need nested Border layouts; the three Panels can all be items of the containing Window, which means fewer objects being created on the client:

```aspx
<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" Weight="10" />
        <ext:Panel Region="Center" Title="Center" />
        <ext:Panel Region="South" Split="true" Title="South" Height="100" Collapsible="true" />
    </Items>
</ext:Window>
```

The way region weights work is that the region with the highest weight is assigned space from the border before the other regions. If more than one region has the same weight as another, they are assigned space based on their position in the owner's Items collection (that is, first come, first served). In the preceding code, we set the Weight property to 10 on the West region only, so it is rendered first and thus takes up all the space it can before the other two are rendered. This allows for many flexible options, and Ext.NET has an example where you can configure different values to see the effects of different weights: http://examples.ext.net/#/Layout/BorderLayout/Regions_Weights/

As the previous examples show, there are many ways to define a layout, offering you more flexibility, especially when generating it from code-behind in a very dynamic way. Knowing that there are so many ways to define the layout, we can now speed up our look at many other types of layouts.

Summary

This article covered one of the numerous layout options available in Ext.NET, the Border layout, to help you organize your web applications.

Resources for Article:

Further resources on this subject:
- Your First ASP.NET MVC Application [Article]
- Customizing and Extending the ASP.NET MVC Framework [Article]
- Tips & Tricks for Ext JS 3.x [Article]
Authentication and Authorization in MODx
Packt
20 Oct 2009
1 min read
It is vital to keep this distinction in mind to be able to understand the complexities explained in this article. You will also learn how MODx allows grouping of documents, users, and permissions.

Create web users

Let us start by creating a web user. Web users are users who can access restricted document groups in the website frontend; they do not have Manager access. Web users can identify themselves at login by using login forms. They are allowed to log in from the user page, but they cannot log in using the Manager interface. To create a web user, perform the following steps:

1. Click on the Web Users menu item in the Security menu.
2. Click on New Web User.
3. Fill in the fields with the following information:

| Field Name | Value |
| --- | --- |
| Username | samira |
| Password | samira123 |
| Email Address | [email protected] |
Introduction to MapReduce
Packt
25 Jun 2014
10 min read
(For more resources related to this topic, see here.)

The Hadoop platform

Hadoop can be used for a lot of things. However, when you break it down to its core parts, the primary features of Hadoop are the Hadoop Distributed File System (HDFS) and MapReduce. HDFS stores read-only files by splitting them into large blocks and distributing and replicating them across a Hadoop cluster. Two services are involved with the filesystem. The first service, the NameNode, acts as a master and keeps the directory tree of all file blocks that exist in the filesystem, tracking where the file data is kept across the cluster. The actual file data is stored by the second service, across multiple DataNode nodes. MapReduce is a programming model for processing large datasets with a parallel, distributed algorithm in a cluster. The most prominent trait of Hadoop is that it brings processing to the data; MapReduce executes tasks as close to the data as possible, as opposed to the data travelling to where the processing is performed. Two services are involved in a job execution. A job is submitted to the JobTracker service, which first discovers the location of the data and then orchestrates the execution of the map and reduce tasks. The actual tasks are executed in multiple TaskTracker nodes. Hadoop handles infrastructure failures, such as network issues and node or disk failures, automatically. Overall, it provides a framework for distributed storage within its distributed filesystem and for the execution of jobs. Moreover, it provides the ZooKeeper service to maintain configuration and distributed synchronization. Many projects surround Hadoop and complete the ecosystem of available Big Data processing tools, such as utilities to import and export data, NoSQL databases, and event/real-time processing systems. The technologies that move Hadoop beyond batch processing focus on in-memory execution models. Overall, multiple projects exist, ranging from batch to hybrid and real-time execution.
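The block-splitting idea described above comes down to simple ceiling division. Here is a small plain-Python sketch of that arithmetic; the block size is HDFS's common default, and the function name is illustrative, not part of any Hadoop API:

```python
# Sketch of HDFS-style block math: a file is split into fixed-size
# blocks that are then distributed and replicated across the cluster.
# The 128 MB default and the function name are illustrative.

def hdfs_block_count(file_size_mb, block_size_mb=128):
    """Number of blocks needed for a file (ceiling division)."""
    return -(-file_size_mb // block_size_mb)

print(hdfs_block_count(1024))  # 8  (a 1 GB file)
print(hdfs_block_count(100))   # 1  (small files still occupy one block)
```

For example, a 1 GB file splits into eight 128 MB blocks, which is exactly the scenario used in the log-counting example later in this article.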
MapReduce

Massive parallel processing of large datasets is a complex process. MapReduce simplifies this by providing a design pattern that instructs algorithms to be expressed in map and reduce phases. Map can be used to perform simple transformations on data, and reduce is used to group data together and perform aggregations. By chaining together a number of map and reduce phases, sophisticated algorithms can be achieved. The shared-nothing architecture of MapReduce prohibits communication between map tasks of the same phase, or between reduce tasks of the same phase. Any communication that is required happens at the end of each phase. The simplicity of this model allows Hadoop to translate each phase, depending on the amount of data that needs to be processed, into tens or even hundreds of tasks being executed in parallel, thus achieving scalable performance. Internally, the map and reduce tasks follow a simplistic data representation. Everything is a key or a value. A map task receives key-value pairs and applies basic transformations, emitting new key-value pairs. Data is then partitioned, and different partitions are transmitted to different reduce tasks. A reduce task also receives key-value pairs, groups them based on the key, and applies basic transformations to those groups.

A MapReduce example

To illustrate how MapReduce works, let's look at an example of a log file with a total size of 1 GB and the following format:

```
INFO MyApp - Entering application.
WARNING com.foo.Bar - Timeout accessing DB - Retrying
ERROR com.foo.Bar - Did it again!
INFO MyApp - Exiting application
```

Once this file is stored in HDFS, it is split into eight 128 MB blocks and distributed across multiple Hadoop nodes. In order to build a MapReduce job to count the number of INFO, WARNING, and ERROR log lines in the file, we need to think in terms of map and reduce phases. In one map phase, we can read local blocks of the file and map each line to a key and a value.
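The map, shuffle, and reduce phases just described can be sketched in plain Python. This is a single-process simulation of the idea, not Hadoop API code:

```python
from collections import defaultdict

# Local sketch of the map -> shuffle -> reduce flow described above;
# everything runs in-process, so this only illustrates the phases.

log_lines = [
    "INFO MyApp - Entering application.",
    "WARNING com.foo.Bar - Timeout accessing DB - Retrying",
    "ERROR com.foo.Bar - Did it again!",
    "INFO MyApp - Exiting application",
]

# Map phase: emit (log level, 1) for every line.
mapped = [(line.split(" ", 1)[0], 1) for line in log_lines]

# Shuffle phase: group all emitted values by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce phase: aggregate the counters for each key.
counts = {key: sum(values) for key, values in groups.items()}
print(counts)  # {'INFO': 2, 'WARNING': 1, 'ERROR': 1}
```

On Hadoop, the map and reduce steps would run as separate distributed tasks, with the framework performing the shuffle between them.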
We can use the log level as the key and the number 1 as the value. After the map phase is completed, data is partitioned based on the key and transmitted to the reduce tasks. MapReduce guarantees that the input to every reducer is sorted by key. Shuffle is the process of sorting the output of the map tasks and copying it to the reducers to be used as input. By setting the value to 1 in the map phase, we can easily calculate the total in the reduce phase. Reducers receive input sorted by key, aggregate counters, and store the results. In the following diagram, every green block represents an INFO message, every yellow block a WARNING message, and every red block an ERROR message:

Implementing the preceding MapReduce algorithm in Java requires the following three classes:

- A Map class to map lines into <key,value> pairs; for example, <"INFO",1>
- A Reduce class to aggregate counters
- A Job configuration class to define input and output types for all <key,value> pairs and the input and output files

MapReduce abstractions

This simple MapReduce example requires more than 50 lines of Java code (mostly because of infrastructure and boilerplate code). In SQL, a similar implementation would just require the following:

```sql
SELECT level, count(*) FROM table GROUP BY level
```

Hive is a technology originating from Facebook that translates SQL commands, such as the preceding one, into sets of map and reduce phases. SQL offers convenient ubiquity, and it is known by almost everyone. However, SQL is declarative and expresses the logic of a computation without describing its control flow. So, there are use cases that would be unusual to implement in SQL, and some problems are too complex to be expressed in relational algebra. For example, SQL handles joins naturally, but it has no built-in mechanism for splitting data into streams and applying different operations to each substream. Pig is a technology originating from Yahoo that offers a relational data-flow language.
It is procedural, supports splits, and provides useful operators for joining and grouping data. Code can be inserted anywhere in the data flow, and it is appealing because it is easy to read and learn. However, Pig is a purpose-built language; it excels at simple data flows but is inefficient for implementing non-trivial algorithms. In Pig, the same example can be implemented as follows:

```pig
LogLine = load 'file.logs' as (level, message);
LevelGroup = group LogLine by level;
Result = foreach LevelGroup generate group, COUNT(LogLine);
store Result into 'Results.txt';
```

Both Pig and Hive support extra functionality through loadable user-defined functions (UDFs) implemented in Java classes. Cascading is implemented in Java and designed to be expressive and extensible. It is based on the design pattern of pipelines that many other technologies follow. The pipeline is inspired by the original chain of responsibility design pattern and allows ordered lists of actions to be executed. It provides a Java-based API for data-processing flows. Developers with functional programming backgrounds quickly introduced new domain-specific languages that leverage its capabilities. Scalding, Cascalog, and PyCascading are popular implementations on top of Cascading, implemented in the programming languages Scala, Clojure, and Python, respectively.

Introducing Cascading

Cascading is an abstraction that empowers us to write efficient MapReduce applications. The API provides a framework for developers who want to think at a higher level and follow Behavior Driven Development (BDD) and Test Driven Development (TDD) to provide more value and quality to the business. Cascading is a mature library that was released as an open source project in early 2008. It is a paradigm shift and introduces new notions that are easier to understand and work with. In Cascading, we define reusable pipes where operations on data are performed. Pipes connect with other pipes to create a pipeline.
At each end of a pipeline, a tap is used. Two types of taps exist: a source, where the input data comes from, and a sink, where the data gets stored. In the preceding image, three pipes are connected into a pipeline, and two input sources and one output sink complete the flow. A complete pipeline is called a flow, and multiple flows bind together to form a cascade. In the following diagram, three flows form a cascade:

The Cascading framework translates the pipes, flows, and cascades into sets of map and reduce phases. The flow and cascade planners ensure that no flow or cascade is executed until all of its dependencies are satisfied. The preceding abstraction makes it easy to use a whiteboard to design and discuss data-processing logic. We can now work at a productive, higher level of abstraction and build complex applications for ad targeting, logfile analysis, bioinformatics, machine learning, predictive analytics, web content mining, and extract, transform, and load (ETL) jobs. By abstracting away the complexity of key-value pairs and the map and reduce phases of MapReduce, Cascading provides an API that many other technologies are built on.

What happens inside a pipe

Inside a pipe, data flows in small containers called tuples. A tuple is like a fixed-size ordered list of elements and is a base element in Cascading. Unlike an array or list, a tuple can hold objects with different types. Tuples stream within pipes. Each specific stream is associated with a schema. The schema evolves over time; at one point in a pipe, a tuple of size one can receive an operation and transform into a tuple of size three. To illustrate this concept, we will use a JSON transformation job. Each line is originally stored in tuples of size one with the schema 'jsonLine. An operation transforms these tuples into new tuples of size three: 'time, 'user, and 'action. Finally, we extract the epoch, and then the pipe contains tuples of size four: 'epoch, 'time, 'user, and 'action.
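The schema evolution just described can be illustrated in plain Python. This is a local sketch of the idea, not the Cascading API; the JSON field names and timestamp format are assumptions for the example:

```python
import json
from datetime import datetime, timezone

# Sketch of the tuple-schema evolution described above, in plain Python.
# The field names and timestamp format are illustrative assumptions.

raw = '{"time": "2014-06-25T10:00:00", "user": "alice", "action": "login"}'

# Stage 1: a tuple of size one, schema ('jsonLine,)
t1 = (raw,)

# Stage 2: parse the JSON into a tuple of size three: ('time, 'user, 'action)
record = json.loads(t1[0])
t2 = (record["time"], record["user"], record["action"])

# Stage 3: derive the epoch, giving size four: ('epoch, 'time, 'user, 'action)
epoch = datetime.strptime(t2[0], "%Y-%m-%dT%H:%M:%S") \
    .replace(tzinfo=timezone.utc).timestamp()
t3 = (epoch,) + t2

print(len(t1), len(t2), len(t3))  # 1 3 4
```

Each stage corresponds to an operation applied inside the pipe, and the tuple's schema grows as new elements are derived.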
Pipe assemblies

Transformation of tuple streams occurs by applying one of the five types of operations, also called pipe assemblies:

- Each: applies a function or a filter to each tuple
- GroupBy: creates groups of tuples by defining which element to use, and merges pipes that contain tuples with similar schemas
- Every: performs aggregations (count, sum) and buffer operations on every group of tuples
- CoGroup: applies SQL-type joins; for example, Inner, Outer, Left, or Right joins
- SubAssembly: chains multiple pipe assemblies into a pipe

To implement the pipe for the logfile example with the INFO, WARNING, and ERROR levels, three assemblies are required: an Each assembly generates a tuple with two elements (level/message), a GroupBy assembly groups on the level, and then an Every assembly performs the count aggregation. We also need a source tap to read from a file and a sink tap to store the results in another file. Implementing this in Cascading requires about 20 lines of code; in Scala/Scalding, the boilerplate is reduced to just the following:

```scala
TextLine(inputFile)
  .mapTo('line -> ('level, 'message)) { line: String => tokenize(line) }
  .groupBy('level) { _.size }
  .write(Tsv(outputFile))
```

Cascading is the framework that provides the notions and abstractions of tuple streams and pipe assemblies. Scalding is a domain-specific language (DSL) that specializes in the particular domain of pipeline execution and further minimizes the amount of code that needs to be typed.

Cascading extensions

Cascading offers multiple extensions that can be used as taps to either read data from or write data to, such as SQL, NoSQL, and several other distributed technologies that fit nicely with the MapReduce paradigm. A data-processing application, for example, can use taps to collect data from a SQL database and some more from the Hadoop file system. It can then process the data, use a NoSQL database, and complete a machine learning stage.
Finally, it can store some of the resulting data in another SQL database and update a memcache application.

Summary

This article explained the core technologies used in the distributed model of Hadoop.

Resources for Article:

Further resources on this subject:
- Analytics – Drawing a Frequency Distribution with MapReduce (Intermediate) [article]
- Understanding MapReduce [article]
- Advanced Hadoop MapReduce Administration [article]

Forms in Grok 1.0
Packt
12 Feb 2010
13 min read
A quick demonstration of automatic forms

Let's start by showing how this works, before getting into the details. To do that, we'll add a project model to our application. A project can have any number of lists associated with it, so that related to-do lists can be grouped together. For now, let's consider the project model by itself. Add the following lines to the app.py file, just after the Todo application class definition. We'll worry later about how this fits into the application as a whole.

```python
class IProject(interface.Interface):
    name = schema.TextLine(title=u'Name', required=True)
    kind = schema.Choice(title=u'Kind of project',
                         values=['personal', 'business'])
    description = schema.Text(title=u'Description')

class AddProject(grok.Form):
    grok.context(Todo)
    form_fields = grok.AutoFields(IProject)
```

We'll also need to add a couple of imports at the top of the file:

```python
from zope import interface
from zope import schema
```

Save the file, restart the server, and go to the URL http://localhost:8080/todo/addproject. The result should be similar to the following screenshot:

OK, where did the HTML for the form come from? We know that AddProject is some sort of view, because we used the grok.context class annotation to set its context. Also, the name of the class, in lowercase, was used in the URL, as in previous view examples. The important new thing is how the form fields were created and used. First, a class named IProject was defined. The interface defines the fields on the form, and the grok.AutoFields call assigns them to the Form view class. That's how the view knows which HTML form controls to generate when the form is rendered. We have three fields: name, description, and kind. Later in the code, the grok.AutoFields line takes this IProject class and turns these fields into form fields. That's it. There's no need for a template or a render method.
The grok.Form view takes care of generating the HTML required to present the form, taking the information from the value of the form_fields attribute that the grok.AutoFields call generated.

Interfaces

The I in the class name stands for Interface. We imported the zope.interface package at the top of the file, and the Interface class that we have used as a base class for IProject comes from this package.

Example of an interface

An interface is an object that is used to specify and describe the external behavior of objects. In a sense, an interface is like a contract. A class is said to implement an interface when it includes all of the methods and attributes defined in the interface class. Let's see a simple example:

```python
from zope import interface

class ICaveman(interface.Interface):
    weapon = interface.Attribute('weapon')

    def hunt(animal):
        """Hunt an animal to get food"""

    def eat(animal):
        """Eat hunted animal"""

    def sleep():
        """Rest before getting up to hunt again"""
```

Here, we are describing how cavemen behave. A caveman will have a weapon, and he can hunt, eat, and sleep. Notice that weapon is an attribute, something that belongs to the object, whereas hunt, eat, and sleep are methods. Once the interface is defined, we can create classes that implement it. These classes commit to including all of the attributes and methods of their interface class. Thus, if we say:

```python
class Caveman(object):
    interface.implements(ICaveman)
```

then we are promising that the Caveman class will implement the methods and attributes described in the ICaveman interface:

```python
    weapon = 'ax'

    def hunt(self, animal):
        find(animal)
        hit(animal, self.weapon)

    def eat(self, animal):
        cut(animal)
        bite()

    def sleep(self):
        snore()
        rest()
```

Note that although our example class implements all of the interface methods, there is no enforcement of any kind made by the Python interpreter. We could define a class that does not include any of the methods or attributes defined, and it would still work.
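For comparison, Python's standard library abc module expresses a similar "contract" idea, but unlike zope.interface it does enforce the contract, at instantiation time. The following is a plain-Python analogy only, not zope.interface code, and the class names are made up for the example:

```python
from abc import ABC, abstractmethod

# Analogy only: the standard abc module enforces its contract when a
# class is instantiated, whereas zope.interface (used in this article)
# performs no such enforcement. Names here are illustrative.

class CavemanContract(ABC):
    @abstractmethod
    def hunt(self, animal): ...

    @abstractmethod
    def eat(self, animal): ...

    @abstractmethod
    def sleep(self): ...

class LazyCaveman(CavemanContract):
    # Implements only part of the contract.
    def sleep(self):
        return "zzz"

try:
    LazyCaveman()  # abc refuses: hunt() and eat() are missing
except TypeError as exc:
    print("rejected:", exc)
```

This difference is deliberate: zope.interface treats interfaces as documentation and a basis for component lookup rather than as a runtime straitjacket.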
Interfaces in Grok

In Grok, a model can declare that it implements an interface by using the grok.implements method. For example, if we decided to add a project model, it could implement the IProject interface as follows:

```python
class Project(grok.Container):
    grok.implements(IProject)
```

Due to their descriptive nature, interfaces can be used for documentation. They can also be used for enabling component architectures, but we'll see about that later on. What is of more interest to us right now is that they can be used for generating forms automatically.

Schemas

The way to define the form fields is to use the zope.schema package. This package includes many kinds of field definitions that can be used to populate a form. Basically, a schema permits detailed descriptions of class attributes using fields. In terms of a form, which is what is of interest to us here, a schema represents the data that will be passed to the server when the user submits the form. Each field in the form corresponds to a field in the schema. Let's take a closer look at the schema we defined in the last section:

```python
class IProject(interface.Interface):
    name = schema.TextLine(title=u'Name', required=True)
    kind = schema.Choice(title=u'Kind of project',
                         required=False,
                         values=['personal', 'business'])
    description = schema.Text(title=u'Description',
                              required=False)
```

The schema that we are defining for IProject has three fields. There are several kinds of fields, which are listed in the following table. In our example, we have defined a name field, which will be a required field and will have the label Name beside it. We also have a kind field, which is a list of options from which the user must pick one. Note that the default value for required is True, but it's usually best to specify it explicitly, to avoid confusion. You can see how the list of possible values is passed statically by using the values parameter. Finally, description is a text field, which means it will have multiple lines of text.
Available schema attributes and field types

In addition to title, values, and required, each schema field can have a number of properties, as detailed in the following table:

| Attribute | Description |
| --- | --- |
| title | A short summary or label. |
| description | A description of the field. |
| required | Indicates whether a field requires a value to exist. |
| readonly | If True, the field's value cannot be changed. |
| default | The field's default value; may be None or a valid field value. |
| missing_value | If input for this field is missing, and that's OK, then this is the value to use. |
| order | The order attribute can be used to determine the order in which fields in a schema are defined. If one field is created after another (in the same thread), its order will be greater. |

In addition to the field attributes described in the preceding table, some field types provide additional attributes. In the previous example, we saw that there are various field types, such as Text, TextLine, and Choice. There are several other field types available, as shown in the following table. We can create very sophisticated forms just by defining a schema in this way and letting Grok generate them.

| Field type | Description | Parameters |
| --- | --- | --- |
| Bool | Boolean field. | |
| Bytes | Field containing a byte string (such as the Python str). The value might be constrained to be within length limits. | |
| ASCII | Field containing a 7-bit ASCII string. No characters > DEL (chr(127)) are allowed. The value might be constrained to be within length limits. | |
| BytesLine | Field containing a byte string without new lines. | |
| ASCIILine | Field containing a 7-bit ASCII string without new lines. | |
| Text | Field containing a Unicode string. | |
| SourceText | Field for the source text of an object. | |
| TextLine | Field containing a Unicode string without new lines. | |
| Password | Field containing a Unicode string without new lines, which is set as the password. | |
| Int | Field containing an Integer value. | |
| Float | Field containing a Float. | |
| Decimal | Field containing a Decimal. | |
| DateTime | Field containing a DateTime. | |
| Date | Field containing a date. | |
| Timedelta | Field containing a timedelta. | |
| Time | Field containing time. | |
| URI | A field containing an absolute URI. | |
| Id | A field containing a unique identifier: either an absolute URI or a dotted name. If it's a dotted name, it should have a module or package name as a prefix. | |
| Choice | Field whose value is contained in a predefined set. | values: a list of text choices for the field. vocabulary: a Vocabulary object that will dynamically produce the choices. source: a different, newer way to produce dynamic choices. Note: only one of the three should be provided. More information about sources and vocabularies is provided later in this book. |
| Tuple | Field containing a value that implements the API of a conventional Python tuple. | value_type: field value items must conform to the given type, expressed via a field. unique: specifies whether the members of the collection must be unique. |
| List | Field containing a value that implements the API of a conventional Python list. | value_type: field value items must conform to the given type, expressed via a field. unique: specifies whether the members of the collection must be unique. |
| Set | Field containing a value that implements the API of a conventional Python standard library sets.Set or a Python 2.4+ set. | value_type: field value items must conform to the given type, expressed via a field. |
| FrozenSet | Field containing a value that implements the API of a conventional Python 2.4+ frozenset. | value_type: field value items must conform to the given type, expressed via a field. |
| Object | Field containing an object value. | schema: the interface that defines the fields comprising the object. |
| Dict | Field containing a conventional dictionary. The key_type and value_type fields allow specification of restrictions for keys and values contained in the dictionary. | key_type: field keys must conform to the given type, expressed via a field. value_type: field value items must conform to the given type, expressed via a field. |

Form fields and widgets

Schema fields are perfect for defining data structures, but when dealing with forms, sometimes they are not enough. In fact, once you generate a form using a schema as a base, Grok turns the schema fields into form fields. A form field is like a schema field, but it has an extended set of methods and attributes. It also has a default associated widget that is responsible for the appearance of the field inside the form. Rendering forms requires more than the fields and their types. A form field needs to have a user interface, and that is what a widget provides. A Choice field, for example, could be rendered as a <select> box on the form, but it could also use a collection of checkboxes, or perhaps radio buttons. Sometimes, a field may not need to be displayed on a form, or a writable field may need to be displayed as text instead of allowing users to set the field's value.

Form components

Grok offers four different components that automatically generate forms. We have already worked with the first one of these, grok.Form. The other three are specializations of it:

- grok.AddForm is used to add new model instances.
- grok.EditForm is used for editing an already existing instance.
- grok.DisplayForm simply displays the values of the fields.

A Grok form is itself a specialization of grok.View, which means that it gets the same methods as those available to a view. It also means that a model does not actually need a view assignment if it already has a form. In fact, simple applications can get away with using a form as the view for their objects. Of course, there are times when a more complex view template is needed, or even when fields from multiple forms need to be shown in the same view. Grok can handle these cases as well, as we will see later on.
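The field-to-widget idea described above, where each field type has a default rendering, can be illustrated with a small plain-Python sketch. This is a hypothetical mapping for illustration only, not Grok's actual widget machinery:

```python
# Hypothetical sketch of mapping schema-like field descriptions to HTML
# controls, to illustrate the field/widget distinction described above.
# This is NOT Grok's widget machinery; names and markup are invented.

def default_widget(name, field):
    kind = field["type"]
    if kind == "TextLine":
        return '<input type="text" name="%s" />' % name
    if kind == "Text":
        return '<textarea name="%s"></textarea>' % name
    if kind == "Choice":
        options = "".join("<option>%s</option>" % v for v in field["values"])
        return '<select name="%s">%s</select>' % (name, options)
    raise ValueError("no widget for %s" % kind)

fields = {
    "name": {"type": "TextLine"},
    "kind": {"type": "Choice", "values": ["personal", "business"]},
    "description": {"type": "Text"},
}

for fname in ("name", "kind", "description"):
    print(default_widget(fname, fields[fname]))
```

In Grok, swapping a Choice field's `<select>` box for radio buttons would mean assigning a different widget to the form field, without touching the schema itself.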
Adding a project container at the root of the site

To get to know Grok's form components, let's properly integrate our project model into our to-do list application. We'll have to restructure the code a little, as currently the to-do list container is the root object of the application. We need to have a project container as the root object, and then add a to-do list container to it. To begin, let's modify the top of app.py, immediately before the TodoList class definition, to look like this:

```python
import grok
from zope import interface, schema


class Todo(grok.Application, grok.Container):

    def __init__(self):
        super(Todo, self).__init__()
        self.title = 'To-Do list manager'
        self.next_id = 0

    def deleteProject(self, project):
        del self[project]
```

First, we import zope.interface and zope.schema. Notice how we keep the Todo class as the root application class, but now it can contain projects instead of lists. We also omitted the addProject method, because the grok.AddForm instance is going to take care of that. Other than that, the Todo class is almost the same.

```python
class IProject(interface.Interface):
    title = schema.TextLine(title=u'Title', required=True)
    kind = schema.Choice(title=u'Kind of project',
                         values=['personal', 'business'])
    description = schema.Text(title=u'Description', required=False)
    next_id = schema.Int(title=u'Next id', default=0)
```

We then have the interface definition for IProject, where we add the title, kind, description, and next_id fields. These are the fields that we previously set inside the __init__ method at initialization time.

```python
class Project(grok.Container):
    grok.implements(IProject)

    def addList(self, title, description):
        id = str(self.next_id)
        self.next_id = self.next_id + 1
        self[id] = TodoList(title, description)

    def deleteList(self, list):
        del self[list]
```

The key thing to notice in the Project class definition is that we use the grok.implements class declaration to state that this class implements the schema that we have just defined.
```python
class AddProjectForm(grok.AddForm):
    grok.context(Todo)
    grok.name('index')
    form_fields = grok.AutoFields(Project)
    label = "To begin, add a new project"

    @grok.action('Add project')
    def add(self, **data):
        project = Project()
        self.applyData(project, **data)
        id = str(self.context.next_id)
        self.context.next_id = self.context.next_id + 1
        self.context[id] = project
        return self.redirect(self.url(self.context[id]))
```

The actual form view is defined after that, using grok.AddForm as a base class. We assign this view to the main Todo container by using the grok.context annotation. The name index is used for now, so that the default page for the application will be the add form itself. Next, we create the form fields by calling the grok.AutoFields method. Notice that this time the argument to this method call is the Project class directly, rather than the interface. This is possible because the Project class was associated with the correct interface when we previously used grok.implements. After we have assigned the fields, we set the label attribute of the form to the text: To begin, add a new project. This is the title that will be shown on the form. In addition to this new code, all occurrences of grok.context(Todo) in the rest of the file need to be changed to grok.context(Project), as the to-do lists and their views will now belong to a project and not to the main Todo application. For details, take a look at the source code of Grok 1.0 Web Development, Chapter 5.
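To make the flow of the add action concrete, here is a hedged, pure-Python sketch of what applyData plus the next_id bookkeeping amount to. The helper names apply_data and add_to_container are made up for illustration and are not Grok API; the container is modeled as a plain dict.

```python
# Illustrative sketch only, not Grok code: copy validated form data onto a
# fresh object, then store it in the container under the next numeric id.
def apply_data(obj, **data):
    for name, value in data.items():
        setattr(obj, name, value)
    return obj


class Project:
    """Stand-in for the Grok Project container class."""
    pass


def add_to_container(container, project):
    # Mirrors: id = str(self.context.next_id); self.context.next_id += 1
    new_id = str(container["next_id"])
    container["next_id"] += 1
    container[new_id] = project
    return new_id
```

For example, adding a first project stores it under the key "0" and bumps next_id to 1, which is exactly the state the redirect URL at the end of the real add method points at.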
Getting Started with PrimeFaces

Packt
04 Apr 2013
14 min read
Setting up and configuring the PrimeFaces library

PrimeFaces is a lightweight JSF component library distributed as a single JAR file, which needs no configuration and has no required external dependencies. To start developing with the library, all we need is to get its artifact.

Getting ready

You can download the PrimeFaces library from http://primefaces.org/downloads.html, and you need to add the primefaces-{version}.jar file to your classpath. After that, all you need to do to get started is import the library's namespace, which is necessary to add PrimeFaces components to your pages. If you are using Maven (for more information on installing Maven, please visit http://maven.apache.org/guides/getting-started/maven-in-five-minutes.html), you can retrieve the PrimeFaces library by defining the Maven repository in your Project Object Model (POM) file as follows:

```xml
<repository>
  <id>prime-repo</id>
  <name>PrimeFaces Maven Repository</name>
  <url>http://repository.primefaces.org</url>
</repository>
```

Add the dependency configuration as follows:

```xml
<dependency>
  <groupId>org.primefaces</groupId>
  <artifactId>primefaces</artifactId>
  <version>3.4</version>
</dependency>
```

At the time of writing this book, the latest and most stable version of PrimeFaces was 3.4. To check whether this is still the latest available, please visit http://primefaces.org/downloads.html. The code in this book will work properly with PrimeFaces 3.4. In earlier or later versions, some methods, attributes, or component behaviors may change.

How to do it...

In order to use PrimeFaces components, we need to add the namespace declarations to our pages. The namespace for PrimeFaces components is as follows:

```xml
xmlns:p="http://primefaces.org/ui"
```

For PrimeFaces Mobile, the namespace is as follows:

```xml
xmlns:p="http://primefaces.org/mobile"
```

That is all there is to it. Note that the p prefix is just a symbolic name, and any other character sequence can be used to prefix the PrimeFaces components.
Now you can create your first page with a PrimeFaces component, as shown in the following code snippet:

```xml
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:f="http://java.sun.com/jsf/core"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:p="http://primefaces.org/ui">
  <f:view contentType="text/html">
    <h:head />
    <h:body>
      <h:form>
        <p:spinner />
      </h:form>
    </h:body>
  </f:view>
</html>
```

This will render a spinner component with an empty value. A link to the working example for this page is given at the end of this recipe.

How it works...

When the page is requested, the p:spinner component is rendered with the renderer implemented by the PrimeFaces library. Since the spinner component is a UI input component, the request-processing lifecycle will get executed when the user inputs data and performs a postback on the page. For the first page, we also needed to provide the contentType parameter for f:view, since WebKit-based browsers, such as Google Chrome and Apple Safari, request the content type application/xhtml+xml by default. This overcomes unexpected layout and styling issues that might otherwise occur.

There's more...

PrimeFaces only requires a Java 5+ runtime and a JSF 2.x implementation as mandatory dependencies. There are some optional libraries for certain features:

| Dependency | Version | Type | Description |
| --- | --- | --- | --- |
| JSF runtime | 2.0 or 2.1 | Required | Apache MyFaces or Oracle Mojarra |
| iText | 2.1.7 | Optional | DataExporter (PDF) |
| Apache POI | 3.7 | Optional | DataExporter (Excel) |
| Rome | 1.0 | Optional | FeedReader |
| commons-fileupload | 1.2.1 | Optional | FileUpload |
| commons-io | 1.4 | Optional | FileUpload |

Please ensure that you have only one JAR file of PrimeFaces or of a specific PrimeFaces theme in your classpath in order to avoid any issues regarding resource rendering. Currently, PrimeFaces supports the web browsers IE 7, 8, and 9, Safari, Firefox, Chrome, and Opera.

PrimeFaces Cookbook Showcase application

This recipe is available in the PrimeFaces Cookbook Showcase application on GitHub at https://github.com/ova2/primefaces-cookbook. You can find the details there for running the project.
When the server is running, the showcase for the recipe is available at http://localhost:8080/primefaces-cookbook/views/chapter1/yourFirstPage.jsf

AJAX basics with Process and Update

PrimeFaces provides a partial page rendering (PPR) and view-processing feature based on standard JSF 2 APIs to enable choosing what to process in the JSF lifecycle and what to render in the end with AJAX. The PrimeFaces AJAX framework is based on the standard server-side APIs of JSF 2. On the client side, rather than using the client-side API implementations of JSF implementations such as Mojarra and MyFaces, PrimeFaces scripts are based on the jQuery JavaScript library.

How to do it...

We can create a simple page with a command button to update a string property with the current time in milliseconds on the server side, and an output text to show the value of that string property, as follows:

```xml
<p:commandButton update="display"
                 action="#{basicPPRController.updateValue}"
                 value="Update" />
<h:outputText id="display" value="#{basicPPRController.value}" />
```

If we would like to update multiple components with the same trigger mechanism, we can provide the IDs of the components to the update attribute, separating them with a space, a comma, or both, as follows:

```xml
<p:commandButton update="display1,display2" />
<p:commandButton update="display1 display2" />
<p:commandButton update="display1,display2 display3" />
```

In addition, there are reserved keywords that are used for a partial update. We can also make use of these keywords along with the IDs of the components, as described in the following table:

| Keyword | Description |
| --- | --- |
| @this | The component that triggers the PPR is updated |
| @parent | The parent of the PPR trigger is updated |
| @form | The encapsulating form of the PPR trigger is updated |
| @none | PPR does not change the DOM with the AJAX response |
| @all | The whole document is updated, as in non-AJAX requests |

We can also update a component that resides in a different naming container from the component that triggers the update.
In order to achieve this, we need to specify the absolute component identifier of the component that needs to be updated. An example for this could be the following:

```xml
<h:form id="form1">
  <p:commandButton update=":form2:display"
                   action="#{basicPPRController.updateValue}"
                   value="Update" />
</h:form>
<h:form id="form2">
  <h:outputText id="display" value="#{basicPPRController.value}" />
</h:form>
```

```java
public String updateValue() {
    value = String.valueOf(System.currentTimeMillis());
    return null;
}
```

PrimeFaces also provides partial processing, which executes the JSF lifecycle phases (Apply Request Values, Process Validations, Update Model, and Invoke Application) for determined components via the process attribute. This provides the ability to do group validation on JSF pages easily. Group-validation needs mostly arise in situations where different values need to be validated in the same form, depending on the action that gets executed. By grouping components for validation, errors that would arise from other components when the page is submitted can be avoided easily. Components like commandButton, commandLink, autoComplete, fileUpload, and many others provide this attribute to process a part of the view instead of the whole view. Partial processing can become very handy when a drop-down list needs to be populated upon a selection on another drop down while there is an input field on the page with the required attribute set to true. This approach also makes immediate subforms and regions obsolete. It will also prevent submission of the whole page, resulting in lightweight requests. Without partially processing the view for the drop downs, a selection on one of the drop downs would result in a validation error on the required field. An example of this is shown in the following code snippet:

```xml
<h:outputText value="Country: " />
<h:selectOneMenu id="countries"
                 value="#{partialProcessingController.country}">
  <f:selectItems value="#{partialProcessingController.countries}" />
  <p:ajax listener="#{partialProcessingController.handleCountryChange}"
          event="change" update="cities" process="@this" />
</h:selectOneMenu>
<h:outputText value="City: " />
<h:selectOneMenu id="cities"
                 value="#{partialProcessingController.city}">
  <f:selectItems value="#{partialProcessingController.cities}" />
</h:selectOneMenu>
<h:outputText value="Email: " />
<h:inputText value="#{partialProcessingController.email}" required="true" />
```

With this partial processing mechanism, when a user changes the country, the cities of that country will be populated in the drop down regardless of whether any input exists for the email field.

How it works...

As seen in the example for updating a component in a different naming container, <p:commandButton> updates the <h:outputText> component that has the ID display and the absolute client ID :form2:display, which is the search expression for the findComponent method. An absolute client ID starts with the separator character of the naming container, which is : by default. The <h:form> and <h:dataTable> components, composite JSF components, along with <p:tabView>, <p:accordionPanel>, <p:dataTable>, <p:dataGrid>, <p:dataList>, <p:carousel>, <p:galleria>, <p:ring>, <p:sheet>, and <p:subTable>, are the components that implement the NamingContainer interface. The findComponent method, which is described at http://docs.oracle.com/javaee/6/api/javax/faces/component/UIComponent.html, is used by both the JSF core implementation and PrimeFaces.

There's more...

JSF uses : (a colon) as the separator character for the NamingContainer interface. The client IDs that will be rendered in the source page will look like :id1:id2:id3.
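Two of the mechanics above, the comma/space-separated update lists and the :-separated absolute client IDs, can be illustrated with a small Python sketch. This is not PrimeFaces or JSF source code; the function names are made up for illustration.

```python
import re

# Illustrative helpers, not PrimeFaces/JSF code.

def parse_update_expression(expression):
    """Split an update expression into its target ids.

    Ids may be separated by spaces, commas, or both; reserved keywords
    such as @this and @form pass through as ordinary tokens.
    """
    return [token for token in re.split(r"[,\s]+", expression.strip()) if token]


def absolute_client_id(container_path, separator=":"):
    """Build an absolute client ID: the chain of naming-container ids
    joined by the separator character and prefixed with it."""
    return separator + separator.join(container_path)
```

For example, parse_update_expression("display1,display2 display3") yields the three ids, and absolute_client_id(["form2", "display"]) yields ":form2:display", matching the search expression passed to findComponent above.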
If needed, the separator can be changed for the web application to something other than the colon with a context parameter in the web.xml file of the web application, as follows:

```xml
<context-param>
  <param-name>javax.faces.SEPARATOR_CHAR</param-name>
  <param-value>_</param-value>
</context-param>
```

It's also possible to escape the : character, if needed, in CSS files with the backslash character, as \:. The problem that might occur with the colon is that it's a reserved character for CSS and JavaScript frameworks like jQuery, so it might need to be escaped.

PrimeFaces Cookbook Showcase application

This recipe is available in the PrimeFaces Cookbook Showcase application on GitHub at https://github.com/ova2/primefaces-cookbook. You can find the details there for running the project. For the demos of the showcase, refer to the following:

- Basic Partial Page Rendering is available at http://localhost:8080/primefaces-cookbook/views/chapter1/basicPPR.jsf
- Updating a Component in a Different Naming Container is available at http://localhost:8080/primefaces-cookbook/views/chapter1/componentInDifferentNamingContainer.jsf
- A Partial Processing example is available at http://localhost:8080/primefaces-cookbook/views/chapter1/partialProcessing.jsf

Internationalization (i18n) and Localization (L10n)

Internationalization (i18n) and Localization (L10n) are two important features that should be provided in the web application world to make applications accessible globally. With Internationalization, we are emphasizing that the web application should support multiple languages; with Localization, we are stating that texts, dates, and other fields should be presented in the form specific to a region. PrimeFaces only provides the English translations. Translations for other languages must be provided explicitly. In the following sections, you will find the details on how to achieve this.
Getting ready

For Internationalization, first we need to specify the resource bundle definition under the application tag in faces-config.xml, as follows:

```xml
<application>
  <locale-config>
    <default-locale>en</default-locale>
    <supported-locale>tr_TR</supported-locale>
  </locale-config>
  <resource-bundle>
    <base-name>messages</base-name>
    <var>msg</var>
  </resource-bundle>
</application>
```

A resource bundle is a text file with the .properties suffix that contains the locale-specific messages. The preceding definition states that the resource bundle file messages_{localekey}.properties will reside on the classpath, that the default value of localekey is en (English), and that the supported locale is tr_TR (Turkish). For projects structured by Maven, the messages_{localekey}.properties file can be created under the src/main/resources project path.

How to do it...

To showcase Internationalization, we will broadcast an information message via the FacesMessage mechanism that will be displayed in the PrimeFaces growl component. We need two components, the growl itself and a command button, to broadcast the message:

```xml
<p:growl id="growl" />
<p:commandButton action="#{localizationController.addMessage}"
                 value="Display Message" update="growl" />
```

The addMessage method of localizationController is as follows:

```java
public String addMessage() {
    addInfoMessage("broadcast.message");
    return null;
}
```

It uses the addInfoMessage method, which is defined in the static MessageUtil class as follows:

```java
public static void addInfoMessage(String str) {
    FacesContext context = FacesContext.getCurrentInstance();
    ResourceBundle bundle = context.getApplication()
            .getResourceBundle(context, "msg");
    String message = bundle.getString(str);
    FacesContext.getCurrentInstance().addMessage(null,
            new FacesMessage(FacesMessage.SEVERITY_INFO, message, ""));
}
```

Localization of components, such as calendar and schedule, can be achieved by providing the locale attribute.
By default, locale information is retrieved from the view's locale, and it can be overridden by a string locale key or a java.util.Locale instance. Components such as calendar and schedule use a shared PrimeFaces.locales property to display labels. PrimeFaces only provides English translations, so in order to localize the calendar we need to put the corresponding locales into a JavaScript file and include the script file in the page. The content for the German locale of the PrimeFaces.locales property for the calendar is shown in the following code snippet. For the sake of this recipe, only the German locale definition is given; the Turkish locale definition is omitted.

```javascript
PrimeFaces.locales['de'] = {
    closeText: 'Schließen',
    prevText: 'Zurück',
    nextText: 'Weiter',
    monthNames: ['Januar', 'Februar', 'März', 'April', 'Mai', 'Juni', 'Juli',
                 'August', 'September', 'Oktober', 'November', 'Dezember'],
    monthNamesShort: ['Jan', 'Feb', 'Mär', 'Apr', 'Mai', 'Jun',
                      'Jul', 'Aug', 'Sep', 'Okt', 'Nov', 'Dez'],
    dayNames: ['Sonntag', 'Montag', 'Dienstag', 'Mittwoch', 'Donnerstag',
               'Freitag', 'Samstag'],
    dayNamesShort: ['Son', 'Mon', 'Die', 'Mit', 'Don', 'Fre', 'Sam'],
    dayNamesMin: ['S', 'M', 'D', 'M ', 'D', 'F ', 'S'],
    weekHeader: 'Woche',
    firstDay: 1,
    isRTL: false,
    showMonthAfterYear: false,
    yearSuffix: '',
    timeOnlyTitle: 'Nur Zeit',
    timeText: 'Zeit',
    hourText: 'Stunde',
    minuteText: 'Minute',
    secondText: 'Sekunde',
    currentText: 'Aktuelles Datum',
    ampm: false,
    month: 'Monat',
    week: 'Woche',
    day: 'Tag',
    allDayText: 'Ganzer Tag'
};
```

The definition of the calendar components with the locale attribute is as follows:

```xml
<p:calendar showButtonPanel="true" navigator="true" mode="inline" id="enCal" />
<p:calendar locale="tr" showButtonPanel="true" navigator="true" mode="inline" id="trCal" />
<p:calendar locale="de" showButtonPanel="true" navigator="true" mode="inline" id="deCal" />
```

They will be rendered in English, Turkish, and German, respectively.

How it works...
For Internationalization of the Faces message, the addInfoMessage method retrieves the message bundle via the defined variable msg. It then gets the string from the bundle with the given key by invoking the bundle.getString(str) method. Finally, the message is added by creating a new Faces message with severity level FacesMessage.SEVERITY_INFO.

There's more...

For some components, Localization can be accomplished by providing labels to the components via attributes, as with p:selectBooleanButton:

```xml
<p:selectBooleanButton value="#{localizationController.selectedValue}"
                       onLabel="#{msg['booleanButton.onLabel']}"
                       offLabel="#{msg['booleanButton.offLabel']}" />
```

The msg variable is the resource bundle variable that is defined in the resource bundle definition in the Faces configuration file. The English version of the bundle key definitions in the messages_en.properties file that resides on the classpath would be as follows:

```
booleanButton.onLabel=Yes
booleanButton.offLabel=No
```

PrimeFaces Cookbook Showcase application

This recipe is available in the PrimeFaces Cookbook Showcase application on GitHub at https://github.com/ova2/primefaces-cookbook. You can find the details there for running the project. For the demos of the showcase, refer to the following:

- Internationalization is available at http://localhost:8080/primefaces-cookbook/views/chapter1/internationalization.jsf
- Localization of the calendar component is available at http://localhost:8080/primefaces-cookbook/views/chapter1/localization.jsf
- Localization with resources is available at http://localhost:8080/primefaces-cookbook/views/chapter1/localizationWithResources.jsf

For already translated locales of the calendar, see https://code.google.com/archive/p/primefaces/wikis/PrimeFacesLocales.wiki
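The bundle lookup performed by addInfoMessage can be mimicked in plain Python. This is a simplified sketch rather than the Java ResourceBundle API; the message texts and Turkish labels below are made-up placeholders, and real bundles live in messages_{localekey}.properties files.

```python
# Illustrative, dict-based stand-in for messages_{localekey}.properties
# bundles; the message strings here are invented placeholders.
BUNDLES = {
    "en": {
        "broadcast.message": "Message successfully broadcast",
        "booleanButton.onLabel": "Yes",
        "booleanButton.offLabel": "No",
    },
    "tr_TR": {
        "booleanButton.onLabel": "Evet",
        "booleanButton.offLabel": "Hayir",
    },
}


def get_message(key, locale="en", default_locale="en"):
    """Pick the bundle for the requested locale, falling back to the
    default locale when a translation is missing."""
    bundle = BUNDLES.get(locale, {})
    if key in bundle:
        return bundle[key]
    return BUNDLES[default_locale][key]
```

The fallback branch is the part worth noting: a key with no tr_TR translation resolves against the en bundle, which mirrors how a default locale keeps partially translated applications usable.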
AngularJS

Packt
20 Aug 2014
15 min read
In this article by Rodrigo Branas, author of the book AngularJS Essentials, we will go through the basics of AngularJS. Created by Miško Hevery and Adam Abrons in 2009, AngularJS is an open source, client-side JavaScript framework that promotes a high-productivity web development experience. It was built on the belief that declarative programming is the best choice for constructing the user interface, while imperative programming is much better suited to implementing the application's business logic. To achieve that, AngularJS empowers traditional HTML by extending its current vocabulary, making the life of developers easier. The result is the development of expressive, reusable, and maintainable application components, leaving behind a lot of unnecessary code and keeping the team focused on the valuable and important things. (For more resources related to this topic, see here.)

Architectural concepts

It's been a long time since the famous Model-View-Controller pattern, also known as MVC, started to be widely used in the software development industry, becoming one of the legends of enterprise architecture design. Basically, the model represents the knowledge that the view is responsible for presenting, while the controller mediates their relationship. However, these concepts are a little abstract, and this pattern may have different implementations depending on the language, platform, and purpose of the application. After a lot of discussion about which architectural pattern the framework follows, its authors declared that, from now on, AngularJS adopts Model-View-Whatever (MVW). Regardless of the name, the most important benefit is that the framework provides a clear separation of concerns between the application layers, providing modularity, flexibility, and testability.
In terms of concepts, a typical AngularJS application consists primarily of a view, a model, and a controller, but there are other important components, such as services, directives, and filters. The view, also called the template, is entirely written in HTML, which creates a great opportunity for web designers and JavaScript developers to work side by side. It also takes advantage of the directives mechanism, a kind of extension of the HTML vocabulary that brings the ability to perform programming-language tasks, such as iterating over an array or even evaluating an expression conditionally. Behind the view, there is the controller. At first, the controller contains all the business logic implementation used by the view. However, as the application grows, it becomes really important to perform some refactoring activities, such as moving code from the controller to other components like services, in order to keep cohesion high. The connection between the view and the controller is done by a shared object called the scope. It is located between them and is used to exchange information related to the model. The model is a simple Plain-Old-JavaScript-Object (POJO). It looks very clear and easy to understand, bringing simplicity to the development by not requiring any special syntax.

Setting up the framework

The configuration process is very simple. In order to set up the framework, we start by importing the angular.js script into our HTML file. After that, we need to create the application module by calling the module function from Angular's API, with its name and dependencies. With the module already created, we just need to place the ng-app attribute with the module's name inside the html element, or any other element that surrounds the application. This attribute is important because it supports the initialization process of the framework. In the following code, there is an introductory application about a parking lot.
At first, we are able to add and also list the parked cars, storing their plates in memory. Throughout the book, we will evolve this parking control application by incorporating each newly studied concept.

index.html

```html
<!doctype html>
<!-- Declaring the ng-app -->
<html ng-app="parking">
  <head>
    <title>Parking</title>
    <!-- Importing the angular.js script -->
    <script src="angular.js"></script>
    <script>
      // Creating the module called parking
      var parking = angular.module("parking", []);
      // Registering the parkingCtrl to the parking module
      parking.controller("parkingCtrl", function ($scope) {
        // Binding the cars array to the scope
        $scope.cars = [
          {plate: '6MBV006'},
          {plate: '5BBM299'},
          {plate: '5AOJ230'}
        ];
        // Binding the park function to the scope
        $scope.park = function (car) {
          $scope.cars.push(angular.copy(car));
          delete $scope.car;
        };
      });
    </script>
  </head>
  <!-- Attaching the view to the parkingCtrl -->
  <body ng-controller="parkingCtrl">
    <h3>[Packt] Parking</h3>
    <table>
      <thead>
        <tr>
          <th>Plate</th>
        </tr>
      </thead>
      <tbody>
        <!-- Iterating over the cars -->
        <tr ng-repeat="car in cars">
          <!-- Showing the car's plate -->
          <td>{{car.plate}}</td>
        </tr>
      </tbody>
    </table>
    <!-- Binding the car object, with plate, to the scope -->
    <input type="text" ng-model="car.plate"/>
    <!-- Binding the park function to the click event -->
    <button ng-click="park(car)">Park</button>
  </body>
</html>
```

The ngController directive was used to bind the parkingCtrl to the view, while ngRepeat iterated over the cars array. Also, we employed expressions like {{car.plate}} to display the plate of each car. Finally, to add new cars, we applied ngModel, which creates a new object called car with the plate property, passing it as a parameter of the park function, called through the ngClick directive. To improve page-loading performance, it is recommended to use the minified and obfuscated version of the script, identified by angular.min.js.
Both the minified and regular distributions of the framework can be found on the official site of AngularJS, http://www.angularjs.org, or they can be referenced directly from Google's Content Delivery Network (CDN).

What is a directive?

A directive is an extension of the HTML vocabulary that allows the creation of new behaviors. This technology lets developers create reusable components that can be used within the whole application, and even provide their own custom components. A directive may be applied as an attribute, an element, a class, and even as a comment, using the camelCase syntax. However, because HTML is case-insensitive, we need to use a lowercase form. For the ngModel directive, we can use ng-model, ng:model, ng_model, data-ng-model, or x-ng-model in the HTML markup.

Using AngularJS built-in directives

By default, the framework brings a basic set of directives to, for example, iterate over an array, execute a custom behavior when an element is clicked, or show a given element based on a conditional expression, among many others.

ngBind

This directive is generally applied to a span element and replaces the content of the element with the result of the provided expression. It has the same meaning as the double curly markup, for example, {{expression}}. Why would anyone want to use this directive when a less verbose alternative is available? This is because, while the page is being compiled, there is a moment when the raw state of the expressions is shown. Since the directive is defined in an attribute of the element, that raw state is invisible to the user.
Here is an example of ngBind directive usage:

index.html

```html
<!doctype html>
<html ng-app="parking">
  <head>
    <title>[Packt] Parking</title>
    <script src="angular.js"></script>
    <script>
      var parking = angular.module("parking", []);
      parking.controller("parkingCtrl", function ($scope) {
        $scope.appTitle = "[Packt] Parking";
      });
    </script>
  </head>
  <body ng-controller="parkingCtrl">
    <h3 ng-bind="appTitle"></h3>
  </body>
</html>
```

ngRepeat

The ngRepeat directive is really useful for iterating over arrays and objects. It can be used with any kind of element, such as the rows of a table, the elements of a list, and even the options of a select. We must provide a special repeat expression that describes the array to iterate over and the variable that will hold each item in the iteration. The most basic expression format allows us to iterate over an array, attributing each element to a variable:

```
variable in array
```

In the following code, we will iterate over the cars array and assign each element to the car variable:

index.html

```html
<!doctype html>
<html ng-app="parking">
  <head>
    <title>[Packt] Parking</title>
    <script src="angular.js"></script>
    <script>
      var parking = angular.module("parking", []);
      parking.controller("parkingCtrl", function ($scope) {
        $scope.appTitle = "[Packt] Parking";
        $scope.cars = [];
      });
    </script>
  </head>
  <body ng-controller="parkingCtrl">
    <h3 ng-bind="appTitle"></h3>
    <table>
      <thead>
        <tr>
          <th>Plate</th>
          <th>Entrance</th>
        </tr>
      </thead>
      <tbody>
        <tr ng-repeat="car in cars">
          <td><span ng-bind="car.plate"></span></td>
          <td><span ng-bind="car.entrance"></span></td>
        </tr>
      </tbody>
    </table>
  </body>
</html>
```

ngModel

The ngModel directive attaches the element to a property in the scope, binding the view to the model. In this case, the element can be an input (of any type), select, or textarea.

```html
<input type="text" ng-model="car.plate" placeholder="What's the plate?" />
```

There is an important piece of advice regarding the use of this directive.
We must pay attention to the purpose of the field that is using the ngModel directive. Every time the field takes part in the construction of an object, we must declare the object to which the property should be attached. In this case, the object being constructed is a car, so we use car.plate inside the directive expression. However, sometimes there is an input field that is just used to change a flag, allowing the control of the state of a dialog or another UI component. In these cases, we may use the ngModel directive without any object, as long as it will not be used together with other properties or be persisted.

ngClick and other event directives

The ngClick directive is one of the most useful kinds of directives in the framework. It allows you to bind any custom behavior to the click event of the element. The following code is an example of the usage of the ngClick directive calling a function:

index.html

```html
<!doctype html>
<html ng-app="parking">
  <head>
    <title>[Packt] Parking</title>
    <script src="angular.js"></script>
    <script>
      var parking = angular.module("parking", []);
      parking.controller("parkingCtrl", function ($scope) {
        $scope.appTitle = "[Packt] Parking";
        $scope.cars = [];
        $scope.park = function (car) {
          car.entrance = new Date();
          $scope.cars.push(car);
          delete $scope.car;
        };
      });
    </script>
  </head>
  <body ng-controller="parkingCtrl">
    <h3 ng-bind="appTitle"></h3>
    <table>
      <thead>
        <tr>
          <th>Plate</th>
          <th>Entrance</th>
        </tr>
      </thead>
      <tbody>
        <tr ng-repeat="car in cars">
          <td><span ng-bind="car.plate"></span></td>
          <td><span ng-bind="car.entrance"></span></td>
        </tr>
      </tbody>
    </table>
    <input type="text" ng-model="car.plate" placeholder="What's the plate?" />
    <button ng-click="park(car)">Park</button>
  </body>
</html>
```

Here there is another pitfall. Inside the ngClick directive, we call the park function, passing the car as a parameter.
Since we have access to the scope through the controller, wouldn't it be easier to just access it directly, without passing any parameter at all? Keep in mind that we must take care of the coupling level between the view and the controller. One way to keep it low is to avoid reading the scope object directly from the controller, instead passing everything the controller needs as parameters from the view. This increases the controller's testability and also makes things clearer and more explicit. Other directives that have the same behavior, but are triggered by other events, are ngBlur, ngChange, ngCopy, ngCut, ngDblClick, ngFocus, ngKeyPress, ngKeyDown, ngKeyUp, ngMousedown, ngMouseenter, ngMouseleave, ngMousemove, ngMouseover, ngMouseup, and ngPaste. Filters Filters, together with other technologies such as directives and expressions, are responsible for the extraordinary expressiveness of the framework. They let us easily manipulate and transform any value, not only combined with expressions inside a template, but also injected into other components such as controllers and services. They are really useful when we need to format dates and money according to our current locale, or even to support the filtering feature of a grid component. Filters are the perfect answer when we need to perform any kind of data manipulation easily. currency The currency filter is used to format a number based on a currency. The basic usage of this filter is without any parameter: {{ 10 | currency}} The result of the evaluation will be the number $10.00, formatted and prefixed with the dollar sign. To achieve a localized output, in this case R$10,00 for Brazilian currency instead of $10.00, we need to configure the Brazilian (pt-br) locale, available inside the AngularJS distribution package.
There, we can find locales for most countries, and we just need to import the right one into our application: <script src="js/lib/angular-locale_pt-br.js"></script> After importing the locale, we no longer need to pass the currency symbol because it is already defined inside the locale. Besides the currency, the locale also defines the configuration of many other variables, such as the days of the week and the months, which is very useful when combined with the next filter, used to format dates. date The date filter is one of the most useful filters of the framework. Generally, a date value comes from the database or any other source in a raw and generic format. Because of this, filters like this one are essential to any kind of application. Basically, we can use this filter by declaring it inside any expression. In the following example, we use the filter on a date variable attached to the scope. {{ car.entrance | date }} The output will be Dec 10, 2013. However, there are thousands of combinations that we can make with the optional format mask. {{ car.entrance | date:'MMMM dd/MM/yyyy HH:mm:ss' }} Using this format, the output changes to December 10/12/2013 21:42:10. filter Have you ever needed to filter a list of data? This filter performs exactly this task, acting on an array and applying any filtering criteria. Now, let's include a field in our car parking application to search for any parked car, and use this filter to do the job. index.html <input type="text" ng-model="criteria" placeholder="What are you looking for?" /> <table> <thead> <tr> <th></th> <th>Plate</th> <th>Color</th> <th>Entrance</th> </tr> </thead> <tbody> <tr ng-class="{selected: car.selected}" ng-repeat="car in cars | filter:criteria" > <td> <input type="checkbox" ng-model="car.selected" /> </td> <td>{{car.plate}}</td> <td>{{car.color}}</td> <td>{{car.entrance | date:'dd/MM/yyyy hh:mm'}}</td> </tr> </tbody> </table> The result is really impressive. With just an input field and the filter declaration, we did the job.
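Conceptually, the filter:criteria expression above performs a case-insensitive substring match against the values of each item in the array. The following framework-free sketch (plain JavaScript, not AngularJS source code, with illustrative sample data) shows that idea:

```javascript
// Sketch of what filter:criteria does conceptually: keep the items whose
// values contain the criteria as a case-insensitive substring.
var cars = [
  { plate: "AAA-1234", color: "Blue" },
  { plate: "BBB-5678", color: "Red" }
];

function matches(car, criteria) {
  var needle = String(criteria).toLowerCase();
  // Check every value of the object for the substring.
  return Object.keys(car).some(function (key) {
    return String(car[key]).toLowerCase().indexOf(needle) !== -1;
  });
}

var filtered = cars.filter(function (car) { return matches(car, "blue"); });
console.log(filtered.length);   // 1
console.log(filtered[0].plate); // "AAA-1234"
```

The real AngularJS filter is richer (it supports object criteria and predicate functions), but the substring behavior shown here is what the simple string form gives us.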
Integrating the backend with AJAX AJAX, also known as Asynchronous JavaScript and XML, is a technology that allows applications to send and retrieve data from the server asynchronously, without refreshing the page. The $http service wraps the low-level interaction with the XMLHttpRequest object, providing an easy way to perform calls. This service can be called by just passing a configuration object, used to set important information such as the method, the URL of the requested resource, the data to be sent, and so on: $http({method: "GET", url: "/resource"}) .success(function (data, status, headers, config, statusText) { }) .error(function (data, status, headers, config, statusText) { }); To make it easier to use, the following shortcut methods are available for this service. In this case, the configuration object is optional. $http.get(url, [config]) $http.post(url, data, [config]) $http.put(url, data, [config]) $http.head(url, [config]) $http.delete(url, [config]) $http.jsonp(url, [config]) Now, it's time to integrate our parking application with the backend by calling the cars resource with the GET method. It will retrieve the cars, binding them to the $scope object. In case something goes wrong, we will log it to the console. controllers.js parking.controller("parkingCtrl", function ($scope, $http) { $scope.appTitle = "[Packt] Parking"; $scope.park = function (car) { car.entrance = new Date(); $scope.cars.push(car); delete $scope.car; }; var retrieveCars = function () { $http.get("/cars") .success(function(data, status, headers, config) { $scope.cars = data; }) .error(function(data, status, headers, config) { switch(status) { case 401: { $scope.message = "You must be authenticated!";
break; } case 500: { $scope.message = "Something went wrong!"; break; } } console.log(data, status); }); }; retrieveCars(); }); Summary This article introduced you to the fundamentals of AngularJS in order to design and construct reusable, maintainable, and modular web applications. Resources for Article: Further resources on this subject: AngularJS Project [article] Working with Live Data and AngularJS [article] CreateJS – Performing Animation and Transforming Function [article]
Packt
23 Jul 2012
13 min read

Ruby with MongoDB for Web Development

Creating documents Let's first see how we can create documents in MongoDB. As we have briefly seen, MongoDB deals with collections and documents instead of tables and rows. Time for action – creating our first document Suppose we want to create the book object having the following schema: book = { name: "Oliver Twist", author: "Charles Dickens", publisher: "Dover Publications", published_on: "December 30, 2002", category: ['Classics', 'Drama'] } On the Mongo CLI, we can add this book object to our collection using the following command: > db.books.insert(book) Suppose we also add the shelf collection (for example, the floor, the row, the column the shelf is in, the book indexes it maintains, and so on that are part of the shelf object), which has the following structure: shelf : { name : 'Fiction', location : { row : 10, column : 3 }, floor : 1, lex : { start : 'O', end : 'P' } } Remember, it's quite possible that a few years down the line, some shelf instances may become obsolete and we might want to maintain their record. Maybe we could have another shelf instance containing only books that are to be recycled or donated. What can we do? We can approach this as follows: The SQL way: Add additional columns to the table and ensure that there is a default value set in them. This adds a lot of redundancy to the data. This also reduces the performance a little and considerably increases the storage. Sad but true! The NoSQL way: Add the additional fields whenever you want. The following are the MongoDB schemaless object model instances: > db.book.shelf.find() { "_id" : ObjectId("4e81e0c3eeef2ac76347a01c"), "name" : "Fiction", "location" : { "row" : 10, "column" : 3 }, "floor" : 1 } { "_id" : ObjectId("4e81e0fdeeef2ac76347a01d"), "name" : "Romance", "location" : { "row" : 8, "column" : 5 }, "state" : "window broken", "comments" : "keep away from children" } What just happened? You will notice that the second object has more fields, namely comments and state.
When fetching objects, it's fine if you get extra data. That is the beauty of NoSQL. When the first document is fetched (the one with the name Fiction), it will not contain the state and comments fields, but the second document (the one with the name Romance) will have them. Are you worried about what will happen if we try to access non-existing data from an object, for example, accessing comments from the first object fetched? This can be logically resolved—we can check the existence of a key, or default to a value in case it's not there, or ignore its absence. This is typically done anyway in code when we access objects. Notice that when the schema changed we did not have to add fields to every object with default values, like we do when using a SQL database. So there is no redundant information in our database. This ensures that the storage is minimal and, in turn, the object information fetched will have concise data. So there was no redundancy and no compromise on storage or performance. But wait! There's more. NoSQL scores over SQL databases The way many-to-many relations are managed shows how we can do things with MongoDB that simply cannot be done in a relational database. The following is an example: Each book can have reviews and votes given by customers. We should be able to see these reviews and votes, and also maintain a list of top-voted books. If we had to do this in a relational database, it would look somewhat like the relationship diagram shown as follows: (get scared now!) The vote_count and review_count fields are inside the books table and would need to be updated every time a user votes up/down a book or writes a review.
So, to fetch a book along with its votes and reviews, we would need to fire three queries to fetch the information: SELECT * from book where id = 3; SELECT * from reviews where book_id = 3; SELECT * from votes where book_id = 3; We could also use a join for this: SELECT * FROM books JOIN reviews ON reviews.book_id = books.id JOIN votes ON votes.book_id = books.id; In MongoDB, we can do this directly using embedded documents or relational documents. Using MongoDB embedded documents Embedded documents, as the name suggests, are documents that are embedded in other documents. This is one of the features of MongoDB and this cannot be done in relational databases. Ever heard of a table embedded inside another table? Instead of four tables and a complex many-to-many relationship, we can say that reviews and votes are part of a book. So, when we fetch a book, the reviews and the votes automatically come along with it. Embedded documents are analogous to chapters inside a book. Chapters cannot be read unless you open the book. Similarly, embedded documents cannot be accessed unless you access the document. For the UML savvy, embedded documents are similar to the contains or composition relationship. Time for action – embedding reviews and votes In MongoDB, the embedded object physically resides inside the parent. So if we had to maintain reviews and votes we could model the object as follows: book : { name: "Oliver Twist", reviews : [ { user: "Gautam", comment: "Very interesting read" }, { user: "Harry", comment: "Who is Oliver Twist?" } ], votes: [ "Gautam", "Tom", "Dick"] } What just happened? We now have reviews and votes inside the book. They cannot exist on their own. Did you notice that they look similar to JSON hashes and arrays? Indeed, they are an array of hashes. Embedded documents are just like hashes inside another object. There is a subtle difference between hashes and embedded objects, as we shall see later on in the book.
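The earlier point about guarding against absent keys in schemaless documents can be sketched in plain JavaScript. This is a framework-free illustration (not mongo shell code), reusing the two shelf documents from before:

```javascript
// Two schemaless documents: only the second has the optional fields.
var fiction = { name: "Fiction", location: { row: 10, column: 3 }, floor: 1 };
var romance = { name: "Romance", state: "window broken", comments: "keep away from children" };

// Default to a value when an optional field is absent --
// no schema migration or default-filled columns needed.
function commentsFor(shelf) {
  return shelf.comments || "(no comments)";
}

console.log(commentsFor(fiction)); // "(no comments)"
console.log(commentsFor(romance)); // "keep away from children"
```

This is exactly the "check the existence of a key, or default to a value" pattern the text describes, and it is why missing fields in older documents cost nothing.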
Have a go hero – adding more embedded objects to the book Try to add more embedded objects, such as orders, inside the book document: order = { name: "Toby Jones", type: "lease", units: 1, cost: 40 } It works! Fetching embedded objects We can fetch a book along with the reviews and the votes with it. This can be done by executing the following command: > var book = db.books.findOne({name : 'Oliver Twist'}) > book.reviews.length 2 > book.votes.length 3 > book.reviews [ { user: "Gautam", comment: "Very interesting read" }, { user: "Harry", comment: "Who is Oliver Twist?" } ] > book.votes [ "Gautam", "Tom", "Dick"] This does indeed look simple, doesn't it? By fetching a single object, we are able to get the review and vote count along with the data. Use embedded documents only if you really have to! Embedded documents increase the size of the object. So, if we have a large number of embedded documents, it could adversely impact performance. Even to get just the name of the book, the reviews and the votes are fetched. Using MongoDB document relationships Just like we have embedded documents, we can also set up relationships between different documents. Time for action – creating document relations The following is another way to create the same relationship between books, users, reviews, and votes. This is more like the SQL way. book: { _id: ObjectId("4e81b95ffed0eb0c23000002"), name: "Oliver Twist", author: "Charles Dickens", publisher: "Dover Publications", published_on: "December 30, 2002", category: ['Classics', 'Drama'] } Every document that is created in MongoDB has an object ID associated with it. In the next chapter, we shall learn more about object IDs in MongoDB. By using these object IDs we can easily identify different documents. They can be considered as primary keys.
So, we can also create the reviews collection and the votes collection as follows: users: [ { _id: ObjectId("8d83b612fed0eb0bee000702"), name: "Gautam" }, { _id : ObjectId("ab93b612fed0eb0bee000883"), name: "Harry" } ] reviews: [ { _id: ObjectId("5e85b612fed0eb0bee000001"), user_id: ObjectId("8d83b612fed0eb0bee000702"), book_id: ObjectId("4e81b95ffed0eb0c23000002"), comment: "Very interesting read" }, { _id: ObjectId("4585b612fed0eb0bee000003"), user_id : ObjectId("ab93b612fed0eb0bee000883"), book_id: ObjectId("4e81b95ffed0eb0c23000002"), comment: "Who is Oliver Twist?" } ] votes: [ { _id: ObjectId("6e95b612fed0eb0bee000123"), user_id : ObjectId("8d83b612fed0eb0bee000702"), book_id: ObjectId("4e81b95ffed0eb0c23000002"), }, { _id: ObjectId("4585b612fed0eb0bee000003"), user_id : ObjectId("ab93b612fed0eb0bee000883"), } ] What just happened? Hmm!! Not very interesting, is it? It doesn't even seem right. That's because it isn't the right choice in this context. It's very important to know how to choose between nesting documents and relating them. In your object model, if you will never search by the nested document (that is, look up the parent from the child), embed it. Just in case you are not sure about whether you would need to search by an embedded document, don't worry too much – it does not mean that you cannot search among embedded objects. You can use Map/Reduce to gather the information. Comparing MongoDB versus SQL syntax This is a good time to sit back and evaluate the similarities and dissimilarities between the MongoDB syntax and the SQL syntax.
Let's map them together, SQL command first, then its MongoDB equivalent:

SELECT * FROM books => db.books.find()
SELECT * FROM books WHERE id = 3 => db.books.find( { id : 3 } )
SELECT * FROM books WHERE name LIKE 'Oliver%' => db.books.find( { name : /^Oliver/ } )
SELECT * FROM books WHERE name LIKE '%Oliver%' => db.books.find( { name : /Oliver/ } )
SELECT * FROM books WHERE publisher = 'Dover Publications' AND published_date = "2011-8-01" => db.books.find( { publisher : "Dover Publications", published_date : ISODate("2011-8-01") } )
SELECT * FROM books WHERE published_date > "2011-8-01" => db.books.find( { published_date : { $gt : ISODate("2011-8-01") } } )
SELECT name FROM books ORDER BY published_date => db.books.find( {}, { name : 1 } ).sort( { published_date : 1 } )
SELECT name FROM books ORDER BY published_date DESC => db.books.find( {}, { name : 1 } ).sort( { published_date : -1 } )
SELECT votes.name FROM books JOIN votes WHERE votes.book_id = books.id => db.books.find( { votes : { $exists : 1 } }, { "votes.name" : 1 } )

Some more notable comparisons between MongoDB and relational databases are: MongoDB does not support joins. Instead it fires multiple queries or uses Map/Reduce. We shall soon see why the NoSQL faction does not favor joins. SQL has stored procedures; MongoDB supports JavaScript functions. MongoDB has indexes similar to SQL. MongoDB also supports Map/Reduce functionality. MongoDB supports atomic updates like SQL databases. Embedded or related objects are sometimes used instead of a SQL join. MongoDB collections are analogous to SQL tables. MongoDB documents are analogous to SQL rows. Using Map/Reduce instead of join We have seen this mentioned a few times earlier—it's worth jumping into it, at least briefly. Map/Reduce is a concept that was introduced by Google in 2004. It's a way of processing tasks in a distributed manner: we "map" tasks to workers and then "reduce" the results.
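Two of the regex-based lookups in the mapping above differ only in anchoring, and that difference is easy to miss. A quick plain-JavaScript check (illustrative sample data, not mongo shell code) makes it concrete:

```javascript
// /^Oliver/ behaves like LIKE 'Oliver%' (anchored at the start of the string),
// while /Oliver/ behaves like LIKE '%Oliver%' (match anywhere in the string).
var names = ["Oliver Twist", "The Real Oliver", "Great Expectations"];

var startsWith = names.filter(function (n) { return /^Oliver/.test(n); });
var contains = names.filter(function (n) { return /Oliver/.test(n); });

console.log(startsWith); // [ 'Oliver Twist' ]
console.log(contains);   // [ 'Oliver Twist', 'The Real Oliver' ]
```

MongoDB applies the same regex semantics when matching field values, which is why the two find() calls in the table are not interchangeable (and why only the anchored form can use an index efficiently).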
Understanding functional programming Functional programming is a programming paradigm that has its roots in lambda calculus. If that sounds intimidating, remember that JavaScript could be considered a functional language. The following is a snippet of functional programming: $(document).ready( function () { $('#element').click( function () { // do something here }); $('#element2').change( function () { // do something here }) }); We can have functions inside functions. Higher-level languages (such as Java and Ruby) support anonymous functions and closures but are still procedural languages. Functional programs rely on the results of one function being chained into other functions. Building the map function The map function processes a chunk of data. Data that is fed to this function could be accessed across a distributed filesystem, multiple databases, the Internet, or even any mathematical computation series! function map(void) -> void The map function "emits" information that is collected by the "mystical super gigantic computer program" and feeds that to the reducer functions as input. MongoDB as a database supports this paradigm, making it "the all powerful" (of course I am joking, but it does indeed make MongoDB very powerful). Time for action – writing the map function for calculating vote statistics Let's assume we have a document structure as follows: { name: "Oliver Twist", votes: ['Gautam', 'Harry'], published_on: "December 30, 2002" } The map function for such a structure could be as follows: function() { emit( this.name, {votes : this.votes} ); } What just happened? The emit function emits the data. Notice that the data is emitted as a (key, value) structure. Key: This is the parameter over which we want to gather information. Typically it would be some primary key, or some key that helps identify the information. For the SQL savvy, typically the key is the field we use in the GROUP BY clause. Value: This is a JSON object.
This can have multiple values and this is the data that is processed by the reduce function. We can call emit more than once in the map function. This would mean we are processing data multiple times for the same object. Building the reduce function The reduce functions are the consumer functions that process the information emitted from the map functions and emit the results to be aggregated. For each key emitted by the map functions, a reduce function emits the result. MongoDB collects and collates the results. This turns collection and processing into a massively parallel system, which is what gives MongoDB its power. The reduce functions have the following signature: function reduce(key, values_array) -> value Time for action – writing the reduce function to process emitted information This could be the reduce function for the previous example: function(key, values) { var result = {votes: 0} values.forEach(function(value) { result.votes += value.votes; }); return result; } What just happened? reduce takes an array of values – so it is important to process an array every time. There are various options to Map/Reduce that help us process data. Let's analyze this function in more detail. The variable result has a structure similar to what was emitted from the map function. This is important, as we want the results from every document in the same format. If we need to process more results, we can use the finalize function (more on that later). The values are always passed as arrays. It's important that we iterate the array, as there could be multiple values emitted from different map functions with the same key.
So, we process the whole array to ensure that we collate the results rather than overwrite them.
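The whole map-emit-reduce cycle can be simulated in plain Node.js. This is only an illustration of how emitted values are grouped by key and handed to reduce as arrays, not how MongoDB runs map/reduce internally; note that here we emit a numeric vote count per document (rather than the raw votes array from the text) so that the reduce function's sum is numeric:

```javascript
// Sample "collection": two documents share the same book name.
var books = [
  { name: "Oliver Twist", votes: ["Gautam", "Harry"] },
  { name: "Oliver Twist", votes: ["Tom"] },
  { name: "Great Expectations", votes: ["Dick"] }
];

// The "mystical" collector: groups emitted values by key.
var emitted = {};
function emit(key, value) {
  (emitted[key] = emitted[key] || []).push(value);
}

// The map function, called with `this` bound to each document.
function map() {
  emit(this.name, { votes: this.votes.length });
}

// The reduce function: always receives an ARRAY of emitted values per key.
function reduce(key, values) {
  var result = { votes: 0 };
  values.forEach(function (value) { result.votes += value.votes; });
  return result;
}

// Run the cycle: map every document, then reduce each key's value array.
books.forEach(function (doc) { map.call(doc); });
var results = {};
Object.keys(emitted).forEach(function (key) {
  results[key] = reduce(key, emitted[key]);
});

console.log(results["Oliver Twist"].votes);       // 3
console.log(results["Great Expectations"].votes); // 1
```

Because "Oliver Twist" was emitted twice, its reduce call receives a two-element array, which is exactly why the reduce function must iterate and collate instead of taking a single value.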

Packt
11 May 2011
7 min read

IBM Lotus Domino: Creating Action Buttons and Adding Style to Views

IBM Lotus Domino: Classic Web Application Development Techniques A step-by-step guide for web application development and quick tips to enhance applications using Lotus Domino Provide view navigation buttons Simple views intended to provide information (for example, a table of values) or links to a limited number of documents can stand alone quite nicely, embedded on a page or a view template. But if more than a handful of documents display in the view, you should provide users a way to move forward and backward through the view. If you use the View Applet, enable the scroll bars; otherwise add some navigational buttons to the view templates to enable users to move around in it. Code next and previous navigation buttons If you set the line count for a view, only that number of rows is sent to the browser. You need to add Action buttons or hotspots on the view template to enable users to advance the view to the next set of documents or to return to the previous set of documents—essentially paging backward and forward through the view. Code a Next button with this formula: @DbCommand("Domino"; "ViewNextPage") Code a Previous button with this formula: @DbCommand("Domino"; "ViewPreviousPage") Code first and last buttons Buttons can be included on the view template to page to the first and last documents in the view. Code an @Formula in a First button's Click event to compute and open a relative URL. The link reopens the current view and positions it at the first document: @URLOpen("/"+@WebDbName+"/"+@Subset(@ViewTitle;-1) + "?OpenView&Start=1") For a Last button, add a Computed for Display field to the view template with this @Formula: @Elements(@DbColumn("":"NoCache"; "" ; @ViewTitle; 1)) The value for the field (vwRows in this example) is the current number of documents in the view. 
This information is used in the @Formula for the Last button's Click event: url := "/" + @WebDbName + "/" + @Subset(@ViewTitle;-1) ; @URLOpen(url + "?OpenView&Start=" + @Text(vwRows)) When Last is clicked, the view reopens, positioned at the last document. Please note that for very large views, the @Formula for field vwRows may fail because of limitations in the amount of data that can be returned by @DbColumn. Let users specify a line count As computer monitors today come in a wide range of sizes and resolutions, it may be difficult to determine the right number of documents to display in a view to accommodate all users. On some monitors the view may seem too short, on others too long. Here is a strategy that enables users to specify how many lines to display; you might adapt it to your application. The solution relies on several components working together: Several Computed for display fields on the view template A button that sets the number of lines with JavaScript Previous and Next buttons that run JavaScript to page through the view The technique uses the Start and Count parameters, which can be used when you open a view with a URL. The Start parameter, used in a previous example, specifies the row or document within a view that should display at the top of the view window on a page. The Count parameter specifies how many rows or documents should display on the page. The Count parameter overrides the line count setting that you may have set on an embedded view element. Here are the Computed for display fields to be created on the view template. The Query_String_Decoded field (a CGI variable) must be named as such, but all the other field names in this list are arbitrary.
Following each field name is the @Formula that computes its value: Query_String_Decoded: Query_String_Decoded vwParms: @Right(@LowerCase(Query_String_Decoded); "&") vwStart: @If(@Contains(vwParms; "start="); @Middle(vwParms; "start="; "&"); "1") vwCount: @If(@Contains(vwParms; "count="); @Middle(vwParms; "count="; "&"); "10") vwURL: "/" + @WebDbName + "/"+ @Subset(@ViewTitle;1) + "?OpenView" vwRows: @Elements(@DbColumn("":"NoCache"; ""; @ViewTitle; 1)) countFlag: "n" newCount: "1" Add several buttons to the view template. Code JavaScript in each button's onClick event. You may want to code these scripts inline for testing, and then move them to a JavaScript library when you know they are working the way you want them to. The Set Rows button's onClick event is coded with JavaScript that receives a line count from the user. If the user-entered line count is not valid, then the current line count is retained. A flag is set indicating that the line count may have been changed: var f = document.forms[0] ; var rows = parseInt(f.vwRows.value) ; var count = prompt("Number of Rows?","10") ; if ( isNaN(count) || count < 1 || count >= rows ) { count = f.vwCount.value ; } f.newCount.value = count ; f.countFlag.value = "y" ; The Previous button's onClick event is coded to page backward through the view using the user-entered line count: var f = document.forms[0] ; var URL = f.vwURL.value ; var ctFlag = f.countFlag.value ; var oCT = parseInt(f.vwCount.value) ; var nCT = parseInt(f.newCount.value) ; var oST = parseInt(f.vwStart.value) ; var count ; var start ; if ( ctFlag == "n" ) { count = oCT ; start = oST - oCT ; } else { count = nCT ; start = oST - nCT ; } if (start < 1 ) { start = 1 ; } location.href = URL + "&Start=" + start + "&Count=" + count ; The Next button pages forward through the view using the user-entered line count: var f = document.forms[0] ; var URL = f.vwURL.value ; var ctFlag = f.countFlag.value ; var oCT = parseInt(f.vwCount.value) ; var nCT =
parseInt(f.newCount.value) ; var start = parseInt(f.vwStart.value) + oCT ; if ( ctFlag == "n" ) { location.href = URL + "&Start=" + start + "&Count=" + oCT ; } else { location.href = URL + "&Start=" + start + "&Count=" + nCT ; } Finally, if First and Last buttons are included with this scheme, they need to be recoded as well to work with a user-specified line count. The @formula in the First button's Click event now looks like this: count := @If(@IsAvailable(vwCount); vwCount; "10") ; parms := "?OpenView&Start=1&Count=" + count ; @URLOpen("/" + @WebDbName + "/" + @Subset(@ViewTitle;-1) + parms) ; The @formula in the Last button's Click event is also a little more complicated. Note that if the field vwRows is not available, then the Start value is set to 1,000. This is really more for debugging, since the Start parameter should always be set to the value of vwRows: start := @If(@IsAvailable(vwRows); @Text(vwRows); "1000") ; count := @If(@IsAvailable(vwCount); vwCount; "10") ; parms := "?OpenView&Start=" + start + "&Count=" + count ; url := "/" + @WebDbName + "/" + @Subset(@ViewTitle;-1) ; @URLOpen(url + parms) ; Code expand and collapse buttons for categorized views Two other navigational buttons should be included on the view template for categorized views or views that include document hierarchies. These buttons expand all categories and collapse all categories respectively: The Expand All button's Click event contains this @Command: @Command([ViewExpandAll]) The Collapse All button's Click event contains this @Command: @Command([ViewCollapseAll]) Co-locate and define all Action buttons Action Bar buttons can be added to a view template as well as to a view. If Action buttons appear on both design elements, then Domino places all the buttons together on the same top row.
In the following image, the first button is from the view template, and the last three are from the view itself: If it makes more sense for the buttons to be arranged in a different order, then take control of their placement by co-locating them all either on the view template or on the view. Create your own Action buttons As mentioned previously, Action Bar buttons are rendered in a table placed at the top of a form. But on typical Web pages, buttons and hotspots are located below a banner, or in a menu at the left or the right. Buttons along the top of a form look dated and may not comply with your organization's web development standards. You can replace the view template and view Action buttons with hotspot buttons placed elsewhere on the view template: Create a series of hotspots or hotspot buttons on the view template, perhaps below a banner. Code @formulas for the hotspots that are equivalent to the Action Bar button formulas. Define a CSS class for those hotspots, and code appropriate CSS rules. Delete or hide from the Web all standard Action Bar buttons on the view template and on the view.
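The Start/Count extraction performed earlier by the vwStart and vwCount Computed for display fields (via @Contains and @Middle) can be sketched in plain JavaScript. This is an illustration of the parsing logic only, not code that runs inside Domino:

```javascript
// Mirror of the vwStart/vwCount @Formulas: pull a named parameter out of a
// Domino-style query string, falling back to a default when it is absent.
function viewParams(queryString) {
  var parms = queryString.toLowerCase();
  function param(name, fallback) {
    var marker = name + "=";
    var i = parms.indexOf(marker);
    if (i === -1) { return fallback; }
    var rest = parms.substring(i + marker.length);
    var amp = rest.indexOf("&");
    return amp === -1 ? rest : rest.substring(0, amp);
  }
  return { start: param("start", "1"), count: param("count", "10") };
}

console.log(viewParams("?OpenView&Start=21&Count=10")); // { start: '21', count: '10' }
console.log(viewParams("?OpenView"));                   // { start: '1', count: '10' }
```

The defaults of "1" and "10" match the @If fallbacks in the field formulas, so the first page of a freshly opened view shows ten rows starting at the top.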

Kunal Chaudhari
17 Apr 2018
11 min read

Building your first Vue.js 2 Web application

Vue is a relative newcomer in the JavaScript frontend landscape, but a very serious challenger to the current leading libraries. It is simple, flexible, and very fast, while still providing a lot of features and optional tools that can help you build a modern web app efficiently. In today's tutorial, we will explore the Vue.js library and then we will start creating our first web app. Why another frontend framework? Its creator, Evan You, calls it the progressive framework. Vue is incrementally adoptable, with a core library focused on user interfaces that you can use in existing projects. You can make small prototypes all the way up to large and sophisticated web applications. Vue is approachable: beginners can pick up the library easily, and experienced developers can be productive very quickly. Vue roughly follows a Model-View-ViewModel architecture, which means the View (the user interface) and the Model (the data) are separated, with the ViewModel (Vue) being a mediator between the two. It handles the updates automatically and has already been optimized for you. Therefore, you don't have to specify when a part of the View should update, because Vue will choose the right way and time to do so. The library also takes inspiration from other similar libraries such as React, Angular, and Polymer.
The following is an overview of its core features: A reactive data system that can update your user interface automatically, with a lightweight virtual-DOM engine and minimal optimization efforts required. Flexible View declaration: artist-friendly HTML templates, JSX (HTML inside JavaScript), or hyperscript render functions (pure JavaScript). Composable user interfaces with maintainable and reusable components. Official companion libraries that come with routing, state management, scaffolding, and more advanced features, making Vue a non-opinionated but fully fleshed out frontend framework. Vue.js - A trending project Evan You started working on the first prototype of Vue in 2013, while working at Google, using Angular. The initial goal was to have all the cool features of Angular, such as data binding and data-driven DOM, but without the extra concepts that make a framework opinionated and heavy to learn and use. The first public release was published in February 2014 and was an immediate success from the very first day, reaching the HackerNews frontpage, the top spot on /r/javascript, and 10k unique visits on the official website. The first major version 1.0 was reached in October 2015, and by the end of that year, the npm downloads rocketed to 382k ytd, the GitHub repository received 11k stars, the official website had 363k unique visitors, and the popular PHP framework Laravel had picked Vue as its official frontend library instead of React. The second major version, 2.0, was released in September 2016, with a new virtual DOM-based renderer and many new features such as server-side rendering and performance improvements. This is the version we will use in this article. It is now one of the fastest frontend libraries, outperforming even React according to a comparison refined with the React team. At the time of writing this article, Vue was the second most popular frontend library on GitHub with 72k stars, just behind React and ahead of Angular 1.
The next evolution of the library on the roadmap includes more integration with Vue-native libraries such as Weex and NativeScript to create native mobile apps with Vue, plus new features and improvements. Today, Vue is used by many companies such as Microsoft, Adobe, Alibaba, Baidu, Xiaomi, Expedia, Nintendo, and GitLab.

Compatibility requirements

Vue doesn't have any dependency and can be used in any ECMAScript 5 minimum-compliant browser. This means that it is not compatible with Internet Explorer 8 or earlier, because it needs relatively new JavaScript features such as Object.defineProperty, which can't be polyfilled on older browsers. In this article, we are writing code in JavaScript version ES2015 (formerly ES6), so you will need a modern browser to run the examples (such as Edge, Firefox, or Chrome). At some point, we will introduce a compiler called Babel that will help us make our code compatible with older browsers.

One-minute setup

Without further ado, let's start creating our first Vue app with a very quick setup. Vue is flexible enough to be included in any web page with a simple script tag. Let's create a very simple web page that includes the library, with a simple div element and another script tag:

<html>
  <head>
    <meta charset="utf-8">
    <title>Vue Project Guide setup</title>
  </head>
  <body>
    <!-- Include the library in the page -->
    <script src="https://unpkg.com/vue/dist/vue.js"></script>

    <!-- Some HTML -->
    <div id="root">
      <p>Is this a Hello world?</p>
    </div>

    <!-- Some JavaScript -->
    <script>
      console.log('Yes! We are using Vue version', Vue.version)
    </script>
  </body>
</html>

In the browser console, we should have something like this:

Yes! We are using Vue version 2.0.3

As you can see in the preceding code, the library exposes a Vue object that contains all the features we need to use it. We are now ready to go.

Creating an app

For now, we don't have any Vue app running on our web page.
The whole library is based on Vue instances, which are the mediators between your View and your data. So, we need to create a new Vue instance to start our app:

// New Vue instance
var app = new Vue({
  // CSS selector of the root DOM element
  el: '#root',
  // Some data
  data () {
    return {
      message: 'Hello Vue.js!',
    }
  },
})

The Vue constructor is called with the new keyword to create a new instance. It has one argument: the options object. It can have multiple attributes (called options). For now, we are using only two of them.

With the el option, we tell Vue where to add (or "mount") the instance on our web page using a CSS selector. In the example, our instance will use the <div id="root"> DOM element as its root element. We could also use the $mount method of the Vue instance instead of the el option:

var app = new Vue({
  data () {
    return {
      message: 'Hello Vue.js!',
    }
  },
})

// We add the instance to the page
app.$mount('#root')

Most of the special methods and attributes of a Vue instance start with a dollar character. We also initialize some data in the data option, with a message property that contains a string. Now the Vue app is running, but it doesn't do much yet.

You can add as many Vue apps as you like on a single web page. Just create a new Vue instance for each of them and mount them on different DOM elements. This comes in handy when you want to integrate Vue in an existing project.

Vue devtools

An official debugger tool for Vue is available for Chrome as an extension called Vue.js devtools. It can help you see how your app is running and debug your code. You can download it from the Chrome Web Store (https://chrome.google.com/webstore/search/vue) or from the Firefox addons registry (https://addons.mozilla.org/en-US/firefox/addon/vue-js-devtools/?src=ss).

For the Chrome version, you need to set an additional setting.
In the extension settings, enable Allow access to file URLs so that it can detect Vue on a web page opened from your local drive. On your web page, open the Chrome Dev Tools with the F12 shortcut (or Shift + Command + C on OS X) and look for the Vue tab (it may be hidden in the More tools... dropdown). Once it is opened, you can see a tree with our Vue instance, named Root by convention. If you click on it, the sidebar displays the properties of the instance.

You can drag and drop the devtools tab to your liking. Don't hesitate to place it among the first tabs, as it will be hidden on pages where Vue is not in development mode or is not running at all.

You can change the name of your instance with the name option:

var app = new Vue({
  name: 'MyApp',
  // ...
})

This will help you see where your instance is in the devtools when you have many more.

Templates make your DOM dynamic

With Vue, we have several systems at our disposal to write our View. For now, we will start with templates. A template is the easiest way to describe a View, because it looks a lot like HTML, but with some extra syntax to make the DOM dynamically update very easily.

Displaying text

The first template feature we will see is text interpolation, which is used to display dynamic text inside our web page. The text interpolation syntax is a pair of double curly braces containing a JavaScript expression of any kind. Its result will replace the interpolation when Vue processes the template. Replace the <div id="root"> element with the following:

<div id="root">
  <p>{{ message }}</p>
</div>

The template in this example has a <p> element whose content is the result of the message JavaScript expression. It returns the value of the message attribute of our instance. You should now have new text displayed on your web page: Hello Vue.js!. It doesn't seem like much, but Vue has done a lot of work for us here; we now have the DOM wired with our data.
To demonstrate this, open your browser console, change the app.message value, and press Enter on the keyboard:

app.message = 'Awesome!'

The message has changed. This is called data-binding. It means that Vue is able to automatically update the DOM whenever your data changes, without requiring anything on your part. The library includes a very powerful and efficient reactivity system that keeps track of all your data and is able to update what's needed when something changes. All of this is very fast indeed.

Adding basic interactivity with directives

Let's add some interactivity to our otherwise quite static app, for example, a text input that will allow the user to change the message displayed. We can do that in templates with special HTML attributes called directives. All the directives in Vue start with v- and follow the kebab-case syntax. That means you should separate the words with a dash. Remember that HTML attributes are case insensitive (whether they are uppercase or lowercase doesn't matter).

The directive we need here is v-model, which will bind the value of our <input> element with our message data property. Add a new <input> element with the v-model="message" attribute inside the template:

<div id="root">
  <p>{{ message }}</p>
  <!-- New text input -->
  <input v-model="message" />
</div>

Vue will now update the message property automatically when the input value changes. You can play with the content of the input to verify that the text updates as you type and that the value in the devtools changes. There are many more directives available in Vue, and you can even create your own.

To summarize, we quickly set up a web page to get started with Vue and wrote a simple app. We created a Vue instance to mount the Vue app on the page and wrote a template to make the DOM dynamic. Inside this template, we used a JavaScript expression to display text, thanks to text interpolation.
Finally, we added some interactivity with an input element that we bound to our data with the v-model directive.

You read an excerpt from a book written by Guillaume Chau, titled Vue.js 2 Web Development Projects. It's a project-based, practical guide to getting hands-on with Vue.js 2.5 development by building beautiful, functional, and performant web apps.

Why has Vue.js become so popular?
Building a real-time dashboard with Meteor and Vue.js
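As a closing aside, the data-binding shown in this excerpt is built on Object.defineProperty, mentioned earlier as the one feature Vue cannot polyfill on old browsers. The following is a loose, simplified sketch of the idea in plain JavaScript — not Vue's actual implementation — where `observe` and the `onChange` callback are hypothetical names used only for illustration:

```javascript
// Convert each data property into a getter/setter pair so that
// assignments can be intercepted and trigger an update callback.
function observe(data, onChange) {
  Object.keys(data).forEach(function (key) {
    var value = data[key];
    Object.defineProperty(data, key, {
      get: function () { return value; },
      set: function (newValue) {
        value = newValue;
        onChange(key, newValue); // Vue would re-render the affected DOM here
      },
    });
  });
  return data;
}

// Track every change notification instead of touching the DOM
var updates = [];
var state = observe({ message: 'Hello Vue.js!' }, function (key, val) {
  updates.push(key + ' -> ' + val);
});

// Triggers the setter, just like typing in the v-model bound input
state.message = 'Awesome!';
```

Typing in the input from the tutorial fires exactly this kind of setter, which is how Vue knows which parts of the DOM need refreshing.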
Savia Lobo
17 May 2018
14 min read
How to create a generic reusable section for a single page based website

There are countless variations when it comes to the different sections that can be incorporated into the design of a single page website. In this tutorial, we will cover how to create a generic section that can be extended into multiple sections, providing the ability to display any information your website needs. Single page sections are commonly used to display the following data to the user:

- Contact form (will be implemented in the next chapter)
- About us: This can be as simple as a couple of paragraphs talking about the company/individual, or more complex with images, even showing the team and their roles
- Projects/work: Any work you or the company has done and would like to showcase. They are usually linked to external pages or pop-up boxes containing more information about the project
- Useful company info, such as opening times

These are just some of the many uses for sections in a single page website. A good rule of thumb is that if it can be a page on another website, it can most likely be adapted into a section on a single page website. Also, depending on the amount of information a single section has, it could potentially be split into multiple sections. This article is an excerpt taken from the book 'Responsive Web Design by Example', written by Frahaan Hussain.

Single page section examples

Let's go through some examples of the sections mentioned above.

Example 1: Contact form

As can be seen in the contact form from Richman, the elements used are very similar to those of a contact page. A form is used with inputs for the various pieces of information required from the user, along with a button for submission. Not all contact forms will have the same fields; put in what you need, it may be more or less, and there is no right or wrong answer. Also at the bottom of the section is the company's logo along with some written contact information, which is also very common.
Some websites also display a map, usually using the Google Maps API; these mainly have a physical presence, such as a store. Website link—http://richman-kcm.com/

Example 2: About us

This is an excellent example of an about us page that uses the following elements to convey the information:

- Images: Display the individual's face, creating a very personal touch to the otherwise digital website
- Title: Used to display the individual's name. This can also be an image if you want a fancier title
- Simple text: Talks about who the person is and what they do
- Icons: Linking to the individual's social media accounts

Website link—http://designedbyfew.com/

Example 3: Projects/work

This website shows its work off very elegantly and cleanly, using images and little text. It also provides a carousel-like slider to display the work, which is extremely useful for showing the content bigger without displaying all of it at once, allowing a lot of content in a small section. Website link—http://peeltheorange.com/#recent-work

Example 4: Opening times

This website uses a background image, similar to the introduction section created in the previous chapter, and an additional image on top to display the opening times. This can also be achieved using a mixture of text and CSS styling for various facets such as the border. Website link—http://www.mumbaigate.co.uk/

Implementing a generic reusable single page section

We will now create a generic section that can easily be modified and reused in our single page portfolio website. But we still need some sort of layout/design in mind before we implement the section, so let's go with an Our Team style section.

What will the Our Team section contain?

The Our Team section will be a bit simpler, but it can easily be modified to accommodate the animations and styles displayed on the previously mentioned websites.
It will be similar to the following example: Website link—http://demo.themeum.com/html/oxygen/

The preceding example consists of the following elements:

- Heading
- Intro text (Lorem Ipsum in this case)
- Images displaying each member of the team
- Team member's name
- Their role
- Text informing the viewer a little bit about them
- Social links

We will also create our section using a similar layout. We are now finally going to use the column system to its full potential to provide a responsive experience using breakpoints.

Creating the Our Team section container

First, let's implement a simple container with the title and section introduction text, without any extra elements such as images. We will then use this to link to our navigation bar. Add the following code to the jumbotron div:

Let's go over what the preceding code is doing:

- Line 9 creates a container that is fluid, allowing it to span the browser's width fully. This can be changed to a regular container if you like. The id will be used very soon to link to the navigation bar.
- Line 10 creates a row in which our text elements will be stored.
- Line 11 creates a div that spans all 12 columns on all screen sizes and centers the text inside of it.
- Line 12 creates a simple header for the Team section.
- Lines 14 to 16 add introduction text. I have put the first two sentences of "Lorem Ipsum..." inside of it, but you can put anything you like.

All of this produces the following result:

Anchoring the Team section to the navigation bar

We will now link the navigation bar to the Team section. This will allow the user to navigate to the Team section without having to scroll up or down. At the moment, there is no need to scroll, but when more content is added this can become a problem, as a single page website can become quite long. Fortunately, we have already done the heavy lifting with the navigation bar through HTML and JavaScript, phew! First, let's change the name of the second button in the navigation bar to Team.
Update the navigation bar like so:

The navigation bar will now look as follows:

Fantastic, our navigation bar is looking more like what you would see on a real website. Now let's change the href to the same ID as the Team section, which was #TeamSection, like so:

Now, when we click on any of the navigation buttons, we get no JavaScript errors like we would have in the previous chapter, and the page automatically scrolls to each section without any extra JavaScript code.

Adding team pictures

Now let's use images to showcase the team members. I will use the image from the following link for all our team members, but in a real website you would obviously use different images: http://res.cloudinary.com/dmliyxggm/image/upload/v1511699813/John_vepwoz.png

I have modified the image so all the background is removed and the image is trimmed, so it looks as follows:

Up until now, all the images we have used have been stored on other websites, such as CDNs. This is great, but the need may arise to use a custom image like the previous one. We can either store it on a CDN, which is a very good approach (I would recommend Cloudinary: http://cloudinary.com/), or we can store it locally, which we will do now.
But these div classes will serve as structures for each team member and their respective content such as name and social links. We created a new row to group our new div classes. Inside each div we will represent each team member. The classes have been set up to be displayed like so: Extra small screens will only show a single team member on a single row Small and medium screens will show two team members on a single row Large and extra large screens will show four team members on a single row The rows are rows in their literal sense and not the class row. Another way to look at them is as lines. The sizes/breakpoints can easily be changed using the information regarding the grid from this article What Is Bootstrap, Why Do We Use It? Now let's add the team's images, update the previous code like so: The preceding code produces the following result: As you can see, this is not the desired effect we were looking for. As there are no size restrictions on the image, it is displayed at its original size. Which, on some screens, will produce a result similar to the monstrosity you saw before; worry not, this can easily be fixed. Add the classes img-fluid and img-thumbnail to each one of the images like so: The classes we added are designed to provide the following styling: img-fluid: Provides a responsive image that is automatically restricted based on the number of columns and browser size. img-thumbnail: Is more of an optional class, but it is still very useful. It provides a light border around the images to make them pop. This produces the following result: As it can be seen, this is significantly better than our previous result. Depending on the browser/screen size, the positioning will slightly change based on the column breakpoints we specified. As usual, I recommend that you resize the browser to see the different layouts. These images are almost complete; they look fine on most screen sizes, but they aren't actually centered within their respective div. 
This is evident on larger screen sizes, as can be seen here:

It isn't very noticeable, but the problem is there; it can be seen to the right of the last image. You could probably get away without fixing this, but when creating anything, from a website to a game, or even a table, the smallest details are what separate the good websites from the amazing websites. This is a simple idea called the aggregation of marginal gains. Fortunately for us, as many times before, Bootstrap offers functionality to resolve our little problem. Simply add the text-center class to the row within the div of the images, like so:

This now produces the following result:

There is one more slight problem, which is only noticeable on smaller screens when the images/member containers are stacked on top of each other. The following result is produced:

The problem might not jump out at first glance, but look closely at the gaps between the stacked images, or I should say, at the lack of a gap. This isn't the end of the world, but again, the small details make an immense difference to the look of a website. This can be easily fixed by adding padding to each team member div. First, add a class of teamMemberContainer to each team member div, like so:

Add the following CSS code to the index.css file to provide a more visible gap through the use of padding:

This simple solution now produces the following result:

If you want the gap to be bigger, simply increase the value; lower it to reduce the gap.

Team member info text

The previous section covered quite a lot; if you're not 100% sure about what we did, just go back and take a second look.
This section will thankfully be very simple, as it incorporates techniques and features we have already covered, to add the following information to each team member:

- Name
- Job title
- Member info text
- Plus anything else you need

Update each team member container with the following code:

Let's go over the new code line by line:

- Line 24 adds a simple header intended to display the team member's name. I have chosen an h4 tag, but you can use something bigger or smaller if you like.
- Line 26 adds the team member's job title. I have used a paragraph element with the Bootstrap class text-muted, which lightens the text color. If you would like more information regarding text styling within Bootstrap, feel free to check out the link below.
- Line 28 adds a simple paragraph, with no extra styling, to display some information about the team member.

Bootstrap text styling link—https://v4-alpha.getbootstrap.com/utilities/colors/

The code that we just added will produce the following result:

As usual, resize your browser to simulate different screen sizes. I use Chrome as my main browser, but Safari has an awesome feature baked right in that allows you to see how your website will run on different browsers/devices; this link will help you use it—https://www.tekrevue.com/tip/safari-responsive-design-mode/. Most browsers have a plethora of plugins to aid in this process, but not only does Safari have it built in, it also works really well.

It all looks fantastic, but again I will nitpick at the gaps. The image is right on top of the team member name text; a small gap would really help improve the visual fidelity. Add a class of teamMemberImage to each image tag, as demonstrated here:

Now add the following code to the index.css file, which will apply a margin of 10px below the image, hence moving all the content down:

Change the margin to suit your needs.
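The two index.css snippets referenced above were shown as images in the original article and did not survive extraction. Based on the surrounding descriptions, they plausibly looked something like this — the class names come from the text, the 10px image margin is stated, but the padding value is an assumption:

```css
/* Gap between stacked team member containers (padding value assumed) */
.teamMemberContainer {
  padding: 10px;
}

/* Stated 10px gap below each team member image */
.teamMemberImage {
  margin-bottom: 10px;
}
```

Increase or decrease these values to taste, as the article suggests.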
This very simple code will produce the following similar, yet subtly different and more visually appealing, result:

Team member social links

We have almost completed the Team section; only the social links remain for each team member. I will be using simple images for the social buttons from the following link: https://simplesharebuttons.com/html-share-buttons/

I will also only be adding three social icons, but feel free to add as many or as few as you need. Add the following code to the bottom of each team member container:

Let's go over each new line of code:

- Line 30 creates a div to store all the social buttons for each team member
- Line 31 creates a link to Facebook (add your social link in the href)
- Line 32 adds an image to show the Facebook social link
- Line 35 creates a link to Google+ (add your social link in the href)
- Line 36 adds an image to show the Google+ social link
- Line 39 creates a link to Twitter (add your social link in the href)
- Line 40 adds an image to show the Twitter social link

We have added a class that needs to be implemented, but let's first run our website to see the result without any styling:

It looks OK, but the social icons are a bit big, especially if we were to have more icons. Add the following CSS styling to the index.css file:

This piece of code simply restricts the social icons' size to 50px. Only setting the width causes the height to be calculated automatically, which ensures that any change to the image's aspect ratio won't mess up the look of the icons. This now produces the following result:

Feel free to change the width to suit your desires. With the social buttons implemented, we are done. If you've enjoyed this tutorial, check out Responsive Web Design by Example, to create a cool blog page, beautiful portfolio site, or professional business site and make them all totally responsive.

5 things to consider when developing an eCommerce website
What UX designers can teach Machine Learning Engineers?
To start with: Model Interpretability
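Because every code listing in this excerpt was an image that did not survive extraction, here is a hedged reconstruction of the Team section markup for one member, assembled purely from the line-by-line descriptions above. The grid classes are inferred from the stated breakpoints (one member per row on extra small screens, two on small/medium, four on large and up), and all names, roles, and image paths are placeholders:

```html
<!-- Team section container (fluid, anchored to the navbar via its id) -->
<div class="container-fluid" id="TeamSection">
  <div class="row">
    <div class="col-12 text-center">
      <h1>Our Team</h1>
      <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>
    </div>
  </div>
  <div class="row text-center">
    <!-- One member: full width on xs, half on sm/md, quarter on lg+ -->
    <div class="col-12 col-sm-6 col-lg-3 teamMemberContainer">
      <img class="img-fluid img-thumbnail teamMemberImage"
           src="Images/Team/Thumbnails/Thumbnails.png" alt="Team member" />
      <h4>John Doe</h4>
      <p class="text-muted">Lead Developer</p>
      <p>A short paragraph about this team member.</p>
      <div>
        <a href="#"><img class="socialIcon" src="Images/facebook.png" alt="Facebook" /></a>
        <a href="#"><img class="socialIcon" src="Images/google.png" alt="Google+" /></a>
        <a href="#"><img class="socialIcon" src="Images/twitter.png" alt="Twitter" /></a>
      </div>
    </div>
    <!-- ...repeat for the remaining team members... -->
  </div>
</div>
```

The socialIcon class name is an assumption; the article only says a class was added to the icons and later restricted to width: 50px in index.css.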

Packt
16 Oct 2009
7 min read

Feeds in Facebook Applications

What Are Feeds?

Feeds are the way to publish news in Facebook. As we have already mentioned before, there are two types of feeds in Facebook: the News Feed and the Mini Feed. The News Feed instantly tracks the activities of a user's online friends, ranging from changes in relationship status to added photos to wall comments. The Mini Feed appears on individuals' profiles and highlights their recent social activity.

You can see your News Feed right after you log in by pointing your browser to http://www.facebook.com/home.php. It looks like the following, which is, in fact, my news feed. Mini feeds are seen on your profile page, displaying your recent activities, and look like the following one:

Only the last 10 entries are displayed in the Mini Feed section of the profile page, but you can always see the complete list of mini feeds by going to http://www.facebook.com/minifeed.php. The mini feed of any user can also be accessed from http://www.facebook.com/minifeed.php?id=userid. There is another close relation between news feeds and mini feeds: when an application publishes a mini feed on your profile, it will also appear on your friends' news feed pages.

How to publish Feeds

Facebook provides three APIs to publish mini feeds and news feeds, but these are restricted to being called no more than 10 times for a particular user in a 48-hour cycle. This means you can publish a maximum of 10 feeds on a specific user's profile within 48 hours. The following three APIs help to publish feeds:

- feed_publishStoryToUser—publishes the story to the news feed of any user (limited to one call every 12 hours)
- feed_publishActionOfUser—publishes the story to a user's mini feed, and to his or her friends' news feeds (limited to 10 calls in a rolling 48-hour slot)
- feed_publishTemplatizedAction—also publishes mini feeds and news feeds, but in an easier way (limited to 10 calls in a rolling 48-hour slot)
You can also test this API from http://developers.facebook.com/tools.php?api by choosing the Feed Preview Console, which will give you the following interface:

And once you execute a sample like the previous one, it will preview a sample of your feed.

Sample application to play with Feeds

Let's publish some news to our profile and test how the functions actually work. In this section, we will develop a small application (RateBuddies) with which we will be able to send messages to our friends and then publish our activities as a mini feed. The purpose of this application is to display a friends list and rate friends in different categories (Awesome, All Square, Loser, and so on). Here is the code of our application:

index.php

<?php
include_once("prepend.php"); // the lib and key container
?>
<div style="padding:20px;">
<?php
if (!empty($_POST['friend_sel'])) {
    $friend = $_POST['friend_sel'];
    $rating = $_POST['rate'];
    $title = "<fb:name uid='{$fbuser}' useyou='false' /> just <a href='http://apps.facebook.com/ratebuddies/'>Rated</a> <fb:name uid='{$friend}' useyou='false' /> as a '{$rating}' ";
    $body = "Why not you also <a href='http://apps.facebook.com/ratebuddies/'>rate your friends</a>?";
    try {
        // now publish the story to the user's mini feed and on his friends' news feeds
        $facebook->api_client->feed_publishActionOfUser($title, $body, null, null, null, null, null, null, null, null, 1);
    } catch (Exception $e) {
        // echo "Error when publishing feeds: ";
        echo $e->getMessage();
    }
}
?>
<h1>Welcome to RateBuddies, your gateway to rate your friends</h1>
<div style="padding-top:10px;">
  <form method="POST">
    Select a friend: <br/><br/>
    <fb:friend-selector uid="<?=$fbuser;?>" name="friendid" idname="friend_sel" />
    <br/><br/><br/>
    And your friend is: <br/>
    <table>
      <tr>
        <td valign="middle"><input name="rate" type="radio" value="funny" /></td>
        <td valign="middle">Funny</td>
      </tr>
      <tr>
        <td valign="middle"><input name="rate" type="radio" value="hot tempered" /></td>
        <td valign="middle">Hot Tempered</td>
      </tr>
      <tr>
        <td valign="middle"><input name="rate" type="radio" value="awesome" /></td>
        <td valign="middle">Awesome</td>
      </tr>
      <tr>
        <td valign="middle"><input name="rate" type="radio" value="naughty professor" /></td>
        <td valign="middle">Naughty Professor</td>
      </tr>
      <tr>
        <td valign="middle"><input name="rate" type="radio" value="loser" /></td>
        <td valign="middle">Loser</td>
      </tr>
      <tr>
        <td valign="middle"><input name="rate" type="radio" value="empty vessel" /></td>
        <td valign="middle">Empty Vessel</td>
      </tr>
      <tr>
        <td valign="middle"><input name="rate" type="radio" value="foxy" /></td>
        <td valign="middle">Foxy</td>
      </tr>
      <tr>
        <td valign="middle"><input name="rate" type="radio" value="childish" /></td>
        <td valign="middle">Childish</td>
      </tr>
    </table>
    &nbsp;
    <input type="submit" value="Rate Buddy"/>
  </form>
</div>
</div>

index.php includes another file called prepend.php. In that file, we initialize the Facebook API client using the API key and secret key of the current application. It is good practice to keep them in a separate file because we need to use them throughout our application, in as many pages as we have. Here is the code of that file:

prepend.php

<?php
// this defines some of your basic setup
include 'client/facebook.php'; // the Facebook API library

// Get these from http://www.facebook.com/developers/apps.php
$api_key = 'your api key';    // the API key of this application
$secret  = 'your secret key'; // the secret key

$facebook = new Facebook($api_key, $secret);

// catch the exception that gets thrown if the cookie has an invalid session_key in it
try {
    if (!$facebook->api_client->users_isAppAdded()) {
        $facebook->redirect($facebook->get_add_url());
    }
} catch (Exception $ex) {
    // this will clear cookies for your application and redirect the user to a login prompt
    $facebook->set_user(null, null);
    $facebook->redirect($appcallbackurl);
}
?>

The client is a standard Facebook REST API client, which is available directly from Facebook.
If you are not sure about these API keys, point your browser to http://www.facebook.com/developers/apps.php and collect the API key and secret key from there. Here is a screenshot of that page:

Just collect your API key and secret key from this page when you develop your own application. Now, when you point your browser to http://apps.facebook.com/ratebuddies and successfully add the application, it will look like this:

To see how this app works, type a friend's name in the box, select a friend, and click on any rating, such as Funny or Foxy. Then click on the Rate Buddy button. As soon as the page submits, open your profile page and you will see that a mini feed has been published on your profile.