How-To Tutorials - Web Development

Build a Universal JavaScript App, Part 2
John Oerter
30 Sep 2016
10 min read

In this post series, we will walk through how to write a universal (or isomorphic) JavaScript app. Part 1 covered what a universal JavaScript application is, why it is such an exciting concept, and the first two steps for creating our app: serving post data and adding React. In this second part of the series, we walk through steps 3-6: client-side routing with React Router, server rendering, data flow refactoring, and data loading. Let's get started.

Step 3: Client-side routing with React Router

git checkout client-side-routing && npm install

Now that we're pulling and displaying posts, let's add some navigation to individual pages for each post. To do this, we will turn our list of posts from step 2 (see the Part 1 post) into links that are always present on the page. Each post will live at http://localhost:3000/:postId/:postSlug (the route below calls the second segment :postName, and we fill it with the post's slug). We can use React Router and a routes.js file to set up this structure:

```js
// components/routes.js
import React from 'react'
import { Route } from 'react-router'
import App from './App'
import Post from './Post'

module.exports = (
  <Route path="/" component={App}>
    <Route path="/:postId/:postName" component={Post} />
  </Route>
)
```

We've changed the render method in App.js to render links to posts instead of just <li> tags:

```js
// components/App.js
import React from 'react'
import { Link } from 'react-router'

const allPostsUrl = '/api/post'

class App extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      posts: []
    }
  }
  ...
  render() {
    const posts = this.state.posts.map((post) => {
      const linkTo = `/${post.id}/${post.slug}`;
      return (
        <li key={post.id}>
          <Link to={linkTo}>{post.title}</Link>
        </li>
      )
    })
    return (
      <div>
        <h3>Posts</h3>
        <ul>
          {posts}
        </ul>
        {this.props.children}
      </div>
    )
  }
}
export default App
```

And we'll add a Post.js component to render each post's content:

```js
// components/Post.js
import React from 'react'

class Post extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      title: '',
      content: ''
    }
  }
  fetchPost(id) {
    const request = new XMLHttpRequest()
    request.open('GET', '/api/post/' + id, true)
    request.setRequestHeader('Content-type', 'application/json');
    request.onload = () => {
      if (request.status === 200) {
        const response = JSON.parse(request.response)
        this.setState({ title: response.title, content: response.content });
      }
    }
    request.send();
  }
  componentDidMount() {
    this.fetchPost(this.props.params.postId)
  }
  componentWillReceiveProps(nextProps) {
    this.fetchPost(nextProps.params.postId)
  }
  render() {
    return (
      <div>
        <h3>{this.state.title}</h3>
        <p>{this.state.content}</p>
      </div>
    )
  }
}
export default Post
```

The componentDidMount() and componentWillReceiveProps() methods are important because they let us know when we should fetch a post from the server. componentDidMount() will handle the first time the Post.js component is rendered, and then componentWillReceiveProps() will take over as React Router handles rerendering the component with different props.

Run npm run build:client && node server.js again to build and run the app. (The npm scripts these commands rely on are sketched below.)
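The original post never shows its package.json, so the following script definitions are an assumption inferred from the commands used throughout the series, not the author's actual file:

```js
// package.json (excerpt) - assumed script definitions; the post does not
// show this file, so names are inferred from the commands it runs
{
  "scripts": {
    "build:client": "webpack --config webpack.config.js",
    "build:server": "webpack --config webpack.server.config.js",
    "start": "npm run build:client && npm run build:server && node server.bundle.js"
  }
}
```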
You will now be able to go to http://localhost:3000 and navigate around to the different posts. However, if you try to refresh on a single post page, you will get something like Cannot GET /3/debugging-node-apps. That's because our Express server doesn't know how to handle that kind of route. React Router is handling it completely on the front end. Onward to server rendering!

Step 4: Server rendering

git checkout server-rendering && npm install

Okay, now we're finally getting to the good stuff. In this step, we'll use React Router to help our server take application requests and render the appropriate markup. To do that, we also need to build a server bundle, just as we build a client bundle, so that the server can understand JSX. Therefore, we've added the below webpack.server.config.js:

```js
// webpack.server.config.js
var fs = require('fs')
var path = require('path')

module.exports = {
  entry: path.resolve(__dirname, 'server.js'),
  output: {
    filename: 'server.bundle.js'
  },
  target: 'node',
  // keep node_module paths out of the bundle
  externals: fs.readdirSync(path.resolve(__dirname, 'node_modules')).concat([
    'react-dom/server', 'react/addons',
  ]).reduce(function (ext, mod) {
    ext[mod] = 'commonjs ' + mod
    return ext
  }, {}),
  node: {
    __filename: true,
    __dirname: true
  },
  module: {
    loaders: [
      {
        test: /.js$/,
        exclude: /node_modules/,
        loader: 'babel-loader?presets[]=es2015&presets[]=react'
      }
    ]
  }
}
```

We've also added the following code to server.js:

```js
// server.js
import React from 'react'
import { renderToString } from 'react-dom/server'
import { match, RouterContext } from 'react-router'
import routes from './components/routes'

const app = express()
...
app.get('*', (req, res) => {
  match({ routes: routes, location: req.url }, (err, redirect, props) => {
    if (err) {
      res.status(500).send(err.message)
    } else if (redirect) {
      res.redirect(redirect.pathname + redirect.search)
    } else if (props) {
      const appHtml = renderToString(<RouterContext {...props} />)
      res.send(renderPage(appHtml))
    } else {
      res.status(404).send('Not Found')
    }
  })
})

function renderPage(appHtml) {
  return `
    <!DOCTYPE html>
    <html lang="en">
    <head>
      <meta charset="UTF-8">
      <title>Universal Blog</title>
    </head>
    <body>
      <div id="app">${appHtml}</div>
      <script src="/bundle.js"></script>
    </body>
    </html>
  `
}
...
```

Using React Router's match function, the server can find the appropriate requested route, render it to a string with renderToString, and send the markup down the wire. Run npm start to build the client and server bundles and start the app. Fantastic, right?

We're not done yet, though. Even though the markup is being generated on the server, we're still fetching all the data client side. Go ahead and click through the posts with your dev tools open, and you'll see the requests. It would be far better to load the data while we're rendering the markup instead of having to request it separately on the client.

Since server rendering and universal apps are still bleeding-edge, there aren't really any established best practices for data loading. If you're using some kind of Flux implementation, there may be some specific guidance. But for this use case, we will simply grab all the posts and feed them through our app. In order to do this, we first need to do some refactoring on our current architecture.

Step 5: Data Flow Refactor

git checkout data-flow-refactor && npm install

It's a little weird that each post page has to make a request to the server for its content, even though the App component already has all the posts in its state. A better solution would be to have the App component simply pass the appropriate content down to the Post component.
```js
// components/routes.js
import React from 'react'
import { Route } from 'react-router'
import App from './App'
import Post from './Post'

module.exports = (
  <Route path="/" component={App}>
    <Route path="/:postId/:postName" />
  </Route>
)
```

In our routes.js, we've made the Post route a componentless route. It's still a child of the App route, but now it has to rely completely on the App component for rendering. Below are the changes to App.js:

```js
// components/App.js
...
  render() {
    const posts = this.state.posts.map((post) => {
      const linkTo = `/${post.id}/${post.slug}`;
      return (
        <li key={post.id}>
          <Link to={linkTo}>{post.title}</Link>
        </li>
      )
    })
    const { postId, postName } = this.props.params;
    let postTitle, postContent
    if (postId && postName) {
      const post = this.state.posts.find(p => p.id == postId)
      postTitle = post.title
      postContent = post.content
    }
    return (
      <div>
        <h3>Posts</h3>
        <ul>
          {posts}
        </ul>
        {postTitle && postContent ? (
          <Post title={postTitle} content={postContent} />
        ) : (
          <h1>Welcome to the Universal Blog!</h1>
        )}
      </div>
    )
  }
}
export default App
```

If we are on a post page, then props.params.postId and props.params.postName will both be defined, and we can use them to grab the desired post and pass its data on to the Post component to be rendered. If those properties are not defined, then we're on the home page and can simply render a greeting. Now our Post.js component can be a simple stateless functional component that merely renders its properties:

```js
// components/Post.js
import React from 'react'

const Post = ({title, content}) => (
  <div>
    <h3>{title}</h3>
    <p>{content}</p>
  </div>)

export default Post
```

With that refactoring complete, we're ready to implement data loading.

Step 6: Data Loading

git checkout data-loading && npm install

For this final step, we just need to make two small changes, in server.js and App.js:

```js
// server.js
...
app.get('*', (req, res) => {
  match({ routes: routes, location: req.url }, (err, redirect, props) => {
    if (err) {
      res.status(500).send(err.message)
    } else if (redirect) {
      res.redirect(redirect.pathname + redirect.search)
    } else if (props) {
      const routerContextWithData = (
        <RouterContext
          {...props}
          createElement={(Component, props) => {
            return <Component posts={posts} {...props} />
          }} />
      )
      const appHtml = renderToString(routerContextWithData)
      res.send(renderPage(appHtml))
    } else {
      res.status(404).send('Not Found')
    }
  })
})
...
```

```js
// components/App.js
import React from 'react'
import Post from './Post'
import { Link, IndexLink } from 'react-router'

const allPostsUrl = '/api/post'

class App extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      posts: props.posts || []
    }
  }
  ...
```

In server.js, we're changing how the RouterContext creates elements by overwriting its createElement function and passing in our data as additional props. These props will get passed to any component that is matched by the route, which in this case will be our App component. Then, when the App component is initialized, it sets its posts state property to what it got from props, or to an empty array.
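One small detail: the posts variable passed into createElement above is not defined in the snippet. Presumably it is the same array that backs the API routes, imported at the top of server.js; this line is an assumption on our part, since the post doesn't show it:

```js
// server.js (top) - assumed import; posts.js is the flat data file from Part 1
import posts from './posts'
```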
That's it! Run npm start one last time and cruise through your app. You can even disable JavaScript, and the app will automatically degrade to requesting whole pages. Thanks for reading!

About the author

John Oerter is a software engineer from Omaha, Nebraska, USA. He has a passion for continuous improvement and learning in all areas of software development, including Docker, JavaScript, and C#. He blogs at his website.

Learning How to Manage Records in Visualforce
Packt
29 Sep 2016
7 min read

In this article by Keir Bowden, author of the book Visualforce Development Cookbook - Second Edition, we will cover styling fields and table columns as required. One of the common use cases for Visualforce pages is to simplify, streamline, or enhance the management of sObject records. In this article, we will use Visualforce to carry out some more advanced customization of the user interface—redrawing the form to change available picklist options, or capturing different information based on the user's selections.

Styling fields as required

Standard Visualforce input components, such as <apex:inputText />, can take an optional required attribute. If set to true, the component will be decorated with a red bar to indicate that it is required, and form submission will fail if a value has not been supplied.

In the scenario where one or more inputs are required and there are additional validation rules, for example, when one of either the Email or Phone fields must be defined for a contact, this can lead to a drip feed of error messages to the user. This is because the inputs make repeated unsuccessful attempts to submit the form, each time getting slightly further in the process.

Now, we will create a Visualforce page that allows a user to create a contact record. The Last Name field is captured through a non-required input decorated with a red bar identical to that created for required inputs. When the user submits the form, the controller validates that the Last Name field is populated and that one of the Email or Phone fields is populated. If any of the validations fail, details of all errors are returned to the user.

Getting ready

This topic makes use of a controller extension, so this must be created before the Visualforce page.

How to do it…

1. Navigate to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes.
2. Click on the New button.
3. Paste the contents of the RequiredStylingExt.cls Apex class from the code downloaded into the Apex Class area.
4. Click on the Save button.
5. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages.
6. Click on the New button.
7. Enter RequiredStyling in the Label field.
8. Accept the default RequiredStyling that is automatically generated for the Name field.
9. Paste the contents of the RequiredStyling.page file from the code downloaded into the Visualforce Markup area and click on the Save button.
10. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages.
11. Locate the entry for the RequiredStyling page and click on the Security link.
12. On the resulting page, select which profiles should have access and click on the Save button.

How it works…

Opening the following URL in your browser displays the RequiredStyling page to create a new contact record: https://<instance>/apex/RequiredStyling. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com.

Clicking on the Save button without populating any of the fields results in the save failing with a number of errors. The Last Name field is constructed from a label and text input component rather than a standard input field, as an input field would enforce the required nature of the field and stop the submission of the form:

```
<apex:pageBlockSectionItem >
  <apex:outputLabel value="Last Name"/>
  <apex:outputPanel id="detailrequiredpanel" layout="block" styleClass="requiredInput">
    <apex:outputPanel layout="block" styleClass="requiredBlock" />
    <apex:inputText value="{!Contact.LastName}"/>
  </apex:outputPanel>
</apex:pageBlockSectionItem>
```

The required styles are defined in the Visualforce page rather than relying on any existing Salesforce style classes, to ensure that if Salesforce changes the names of its style classes, this does not break the page.

The controller extension's save action method carries out validation of all fields and attaches error messages to the page for all validation failures:

```
if (String.IsBlank(cont.name)) {
  ApexPages.addMessage(new ApexPages.Message(
    ApexPages.Severity.ERROR, 'Please enter the contact name'));
  error = true;
}
if ( (String.IsBlank(cont.Email)) && (String.IsBlank(cont.Phone)) ) {
  ApexPages.addMessage(new ApexPages.Message(
    ApexPages.Severity.ERROR, 'Please supply the email address or phone number'));
  error = true;
}
```

Styling table columns as required

When maintaining records that have required fields through a table, using regular input fields can end up with an unsightly collection of red bars striped across the table. Now, we will create a Visualforce page that allows a user to create a number of contact records via a table. The contact Last Name column header will be marked as required, rather than the individual inputs.

Getting ready

This topic makes use of a custom controller, so this will need to be created before the Visualforce page.

How to do it…

1. First, create the custom controller by navigating to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes.
2. Click on the New button.
3. Paste the contents of the RequiredColumnController.cls Apex class from the code downloaded into the Apex Class area.
4. Click on the Save button.
5. Next, create a Visualforce page by navigating to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages.
6. Click on the New button.
7. Enter RequiredColumn in the Label field.
8. Accept the default RequiredColumn that is automatically generated for the Name field.
9. Paste the contents of the RequiredColumn.page file from the code downloaded into the Visualforce Markup area and click on the Save button.
10. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages.
11. Locate the entry for the RequiredColumn page and click on the Security link.
12. On the resulting page, select which profiles should have access and click on the Save button.

How it works…

Opening the following URL in your browser displays the RequiredColumn page: https://<instance>/apex/RequiredColumn. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com.

The Last Name column header is styled in red, indicating that this is a required field. Attempting to create a record where only First Name is specified results in an error message being displayed against the Last Name input for the particular row. The Visualforce page sets the required attribute on the inputField components in the Last Name column to false, which removes the red bar from the component:

```
<apex:column >
  <apex:facet name="header">
    <apex:outputText styleclass="requiredHeader"
      value="{!$ObjectType.Contact.fields.LastName.label}" />
  </apex:facet>
  <apex:inputField value="{!contact.LastName}" required="false"/>
</apex:column>
```

The Visualforce page's custom controller Save method checks whether any of the fields in the row are populated, and if so, it checks that the last name is present. If the last name is missing from any record, an error is added. If an error is added to any record, the save does not complete:

```
if ( (!String.IsBlank(cont.FirstName)) ||
     (!String.IsBlank(cont.LastName)) ) {
  // a field is defined - check for last name
  if (String.IsBlank(cont.LastName)) {
    error = true;
    cont.LastName.addError('Please enter a value');
  }
```

String.IsBlank() is used as it carries out three checks at once: that the supplied string is not null, that it is not empty, and that it does not contain only whitespace.

Summary

In this article, we covered techniques for styling fields and table columns as required.

Build a Universal JavaScript App, Part 1
John Oerter
27 Sep 2016
8 min read

In this two-part post series, we will walk through how to write a universal (or isomorphic) JavaScript app. This first part will cover what a universal JavaScript application is, why it is such an exciting concept, and the first two steps for creating the app: serving post data and adding React. In Part 2 of this series, we walk through steps 3-6: client-side routing with React Router, server rendering, data flow refactoring, and data loading.

What is a Universal JavaScript app?

To put it simply, a universal JavaScript app is an application that can render itself on the client and the server. It combines the features of traditional server-side MVC frameworks (Rails, ASP.NET MVC, and Spring MVC), where markup is generated on the server and sent to the client, with the features of SPA frameworks (Angular, Ember, Backbone, and so on), where the server is only responsible for the data and the client generates markup.

Universal or Isomorphic?

There has been some debate in the JavaScript community over the terms "universal" and "isomorphic" to describe apps that can run on the client and server. I personally prefer the term "universal," simply because it's a more familiar word and makes the concept easier to understand. If you're interested in this discussion, you can read the following articles:

- Isomorphic JavaScript: The Future of Web Apps, by Spike Brehm, popularizes the term "isomorphic".
- Universal JavaScript, by Michael Jackson, puts forth the term "universal" as a better alternative.
- Is "Isomorphic JavaScript" a good term?, by Dr. Axel Rauschmayer, says that maybe certain applications should be called isomorphic and others should be called universal.

What are the advantages?

Switching between one language on the server and JavaScript on the client can harm your productivity. JavaScript is a unique language that, for better or worse, behaves in a very different way from most server-side languages. Writing universal JavaScript apps allows you to simplify your workflow and immerse yourself in JavaScript. If you're writing a web application today, chances are that you're writing a lot of JavaScript anyway. Why not dive in? Node continues to improve with better performance and more features thanks to V8 and its well-run community, and npm is a fantastic package manager with thousands of quality packages available. There is tremendous brain power being devoted to JavaScript right now. Take advantage of it!

On top of that, the maintainability of a universal app is better because it allows more code reuse. How many times have you implemented the same validation logic in your server and front end code? Or rewritten utility functions? With some careful architecture and decoupling, you can write and test code once that will work on the server and client.

Performance

SPAs are great because they allow the user to navigate applications without waiting for full pages to be sent down from the server. The cost, however, is a longer wait for the application to be initialized on the first load, because the browser needs to receive all the assets needed to run the full app up front. What if there are rarely visited areas in your app? Why should every client have to wait for the logic and assets needed for those areas? This was the problem Netflix solved using universal JavaScript. MVC apps have the inverse problem: each page only has the markup, assets, and JavaScript needed for that page, but the trade-off is round trips to the server for every page.
SEO

Another disadvantage of SPAs is their weakness on SEO. Although web crawlers are getting better at understanding JavaScript, a site generated on the server will always be superior. With universal JavaScript, any public-facing page on your site can be easily requested and indexed by search engines.

Building an Example Universal JavaScript App

Now that we've gained some background on universal JavaScript apps, let's walk through building a very simple blog website as an example. Here are the tools we'll use:

- Express
- React
- React Router
- Babel
- Webpack

I've chosen these tools because of their popularity and the ease of accomplishing our task. I won't be covering how to use Redux or other Flux implementations because, while useful in a production application, they are not necessary for demoing how to create a universal app. To keep things simple, we will forgo a database and just store our data in a flat file. We'll also keep the Webpack shenanigans to a minimum and only do what is necessary to transpile and bundle our code. You can grab the code for this walkthrough from the accompanying repository and follow along. There are branches for each step along the way. Be sure to run npm install for each step. Let's get started!

Step 1: Serving Post Data

git checkout serving-post-data && npm install

We're going to start off slow, and simply set up the data we want to serve. Our posts are stored in the posts.js file, and we just have a simple Express server in server.js that takes requests at /api/post/{id}. Snippets of these files are below:

```js
// posts.js
module.exports = [
  ...
  {
    id: 2,
    title: 'Expert Node',
    slug: 'expert-node',
    content: 'Street art 8-bit photo booth, aesthetic kickstarter organic raw denim hoodie non kale chips pour-over occaecat. Banjo non ea, enim assumenda forage excepteur typewriter dolore ullamco. Pickled meggings dreamcatcher ugh, church-key brooklyn portland freegan normcore meditation tacos aute chicharrones skateboard polaroid. Delectus affogato assumenda heirloom sed, do squid aute voluptate sartorial. Roof party drinking vinegar franzen mixtape meditation asymmetrical. Yuccie flexitarian est accusamus, yr 3 wolf moon aliqua mumblecore waistcoat freegan shabby chic. Irure 90\'s commodo, letterpress nostrud echo park cray assumenda stumptown lumbersexual magna microdosing slow-carb dreamcatcher bicycle rights. Scenester sartorial duis, pop-up etsy sed man bun art party bicycle rights delectus fixie enim. Master cleanse esse exercitation, twee pariatur venmo eu sed ethical. Plaid freegan chambray, man braid aesthetic swag exercitation godard schlitz. Esse placeat VHS knausgaard fashion axe cred. In cray selvage, waistcoat 8-bit excepteur duis schlitz. Before they sold out bicycle rights fixie excepteur, drinking vinegar normcore laboris 90\'s cliche aliqua 8-bit hoodie post-ironic. Seitan tattooed thundercats, kinfolk consectetur etsy veniam tofu enim pour-over narwhal hammock plaid.'
  },
  ...
]
```

```js
// server.js
...
app.get('/api/post/:id?', (req, res) => {
  const id = req.params.id
  if (!id) {
    res.send(posts)
  } else {
    const post = posts.find(p => p.id == id);
    if (post) res.send(post)
    else res.status(404).send('Not Found')
  }
})
...
```

You can start the server by running node server.js, and then request all posts by going to localhost:3000/api/post, or a single post by id, such as localhost:3000/api/post/0. (A sketch of the response shape follows below.)
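To make the response shape concrete, requesting localhost:3000/api/post/2 should return the stored post as JSON, along these lines (the content field is abridged here):

```json
{
  "id": 2,
  "title": "Expert Node",
  "slug": "expert-node",
  "content": "Street art 8-bit photo booth, aesthetic kickstarter organic raw denim hoodie..."
}
```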
Great! Let's move on.

Step 2: Add React

git checkout add-react && npm install

Now that we have the data exposed via a simple web service, let's use React to render a list of posts on the page. Before we get there, however, we need to set up webpack to transpile and bundle our code. Below is our simple webpack.config.js for doing this:

```js
// webpack.config.js
var webpack = require('webpack')

module.exports = {
  entry: './index.js',
  output: {
    path: 'public',
    filename: 'bundle.js'
  },
  module: {
    loaders: [
      {
        test: /.js$/,
        exclude: /node_modules/,
        loader: 'babel-loader?presets[]=es2015&presets[]=react'
      }
    ]
  }
}
```

All we're doing is bundling our code with index.js as an entry point and writing the bundle to a public folder that will be served by Express. (A sketch of the assumed static-file setup follows below.)
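The post doesn't show the corresponding server change, but for /bundle.js to resolve, the Express app presumably serves the public folder as static content, something like this (an assumption, not code from the original):

```js
// server.js - assumed static-file middleware so /bundle.js maps to public/bundle.js
app.use(express.static('public'))
```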
Speaking of index.js, here it is:

```js
// index.js
import React from 'react'
import { render } from 'react-dom'
import App from './components/app'

render (
  <App />,
  document.getElementById('app')
)
```

And finally, we have App.js:

```js
// components/App.js
import React from 'react'

const allPostsUrl = '/api/post'

class App extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      posts: []
    }
  }
  componentDidMount() {
    const request = new XMLHttpRequest()
    request.open('GET', allPostsUrl, true)
    request.setRequestHeader('Content-type', 'application/json');
    request.onload = () => {
      if (request.status === 200) {
        this.setState({ posts: JSON.parse(request.response) });
      }
    }
    request.send();
  }
  render() {
    const posts = this.state.posts.map((post) => {
      return <li key={post.id}>{post.title}</li>
    })
    return (
      <div>
        <h3>Posts</h3>
        <ul>
          {posts}
        </ul>
      </div>
    )
  }
}
export default App
```

Once the App component is mounted, it sends a request for the posts and renders them as a list. To see this step in action, build the webpack bundle first with npm run build:client. Then, you can run node server.js just like before. http://localhost:3000 will now display a list of our posts.

Conclusion

Now that React has been added, take a look at Part 2, where we cover client-side routing with React Router, server rendering, data flow refactoring, and data loading.

About the author

John Oerter is a software engineer from Omaha, Nebraska, USA. He has a passion for continuous improvement and learning in all areas of software development, including Docker, JavaScript, and C#. He blogs at his website.

Using model serializers to eliminate duplicate code
Packt
23 Sep 2016
12 min read

In this article by Gastón C. Hillar, author of Building RESTful Python Web Services, we will cover the use of model serializers to eliminate duplicate code, and the use of the default parsing and rendering options.

Using model serializers to eliminate duplicate code

The GameSerializer class declares many attributes with the same names that we used in the Game model, and repeats information such as the types and the max_length values. The GameSerializer class is a subclass of rest_framework.serializers.Serializer; it declares attributes that we manually mapped to the appropriate types, and overrides the create and update methods.

Now, we will create a new version of the GameSerializer class that inherits from the rest_framework.serializers.ModelSerializer class. The ModelSerializer class automatically populates both a set of default fields and a set of default validators. In addition, the class provides default implementations for the create and update methods. If you have any experience with the Django web framework, you will notice that the Serializer and ModelSerializer classes are similar to the Form and ModelForm classes.

Now, go to the gamesapi/games folder and open the serializers.py file. Replace the code in this file with the following code, which declares the new version of the GameSerializer class. The code file for the sample is included in the restful_python_chapter_02_01 folder.

```python
from rest_framework import serializers
from games.models import Game


class GameSerializer(serializers.ModelSerializer):
    class Meta:
        model = Game
        fields = ('id', 'name', 'release_date', 'game_category', 'played')
```

The new GameSerializer class declares a Meta inner class that declares two attributes: model and fields. The model attribute specifies the model related to the serializer, that is, the Game class. The fields attribute specifies a tuple of strings whose values indicate the field names that we want to include in the serialization from the related model. There is no need to override either the create or update methods, because the generic behavior will be enough in this case; the ModelSerializer superclass provides implementations for both. We have reduced the boilerplate code that we didn't require in the GameSerializer class. We just needed to specify the desired set of fields in a tuple. Now, the types related to the game fields are included only in the Game class.

Press Ctrl + C to quit Django's development server and execute the following command to start it again:

python manage.py runserver

Using the default parsing and rendering options and moving beyond JSON

The APIView class specifies default settings for each view, which we can override by specifying appropriate values in the gamesapi/settings.py file or by overriding the class attributes in subclasses. As previously explained, the @api_view decorator uses the APIView class under the hood, which makes the decorator apply these default settings. Thus, whenever we use the decorator, the default parser classes and the default renderer classes will be associated with the function views.

By default, the value for DEFAULT_PARSER_CLASSES is the following tuple of classes:

```python
(
    'rest_framework.parsers.JSONParser',
    'rest_framework.parsers.FormParser',
    'rest_framework.parsers.MultiPartParser',
)
```

When we use the decorator, the API will be able to handle any of the following content types through the appropriate parsers when accessing the request.data attribute:
- application/json
- application/x-www-form-urlencoded
- multipart/form-data

When we access the request.data attribute in the functions, Django REST Framework examines the value of the Content-Type header in the incoming request and determines the appropriate parser to parse the request content. If we use the previously explained default values, Django REST Framework will be able to parse the previously listed content types. However, it is extremely important that the request specifies the appropriate value in the Content-Type header.

We have to remove the usage of the rest_framework.parsers.JSONParser class in the functions so that we can work with all the configured parsers, rather than a parser that only works with JSON. The game_list function executes the following two lines when request.method is equal to 'POST':

```python
game_data = JSONParser().parse(request)
game_serializer = GameSerializer(data=game_data)
```

We will remove the first line, which uses the JSONParser, and pass request.data as the data argument for the GameSerializer. The following line will replace the previous lines:

```python
game_serializer = GameSerializer(data=request.data)
```

The game_detail function executes the following two lines when request.method is equal to 'PUT':

```python
game_data = JSONParser().parse(request)
game_serializer = GameSerializer(game, data=game_data)
```

We will make the same edits as in the game_list function: remove the first line, which uses the JSONParser, and pass request.data as the data argument for the GameSerializer. The following line will replace the previous lines:

```python
game_serializer = GameSerializer(game, data=request.data)
```

By default, the value for DEFAULT_RENDERER_CLASSES is the following tuple of classes:

```python
(
    'rest_framework.renderers.JSONRenderer',
    'rest_framework.renderers.BrowsableAPIRenderer',
)
```

When we use the decorator, the API will be able to render either of the following content types in the response through the appropriate renderers when working with the rest_framework.response.Response object:

- application/json
- text/html

By default, the value for DEFAULT_CONTENT_NEGOTIATION_CLASS is the rest_framework.negotiation.DefaultContentNegotiation class. When we use the decorator, the API will use this content negotiation class to select the appropriate renderer for the response, based on the incoming request. This way, when a request specifies that it will accept text/html, the content negotiation class selects the rest_framework.renderers.BrowsableAPIRenderer to render the response and generate text/html instead of application/json.

We have to replace the usages of both the JSONResponse and HttpResponse classes in the functions with the rest_framework.response.Response class. The Response class uses the previously explained content negotiation features, renders the received data into the appropriate content type, and returns it to the client.

Now, go to the gamesapi/games folder and open the views.py file. Replace the code in this file with the following code, which removes the JSONResponse class and uses the @api_view decorator for the functions and the rest_framework.response.Response class. The code file for the sample is included in the restful_python_chapter_02_02 folder.
```python
from rest_framework import status
from rest_framework.decorators import api_view
from rest_framework.response import Response
from games.models import Game
from games.serializers import GameSerializer


@api_view(['GET', 'POST'])
def game_list(request):
    if request.method == 'GET':
        games = Game.objects.all()
        games_serializer = GameSerializer(games, many=True)
        return Response(games_serializer.data)
    elif request.method == 'POST':
        game_serializer = GameSerializer(data=request.data)
        if game_serializer.is_valid():
            game_serializer.save()
            return Response(game_serializer.data, status=status.HTTP_201_CREATED)
        return Response(game_serializer.errors, status=status.HTTP_400_BAD_REQUEST)


@api_view(['GET', 'PUT', 'DELETE'])
def game_detail(request, pk):
    try:
        game = Game.objects.get(pk=pk)
    except Game.DoesNotExist:
        return Response(status=status.HTTP_404_NOT_FOUND)
    if request.method == 'GET':
        game_serializer = GameSerializer(game)
        return Response(game_serializer.data)
    elif request.method == 'PUT':
        game_serializer = GameSerializer(game, data=request.data)
        if game_serializer.is_valid():
            game_serializer.save()
            return Response(game_serializer.data)
        return Response(game_serializer.errors, status=status.HTTP_400_BAD_REQUEST)
    elif request.method == 'DELETE':
        game.delete()
        return Response(status=status.HTTP_204_NO_CONTENT)
```

Note that the JSONParser import is gone, and the game_detail decorator now lists 'DELETE' so that the DELETE branch in the function body is actually reachable.

After you save the previous changes, run the following command:

http OPTIONS :8000/games/

The following is the equivalent curl command:

curl -iX OPTIONS :8000/games/

The previous command will compose and send the HTTP request OPTIONS http://localhost:8000/games/. The request will match and run the views.game_list function, that is, the game_list function declared within the games/views.py file. We added the @api_view decorator to this function, and it is therefore capable of determining the supported HTTP verbs and the parsing and rendering capabilities. The following lines show the output:

```
HTTP/1.0 200 OK
Allow: GET, POST, OPTIONS
Content-Type: application/json
Date: Thu, 09 Jun 2016 20:24:31 GMT
Server: WSGIServer/0.2 CPython/3.5.1
Vary: Accept, Cookie
X-Frame-Options: SAMEORIGIN

{
    "description": "",
    "name": "Game List",
    "parses": [
        "application/json",
        "application/x-www-form-urlencoded",
        "multipart/form-data"
    ],
    "renders": [
        "application/json",
        "text/html"
    ]
}
```

The response header includes an Allow key with a comma-separated list of the HTTP verbs supported by the resource collection as its value: GET, POST, OPTIONS. As our request didn't specify the allowed content type, the function rendered the response with the default application/json content type. The response body specifies the content types that the resource collection parses and the content types that it renders.

Run the following command to compose and send an HTTP request with the OPTIONS verb for a game resource. Don't forget to replace 3 with the primary key value of an existing game in your configuration:

http OPTIONS :8000/games/3/

The following is the equivalent curl command:

curl -iX OPTIONS :8000/games/3/

The previous command will compose and send the HTTP request OPTIONS http://localhost:8000/games/3/. The request will match and run the views.game_detail function, that is, the game_detail function declared within the games/views.py file. We also added the @api_view decorator to this function, and it is therefore capable of determining the supported HTTP verbs and the parsing and rendering capabilities.
The following lines show the output:

```
HTTP/1.0 200 OK
Allow: GET, PUT, DELETE, OPTIONS
Content-Type: application/json
Date: Thu, 09 Jun 2016 21:35:58 GMT
Server: WSGIServer/0.2 CPython/3.5.1
Vary: Accept, Cookie
X-Frame-Options: SAMEORIGIN

{
    "description": "",
    "name": "Game Detail",
    "parses": [
        "application/json",
        "application/x-www-form-urlencoded",
        "multipart/form-data"
    ],
    "renders": [
        "application/json",
        "text/html"
    ]
}
```

The response header includes an Allow key with a comma-separated list of the HTTP verbs supported by the resource as its value: GET, PUT, DELETE, OPTIONS. The response body specifies the content types that the resource parses and the content types that it renders, with the same contents received in the previous OPTIONS request applied to a resource collection, that is, to a games collection.

When we composed and sent POST and PUT commands, we had to use the -H "Content-Type: application/json" option to tell curl to send the data specified after the -d option as application/json instead of the default application/x-www-form-urlencoded. Now, in addition to application/json, our API is capable of parsing application/x-www-form-urlencoded and multipart/form-data data specified in POST and PUT requests. Thus, we can compose and send a POST command that sends the data as application/x-www-form-urlencoded.

We will compose and send an HTTP request to create a new game. In this case, we will use the -f option for HTTPie, which serializes data items from the command line as form fields and sets the Content-Type header to application/x-www-form-urlencoded:

http -f POST :8000/games/ name='Toy Story 4' game_category='3D RPG' played=false release_date='2016-05-18T03:02:00.776594Z'

The following is the equivalent curl command. Notice that we don't use the -H option, so curl sends the data with its default application/x-www-form-urlencoded content type:

curl -iX POST --data-urlencode 'name=Toy Story 4' --data-urlencode 'game_category=3D RPG' --data-urlencode 'played=false' --data-urlencode 'release_date=2016-05-18T03:02:00.776594Z' :8000/games/

The previous commands will compose and send the HTTP request POST http://localhost:8000/games/ with the Content-Type header set to application/x-www-form-urlencoded and the following data:

name=Toy+Story+4&game_category=3D+RPG&played=false&release_date=2016-05-18T03%3A02%3A00.776594Z

The request specifies /games/, and therefore it will match '^games/$' and run the views.game_list function, that is, the updated game_list function declared within the games/views.py file. As the HTTP verb for the request is POST, the request.method property is equal to 'POST', and therefore the function will execute the code that creates a GameSerializer instance, passing request.data as the data argument for its creation. The rest_framework.parsers.FormParser class will parse the data received in the request, the code creates a new Game, and, if the data is valid, it saves the new Game. If the new Game was successfully persisted in the database, the function returns an HTTP 201 Created status code with the recently persisted Game serialized to JSON in the response body.
The following lines show an example response for the HTTP request, with the new Game object in the JSON response body:

```
HTTP/1.0 201 Created
Allow: OPTIONS, POST, GET
Content-Type: application/json
Date: Fri, 10 Jun 2016 20:38:40 GMT
Server: WSGIServer/0.2 CPython/3.5.1
Vary: Accept, Cookie
X-Frame-Options: SAMEORIGIN

{
    "game_category": "3D RPG",
    "id": 20,
    "name": "Toy Story 4",
    "played": false,
    "release_date": "2016-05-18T03:02:00.776594Z"
}
```

After the changes we made in the code, we can run the following command to see what happens when we compose and send an HTTP request with an HTTP verb that is not supported:

http PUT :8000/games/

The following is the equivalent curl command:

curl -iX PUT :8000/games/

The previous command will compose and send the HTTP request PUT http://localhost:8000/games/. The request will match and try to run the views.game_list function, that is, the game_list function declared within the games/views.py file. The @api_view decorator we added to this function doesn't include 'PUT' in the string list of allowed HTTP verbs, and therefore the default behavior returns a 405 Method Not Allowed status code. The following lines show the output with the response from the previous request. The JSON content provides a detail key whose string value indicates that the PUT method is not allowed:

```
HTTP/1.0 405 Method Not Allowed
Allow: GET, OPTIONS, POST
Content-Type: application/json
Date: Sat, 11 Jun 2016 00:49:30 GMT
Server: WSGIServer/0.2 CPython/3.5.1
Vary: Accept, Cookie
X-Frame-Options: SAMEORIGIN

{
    "detail": "Method \"PUT\" not allowed."
}
```

Summary

This article covered the use of model serializers and showed how effective they are in removing duplicate code.

How to Build and Deploy a Node App with Docker
John Oerter
20 Sep 2016
7 min read

How many times have you deployed your app that was working perfectly in your local environment to production, only to see it break? Whether it was directly related to the bug or feature you were working on, or another random issue entirely, this happens all too often for most developers. Errors like this not only slow you down, but they're also embarrassing.

Why does this happen? Usually, it's because your development environment on your local machine is different from the production environment you're deploying to. The tenth factor of the Twelve-Factor App is dev/prod parity. This means that your development, staging, and production environments should be as similar as possible. The authors of the Twelve-Factor App spell out three "gaps" that can be present:

- The time gap: A developer may work on code that takes days, weeks, or even months to go into production.
- The personnel gap: Developers write code, ops engineers deploy it.
- The tools gap: Developers may be using a stack like Nginx, SQLite, and OS X, while the production deployment uses Apache, MySQL, and Linux.

(Source)

In this post, we will mostly focus on the tools gap, and on how to bridge that gap in a Node application with Docker.

The Tools Gap

In the Node ecosystem, the tools gap usually manifests itself either in differences in Node and npm versions, or in differences in package dependency versions. If a package author publishes a breaking change in one of your dependencies or your dependencies' dependencies, it is entirely possible that your app will break on the next deployment (assuming you reinstall dependencies with npm install on every deployment), while it runs perfectly on your local machine. Although you can work around this issue using tools like npm shrinkwrap, adding Docker to the mix will streamline your deployment life cycle and minimize broken deployments to production.

Why Docker?

Docker is unique because it can be used the same way in development and production. When you enable the architecture of your app to run inside containers, you can easily scale out and create small containers that can be composed together to make one awesome system. Then, you can mimic this architecture in development so you never have to guess how your app will behave in production. In regards to the time gap and the personnel gap, Docker makes it easier for developers to automate deployments, thereby decreasing time to production and making it easier for full-stack teams to own deployments.

Tools and Concepts

When developing inside Docker containers, the two most important concepts are docker-compose and volumes. docker-compose helps define multi-container environments and gives you the ability to run them with one command. Here are some of the most often used docker-compose commands:

- docker-compose build: Builds images for services defined in docker-compose.yml
- docker-compose up: Creates and starts services. This is the same as running docker-compose create && docker-compose start
- docker-compose run: Runs a one-off command inside a container

Volumes allow you to mount files from the host machine into the container. When the files on your host machine change, they change inside the container as well. This is important so that we don't have to constantly rebuild containers during development every time we make a change. You can also use a tool like nodemon to automatically restart the node app on changes, as in the sketch below.
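As a concrete sketch of that approach (the exact command is an assumption on our part, and nodemon would need to be one of the project's dependencies):

```
# run the app under nodemon inside the container so it restarts on file changes;
# --service-ports publishes the ports defined in docker-compose.yml
docker-compose run --rm --service-ports web ./node_modules/.bin/nodemon index.js
```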
Let's walk through some tips and tricks for developing Node apps inside Docker containers.

Set up Dockerfile and docker-compose.yml

When you start a new project with Docker, you'll first want to define a barebones Dockerfile and docker-compose.yml to get you started. Here's an example Dockerfile:

```
FROM node:6.2.1

RUN useradd --user-group --create-home --shell /bin/false app-user

ENV HOME=/home/app-user

USER app-user
WORKDIR $HOME/app
```

This Dockerfile displays two best practices:

- Favor exact version tags over floating tags such as latest. Node publishes releases frequently these days, and you don't want to implicitly upgrade when building your container on another machine. By specifying a version such as 6.2.1, you ensure that anyone who builds the image will always be working from the same node version.
- Create a new user to run the app inside the container. Without this step, everything would run under root in the container. You certainly wouldn't do that on a physical machine, so don't do it in Docker containers either.

Here's an example starter docker-compose.yml:

```yaml
web:
  build: .
  volumes:
    - .:/home/app-user/app
```

Pretty simple, right? Here we are telling Docker to build the web service based on our Dockerfile and create a volume from our current host directory to /home/app-user/app inside the container. This simple setup lets you build the container with docker-compose build and then run bash inside it with docker-compose run --rm web /bin/bash. Now, it's essentially the same as if you were SSH'd into a remote server or working off a VM, except that any file you create inside the container will be on your host machine and vice versa. With that in mind, you can bootstrap your Node app from inside your container using npm init -y and npm shrinkwrap. Then, you can install any modules you need, such as Express.

Install node modules on build

With that done, we need to update our Dockerfile to install dependencies from npm when the image is built. Here is the updated Dockerfile:

```
FROM node:6.2.1

RUN useradd --user-group --create-home --shell /bin/false app-user

ENV HOME=/home/app-user

COPY package.json npm-shrinkwrap.json $HOME/app/
RUN chown -R app-user:app-user $HOME/*

USER app-user
WORKDIR $HOME/app
RUN npm install
```

Notice that we had to change the ownership of the copied files to app-user. This is because files copied into a container are automatically owned by root.

Add a volume for the node_modules directory

We also need to make an update to our docker-compose.yml to make sure that our modules are installed inside the container properly:

```yaml
web:
  build: .
  volumes:
    - .:/home/app-user/app
    - /home/app-user/app/node_modules
```

Without adding a data volume for /home/app-user/app/node_modules, the node_modules wouldn't exist at runtime in the container, because our host directory, which won't contain the node_modules directory, would be mounted and hide the node_modules directory that was created when the container was built. For more information, see this Stack Overflow post.

Running your app

Once you've got an entry point to your app ready to go, simply add it as a CMD in your Dockerfile:

CMD ["node", "index.js"]

This will automatically start your app on docker-compose up.

Running tests inside your container is easy as well:

docker-compose run --rm web npm test

You could easily hook this into CI.

Production

Now going to production with your Docker-powered Node app is a breeze! Just use docker-compose again. You will probably want to define another docker-compose.yml that is written especially for production use. This means removing volumes, binding to different ports, setting NODE_ENV=production, and so on. A minimal override is sketched below.
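A minimal production override along those lines might look like the following sketch (the values are illustrative assumptions, not from the original post):

```yaml
# docker-compose.production.yml - illustrative override; merged on top of the
# development file by the command shown below
web:
  environment:
    - NODE_ENV=production
  restart: always
```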
Once you have a production config file, you can tell docker-compose to use it, like so:

docker-compose -f docker-compose.yml -f docker-compose.production.yml up

The -f flag lets you specify a list of files that are merged in the order specified.

Here is a complete Dockerfile and docker-compose.yml for reference:

```
# Dockerfile
FROM node:6.2.1

RUN useradd --user-group --create-home --shell /bin/false app-user

ENV HOME=/home/app-user

COPY package.json npm-shrinkwrap.json $HOME/app/
RUN chown -R app-user:app-user $HOME/*

USER app-user
WORKDIR $HOME/app
RUN npm install

CMD ["node", "index.js"]
```

```yaml
# docker-compose.yml
web:
  build: .
  ports:
    - '3000:3000'
  volumes:
    - .:/home/app-user/app
    - /home/app-user/app/node_modules
```

About the author

John Oerter is a software engineer from Omaha, Nebraska, USA. He has a passion for continuous improvement and learning in all areas of software development, including Docker, JavaScript, and C#. He blogs at his website.

Hello TDD!
Packt
14 Sep 2016
6 min read

In this article, Gaurav Sood, the author of the book Scala Test-Driven Development, covers the basics of Test-Driven Development. We will explore:

- What is Test-Driven Development?
- What is the need for Test-Driven Development?
- A brief introduction to Scala and SBT

What is Test-Driven Development?

Test-Driven Development, or TDD as it is commonly referred to, is the practice of writing your tests before writing any application code. It consists of the following iterative steps: write a failing test (red), write just enough application code to make the test pass (green), refactor the code while keeping the tests passing, and then repeat. This process is also referred to as Red-Green-Refactor-Repeat.

TDD became more prevalent with the use of the agile software development process, though it can be used just as easily with any of agile's predecessors, like Waterfall. Though TDD is not specifically mentioned in the agile manifesto (http://agilemanifesto.org), it has become a standard methodology used with agile. That said, you can still use agile without using TDD.

Why TDD?

The need for TDD arises from the fact that there can be constant changes to the application code. This becomes more of a problem when we are using an agile development process, as it is inherently iterative. Here are some of the advantages which underpin the need for TDD:

- Code quality: Tests on their own make the programmer more confident of their code. Programmers can be sure of the syntactic and semantic correctness of their code.
- Evolving architecture: Purely test-driven application code gives way to an evolving architecture. This means that we do not have to predefine our architectural boundaries and design patterns. As the application grows, so does the architecture. This results in an application that is flexible towards future changes.
- Avoids over-engineering: Tests that are written before the application code define and document the boundaries. These tests also document the requirements and application code. Agile purists normally regard comments inside the code as a smell; according to them, your tests should document your code. Since all the boundaries are predefined in the tests, it is hard to write application code that breaches these boundaries. This, however, assumes that TDD is followed religiously.
- Paradigm shift: When I started with TDD, I noticed that the first question I asked myself after looking at a problem was, "How can I solve it?" This, however, is counterproductive. TDD forces the programmer to think about the testability of the solution before its implementation. Understanding how to test a problem means gaining a better understanding of the problem and its edge cases, which in turn can result in refinement of the requirements, or in the discovery of some new requirements. It has now become impossible for me not to think about the testability of a problem before the solution. Now the first question I ask myself is, "How can I test it?"
- Maintainable code: I have always found it easier to work on an application that has historically been test-driven than on one that has not. Why? Simply because when I make a change to the existing code, the existing tests make sure that I do not break any existing functionality. This results in highly maintainable code on which many programmers can collaborate simultaneously.

Brief introduction to Scala and SBT

Let us look at Scala and SBT briefly. It is assumed that the reader is familiar with Scala, so we will not go into depth on the language itself.

What is Scala?

Scala is a general-purpose programming language. Scala is an acronym for Scalable Language.
This reflects the vision of its creators of making Scala a language that grows with the programmer's experience of it. The fact that Scala and Java objects can be freely mixed makes the transition from Java to Scala quite easy. Scala is also a full-blown functional language. Unlike Haskell, which is a pure functional language, Scala allows interoperability with Java and supports object-oriented programming. Scala also allows the use of both pure and impure functions. Impure functions have side effects such as mutation, I/O, and exceptions; a purist approach to Scala programming encourages the use of pure functions only. Scala is a type-safe JVM language that incorporates both object-oriented and functional programming into an extremely concise, logical, and extraordinarily powerful language.

Why Scala?

Here are some advantages of using Scala:

- A functional solution to a problem is always better: This is my personal view and open to contention. The elimination of mutation from application code allows the application to be run in parallel across hosts and cores without any deadlocks.
- Better concurrency model: Scala has an actor model that is better than Java's model of locks on threads.
- Concise code: Scala code is more concise than its more verbose cousin, Java.
- Type safety/static typing: Scala does type checking at compile time.
- Pattern matching: Case statements in Scala are superpowerful.
- Inheritance: Mixin traits are great, and they definitely reduce code repetition.

There are other features of Scala, like closures and monads, which require more understanding of functional language concepts to learn.

Scala Build Tool

Scala Build Tool (SBT) is a build tool that allows compiling, running, testing, packaging, and deployment of your code. SBT is mostly used with Scala projects, but it can just as easily be used for projects in other languages. Here, we will be using SBT as a build tool for managing our project and running our tests. SBT is written in Scala and can use many of the features of the Scala language. Build definitions for SBT are also written in Scala. These definitions are both flexible and powerful. SBT also allows the use of plugins and dependency management. If you have used a build tool like Maven or Gradle in any of your previous incarnations, you will find SBT a breeze.

Why SBT?

- Better dependency management
- Ivy-based dependency management
- Only-update-on-request model
- Can launch REPL in project context
- Continuous command execution
- Scala language support for creating tasks

Resources for learning Scala

Here are a few resources for learning Scala:

- http://www.scala-lang.org/
- https://www.coursera.org/course/progfun
- https://www.manning.com/books/functional-programming-in-scala
- http://www.tutorialspoint.com/scala/index.htm

Resources for SBT

Here are a few resources for learning SBT:

- http://www.scala-sbt.org/
- https://twitter.github.io/scala_school/sbt.html

Summary

In this article, we learned what TDD is and why to use it. We also learned about Scala and SBT.

Gearing Up for Bootstrap 4
Packt
12 Sep 2016
28 min read
In this article by Benjamin Jakobus and Jason Marah, the authors of the book Mastering Bootstrap 4, we will be discussing Bootstrap as a web development framework that helps developers build web interfaces. Originally conceived at Twitter in 2011 by Mark Otto and Jacob Thornton, the framework is now open source and has grown to be one of the most popular web development frameworks to date. Being freely available for private, educational, and commercial use meant that Bootstrap quickly grew in popularity. Today, thousands of organizations rely on Bootstrap, including NASA, Walmart, and Bloomberg. According to BuiltWith.com, over 10% of the world's top 1 million websites are built using Bootstrap (http://trends.builtwith.com/docinfo/Twitter-Bootstrap). As such, knowing how to use Bootstrap is an important skill and a powerful addition to any web developer's tool belt.

The framework itself consists of a mixture of JavaScript and CSS, and provides developers with all the essential components required to develop a fully functioning web user interface. Over the course of the book, we will introduce you to the most essential features that Bootstrap has to offer by teaching you how to use the framework to build a complete website from scratch. As CSS and HTML alone are already the subject of entire books in themselves, we assume that you, the reader, have at least a basic knowledge of HTML, CSS, and JavaScript.

We begin this article by introducing you to our demo website, MyPhoto. This website will accompany us throughout the book and serve as a practical point of reference. Therefore, all lessons learned will be taught within the context of MyPhoto. We will then discuss the Bootstrap framework, listing its features and contrasting the current release to the last major release (Bootstrap 3). Last but not least, this article will help you set up your development environment. To ensure equal footing, we will guide you towards installing the right build tools and precisely detail the various ways in which you can integrate Bootstrap into a project. To summarize, this article will do the following:

- Introduce you to what exactly we will be doing
- Explain what is new in the latest version of Bootstrap, and how the latest version differs from the previous major release
- Show you how to include Bootstrap in our web project

Introducing our demo project

The book will teach you how to build a complete Bootstrap website from scratch. We will build and improve the website's various sections as we progress through the book. The concept behind our website is simple: to develop a landing page for photographers. Using this landing page, (hypothetical) users will be able to exhibit their wares and services. While building our website, we will be making use of the same third-party tools and libraries that you would if you were working as a professional software developer. We chose these tools and plugins specifically because of their widespread use. Learning how to use and integrate them will save you a lot of work when developing websites in the future. Specifically, the tools that we will use to assist us throughout the development of MyPhoto are Bower, node package manager (npm), and Grunt. From a development perspective, the construction of MyPhoto will teach you how to use and apply all of the essential user interface concepts and components required to build a fully functioning website.
Among other things, you will learn how to do the following:

- Use the Bootstrap grid system to structure the information presented on your website.
- Create a fixed, branded navigation bar with animated scroll effects.
- Use an image carousel for displaying different photographs, implemented using Bootstrap's carousel.js and jumbotron (jumbotron is a design principle for displaying important content). It should be noted that carousels are becoming an increasingly unpopular design choice; however, they are still heavily used and are an important feature of Bootstrap. As such, we do not argue for or against the use of carousels, as their effectiveness depends very much on how they are used, rather than on whether they are used.
- Build custom tabs that allow users to navigate across different contents.
- Use and apply Bootstrap's modal dialogs.
- Apply a fixed page footer.
- Create forms for data entry using Bootstrap's input controls (text fields, text areas, and buttons) and apply Bootstrap's input validation styles.
- Make best use of Bootstrap's context classes.
- Create alert messages and learn how to customize them.
- Rapidly develop interactive data tables for displaying product information.
- Use drop-down menus, custom fonts, and icons.

In addition to learning how to use Bootstrap 4, the development of MyPhoto will introduce you to a range of third-party libraries such as Scrollspy (for scroll animations), SalvattoreJS (a library for complementing our Bootstrap grid), Animate.css (for beautiful CSS animations, such as fade-in effects, at https://daneden.github.io/animate.css/), and Bootstrap DataTables (for rapidly displaying data in tabular form).

The website itself will consist of different sections:

- A Welcome section
- An About section
- A Services section
- A Gallery section
- A Contact Us section

The development of each section is intended to teach you how to use a distinct set of features found in third-party libraries. For example, by developing the Welcome section, you will learn how to use Bootstrap's jumbotron and alert dialogs along with different font and text styles, while the About section will show you how to use cards. The Services section of our project introduces you to Bootstrap's custom tabs; that is, you will learn how to use Bootstrap's tabs to display a range of different services offered by our website. Following on from the Services section, you will need to use rich imagery to really show off the website's sample services. You will achieve this by mastering Bootstrap's responsive core along with Bootstrap's carousel and third-party jQuery plugins. Last but not least, the Contact Us section will demonstrate how to use Bootstrap's form elements and helper functions; that is, you will learn how to use Bootstrap to create stylish HTML forms, how to use form fields and input groups, and how to perform data validation. Finally, toward the end of the book, you will learn how to optimize your website and integrate it with the popular JavaScript frameworks AngularJS (https://angularjs.org/) and React (http://facebook.github.io/react/). As entire books have been written on AngularJS alone, we will only cover the essentials required for the integration itself.

Now that you have glimpsed a brief overview of MyPhoto, let's examine Bootstrap 4 in more detail and discuss what makes it so different from its predecessor.

Figure 1.1: A taste of what is to come: the MyPhoto landing page.
What Bootstrap 4 Alpha 4 has to offer

Much has changed since Twitter's Bootstrap was first released on August 19th, 2011. In essence, Bootstrap 1 was a collection of CSS rules offering developers the ability to lay out their website, create forms and buttons, and help with general appearance and site navigation. With respect to these core features, Bootstrap 4 Alpha 4 is still much the same as its predecessors. In other words, the framework's focus is still on allowing developers to create layouts and helping to develop a consistent appearance by providing stylings for buttons, forms, and other user interface elements. How it helps developers achieve and use these features, however, has changed entirely. Bootstrap 4 is a complete rewrite of the entire project and, as such, ships with many fundamental differences to its predecessors. Along with Bootstrap's major features, we will be discussing the most striking differences between Bootstrap 3 and Bootstrap 4 in the subsections below.

Layout

Possibly the most important and widely used feature is Bootstrap's ability to lay out and organize your page. Specifically, Bootstrap offers the following:

- Responsive containers.
- Responsive breakpoints for adjusting page layout in response to differing screen sizes.
- A 12-column grid layout for flexibly arranging various elements on your page.
- Media objects that act as building blocks and allow you to build your own structural components.
- Utility classes that allow you to manipulate elements in a responsive manner. For example, you can use the layout utility classes to hide elements, depending on screen size.

Content styling

Just like its predecessor, Bootstrap 4 overrides the default browser styles. This means that many elements, such as lists or headings, are padded and spaced differently. The majority of overridden styles only affect spacing and positioning; however, some elements may also have their border removed. The reason behind this is simple: to provide users with a clean slate upon which they can build their site. Building on this clean slate, Bootstrap 4 provides styles for almost every aspect of your webpage, such as buttons (Figure 1.2), input fields, headings, paragraphs, special inline texts such as keyboard input (Figure 1.3), figures, tables, and navigation controls. Aside from this, Bootstrap offers state styles for all input controls, for example, styles for disabled buttons or toggled buttons.

Figure 1.2: The button styles that come with Bootstrap 4: btn-primary, btn-secondary, btn-success, btn-danger, btn-link, btn-info, and btn-warning.

Figure 1.3: Bootstrap's content styles. In the preceding example, we see inline styling for denoting keyboard input.

Components

Aside from layout and content styling, Bootstrap offers a large variety of reusable components that allow you to quickly construct your website's most fundamental features. Bootstrap's UI components encompass all of the fundamental building blocks that you would expect a web development toolkit to offer: modal dialogs, progress bars, navigation bars, tooltips, popovers, a carousel, alerts, drop-down menus, input groups, tabs, pagination, and components for emphasizing certain contents.

Figure 1.4: Various Bootstrap 4 components in action. In the screenshot above, we see a sample modal dialog containing an info alert, some sample text, and an animated progress bar.
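To give a feel for how these building blocks come together, here is a minimal markup sketch (our own illustration, not an excerpt from the book) combining the grid with an alert and a button; the class names are standard Bootstrap 4 classes:

<!-- A two-column row inside a fixed-width container. -->
<div class="container">
  <div class="row">
    <div class="col-md-8">
      <div class="alert alert-info" role="alert">
        <strong>Heads up!</strong> This is a Bootstrap info alert.
      </div>
    </div>
    <div class="col-md-4">
      <button type="button" class="btn btn-primary">Primary action</button>
    </div>
  </div>
</div>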
Mobile support

Similar to its predecessor, Bootstrap 4 allows you to create mobile-friendly websites without too much additional development work. By default, Bootstrap is designed to work across all resolutions and screen sizes, from mobile to tablet to desktop. In fact, Bootstrap's mobile-first design philosophy implies that its components must display and function correctly at the smallest screen size possible. The reasoning behind this is simple. Think about developing a website without consideration for small mobile screens: in this case, you are likely to pack your website full of buttons, labels, and tables. You will probably only discover any usability issues when a user attempts to visit your website using a mobile device, only to find a small webpage that is crowded with buttons and forms. At this stage, you will be required to rework the entire user interface to allow it to render on smaller screens. For precisely this reason, Bootstrap promotes a bottom-up approach, forcing developers to get the user interface to render correctly on the smallest possible screen size before expanding upwards.

Utility classes

Aside from ready-to-go components, Bootstrap offers a large selection of utility classes that encapsulate the most commonly needed style rules, for example, rules for aligning text, hiding an element, or providing contextual colors for warning text.

Cross-browser compatibility

Bootstrap 4 supports the vast majority of modern browsers, including Chrome, Firefox, Opera, Safari, Internet Explorer (version 9 and onwards; Internet Explorer 8 and below are not supported), and Microsoft Edge.

Sass instead of Less

Both Less and Sass (Syntactically Awesome Stylesheets) are CSS extension languages; that is, they are languages that extend the CSS vocabulary with the objective of making the development of many, large, and complex style sheets easier. Although Less and Sass are fundamentally different languages, the general manner in which they extend CSS is the same—both rely on a preprocessor. As you produce your build, the preprocessor is run, parsing the Less/Sass script and turning your Less or Sass instructions into plain CSS. Less is the official Bootstrap 3 build, while Bootstrap 4 has been developed from scratch and is written entirely in Sass. Both Less and Sass are compiled into CSS to produce a single file, bootstrap.css. It is this CSS file that we will be primarily referencing throughout this book (with the exception of Chapter 3, Building the Layout). Consequently, you will not be required to know Sass in order to follow this book. However, we do recommend that you take a 20-minute introductory course on Sass if you are completely new to the language. Rest assured, if you already know CSS, you will not need more time than this. The language's syntax is very close to normal CSS, and its elementary concepts are similar to those contained within any other programming language.
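To illustrate what the preprocessor actually does, consider this short Sass snippet (SCSS syntax); it is our own minimal example, not part of Bootstrap's source:

// style.scss — variables, arithmetic, and nesting are Sass features, not CSS.
$brand-color: #0275d8;
$padding-base: 0.5rem;

.btn-custom {
  background-color: $brand-color;
  padding: $padding-base ($padding-base * 2);
  &:hover {
    opacity: 0.8;
  }
}

Running the preprocessor turns the above into plain CSS:

/* style.css — the compiled output */
.btn-custom {
  background-color: #0275d8;
  padding: 0.5rem 1rem;
}
.btn-custom:hover {
  opacity: 0.8;
}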
From pixel to root em

Unlike its predecessor, Bootstrap 4 no longer uses the pixel (px) as its unit of typographic measurement. Instead, it primarily uses the root em (rem). The reasoning behind choosing rem is based on a well-known problem with px: websites using px may render incorrectly, or not as originally intended, as users change the size of the browser's base font. Using a unit of measurement that is relative to the page's root element helps address this problem, as the root element will be scaled relative to the browser's base font. In turn, a page will be scaled relative to this root element.

Typographic units of measurement

Simply put, typographic units of measurement determine the size of your font and elements. The most commonly used units of measurement are px and em. The former is an abbreviation for pixel and uses a reference pixel to determine a font's exact size. This means that, for displays of 96 dots per inch (dpi), 1 px will equal an actual pixel on the screen. For higher-resolution displays, the reference pixel will result in the px being scaled to match the display's resolution. For example, specifying a font size of 100 px will mean that the font is exactly 100 pixels in size (on a display with 96 dpi), irrespective of any other element on the page. Em is a unit of measurement that is relative to the parent of the element to which it is applied. So, for example, if we were to have two nested div elements, the outer element with a font size of 100 px and the inner element with a font size of 2 em, then the inner element's font size would translate to 200 px (as in this case 1 em = 100 px). The problem with using a unit of measurement that is relative to parent elements is that it increases your code's complexity, as the nesting of elements makes size calculations more difficult. The recently introduced rem measurement aims to address both em's and px's shortcomings by combining their two strengths—instead of being relative to a parent element, rem is relative to the page's root element.
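The following short CSS snippet (our own illustration, not from the book) makes the difference concrete:

html { font-size: 16px; }       /* the root element */
.parent { font-size: 20px; }
.child-em { font-size: 2em; }   /* relative to .parent: 2 x 20px = 40px */
.child-rem { font-size: 2rem; } /* relative to the root: 2 x 16px = 32px, however deeply nested */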
No more support for Internet Explorer 8

As was already implicit in the feature summary above, the latest version of Bootstrap no longer supports Internet Explorer 8. The decision to only support newer versions of Internet Explorer was a reasonable one, as not even Microsoft itself provides technical support and updates for Internet Explorer 8 anymore (as of January 2016). Furthermore, Internet Explorer 8 does not support rem, meaning that Bootstrap 4 would have been required to provide a workaround. This in turn would most likely have implied a large amount of additional development work, with the potential for inconsistencies. Lastly, responsive website development for Internet Explorer 8 is difficult, as the browser does not support CSS media queries. Given these three factors, dropping support for this version of Internet Explorer was the most sensible path of action.

A new grid tier

Bootstrap's grid system consists of a series of CSS classes and media queries that help you lay out your page. Specifically, the grid system helps alleviate the pain points associated with horizontal and vertical positioning of a page's contents and the structure of the page across multiple displays. With Bootstrap 4, the grid system has been completely overhauled, and a new grid tier has been added with a breakpoint of 480 px and below. We will be talking about tiers, breakpoints, and Bootstrap's grid system extensively in this book.

Bye-bye GLYPHICONS

Bootstrap 3 shipped with a nice collection of over 250 font icons, free of use. In an effort to make the framework more lightweight (and because font icons are considered bad practice), the GLYPHICON set is no longer available in Bootstrap 4.

Bigger text – No more panels, wells, and thumbnails

The default font size in Bootstrap 4 is 2 px bigger than in its predecessor, increasing from 14 px to 16 px. Furthermore, Bootstrap 4 replaced panels, wells, and thumbnails with a new concept—cards. To readers unfamiliar with the concept of wells, a well is a UI component that allows developers to highlight text content by applying an inset shadow effect to the element to which it is applied. A panel, too, serves to highlight information, but by applying padding and rounded borders. Cards serve the same purpose as their predecessors, but are less restrictive, as they are flexible enough to support different types of content, such as images, lists, or text. They can also be customized to use footers and headers.

Figure 1.5: The Bootstrap 4 card component replaces existing wells, thumbnails, and panels.

New and improved form input controls

Bootstrap 4 introduces new form input controls—a color chooser, a date picker, and a time picker. In addition, new classes have been introduced, improving the existing form input controls. For example, Bootstrap 4 now allows for input control sizing, as well as classes for denoting block and inline level input controls. However, one of the most anticipated new additions is Bootstrap's input validation styles, which used to require third-party libraries or a manual implementation, but are now shipped with Bootstrap 4 (see Figure 1.6 below).

Figure 1.6: The new Bootstrap 4 input validation styles, indicating the successful processing of input.

Last but not least, Bootstrap 4 also offers custom forms in order to provide even more cross-browser UI consistency across input elements (Figure 1.7). As noted in the Bootstrap 4 Alpha 4 documentation, the input controls are:

"built on top of semantic and accessible markup, so they're solid replacements for any default form control" – Source: http://v4-alpha.getbootstrap.com/components/forms/

Figure 1.7: Custom Bootstrap input controls that replace the browser defaults in order to ensure cross-browser UI consistency.

Customization

The developers behind Bootstrap 4 have put specific emphasis on customization throughout the development of Bootstrap 4. As such, many new variables have been introduced that allow for the easy customization of Bootstrap. Using the $enabled-*- Sass variables, one can now enable or disable specific global CSS preferences.

Setting up our project

Now that we know what Bootstrap has to offer, let us set up our project:

1. Create a new project directory named MyPhoto. This will become our project root directory.

2. Create a blank index.html file and insert the following HTML code:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <meta http-equiv="x-ua-compatible" content="ie=edge">
    <title>MyPhoto</title>
</head>
<body>
    <div class="alert alert-success">
        Hello World!
    </div>
</body>
</html>

Note the three meta tags:
- The first tag tells the browser that the document in question is utf-8 encoded.
- Since Bootstrap optimizes its content for mobile devices, the subsequent meta tag is required to help with viewport scaling.
- The last meta tag forces the document to be rendered using the latest document rendering mode available if viewed in Internet Explorer.

3. Open the index.html in your browser. You should see just a blank page with the words Hello World.

Now it is time to include Bootstrap. At its core, Bootstrap is a glorified CSS style sheet. Within that style sheet, Bootstrap exposes very powerful features of CSS with an easy-to-use syntax.
It being a style sheet, you include it in your project as you would with any other style sheet that you might develop yourself. That is, open the index.html and directly link to the style sheet.

Viewport scaling

The term viewport refers to the available display size to render the contents of a page. The viewport meta tag allows you to define this available size. Viewport scaling using meta tags was first introduced by Apple and, at the time of writing, is supported by all major browsers. Using the width parameter, we can define the exact width of the user's viewport. For example, <meta name="viewport" content="width=320px"> will instruct the browser to set the viewport's width to 320 px. The ability to control the viewport's width is useful when developing mobile-friendly websites; by default, mobile browsers will attempt to fit the entire page onto their viewports by zooming out as far as possible. This allows users to view and interact with websites that have not been designed to be viewed on mobile devices. However, as Bootstrap embraces a mobile-first design philosophy, a zoom out will, in fact, result in undesired side-effects. For example, breakpoints will no longer work as intended, as they now deal with the zoomed out equivalent of the page in question. This is why explicitly setting the viewport width is so important. By writing content="width=device-width, initial-scale=1, shrink-to-fit=no", we are telling the browser the following:

- To set the viewport's width equal to whatever the actual device's screen width is.
- We do not want any zoom, initially.
- We do not wish to shrink the content to fit the viewport.

For now, we will use the Bootstrap builds hosted on Bootstrap's official Content Delivery Network (CDN). This is done by including the following HTML tag into the head of your HTML document (the head of your HTML document refers to the contents between the <head> opening tag and the </head> closing tag):

<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/css/bootstrap.min.css">

Bootstrap relies on jQuery, a JavaScript framework that provides a layer of abstraction in an effort to simplify the most common JavaScript operations (such as element selection and event handling). Therefore, before we include the Bootstrap JavaScript file, we must first include jQuery. Both inclusions should occur just before the </body> closing tag:

<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/js/bootstrap.min.js"></script>

Note that, while these scripts could, of course, be loaded at the top of the page, loading scripts at the end of the document is considered best practice to speed up page loading times and to avoid JavaScript issues preventing the page from being rendered. The reason behind this is that browsers do not download all dependencies in parallel (although a certain number of requests are made asynchronously, depending on the browser and the domain). Consequently, forcing the browser to download dependencies early on will block page rendering until these assets have been downloaded. Furthermore, ensuring that your scripts are loaded last will ensure that once you invoke Document Object Model (DOM) operations in your scripts, you can be sure that your page's elements have already been rendered. As a result, you can avoid checks that ensure the existence of given elements.

What is a Content Delivery Network?
The objective behind any Content Delivery Network (CDN) is to provide users with content that is highly available. This means that a CDN aims to provide you with content without this content ever (or rarely) becoming unavailable. To this end, the content is often hosted using a large, distributed set of servers. The BootstrapCDN basically allows you to link to the Bootstrap style sheet so that you do not have to host it yourself.

Save your changes and reload the index.html in your browser. The Hello World string should now have a green background:

Figure 1.5: Our "Hello World" styled using Bootstrap 4.

Now that the Bootstrap framework has been included in our project, open your browser's developer console (if using Chrome on Microsoft Windows, press Ctrl + Shift + I; on Mac OS X, you can press cmd + alt + I). As Bootstrap requires another third-party library, Tether, for displaying popovers and tooltips, the developer console will display an error (Figure 1.6).

Figure 1.6: Chrome's Developer Tools can be opened by going to View, selecting Developer, and then clicking on Developer Tools. At the bottom of the page, a new view will appear. Under the Console tab, an error will indicate an unmet dependency.

Tether is available via the Cloudflare CDN, and consists of both a CSS file and a JavaScript file. As before, we should include the JavaScript file at the bottom of our document while we reference Tether's style sheet from inside our document head:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <meta http-equiv="x-ua-compatible" content="ie=edge">
    <title>MyPhoto</title>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/css/bootstrap.min.css">
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/tether/1.3.1/css/tether.min.css">
</head>
<body>
    <div class="alert alert-success">
        Hello World!
    </div>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/tether/1.3.1/js/tether.min.js"></script>
    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/js/bootstrap.min.js"></script>
</body>
</html>

While CDNs are an important resource, there are several reasons why, at times, using a third-party CDN may not be desirable:

- CDNs introduce an additional point of failure, as you now rely on third-party servers.
- The privacy and security of users may be compromised, as there is no guarantee that the CDN provider does not inject malicious code into the libraries that are being hosted. Nor can one be certain that the CDN does not attempt to track its users.
- Certain CDNs may be blocked by the Internet Service Providers of users in different geographical locations.
- Offline development will not be possible when relying on a remote CDN.
- You will not be able to optimize the files hosted by your CDN. This loss of control may affect your website's performance (although typically you are more often than not offered an optimized version of the library through the CDN).

Instead of relying on a CDN, we could manually download the jQuery, Tether, and Bootstrap project files. We could then copy these builds into our project root and link to the distribution files.
The disadvantage of this approach is the fact that maintaining a manual collection of dependencies can quickly become very cumbersome, and next to impossible as your website grows in size and complexity. As such, we will not manually download the Bootstrap build. Instead, we will let Bower do it for us. Bower is a package management system, that is, a tool that you can use to manage your website's dependencies. It automatically downloads, organizes, and (upon command) updates your website's dependencies. To install Bower, head over to http://bower.io/.

How do I install Bower?

Before you can install Bower, you will need two other tools: Node.js and Git. The latter is a version control tool—in essence, it allows you to manage different versions of your software. To install Git, head over to http://git-scm.com/ and select the installer appropriate for your operating system. NodeJS is a JavaScript runtime environment needed for Bower to run. To install it, simply download the installer from the official NodeJS website: https://nodejs.org/

Once you have successfully installed Git and NodeJS, you are ready to install Bower. Simply type the following command into your terminal:

npm install -g bower

This will install Bower for you, using the JavaScript package manager npm, which happens to be used by, and is installed with, NodeJS. Once Bower has been installed, open up your terminal, navigate to the project root folder you created earlier, and fetch the bootstrap build:

bower install bootstrap#v4.0.0-alpha.4

This will create a new folder structure in our project root:

bower_components
  bootstrap
    Gruntfile.js
    LICENSE
    README.md
    bower.json
    dist
    fonts
    grunt
    js
    less
    package.js
    package.json

We will explain all of these various files and directories later on in this book. For now, you can safely ignore everything except for the dist directory inside bower_components/bootstrap/. Go ahead and open the dist directory. You should see three sub directories:

- css
- fonts
- js

The name dist stands for distribution. Typically, the distribution directory contains the production-ready code that users can deploy. As its name implies, the css directory inside dist includes the ready-for-use style sheets. Likewise, the js directory contains the JavaScript files that compose Bootstrap. Lastly, the fonts directory holds the font assets that come with Bootstrap. To reference the local Bootstrap CSS file in our index.html, modify the href attribute of the link tag that points to the bootstrap.min.css:

<link rel="stylesheet" href="bower_components/bootstrap/dist/css/bootstrap.min.css">

Let's do the same for the Bootstrap JavaScript file:

<script src="bower_components/bootstrap/dist/js/bootstrap.min.js"></script>

Repeat this process for both jQuery and Tether. To install jQuery using Bower, use the following command:

bower install jquery

Just as before, a new directory will be created inside the bower_components directory:

bower_components
  jquery
    AUTHORS.txt
    LICENSE.txt
    bower.json
    dist
    sizzle
    src

Again, we are only interested in the contents of the dist directory, which, among other files, will contain the compressed jQuery build jquery.min.js.
Reference this file by modifying the src attribute of the script tag that currently points to Google's jquery.min.js, replacing the URL with the path to our local copy of jQuery:

<script src="bower_components/jquery/dist/jquery.min.js"></script>

Last but not least, repeat the steps already outlined above for Tether:

bower install tether

Once the installation completes, a folder structure similar to the ones for Bootstrap and jQuery will have been created. Verify the contents of bower_components/tether/dist and replace the CDN Tether references in our document with their local equivalents. The final index.html should now look as follows:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <meta http-equiv="x-ua-compatible" content="ie=edge">
    <title>MyPhoto</title>
    <link rel="stylesheet" href="bower_components/bootstrap/dist/css/bootstrap.min.css">
    <link rel="stylesheet" href="bower_components/tether/dist/css/tether.min.css">
</head>
<body>
    <div class="alert alert-success">
        Hello World!
    </div>
    <script src="bower_components/jquery/dist/jquery.min.js"></script>
    <script src="bower_components/tether/dist/js/tether.min.js"></script>
    <script src="bower_components/bootstrap/dist/js/bootstrap.min.js"></script>
</body>
</html>

Refresh the index.html in your browser to make sure that everything works.

What IDE and browser should I be using when following the examples in this book?

While we recommend a JetBrains IDE or Sublime Text along with Google Chrome, you are free to use whatever tools and browser you like. Our taste in IDE and browser is subjective on this matter. However, keep in mind that Bootstrap 4 does not support Internet Explorer 8 or below. As such, if you do happen to use Internet Explorer 8, you should upgrade it to the latest version.

Summary

Aside from introducing you to our sample project, MyPhoto, this article was concerned with outlining Bootstrap 4, highlighting its features, and discussing how this new version of Bootstrap differs from the last major release (Bootstrap 3). The article provided an overview of how Bootstrap can assist developers in the layout, structuring, and styling of pages. Furthermore, we noted how Bootstrap provides access to the most important and widely used user interface controls in the form of components that can be integrated into a page with minimal effort. By providing an outline of Bootstrap, we hope that the framework's intrinsic value in assisting in the development of modern websites has become apparent to the reader. Furthermore, during the course of the wider discussion, we highlighted and explained some important concepts in web development, such as typographic units of measurement or the definition, purpose, and justification of the use of Content Delivery Networks. Last but not least, we detailed how to include Bootstrap and its dependencies inside an HTML document.
Mapping Requirements for a Modular Web Shop App
Packt
07 Sep 2016
11 min read
In this article by Branko Ajzele, author of the book Modular Programming with PHP 7, we will see that building a software application from the ground up requires diverse skills, as it involves more than just writing down code. Writing down functional requirements and sketching out a wireframe are often among the first steps in the process, especially if we are working on a client project. These steps are usually done by roles other than the developer, as they require certain insight into the client's business case, user behavior, and alike. Being part of a larger development team means that we, as developers, usually get requirements, designs, and wireframes, and then start coding against them. Delivering projects by oneself makes it tempting to skip these steps and get our hands started with code alone. More often than not, this is an unproductive approach. Laying down functional requirements and a few wireframes is a skill worth knowing and following, even if one is just a developer.

Later in this article, we will go over a high-level application requirement, alongside a rough wireframe. In this article, we will be covering the following topics:

- Defining application requirements
- Wireframing
- Defining a technology stack

Defining application requirements

We need to build a simple but responsive web shop application. In order to do so, we need to lay out some basic requirements. The types of requirements we are interested in at the moment are those that touch upon interactions between a user and a system. The two most common techniques to specify requirements in regards to user usage are use cases and user stories. User stories are a less formal, yet descriptive enough, way to outline these requirements. Using user stories, we encapsulate the customer and store manager actions as mentioned here.

A customer should be able to do the following:

- Browse through static info pages (about us, customer service)
- Reach out to the store owner via a contact form
- Browse the shop categories
- See product details (price, description)
- See the product image with a large view (zoom)
- See items on sale
- See best sellers
- Add the product to the shopping cart
- Create a customer account
- Update customer account info
- Retrieve a lost password
- Check out
- See the total order cost
- Choose among several payment methods
- Choose among several shipment methods
- Get an email notification after an order has been placed
- Check order status
- Cancel an order
- See order history

A store manager should be able to do the following:

- Create a product (with the minimum following attributes: title, price, sku, url-key, description, qty, category, and image)
- Upload a picture to the product
- Update and delete a product
- Create a category (with the minimum following attributes: title, url-key, description, and image)
- Upload a picture to a category
- Update and delete a category
- Be notified if a new sales order has been created
- Be notified if a new sales order has been canceled
- See existing sales orders by their statuses
- Update the status of the order
- Disable a customer account
- Delete a customer account

User stories are a convenient high-level way of writing down application requirements, especially useful as an agile mode of development.

Wireframing

With user stories laid out, let's shift our focus to actual wireframing. For reasons we will get into later on, our wireframing efforts will be focused around the customer perspective. There are numerous wireframing tools out there, both free and commercial.
Some commercial tools, like https://ninjamock.com, which we will use for our examples, still provide a free plan. This can be very handy for personal projects, as it saves us a lot of time.

The starting point of every web application is its home page. The following wireframe illustrates our web shop app's homepage: Here we can see a few sections determining the page structure. The header is comprised of a logo, a category menu, and a user menu. The requirements don't say anything about category structure, and we are building a simple web shop app, so we are going to stick to a flat category structure, without any sub-categories. The user menu will initially show Register and Login links until the user is actually logged in, in which case the menu will change as shown in the following wireframes. The content area is filled with best sellers and on-sale items, each of which has an image, title, price, and Add to Cart button defined. The footer area contains links to mostly static content pages and a Contact Us page.

The following wireframe illustrates our web shop app's category page: The header and footer areas remain conceptually the same across the entire site. The content area has now changed to list products within any given category. Individual product areas are rendered in the same manner as on the home page. Category names and images are rendered above the product list. The width of a category image gives some hints as to what type of images we should be preparing and uploading onto our categories.

The following wireframe illustrates our web shop app's product page: The content area here now changes to list individual product information. We can see a large image placeholder, title, sku, stock status, price, quantity field, Add to Cart button, and product description being rendered. The IN STOCK message is to be displayed when an item is available for purchase, and OUT OF STOCK when an item is no longer available. This is to be related to the product quantity attribute. We also need to keep in mind the "See the product image with a big view (zoom)" requirement, where clicking on an image would zoom into it.

The following wireframe illustrates our web shop app's register page: The content area here now changes to render a registration form. There are many ways that we can implement the registration system. More often than not, the minimal amount of information is asked for on a registration screen, as we want to get the user in as quickly as possible. However, let's proceed as if we are trying to get more complete user information right here on the registration screen. We ask not just for an e-mail and password, but for the entire address information as well.

The following wireframe illustrates our web shop app's login page: The content area here now changes to render a customer login and forgotten password form. We provide the user with Email and Password fields in case of login, or just an Email field in case of a password reset action.

The following wireframe illustrates our web shop app's customer account page: The content area here now changes to render the customer account area, visible only to logged-in customers. Here we see a screen with two main pieces of information: the customer information, and the order history. The customer can change their e-mail, password, and other address information from this screen. Furthermore, the customer can view, cancel, and print all of their previous orders. The My Orders table lists orders top to bottom, from newest to oldest.
Though not specified by the user stories, order cancelation should work only on pending orders. This is something that we will touch upon in more detail later on. This is also the first screen that shows the state of the user menu when the user is logged in. We can see a dropdown showing the user's full name, My Account, and Sign Out links. Right next to it, we have the Cart (%s) link, which is to list exact quantities in a cart.

The following wireframe illustrates our web shop app's checkout cart page: The content area here now changes to render the cart in its current state. If the customer has added any products to the cart, they are to be listed here. Each item should list the product title, individual price, quantity added, and subtotal. The customer should be able to change quantities and press the Update Cart button to update the state of the cart. If 0 is provided as the quantity, clicking the Update Cart button will remove such an item from the cart. Cart quantities should at all times reflect the state of the header menu Cart (%s) link. The right-hand side of the screen shows a quick summary of the current order total value, alongside a big, clear Go to Checkout button.

The following wireframe illustrates our web shop app's checkout cart shipping page: The content area here now changes to render the first step of the checkout process, the shipping information collection. This screen should not be accessible to non-logged-in customers. The customer can provide us with their address details here, alongside a shipping method selection. The shipping method area lists several shipping methods. On the right-hand side, the collapsible order summary section is shown, listing the current items in the cart. Below it, we have the cart subtotal value and a big, clear Next button. The Next button should trigger only when all of the required information is provided, in which case it should take us to the payment information on the checkout cart payment page.

The following wireframe illustrates our web shop app's checkout cart payment page: The content area here now changes to render the second step of the checkout process, the payment information collection. This screen should not be accessible to non-logged-in customers. The customer is presented with a list of available payment methods. For the simplicity of the application, we will focus only on flat/fixed payments, nothing robust such as PayPal or Stripe. On the right-hand side of the screen, we can see a collapsible Order summary section, listing the current items in the cart. Below it, we have the order totals section, individually listing Cart Subtotal, Standard Delivery, Order Total, and a big, clear Place Order button. The Place Order button should trigger only when all of the required information is provided, in which case it should take us to the checkout success page.

The following wireframe illustrates our web shop app's checkout success page: The content area here now changes to output the checkout successful message. Clearly, this page is only visible to logged-in customers that have just finished the checkout process. The order number is clickable and links to the My Account area, focusing on the exact order. By reaching this screen, both the customer and the store manager should receive a notification email, as per the "Get email notification after order has been placed" and "Be notified if the new sales order has been created" requirements. With this, we conclude our customer-facing wireframes.
In regards to store manager user story requirements, we will simply define a landing administration interface for now, as shown in the following screenshot: Using the framework later on, we will get a complete auto-generated CRUD interface for the multiple Add New and List & Manage links. Access to this interface and its links will be controlled by the framework's security component, since this user will not be a customer or any user in the database as such.

Defining a technology stack

Once the requirements and wireframes are set, we can focus our attention on the selection of a technology stack. Choosing the right one in this case is more a matter of preference, as the application requirements can, for the most part, be easily met by any one of the popular frameworks. Our choice, however, falls on Symfony. Aside from a PHP framework, we still need a CSS framework to deliver some structure, styling, and responsiveness within the browser on the client side. Since the focus of this book is on PHP technologies, let's just say we choose the Foundation CSS framework for that task.

Summary

Creating web applications can be a tedious and time-consuming task. Web shops are probably one of the most robust and intensive types of application out there, as they encompass a great number of features. There are many components involved in delivering the final product: from the database, to server-side (PHP) code, to client-side (HTML, CSS, and JavaScript) code. In this article, we started off by defining some basic user stories, which in turn defined the high-level application requirements for our small web shop. Adding wireframes to the mix helped us visualize the customer-facing interface, while the store manager interface is to be provided out of the box by the framework. We further glossed over two of the most popular frameworks that support modular application design. We turned our attention to Symfony as the server-side technology and Foundation as a client-side responsive framework.
How to integrate Angular 2 with React
Mary Gualtieri
01 Sep 2016
5 min read
It can be overwhelming to choose the best framework to get the job done in JavaScript, because you have so many options out there. In previous years, we have seen two popular frameworks come to fame: React and Angular. React has gained a lot of popularity because it is what Facebook and Instagram are built on. So, which one do you use? If you ask most JavaScript developers, they advise you to use one or the other, and that comes down to personal choice. Let's go ahead and explore Angular and React, and actually explore how the two can work together.

A common misconception about React is that it is a full JavaScript framework in competition with Angular. But it actually is not. React is a user interface library and is just the view in an 'MVC' framework, with a little bit of a controller in it. In other words, React is a template (an Angular term for view) where you can add some controller logic. It is the same idea as integrating the jQuery UI into JavaScript. React aids Angular, which is a frontend framework, and makes it more efficient for the user. This is because you can write a reusable component that can be plugged into an application.

Angular has many pros, and that's what makes it so appealing to a lot of developers and companies. In my personal experience, Angular has been very powerful in making solid applications. However, one of the cons that bothers me about Angular is how it goes about executing a template. I always want to practice writing DRY code, and that can be a problem with Angular. You can end up writing a bunch of HTML that is complicated and difficult to read, and you can also end up causing a spaghetti effect in your CSS. One of Angular's big strengths is the watchers and the binders. When executed correctly in well-thought-out places, they can be great for fast binding and good responsiveness. But like every human, we all make errors, and when they are misused, you can have performance issues from binding way too many elements in your HTML. This leads to a very slow and lagged application that no one wants to use.

But there is a way to rectify this. You can use React to aid in Angular's downfalls. React was designed to work really well with other libraries and makes rendering views or templates much faster. The beauty of React is how it uses a more efficient algorithm on the virtual DOM. In plain terms, it allows you to change the parts of the application that need to be updated without having to touch the rest of the application. You can send a command to update the user interface, and React compares these changes to the existing DOM.

Instead of theorizing on how Angular and React complement one another, let's see how it looks in code. First, let's create an HTML file that has an Angular script and a React script. (Remember, your relative path may be different from the example.) Next, let's create a React component that renders a string that is inputted by the user. What is happening here is that we are using React to render our model. We created a component that renders the props passed to it. Then we create an Angular directive and controller to start the app. The directive is calling the React component and telling it to render. But let's take a look at another example that integrates Angular 2. We can demonstrate how a React component can be self-contained but still be injected into the Angular 2 world. Angular 2 has an optional hook, onInit(), that can be used to trigger the code that renders a React component.
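The code figures from the original article are not reproduced here, so the following is a reconstruction of the pattern just described; the names (ReactHostComponent, TitleWidget, initialize) are hypothetical illustrations, and the sketch assumes Angular 2 with React and ReactDOM available as modules:

// react-host.component.ts — the Angular 2 side of the integration.
import { Component, OnInit } from '@angular/core';
import { TitleWidget } from './title-widget';

@Component({
  selector: 'react-host',
  template: '<div id="react-root"></div>'
})
export class ReactHostComponent implements OnInit {
  // Angular's OnInit hook fires once the host element exists in the DOM,
  // which is the right moment to hand that element over to React.
  ngOnInit(): void {
    TitleWidget.initialize('react-root', 'Rendered by React');
  }
}

// title-widget.tsx — the self-contained React side.
import * as React from 'react';
import * as ReactDOM from 'react-dom';

// A React component that simply renders the props passed to it.
class Title extends React.Component<{ title: string }, {}> {
  render() {
    return <h1>{this.props.title}</h1>;
  }
}

export class TitleWidget {
  // The static entry point that Angular calls; the title text is
  // passed down to the React component as a prop.
  static initialize(elementId: string, title: string): void {
    ReactDOM.render(<Title title={title} />, document.getElementById(elementId));
  }
}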
As you can see, the host component defines an implementation for the onInit handler, where it calls the static initialize function. If you noticed, the initialize method is passing a title text that is passed down to the React component as a prop.

Another item to consider that unites React and Angular is TypeScript. This is a big deal, because we can manage both Angular code and React code in the same compilation step. When it comes down to it, TypeScript is what gets compiled down to regular JavaScript. One thing that you have to remember to do is tell the compiler that you are using JSX by specifying the JSX flag.

To conclude, Angular will always remain a very popular framework for developers. For an application to render faster, React is a great way to render templates faster and create a more efficient user experience. React is a great complement to Angular and will only enhance your application.

About the author

Mary Gualtieri is a full-stack web developer and web designer who enjoys all aspects of the web and creating a pleasant user experience. Web development, specifically frontend development, is an interest of hers because it challenges her to think outside of the box and solve problems, all while constantly learning. She can be found on GitHub at MaryGualtieri.
ASP.NET Controllers and Server-Side Routes
Packt
19 Aug 2016
22 min read
In this article by Valerio De Sanctis, author of the book ASP.NET Web API and Angular 2, we will explore the client-server interaction capabilities of our frameworks: to put it in other words, we need to understand how Angular 2 will be able to fetch data from ASP.NET Core using its brand new, MVC6-based API structure. We won't be worrying about how ASP.NET Core will retrieve these data, be it from session objects, data stores, a DBMS, or any possible data source; that will come later on. For now, we'll just put together some sample, static data in order to understand how to pass them back and forth by using a well-structured, highly configurable, and viable interface.

The data flow

A native web app following the single-page application approach will roughly handle the client-server communication in the following way:

In case you are wondering what these Async Data Requests actually are, the answer is simple: everything, as long as it needs to retrieve data from the server, which is something that most of the common user interactions will normally do, including (yet not limited to) pressing a button to show more data or to edit/delete something, following a link to another app view, submitting a form, and so on. That is, unless the task is so trivial, or involves such a minimal amount of data, that the client can entirely handle it, meaning that it already has everything it needs. Examples of such tasks are show/hide element toggles, in-page navigation elements (such as internal anchors), and any temporary job requiring a confirmation or save button to be pressed before being actually processed.

The above picture shows, in a nutshell, what we're going to do: define and implement a pattern to serve these JSON-based, server-side responses our application will need to handle the upcoming requests. Since we've chosen a strongly data-driven application pattern such as a wiki, we'll surely need to put together a bunch of common CRUD-based requests revolving around a defined object which will represent our entries. For the sake of simplicity, we'll call it Item from now on. These requests will address some common CMS-inspired tasks such as: displaying a list of items, viewing/editing the selected item's details, handling filters and text-based search queries, and also deleting an item.

Before going further, let's have a more detailed look at what happens between any Data Request issued by the client and the JSON Response sent out by the server, that is, what's usually called the Request/Response flow:

As we can see, in order to respond to any client-issued Async Data Request, we need to build a server-side MVC6 Web API Controller featuring the following capabilities:

- Read and/or write data using the Data Access Layer.
- Organize these data in a suitable, JSON-serializable ViewModel.
- Serialize the ViewModel and send it to the client as a JSON Response.

Based on these points, we could easily conclude that the ViewModel is the key item here. That's not always correct: it could or couldn't be the case, depending on the project we are building. To better clarify that, before going further, it could be useful to spend a couple of words on the ViewModel object itself.

The role of the ViewModel

We all know that a ViewModel is a container-type class which represents only the data we want to display on our webpage.
In any standard MVC-based ASP.NET application, the ViewModel is instantiated by the Controller in response to a GET request using the data fetched from the Model: once built, the ViewModel is passed to the View, where it is used to populate the page contents/input fields. The main reason for building a ViewModel instead of directly passing the Model entities is that it only represents the data that we want to use, and nothing else. All the unnecessary properties that are in the model domain object will be left out, keeping the data transfer as lightweight as possible. Another advantage is the additional security it gives, since we can protect any field from being serialized and passed through the HTTP channel.

In a standard Web API context, where the data is passed using RESTful conventions via serialized formats such as JSON or XML, the ViewModel could easily be replaced by a JSON-serializable dynamic object created on the fly, such as this:

var response = new {
    Id = "1",
    Title = "The title",
    Description = "The description"
};

This approach is often viable for small or sample projects, where creating one (or many) ViewModel classes could be a waste of time. That's not our case, though; conversely, our project will greatly benefit from having a well-defined, strongly typed ViewModel structure, even if they will all eventually be converted into JSON strings.

Our first controller

Now that we have a clear vision of the Request/Response flow and its main actors, we can start building something up. Let's start with the Welcome View, which is the first page that any user will see upon connecting to our native web app. This is something that in a standard web application would be called the Home Page, but since we are following a single-page application approach, that name isn't appropriate. After all, we are not going to have more than one page.

In most wikis, the Welcome View/Home Page contains a brief text explaining the context/topic of the project, and then one or more lists of items ordered and/or filtered in various ways, such as:

- The last inserted ones (most recent first).
- The most relevant/visited ones (most viewed first).
- Some random items (in random order).

Let's try to do something like that. This will be our master plan for a suitable Welcome View:

In order to do that, we're going to need the following set of API calls:

- api/items/GetLatest (to fetch the most recently inserted items)
- api/items/GetMostViewed (to fetch the most viewed items)
- api/items/GetRandom (to fetch some items in random order)

As we can see, all of them will return a list of items ordered by a well-defined logic. That's why, before working on them, we should provide ourselves with a suitable ViewModel.

The ItemViewModel

One of the biggest advantages in building a native web app using ASP.NET and Angular 2 is that we can start writing our code without worrying too much about data sources: they will come later, and only after we're sure about what we really need. This is not a requirement, either; you are also free to start with your data source for a number of good reasons, such as:

- You already have a clear idea of what you'll need.
- You already have your entity set(s) and/or a defined/populated data structure to work with.
- You're used to starting with the data, then moving to the GUI.

All the above reasons are perfectly fine: you won't ever get fired for doing that.
Yet, the chance to start with the front-end might help you a lot if you're still unsure about what your application will look like, either in terms of GUI and/or data. In building this Native Web App, we'll take advantage of that: hence we'll start by defining our ItemViewModel instead of creating its Data Source and Entity class.

From Solution Explorer, right-click on the project root node and add a new folder named ViewModels. Once created, right-click on it and add a new item: from the server-side elements, pick a standard Class, name it ItemViewModel.cs, hit the Add button, and then type in the following code:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Threading.Tasks;
using Newtonsoft.Json;

namespace OpenGameListWebApp.ViewModels
{
    [JsonObject(MemberSerialization.OptOut)]
    public class ItemViewModel
    {
        #region Constructor
        public ItemViewModel()
        {
        }
        #endregion Constructor

        #region Properties
        public int Id { get; set; }
        public string Title { get; set; }
        public string Description { get; set; }
        public string Text { get; set; }
        public string Notes { get; set; }
        [DefaultValue(0)]
        public int Type { get; set; }
        [DefaultValue(0)]
        public int Flags { get; set; }
        public string UserId { get; set; }
        [JsonIgnore]
        public int ViewCount { get; set; }
        public DateTime CreatedDate { get; set; }
        public DateTime LastModifiedDate { get; set; }
        #endregion Properties
    }
}

As we can see, we're defining a rather complex class: this isn't something we could easily handle using a dynamic object created on the fly, hence why we're using a ViewModel instead.

We will be installing Newtonsoft's Json.NET package using NuGet. We start using it in this class by including its namespace in line 6 and decorating our newly-created class with a JsonObject attribute in line 10. That attribute can be used to set a list of behaviours of the JsonSerializer/JsonDeserializer methods, overriding the default ones: notice that we used MemberSerialization.OptOut, meaning that any field will be serialized into JSON unless it is decorated with an explicit JsonIgnore or NonSerialized attribute. We are making this choice because we're going to need most of our ViewModel properties serialized, as we'll see soon enough.

The ItemController

Now that we have our ItemViewModel class, let's use it to return some server-side data. From your project's root node, open the /Controllers/ folder: right-click on it, select Add > New Item, then create a Web API Controller class, name it ItemController.cs, and click the Add button to create it. The controller will be created with a bunch of sample methods: they are identical to those present in the default ValuesController.cs, hence we don't need to keep them.
Delete the entire file content and replace it with the following code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;
using OpenGameListWebApp.ViewModels;

namespace OpenGameListWebApp.Controllers
{
    [Route("api/[controller]")]
    public class ItemsController : Controller
    {
        // GET api/items/GetLatest/5
        [HttpGet("GetLatest/{num}")]
        public JsonResult GetLatest(int num)
        {
            var arr = new List<ItemViewModel>();
            for (int i = 1; i <= num; i++)
                arr.Add(new ItemViewModel()
                {
                    Id = i,
                    Title = String.Format("Item {0} Title", i),
                    Description = String.Format("Item {0} Description", i)
                });
            var settings = new JsonSerializerSettings()
            {
                Formatting = Formatting.Indented
            };
            return new JsonResult(arr, settings);
        }
    }
}

Notice that we also need the using Newtonsoft.Json directive at the top, since the JsonSerializerSettings and Formatting types live in that namespace.

This controller will be in charge of all Item-related operations within our app. As we can see, we started by defining a GetLatest method accepting a single integer parameter value. The method accepts any GET request using the custom routing rules configured via the HttpGet attribute: this approach is called Attribute Routing, and we'll be digging more into it later in this article. For now, let's stick to the code inside the method itself.

The behaviour is really simple: since we don't (yet) have a Data Source, we're basically mocking a bunch of ItemViewModel objects. Notice that, although it's just a fake response, we're doing it in a structured and credible way, respecting the number of items issued by the request and also providing different content for each one of them. It's also worth noticing that we're using a JsonResult return type, which is the best thing we can do as long as we're working with ViewModel classes featuring the JsonObject attribute provided by the Json.NET framework: that's definitely better than returning plain string or IEnumerable<string> types, as it will automatically take care of serializing the outcome and setting the appropriate response headers.

Let's try our Controller by running our app in Debug Mode: select Debug > Start Debugging from the main menu or press F5. The default browser should open, pointing to the index.html page, because we set it as the Launch URL in our project's debug properties. In order to test our brand new API Controller, we need to manually change the URL to the following:

/api/items/GetLatest/5

If we did everything correctly, the browser will show our five sample items, serialized as an indented JSON array.
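The original screenshot can't be reproduced here, but based on the code above, the output should roughly resemble the following sketch (shown truncated to the first item; property order and exact date formatting depend on the serializer defaults):

[
  {
    "Id": 1,
    "Title": "Item 1 Title",
    "Description": "Item 1 Description",
    "Text": null,
    "Notes": null,
    "Type": 0,
    "Flags": 0,
    "UserId": null,
    "CreatedDate": "0001-01-01T00:00:00",
    "LastModifiedDate": "0001-01-01T00:00:00"
  },
  ...
]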
Our first controller is up and running. As you can see, the ViewCount property is not present in the JSON-serialized output: that's by design, since it has been flagged with the JsonIgnore attribute, meaning that we're explicitly opting it out. Now that we've seen that it works, we can come back to the routing aspect of what we just did: since it is a major topic, it's well worth some of our time.

Understanding routes

We will acknowledge the fact that the ASP.NET Core pipeline has been completely rewritten in order to merge the MVC and Web API modules into a single, lightweight framework able to handle both worlds. Although this certainly is a good thing, it comes with the usual downside that we need to learn a lot of new stuff. Handling routes is a perfect example of this, as the new approach defines some major breaking changes from the past.

Defining routing

The first thing we should do is give a proper definition of what routing actually is. To put it simply, we could say that URL routing is the server-side feature that allows a web developer to handle HTTP requests pointing to URIs that do not map to physical files. Such a technique can be used for a number of different reasons, including:

- Giving dynamic pages semantic, meaningful, and human-readable names in order to improve readability and/or search engine optimization (SEO).
- Renaming or moving one or more physical files within your project's folder tree without being forced to change their URLs.
- Setting up aliases and redirects.

Routing through the ages

In earlier times, when ASP.NET was just Web Forms, URL routing was strictly bound to physical files: in order to implement viable URL convention patterns, developers were forced to install/configure a dedicated URL rewriting tool, using either an external ISAPI filter such as Helicon Tech's ISAPI_Rewrite or, starting with IIS 7, the IIS URL Rewrite Module.

When ASP.NET MVC was released, the routing pattern was completely rewritten: developers could set up their own convention-based routes in a dedicated file (RouteConfig.cs or Global.asax, depending on the template) using the Routes.MapRoute method. If you've played along with MVC 1 through 5 or Web API 1 and/or 2, snippets like this should be quite familiar to you:

routes.MapRoute(
    name: "Default",
    url: "{controller}/{action}/{id}",
    defaults: new
    {
        controller = "Home",
        action = "Index",
        id = UrlParameter.Optional
    }
);

This method of defining routes, strictly based upon pattern-matching techniques used to relate any given URL request to a specific Controller action, went by the name of Convention-based Routing.

ASP.NET MVC5 brought something new, as it was the first version supporting the so-called Attribute-based Routing. This approach was designed to give developers a more versatile option. If you used it at least once, you'll probably agree that it was a great addition to the framework, as it allowed developers to define routes within the Controller file. Even those who chose to keep the convention-based approach could find it useful for one-time overrides like the following, without having to sort them out using regular expressions:

[RoutePrefix("v2Products")]
public class ProductsController : Controller
{
    [Route("v2Index")]
    public ActionResult Index()
    {
        return View();
    }
}

In ASP.NET MVC6, the routing pipeline has been rewritten completely: that's why things like the Routes.MapRoute() method are not used anymore, along with any explicit default routing configuration. You won't find anything like that in the new Startup.cs file, which contains a very small amount of code and (apparently) nothing about routes.

Handling routes in ASP.NET MVC6

We could say that the reason behind the Routes.MapRoute method's disappearance from the application's main configuration file is the fact that there's no need to set up default routes anymore. Routing is handled by the two brand-new services.AddMvc() and app.UseMvc() methods called within the Startup.cs file, which respectively register MVC using the Dependency Injection framework built into ASP.NET Core and add a set of default routes to our app. We can take a look at what happens under the hood by looking at the current implementation of the app.UseMvc() method in the framework code:

public static IApplicationBuilder UseMvc(
    [NotNull] this IApplicationBuilder app,
    [NotNull] Action<IRouteBuilder> configureRoutes)
{
    // Verify if AddMvc was done before calling UseMvc
    // We use the MvcMarkerService to make sure if all the services were added.
    MvcServicesHelper.ThrowIfMvcNotRegistered(app.ApplicationServices);
    var routes = new RouteBuilder
    {
        DefaultHandler = new MvcRouteHandler(),
        ServiceProvider = app.ApplicationServices
    };
    configureRoutes(routes);
    // Adding the attribute route comes after running the user-code because
    // we want to respect any changes to the DefaultHandler.
    routes.Routes.Insert(0, AttributeRouting.CreateAttributeMegaRoute(
        routes.DefaultHandler,
        app.ApplicationServices));
    return app.UseRouter(routes.Build());
}

The good thing about this is the fact that the framework now handles all the hard work, iterating through all the Controllers' actions and setting up their default routes, thus saving us some work. It's worth noticing that the default ruleset follows the standard RESTful conventions, meaning that it will be restricted to the following action names: Get, Post, Put, Delete. We could say that ASP.NET MVC6 is enforcing a strict Web API-oriented approach – which is much to be expected, since it incorporates the whole ASP.NET Core framework.

Following the RESTful convention is generally a great thing to do, especially if we aim to create a set of pragmatic, RESTful public APIs to be used by other developers. Conversely, if we're developing our own app and we want to keep our API accessible to our eyes only, going for custom routing standards is just as viable: as a matter of fact, it could even be a better choice, shielding our Controllers against some of the most trivial forms of request flooding and/or DDoS-based attacks. Luckily enough, both Convention-based Routing and Attribute-based Routing are still alive and well, allowing you to set up your own standards.

Convention-based routing

If we feel like using the most classic routing approach, we can easily resurrect our beloved MapRoute() method by enhancing the app.UseMvc() call within the Startup.cs file in the following way:

app.UseMvc(routes => {
    // Route Sample A
    routes.MapRoute(
        name: "RouteSampleA",
        template: "MyOwnGet",
        defaults: new { controller = "Items", action = "Get" }
    );
    // Route Sample B
    routes.MapRoute(
        name: "RouteSampleB",
        template: "MyOwnPost",
        defaults: new { controller = "Items", action = "Post" }
    );
});

Attribute-based routing

Our previously shown ItemController.cs makes good use of the Attribute-based Routing approach, featuring it both at Controller level:

[Route("api/[controller]")]
public class ItemsController : Controller

And at Action Method level:

[HttpGet("GetLatest")]
public JsonResult GetLatest()

Three choices to route them all

Long story short, ASP.NET MVC6 gives us three different choices for handling routes: enforcing the standard RESTful conventions, reverting back to the good old Convention-based Routing, or decorating the Controller files with Attribute-based Routing. It's also worth noticing that Attribute-based Routes, if and when defined, will override any matching Convention-based pattern; both of them, if/when defined, will override the default RESTful conventions created by the built-in UseMvc() method. In this article we're going to use all of these approaches, in order to learn when, where, and how to properly make use of each of them.

Adding more routes

Let's get back to our ItemController. Now that we're aware of the routing patterns we can use, we can use that knowledge to implement the API calls we're still missing.
Open the ItemController.cs file and add the following code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using OpenGameListWebApp.ViewModels;
using Newtonsoft.Json;

namespace OpenGameListWebApp.Controllers
{
    [Route("api/[controller]")]
    public class ItemsController : Controller
    {
        #region Attribute-based Routing
        /// <summary>
        /// GET: api/items/GetLatest/{n}
        /// ROUTING TYPE: attribute-based
        /// </summary>
        /// <returns>An array of {n} Json-serialized objects representing the last inserted items.</returns>
        [HttpGet("GetLatest/{n}")]
        public IActionResult GetLatest(int n)
        {
            var items = GetSampleItems().OrderByDescending(i => i.CreatedDate).Take(n);
            return new JsonResult(items, DefaultJsonSettings);
        }

        /// <summary>
        /// GET: api/items/GetMostViewed/{n}
        /// ROUTING TYPE: attribute-based
        /// </summary>
        /// <returns>An array of {n} Json-serialized objects representing the items with most user views.</returns>
        [HttpGet("GetMostViewed/{n}")]
        public IActionResult GetMostViewed(int n)
        {
            if (n > MaxNumberOfItems) n = MaxNumberOfItems;
            var items = GetSampleItems().OrderByDescending(i => i.ViewCount).Take(n);
            return new JsonResult(items, DefaultJsonSettings);
        }

        /// <summary>
        /// GET: api/items/GetRandom/{n}
        /// ROUTING TYPE: attribute-based
        /// </summary>
        /// <returns>An array of {n} Json-serialized objects representing some randomly-picked items.</returns>
        [HttpGet("GetRandom/{n}")]
        public IActionResult GetRandom(int n)
        {
            if (n > MaxNumberOfItems) n = MaxNumberOfItems;
            var items = GetSampleItems().OrderBy(i => Guid.NewGuid()).Take(n);
            return new JsonResult(items, DefaultJsonSettings);
        }
        #endregion

        #region Private Members
        /// <summary>
        /// Generate a sample array of source Items to emulate a database (for testing purposes only).
        /// </summary>
        /// <param name="num">The number of items to generate: default is 999</param>
        /// <returns>A defined number of mock items (for testing purposes only)</returns>
        private List<ItemViewModel> GetSampleItems(int num = 999)
        {
            List<ItemViewModel> lst = new List<ItemViewModel>();
            DateTime date = new DateTime(2015, 12, 31).AddDays(-num);
            for (int id = 1; id <= num; id++)
            {
                lst.Add(new ItemViewModel()
                {
                    Id = id,
                    Title = String.Format("Item {0} Title", id),
                    Description = String.Format("This is a sample description for item {0}: Lorem ipsum dolor sit amet.", id),
                    CreatedDate = date.AddDays(id),
                    LastModifiedDate = date.AddDays(id),
                    ViewCount = num - id
                });
            }
            return lst;
        }

        /// <summary>
        /// Returns the maximum number of items that can be requested at once (used to cap the {n} parameter).
        /// </summary>
        private int MaxNumberOfItems
        {
            get { return 100; }
        }

        /// <summary>
        /// Returns a suitable JsonSerializerSettings object that can be used to generate the JsonResult return value for this Controller's methods.
        /// </summary>
        private JsonSerializerSettings DefaultJsonSettings
        {
            get
            {
                return new JsonSerializerSettings()
                {
                    Formatting = Formatting.Indented
                };
            }
        }
        #endregion
    }
}

We added a lot of things here, that's for sure. Let's see what's new:

- We added the GetMostViewed(n) and GetRandom(n) methods, built upon the same mocking logic used for GetLatest(n): each one requires a single integer parameter specifying the (maximum) number of items to retrieve. Both of them cap that number using a MaxNumberOfItems private property, which we also need to define among the private members.
- We added two new private members:
- The GetSampleItems() method, to generate some sample Item objects when we need them. This method is an improved version of the dummy item generator loop we had inside the previous GetLatest() method implementation, as it acts more like a Dummy Data Provider: we'll say more about it later on.
- The DefaultJsonSettings property, so we won't have to manually instantiate a JsonSerializerSettings object every time.
- We also decorated each class member with a dedicated <summary> documentation tag explaining what it does and its return value. These tags will be used by IntelliSense to show real-time information about the type within the Visual Studio GUI. They will also come in handy when we want to produce auto-generated XML documentation for our project using industry-standard documentation tools such as Sandcastle.
- Finally, we added some #region / #endregion pre-processor directives to separate our code into blocks. We'll do this a lot from now on, as this will greatly increase our source code's readability and usability, allowing us to expand or collapse different sections/parts when we don't need them, thus focusing more on what we're working on.

For more info regarding documentation tags, take a look at the following MSDN official documentation page: https://msdn.microsoft.com/library/2d6dt3kf.aspx

If you want to know more about C# pre-processor directives, this is the one to check out instead: https://msdn.microsoft.com/library/9a1ybwek.aspx

The dummy data provider

Our new GetSampleItems() method deserves a couple more words. As we can easily see, it emulates the role of a Data Provider, returning a list of items in a credible fashion. Notice that we built it in a way that it will always return identical items, as long as the num parameter value remains the same:

- The generated items' Id values will follow a linear sequence, from 1 to num.
- Any generated item will have incremental CreatedDate and LastModifiedDate values based upon its Id: the higher the Id, the more recent the two dates will be, up to 31 December 2015. This follows the assumption that the most recent items will have the highest Ids, as is normally the case for DBMS records featuring numeric, auto-incremental keys.
- Any generated item will have a decreasing ViewCount value based upon its Id: the higher the Id is, the lower it will be. This follows the assumption that newer items will generally get fewer views than older ones.

While it obviously lacks any insert/update/delete feature, this Dummy Data Provider is viable enough to serve our purposes until we replace it with an actual, persistence-based Data Source. Technically speaking, we could do something better by using one of the many mocking frameworks available through NuGet: Moq, NMock3, NSubstitute, or Rhino, just to name a few.
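As a rough illustration of that last point, here is a minimal sketch of how Moq could stand in for our provider. Note that the IItemRepository interface is hypothetical – it is not part of our project – and is shown only to give an idea of how a mocking framework works:

using Moq;
using System.Collections.Generic;
using OpenGameListWebApp.ViewModels;

// A hypothetical abstraction over our data source.
public interface IItemRepository
{
    IEnumerable<ItemViewModel> GetLatest(int n);
}

// Inside a test method:
var mockRepo = new Mock<IItemRepository>();
mockRepo.Setup(r => r.GetLatest(It.IsAny<int>()))
    .Returns(new List<ItemViewModel>()
    {
        new ItemViewModel() { Id = 1, Title = "Item 1 Title" }
    });

// Any component receiving mockRepo.Object would get the canned list above,
// without a real database being involved.

The main design advantage over our hand-rolled GetSampleItems() is that the canned behaviour lives in the test code rather than in the controller itself.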
Summary

We spent some time putting the standard application data flow under our lens: a two-way communication pattern between the server and its clients, built upon the HTTP protocol. We acknowledged the fact that we'll mostly be dealing with JSON-serializable objects such as Items, so we chose to equip ourselves with an ItemViewModel server-side class, together with an ItemController that will actively use it to expose the data to the client. We started building our MVC6-based Web API interface by implementing a number of methods required to create the client-side UI we chose for our Welcome View, consisting of three item listings to show to our users: the last inserted ones, the most viewed ones, and some random picks. We routed the requests to them by using a custom set of Attribute-based routing rules, which seemed to be the best choice for our specific scenario. While we were there, we also took the chance to add a dedicated method to retrieve a single Item from its unique Id, assuming we're going to need it for sure.

Resources for Article:

Further resources on this subject:
- Designing your very own ASP.NET MVC Application [article]
- ASP.Net Site Performance: Improving JavaScript Loading [article]
- Displaying MySQL data on an ASP.NET Web Page [article]

Adding Charts to Dashboards

Packt
19 Aug 2016
11 min read
In this article by Vince Sesto, author of the book Learning Splunk Web Framework, we will study adding charts to dashboards. We have a development branch to work from and we are going to work further with the SimpleXMLDashboard dashboard. We should already be on our development server environment, as we have just switched over to our new development branch. We are going to create a new bar chart, showing the daily NASA site access for our top educational users. We will change the label of the dashboard and finally place an average overlay on top of our chart:

(For more resources related to this topic, see here.)

1. Get into the local directory of our Splunk App, and into the views directory where all our Simple XML code is for all our dashboards:

cd $SPLUNK_HOME/etc/apps/nasa_squid_web/local/data/ui/views

2. We are going to work on the simplexmldashboard.xml file. Open this file with a text editor or your favorite code editor. Don't forget, you can also use the Splunk Code Editor if you are not comfortable with other methods. It is not compulsory to indent and nest your Simple XML code, but it is a good idea to have consistent indentation and commenting to make sure your code is clear and stays as readable as possible.

3. Let's start by changing the name of the dashboard that is displayed to the user. Change line 2 to the following line of code (don't include the line numbers):

2 <label>Educational Site Access</label>

4. Move down to line 16 and you will see that we have closed off our row element with a </row>. We are going to add in a new row where we will place our new chart. After the sixteenth line, add the following three lines to create a new row element, a new panel to add our chart, and finally, open up our new chart element:

17 <row>
18 <panel>
19 <chart>

5. The next two lines will give our chart a title and we can then open up our search:

20 <title>Top Educational User</title>
21 <search>

6. To create a new search, just like we would enter in the Splunk search bar, we will use the query tag as listed with our next line of code. In our search element, we can also set the earliest and latest times for our search, but in this instance we are using the entire data source:

22 <query>index=main sourcetype=nasasquidlogs | search calclab1.math.tamu.edu | stats count by MonthDay</query>
23 <earliest>0</earliest>
24 <latest></latest>
25 </search>

7. We have completed our search and we can now modify the way the chart will look on our panel with the option chart elements. In our next four lines of code, we set the chart type as a column chart, set the legend to the bottom of the chart area, remove any master legend, and finally set the height as 250 pixels:

26 <option name="charting.chart">column</option>
27 <option name="charting.legend.placement">bottom</option>
28 <option name="charting.legend.masterLegend">null</option>
29 <option name="height">250px</option>

8. We need to close off the chart, panel, row, and finally the dashboard elements. Make sure you only close off the dashboard element once:

30 </chart>
31 </panel>
32 </row>
33 </dashboard>

We have done a lot of work here. We should be saving and testing our code for every 20 or so lines that we add, so save your changes. And as we mentioned earlier in the article, we want to refresh our cache by entering the following URL into our browser:

http://<host:port>/debug/refresh

When we view our page, we should see a new column chart at the bottom of our dashboard showing the usage per day for the calclab1.math.tamu.edu domain.
But we're not done with that chart yet. We want to put a line overlay showing the average site access per day for our user. Open up simplexmldashboard.xml again and change our query in line 22 to the following:

22 <query>index=main sourcetype=nasasquidlogs | search calclab1.math.tamu.edu | stats count by MonthDay | eventstats avg(count) as average | eval average=round(average,0)</query>

Simple XML contains some special characters, which are ', <, >, and &. If you intend to use advanced search queries, you may need to use these characters, and if so you can do so by either using their HTML entities or using the CDATA tags, where you can wrap your query with <![CDATA[ and ]]>.

We now need to add two new option lines into our Simple XML code. After line 29, add the following two lines, without replacing any of the closing elements that we previously entered. The first will set the chart overlay field to be displayed for the average field; the next will set the color of the overlay:

30 <option name="charting.chart.overlayFields">average</option>
31 <option name="charting.fieldColors">{"count": 0x639BF1, "average": 0xFF5A09}</option>

Save your new changes, refresh the cache, and then reload your page. You should now see the average drawn as a line overlay on top of the column chart.

The Simple XML of charts

As we can see from our example, it is relatively easy to create and configure our charts using Simple XML. When we completed the chart, we used five options to configure the look and feel of the chart, but there are many more that we can choose from. Our chart element always needs to be in between its two parent elements, which are row and panel. Within our chart element, we always start with a title for the chart and a search to power the chart. We can then set additional optional settings for earliest and latest, and then a list of options to configure the look and feel, as demonstrated below. If these options are not specified, default values are provided by Splunk:

1 <chart>
2 <title></title>
3 <search>
4 <query></query>
5 <earliest>0</earliest>
6 <latest></latest>
7 </search>
8 <option name=""></option>
9 </chart>

There is a long list of options that can be set for our charts; the following is a list of the more important options to know:

- charting.chart: This is where you set the chart type, with area, bar, bubble, column, fillerGauge, line, markerGauge, pie, radialGauge, and scatter being the charts that you can choose from.
- charting.backgroundColor: Set the background color of your chart with a Hex color value.
- charting.drilldown: Set to either all or none. This allows the chart to be clicked on to allow the search to be drilled down for further information.
- charting.fieldColors: This can map a color to a field, as we did with our average field in the preceding example.
- charting.fontColor: Set the value of the font color in the chart with a Hex color value.
- height: The height of the chart in pixels. The value must be between 100 and 1,000 pixels.

A lot of the options seem to be self-explanatory, but a full list of options and descriptions can be found in the Splunk reference material at the following URL: http://docs.splunk.com/Documentation/Splunk/latest/Viz/ChartConfigurationReference.
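To see how several of these options combine, here is a small, hypothetical chart definition – the search query is only a placeholder, and the specific values chosen are arbitrary:

<chart>
  <title>Sample Option Combination</title>
  <search>
    <query>index=main | stats count by host</query>
    <earliest>0</earliest>
    <latest></latest>
  </search>
  <!-- render as a clickable pie chart on a white background -->
  <option name="charting.chart">pie</option>
  <option name="charting.drilldown">all</option>
  <option name="charting.backgroundColor">#FFFFFF</option>
  <option name="height">300px</option>
</chart>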
Expanding our Splunk App with maps

We will now go through another example in our NASA Squid and Web Data App to run through a more complex type of visualization to present to our users. We will use the Basic Dashboard that we created, but we will change the Simple XML to give it a more meaningful name, and then set up a map to present to our users where our requests are actually coming from. Maps use a map element and don't rely on the chart element that we have been using. The Simple XML code for the dashboard we created earlier in this article looks like the following:

<dashboard>
  <label>Basic Dashboard</label>
</dashboard>

So let's get to work and give our Basic Dashboard a little "bling":

1. Get into the local directory of our Splunk App, and into the views directory where all our Simple XML code is for our Basic Dashboard:

cd $SPLUNK_HOME/etc/apps/nasa_squid_web/local/data/ui/views

2. Open the basic_dashboard.xml file with a text editor or your favorite code editor. Don't forget, you can also use the Splunk Code Editor if you are not comfortable with other methods.

3. We might as well remove all of the code that is in there, because it is going to look completely different than the way it did originally.

4. Now start by setting up your dashboard and label elements, with a label that will give you more information on what the dashboard contains:

1 <dashboard>
2 <label>Show Me Your Maps</label>

5. Open your row, panel, and map elements, and set a title for the new visualization. Make sure you use the map element and not the chart element:

3 <row>
4 <panel>
5 <map>
6 <title>User Locations</title>

6. We can now add our search query within our search elements. We will only search for IP addresses in our data and use the geostats Splunk function to extract a latitude and longitude from the data:

7 <search>
8 <query>index=main sourcetype="nasasquidlogs" | search From=1* | iplocation From | geostats latfield=lat longfield=lon count by From</query>
9 <earliest>0</earliest>
10 <latest></latest>
11 </search>

The search query that we have in our Simple XML code is more advanced than the previous queries we have implemented. If you need further details on the functions provided in the query, please refer to the Splunk search documentation at the following location: http://docs.splunk.com/Documentation/Splunk/6.4.1/SearchReference/WhatsInThisManual.

7. Now all we need to do is close off all our elements, and that is all that is needed to create our new visualization of IP address requests:

12 </map>
13 </panel>
14 </row>
15 </dashboard>

If your dashboard now shows a map of user locations, I think it looks pretty good. But there is more we can do with our code to make it look even better. We can set extra options in our Simple XML code to zoom in, only display a certain part of the map, set the size of the markers, and finally set the minimum and maximum that can be zoomed into the screen. The map looks pretty good, but it seems that a lot of the traffic is being generated by users in the USA. Let's have a look at setting some extra configurations in our Simple XML to change the way the map displays to our users. Get back to our basic_dashboard.xml file and add the following options:

1. After our search element is closed off, we can add the following options. First we will set the maximum clusters to be displayed on our map as 100. This will hopefully speed up our map being displayed, and allow all the data points to be viewed further with the drilldown option:

12 <option name="mapping.data.maxClusters">100</option>
13 <option name="mapping.drilldown">all</option>

2. We can now set our central point for the map to load using latitude and longitude values.
In this instance, we are going to set the heart of the USA as our central point. We are also going to set our zoom value to 4, which will zoom in a little further from the default of 2:

14 <option name="mapping.map.center">(38.48,-102)</option>
15 <option name="mapping.map.zoom">4</option>

Remember that we need to have our map, panel, row, and dashboard elements closed off. Save the changes and reload the cache, and let's see what is now displayed. Your map should now be displaying a little faster than it originally did, and it will be focused on the USA, where the bulk of the traffic is coming from. The map element has numerous options to use and configure, and a full list can be found at the following Splunk reference page: http://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML.

Summary

In this article we covered Simple XML charts and how to expand our Splunk App with maps.

Resources for Article:

Further resources on this subject:
- The Splunk Web Framework [article]
- Splunk's Input Methods and Data Feeds [article]
- The Splunk Interface [article]


Server-Side Rendering

Packt
16 Aug 2016
5 min read
In this article, Kamil Przeorski, the author of the book Mastering Full Stack React Web Development, introduces the Universal JavaScript (or isomorphic JavaScript) features that we are going to implement. To be more exact: we will develop our app in such a way that its pages are rendered both on the server and on the client side. This is different from Angular 1 or Backbone single-page apps, which are mainly rendered on the client side. Our approach is more complicated in technological terms, as you need to deploy your full-stack skills while working on server-side rendering, but on the other side, having this experience will make you a more desirable programmer, so you can advance your career to the next level – you will be able to charge more for your skills on the market.

(For more resources related to this topic, see here.)

When the server side is worth implementing

Server-side rendering is a very useful feature for text-content-related startups/companies (like news portals), because it helps them to be better indexed by the different search engines. It's an essential feature for any news- and content-heavy website, because it helps grow organic traffic. In this article, we will also run our app with server-side rendering. The second segment of companies where server-side rendering may be very useful is entertainment, where users have less patience and may close the browser if a webpage is loading slowly. In general, all B2C (consumer-facing) apps should use server-side rendering to improve the experience of the masses of people who are visiting their websites.

Our focus for this article will include the following:

- Making a whole server-side code rearrangement to prepare for server-side rendering
- Starting to use react-dom/server and its renderToString method

Are you ready? Our first step is to mock the database's response on the backend (we will create a real DB query after the whole server-side rendering works correctly on the mocked data).

Mocking the database response

First of all, we will mock our database response on the backend in order to get prepared to go into server-side rendering directly:

$ [[you are in the server directory of your project]]
$ touch fetchServerSide.js

The fetchServerSide.js file will consist of all the functions that will fetch data from our database in order to make the server side work. As was mentioned earlier, we will mock it for the meanwhile with the following code in fetchServerSide.js:

export default () => {
  return {
    'article': {
      '0': {
        'articleTitle': 'SERVER-SIDE Lorem ipsum - article one',
        'articleContent': 'SERVER-SIDE Here goes the content of the article'
      },
      '1': {
        'articleTitle': 'SERVER-SIDE Lorem ipsum - article two',
        'articleContent': 'SERVER-SIDE Sky is the limit, the content goes here.'
      }
    }
  }
}

The goal of making this mocked object is that we will be able to see whether our server-side rendering works correctly after implementation, because, as you have probably already spotted, we have added the SERVER-SIDE prefix to the beginning of each title and content – so it will tell us that our app is getting the data from server-side rendering. Later, this function will be replaced with a query to MongoDB.

The next thing that will help us implement server-side rendering is a handleServerSideRender function that will be triggered each time a request hits the server. In order to make handleServerSideRender trigger every time the frontend calls our backend, we need to use Express middleware via app.use.
So far we have been using some external libraries, like:

app.use(cors())
app.use(bodyParser.json({extended: false}))

Now, we will write our own small middleware function that behaves in a similar way to cors or bodyParser (the external libs that are also middlewares). Before doing so, let's import the dependencies that are required for React's server-side rendering (server/server.js):

import React from 'react';
import {createStore} from 'redux';
import {Provider} from 'react-redux';
import {renderToStaticMarkup} from 'react-dom/server';
import ReactRouter from 'react-router';
import {RoutingContext, match} from 'react-router';
import * as hist from 'history';
import rootReducer from '../src/reducers';
import reactRoutes from '../src/routes';
import fetchServerSide from './fetchServerSide';

After adding all those imports, the server/server.js file will look as follows:

import http from 'http';
import express from 'express';
import cors from 'cors';
import bodyParser from 'body-parser';
import falcor from 'falcor';
import falcorExpress from 'falcor-express';
import falcorRouter from 'falcor-router';
import routes from './routes.js';
import React from 'react';
import { createStore } from 'redux';
import { Provider } from 'react-redux';
import { renderToStaticMarkup } from 'react-dom/server';
import ReactRouter from 'react-router';
import { RoutingContext, match } from 'react-router';
import * as hist from 'history';
import rootReducer from '../src/reducers';
import reactRoutes from '../src/routes';
import fetchServerSide from './fetchServerSide';

It is important to import history in the given way, as in the example import * as hist from 'history'. The RoutingContext and match are the way of using React Router on the server side. The renderToStaticMarkup function is going to generate an HTML markup for us on the server side.

After we have added those new imports, then under falcor's middleware setup:

// this already exists in your codebase
app.use('/model.json', falcorExpress.dataSourceRoute((req, res) => {
  return new falcorRouter(routes); // this already exists in your codebase
}));

Under the model.json file's code, please add the following:

let handleServerSideRender = (req, res) => {
  return;
};

let renderFullHtml = (html, initialState) => {
  return;
};

app.use(handleServerSideRender);

The app.use(handleServerSideRender) is fired each time the server side receives a request from a client's application. Then we have prepared the empty functions that we will use:

- handleServerSideRender: It will use renderToString in order to create a valid server-side HTML markup.
- renderFullHtml: This helper function will embed our new React HTML markup into a whole HTML document, as you can see in a moment down below.
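We will fill these functions in next. For orientation, here is a rough sketch of what renderFullHtml is meant to produce – the mount-point id (publishingAppRoot) and the bundle path (/app.js) are assumptions here, and must match whatever your client-side index.html and webpack config actually use:

let renderFullHtml = (html, initialState) => {
  // Embed the React markup and the serialized Redux state
  // into a complete HTML document.
  return `
    <!doctype html>
    <html>
      <body>
        <div id="publishingAppRoot">${html}</div>
        <script>
          window.__INITIAL_STATE__ = ${JSON.stringify(initialState)}
        </script>
        <script src="/app.js"></script>
      </body>
    </html>`;
};

Exposing the initial state on window lets the client-side Redux store boot with the exact data the server rendered, so the first client render matches the server markup.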
Summary

We have done the basic server-side rendering in this article.

Resources for Article:

Further resources on this subject:
- Basic Website using Node.js and MySQL database [article]
- How to integrate social media with your WordPress website [article]
- Laravel 5.0 Essentials [article]


Setting up MongoDB

Packt
12 Aug 2016
10 min read
In this article by Samer Buna, author of the book Learning GraphQL and Relay, we're mostly going to be talking about how an API is nothing without access to a database. Let's set up a local MongoDB instance, add some data to it, and make sure we can access that data through our GraphQL schema.

(For more resources related to this topic, see here.)

MongoDB can be locally installed on multiple platforms. Check the documentation site for instructions for your platform (https://docs.mongodb.com/manual/installation/). For Mac, the easiest way is probably Homebrew:

~ $ brew install mongodb

Create a db folder inside a data folder. The default location is /data/db:

~ $ sudo mkdir -p /data/db

Change the owner of the /data folder to be the currently logged-in user:

~ $ sudo chown -R $USER /data

Start the MongoDB server:

~ $ mongod

If everything worked correctly, we should be able to open a new terminal and test the mongo CLI:

~/graphql-project $ mongo
MongoDB shell version: 3.2.7
connecting to: test
> db.getName()
test
>

We're using MongoDB version 3.2.7 here. Make sure that you have this version or a newer version of MongoDB. Let's go ahead and create a new collection to hold some test data. Let's name that collection users:

> db.createCollection("users")
{ "ok" : 1 }

Now we can use the users collection to add documents that represent users. We can use the MongoDB insertOne() function for that:

> db.users.insertOne({
  firstName: "John",
  lastName: "Doe",
  email: "[email protected]"
})

We should see an output like:

{
  "acknowledged" : true,
  "insertedId" : ObjectId("56e729d36d87ae04333aa4e1")
}

Let's go ahead and add another user:

> db.users.insertOne({
  firstName: "Jane",
  lastName: "Doe",
  email: "[email protected]"
})

We can now verify that we have two user documents in the users collection using:

> db.users.count()
2

MongoDB has a built-in unique object ID, which you can see in the output for insertOne(). Now that we have a running MongoDB, and we have some test data in there, it's time to see how we can read this data using a GraphQL API. To communicate with MongoDB from a Node.js application, we need to install a driver. There are many options that we can choose from, but GraphQL requires a driver that supports promises. We will use the official MongoDB Node.js driver, which supports promises. Instructions on how to install and run the driver can be found at: https://docs.mongodb.com/ecosystem/drivers/node-js/.

To install the MongoDB official Node.js driver under our graphql-project app, we do:

~/graphql-project $ npm install --save mongodb
└─┬ [email protected]

We can now use this mongodb npm package to connect to our local MongoDB server from within our Node application. In index.js:

const mongodb = require('mongodb');
const assert = require('assert');

const MONGO_URL = 'mongodb://localhost:27017/test';

mongodb.MongoClient.connect(MONGO_URL, (err, db) => {
  assert.equal(null, err);
  console.log('Connected to MongoDB server');
  // The readline interface code
});

The MONGO_URL variable value should not be hardcoded in code like this. Instead, we can use a Node process environment variable to set it to a certain value before executing the code. On a production machine, we would be able to use the same code and set the process environment variable to a different value.
Use the export command to set the environment variable value:

export MONGO_URL=mongodb://localhost:27017/test

Then in the Node code, we can read the exported value by using:

process.env.MONGO_URL

If we now execute the node index.js command, we should see the Connected to MongoDB server line right before we ask for the Client Request. At this point, the Node.js process will not exit after our interaction with it. We'll need to force-exit the process with Ctrl + C to restart it.

Let's start our database API with a simple field that can answer this question: how many total users do we have in the database? The query could be something like:

{ usersCount }

To be able to use a MongoDB driver call inside our schema main.js file, we need access to the db object that the MongoClient.connect() function exposed for us in its callback. We can use the db object to count the user documents by simply running the promise:

db.collection('users').count()
  .then(usersCount => console.log(usersCount));

Since we only have access to the db object in index.js within the connect() function's callback, we need to pass a reference to that db object to our graphql() function. We can do that using the fourth argument of the graphql() function, which accepts a contextValue object of globals; the GraphQL engine will pass this context object to all the resolver functions as their third argument. Modify the graphql function call within the readline interface in index.js to be:

graphql.graphql(mySchema, inputQuery, {}, { db }).then(result => {
  console.log('Server Answer :', result.data);
  db.close(() => rli.close());
});

The third argument to the graphql() function is called the rootValue, which gets passed as the first argument to the resolver function on the top-level type. We are not using that feature here. We passed the connected database object db as part of the global context object. This will enable us to use db within any resolver function. Note also how we're now closing the rli interface within the callback of the operation that closes the db. We should not leave any open db connections behind.

Here's how we can now use the resolver's third argument to resolve our usersCount top-level field with the db count() operation:

fields: {
  // "hello" and "diceRoll"...
  usersCount: {
    type: GraphQLInt,
    resolve: (_, args, { db }) =>
      db.collection('users').count()
  }
}

A couple of things to notice about this code:

- We destructured the db object from the third argument of the resolve() function so that we can use it directly (instead of context.db).
- We returned the promise itself from the resolve() function. The GraphQL executor has native support for promises. Any resolve() function that returns a promise will be handled by the executor itself. The executor will either successfully resolve the promise and then resolve the query field with the promise-resolved value, or it will reject the promise and return an error to the user.

We can test our query now:

~/graphql-project $ node index.js
Connected to MongoDB server
Client Request: { usersCount }
Server Answer : { usersCount: 2 }

*** #GitTag: chapter1-setting-up-mongodb ***
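Following the same pattern, more fields can be added with resolvers that read from the db context. As a quick, hypothetical illustration (this field is not part of the schema we are building), a list of user emails could look like the following – note that it assumes GraphQLList and GraphQLString are imported from the graphql package:

emails: {
  type: new GraphQLList(GraphQLString),
  resolve: (_, args, { db }) =>
    db.collection('users').find().toArray()
      .then(users => users.map(user => user.email))
}

The find().toArray() call returns a promise as well, so the executor resolves it exactly as it did for count().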
Setting up an HTTP interface

Let's now see how we can use the graphql() function under another interface, an HTTP one. We want our users to be able to send us a GraphQL request via HTTP. For example, to ask for the same usersCount field, we want the users to be able to do something like:

/graphql?query={usersCount}

We can use the Express.js Node framework to handle and parse HTTP requests, and within an Express.js route, we can use the graphql() function. For example (don't add these lines yet):

const app = express();

app.use('/graphql', (req, res) => {
  // use graphql.graphql() to respond with JSON objects
});

However, instead of manually handling the req/res objects, there is a GraphQL Express.js middleware that we can use, express-graphql. This middleware wraps the graphql() function and prepares it to be used by Express.js directly. Let's go ahead and bring in both the Express.js library and this middleware:

~/graphql-project $ npm install --save express express-graphql
├─┬ [email protected]
└─┬ [email protected]

In index.js, we can now import both express and the express-graphql middleware:

const graphqlHTTP = require('express-graphql');
const express = require('express');
const app = express();

With these imports, the middleware main function will now be available as graphqlHTTP(). We can now use it in an Express route handler. Inside the MongoClient.connect() callback, we can do:

app.use('/graphql', graphqlHTTP({
  schema: mySchema,
  context: { db }
}));

app.listen(3000, () =>
  console.log('Running Express.js on port 3000')
);

Note that at this point we can remove the readline interface code, as we are no longer using it. Our GraphQL interface from now on will be an HTTP endpoint.

The app.use line defines a route at /graphql and delegates the handling of that route to the express-graphql middleware that we imported. We pass two objects to the middleware: the mySchema object and the context object. We're not passing any input query here because this code just prepares the HTTP endpoint; we will be able to read the input query directly from a URL field.

The app.listen() function is the call we need to start our Express.js app. Its first argument is the port to use, and its second argument is a callback we can use after Express.js has started. We can now test our HTTP-mounted GraphQL executor with:

~/graphql-project $ node index.js
Connected to MongoDB server
Running Express.js on port 3000

In a browser window, go to: http://localhost:3000/graphql?query={usersCount}

*** #GitTag: chapter1-setting-up-an-http-interface ***

The GraphiQL editor

The graphqlHTTP() middleware function accepts another property on its parameter object, graphiql; let's set it to true:

app.use('/graphql', graphqlHTTP({
  schema: mySchema,
  context: { db },
  graphiql: true
}));

When we restart the server now and navigate to http://localhost:3000/graphql, we'll get an instance of the GraphiQL editor running locally on our GraphQL schema.

GraphiQL is an interactive playground where we can explore our GraphQL queries and mutations before we officially use them. GraphiQL is written in React and GraphQL, and it runs completely within the browser. GraphiQL has many powerful editor features, such as syntax highlighting, code folding, and error highlighting and reporting. Thanks to GraphQL's introspective nature, GraphiQL also has intelligent type-ahead for fields, arguments, and types.

Put the cursor in the left editor area, and type a selection set:

{
}

Place the cursor inside that selection set and press Ctrl + space.
You should see a list of all the fields that our GraphQL schema supports, which are the three fields that we have defined so far (hello, diceRoll, and usersCount).

If Ctrl + space does not work, try Cmd + space, Alt + space, or Shift + space.

The __schema and __type fields can be used to introspectively query the GraphQL schema about what fields and types it supports. When we start typing, this list starts to get filtered accordingly. The list respects the context of the cursor: if we place the cursor inside the arguments of diceRoll(), we'll get the only argument we defined for diceRoll, the count argument. Go ahead and read all the root fields that our schema supports, and see how the data gets reported on the right side as a formatted JSON object.

*** #GitTag: chapter1-the-graphiql-editor ***

Summary

In this article, we learned how to set up a local MongoDB instance and add some data to it, so that we can access that data through our GraphQL schema.

Resources for Article:

Further resources on this subject:
- Apache Solr and Big Data – integration with MongoDB [article]
- Getting Started with Java Driver for MongoDB [article]
- Documents and Collections in Data Modeling with MongoDB [article]

Laravel 5.0 Essentials

Packt
12 Aug 2016
9 min read
In this article by Alfred Nutile, from the book Laravel 5.x Cookbook, we will learn the following topics:

- Setting up Travis to Auto Deploy when all is Passing
- Working with Your .env File
- Testing Your App on Production with Behat

(For more resources related to this topic, see here.)

Setting up Travis to Auto Deploy when all is Passing

Level 0 of any work should be getting a deployment workflow set up. What that means in this case is that a push to GitHub will trigger our Continuous Integration (CI), and then from the CI, if the tests are passing, we trigger the deployment. In this example I am not going to hit the URL Forge gives you; instead, I am going to send an Artifact to S3 and then call CodeDeploy to deploy this Artifact.

Getting ready…

You really need to see the section before this, otherwise continue knowing this will make no sense.

How to do it…

1. Install the travis command line tool in Homestead as noted in their docs (https://github.com/travis-ci/travis.rb#installation). Make sure to use Ruby 2.x:

sudo apt-get install ruby2.0-dev
sudo gem install travis -v 1.8.2 --no-rdoc --no-ri

2. Then in the recipe folder I run the command:

> travis setup codedeploy

3. I answer all the questions, keeping in mind:
- The KEY and SECRET are the ones we made for the IAM User in the section before this.
- The S3 KEY is the filename, not the KEY we used above. So in my case I just use the name of the file again, latest.zip, since it sits inside the recipe-artifact bucket.

4. Finally, I open the .travis.yml file, which the above modifies, and I update the before-deploy area so the zip command ignores my .env file; otherwise it would overwrite the file on the server.

How it works…

Well, if you did the CodeDeploy section before this one, you will know this is not as easy as it looks. After all the previous work, we are able to, with the one command travis setup codedeploy, punch in securely all the needed info to get this passing build to deploy. So after phpunit reports that things are passing, we are ready. With that said, we had to have a lot of things in place: an S3 bucket to put the Artifact in, permission with the KEY and SECRET to access the Artifact and CodeDeploy, and a CodeDeploy Group and Application to deploy to. All of this was covered in the previous section. After that, it is just the magic of Travis and CodeDeploy working together to make this look so easy.

See also…

- Travis Docs: https://docs.travis-ci.com/user/deployment/codedeploy
- https://github.com/travis-ci/travis.rb
- https://github.com/travis-ci/travis.rb#installation

Working with Your .env File

The workflow around this can be tricky. Going from Local, to TravisCI, to CodeDeploy, and then to AWS without storing your info in .env on GitHub can be a challenge. What I will show here are some tools and techniques to do this well.

Getting ready…

A base install is fine; I will use the existing install to show some tricks around this.

How to do it…

1. Minimize using Conventions as much as possible:
- config/queue.php: I can do this to have one or more Queues.
- config/filesystems.php: Use the Config file as much as possible.

2. For example, suppose my .env contains a couple of Marvel API keys. If I add config/marvel.php and have it read those keys, my .env can be trimmed down to plain KEY=VALUE pairs, and later on I can call to those:

Config::get('marvel.MARVEL_API_VERSION')
Config::get('marvel.MARVEL_API_BASE_URL')
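Here is a minimal sketch of what such a config/marvel.php could look like – the key names come from the Config::get() calls above, while the default values are illustrative assumptions:

<?php

return [
    // env() reads from .env, with an optional fallback default
    'MARVEL_API_VERSION' => env('MARVEL_API_VERSION', 'v1'),
    'MARVEL_API_BASE_URL' => env('MARVEL_API_BASE_URL', 'https://gateway.marvel.com'),
];

With this in place, Config::get('marvel.MARVEL_API_VERSION') resolves through the config file, so the rest of the codebase never touches env() directly.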
3. Now, to easily send the .env settings to Staging or Production, use the EnvDeployer library:

> composer require alfred-nutile-inc/env-deployer:dev-master

Follow the readme.md for that library. Then, as it says in the docs, set up its config file so that it matches the destination IP/URL, username, and path for those services; I end up with the config file config/envdeployer.php.

4. Now, the trick to this library is that you start to enter KEY=VALUE pairs into your .env file stacked on top of each other, with per-environment overrides placed under each KEY. For example, my database settings might stack @production overrides under the local values. So now I can type:

> php artisan envdeployer:push production

This will push your .env over SSH to production and swap in the related @production values for each KEY they are placed above.

How it works…

The first mindset to follow is conventions: before you put a new KEY=VALUE into the .env file, step back and figure out defaults and conventions around what you already must have in this file. For example, the must-haves are APP_ENV, and then I always have APP_NAME, so those two together do a lot to name databases, queues, buckets, and so on around those existing KEYs. It really does add up. Whether you are working alone or on a team, focus on these conventions and then use the config/some.php file workflow to set up defaults. Then libraries like the one used above push this info around with ease – kind of like Heroku, you can command-line these settings up to the servers as needed.

See also…

- Laravel Validator for the .env file: https://packagist.org/packages/mathiasgrimm/laravel-env-validator
- Laravel 5 Fundamentals: Environments and Configuration: https://laracasts.com/series/laravel-5-fundamentals/episodes/6

Testing Your App on Production with Behat

So your app is now on Production! Start clicking away at hundreds of little and big features so you can make sure everything went okay – or better yet, run Behat! Behat on production? Sounds crazy, but I will cover some tips on how to do this, including how to set up some remote conditions and clean up when you are done.

Getting ready…

Any app will do. In my case, I am going to hit production with some tests I made earlier.

How to do it…

1. Tag a Behat test @smoke, or just a Scenario that you know is safe to run on Production, for example features/home/search.feature.

2. Update behat.yml, adding a profile called production. Then run:

> vendor/bin/behat -shome_ui --tags=@smoke --profile=production

I run an Artisan command to run all of these. Then you will see it hit the production URL and run only the Scenarios you feel are safe for Behat.

3. Another method is to log in as a demo user. After logging in as that user, you can see data that is related to that user only, so you can test the authenticated level of data and interactions. For example, in database/seeds/UserTableSeeder.php, add the demo user to the run method. Then update your .env and push that .env setting up to Production:

> php artisan envdeployer:push production

4. Then we update our behat.yml file to run this test even on Production, in features/auth/login.feature. Now we need to commit our work and push to GitHub so TravisCI can deploy the changes. Since this is a seed and not a migration, I need to rerun the seeds on production. Since this is a new site and no one has used it, this is fine – BUT of course this would have been a migration if I had to do this later in the application's life.

5. Now let's run this test from our vagrant box:

> vendor/bin/behat -slogin_ui --profile=production

But it fails, because I am setting up the start of this test for my local database, not the remote database, in features/bootstrap/LoginPageUIContext.php. So I can basically begin to create a way to set up the state of the world on the remote server.
First, generate the controller:

> php artisan make:controller SetupBehatController

Update that controller to do the setup, and add the route in app/Http/routes.php. Then update the Behat context, features/bootstrap/LoginPageUIContext.php, to call it.

And we should do some cleanup! First, add a new method to features/bootstrap/LoginPageUIContext.php, then add the related tag to the Scenarios it applies to in features/auth/login.feature. Then add the controller, like before, and its route: app/Http/Controllers/CleanupBehatController.php.

Then push, and we are ready to test this user with fresh state and clean up when the tests are done! In this case I could test editing the profile from one state to another.

How it works…

Not too hard! Now we have a workflow that can save us a ton of clicking around production after every deployment. To begin with, I add the tag @smoke to tests I consider safe for production. What does safe mean? Basically, read-only tests that I know will not affect the site's data. Using the @smoke tag, I have a consistent way to mark Suites or Scenarios as safe to run on production.

But then I take it a step further and create a way to test authenticated state, like making a Favorite or updating a profile! By using some simple routes and a demo user, I can begin to test many other things on my long list of features I need to consider after every deploy. All of this happens thanks to the configurability of Behat and how it allows me to manage different Profiles and Suites in the behat.yml file!

Lastly, I tie into the fact that Behat has hooks. In this case I tie into @AfterScenario by adding that to my annotation, and I add another hook, @profile, so it only runs if the Scenario has that tag. That is it: thanks to Behat, hooks, and how easy it is to make routes in Laravel, I can take care of a large percentage of what would otherwise be a tedious process after every deployment!

See also…

- Behat docs on hooks: http://docs.behat.org/en/v3.0/guides/3.hooks.html
- Sauce Labs (via a behat.yml setting, you can test your site on numerous devices): https://saucelabs.com/

Summary

This article covered setting up Travis to auto deploy, working with .env files, and testing on production with Behat.

Resources for Article:

Further resources on this subject:
- CRUD Applications using Laravel 4 [article]
- Laravel Tech Page [article]
- Eloquent… without Laravel! [article]

Migrating from Version 3

Packt
11 Aug 2016
11 min read
In this article by Matt Lambert, the author of the book Learning Bootstrap 4, we will cover how to migrate your Bootstrap 3 project to version 4. Version 4 of Bootstrap is a major update: almost the entire framework has been rewritten to improve code quality, add new components, simplify complex components, and make the tool easier to use overall. We've seen the introduction of new components like Cards and the removal of a number of basic components that weren't heavily used. In some cases, Cards present a better way of assembling a layout than a number of the removed components. Let's jump into this article by showing some specific class and behavioral changes to Bootstrap in version 4.

(For more resources related to this topic, see here.)

Browser support

Before we jump into the component details, let's review the new browser support. If you are currently running on version 3 and support some older browsers, you may need to adjust your support level when migrating to Bootstrap 4. For desktop browsers, Internet Explorer version 8 support has been dropped; the new minimum Internet Explorer version that is supported is version 9. Switching to mobile, iOS version 6 support has been dropped; the minimum iOS supported is now version 7. The Bootstrap team has also added support for Android v5.0 Lollipop's browser and WebView. Earlier versions of the Android Browser and WebView are not officially supported by Bootstrap.

Big changes in version 4

Let's continue by going over the biggest changes to the Bootstrap framework in version 4.

Switching to Sass

Perhaps the biggest change in Bootstrap 4 is the switch from Less to Sass. This will also likely be the biggest migration job you will need to take care of. The good news is you can use the sample code we've created in the book as a starting place. Luckily, the syntax for the two CSS pre-processors is not that different. If you haven't used Sass before, there isn't a huge learning curve that you need to worry about. Let's cover some of the key things you'll need to know when updating your stylesheets for Sass.

Updating your variables

The main difference in variables is the symbol used to denote one. In Less we use the @ symbol for our variables, while in Sass you use the $ symbol. Here are a couple of examples:

/* LESS */
@red: #c00;
@black: #000;
@white: #fff;

/* SASS */
$red: #c00;
$black: #000;
$white: #fff;

As you can see, that is pretty easy to do. A simple find and replace should do most of the work for you. However, if you are using @import in your stylesheets, make sure there remains an @ symbol.

Updating @import statements

Another small change in Sass is how you import different stylesheets using the @import keyword. First, let's take a look at how you do this in Less:

@import "components/_buttons.less";

Now let's compare how we do this using Sass:

@import "components/_buttons.scss";

As you can see, it's almost identical. You just need to make sure you name all your files with the .scss extension, and then update the file names in your @import statements to use .scss and not .less.

Updating mixins

One of the biggest differences between Less and Sass is mixins. Here we'll need to do a little more heavy lifting when we update the code to work in Sass. First, let's take a look at how we would create a border-radius, or rounded corners, mixin in Less:

.border-radius (@radius: 2px) {
  -moz-border-radius: @radius;
  -ms-border-radius: @radius;
  border-radius: @radius;
}

In Less, all elements that use the border-radius mixin will have a border radius of 2px.
That is added to a component like this:

button {
  .border-radius;
}

Now let's compare how you would do the same thing using Sass. Check out the mixin code:

@mixin border-radius($radius) {
  -webkit-border-radius: $radius;
  -moz-border-radius: $radius;
  -ms-border-radius: $radius;
  border-radius: $radius;
}

There are a few differences here that you need to note:

- You need to use the @mixin keyword to initialize any mixin
- We don't actually define a global default value to use with the mixin

To use the mixin with a component, you would code it like this:

button {
  @include border-radius(2px);
}

This is also different from Less in a few ways:

- First, you need to insert the @include keyword to call the mixin
- Next, you use the mixin name you defined earlier, in this case border-radius
- Finally, you need to set the value for the border-radius for each element, in this case 2px

Personally, I prefer the Less method, as you can set the value once and then forget about it. However, since Bootstrap has moved to Sass, we have to learn and use the new syntax. That concludes the main differences that you will likely encounter. There are other differences, and if you would like to research them more, check out this page: http://sass-lang.com/guide.

Additional global changes

The change to Sass is one of the bigger global differences in version 4 of Bootstrap. Let's take a look at a few others you should be aware of.

Using REM units

In Bootstrap 4, px has been replaced with rem as the primary unit of measure. If you are unfamiliar with rem, it stands for root em. Rem is a relative unit of measure, whereas pixels are fixed. Rem looks at the value for font-size on the root element in your stylesheet and then uses your value declaration, in rems, to determine the computed pixel value. Let's use an example to make this easier to understand:

html {
  font-size: 24px;
}

p {
  font-size: 2rem;
}

In this case, the computed font-size for the <p> tag would be 48px. This is different from the em unit, because ems are affected by wrapping elements that may have a different size, whereas rem takes a simpler approach and just calculates everything from the root HTML element. It removes the size cascading that can occur when using ems and nested, complicated elements. This may sound confusing, but it is actually easier to use than em units: just remember your root font-size and use that when figuring out your rem values. What this means for migration is that you will need to go through your stylesheet and change any px or em values to use rems. You'll need to recalculate everything to make sure it fits the new format if you want to maintain the same look and feel for your project.

Other font updates

The trend for a long while has been to make text on a screen larger and easier to read for all users. In the past, we used tons of small typefaces that might have looked cool but were hard to read for anyone visually challenged. To that end, the base font-size for Bootstrap has been changed from 14px to 16px. This is also the standard size for most browsers and makes the readability of text better. Again, from a migration standpoint, you'll need to review your components to ensure they still look correct with the increased font size. You may need to make some changes if you have components that were based on the 14px default font-size in Bootstrap 3.

New grid size

With the increased use of mobile devices, Bootstrap 4 includes a new smaller grid tier for small-screen devices.
The new grid tier is called extra small and is configured for devices under 480px in width. For the migration story, this shouldn't have a big effect; what it does do is give you a new breakpoint if you want to further optimize your project for smaller screens. That concludes the main global changes to Bootstrap that you should be aware of when migrating your projects. Next, let's take a look at components.

Migrating components

With the release of Bootstrap 4, a few components have been dropped and a couple of new ones have been added. The most significant change is around the new Cards component. Let's start by breaking down this new option.

Migrating to the Cards component

With the release of the Cards component, the Panels, Thumbnails, and Wells components have been removed from Bootstrap 4. Cards combines the best of these elements into one and even adds some new functionality that is really useful. If you are migrating from a Bootstrap 3 project, you'll need to update any Panels, Thumbnails, or Wells to use the Cards component instead. Since the markup is a bit different, I would recommend removing the old components altogether and then recoding them as Cards, using the same content.

Using icon fonts

The Glyphicons icon font has been removed from Bootstrap 4. I'm guessing this is due to licensing reasons, as the library was not fully open source. If you don't want to update your icon code, simply download the library from the Glyphicons website at: http://glyphicons.com/

The other option is to change the icon library to a different one, like Font Awesome. If you go down this route, you'll need to update all of your <i> tags to use the proper CSS class to render the icon. There is a quick reference tool called GlyphSearch that will help you do this. It supports a number of icon libraries and I use it all the time; check it out at: http://glyphsearch.com/. Those are the key components you need to be aware of. Next, let's go over what's different in JavaScript.

Migrating JavaScript

The JavaScript components have been totally rewritten in Bootstrap 4. Everything is now coded in ES6 and compiled with Babel, which makes it easier and faster to use. On the component side, the biggest difference is the Tooltips component. The Tooltip is now dependent on an external library called Tether, which you can download from: http://github.hubspot.com/tether/. If you are using Tooltips, make sure you include this library in your template. The actual markup for calling a Tooltip looks to be the same, but you must include the new library when migrating from version 3 to 4.

Miscellaneous migration changes

Aside from what I've gone over already, there are a number of other changes you need to be aware of when migrating to Bootstrap 4. Let's go through them below.

Migrating typography

The .page-header class has been dropped from version 4. Instead, you should look at using the new display CSS classes on your headers if you want to give them a heading look and feel.

Migrating images

If you've ever used responsive images in the past, the class name has changed. Previously, the class name was .image-responsive, but it is now named .image-fluid. You'll need to update that class anywhere it is used.

Migrating tables

For the table component, a few class names have changed and there are some new classes you can use. If you would like to create a responsive table, you can now simply add the class .table-responsive to the <table> tag; previously, you had to wrap the class around the <table> tag.
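As a quick sketch of that markup difference (the class names come from the text above; the table contents are placeholders):

<!-- Bootstrap 3: a wrapper div carries the class -->
<div class="table-responsive">
  <table class="table">
    ...
  </table>
</div>

<!-- Bootstrap 4: the class goes directly on the table -->
<table class="table table-responsive">
  ...
</table>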
If migrating, you'll need to update your HTML markup to the new format. The .table-condensed class has been renamed to .table-sm; you'll need to update that class anywhere it is used. There are also a couple of new table styles you can add, called .table-inverse and .table-reflow.

Migrating forms

Forms are always a complicated component to code. In Bootstrap 4, some of the class names have changed to be more consistent. Here's a list of the differences you need to know about:

- .control-label is now .form-control-label
- .input-lg and .input-sm are now .form-control-lg and .form-control-sm
- The .form-group class has been dropped, and you should instead use .form-control

You likely have these classes throughout most of your forms. You'll need to update them anywhere they are used.

Migrating buttons

There are some minor CSS class name changes that you need to be aware of:

- .btn-default is now .btn-secondary
- The .btn-xs class has been dropped from Bootstrap 4

Again, you'll need to update these classes when migrating to the new version of Bootstrap. There are some other minor changes to components that aren't as commonly used, and I'm confident this explanation covers the majority of use cases for Bootstrap 4. However, if you would like to see the full list of changes, please visit: http://v4-alpha.getbootstrap.com/migration/.

Summary

That brings this article to a close! I hope you are now able to migrate your Bootstrap 3 project to Bootstrap 4.

Resources for Article:

Further resources on this subject:
- Responsive Visualizations Using D3.js and Bootstrap [article]
- Advanced Bootstrap Development Tools [article]
- Deep Customization of Bootstrap [article]