
How-To Tutorials - Web Development


Connecting React to Redux & Firebase - Part 1

AJ Webb
09 Nov 2016
7 min read
Have you tried using React and now you want to increase your abilities? Are you ready to scale up your small React app? Have you wondered how to offload all of your state into a single place and keep your components more modular? Using Redux with React gives you a single source of truth for all of your app's state. Together, the two mean you never have to set state on a component, which keeps your components completely reusable. For some added sugar, you'll also learn how to leverage Firebase and use Redux actions to subscribe to data that is updated in real time. In this two-part post, you'll walk through creating a new chat app called Yak using React's new CLI, integrating Redux into your app, updating your state, and connecting it all to Firebase. Let's get started.

Setting up

This post is written with the assumption that you have Node.js and NPM already installed. It also assumes some knowledge of JavaScript and React.js. If you don't already have Node.js and NPM installed, head over to the Node.js install instructions. At the time of writing this post, the Node.js version is 6.6.0 and the NPM version is 3.10.8.

Once you have Node.js installed, open up your favorite terminal app and install the NPM package Create React App; the current version at the time of writing this post is 0.6.0, so make sure to specify that version.

[~]$ npm install -g create-react-app@0.6.0

Now we'll want to set up our app and install our dependencies. First we'll navigate to where we want our project to live. I like to keep my projects at ~/code, so I'll navigate there. You may need to create the directory using mkdir if you don't have it, or you might want to store it elsewhere. It doesn't matter which you choose; just head to where you want to store your project.

[~]$ cd ~/code

Once there, use Create React App to create the app:

[~/code]$ create-react-app yak

This command is going to create a directory called yak containing all the necessary files you need in order to start a baseline React.js app. Once the command has completed, you should see some commands that you can run in your new app. Create React App has created the boilerplate files for you. Take a moment to familiarize yourself with these files:

.gitignore: All the files and directories you want ignored by Git.
README.md: Documentation on what has been created. This is a good resource to lean on as you're learning React.js and using your app.
node_modules: All the packages that are required to run and build the application up to this point.
package.json: Tells NPM how to run your app's scripts, which packages your app depends on, and other metadata such as the version and app name.
public: All the static files that aren't used within the app, mainly index.html and favicon.ico.
src: All the app files; the app is run by Webpack, which is set up to watch all the files inside this directory. This is where you will spend the majority of your time.

There are two files that cannot be moved while working on the app: public/index.html and src/index.js. The app relies on these two files in order to run. You can change them, but don't move them. Now to get started, navigate into the app folder and start the app.

[~/code]$ cd yak
[~/code/yak]$ npm start

The app should start and automatically open http://localhost:3000/ in your default browser. You should see a black banner with the React.js logo spinning and some instructions on how to get started. To stop the app, press Ctrl-C in the terminal window that is running the app.
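For reference, the src/index.js generated by Create React App at this point looks roughly like the following (a sketch only; the exact contents vary between Create React App versions). The Redux changes in the next section are applied to this file:

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';
import './index.css';

// Render the root <App /> component into the #root element of public/index.html
ReactDOM.render(
  <App />,
  document.getElementById('root')
);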
Getting started with Redux

Next, install Redux and React-Redux:

[~/code/yak]$ npm install --save redux react-redux

Redux gives the app a single source of truth for state. The idea is to keep all the React components ignorant of the state, and to pass that state to them via props. Containers will be used to select data from the state and pass the data to the ignorant components via props. React-Redux is a utility that assists in integrating React with Redux. Redux's state is read-only: you can only change the state by emitting an action, which a reducer function uses to take the previous state and return a new state. Make sure as you are writing your reducers to never mutate the state (more on that later).

Now you will add Redux to your app, in src/index.js. Just below importing ReactDOM, add:

import { createStore, compose } from 'redux';
import { Provider } from 'react-redux';

You now have the necessary functions and components to set up your Redux store and pass it to your React app. Go ahead and get your store initialized. After the last import statement and before ReactDOM.render() is where you will create your store.

const store = createStore();

Yikes! If you run the app and open your inspector, you should see the following console error:

Uncaught Error: Expected the reducer to be a function.

That error is thrown because the createStore function requires a reducer as the first parameter. The second parameter is an optional initial state, and the last parameter is for any middleware you may want for your store. Go ahead and create a reducer for your store, and ignore the other two parameters for now.

[~/code/yak]$ touch src/reducer.js

Now open reducer.js and add the following code:

const initialState = {
  messages: []
};

export function yakApp(state = initialState, action) {
  return state;
}

Here you have created an initial state for the reducer, and a function that either accepts a state or uses ES6 default arguments to set an undefined state to the initial state. The function simply returns the state without making any changes for now. This is a perfectly valid reducer and will solve the console error and get the app running again. Now it's time to add it to the store. Back in src/index.js, import the reducer and pass the yakApp function to your store:

import { yakApp } from './reducer';

const store = createStore(yakApp);

Restart the app and you'll see that it is now working again!

One last thing to set up in the bootstrapping file src/index.js: you have your store and have imported Provider; now it's time to connect the two and give the app access to the store. Update the ReactDOM.render method to look like the following:

ReactDOM.render(
  <Provider store={store}>
    <App />
  </Provider>,
  document.getElementById('root')
);

Now you can jump into App.js and connect your store. In App.js, add the following import statement:

import { connect } from 'react-redux';

At the bottom of the file, just before the export statement, add:

function mapStateToProps(state) {
  return {
    messages: state.messages
  };
}

And change the export statement to be:

export default connect(mapStateToProps)(App);

That's it! Your App component is now connected to the Redux store, and the messages array is mapped to this.props. Go ahead and try it; add a console log to the render() method just before the return statement.

console.log(this.props.messages);

The console should log an empty array. This is great!
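Before wrapping up, it's worth making the "never mutate the state" rule concrete. The following is a hypothetical sketch only: the ADD_MESSAGE action type and message shape are invented for illustration, and Part 2 builds the real message actions.

// reducer.js -- hypothetical ADD_MESSAGE handling, for illustration only
const initialState = {
  messages: []
};

export function yakApp(state = initialState, action) {
  switch (action.type) {
    case 'ADD_MESSAGE':
      // Return a new object and a new array; never push onto state.messages
      return Object.assign({}, state, {
        messages: state.messages.concat([action.message])
      });
    default:
      return state;
  }
}

A call such as store.dispatch({ type: 'ADD_MESSAGE', message: { text: 'hi' } }) would then flow through the reducer, and the new message would show up in this.props.messages on the next render.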
Conclusion

In this post, you've learned to create a new React app without having to worry about tooling. You've integrated Redux into the app and created a simple reducer. You've also connected the reducer to your React component. But how do you add data to the array as messages are sent? How do you persist the array of messages after you leave the app? How do you connect all of this to your UI? How do you allow your users to create glorious data for you? In the next post, you'll learn to do all those things. Stay tuned!

About the author

AJ Webb is a team lead and frontend engineer for @tannerlabs, and the co-creator of Payba.cc.


Breaking into Microservices Architecture

Packt
08 Nov 2016
15 min read
In this article by Narayan Prusty, the author of the book Modern JavaScript Applications, we will see that the architecture of server-side application development for complex and large applications (applications with a huge number of users and a large volume of data) shouldn't just involve faster responses and providing web services for a wide variety of platforms. It should be easy to scale, upgrade, update, test, and deploy. It should also be highly available, and it should allow developers to write components of the server-side application in different programming languages and use different databases. This leads developers who build large and complex applications to switch from the common monolithic architecture to the microservices architecture, which allows us to do all this easily. As the microservices architecture is widely used in enterprises that build large and complex applications, it's really important to learn how to design and create server-side applications using this architecture. In this chapter, we will discuss how to create applications based on the microservices architecture with Node.js using the Seneca toolkit.

What is monolithic architecture?

To understand microservices architecture, it's important to first understand monolithic architecture, which is its opposite. In monolithic architecture, different functional components of the server-side application, such as payment processing, account management, and push notifications, all blend together in a single unit. For example, applications are usually divided into three parts: HTML pages or a native UI that runs on the user's machine, a server-side application that runs on the server, and a database that also runs on the server. The server-side application is responsible for handling HTTP requests, retrieving and storing data in a database, executing algorithms, and so on. If the server-side application is a single executable (that is, running as a single process) that does all these tasks, then we say that the server-side application is monolithic. This is a common way of building server-side applications; almost every major CMS, web server, and server-side framework is built using monolithic architecture. This architecture may seem successful, but problems are likely to arise when your application is large and complex.

Demerits of monolithic architecture

The following are some of the issues caused by server-side applications built using the monolithic architecture.

Scaling monolithic architecture

As traffic to your server-side application increases, you will need to scale it to handle the traffic. In the case of monolithic architecture, you can scale the server-side application by running the same executable on multiple servers and placing the servers behind a load balancer, or you can use round-robin DNS to distribute the traffic among the servers. In this setup, all the servers run the same server-side application. Although scaling is easy, scaling a monolithic server-side application ends up scaling all the components rather than just the components that require greater resources, causing unbalanced utilization of resources depending on the quantity and types of resources the components need.
Let's consider some examples to understand the issues caused while scaling monolithic server-side applications. Suppose there is a component of the server-side application that requires a more powerful or special kind of hardware. We cannot simply scale that particular component, as all the components are packed together and everything needs to be scaled together. So, to make sure that the component gets enough resources, you need to run the whole server-side application on additional servers with the powerful or special hardware, leading to the consumption of more resources than actually required. Similarly, suppose we have a component that needs to run on a specific server operating system that is not free of charge. We cannot run just that particular component on the non-free operating system, as all the components are packed together; just to execute this one component, we need to install the non-free operating system on all the servers, greatly increasing the cost. These are just some examples; there are many more issues that you are likely to come across while scaling a monolithic server-side application.

So, when we scale monolithic server-side applications, the components that don't need more powerful or special resources start receiving them anyway, decreasing the resources available to the components that actually need them. We can say that scaling a monolithic server-side application involves scaling all the components, forcing you to duplicate everything on the new servers.

Writing monolithic server-side applications

Monolithic server-side applications are written in a particular programming language using a particular framework. Enterprises usually have developers who are experts in different programming languages and frameworks, so if they are asked to build a monolithic server-side application together, it will be difficult for them to collaborate. The components of a monolithic server-side application can be reused only in the framework in which it's built, so you cannot reuse them in a project built using different technologies.

Other issues of monolithic architecture

Here are some other issues that developers might face, depending on the technology used to build the monolithic server-side application:

It may need to be completely rebuilt and redeployed for every small change made to it. This is a time-consuming task and makes your application inaccessible for a long time.
It may completely fail if any one of the components fails. It's difficult to build a monolithic application that can handle the failure of specific components and degrade application features accordingly.
It may be difficult to find out how many resources each component is consuming.
It may be difficult to test and debug individual components separately.

Microservices architecture to the rescue

We saw the problems caused by monolithic architecture. These problems lead developers to switch from monolithic architecture to microservices architecture, in which the server-side application is divided into services. A service (or microservice) is a small and independent process that constitutes a particular functionality of the complete server-side application. For example, you can have a service for payment processing, another service for account management, and so on; the services communicate with each other via the network.

What do you mean by "small" service?
You must be wondering how small a service needs to be, and how to tell whether a service is small or not. It actually depends on many factors, such as the type of application, team management, availability of resources, the size of the application, and what you consider small. However, a small service doesn't have to be one that is written in fewer lines of code or provides only very basic functionality. A small service can be one that a team of developers can work on independently, that can be scaled independently of other services, whose scaling doesn't cause unbalanced utilization of resources, and that overall is highly decoupled from (independent of and unaware of) other services.

You don't have to run each service on a different server; you can run multiple services on a single computer. The ratio of servers to services depends on different factors. A common factor is the amount and type of resources and technologies required. For example, if a service needs a lot of RAM and CPU time, then it would be better to run it on its own server. If there are services that don't need many resources, then you can run them all together on a single server.

The following diagram shows an example of the microservices architecture. Here, you can think of Service 1 as the web server with which a browser communicates, and the other services as providing APIs for various functionalities. The web service communicates with the other services to get data.

Merits of microservices architecture

Because services are small, independent, and communicate via the network, microservices architecture solves many of the problems that monolithic architecture has. Here are some of the benefits:

As the services communicate via the network, they can be written in different programming languages using different frameworks.
Making a change to a service only requires that particular service to be redeployed, instead of all the services, which is a faster procedure.
It becomes easier to measure how many resources are consumed by each service, as each service runs in a different process.
It becomes easier to test and debug, as you can analyze each service separately.
Services can be reused by other applications, as they interact via network calls.

Scaling services

Apart from the preceding benefits, one of the major benefits of microservices architecture is that you can scale only the individual services that require scaling, instead of all the services, preventing duplication of resources and unbalanced utilization. Suppose we want to scale Service 1 in the preceding diagram: we run two instances of Service 1 on two different servers behind a load balancer, which distributes the traffic between them. All other services continue to run as before, since scaling them wasn't required. If you wanted to scale Service 3, you could likewise run multiple instances of Service 3 on multiple servers and place them behind a load balancer.

Demerits of microservices architecture

Although microservices architecture has a lot of merits compared to monolithic architecture, there are some demerits as well. As the server-side application is divided into services, deploying, and optionally configuring, each service separately is a cumbersome and time-consuming task.
Note that developers often use some sort of automation technology (such as AWS, Docker, and so on) to make deployment somewhat easier; however, to use it, you still need a good level of experience and expertise with that technology. Other demerits include the following:

Communication between services is likely to lag, as it happens over the network.
This sort of server-side application is more prone to network security vulnerabilities, as services communicate over the network.
Writing code for communicating with other services can be harder: you need to make network calls and then parse the data to read it, which also requires more processing. Note that although there are frameworks for building server-side applications using microservices that make fetching and parsing data easier, they still don't remove the processing and network wait time.
You will surely need some sort of monitoring tool to watch the services, as they may go down due to network, hardware, or software failure. Although you may use the monitoring tool only when your application suddenly stops, building the monitoring software, or using some monitoring service, requires some level of extra experience and expertise.
Microservices-based server-side applications are slower than monolithic ones, as communication over networks is slower than communication within memory.

When to use microservices architecture?

It may seem difficult to choose between monolithic and microservices architecture, but it's actually not so hard to decide. If you are building a server-side application using monolithic architecture and you feel that you are unlikely to face any of the monolithic issues we discussed earlier, then you can stick to monolithic architecture. If, in the future, you face issues that can be solved by microservices architecture, you should switch to it at that point. When you switch from a monolithic architecture to a microservices architecture, you don't have to rewrite the complete application; you can convert only the components that are causing issues into services, with some code refactoring. This sort of server-side application, where the main application logic is monolithic but some specific functionality is exposed via services, is called microservices architecture with a monolithic core. As issues increase further, you can convert more components of the monolithic core to services. If, on the other hand, you feel you are likely to face the monolithic issues we discussed, you should immediately switch to microservices architecture, or microservices architecture with a monolithic core, depending on what suits you best.

Data management

In microservices architecture, each service can have its own database to store data and can also use a centralized database. Some developers don't use a centralized database at all; instead, all services have their own databases to store the data. To synchronize the data between the services, a service emits an event when its data changes, and the other services subscribe to the event and update their copies of the data. The problem with this mechanism is that if a service is down, it may miss some events. There is also going to be a lot of duplicated data, and finally, it is difficult to code this kind of system.
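To make the event-based synchronization idea concrete, here is a minimal single-process sketch using Node's built-in EventEmitter; it is an illustration only. In a real deployment the bus would be a message broker (RabbitMQ, Kafka, and so on), each service would run in its own process, and a subscriber being down is exactly how the missed-event problem above arises:

// illustration only: two "services" syncing duplicated data via events
var EventEmitter = require('events');
var bus = new EventEmitter();

// account service: owns user data and emits an event when it changes
function renameUser(id, name) {
  // ...update the account service's own database here...
  bus.emit('user.renamed', { id: id, name: name });
}

// messaging service: subscribes and updates its own duplicated copy
bus.on('user.renamed', function (evt) {
  console.log('messaging service syncing user %d -> %s', evt.id, evt.name);
});

renameUser(42, 'AJ'); // prints: messaging service syncing user 42 -> AJ

The sketch hides the hard parts: with a real broker you also have to handle redelivery, ordering, and subscribers that were offline when an event fired.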
Therefore, it's a good idea to have a centralized database and also let each service maintain its own database if it wants to store something that it doesn't want to share with others. Services should not connect to the centralized database directly; instead, there should be another service, called the database service, that provides APIs to work with the centralized database. This extra layer has many advantages: the underlying schema can be changed without updating and redeploying all the services that depend on it, we can add a caching layer without making changes to the services, we can change the type of database without making any changes to the services, and so on. We can also have multiple database services if there are multiple schemas, if there are different types of databases, or for some other reason that benefits the overall architecture and decouples the services.

Implementing microservices using Seneca

Seneca is a Node.js framework for creating server-side applications using microservices architecture with a monolithic core. Earlier, we discussed that in microservices architecture we create a separate service for every component, so you might wonder what the point of a framework is when a service can be created by simply writing some code to listen on a port and reply to requests. Well, writing code to make requests, send responses, and parse data requires a lot of time and work, and a framework like Seneca makes all this easy. Converting components of a monolithic core to services is also a cumbersome task, as it requires a lot of code refactoring, but Seneca makes it easy by introducing the concepts of actions and plugins. Finally, services written in any other programming language or framework are able to communicate with Seneca services.

In Seneca, an action represents a particular operation. An action is a function that's identified by an object literal or a JSON string called the action's pattern. In Seneca, the operations of a component of the monolithic core are written as actions, which we may later want to move from the monolithic core to a service and expose to other services and the monolithic core via the network.

Why actions?

You might be wondering what the benefit is of using actions instead of plain functions to write operations, and how actions make it easy to convert components of a monolithic core to services. Suppose you want to move an operation of the monolithic core that is written as a function to a separate service and expose it via the network. You cannot simply copy and paste the function to the new service; you need to define a route (if you are using Express), and to call the function from the monolithic core you need to write code that makes an HTTP request to the service, while inside the service the operation is still invoked as a plain function call. So there end up being two different code snippets depending on where the operation is executed, and moving operations requires a lot of code refactoring. However, if you had written the operation as a Seneca action, it would be really easy to move it to a separate service: you can simply copy and paste the action to the new service. That's it.
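To give a feel for what this looks like, here is a minimal sketch of a Seneca action; the role/cmd pattern and the message fields are invented for illustration, not taken from the book:

// hypothetical example: the pattern { role: 'chat', cmd: 'post' } identifies the action
var seneca = require('seneca')();

seneca.add({ role: 'chat', cmd: 'post' }, function (msg, respond) {
  respond(null, { ok: true, title: msg.title });
});

// the calling code looks the same whether the action runs in-process or in a remote service
seneca.act({ role: 'chat', cmd: 'post', title: 'hello' }, function (err, result) {
  if (err) return console.error(err);
  console.log(result); // { ok: true, title: 'hello' }
});

Exposing the same action over the network then comes down to calling seneca.listen() in the service and seneca.client() in the monolithic core, as discussed next.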
Obviously, we also need to tell the service to expose the action via the network and tell the monolithic core where to find the action, but all of this requires just a couple of lines of code. A Seneca service exposes actions to other services and to the monolithic core. While making a request to a service, we need to provide a pattern matching the pattern of the action to be called in the service.

Why patterns?

Patterns make it easy to map a URL to an action, and a pattern can override other patterns for specific conditions. This prevents having to edit existing code, which matters because editing existing code on a production site is not safe and has many other disadvantages.

Seneca also has a concept of plugins. A Seneca plugin is a set of actions that can be easily distributed and plugged into a service or a monolithic core. As our monolithic core becomes larger and more complex, we can convert components to services; that is, we move the actions of certain components into services.

Summary

In this chapter, we saw the difference between monolithic and microservices architecture. Then we discussed what microservices architecture with a monolithic core means and its benefits. Finally, we jumped into the Seneca framework for implementing microservices architecture with a monolithic core, and discussed how to create basic login and registration functionality to demonstrate various features of the Seneca framework and how to use it. In the next chapter, we will create a fully functional e-commerce website using the Seneca and Express frameworks.


Simple ToDo list web application with node.js, Express, and Riot

Pedro Narciso García Revington
07 Nov 2016
10 min read
The frontend space is indeed crowded, but none of the more popular solutions are really convincing to me. I feel Angular is bloated, and its double binding is not for me. I also do not like React and its syntax. Riot is, as stated by its creators, "a React-like user interface micro-library" with simpler syntax that is five times smaller than React.

What we are going to learn

We are going to build a simple Riot application backed by Express, using Jade as our template language. The backend will expose a simple REST API, which we will consume from the UI. We are not going to use any other dependency like jQuery, so this is also a good chance to try XMLHttpRequest2. I deliberately omitted the inclusion of a client package manager like webpack or jspm because I want to focus on Express.js + Riot.js. For the same reason, the application data is persisted in memory.

Requirements

You just need a recent version of Node.js (4+), a text editor of your choice, and some JS, Express, and website development knowledge.

Project layout

Under our project directory we are going to have three directories:

public: For assets like the riot.js library itself.
views: Common in most Express setups, this is where we put the markup.
client: This directory will host the Riot tags (we will see more of that later).

We will also have package.json, our project manifest, and an app.js file containing the Express application. Our Express server exposes a REST API; its code can be found in api.js. Here is how the layout of the final project looks:

├── api.js
├── app.js
├── client
│   ├── tag-todo.jade
│   └── tag-todo.js
├── package.json
├── node_modules
├── public
│   └── js
│       ├── client.js
│       └── riot.js
└── views
    └── index.jade

Project setup

Create your project directory, and from there run the following to install the Node.js dependencies:

$ npm init -y
$ npm install --save body-parser express jade

And create the application directories:

$ mkdir -p views public/js client

Start!

Let's start by creating the Express application file, app.js:

'use strict';
const express = require('express'),
    app = express(),
    bodyParser = require('body-parser');

// Set the views directory and template engine
app.set('views', __dirname + '/views');
app.set('view engine', 'jade');

// Set our static directory for public assets like client scripts
app.use(express.static('public'));

// Parses the body on incoming requests
app.use(bodyParser.json());

// Pretty prints HTML output
app.locals.pretty = true;

// Define our main route, HTTP "GET /", which will print "hello"
app.get('/', function (req, res) {
  res.send('hello');
});

// Start listening for connections
app.listen(3000, function (err) {
  if (err) {
    console.error('Cannot listen at port 3000', err);
  }
  console.log('Todo app listening at port 3000');
});

The app object we just created is a plain Express application. After setting up the application, we call the listen function, which creates an HTTP server listening at port 3000. To test our application setup, open another terminal, cd to the project directory, and run $ node app.js. Open a web browser and load http://localhost:3000; can you read "hello"?

Node.js will not reload the site if you change the files, so I recommend you install nodemon. Nodemon monitors your code and reloads the site on every change you make to the JS source code. The command $ npm install -g nodemon installs the program on your computer globally, so you can run it from any directory.
Okay, kill our previously created server and start a new one with $ nodemon app.js.

Our first Riot tag

Riot allows you to encapsulate your UI logic in "custom tags". Tag syntax is pretty straightforward; judge for yourself:

<employee>
  <span>{ name }</span>
</employee>

Custom tags can contain code and can be nested, as shown in the next code snippet:

<employeeList>
  <employee each="{ items }" onclick={ gotoEmployee } />
  <script>
    gotoEmployee (e) {
      var item = e.item;
      // do something
    }
  </script>
</employeeList>

This mechanism enables you to build complex functionality from simple units. Of course, you can find more information in the Riot documentation. In the next steps we will create our first tag: ./client/tag-todo.jade. Oh, we have not yet downloaded Riot! Grab the non-minified Riot + compiler build and download it to ./public/js/riot.js.

The next step is to create our index view and tell our app to serve it. Locate the / route handler, remove the res.send('hello') line, and update the handler to:

// Define our main route, HTTP "GET /"
app.get('/', function (req, res) {
  res.render('index');
});

Now, create the ./views/index.jade file:

doctype html
html
  head
    script(src="/js/riot.js")
  body
    h1 ToDo App
    todo

Go to your browser and reload the page. You can read the big "ToDo App", but nothing else. There is a <todo></todo> tag there, but since the browser does not understand it, the tag is not rendered. Let's tell Riot to mount the tag. Mount means Riot will use <todo></todo> as a placeholder for our todo tag, which does not exist yet.

doctype html
html
  head
    script(src="/js/riot.js")
  body
    h1 ToDo App
    script(type="riot/tag" src="/tags/todo.tag")
    todo
    script.
      riot.mount('todo');

Open your browser's dev console and reload the page. riot.mount failed because there was no todo.tag. Tags can be served in many ways, but I chose to serve them as regular Express templates. Of course, you could serve them as static assets or bundled instead. Just below the / route handler, add the /tags/:name.tag handler:

// "/" route handler
app.get('/', function (req, res) {
  res.render('index');
});

// tag route handler
app.get('/tags/:name.tag', function (req, res) {
  var name = 'tag-' + req.params.name;
  res.render('../client/' + name);
});

Now create the tag in ./client/tag-todo.jade:

todo
  form(onsubmit="{ add }")
    input(type="text", placeholder="Needs to be done", name="todo")

And reload the browser again. The errors are gone, and there is a new form in your browser. onsubmit="{ add }" is part of Riot's syntax and means "on submit, call the add function". You can mix implementation with the markup, but I prefer to split markup from code. In Jade (and any other template language), it is trivial to include other files, which is exactly what we are going to do. Update the file as:

todo
  form(onsubmit="{ add }")
    input(type="text", placeholder="Needs to be done", name="todo")
  script
    include tag-todo.js

And create ./client/tag-todo.js with this snippet:

'use strict';
var self = this;
var api = self.opts;

When the tag gets mounted by Riot, it gets a context; that is the reason for var self = this;. That context can include the opts object. The opts object can be anything of your choice, defined at the time you mount the tag. Let's say we have an API object and we pass it to riot.mount as the second argument when we mount the tag, that is, riot.mount('todo', api). Then, at the time the tag is rendered, this.opts will point to the api object. This is the mechanism we are going to use to expose our client API to the todo tag.
Our form is still waiting for the add function, so edit tag-todo.js again and append the following:

self.add = function (e) {
  var title = self.todo.value;
  console.log('New ToDo', title);
};

Reload the page, type something in the text field, and hit Enter. The expected message should appear in your browser's dev console.

Implementing our REST API

We are ready to implement our REST API on the Express side. Create the ./api.js file and add:

'use strict';
const express = require('express');
var app = module.exports = express();

// simple in-memory DB
var db = [];

// handle ToDo creation
app.post('/', function (req, res) {
  db.push({
    title: req.body.title,
    done: false
  });
  let todoID = db.length - 1;
  // mountpath = /api/todos/
  res.location(app.mountpath + todoID);
  res.status(201).end();
});

// handle ToDo updates
app.put('/', function (req, res) {
  db[req.body.id] = req.body;
  res.location('/' + req.body.id);
  res.status(204).end();
});

Our API supports ToDo creation and updates, and it is architected as an Express sub-application. To mount it, we just need to update app.js one last time. Update the require block at the top of app.js to:

const express = require('express'),
    api = require('./api'),
    app = express(),
    bodyParser = require('body-parser');

And mount the api sub-application just before app.listen:

// Mount the api sub application
app.use('/api/todos/', api);

We said we would implement a client for our API. It exposes two functions, create and update, and lives at ./public/js/client.js. Here is its source:

'use strict';
(function (api) {
  var url = '/api/todos/';

  function extractIDFromResponse(xhr) {
    var location = xhr.getResponseHeader('location');
    var result = +location.slice(url.length);
    return result;
  }

  api.create = function createToDo(title, callback) {
    var xhr = new XMLHttpRequest();
    var todo = {
      title: title,
      done: false
    };
    xhr.open('POST', url);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = function () {
      if (xhr.status === 201) {
        todo.id = extractIDFromResponse(xhr);
      }
      return callback(null, xhr, todo);
    };
    xhr.send(JSON.stringify(todo));
  };

  api.update = function updateToDo(todo, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('PUT', url);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = function () {
      // the server replies 204 No Content on a successful update
      return callback(null, xhr, todo);
    };
    xhr.send(JSON.stringify(todo));
  };
})(this.todoAPI = {});

Okay, time to load the API client into the UI and share it with our tag. Modify the index view to include it as a dependency:

doctype html
html
  head
    script(src="/js/riot.js")
  body
    h1 ToDo App
    script(type="riot/tag" src="/tags/todo.tag")
    script(src="/js/client.js")
    todo
    script.
      riot.mount('todo', todoAPI);

We are now loading the API client and passing it as a reference to the todo tag. Our last change today is to update the add function to consume the API. If you reload the browser, type something into the textbox, and hit Enter, nothing new happens, because our add function is not yet using the API. We need to update ./client/tag-todo.js as:

'use strict';
var self = this;
var api = self.opts;
self.items = [];

self.add = function (e) {
  var title = self.todo.value;
  api.create(title, function (err, xhr, todo) {
    if (xhr.status === 201) {
      self.todo.value = '';
      self.items.push(todo);
      self.update();
    }
  });
};

We have augmented self with an array of items.
Every time we create a new ToDo task (after we get the 201 code from the server), we push the new ToDo object onto the array, because we are going to print that list of items. In Riot, we can loop over the items by adding the each attribute to any tag. Last, update ./client/tag-todo.jade:

todo
  form(onsubmit="{ add }")
    input(type="text", placeholder="Needs to be done", name="todo")
  ul
    li(each="{items}")
      span {title}
  script
    include tag-todo.js

Finally! Reload the page and create a ToDo!

Next steps

You can find the complete source code for this article here. The final version of the code also implements a done/undone button, which you can try to implement by yourself.

About the author

Pedro Narciso García Revington is a Senior Full Stack Developer with 10+ years of experience in high scalability and availability, microservices, automated deployments, data processing, CI, (T,B,D)DD, and polyglot persistence.


Getting Started with ASP.NET Core and Bootstrap 4

Packt
04 Nov 2016
17 min read
This article is by Pieter van der Westhuizen, author of the book Bootstrap for ASP.NET MVC - Second Edition. As developers, we can find it difficult to create great-looking user interfaces from scratch using HTML and CSS. This is especially hard for developers with extensive experience building Windows Forms applications. Microsoft introduced Web Forms to remove the complexities of building websites and to ease the switch from Windows Forms to the web. This in turn makes it very hard for Web Forms developers, and even harder for Windows Forms developers, to switch to ASP.NET MVC. Bootstrap is a set of stylized components, plugins, and a layout grid that takes care of the heavy lifting. Microsoft has included Bootstrap in all ASP.NET MVC project templates since 2013. In this article, we will cover the following topics:

Files included in the Bootstrap distribution
How to create an empty ASP.NET site and enable MVC and static files
Adding the Bootstrap files using Bower
Automatically compiling the Bootstrap Sass files using Gulp
Installing additional icon sets
How to create a layout file that references the Bootstrap files

Files included in the Bootstrap distribution

In order to get acquainted with the files inside the Bootstrap distribution, you need to download its source files. At the time of writing, Bootstrap 4 was still in Alpha, and its source files could be downloaded from http://v4-alpha.getbootstrap.com.

Bootstrap style sheets

Do not be alarmed by the number of files inside the css folder. This folder contains four .css files and two .map files. We only need to include the bootstrap.css file in our project for the Bootstrap styles to be applied to our pages; the bootstrap.min.css file is simply a minified version of it. The .map files can be ignored for the project we'll be creating. These files are used as a type of debug symbol (similar to the .pdb files in Visual Studio), which allows developers to live-edit their preprocessor source files.

Bootstrap JavaScript files

The js folder contains two files. All the Bootstrap plugins are contained in the bootstrap.js file; the bootstrap.min.js file is simply a minified version of it. Before including the file in your project, make sure that you have a reference to the jQuery library, because all Bootstrap plugins require jQuery.

Bootstrap fonts/icons

Bootstrap 3 uses Glyphicons to display various icons and glyphs in Bootstrap sites. Bootstrap 4 no longer ships with Glyphicons included, but you still have the option to include them manually or to include your own icons. The following two icon sets are good alternatives to Glyphicons:

Font Awesome, available from http://fontawesome.io/
GitHub's Octicons, available from https://octicons.github.com/

Bootstrap source files

Before you can get started with Bootstrap, you first need to download the Bootstrap source files. At the time of writing, Bootstrap 4 was at version 4 Alpha 3. You have a few choices when adding Bootstrap to your project: you can download the compiled CSS and JavaScript files, or you can use a number of package managers to install the Bootstrap Sass source in your project. In this article, you'll use Bower to add the Bootstrap 4 source files to your project.
For a complete list of Bootstrap 4 Alpha installation sources, visit http://v4-alpha.getbootstrap.com/getting-started/download/

CSS pre-processors

CSS pre-processors process code written in a pre-processed language, such as Less or Sass, and convert it into standard CSS, which in turn can be interpreted by any standard web browser. CSS pre-processors extend CSS by adding features such as variables, mixins, and functions. The benefit of using CSS pre-processors is that they are not bound by the limitations of CSS. They give you more functionality and control over your style sheets and allow you to write more maintainable, flexible, and extendable CSS. They can also help to reduce the amount of CSS you write and assist with the management of large and complex style sheets, which become harder to maintain as their size and complexity increase. In essence, CSS pre-processors such as Less and Sass enable programmatic control over your style sheets.

Bootstrap moved its source files from Less to Sass with version 4. Less and Sass are very alike in that they share a similar syntax as well as features such as variables, mixins, partials, and nesting, to name but a few. Less was influenced by Sass, and later on, Sass was influenced by Less when it adopted CSS-like block formatting, which had worked very well for Less.

Creating an empty ASP.NET MVC site and adding Bootstrap manually

The default ASP.NET 5 project template in Visual Studio 2015 Update 3 currently adds Bootstrap 3 to the project. In order to use Bootstrap 4 in your ASP.NET project, you'll need to create an empty ASP.NET project and add the Bootstrap 4 files manually. To create a project that uses Bootstrap 4, complete the following steps:

1. In Visual Studio 2015, select New | Project from the File menu, or use the keyboard shortcut Ctrl + Shift + N.
2. From the New Project dialog window, select ASP.NET Core Web Application (.NET Core), which you'll find under Templates | Visual C# | Web.
3. Select the Empty project template from the New ASP.NET Core Web Application (.NET Core) project dialog window and click on OK.

Enabling MVC and static files

The previous steps create a blank ASP.NET Core project. Running the project as-is will only show a simple Hello World output in your browser. In order for it to serve static files and enable MVC, we need to complete the following steps:

1. Double-click on the project.json file inside the Solution Explorer in Visual Studio.
2. Add the following two lines to the dependencies section, and save the project.json file:

"Microsoft.AspNetCore.Mvc": "1.0.0",
"Microsoft.AspNetCore.StaticFiles": "1.0.0"

3. You should see a yellow notification inside the Visual Studio Solution Explorer with a message stating that it is busy restoring packages.
4. Open the Startup.cs file.
5. To enable MVC for the project, change the ConfigureServices method to the following:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
}

6. Finally, update the Configure method to the following code:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole();

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

The preceding code enables logging and the serving of static files such as images, style sheets, and JavaScript files. It also sets the default MVC route.

Creating the default route controller and view

When creating an empty ASP.NET Core project, no default controller or views are created. In the previous steps, we created a default route to the Index action of the Home controller. In order for this to work, we first need to complete the following steps:

1. In the Visual Studio Solution Explorer, right-click on the project name and select Add | New Folder from the context menu.
2. Name the new folder Controllers. Add another folder called Views.
3. Right-click on the Controllers folder and select Add | New Item… from the context menu.
4. Select MVC Controller Class from the Add New Item dialog, located under .NET Core | ASP.NET, and click on Add. The default name when adding a new controller is HomeController.cs.
5. Next, we need to add a subfolder for the HomeController in the Views folder. Right-click on the Views folder and select Add | New Folder from the context menu.
6. Name the new folder Home.
7. Right-click on the newly created Home folder and select Add | New Item… from the context menu.
8. Select the MVC View Page item, located under .NET Core | ASP.NET; make sure the filename is Index.cshtml and click on the Add button.

Adding the Bootstrap 4 files using Bower

With ASP.NET 5 and Visual Studio 2015, Microsoft provided the ability to use Bower as a client-side package manager. Bower is a package manager for web frameworks and libraries that is already very popular in the web development community. You can read more about Bower and search the packages it provides by visiting http://bower.io/

Microsoft's decision to allow the use of Bower and package managers other than NuGet for client-side dependencies is because Bower already has such a rich ecosystem. Do not fear! NuGet is not going away. You can still use NuGet to install libraries and components, including Bootstrap 4! To add the Bootstrap 4 source files to your project, follow these steps:

1. Right-click on the project name inside Visual Studio's Solution Explorer and select Add | New Item….
2. Under .NET Core | Client-side, select the Bower Configuration File item, make sure the filename is bower.json, and click on Add.
3. If it is not already open, double-click on the bower.json file to open it and add Bootstrap 4 to the dependencies array. The code for the file should look similar to the following:

{
  "name": "asp.net",
  "private": true,
  "dependencies": {
    "bootstrap": "v4.0.0-alpha.3"
  }
}

4. Save the bower.json file.

Once you've saved the bower.json file, Visual Studio will automatically download the dependencies into the wwwroot/lib folder of your project.
Because Bootstrap 4 also depends on jQuery and Tether, you'll notice that jQuery and Tether have been downloaded as part of the Bootstrap dependency, so after adding Bootstrap the bootstrap, jquery, and tether packages all appear under wwwroot/lib.

Compiling the Bootstrap Sass files using Gulp

When adding Bootstrap 4, you'll notice that the bootstrap folder contains a subfolder called dist. Inside the dist folder are ready-to-use Bootstrap CSS and JavaScript files that you can use as-is if you do not want to change any of the default Bootstrap colors or properties. However, because the source Sass files were also added, you have extra flexibility in customizing the look and feel of your web application. For instance, the default color scheme of the base Bootstrap distribution is gray; if you wanted to change all the default colors to shades of blue, it would be tedious work to find and replace all references to the colors in the CSS file. For example, if you open the _variables.scss file, located in wwwroot/lib/bootstrap/scss, you'll notice the following code:

$gray-dark:     #373a3c !default;
$gray:          #55595c !default;
$gray-light:    #818a91 !default;
$gray-lighter:  #eceeef !default;
$gray-lightest: #f7f7f9 !default;

We're not going to go into too much detail regarding Sass in this article, but the $ in front of the names in the code above indicates that these are variables used to compile the final CSS file. In essence, changing the values of these variables changes the colors to the new values we've specified when the Sass file is compiled. To learn more about Sass, head over to http://sass-lang.com/

Adding Gulp npm packages

We'll need to add the gulp and gulp-sass Node packages to our solution in order to perform actions using Gulp. To accomplish this, you will need to use npm. npm is the default package manager for the Node.js runtime environment. You can read more about it at https://www.npmjs.com/

To add the gulp and gulp-sass npm packages to your ASP.NET project, complete the following steps:

1. Right-click on your project name inside the Visual Studio Solution Explorer and select Add | New Item… from the project context menu.
2. Find the npm Configuration File item, located under .NET Core | Client-side. Keep its name as package.json and click on Add.
3. If it is not already open, double-click on the newly added package.json file and add the following two dependencies to the devDependencies array inside the file:

"devDependencies": {
  "gulp": "3.9.1",
  "gulp-sass": "2.3.2"
}

This adds version 3.9.1 of the gulp package and version 2.3.2 of the gulp-sass package to your project. At the time of writing, these were the latest versions; your version numbers might differ.

Enabling Gulp-Sass compilation

Visual Studio does not compile Sass files to CSS by default without installing extensions, but we can enable it using Gulp. Gulp is a JavaScript toolkit used to stream client-side code through a series of processes when an event is triggered during the build. Gulp can be used to automate and simplify development and repetitive tasks, such as the following:

Minifying CSS, JavaScript, and image files
Renaming files
Combining CSS files

Learn more about Gulp at http://gulpjs.com/

Before you can use Gulp to compile your Sass files to CSS, you need to complete the following tasks:

1. Add a new Gulp Configuration File to your project by right-clicking on the project name and selecting Add | New Item… from the context menu.
The location of the item is .NET Core | Client-side. Keep the filename as gulpfile.js and click on the Add button.

2. Change the code inside the gulpfile.js file to the following:

var gulp = require('gulp');
var gulpSass = require('gulp-sass');

gulp.task('compile-sass', function () {
    gulp.src('./wwwroot/lib/bootstrap/scss/bootstrap.scss')
        .pipe(gulpSass())
        .pipe(gulp.dest('./wwwroot/css'));
});

The code in the preceding step first declares that we require the gulp and gulp-sass packages, and then creates a new task called compile-sass that compiles the Sass source file located at /wwwroot/lib/bootstrap/scss/bootstrap.scss and outputs the result to the /wwwroot/css folder.

Running Gulp tasks

With the gulpfile.js properly configured, you are now ready to run your first Gulp task to compile the Bootstrap Sass to CSS. Accomplish this by completing the following steps:

1. Right-click on gulpfile.js in the Visual Studio Solution Explorer and choose Task Runner Explorer from the context menu.
2. You should see all tasks declared in gulpfile.js listed underneath the Tasks node. If you do not see the tasks listed, click on the Refresh button, located on the left-hand side of the Task Runner Explorer window.
3. To run the compile-sass task, right-click on it and select Run from the context menu. Gulp will compile the Bootstrap 4 Sass files and output the CSS to the specified folder.

Binding Gulp tasks to Visual Studio events

Right-clicking on every task in the Task Runner Explorer in order to execute each one involves a lot of manual steps. Luckily, Visual Studio allows us to bind tasks to the following events inside Visual Studio:

Before Build
After Build
Clean
Project Open

If, for example, we would like to compile the Bootstrap 4 Sass files before building our project, we simply select Before Build from the Bindings context menu in the Visual Studio Task Runner Explorer. Visual Studio then adds the following line of code to the top of gulpfile.js to tell the compiler to run the task before building the project:

/// <binding BeforeBuild='compile-sass' />

Installing Font Awesome

Bootstrap 4 no longer comes bundled with the Glyphicons icon set. However, there are a number of free alternatives available for use with your Bootstrap and other projects. Font Awesome is a very good alternative to Glyphicons that provides you with 650 icons to use and is free for commercial use. Learn more about Font Awesome by visiting https://fortawesome.github.io/Font-Awesome/

You can add a reference to Font Awesome manually, but since we already have everything set up in our project, the quickest option is to install Font Awesome using Bower and compile it into the Bootstrap style sheet using Gulp. To accomplish this, follow these steps:

1. Open the bower.json file, which is located in your project root. If you do not see the file inside the Visual Studio Solution Explorer, click on the Show All Files button on the Solution Explorer toolbar.
2. Add font-awesome as a dependency to the file. The complete listing of the bower.json file is as follows:

{
  "name": "asp.net",
  "private": true,
  "dependencies": {
    "bootstrap": "v4.0.0-alpha.3",
    "font-awesome": "4.6.3"
  }
}

3. Visual Studio will download the Font Awesome source files and add a font-awesome subfolder to the wwwroot/lib/ folder inside your project.
4. Copy the fonts folder located under wwwroot/lib/font-awesome to the wwwroot folder.
5. Next, open the bootstrap.scss file located in the wwwroot/lib/bootstrap/scss folder and add the following lines at the end of the file:

$fa-font-path: "/fonts";
@import "../../font-awesome/scss/font-awesome.scss";

6. Run the compile-sass task via the Task Runner Explorer to recompile the Bootstrap Sass.

The preceding steps include Font Awesome in your Bootstrap CSS file, which in turn enables you to use it inside your project by including markup such as the following:

<i class="fa fa-pied-piper-alt"></i>

Creating an MVC Layout page

The final step for using Bootstrap 4 in your ASP.NET MVC project is to create a Layout page that will contain all the necessary CSS and JavaScript files needed to include Bootstrap components in your pages. To create a Layout page, follow these steps:

1. Add a new subfolder called Shared to the Views folder.
2. Add a new MVC View Layout Page to the Shared folder. The item can be found in the .NET Core | Server-side category of the Add New Item dialog. Name the file _Layout.cshtml and click on the Add button.
3. With the current project layout, add the following HTML to the _Layout.cshtml file:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <meta http-equiv="x-ua-compatible" content="ie=edge">
    <title>@ViewBag.Title</title>
    <link rel="stylesheet" href="~/css/bootstrap.css" />
</head>
<body>
    @RenderBody()

    <script src="~/lib/jquery/dist/jquery.js"></script>
    <script src="~/lib/bootstrap/dist/js/bootstrap.js"></script>
</body>
</html>

4. Finally, add a new MVC View Start Page to the Views folder called _ViewStart.cshtml. The _ViewStart.cshtml file is used to specify common code shared by all views. Add the following Razor markup to the _ViewStart.cshtml file:

@{
    Layout = "_Layout";
}

In the preceding markup, a reference to the Bootstrap CSS file that was generated from the Sass source files using Gulp is added to the <head> element of the file. In the <body> tag, the @RenderBody method is invoked using Razor syntax. Finally, at the bottom of the file, just before the closing </body> tag, references to the jQuery library and the Bootstrap JavaScript file are added. Note that jQuery must always be referenced before the Bootstrap JavaScript file.

Content Delivery Networks

You could also reference the jQuery and Bootstrap libraries from a Content Delivery Network (CDN). This is a good approach when adding references to the most widely used JavaScript libraries. It can allow your site to load faster if the user has already visited a site that uses the same library from the same CDN, because the library will be cached in their browser. In order to reference the Bootstrap and jQuery libraries from a CDN, change the <script> tags to the following:

<script src="https://code.jquery.com/jquery-3.1.0.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.2/js/bootstrap.min.js"></script>

There are a number of CDNs available on the Internet; listed here are some of the more popular options:

MaxCDN: https://www.maxcdn.com/
Google Hosted Libraries: https://developers.google.com/speed/libraries/
CloudFlare: https://www.cloudflare.com/
Amazon CloudFront: https://aws.amazon.com/cloudfront/
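One caveat when relying on a CDN: if the CDN is unreachable, your pages lose their scripts. A common defensive pattern, sketched below under the assumption that the local jQuery copy sits at /lib/jquery/dist/jquery.js as set up earlier, is to test whether the library actually loaded and fall back to the local file:

<script src="https://code.jquery.com/jquery-3.1.0.js"></script>
<script>
    // If the CDN request failed, window.jQuery is undefined; load the local copy instead
    window.jQuery || document.write('<script src="/lib/jquery/dist/jquery.js"><\/script>');
</script>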

Magento Theme Distribution

Packt
02 Nov 2016
8 min read
"Invention is not enough. Tesla invented the electric power we use, but he struggled to get it out to people. You have to combine both things: invention and innovation focus, plus the company that can commercialize things and get them to people" – Larry Page In this article written by Fernando J Miguel, author of the book Magento 2 Theme Design Second Edition, you will learn the process of sharing, code hosting, validating, and publishing your subject as well as future components (extensions/modules) that you develop for Magento 2. (For more resources related to this topic, see here.) The following topics will be covered in this article: The packaging process Packaging your theme Hosting your theme The Magento marketplace The packaging process For every theme you develop for distribution in marketplaces and repositories through the sale and delivery of projects to clients and contractors of the service, you must follow some mandatory requirements for the theme to be packaged properly and consequently distributed to different Magento instances. Magento uses the composer.json file to define dependencies and information relevant to the developed component. Remember how the composer.json file is declared in the Bookstore theme: { "name": "packt/bookstore", "description": "BookStore theme", "require": { "php": "~5.5.0|~5.6.0|~7.0.0", "magento/theme-frontend-luma": "~100.0", "magento/framework": "~100.0" }, "type": "magento2-theme", "version": "1.0.0", "license": [ "OSL-3.0", "AFL-3.0" ], "autoload": { "files": [ "registration.php" ], "psr-4": { "Packt\BookStore\": "" } } } The main fields of the declaration components in the composer.json file are as follows: Name: A fully qualified component name Type: This declares the component type Autoload: This specifies the information necessary to be loaded in the component The three main types of Magento 2 component declarations can be described as follows: Module: Use the magento2-module type to declare modules that add to and/or modify functionalities in the Magento 2 system Theme: Use the magento2-theme type to declare themes in Magento 2 storefronts Language package: Use the magento2-language type to declare translations in the Magento 2 system Besides the composer.json file that must be declared in the root directory of your theme, you should follow these steps to meet the minimum requirements for packaging your new theme: Register the theme by declaring the registration.php file. Package the theme, following the standards set by Magento. Validate the theme before distribution. Publish the theme. From the minimum requirements mentioned, you already are familiar with the composer.json and registration.php files. Now we will look at the packaging process, validation, and publication in sequence. Packaging your theme By default, all themes should be compressed in ZIP format and contain only the root directory of the component developed, excluding any file and directory that is not part of the standard structure. 
Packaging your theme

By default, all themes should be compressed in ZIP format and contain only the root directory of the developed component, excluding any file or directory that is not part of the standard structure. The following command shows the compression standard used for Magento 2 components:

zip -r vendor-name_package-name-1.0.0.zip package-path/* -x 'package-path/.git/*'

Here, the name of the ZIP file has the following components:

- vendor-name: This symbolizes the vendor by which the theme was developed
- package-name: This is the name of the component (package)
- 1.0.0: This is the component version

After the component name, the command defines which directory will be compressed, followed by the -x parameter, which excludes the .git directory from the compression.

How about applying ZIP compression to the Bookstore theme? To do this, follow these steps:

1. Using a terminal or Command Prompt, access the theme's root directory: <magento_root>/app/design/frontend/Packt/bookstore.
2. Run the following command:

zip -r packt-bookstore-bookstore.1.0.0.zip * -x '.git/*'

Upon successfully executing this command, you will have packaged your theme. After this, you will validate your new Magento theme using a verification tool.

Magento component validation

The Magento developer community created the validate_m2_package script to perform validation of components developed for Magento 2. This script is available in the marketplace-tools directory of the Magento 2 developer community's GitHub repository. According to the description, the idea behind Marketplace Tools is to house standalone tools that developers can use to validate and verify their extensions before submitting them to the Marketplace.

Here's how to use the validation tool:

1. Download the validate_m2_package.php script, available at https://github.com/magento/marketplace-tools.
2. Move the script to the root directory of the Bookstore theme: <magento_root>/app/design/frontend/Packt/bookstore.
3. Open a terminal or Command Prompt.
4. Run the following command:

php validate_m2_package.php packt-bookstore-bookstore.1.0.0.zip

This command will validate the package you previously created with the zip command. If all goes well, you will get no response from the command line, which means that your package meets the minimum requirements for publication.

If you wish, you can use the -d parameter, which enables you to debug your component by printing messages during verification. To use this option, run the following command:

php validate_m2_package.php -d packt-bookstore-bookstore.1.0.0.zip

Hosting your theme

You can share your Magento theme and host your code on different services to achieve greater interaction with your team, or even with the Magento development community. Remember that the standard version control software used by the Magento development community is Git. There are some well-established options on the market for distributing your code and sharing your work. Let's look at some of these options.

Hosting your project on GitHub and Packagist

The most common method of hosting your code/theme is to use GitHub. Once you have created a repository, you can get help from the Magento developer community if you are working on an open source project, or even one for learning purposes. The major benefit of using GitHub is your portfolio: publishing the Magento 2 projects you have developed will certainly make a difference when you are looking for employment opportunities and trying to get selected for new projects.
GitHub has a specific help area for users that provides a collection of documentation that developers may find useful. GitHub Help can be accessed directly at https://help.github.com/. To create a GitHub repository, you can consult the official documentation, available at https://help.github.com/articles/create-a-repo/.

Once you have your project published on GitHub, you can use the Packagist (https://packagist.org/) service by creating a new account and entering the link to your GitHub package on Packagist. Packagist automatically collects information from the composer.json file available in the GitHub repository, creating a reference you can use in other projects.

Hosting your project in a private repository

In some cases, you will be developing your project for private clients and companies. If you want to keep your version control private, you can use the following procedure:

1. Create your own Composer package repository using the Toran service (https://toranproxy.com/).
2. Create your package as previously described.
3. Send your package to your private repository.
4. Add the following to your composer.json file:

{
  "repositories": [
    {
      "type": "composer",
      "url": [repository url here]
    }
  ]
}

Magento Marketplace

According to Magento, Marketplace (https://marketplace.magento.com/) is the largest global e-commerce resource for applications and services that extend Magento solutions with powerful new features and functionality. Once you have completed developing the first version of your theme, you can upload your project to become part of the official Magento Marketplace. In addition to theme uploads, Magento Marketplace also allows you to upload shared packages and extensions (modules). To learn more about shared packages, visit http://docs.magento.com/marketplace/user_guide/extensions/shared-package-submit.html.

Submitting your theme

After the compression and validation processes, you can send your project to be distributed on Magento Marketplace. For this, you should confirm an account on the developer portal (https://developer.magento.com/customer/account/) with a valid e-mail address and personal information about the scope of your activities. After this confirmation, you will have access to the extensions area at https://developer.magento.com/extension/extension/list/, where you will find options to submit themes and extensions.

After clicking on the Add Theme button, you will need to answer a questionnaire covering:

- Which Magento platform your theme will work on
- The name of your theme
- Whether your theme will have additional services
- Additional functionalities your theme has
- What makes your theme unique

After the questionnaire, you will need to fill in the details of your extension, as follows:

- Extension title
- Public version
- Package file (upload)

The submitted theme will be evaluated by a technical review, and you will be able to follow the evaluation progress through your e-mail and the control panel of the Magento developer area. You can find more information about Magento Marketplace at the following link: http://docs.magento.com/marketplace/user_guide/getting-started.html

Summary

In this article, you learned about the theme-packaging process, as well as validation against the minimum requirements for publication on Magento Marketplace. You are now ready to develop your solutions! There is still a lot of work left, but I encourage you to find your way as a Magento theme developer by putting a lot of study, research, and application into the area.
Participate in events, be collaborative, and count on the community's support. Good luck and success in your career path!

Resources for Article:

Further resources on this subject:
- Installing Magento [article]
- Social Media and Magento [article]
- Magento 2 – the New E-commerce Era [article]

Setting Up the Environment for ASP.NET MVC 6

Packt
02 Nov 2016
9 min read
In this article by Mugilan TS Raghupathi, author of the book Learning ASP.NET Core MVC Programming, the setup for getting started with programming in ASP.NET MVC 6 is explained. In any development project, it is vital to set up the right kind of development environment so that you can concentrate on developing the solution rather than solving environment issues or configuration problems. With respect to .NET, Visual Studio is the de facto standard IDE (Integrated Development Environment) for building web applications in .NET.

In this article, you'll be learning about the following topics:

- The purpose of an IDE
- The different offerings of Visual Studio
- Installation of Visual Studio Community 2015
- Creating your first ASP.NET MVC 6 project and its project structure

(For more resources related to this topic, see here.)

Purpose of an IDE

First of all, let us see why we need an IDE when you could type the code in Notepad, compile it, and execute it. When you develop a web application, you might need the following things to be productive:

- Code editor: This is the text editor where you type your code. Your code editor should be able to recognize the different constructs of your programming language, such as the if condition or the for loop. In Visual Studio, all of your keywords are highlighted in blue.
- IntelliSense: IntelliSense is a context-aware code-completion feature available in most modern IDEs, including Visual Studio. One example: when you type a dot after an object, IntelliSense lists all the methods available on that object. This helps developers to write code faster and more easily.
- Build/Publish: It is helpful if you can build or publish the application with a single click or a single command. Visual Studio provides several options out of the box to build a separate project, or to build the complete solution, with a single click. This makes the build and deployment of your application easier.
- Templates: Depending on the type of the application, you might have to create different folders and files along with boilerplate code. It is therefore very helpful if your IDE supports the creation of different kinds of templates. Visual Studio generates different kinds of templates, with code, for ASP.NET Web Forms, MVC, and Web API to get you up and running.
- Ease of adding items: Your IDE should allow you to add different kinds of items with ease. For example, you should be able to add an XML file without any issues. And if there is any problem with the structure of your XML file, the IDE should be able to highlight the issue, provide information about it, and help you to fix it.

Visual Studio offerings

There are different versions of Visual Studio 2015 available to satisfy the various needs of developers and organizations. Primarily, there are four versions of Visual Studio 2015:

- Visual Studio Community
- Visual Studio Professional
- Visual Studio Enterprise
- Visual Studio Test Professional

System requirements

Visual Studio can be installed on computers running the Windows 7 Service Pack 1 operating system or above. You can see the complete list of requirements at the following URL:

https://www.visualstudio.com/en-us/downloads/visual-studio-2015-system-requirements-vs.aspx

Visual Studio Community 2015

This is a fully featured IDE for building desktop applications, web applications, and cloud services. It is available free of cost for individual users.
You can download Visual Studio Community from the following URL: https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx

Throughout this book, we will be using the Visual Studio Community version for development, as it is available free of cost to individual developers.

Visual Studio Professional

As the name implies, Visual Studio Professional is targeted at professional developers and contains features such as CodeLens for improving your team's productivity. It also has features for greater collaboration within the team.

Visual Studio Enterprise

Visual Studio Enterprise is a full-blown version of Visual Studio with a complete set of features for collaboration, including Team Foundation Server, modeling, and testing.

Visual Studio Test Professional

Visual Studio Test Professional is primarily aimed at the testing team, or the people who are involved in testing, which might include developers. In any software development methodology, whether the waterfall model or agile, developers need to execute the development-suite test cases for the code they are developing.

Installation of Visual Studio Community

Follow the given steps to install Visual Studio Community 2015:

1. Visit the following link to download Visual Studio Community 2015: https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx
2. Click on the Download Community 2015 button. Save the file in a folder where you can retrieve it easily later.
3. Run the downloaded executable file and click on Run.
4. Choose an installation type. There are two types of installation: default and custom. The default installation installs the most commonly used features, and this will cover most developer use cases. A custom installation lets you choose exactly which components you want installed.
5. Click on the Install button after selecting the installation type. Depending on your memory and processor speed, the installation will take one to two hours. Once all the components are installed, you will see the Setup Completed screen.

Installation of ASP.NET 5

When we install the Visual Studio Community 2015 edition, ASP.NET 5 is not installed by default. As an ASP.NET MVC 6 application runs on top of ASP.NET 5, we need to install ASP.NET 5. There are a couple of ways to install it:

- Get ASP.NET 5 from https://get.asp.net/
- Install it from the New Project template in Visual Studio

The second option is a bit easier, as you don't need to search for the installer. The detailed steps are as follows:

1. Create a new project by selecting File | New Project, or by using the shortcut Ctrl + Shift + N.
2. Select ASP.NET Web Application, enter the project name, and click on OK.
3. A window will appear for selecting a template. Select the Get ASP.NET 5 RC option.
4. When you click on OK, a download dialog will appear. When you click on the Run or Save button in that dialog, you will get a screen asking for ASP.NET 5 Setup. Select the checkbox I agree to the license terms and conditions and click on the Install button.
5. The installation of ASP.NET 5 might take a couple of hours. During the installation of ASP.NET 5 RC1 Update 1, it might ask you to close Visual Studio. If asked, please do so.
Project structure in an ASP.NET 5 application

Once ASP.NET 5 RC1 is successfully installed, open Visual Studio, create a new project, and select the ASP.NET 5 Web Application template. A new project will be created with the structure described in the following sections.

File-based project

Whenever you add a file or folder in your file system (inside our ASP.NET 5 project folder), the changes will be automatically reflected in your project structure.

Support for full .NET and .NET Core

You will see a couple of references in the project: DNX 4.5.1 and DNX Core 5.0. DNX 4.5.1 provides the functionality of the full-blown .NET Framework, whereas DNX Core 5.0 supports only the core functionality, which would be used if you are deploying the application across platforms such as Apple OS X and Linux. The development and deployment of an ASP.NET MVC 6 application on a Linux machine is explained in the book.

The project.json package

Usually, in an ASP.NET web application, we would have assemblies as references, with the list of references kept in a C# project file. But in an ASP.NET 5 application, we have a JSON file by the name of project.json, which contains all the necessary configuration, with all its .NET dependencies in the form of NuGet packages. This makes dependency management easier. NuGet is a package manager, provided by Microsoft, which makes package installation and uninstallation easier; prior to NuGet, all dependencies had to be installed manually.

- The dependencies section identifies the list of dependent packages available for the application.
- The frameworks section states which frameworks the application supports.
- The scripts section identifies the scripts to be executed during the build process of the application.
- Include and exclude properties can be used in any section to include or exclude any item.

Controllers

This folder contains all of your controller files. Controllers are responsible for handling requests, communicating with the models, and generating the views for them.

Models

All of your classes representing domain data are present in this folder.

Views

Views are the files that contain your frontend components, presented to the end users of the application. This folder contains all of your Razor view files.

Migrations

Any database-related migrations are available in this folder. Database migrations are C# files that contain the history of any database changes made through Entity Framework (an ORM framework). This is explained in detail in the book.

The wwwroot folder

This folder acts as a root folder, and it is the ideal container for all of your static files, such as CSS and JavaScript files. All files placed in the wwwroot folder can be accessed directly from the path, without going through the controller.

Other files

The appsettings.json file is the config file where you can configure application-level settings. Bower, npm (Node Package Manager), and gulpfile.js are client-side technologies that are supported by ASP.NET 5 applications.
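To make the sections described above concrete, here is a trimmed sketch of what a project.json from the RC1 timeframe might look like. The exact package versions and command names vary by template, so treat these values as indicative rather than as the template's exact contents:

{
  "version": "1.0.0-*",
  "dependencies": {
    "Microsoft.AspNet.Mvc": "6.0.0-rc1-final",
    "Microsoft.AspNet.Server.Kestrel": "1.0.0-rc1-final"
  },
  "commands": {
    "web": "Microsoft.AspNet.Server.Kestrel"
  },
  "frameworks": {
    "dnx451": {},
    "dnxcore50": {}
  },
  "exclude": [ "wwwroot", "node_modules" ]
}

With a file like this in place, running dnu restore from the project folder pulls down the declared packages, and dnx web starts the site using the Kestrel server named in the commands section.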
Summary

In this article, you have learned about the different offerings of Visual Studio. Step-by-step instructions were provided for the installation of the Visual Studio Community version, which is freely available to individual developers. We also discussed the new project structure of an ASP.NET 5 application and the changes compared to previous versions. Later in the book, we discuss controllers and their roles and functionalities; we'll also build a controller with associated action methods and see how it works.

Resources for Article:

Further resources on this subject:
- Designing your very own ASP.NET MVC Application [article]
- Debugging Your .NET Application [article]
- Using ASP.NET Controls in SharePoint [article]

An Introduction to Moodle 3 and MoodleCloud

Packt
19 Oct 2016
20 min read
In this article by Silvina Paola Hillar, the author of the book Moodle Theme Development, we will introduce e-learning and virtual learning environments such as Moodle and MoodleCloud, explaining their similarities and differences. Apart from that, we will learn about screen resolution and aspect ratio, which is information we need in order to develop Moodle themes.

In this article, we shall learn about the following topics:

- Understanding what e-learning is
- Learning about virtual learning environments
- Introducing Moodle and MoodleCloud
- Learning what Moodle and MoodleCloud are
- Using Moodle on different devices
- Sizing the screen resolution
- Calculating the aspect ratio
- Learning about sharp and soft images
- Learning about crisp and sharp text
- Understanding what anti-aliasing is

(For more resources related to this topic, see here.)

Understanding what e-learning is

E-learning is electronic learning, meaning that it is not traditional learning in a classroom with a teacher, students, and a board. E-learning involves using a computer to deliver classes or a course.

When delivering classes or a course, there is online interaction between the student and the teacher. There might also be some offline activities, as when a student is asked to create a piece of writing or something else. Another option is collaborative activities involving the interaction of several students and the teacher. When creating course content, there is also the option of video conferencing, so there can be virtual face-to-face interaction within the e-learning process; the time and date should be set beforehand. In this way, e-learning tries to imitate traditional learning so as not to lose human contact or social interaction.

The course may be full distance or not. If the course is full distance, there is online interaction only. All the resources and activities are delivered online, and there might be some interaction through messages, chats, or emails between the student and the teacher. If the course is not full distance, and is delivered face to face but involves the use of computers, we are referring to blended learning. Blended learning means using e-learning within the classroom; it is a mixture of traditional learning and computers.

The use of blended learning with little children is very important, because they get the social element, which is essential at a very young age. Apart from that, they also come into contact with technology while they are learning. It is advisable to use interactive whiteboards (IWBs) at an early stage; IWBs are the right tool to choose when dealing with blended learning.

IWBs are motivational gadgets, which are prominent in integrating technology into the classroom. IWBs are considered a symbol of innovation and a key element in teaching students. IWBs offer interactive projections for class demonstrations; we can usually project resources from computer software as well as from our Moodle platform. Students can interact with them by touching or writing on them, that is to say, through blended learning. Apart from that, teachers can give presentations on different topics within a subject, and these topics become much more interesting and captivating for students, since IWBs allow changes to be made and we can insert interactive elements into the presentation of any subject. There are several types of technology used in IWBs, such as touch technology, laser scanning, and electromagnetic writing tools.
Therefore, we have to bear in mind which technology to choose when we get an IWB. On the other hand, the widespread use of mobile devices nowadays has turned e-learning into mobile learning. Smartphones and tablets allow students to learn anywhere at any time. Therefore, it is important to design course material that is usable by students on such devices.

Moodle is a learning platform through which we can design, build, and create e-learning environments. It is possible to create online interaction and have video conferencing sessions with students. Distance learning is another option if blended learning cannot be carried out. We can also choose Moodle Mobile: we can download the app from the App Store, Google Play, Windows Store, or Windows Phone Store. We can browse the content of courses, receive messages, contact people from the courses, upload different types of file, and view course grades, among other actions.

Learning about Virtual Learning Environments

A Virtual Learning Environment (VLE) is a type of virtual environment that supports both resources and learning activities; therefore, students can have both passive and active roles. There is also social interaction, which can take place through collaborative work as well as video conferencing. Students can also be actors, since they can help construct the VLE. VLEs can be used for both distance and blended learning, since they can enrich courses. Mobile learning is also possible, because mobile devices have access to the Internet, allowing teachers and students to log in to their courses.

VLEs are designed in such a way that they can carry out the following functions or activities:

- Design, create, store, access, and use course content
- Deliver or share course content
- Communicate, interact, and collaborate between students and teachers
- Assess and personalize the learning experience
- Modularize both activities and resources
- Customize the interface

We are going to deal with each of these functions and activities and see how useful they might be when designing a VLE for our class. When using Moodle, we can perform all the functions and activities mentioned here, because Moodle is a VLE.

Design, create, store, access, and use course content

If we use the Moodle platform to create a course, we have to deal with course content. Therefore, when we add a course, we have to add its content. We can choose the weekly outline section or the topic under which we want to add the content. We click on Add an activity or resource, and two options appear: resources and activities. The content can therefore be passive or active for the student.

Another option for creating course content is to reuse content that has already been created and used before in another VLE. In other words, we can import or export course materials, since most VLEs have specific tools designed for such purposes. This is very useful and saves time. There are a variety of ways for teachers to create course materials, due to the fact that the teacher thinks about the methodology, as well as how to meet the students' needs, when creating the course. Moodle is designed in such a way that it offers a variety of combinations that can fit any course content.

Deliver or share course content

Before using a VLE, we have to log in, because all the content is protected and is not open to the general public. In this way, we can protect property rights, as well as the course itself.
All participants must be enrolled in the course unless it has been opened to the public. Teachers can gain remote access in order to create and design their courses. This is quite convenient, since they can build the content at home rather than in their workplace. They need login access, and they need to switch roles to course creator in order to create the content. To switch roles to course creator, under Administration, click on Switch role to… | Course creator. When the role has been changed, the teacher can create content that students can access.

Once logged in, students have access to the already created content, either activities or resources. The content is available over the Internet or the institution's intranet connection. Students can access the content anywhere, as long as one of these connections is available. If MoodleCloud is being used, there must be an Internet connection, otherwise it is impossible for both students and teachers to log in.

Communicate, interact, and collaborate among students and teachers

Communication, interaction, and collaborative working are the key factors in social interaction and learning through the interchange of ideas. VLEs let us build these into course content and activities, because they are elemental for our class. There is no need to be an isolated learner, because learners have the ability to communicate among themselves and with their teachers.

Moodle offers the possibility of video conferencing through the Big Blue Button. In order to install the Big Blue Button plugin in Moodle, visit the following link: https://moodle.org/plugins/browse.php?list=set&id=2

If you are using MoodleCloud, the Big Blue Button is enabled by default, so when we click on Add an activity or resource, it appears in the list of activities.

Assess and personalize the learning experience

Moodle allows the teacher to follow the progress of students so that they can assess and grade their work, as long as they complete the activities. Resources cannot be graded, since they are passive content for students, but teachers can also check when a participant last accessed the site.

Badges are another element used to personalize the learning experience. We can create badges for students when they complete an activity or a course; they are homework rewards. Badges are quite good at motivating young learners.

Modularize both activities and resources

Moodle offers the ability to build personalized activities and resources. There are several ways to present both, with all the options Moodle offers. Activities can be molded according to the methodology the teacher uses. In Moodle 3, there are new question types within the Quiz activity. The new question types are as follows:

- Select missing words
- Drag and drop into text
- Drag and drop onto image
- Drag and drop markers

The question types are shown after we choose Quiz in the Add an activity or resource menu, in the weekly outline section or topic that we have chosen.

Customize the interface

Moodle allows us to customize the interface in order to develop the look and feel that we require; we can add a logo for the school or institution that the Moodle site belongs to. We can also add a theme relevant to the subject or course that we have created.
The main purpose of customizing the interface is to avoid all subjects and courses looking the same. Later in the article, we will learn how to customize the interface.

Learning Moodle and MoodleCloud

Modular Object-Oriented Dynamic Learning Environment (Moodle) is a learning platform designed in such a way that we can create VLEs. Moodle can be downloaded, installed, and run on any web server software supporting Hypertext Preprocessor (PHP). It can use a SQL database and can run on several operating systems. We can download Moodle 3.0.3 from the following URL: https://download.moodle.org/

MoodleCloud, on the other hand, does not need to be downloaded since, as its name suggests, it is in the cloud. We can therefore get our own Moodle site with MoodleCloud within minutes, and for free. It is Moodle's hosting platform, designed and run by the people who make Moodle. In order to get a MoodleCloud site, we need to go to the following URL: https://moodle.com/cloud/

MoodleCloud was created to cater for users with smaller requirements and budgets. In order to create an account, you need to provide your cell phone number; you will receive an SMS with a code that must be input when creating your site. As it is free, there are some limitations to MoodleCloud, unless we contact Moodle Partners and pay for an expanded version of it. The limitations are as follows:

- No more than 50 users
- 200 MB disk space
- Core themes and plugins only
- One site per phone number
- Big Blue Button sessions are limited to six people, with no recordings
- There are advertisements

When creating a Moodle site, we want to change the look and functionality of the site or of an individual course. We may also need to customize themes for Moodle in order to give the course the desired look. Therefore, this article will explain the basic concepts that we have to bear in mind when dealing with themes, due to the fact that themes are shown on different devices. In the past, Moodle ran only on desktops or laptops, but nowadays Moodle can run on many different devices, such as smartphones, tablets, iPads, and smart TVs; the list goes on.

Using Moodle on different devices

Moodle can be used on different devices, at different times, in different places. Therefore, there are factors that we need to be aware of when designing courses and themes. The rest of this article covers the aspects and concepts we need to understand in order to know what to take into account when we design our courses and build our themes.

Devices differ in many ways, not only in size but also in the way they display our Moodle course. Moodle courses can be used on anything from a tiny device that fits into the palm of a hand to a huge IWB or smart TV, and plenty of other devices in between. Such differences have to be taken into account when choosing images, text, and other components of our course. We are going to deal with sizing the screen resolution, calculating the aspect ratio, types of images (sharp and soft), and crisp and sharp text. Finally, but importantly, the anti-aliasing method is explained.

Sizing the screen resolution

The number of pixels the display of a device has, horizontally and vertically, together with the color depth measuring the number of bits representing the color of each pixel, makes up the screen resolution. The higher the screen resolution, the higher the productivity we get.
In the past, the screen resolution of a display was important since it determined the amount of information displayed on the screen. The lower the resolution, the fewer items would fit on the screen; the higher the resolution, the more items would fit on the screen. The resolution varies according to the hardware in each device. Nowadays, the screen resolution is more about a pleasant visual experience, since we would rather see more quality than more stuff on the screen. That is the reason why the screen resolution matters.

There might be different display sizes where the screen resolutions are the same, that is to say, the total number of pixels is the same. If we compare a laptop (a 13'' screen with a resolution of 1280 x 800) and a desktop (with a 17'' monitor at the same 1280 x 800 resolution), although the monitor is larger, the number of pixels is the same; the only difference is that we will see everything bigger on the monitor. Therefore, instead of seeing more stuff, we see higher quality.

Screen resolution chart

Code | Width | Height | Ratio | Description
QVGA | 320 | 240 | 4:3 | Quarter Video Graphics Array
FHD | 1920 | 1080 | ~16:9 | Full High Definition
HVGA | 640 | 240 | 8:3 | Half Video Graphics Array
HD | 1360 | 768 | ~16:9 | High Definition
HD | 1366 | 768 | ~16:9 | High Definition
HD+ | 1600 | 900 | ~16:9 | High Definition plus
VGA | 640 | 480 | 4:3 | Video Graphics Array
SVGA | 800 | 600 | 4:3 | Super Video Graphics Array
XGA | 1024 | 768 | 4:3 | Extended Graphics Array
XGA+ | 1152 | 768 | 3:2 | Extended Graphics Array plus
XGA+ | 1152 | 864 | 4:3 | Extended Graphics Array plus
SXGA | 1280 | 1024 | 5:4 | Super Extended Graphics Array
SXGA+ | 1400 | 1050 | 4:3 | Super Extended Graphics Array plus
UXGA | 1600 | 1200 | 4:3 | Ultra Extended Graphics Array
QXGA | 2048 | 1536 | 4:3 | Quad Extended Graphics Array
WXGA | 1280 | 768 | 5:3 | Wide Extended Graphics Array
WXGA | 1280 | 720 | ~16:9 | Wide Extended Graphics Array
WXGA | 1280 | 800 | 16:10 | Wide Extended Graphics Array
WXGA | 1366 | 768 | ~16:9 | Wide Extended Graphics Array
WXGA+ | 1280 | 854 | 3:2 | Wide Extended Graphics Array plus
WXGA+ | 1440 | 900 | 16:10 | Wide Extended Graphics Array plus
WXGA+ | 1440 | 960 | 3:2 | Wide Extended Graphics Array plus
WQHD | 2560 | 1440 | ~16:9 | Wide Quad High Definition
WQXGA | 2560 | 1600 | 16:10 | Wide Quad Extended Graphics Array
WSVGA | 1024 | 600 | ~17:10 | Wide Super Video Graphics Array
WSXGA | 1600 | 900 | ~16:9 | Wide Super Extended Graphics Array
WSXGA | 1600 | 1024 | 16:10 | Wide Super Extended Graphics Array
WSXGA+ | 1680 | 1050 | 16:10 | Wide Super Extended Graphics Array plus
WUXGA | 1920 | 1200 | 16:10 | Wide Ultra Extended Graphics Array
WQUXGA | 3840 | 2400 | 16:10 | Wide Quad Ultra Extended Graphics Array
4K UHD | 3840 | 2160 | 16:9 | Ultra High Definition
4K UHD | 1536 | 864 | 16:9 | Ultra High Definition

Considering that 3840 x 2160 displays (also known as 4K, QFHD, Ultra HD, UHD, or 2160p) are already available for laptops and monitors, a pleasant visual experience with high-DPI displays can be a good long-term investment for your desktop applications.

The DPI setting of the monitor causes another common problem: a change in the effective resolution. Consider a 13.3'' display that offers a 3200 x 1800 resolution and is configured with an OS DPI of 240. The high DPI setting makes the system use both larger fonts and larger UI elements; therefore, the elements consume more pixels to render than the same elements displayed at a resolution configured with an OS DPI of 96. The effective resolution of a display that provides 3200 x 1800 pixels configured at 240 DPI is 1280 x 720.

The effective resolution can become a big problem: an application that requires a minimum resolution of the old standard 1024 x 768 pixels at an OS DPI of 96 would have problems with a 3200 x 1800-pixel display configured at 240 DPI, because it wouldn't be possible to display all the necessary UI elements. It may sound crazy, but the effective vertical resolution is 720 pixels, lower than the 768 vertical pixels required by the application to display all its UI elements without problems.

The formula to calculate the effective resolution is simple: divide the physical pixels by the scale factor (OS DPI / 96). For example, the following formula calculates the horizontal effective resolution of the previous example: 3200 / (240 / 96) = 3200 / 2.5 = 1280; and the following calculates the vertical effective resolution: 1800 / (240 / 96) = 1800 / 2.5 = 720. The effective resolution would be 1600 x 900 pixels if the same physical resolution were configured at 192 DPI. Effective horizontal resolution: 3200 / (192 / 96) = 3200 / 2 = 1600; and vertical effective resolution: 1800 / (192 / 96) = 1800 / 2 = 900.
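The calculation is easy to script. As a quick illustration, added here and not part of the original article, this is the formula expressed in PHP (the language Moodle itself is written in):

<?php
// Effective resolution = physical pixels / (OS DPI / 96)
function effectivePixels($physicalPixels, $osDpi)
{
    return (int) round($physicalPixels / ($osDpi / 96));
}

echo effectivePixels(3200, 240); // 1280
echo "\n";
echo effectivePixels(1800, 240); // 720
echo "\n";
echo effectivePixels(1800, 192); // 900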
Calculating the aspect ratio

The aspect ratio is the proportional relationship between the width and the height of an image. It is used to describe the shape of a computer screen or a TV. The aspect ratio of a standard-definition (SD) screen is 4:3, that is to say, a relatively square rectangle. The aspect ratio is often expressed in W:H format, where W stands for width and H stands for height; 4:3 means four units wide to three units high. High-definition TVs (HDTVs), by contrast, have a 16:9 ratio, which is a wider rectangle.

Why do we calculate the aspect ratio? The answer is that the ratio has to be well defined because the rectangular shape of every frame, digital video, canvas, image, or responsive design has to fit into different and distinct devices.
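If you ever need to work an aspect ratio out from a width and a height, it reduces to dividing both sides by their greatest common divisor. A small PHP sketch (again, an illustration added here rather than the book's code):

<?php
// Reduce width x height to its simplest W:H form, e.g. 1920 x 1080 -> 16:9
function gcd($a, $b)
{
    return $b === 0 ? $a : gcd($b, $a % $b);
}

function aspectRatio($width, $height)
{
    $g = gcd($width, $height);
    return ($width / $g) . ':' . ($height / $g);
}

echo aspectRatio(1920, 1080); // 16:9
echo "\n";
echo aspectRatio(1280, 1024); // 5:4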
Learning about sharp and soft images

Images can be either sharp or soft; sharp is the opposite of soft. A soft image has less pronounced detail, while a sharp image has more contrast between pixels. The more pixels the image has, the sharper it is. We can soften an image, in which case it loses information, but we cannot sharpen one; in other words, we can't add more information to an image.

In order to compare sharp and soft images, we can use an online tool that converts bitmaps to vector graphics: http://vectormagic.com/home. We can convert a bitmap image, such as a .png, .jpeg, or .gif, into an .svg in order to get an anti-aliased image, in a single simple step. There are plenty of features to take into account when vectorizing. We can design a bitmap using an image editor and upload the bitmap image from the clipboard, or upload the file from our computer. Once the image is uploaded to the application, we can start working. Another possibility is to use the sample images on the website, which we are going to use in order to see the anti-aliasing effect.

We convert bitmap images, which are made up of pixels, into vector images, which are made up of shapes. The shapes are mathematical descriptions of images and do not become pixelated when scaled up; vector graphics can handle scaling without any problems. Vector images are the preferred type to work with in graphic design on paper or clothes.

Go to http://vectormagic.com/home and click on Examples. After clicking on Examples, the bitmap appears on the left and the vectorized image on the right. The bitmap is blurred and soft; the SVG has an anti-aliasing effect, therefore the image is sharp.

Learning about crisp and sharp text

There are sharp and soft images, and there is also crisp and sharp text, so it is now time to look at text. What is the main difference between the two? When we say that text is crisp, we mean that there is more anti-aliasing; in other words, there are more grey pixels around the black text. The difference shows when we zoom in to 400%. On the other hand, sharp mode is superior for small fonts because it makes each letter stronger.

There are four options in Photoshop for dealing with text: sharp, crisp, strong, and smooth. Sharp and crisp have already been mentioned in the previous paragraphs. Strong is notorious for adding unnecessary weight to letter forms, while smooth looks closest to untinted anti-aliasing and remains similar to the original.

Understanding what anti-aliasing is

Anti-aliasing is a technique used to minimize distortion artifacts. It applies intermediate colors in order to eliminate jagged pixels, that is to say, the saw-tooth or pixelated lines. Therefore, we need to look out for low resolutions, so that the saw-tooth effect does not appear when we make a graphic bigger.

Test your knowledge

Before we delve deeper into more content, let's test your knowledge of the information that we have dealt with in this article:

1. Moodle is a learning platform with which…
   a. We can design, build, and create e-learning environments.
   b. We can learn.
   c. We can download content for students.
2. BigBlueButtonBN…
   a. Is a way to log in to Moodle.
   b. Lets you create links to real-time online classrooms from within Moodle.
   c. Works only in MoodleCloud.
3. MoodleCloud…
   a. Is not open source.
   b. Does not allow more than 50 users.
   c. Works only for universities.
4. The number of pixels the display of the device has horizontally and vertically, and the color depth measuring the number of bits representing the color of each pixel, make up…
   a. Screen resolution.
   b. Aspect ratio.
   c. Size of device.
5. Anti-aliasing can be applied to…
   a. Only text.
   b. Only images.
   c. Both images and text.

Summary

In this article, we have covered most of what needs to be known about e-learning, VLEs, and Moodle and MoodleCloud. There is a slight difference between Moodle and MoodleCloud, which matters especially if you don't have access to a Moodle course in the institution where you are working and want to design a Moodle course. Moodle is used on different devices, and there are several aspects to take into account when designing a course and building a Moodle theme. We have dealt with screen resolution, aspect ratio, types of images and text, and anti-aliasing effects.

Resources for Article:

Further resources on this subject:
- Listening Activities in Moodle 1.9: Part 2 [article]
- Gamification with Moodle LMS [article]
- Adding Graded Activities [article]

Learning How to Manage Records in Visualforce

Packt
14 Oct 2016
7 min read
In this article by Keir Bowden, author of the book Visualforce Development Cookbook - Second Edition, we will cover the following recipes: styling fields as required, and styling table columns as required.

One of the common use cases for Visualforce pages is to simplify, streamline, or enhance the management of sObject records. In this article, we will use Visualforce to carry out some more advanced customization of the user interface: redrawing the form to change the available picklist options, or capturing different information based on the user's selections.

(For more resources related to this topic, see here.)

Styling fields as required

Standard Visualforce input components, such as <apex:inputText />, can take an optional required attribute. If set to true, the component will be decorated with a red bar to indicate that it is required, and form submission will fail if a value has not been supplied.

In a scenario where one or more inputs are required and there are additional validation rules, for example, when one of either the Email or Phone fields must be defined for a contact, this can lead to a drip feed of error messages to the user. This is because the inputs make repeated unsuccessful attempts to submit the form, each time getting slightly further in the process.

Now, we will create a Visualforce page that allows a user to create a contact record. The Last Name field is captured through a non-required input decorated with a red bar identical to that created for required inputs. When the user submits the form, the controller validates that the Last Name field is populated and that one of the Email or Phone fields is populated. If any of the validations fail, details of all the errors are returned to the user.

Getting ready

This topic makes use of a controller extension, so this must be created before the Visualforce page.

How to do it…

1. Navigate to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes.
2. Click on the New button.
3. Paste the contents of the RequiredStylingExt.cls Apex class from the code downloaded into the Apex Class area.
4. Click on the Save button.
5. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages.
6. Click on the New button.
7. Enter RequiredStyling in the Label field. Accept the default RequiredStyling that is automatically generated for the Name field.
8. Paste the contents of the RequiredStyling.page file from the code downloaded into the Visualforce Markup area and click on the Save button.
9. Navigate back to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages.
10. Locate the entry for the RequiredStyling page and click on the Security link.
11. On the resulting page, select which profiles should have access and click on the Save button.

How it works…

Opening the following URL in your browser displays the RequiredStyling page to create a new contact record: https://<instance>/apex/RequiredStyling. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com.
Clicking on the Save button without populating any of the fields results in the save failing with a number of errors.

The Last Name field is constructed from a label and a text input component, rather than a standard input field, as an input field would enforce the required nature of the field and stop the submission of the form:

<apex:pageBlockSectionItem >
  <apex:outputLabel value="Last Name"/>
  <apex:outputPanel id="detailrequiredpanel" layout="block" styleClass="requiredInput">
    <apex:outputPanel layout="block" styleClass="requiredBlock" />
    <apex:inputText value="{!Contact.LastName}"/>
  </apex:outputPanel>
</apex:pageBlockSectionItem>

The required styles are defined in the Visualforce page, rather than relying on any existing Salesforce style classes, to ensure that if Salesforce changes the names of its style classes, this does not break the page.
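The requiredInput and requiredBlock class names come from the mark-up above, but the book's exact style rules are not reproduced in this extract. A sketch of the general idea, an absolutely positioned red bar anchored to the left edge of the input's panel, might look like this (the specific colors and offsets are assumptions):

<style>
    /* Panel wrapping the input; gives the bar something to anchor to */
    .requiredInput { position: relative; }
    /* The red 'required' bar itself */
    .requiredBlock {
        background-color: #c00;
        position: absolute;
        left: -4px;
        top: 1px;
        bottom: 1px;
        width: 3px;
    }
</style>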
The controller extension's save action method carries out validation of all the fields and attaches error messages to the page for all validation failures:

if (String.IsBlank(cont.name))
{
    ApexPages.addMessage(new ApexPages.Message(
        ApexPages.Severity.ERROR,
        'Please enter the contact name'));
    error=true;
}
if ( (String.IsBlank(cont.Email)) && (String.IsBlank(cont.Phone)) )
{
    ApexPages.addMessage(new ApexPages.Message(
        ApexPages.Severity.ERROR,
        'Please supply the email address or phone number'));
    error=true;
}

Styling table columns as required

When maintaining records that have required fields through a table, using regular input fields can end up with an unsightly collection of red bars striped across the table. Now, we will create a Visualforce page that allows a user to create a number of contact records via a table. The contact Last Name column header will be marked as required, rather than the individual inputs.

Getting ready

This topic makes use of a custom controller, so this will need to be created before the Visualforce page.

How to do it…

1. First, create the custom controller by navigating to the Apex Classes setup page: click on Your Name | Setup | Develop | Apex Classes.
2. Click on the New button.
3. Paste the contents of the RequiredColumnController.cls Apex class from the code downloaded into the Apex Class area.
4. Click on the Save button.
5. Next, create a Visualforce page by navigating to the Visualforce setup page: click on Your Name | Setup | Develop | Visualforce Pages.
6. Click on the New button.
7. Enter RequiredColumn in the Label field. Accept the default RequiredColumn that is automatically generated for the Name field.
8. Paste the contents of the RequiredColumn.page file from the code downloaded into the Visualforce Markup area and click on the Save button.
9. Navigate back to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages.
10. Locate the entry for the RequiredColumn page and click on the Security link.
11. On the resulting page, select which profiles should have access and click on the Save button.

How it works…

Opening the following URL in your browser displays the RequiredColumn page: https://<instance>/apex/RequiredColumn. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com. The Last Name column header is styled in red, indicating that this is a required field.

Attempting to create a record where only First Name is specified results in an error message being displayed against the Last Name input for that particular row.

The Visualforce page sets the required attribute on the inputField components in the Last Name column to false, which removes the red bar from the component:

<apex:column >
  <apex:facet name="header">
    <apex:outputText styleclass="requiredHeader"
        value="{!$ObjectType.Contact.fields.LastName.label}" />
  </apex:facet>
  <apex:inputField value="{!contact.LastName}" required="false"/>
</apex:column>

The Visualforce page's custom controller Save method checks whether any of the fields in a row are populated and, if so, checks that the last name is present. If the last name is missing from any record, an error is added. If an error is added to any record, the save does not complete:

if ( (!String.IsBlank(cont.FirstName)) ||
     (!String.IsBlank(cont.LastName)) )
{
    // a field is defined - check for last name
    if (String.IsBlank(cont.LastName))
    {
        error=true;
        cont.LastName.addError('Please enter a value');
    }
}

String.IsBlank() is used as it carries out three checks at once: that the supplied string is not null, that it is not empty, and that it does not contain only whitespace.

Summary

In this article, we successfully mastered the techniques for styling fields and table columns as required.

Resources for Article:

Further resources on this subject:
- Custom Components in Visualforce [Article]
- Visualforce Development with Apex [Article]
- Learning How to Manage Records in Visualforce [Article]

Server-side Swift: Building a Slack Bot, Part 2

Peter Zignego
13 Oct 2016
5 min read
In Part 1 of this series, I introduced you to SlackKit and Zewo, which allow us to build and deploy a Slack bot written in Swift to a Linux server. Here in Part 2, we will finish the app, showing all of the Swift code. We will also show how to get an API token, how to test the app and deploy it on Heroku, and finally how to launch it.

Show Me the Swift Code!

Finally, some Swift code! To create our bot, we need to edit our main.swift file to contain our bot logic:

import String
import SlackKit

class Leaderboard: MessageEventsDelegate {
    // A dictionary to hold our leaderboard
    var leaderboard: [String: Int] = [String: Int]()
    let atSet = CharacterSet(characters: ["@"])
    // A SlackKit client instance
    let client: SlackClient

    // Initalize the leaderboard with a valid Slack API token
    init(token: String) {
        client = SlackClient(apiToken: token)
        client.messageEventsDelegate = self
    }

    // Enum to hold commands the bot knows
    enum Command: String {
        case Leaderboard = "leaderboard"
    }

    // Enum to hold logic that triggers certain bot behaviors
    enum Trigger: String {
        case PlusPlus = "++"
        case MinusMinus = "--"
    }

    // MARK: MessageEventsDelegate
    // Listen to the messages that are coming in over the Slack RTM connection
    func messageReceived(message: Message) {
        listen(message: message)
    }

    func messageSent(message: Message) {}
    func messageChanged(message: Message) {}
    func messageDeleted(message: Message?) {}

    // MARK: Leaderboard Internal Logic
    private func listen(message: Message) {
        // If a message contains our bot's user ID and a recognized command, handle that command
        if let id = client.authenticatedUser?.id, text = message.text {
            if text.lowercased().contains(query: Command.Leaderboard.rawValue) && text.contains(query: id) {
                handleCommand(command: .Leaderboard, channel: message.channel)
            }
        }
        // If a message contains a trigger value, handle that trigger
        if message.text?.contains(query: Trigger.PlusPlus.rawValue) == true {
            handleMessageWithTrigger(message: message, trigger: .PlusPlus)
        }
        if message.text?.contains(query: Trigger.MinusMinus.rawValue) == true {
            handleMessageWithTrigger(message: message, trigger: .MinusMinus)
        }
    }

    // Text parsing can be messy when you don't have Foundation...
    private func handleMessageWithTrigger(message: Message, trigger: Trigger) {
        if let text = message.text, start = text.index(of: "@"), end = text.index(of: trigger.rawValue) {
            let string = String(text.characters[start...end].dropLast().dropFirst())
            let users = client.users.values.filter { $0.id == self.userID(string: string) }
            // If the receiver of the trigger is a user, use their user ID
            if users.count > 0 {
                let idString = userID(string: string)
                initalizationForValue(dictionary: &leaderboard, value: idString)
                scoringForValue(dictionary: &leaderboard, value: idString, trigger: trigger)
            // Otherwise just store the receiver value as is
            } else {
                initalizationForValue(dictionary: &leaderboard, value: string)
                scoringForValue(dictionary: &leaderboard, value: string, trigger: trigger)
            }
        }
    }

    // Handle recognized commands
    private func handleCommand(command: Command, channel: String?) {
        switch command {
        case .Leaderboard:
            // Send message to the channel with the leaderboard attached
            if let id = channel {
                client.webAPI.sendMessage(channel: id,
                                          text: "Leaderboard",
                                          linkNames: true,
                                          attachments: [constructLeaderboardAttachment()],
                                          success: { (response) in
                                          },
                                          failure: { (error) in
                                              print("Leaderboard failed to post due to error: \(error)")
                                          })
            }
        }
    }

    private func initalizationForValue(dictionary: inout [String: Int], value: String) {
        if dictionary[value] == nil {
            dictionary[value] = 0
        }
    }

    private func scoringForValue(dictionary: inout [String: Int], value: String, trigger: Trigger) {
        switch trigger {
        case .PlusPlus:
            dictionary[value]? += 1
        case .MinusMinus:
            dictionary[value]? -= 1
        }
    }

    // MARK: Leaderboard Interface
    private func constructLeaderboardAttachment() -> Attachment? {
        let

Great! But we'll need to replace the dummy API token with the real deal before anything will work.

Getting an API Token

We need to create a bot integration in Slack. You'll need a Slack instance that you have administrator access to. If you don't already have one of those to play with, go sign up. Slack is free for small teams:

1. Create a new bot here.
2. Enter a name for your bot. I'm going to use "leaderbot".
3. Click on "Add Bot Integration".
4. Copy the API token that Slack generates and replace the placeholder token at the bottom of main.swift with it.

Testing 1, 2, 3…

Now that we have our API token, we're ready to do some local testing. Back in Xcode, select the leaderbot command-line application target and run your bot (⌘+R). When we go and look at Slack, our leaderbot's activity indicator should show that it's online. It's alive! To ensure that it's working, we should give our helpful little bot some karma points:

@leaderbot++

And ask it to see the leaderboard:

@leaderbot leaderboard

Head in the Clouds

Now that we've verified that our leaderboard bot works locally, it's time to deploy it. We are deploying on Heroku, so if you don't have an account, go and sign up for a free one.

First, we need to add a Procfile for Heroku. Back in the terminal, run:

echo slackbot: .build/debug/leaderbot > Procfile

Next, let's check in our code:

git init
git add .
git commit -am 'leaderbot powering up'

Finally, we'll set up Heroku:

1. Install the Heroku toolbelt.
2. Log in to Heroku in your terminal: heroku login
3. Create our application on Heroku and set our buildpack: heroku create --buildpack https://github.com/pvzig/heroku-buildpack-swift.git leaderbot
4. Set up our Heroku remote: heroku git:remote -a leaderbot
5. Push to master: git push heroku master

Once you push to master, you'll see Heroku going through the process of building your application.

Launch!

When the build is complete, all that's left to do is to run our bot:

heroku run:detached slackbot

Like when we tested locally, our bot should become active and respond to our commands!

You're Done!

Congratulations, you've successfully built and deployed a Slack bot written in Swift onto a Linux server!

Built With:

- Jay: Pure-Swift JSON parser and formatter
- kylef's Heroku buildpack for Swift
- Open Swift: Open source cross-project standards for Swift
- SlackKit: A Slack client library
- Zewo: Open source libraries for modern server software

Disclaimer

The Linux version of SlackKit should be considered an alpha release. It's a fun tech demo to show what's possible with Swift on the server, not something to be relied upon. Feel free to report issues you come across.

About the author

Peter Zignego is an iOS developer in Durham, North Carolina, USA.
He writes at bytesized.co, tweets at @pvzig, and freelances at Launch Software.

Server-side Swift: Building a Slack Bot, Part 1

Peter Zignego
12 Oct 2016
5 min read
As a remote iOS developer, I love Slack. It's my meeting room and my water cooler over the course of a work day. If you're not familiar with Slack, it is a group communication tool popular in Silicon Valley and beyond. What makes Slack valuable beyond replacing email as the go-to communication method for businesses is that it is more than chat; it is a platform. Thanks to Slack's open attitude toward developers with its API, hundreds of developers have been building what have become known as Slack bots.

There are many different libraries available to help you start writing your Slack bot, covering a wide range of programming languages. I wrote a library in Apple's new programming language (Swift) for this very purpose, called SlackKit. SlackKit wasn't very practical initially—it only ran on iOS and OS X. On the modern web, you need to support Linux to deploy on Amazon Web Services, Heroku, or hosted server companies such as Linode and Digital Ocean. But last June, Apple open sourced Swift, including official support for Linux (Ubuntu 14 and 15, specifically). This made it possible to deploy Swift code on Linux servers, and developers hit the ground running to build out the infrastructure needed to make Swift a viable language for server applications. Even with this huge developer effort, it is still early days for server-side Swift. Apple's port of Foundation to Linux is a huge undertaking, as is the work on libdispatch, the concurrency framework that provides much of the underpinning for Foundation. In addition to rough official tooling, writing code for server-side Swift can be a bit like hitting a moving target, with biweekly snapshot releases and multiple, ABI-incompatible versions to target.

Zewo to Sixty on Linux
Fortunately, there are some good options for deploying Swift code on servers right now, even with Apple's libraries in flux. I'm going to focus in on one in particular: Zewo. Zewo is modular by design, allowing us to use the Swift Package Manager to pull in only what we need instead of a monolithic framework. It's open source and has a great community of developers that spans the globe. If you're interested in the world of server-side Swift, you should get involved! Oh, and of course they have a Slack. Using Zewo and a few other open source libraries, I was able to build a version of SlackKit that runs on Linux.

A Swift Tutorial
In this two-part post series I have detailed a step-by-step guide to writing a Slack bot in Swift and deploying it to Heroku. I'm going to be using OS X, but this is also achievable on Linux using the editor of your choice.

Prerequisites
Install Homebrew:

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Install swiftenv:

brew install kylef/formulae/swiftenv

Configure your shell:

echo 'if which swiftenv > /dev/null; then eval "$(swiftenv init -)"; fi' >> ~/.bash_profile

Download and install the latest Zewo-compatible snapshot:

swiftenv install DEVELOPMENT-SNAPSHOT-2016-05-09-a
swiftenv local DEVELOPMENT-SNAPSHOT-2016-05-09-a

Install and link OpenSSL:

brew install openssl
brew link openssl --force

Let's Keep Score
The sample application we'll be building is a leaderboard for Slack, like PlusPlus++ by Betaworks. It works like this: add a point for every @thing++, subtract a point for every @thing--, and show a leaderboard when asked @botname leaderboard. First, we need to create the directory for our application and initialize the basic project structure.
mkdir leaderbot && cd leaderbot
swift build --init

Next, we need to edit Package.swift to add our dependency, SlackKit:

import PackageDescription

let package = Package(
    name: "Leaderbot",
    targets: [],
    dependencies: [
        .Package(url: "https://github.com/pvzig/SlackKit.git", majorVersion: 0, minor: 0),
    ]
)

SlackKit is dependent on several Zewo libraries, but thanks to the Swift Package Manager, we don't have to worry about importing them explicitly. Then we need to build our dependencies:

swift build

And our development environment (we need to pass in some linker flags so that swift build knows where to find the version of OpenSSL we installed via Homebrew and the C modules that some of our Zewo libraries depend on):

swift build -Xlinker -L$(pwd)/.build/debug/ -Xswiftc -I/usr/local/include -Xlinker -L/usr/local/lib

In Part 2, I will show all of the Swift code, how to get an API token, how to test the app and deploy it on Heroku, and finally how to launch it.

Disclaimer
The Linux version of SlackKit should be considered an alpha release. It's a fun tech demo to show what's possible with Swift on the server, not something to be relied upon. Feel free to report issues you come across.

About the author
Peter Zignego is an iOS developer in Durham, North Carolina. He writes at bytesized.co, tweets @pvzig, and freelances at Launch Software.

Create a User Profile System and use the Null Coalesce Operator

Packt
12 Oct 2016
15 min read
In this article by Jose Palala and Martin Helmich, authors of PHP 7 Programming Blueprints, we will show you how to build a simple profiles page with listed users which you can click on, and how to create a simple CRUD-like system which will enable us to register new users to the system and delete users for banning purposes. (For more resources related to this topic, see here.)

You will learn to use the PHP 7 null coalesce operator so that you can show data if there is any, or just display a simple message if there isn't any.

Let's create a simple UserProfile class. The ability to create classes has been available since PHP 5. A class in PHP starts with the keyword class, followed by the name of the class:

class UserProfile {
    private $table = 'user_profiles';
}

We've made the $table property private and used it to define which database table the class relates to. Let's add two functions, also known as methods, inside the class to fetch data from the database:

function fetch_one($id) {
    $link = mysqli_connect('127.0.0.1', 'root', 'apassword', 'my_database');
    $query = "SELECT * FROM " . $this->table . " WHERE id = '" . $id . "'";
    $results = mysqli_query($link, $query);
    return $results;
}

function fetch_all() {
    $link = mysqli_connect('127.0.0.1', 'root', 'apassword', 'my_database');
    $query = "SELECT * FROM " . $this->table;
    $results = mysqli_query($link, $query);
    return $results;
}

The null coalesce operator
We can use PHP 7's null coalesce operator to check whether our results contain anything, or to return a defined message instead, which the view—responsible for displaying any data—can then check for. Let's put the message in a file which will contain all the define statements:

//definitions.php
define('NO_RESULTS_MESSAGE', 'No results found');

require('definitions.php');

function fetch_all() {
    // ...same lines as before...
    $results = $results ?? NO_RESULTS_MESSAGE;
    return $results;
}

On the client side, we'll need to come up with a template to show the list of user profiles. Each profile can be a div element with several list item elements to output each field. In the following function, we make sure that at least the name and the age have been filled in, and then simply return the entire string when the function is called:

function profile_template($name, $age, $country) {
    $name = $name ?? null;
    $age = $age ?? null;
    if ($name === null || $age === null) {
        return 'Name or Age need to be set';
    } else {
        return '<div>
            <li>Name: ' . $name . '</li>
            <li>Age: ' . $age . '</li>
            <li>Country: ' . $country . '</li>
        </div>';
    }
}

Separation of concerns
In a proper MVC architecture, we need to separate the view from the models that get our data, while the controllers are responsible for handling business logic. In our simple app, we will skip the controller layer, since we just want to display the user profiles on one public-facing page. The preceding function is also known as the template render part of an MVC architecture.

While there are frameworks available for PHP that use the MVC architecture out of the box, for now we can stick to what we have and make it work. PHP frameworks can benefit a lot from the null coalesce operator. In some code that I've worked with, we used the ternary operator a lot, but still had to add more checks to ensure a value was not falsy. Furthermore, the ternary operator can get confusing, and takes some getting used to. The other alternative is to use the isset function.
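To make the difference concrete, here is a minimal, illustrative comparison (the parameter name is ours, not taken from the book's code):

// Pre-PHP 7 idiom: ternary guarded by isset()
$country = isset($_GET['country']) ? $_GET['country'] : 'Unknown';

// PHP 7: the null coalesce operator performs the same check more readably,
// and raises no notice when the index is undefined
$country = $_GET['country'] ?? 'Unknown';

Both lines fall back to 'Unknown' when the parameter is missing or null; the second is simply shorter and harder to get wrong.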
However, due to the nature of the isset function, some falsy values, such as an empty string, still count as set, so additional checks are often needed anyway.

Creating views
Now that we have our model complete and a template render function, we just need to create the view with which we can look at each profile. Our view will be put inside a foreach block, and we'll use the template we wrote to render the right values:

//listprofiles.php
<!doctype html>
<html>
<head>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css">
</head>
<body>
<?php
foreach ($results as $item) {
    echo profile_template($item->name, $item->age, $item->country);
}
?>
</body>
</html>

Let's put the code above into index.php. While we could install the Apache server, configure it to run PHP, set up new virtual hosts, and take all the other necessary steps, that would take time; for the purposes of testing this out, we can just run PHP's built-in development server (read more at http://php.net/manual/en/features.commandline.webserver.php) from inside the folder we are working in:

php -S localhost:8000

If we open up our browser, we should see nothing yet—"No results found". This means we need to populate our database. If you have an error with your database connection, be sure to replace the database credentials we supplied to each of the mysqli_connect calls with the correct ones. To supply data to our database, we can create a simple SQL script like this:

INSERT INTO user_profiles (name, age, country) VALUES ('Chin Wu', 30, 'Mongolia');
INSERT INTO user_profiles (name, age, country) VALUES ('Erik Schmidt', 22, 'Germany');
INSERT INTO user_profiles (name, age, country) VALUES ('Rashma Naru', 33, 'India');

Let's save it in a file such as insert_profiles.sql. In the same directory as the SQL file, log on to the MySQL client by using the following command:

mysql -u root -p

Then select your database:

mysql> use <database>;

Import the script by running the source command:

mysql> source insert_profiles.sql

Now our user profiles page should show the three profiles.

Create a profile input form
Now let's create the HTML form for users to enter their profile data. Our profiles app would be of no use if we didn't have a simple way for a user to enter their user profile details. We'll create the profile input form like this:

//create_profile.php
<html>
<body>
<form action="post_profile.php" method="POST">
    <label>Name</label><input name="name">
    <label>Age</label><input name="age">
    <label>Country</label><input name="country">
    <input type="submit" value="Save">
</form>
</body>
</html>

For this profile post, we'll need to create a PHP script to take care of anything the user posts. It will create an SQL statement from the input values and output whether or not they were inserted. We can use the null coalesce operator again to verify that the user has submitted all values, so that nothing is left undefined or null:

$name = $_POST['name'] ?? "";
$age = $_POST['age'] ?? "";
$country = $_POST['country'] ?? "";

This prevents us from accumulating errors while inserting data into our database. First, let's create a variable to hold each of the inputs in one array:

$input_values = [
    'name' => $name,
    'age' => $age,
    'country' => $country
];

The preceding code uses the short array syntax introduced in PHP 5.4; it is no longer necessary to write out an actual array(), and the author personally likes the new syntax better.
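Since every field now falls back to an empty string, it is worth rejecting half-empty submissions before touching the database. Here is a minimal guard of our own (the helper logic is illustrative and not part of the book's listings):

// Collect the fields that arrived empty
$missing = array_filter($input_values, function ($value) {
    return trim($value) === '';
});
if (count($missing) > 0) {
    // Refuse the insert and name the offending fields
    exit('Please fill in: ' . implode(', ', array_keys($missing)));
}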
We should create a new method in our UserProfile class to accept these values:

class UserProfile {
    public function insert_profile($values) {
        $link = mysqli_connect('127.0.0.1', 'username', 'password', 'databasename');
        $q = "INSERT INTO " . $this->table . " VALUES ('" . $values['name'] . "', '" . $values['age'] . "', '" . $values['country'] . "')";
        return mysqli_query($link, $q);
    }
}

Instead of creating a parameter in our function to hold each argument, as we did with our profile template render function, we can simply use an array to hold our values. This way, if a new field needs to be inserted into our database, we can just add another field to the SQL insert statement.

While we are at it, let's create the edit profile section. For now, we'll assume that whoever is using this edit profile page is the administrator of the site. We'll need to create a page where, provided $_GET['id'] has been set, the matching user is fetched from the database and displayed in the form:

<?php
require('class/userprofile.php'); // contains the UserProfile class
$id = $_GET['id'] ?? 'No ID';
// if $id is a string, i.e. "No ID", it will not pass the numeric check below
if (is_numeric($id)) {
    $profile = new UserProfile();
    // get data from our database
    $results = $profile->fetch_one($id);
    if ($results && $results->num_rows > 0) {
        while ($obj = $results->fetch_object()) {
            $name = $obj->name;
            $age = $obj->age;
            $country = $obj->country;
        }
        // display the form with a hidden field containing the value of the ID
?>
<form action="post_update_profile.php" method="post">
    <input type="hidden" name="id" value="<?=$id?>">
    <label>Name</label><input name="name" value="<?=$name?>">
    <label>Age</label><input name="age" value="<?=$age?>">
    <label>Country</label><input name="country" value="<?=$country?>">
</form>
<?php
    } else {
        exit('No such user');
    }
} else {
    echo $id; // this should be 'No ID'
    exit;
}

Notice that we're using what is known as the shortcut echo statement in the form. It makes our code simpler and easier to read. Since we're using PHP 7, this feature comes out of the box. Once someone submits the form, it goes into our $_POST variable, and we'll create a new update function in our UserProfile class.

Admin system
Let's finish off by creating a simple grid for an admin dashboard portal that will be used with our user profiles database. Our requirement for this is simple: a table-based layout that displays each user profile in a row, with links to edit, view, or delete each profile. The HTML to display such a table in our view would look like this:

<table>
  <tr>
    <td>John Doe</td>
    <td>21</td>
    <td>USA</td>
    <td><a href="edit_profile.php?id=1">Edit</a></td>
    <td><a href="profileview.php?id=1">View</a></td>
    <td><a href="delete_profile.php?id=1">Delete</a></td>
  </tr>
</table>

The script to generate it is the following:

//listprofiles.php
$sql = "SELECT * FROM user_profiles LIMIT $start, $limit";
$rs_result = mysqli_query($mysqli, $sql); // run the query
while ($row = mysqli_fetch_assoc($rs_result)) {
?>
  <tr>
    <td><?=$row['name'];?></td>
    <td><?=$row['age'];?></td>
    <td><?=$row['country'];?></td>
    <td><a href="edit_profile.php?id=<?=$row['id']?>">Edit</a></td>
    <td><a href="profileview.php?id=<?=$row['id']?>">View</a></td>
    <td><a href="delete_profile.php?id=<?=$row['id']?>">Delete</a></td>
  </tr>
<?php
}

There's one thing that we haven't yet created: the delete_profile.php page. The view and edit pages have been discussed already.
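One caution before we wire up deletion: the grid echoes database values straight into HTML. A small escaping helper, our own addition rather than part of the book's listings, keeps stored values from injecting markup:

// Escape HTML special characters before output
function e($value) {
    return htmlspecialchars($value, ENT_QUOTES, 'UTF-8');
}

// Used inside the loop: <td><?= e($row['name']); ?></td>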
Here's how the delete_profile.php page would look:

<?php
//delete_profile.php
$connection = mysqli_connect('localhost', '<username>', '<password>', '<databasename>');
$id = $_GET['id'] ?? 'No ID';
if (!is_numeric($id)) {
    exit('Error: non-numeric $id');
} else {
    mysqli_query($connection, "DELETE FROM user_profiles WHERE id = '" . $id . "'");
    echo "Profile #" . $id . " has been deleted";
}
?>

Of course, since we might have a lot of user profiles in our database, we have to create a simple pagination. In any pagination system, you just need to figure out the total number of rows and how many rows you want displayed per page. We can create a function that will return a URL containing the page number and how many records to view per page.

For our database queries, we first create a new function to count the total number of items in our table:

class UserProfile {
    // ... etc ...
    function count_rows($table) {
        $dbconn = new mysqli('localhost', 'root', 'somepass', 'databasename');
        $query = $dbconn->query("SELECT COUNT(*) AS num FROM " . $table);
        $total_pages = mysqli_fetch_array($query);
        return $total_pages['num']; // fetching by array, so element 'num' = count
    }
}

For our pagination, we can create a simple paginate function which accepts the base URL of the page where we have pagination, the rows per page — also known as the number of records we want each page to have — and the total number of records found:

require('definitions.php');
require('db.php'); // our database class

function paginate($base_url, $rows_per_page, $total_rows) {
    $pagination_links = array(); // instantiate an array to hold our HTML page links
    // we can use null coalesce to fall back to safe defaults if an input is null
    $rows_per_page = $rows_per_page ?? 0;
    $total_rows = $total_rows ?? 0;
    if ($rows_per_page == 0 || $total_rows == 0) {
        // we exit with an error message if this function is called incorrectly
        exit('Error: no rows per page or no total rows');
    }
    $pages = ceil($total_rows / $rows_per_page);
    for ($pagenum = 1; $pagenum <= $pages; $pagenum++) {
        $pagination_links[$pagenum] = '<a href="http://' . $base_url . '?pagenum=' . $pagenum . '&rpp=' . $rows_per_page . '">' . $pagenum . '</a>';
    }
    return $pagination_links;
}

This function will help display the page links in a table:

function display_pagination($links) {
    $display = '<div class="pagination"><table><tr>';
    foreach ($links as $link) {
        $display .= '<td>' . $link . '</td>';
    }
    $display .= '</tr></table></div>';
    return $display;
}

Notice that we're following the principle that there should rarely be any echo statements inside a function. This is because we want to make sure that other users of these functions are not confused when they debug some mysterious output on their page. By requiring the programmer to echo out whatever the functions return, it becomes easier to debug our program. Also, we're following the separation of concerns—our code doesn't output the display, it just formats the display. So any future programmer can just update the function's internal code and return something else. It also makes our function reusable; imagine that in the future someone uses our function—this way, they won't have to double-check for a misplaced echo statement within it.

A note on alternative short tags
As you know, another way to echo is to use the <?= tag. You can use it like so: <?="helloworld"?>. These are known as short tags. In PHP 7, alternative PHP tags have been removed. The RFC states that <%, <%=, %> and <script language=php> are gone.
The RFC at https://wiki.php.net/rfc/remove_alternative_php_tags notes that it does not remove short opening tags (<?) or short opening tags with echo (<?=). Since we have laid the groundwork for creating paginated links, we now just have to invoke our functions. The following script is all that is needed to create a paginated page using the preceding functions:

$mysqli = mysqli_connect('localhost', '<username>', '<password>', '<dbname>');
$limit = $_GET['rpp'] ?? 10;      // how many items to show per page; default 10
$pagenum = $_GET['pagenum'] ?? 1; // what page we are on; default 1
$start = ($pagenum - 1) * $limit; // first item to display on this page

/* Display records here */
$sql = "SELECT * FROM user_profiles LIMIT $start, $limit";
$rs_result = mysqli_query($mysqli, $sql); // run the query
while ($row = mysqli_fetch_assoc($rs_result)) {
?>
  <tr>
    <td><?php echo $row['name']; ?></td>
    <td><?php echo $row['age']; ?></td>
    <td><?php echo $row['country']; ?></td>
  </tr>
<?php
}

/* Let's show our page links */
/* get the number of records */
$record_count = $db->count_rows('user_profiles');
$pagination_links = paginate('listprofiles.php', $limit, $record_count);
echo display_pagination($pagination_links);

The HTML output of our page links in listprofiles.php will look something like this:

<div class="pagination"><table><tr>
  <td><a href="listprofiles.php?pagenum=1&rpp=10">1</a></td>
  <td><a href="listprofiles.php?pagenum=2&rpp=10">2</a></td>
  <td><a href="listprofiles.php?pagenum=3&rpp=10">3</a></td>
</tr></table></div>

Summary
As you can see, there are plenty of use cases for the null coalesce operator. We learned how to make a simple user profile system, and how to use PHP 7's null coalesce operator when fetching data from the database, where it lets us fall back to a default when no records are returned. We also learned that the null coalesce operator is similar to a ternary operator, except that it returns its right-hand operand only when the left-hand side is null or not set.

Resources for Article:
Further resources on this subject:
Running Simpletest and PHPUnit [article]
Mapping Requirements for a Modular Web Shop App [article]
HTML5: Generic Containers [article]

Getting Organized with NPM and Bower

Packt
06 Oct 2016
13 min read
In this article by Philip Klauzinski and John Moore, the authors of the book Mastering JavaScript Single Page Application Development, we will learn about the basics of NPM and Bower. JavaScript was the bane of the web development industry during the early days of the browser-rendered Internet. Now, it powers hugely impactful libraries such as jQuery, and JavaScript-rendered content (as opposed to server-side-rendered content) is even indexed by many search engines. What was once largely considered an annoying language used primarily to generate popup windows and alert boxes has now become, arguably, the most popular programming language in the world. (For more resources related to this topic, see here.)

Not only is JavaScript now more prevalent than ever in frontend architecture, but it has become a server-side language as well, thanks to the Node.js runtime. We have also now seen the proliferation of document-oriented databases, such as MongoDB, which store and return JSON data. With JavaScript present throughout the development stack, the door is now open for JavaScript developers to become full-stack developers without the need to learn a traditional server-side language. Given the right tools and know-how, any JavaScript developer can create single page applications (SPAs) comprising entirely the language they know best, and they can do so using an architecture such as MEAN (MongoDB, Express, AngularJS, and Node.js).

Organization is key to the development of any complex single page application. If you don't get organized from the beginning, you are sure to introduce an inordinate number of regressions to your app. The Node.js ecosystem will help you do this with a full suite of indispensable and open source tools, two of which we will discuss here. In this article, you will learn about:

Node Package Manager
The Bower front-end package manager

What is Node Package Manager?
Within any full-stack JavaScript environment, Node Package Manager (NPM) will be your go-to tool for setting up your development environment and managing server-side libraries. NPM can be used within both global and isolated environment contexts. We will first explore the use of NPM globally.

Installing Node.js and NPM
NPM is a component of Node.js, so before you can use it, you must install Node.js. You can find installers for both Mac and Windows at nodejs.org. Once you have Node.js installed, using NPM is incredibly easy and is done from the command-line interface (CLI). Start by ensuring you have the latest version of NPM installed, as it is updated more often than Node.js itself:

$ npm install -g npm

When using NPM, the -g option will apply your changes to your global environment. In this case, you want your version of NPM to apply globally. As stated previously, NPM can be used to manage packages both globally and within isolated environments. Therefore, we want essential development tools to be applied globally so that you can use them in multiple projects on the same system. On Mac and some Unix-based systems, you may have to run the npm command as the superuser (prefix the command with sudo) in order to install packages globally, depending on how NPM was installed. If you run into this issue and wish to remove the need to prefix npm with sudo, see docs.npmjs.com/getting-started/fixing-npm-permissions.

Configuring your package.json file
For any project you develop, you will keep a local package.json file to manage your Node.js dependencies.
This file should be stored at the root of your project directory, and it will only pertain to that isolated environment. This allows you to have multiple Node.js projects with different dependency chains on the same system. When beginning a new project, you can automate the creation of the package.json file from the command line:

$ npm init

Running npm init will take you through a series of JSON property names to define through command-line prompts, including your app's name, version number, description, and more. The name and version properties are required, and your Node.js package will not install without them being defined. Several of the properties will have a default value given within parentheses in the prompt so that you may simply hit Enter to continue. Other properties will simply allow you to hit Enter with a blank entry, in which case they are either omitted from the package.json file or saved with a blank value:

name: (my-app)
version: (1.0.0)
description:
entry point: (index.js)

The entry point prompt will be defined as the main property in package.json and is not necessary unless you are developing a Node.js application. In our case, we can forgo this field. The npm init command may in fact force you to save the main property, so you will have to edit package.json afterward to remove it; however, that field will have no effect on your web app. You may also choose to create the package.json file manually using a text editor, if you know the appropriate structure to employ. Whichever method you choose, your initial version of the package.json file should look similar to the following example:

{
  "name": "my-app",
  "version": "1.0.0",
  "author": "Philip Klauzinski",
  "license": "MIT",
  "description": "My JavaScript single page application."
}

If you want your project to be private and want to ensure that it does not accidentally get published to the NPM registry, you may want to add the private property to your package.json file and set it to true. Additionally, you may remove some properties that only apply to a registered package:

{
  "name": "my-app",
  "author": "Philip Klauzinski",
  "description": "My JavaScript single page application.",
  "private": true
}

Once you have your package.json file set up the way you like it, you can begin installing Node.js packages locally for your app. This is where the importance of dependencies begins to surface.

NPM dependencies
There are three types of dependencies that can be defined for any Node.js project in your package.json file: dependencies, devDependencies, and peerDependencies. For the purpose of building a web-based SPA, you will only need to use the devDependencies declaration. The devDependencies are those dependencies required for developing your application, but not required for its production environment or for simply running it. If other developers want to contribute to your Node.js application, they will need to run npm install from the command line to set up the proper development environment. For information on the other types of dependencies, see docs.npmjs.com.

When adding devDependencies to your package.json file, the command line again comes to the rescue. Let's use the installation of Browserify as an example:

$ npm install browserify --save-dev

This will install Browserify locally and save it, along with its version range, to the devDependencies object in your package.json file.
Once installed, your package.json file should look similar to the following example:

{
  "name": "my-app",
  "version": "1.0.0",
  "author": "Philip Klauzinski",
  "license": "MIT",
  "devDependencies": {
    "browserify": "^12.0.1"
  }
}

The devDependencies object will store each package as a key-value pair, in which the key is the package name and the value is the version number or version range. Node.js uses semantic versioning, where the three digits of the version number represent MAJOR.MINOR.PATCH. For more information on semantic version formatting, see semver.org.

Updating your development dependencies
You will notice that the version number of the installed package is preceded by a caret (^) symbol by default. This means that package updates will only allow patch and minor updates for versions above 1.0.0. This is meant to prevent major version changes from breaking your dependency chain when updating your packages to the latest versions. To update your devDependencies and save the new version numbers, enter the following from the command line:

$ npm update --save-dev

Alternatively, you can use the -D option as a shortcut for --save-dev:

$ npm update -D

To update all globally installed NPM packages to their latest versions, run npm update with the -g option:

$ npm update -g

For more information on semantic versioning within NPM, see docs.npmjs.com/misc/semver. Now that you have NPM set up and you know how to install your development dependencies, you can move on to installing Bower.

Bower
Bower is a package manager for frontend web assets and libraries. You will use it to maintain your frontend stack and control version chains for libraries such as jQuery, AngularJS, and any other components necessary to your app's web interface.

Installing Bower
Bower is also a Node.js package, so you will install it using NPM, much like you did with the Browserify example installation in the previous section, but this time you will be installing the package globally. This will allow you to run bower from the command line anywhere on your system without having to install it locally for each project:

$ npm install -g bower

You can alternatively install Bower locally as a development dependency so that you may maintain different versions of it for different projects on the same system, but this is generally not necessary:

$ npm install bower --save-dev

Next, check that Bower is properly installed by querying the version from the command line:

$ bower -v

Bower also requires the Git version control system (VCS) to be installed on your system in order to work with packages. This is because Bower communicates directly with GitHub for package management data. If you do not have Git installed on your system, you can find instructions for Linux, Mac, and Windows at git-scm.com.

Configuring your bower.json file
The process of setting up your bower.json file is comparable to that of the package.json file for NPM. It uses the same JSON format, has both dependencies and devDependencies, and can also be automatically created:

$ bower init

Once you type bower init from the command line, you will be prompted to define several properties, with some defaults given within parentheses:

? name: my-app
? version: 0.0.0
? description: My app description.
? main file: index.html
? what types of modules does this package expose? globals
? keywords: my, app, keywords
? authors: Philip Klauzinski
? license: MIT
? homepage: http://gui.ninja
? set currently installed components as dependencies? No
? add commonly ignored files to ignore list? Yes
? would you like to mark this package as private which prevents it from being accidentally published to the registry? Yes

These questions may vary depending on the version of Bower you install. Most properties in the bower.json file are not necessary unless you are publishing your project to the Bower registry, as indicated in the final prompt. You will most likely want to mark your package as private unless you plan to register it and allow others to download it as a Bower package. Once you have created the bower.json file, you can open it in a text editor and change or remove any properties you wish. It should look something like the following example:

{
  "name": "my-app",
  "version": "0.0.0",
  "authors": [
    "Philip Klauzinski"
  ],
  "description": "My app description.",
  "main": "index.html",
  "moduleType": [
    "globals"
  ],
  "keywords": [
    "my",
    "app",
    "keywords"
  ],
  "license": "MIT",
  "homepage": "http://gui.ninja",
  "ignore": [
    "**/.*",
    "node_modules",
    "bower_components",
    "test",
    "tests"
  ],
  "private": true
}

If you wish to keep your project private, you can reduce your bower.json file to two properties before continuing:

{
  "name": "my-app",
  "private": true
}

Once you have the initial version of your bower.json file set up the way you like it, you can begin installing components for your app.

Bower components location and the .bowerrc file
Bower will install components into a directory named bower_components by default. This directory will be located directly under the root of your project. If you wish to install your Bower components under a different directory name, you must create a local system file named .bowerrc and define the custom directory name there:

{
  "directory": "path/to/my_components"
}

An object with only a single directory property name is all that is necessary to define a custom location for your Bower components. There are many other properties that can be configured within a .bowerrc file. For more information on configuring Bower, see bower.io/docs/config/.

Bower dependencies
Bower also allows you to define both the dependencies and devDependencies objects, like NPM. The distinction with Bower, however, is that the dependencies object will contain the components necessary for running your app, while the devDependencies object is reserved for components that you might use for testing, transpiling, or anything that does not need to be included in your frontend stack. Bower packages are managed using the bower command from the CLI. This is a user command, so it does not require superuser (sudo) permissions. Let's begin by installing jQuery as a frontend dependency for your app:

$ bower install jquery --save

The --save option on the command line will save the package and version number to the dependencies object in bower.json. Alternatively, you can use the -S option as a shortcut for --save:

$ bower install jquery -S

Next, let's install the Mocha JavaScript testing framework as a development dependency:

$ bower install mocha --save-dev

In this case, we use --save-dev on the command line to save the package to the devDependencies object instead.
Your bower.json file should now look similar to the following example:

{
  "name": "my-app",
  "private": true,
  "dependencies": {
    "jquery": "~2.1.4"
  },
  "devDependencies": {
    "mocha": "~2.3.4"
  }
}

Alternatively, you can use the -D option as a shortcut for --save-dev:

$ bower install mocha -D

You will notice that the package version numbers are preceded by the tilde (~) symbol by default, in contrast to the caret (^) symbol, as is the case with NPM. The tilde serves as a more stringent guard against package version updates. With a MAJOR.MINOR.PATCH version number, running bower update will only update to the latest patch version. If a version number is composed of only the major and minor versions, bower update will update the package to the latest minor version.

Searching the Bower registry
All registered Bower components are indexed and searchable through the command line. If you don't know the exact package name of a component you wish to install, you can perform a search to retrieve a list of matching names. Most components will have a list of keywords within their bower.json file so that you can more easily find the package without knowing the exact name. For example, you may want to install PhantomJS for headless browser testing:

$ bower search phantomjs

The list returned will include any package with phantomjs in the package name or within its keywords list:

phantom git://github.com/ariya/phantomjs.git
dt-phantomjs git://github.com/keesey/dt-phantomjs
qunit-phantomjs-runner git://github.com/jonkemp/...
parse-cookie-phantomjs git://github.com/sindresorhus/...
highcharts-phantomjs git://github.com/pesla/highcharts-phantomjs.git
mocha-phantomjs git://github.com/metaskills/mocha-phantomjs.git
purescript-phantomjs git://github.com/cxfreeio/purescript-phantomjs.git

You can see from the returned list that the correct package name for PhantomJS is in fact phantom, and not phantomjs. You can then proceed to install the package now that you know the correct name:

$ bower install phantom --save-dev

Now, you have Bower installed and know how to manage your frontend web components and development tools, but how do you integrate them into your SPA? This is where Grunt comes in.

Summary
Now that you have learned to set up an optimal development environment with NPM and supply it with frontend dependencies using Bower, it's time to start learning more about building a real app.

Resources for Article:
Further resources on this subject:
API with MongoDB and Node.js [article]
Tips & Tricks for Ext JS 3.x [article]
Responsive Visualizations Using D3.js and Bootstrap [article]

Frontend development with Bootstrap 4

Packt
06 Oct 2016
19 min read
In this article by Bass Jobsen, author of the book Bootstrap 4 Site Blueprints, we will see why Bootstrap's popularity as a frontend web development framework is easy to understand. It provides a palette of user-friendly, cross-browser-tested solutions for the most standard UI conventions. Its ready-made, community-tested combination of HTML markup, CSS styles, and JavaScript plugins greatly speeds up the task of developing a frontend web interface, and it yields a pleasing result out of the gate. With the fundamental elements in place, we can customize the design on top of a solid foundation. (For more resources related to this topic, see here.)

However, not all that is popular, efficient, and effective is good. Too often, a handy tool can generate and reinforce bad habits; not so with Bootstrap, at least not necessarily so. Those who have followed it from the beginning know that its first release and early updates occasionally favored pragmatic efficiency over best practices. The fact is that some best practices, including semantic markup, mobile-first design, and performance-optimized assets, require extra time and effort to implement.

Quantity and quality
If handled well, I feel that Bootstrap is a boon for the web development community in terms of quality and efficiency. Since developers are attracted to the web development framework, they become part of a coding community that draws them increasingly toward current best practices. From the start, Bootstrap has encouraged the implementation of tried, tested, and future-friendly CSS solutions, from Nicolas Gallagher's Normalize.css to CSS3's displacement of image-heavy design elements. It has also supported (if not always modeled) HTML5 semantic markup.

Improving with age
With the release of v2.0, Bootstrap took responsive design into the mainstream, ensuring that its interface elements could travel well across devices, from desktops to tablets to handhelds. With the v3.0 release, Bootstrap stepped up its game again by providing the following features:

The responsive grid was now mobile-first friendly
Icons now utilized web fonts and, thus, were mobile- and retina-friendly
With the drop of support for IE7, markup and CSS conventions were now leaner and more efficient
Since version 3.2, autoprefixer was required to build Bootstrap

This article is about the v4.0 release. This release contains many improvements and also some new components, while some other components and plugins are dropped. In the following overview, you will find the most important improvements and changes in Bootstrap 4:

Less (Leaner CSS) has been replaced with Sass.
CSS code has been refactored to avoid tag and child selectors.
There is an improved grid system with a new grid tier to better target mobile devices.
The navbar has been rewritten.
It has opt-in flexbox support.
It has a new HTML reset module called Reboot. Reboot extends Nicolas Gallagher's Normalize.css and handles the box-sizing: border-box declarations.
jQuery plugins are written in ES6 now and come with UMD support.
There is improved auto-placement of tooltips and popovers, thanks to the help of a library called Tether.
It has dropped support for Internet Explorer 8, which enables us to swap pixels for rem and em units.
It has added the Card component, which replaces the wells, thumbnails, and panels of earlier versions.
It has dropped the icons in the font format from the Glyphicon Halflings set.
The Affix plugin is dropped; it can be replaced with the position: sticky polyfill (https://github.com/filamentgroup/fixed-sticky).

The power of Sass
When working with Bootstrap, there is the power of Sass to consider. Sass is a preprocessor for CSS. It extends the CSS syntax with variables, mixins, and functions and helps you keep your CSS code DRY (Don't Repeat Yourself). Sass was originally written in Ruby. Nowadays, a fast port of Sass written in C++, called libSass, is available. Bootstrap uses the modern SCSS syntax for Sass instead of the older indented syntax.

Using Bootstrap CLI
You will be introduced to Bootstrap CLI. Instead of using Bootstrap's bundled build process, you can also start a new project by running the Bootstrap CLI. Bootstrap CLI is the command-line interface for Bootstrap 4. It includes some built-in example projects, but you can also use it to employ and deliver your own projects. You'll need the following software installed to get started with Bootstrap CLI:

Node.js 0.12+: Use the installer provided on the Node.js website, which can be found at http://nodejs.org/
With Node installed, run [sudo] npm install -g grunt bower
Git: Use the installer for your OS; Windows users can also try Git for Windows

Gulp is another task runner for the Node.js system. Note that if you prefer Gulp over Grunt, you should install gulp instead of grunt with the following command:

[sudo] npm install -g gulp bower

The Bootstrap CLI is installed through npm by running the following command in your console:

npm install -g bootstrap-cli

This will add the bootstrap command to your system.

Preparing a new Bootstrap project
After installing the Bootstrap CLI, you can create a new Bootstrap project by running the following command in your console:

bootstrap new --template empty-bootstrap-project-gulp

Enter the name of your project for the question "What's the project called? (no spaces)". A new folder with the project name will be created. After the setup process, the directory and file structure of your new project folder should look as shown in the following figure.

The project folder also contains a Gulpfile.js file. Now, you can run the bootstrap watch command in your console and start editing the html/pages/index.html file. The HTML templates are compiled with Panini. Panini is a flat-file compiler that helps you to create HTML pages with consistent layouts and reusable partials with ease. You can read more about Panini at http://foundation.zurb.com/sites/docs/panini.html.

Responsive features and breakpoints
Bootstrap has four breakpoints, at 544, 768, 992, and 1200 pixels, by default. At these breakpoints, your design may adapt to and target specific devices and viewport sizes. Bootstrap's mobile-first and responsive grid(s) also use these breakpoints. You can read more about the grids later on. You can use these breakpoints to specify and name the viewport ranges. The extra small (xs) range is for portrait phones with a viewport smaller than 544 pixels, the small (sm) range is for landscape phones with viewports smaller than 768 pixels, the medium (md) range is for tablets with viewports smaller than 992 pixels, the large (lg) range is for desktops with viewports from 992 pixels up to 1200 pixels, and finally the extra-large (xl) range is for desktops with a viewport wider than 1200 pixels. The breakpoints are given in pixel values, as the viewport pixel size does not depend on the font size, and modern browsers have already fixed some zooming bugs.
Some people claim that em values should be preferred. To learn more about this, check out the following link: http://zellwk.com/blog/media-query-units/. Those who still prefer em values over pixel values can simply change the $grid-breakpoints variable declaration in the scss/includes/_variables.scss file. To use em values for media queries, the SCSS code should look as follows:

$grid-breakpoints: (
  // Extra small screen / phone
  xs: 0,
  // Small screen / phone
  sm: 34em, // 544px
  // Medium screen / tablet
  md: 48em, // 768px
  // Large screen / desktop
  lg: 62em, // 992px
  // Extra large screen / wide desktop
  xl: 75em // 1200px
);

Note that you will also have to change the $container-max-widths variable declaration. You should change or modify Bootstrap's variables in the local scss/includes/_variables.scss file, as explained at http://bassjobsen.weblogs.fm/preserve_settings_and_customizations_when_updating_bootstrap/. This will ensure that your changes are not overwritten when you update Bootstrap.

The new Reboot module and Normalize.css
When talking about the cascade in CSS, there will, no doubt, be a mention of the browser default settings getting a higher precedence than the author's preferred styling. In other words, anything that is not defined by the author will be assigned a default styling set by the browser. The default styling may differ for each browser, and this behavior plays a major role in many cross-browser issues. To prevent these sorts of problems, you can perform a CSS reset. CSS or HTML resets set a default author style for commonly used HTML elements to make sure that browser default styles do not mess up your pages or render your HTML elements differently in other browsers.

Bootstrap uses Normalize.css, written by Nicolas Gallagher. Normalize.css is a modern, HTML5-ready alternative to CSS resets and can be downloaded from http://necolas.github.io/normalize.css/. It lets browsers render all elements more consistently and makes them adhere to modern standards. Together with some other styles, Normalize.css forms the new Reboot module of Bootstrap.

Box-sizing
The Reboot module also sets the global box-sizing value from content-box to border-box. The box-sizing property is the one that sets the CSS box model used for calculating the dimensions of an element. In fact, box-sizing is not new in CSS, but nonetheless, switching your code to box-sizing: border-box will make your work a lot easier. When using the border-box setting, the calculation of an element's width includes border width and padding. So, changing the border width or padding of an element won't break your layouts.

Predefined CSS classes
Bootstrap ships with predefined CSS classes for everything. You can build a mobile-first responsive grid for your project by only using div elements and the right grid classes. CSS classes for styling other elements and components are also available. Consider the styling of a button in the following HTML code:

<button class="btn btn-warning">Warning!</button>

You will notice that Bootstrap uses two classes to style a single button. The first is the .btn class that gives the button the general button layout styles. The second class is the .btn-warning class that sets the custom colors of the button.

Creating a local Sass structure
Before we can start compiling Bootstrap's Sass code into CSS code, we have to create some local Sass or SCSS files. First, create a new scss subdirectory in your project directory.
In the scss directory, create your main project file called app.scss. Then, create a new subdirectory in the new scss directory named includes. Now, you will have to copy bootstrap.scss and _variables.scss from the Bootstrap source code in the bower_components directory to the new scss/includes directory, as follows:

cp bower_components/bootstrap/scss/bootstrap.scss scss/includes/_bootstrap.scss
cp bower_components/bootstrap/scss/_variables.scss scss/includes/

You will notice that the bootstrap.scss file has been renamed to _bootstrap.scss, starting with an underscore, and has become a partial file now. Import the files you have copied in the previous step into the app.scss file, as follows:

@import "includes/variables";
@import "includes/bootstrap";

Then, open the scss/includes/_bootstrap.scss file and change the import paths for the Bootstrap partial files so that the original code in the bower_components directory will be imported here. Note that we will set the include path for the Sass compiler to the bower_components directory later on. The @import statements should look as shown in the following SCSS code:

// Core variables and mixins
@import "bootstrap/scss/variables";
@import "bootstrap/scss/mixins";
// Reset and dependencies
@import "bootstrap/scss/normalize";

You're importing all of Bootstrap's SCSS code into your project now. When preparing your code for production, you can consider commenting out the partials that you do not require for your project. Modification of scss/includes/_variables.scss is not required, but you can consider removing the !default declarations, because the real default values are set in the original _variables.scss file, which is imported after the local one. Note that the local scss/includes/_variables.scss file does not have to contain a copy of all of Bootstrap's variables. Having them all just makes it easier to modify them for customization; it also ensures that your default values do not change when you are updating Bootstrap.

Setting up your project and requirements
For this project, you'll use the Bootstrap CLI again, as it helps you create a setup for your project comfortably. Bootstrap CLI requires you to have Node.js and Gulp already installed on your system. Now, create a new project by running the following command in your console:

bootstrap new

Enter the name of your project and choose the "An empty new Bootstrap project. Powered by Panini, Sass and Gulp." template. Now your project is ready for you to start the design work. However, before you start, let's first go through an introduction to Sass and the strategies for customization.

The power of Sass in your project
Sass is a preprocessor for CSS code and is an extension of CSS3, which adds nested rules, variables, mixins, functions, selector inheritance, and more.

Creating a local Sass structure
Before we can start compiling Bootstrap's Sass code into CSS code, we have to create some local Sass or SCSS files. First, create a new scss subdirectory in your project directory. In the scss directory, create your main project file and name it app.scss.
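A quick example makes the customization pattern concrete. To change Bootstrap's primary color, you would edit the local copy of the variables file rather than the file under bower_components (the color value here is purely illustrative):

// scss/includes/_variables.scss (local copy)
// Our custom value; the original _variables.scss is imported after this file
// and declares its values with !default, so it will not overwrite our setting
$brand-primary: #2a9fd6;

Because Bootstrap declares every variable with !default, a variable that has already been assigned keeps its value, which is exactly why the local file can safely precede the framework's own variables and survive Bootstrap updates.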
Using the CLI and running the code from GitHub
Install the Bootstrap CLI using the following commands in your console:

[sudo] npm install -g gulp bower
npm install bootstrap-cli --global

Then, use the following command to set up a Bootstrap 4 Weblog project:

bootstrap new --repo https://github.com/bassjobsen/bootstrap-weblog.git

The following figure shows the end result of your efforts.

Turning our design into a WordPress theme
WordPress is a very popular CMS (Content Management System); it now powers 25 percent of all sites across the web. WordPress is a free and open source CMS based on PHP. To learn more about WordPress, you can also visit Packt Publishing's WordPress Tech Page at https://www.packtpub.com/tech/wordpress. Now let's turn our design into a WordPress theme. There are many Bootstrap-based themes that we could choose from. We've taken care to integrate Bootstrap's powerful Sass styles and JavaScript plugins with the best practices found for HTML5. It will be to our advantage to use a theme that does the same. We'll use the JBST4 theme for this exercise. JBST4 is a blank WordPress theme built with Bootstrap 4.

Installing the JBST 4 theme
Let's get started by downloading the JBST theme. Navigate to your wordpress/wp-content/themes/ directory and run the following command in your console:

git clone https://github.com/bassjobsen/jbst-4-sass.git jbst-weblog-theme

Then navigate to the new jbst-weblog-theme directory and run the following commands to confirm that everything is working:

npm install
gulp

Download from GitHub
You can download the newest and updated version of this theme from GitHub too. You will find it at https://github.com/bassjobsen/jbst-weblog-theme.

JavaScript events of the Carousel plugin
Bootstrap provides custom events for most of the plugins' unique actions. The Carousel plugin fires the slide.bs.carousel (at the beginning of the slide transition) and slid.bs.carousel (at the end of the slide transition) events. You can use these events to add custom JavaScript code. You can, for instance, change the background color of the body on each slide by adding the following JavaScript to the js/main.js file:

$('.carousel').on('slide.bs.carousel', function () {
  $('body').css('background-color', '#' + (Math.random() * 0xFFFFFF << 0).toString(16));
});

You will notice that the gulp watch task is not set up for the js/main.js file, so you have to run the gulp or bootstrap watch command manually after you are done with the changes. For more advanced changes to a plugin's behavior, you can overwrite its methods by using, for instance, the following JavaScript code:

!function($) {
  var number = 0;
  var tmp = $.fn.carousel.Constructor.prototype.cycle;
  $.fn.carousel.Constructor.prototype.cycle = function (relatedTarget) {
    // custom JavaScript code here
    number = (number % 4) + 1;
    $('body').css('transform', 'rotate(' + number * 90 + 'deg)');
    tmp.call(this); // call the original function
  };
}(jQuery);

The preceding JavaScript sets the transform CSS property without vendor prefixes. The autoprefixer only prefixes your static CSS code. For full browser compatibility, you should add the vendor prefixes in the JavaScript code yourself. Bootstrap exclusively uses CSS3 for its animations, but Internet Explorer 9 doesn't support the necessary CSS properties.

Adding drop-down menus to our navbar
Bootstrap's JavaScript Dropdown plugin enables you to create drop-down menus with ease. You can add these drop-down menus to your navbar too.
Open the html/includes/header.html file in your text editor. You will notice that the Gulp build process uses the Panini HTML compiler to compile our HTML templates into HTML pages. Panini is powered by the Handlebars template language. You can use helpers, iterations, and custom data in your templates. In this example, you'll use the power of Panini to build the navbar items with drop-down menus.
First, create a html/data/productgroups.yml file that contains the titles of the navbar items:

- Shoes
- Clothing
- Accessories
- Women
- Men
- Kids
- All Departments

The preceding code is written in the YAML format. YAML is a human-readable data serialization language that takes concepts from programming languages and ideas from XML; you can read more about it at http://yaml.org/.
Using the data described in the preceding code, you can use the following HTML and template code to build the navbar items:

<ul class="nav navbar-nav navbar-toggleable-sm collapse" id="collapsiblecontent">
{{#each productgroups}}
  <li class="nav-item dropdown {{#ifCond this 'Shoes'}}active{{/ifCond}}">
    <a class="nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button" aria-haspopup="true" aria-expanded="false">
      {{ this }}
    </a>
    <div class="dropdown-menu">
      <a class="dropdown-item" href="#">Action</a>
      <a class="dropdown-item" href="#">Another action</a>
      <a class="dropdown-item" href="#">Something else here</a>
      <div class="dropdown-divider"></div>
      <a class="dropdown-item" href="#">Separated link</a>
    </div>
  </li>
{{/each}}
</ul>

The preceding code uses a for-each loop to build the seven navbar items; each item gets the same drop-down menu, and the Shoes menu gets the active class. Handlebars, and therefore Panini, does not support conditional comparisons by default; the built-in if statement can only handle a single value. You can, however, add a custom helper to enable conditional comparisons. The custom helper that enables the ifCond statement can be found in the html/helpers/ifCond.js file. Read my blog post, How to set up Panini for different environments, at http://bassjobsen.weblogs.fm/set-panini-different-environments/, to learn more about Panini and custom helpers.
The HTML code for the drop-down menus is in accordance with the markup described for the Dropdown plugin at http://getbootstrap.com/components/dropdowns/. The navbar collapses for smaller screen sizes, and by default the drop-down menus look the same at all grid sizes.
Now, you will use your Bootstrap skills to build an Angular 2 app. Angular 2 is the successor of AngularJS. You can read more about Angular 2 at https://angular.io/. It is a toolset for building the framework most suited to your application development; it lets you extend HTML's vocabulary for your application. The resulting environment is extraordinarily expressive, readable, and quick to develop. Angular is maintained by Google and a community of individuals and corporations. I have also published a starting point for Angular 2 with Bootstrap 4 on GitHub. You will find it at the following URL: https://github.com/bassjobsen/angular2-bootstrap4-website-builder. You can install it by simply running the following command in your console:

git clone https://github.com/bassjobsen/angular2-bootstrap4-website-builder.git yourproject

Next, navigate to the new folder, then run the following commands and verify that it works:

npm install
npm start

Other tools to deploy Bootstrap 4
A Brunch skeleton using Bootstrap 4 is available at https://github.com/bassjobsen/brunch-bootstrap4.
Brunch is a frontend web app build tool that builds, lints, compiles, concatenates, and shrinks your HTML5 apps. Read more about Brunch at the official website, which can be found at http://brunch.io/. You can try Brunch by running the following commands in your console:

npm install -g brunch
brunch new -s https://github.com/bassjobsen/brunch-bootstrap4

Notice that the first command requires administrator rights to run. After installing the tool, you can run the following command to build your project:

brunch build

The preceding command will create a new public/index.html file, which you can then open in your browser to check the result.
Yeoman
Yeoman is another build tool. It's a command-line utility that allows the creation of projects utilizing scaffolding templates, called generators. A Yeoman generator that scaffolds out a frontend Bootstrap 4 web app can be found at the following URL: https://github.com/bassjobsen/generator-bootstrap4.
You can run the Yeoman Bootstrap 4 generator by running the following commands in your console:

npm install -g yo
npm install -g generator-bootstrap4
yo bootstrap4
grunt serve

Again, note that the first two commands require administrator rights. The grunt serve command runs a local web server at http://localhost:9000. Point your browser to that address and check the result.
Summary
Beyond this, there is a plethora of resources available for pushing further with Bootstrap. The Bootstrap community is an active and exciting one. This is truly an exciting point in the history of frontend web development. Bootstrap has made its mark in history, and for good reason. Check out my GitHub pages at http://github.com/bassjobsen for new projects and updated sources, or ask me a question on Stack Overflow (http://stackoverflow.com/users/1596547/bass-jobsen).
Resources for Article:
Further resources on this subject:
Gearing Up for Bootstrap 4 [article]
Creating a Responsive Magento Theme with Bootstrap 3 [article]
Responsive Visualizations Using D3.js and Bootstrap [article]
Bootstrap and Angular: Saying Hello!

Packt
04 Oct 2016
6 min read
In this article by Sergey Akopkokhyants, author of the book Learning Web Development with Bootstrap and Angular (Second Edition), we will establish a development environment for the simplest application possible. (For more resources related to this topic, see here.)
Development environment setup
It's time to set up your development environment. This process is one of the most overlooked and often frustrating parts of learning to program, because developers don't want to think about it. Developers must know the nuances of how to install and configure many different programs before they start real development. Everyone's computer is different; as a result, the same setup may not work on your computer. We will expose and eliminate all of these problems by defining the various pieces of the environment you need to set up.
Defining shell
The shell is a required part of your software development environment. We will use the shell to install software and to run the commands that build and start the web server, bringing your web project to life.
If your computer has the Linux operating system installed, then you will use the shell via the Terminal. There are many Linux-based distributions out there that use diverse desktop environments, but most of them use an equivalent keyboard shortcut to open the Terminal. Use the keyboard shortcut Ctrl + Alt + T to open the Terminal in Linux.
If you have a Mac computer with OS X installed, then you will use the Terminal shell as well. Use the keyboard shortcut Command + Space to open Spotlight, then type Terminal to search for it and run it.
If you have a computer with the Windows operating system installed, you can use the standard command prompt, but we can do better. In a minute I will show you how to install Git on your computer, and you will get Git Bash for free. You can open a Terminal with the Git Bash shell program on Windows. I will use the bash shell for all exercises in this book whenever I need to work in the Terminal.
Installing Node.js
Node.js is the technology we will use as a cross-platform runtime environment for running server-side web applications. It is a combination of a native, platform-independent runtime based on Google's V8 JavaScript engine and a huge number of modules written in JavaScript. Node.js ships with different connectors and libraries that help you use HTTP, TLS, compression, file system access, raw TCP and UDP, and more. As a developer, you can write your own modules in JavaScript and run them inside the Node.js engine. The Node.js runtime makes it easy to build networked, event-driven application servers.
The terms package and library are synonymous in JavaScript, so we will use them interchangeably.
Node.js utilizes the JavaScript Object Notation (JSON) format widely in data exchange between the server and client sides, because it is easy to parse and notably free of the complexities of XML, SOAP, and other data exchange formats.
You can use Node.js for the development of service-oriented applications that do something different from web servers. One of the most popular service-oriented applications is Node Package Manager (NPM), which we will use to manage library dependencies and deployment systems, and which underlies many platform-as-a-service (PaaS) providers for Node.js.
If you do not have Node.js installed on your computer, download the pre-built installer from https://nodejs.org/en/download. You can start to use Node.js immediately after installation.
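To get a feel for what an event-driven application server means in practice, here is a minimal sketch of an HTTP server written with nothing but Node.js's built-in http module. This snippet is not from the book; the file name, port, and messages are arbitrary:

// server.js - a tiny event-driven HTTP server
var http = require('http');

var server = http.createServer(function (request, response) {
    // This callback is invoked on every incoming request event.
    response.writeHead(200, { 'Content-Type': 'text/plain' });
    response.end('Hello from Node.js\n');
});

server.listen(3000, function () {
    console.log('Server listening on http://localhost:3000');
});

Save it as server.js, run node server.js, and point your browser to http://localhost:3000 to see the response.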
To verify the installation, open the Terminal and type:

node --version

Node.js must respond with the version number of the installed runtime:

v4.4.3

Setting up NPM
NPM is a package manager for JavaScript. You can use it to find, share, and reuse packages of code from many developers across the world. The number of packages grows dramatically every day and is now more than 250K. NPM is the Node.js package manager and utilizes Node.js to run itself. NPM is included in the Node.js setup bundle and is available just after installation. Open the Terminal and type:

npm --version

NPM must answer your command with its version number:

2.15.1

The following command gives us information about the Node.js and NPM install:

npm config list

There are two ways to install NPM packages: locally or globally. In cases where you would like to use the package as a tool, it's better to install it globally:

npm install --global <package_name>

If you need to find the folder with globally installed packages, you can use the next command:

npm config get prefix

Installing packages globally is important, but is best avoided if not needed. Mostly you will install packages locally:

npm install <package_name>

You may find locally installed packages in the node_modules folder of your project.
Installing Git
You missed a lot if you are not familiar with Git. Git is a distributed version control system, and each Git working directory is a full-fledged repository. It keeps the complete history of changes and has full version-tracking capabilities. Each repository is entirely independent of network access or a central server. You can install Git on your computer via the set of pre-built installers available on the official website at https://git-scm.com/downloads. After installation, you can open the Terminal and type:

git --version

Git must respond with the version number:

git version 2.8.1.windows.1

As I said, developers who use computers with the Windows operating system installed now have Git Bash free on their system.
Code editor
You can imagine how many programs for code editing exist, but today we will talk only about Visual Studio Code from Microsoft, which is free, open source, and runs everywhere. You can use any program you prefer for development, but I use only Visual Studio Code in our future exercises, so please install it from http://code.visualstudio.com/Download.
Summary
In this article, we learned about the shell concept, how to install Node.js and Git, and how to set up Node packages.
Resources for Article:
Further resources on this subject:
Gearing Up for Bootstrap 4 [article]
API with MongoDB and Node.js [article]
Mapping Requirements for a Modular Web Shop App [article]

Extending Yii

Packt
03 Oct 2016
14 min read
Introduction
In this article by Dmitry Eliseev, the author of the book Yii Application Development Cookbook Third Edition, we will look at three kinds of Yii extensions: helpers, behaviors, and components. In addition, we will learn how to make your extensions reusable and useful for the community, and will focus on the many things you should do to make your extensions as efficient as possible. (For more resources related to this topic, see here.)
Helpers
There are a lot of built-in framework helpers, like StringHelper, in the yii\helpers namespace. They contain sets of helpful static methods for manipulating strings, files, arrays, and other subjects. In many cases, for additional behavior you can create your own helper and put any static functions into it. As an example, we will implement a number helper in this recipe.
Getting ready
Create a new yii2-app-basic application by using composer, as described in the official guide at http://www.yiiframework.com/doc-2.0/guide-start-installation.html.
How to do it…
Create the helpers directory in your project and write the NumberHelper class:

<?php
namespace app\helpers;

class NumberHelper
{
    public static function format($value, $decimal = 2)
    {
        return number_format($value, $decimal, '.', ',');
    }
}

Add the actionNumbers method to SiteController:

<?php
...
class SiteController extends Controller
{
    …
    public function actionNumbers()
    {
        return $this->render('numbers', ['value' => 18878334526.3]);
    }
}

Add the views/site/numbers.php view:

<?php
use app\helpers\NumberHelper;
use yii\helpers\Html;

/* @var $this yii\web\View */
/* @var $value float */

$this->title = 'Numbers';
$this->params['breadcrumbs'][] = $this->title;
?>
<div class="site-numbers">
    <h1><?= Html::encode($this->title) ?></h1>
    <p>
        Raw number:<br />
        <b><?= $value ?></b>
    </p>
    <p>
        Formatted number:<br />
        <b><?= NumberHelper::format($value) ?></b>
    </p>
</div>

Open the action to see the raw and the formatted number rendered. In other cases, you can specify another number of decimals; for example:

NumberHelper::format($value, 3)

How it works…
Any helper in Yii2 is just a set of functions implemented as static methods in a corresponding class. You can use one to implement any different output format, for manipulations with values of any variable, and for other cases.
Note: Usually, static helpers are lightweight, clean functions with a small number of arguments. Avoid putting your business logic and other complicated manipulations into helpers; use widgets or other components instead of helpers in those cases.
See also
For more information about helpers, refer to http://www.yiiframework.com/doc-2.0/guide-helper-overview.html. For examples of built-in helpers, see the sources in the helpers directory of the framework at https://github.com/yiisoft/yii2/tree/master/framework/helpers.
Creating model behaviors
There are many similar solutions in today's web applications. Leading products such as Google's Gmail define nice UI patterns; one of these is soft delete. Instead of a permanent deletion with multiple confirmations, Gmail allows users to immediately mark messages as deleted and then easily undo it. The same behavior can be applied to any object, such as blog posts, comments, and so on.
Let's create a behavior that will allow marking models as deleted, restoring models, and selecting not-yet-deleted models, deleted models, and all models. In this recipe we'll follow a test-driven development approach to plan the behavior and test whether the implementation is correct.
Getting ready
Create a new yii2-app-basic application by using composer, as described in the official guide at http://www.yiiframework.com/doc-2.0/guide-start-installation.html.
Create two databases: one for working and one for tests.
Configure Yii to use the first database in your primary application in config/db.php. Make sure the test application uses the second database in tests/codeception/config/config.php.
Create a new migration:

<?php
use yii\db\Migration;

class m160427_103115_create_post_table extends Migration
{
    public function up()
    {
        $this->createTable('{{%post}}', [
            'id' => $this->primaryKey(),
            'title' => $this->string()->notNull(),
            'content_markdown' => $this->text(),
            'content_html' => $this->text(),
        ]);
    }

    public function down()
    {
        $this->dropTable('{{%post}}');
    }
}

Apply the migration to both the working and the testing databases:

./yii migrate
tests/codeception/bin/yii migrate

Create a Post model:

<?php
namespace app\models;

use app\behaviors\MarkdownBehavior;
use yii\db\ActiveRecord;

/**
 * @property integer $id
 * @property string $title
 * @property string $content_markdown
 * @property string $content_html
 */
class Post extends ActiveRecord
{
    public static function tableName()
    {
        return '{{%post}}';
    }

    public function rules()
    {
        return [
            [['title'], 'required'],
            [['content_markdown'], 'string'],
            [['title'], 'string', 'max' => 255],
        ];
    }
}

How to do it…
Let's prepare a test environment, starting with defining the fixtures for the Post model. Create the tests/codeception/unit/fixtures/PostFixture.php file:

<?php
namespace app\tests\codeception\unit\fixtures;

use yii\test\ActiveFixture;

class PostFixture extends ActiveFixture
{
    public $modelClass = 'app\models\Post';
    public $dataFile = '@tests/codeception/unit/fixtures/data/post.php';
}

Add a fixture data file in tests/codeception/unit/fixtures/data/post.php:

<?php
return [
    [
        'id' => 1,
        'title' => 'Post 1',
        'content_markdown' => 'Stored *markdown* text 1',
        'content_html' => "<p>Stored <em>markdown</em> text 1</p>\n",
    ],
];

Then, we need to create the test case tests/codeception/unit/MarkdownBehaviorTest.php:

<?php
namespace app\tests\codeception\unit;

use app\models\Post;
use app\tests\codeception\unit\fixtures\PostFixture;
use yii\codeception\DbTestCase;

class MarkdownBehaviorTest extends DbTestCase
{
    public function testNewModelSave()
    {
        $post = new Post();
        $post->title = 'Title';
        $post->content_markdown = 'New *markdown* text';

        $this->assertTrue($post->save());
        $this->assertEquals("<p>New <em>markdown</em> text</p>\n", $post->content_html);
    }

    public function testExistingModelSave()
    {
        $post = Post::findOne(1);
        $post->content_markdown = 'Other *markdown* text';

        $this->assertTrue($post->save());
        $this->assertEquals("<p>Other <em>markdown</em> text</p>\n", $post->content_html);
    }

    public function fixtures()
    {
        return [
            'posts' => [
                'class' => PostFixture::className(),
            ]
        ];
    }
}

Run the unit tests:

codecept run unit MarkdownBehaviorTest

and ensure that the tests do not pass yet:

Codeception PHP Testing Framework v2.0.9
Powered by PHPUnit 4.8.27 by Sebastian Bergmann and contributors.

Unit Tests (2) ---------------------------------------------------------------------------
Trying to test ... MarkdownBehaviorTest::testNewModelSave Error
Trying to test ... MarkdownBehaviorTest::testExistingModelSave Error
---------------------------------------------------------------------------

Time: 289 ms, Memory: 16.75MB

Now we need to implement the behavior, attach it to the model, and make sure the tests pass. Create a new directory, behaviors.
Under this directory, create the MarkdownBehavior class:

<?php
namespace app\behaviors;

use yii\base\Behavior;
use yii\base\Event;
use yii\base\InvalidConfigException;
use yii\db\ActiveRecord;
use yii\helpers\Markdown;

class MarkdownBehavior extends Behavior
{
    public $sourceAttribute;
    public $targetAttribute;

    public function init()
    {
        if (empty($this->sourceAttribute) || empty($this->targetAttribute)) {
            throw new InvalidConfigException('Source and target must be set.');
        }
        parent::init();
    }

    public function events()
    {
        return [
            ActiveRecord::EVENT_BEFORE_INSERT => 'onBeforeSave',
            ActiveRecord::EVENT_BEFORE_UPDATE => 'onBeforeSave',
        ];
    }

    public function onBeforeSave(Event $event)
    {
        if ($this->owner->isAttributeChanged($this->sourceAttribute)) {
            $this->processContent();
        }
    }

    private function processContent()
    {
        $model = $this->owner;
        $source = $model->{$this->sourceAttribute};
        $model->{$this->targetAttribute} = Markdown::process($source);
    }
}

Let's attach the behavior to the Post model:

class Post extends ActiveRecord
{
    ...
    public function behaviors()
    {
        return [
            'markdown' => [
                'class' => MarkdownBehavior::className(),
                'sourceAttribute' => 'content_markdown',
                'targetAttribute' => 'content_html',
            ],
        ];
    }
}

Run the tests and make sure they pass:

Codeception PHP Testing Framework v2.0.9
Powered by PHPUnit 4.8.27 by Sebastian Bergmann and contributors.

Unit Tests (2) ---------------------------------------------------------------------------
Trying to test ... MarkdownBehaviorTest::testNewModelSave Ok
Trying to test ... MarkdownBehaviorTest::testExistingModelSave Ok
---------------------------------------------------------------------------

Time: 329 ms, Memory: 17.00MB

That's it. We've created a reusable behavior and can use it in all future projects by just connecting it to a model.
How it works…
Let's start with the test case. Since we want to use a set of models, we define fixtures. A fixture set is put into the DB each time a test method is executed. We prepare unit tests that specify how the behavior should work:
First, we test the processing of new model content. The behavior must convert Markdown text from a source attribute to HTML and store the latter in a target attribute.
Second, we test updating the content of an existing model. After changing the Markdown content and saving the model, we must get updated HTML content.
Now let's move to the interesting implementation details. In a behavior, we can add our own methods, which will be mixed into the model that the behavior is attached to. We can also subscribe to the owner's events; we use this to add our own listener:

public function events()
{
    return [
        ActiveRecord::EVENT_BEFORE_INSERT => 'onBeforeSave',
        ActiveRecord::EVENT_BEFORE_UPDATE => 'onBeforeSave',
    ];
}

And now we can implement this listener:

public function onBeforeSave(Event $event)
{
    if ($this->owner->isAttributeChanged($this->sourceAttribute)) {
        $this->processContent();
    }
}

In all methods, we can use the owner property to get the object the behavior is attached to. In general, we can attach any behavior to models, controllers, the application, and other components that extend the yii\base\Component class. We can also attach one behavior again and again to a model to process different attributes:

class Post extends ActiveRecord
{
    ...
    public function behaviors()
    {
        return [
            [
                'class' => MarkdownBehavior::className(),
                'sourceAttribute' => 'description_markdown',
                'targetAttribute' => 'description_html',
            ],
            [
                'class' => MarkdownBehavior::className(),
                'sourceAttribute' => 'content_markdown',
                'targetAttribute' => 'content_html',
            ],
        ];
    }
}

Besides, we can also extend the yii\base\AttributeBehavior class, as yii\behaviors\TimestampBehavior does, to update specified attributes on any event.
See also
To learn more about behaviors and events, refer to the following pages:
http://www.yiiframework.com/doc-2.0/guide-concept-behaviors.html
http://www.yiiframework.com/doc-2.0/guide-concept-events.html
For more information about Markdown syntax, refer to http://daringfireball.net/projects/markdown/.
Creating components
If you have some code that looks like it can be reused, but you don't know whether it's a behavior, a widget, or something else, it's most probably a component. A component should be inherited from the yii\base\Component class. Later on, the component can be attached to the application and configured using the components section of a configuration file. That's the main benefit compared to using just a plain PHP class; we also get behaviors, events, getters, and setters support.
For our example, we'll implement a simple Exchange application component that will be able to get currency rates from the http://fixer.io site; we will attach it to the application and use it.
Getting ready
Create a new yii2-app-basic application by using composer, as described in the official guide at http://www.yiiframework.com/doc-2.0/guide-start-installation.html.
How to do it…
To get a currency rate, our component should send an HTTP GET query to a service URL, like http://api.fixer.io/2016-05-14?base=USD.
The service must return all supported rates on the nearest working day:

{
    "base":"USD",
    "date":"2016-05-13",
    "rates": {
        "AUD":1.3728,
        "BGN":1.7235,
        ...
        "ZAR":15.168,
        "EUR":0.88121
    }
}

The component should extract the needed currency from the JSON response and return the target rate.
Create a components directory in your application structure.
Create the component class skeleton with the following interface:

<?php
namespace app\components;

use yii\base\Component;

class Exchange extends Component
{
    public function getRate($source, $destination, $date = null)
    {
    }
}

Implement the component's functionality:

<?php
namespace app\components;

use yii\base\Component;
use yii\base\InvalidConfigException;
use yii\base\InvalidParamException;
use yii\caching\Cache;
use yii\di\Instance;
use yii\helpers\Json;

class Exchange extends Component
{
    /**
     * @var string remote host
     */
    public $host = 'http://api.fixer.io';
    /**
     * @var bool cache results or not
     */
    public $enableCaching = false;
    /**
     * @var string|Cache component ID
     */
    public $cache = 'cache';

    public function init()
    {
        if (empty($this->host)) {
            throw new InvalidConfigException('Host must be set.');
        }
        if ($this->enableCaching) {
            $this->cache = Instance::ensure($this->cache, Cache::className());
        }
        parent::init();
    }

    public function getRate($source, $destination, $date = null)
    {
        $this->validateCurrency($source);
        $this->validateCurrency($destination);
        $date = $this->validateDate($date);
        $cacheKey = $this->generateCacheKey($source, $destination, $date);
        if (!$this->enableCaching || ($result = $this->cache->get($cacheKey)) === false) {
            $result = $this->getRemoteRate($source, $destination, $date);
            if ($this->enableCaching) {
                $this->cache->set($cacheKey, $result);
            }
        }
        return $result;
    }

    private function getRemoteRate($source, $destination, $date)
    {
        $url = $this->host . '/' . $date . '?base=' . $source;
        $response = Json::decode(file_get_contents($url));
        if (!isset($response['rates'][$destination])) {
            throw new \RuntimeException('Rate not found.');
        }
        return $response['rates'][$destination];
    }

    private function validateCurrency($source)
    {
        if (!preg_match('#^[A-Z]{3}$#s', $source)) {
            throw new InvalidParamException('Invalid currency format.');
        }
    }

    private function validateDate($date)
    {
        if (!empty($date) && !preg_match('#\d{4}-\d{2}-\d{2}#s', $date)) {
            throw new InvalidParamException('Invalid date format.');
        }
        if (empty($date)) {
            $date = date('Y-m-d');
        }
        return $date;
    }

    private function generateCacheKey($source, $destination, $date)
    {
        return [__CLASS__, $source, $destination, $date];
    }
}

Attach our component in the config/console.php or config/web.php configuration file:

'components' => [
    'cache' => [
        'class' => 'yii\caching\FileCache',
    ],
    'exchange' => [
        'class' => 'app\components\Exchange',
        'enableCaching' => true,
    ],
    // ...
    'db' => $db,
],

We can now use the new component directly or via the get method:

echo Yii::$app->exchange->getRate('USD', 'EUR');
echo Yii::$app->get('exchange')->getRate('USD', 'EUR', '2014-04-12');

Create a demonstration console controller:

<?php
namespace app\commands;

use Yii;
use yii\console\Controller;

class ExchangeController extends Controller
{
    public function actionTest($currency, $date = null)
    {
        echo Yii::$app->exchange->getRate('USD', $currency, $date) . PHP_EOL;
    }
}

And try running the commands:

$ ./yii exchange/test EUR
> 0.90196

$ ./yii exchange/test EUR 2015-11-24
> 0.93888

$ ./yii exchange/test OTHER
> Exception 'yii\base\InvalidParamException' with message 'Invalid currency format.'

$ ./yii exchange/test EUR 2015/24/11
> Exception 'yii\base\InvalidParamException' with message 'Invalid date format.'

$ ./yii exchange/test ASD
> Exception 'RuntimeException' with message 'Rate not found.'

As a result, you must see rate values in success cases or specific exceptions in error ones.
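As a quick aside, here is a minimal sketch of how the component's validation surfaces as exceptions in plain PHP. This is hypothetical code, assuming it runs inside any context where the Yii application has already been bootstrapped:

<?php
// Hypothetical snippet: run inside a bootstrapped Yii 2 application.
try {
    // Well-formed request: prints the current USD -> EUR rate.
    echo Yii::$app->exchange->getRate('USD', 'EUR') . PHP_EOL;
    // A lowercase code fails the '#^[A-Z]{3}$#s' check in validateCurrency().
    echo Yii::$app->exchange->getRate('usd', 'EUR') . PHP_EOL;
} catch (\yii\base\InvalidParamException $e) {
    echo 'Validation failed: ' . $e->getMessage() . PHP_EOL;
}

Catching InvalidParamException at the call site like this lets you report bad user input separately from network or "rate not found" failures, which surface as RuntimeException.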
In addition to creating your own components, you can do more.
Overriding existing application components
Most of the time there will be no need to create your own application components, since other types of extensions, such as widgets or behaviors, cover almost all types of reusable code. However, overriding core framework components is a common practice and can be used to customize the framework's behavior for your specific needs without hacking into the core.
For example, to be able to format numbers using the Yii::$app->formatter->asNumber($value) method instead of the NumberHelper::format method from the Helpers recipe, follow these steps:
Extend the yii\i18n\Formatter component as follows:

<?php
namespace app\components;

class Formatter extends \yii\i18n\Formatter
{
    public function asNumber($value, $decimal = 2)
    {
        return number_format($value, $decimal, '.', ',');
    }
}

Override the class of the built-in formatter component:

'components' => [
    // ...
    'formatter' => [
        'class' => 'app\components\Formatter',
    ],
    // ...
],

Right now, we can use this method directly:

echo Yii::$app->formatter->asNumber(1534635.2, 3);

or as a new format for the GridView and DetailView widgets:

<?= \yii\grid\GridView::widget([
    'dataProvider' => $dataProvider,
    'columns' => [
        'id',
        'created_at:datetime',
        'title',
        'value:number',
    ],
]) ?>

You can also extend every existing component without overwriting its source code.
How it works…
To be able to attach a component to an application, it must extend the yii\base\Component class. Attaching is as simple as adding a new array to the components section of the configuration. There, the class value specifies the component's class, and all other values are set through the component's corresponding public properties and setter methods.
The implementation itself is very straightforward: we wrap http://api.fixer.io calls into a comfortable API with validators and caching. We can access our class by its component name using Yii::$app; in our case, it will be Yii::$app->exchange.
See also
For official information about components, refer to http://www.yiiframework.com/doc-2.0/guide-concept-components.html.
For the NumberHelper class sources, see the Helpers recipe.
Summary
In this article we learned about three Yii extension types: helpers, behaviors, and components. Helpers contain sets of helpful static methods for manipulating strings, files, arrays, and other subjects. Behaviors allow you to enhance the functionality of an existing component class without needing to change the class's inheritance. Components are the main building blocks of Yii applications; a component is an instance of yii\base\Component or a derived class, and using a component mainly involves accessing its properties and raising/handling its events.
Resources for Article:
Further resources on this subject:
Creating an Extension in Yii 2 [article]
Atmosfall – Managing Game Progress with Coroutines [article]
Optimizing Games for Android [article]