How-To Tutorials

HTML5 and the rise of modern JavaScript browser APIs [Tutorial]

Pavan Ramchandani
20 Jul 2018
15 min read
The first working draft of HTML5 arrived in 2008. HTML5, however, was so technologically advanced at the time that it was predicted it would not be ready till at least 2022! That prediction turned out to be incorrect, and here we are, with fully supported HTML5 and ES6/ES7/ES8-supported browsers. A lot of APIs used by HTML5 go hand in hand with JavaScript. Before looking at those APIs, let us understand a little about how JavaScript sees the web. This will eventually put us in a strong position to understand various interesting, JavaScript-related things such as the Web Workers API. In this article, we will introduce you to the most popular web languages, HTML and JavaScript, and how they came together to become the default platform for building modern front-end web applications. This is an excerpt from the book Learn ECMAScript - Second Edition, written by Mehul Mohan and Narayan Prusty.

The HTML DOM

The HTML DOM is a tree version of how the document looks. Here is a very simple example of an HTML document:

<!doctype HTML>
<html>
<head>
  <title>Cool Stuff!</title>
</head>
<body>
  <p>Awesome!</p>
</body>
</html>

Its tree version is a rough nesting of the same structure: the <html> element contains <head> and <body>; furthermore, the <body> tag contains a <p> tag, whereas the <head> tag contains the <title> tag. Simple! JavaScript has access to the DOM directly, and can modify the connections between these nodes, add nodes, remove nodes, change contents, attach event listeners, and so on.

What is the Document Object Model (DOM)?

Simply put, the DOM is a way to represent HTML or XML documents as nodes. This makes it easier for other programming languages to connect to a DOM-following page and modify it accordingly. To be clear, the DOM is not a programming language. The DOM provides JavaScript with a way to interact with web pages. You can think of it as a standard. Every element is part of the DOM tree, which can be accessed and modified with APIs exposed to JavaScript. The DOM is not restricted to being accessed only by JavaScript. It is language-independent, and there are several modules available in various languages to parse the DOM (just like JavaScript), including PHP, Python, Java, and so on. As said previously, the DOM provides JavaScript with a way to interact with it. How? Well, accessing the DOM is as easy as accessing a predefined object in JavaScript: document. The DOM API specifies what you'll find inside the document object. The document object essentially gives JavaScript access to the DOM tree formed by your HTML document. If you notice, you cannot access any element at all without actually accessing the document object first.

DOM methods/properties

All HTML elements are objects in JavaScript. The most commonly used object is the document object. It has the whole DOM tree attached to it, and you can query it for elements. Let's look at some very common examples of these methods:

getElementById method
getElementsByTagName method
getElementsByClassName method
querySelector method
querySelectorAll method

By no means is this an exhaustive list of all the methods available. However, this list should at least get you started with DOM manipulation. Use MDN as your reference for the various other methods. Here's the link: https://developer.mozilla.org/en-US/docs/Web/API/Document#Methods. A short usage example follows.
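To give a feel for how these lookup methods are used together, here is a minimal sketch that queries and modifies the sample document shown above (the added text and listener are purely illustrative):

// Minimal sketch: querying and modifying the DOM of the sample document above.
const title = document.querySelector('title');            // first match for a CSS selector
console.log(title.textContent);                            // "Cool Stuff!"
const firstPara = document.getElementsByTagName('p')[0];   // live collection of all <p> elements
firstPara.textContent = 'Even more awesome!';              // change contents
const note = document.createElement('p');                  // create a new node...
note.textContent = 'Added from JavaScript';
document.body.appendChild(note);                           // ...and attach it to the tree
firstPara.addEventListener('click', () => console.log('Paragraph clicked')); // attach a listener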
Modern JavaScript browser APIs

HTML5 brought a lot of support for some awesome APIs in JavaScript, right from the start. Although some APIs were released with HTML5 itself (such as the Canvas API), some were added later (such as the Fetch API). Let's see some of these APIs and how to use them with some code examples.

Page Visibility API - is the user still on the page?

The Page Visibility API allows developers to run specific code whenever the page the user is on goes into or out of focus. Imagine you run a game-hosting site and want to pause the game whenever the user loses focus on your tab. This is the way to go!

function pageChanged() {
  if (document.hidden) {
    console.log('User is on some other tab/out of focus') // line #1
  } else {
    console.log('Hurray! User returned') // line #2
  }
}
document.addEventListener("visibilitychange", pageChanged);

We're adding an event listener to the document; it fires whenever the page visibility changes. Sure, the pageChanged function gets an event object in its argument as well, but we can simply use the document.hidden property, which returns a Boolean value depending on the page's visibility at the time the code was called. You'll add your pause-game code at line #1 and your resume-game code at line #2.

navigator.onLine API – the user's network status

The navigator.onLine API tells you if the user is online or not. Imagine building a multiplayer game and you want the game to automatically pause if the user loses their internet connection. This is the way to go here!

function state(e) {
  if(navigator.onLine) {
    console.log('Cool we\'re up');
  } else {
    console.log('Uh! we\'re down!');
  }
}
window.addEventListener('offline', state);
window.addEventListener('online', state);

Here, we're attaching two event listeners to the window global. We want to call the state function whenever the user goes offline or online, and the browser will call it every time that happens. We can check whether the user is offline or online with navigator.onLine, which returns a Boolean value of true if there's an internet connection, and false if there's not.

Clipboard API - programmatically manipulating the clipboard

The Clipboard API finally allows developers to copy to a user's clipboard without those nasty Adobe Flash plugin hacks that were not cross-browser/cross-device-friendly. Here's how you'll copy a selection to a user's clipboard:

<script>
function copy2Clipboard(text) {
  const textarea = document.createElement('textarea');
  textarea.value = text;
  document.body.appendChild(textarea);
  textarea.focus();
  textarea.setSelectionRange(0, text.length);
  document.execCommand('copy');
  document.body.removeChild(textarea);
}
</script>
<button onclick="copy2Clipboard('Something good!')">Click me!</button>

First of all, we need the user to actually click the button. Once the user clicks the button, we call a function that creates a textarea in the background using the document.createElement method. The script then sets the value of the textarea to the passed text (this is pretty good!). We then focus on that textarea and select all the contents inside it. Once the contents are selected, we execute a copy with document.execCommand('copy'); this copies the current selection in the document to the clipboard. Since, right now, the value inside the textarea is selected, it gets copied to the clipboard. Finally, we remove the textarea from the document so that it doesn't disrupt the document layout. You cannot trigger copy2Clipboard without user interaction.
I mean, obviously you can, but document.execCommand('copy') will not work if the event does not come from the user (click, double-click, and so on). This is a security implementation so that a user's clipboard is not messed around with by every website that they visit. The Canvas API - the web's drawing board HTML5 finally brought in support for <canvas>, a standard way to draw graphics on the web! Canvas can be used pretty much for everything related to graphics you can think of; from digitally signing with a pen, to creating 3D games on the web (3D games require WebGL knowledge, interested? - visit http://bit.ly/webgl-101). Let's look at the basics of the Canvas API with a simple example: <canvas id="canvas" width="100" height="100"></canvas> <script> const canvas = document.getElementById("canvas"); const ctx = canvas.getContext("2d"); ctx.moveTo(0,0); ctx.lineTo(100, 100); ctx.stroke(); </script> This renders the following: How does it do this? Firstly, document.getElementById('canvas') gives us the reference to the canvas on the document. Then we get the context of the canvas. This is a way to say what I want to do with the canvas. You could put a 3D value there, of course! That is indeed the case when you're doing 3D rendering with WebGL and canvas. Once we have a reference to our context, we can do a bunch of things and add methods provided by the API out-of-the-box. Here we moved the cursor to the (0, 0) coordinates. Then we drew a line till (100,100) (which is basically a diagonal on the square canvas). Then we called stroke to actually draw that on our canvas. Easy! Canvas is a wide topic and deserves a book of its own! If you're interested in developing awesome games and apps with Canvas, I recommend you start off with MDN docs: http://bit.ly/canvas-html5. The Fetch API - promise-based HTTP requests One of the coolest async APIs introduced in browsers is the Fetch API, which is the modern replacement for the XMLHttpRequest API. Have you ever found yourself using jQuery just for simplifying AJAX requests with $.ajax? If you have, then this is surely a golden API for you, as it is natively easier to code and read! However, fetch comes natively, hence, there are performance benefits. Let's see how it works: fetch(link) .then(data => { // do something with data }) .catch(err => { // do something with error }); Awesome! So fetch uses promises! If that's the case, we can combine it with async/await to make it look completely synchronous and easy to read! <img id="img1" alt="Mozilla logo" /> <img id="img2" alt="Google logo" /> const get2Images = async () => { const image1 = await fetch('https://cdn.mdn.mozilla.net/static/img/web-docs-sprite.22a6a085cf14.svg'); const image2 = await fetch('https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png'); console.log(image1); // gives us response as an object const blob1 = await image1.blob(); const blob2 = await image2.blob(); const url1 = URL.createObjectURL(blob1); const url2 = URL.createObjectURL(blob2); document.getElementById('img1').src = url1; document.getElementById('img2').src = url2; return 'complete'; } get2Images().then(status => console.log(status)); The line console.log(image1) will print the following: You can see the image1 response provides tons of information about the request. It has an interesting field body, which is actually a ReadableStream, and a byte stream of data that can be cast to a  Binary Large Object (BLOB) in our case. 
A blob object represents a file-like object of immutable, raw data. After getting the Response, we convert it into a blob object so that we can actually use it as an image. Here, fetch is actually fetching us the image directly, so we can serve it to the user as a blob (without hot-linking it to the main website). Thus, this could also be done on the server side, and the blob data could be passed down a WebSocket or something similar.

Fetch API customization

The Fetch API is highly customizable. You can even include your own headers in the request. Suppose you've got a site where only authenticated users with a valid token can access an image. Here's how you'll add a custom header to your request:

const headers = new Headers();
headers.append("Allow-Secret-Access", "yeah-because-my-token-is-1337");
const config = { method: 'POST', headers };
const req = new Request('http://myawesomewebsite.awesometld/secretimage.jpg', config);
fetch(req)
  .then(img => img.blob())
  .then(blob => myImageTag.src = URL.createObjectURL(blob));

Here, we appended a custom header to a Headers object and then created a Request object (an object that holds all the information about our request). The first parameter, that is, http://myawesomewebsite.awesometld/secretimage.jpg, is the URL, and the second is the configuration. Here are some other configuration options:

Credentials: Used to pass cookies in a Cross-Origin Resource Sharing (CORS)-enabled server on cross-domain requests.
Method: Specifies the request method (GET, POST, HEAD, and so on).
Headers: Headers associated with the request.
Integrity: A security feature that consists of a (possibly) SHA-256 representation of the file you're requesting, in order to verify whether the request has been tampered with (data modified) or not. Probably not a lot to worry about unless you're building something on a very large scale and not on HTTPS.
Redirect: Can have three values: follow (will follow the URL redirects), error (will throw an error if the URL redirects), and manual (doesn't follow the redirect but returns a filtered response that wraps the redirect response).
Referrer: The URL that appears as the referrer header in the HTTP request.
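To see how several of these options fit together in one call, here is a minimal sketch; the endpoint, header name, token, and image element ID are made-up placeholders, not a real API:

// Minimal sketch: a fetch call combining several configuration options.
fetch('https://api.example.com/secretimage.jpg', {
  method: 'GET',
  headers: new Headers({ 'Allow-Secret-Access': 'yeah-because-my-token-is-1337' }),
  credentials: 'include',   // send cookies along with a CORS request
  redirect: 'follow',       // follow any redirects automatically
  referrer: '/gallery'      // resolved against the current origin
})
  .then(response => response.blob())
  .then(blob => { document.getElementById('img1').src = URL.createObjectURL(blob); })
  .catch(err => console.error('Request failed', err));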
Accessing and modifying history with the History API

You can access a user's history to some level and modify it according to your needs using the History API. It consists of the length and state properties:

console.log(history, history.length, history.state);

The output is as follows:

{length: 4, scrollRestoration: "auto", state: null}
4
null

In your case, the length could obviously be different depending on how many pages you've visited from that particular tab. history.state can contain anything you like (we'll come to its use case soon). Before looking at some handy history methods, let us take a look at the window.onpopstate event.

Handling window.onpopstate events

The window.onpopstate event is fired automatically by the browser when a user navigates between history states that a developer has set. This event is important to handle when you push to the history object and then later retrieve information whenever the user presses the back/forward button of the browser. Here's how we'll program a simple popstate event:

window.addEventListener('popstate', e => {
  console.log(e.state); // state data of history (remember history.state ?)
})

Now we'll discuss some methods associated with the history object.

Modifying history - the history.go(distance) method

history.go(x) is equivalent to the user clicking the forward button x times in the browser. You can specify the distance to move, that is, history.go(5); this is equivalent to the user hitting the forward button in the browser five times. Similarly, you can specify negative values to make it move backward. Specifying 0 or no value will simply refresh the page:

history.go(5); // forwards the browser 5 times
history.go(-1); // similar effect of clicking back button
history.go(0); // refreshes page
history.go(); // refreshes page

Jumping ahead - the history.forward() method

This method is simply the equivalent of history.go(1). This is handy when you want to push the user one page forward. One use case is when you create a full-screen, immersive web application and your screen has some minimal controls that play with the history behind the scenes:

if(awesomeButtonClicked && userWantsToMoveForward()) {
  history.forward()
}

Going back - the history.back() method

This method is simply the equivalent of history.go(-1); a negative number makes the history go backwards. Again, this is just a simple (and numberless) way to go back to the page the user came from. Its application could be similar to the forward button, that is, creating a full-screen web app and providing the user with an interface to navigate by.

Pushing on the history - history.pushState()

This is really fun. You can change the browser URL without hitting the server with an HTTP request. If you run the following JS in your browser, your browser will change the path from whatever it is (domain.com/abc/egh) to /i_am_awesome (domain.com/i_am_awesome) without actually navigating to any page:

history.pushState({myName: "Mehul"}, "This is title of page", "/i_am_awesome");
history.pushState({page2: "Packt"}, "This is page2", "/page2_packt"); // <-- state is currently here

The History API doesn't care whether the page actually exists on the server or not. It'll just replace the URL as it is instructed. The popstate event, triggered by the browser's back/forward button, will fire the handler below, which we can program like this:

window.onpopstate = e => {
  // when this is called, state is already updated.
  // e.state is the new state. It is null if it is the root state.
  if(e.state !== null) {
    console.log(e.state);
  } else {
    console.log("Root state");
  }
}

To run this code, register the onpopstate handler first, then run the two history.pushState lines shown previously. Then press your browser's back button. You should see something like {myName: "Mehul"}, which is the state data of the previous entry. Press the back button one more time and you'll see the message Root state. pushState does not fire the onpopstate event; only the browser's back/forward buttons do.

Replacing an entry on the history stack - history.replaceState()

The history.replaceState() method is exactly like history.pushState(); the only difference is that it replaces the current entry with a new one. That is, if you use history.pushState() and press the back button, you'll be directed to the page you came from. However, when you use history.replaceState() and press the back button, you are not directed to the page you came from, because it has been replaced with the new one on the stack. Here's an example of working with the replaceState method:

history.replaceState({myName: "Mehul"}, "This is title of page", "/i_am_awesome");

This replaces (instead of pushing) the current state with the new state.
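Putting pushState and popstate together gives you the core of client-side routing. The sketch below shows that pattern under simplifying assumptions; the route paths and the render() body are made up for illustration and are not part of any real framework:

// Minimal sketch of SPA-style navigation built on the History API.
function render(state) {
  document.title = (state && state.title) || 'Home';
  console.log('Rendering view for', location.pathname, state);
}

function navigate(path, state) {
  history.pushState(state, '', path); // change the URL without a page load
  render(state);                      // draw the new view ourselves
}

window.addEventListener('popstate', e => render(e.state)); // back/forward buttons

// Usage:
navigate('/articles', { title: 'Articles' });
navigate('/articles/html5-apis', { title: 'HTML5 APIs' });
// Pressing the back button now returns to /articles and re-renders it.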
Although using the History API directly in your code may not be beneficial to you right now, many frameworks and libraries, such as React, use the History API under the hood to create a seamless, reload-less, smooth experience for the end user. If you found this article useful, do check out the book Learn ECMAScript, Second Edition to learn the ECMAScript standards for designing quality web applications.

Related articles:
What's new in ECMAScript 2018 (ES9)?
8 recipes to master Promises in ECMAScript 2018
Build a foodie bot with JavaScript

Responsive Applications with Asynchronous Programming

Packt
13 Jul 2016
9 min read
In this article by Dirk Strauss, author of the book C# Programming Cookbook, he sheds some light on how to handle events, exceptions, and tasks in asynchronous programming, making your application responsive. (For more resources related to this topic, see here.)

Handling tasks in asynchronous programming

The Task-based Asynchronous Pattern (TAP) is now the recommended method to create asynchronous code. It executes asynchronously on a thread from the thread pool and does not execute synchronously on the main thread of your application. It allows us to check the task's state by calling the Status property.

Getting ready

We will create a task to read a very large text file. This will be accomplished using an asynchronous Task.

How to do it…

Create a large text file (we called ours taskFile.txt) and place it in your C:\temp folder. In the AsyncDemo class, create a method called ReadBigFile() that returns a Task<TResult> type, which will be used to return an integer of bytes read from our big text file:

public Task<int> ReadBigFile()
{
}

Add the following code to open and read the file bytes. You will see that we are using the ReadAsync() method, which asynchronously reads a sequence of bytes from the stream and advances the position in that stream by the number of bytes read. You will also notice that we are using a buffer to read those bytes.

public Task<int> ReadBigFile()
{
    var bigFile = File.OpenRead(@"C:\temp\taskFile.txt");
    var bigFileBuffer = new byte[bigFile.Length];
    var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, (int)bigFile.Length);
    return readBytes;
}

Exceptions you can expect to handle from the ReadAsync() method are ArgumentNullException, ArgumentOutOfRangeException, ArgumentException, NotSupportedException, ObjectDisposedException, and InvalidOperationException. Finally, add the final section of code just after the var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, (int)bigFile.Length); line, which uses a lambda expression to specify the work that the task needs to perform. In this case, it is to read the bytes in the file:

public Task<int> ReadBigFile()
{
    var bigFile = File.OpenRead(@"C:\temp\taskFile.txt");
    var bigFileBuffer = new byte[bigFile.Length];
    var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, (int)bigFile.Length);
    readBytes.ContinueWith(task =>
    {
        if (task.Status == TaskStatus.Running)
            Console.WriteLine("Running");
        else if (task.Status == TaskStatus.RanToCompletion)
            Console.WriteLine("RanToCompletion");
        else if (task.Status == TaskStatus.Faulted)
            Console.WriteLine("Faulted");
        bigFile.Dispose();
    });
    return readBytes;
}

If not done so in the previous section, add a button to your Windows Forms application's Form designer. On the winformAsync form designer, open the Toolbox and select the Button control, which is found under the All Windows Forms node. Drag the button control onto the Form1 designer. With the button control selected, double-click the control to create the click event in the code behind. Visual Studio will insert the event code for you:

namespace winformAsync
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
        }
    }
}

Change the button1_Click event and add the async keyword to the click event. This is an example of a void-returning asynchronous method:

private async void button1_Click(object sender, EventArgs e)
{
}

Now, make sure that you add code to call the AsyncDemo class's ReadBigFile() method asynchronously.
Remember to read the result from the method (which is the number of bytes read) into an integer variable:

private async void button1_Click(object sender, EventArgs e)
{
    Console.WriteLine("Start file read");
    Chapter6.AsyncDemo oAsync = new Chapter6.AsyncDemo();
    int readResult = await oAsync.ReadBigFile();
    Console.WriteLine("Bytes read = " + readResult);
}

Running your application will display the Windows Forms application. Before clicking on the button1 button, ensure that the Output window is visible: from the View menu, click on the Output menu item or type Ctrl + Alt + O to display the Output window. This will allow us to see the Console.WriteLine() outputs as we have added them to the code in the Chapter6 class and in the Windows application. Clicking on the button1 button will display the outputs in our Output window. Throughout this code execution, the form remains responsive. Take note, though, that the information displayed in your Output window will differ from the screenshot, because the file you used is different from mine.

How it works…

The task is executed on a separate thread from the thread pool. This allows the application to remain responsive while the large file is being processed. Tasks can be used in multiple ways to improve your code. This recipe is but one example.

Exception handling in asynchronous programming

Exception handling in asynchronous programming has always been a challenge. This was especially true in the catch blocks. As of C# 6, you are now allowed to write asynchronous code inside the catch and finally blocks of your exception handlers.

Getting ready

The application will simulate the action of reading a logfile. Assume that a third-party system always makes a backup of the logfile before processing it in another application. While this processing is happening, the logfile is deleted and recreated. Our application, however, needs to read this logfile on a periodic basis. We, therefore, need to be prepared for the case where the file does not exist in the location we expect it in. Therefore, we will purposely omit the main logfile, so that we can force an error.

How to do it…

Create a text file and two folders to contain the logfiles. We will, however, only create a single logfile in the BackupLog folder.
The MainLog folder will remain empty. In our AsyncDemo class, write a method to read the main logfile in the MainLog folder:

private async Task<int> ReadMainLog()
{
    var bigFile = File.OpenRead(@"C:\temp\Log\MainLog\taskFile.txt");
    var bigFileBuffer = new byte[bigFile.Length];
    var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, (int)bigFile.Length);
    await readBytes.ContinueWith(task =>
    {
        if (task.Status == TaskStatus.RanToCompletion)
            Console.WriteLine("Main Log RanToCompletion");
        else if (task.Status == TaskStatus.Faulted)
            Console.WriteLine("Main Log Faulted");
        bigFile.Dispose();
    });
    return await readBytes;
}

Create a second method to read the backup file in the BackupLog folder:

private async Task<int> ReadBackupLog()
{
    var bigFile = File.OpenRead(@"C:\temp\Log\BackupLog\taskFile.txt");
    var bigFileBuffer = new byte[bigFile.Length];
    var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, (int)bigFile.Length);
    await readBytes.ContinueWith(task =>
    {
        if (task.Status == TaskStatus.RanToCompletion)
            Console.WriteLine("Backup Log RanToCompletion");
        else if (task.Status == TaskStatus.Faulted)
            Console.WriteLine("Backup Log Faulted");
        bigFile.Dispose();
    });
    return await readBytes;
}

In actual fact, we would probably only create a single method to read the logfiles, passing only the path as a parameter. In a production application, creating a class and overriding a method to read the different logfile locations would be a better approach. For the purposes of this recipe, however, we specifically wanted to create two separate methods so that the different calls to the asynchronous methods are clearly visible in the code. We will then create a main ReadLogFile() method that tries to read the main logfile. As we have not created the logfile in the MainLog folder, the code will throw a FileNotFoundException. It will then run the asynchronous method and await it in the catch block of the ReadLogFile() method (something which was impossible in the previous versions of C#), returning the bytes read to the calling code:

public async Task<int> ReadLogFile()
{
    int returnBytes = -1;
    try
    {
        Task<int> intBytesRead = ReadMainLog();
        returnBytes = await intBytesRead;
    }
    catch (Exception ex)
    {
        try
        {
            returnBytes = await ReadBackupLog();
        }
        catch (Exception)
        {
            throw;
        }
    }
    return returnBytes;
}

If not done so in the previous recipe, add a button to your Windows Forms application's Form designer. On the winformAsync form designer, open the Toolbox and select the Button control, which is found under the All Windows Forms node. Drag the button control onto the Form1 designer. With the button control selected, double-click on the control to create the click event in the code behind. Visual Studio will insert the event code for you:

namespace winformAsync
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
        }
    }
}

Change the button1_Click event and add the async keyword to the click event. This is an example of a void-returning asynchronous method:

private async void button1_Click(object sender, EventArgs e)
{
}

Next, we will write the code to create a new instance of the AsyncDemo class and attempt to read the main logfile.
In a real-world example, it is at this point that the code does not know that the main logfile does not exist:

private async void button1_Click(object sender, EventArgs e)
{
    Console.WriteLine("Read backup file");
    Chapter6.AsyncDemo oAsync = new Chapter6.AsyncDemo();
    int readResult = await oAsync.ReadLogFile();
    Console.WriteLine("Bytes read = " + readResult);
}

Running your application will display the Windows Forms application. Before clicking on the button1 button, ensure that the Output window is visible: from the View menu, click on the Output menu item or type Ctrl + Alt + O to display the Output window. This will allow us to see the Console.WriteLine() outputs as we have added them to the code in the Chapter6 class and in the Windows application. To simulate a file-not-found exception, we deleted the file from the MainLog folder. You will see that the exception is thrown, and the catch block runs the code to read the backup logfile instead.

How it works…

The fact that we can await in catch and finally blocks allows developers much more flexibility, because asynchronous results can consistently be awaited throughout the application. As you can see from the code we wrote, as soon as the exception was thrown, we asynchronously called the file read method for the backup file.

Summary

In this article, we looked at how TAP is now the recommended method to create asynchronous code, how tasks can be used in multiple ways to improve your code and keep the application responsive while a large file is being processed, and how the long-standing challenge of exception handling in asynchronous programming can be tackled by awaiting in catch and finally blocks.

Resources for Article:

Further resources on this subject:
Functional Programming in C# [article]
Creating a sample C#.NET application [article]

Setting up of Software Infrastructure on the Cloud

Packt
23 Sep 2014
42 min read
In this article by Roberto Freato, author of Microsoft Azure Development Cookbook, we mix some of the recipes of this book to build a complete overview of what we need to set up a software infrastructure on the cloud. (For more resources related to this topic, see here.) Microsoft Azure is Microsoft’s platform for cloud computing. It provides developers with elastic building blocks to build scalable applications. Those building blocks are services for web hosting, storage, computation, connectivity, and more, which are usable as stand-alone services or mixed together to build advanced scenarios. Building an application with Microsoft Azure really means choosing the appropriate services and mixing them together to run our application. We start by creating a SQL Database.

Creating a SQL Database server and database

SQL Database is a multitenanted database system in which many distinct databases are hosted on many physical servers managed by Microsoft. SQL Database administrators have no control over the physical provisioning of a database to a particular physical server. Indeed, to maintain high availability, a primary and two secondary copies of each SQL Database are stored on separate physical servers, and users can't have any control over them. Consequently, SQL Database does not provide a way for the administrator to specify the physical layout of a database and its logs when creating a SQL Database. The administrator merely has to provide a name, maximum size, and service tier for the database. A SQL Database server is the administrative and security boundary for a collection of SQL Databases hosted in a single Azure region. All connections to a database hosted by the server go through the service endpoint provided by the SQL Database server. At the time of writing this book, an Azure subscription can create up to six SQL Database servers, each of which can host up to 150 databases (including the master database). These are soft limits that can be increased by arrangement with Microsoft Support. From a billing perspective, only the database unit is charged for, as the server unit is just a container. However, to avoid a waste of unused resources, an empty server is automatically deleted after 90 days of not hosting user databases. The SQL Database server is provisioned on the Azure Portal. The Region as well as the administrator login and password must be specified during the provisioning process. After the SQL Database server has been provisioned, the firewall rules used to restrict access to the databases associated with the SQL Database server can be modified on the Azure Portal, using Transact-SQL, or via the SQL Database Service Management REST API. The result of the provisioning process is a SQL Database server identified by a fully qualified DNS name such as SERVER_NAME.database.windows.net, where SERVER_NAME is an automatically generated (random and unique) string that differentiates this SQL Database server from any other. The provisioning process also creates the master database for the SQL Database server and adds a user and associated login for the administrator specified during the provisioning process. This user has the rights to create other databases associated with this SQL Database server as well as any logins needed to access them. Remember to distinguish between the SQL Database service and the familiar SQL Server engine available on the Azure platform, but as a plain installation over VMs.
In the latter case, you will continue to own complete control of the instance that runs SQL Server, the installation details, and the effort to maintain it over time. Also, remember that the SQL Server virtual machines have a different pricing from the standard VMs due to their license costs. An administrator can create a SQL Database either on the Azure Portal or using the CREATE DATABASE Transact-SQL statement. At the time of writing this book, SQL Database runs in the following two different modes:

Version 1.0: This refers to the Web or Business Editions
Version 2.0: This refers to the Basic, Standard, or Premium service tiers with performance levels

The first version is being deprecated in a few months. Web Edition was designed for small databases under 5 GB and Business Edition for databases of 10 GB and larger (up to 150 GB). There is no difference in these editions other than the maximum size and billing increment. The second version introduced service tiers (the equivalent of Editions) with an additional parameter (performance level) that sets the amount of dedicated resources for a given database. The new service tiers (Basic, Standard, and Premium) introduced a lot of advanced features such as active/passive geo-replication, point-in-time restore, cross-region copy, and restore. Different performance levels have different limits, such as the Database Throughput Unit (DTU) and the maximum DB size. An updated list of service tiers and performance levels can be found at http://msdn.microsoft.com/en-us/library/dn741336.aspx. Once a SQL Database has been created, the ALTER DATABASE Transact-SQL statement can be used to alter either the edition or the maximum size of the database. The maximum size is important, as the database is made read-only once it reaches that size (with the "The database has reached its size quota" error message and number 40544). In this recipe, we'll learn how to create a SQL Database server and a database using the Azure Portal and T-SQL.

Getting Ready

To perform the majority of operations in the recipe, just a plain internet browser is needed. However, to connect directly to the server, we will use SQL Server Management Studio (also available in the Express version).

How to do it...

First, we are going to create a SQL Database server using the Azure Portal. We will do this using the following steps: On the Azure Portal, go to the SQL DATABASES section and then select the SERVERS tab. In the bottom menu, select Add. In the CREATE SERVER window, provide an administrator login and password. Select a Subscription and Region that will host the server. To enable access from the other services in WA to the server, you can check the Allow Windows Azure Services to access the server checkbox; this is a special firewall rule that allows the 0.0.0.0 to 0.0.0.0 IP range. Confirm and wait a few seconds to complete the operation. After that, using the Azure Portal, go to the SQL DATABASES section and then the SERVERS tab. Select the previously created server by clicking on its name. In the server page, go to the DATABASES tab. In the bottom menu, click on Add; then, after clicking on NEW SQL DATABASE, the CUSTOM CREATE window will open. Specify a name and select the Web Edition. Set the maximum database size to 5 GB and leave the COLLATION dropdown at its default. SQL Database fees are charged differently if you are using the Web/Business Edition rather than the Basic/Standard/Premium service tiers.
The most updated pricing scheme for SQL Database can be found at http://azure.microsoft.com/en-us/pricing/details/sql-database/. Verify the server on which you are creating the database (it is specified correctly in the SERVER dropdown) and confirm. Alternatively, using Transact-SQL, launch Microsoft SQL Server Management Studio and open the Connect to Server window. In the Server name field, specify the fully qualified name of the newly created SQL Database server in the following form: serverName.database.windows.net. Choose the SQL Server Authentication method. Specify the administrative username and password associated earlier. Click on the Options button and check the Encrypt connection checkbox. This setting is particularly critical while accessing a remote SQL Database. Without encryption, a malicious user could extract all the information needed to log in to the database himself from the network traffic. By specifying the Encrypt connection flag, we are telling the client to connect only if a valid certificate is found on the server side. Optionally, check the Remember password checkbox and connect to the server. To connect remotely to the server, a firewall rule should be created. In the Object Explorer window, locate the server you connected to, navigate to the Databases | System Databases folder, and then right-click on the master database and select New Query. Copy and execute this query and wait for its completion:

CREATE DATABASE DATABASE_NAME ( MAXSIZE = 1 GB )

How it works...

The first part is pretty straightforward. In steps 1 and 2, we go to the SQL Database section of the Azure Portal, locating the tab to manage the servers. In step 3, we fill the online popup with the administrative login details, and in step 4, we select a Region to place the SQL Database server in. As a server (with its databases) is located in a Region, it is not possible to automatically migrate it to another Region. After the creation of the container resource (the server), we create the SQL Database by adding a new database to the newly created server, as stated from steps 6 to 9. In step 10, we can optionally change the default collation of the database and its maximum size. In the last part, we use SQL Server Management Studio (SSMS) (step 12) to connect to the remote SQL Database instance. We notice that even without a user database, there is a default database (the master one) we can connect to. After we set up the parameters in steps 13, 14, and 15, we enable the encryption requirement for the connection. Remember to always set the encryption before connecting to or listing the databases of a remote endpoint, as every single operation without encryption consists of plain credentials sent over the network. In step 17, we connect to the server if it grants access to our IP. Finally, in step 18, we open a contextual query window, and in step 19, we execute the creation query, specifying a maximum size for the database. Note that the Database Edition should be specified in the CREATE DATABASE query as well. By default, the Web Edition is used. To override this, the following query can be used:

CREATE DATABASE MyDB ( Edition='Basic' )
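The same T-SQL can also be sent from application code over an encrypted connection. The following is a minimal ADO.NET sketch under stated assumptions: the server name, credentials, and database name are placeholders, and in practice they would live in configuration rather than in code:

// Minimal sketch (ADO.NET): running the CREATE DATABASE statement against the master database.
// SERVER_NAME, myAdmin, myPassword, and MyDB are illustrative placeholders.
using System.Data.SqlClient;

class CreateDatabaseSample
{
    static void Main()
    {
        var connectionString =
            "Server=tcp:SERVER_NAME.database.windows.net,1433;" +
            "Database=master;User ID=myAdmin;Password=myPassword;Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open(); // connects to the master database over an encrypted channel
            using (var command = new SqlCommand(
                "CREATE DATABASE MyDB (MAXSIZE = 1 GB)", connection))
            {
                command.ExecuteNonQuery(); // runs the same T-SQL shown above
            }
        }
    }
}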
There's more…

We can also use the web-based Management Portal to perform various operations against the SQL Database, such as invoking Transact-SQL commands, altering tables, viewing occupancy, and monitoring performance. We will launch the Management Portal using the following steps: Obtain the name of the SQL Database server that contains the SQL Database. Go to https://serverName.database.windows.net. In the Database field, enter the database name (leave it empty to connect to the master database). Fill in the Username and Password fields with the login information and confirm.

Increasing the size of a database

We can use the ALTER DATABASE command to increase the size (or the Edition, with the Edition parameter) of a SQL Database by connecting to the master database and invoking the following Transact-SQL command:

ALTER DATABASE DATABASE_NAME MODIFY ( MAXSIZE = 5 GB )

We must use one of the allowable database sizes.

Connecting to a SQL Database with Entity Framework

The Azure SQL Database is a SQL Server-like, fully managed relational database engine. In many other recipes, we showed you how to connect transparently to the SQL Database, as we would do with SQL Server, as the SQL Database uses the same TDS protocol as its on-premise brethren. In addition, using raw ADO.NET could lead to some of the following issues:

Hardcoded SQL: In spite of the fact that a developer should always write good code and make no errors, there is a finite possibility of making mistakes while writing stringified SQL, which will not be verified at design time and might lead to runtime issues. These kinds of errors lead to runtime errors, as everything that stays in the quotation marks compiles. The solution is to reduce every line of code to a command that is compile-time safe.
Type safety: As ADO.NET components were designed to provide a common layer of abstraction to developers who connect against several different data sources, the interfaces provided are generic for the retrieval of values from the fields of a data row. A developer could make a mistake by casting a field to the wrong data type, and they will realize it only at run time. The solution is to map table fields to the correct data types at compile time.
Long repetitive actions: We can always write our own wrapper to reduce code replication in the application, but using a high-level library, such as an ORM, can take off most of the repetitive work to open a connection, read data, and so on.

Entity Framework hides the complexity of the data access layer and provides developers with an intermediate abstraction layer to let them operate on a collection of objects instead of rows of tables. The power of the ORM itself is enhanced by the usage of LINQ, a library of extension methods that, in synergy with the language capabilities (anonymous types, expression trees, lambda expressions, and so on), makes DB access easier and less error prone than in the past. This recipe is an introduction to Entity Framework, the ORM of Microsoft, in conjunction with the Azure SQL Database.

Getting Ready

The database used in this recipe is the Northwind sample database from Microsoft. It can be downloaded from CodePlex at http://northwinddatabase.codeplex.com/.

How to do it…

We are going to connect to the SQL Database using Entity Framework and perform various operations on data. We will do this using the following steps: Add a new class named EFConnectionExample to the project. Add a new ADO.NET Entity Data Model named Northwind.edmx to the project; the Entity Data Model Wizard window will open. Choose Generate from database in the Choose Model Contents step. In the Choose Your Data Connection step, select the Northwind connection from the dropdown or create a new connection if it is not shown. Save the connection settings in the App.config file for later use and name the setting NorthwindEntities.
If VS asks for the version of EF to use, select the most recent one. In the last step, choose the object to include in the model. Select the Tables, Views, Stored Procedures, and Functions checkboxes. Add the following method, retrieving every CompanyName, to the class: private IEnumerable<string> NamesOfCustomerCompanies() { using (var ctx = new NorthwindEntities()) { return ctx.Customers .Select(p => p.CompanyName).ToArray(); } } Add the following method, updating every customer located in Italy, to the class: private void UpdateItalians() { using (var ctx = new NorthwindEntities()) { ctx.Customers.Where(p => p.Country == "Italy") .ToList().ForEach(p => p.City = "Milan"); ctx.SaveChanges(); } } Add the following method, inserting a new order for the first Italian company alphabetically, to the class: private int FirstItalianPlaceOrder() { using (var ctx = new NorthwindEntities()) { var order = new Orders() { EmployeeID = 1, OrderDate = DateTime.UtcNow, ShipAddress = "My Address", ShipCity = "Milan", ShipCountry = "Italy", ShipName = "Good Ship", ShipPostalCode = "20100" }; ctx.Customers.Where(p => p.Country == "Italy") .OrderBy(p=>p.CompanyName) .First().Orders.Add(order); ctx.SaveChanges(); return order.OrderID; } } Add the following method, removing the previously inserted order, to the class: private void RemoveTheFunnyOrder(int orderId) { using (var ctx = new NorthwindEntities()) { var order = ctx.Orders .FirstOrDefault(p => p.OrderID == orderId); if (order != null) ctx.Orders.Remove(order); ctx.SaveChanges(); } } Add the following method, using the methods added earlier, to the class: public static void UseEFConnectionExample() { var example = new EFConnectionExample(); var customers=example.NamesOfCustomerCompanies(); foreach (var customer in customers) { Console.WriteLine(customer); } example.UpdateItalians(); var order=example.FirstItalianPlaceOrder(); example.RemoveTheFunnyOrder(order); } How it works… This recipe uses EF to connect and operate on a SQL Database. In step 1, we create a class that contains the recipe, and in step 2, we open the wizard for the creation of Entity Data Model (EDMX). We create the model, starting from an existing database in step 3 (it is also possible to write our own model and then persist it in an empty database), and then, we select the connection in step 4. In fact, there is no reference in the entire code to the Windows Azure SQL Database. The only reference should be in the App.config settings created in step 5; this can be changed to point to a SQL Server instance, leaving the code untouched. The last step of the EDMX creation consists of concrete mapping between the relational table and the object model, as shown in step 6. This method generates the code classes that map the table schema, using strong types and collections referred to as Navigation properties. It is also possible to start from the code, writing the classes that could represent the database schema. This method is known as Code-First. In step 7, we ask for every CompanyName of the Customers table. Every table in EF is represented by DbSet<Type>, where Type is the class of the entity. In steps 7 and 8, Customers is DbSet<Customers>, and we use a lambda expression to project (select) a property field and another one to create a filter (where) based on a property value. The SaveChanges method in step 8 persists to the database the changes detected in the disconnected object data model. This magic is one of the purposes of an ORM tool. 
In step 9, we use the navigation property (relationship) between a Customers object and the Orders collection (table) to add a new order with sample data. We use the OrderBy extension method to order the results by the specified property, and finally, we save the newly created item. Even now, EF automatically keeps track of the newly added item. Additionally, after the SaveChanges method, EF populates the identity field of Order (OrderID) with the actual value created by the database engine. In step 10, we use the previously obtained OrderID to remove the corresponding order from the database. We use the FirstOrDefault() method to test the existence of the ID, and then, we remove the resulting object like we removed an object from a plain old collection. In step 11, we use the methods created to run the demo and show the results. Deploying a Website Creating a Website is an administrative task, which is performed in the Azure Portal in the same way we provision every other building block. The Website created is like a "deployment slot", or better, "web space", since the abstraction given to the user is exactly that. Azure Websites does not require additional knowledge compared to an old-school hosting provider, where FTP was the standard for the deployment process. Actually, FTP is just one of the supported deployment methods in Websites, since Web Deploy is probably the best choice for several scenarios. Web Deploy is a Microsoft technology used for copying files and provisioning additional content and configuration to integrate the deployment process. Web Deploy runs on HTTP and HTTPS with basic (username and password) authentication. This makes it a good choice in networks where FTP is forbidden or the firewall rules are strict. Some time ago, Microsoft introduced the concept of Publish Profile, an XML file containing all the available deployment endpoints of a particular website that, if given to Visual Studio or Web Matrix, could make the deployment easier. Every Azure Website comes with a publish profile with unique credentials, so one can distribute it to developers without giving them grants on the Azure Subscription. Web Matrix is a client tool of Microsoft, and it is useful to edit live sites directly from an intuitive GUI. It uses Web Deploy to provide access to the remote filesystem as to perform remote changes. In Websites, we can host several websites on the same server farm, making administration easier and isolating the environment from the neighborhood. Moreover, virtual directories can be defined from the Azure Portal, enabling complex scenarios or making migrations easier. In this recipe, we will cope with the deployment process, using FTP and Web Deploy with some variants. Getting ready This recipe assumes we have and FTP client installed on the local machine (for example, FileZilla) and, of course, a valid Azure Subscription. We also need Visual Studio 2013 with the latest Azure SDK installed (at the time of writing, SDK Version 2.3). How to do it… We are going to create a new Website, create a new ASP.NET project, deploy it through FTP and Web Deploy, and also use virtual directories. We do this as follows: Create a new Website in the Azure Portal, specifying the following details: The URL prefix (that is, TestWebSite) is set to [prefix].azurewebsites.net The Web Hosting Plan (create a new one) The Region/Location (select West Europe) Click on the newly created Website and go to the Dashboard tab. 
Click on Download the publish profile and save it on the local computer. Open Visual Studio and create a new ASP.NET web application named TestWebSite, with an empty template and web forms' references. Add a sample Default.aspx page to the project and paste into it the following HTML: <h1>Root Application</h1> Press F5 and test whether the web application is displayed correctly. Create a local publish target. Right-click on the project and select Publish. Select Custom and specify Local Folder. In the Publish method, select File System and provide a local folder where Visual Studio will save files. Then click on Publish to complete. Publish via FTP. Open FileZilla and then open the Publish profile (saved in step 3) with a text editor. Locate the FTP endpoint and specify the following: publishUrl as the Host field username as the Username field userPWD as the Password field Delete the hostingstart.html file that is already present on the remote space. When we create a new Azure Website, there is a single HTML file in the root folder by default, which is served to the clients as the default page. By leaving it in the Website, the file could be served after users' deployments as well if no valid default documents are found. Drag-and-drop all the contents of the local folder with the binaries to the remote folder, then run the website. Publish via Web Deploy. Right-click on the Project and select Publish. Go to the Publish Web wizard start and select Import, providing the previously downloaded Publish Profile file. When Visual Studio reads the Web Deploy settings, it populates the next window. Click on Confirm and Publish the web application. Create an additional virtual directory. Go to the Configure tab of the Website on the Azure Portal. At the bottom, in the virtual applications and directories, add the following: /app01 with the path siteapp01 Mark it as Application Open the Publish Profile file and duplicate the <publishProfile> tag with the method FTP, then edit the following: Add the suffix App01 to profileName Replace wwwroot with app01 in publishUrl Create a new ASP.NET web application called TestWebSiteApp01 and create a new Default.aspx page in it with the following code: <h1>App01 Application</h1> Right-click on the TestWebSiteApp01 project and Publish. Select Import and provide the edited Publish Profile file. In the first step of the Publish Web wizard (go back if necessary), select the App01 method and select Publish. Run the Website's virtual application by appending the /app01 suffix to the site URL. How it works... In step 1, we create the Website on the Azure Portal, specifying the minimal set of parameters. If the existing web hosting plan is selected, the Website will start in the specified tier. In the recipe, by specifying a new web hosting plan, the Website is created in the free tier with some limitations in configuration. The recipe uses the Azure Portal located at https://manage.windowsazure.com. However, the new Azure Portal will be at https://portal.azure.com. New features will be probably added only in the new Portal. In steps 2 and 3, we download the Publish Profile file, which is an XML containing the various endpoints to publish the Website. At the time of writing, Web Deploy and FTP are supported by default. In steps 4, 5, and 6, we create a new ASP.NET web application with a sample ASPX page and run it locally. In steps 7, 8, and 9, we publish the binaries of the Website, without source code files, into a local folder somewhere in the local machine. 
This unit of deployment (the folder) can be sent across the wire via FTP, as we do in steps 10 to 13, using the credentials and the hostname available in the Publish Profile file. In steps 14 to 16, we use the Publish Profile file directly from Visual Studio, which recognizes the different methods of deployment and suggests Web Deploy as the default one. If we perform steps 10 to 13, then with steps 14 to 16 we overwrite the existing deployment. Actually, Web Deploy compares the target files with the ones to deploy, making the deployment incremental for those files that have been modified or added. This is extremely useful to avoid unnecessary transfers and to save bandwidth. In steps 17 and 18, we configure a new Virtual Application, specifying its name and location. We can use an FTP client to browse the root folder of a website endpoint, since there are several folders such as wwwroot, locks, diagnostics, and deployments. In step 19, we manually edit the Publish Profile file to support a second FTP endpoint, pointing to the new folder of the Virtual Application. Visual Studio will correctly understand this while parsing the file again in step 22, showing the new deployment option. Finally, we verify whether there are two applications: one on the root folder / and one on the /app01 alias. There's more… Suppose we need to edit the website on the fly, editing a CSS or JS file or editing the HTML somewhere. We can do this using Web Matrix, which is available from the Azure Portal itself through a ClickOnce installation: Go to the Dashboard tab of the Website and click on WebMatrix at the bottom. Follow the instructions to install the software (if not yet installed) and, when it opens, select Edit live site directly (the magic is done through the Publish Profile file and Web Deploy). In the left-side tree, edit the Default.aspx file, and then save and run the Website again. Azure Websites gallery Since Azure Websites is a PaaS service, with no lock-in or particular knowledge or framework required to run it, it can host several open source CMSes in different languages. Azure provides a set of built-in web applications to choose from while creating a new website. This is probably not the best choice for production environments; however, for testing or development purposes, it should be a faster option than starting from scratch. Wizards have been, for a while, the primary resources for developers to quickly start off projects and speed up the process of creating complex environments. However, the Websites gallery creates instances of well-known CMSes with predefined configurations. Instead, production environments are manually crafted, customizing each aspect of the installation. To create a new Website using the gallery, proceed as follows: Create a new Website, specifying from gallery. Select the web application to deploy and follow the optional configuration steps. If we create some resources (like databases) while using the gallery, they will be linked to the site in the Linked Resources tab.
Building a simple cache for applications Azure Cache is a managed service with (at the time of writing this book) the following three offerings: Basic: This service has a unit size of 128 MB, up to 1 GB with one named cache (the default one) Standard: This service has a unit size of 1 GB, up to 10 GB with 10 named caches and support for notifications Premium: This service has a unit size of 5 GB, up to 150 GB with ten named caches, support for notifications, and high availability Different offerings have different unit prices, and remember that when changing from one offering to another, all the cache data is lost. In all offerings, users can define the items' expiration. The Cache service listens to a specific TCP port. Accessing it from a .NET application is quite simple, with the Microsoft ApplicationServer Caching library available on NuGet. In the Microsoft.ApplicationServer.Caching namespace, the following are all the classes that are needed to operate: DataCacheFactory: This class is responsible for instantiating the Cache proxies to interpret the configuration settings. DataCache: This class is responsible for the read/write operation against the cache endpoint. DataCacheFactoryConfiguration: This is the model class of the configuration settings of a cache factory. Its usage is optional as cache can be configured in the App/Web.config file in a specific configuration section. Azure Cache is a key-value cache. We can insert and even get complex objects with arbitrary tree depth using string keys to locate them. The importance of the key is critical, as in a single named cache, only one object can exist for a given key. The architects and developers should have the proper strategy in place to deal with unique (and hierarchical) names. Getting ready This recipe assumes that we have a valid Azure Cache endpoint of the standard type. We need the standard type because we use multiple named caches, and in later recipes, we use notifications. We can create a Standard Cache endpoint of 1 GB via PowerShell. Perform the following steps to create the Standard Cache endpoint : Open the Azure PowerShell and type Add-AzureAccount. A popup window might appear. Type your credentials connected to a valid Azure subscription and continue. Optionally, select the proper Subscription, if not the default one. Type this command to create a new Cache endpoint, replacing myCache with the proper unique name: New-AzureManagedCache -Name myCache -Location "West Europe" -Sku Standard -Memory 1GB After waiting for some minutes until the endpoint is ready, go to the Azure Portal and look for the Manage Keys section to get one of the two Access Keys of the Cache endpoint. In the Configure section of the Cache endpoint, a cache named default is created by default. In addition, create two named caches with the following parameters: Expiry Policy: Absolute Time: 10 Notifications: Enabled Expiry Policy could be Absolute (the default expiration time or the one set by the user is absolute, regardless of how many times the item has been accessed), Sliding (each time the item has been accessed, the expiration timer resets), or Never (items do not expire). This Azure Cache endpoint is now available in the Management Portal, and it will be used in the entire article. How to do it… We are going to create a DataCache instance through a code-based configuration. We will perform simple operations with Add, Get, Put, and Append/Prepend, using a secondary-named cache to transfer all the contents of the primary one. 
We will do this by performing the following steps: Add a new class named BuildingSimpleCacheExample to the project. Install the Microsoft.WindowsAzure.Caching NuGet package. Add the following using statement to the top of the class file: using Microsoft.ApplicationServer.Caching; Add the following private members to the class: private DataCacheFactory factory = null; private DataCache cache = null; Add the following constructor to the class: public BuildingSimpleCacheExample(string ep, string token,string cacheName) { DataCacheFactoryConfiguration config = new DataCacheFactoryConfiguration(); config.AutoDiscoverProperty = new DataCacheAutoDiscoverProperty(true, ep); config.SecurityProperties = new DataCacheSecurity(token, true); factory = new DataCacheFactory(config); cache = factory.GetCache(cacheName); } Add the following method, creating a palindrome string into the cache: public void CreatePalindromeInCache() { var objKey = "StringArray"; cache.Put(objKey, ""); char letter = 'A'; for (int i = 0; i < 10; i++) { cache.Append(objKey, char.ConvertFromUtf32((letter+i))); cache.Prepend(objKey, char.ConvertFromUtf32((letter + i))); } Console.WriteLine(cache.Get(objKey)); } Add the following method, adding an item into the cache to analyze its subsequent retrievals: public void AddAndAnalyze() { var randomKey = DateTime.Now.Ticks.ToString(); var value="Cached string"; cache.Add(randomKey, value); DataCacheItem cacheItem = cache.GetCacheItem(randomKey); Console.WriteLine(string.Format( "Item stored in {0} region with {1} expiration", cacheItem.RegionName,cacheItem.Timeout)); cache.Put(randomKey, value, TimeSpan.FromSeconds(60)); cacheItem = cache.GetCacheItem(randomKey); Console.WriteLine(string.Format( "Item stored in {0} region with {1} expiration", cacheItem.RegionName, cacheItem.Timeout)); var version = cacheItem.Version; var obj = cache.GetIfNewer(randomKey, ref version); if (obj == null) { //No updates } } Add the following method, transferring the contents of the cache named initially into a second one: public void BackupToDestination(string destCacheName) { var destCache = factory.GetCache(destCacheName); var dump = cache.GetSystemRegions() .SelectMany(p => cache.GetObjectsInRegion(p)) .ToDictionary(p=>p.Key,p=>p.Value); foreach (var item in dump) { destCache.Put(item.Key, item.Value); } } Add the following method to clear the cache named first: public void ClearCache() { cache.Clear(); } Add the following method, using the methods added earlier, to the class: public static void RunExample() { var cacheName = "[named cache 1]"; var backupCache = "[named cache 2]"; string endpoint = "[cache endpoint]"; string token = "[cache token/key]"; BuildingSimpleCacheExample example = new BuildingSimpleCacheExample(endpoint, token, cacheName); example.CreatePalindromeInCache(); example.AddAndAnalyze(); example.BackupToDestination(backupCache); example.ClearCache(); } How it works... From steps 1 to 3, we set up the class. In step 4, we add private members to store the DataCacheFactory object used to create the DataCache object to access the Cache service. In the constructor that we add in step 5, we initialize the DataCacheFactory object using a configuration model class (DataCacheFactoryConfiguration). This strategy is for code-based initialization whenever settings cannot stay in the App.config/Web.config file. In step 6, we use the Put() method to write an empty string into the StringArray bucket. 
We then use the Append() and Prepend() methods, designed to concatenate strings to existing strings, to build a palindrome string in the memory cache. This sample does not make any sense in real-world scenarios, and we must pay attention to some of the following issues: Writing an empty string into the cache is somehow useless. Each Append() or Prepend() operation travels on TCP to the cache and goes back. Though it is very simple, it requires resources, and we should always try to consolidate calls. In step 7, we use the Add() method to add a string to the cache. The difference between the Add() and Put() methods is that the first method throws an exception if the item already exists, while the second one always overwrites the existing value (or writes it for the first time). GetCacheItem() returns a DataCacheItem object, which wraps the value together with other metadata properties, such as the following: CacheName: This is the named cache where the object is stored. Key: This is the key of the associated bucket. RegionName (user defined or system defined): This is the region of the cache where the object is stored. Size: This is the size of the object stored. Tags: These are the optional tags of the object, if it is located in a user-defined region. Timeout: This is the current timeout before the object would expire. Version: This is the version of the object. This is a DataCacheItemVersion object whose properties are not accessible due to their modifier. However, it is not important to access this property, as the Version object is used as a token against the Cache service to implement the optimistic concurrency. As for the timestamp value, its semantic can stay hidden from developers. The first Add() method does not specify a timeout for the object, leaving the default global expiration timeout, while the next Put() method does, as we can check in the next Get() method. We finally ask the cache about the object with the GetIfNewer() method, passing the latest version token we have. This conditional Get method returns null if the object we own is already the latest one. In step 8, we list all the keys of the first named cache, using the GetSystemRegions() method (to first list the system-defined regions), and for each region, we ask for their objects, copying them into the second named cache. In step 9, we clear all the contents of the first cache. In step 10, we call the methods added earlier, specifying the Cache endpoint to connect to and the token/password, along with the two named caches in use. Replace [named cache 1], [named cache 2], [cache endpoint], and [cache token/key] with actual values. There's more… Code-based configuration is useful when the settings stay in a different place as compared to the default config files for .NET. 
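As a sketch of that scenario, the factory can be configured entirely in code while still reading the raw values from an external source, here the appSettings section. The key names (CacheEndpoint, CacheToken) are our own convention and not part of the recipe, and the two tuning properties at the end are assumed to be available on DataCacheFactoryConfiguration; omit them if your library version differs.

```csharp
using System;
using System.Configuration;
using Microsoft.ApplicationServer.Caching;

public static DataCache CreateCacheFromSettings(string cacheName)
{
    // Values come from appSettings (or any external source) instead of being hardcoded.
    string endpoint = ConfigurationManager.AppSettings["CacheEndpoint"];
    string token = ConfigurationManager.AppSettings["CacheToken"];

    DataCacheFactoryConfiguration config = new DataCacheFactoryConfiguration();
    config.AutoDiscoverProperty = new DataCacheAutoDiscoverProperty(true, endpoint);
    config.SecurityProperties = new DataCacheSecurity(token, true);

    // Assumed optional knobs; remove if not present in your library version.
    config.MaxConnectionsToServer = 4;
    config.RequestTimeout = TimeSpan.FromSeconds(10);

    return new DataCacheFactory(config).GetCache(cacheName);
}
```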
It is not a best practice to hardcode them, so this is the standard way to declare them in the App.config file: <configSections> <section name="dataCacheClients" type="Microsoft.ApplicationServer.Caching.DataCacheClientsSection, Microsoft.ApplicationServer.Caching.Core" allowLocation="true" allowDefinition="Everywhere" /> </configSections> The XML mentioned earlier declares a custom section, which should be as follows: <dataCacheClients> <dataCacheClient name="[name of cache]"> <autoDiscover isEnabled="true" identifier="[domain of cache]" /> <securityProperties mode="Message" sslEnabled="true"> <messageSecurity authorizationInfo="[token of endpoint]" /> </securityProperties> </dataCacheClient> </dataCacheClients> In the upcoming recipes, we will use this convention to set up the DataCache objects. ASP.NET Support With almost no effort, the Azure Cache can be used as Output Cache in ASP.NET to save the session state. To enable this, in addition to the configuration mentioned earlier, we need to include those declarations in the <system.web> section as follows: <sessionState mode="Custom" customProvider="AFCacheSessionStateProvider"> <providers> <add name="AFCacheSessionStateProvider" type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache" cacheName="[named cache]" dataCacheClientName="[name of cache]" applicationName="AFCacheSessionState"/> </providers> </sessionState> <caching> <outputCache defaultProvider="AFCacheOutputCacheProvider"> <providers> <add name="AFCacheOutputCacheProvider" type="Microsoft.Web.DistributedCache.DistributedCacheOutputCacheProvider, Microsoft.Web.DistributedCache" cacheName="[named cache]" dataCacheClientName="[name of cache]" applicationName="AFCacheOutputCache" /> </providers> </outputCache> </caching> The difference between [name of cache] and [named cache] is as follows: The [name of cache] part is a friendly name of the cache client declared above an alias. The [named cache] part is the named cache created into the Azure Cache service. Connecting to the Azure Storage service In an Azure Cloud Service, the storage account name and access key are stored in the service configuration file. By convention, the account name and access key for data access are provided in a setting named DataConnectionString. The account name and access key needed for Azure Diagnostics must be provided in a setting named Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString. The DataConnectionString setting must be declared in the ConfigurationSettings section of the service definition file. However, unlike other settings, the connection string setting for Azure Diagnostics is implicitly defined when the Diagnostics module is specified in the Imports section of the service definition file. Consequently, it must not be specified in the ConfigurationSettings section. A best practice is to use different storage accounts for application data and diagnostic data. This reduces the possibility of application data access being throttled by competition for concurrent writes from the diagnostics monitor. What is Throttling? In shared services, where the same resources are shared between tenants, limiting the concurrent access to them is critical to provide service availability. If a client misuses the service or, better, generates a huge amount of traffic, other tenants pointing to the same shared resource could experience unavailability. 
Throttling (also known as Traffic Control plus Request Cutting) is one of the most adopted solutions that is solving this issue. It also provides a security boundary between application data and diagnostics data, as diagnostics data might be accessed by individuals who should have no access to application data. In the Azure Storage library, access to the storage service is through one of the Client classes. There is one Client class for each Blob service, Queue service, and Table service; they are CloudBlobClient, CloudQueueClient, and CloudTableClient, respectively. Instances of these classes store the pertinent endpoint as well as the account name and access key. The CloudBlobClient class provides methods to access containers, list their contents, and get references to containers and blobs. The CloudQueueClient class provides methods to list queues and get a reference to the CloudQueue instance that is used as an entry point to the Queue service functionality. The CloudTableClient class provides methods to manage tables and get the TableServiceContext instance that is used to access the WCF Data Services functionality while accessing the Table service. Note that the CloudBlobClient, CloudQueueClient, and CloudTableClient instances are not thread safe, so distinct instances should be used when accessing these services concurrently. The client classes must be initialized with the account name, access key, as well as the appropriate storage service endpoint. The Microsoft.WindowsAzure namespace has several helper classes. The StorageCredential class initializes an instance from an account name and access key or from a shared access signature. In this recipe, we'll learn how to use the CloudBlobClient, CloudQueueClient, and CloudTableClient instances to connect to the storage service. Getting ready This recipe assumes that the application's configuration file contains the following: <appSettings> <add key="DataConnectionString" value="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}"/> <add key="AccountName" value="{ACCOUNT_NAME}"/> <add key="AccountKey" value="{ACCOUNT_KEY}"/> </appSettings> We must replace {ACCOUNT_NAME} and {ACCOUNT_KEY} with appropriate values for the storage account name and access key, respectively. We are not working in a Cloud Service but in a simple console application. Storage services, like many other building blocks of Azure, can also be used separately from on-premise environments. How to do it... We are going to connect to the Table service, the Blob service, and the Queue service, and perform a simple operation on each. We will do this using the following steps: Add a new class named ConnectingToStorageExample to the project. Add the following using statements to the top of the class file: using Microsoft.WindowsAzure.Storage; using Microsoft.WindowsAzure.Storage.Blob; using Microsoft.WindowsAzure.Storage.Queue; using Microsoft.WindowsAzure.Storage.Table; using Microsoft.WindowsAzure.Storage.Auth; using System.Configuration; The System.Configuration assembly should be added via the Add Reference action onto the project, as it is not included in most of the project templates of Visual Studio. 
Add the following method, connecting the blob service, to the class: private static void UseCloudStorageAccountExtensions() { CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse( ConfigurationManager.AppSettings[ "DataConnectionString"]); CloudBlobClient cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient(); CloudBlobContainer cloudBlobContainer = cloudBlobClient.GetContainerReference( "{NAME}"); cloudBlobContainer.CreateIfNotExists(); } Add the following method, connecting the Table service, to the class: private static void UseCredentials() { string accountName = ConfigurationManager.AppSettings[ "AccountName"]; string accountKey = ConfigurationManager.AppSettings[ "AccountKey"]; StorageCredentials storageCredentials = new StorageCredentials( accountName, accountKey); CloudStorageAccount cloudStorageAccount = new CloudStorageAccount(storageCredentials, true); CloudTableClient tableClient = new CloudTableClient( cloudStorageAccount.TableEndpoint, storageCredentials); CloudTable table = tableClient.GetTableReference("{NAME}"); table.CreateIfNotExists(); } Add the following method, connecting the Queue service, to the class: private static void UseCredentialsWithUri() { string accountName = ConfigurationManager.AppSettings[ "AccountName"]; string accountKey = ConfigurationManager.AppSettings[ "AccountKey"]; StorageCredentials storageCredentials = new StorageCredentials( accountName, accountKey); StorageUri baseUri = new StorageUri(new Uri(string.Format( "https://{0}.queue.core.windows.net/", accountName))); CloudQueueClient cloudQueueClient = new CloudQueueClient(baseUri, storageCredentials); CloudQueue cloudQueue = cloudQueueClient.GetQueueReference("{NAME}"); cloudQueue.CreateIfNotExists(); } Add the following method, using the other methods, to the class: public static void UseConnectionToStorageExample() { UseCloudStorageAccountExtensions(); UseCredentials(); UseCredentialsWithUri(); } How it works... In steps 1 and 2, we set up the class. In step 3, we implement the standard way to access the storage service using the Storage Client library. We use the static CloudStorageAccount.Parse() method to create a CloudStorageAccount instance from the value of the connection string stored in the configuration file. We then use this instance with the CreateCloudBlobClient() extension method of the CloudStorageAccount class to get the CloudBlobClient instance that we use to connect to the Blob service. We can also use this technique with the Table service and the Queue service, using the relevant extension methods, CreateCloudTableClient() and CreateCloudQueueClient(), respectively, for them. We complete this example using the CloudBlobClient instance to get a CloudBlobContainer reference to a container and then create it if it does not exist We need to replace {NAME} with the name for a container. In step 4, we create a StorageCredentials instance directly from the account name and access key. We then use this to construct a CloudStorageAccount instance, specifying that any connection should use HTTPS. Using this technique, we need to provide the Table service endpoint explicitly when creating the CloudTableClient instance. We then use this to create the table. We need to replace {NAME} with the name of a table. We can use the same technique with the Blob service and Queue service using the relevant CloudBlobClient or CloudQueueClient constructor. 
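The same technique can be sketched for the Blob service as follows. It assumes the same class and using statements as the recipe, and the shared access signature shown in the comment is just a placeholder.

```csharp
private static void UseCredentialsWithBlobService()
{
    string accountName = ConfigurationManager.AppSettings["AccountName"];
    string accountKey = ConfigurationManager.AppSettings["AccountKey"];
    StorageCredentials storageCredentials = new StorageCredentials(accountName, accountKey);
    CloudStorageAccount cloudStorageAccount = new CloudStorageAccount(storageCredentials, true);

    // Same pattern as step 4, pointed at the Blob endpoint instead of the Table endpoint.
    CloudBlobClient cloudBlobClient = new CloudBlobClient(
        cloudStorageAccount.BlobEndpoint, storageCredentials);

    CloudBlobContainer cloudBlobContainer = cloudBlobClient.GetContainerReference("{NAME}");
    cloudBlobContainer.CreateIfNotExists();

    // StorageCredentials can also be initialized from a shared access signature
    // instead of the account key, for example:
    // StorageCredentials sasCredentials = new StorageCredentials("?sv=...&sig=...");
}
```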
In step 5, we use a similar technique, except that we avoid the intermediate step of using a CloudStorageAccount instance and explicitly provide the endpoint for the Queue service. We use the CloudQueueClient instance created in this step to create the queue. We need to replace {NAME} with the name of a queue. Note that we hardcoded the endpoint for the Queue service. Though this last method is officially supported, it is not a best practice to bind our code to hardcoded strings with endpoint URIs. So, it is preferable to use one of the previous methods that hides the complexity of the URI generation at the library level. In step 6, we add a method that invokes the methods added in the earlier steps. There's more… With the general availability of the .NET Framework Version 4.5, many libraries of the CLR have been added with the support of asynchronous methods with the Async/Await pattern. Latest versions of the Azure Storage Library also have these overloads, which are useful while developing mobile applications, and fast web APIs. They are generally useful when it is needed to combine the task execution model into our applications. Almost each long-running method of the library has its corresponding methodAsync() method to be called as follows: await cloudQueue.CreateIfNotExistsAsync(); In the rest of the book, we will continue to use the standard, synchronous pattern. Adding messages to a Storage queue The CloudQueue class in the Azure Storage library provides both synchronous and asynchronous methods to add a message to a queue. A message comprises up to 64 KB bytes of data (48 KB if encoded in Base64). By default, the Storage library Base64 encodes message content to ensure that the request payload containing the message is valid XML. This encoding adds overhead that reduces the actual maximum size of a message. A message for a queue should not be intended to transport a big payload, since the purpose of a Queue is just messaging and not storing. If required, a user can store the payload in a Blob and use a Queue message to point to that, letting the receiver fetch the message along with the Blob from its remote location. Each message added to a queue has a time-to-live property after which it is deleted automatically. The maximum and default time-to-live value is 7 days. In this recipe, we'll learn how to add messages to a queue. Getting ready This recipe assumes the following code is in the application configuration file: <appSettings> <add key="DataConnectionString" value="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}"/> </appSettings> We must replace {ACCOUNT_NAME} and {ACCOUNT_KEY} with appropriate values of the account name and access key. How to do it... We are going to create a queue and add some messages to it. We do this as follows: Add a new class named AddMessagesOnStorageExample to the project. 
Install the WindowsAzure.Storage NuGet package and add the following assembly reference to the project: System.Configuration. Add the following using statements to the top of the class file:
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;
using System.Configuration;
Add the following private member to the class:
private CloudQueue cloudQueue;
Add the following constructor to the class:
public AddMessagesOnStorageExample(String queueName)
{
    CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(
        ConfigurationManager.AppSettings["DataConnectionString"]);
    CloudQueueClient cloudQueueClient = cloudStorageAccount.CreateCloudQueueClient();
    cloudQueue = cloudQueueClient.GetQueueReference(queueName);
    cloudQueue.CreateIfNotExists();
}
Add the following method to the class, adding three messages:
public void AddMessages()
{
    String content1 = "Do something";
    CloudQueueMessage message1 = new CloudQueueMessage(content1);
    cloudQueue.AddMessage(message1);
    String content2 = "Do something that expires in 1 day";
    CloudQueueMessage message2 = new CloudQueueMessage(content2);
    cloudQueue.AddMessage(message2, TimeSpan.FromDays(1.0));
    String content3 = "Do something that expires in 2 hours," +
        " starting in 1 hour from now";
    CloudQueueMessage message3 = new CloudQueueMessage(content3);
    cloudQueue.AddMessage(message3, TimeSpan.FromHours(2), TimeSpan.FromHours(1));
}
Add the following method, which uses the AddMessages() method, to the class:
public static void UseAddMessagesExample()
{
    String queueName = "{QUEUE_NAME}";
    AddMessagesOnStorageExample example = new AddMessagesOnStorageExample(queueName);
    example.AddMessages();
}
How it works...
In steps 1 through 3, we set up the class. In step 4, we add a private member to store the CloudQueue object used to interact with the Queue service. We initialize this in the constructor we add in step 5, where we also create the queue. In step 6, we add a method that adds three messages to a queue. We create three CloudQueueMessage objects. We add the first message to the queue with the default time-to-live of seven days, the second is added specifying an expiration of 1 day, and the third becomes visible 1 hour after its entrance in the queue, with an absolute expiration of 2 hours. Note that a client (library) exception is thrown if we specify a visibility delay higher than the absolute TTL of the message; this is enforced at the client side instead of making a (failing) server call. In step 7, we add a method that invokes the methods we added earlier. We need to replace {QUEUE_NAME} with an appropriate name for a queue.
There's more…
To clear the queue of the messages we added in this recipe, we can call the Clear() method of the CloudQueue class as follows:
public void ClearQueue()
{
    cloudQueue.Clear();
}
Summary
In this article, we have covered several recipes that together give a complete overview of the software infrastructure that we need to set up on the cloud.
Resources for Article: Further resources on this subject: Backups in the VMware View Infrastructure [Article] vCloud Networks [Article] Setting Up a Test Infrastructure [Article]
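One closing addendum to the storage recipes: the There's more section on connecting to storage mentioned the Async/Await overloads introduced with .NET Framework 4.5. Purely as a sketch, assuming a Storage library version that exposes the *Async methods and the same class members as the queue recipe, the AddMessages() method could be rewritten asynchronously as follows:

```csharp
// Sketch only: asynchronous variant of AddMessages(); requires .NET 4.5+ and a
// Storage library version that exposes the *Async overloads.
public async System.Threading.Tasks.Task AddMessagesAsync()
{
    await cloudQueue.CreateIfNotExistsAsync();

    CloudQueueMessage message1 = new CloudQueueMessage("Do something");
    await cloudQueue.AddMessageAsync(message1);

    CloudQueueMessage message2 = new CloudQueueMessage("Do something that expires in 1 day");
    await cloudQueue.AddMessageAsync(message2, TimeSpan.FromDays(1.0), null, null, null);
}
```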

article-image-inkscape-live-path-effects
Packt
03 May 2011
9 min read

Inkscape: live path effects

The process of creating the building blocks can be fully automated using features known as Live Path Effects (LPEs). They are very useful, easy to use, and ubiquitous. In a nutshell, LPEs twist, replicate, transform, and apply many other modifications to shapes in a sequential and repeatable way. There are many different types of LPEs available for Inkscape 0.48, and even more are being developed. In this article we will cover some of the most interesting ones available for this release. Bending paths In this recipe we will go through the basic Live Path Effect options using the Bend path effect. We will create a simple rectangle and morph it into a waving flag. We will also bend some letter shapes directly and through linking to objects to use as bending paths. How to do it... The following steps show how to use bending paths: Create a rectangle using the Rectangle tool (F4 or R), set its fill to #D5D5FF, stroke to 60% Gray, and stroke width to 1. Open the Path Effect Editor by using Shift + Ctrl + 7 or by going to Path | Path Effect Editor.... Under the Apply new effect, choose Bend and press Add, this will apply the Bend LPE to the rectangle but the rectangle won't change as a result of it. We now have several options available under Current effect. Select the Edit on-canvas option , and notice the bending line on the rectangle that appeared. Click on the green line (the Bezier segment) and drag slightly to bend the rectangle. Adjust more precisely using handles on the end nodes, and click on a node to show the handles. Here is what we have so far: Duplicate the rectangle (Ctrl + D) and move it away. Notice that the duplicated object will keep the LPE applied. Switch to the Node tool (F2 or N) and notice the four corner nodes of the original rectangle appear. The rectangle was automatically converted to a path when applying the LPE. Turn on the Show path outline (without path effects) and Show Bezier handles of selected nodes to see the original shape and its node handles. Move the nodes with the Node tool and notice how the bent shape updates accordingly: Press the 7 key to switch to the edit LPE bending path mode; if more adjustments are necessary, switch back to the Node tool using F2 or N. Select the other rectangle (the one we got in Step 6), click on the Copy path button in the Path Effect Editor and paste somewhere on canvas (Ctrl + V or right-click and choose Paste from the pop-up menu). Remove the fill and add a stroke to the pasted path for more clarity. This is a copy of the rectangle bending path but it is in no way connected to the rectangle or its LPE. Create an "S" using the normal mode of the Pencil tool (F6 or P) with Smoothing: set to 50. Switch to the Selector tool (Space or F1 or S) and copy the "S" (Ctrl + C). Select the rectangle with Bend applied to it and press the Paste path button in the Path Effect Editor to bend our rectangle in the form of the "S". The shape might look a bit weird if our rectangle length and width are similar; this can be corrected in the Bend LPE by changing the Width value. In this case, reducing the width from 1 to 0.2 produces the desired results: Create a "C" using the normal mode of the Pencil tool (F6 or P) with Smoothing: set to 50 and copy it (Ctrl + C). Select one of the objects with Bend applied to it and press the Link to path button in the Path Effect Editor in order to bend our rectangle in the form of the "C". The LPE object will be positioned over the "C" and the "C" shape will still be available for editing. 
Change the Width value if necessary. In our case, a value of 0.2 proved adequate. Select the "C" shape and edit it using the Path tool (F2 or N). Notice how editing the "C" shape automatically updates the LPE linked to it: There's more... Unexpected results can happen if the shape we're trying to bend is vertical (taller than wide); the Inkscape developers, in their wisdom, have provided a special Original path is vertical option that will adapt the bending effect accordingly. Bending groups We bent only one object in this recipe but bending groups is also possible. Just make sure the objects within the group are converted to paths before the bending is applied. Stacking LPEs It's possible to apply more than one LPE to a single object. You can preview what the shape looks like with or without a particular LPE applied by toggling the "eye" icon in the Effect list. Removing Path Effects Live Path Effects change the shape of an object while keeping the original shape information intact (these changes are made "Live", as in real time). This means that LPEs can be undone, and the Remove Path Effect option under the Path menu does exactly that. You have to remove the object from the selection and then select it again to update the Path Effect Editor display. Using Pattern Along Path We have already used the Pattern Along Path effect without even knowing it by choosing a shape in the Shape: option of the Pen and Pencil tools (we told you LPEs are ubiqitous!). This LPE takes a path and applies it along another one, like a reverse Bend LPE. In this recipe we will learn another method to widen the outline of a tree trunk created using Spiro Spline and explore some other options of this LPE. How to do it... The following steps will show how to widen an outline using Pattern Along Path: Create a tree trunk using the Spiro spline mode with the Pencil (F6 or P) tool. Set the Smoothing: to 34 and the Shape: to Triangle in. Open the Path Effect Editor by using Shift + Ctrl + 7 or by going to Path | Path Effect Editor.... You will see two LPEs listed: Spiro spline and Pattern Along Path. Select the Pattern Along Path by clicking on it to get to its options. Use the arrow handles to widen the shape. Notice how this affects only the skeleton path and the "tree trunk" remains the same width because it's governed by the LPE shape. To make the "tree trunk" appear wider change the original triangle using the Edit on-canvas option of the Pattern source. It will appear as a green path near the top-left corner of the page. Move the top triangle corner upwards and the bottom one downwards. Notice how the "tree trunk" appears wider. There is also another way to achieve the same result—using the Width option. Change it to 5 and see how the "tree trunk" gets even wider. This time the original triangle shape isn't modified. Tick the Pattern is vertical option. Notice how the shape changes depending on what base triangle orientation is taken as the pattern. Change the Pattern copies from Single, stretched to Single. Notice we only see a small triangle at the beginning of the skeleton path because the original triangle shape is much smaller than the skeleton path, and we're only using a single one without stretching it. Change the Pattern copies to Repeated, stretched. The triangles are "copied" along the skeleton and positioned one next to the other. If there is a small part of the skeleton path not covered using the original triangle size they are stretched a bit to compensate for it. 
Since they are so small compared to the skeleton path there isn't much difference between using the Repeated, stretched and Repeated option. Notice the Pattern is vertical is still enabled. Untick the Pattern is vertical option. Notice the change in triangles. Change the Spacing option to 25. Notice how the triangles are now spaced out. Tick the Pattern is vertical option. The triangles change shape again based on the orientation but now the spacing between them is noticeable. How it works... The Pattern Along Path LPE uses one path as a pattern that is to be stretched and/or repeated along a skeleton path. Other than using the Shape: option of the Pen and Pencil tools we can create separate patterns and skeletons to be combined through the Path Effect Editor just like with any other LPE. The usual options to edit, copy, paste, and link are available, and both the pattern and the skeleton can be edited live. Pattern Along Path also has some specific options to determine how the pattern is applied along the skeleton. We can choose to stretch it or repeat it, therefore increasing the separation between the repeated items. There's more... Normal and Tangential offsets are also specific options that influence the positions of the patterns in relation to the skeleton path. Normal offset moves the patterns away from the skeleton, and the tangential offset moves them along its tangent. The difference between Pattern along Path and the other LPEs is that we can't use groups as patterns or skeletons. Pattern along Path extension Another (and older) way to achieve the same effect is by using the Pattern along Path extension found under the Extensions | Generate from Path menu. The extension also offers to choose between Snake and Ribbon as the Deformation type. Groups can be used as patterns, but the changes made to the pattern cannot be undone. The z-order determines which object is used as the pattern—the top-most one. The extension doesn't always give the same results as the LPE; the straight segments stay straight when the extension is used, but the LPE bends them.
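One final note on how Inkscape stores all of this: an LPE ends up as plain XML inside the SVG file, which is also why the original shape can always be recovered when the effect is removed. The fragment below is only illustrative; the element and attribute names are taken from Inkscape's SVG extensions, and the exact effect names and parameters vary between versions. The general idea is that the untouched source path is kept in inkscape:original-d, while d holds the computed result.

```xml
<!-- Illustrative only: roughly how an LPE appears in Inkscape's XML editor (Shift+Ctrl+X). -->
<defs>
  <inkscape:path-effect
     id="path-effect1"
     effect="bend_path"
     is_visible="true" />
</defs>
<path
   id="rect1"
   inkscape:path-effect="#path-effect1"
   inkscape:original-d="M 10,10 H 210 V 110 H 10 Z"
   d="(computed, bent path data)" />
```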

article-image-microsoft-technology-evangelist-matthew-weston-on-how-microsoft-powerapps-is-democratizing-app-development-interview
Bhagyashree R
04 Dec 2019
10 min read

Microsoft technology evangelist Matthew Weston on how Microsoft PowerApps is democratizing app development [Interview]

In recent years, we have seen a wave of app-building tools coming in that enable users to be creators. Another such powerful tool is Microsoft PowerApps, a full-featured low-code / no-code platform. This platform aims to empower “all developers” to quickly create business web and mobile apps that fit best with their use cases. Microsoft defines “all developers” as citizen developers, IT developers, and Pro developers. To know what exactly PowerApps is and its use cases, we sat with Matthew Weston, the author of Learn Microsoft PowerApps. Weston was kind enough to share a few tips for creating user-friendly and attractive UI/UX. He also shared his opinion on the latest updates in the Power Platform including AI Builder, Portals, and more. [box type="shadow" align="" class="" width=""] Further Learning Weston’s book, Learn Microsoft PowerApps will guide you in creating powerful and productive apps that will help you transform old and inefficient processes and workflows in your organization. In this book, you’ll explore a variety of built-in templates and understand the key difference between types of PowerApps such as canvas and model-driven apps. You’ll learn how to generate and integrate apps directly with SharePoint, gain an understanding of PowerApps key components such as connectors and formulas, and much more. [/box] Microsoft PowerApps: What is it and why is it important With the dawn of InfoPath, a tool for creating user-designed form templates without coding, came Microsoft PowerApps. Introduced in 2016, Microsoft PowerApps aims to democratize app development by allowing users to build cross-device custom business apps without writing code and provides an extensible platform to professional developers. Weston explains, “For me, PowerApps is a solution to a business problem. It is designed to be picked up by business users and developers alike to produce apps that can add a lot of business value without incurring large development costs associated with completely custom developed solutions.” When we asked how his journey started with PowerApps, he said, “I personally ended up developing PowerApps as, for me, it was an alternative to using InfoPath when I was developing for SharePoint. From there I started to explore more and more of its capabilities and began to identify more and more in terms of use cases for the application. It meant that I didn’t have to worry about controls, authentication, or integration with other areas of Office 365.” Microsoft PowerApps is a building block of the Power Platform, a collection of products for business intelligence, app development, and app connectivity. Along with PowerApps, this platform includes Power BI and Power Automate (earlier known as Flow). These sit on the top of the Common Data Service (CDS) that stores all of your structured business data. On top of the CDS, you have over 280 data connectors for connecting to popular services and on-premise data sources such as SharePoint, SQL Server, Office 365, Salesforce, among others. All these tools are designed to work together seamlessly. You can analyze your data with Power BI, then act on the data through web and mobile experiences with PowerApps, or automate tasks in the background using Power Automate. Explaining its importance, Weston said, “Like all things within Office 365, using a single application allows you to find a good solution. Combining multiple applications allows you to find a great solution, and PowerApps fits into that thought process. 
Within the Power Platform, you can really use PowerApps and Power Automate together in a way that provides a high-quality user interface with a powerful automation platform doing the heavy processing.” “Businesses should invest in PowerApps in order to maximize on the subscription costs which are already being paid as a result of holding Office 365 licenses. It can be used to build forms, and apps which will automatically integrate with the rest of Office 365 along with having the mobile app to promote flexible working,” he adds. What skills do you need to build with PowerApps PowerApps is said to be at the forefront of the “Low Code, More Power” revolution. This essentially means that anyone can build a business-centric application, it doesn’t matter whether you are in finance, HR, or IT. You can use your existing knowledge of PowerPoint or Excel concepts and formulas to build business apps. Weston shared his perspective, “I believe that PowerApps empowers every user to be able to create an app, even if it is only quite basic. Most users possess the ability to write formulas within Microsoft Excel, and those same skills can be transferred across to PowerApps to write formulas and create logic.” He adds, “Whilst the above is true, I do find that you need to have a logical mind in order to really create rich apps using the Power Platform, and this often comes more naturally to developers. Saying that, I have met people who are NOT developers, and have actually taken to the platform in a much better way.” Since PowerApps is a low-code platform, you might be wondering what value it holds for developers. We asked Weston what he thinks about developers investing time in learning PowerApps when there are some great cross-platform development frameworks like React Native or Ionic. Weston said, “For developers, and I am from a development background, PowerApps provides a quick way to be able to create apps for your users without having to write code. You can use formulae and controls which have already been created for you reducing the time that you need to invest in development, and instead you can spend the time creating solutions to business problems.” Along with enabling developers to rapidly publish apps at scale, PowerApps does a lot of heavy lifting around app architecture and also gives you real-time feedback as you make changes to your app. This year, Microsoft also released the PowerApps component framework which enables developers to code custom controls and use them in PowerApps. You also have tooling to use alongside the PowerApps component framework for a better end to end development experience: the new PowerApps CLI and Visual Studio plugins. On the new PowerApps capabilities: AI Builder, Portals, and more At this year’s Microsoft Ignite, the Power Platform team announced almost 400 improvements and new features to the Power Platform. The new AI Builder allows you to build and train your own AI model or choose from pre-built models. Another feature is PowerApps Portals using which organizations can create portals and share them with both internal and external users. We also have UI flows, currently a preview feature that provides Robotic Process Automation (RPA) capabilities to Power Automate. Weston said, “I love to use the Power Platform for integration tasks as much as creating front end interfaces, especially around Power Automate. 
I have created automations which have taken two systems, which won’t natively talk to each other, and process data back and forth without writing a single piece of code.” Weston is already excited to try UI flows, “I’m really looking forward to getting my hands on UI flows and seeing what I can create with those, especially for interacting with solutions which don’t have the usual web service interfaces available to them.” However, he does have some concerns regarding RPA. “My only reservation about this is that Robotic Process Automation, which this is effectively doing, is usually a deep specialism, so I’m keen to understand how this fits into the whole citizen developer concept,” he shared. Some tips for building secure, user-friendly, and optimized PowerApps When it comes to application development, security and responsiveness are always on the top in the priority list. Weston suggests, “Security is always my number one concern, as that drives my choice for the data source as well as how I’m going to structure the data once my selection has been made. It is always harder to consider security at the end, when you try to shoehorn a security model onto an app and data structure which isn’t necessarily going to accept it.” “The key thing is to never take your data for granted, secure everything at the data layer first and foremost, and then carry those considerations through into the front end. Office 365 employs the concept of security trimming, whereby if you don’t have access to something then you don’t see it. This will automatically apply to the data, but the same should also be done to the screens. If a user shouldn’t be seeing the admin screen, don’t even show them a link to get there in the first place,” he adds. For responsiveness, he suggests, “...if an app is slow to respond then the immediate perception about the app is negative. There are various things that can be done if working directly with the data source, such as pulling data into a collection so that data interactions take place locally and then sync in the background.” After security, another important aspect of building an app is user-friendly and attractive UI and UX. Sharing a few tips for enhancing your app's UI and UX functionality, Weston said, “When creating your user interface, try to find the balance between images and textual information. Quite often I see PowerApps created which are just line after line of text, and that immediately switches me off.” He adds, “It could be the best app in the world that does everything I’d ever want, but if it doesn’t grip me visually then I probably won’t be using it. Likewise, I’ve seen apps go the other way and have far too many images and not enough text. It’s a fine balance, and one that’s quite hard.” Sharing a few ways for optimizing your app performance, Weston said, “Some basic things which I normally do is to load data on the OnVisible event for my first screen rather than loading everything on the App OnStart. This means that the app can load and can then be pulling in data once something is on the screen. This is generally the way that most websites work now, they just get something on the screen to keep the user happy and then pull data in.” “Also, give consideration to the number of calls back and forth to the data source as this will have an impact on the app performance. Only read and write when I really need to, and if I can, I pull as much of the data into collections as possible so that I have that temporary cache to work with,” he added. 
About the author Matthew Weston is an Office 365 and SharePoint consultant based in the Midlands in the United Kingdom. Matthew is a passionate evangelist of Microsoft technology, blogging and presenting about Office 365 and SharePoint at every opportunity. He has been creating and developing with PowerApps since it became generally available at the end of 2016. Usually, Matthew can be seen presenting within the community about PowerApps and Flow at SharePoint Saturdays, PowerApps and Flow User Groups, and Office 365 and SharePoint User Groups. Matthew is a Microsoft Certified Trainer (MCT) and a Microsoft Certified Solutions Expert (MCSE) in Productivity. Check out Weston’s latest book, Learn Microsoft PowerApps on PacktPub. This book is a step-by-step guide that will help you create lightweight business mobile apps with PowerApps.  Along with exploring the different built-in templates and types of PowerApps, you will generate and integrate apps directly with SharePoint. The book will further help you understand how PowerApps can use several Microsoft Power Automate and Azure functionalities to improve your applications. Follow Matthew Weston on Twitter: @MattWeston365. Denys Vuika on building secure and performant Electron apps, and more Founder & CEO of Odoo, Fabien Pinckaers discusses the new Odoo 13 framework Why become an advanced Salesforce administrator: Enrico Murru, Salesforce MVP, Solution and Technical Architect [Interview]

article-image-working-client-object-model-microsoft-sharepoint
Packt
05 Oct 2011
9 min read

Working with Client Object Model in Microsoft Sharepoint

Microsoft SharePoint 2010 is the best-in-class platform for content management and collaboration. With Visual Studio, developers have an end-to-end business solutions development IDE. To leverage this powerful combination of tools it is necessary to understand the different building blocks of SharePoint. In this article by Balaji Kithiganahalli, author of Microsoft SharePoint 2010 Development with Visual Studio 2010 Expert Cookbook, we will cover: Creating a list using a Client Object Model Handling exceptions Calling Object Model asynchronously (For more resources on Microsoft Sharepoint, see here.) Introduction Since out-of-the-box web services does not provide the full functionality that the server model exposes, developers always end up creating custom web services for use with client applications. But there are situations where deploying custom web services may not be feasible. For example, if your company is hosting SharePoint solutions in a cloud environment where access to the root folder is not permitted. In such cases, developing client applications with new Client Object Model (OM) will become a very attractive proposition. SharePoint exposes three OMs which are as follows: Managed Silverlight JavaScript (ECMAScript) Each of these OMs provide object interface to functionality exposed in Microsoft. SharePoint namespace. While none of the object models expose the full functionality that the server-side object exposes, the understanding of server Object Models will easily translate for a developer to develop applications using an OM. A managed OM is used to develop custom .NET managed applications (service, WPF, or console applications). You can also use the OM for ASP.NET applications that are not running in the SharePoint context as well. A Silverlight OM is used by Silverlight client applications. A JavaScript OM is only available to applications that are hosted inside the SharePoint applications like web part pages or application pages. Even though each of the OMs provide different programming interfaces to build applications, behind the scenes, they all call a service called Client.svc to talk to SharePoint. This Client.svc file resides in the ISAPI folder. The service calls are wrapped around with an Object Model that developers can use to make calls to SharePoint server. This way, developers make calls to an OM and the calls are all batched together in XML format to send it to the server. The response is always received in JSON format which is then parsed and associated with the right objects. The basic architectural representation of the client interaction with the SharePoint server is as shown in the following image: The three Object Models come in separate assemblies. The following table provides the locations and names of the assemblies: Object OM Location Names Managed ISAPI folder Microsoft.SharePoint.Client.dll Microsoft.SharePoint.Client.Runtime.dll Silverlight LayoutsClientBin Microsoft.SharePoint.Client. Silverlight.dll Microsoft.SharePoint.Client. Silverlight.Runtime.dll JavaScript Layouts SP.js The Client Object Model can be downloaded as a redistributable package from the Microsoft download center at:http://www.microsoft.com/downloads/en/details.aspx?FamilyID=b4579045-b183-4ed4-bf61-dc2f0deabe47 OM functionality focuses on objects at the site collection and below. The main reason being that it will be used to enhance the end-user interaction. Hence the OM is a smaller subset of what is available through the server Object Model. 
In all three Object Models, main object names are kept the same, and hence the knowledge from one OM is easily portable to another. As indicated earlier, knowledge of server Object Models easily transfer development using client OM The following table shows some of the major objects in the OM and their equivalent names in the server OM: Client OM Server OM ClientContext SPContext Site SPSite Web SPWeb List SPList ListItem SPListItem Field SPField Creating a list using a Managed OM In this recipe, we will learn how to create a list using a Managed Object Model. We will also add a new column to the list and insert about 10 rows of data to the list. For this recipe, we will create a console application that makes use of a generic list template. Getting ready You can copy the DLLs mentioned earlier to your development machine. Your development machine need not have the SharePoint server installed. But you should be able to access one with proper permission. You also need Visual Studio 2010 IDE installed on the development machine. How to do it… In order to create a list using a Managed OM, adhere to the following steps: Launch your Visual Studio 2010 IDE as an administrator (right-click the shortcut and select Run as administrator). Select File | New | Project . The new project wizard dialog box will be displayed (make sure to select .NET Framework 3.5 in the top drop-down box). Select Windows Console application under the Visual C# | Windows | Console Application node from Installed Templates section on the left-hand side. Name the project OMClientApplication and provide a directory location where you want to save the project and click on OK to create the console application template. To add a references to Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll, go to the menu Project | Add Reference and navigate to the location where you copied the DLLs and select them as shown In the following screenshot: Now add the code necessary to create a list. A description field will also be added to our list. Your code should look like the following (make sure to change the URL passed to the ClientContext constructor to your environment): using Microsoft.SharePoint.Client;namespace OMClientApplication{ class Program { static void Main(string[] args) { using (ClientContext clientCtx = new ClientContext("http://intsp1")) { Web site = clientCtx.Web; // Create a list. ListCreationInformation listCreationInfo = new ListCreationInformation(); listCreationInfo.Title = "OM Client Application List"; listCreationInfo.TemplateType = (int)ListTemplateType.GenericList; listCreationInfo.QuickLaunchOption = QuickLaunchOptions.On; List list = site.Lists.Add(listCreationInfo); string DescriptionFieldSchema = "<Field Type='Note' DisplayName='Item Description' Name='Description' Required='True' MaxLength='500' NumLines='10' />"; list.Fields.AddFieldAsXml(DescriptionFieldSchema, true, AddFieldOptions.AddToDefaultContentType);// Insert 10 rows of data - Concat loop Id with "Item Number" string. for (int i = 1; i < 11; ++i) { ListItemCreationInformation listItemCreationInfo = new ListItemCreationInformation(); ListItem li = list.AddItem(listItemCreationInfo); li["Title"] = string.Format("Item number {0}",i); li["Item_x0020_Description"] = string.Format("Item number {0} from client Object Model", i); li.Update(); } clientCtx.ExecuteQuery(); Console.WriteLine("List creation completed"); Console.Read(); } } }} Build and execute the solution by pressing F5 or from the menu Debug | Start Debugging . 
This should bring up the command window with a message indicating that the List creation completed as shown in the following screenshot. Press Enter and close the command window. Navigate to your site to verify that the list has been created. The following screenshot shows the list with the new field and ten items inserted: (Move the mouse over the image to enlarge.) How it works... The first line of the code in the Main method is to create an instance of ClientContext class. The ClientContext instance provides information about the SharePoint server context in which we will be working. This is also the proxy for the server we will be working with. We passed the URL information to the context to get the entry point to that location. When you have access to the context instance, you can browse the site, web, and list objects of that location. You can access all the properties like Name , Title , Description , and so on. The ClientContext class implements the IDisposable interface, and hence you need to use the using statement. Without that you have to explicitly dispose the object. If you do not do so, your application will have memory leaks. For more information on disposing objects refer to MSDN at:http://msdn.microsoft.com/en-us/library/ee557362.aspx From the context we were able to obtain access to our site object on which we wanted to create the list. We provided list properties for our new list through the ListCreationInformation instance. Through the instance of ListCreationInformation, we set the values to list properties like name, the templates we want to use, whether the list should be shown in the quick launch bar or not, and so on. We added a new field to the field collection of the list by providing the field schema. Each of the ListItems are created by providing ListItemCreationInformation. The ListItemCreationInformation is similar to ListCreationInformation where you would provide information regarding the list item like whether it belongs to a document library or not, and so on. For more information on ListCreationInformation and ListItemCreationInformation members refer to MSDN at:http://msdn.microsoft.com/en-us/library/ee536774.aspx. All of this information is structured as an XML and batched together to send it to the server. In our case, we created a list and added a new field and about ten list items. Each of these would have an equivalent server-side call, and hence, all these multiple calls were batched together to send it to the server. The request is only sent to the server when we issue an ExecuteQuery or ExecuteQueryAsync method in the client context. The ExecuteQuery method creates an XML request and passes that to Client.svc. The application waits until the batch process on the server is completed and then returns back with the JSON response. Client.svc makes the server Object Model call to execute our request. There's more... By default, ClientContext instance uses windows authentication. It makes use of the windows identity of the person executing the application. Hence, the person running the application should have proper authorization on the site to execute the commands. Exceptions will be thrown if proper permissions are not available for the user executing the application. We will learn about handling exceptions in the next recipe. It also supports Anonymous and FBA (ASP.Net form based authentication) authentication. 
The following is the code for passing FBA credentials if your site supports it:

using (ClientContext clientCtx = new ClientContext("http://intsp1"))
{
    clientCtx.AuthenticationMode = ClientAuthenticationMode.FormsAuthentication;
    FormsAuthenticationLoginInfo fba = new FormsAuthenticationLoginInfo("username", "password");
    clientCtx.FormsAuthenticationLoginInfo = fba;
    //Business Logic
}

Impersonation

In order to impersonate, you can pass credential information to the ClientContext as shown in the following code:

clientCtx.Credentials = new NetworkCredential("username", "password", "domainname");

Passing credential information this way is supported only in the Managed OM.
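As a quick, optional follow-up to the recipe, the same client OM can read the list back to confirm what was created. The following is a minimal sketch rather than part of the original recipe; it assumes the same site URL and list title used above, and it shows how ClientContext.Load registers the properties to fetch before the single round trip made by ExecuteQuery:

using System;
using Microsoft.SharePoint.Client;

namespace OMClientApplication
{
    class VerifyList
    {
        static void Main(string[] args)
        {
            using (ClientContext clientCtx = new ClientContext("http://intsp1"))
            {
                List list = clientCtx.Web.Lists.GetByTitle("OM Client Application List");

                // Nothing is retrieved yet; Load only registers which properties
                // should be returned in the batch that ExecuteQuery sends to Client.svc.
                clientCtx.Load(list, l => l.Title, l => l.ItemCount);
                clientCtx.ExecuteQuery();

                Console.WriteLine("{0} contains {1} items", list.Title, list.ItemCount);
                Console.Read();
            }
        }
    }
}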

Administration rights for Power BI users

Pravin Dhandre
02 Jul 2018
8 min read
In this tutorial, you will understand and learn administration rights and rules for Power BI users. This includes setting and monitoring rules such as who in the organization can utilize which feature, how Power BI Premium capacity is allocated and by whom, and other settings such as embed codes and custom visuals. This article is an excerpt from a book written by Brett Powell titled Mastering Microsoft Power BI. The admin portal is accessible to Office 365 Global Administrators and users mapped to the Power BI service administrator role. To open the admin portal, log in to the Power BI service and select the Admin portal item from the Settings (Gear icon) menu in the top right, as shown in the following screenshot: All Power BI users, including Power BI free users, are able to access the Admin portal. However, users who are not admins can only view the Capacity settings page. The Power BI service administrators and Office 365 global administrators have view and edit access to the following seven pages: Administrators of Power BI most commonly utilize the Tenant settings and Capacity settings, as described in the Tenant Settings and Power BI Premium Capacities sections later in this tutorial. However, the admin portal can also be used to manage any approved custom visuals for the organization.

Usage metrics

The Usage metrics page of the Admin portal provides admins with a Power BI dashboard of several top metrics, such as the most consumed dashboards and the most consumed dashboards by workspace. However, the dashboard cannot be modified, and the tiles of the dashboard are not linked to any underlying reports or separate dashboards to support further analysis. Given these limitations, alternative monitoring solutions are recommended, such as the Office 365 audit logs and the usage metric datasets specific to Power BI apps. Details of both monitoring options are included in the app usage metrics and Power BI audit log activities sections later in this chapter.

Users and Audit logs

The Users and Audit logs pages only provide links to the Office 365 admin center. In the admin center, Power BI users can be added, removed, and managed. If audit logging is enabled for the organization via the Create audit logs for internal activity and auditing and compliance tenant setting, this audit log data can be retrieved from the Office 365 Security & Compliance Center or via PowerShell. This setting is noted in the following section regarding the Tenant settings tab of the Power BI admin portal. An Office 365 license is not required to utilize the Office 365 admin center for Power BI license assignments or to retrieve Power BI audit log activity.

Tenant settings

The Tenant settings page of the Admin portal allows administrators to enable or disable various features of the Power BI web service. Likewise, the administrator could allow only a certain security group to embed Power BI content in SaaS applications such as SharePoint Online. The following diagram identifies the 18 tenant settings currently available in the admin portal and the scope available to administrators for configuring each setting: From a data security perspective, the first seven settings within the Export and Sharing and Content packs and apps groups are the most important. For example, many organizations choose to disable the Publish to web feature for the entire organization. Additionally, only certain security groups may be allowed to export data or to print hard copies of reports and dashboards.
As shown in the Scope column of the previous table and the following example, granular security group configurations are available to minimize risk and manage the overall deployment. Currently, only one tenant setting is available for custom visuals and this setting (Custom visuals settings) can be enabled or disabled for the entire organization only. For organizations that wish to restrict or prohibit custom visuals for security reasons, this setting can be used to eliminate the ability to add, view, share, or interact with custom visuals. More granular controls to this setting are expected later in 2018, such as the ability to define users or security groups of users who are allowed to use custom visuals. In the following screenshot from the Tenant settings page of the Admin portal, only the users within the BI Admin security group who are not also members of the BI Team security group are allowed to publish apps to the entire organization: For example, a report author who also helps administer the On-premises data gateway via the BI Admin security group would be denied the ability to publish apps to the organization given membership in the BI Team security group. Many of the tenant setting configurations will be more simple than this example, particularly for smaller organizations or at the beginning of Power BI deployments. However, as adoption grows and the team responsible for Power BI changes, it's important that the security groups created to help administer these settings are kept up to date. Embed Codes Embed Codes are created and stored in the Power BI service when the Publish to web feature is utilized. As described in the Publish to web section of the previous chapter, this feature allows a Power BI report to be embedded in any website or shared via URL on the public internet. Users with edit rights to the workspace of the published to web content are able to manage the embed codes themselves from within the workspace. However, the admin portal provides visibility and access to embed codes across all workspaces, as shown in the following screenshot: Via the Actions commands on the far right of the Embed Codes page, a Power BI Admin can view the report in a browser (diagonal arrow) or remove the embed code. The Embed Codes page can be helpful to periodically monitor the usage of the Publish to web feature and for scenarios in which data was included in a publish to web report that shouldn't have been, and thus needs to be removed. As shown in the Power BI Tenant settings table referenced in the previous section, this feature can be enabled or disabled for the entire organization or for specific users within security groups. Organizational Custom visuals The Custom Visuals page allows admins to upload and manage custom visuals (.pbiviz files) that have been approved for use within the organization. For example, an organization may have proprietary custom visuals developed internally, which it wishes to expose to business users. Alternatively, the organization may wish to define a set of approved custom visuals, such as only the custom visuals that have been certified by Microsoft. In the following screenshot, the Chiclet Slicer custom visual is added as an organizational custom visual from the Organizational visuals page of the Power BI admin portal: The Organizational visuals page provides a link (Add a custom visual) to launch the form and identifies all uploaded visuals, as well as their last update. 
Once a visual has been uploaded, it can be deleted but not updated or modified. Therefore, when a new version of an organizational visual becomes available, this visual can be added to the list of organizational visuals with a descriptive title (Chiclet Slicer v2.0). Deleting an organizational custom visual will cause any reports that use this visual to stop rendering. The following screenshot reflects the uploaded Chiclet Slicer custom visual on the Organization visuals page: Once the custom visual has been uploaded as an organizational custom visual, it will be accessible to users in Power BI Desktop. In the following screenshot from Power BI Desktop, the user has opened the MARKETPLACE of custom visuals and selected MY ORGANIZATION: In this screenshot, rather than searching through the MARKETPLACE, the user can go directly to visuals defined by the organization. The marketplace of custom visuals can be launched via either the Visualizations pane or the From Marketplace icon on the Home tab of the ribbon. Organizational custom visuals are not supported for reports or dashboards shared with external users. Additionally, organizational custom visuals used in reports that utilize the publish to web feature will not render outside the Power BI tenant. Moreover, Organizational custom visuals are currently a preview feature. Therefore, users must enable the My organization custom visuals feature via the Preview features tab of the Options window in Power BI Desktop. With this, we got you acquainted with features and processes applicable in administering Power BI for an organization. This includes the configuration of tenant settings in the Power BI admin portal, analyzing the usage of Power BI assets, and monitoring overall user activity via the Office 365 audit logs. If you found this tutorial useful, do check out the book Mastering Microsoft Power BI to develop visually rich, immersive, and interactive Power BI reports and dashboards. Unlocking the secrets of Microsoft Power BI A tale of two tools: Tableau and Power BI Building a Microsoft Power BI Data Model


Understanding PHP basics

Packt
17 Feb 2016
27 min read
In this article by Antonio Lopez Zapata, the author of the book Learning PHP 7, you need to understand not only the syntax of the language, but also its grammatical rules, that is, when and why to use each element of the language. Luckily, for you, some languages come from the same root. For example, Spanish and French are romance languages as they both evolved from spoken Latin; this means that these two languages share a lot of rules, and learning Spanish if you already know French is much easier. (For more resources related to this topic, see here.) Programming languages are quite the same. If you already know another programming language, it will be very easy for you to go through this chapter. If it is your first time though, you will need to understand from scratch all the grammatical rules, so it might take some more time. But fear not! We are here to help you in this endeavor. In this chapter, you will learn about these topics: PHP in web applications Control structures Functions PHP in web applications Even though the main purpose of this chapter is to show you the basics of PHP, doing so in a reference-manual way is not interesting enough. If we were to copy paste what the official documentation says, you might as well go there and read it by yourself. Instead, let's not forget the main purpose of this book and your main goal—to write web applications with PHP. We will show you how can you apply everything you are learning as soon as possible, before you get too bored. In order to do that, we will go through the journey of building an online bookstore. At the very beginning, you might not see the usefulness of it, but that is just because we still haven't seen all that PHP can do. Getting information from the user Let's start by building a home page. In this page, we are going to figure out whether the user is looking for a book or just browsing. How do we find this out? The easiest way right now is to inspect the URL that the user used to access our application and extract some information from there. Save this content as your index.php file: <?php $looking = isset($_GET['title']) || isset($_GET['author']); ?> <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Bookstore</title> </head> <body> <p>You lookin'? <?php echo (int) $looking; ?></p> <p>The book you are looking for is</p> <ul> <li><b>Title</b>: <?php echo $_GET['title']; ?></li> <li><b>Author</b>: <?php echo $_GET['author']; ?></li> </ul> </body> </html> And now, access http://localhost:8000/?author=Harper Lee&title=To Kill a Mockingbird. You will see that the page is printing some of the information that you passed on to the URL. For each request, PHP stores in an array—called $_GET- all the parameters that are coming from the query string. Each key of the array is the name of the parameter, and its associated value is the value of the parameter. So, $_GET contains two entries: $_GET['author'] contains Harper Lee and $_GET['title'] contains To Kill a Mockingbird. On the first highlighted line, we are assigning a Boolean value to the $looking variable. If either $_GET['title'] or $_GET['author'] exists, this variable will be true; otherwise, false. Just after that, we close the PHP tag and then we start printing some HTML, but as you can see, we are actually mixing HTML with PHP code. Another interesting line here is the second highlighted line. We are printing the content of $looking, but before that, we cast the value. Casting means forcing PHP to transform a type of value to another one. 
Casting a Boolean to an integer means that the resultant value will be 1 if the Boolean is true or 0 if the Boolean is false. As $looking is true since $_GET contains valid keys, the page shows 1. If we try to access the same page without sending any information as in http://localhost:8000, the browser will say "Are you looking for a book? 0". Depending on the settings of your PHP configuration, you will see two notice messages complaining that you are trying to access the keys of the array that do not exist. Casting versus type juggling We already knew that when PHP needs a specific type of variable, it will try to transform it, which is called type juggling. But PHP is quite flexible, so sometimes, you have to be the one specifying the type that you need. When printing something with echo, PHP tries to transform everything it gets into strings. Since the string version of the false Boolean is an empty string, this would not be useful for our application. Casting the Boolean to an integer first assures that we will see a value, even if it is just "0". HTML forms HTML forms are one of the most popular ways to collect information from users. They consist a series of fields called inputs in the HTML world and a final submit button. In HTML, the form tag contains two attributes: the action points, where the form will be submitted and method that specifies which HTTP method the form will use—GET or POST. Let's see how it works. Save the following content as login.html and go to http://localhost:8000/login.html: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Bookstore - Login</title> </head> <body> <p>Enter your details to login:</p> <form action="authenticate.php" method="post"> <label>Username</label> <input type="text" name="username" /> <label>Password</label> <input type="password" name="password" /> <input type="submit" value="Login"/> </form> </body> </html> This form contains two fields, one for the username and one for the password. You can see that they are identified by the name attribute. If you try to submit this form, the browser will show you a Page Not Found message, as it is trying to access http://localhost:8000/authenticate.phpand the web server cannot find it. Let's create it then: <?php $submitted = !empty($_POST); ?> <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Bookstore</title> </head> <body> <p>Form submitted? <?php echo (int) $submitted; ?></p> <p>Your login info is</p> <ul> <li><b>username</b>: <?php echo $_POST['username']; ?></li> <li><b>password</b>: <?php echo $_POST['password']; ?></li> </ul> </body> </html> As with $_GET, $_POST is an array that contains the parameters received by POST. In this piece of code, we are first asking whether that array is not empty—note the ! operator. Afterwards, we just display the information received, just as in index.php. Note that the keys of the $_POST array are the values for the name argument of each input field. Control structures So far, our files have been executed line by line. Due to that, we are getting some notices on some scenarios, such as when the array does not contain what we are looking for. Would it not be nice if we could choose which lines to execute? Control structures to the rescue! A control structure is like a traffic diversion sign. It directs the execution flow depending on some predefined conditions. There are different control structures, but we can categorize them in conditionals and loops. 
A conditional allows us to choose whether to execute a statement or not. A loop will execute a statement as many times as you need. Let's take a look at each one of them.

Conditionals

A conditional evaluates a Boolean expression, that is, something that returns a value. If the expression is true, it will execute everything inside its block of code. A block of code is a group of statements enclosed by {}. Let's see how it works:

<?php
echo "Before the conditional.";
if (4 > 3) {
    echo "Inside the conditional.";
}
if (3 > 4) {
    echo "This will not be printed.";
}
echo "After the conditional.";

In this piece of code, we are using two conditionals. A conditional is defined by the keyword if followed by a Boolean expression in parentheses and by a block of code. If the expression is true, it will execute the block; otherwise, it will skip it. You can increase the power of conditionals by adding the keyword else. This tells PHP to execute a block of code if the previous conditions were not satisfied. Let's see an example:

if (2 > 3) {
    echo "Inside the conditional.";
} else {
    echo "Inside the else.";
}

This will execute the code inside else as the condition of if was not satisfied. Finally, you can also add an elseif keyword followed by another condition and block of code to continue asking PHP for more conditions. You can add as many elseif as you need after if. If you add else, it has to be the last one of the chain of conditions. Also keep in mind that as soon as PHP finds a condition that resolves to true, it will stop evaluating the rest of the conditions:

<?php
if (4 > 5) {
    echo "Not printed";
} elseif (4 > 4) {
    echo "Not printed";
} elseif (4 == 4) {
    echo "Printed.";
} elseif (4 > 2) {
    echo "Not evaluated.";
} else {
    echo "Not evaluated.";
}
if (4 == 4) {
    echo "Printed";
}

In this last example, the first condition that evaluates to true is the one that is highlighted. After that, PHP does not evaluate any more conditions until a new if starts. With this knowledge, let's try to clean up a bit of our application, executing statements only when needed. Copy this code to your index.php file:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Bookstore</title>
</head>
<body>
    <p>
    <?php
    if (isset($_COOKIE['username'])) {
        echo "You are " . $_COOKIE['username'];
    } else {
        echo "You are not authenticated.";
    }
    ?>
    </p>
    <?php if (isset($_GET['title']) && isset($_GET['author'])) { ?>
        <p>The book you are looking for is</p>
        <ul>
            <li><b>Title</b>: <?php echo $_GET['title']; ?></li>
            <li><b>Author</b>: <?php echo $_GET['author']; ?></li>
        </ul>
    <?php } else { ?>
        <p>You are not looking for a book?</p>
    <?php } ?>
</body>
</html>

In this new code, we are mixing conditionals and HTML code in two different ways. The first one opens a PHP tag and adds an if-else clause that will print whether we are authenticated or not with echo. No HTML is merged within the conditionals, which makes it clear. The second option—the second highlighted block—shows an uglier solution, but this is sometimes necessary. When you have to print a lot of HTML code, echo is not that handy, and it is better to close the PHP tag, print all the HTML you need, and then open the tag again. You can do that even inside the code block of an if clause, as you can see in the code.

Mixing PHP and HTML

If you feel like the last file we edited looks rather ugly, you are right. Mixing PHP and HTML is confusing, and you have to avoid it by all means.
Let's edit our authenticate.php file too, as it is trying to access $_POST entries that might not be there. The new content of the file would be as follows: <?php $submitted = isset($_POST['username']) && isset($_POST['password']); if ($submitted) { setcookie('username', $_POST['username']); } ?> <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Bookstore</title> </head> <body> <?php if ($submitted): ?> <p>Your login info is</p> <ul> <li><b>username</b>: <?php echo $_POST['username']; ?></li> <li><b>password</b>: <?php echo $_POST['password']; ?></li> </ul> <?php else: ?> <p>You did not submitted anything.</p> <?php endif; ?> </body> </html> This code also contains conditionals, which we already know. We are setting a variable to know whether we've submitted a login or not and to set the cookies if we have. However, the highlighted lines show you a new way of including conditionals with HTML. This way, tries to be more readable when working with HTML code, avoiding the use of {} and instead using : and endif. Both syntaxes are correct, and you should use the one that you consider more readable in each case. Switch-case Another control structure similar to if-else is switch-case. This structure evaluates only one expression and executes the block depending on its value. Let's see an example: <?php switch ($title) { case 'Harry Potter': echo "Nice story, a bit too long."; break; case 'Lord of the Rings': echo "A classic!"; break; default: echo "Dunno that one."; break; } The switch case takes an expression; in this case, a variable. It then defines a series of cases. When the case matches the current value of the expression, PHP executes the code inside it. As soon as PHP finds break, it will exit switch-case. In case none of the cases are suitable for the expression, if there is a default case         , PHP will execute it, but this is optional. You also need to know that breaks are mandatory if you want to exit switch-case. If you do not specify any, PHP will keep on executing statements, even if it encounters a new case. Let's see a similar example but without breaks: <?php $title = 'Twilight'; switch ($title) { case 'Harry Potter': echo "Nice story, a bit too long."; case 'Twilight': echo 'Uh...'; case 'Lord of the Rings': echo "A classic!"; default: echo "Dunno that one."; } If you test this code in your browser, you will see that it is printing "Uh...A classic!Dunno that one.". PHP found that the second case is valid so it executes its content. But as there are no breaks, it keeps on executing until the end. This might be the desired behavior sometimes, but not usually, so we need to be careful when using it! Loops Loops are control structures that allow you to execute certain statements several times—as many times as you need. You might use them on several different scenarios, but the most common one is when interacting with arrays. For example, imagine you have an array with elements but you do not know what is in it. You want to print all its elements so you loop through all of them. There are four types of loops. Each of them has their own use cases, but in general, you can transform one type of loop into another. Let's see them closely While While is the simplest of the loops. It executes a block of code until the expression to evaluate returns false. Let's see one example: <?php $i = 1; while ($i < 4) { echo $i . " "; $i++; } Here, we are defining a variable with the value 1. Then, we have a while clause in which the expression to evaluate is $i < 4. 
This loop will execute the content of the block of code until that expression is false. As you can see, inside the loop we are incrementing the value of $i by 1 each time, so after three iterations of the block, the loop will end. Check out the output of that script, and you will see "1 2 3". The last value printed is 3, so by that time, $i was 3. After that, we increased its value to 4, so when the while was evaluating whether $i < 4, the result was false.

Whiles and infinite loops

One of the most common problems with while loops is creating an infinite loop. If you do not add any code inside the while loop that updates the variables used in its expression so that it can become false at some point, PHP will never exit the loop!

For

This is the most complex of the four loops. For defines an initialization expression, an exit condition, and the end of the iteration expression. When PHP first encounters the loop, it executes what is defined as the initialization expression. Then, it evaluates the exit condition, and if it resolves to true, it enters the loop. After executing everything inside the loop, it executes the end of the iteration expression. Once this is done, it will evaluate the end condition again, going through the loop code and the end of iteration expression until it evaluates to false. As always, an example will help clarify this:

<?php
for ($i = 1; $i < 10; $i++) {
    echo $i . " ";
}

The initialization expression is $i = 1 and is executed only the first time. The exit condition is $i < 10, and it is evaluated at the beginning of each iteration. The end of the iteration expression is $i++, which is executed at the end of each iteration. This example prints numbers from 1 to 9. Another more common usage of the for loop is with arrays:

<?php
$names = ['Harry', 'Ron', 'Hermione'];
for ($i = 0; $i < count($names); $i++) {
    echo $names[$i] . " ";
}

In this example, we have an array of names. As it is defined as a list, its keys will be 0, 1, and 2. The loop initializes the $i variable to 0, and it will iterate until the value of $i is no longer less than the number of elements in the array (3). In the first iteration $i is 0, in the second it will be 1, and in the third it will be 2. When $i is 3, it will not enter the loop, as the exit condition evaluates to false. On each iteration, we are printing the content of the $i position of the array; hence, the result of this code will be all three names in the array.

Be careful with exit conditions

It is very common to set an exit condition that is not exactly what we need, especially with arrays. Remember that arrays start with 0 if they are a list, so an array of 3 elements will have entries 0, 1, and 2. Defining the exit condition as $i <= count($array) will cause an error in your code, as when $i is 3, it also satisfies the exit condition and will try to access the key 3, which does not exist.
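The chapter counts four types of loops, but the excerpt only walks through while, for, and foreach. For completeness, here is a brief sketch of the fourth, do...while, which is not shown in the original text: it behaves like while except that the condition is checked after the block, so the block always runs at least once.

<?php
$i = 1;
do {
    echo $i . " ";
    $i++;
} while ($i < 4);
// Prints "1 2 3 ", just like the while example. If $i started at 10, this loop
// would still print "10 " once before stopping, whereas a while loop with the
// same condition would not execute its block at all.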
Optionally, you can specify a variable that will contain the key of each iteration, as in the second loop. Foreach loops are also useful with maps, where the keys are not necessarily numeric. The order in which PHP will iterate the array will be the same order in which you used to insert the content in the array. Let's use some loops in our application. We want to show the available books in our home page. We have the list of books in an array, so we will have to iterate all of them with a foreach loop, printing some information from each one. Append the following code to the body tag in index.php: <?php endif; $books = [ [ 'title' => 'To Kill A Mockingbird', 'author' => 'Harper Lee', 'available' => true, 'pages' => 336, 'isbn' => 9780061120084 ], [ 'title' => '1984', 'author' => 'George Orwell', 'available' => true, 'pages' => 267, 'isbn' => 9780547249643 ], [ 'title' => 'One Hundred Years Of Solitude', 'author' => 'Gabriel Garcia Marquez', 'available' => false, 'pages' => 457, 'isbn' => 9785267006323 ], ]; ?> <ul> <?php foreach ($books as $book): ?> <li> <i><?php echo $book['title']; ?></i> - <?php echo $book['author']; ?> <?php if (!$book['available']): ?> <b>Not available</b> <?php endif; ?> </li> <?php endforeach; ?> </ul> The highlighted code shows a foreach loop using the : notation, which is better when mixing it with HTML. It iterates all the $books arrays, and for each book, it will print some information as a HTML list. Also note that we have a conditional inside a loop, which is perfectly fine. Of course, this conditional will be executed for each entry in the array, so you should keep the block of code of your loops as simple as possible. Functions A function is a reusable block of code that, given an input, performs some actions and optionally returns a result. You already know several predefined functions, such as empty, in_array, or var_dump. These functions come with PHP so you do not have to reinvent the wheel, but you can create your own very easily. You can define functions when you identify portions of your application that have to be executed several times or just to encapsulate some functionality. Function declaration Declaring a function means to write it down so that it can be used later. A function has a name, takes arguments, and has a block of code. Optionally, it can define what kind of value is returning. The name of the function has to follow the same rules as variable names; that is, it has to start by a letter or underscore and can contain any letter, number, or underscore. It cannot be a reserved word. Let's see a simple example: function addNumbers($a, $b) { $sum = $a + $b; return $sum; } $result = addNumbers(2, 3); Here, the function's name is addNumbers, and it takes two arguments: $a and $b. The block of code defines a new variable $sum that is the sum of both the arguments and then returns its content with return. In order to use this function, you just need to call it by its name, sending all the required arguments, as shown in the highlighted line. PHP does not support overloaded functions. Overloading refers to the ability of declaring two or more functions with the same name but different arguments. As you can see, you can declare the arguments without knowing what their types are, so PHP would not be able to decide which function to use. Another important thing to note is the variable scope. We are declaring a $sum variable inside the block of code, so once the function ends, the variable will not be accessible any more. 
This means that the scope of variables declared inside the function is just the function itself. Furthermore, if you had a $sum variable declared outside the function, it would not be affected at all, since the function cannot access that variable unless we send it as an argument.

Function arguments

A function gets information from outside via arguments. You can define any number of arguments—including 0. These arguments need at least a name so that they can be used inside the function, and there cannot be two arguments with the same name. When invoking the function, you need to send the arguments in the same order as we declared them. A function may contain optional arguments; that is, you are not forced to provide a value for those arguments. When declaring the function, you need to provide a default value for those arguments, so in case the user does not provide a value, the function will use the default one:

function addNumbers($a, $b, $printResult = false) {
    $sum = $a + $b;
    if ($printResult) {
        echo 'The result is ' . $sum;
    }
    return $sum;
}

$sum1 = addNumbers(1, 2);
$sum1 = addNumbers(3, 4, false);
$sum1 = addNumbers(5, 6, true); // it will print the result

This new function takes two mandatory arguments and an optional one. The default value is false, and it is used as a normal value inside the function. The function will print the result of the sum if the user provides true as the third argument, which happens only the third time that the function is invoked. For the first two times, $printResult is set to false. The arguments that the function receives are just copies of the values that the user provided. This means that if you modify these arguments inside the function, it will not affect the original values. This feature is known as sending arguments by value. Let's see an example:

function modify($a) {
    $a = 3;
}

$a = 2;
modify($a);
var_dump($a); // prints 2

We are declaring the $a variable with the value 2, and then we are calling the modify function, sending $a. The modify function modifies the $a argument, setting its value to 3. However, this does not affect the original value of $a, which remains 2, as you can see in the var_dump output. If what you want is to actually change the value of the original variable used in the invocation, you need to pass the argument by reference. To do that, you add & in front of the argument when declaring the function:

function modify(&$a) {
    $a = 3;
}

Now, after invoking the modify function, $a will always be 3.

Arguments by value versus by reference

PHP allows you to pass arguments by reference, and in fact, some native functions of PHP use arguments by reference—remember the array sorting functions; they did not return the sorted array; instead, they sorted the array provided. But using arguments by reference is a way of confusing developers. Usually, when someone uses a function, they expect a result, and they do not want their provided arguments to be modified. So, try to avoid it; people will be grateful!

The return statement

You can have as many return statements as you want inside your function, but PHP will exit the function as soon as it finds one. This means that if you have two consecutive return statements, the second one will never be executed. Still, having multiple return statements can be useful if they are inside conditionals. Add this function inside your functions.php file:

function loginMessage() {
    if (isset($_COOKIE['username'])) {
        return "You are " . $_COOKIE['username'];
    } else {
        return "You are not authenticated.";
    }
}
Let's use it in your index.php file by replacing the highlighted content—note that to save some trees, I replaced most of the code that was not changed at all with //…:

//...
<body>
    <p><?php echo loginMessage(); ?></p>
    <?php if (isset($_GET['title']) && isset($_GET['author'])): ?>
//...

Additionally, you can omit the return statement if you do not want the function to return anything. In this case, the function will end once it reaches the end of the block of code.

Type hinting and return types

With the release of PHP 7, the language allows developers to be more specific about what functions get and return. You can—always optionally—specify the type of argument that the function needs (type hinting) and the type of result the function will return (the return type). Let's first see an example:

<?php
declare(strict_types=1);

function addNumbers(int $a, int $b, bool $printSum): int {
    $sum = $a + $b;
    if ($printSum) {
        echo 'The sum is ' . $sum;
    }
    return $sum;
}

addNumbers(1, 2, true);
addNumbers(1, '2', true); // it fails when strict_types is 1
addNumbers(1, 'something', true); // it always fails

This function states that the arguments need to be two integers and a Boolean, and that the result will be an integer. Now, you know that PHP has type juggling, so it can usually transform a value of one type to its equivalent value of another type; for example, the string 2 can be used as the integer 2. To stop PHP from using type juggling with the arguments and results of functions, you can declare the strict_types directive as shown in the first highlighted line. This directive has to be declared at the top of each file where you want to enforce this behavior. The three invocations work as follows:

The first invocation sends two integers and a Boolean, which is what the function expects. So, regardless of the value of strict_types, it will always work.

The second invocation sends an integer, a string, and a Boolean. The string has a valid integer value, so if PHP were allowed to use type juggling, the invocation would work normally. But in this example, it will fail because of the declaration on top of the file.

The third invocation will always fail, as the something string cannot be transformed into a valid integer.

Let's try to use a function within our project. In our index.php file, we have a foreach loop that iterates the books and prints them. The code inside the loop is kind of hard to understand, as it is mixing HTML with PHP, and there is a conditional too. Let's try to abstract the logic inside the loop into a function. First, create the new functions.php file with the following content:

<?php
function printableTitle(array $book): string {
    $result = '<i>' . $book['title'] . '</i> - ' . $book['author'];
    if (!$book['available']) {
        $result .= ' <b>Not available</b>';
    }
    return $result;
}

This file will contain our functions. The first one, printableTitle, takes an array representing a book and builds a string with a nice representation of the book in HTML. The code is the same as before, just encapsulated in a function. Now, index.php will have to include the functions.php file and then use the function inside the loop. Let's see how this is done:

<?php require_once 'functions.php' ?>
<!DOCTYPE html>
<html lang="en">
//...
?>
<ul>
    <?php foreach ($books as $book): ?>
        <li><?php echo printableTitle($book); ?></li>
    <?php endforeach; ?>
</ul>
//...

Well, now our loop looks way cleaner, right?
Also, if we need to print the title of the book somewhere else, we can reuse the function instead of duplicating code! Summary In this article, we went through all the basics of procedural PHP while writing simple examples in order to practice them. You now know how to use variables and arrays with control structures and functions and how to get information from HTTP requests among others. Resources for Article: Further resources on this subject: Getting started with Modernizr using PHP IDE[article] PHP 5 Social Networking: Implementing Public Messages[article] Working with JSON in PHP jQuery[article]


Unreal Engine 4.23 releases with major new features like Chaos, Virtual Production, improvement in real-time ray tracing and more

Vincy Davis
09 Sep 2019
5 min read
Last week, Epic released the stable version of Unreal Engine 4.23 with a whopping 192 improvements. The major features include beta varieties like Chaos - Destruction, Multi-Bounce Reflection fallback in Real-Time Ray Tracing, Virtual Texturing, Unreal Insights, HoloLens 2 native support, Niagara improvements and many more. Unreal Engine 4.23 will no longer support iOS 10, as iOS 11 is now the minimum required version. What’s new in Unreal Engine 4.23? Chaos - Destruction Labelled as “Unreal Engine's new high-performance physics and destruction system” Chaos is available in beta for users to attain cinematic-quality visuals in real-time scenes. It also supports high level artist control over content creation and destruction. https://youtu.be/fnuWG2I2QCY Chaos supports many distinct characteristics like- Geometry Collections: It is a new type of asset in Unreal for short-lived objects. The Geometry assets can be built using one or more Static Meshes. It offers flexibility to the artist on choosing what to simulate, how to organize and author the destruction. Fracturing: A Geometry Collection can be broken into pieces either individually, or by applying one pattern across multiple pieces using the Fracturing tools. Clustering: Sub-fracturing is used by artists to increase optimization. Every sub-fracture is an extra level added to the Geometry Collection. The Chaos system keeps track of the extra levels and stores the information in a Cluster, to be controlled by the artist. Fields: It can be used to control simulation and other attributes of the Geometry Collection. Fields enable users to vary the mass, make something static, to make the corner more breakable than the middle, and others. Unreal Insights Currently in beta, Unreal Insights enable developers to collect and analyze data about Unreal Engine's behavior in a fixed way. The Trace System API system is one of its components and is used to collect information from runtime systems consistently. Another component of Unreal Insights is called the Unreal Insights Tool. It supplies interactive visualization of data through the Analysis API. For in-depth details about Unreal Insights and other features, you can also check out the first preview release of Unreal Engine 4.23. Virtual Production Pipeline Improvements Unreal Engine 4.23 explores advancements in virtual production pipeline by improving virtually scout environments and compose shots by connecting live broadcast elements with digital representations and more. In-Camera VFX: With improvements in-Camera VFX, users can achieve final shots live on set by combining real-world actors and props with Unreal Engine environment backgrounds. VR Scouting for Filmmakers: The new VR Scouting tools can be used by filmmakers to navigate and interact with the virtual world in VR. Controllers and settings can also be customized in Blueprints,rather than rebuilding the engine in C++. Live Link Datatypes and UX Improvements: The Live Link Plugin be used to drive character animation, camera, lights, and basic 3D transforms dynamically from other applications and data sources in the production pipeline. Other improvements include save and load presets for Live Link setups, better status indicators to show the current Live Link sources, and more. Remote Control over HTTP: Unreal Engine 4.23 users can send commands to Unreal Engine and Unreal Editor remotely over HTTP. This makes it possible for users to create customized web user interfaces to trigger changes in the project's content. 
Read Also: Epic releases Unreal Engine 4.22, focuses on adding “photorealism in real-time environments” Real-Time Ray tracing Improvements Performance and Stability Expanded DirectX 12 Support Improved Denoiser quality Increased Ray Traced Global Illumination (RTGI) quality Additional Geometry and Material Support Landscape Terrain Hierarchical Instanced Static Meshes (HISM) and Instanced Static Meshes (ISM) Procedural Meshes Transmission with SubSurface Materials World Position Offset (WPO) support for Landscape and Skeletal Mesh geometries Multi-Bounce Reflection Fallback Unreal Engine 4.23 provides improved support for multi-bounce Ray Traced Reflections (RTR) by using Reflection Captures. This will increase the performance of all types of intra-reflections. Virtual Texturing The beta version of Virtual Texturing in Unreal Engine 4.23 enables users to create and use large textures for a lower and more constant memory footprint at runtime. Streaming Virtual Texturing: The Streaming Virtual Texturing uses the Virtual Texture assets to present an option to stream textures from disk rather than the existing Mip-based streaming. It minimizes the texture memory overhead and increases performance when using very large textures. Runtime Virtual Texturing: The Runtime Virtual Texturing avails a Runtime Virtual Texture asset. It can be used to supply shading data over large areas, thus making it suitable for Landscape shading. Unreal Engine 4.23 also presents new features like Skin Weight Profiles, Animation Streaming, Dynamic Animation Graphs, Open Sound Control, Sequencer Curve Editor Improvements, and more. As expected, users love the new features in Unreal Engine 4.23, especially Chaos. https://twitter.com/rista__m/status/1170608746692673537 https://twitter.com/jayakri59101140/status/1169553133518782464 https://twitter.com/NoisestormMusic/status/1169303013149806595 To know about the full updates in Unreal Engine 4.23, users can head over to the Unreal Engine blog. Other news in Game Development Japanese Anime studio Khara is switching its primary 3D CG tools to Blender Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects


Low-level C# Practices

Packt
21 Oct 2012
14 min read
Working with generics Visual Studio 2005 included .NET version 2.0 which included generics. Generics give developers the ability to design classes and methods that defer the specification of specific parts of a class or method's specification until declaration or instantiation. Generics offer features previously unavailable in .NET. One benefit to generics, that is potentially the most common, is for the implementation of collections that provide a consistent interface to collections of different data types without needing to write specific code for each data type. Constraints can be used to restrict the types that are supported by a generic method or class, or can guarantee specific interfaces. Limits of generics Constraints within generics in C# are currently limited to a parameter-less constructor, interfaces, or base classes, or whether or not the type is a struct or a class (value or reference type). This really means that code within a generic method or type can either be constructed or can make use of methods and properties. Due to these restrictions types within generic types or methods cannot have operators. Writing sequence and iterator members Visual Studio 2005 and C# 2.0 introduced the yield keyword. The yield keyword is used within an iterator member as a means to effectively implement an IEnumerable interface without needing to implement the entire IEnumerable interface. Iterator members are members that return a type of IEnumerable or IEnumerable<T>, and return individual elements in the enumerable via yield return, or deterministically terminates the enumerable via yield break. These members can be anything that can return a value, such as methods, properties, or operators. An iterator that returns without calling yield break has an implied yield break, just as a void method has an implied return. Iterators operate on a sequence but process and return each element as it is requested. This means that iterators implement what is known as deferred execution. Deferred execution is when some or all of the code, although reached in terms of where the instruction pointer is in relation to the code, hasn't entirely been executed yet. Iterators are methods that can be executed more than once and result in a different execution path for each execution. Let's look at an example: public static IEnumerable<DateTime> Iterator() { Thread.Sleep(1000); yield return DateTime.Now; Thread.Sleep(1000); yield return DateTime.Now; Thread.Sleep(1000); yield return DateTime.Now; }   The Iterator method returns IEnumerable which results in three DateTime values. The creation of those three DateTime values is actually invoked at different times. The Iterator method is actually compiled in such a way that a state machine is created under the covers to keep track of how many times the code is invoked and is implemented as a special IEnumerable<DateTime> object. The actual invocation of the code in the method is done through each call of the resulting IEnumerator. MoveNext method. The resulting IEnumerable is really implemented as a collection of delegates that are executed upon each invocation of the MoveNext method, where the state, in the simplest case, is really which of the delegates to invoke next. It's actually more complicated than that, especially when there are local variables and state that can change between invocations and is used across invocations. But the compiler takes care of all that. 
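Before moving on, the deferred execution described above can be made visible with a small standalone sketch. This is not from the book: the wrapper class, Main method, and timestamp format are ours, and the Iterator method is simply the one shown above, copied in so the sample compiles on its own.

using System;
using System.Collections.Generic;
using System.Threading;

internal static class IteratorDemo
{
    // The iterator member from the example above.
    public static IEnumerable<DateTime> Iterator()
    {
        Thread.Sleep(1000);
        yield return DateTime.Now;
        Thread.Sleep(1000);
        yield return DateTime.Now;
        Thread.Sleep(1000);
        yield return DateTime.Now;
    }

    private static void Main()
    {
        // Calling Iterator() returns immediately; none of its code has run yet.
        IEnumerable<DateTime> sequence = Iterator();

        // Each loop iteration calls MoveNext on the compiler-generated state
        // machine, which runs Iterator() up to the next yield return, so the
        // three timestamps print roughly one second apart.
        foreach (DateTime stamp in sequence)
        {
            Console.WriteLine(stamp.ToString("HH:mm:ss.fff"));
        }
    }
}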
Effectively, iterators are broken up into individual bits of code between yield return statements that are executed independently, each potentially using local shared data. What are iterators good for other than a really cool interview question? Well, first of all, due to the deferred execution, we can technically create sequences that don't need to be stored in memory all at one time. This is often useful when we want to project one sequence into another. Couple that with a source sequence that is also implemented with deferred execution, and we end up creating and processing IEnumerables (also known as collections) whose content is never all in memory at the same time. We can process large (or even infinite) collections without a huge strain on memory. For example, if we wanted to model the set of positive integer values (an infinite set), we could write an iterator method shown as follows:

static IEnumerable<BigInteger> AllThePositiveIntegers()
{
    var number = new BigInteger(0);
    while (true)
        yield return number++;
}

We can then chain this iterator with another iterator, say something that gets all of the positive squares:

static IEnumerable<BigInteger> AllThePositiveIntegerSquares(
    IEnumerable<BigInteger> sourceIntegers)
{
    foreach (var value in sourceIntegers)
        yield return value * value;
}

Which we could use as follows:

foreach (var value in AllThePositiveIntegerSquares(AllThePositiveIntegers()))
    Console.WriteLine(value);

We've now effectively modeled two infinite collections of integers in memory. Of course, our AllThePositiveIntegerSquares method could just as easily be used with finite sequences of values, for example:

foreach (var value in AllThePositiveIntegerSquares(
    Enumerable.Range(0, int.MaxValue)
        .Select(v => new BigInteger(v))))
    Console.WriteLine(value);

In this example we go through all of the positive Int32 values and square each one without ever holding a complete collection of the set of values in memory. As we see, this is a useful method for composing multiple steps that operate on, and result in, sequences of values. We could have easily done this without IEnumerable<T>, or created an IEnumerator class whose MoveNext method performed calculations instead of navigating an array. However, this would be tedious and is likely to be error-prone. In the case of not using IEnumerable<T>, we'd be unable to operate on the data as a collection with things such as foreach.

Context: When modeling a sequence of values that is either known only at runtime, or each element can be reliably calculated at runtime.
Practice: Consider using an iterator.

Working with lambdas

Visual Studio 2008 introduced C# 3.0. In this version of C#, lambda expressions were introduced. Lambda expressions are another form of anonymous functions. Lambdas were added to the language syntax primarily as an easier anonymous function syntax for LINQ queries. Although you can't really think of LINQ without lambda expressions, lambda expressions are a powerful aspect of the C# language in their own right. They are concise expressions that use implicitly-typed optional input parameters whose types are implied through the context of their use, rather than explicit definition as with anonymous methods. Along with C# 3.0 in Visual Studio 2008, the .NET Framework 3.5 was introduced, which included many new types to support LINQ expressions, such as Action<T> and Func<T>. These delegates are used primarily as definitions for different types of anonymous methods (including lambda expressions).
The following is an example of passing a lambda expression to a method that takes a Func<T1, T2, TResult> delegate and the two arguments to pass along to the delegate: ExecuteFunc((f, s) => f + s, 1, 2);   The same statement with anonymous methods: ExecuteFunc(delegate(int f, int s) { return f + s; }, 1, 2);   It's clear that the lambda syntax has a tendency to be much more concise, replacing the delegate and braces with the "goes to" operator (=>). Prior to anonymous functions, member methods would need to be created to pass as delegates to methods. For example: ExecuteFunc(SomeMethod, 1, 2);   This, presumably, would use a method named SomeMethod that looked similar to: private static int SomeMethod(int first, int second) { return first + second; }   Lambda expressions are more powerful in the type inference abilities, as we've seen from our examples so far. We need to explicitly type the parameters within anonymous methods, which is only optional for parameters in lambda expressions. LINQ statements don't use lambda expressions exactly in their syntax. The lambda expressions are somewhat implicit. For example, if we wanted to create a new collection of integers from another collection of integers, with each value incremented by one, we could use the following LINQ statement: var x = from i in arr select i + 1;   The i + 1 expression isn't really a lambda expression, but it gets processed as if it were first converted to method syntax using a lambda expression: var x = arr.Select(i => i + 1);   The same with an anonymous method would be: var x = arr.Select(delegate(int i) { return i + 1; });   What we see in the LINQ statement is much closer to a lambda expression. Using lambda expressions for all anonymous functions means that you have more consistent looking code. Context: When using anonymous functions. Practice: Prefer lambda expressions over anonymous methods. Parameters to lambda expressions can be enclosed in parentheses. For example: var x = arr.Select((i) => i + 1);   The parentheses are only mandatory when there is more than one parameter: var total = arr.Aggregate(0, (l, r) => l + r);   Context: When writing lambdas with a single parameter. Practice: Prefer no parenthesis around the parameter declaration. Sometimes when using lambda expressions, the expression is being used as a delegate that takes an argument. The corresponding parameter in the lambda expression may not be used within the right-hand expression (or statements). In these cases, to reduce the clutter in the statement, it's common to use the underscore character (_) for the name of the parameter. For example: task.ContinueWith(_ => ProcessSecondHalfOfData());   The task.ContinueWith method takes an Action <Task> delegate. This means the previous lambda expression is actually given a task instance (the antecedent Task). In our example, we don't use that task and just perform some completely independent operation. In this case, we use (_) to not only signify that we know we don't use that parameter, but also to reduce the clutter and potential name collisions a little bit. Context: When writing lambda expression that take a single parameter but the parameter is not used. Practice: Use underscore (_) for the name of the parameter. There are two types of lambda expressions. So far, we've seen expression lambdas. Expression lambdas are a single expression on the right-hand side that evaluates to a value or void. There is another type of lambda expression called statement lambdas. 
These lambdas have one or more statements and are enclosed in braces. For example: task.ContinueWith(_ => { var value = 10; value += ProcessSecondHalfOfData(); ProcessSomeRandomValue(value); });   As we can see, statement lambdas can declare variables, as well as have multiple statements. Working with extension methods Along with lambda expressions and iterators, C# 3.0 brought us extension methods. These static methods (contained in a static class whose first argument is modified with the this modifier) were created for LINQ so IEnumerable types could be queried without needing to add copious amounts of methods to the IEnumerable interface. An extension method has the basic form of: public static class EnumerableExtensions { public static IEnumerable<int> IntegerSquares( this IEnumerable<int> source) { return source.Select(value => value * value); } }   As stated earlier, extension methods must be within a static class, be a static method, and the first parameter must be modified with the this modifier. Extension methods extend the available instance methods of a type. In our previous example, we've effectively added an instance member to IEnumerable<int> named IntegerSquares so we get a sequence of integer values that have been squared. For example, if we created an array of integer values, we will have added a Cubes method to that array that returns a sequence of the values cubed. For example: var values = new int[] {1, 2, 3}; foreach (var v in values.Cubes()) { Console.WriteLine(v); }   Having the ability to create new instance methods that operate on any public members of a specific type is a very powerful feature of the language. This, unfortunately, does not come without some caveats. Extension methods suffer inherently from a scoping problem. The only scoping that can occur with these methods is the namespaces that have been referenced for any given C# source file. For example, we could have two static classes that have two extension methods named Cubes. If those static classes are in the same namespace, we'd never be able to use those extensions methods as extension methods because the compiler would never be able to resolve which one to use. For example: public static class IntegerEnumerableExtensions { public static IEnumerable<int> Squares( this IEnumerable<int> source) { return source.Select(value => value * value); } public static IEnumerable<int> Cubes( this IEnumerable<int> source) { return source.Select(value => value * value * value); } } public static class EnumerableExtensions { public static IEnumerable<int> Cubes( this IEnumerable<int> source) { return source.Select(value => value * value * value); } }   If we tried to use Cubes as an extension method, we'd get a compile error, for example: var values = new int[] {1, 2, 3}; foreach (var v in values.Cubes()) { Console.WriteLine(v); }   This would result in error CS0121: The call is ambiguous between the following methods or properties. 
To resolve the problem, we'd need to move one (or both) of the classes to another namespace, for example:

namespace Integers
{
    public static class IntegerEnumerableExtensions
    {
        public static IEnumerable<int> Squares(
            this IEnumerable<int> source)
        {
            return source.Select(value => value*value);
        }

        public static IEnumerable<int> Cubes(
            this IEnumerable<int> source)
        {
            return source.Select(value => value*value*value);
        }
    }
}

namespace Numerical
{
    public static class EnumerableExtensions
    {
        public static IEnumerable<int> Cubes(
            this IEnumerable<int> source)
        {
            return source.Select(value => value*value*value);
        }
    }
}

Then, we can scope to a particular namespace to choose which Cubes to use:

namespace ConsoleApplication
{
    using Numerical;

    internal class Program
    {
        private static void Main(string[] args)
        {
            var values = new int[] {1, 2, 3};
            foreach (var v in values.Cubes())
            {
                Console.WriteLine(v);
            }
        }
    }
}

Context: When considering extension methods, due to potential scoping problems.
Practice: Use extension methods sparingly.

Context: When designing extension methods.
Practice: Keep all extension methods that operate on a specific type in their own class.

Context: When designing classes to contain methods to extend a specific type, TypeName.
Practice: Consider naming the static class TypeNameExtensions.

Context: When designing classes to contain methods to extend a specific type, in order to scope the extension methods.
Practice: Consider placing the class in its own namespace.

Generally, there isn't much need to use extension methods on types that you own. You can simply add an instance method to contain the logic that you want to have. Where extension methods really shine is in effectively creating instance methods on interfaces. Typically, when code is necessary for shared implementations of interfaces, an abstract base class is created so each implementation of the interface can derive from it to implement these shared methods. This is a bit cumbersome in that it uses the one-and-only inheritance slot in C#, so an interface implementation would not be able to derive from or extend any other classes. Additionally, there's no guarantee that a given interface implementation will derive from the abstract base, so it runs the risk of not being able to be used in the way it was designed. Extension methods get around this problem by being entirely independent of the implementation of an interface while still being able to extend it. One of the most notable examples of this might be the System.Linq.Enumerable class introduced in .NET 3.5. The static Enumerable class almost entirely consists of extension methods that extend IEnumerable. It is easy to develop the same sort of thing for our own interfaces. For example, say we have an ICoordinate interface to model a three-dimensional position in relation to the Earth's surface:

public interface ICoordinate
{
    /// <summary>North/south degrees from equator.</summary>
    double Latitude { get; set; }

    /// <summary>East/west degrees from meridian.</summary>
    double Longitude { get; set; }

    /// <summary>Distance from sea level in meters.</summary>
    double Altitude { get; set; }
}

We could then create a static class to contain extension methods that provide shared functionality to any implementation of ICoordinate; a sketch of such a class follows the practice notes below.

Context: When designing interfaces that require shared code.
Practice: Consider providing extension methods instead of abstract base implementations.
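The original excerpt stops short of showing the extension class itself, so here is one possible sketch. The DistanceTo method, the spherical-Earth simplification, and the 6,371 km mean radius are illustrative assumptions rather than the book's code; the point is only that every ICoordinate implementation picks up the method without needing a shared base class.

using System;

public static class ICoordinateExtensions
{
    // Assumption: a spherical Earth with a mean radius of roughly 6,371 km is
    // good enough for an illustration; real code would pick a proper geodetic model.
    private const double EarthRadiusInMeters = 6371000.0;

    /// <summary>
    /// Great-circle distance in meters between two coordinates, ignoring altitude.
    /// </summary>
    public static double DistanceTo(this ICoordinate source, ICoordinate other)
    {
        double latitudeDelta = ToRadians(other.Latitude - source.Latitude);
        double longitudeDelta = ToRadians(other.Longitude - source.Longitude);

        // Haversine formula.
        double a = Math.Sin(latitudeDelta / 2) * Math.Sin(latitudeDelta / 2) +
                   Math.Cos(ToRadians(source.Latitude)) * Math.Cos(ToRadians(other.Latitude)) *
                   Math.Sin(longitudeDelta / 2) * Math.Sin(longitudeDelta / 2);
        double c = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));

        return EarthRadiusInMeters * c;
    }

    private static double ToRadians(double degrees)
    {
        return degrees * Math.PI / 180.0;
    }
}

With this in place, any ICoordinate implementation can call first.DistanceTo(second) as if the method had been declared on the interface itself, and no implementation is forced to inherit from a particular base class.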
Build a Chat Application using the Java API for WebSocket

Packt
24 Mar 2014
5 min read
Traditionally, web applications have been developed using the request/response model followed by the HTTP protocol. In this model, the request is always initiated by the client and then the server returns a response back to the client. There has never been any way for the server to send data to the client independently (without having to wait for a request from the browser) until now. The WebSocket protocol allows full-duplex, two-way communication between the client (browser) and the server. Java EE 7 introduces the Java API for WebSocket, which allows us to develop WebSocket endpoints in Java. The Java API for WebSocket is a brand-new technology in the Java EE Standard. A socket is a two-way pipe that stays alive longer than a single request. Applied to an HTML5-compliant browser, this would allow for continuous communication to or from a web server without the need to load a new page (similar to AJAX). Developing a WebSocket Server Endpoint A WebSocket server endpoint is a Java class deployed to the application server that handles WebSocket requests. There are two ways in which we can implement a WebSocket server endpoint via the Java API for WebSocket: either by developing an endpoint programmatically, in which case we need to extend the javax.websocket.Endpoint class, or by decorating Plain Old Java Objects (POJOs) with WebSocket-specific annotations. The two approaches are very similar; therefore, we will be discussing only the annotation approach in detail and briefly explaining the second approach, that is, developing WebSocket server endpoints programmatically, later in this section. In this article, we will develop a simple web-based chat application, taking full advantage of the Java API for WebSocket. Developing an annotated WebSocket server endpoint The following Java class code illustrates how to develop a WebSocket server endpoint by annotating a Java class: package net.ensode.glassfishbook.websocketchat.serverendpoint; import java.io.IOException; import java.util.logging.Level; import java.util.logging.Logger; import javax.websocket.OnClose; import javax.websocket.OnMessage; import javax.websocket.OnOpen; import javax.websocket.Session; import javax.websocket.server.ServerEndpoint; @ServerEndpoint("/websocketchat") public class WebSocketChatEndpoint { private static final Logger LOG = Logger.getLogger(WebSocketChatEndpoint.class.getName()); @OnOpen public void connectionOpened() { LOG.log(Level.INFO, "connection opened"); } @OnMessage public synchronized void processMessage(Session session, String message) { LOG.log(Level.INFO, "received message: {0}", message); try { for (Session sess : session.getOpenSessions()) { if (sess.isOpen()) { sess.getBasicRemote().sendText(message); } } } catch (IOException ioe) { LOG.log(Level.SEVERE, ioe.getMessage()); } } @OnClose public void connectionClosed() { LOG.log(Level.INFO, "connection closed"); } } The class-level @ServerEndpoint annotation indicates that the class is a WebSocket server endpoint. The URI (Uniform Resource Identifier) of the server endpoint is the value specified within the parentheses following the annotation (which is "/websocketchat" in this example)—WebSocket clients will use this URI to communicate with our endpoint. The @OnOpen annotation is used to decorate a method that needs to be executed whenever a WebSocket connection is opened by any of the clients. In our example, we are simply sending some output to the server log, but of course, any valid server-side Java code can be placed here. 
Any method annotated with the @OnMessage annotation will be invoked whenever our server endpoint receives a message from a client. Since we are developing a chat application, our code simply broadcasts the message it receives to all connected clients. In our example, the processMessage() method is annotated with @OnMessage, and takes two parameters: an instance of a class implementing the javax.websocket.Session interface and a String parameter containing the message that was received. Since we are developing a chat application, our WebSocket server endpoint simply broadcasts the received message to all connected clients. The getOpenSessions() method of the Session interface returns a set of session objects representing all open sessions. We iterate through this set to broadcast the received message to all connected clients by invoking the getBasicRemote() method on each session instance and then invoking the sendText() method on the resulting RemoteEndpoint.Basic implementation returned by calling the previous method. The getOpenSessions() method on the Session interface returns all the open sessions at the time it was invoked. It is possible for one or more of the sessions to have closed after the method was invoked; therefore, it is recommended to invoke the isOpen() method on a Session implementation before attempting to return data back to the client. An exception may be thrown if we attempt to access a closed session. Finally, we need to decorate a method with the @OnClose annotation in case we need to handle the event when a client disconnects from the server endpoint. In our example, we simply log a message into the server log. There is one additional annotation that we didn't use in our example—the @OnError annotation; it is used to decorate a method that needs to be invoked in case there's an error while sending or receiving data to or from the client. As we can see, developing an annotated WebSocket server endpoint is straightforward. We simply need to add a few annotations, and the application server will invoke our annotated methods as necessary. If we wish to develop a WebSocket server endpoint programmatically, we need to write a Java class that extends javax.websocket.Endpoint. This class has the onOpen(), onClose(), and onError() methods that are called at appropriate times during the endpoint's life cycle. There is no method equivalent to the @OnMessage annotation to handle incoming messages from clients. The addMessageHandler() method needs to be invoked in the session, passing an instance of a class implementing the javax.websocket.MessageHandler interface (or one of its subinterfaces) as its sole parameter. In general, it is easier and more straightforward to develop annotated WebSocket endpoints compared to their programmatic counterparts. Therefore, we recommend that you use the annotated approach whenever possible.
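To make that comparison concrete, a bare-bones programmatic version of the same chat endpoint might look roughly like the following sketch. It is not part of the original example, and a programmatic endpoint additionally has to be registered explicitly (for instance via a javax.websocket.server.ServerApplicationConfig implementation), which is omitted here.

package net.ensode.glassfishbook.websocketchat.serverendpoint;

import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.websocket.Endpoint;
import javax.websocket.EndpointConfig;
import javax.websocket.MessageHandler;
import javax.websocket.Session;

public class ProgrammaticChatEndpoint extends Endpoint {

    private static final Logger LOG =
            Logger.getLogger(ProgrammaticChatEndpoint.class.getName());

    @Override
    public void onOpen(final Session session, EndpointConfig config) {
        LOG.log(Level.INFO, "connection opened");

        // The message handler plays the role of the @OnMessage annotated method.
        session.addMessageHandler(new MessageHandler.Whole<String>() {
            @Override
            public void onMessage(String message) {
                for (Session sess : session.getOpenSessions()) {
                    try {
                        if (sess.isOpen()) {
                            sess.getBasicRemote().sendText(message);
                        }
                    } catch (IOException ioe) {
                        LOG.log(Level.SEVERE, ioe.getMessage());
                    }
                }
            }
        });
    }
}

Compared with the annotated version, the broadcasting logic is identical, but the wiring is noticeably more verbose, which is exactly why the annotated approach is recommended above.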
An Overview of the Node Package Manager

Packt
11 Aug 2011
5 min read
Node Web Development npm package format An npm package is a directory structure with a package.json file describing the package. This is exactly what we just referred to as a Complex Module, except npm recognizes many more package.json tags than does Node. The starting point for npm's package.json is the CommonJS Packages/1.0 specification. The documentation for npm's package.json implementation is accessed with the following command: $ npm help json A basic package.json file is as follows: { name: "packageName", version: "1.0", main: "mainModuleName", modules: { "mod1": "lib/mod1", "mod2": "lib/mod2" } } The file is in JSON format which, as a JavaScript programmer, you should already have seen a few hundred times. The most important tags are name and version. The name will appear in URLs and command names, so choose one that's safe for both. If you desire to publish a package in the public npm repository it's helpful to check and see if a particular name is already being used, at http://search.npmjs.org or with the following command: $ npm search packageName The main tag is treated the same as complex modules. It references the module that will be returned when invoking require('packageName'). Packages can contain many modules within themselves, and those can be listed in the modules list. Packages can be bundled as tar-gzip tarballs, especially to send them over the Internet. A package can declare dependencies on other packages. That way npm can automatically install other modules required by the module being installed. Dependencies are declared as follows: "dependencies": { "foo" : "1.0.0 - 2.9999.9999" , "bar" : ">=1.0.2 <2.1.2" } The description and keywords fields help people to find the package when searching in an npm repository (http://search.npmjs.org). Ownership of a package can be documented in the homepage, author, or contributors fields: "description": "My wonderful packages walks dogs", "homepage": "http://npm.dogs.org/dogwalker/", "author": [email protected] Some npm packages provide executable programs meant to be in the user's PATH. These are declared using the bin tag. It's a map of command names to the script which implements that command. The command scripts are installed into the directory containing the node executable using the name given. bin: { 'nodeload.js': './nodeload.js', 'nl.js': './nl.js' }, The directories tag documents the package directory structure. The lib directory is automatically scanned for modules to load. There are other directory tags for binaries, manuals, and documentation. directories: { lib: './lib', bin: './bin' }, The script tags are script commands run at various events in the lifecycle of the package. These events include install, activate, uninstall, update, and more. For more information about script commands, use the following command: $ npm help scripts This was only a taste of the npm package format; see the documentation (npm help json) for more. Finding npm packages By default npm modules are retrieved over the Internet from the public package registry maintained on http://npmjs.org. If you know the module name it can be installed simply by typing the following: $ npm install moduleName But what if you don't know the module name? How do you discover the interesting modules? The website http://npmjs.org publishes an index of the modules in that registry, and the http://search.npmjs.org site lets you search that index. 
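Pulling the tags above together, a small but complete package.json for a hypothetical dog-walking package might look like the following. The package name, file paths, and dependency ranges are invented for illustration, and note that npm expects the file to be strict JSON, so keys and string values should be double-quoted even where the excerpts above use a more relaxed notation.

{
  "name": "dogwalker",
  "version": "1.0.0",
  "description": "My wonderful package walks dogs",
  "homepage": "http://npm.dogs.org/dogwalker/",
  "main": "lib/dogwalker",
  "bin": {
    "walk-dogs": "./bin/walk-dogs.js"
  },
  "directories": {
    "lib": "./lib",
    "bin": "./bin"
  },
  "dependencies": {
    "foo": "1.0.0 - 2.9999.9999",
    "bar": ">=1.0.2 <2.1.2"
  },
  "scripts": {
    "test": "node test/run.js"
  },
  "keywords": ["dogs", "walking"]
}

With a file like this in place, running npm install inside the package directory pulls in the declared dependencies automatically.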
npm also has a command-line search function to consult the same index: $ npm search mp3 mediatags Tools extracting for media meta-data tags =coolaj86 util m4a aac mp3 id3 jpeg exiv xmp node3p An Amazon MP3 downloader for NodeJS. =ncb000gt Of course upon finding a module it's installed as follows: $ npm install mediatags After installing a module one may want to see the documentation, which would be on the module's website. The homepage tag in the package.json lists that URL. The easiest way to look at the package.json file is with the npm view command, as follows: $ npm view zombie ... { name: 'zombie', description: 'Insanely fast, full-stack, headless browser testing using Node.js', ... version: '0.9.4', homepage: 'http://zombie.labnotes.org/', ... npm ok You can use npm view to extract any tag from package.json, like the following which lets you view just the homepage tag: $ npm view zombie homepage http://zombie.labnotes.org/ Using the npm commands The main npm command has a long list of sub-commands for specific package management operations. These cover every aspect of the lifecycle of publishing packages (as a package author), and downloading, using, or removing packages (as an npm consumer). Getting help with npm Perhaps the most important thing is to learn where to turn to get help. The main help is delivered along with the npm command accessed as follows: For most of the commands you can access the help text for that command by typing the following: $ npm help <command> The npm website (http://npmjs.org/) has a FAQ that is also delivered with the npm software. Perhaps the most important question (and answer) is: Why does npm hate me? npm is not capable of hatred. It loves everyone, even you.  
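Putting these commands together, a typical discover-install-learn loop looks something like the following. The package name is taken from the search example above, output is omitted, and exact results will vary with the state of the registry:

$ npm search mediatags           # find candidate packages
$ npm install mediatags          # install into ./node_modules
$ npm ls                         # confirm what ended up installed
$ npm view mediatags homepage    # find the project's documentation site
$ npm help install               # built-in help for any sub-command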
Replication in MySQL Admin

Packt
17 Jan 2011
10 min read
Replication is an interesting feature of MySQL that can be used for a variety of purposes. It can help to balance server load across multiple machines, ease backups, provide a workaround for the lack of fulltext search capabilities in InnoDB, and much more. The basic idea behind replication is to reflect the contents of one database server (this can include all databases, only some of them, or even just a few tables) to more than one instance. Usually, those instances will be running on separate machines, even though this is not technically necessary. Traditionally, MySQL replication is based on the surprisingly simple idea of repeating the execution of all statements issued that can modify data—not SELECT—against a single master machine on other machines as well. Provided all secondary slave machines had identical data contents when the replication process began, they should automatically remain in sync. This is called Statement Based Replication (SBR). With MySQL 5.1, Row Based Replication (RBR) was added as an alternative method for replication, targeting some of the deficiencies SBR brings with it. While at first glance it may seem superior (and more reliable), it is not a silver bullet—the pain points of RBR are simply different from those of SBR. Even though there are certain use cases for RBR, all recipes in this chapter will be using Statement Based Replication. While MySQL makes replication generally easy to use, it is still important to understand what happens internally to be able to know the limitations and consequences of the actions and decisions you will have to make. We assume you already have a basic understanding of replication in general, but we will still go into a few important details. Statement Based Replication SBR is based on a simple but effective principle: if two or more machines have the same set of data to begin with, they will remain identical if all of them execute the exact same SQL statements in the same order. Executing all statements manually on multiple machines would be extremely tedious and impractical. SBR automates this process. In simple terms, it takes care of sending all the SQL statements that change data on one server (the master) to any number of additional instances (the slaves) over the network. The slaves receiving this stream of modification statements execute them automatically, thereby effectively reproducing the changes the master machine made to its data originally. That way they will keep their local data files in sync with the master's. One thing worth noting here is that the network connection between the master and its slave(s) need not be permanent. In case the link between a slave and its master fails, the slave will remember up to which point it had read the data last time and will continue from there once the network becomes available again. In order to minimize the dependency on the network link, the slaves will retrieve the binary logs (binlogs) from the master as quickly as they can, storing them on their local disk in files called relay logs. This way, the connection, which might be some sort of dial-up link, can be terminated much sooner while executing the statements from the local relay-log asynchronously. The relay log is just a copy of the master's binlog. The following image shows the overall architecture: Filtering In the image you can see that each slave may have its individual configuration on whether it executes all the statements coming in from the master, or just a selection of those. 
This can be helpful when you have some slaves dedicated to special tasks, where they might not need all the information from the master. All of the binary logs have to be sent to each slave, even though it might then decide to throw away most of them. Depending on the size of the binlogs, the number of slaves and the bandwidth of the connections in between, this can be a heavy burden on the network, especially if you are replicating via wide area networks. Even though the general idea of transferring SQL statements over the wire is rather simple, there are lots of things that can go wrong, especially because MySQL offers some configuration options that are quite counter-intuitive and lead to hard-to-find problems. For us, this has become a best practice: "Only use qualified statements and replicate-*-table configuration options for intuitively predictable replication!" What this means is that the only filtering rules that produce intuitive results are those based on the replicate-do-table and replicate-ignore-table configuration options. This includes those variants with wildcards, but specifically excludes the all-database options like replicate-do-db and replicate-ignore-db. These directives are applied on the slave side on all incoming relay logs. The master-side binlog-do-* and binlog-ignore-* configuration directives influence which statements are sent to the binlog and which are not. We strongly recommend against using them, because apart from hard-to-predict results they will make the binlogs undesirable for server backup and restore. They are often of limited use anyway as they do not allow individual configurations per slave but apply to all of them. Setting up automatically updated slaves of a server based on a SQL dump In this recipe, we will show you how to prepare a dump file of a MySQL master server and use it to set up one or more replication slaves. These will automatically be updated with changes made on the master server over the network. Getting ready You will need a running MySQL master database server that will act as the replication master and at least one more server to act as a replication slave. This needs to be a separate MySQL instance with its own data directory and configuration. It can reside on the same machine if you just want to try this out. In practice, a second machine is recommended because this technique's very goal is to distribute data across multiple pieces of hardware, not place an even higher burden on a single one. For production systems you should pick a time to do this when there is a lighter load on the master machine, often during the night when there are less users accessing the system. Taking the SQL dump uses some extra resources, but unless your server is maxed out already, the performance impact usually is not a serious problem. Exactly how long the dump will take depends mostly on the amount of data and speed of the I/O subsystem. You will need an administrative operating system account on the master and the slave servers to edit the MySQL server configuration files on both of them. Moreover, an administrative MySQL database user is required to set up replication. We will just replicate a single database called sakila in this example. Replicating more than one database In case you want to replicate more than one schema, just add their names to the commands shown below. To replicate all of them, just leave out any database name from the command line. How to do it... 
At the operating system level, connect to the master machine and open the MySQL configuration file with a text editor. Usually it is called my.ini on Windows and my.cnf on other operating systems. On the master machine, make sure the following entries are present and add them to the [mysqld] section if they are not already there:

server-id=1000
log-bin=master-bin

If one or both entries already exist, do not change them but simply note their values. The log-bin setting need not have a value, but can stand alone as well. Restart the master server if you need to modify the configuration.

Create a user account on the master that can be used by the slaves to connect:

master> grant replication slave on *.* to 'repl'@'%' identified by 'slavepass';

Using the mysqldump tool included in the default MySQL install, create the initial copy to set up the slave(s):

$ mysqldump -uUSER -pPASS --master-data --single-transaction sakila > sakila_master.sql

Transfer the sakila_master.sql dump file to each slave you want to set up, for example, by using an external drive or network copy.

On the slave, make sure the following entries are present and add them to the [mysqld] section if they are not already there:

server-id=1001
replicate-wild-do-table=sakila.%

When adding more than one slave, make sure the server-id setting is unique among the master and all slaves. Restart the slave server.

Connect to the slave server and issue the following commands (assuming the data dump was stored in the /tmp directory):

slave> create database sakila;
slave> use sakila;
slave> source /tmp/sakila_master.sql;
slave> CHANGE MASTER TO master_host='master.example.com', master_port=3306, master_user='repl', master_password='slavepass';
slave> START SLAVE;

Verify the slave is running with:

slave> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
    ...
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    ...

How it works...

Some of the instructions discussed in the previous section are there to make sure that both master and slave are configured with different server-id settings. This is of paramount importance for a successful replication setup. If you fail to provide unique server-id values to all your server instances, you might see strange replication errors that are hard to debug. Moreover, the master must be configured to write binlogs—a record of all statements manipulating data (this is what the slaves will receive). Before taking a full content dump of the sakila demo database, we create a user account for the slaves to use. This needs the REPLICATION SLAVE privilege. Then a data dump is created with the mysqldump command-line tool. Notice the provided parameters --master-data and --single-transaction. The former is needed to have mysqldump include information about the precise moment the dump was created in the resulting output. The latter parameter is important when using InnoDB tables, because only then will the dump be created based on a transactional snapshot of the data. Without it, statements changing data while the tool was running could lead to an inconsistent dump. The output of the command is redirected to the /tmp/sakila_master.sql file. As the sakila database is not very big, you should not see any problems. However, if you apply this recipe to larger databases, make sure you send the data to a volume with sufficient free disk space—the SQL dump can become quite large.
To save space here, you may optionally pipe the output through gzip or bzip2 at the cost of a higher CPU load on both the master and the slaves, because they will need to unpack the dump before they can load it, of course. If you open the uncompressed dump file with an editor, you will see a line with a CHANGE MASTER TO statement. This is what --master-data is for. Once the file is imported on a slave, it will know at which point in time (well, rather at which binlog position) this dump was taken. Everything that happened on the master after that needs to be replicated. Finally, we configure that slave to use the credentials set up on the master before to connect and then start the replication. Notice that the CHANGE MASTER TO statement used for that does not include the information about the log positions or file names because that was already taken from the dump file just read in. From here on the slave will go ahead and record all SQL statements sent from the master, store them in its relay logs, and then execute them against the local data set. This recipe is very important because the following recipes are based on this! So in case you have not fully understood the above steps yet, we recommend you go through them again, before trying out more complicated setups.
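As a footnote to the compression tip mentioned above, the dump, transfer, and load steps can be piped through gzip. The host names and file paths below are placeholders, and this is a sketch of the idea rather than part of the original recipe:

# On the master: dump and compress in one step.
$ mysqldump -uUSER -pPASS --master-data --single-transaction sakila | gzip > /tmp/sakila_master.sql.gz

# Copy the compressed dump to the slave (placeholder host name).
$ scp /tmp/sakila_master.sql.gz slave.example.com:/tmp/

# On the slave: create the schema, then decompress and load in one pipeline.
$ mysql -uUSER -pPASS -e "CREATE DATABASE sakila"
$ gunzip -c /tmp/sakila_master.sql.gz | mysql -uUSER -pPASS sakila

The CHANGE MASTER TO and START SLAVE statements from the recipe are still issued afterwards, exactly as before.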
Drupal Site Configuration: Site Information, Triggers and File System

Packt
08 Sep 2010
9 min read
(For more resources on Drupal, see here.) Not everything that is available in Drupal's Configuration section is discussed in this article. Some settings are very straightforward and really don't warrant more than perhaps a brief mention. Before we start It is sensible to make note of a few important things before getting our hands dirty. Make it second nature to check how the changes made to the settings affect the site. Quite often settings you modify, or features you add, will not behave precisely as expected and without ensuring that you use a prudent approach to making changes, you can sometimes end up with a bit of a mess. Changes to the site's structure (for example, adding new modules) can affect what is and isn't available for configuration so be aware that it may be necessary to revisit this section. Click on Configuration in the toolbar menu. You should see something like the following screenshot: A quick point to mention is that we aren't giving over much space to the final option—Regional and Language. This is because the settings here are very basic and should give you no trouble at all. There is also an online exercise available to help you with date types and formats if you are interested in customizing these. Let's begin! Site information This page contains a mixed bag of settings, some of which are pretty self-explanatory, while others will require us to think quite carefully about what we need to do. To start with, we are presented with a few text boxes that control things like the name of the site and the site slogan. Nothing too earth shattering, although I should point out that different themes implement these settings differently, while some don't implement them at all. For example, adding a slogan in the default Garland theme prints it after the site name as shown in the following screenshot: Whereas, the Stark theme places the slogan beneath the site name: Let's assume that there is a page of content that should be displayed by default—before anyone views any of the other content. For example, if you wanted to display some sort of promotional information or an introduction page, you could tell Drupal to display that using this setting. Remember that you have to create the content for this post first, and then determine its path before you tell Drupal to use it. For example, we could reference a specific node with its node ID, but equally, a site's blogs could be displayed if you substitute node/x (in node/ID format) for the blog. Once you are looking at the content intended for the front page, take note of the relative URL path and simply enter that into the text box provided. Recall that the relative URL path is that part of the page's address that comes after the standard domain, which is shared by the whole site. For example, setting node/2 works because Drupal maps this relative path to http://localhost/drupal/node/2 The first part of this address, http://localhost/drupal/ is the base URL, and everything after that is the relative URL path. Sometimes, the front page is a slightly more complex beast and it is likely that you will want to consider Panels to create a unique front page. In this case, Panels settings can override this setting to make a specific panel page as the front page. The following settings allow you to broadly deal with the problem of two common site errors that may crop up during a site's normal course of operation—from the perspective of a site visitor. 
In particular, you may wish to create a couple of customized error pages that will be displayed to the users in the event of a "page not found" or "access denied" problem. Remember that there are already pretty concise pages, which are supplied by default. However, if you wish to make any changes, then the process for creating an error page is exactly the same as creating any other normal page. Let's make a change very quickly. Click on Add new content in the Shortcuts menu and select Basic page. Add whatever content you want for, say the Page not found! error: Don't worry about the host of options available on this page—we will talk about all of this later on. For now, simply click on the Save button and make note of the URL of the page when it is displayed. Then head back to the Site information page, add this URL to the Default 404 (not found) page dialog, and then click on the Save configuration button: If you navigate to a page that doesn't exist, for example, node/3333, you should receive the new error message as follows: In this example, we asked Drupal to find a node that does not exist yet and so it displayed the Page not found! error message. Since Drupal can also provide content that is private or available to only certain users, it also needs the access denied error to explain to the would-be users that they do not have sufficient permissions to view the requested page. This is not the same as not finding a page, of course, but you can create your own access denied page in exactly the same way. Finally, you will need to specify how often cron should run in the Automatically run cron drop-down at the bottom of the Site information page. Cronjobs are automated tasks (of any type—they could be search indexing, feed aggregation, and so on) that should run at specified intervals. Drupal uses them to keep itself up-to-date and ensure optimal operation. Drupal uses web page requests to initiate new cron runs once the specified interval has elapsed. If your website does not get visited regularly, cron itself cannot run regularly. Running cron every few hours is reasonable for the vast majority of sites. Setting it to run too quickly can create a huge load on the server because each time the cron is run, all sorts of scripts are updating data, performing tasks, and consuming server resources. By the same token, run cron too infrequently and your site's content can become outdated, or worse, important module, theme, and core updates can go unnoticed, among other things. Actions and triggers Quite often, it happens that for specific events, it is useful to have Drupal automatically perform a specified task or action. An action, in the Drupal sense, is one of a number of tasks that the system can perform, and these usually relate to e-mailing people or acting upon user accounts or content. There are a number of simple actions that are available as well as a few more advanced ones that can be set up by anyone with sufficient permissions. To configure actions, navigate to Actions in SYSTEM under the Configuration menu in the toolbar: Default simple actions cannot be modified, so we will ignore these for the moment and focus on creating a new, advanced action. Set up a new Send e-mail action by selecting it from the drop-down list and click on the Create button, as shown in the preceding screenshot. 
This brings up the following page that can be set according to how this specific action will be used: It should be clear that the intention of this e-mail is to notify the staff/administration of any new site members. The Label field is important in this respect because this is how you will distinguish this action from the other ones that you may create in the future. Make the description as accurate, meaningful, and concise as possible to avoid any potential confusion. Also notice that there are several placeholder variables that can be inserted into the Recipient, Subject, and Message fields. In this instance, one has been used to inform the e-mail recipient of the new user name, as part of the message. A click on the Save button adds this new action to the list where it can be modified or deleted, accordingly: So far so good—we have set the action, but this in itself does absolutely nothing. An action cannot do anything unless there is a specific system event that can be triggered to set it off. These system events are, perspicaciously enough, called triggers and Drupal can look for any number of triggers, and perform the actions that are associated with it—this is how actions and triggers work together. Triggers are not part of the topic of Drupal configuration. However, we will discuss them here for completeness, since actions and triggers are integrally linked. Triggers are not enabled by default, so head on over to the Modules section and enable the Triggers module. With the module enabled, there will now be a new Triggers link from the Actions page. Clicking on this brings up the following page: Triggers are divided into five different categories, each providing a range of triggers to which actions can be attached. Assigning a trigger is basically selecting an action to apply from the drop-down list of the relevant trigger and clicking on the Assign button. To continue with our example, select the USER tab from the top of the Triggers overlay and, in the TRIGGER: AFTER CREATING A NEW USER ACCOUNT box, select the newly defined action, as shown in the following screenshot: Click on the Assign button, and the newly assigned action will show up in the relevant trigger box: In the same way, a large number of actions can be automated depending on the system event (or trigger) that fires. To test this out, log off and register a new account—you will find that the New User Alert e-mail is dutifully sent out once the account has been registered (assuming your web server is able to send e-mail).
Advanced Theme in Liferay User Interface Development

Packt
01 Dec 2010
16 min read
Liferay User Interface Development Develop a powerful and rich user interface with Liferay Portal 6.0 Design usable and great-looking user interfaces for Liferay portals Get familiar with major theme development tools to help you create a striking new look for your Liferay portal Learn the techniques and tools to help you improve the look and feel of any Liferay portal A practical guide with lots of sample code included from real Liferay Portal Projects free for use for developing your own projects Changing theme.parent property in theme Liferay provides four different theme implementations out-of-box in the ${PORTAL_ROOT_HOME}/html/themes/ folder as shown in the following screenshot: When you create a new theme in the themes/ directory of the Plugins SDK, the Ant script will copy some files to the new theme from one of these three folders, _styled/, _unstyled/, and classic/, which can be specified in the ${PLUGINS_SDK_HOME}/themes/build-common-theme.xml file. For example, you can go to the ${PLUGINS_SDK_HOME}/themes/ directory and run create.bat palmtree "Palm-Tree Publications theme" on Windows or ./create.sh palmtree "Palm-Tree Publications theme" on Linux, respectively. Or you can use Liferay IDE to create same New Liferay Theme Plugin Project. This will create a theme folder named palmtree-theme in the Liferay Plugins SDK environment. When you run ant to build and deploy the theme and then apply the newly created theme to your page, you will find out that the look and feel of the page is messed up, as shown in the following screenshot: If you are familiar with theme development in Liferay Portal 5.2, you may wonder what has happened to the theme created in Liferay Portal 6 Plugins SDK because you would expect a fully working theme with two simple commands, create and ant. This is because the following build.xml file in your theme specifies _styled as the value for the theme.parent property: <?xml version="1.0"?> <project name="palmtree-theme" basedir="." default="deploy"> <import file="../build-common-theme.xml" /> <property name="theme.parent" value="_styled" /> </project> This means that when your newly created theme is built, it will copy all the files from the _styled/ folder in the ${PORTAL_ROOT_HOME}/html/themes/ directory to the docroot/ folder of your theme. The default _styled/ folder does not have enough files to create a completely working theme and that is why you see a messed-up page when the theme is applied to a page. The reason why this default _styled/ folder does not include enough files is that some Liferay users prefer to have minimal set of files to start with. You can modify the build.xml file for your theme in the ${PLUGINS_SDK_HOME}/themes/palmtree-theme/ folder by changing value of the theme.parent property from _styled to classic, if you prefer to use the Classic theme as the basis for your theme modification. <property name="theme.parent" value="classic" /> Now you will see that your page looks exactly the same as that with the Classic theme after you build and apply your theme to the page (refer to the following screenshot): Adding color schemes to a theme When you create a theme in the Plugins SDK environment, the newly created theme by default supports one implementation and does not automatically have color schemes as variations of the theme. This fits well if you would like to have a consistent look and feel, especially in terms of the color display, across all the pages that the theme is applied to. 
In your portal application, you might have different sites with slightly different look and feel for different groups of users such as physicians and patients. You might also need to display different pages such as public pages with one set of colors and the private pages with a different set of colors. However, you don't want to create several different themes for reasons such as easy maintenance. In this case, you might consider creating different color schemes in your theme. Color schemes are specified using a Cascading Style Sheets (CSS) class name, with which you are able to not only change the colors, but also choose different background images, border colors, and so on. In the previous section, we created a PalmTree Publication theme, which takes the Classic theme as its parent theme. Now we can follow the mentioned steps to add color schemes to this theme: Copy the ${PORTAL_ROOT_HOME}/webapps/palmtree-theme/WEB-INF/liferay-look-and-feel.xml file to the ${PLUGINS_SDK_HOME}/themes/palmtree-theme/docroot/WEB-INF/ folder. Open the ${PLUGINS_SDK_HOME}/themes/palmtree-theme/docroot/WEB-INF/liferay-look-and-feel.xml file in your text editor. Change <theme id="palmtree" name="PalmTree Publication Theme" /> as shown in the highlighted lines here, and save the change: <look-and-feel> <compatibility> <version>6.0.5+</version> </compatibility> <theme id="palmtree" name="Palm Tree Publications Theme"> <color-scheme id="01" name="Blue"> <css-class>blue</css-class> <color-scheme-images-path>${images-path}/color_schemes/ blue</color-scheme-images-path> </color-scheme> <color-scheme id="02" name="Green"> <css-class>green</css-class> </color-scheme> <color-scheme id="03" name="Orange"> <css-class>orange</css-class> </color-scheme> </theme> </look-and-feel> Go to the ${PLUGINS_SDK_HOME}/themes/palmtree-theme/docroot/palmtree-theme/_diffs/ folder and create a css/ subfolder. Copy both custom.css and custom_common.css from the ${PORTAL_ROOT_HOME}/html/themes/classic/_diffs/css/ folder to the ${PLUGINS_SDK_HOME}/themes/palmtree-theme/docroot/_diffs/css/ folder. This is to let the default styling handle the first color scheme blue. Create a color_schemes/ folder under the ${PLUGINS_SDK_HOME}/themes/palmtree-theme/docroot/palmtree-theme/_diffs/css/ folder. Place one .css file for each of your additional color schemes. In this case, we are going to create two additional color schemes: green and orange. To make the explanation simpler, copy both the green.css and orange.css files from the ${PORTAL_ROOT_HOME}/html/themes/classic/_diffs/css/color_schemes/ folder to the ${PLUGINS_SDK_HOME}/themes/palmtree-theme/docroot/_diffs/css/color_schemes/ folder. Copy all images from the ${PORTAL_ROOT_HOME}/html/themes/classic/_diffs/images/ folder to the ${PLUGINS_SDK_HOME}/themes/palmtree-theme/docroot/_diffs/images/ folder. Add the following lines in your ${PLUGINS_SDK_HOME}/themes/palmtree-theme/docroot/_diffs/css/custom.css file: @import url(color_schemes/green.css); @import url(color_schemes/orange.css); If you open either the green.css or the orange.css file, you will be able to identify the styling for the CSS by using a color prefix for each CSS definition. 
For example, in the orange.css file you would find that each item is defined like this: orange .dockbar { background-color: #AFA798; background-image: url(../../images/color_schemes/orange/dockbar /dockbar_bg.png); } .orange .dockbar .menu-button-active { background-color: #DBAC5C; background-image: url(../../images/color_schemes/orange/dockbar /button_active_bg.png); } In the green.css file, the style is defined like this: green .dockbar { background-color: #A2AF98; background-image: url(../../images/color_schemes/green/dockbar/ dockbar_bg.png); } .green .dockbar .menu-button-active { background-color: #95DB5C; background-image: url(../../images/color_schemes/green/dockbar/ button_active_bg.png); } Also, notice that the related images used for the different color schemes are included in the ${PLUGINS_SDK_HOME}/themes/palmtree-theme/docroot/_diffs/images/color_schemes/ folder. Re-build and deploy the PalmTree Publication theme. Log in as administrator and go to the Control Panel to apply this theme to the PalmTree Publications Inc. organization. You will be able to see three color schemes available under this theme. Select any of them to apply it to the Public Pages. Take a screenshot of the page when each of the color schemes is applied and save it as thumbnail.png in the corresponding blue, green, and orange subfolders in the ${PLUGINS_SDK_HOME}/themes/palmtree-theme/docroot/_diffs/images/color_scheme/ folder. Three screenshots are used to distinguish between the three color schemes in the Control Panel as seen in the following screenshot: The following screenshots shows how each of the three color schemes looks when applied to a portal page: As shown in above screenshot, color scheme blue has been applied on the Home page, by default. The following screenshot shows applying color scheme green on the current page. Of course, you would be able to apply color schemes blue or green in the entire site, if required. Similar to color schemes blue and green, you can apply color scheme orange as well on the current page or the entire site, as shown in following screenshot: So it works! The page background is with a hue of gray color. Now, what if we want to change the page background color to red for the green color schema? Update the ${PLUGINS_SDK_HOME}/themes/palmtree-theme/docroot/_diffs/css/color_schemes/green.css file as follows. The commented outline is the original content. The next line after the comment is the new content. The #FF0000 code is for the red color. body.green, .green .portlet { /*background-color: #F1F3EF;*/ background-color: #FF0000; } Re-deploy the PalmTree theme and refresh the portal page that uses the green color scheme. Now, you should be able to see the portal page with a red background color. As you can see, you can use theme color schemes to create some variants of the same theme without creating multiple themes. This is useful when you have different but related user roles such as physicians, nurses, and patients and you need to build a different site for each of them. You can use the color schemes to display each of these sites in a slightly different look and feel. Configurable theme settings There are many use cases where you would like to change some default settings in the theme so that you can modify the values after the theme is built and deployed. Fortunately, each theme can define a set of settings to make this configurable. 
The settings are defined as key-value pairs in the liferay-look-and-feel.xml file of the theme with the following syntax: <settings> <setting key="my-setting1" value="my-value1" /> <setting key="my-setting2" value="my-value2" /> </settings> These settings can then be accessed in the theme templates using the following code: $theme.getSetting("my-setting1") $theme.getSetting("my-setting2") For example, I need to create two themes—PalmTree Publications theme and AppleTree Publications theme. They are exactly the same except for some differences in the footer content that includes copyright, terms of use, privacy policy, and so on. Instead of creating two themes packaged as separate .war files, we create one set of two themes that share the majority of the code including CSS, JavaScript, images, and most templates; but with one configurable setting and two different implementations of the footer Velocity files. Here is how this can be done: Open ${PLUGINS_SDK_HOME}/themes/palmtree-theme/docroot/WEB-INF/liferay-look-and-feel.xml in the above sample PalmTree theme. Copy the PalmTree theme section and paste it in the same file but after this section. Rename the values of id and name from palmtree and PalmTree Publications theme to appletree and AppleTree Publications theme in the second section. Add the following setting to the palmtree section before the color-scheme definition: <settings> <setting key="theme-id" value="palmtree" /> </settings> Add the following setting to the appletree section before the color-scheme definition: <settings> <setting key="theme-id" value="appletree" /> </settings> Find the ${PLUGINS_SDK_HOME}/themes/palmtree-theme/docroot/WEB-INF/liferay-look-and-feel.xml file as follows: <look-and-feel> // ignore details <theme id="palmtree" name="PalmTree Publications Theme"> <settings> <setting key="theme-id" value="palmtree" /> </settings> <color-scheme id="01" name="Blue"> // ignore details </color-scheme> </theme> <theme id="appletree" name="AppleTree Publications Theme"> <settings> <setting key="theme-id" value="appletree" /> </settings> <color-scheme id="01" name="Blue"> // ignore details </color-scheme> </theme> </look-and-feel> Copy the portal_normal.vm file of the Classic theme from the ${PORTAL_ROOT_HOME}/html/themes/classic/_diffs/templates/ folder to the ${PLUGINS_SDK_HOME}/themes/palmtree-theme/docroot/_diffs/templates/ folder. Open the ${PLUGINS_SDK_HOME}/themes/palmtree-theme/docroot/_diffs/templates/portal_normal.vm file Replace the default footer section with the following code: #set ($theme_id = $theme.getSetting("theme-id")) #if ($theme_id == "palmtree") #parse ("$full_templates_path/footer_palmtree.vm") #else #parse ("$full_templates_path/footer_appletree.vm") #end Create your own version of the two footer Velocity templates in the ${PLUGINS_SDK_HOME}/themes/palmtree-theme/docroot/_diffs/templates/ folder. Add related CSS definitions for your footer in your custom.css file. Build and deploy the theme .war file. Now you should be able to see both the PalmTree and AppleTree themes when you go to the Control Panel to apply either theme to your page. Based on the theme to be used, you should also notice that your footer is different. Of course, we can take other approaches to implement a different footer in the theme. For example, you can dynamically get the organization or community name and render the footer differently. 
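A rough sketch of that alternative is shown below. It is not taken from the book's code: it assumes the $theme_display variable is exposed to portal_normal.vm (as it is in the stock Liferay 6 templates) and that the scope group's descriptive name is a reasonable stand-in for the organization or community name, so adjust the comparison to whatever naming your sites actually use. The theme-setting approach from the main text remains the more flexible option.

## Hypothetical footer switch in portal_normal.vm, keyed on the current site name
## instead of on a configurable theme setting.
#set ($site_name = $theme_display.getScopeGroup().getDescriptiveName())

#if ($site_name == "PalmTree Publications Inc.")
    #parse ("$full_templates_path/footer_palmtree.vm")
#else
    #parse ("$full_templates_path/footer_appletree.vm")
#end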
However, the approach we explained previously can be expanded to control the UI of the other theme components such as the header, navigation, and portlets. Portal predefined settings in theme In the previous section, we discussed that theme engineers can add configurable custom settings in the liferay-look-and-feel.xml file of a theme. Liferay portal can also include some predefined out-of-the-box settings such as portlet-setup-show-borders-default and bullet-style-options in a theme to control certain default behavior of the theme. Let us use portlet-setup-show-borders-default as an example to explain how Liferay portal controls the display of the portlet border at different levels. If this predefined setting is set to false in your theme, Liferay portal will turn off the borders for all portlets on all pages where this theme is applied to. <settings> <setting key="portlet-setup-show-borders-default" value="false" /> </settings> By default, the value is set to true, which means that all portlets will display the portlet border by default. If the predefined portlet-setup-show-borders-default setting is set to true, it can be overwritten for individual portlets using the liferay-portlet.xml file of a portlet as follows: <liferay-portlet-app> // ignore details <portlet> <portlet-name>sample</portlet-name> <icon>/icon.png</icon> <use-default-template>false</use-default-template> <instanceable>true</instanceable> <header-portlet-css>/css/main.css</header-portlet-css> <footer-portlet-javascript>/js/main.js</footer-portlet-javascript> <css-class-wrapper>sample-portlet</css-class-wrapper> </portlet> // ignore details </liferay-portlet-app> Set the use-default-template value to true if the portlet uses the default template to decorate and wrap content. Setting it to false will allow the developer to maintain the entire output content of the portlet. The default value is true. The most common use of this is when you want the portlet to look different from the other portlets or if you want the portlet not to have borders around the output content. The use-default-template setting of each portlet, after being set to either true or false, in the liferay-portlet.xml file, can be further overwritten by the portlet's popup CSS setting. Users with the appropriate permissions can change it by going to the Look and Feel | Portlet Configuration | Show Borders checkbox of the portlet, as shown in the next screenshot: Embedding non-instanceable portlets in theme One common requirement in theme development is to add some portlets in different components of a theme. For example, you might want to add the Sign In portlet in the header of your theme, the Web Content Search portlet in the navigation area, and the Site Map portlet in the footer area. All the Liferay out-of-the-box portlets can be referred in the ${PORTAL_ROOT_HOME}/WEB-INF/liferay-portlet.xml file. How can this be achieved? Well, it can be quite easy or pretty tricky to embed an out-of-the-box portlet in a theme. 
Embedding Dockbar and Breadcrumb portlets in a theme The Dockbar portlet is embedded at the very top in the default Classic theme in the portal_normal.vm file as shown next: #if($is_signed_in) #dockbar() #end In the same way, the Breadcrumb portlet can be embedded in the content area of the Classic theme in the portal_normal.vm file: #breadcrumbs() Embedding Language and Web Content Search portlets in a theme Some other Liferay out-of-the-box portlets such as the Language and Web Content Search portlets can be embedded in a theme or a layout template easily. For example, the Web Content Search portlet can be added to the far right side of the horizontal navigation area of your theme as follows in the navigation.vm file of your theme. <div id="navbar"> <div id="navigation" class="sort-pages modify-pages"> <div id="custom"> $theme.journalContentSearch() </div> // ignore details </div> </div> In the same way, the Language portlet can be embedded in the portal_normal.vm Velocity template file of your theme: $theme.language() Again, you need to add the necessary CSS definitions in your custom.css file to control the look and feel and the location of your embedded portlet(s). Embedding Sign In portlet in the header area of a theme Sometimes the theme design requires that the Liferay Sign In portlet be in the header area. By default, the Sign In portlet has a portlet border but we need to disable it. The previously mentioned approaches for embedding a portlet in a theme through Velocity attributes in a template do not work in this case because we need to customize the default UI of the embedded portlet. Instead, we can add the following code to the header section in the portal_normal.vm file in our sample PalmTree theme: #if(!$is_signed_in) #set ($locPortletId = "58") $velocityPortletPreferences.setValue("portlet-setup-show-borders", "false") #set($locRenderedPortletContent = $theme.runtime($locPortletId, "", $velocityPortletPreferences.toString())) $locRenderedPortletContent $velocityPortletPreferences.reset() #end After the theme is re-built and re-deployed, we can see that the Sign In portlet is rendered in the header area underneath the logo without the portlet border. The next step for us is to modify the custom.css file and the related files by adding CSS definition to control the location and look and feel of this Sign In portlet. The following screenshot shows the Sign In portlet in the header area and the Web Content Search portlet in the navigation area in a working theme in the production environment:
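As a starting point for that CSS work, something along the following lines could go into custom.css. The selectors are assumptions: stock Liferay 6 themes wrap the Sign In portlet in a .portlet-login class and place the header markup in a #banner element, but you should verify the actual markup generated by your theme with your browser's developer tools and adjust the selectors accordingly.

/* Hypothetical positioning for the embedded Sign In portlet in the header.
   Verify the #banner and .portlet-login selectors against your theme's markup. */
#banner .portlet-login {
    position: absolute;
    top: 10px;
    right: 20px;
    width: 250px;
    background: transparent;
}

/* Hide the portlet title so only the login form shows in the header. */
#banner .portlet-login .portlet-title {
    display: none;
}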