
How-To Tutorials - Full-Stack Web Development

52 Articles
Richard Gall
24 May 2019
6 min read

5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]

If you’re a Node.js developer, it might seem odd to be thinking about Azure. However, as the software landscape becomes increasingly cloud native, it’s well worth thinking about the cloud solution you and your organization use. It should, after all, make life easier for you as much as it should help your company scale and provide better services and user experiences for customers.

We don’t often talk about it, but cloud isn’t one thing: it’s a set of tools that provide developers with new ways of building and managing apps. It helps you experiment and learn. In the development of Azure, developer experience is at the top of the agenda. In many ways the platform represents Microsoft’s transformation as an organization, from one that seemed to distrust open source developers to one that is hell bent on making them happier and more productive.

So, if you’re a Node.js developer - or any kind of JavaScript developer, for that matter - reading this with healthy scepticism (and why wouldn’t you?), let’s look at some of the reasons and ways Azure can support you in your work... This post is part of a series brought to you in conjunction with Microsoft. Download Learning Node.js Development for free courtesy of Microsoft here.

Deploy apps quickly with Azure App Service

As a developer, deploying applications quickly is one of your top priorities. Azure supports that with Azure App Service. Essentially, Azure App Service is a PaaS that brings together a variety of other Azure services and resources, helping you to develop and host applications without worrying about your infrastructure. There are lots of reasons to love Azure App Service, not least the speed with which it allows you to get up and running, but most importantly it gives application developers access to a range of Azure features, such as load balancing and security, as well as the platform's integrations with tools for DevOps processes.
Azure App Service works for developers on a range of platforms, from Python to PHP - Node.js developers who want to give it a try should start here.

Manage application and infrastructure resources with the Azure CLI

The Azure CLI is a useful tool for managing cloud resources. It can also be used to deploy an application quickly. If you’re a developer who likes working with the CLI, this feature really does offer a nice way of working, allowing you to move easily between each step in the development and deployment process. If you want to try deploying a Node.js application using the Azure CLI, check out this tutorial, or learn more about the Azure CLI here.

Go serverless with Azure Functions

Serverless has been getting serious attention over the last 18 months. While it’s true that serverless is a hyped field, and that in reality there are serious considerations to be made about how and where you choose to run your software, it’s relatively easy to try it out for yourself using Azure. In fact, the name Azure Functions is itself useful in demystifying serverless: the word ‘functions’ is a much more accurate description of what you’re doing as a developer. A function is essentially a small piece of code, running in the cloud, that executes certain actions or tasks in specific situations. There are many reasons to go serverless, from a pay-per-use pricing model to support for your preferred dependencies. And while there are plenty of options in terms of cloud providers, Azure is worth exploring because it makes it so easy for developers to leverage. Learn more about Azure Functions here.

Simple, accessible dashboards for logging and monitoring

In 2019, building more reliable and observable systems will expand beyond the preserve of SREs and become something developers are accountable for too. This is the next step in the evolution of software engineering, as new silos are broken down.
It’s for this reason that the monitoring tools offered by Azure could prove so valuable for developers. With Application Insights and Azure Monitor, you can gain the level of transparency you need to properly manage your application. Learn how to successfully monitor a Node.js app here.

Build and deploy applications with Azure DevOps

DevOps shouldn’t really require additional effort and thinking - but more often than not it does. Azure is a platform that appears to understand this implicitly, and the team behind it has done a lot to make it easier to cultivate a DevOps culture, with several useful tools and integrations. Azure Test Plans is a toolkit for testing applications that can seriously improve the way you test in your development processes, while Azure Boards supports project management from inside the Azure ecosystem - useful if you’re looking for a new way to manage agile workflows.

But perhaps the most important feature within Azure DevOps - for developers at least - is Azure Pipelines. Azure Pipelines is particularly useful for JavaScript developers, as it gives you the option to run a build pipeline on a Microsoft-hosted agent that has a wide range of common JavaScript tools (like Yarn and Gulp) pre-installed. Microsoft claims this is the “simplest way to build and deploy” because “maintenance and upgrades are taken care of for you. Each time you run a pipeline, you get a fresh virtual machine. The virtual machine is discarded after one use.” Find out how to build, test, and deploy Node.js apps using Azure Pipelines with this tutorial.

Read next: 5 developers explain why they use Visual Studio Code

Conclusion: Azure is a place to experiment and learn

We often talk about cloud as a solution or service. And although it can provide solutions to many urgent problems, it’s worth remembering that cloud is really a set of many different tools. It isn’t one thing.
Because of this, cloud platforms like Azure are as much places to experiment and try out new ways of working as they are simply someone else’s server space. With that in mind, it could be worth experimenting with Azure to try out new ideas - after all, what’s the worst that can happen? More than anything, cloud native should make development fun. Find out how to get started with Node.js on Azure. Download Learning Node.js with Azure for free from Microsoft.

Vijin Boricha
13 Apr 2018
6 min read

How to develop RESTful web services in Spring

Today, we will explore the basics of creating a project in Spring and how to leverage Spring Tool Suite for managing the project. To create a new project, we can use a Maven command prompt or an online tool, such as Spring Initializr (http://start.spring.io), to generate the project base. This website comes in handy for creating a simple Spring Boot-based web project to start the ball rolling.

Creating a project base

Let's go to http://start.spring.io in our browser and configure our project by filling in the following parameters to create a project base:

Group: com.packtpub.restapp
Artifact: ticket-management
Search for dependencies: Web (full-stack web development with Tomcat and Spring MVC)

After configuring our project, it will look as shown in the following screenshot. Now you can generate the project by clicking Generate Project. The project (ZIP file) should be downloaded to your system. Unzip the .zip file and you should see the files as shown in the following screenshot. Copy the entire folder (ticket-management) and keep it in your desired location.

Working with your favorite IDE

Now is the time to pick the IDE. Though there are many IDEs used for Spring Boot projects, I would recommend using Spring Tool Suite (STS), as it is open source and it is easy to manage projects with it. In my case, I use sts-3.8.2.RELEASE. You can download the latest STS from this link: https://spring.io/tools/sts/all. In most cases, you may not need to install it; just unzip the file and start using it. After extracting the STS, you can start using the tool by running STS.exe (shown in the preceding screenshot).

In STS, you can import the project by selecting Existing Maven Projects, shown as follows. After importing the project, you can see the project in Package Explorer, as shown in the following screenshot. You can see the main Java file (TicketManagementApplication) by default. To simplify the project, we will clean up the existing POM file and update the required dependencies.
Add this file configuration to pom.xml (note that the original listing declared spring-web twice, with conflicting versions 5.0.1.RELEASE and 5.0.0.RELEASE; the duplicate has been removed here):

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.packtpub.restapp</groupId>
  <artifactId>ticket-management</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>ticket-management</name>
  <description>Demo project for Spring Boot</description>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
  </properties>
  <dependencies>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-web</artifactId>
      <version>5.0.1.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter</artifactId>
      <version>1.5.7.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-tomcat</artifactId>
      <version>1.5.7.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.9.2</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-webmvc</artifactId>
      <version>5.0.1.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
      <version>1.5.7.RELEASE</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
      </plugin>
    </plugins>
  </build>
</project>

In the preceding configuration, you can check that we have
used the following libraries: spring-web, spring-boot-starter, spring-boot-starter-tomcat, spring-webmvc, and jackson-databind. As the preceding dependencies are needed for the project to run, we have added them to our pom.xml file.

So far we have got the base project ready for Spring Web Service. Let's add a basic REST code to the application. First, remove the @SpringBootApplication annotation from the TicketManagementApplication class and add the following annotations:

@Configuration
@EnableAutoConfiguration
@ComponentScan
@Controller

These annotations will help the class act as a web service class. I am not going to talk much about what these configurations will do in this chapter. After adding the annotations, please add a simple method to return a string as our basic web service method:

@ResponseBody
@RequestMapping("/")
public String sayAloha() {
    return "Aloha";
}

Finally, your code will look as follows:

package com.packtpub.restapp.ticketmanagement;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Configuration
@EnableAutoConfiguration
@ComponentScan
@Controller
public class TicketManagementApplication {

    @ResponseBody
    @RequestMapping("/")
    public String sayAloha() {
        return "Aloha";
    }

    public static void main(String[] args) {
        SpringApplication.run(TicketManagementApplication.class, args);
    }
}

Once all the coding changes are done, just run the project as a Spring Boot App (Run As | Spring Boot App). You can verify that the application has loaded by checking this message in the console:

Tomcat started on port(s): 8080 (http)

Once verified, you can check the API in the browser by simply typing localhost:8080.
Check out the following screenshot. If you want to change the port number, you can configure a different port number in application.properties, which is in src/main/resources/application.properties. Check out the following screenshot.

You read an excerpt from Building RESTful Web Services with Spring 5 - Second Edition, written by Raja CSP Raman. From this book, you will learn to implement the REST architecture to build resilient software in Java. Check out other related posts:

Starting with Spring Security
Testing RESTful Web Services with Postman
Applying Spring Security using JSON Web Token (JWT)

Packt
19 Aug 2016
22 min read

ASP.NET Controllers and Server-Side Routes

In this article by Valerio De Sanctis, author of the book ASP.NET Web API and Angular 2, we will explore the client-server interaction capabilities of our frameworks: to put it in other words, we need to understand how Angular 2 will be able to fetch data from ASP.NET Core using its brand new, MVC6-based API structure. We won't be worrying about how ASP.NET Core will retrieve this data - be it from session objects, data stores, DBMS, or any possible data source - as that will come later on. For now, we'll just put together some sample, static data in order to understand how to pass it back and forth by using a well-structured, highly-configurable and viable interface. (For more resources related to this topic, see here.)

The data flow

A Native Web App following the single-page application approach will roughly handle the client-server communication in the following way. In case you are wondering what these Async Data Requests actually are, the answer is simple: everything, as long as it needs to retrieve data from the server, which is something that most common user interactions will normally do, including (yet not limited to) pressing a button to show more data or to edit/delete something, following a link to another app view, submitting a form, and so on. That is, unless the task is so trivial, or involves such a minimal amount of data, that the client can handle it entirely, meaning that it already has everything it needs. Examples of such tasks are: show/hide element toggles, in-page navigation elements (such as internal anchors), and any temporary job requiring a confirmation or save button to be pressed before being actually processed.

The above picture shows, in a nutshell, what we're going to do: define and implement a pattern to serve the JSON-based, server-side responses our application will need to handle the upcoming requests.
Since we've chosen a strongly data-driven application pattern such as a Wiki, we'll surely need to put together a bunch of common CRUD based requests revolving around a defined object which will represent our entries. For the sake of simplicity, we'll call it Item from now on. These requests will address some common CMS-inspired tasks such as: display a list of items, view/edit the selected item's details, handle filters, and text-based search queries and also delete an item. Before going further, let's have a more detailed look on what happens between any of these Data Request issued by the client and JSON Responses send out by the server, i.e. what's usually called the Request/Response flow: As we can see, in order to respond to any client-issued Async Data Request we need to build a server-side MVC6 WebAPIControllerfeaturing the following capabilities: Read and/or Write data using the Data Access Layer. Organize these data in a suitable, JSON-serializableViewModel. Serialize the ViewModel and send it to the client as a JSON Response. Based on these points, we could easily conclude that the ViewModel is the key item here. That's not always correct: it could or couldn't be the case, depending on the project we are building. To better clarify that, before going further, it could be useful to spend a couple words on the ViewModel object itself. The role of the ViewModel We all know that a ViewModel is a container-type class which represents only the data we want to display on our webpage. In any standard MVC-based ASP.NET application, the ViewModel is instantiated by the Controller in response to a GET request using the data fetched from the Model: once built, the ViewModel is passed to the View, where it is used to populate the page contents/input fields. The main reason for building a ViewModel instead of directly passing the Model entities is that it only represents the data that we want to use, and nothing else. 
All the unnecessary properties that are in the model domain object will be left out, keeping the data transfer as lightweight as possible. Another advantage is the additional security it gives, since we can protect any field from being serialized and passed through the HTTP channel.

In a standard Web API context, where the data is passed using RESTful conventions via serialized formats such as JSON or XML, the ViewModel could easily be replaced by a JSON-serializable dynamic object created on the fly, such as this:

var response = new {
    Id = "1",
    Title = "The title",
    Description = "The description"
};

This approach is often viable for small or sample projects, where creating one (or many) ViewModel classes could be a waste of time. That's not our case, though: conversely, our project will greatly benefit from having a well-defined, strongly-typed ViewModel structure, even if all of its instances will eventually be converted into JSON strings.

Our first controller

Now that we have a clear vision of the Request/Response flow and its main actors, we can start building something up. Let's start with the Welcome View, which is the first page that any user will see upon connecting to our Native Web App. This is something that in a standard web application would be called the Home Page, but since we are following a Single Page Application approach, that name isn't appropriate. After all, we are not going to have more than one page.

In most Wikis, the Welcome View/Home Page contains a brief text explaining the context/topic of the project and then one or more lists of items ordered and/or filtered in various ways, such as:

The last inserted ones (most recent first).
The most relevant/visited ones (most viewed first).
Some random items (in random order).

Let's try to do something like that. This will be our master plan for a suitable Welcome View. In order to do that, we're going to need the following set of API calls:

api/items/GetLatest (to fetch the last inserted items).
api/items/GetMostViewed (to fetch the most viewed items).
api/items/GetRandom (to fetch some randomly-picked items).

As we can see, all of them will return a list of items ordered by a well-defined logic. That's why, before working on them, we should provide ourselves with a suitable ViewModel.

The ItemViewModel

One of the biggest advantages in building a Native Web App using ASP.NET and Angular 2 is that we can start writing our code without worrying too much about data sources: they will come later, and only after we're sure about what we really need. This is not a requirement either - you are also free to start with your data source, for a number of good reasons, such as:

You already have a clear idea of what you'll need.
You already have your entity set(s) and/or a defined/populated data structure to work with.
You're used to starting with the data, then moving to the GUI.

All the above reasons are perfectly fine: you won't ever get fired for doing that. Yet, the chance to start with the front-end might help you a lot if you're still unsure about what your application will look like, either in terms of GUI and/or data. In building this Native Web App, we'll take advantage of that: hence why we'll start by defining our ItemViewModel instead of creating its Data Source and Entity class.

From Solution Explorer, right-click the project root node and add a new folder named ViewModels.
Once created, right-click on it and add a new item: from the server-side elements, pick a standard Class, name it ItemViewModel.cs and hit the Add button, then type in the following code:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Threading.Tasks;
using Newtonsoft.Json;

namespace OpenGameListWebApp.ViewModels
{
    [JsonObject(MemberSerialization.OptOut)]
    public class ItemViewModel
    {
        #region Constructor
        public ItemViewModel() { }
        #endregion Constructor

        #region Properties
        public int Id { get; set; }
        public string Title { get; set; }
        public string Description { get; set; }
        public string Text { get; set; }
        public string Notes { get; set; }
        [DefaultValue(0)]
        public int Type { get; set; }
        [DefaultValue(0)]
        public int Flags { get; set; }
        public string UserId { get; set; }
        [JsonIgnore]
        public int ViewCount { get; set; }
        public DateTime CreatedDate { get; set; }
        public DateTime LastModifiedDate { get; set; }
        #endregion Properties
    }
}

As we can see, we're defining a rather complex class: this isn't something we could easily handle using a dynamic object created on the fly, hence why we're using a ViewModel instead. We will be installing Newtonsoft's Json.NET package using NuGet. We start using it in this class by including its namespace in line 6 and decorating our newly-created class with a JsonObject attribute in line 10. That attribute can be used to set a list of behaviours of the JsonSerializer / JsonDeserializer methods, overriding the default ones: notice that we used MemberSerialization.OptOut, meaning that any field will be serialized into JSON unless decorated by an explicit JsonIgnore attribute or NonSerialized attribute. We are making this choice because we're going to need most of our ViewModel properties serialized, as we'll see soon enough.

The ItemController

Now that we have our ItemViewModel class, let's use it to return some server-side data.
From your project's root node, open the /Controllers/ folder: right-click on it, select Add>New Item, then create a Web API Controller class, name it ItemController.cs and click the Add button to create it. The controller will be created with a bunch of sample methods: they are identical to those present in the default ValueController.cs, hence we don't need to keep them. Delete the entire file content and replace it with the following code: using System; using System.Collections.Generic; using System.Linq; using System.Threading.Tasks; using Microsoft.AspNetCore.Mvc; usingOpenGameListWebApp.ViewModels; namespaceOpenGameListWebApp.Controllers { [Route("api/[controller]")] publicclassItemsController : Controller { // GET api/items/GetLatest/5 [HttpGet("GetLatest/{num}")] publicJsonResult GetLatest(int num) { var arr = newList<ItemViewModel>(); for (int i = 1; i <= num; i++) arr.Add(newItemViewModel() { Id = i, Title = String.Format("Item {0} Title", i), Description = String.Format("Item {0} Description", i) }); var settings = newJsonSerializerSettings() { Formatting = Formatting.Indented }; returnnewJsonResult(arr, settings); } } } This controller will be in charge of all Item-related operations within our app. As we can see, we started defining a GetLatestmethod accepting a single Integerparameter value.The method accepts any GET request using the custom routing rules configured via the HttpGetAttribute: this approach is called Attribute Routing and we'll be digging more into it later in this article. For now, let's stick to the code inside the method itself. The behaviour is really simple: since we don't (yet) have a Data Source, we're basically mocking a bunch of ItemViewModel objects: notice that, although it's just a fake response, we're doing it in a structured and credible way, respecting the number of items issued by the request and also providing different content for each one of them. 
It's also worth noticing that we're using a JsonResult return type, which is the best thing we can do as long as we're working with ViewModel classes featuring the JsonObject attribute provided by the Json.NET framework: that's definitely better than returning plain string or IEnumerable<string> types, as it will automatically take care of serializing the outcome and setting the appropriate response headers.Let's try our Controller by running our app in Debug Mode: select Debug>Start Debugging from main menu or press F5. The default browser should open, pointing to the index.html page because we did set it as the Launch URL in our project's debug properties. In order to test our brand new API Controller, we need to manually change the URL with the following: /api/items/GetLatest/5 If we did everything correctly, it will show something like the following: Our first controller is up and running. As you can see, the ViewCount property is not present in the Json-serialized output: that's by design, since it has been flagged with the JsonIgnore attribute, meaning that we're explicitly opting it out. Now that we've seen that it works, we can come back to the routing aspect of what we just did: since it is a major topic, it's well worth some of our time. Understanding routes We will acknowledge the fact that the ASP.NET Core pipeline has been completely rewritten in order to merge the MVC and WebAPI modules into a single, lightweight framework to handle both worlds. Although this certainly is a good thing, it comes with the usual downside that we need to learn a lot of new stuff. Handling Routes is a perfect example of this, as the new approach defines some major breaking changes from the past. Defining routing The first thing we should do is giving out a proper definition of what Routing actually is. 
To cut it simple, we could say that URL routing is the server-side feature that allows a web developer to handle HTTP requests pointing to URIs that don't map to physical files. Such a technique can be used for a number of different reasons, including:

Giving dynamic pages semantic, meaningful and human-readable names in order to improve readability and/or search-engine optimization (SEO).
Renaming or moving one or more physical files within your project's folder tree without being forced to change their URLs.
Setting up aliases and redirects.

Routing through the ages

In earlier times, when ASP.NET was just Web Forms, URL routing was strictly bound to physical files: in order to implement viable URL convention patterns, developers were forced to install/configure a dedicated URL rewriting tool, using either an external ISAPI filter such as Helicon Tech's ISAPI_Rewrite or, starting with IIS7, the IIS URL Rewrite Module.

When ASP.NET MVC was released, the routing pattern was completely rewritten: developers could set up their own convention-based routes in a dedicated file (RouteConfig.cs or Global.asax, depending on the template) using the Routes.MapRoute method. If you've played along with MVC 1 through 5 or WebAPI 1 and/or 2, snippets like this should be quite familiar to you:

routes.MapRoute(
    name: "Default",
    url: "{controller}/{action}/{id}",
    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
);

This method of defining routes, strictly based upon pattern-matching techniques used to relate any given URL request to a specific Controller Action, went by the name of Convention-based Routing. ASP.NET MVC5 brought something new, as it was the first version supporting the so-called Attribute-based Routing. This approach was designed as an effort to give developers a more versatile approach.
If you used it at least once, you'll probably agree that it was a great addition to the framework, as it allowed developers to define routes within the Controller file. Even those who chose to keep the convention-based approach could find it useful for one-time overrides like the following, without having to sort it out using regular expressions:

[RoutePrefix("v2Products")]
public class ProductsController : Controller
{
    [Route("v2Index")]
    public ActionResult Index()
    {
        return View();
    }
}

In ASP.NET MVC6, the routing pipeline has been rewritten completely: that's why things like the Routes.MapRoute() method are not used anymore, as well as any explicit default routing configuration. You won't find anything like that in the new Startup.cs file, which contains a very small amount of code and (apparently) nothing about routes.

Handling routes in ASP.NET MVC6

We could say that the reason behind the Routes.MapRoute method's disappearance from the application's main configuration file is the fact that there's no need to set up default routes anymore. Routing is handled by the two brand-new services.AddMvc() and app.UseMvc() methods called within the Startup.cs file, which respectively register MVC using the Dependency Injection framework built into ASP.NET Core and add a set of default routes to our app. We can take a look at what happens under the hood by looking at the current implementation of the UseMvc() method in the framework code (relevant lines are highlighted):

public static IApplicationBuilder UseMvc(
    [NotNull] this IApplicationBuilder app,
    [NotNull] Action<IRouteBuilder> configureRoutes)
{
    // Verify if AddMvc was done before calling UseMvc
    // We use the MvcMarkerService to make sure if all the services were added.
    MvcServicesHelper.ThrowIfMvcNotRegistered(app.ApplicationServices);
    var routes = new RouteBuilder
    {
        DefaultHandler = new MvcRouteHandler(),
        ServiceProvider = app.ApplicationServices
    };
    configureRoutes(routes);
    // Adding the attribute route comes after running the user-code because
    // we want to respect any changes to the DefaultHandler.
    routes.Routes.Insert(0, AttributeRouting.CreateAttributeMegaRoute(
        routes.DefaultHandler,
        app.ApplicationServices));
    return app.UseRouter(routes.Build());
}

The good thing about this is that the framework now handles all the hard work, iterating through all the Controllers' actions and setting up their default routes, thus saving us some work. It's worth noticing that the default ruleset follows the standard RESTful conventions, meaning that it will be restricted to the following action names: Get, Post, Put, Delete. We could say here that ASP.NET MVC6 is enforcing a strict WebAPI-oriented approach - which is much to be expected, since it incorporates the whole ASP.NET Core framework. Following the RESTful convention is generally a great thing to do, especially if we aim to create a set of pragmatic, RESTful-based public APIs to be used by other developers. Conversely, if we're developing our own app and we want to keep our API accessible to our eyes only, going for custom routing standards is just as viable: as a matter of fact, it could even be a better choice to shield our Controllers against some of the most trivial forms of request flood and/or DDoS-based attacks. Luckily enough, both Convention-based Routing and Attribute-based Routing are still alive and well, allowing you to set up your own standards.
Convention-based routing

If we feel like using the most classic routing approach, we can easily resurrect our beloved MapRoute() method by enhancing the app.UseMvc() call within the Startup.cs file in the following way:

```csharp
app.UseMvc(routes =>
{
    // Route Sample A
    routes.MapRoute(
        name: "RouteSampleA",
        template: "MyOwnGet",
        defaults: new { controller = "Items", action = "Get" }
    );
    // Route Sample B
    routes.MapRoute(
        name: "RouteSampleB",
        template: "MyOwnPost",
        defaults: new { controller = "Items", action = "Post" }
    );
});
```

Attribute-based routing

Our previously-shown ItemsController.cs makes good use of the Attribute-based Routing approach, featuring it both at Controller level:

```csharp
[Route("api/[controller]")]
public class ItemsController : Controller
```

and at Action Method level:

```csharp
[HttpGet("GetLatest")]
public JsonResult GetLatest()
```

Three choices to route them all

Long story short, ASP.NET MVC6 gives us three different choices for handling routes: enforcing the standard RESTful conventions, reverting back to the good old Convention-based Routing, or decorating the Controller files with Attribute-based Routing. It's also worth noticing that Attribute-based Routes, if and when defined, will override any matching Convention-based pattern: both of them, if/when defined, will override the default RESTful conventions created by the built-in UseMvc() method. In this article we're going to use all of these approaches, in order to learn when, where and how to properly make use of each of them.

Adding more routes

Let's get back to our ItemsController. Now that we're aware of the routing patterns we can use, we can put that knowledge to work and implement the API calls we're still missing.
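The precedence rules described above can be modeled in a few lines of plain JavaScript. This is an illustrative sketch only, not how ASP.NET implements routing: it simply shows a route table where attribute-based routes are inserted ahead of convention-based ones, mirroring the routes.Routes.Insert(0, ...) call we saw in the UseMvc() implementation, so that the first match wins.

```javascript
// Minimal route-matching sketch: attribute routes are checked first,
// mirroring how the attribute "mega route" is inserted at index 0,
// ahead of any convention-based routes.
class RouteTable {
  constructor() {
    this.routes = []; // ordered list: first match wins
  }
  addConventionRoute(template, handler) {
    this.routes.push({ template, handler }); // appended at the end
  }
  addAttributeRoute(template, handler) {
    this.routes.unshift({ template, handler }); // inserted at index 0
  }
  resolve(path) {
    const match = this.routes.find(r => r.template === path);
    return match ? match.handler : null;
  }
}

const table = new RouteTable();
table.addConventionRoute('MyOwnGet', 'Items.Get (convention)');
table.addAttributeRoute('MyOwnGet', 'Items.Get (attribute)');

// The attribute route shadows the matching convention-based pattern:
console.log(table.resolve('MyOwnGet')); // "Items.Get (attribute)"
```

Unmatched paths resolve to null, and any attribute route registered for the same template always shadows the convention-based one, which is exactly the override behavior noted above.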
Open the ItemsController.cs file and add the following code (new lines are highlighted):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using OpenGameListWebApp.ViewModels;
using Newtonsoft.Json;

namespace OpenGameListWebApp.Controllers
{
    [Route("api/[controller]")]
    public class ItemsController : Controller
    {
        #region Attribute-based Routing
        /// <summary>
        /// GET: api/items/GetLatest/{n}
        /// ROUTING TYPE: attribute-based
        /// </summary>
        /// <returns>An array of {n} Json-serialized objects representing the last inserted items.</returns>
        [HttpGet("GetLatest/{n}")]
        public IActionResult GetLatest(int n)
        {
            var items = GetSampleItems().OrderByDescending(i => i.CreatedDate).Take(n);
            return new JsonResult(items, DefaultJsonSettings);
        }

        /// <summary>
        /// GET: api/items/GetMostViewed/{n}
        /// ROUTING TYPE: attribute-based
        /// </summary>
        /// <returns>An array of {n} Json-serialized objects representing the items with most user views.</returns>
        [HttpGet("GetMostViewed/{n}")]
        public IActionResult GetMostViewed(int n)
        {
            if (n > MaxNumberOfItems) n = MaxNumberOfItems;
            var items = GetSampleItems().OrderByDescending(i => i.ViewCount).Take(n);
            return new JsonResult(items, DefaultJsonSettings);
        }

        /// <summary>
        /// GET: api/items/GetRandom/{n}
        /// ROUTING TYPE: attribute-based
        /// </summary>
        /// <returns>An array of {n} Json-serialized objects representing some randomly-picked items.</returns>
        [HttpGet("GetRandom/{n}")]
        public IActionResult GetRandom(int n)
        {
            if (n > MaxNumberOfItems) n = MaxNumberOfItems;
            var items = GetSampleItems().OrderBy(i => Guid.NewGuid()).Take(n);
            return new JsonResult(items, DefaultJsonSettings);
        }
        #endregion

        #region Private Members
        /// <summary>
        /// Generate a sample array of source Items to emulate a database (for testing purposes only).
        /// </summary>
        /// <param name="num">The number of items to generate: default is 999</param>
        /// <returns>a defined number of mock items (for testing purpose only)</returns>
        private List<ItemViewModel> GetSampleItems(int num = 999)
        {
            List<ItemViewModel> lst = new List<ItemViewModel>();
            DateTime date = new DateTime(2015, 12, 31).AddDays(-num);
            for (int id = 1; id <= num; id++)
            {
                lst.Add(new ItemViewModel()
                {
                    Id = id,
                    Title = String.Format("Item {0} Title", id),
                    Description = String.Format("This is a sample description for item {0}: Lorem ipsum dolor sit amet.", id),
                    CreatedDate = date.AddDays(id),
                    LastModifiedDate = date.AddDays(id),
                    ViewCount = num - id
                });
            }
            return lst;
        }

        /// <summary>
        /// Returns a suitable JsonSerializerSettings object that can be used to generate the JsonResult return value for this Controller's methods.
        /// </summary>
        private JsonSerializerSettings DefaultJsonSettings
        {
            get
            {
                return new JsonSerializerSettings()
                {
                    Formatting = Formatting.Indented
                };
            }
        }
        #endregion
    }
}
```

We added a lot of things there, that's for sure. Let's see what's new:

- We added the GetMostViewed(n) and GetRandom(n) methods, built upon the same mocking logic used for GetLatest(n): each one requires a single parameter of Integer type to specify the (maximum) number of items to retrieve.
- We added two new private members:
  - The GetSampleItems() method, to generate some sample Item objects when we need them. This method is an improved version of the dummy item generator loop we had inside the previous GetLatest() method implementation, as it acts more like a Dummy Data Provider: we'll say more about it later on.
  - The DefaultJsonSettings property, so we won't have to manually instantiate a JsonSerializerSettings object every time.
- We also decorated each class member with a dedicated <summary> documentation tag explaining what it does and its return value. These tags will be used by IntelliSense to show real-time information about the type within the Visual Studio GUI.
They will also come in handy when we want to generate auto-generated XML documentation for our project by using industry-standard documentation tools such as Sandcastle. Finally, we added some #region / #endregion pre-processor directives to separate our code into blocks. We'll do this a lot from now on, as it greatly increases our source code's readability and usability, allowing us to expand or collapse different sections/parts when we don't need them, thus focusing more on what we're working on.

For more info regarding documentation tags, take a look at the following MSDN official documentation page: https://msdn.microsoft.com/library/2d6dt3kf.aspx
If you want to know more about C# pre-processor directives, this is the one to check out instead: https://msdn.microsoft.com/library/9a1ybwek.aspx

The dummy data provider

Our new GetSampleItems() method deserves a couple more words. As we can easily see, it emulates the role of a Data Provider, returning a list of items in a credible fashion. Notice that we built it in such a way that it will always return identical items, as long as the num parameter value remains the same:

- The generated items' Id values will follow a linear sequence, from 1 to num.
- Any generated item will have incremental CreatedDate and LastModifiedDate values based upon its Id: the higher the Id, the more recent the two dates will be, up to 31 December 2015. This follows the assumption that the most recent items will have the highest Ids, as is normally the case for DBMS records featuring numeric, auto-incremental keys.
- Any generated item will have a decreasing ViewCount value based upon its Id: the higher the Id, the lower it will be. This follows the assumption that newer items will generally get fewer views than older ones.

While it obviously lacks any insert/update/delete features, this Dummy Data Provider is viable enough to serve our purposes until we replace it with an actual, persistence-based Data Source.
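To make the determinism of that generation logic easier to see outside of the C# project, here is a hedged JavaScript re-creation of the same rules (same num always yields identical items; ids run 1..num; dates anchor at 31 December 2015; view counts decrease as ids increase). The function name and shape are illustrative, not part of the book's code:

```javascript
// JavaScript sketch of the deterministic dummy data provider described above.
function getSampleItems(num = 999) {
  const dayMs = 86400000;
  const anchor = Date.UTC(2015, 11, 31);     // 31 December 2015 (month is 0-based)
  const base = anchor - num * dayMs;         // anchor minus num days
  const items = [];
  for (let id = 1; id <= num; id++) {
    const created = new Date(base + id * dayMs); // higher id -> more recent date
    items.push({
      id,
      title: `Item ${id} Title`,
      createdDate: created,
      lastModifiedDate: created,
      viewCount: num - id,                   // older items (lower id) get more views
    });
  }
  return items;
}

const sample = getSampleItems(10);
console.log(sample[9].createdDate.toISOString().slice(0, 10)); // "2015-12-31"
console.log(sample[0].viewCount); // 9
```

Because nothing in the function depends on the clock or on randomness, calling it twice with the same num produces byte-identical results, which is exactly what makes it usable as a stand-in for a real data source during testing.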
Technically speaking, we could do better than we did by using one of the many Mocking Frameworks available through NuGet: Moq, NMock3, NSubstitute or Rhino, just to name a few.

Summary

We spent some time putting the standard application data flow under our lens: a two-way communication pattern between the server and its clients, built upon the HTTP protocol. We acknowledged the fact that we'll mostly be dealing with Json-serializable objects such as Items, so we chose to equip ourselves with an ItemViewModel server-side class, together with an ItemsController that will actively use it to expose the data to the client. We started building our MVC6-based WebAPI interface by implementing a number of methods required to create the client-side UI we chose for our Welcome View, consisting of three item listings to show to our users: the last inserted items, the most viewed ones and some random picks. We routed the requests to them using a custom set of Attribute-based routing rules, which seemed to be the best choice for our specific scenario. While we were there, we also took the chance to add a dedicated method to retrieve a single Item from its unique Id, assuming we're going to need it for sure.

Resources for Article:
Further resources on this subject:
Designing your very own ASP.NET MVC Application [article]
ASP.Net Site Performance: Improving JavaScript Loading [article]
Displaying MySQL data on an ASP.NET Web Page [article]

AJAX Form Validation: Part 1
Packt
18 Feb 2010
4 min read
The server is the last line of defense against invalid data, so even if you implement client-side validation, server-side validation is mandatory. The JavaScript code that runs on the client can be disabled permanently from the browser's settings and/or it can be easily modified or bypassed.

Implementing AJAX form validation

The form validation application we will build in this article validates the form at the server side on the classic form submit, implementing AJAX validation while the user navigates through the form. The final validation is performed at the server, as shown in Figure 5-1.

Doing a final server-side validation when the form is submitted should never be considered optional. If someone disables JavaScript in the browser settings, AJAX validation on the client side clearly won't work, exposing sensitive data, and thereby allowing an evil-intentioned visitor to harm important data on the server (for example, through SQL injection). Always validate user input on the server.

As shown in the preceding figure, the application you are about to build validates a registration form using both AJAX validation (client side) and typical server-side validation:

- AJAX-style (client side): It happens when each form field loses focus (onblur). The field's value is immediately sent to and evaluated by the server, which then returns a result (0 for failure, 1 for success). If validation fails, an error message will appear and notify the user about the failed validation, as shown in Figure 5-3.
- PHP-style (server side): This is the usual validation you would do on the server: checking user input against certain rules after the entire form is submitted. If no errors are found and the input data is valid, the browser is redirected to a success page, as shown in Figure 5-4. If validation fails, however, the user is sent back to the form page with the invalid fields highlighted, as shown in Figure 5-3.
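The client-side half of that flow can be sketched in a few lines of plain JavaScript. This is a hedged illustration, not the book's actual implementation (which is built later in the article): the helper names are invented, the server's reply is assumed to follow the 0/1 convention described above, and the transport is injected so the logic can run without a real server.

```javascript
// On blur, send the field's value to the server and interpret its
// "0" (failure) / "1" (success) reply. All names are illustrative.
function makeFieldValidator(sendRequest, showError, clearError) {
  return async function onBlur(fieldName, value) {
    const result = await sendRequest(fieldName, value); // "0" or "1" from server
    if (result === '1') {
      clearError(fieldName);
      return true;
    }
    showError(fieldName, 'Please check the ' + fieldName + ' field.');
    return false;
  };
}

// Usage with a fake transport that checks the US phone rule (xxx-xxx-xxxx):
const errors = {};
const validatePhone = makeFieldValidator(
  async (field, value) => (/^\d{3}-\d{3}-\d{4}$/.test(value) ? '1' : '0'),
  (field, msg) => { errors[field] = msg; },
  (field) => { delete errors[field]; }
);

validatePhone('phone', '555-123-4567').then(ok => console.log(ok)); // true
```

In the real application the injected transport would be an XMLHttpRequest call to the server-side validation script, but keeping it as a parameter makes the per-field logic trivial to unit-test.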
Both AJAX validation and PHP validation check the entered data against our application's rules:

- Username must not already exist in the database
- Name field cannot be empty
- A gender must be selected
- Month of birth must be selected
- Birthday must be a valid date (between 1-31)
- Year of birth must be a valid year (between 1900-2000)
- The date must exist in the number of days for each month (that is, there's no February 31)
- E-mail address must be written in a valid e-mail format
- Phone number must be written in standard US form: xxx-xxx-xxxx
- The I've read the Terms of Use checkbox must be selected

Watch the application in action in the following screenshots:

XMLHttpRequest, version 2

We do our best to combine theory and practice, so before moving on to implementing the AJAX form validation script, we'll have another quick look at our favorite AJAX object: XMLHttpRequest. On this occasion, we will step up the complexity (and functionality) a bit and use everything we have learned so far. We will continue to build on what has come before as we move on; so again, it's important that you take the time to make sure you've understood what we are doing here. Time spent digging into the materials really pays off when you begin to build your own applications in the real world. Our OOP JavaScript skills will be put to work improving the existing script we used to make AJAX requests. In addition to the design that we've already discussed, we're creating the following features as well:

- A flexible design so that the object can be easily extended for future needs and purposes
- The ability to set all the required properties via a JSON object

We'll package this improved XMLHttpRequest functionality in a class named XmlHttp that we'll be able to use in other exercises as well.
You can see the class diagram in the following screenshot, along with the diagrams of two helper classes:

- settings is the class we use to create the call settings; we supply an instance of this class as a parameter to the constructor of XmlHttp
- complete is a callback delegate, pointing to the function we want executed when the call completes

The final purpose of this exercise is to create a class named XmlHttp that we can easily use in other projects to perform AJAX calls. With our goals in mind, let's get to it!

Time for action – the XmlHttp object

In the ajax folder, create a folder named validate, which will host the exercises in this article.
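Before the step-by-step build, the shape of that design can be sketched as follows. This is an assumption-laden illustration, not the book's code: all call options arrive as one settings object (merged over defaults, JSON-object style), complete is the callback fired when the call finishes, and the transport is injectable so the class can be exercised outside a browser (the real version would wrap XMLHttpRequest).

```javascript
// Sketch of the XmlHttp design: a settings object configures the call,
// and "complete" is invoked with the response when the call finishes.
class XmlHttp {
  constructor(settings) {
    // Merge caller-supplied settings over defaults.
    this.settings = Object.assign(
      { url: '', method: 'GET', async: true, complete: null },
      settings
    );
  }
  send(transport) {
    const { url, method, complete } = this.settings;
    // transport(method, url) stands in for open()/send() on XMLHttpRequest.
    const response = transport(method, url);
    if (typeof complete === 'function') complete(response);
    return response;
  }
}

const calls = [];
const xhr = new XmlHttp({
  url: '/validate.php',
  complete: (res) => calls.push(res),
});
xhr.send((method, url) => `${method} ${url} -> ok`);
console.log(calls[0]); // "GET /validate.php -> ok"
```

The one-settings-object constructor is the design point worth copying: adding a new option later (a timeout, a content type) never changes the constructor's signature, which is what makes the class "easily extended for future needs".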

Why CoffeeScript?
Packt
31 Jan 2013
9 min read
(For more resources related to this topic, see here.)

CoffeeScript

CoffeeScript compiles to JavaScript and follows its idioms closely. It's quite possible to rewrite any CoffeeScript code in JavaScript and it won't look drastically different. So why would you want to use CoffeeScript? As an experienced JavaScript programmer, you might think that learning a completely new language is simply not worth the time and effort. But ultimately, code is for programmers. The compiler doesn't care how the code looks or how clear its meaning is; either it will run or it won't. We aim to write expressive code as programmers so that we can read, reference, understand, modify, and rewrite it. If the code is too complex or filled with needless ceremony, it will be harder to understand and maintain. CoffeeScript gives us an advantage to clarify our ideas and write more readable code.

It's a misconception to think that CoffeeScript is very different from JavaScript. There might be some drastic syntax differences here and there, but in essence, CoffeeScript was designed to polish the rough edges of JavaScript to reveal the beautiful language hidden beneath. It steers programmers towards JavaScript's so-called "good parts" and holds strong opinions of what constitutes good JavaScript. One of the mantras of the CoffeeScript community is: "It's just JavaScript", and I have also found that the best way to truly comprehend the language is to look at how it generates its output, which is actually quite readable and understandable code. Throughout this article, we'll highlight some of the differences between the two languages, often focusing on the things in JavaScript that CoffeeScript tries to improve. In this way, I would not only like to give you an overview of the major features of the language, but also prepare you to be able to debug your CoffeeScript from its generated code once you start using it more often, as well as to convert existing JavaScript.
Let's start with some of the things CoffeeScript fixes in JavaScript.

CoffeeScript syntax

One of the great things about CoffeeScript is that you tend to write much shorter and more succinct programs than you normally would in JavaScript. Some of this is because of the powerful features added to the language, but it also makes a few tweaks to the general syntax of JavaScript to transform it into something quite elegant. It does away with all the semicolons, braces, and other cruft that usually contributes to a lot of the "line noise" in JavaScript. To illustrate this, let's look at an example: first the CoffeeScript, then the JavaScript it generates:

```coffeescript
fibonacci = (n) ->
  return 0 if n == 0
  return 1 if n == 1
  (fibonacci n-1) + (fibonacci n-2)

alert fibonacci 10
```

```javascript
var fibonacci;

fibonacci = function(n) {
  if (n === 0) {
    return 0;
  }
  if (n === 1) {
    return 1;
  }
  return (fibonacci(n - 1)) + (fibonacci(n - 2));
};

alert(fibonacci(10));
```

To run the code examples in this article, you can use the great Try CoffeeScript online tool, at http://coffeescript.org. It allows you to type in CoffeeScript code, which will then display the equivalent JavaScript in a side pane. You can also run the code right from the browser (by clicking the Run button in the upper-left corner). At first, the two languages might appear to be quite drastically different, but hopefully as we go through the differences, you'll see that it's all still JavaScript with some small tweaks and a lot of nice syntactical sugar.

Semicolons and braces

As you might have noticed, CoffeeScript does away with all the trailing semicolons at the end of a line. You can still use a semicolon if you want to put two expressions on a single line. It also does away with enclosing braces (also known as curly brackets) for code blocks such as if statements, switch, and the try..catch block.
Whitespace

You might be wondering how the parser figures out where your code blocks start and end. The CoffeeScript compiler does this by using syntactical whitespace. This means that indentation is used to delimit code blocks instead of braces. This is perhaps one of the most controversial features of the language. If you think about it, in almost all languages, programmers already tend to use indentation of code blocks to improve readability, so why not make it part of the syntax? This is not a new concept, and was mostly borrowed from Python. If you have any experience with a significant-whitespace language, you will not have any trouble with CoffeeScript indentation. If you don't, it might take some getting used to, but it makes for code that is wonderfully readable and easy to scan, while shaving off quite a few keystrokes. I'm willing to bet that if you do take the time to get over any initial reservations you might have, you might just grow to love block indentation. Blocks can be indented with tabs or spaces, but be careful to be consistent in using one or the other, or CoffeeScript will not be able to parse your code correctly.

Parentheses

You'll see that the clause of the if statement does not need to be enclosed within parentheses. The same goes for the alert function; you'll see that the single string parameter follows the function call without parentheses as well. In CoffeeScript, parentheses are optional in function calls with parameters, clauses for if..else statements, as well as while loops. Although functions with arguments do not need parentheses, it is still a good idea to use them in cases where ambiguity might exist. The CoffeeScript community has come up with a nice idiom: wrapping the whole function call in parentheses.
The use of the alert function in CoffeeScript is shown in the following examples, each followed by its generated JavaScript:

```coffeescript
alert square 2 * 2.5 + 1
```

```javascript
alert(square(2 * 2.5 + 1));
```

```coffeescript
alert (square 2 * 2.5) + 1
```

```javascript
alert((square(2 * 2.5)) + 1);
```

Functions are first class objects in JavaScript. This means that when you refer to a function without parentheses, it will return the function itself, as a value. Thus, in CoffeeScript you still need to add parentheses when calling a function with no arguments. By making these few tweaks to the syntax of JavaScript, CoffeeScript arguably already improves the readability and succinctness of your code by a big factor, and also saves you quite a lot of keystrokes. But it has a few other tricks up its sleeve. Most programmers who have written a fair amount of JavaScript would probably agree that one of the phrases that gets typed most frequently would have to be the function definition function(){}. Functions are really at the heart of JavaScript, yet not without their many warts.

CoffeeScript has great function syntax

The fact that you can treat functions as first class objects, as well as being able to create anonymous functions, is one of JavaScript's most powerful features. However, the syntax can be very awkward and make the code hard to read (especially if you start nesting functions). But CoffeeScript has a fix for this. Have a look at the following snippets:

```coffeescript
-> alert 'hi there!'
square = (n) -> n * n
```

```javascript
var square;

(function() {
  return alert('hi there!');
});

square = function(n) {
  return n * n;
};
```

Here, we are creating two anonymous functions: the first just displays a dialog and the second will return the square of its argument. You've probably noticed the funny -> symbol and might have figured out what it does. Yep, that is how you define a function in CoffeeScript. I have come across a couple of different names for the symbol, but the most accepted term seems to be a thin arrow or just an arrow.
Notice that the first function definition has no arguments and thus we can drop the parentheses. The second function does have a single argument, which is enclosed in parentheses placed in front of the -> symbol. With what we now know, we can formulate a few simple substitution rules to convert JavaScript function declarations to CoffeeScript. They are as follows:

- Replace the function keyword with ->
- If the function has no arguments, drop the parentheses
- If it has arguments, move the whole argument list with parentheses in front of the -> symbol
- Make sure that the function body is properly indented and then drop the enclosing braces

Return isn't required

You might have noted that in both the functions, we left out the return keyword. By default, CoffeeScript will return the last expression in your function. It will try to do this in all the paths of execution. CoffeeScript will try turning any statement (a fragment of code that returns nothing) into an expression that returns a value. CoffeeScript programmers will often refer to this feature of the language by saying that everything is an expression. This means you don't need to type return anymore, but keep in mind that this can, in many cases, alter your code subtly, because of the fact that you will always return something. If you need to return a value from a function before the last statement, you can still use return.

Function arguments

Function arguments can also take an optional default value. In the following code snippet you'll see that the optional value specified is assigned in the body of the generated JavaScript:

```coffeescript
square = (n=1) -> alert(n * n)
```

```javascript
var square;

square = function(n) {
  if (n == null) {
    n = 1;
  }
  return alert(n * n);
};
```

In JavaScript, each function has an array-like structure called arguments, with an indexed property for each argument that was passed to the function. You can use arguments to pass in a variable number of parameters to a function.
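Both behaviors just described can be tried out directly in plain JavaScript. The following standalone snippet (not from the book) mirrors the compiled default-value check, including its n == null test that catches both undefined and null, and uses the arguments object for a variadic call:

```javascript
// Default-value emulation, as in the generated CoffeeScript output:
function square(n) {
  if (n == null) { // loose equality catches both undefined and null
    n = 1;
  }
  return n * n;
}

// A variadic function using the array-like arguments object:
function sum() {
  var total = 0;
  for (var i = 0; i < arguments.length; i++) {
    total += arguments[i];
  }
  return total;
}

console.log(square());        // 1
console.log(square(3));       // 9
console.log(sum(1, 2, 3, 4)); // 10
```

Note the subtlety the loose == buys here: calling square(null) also falls back to the default of 1, exactly as the compiled CoffeeScript would.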
Each parameter will be an element in arguments and thus you don't have to refer to parameters by name. Although the arguments object acts somewhat like an array, it is not in fact a "real" array and lacks most of the standard array methods. Often, you'll find that arguments doesn't provide the functionality needed to inspect and manipulate its elements the way you would with an array.

Summary

We saw how CoffeeScript can help you write shorter, cleaner, and more elegant code than you normally would in JavaScript, and avoid many of its pitfalls. We came to realize that even though CoffeeScript's syntax seems to be quite different from JavaScript, it actually maps pretty closely to its generated output.

Resources for Article:
Further resources on this subject:
ASP.Net Site Performance: Improving JavaScript Loading [Article]
Build iPhone, Android and iPad Applications using jQTouch [Article]
An Overview of the Node Package Manager [Article]

Understand and Use Microsoft Silverlight with JavaScript
Packt
24 Oct 2009
10 min read
We have come a long way since the Web was created in the early 1990s by Tim Berners-Lee at CERN in Switzerland. From hyperlinking, which was the only thing at that time, to streaming videos, instant gratification with AJAX, and a host of other gadgets that have breathed new life into JavaScript and internet usability. Silverlight, among several others, is a push in this direction to satisfy the ever-increasing needs of internet users. Even so, web application displays fall short of the rich experience one can achieve with desktop applications, and this is where tools are being created and honed for creating RIAs, short for Rich Internet Applications. In order to create such applications, a great deal of development has taken place in the Microsoft ecosystem. These are all described in .NET and the Windows Presentation Foundation, which support developers in creating easily deployable Rich Internet Applications. We have to wait and see how this percolates to the Semantic Web in the future.

Silverlight is a cross-platform, cross-browser plug-in that renders XAML, the declarative tag-based files, while exposing a JavaScript programming interface. It allows both developers and designers to collaborate and contribute to rich and interactive designs that are well integrated with Microsoft's Expression series of programs.

Initial Steps to Take

In this article we will be using Silverlight 1.0 with JavaScript. Initially you need to make your browser understand XAML, and for this you need to install Silverlight, available here. There is no need for a server to work with these Silverlight application files, as they will be either HTML pages, XAML pages, or JavaScript pages. Of course these files may be hosted on a server as well. The next figure shows some details you need to know before installing the plug-in.
Silverlight Project Details

After having enabled the browser to recognize XAML - the Extensible Application Markup Language - you need to consider the different components that will make Silverlight happen. In the present tutorial we will look at using Silverlight 1.0. Silverlight 2.0 is still in Beta. If you have Silverlight already installed you may be able to verify the version in Control Panel / Add Remove Programs and display information as shown in the next figure.

To make Silverlight happen you need the following files:

- An HTML page that you can browse to, where the Silverlight plug-in is spawned
- A XAML page, which is what all the talk is about and which provides the 'Richness'
- Supporting script files that will create the plug-in and embed it in the HTML page

The next figure shows how these interact with one another somewhat schematically. Basically you can start with your HTML page. You need to reference two .JS files as shown in the above figure. The script file Silverlight.js exposes the properties, methods, etc. of Silverlight. This file will be available in the SDK download. You can copy this file and move it around to any location. The second script, createSilverlight.js, creates a plug-in which you will embed in the HTML page using yet another short script. You will see how this is created later in the tutorial. The created plug-in then brings in the XAML page, which you will create as well. The first step is to create a blank HTML page, herein called TestSilverLight.htm, as shown in the following listing:

Listing 1: TestSilverLight.htm scaffold file

```html
<html>
<head>
  <script type="text/javascript" src="Silverlight.js"></script>
  <script type="text/javascript" src="createSilverlight.js"></script>
  <title> </title>
</head>
<body>
```

Next, you go ahead and create the createSilverlight.js file. The following listing shows how this is coded. This is slightly modified, although taken from a web resource.
Listing 2: createSilverlight.js

```javascript
function createSilverlight()
{
  Silverlight.createObject(
    "TestSilver.xaml",            // Source property value.
    parentElement,                // DOM reference to hosting DIV tag.
    "SilverlightPlugInHost1",     // Unique plug-in ID value.
    {                             // Plug-in properties.
      width:'1024',               // Width of rectangular region of plug-in, in pixels.
      height:'530',               // Height of rectangular region of plug-in, in pixels.
      inplaceInstallPrompt:false, // Install prompt if invalid version is detected.
      background:'white',         // Background color of plug-in.
      isWindowless:'false',       // Determines whether to display in windowless mode.
      framerate:'24',             // MaxFrameRate property value.
      version:'1.0'               // Silverlight version.
    },
    {
      onError:null,               // OnError property value.
      onLoad:null                 // OnLoad property value.
    },
    null,                         // initParams.
    null);                        // Context value.
}
```

This function, createSilverlight(), when called from within a place holder location, will create a Silverlight object at that location with some defined properties. You may look up the various customizable items in this code on the web. The object that is going to be created will be the TestSilver.xaml at the "id" of the location, which will be found using the ECMAScript we will see later. The "id" is also named here, found by the "parentElement". To proceed further we need to create (a) the TestSilver.xaml file and (b) a place holder in the HTML page. First, the changes made to Listing 1 are shown in bold. This is the place holder <div> </div> tags inside the 'body' tags, as shown in the next listing, with the "id" used in the createSilverlight.js file. You may also use <span> </span> tags, provided you associate an "id" with them.
Listing 3: Place holder created in the HTML page

```html
<html>
<head>
  <script type="text/javascript" src="Silverlight.js"></script>
  <script type="text/javascript" src="createSilverlight.js"></script>
  <title> </title>
</head>
<body>
  <div id="SilverlightPlugInHost1"> </div>
</body>
</html>
```

Creating the XAML File

If you have never used XAML, nor created a XAML page, you should access the internet, where you will find tons of this stuff. A good location is MSDN's Silverlight home page. You may also want to read this article, which will give some idea about XAML. Although that article focuses on 'Windows' and not 'Web', the idea of what XAML is remains the same. The next listing describes the declarative syntax that will show a 'canvas', a defined space on your web page, in which an image has been brought in. The 'Canvas' is the container and the image is the contained object. A XAML file should be well formed, similar to an XML file.

Listing 4: A simple XAML file

```xml
<Canvas xmlns="http://schemas.microsoft.com/client/2007"
        Width="200" Height="200" Background="powderblue">
  <Image Canvas.Left="50" Canvas.Top="50" Width="200" Source="Fish.JPG"/>
</Canvas>
```

Save the above text file with the extension XAML. If your Silverlight 1.0 is working correctly you should see this displayed on the browser when you browse to it. You also note the [.] notation used to access the properties of the Canvas. For example, Canvas.Left is 50 pixels relative to the Canvas. The namespace is very important; more about it later. Without going into too much detail, the pale blue area is the canvas, whose width and height are 200 pixels each. The fish image is offset by the amounts shown relative to the canvas. The Canvas is the portion of the browser window which functions as a place holder. While you use "Canvas" on the web, you will have "Window" for desktop applications. The namespace of the canvas should be as shown, otherwise you may get errors of various types depending on the typos.
Inside the canvas you may place any type of object: buttons, textboxes, shapes, and even other canvases. If and when you design using the Visual Studio designer with IntelliSense guiding you along, you will see a bewildering array of controls, styles, etc. The details of the various XAML tags are outside the scope of this tutorial. Although Notepad is used in this tutorial, you really should use a designer, as you cannot possibly remember correctly the various Properties, Methods and Events supported. In some web references you may notice one more additional namespace. Remove this namespace reference, as "Canvas" does not exist in that namespace; if you use it, you will get a XamlParseException. Also, if you are of the cut-and-paste type, make sure you save the XAML file as of type "All files" with the XAML extension. With the above brief background, review the TestSilver.xaml file, whose listing is shown in the next paragraph.

Listing 5: TestSilver.xaml file referenced in the plug-in script

```xml
<Canvas xmlns="http://schemas.microsoft.com/client/2007"
        Width="200" Height="150" Background="powderblue">
  <Canvas Width="150" Height="250" Background="PaleGoldenRod">
    <Ellipse Width="100" Height="100" Stroke="Black" StrokeThickness="2" Fill="Green" />
  </Canvas>
  <Image Canvas.Left="50" Canvas.Top="50" Width="200" Source="http://localhost/IMG_0086.JPG"/>
</Canvas>
```

In the above code you see a second canvas embedded inside the first, with its own independent window. The order in which they appear will depend on where they are in the code, unless the default order is changed. You also see that the image is now referenced to a graphic file on the local server. Later on you will see the Silverlight.htm hosted on the server. If you are using more recent versions of ASP.NET on your site, or a different version of IE, you may get to see the complete file, and sometimes you may get to see only part of the XAML content and an additional error message such as this one.
For example, while the image in the project folder is displayed, the image on the local server may be skipped. If the settings and versions are optimal, you will see the page displayed in your browser when you browse to the above file.

Script in HTML to Embed the Silverlight Plug-in

This really is the last piece left to complete this project. The code in the next listing shows how this is done. The code segment shown in bold is the script added to the placeholder we created earlier.

Listing 6: Script added to bring in the plug-in

<html><head><script type="text/javascript" src="Silverlight.js"></script><script type="text/javascript" src="createSilverlight.js"></script><title> </title> </head> <body><div id="SilverlightPlugInHost1"> <script type="text/javascript"> var parentElement = document.getElementById("SilverlightPlugInHost1"); createSilverlight();</script></div></body> </html>

Hosted Files on the IIS

The various files used are then saved to a folder, which can be set up as the target of a virtual directory on your IIS, as shown. Now you can browse the Silverlight123.htm file to see the result displayed in IE.

Summary

This tutorial shows how to create a Silverlight project, describing the various files used and how they interact with each other. The importance of using the correct namespace, some tips on creating the XAML files, and hosting them on IIS are also described. Windows XP with SP2 was used, and the Silverlight.htm file was tested on IIS 5.1; IE version 7.0.5370IC; and a web site enabled for ASP.NET version 2.0.50727 with the registered MIME type application/xaml+xml.
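As the summary notes, IIS will not serve .xaml files correctly unless the application/xaml+xml MIME type is registered for the .xaml extension. The mapping itself is just extension-to-type. Purely as an illustration (this is not part of the IIS configuration), the same association can be expressed with Python's standard mimetypes module:

```python
import mimetypes

# Associate the .xaml extension with the Silverlight MIME type,
# mirroring the mapping that must be registered on the IIS server.
mimetypes.add_type("application/xaml+xml", ".xaml")

content_type, _ = mimetypes.guess_type("TestSilver.xaml")
print(content_type)  # application/xaml+xml
```

Without such a mapping, the server falls back to a default type (or refuses the request), which is why only part of the XAML content may be rendered.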
Angular 2.0

Packt
30 Apr 2015
12 min read
Angular 2.0 was officially announced at the ng-europe conference in October 2014. Angular 2.0 is not an incremental update to the previous version; it is a complete rewrite of the entire framework and will include major changes. In this article by Mohammad Wadood Majid, coauthor of the book Mastering AngularJS for .NET Developers, we will learn the following topics:

Why Angular 2.0
Design and features of Angular 2.0
AtScript
Routing solution
Dependency injection
Annotations
Instance scope
Child injector
Data binding and templating

(For more resources related to this topic, see here.)

Why Angular 2.0

AngularJS is one of the most popular open source frameworks available for client-side web application development. Over the last few years, AngularJS's adoption and community support have been remarkable. The current AngularJS version 1.3 is stable and used by many developers; there are over 1600 applications inside Google that use AngularJS 1.2 or 1.3. In the last few years, the Web has changed significantly. In the past, it was very difficult to build a cross-browser application; today's browsers are more consistent in their DOM implementations, and the Web will continue to change. Angular 2.0 will address the following concerns:

Mobile: Angular 2.0 will focus on mobile application development.
Modular: Different modules will be removed from the core AngularJS, which will result in better performance. Angular 2.0 will provide us the ability to pick only the module parts we need.
Modern: Angular 2.0 will include ECMAScript 6 (ES6). ECMAScript is a scripting language standard developed by Ecma International. It is widely used in client-side scripting, such as JavaScript, JScript, and ActionScript, on the Web.
Performance: AngularJS was developed around 5 years ago, and it was not originally designed for developers; it was a tool targeting designers who needed to quickly create persistent HTML forms. However, over time, it has been used to build more complex applications.
The Angular 1.x team worked over the years to make changes to the current design, allowing it to continue to be relevant for modern web applications. However, there are limits to improving the current AngularJS framework, and a number of these limits relate to performance, which results from the current binding and template infrastructure. In order to fix these problems, a new infrastructure is required. Modern browsers already support some of the features of ES6, but the final implementation, now in progress, will be available in 2015. With the new features, developers will be able to describe their own views (template element) and package them for distribution to other developers (HTML imports). When, in 2015, all these new features are available in all the browsers, developers will be able to create as many reusable components as required to resolve common problems. However, most frameworks, such as AngularJS 1.x, are not prepared for this; the data binding of the AngularJS 1.x framework works on the assumption of a small number of known HTML elements. In order to take advantage of the new components, an implementation in Angular is required.

Design and features of AngularJS 2.0

The current AngularJS framework design is an amalgamation of the changing Web and the general computing landscape; however, it still needs some changes. The current Angular 1.x framework cannot work with new web components, as it lends itself to mobile applications and pushes its own module and class API against the standards. To address these issues, the AngularJS team is coming up with the AngularJS 2.0 framework. AngularJS 2.0 is a reimagining of AngularJS 1.x for the modern web browser. The following are the changes in Angular 2.0:

AtScript

AtScript is the language used to develop AngularJS 2.0; it is a superset of ES6.
It's processed by the Traceur compiler with ES6 to produce ES5 code, and it uses TypeScript's syntax to generate runtime type assertions instead of compile-time checks. However, developers will still be able to use plain JavaScript (ES5) instead of AtScript to write AngularJS 2.0 applications. The following is an example of AtScript code:

import {Component} from 'angular';
import {Server} from './server';
@Component({selector: 'test'})
export class MyNewComponent {
constructor(server:Server){
this.server = server
}
}

In the preceding code, the import and the class come from ES6. There is a constructor function with a server parameter that specifies a type. In AtScript, this type is used to generate a runtime type assertion. The reference is stored, so that the dependency injection framework can use it. The @Component annotation is a metadata annotation. When we decorate some code with @Component, the compiler generates code that instantiates the annotation and stores it in a known location, so that it can be accessed by the AngularJS 2.0 framework.

Routing solution

In AngularJS 1.x, routing was designed to handle a few simple cases. As the framework grew, more features were added to it. AngularJS 2.0 includes the following basic routing features, but will still be able to extend them:

JSON route configuration
Optional convention over configuration
Static, parameterized, and splat route patterns
URL resolver
Query string
Push state or hash change
Navigation model
Document title updates
404 route handling
Location service
History manipulation
Child router
Screen activate: canActivate, activate, deactivate

Dependency Injection

The main feature of AngularJS 1.x was Dependency Injection (DI). It is very easy to use DI and follow the divide-and-conquer software development approach.
In this way, complex problems can be abstracted, and applications developed in this way can be assembled at runtime with the use of DI. However, there were a few issues in the AngularJS 1.x framework. First, the DI implementation did not survive minification; DI depended on parsing parameter names from functions, and whenever those names were changed, they no longer matched the services, controllers, and other components. Second, it was missing features that are common in advanced server-side DI frameworks, such as those available in .NET and Java: constraints on scope control and child injectors.

Annotations

With the use of AtScript in the AngularJS 2.0 framework, a way to associate metadata with any function was introduced. The metadata format used by AtScript is robust in the face of minification and is easy to write by hand in ES5.

The instance scope

In the AngularJS 1.x framework, all instances in the DI container were singletons. The same is the case with AngularJS 2.0 by default. However, to get different behavior in 1.x, we need to use services, providers, constants, and so on. The following code can be used to create a new instance every time the DI container resolves the dependency. It becomes more useful if you create your own scope identifiers for use in combination with child injectors, as shown:

@TransientScope
export class MyClass{…}

The child injector

The child injector is a major new feature in AngularJS 2.0. The child injector inherits from its parent; however, it has the ability to override the parent at the child level. Using this new feature, certain types of objects in the application can be automatically overridden in various scopes. For example, when a route has child routes, each child route creates its own child injector. This allows each route to inherit from parent routes, or to override those services during different navigation scenarios.
Data binding and templating

The data binding and the template are considered a single unit while developing the application. In other words, data binding and templates go hand in hand while writing an application with the AngularJS framework. When we bind the DOM, the HTML is handed to the template compiler. The compiler walks the HTML to find any directives, binding expressions, event handlers, and so on. All of this data is extracted from the DOM into data structures, which can be used to instantiate the template. During this phase, some processing is done on the data, for example, parsing the binding expressions. Every node that contains instructions is tagged with a class to cache the result of the processing, so that the work does not need to be repeated.

Dynamic loading

Dynamic loading was missing in AngularJS 1.x; it is very hard to add new directives or controllers at runtime. However, dynamic loading has been added to Angular 2.0. Whenever any template is compiled, the compiler is provided not only with a template, but also with a component definition. The component definition contains metadata of directives, filters, and so on. This ensures that the necessary dependencies are loaded before the template gets processed by the compiler.

Directives

Directives in the AngularJS framework are meant to extend the HTML. In AngularJS 1.x, the Directive Definition Object (DDO) is used to create directives. In AngularJS 2.0, directives are made simpler. There are three types of directives in AngularJS 2.0:

The component directive: This is a collection of a view and a controller to create custom components. It can be used as an HTML element, as well as a router that can map routes to the components.
The decorator directive: Use this directive to decorate an HTML element with additional behavior, such as ng-show.
The template directive: This directive transforms HTML into a reusable template.
The directive developer can control how the template is instantiated and inserted into the DOM; ng-if and ng-repeat are examples. In AngularJS 2.0, the controller is not a separate part of the component; the component contains the view and the controller, where the view is HTML and the controller is JavaScript. In AngularJS 2.0, the developer creates a class with some annotations, as shown in the following code:

@dirComponent({
selector: 'divTabContainer',
directives: [NgRepeat]
})
export class TabContainer {
constructor(panes:Query<Pane>){
this.panes = panes
}
select(selectPane:Pane){…}
}

In the preceding code, the controller of the component is a class. The dependencies are injected automatically into the constructor, because the child injectors are used. It can get access to any service up the DOM hierarchy, as well as to services local to the element. It can be seen in the preceding code that Query is injected. This is a special collection that is automatically synchronized with the child elements, and lets us know when anything is added or removed.

Templates

In the preceding section, we created a divTabContainer directive using AngularJS 2.0. The following code shows how to use the preceding directive in the DOM:

<template>
<div class="border">
<div class="tabs">
<div [ng-repeat|pane]="panes" class="tab" (^click)="select(pane)">
<img [src]="pane.icon"><span>${pane.name}</span>
</div>
</div>
<content>
</content>
</div>
</template>

As you can see in the preceding code, in the <img [src]="pane.icon"><span>${pane.name}</span> image tag, the src attribute is surrounded with [], which tells us that the attribute has a binding expression. When we see ${}, it means that there is an expression that should be interpolated into the content. These bindings are unidirectional, from the model or controller to the view.
If you look at the div in the preceding <div [ng-repeat|pane]="panes" class="tab" (^click)="select(pane)"> template code, it is noticeable that ng-repeat is a template directive; it is followed by | and the word pane, where pane is the local variable. (^click) indicates that there is an event handler, where ^ means that the handler is not attached directly to the DOM element; rather, we let the event bubble up, and it will be handled at the document level. In the following code example, we will compare the code of the AngularJS 1.x framework and AngularJS 2.0; let's create a hello world example for this demonstration. The following code shows how this is written in the AngularJS 1.x framework:

var module = angular.module("example", []);
module.controller("FormExample", function() {
this.username = "World";
});
<div ng-controller="FormExample as ctrl">
<input ng-model="ctrl.username"> Hello {{ctrl.username}}!
</div>

The following code is the equivalent in AngularJS 2.0:

@Component({
selector: 'form-example'
})
@Template({
// we are binding the input element to the control object
// defined in the component's class
inline: '<input [control]="username">Hello {{username.value}}!',
directives: [forms]
})
class FormExample {
constructor() {
this.username = new Control('World');
}
}

In the preceding code example, TypeScript 1.5 is used, which supports the metadata annotations. However, the preceding code can also be written in ES5/ES6 JavaScript. More information on annotations can be found in the annotation guide at https://docs.google.com/document/d/1uhs-a41dp2z0NLs-QiXYY-rqLGhgjmTf4iwBad2myzY/edit#heading=h.qbaubqkoiqds. Here are some observations: Form behavior cannot be unit tested without compiling the associated template. This is required because certain parts of the application behavior are contained in the template.
We want to enable dynamically generated, data-driven forms in AngularJS 2.0; although this is possible in AngularJS 1.x, it is not easy. The difficulty in reasoning about your template statically arises because the ng-model directive was built using generic two-way data binding. An atomic form that can easily be validated, or reverted to its original state, is also required; this is missing from AngularJS 1.x. Although AngularJS 2.0 uses an extra level of indirection, it grants major benefits. The control object decouples form behavior from the template, so that you can test it in isolation. Tests are simpler to write and faster to execute.

Summary

In this article, we introduced the Angular 2.0 framework. It is not a minor update to the previous version; it is a complete rewrite of the entire framework and will include breaking changes. We also talked about specific AngularJS 2.0 changes. AngularJS 2.0 is expected to be released by the end of 2015.

Resources for Article: Further resources on this subject: Setting Up The Rig [article] AngularJS Project [article] Working with Live Data and AngularJS [article]

Building Your First Application

Packt
10 Jan 2013
12 min read
(For more resources related to this topic, see here.) Improving the scaffolding application In this recipe, we discuss how to create your own scaffolding application and add your own configuration file. The scaffolding application is the collection of files that come with any new web2py application. How to do it... The scaffolding app includes several files. One of them is models/db.py, which imports four classes from gluon.tools (Mail, Auth, Crud, and Service), and defines the following global objects: db, mail, auth, crud, and service. The scaffolding application also defines tables required by the auth object, such as db.auth_user. The default scaffolding application is designed to minimize the number of files, not to be modular. In particular, the model file, db.py, contains the configuration, which in a production environment, is best kept in separate files. Here, we suggest creating a configuration file, models/0.py, that contains something like the following: from gluon.storage import Storage settings = Storage() settings.production = False if settings.production: settings.db_uri = 'sqlite://production.sqlite' settings.migrate = False else: settings.db_uri = 'sqlite://development.sqlite' settings.migrate = True settings.title = request.application settings.subtitle = 'write something here' settings.author = 'you' settings.author_email = '[email protected]' settings.keywords = '' settings.description = '' settings.layout_theme = 'Default' settings.security_key = 'a098c897-724b-4e05-b2d8-8ee993385ae6' settings.email_server = 'localhost' settings.email_sender = '[email protected]' settings.email_login = '' settings.login_method = 'local' settings.login_config = '' We also modify models/db.py, so that it uses the information from the configuration file, and it defines the auth_user table explicitly (this makes it easier to add custom fields): from gluon.tools import * db = DAL(settings.db_uri) if settings.db_uri.startswith('gae'): session.connect(request, 
response, db = db) mail = Mail() # mailer auth = Auth(db) # authentication/authorization crud = Crud(db) # for CRUD helpers using auth service = Service() # for json, xml, jsonrpc, xmlrpc, amfrpc plugins = PluginManager() # enable generic views for all actions for testing purpose response.generic_patterns = ['*'] mail.settings.server = settings.email_server mail.settings.sender = settings.email_sender mail.settings.login = settings.email_login auth.settings.hmac_key = settings.security_key # add any extra fields you may want to add to auth_user auth.settings.extra_fields['auth_user'] = [] # user username as well as email auth.define_tables(migrate=settings.migrate,username=True) auth.settings.mailer = mail auth.settings.registration_requires_verification = False auth.settings.registration_requires_approval = False auth.messages.verify_email = 'Click on the link http://' + request.env.http_host + URL('default','user', args=['verify_email']) + '/%(key)s to verify your email' auth.settings.reset_password_requires_verification = True auth.messages.reset_password = 'Click on the link http://' + request.env.http_host + URL('default','user', args=['reset_password']) + '/%(key)s to reset your password' if settings.login_method=='janrain': from gluon.contrib.login_methods.rpx_account import RPXAccount auth.settings.actions_disabled=['register', 'change_password', 'request_reset_password'] auth.settings.login_form = RPXAccount(request, api_key = settings.login_config.split(':')[-1], domain = settings.login_config.split(':')[0], url = "http://%s/%s/default/user/login" % (request.env.http_host, request.application)) Normally, after a web2py installation or upgrade, the welcome application is tar-gzipped into welcome.w2p, and is used as the scaffolding application. You can create your own scaffolding application from an existing application using the following commands from a bash shell: cd applications/app tar zcvf ../../welcome.w2p * There's more... 
The web2py wizard uses a similar approach, and creates a similar 0.py configuration file. You can add more settings to the 0.py file as needed. The 0.py file may contain sensitive information, such as the security_key used to encrypt passwords, the email_login containing the password of your SMTP account, and the login_config with your Janrain password (http://www.janrain.com/). You may want to write this sensitive information in a read-only file outside the web2py tree, and read it from your 0.py instead of hardcoding it. In this way, if you choose to commit your application to a version-control system, you will not be committing the sensitive information. The scaffolding application includes other files that you may want to customize, including views/layout.html and views/default/users.html. Some of them are the subject of upcoming recipes.

Building a simple contacts application

When you start designing a new web2py application, you go through three phases, characterized by looking for the answers to the following three questions: What data should the application store? Which pages should be presented to the visitors? How should the page content, for each page, be presented? The answers to these three questions are implemented in the models, the controllers, and the views, respectively. It is important for a good application design to try to answer those questions exactly in this order, and as accurately as possible. Such answers can later be revised, and more tables, more pages, and more bells and whistles can be added in an iterative fashion. A good web2py application is designed in such a way that you can change the table definitions (add and remove fields), add pages, and change page views, without breaking the application. A distinctive feature of web2py is that everything has a default. This means you can work on the first of those three steps without the need to write code for the second and third steps.
Similarly, you can work on the second step without the need to code for the third. At each step, you will be able to immediately see the result of your work, thanks to appadmin (the default database administrative interface) and generic views (every action has a view by default, until you write a custom one). Here we consider, as a first example, an application to manage our business contacts, a CRM. We will call it Contacts. The application needs to maintain a list of companies, and a list of people who work at those companies.

How to do it...

First of all, we create the model. In this step we identify which tables are needed and their fields. For each field, we determine whether it:

Must contain unique values (unique=True)
Must not be empty (notnull=True)
Is a reference (contains the ID of a record in another table)
Is used to represent a record (format attribute)

From now on, we will assume we are working with a copy of the default scaffolding application, and we only describe the code that needs to be added or replaced. In particular, we will assume the default views/layout.html and models/db.py. Here is a possible model representing the data we need to store, in models/db_contacts.py:

# in file: models/db_contacts.py
db.define_table('company',
    Field('name', notnull=True, unique=True),
    format='%(name)s')
db.define_table('contact',
    Field('name', notnull=True),
    Field('company', 'reference company'),
    Field('picture', 'upload'),
    Field('email', requires=IS_EMAIL()),
    Field('phone_number', requires=IS_MATCH('[\d\-() ]+')),
    Field('address'),
    format='%(name)s')
db.define_table('log',
    Field('body', 'text', notnull=True),
    Field('posted_on', 'datetime'),
    Field('contact', 'reference contact'))

Of course, a more complex data representation is possible. You may want to allow, for example, multiple users for the system, allow the same person to work for multiple companies, and keep track of changes over time. Here, we will keep it simple. The name of this file is important.
In particular, models are executed in alphabetical order, and this one must follow db.py. After this file has been created, you can try it by visiting the following URL: http://127.0.0.1:8000/contacts/appadmin, to access the web2py database administrative interface, appadmin. Without any controller or view, it provides a way to insert, select, update, and delete records. Now we are ready to build the controller. We need to identify which pages are required by the application. This depends on the required workflow. At a minimum, we need the following pages:

An index page (the home page)
A page to list all companies
A page that lists all contacts for one selected company
A page to create a company
A page to edit/delete a company
A page to create a contact
A page to edit/delete a contact
A page that allows one to read the information about one contact and the communication logs, as well as add a new communication log

Such pages can be implemented as follows:

# in file: controllers/default.py
def index():
    return locals()

def companies():
    companies = db(db.company).select(orderby=db.company.name)
    return locals()

def contacts():
    company = db.company(request.args(0)) or redirect(URL('companies'))
    contacts = db(db.contact.company==company.id).select(
        orderby=db.contact.name)
    return locals()

@auth.requires_login()
def company_create():
    form = crud.create(db.company, next='companies')
    return locals()

@auth.requires_login()
def company_edit():
    company = db.company(request.args(0)) or redirect(URL('companies'))
    form = crud.update(db.company, company, next='companies')
    return locals()

@auth.requires_login()
def contact_create():
    db.contact.company.default = request.args(0)
    form = crud.create(db.contact, next='companies')
    return locals()

@auth.requires_login()
def contact_edit():
    contact = db.contact(request.args(0)) or redirect(URL('companies'))
    form = crud.update(db.contact, contact, next='companies')
    return locals()

@auth.requires_login()
def contact_logs():
    contact = db.contact(request.args(0)) or redirect(URL('companies'))
    db.log.contact.default = contact.id
    db.log.contact.readable = False
    db.log.contact.writable = False
    db.log.posted_on.default = request.now
    db.log.posted_on.readable = False
    db.log.posted_on.writable = False
    form = crud.create(db.log)
    logs = db(db.log.contact==contact.id).select(orderby=db.log.posted_on)
    return locals()

def download():
    return response.download(request, db)

def user():
    return dict(form=auth())

Make sure that you do not delete the existing user, download, and service functions in the scaffolding default.py. Notice how all pages are built using the same ingredients: select queries and crud forms. You rarely need anything else. Also notice the following:

Some pages require a request.args(0) argument (a company ID for contacts and company_edit, a contact ID for contact_edit and contact_logs).
All selects have an orderby argument.
All crud forms have a next argument that determines the redirection after form submission.
All actions return locals(), which is a Python dictionary containing the local variables defined in the function. This is a shortcut; it is of course possible to return a dictionary with any subset of locals().
contact_create sets a default value for the new contact's company to the value passed as args(0).
contact_logs retrieves past logs after processing crud.create for a new log entry. This avoids unnecessarily reloading the page when a new log is inserted.
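The IS_MATCH validator on db.contact.phone_number in the model above is a thin wrapper around a regular expression. As a standalone sanity check, outside web2py, the same character class can be exercised with plain Python (note that this sketch uses fullmatch, which is stricter than a validator that accepts a match anywhere in the value unless the expression is anchored):

```python
import re

# Same character class as the IS_MATCH validator on phone_number:
# digits, hyphens, parentheses, and spaces.
PHONE = re.compile(r'[\d\-() ]+')

def looks_like_phone(value: str) -> bool:
    """Accept only strings made entirely of the allowed characters."""
    return PHONE.fullmatch(value) is not None

print(looks_like_phone('(555) 123-4567'))  # True
print(looks_like_phone('call me'))         # False
```

Testing patterns this way before putting them in a model saves a round trip through the form.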
At this point our application is fully functional, although the look and feel and navigation can be improved:

You can create a new company at: http://127.0.0.1:8000/contacts/default/company_create
You can list all companies at: http://127.0.0.1:8000/contacts/default/companies
You can edit company #1 at: http://127.0.0.1:8000/contacts/default/company_edit/1
You can create a new contact at: http://127.0.0.1:8000/contacts/default/contact_create
You can list all contacts for company #1 at: http://127.0.0.1:8000/contacts/default/contacts/1
You can edit contact #1 at: http://127.0.0.1:8000/contacts/default/contact_edit/1
And you can access the communication log for contact #1 at: http://127.0.0.1:8000/contacts/default/contact_logs/1

You should also edit the models/menu.py file, and replace its content with the following:

response.menu = [['Companies', False, URL('default', 'companies')]]

The application now works, but we can improve it by designing a better look and feel for the actions. That's done in the views.
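Before moving to the views, note the guard idiom used throughout the controllers: db.contact(request.args(0)) or redirect(URL('companies')). It works because web2py's redirect interrupts the action by raising an exception, so a failed (falsy) lookup never falls through. A minimal standalone sketch of that control flow, in plain Python with names of our own (not web2py's API):

```python
class Redirect(Exception):
    """Stands in for web2py's HTTP redirect exception."""
    def __init__(self, url):
        self.url = url

def redirect(url):
    raise Redirect(url)

contacts = {1: {'name': 'Marcus'}}  # stands in for db.contact(...)

def contact_edit(arg):
    # A falsy lookup result (missing id) triggers the redirect,
    # just like `db.contact(request.args(0)) or redirect(...)`.
    contact = contacts.get(arg) or redirect('/contacts/default/companies')
    return contact['name']

print(contact_edit(1))  # Marcus
try:
    contact_edit(99)
except Redirect as r:
    print('redirected to', r.url)
```

The `or` short-circuits on a valid record, so the happy path costs nothing extra.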
Create and edit the file views/default/companies.html:

{{extend 'layout.html'}}
<h2>Companies</h2>
<table>
{{for company in companies:}}
<tr>
<td>{{=A(company.name, _href=URL('contacts', args=company.id))}}</td>
<td>{{=A('edit', _href=URL('company_edit', args=company.id))}}</td>
</tr>
{{pass}}
<tr>
<td>{{=A('add company', _href=URL('company_create'))}}</td>
</tr>
</table>

Here is how this page looks:

Create and edit the file views/default/contacts.html:

{{extend 'layout.html'}}
<h2>Contacts at {{=company.name}}</h2>
<table>
{{for contact in contacts:}}
<tr>
<td>{{=A(contact.name, _href=URL('contact_logs', args=contact.id))}}</td>
<td>{{=A('edit', _href=URL('contact_edit', args=contact.id))}}</td>
</tr>
{{pass}}
<tr>
<td>{{=A('add contact', _href=URL('contact_create', args=company.id))}}</td>
</tr>
</table>

Here is how this page looks:

Create and edit the file views/default/company_create.html:

{{extend 'layout.html'}}
<h2>New company</h2>
{{=form}}

Create and edit the file views/default/contact_create.html:

{{extend 'layout.html'}}
<h2>New contact</h2>
{{=form}}

Create and edit the file views/default/company_edit.html:

{{extend 'layout.html'}}
<h2>Edit company</h2>
{{=form}}

Create and edit the file views/default/contact_edit.html:

{{extend 'layout.html'}}
<h2>Edit contact</h2>
{{=form}}

Create and edit the file views/default/contact_logs.html:

{{extend 'layout.html'}}
<h2>Logs for contact {{=contact.name}}</h2>
<table>
{{for log in logs:}}
<tr>
<td>{{=log.posted_on}}</td>
<td>{{=MARKMIN(log.body)}}</td>
</tr>
{{pass}}
<tr>
<td></td>
<td>{{=form}}</td>
</tr>
</table>

Here is how this page looks:

Notice that in the last view, we used the function MARKMIN to render the content of db.log.body, using the MARKMIN markup. This allows embedding links, images, anchors, font formatting information, and tables in the logs. For details about the MARKMIN syntax, refer to: http://web2py.com/examples/static/markmin.html.
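MARKMIN is a lightweight wiki syntax; for example, **text** renders as bold. The real renderer ships with web2py and handles far more (links, images, anchors, tables). Purely to illustrate the idea, here is a toy converter for the bold case only, not the actual web2py implementation:

```python
import re

def markmin_bold(text: str) -> str:
    """Toy illustration: convert **bold** spans to <strong> tags.
    web2py's real MARKMIN renderer also handles links, images,
    anchors, font formatting, and tables."""
    return re.sub(r'\*\*(.+?)\*\*', r'<strong>\1</strong>', text)

print(markmin_bold('Called **Acme Corp** about the invoice'))
# Called <strong>Acme Corp</strong> about the invoice
```

Because the log body is rendered, not echoed, users can add structure to their notes without touching HTML.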

How to build Microservices using REST framework

Gebin George
28 Mar 2018
7 min read
Today, we will learn to build microservices using the REST framework. Our microservices are Java EE 8 web projects, built using Maven and published as separate Payara Micro instances, running within Docker containers. The separation allows them to scale individually, as well as have independent operational activities. Given the BCE pattern used, we have the business component split into boundary, control, and entity, where the boundary comprises the web resource (REST endpoint) and business service (EJB). The web resource will publish the CRUD operations, and the EJB will in turn provide the transactional support for each of them, along with making external calls to other resources. Here's a logical view for the boundary, consisting of the web resource and business service: The microservices will have the following REST endpoints published for the projects shown, along with the boundary classes XXXResource and XXXService. We power our APIs with JAX-RS and CDI for Server-Sent Events: in IMS, we publish task/issue updates to the browser using an SSE endpoint. The code observes the events using the CDI event notification model and triggers the broadcast. The ims-users and ims-issues endpoints are similar in API format and behavior. While one deals with creating, reading, updating, and deleting a User, the other does the same for an Issue. Let's look at this in action. After you have the containers running, we can start firing requests to the /users web resource. The following curl command maps the URI /users to the @GET resource method named getAll() and returns a collection (JSON array) of users. The Java code will simply return a Set<User>, which gets converted to a JsonArray due to the JSON binding support of JSON-B. The method invoked is as follows: @GET public Response getAll() {... } curl -v -H 'Accept: application/json' http://localhost:8081/ims-users/resources/users ... HTTP/1.1 200 OK ... 
[ { "id":1, "name":"Marcus", "email":"[email protected]", "credential": {"password":"1234","username":"marcus"} },
  { "id":2, "name":"Bob", "email":"[email protected]", "credential": {"password":"1234","username":"bob"} } ]

Next, to select one of the users, such as Marcus, we issue the following curl command, which uses the /users/xxx path. This maps the URI to the @GET method that carries the additional @Path("{id}") annotation. The value of the id is captured using the @PathParam("id") annotation placed before the parameter. The response is a User entity wrapped in the returned Response object. The method invoked is as follows:

@GET
@Path("{id}")
public Response get(@PathParam("id") Long id) { ... }

curl -v -H 'Accept: application/json' http://localhost:8081/ims-users/resources/users/1
...
HTTP/1.1 200 OK
...
{ "id":1, "name":"Marcus", "email":"[email protected]", "credential": {"password":"1234","username":"marcus"} }

In both of the preceding methods, the response returned is 200 OK. This is achieved by using a Response builder. Here's the snippet for the method:

return Response.ok( /* entity here */ ).build();

Next, to submit data to the resource method, we use the @POST annotation. You might have noticed that the signature of the method also makes use of a UriInfo object, which is injected at runtime for us via the @Context annotation. A curl command can be used to submit the JSON data of a user entity. The method invoked is as follows:

@POST
public Response add(User newUser, @Context UriInfo uriInfo)

We make use of the -d flag to send the JSON body in the request; the POST method is implied:

curl -v -H 'Content-Type: application/json' http://localhost:8081/ims-users/resources/users -d '{"name": "james", "email":"[email protected]", "credential": {"username":"james","password":"test123"}}'
...
HTTP/1.1 201 Created
...
Location: http://localhost:8081/ims-users/resources/users/3

The 201 status code is sent by the API to signal that an entity has been created, and it also returns the location of the newly created entity. Here's the relevant snippet:

// uriInfo is injected via the @Context parameter of this method
URI location = uriInfo.getAbsolutePathBuilder()
        .path(newUserId) // This is the new entity ID
        .build();
// Send a 201 status with the new Location
return Response.created(location).build();

Similarly, we can send an update request using the PUT method. The method invoked is as follows:

@PUT
@Path("{id}")
public Response update(@PathParam("id") Long id, User existingUser)

curl -v -X PUT -H 'Content-Type: application/json' http://localhost:8081/ims-users/resources/users/3 -d '{"name": "jameson", "email":"[email protected]"}'
...
HTTP/1.1 200 OK

The last method we need to map is DELETE, which is similar to the GET operation, the only difference being the HTTP method used. The method invoked is as follows:

@DELETE
@Path("{id}")
public Response delete(@PathParam("id") Long id)

curl -v -X DELETE http://localhost:8081/ims-users/resources/users/3
...
HTTP/1.1 200 OK

You can try out the Issues endpoint in a similar manner. For the GET requests of /users or /issues, the code simply fetches and returns a set of entity objects. But when requesting a single item within this collection, the resource method has to look up the entity by the passed-in id value, captured by @PathParam("id"); if found, it returns the entity, otherwise a 404 Not Found is returned. Here's a snippet showing just that:

final Optional<Issue> issueFound = service.get(id); // id obtained from @PathParam
if (issueFound.isPresent()) {
    return Response.ok(issueFound.get()).build();
}
return Response.status(Response.Status.NOT_FOUND).build();

The issue instance can be fetched from a database of issues, which the service object interacts with.
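The look-up-or-404 pattern above can also be exercised outside a container. Here is a plain-Java sketch; the IssueStore class and the bare integer status codes are illustrative stand-ins for the article's service bean and the JAX-RS Response API, not part of the book's code:

```java
import java.util.Map;
import java.util.Optional;

// Plain-Java sketch of the boundary's lookup logic: resolve the entity
// by id, then map its presence to an HTTP-style status code.
class IssueStore {
    private final Map<Long, String> issues = Map.of(1L, "Fix login bug");

    // Mirrors service.get(id) returning an Optional<Issue>.
    Optional<String> get(Long id) {
        return Optional.ofNullable(issues.get(id));
    }

    // Mirrors the resource method: 200 when the entity exists, 404 otherwise.
    int statusFor(Long id) {
        return get(id).isPresent() ? 200 : 404;
    }
}
```

With this sketch, an existing id maps to 200 and an unknown id to 404, mirroring the Response.ok()/NOT_FOUND branch in the resource method.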
The persistence layer can return a JPA entity object, which gets converted to JSON for the calling code. We will look at persistence using JPA in a later section. For the update request, which is sent as an HTTP PUT, the code captures the identifier using @PathParam("id"), just as in the previous GET operation, and then uses it to update the entity. The entity itself is submitted as JSON and gets converted to an entity instance from the message body of the payload. Here's the code snippet for that:

@PUT
@Path("{id}")
public Response update(@PathParam("id") Long id, Issue updated) {
    updated.setId(id);
    boolean done = service.update(updated);
    return done
            ? Response.ok(updated).build()
            : Response.status(Response.Status.NOT_FOUND).build();
}

The code is simple to read and does one thing: it updates the identified entity and returns a response containing the updated entity, or a 404 for a non-existent entity. The service references that we have looked at so far are @Stateless beans, which are injected into the resource classes as fields:

// Project: ims-comments
@Stateless
public class CommentsService { ... }

// Project: ims-issues
@Stateless
public class IssuesService { ... }

// Project: ims-users
@Stateless
public class UsersService { ... }

These in turn have the EntityManager injected via @PersistenceContext. Combined, the resource and service components make the boundary ready for clients to use. Similar to the WebSockets section in Chapter 6, Power Your APIs with JAX-RS and CDI, in IMS we use a @ServerEndpoint which maintains the list of active sessions and uses it to broadcast a message to all connected users. A ChatThread keeps track of the messages being exchanged through the @ServerEndpoint class.
For a message to be sent, we take the stream of sessions, filter it down to open sessions, and send the message to each of them:

chatSessions.getSessions().stream().filter(Session::isOpen)
    .forEach(s -> {
        try {
            s.getBasicRemote().sendObject(chatMessage);
        } catch (Exception e) { ... }
    });

To summarize, we saw in practice how to leverage a REST framework to build microservices.

This article is an excerpt from the book Java EE 8 and Angular, written by Prashant Padmanabhan. The book covers building modern, user-friendly web apps with Java EE 8.
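Returning to the stream-filter-send broadcast shown earlier: the same pattern can be exercised without the WebSocket API. In this sketch, ChatSession and Broadcaster are made-up stand-ins for javax.websocket.Session and the @ServerEndpoint class, used only to illustrate the filtering logic:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for javax.websocket.Session: tracks open state and received messages.
class ChatSession {
    final List<String> received = new ArrayList<>();
    private final boolean open;

    ChatSession(boolean open) { this.open = open; }

    boolean isOpen() { return open; }

    void send(String message) { received.add(message); }
}

class Broadcaster {
    // Mirrors the endpoint's broadcast: only open sessions receive the message.
    static void broadcast(List<ChatSession> sessions, String message) {
        sessions.stream()
                .filter(ChatSession::isOpen)
                .forEach(s -> s.send(message));
    }
}
```

A closed session is silently skipped, which is exactly what the Session::isOpen filter achieves in the real endpoint.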

Npm Inc. co-founder and Chief data officer quits, leaving the community to question the stability of the JavaScript Registry

Fatema Patrawala
22 Jul 2019
6 min read
On Thursday, The Register reported that Laurie Voss, the co-founder and chief data officer of the JavaScript package registry NPM Inc, has left the company. Voss's last day in office was 1st July, while he officially announced the news on Thursday. Voss joined NPM in January 2014 and decided to leave the company in early May this year.

NPM has faced its share of unrest in the past few months. In March, five NPM employees were fired from the company in an unprofessional and unethical way. Three of those employees were later revealed to have been involved in unionization and filed complaints against NPM Inc with the National Labor Relations Board (NLRB). Earlier this month, NPM Inc, at the third attempt, settled the labor claims brought by these three former staffers through the NLRB. Voss's resignation is the third in recent months, after Rebecca Turner, a former core contributor, who resigned in March, and Kat Marchan, former CLI and community architect, who resigned from NPM earlier this month.

Voss writes on his blog, “I joined npm in January of 2014 as co-founder, when it was just some ideals and a handful of servers that were down as often as they were up. In the following five and a half years Registry traffic has grown over 26,000%, and worldwide users from about 1 million back then to more than 11 million today. One of our goals when founding npm Inc. was to make it possible for the Registry to run forever, and I believe we have achieved that goal. While I am parting ways with npm, I look forward to seeing my friends and colleagues continue to grow and change the JavaScript ecosystem for the better.”

Voss also told The Register that he supported unions: “As far as the labor dispute goes, I will say that I have always supported unions, I think they're great, and at no point in my time at NPM did anybody come to me proposing a union,” he said. “If they had, I would have been in favor of it.
The whole thing was a total surprise to me.”

The Register also spoke to one of NPM's former staffers, who said that employees tended not to talk to management for fear of retaliation, and that Voss seemed uncomfortable defending the company's recent actions and felt powerless to effect change. In his post, Voss is optimistic about NPM's business areas: “Our paid products, npm Orgs and npm Enterprise, have tens of thousands of happy users and the revenue from those sustains our core operations.” However, Business Insider reports that a recent NPM Inc funding round raised only enough to continue operating until early 2020.

https://twitter.com/coderbyheart/status/1152453087745007616

A big question on everyone's mind currently is the stability of the public Node.js Registry, as most users in the JavaScript community do not have a fallback in place. While the community views Voss's resignation with appreciation for his accomplishments, some are disappointed that he could not raise his voice against these odds and had to quit.

"Nobody outside of the company, and not everyone within it, fully understands how much Laurie was the brains and the conscience of NPM," Jonathan Cowperthwait, former VP of marketing at NPM Inc, told The Register.

CJ Silverio, a principal engineer at Eaze who served as NPM Inc's CTO, said that it's good that Voss is out, but she wasn't sure whether his absence would matter much to the day-to-day operations of NPM Inc. Silverio was fired from NPM Inc late last year, shortly after CEO Bryan Bogensberger's arrival. “Bogensberger marginalized him almost immediately to get him out of the way, so the company itself probably won't notice the departure," she said. "What should affect fundraising is the massive brain drain the company has experienced, with the entire CLI team now gone, and the registry team steadily departing.
At some point they'll have lost enough institutional knowledge quickly enough that even good new hires will struggle to figure out how to cope."

Silverio also mentioned that she had heard rumors of NPM eliminating the public registry and continuing only with its paid enterprise service, which would be like killing its own competitive advantage. She says that if the public registry disappears, there are alternative projects, such as Entropic, spearheaded by Silverio and fellow developer Chris Dickinson. Entropic is available under an open source Apache 2.0 license. Silverio says, "You can depend on packages from any other Entropic instance, and your home instance will mirror all your dependencies for you so you remain self-sufficient." She added that the software will mirror any packages installed by a legacy package manager, which is to say npm. As a result, the more developers use Entropic, the less they'll need NPM Inc's platform to provide a list of available packages.

Voss feels the scale of npm is three times bigger than any other registry, and it boasts an extremely fast growth rate of approximately 8% month on month. "Creating a company to manage an open source commons creates some tensions and challenges. It is not a perfect solution, but it is better than any other solution I can think of, and none of the alternatives proposed have struck me as better or even close to equally good," he said.

With NPM Inc's sustainability at stake, the JavaScript community on Hacker News discussed alternatives in case the public registry comes to an end. One comment read, "If it's true that they want to kill the public registry, that means I may need to seriously investigate Entropic as an alternative. I almost feel like migrating away from the normal registry is an ethical issue now. What percentage of popular packages are available in Entropic?
If someone else's repo is not in there, can I add it for them?" Another user responded, "The GitHub registry may be another reasonable alternative... not to mention linking git hashes directly, but that has other issues." Besides Entropic, another alternative discussed is nixfromnpm, a tool that translates NPM packages into Nix expressions. nixfromnpm is developed by Allen Nelson and two other contributors from Chicago.
Working with XML Documents in PHP jQuery

Packt
23 Dec 2010
8 min read
PHP jQuery Cookbook — over 60 simple but highly effective recipes to create interactive web applications using PHP with jQuery:
- Create rich and interactive web applications with PHP and jQuery
- Debug and execute jQuery code on a live site
- Design interactive forms and menus
- Another title in the Packt Cookbook range, which will help you get to grips with PHP as well as jQuery

Introduction

Extensible Markup Language—also known as XML—is a structure for the representation of data in a human-readable format. Contrary to its name, it's not actually a language but a markup format that focuses on data and its structure. XML is a lot like HTML in syntax, except that where HTML is used for the presentation of data, XML is used for storage and data interchange. Moreover, all the tags in an XML document are user-defined and can be formatted according to one's will, but the document must follow the specification recommended by the W3C.

With the large increase in distributed applications over the internet, XML is among the most widely used methods of data interchange between applications. Web services use XML to carry and exchange data between applications, and since XML is platform-independent and stored in string format, applications using different server-side technologies can communicate with each other using it. Consider, for example, an XML document describing a list of websites, holding the name, URL, and some information about each site.

PHP has several classes and functions available for working with XML documents. You can read, write, modify, and query documents easily using these functions. In this article, we will discuss the SimpleXML functions and the DOMDocument class of PHP for manipulating XML documents. You will learn how to read and modify XML files using SimpleXML as well as the DOM API. We will also explore the XPath method, which makes traversing documents a lot easier. Note that an XML document must be well-formed and valid before we can do anything with it.
There are many rules that define the well-formedness of XML, a few of which are given below:

- An XML document must have a single root element.
- There cannot be unescaped special characters like <, >, and so on.
- Each XML tag must have a corresponding closing tag.
- Tags are case sensitive.

To know more about the validity of an XML document, you can refer to this link: http://en.wikipedia.org/wiki/XML#Schemas_and_validation

For most of the recipes in this article, we will use an already created XML file. Create a new file, save it as common.xml in the Article3 directory, and put the following contents in it:

<?xml version="1.0"?>
<books>
  <book index="1">
    <name year="1892">The Adventures of Sherlock Holmes</name>
    <story>
      <title>A Scandal in Bohemia</title>
      <quote>You see, but you do not observe. The distinction is clear.</quote>
    </story>
    <story>
      <title>The Red-headed League</title>
      <quote>It is quite a three pipe problem, and I beg that you won't speak to me for fifty minutes.</quote>
    </story>
    <story>
      <title>The Man with the Twisted Lip</title>
      <quote>It is, of course, a trifle, but there is nothing so important as trifles.</quote>
    </story>
  </book>
  <book index="2">
    <name year="1927">The Case-book of Sherlock Holmes</name>
    <story>
      <title>The Adventure of the Three Gables</title>
      <quote>I am not the law, but I represent justice so far as my feeble powers go.</quote>
    </story>
    <story>
      <title>The Problem of Thor Bridge</title>
      <quote>We must look for consistency. Where there is a want of it we must suspect deception.</quote>
    </story>
    <story>
      <title>The Adventure of Shoscombe Old Place</title>
      <quote>Dogs don't make mistakes.</quote>
    </story>
  </book>
  <book index="3">
    <name year="1893">The Memoirs of Sherlock Holmes</name>
    <story>
      <title>The Yellow Face</title>
      <quote>Any truth is better than indefinite doubt.</quote>
    </story>
    <story>
      <title>The Stockbroker's Clerk</title>
      <quote>Results without causes are much more impressive.</quote>
    </story>
    <story>
      <title>The Final Problem</title>
      <quote>If I were assured of your eventual destruction I would, in the interests of the public, cheerfully accept my own.</quote>
    </story>
  </book>
</books>

Loading XML from files and strings using SimpleXML

True to its name, SimpleXML provides an easy way to access data from XML documents. XML files or strings can be converted into objects, and data can be read from them. We will see how to load an XML document from a file or string using the SimpleXML functions. You will also learn how to handle errors in XML documents.

Getting ready

Create a new directory named Article3. This article will contain sub-folders for each recipe, so create another folder named Recipe1 inside it.

How to do it...

Create a file named index.php in the Recipe1 folder. In this file, write the PHP code that will try to load the common.xml file. On loading it successfully, it will display a list of book names. We have also used the libxml functions, which will detect any error and show a detailed description on the screen.

<?php
  libxml_use_internal_errors(true);
  $objXML = simplexml_load_file('../common.xml');
  if (!$objXML) {
    $errors = libxml_get_errors();
    foreach($errors as $error) {
      echo $error->message,'<br/>';
    }
  } else {
    foreach($objXML->book as $book) {
      echo $book->name.'<br/>';
    }
  }
?>

Open your browser and point it to the index.php file. Because we have already validated the XML file, you will see the following output on the screen:

The Adventures of Sherlock Holmes
The Case-book of Sherlock Holmes
The Memoirs of Sherlock Holmes

Let us corrupt the XML file now. For this, open the common.xml file and delete any node name. Save the file and reload index.php in your browser; you will see a detailed error description on your screen.

How it works...

In the first line, passing a true value to the libxml_use_internal_errors function will suppress any XML errors and allow us to handle errors from the code itself.
The second line tries to load the specified XML file using the simplexml_load_file function. If the XML is loaded successfully, it is converted into a SimpleXMLElement object; otherwise, a false value is returned. We then check the return value: if it is false, we use the libxml_get_errors() function to get all the errors in the form of an array. This array contains objects of type LibXMLError, each of which has several properties. In the previous code, we iterated over the errors array and echoed the message property of each object, which contains a detailed error message. If there are no errors in the XML, we get a SimpleXMLElement object with all the XML data loaded in it.

There's more...

Parameters for simplexml_load_file

More parameters are available for the simplexml_load_file method, as follows:

- filename: The first, mandatory parameter. It can be a path to a local XML file or a URL.
- class_name: You can extend the SimpleXMLElement class; in that case, you can specify that class name here, and the function will return an object of that class. This parameter is optional.
- options: This third parameter allows you to specify libxml parameters for more control over how the XML is handled while loading. This is also optional.

simplexml_load_string

Similar to simplexml_load_file is simplexml_load_string, which also creates a SimpleXMLElement on successful execution. If a valid XML string is passed to it, we get a SimpleXMLElement object, or a false value otherwise.

$objXML = simplexml_load_string('<?xml version="1.0"?><book><name>My favourite book</name></book>');

The above code will return a SimpleXMLElement object with data loaded from the XML string. The second and third parameters of this function are the same as those of simplexml_load_file.

Using SimpleXMLElement to create an object

You can also use the constructor of the SimpleXMLElement class to create a new object.
$objXML = new SimpleXMLElement('<?xml version="1.0"?><book><name>My favourite book</name></book>');

More info about SimpleXML and libxml

You can read about SimpleXML in more detail on the PHP site at http://php.net/manual/en/book.simplexml.php and about libxml at http://php.net/manual/en/book.libxml.php.
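The recipe's load-and-iterate pattern is not PHP-specific. For comparison, here is the same idea in Java's standard DOM API; all classes used are from the JDK, and the shortened XML string stands in for the article's common.xml:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

class BookNames {
    // Parse an XML string and collect the text of every <name> element,
    // much like iterating $objXML->book and echoing $book->name in PHP.
    static List<String> namesIn(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(
                            xml.getBytes(StandardCharsets.UTF_8)));
            NodeList names = doc.getElementsByTagName("name");
            List<String> result = new ArrayList<>();
            for (int i = 0; i < names.getLength(); i++) {
                result.add(names.item(i).getTextContent());
            }
            return result;
        } catch (Exception e) {
            // Malformed XML surfaces here, analogous to libxml_get_errors().
            throw new IllegalStateException("XML could not be parsed", e);
        }
    }
}
```

As in the PHP recipe, a malformed document is reported through the error path rather than returning partial data.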

Edge, Chrome, Brave share updates on upcoming releases, recent milestones, and more at State of Browsers event

Bhagyashree R
24 Jun 2019
9 min read
Last month, This Dot Labs, a framework-agnostic JavaScript consultancy, held its biannual online live-streamed event, This.JavaScript - State of Browsers. In this live stream, representatives of popular browsers talk about the features users can look forward to, upcoming releases, and much more. This time Firefox was missing; in attendance were:

- Stephanie Drescher, Program Manager, Microsoft Edge
- Brian Kardell, Developer Advocate, Igalia, an active contributor to WebKit
- Rijubrata Bhaumik, Software Engineer, Intel, who talked about Intel's contributions to the web
- Jonathan Sampson, Developer Relations, Brave
- Paul Kinlan, Sr. Developer Advocate, Google
- Diego Gonzalez, Product Manager, Samsung Internet

The event was moderated by Tracy Lee, the founder of This Dot Labs. Following are some of the updates shared by the browser representatives:

What's new with Edge

In December last year, Microsoft announced that it would be adopting Chromium in the development of Microsoft Edge for desktop, and beginning this year we saw that decision come to fruition. The tech giant made the first preview builds of the Chromium-based Edge available to both macOS and Windows 10 users. These preview builds are available for testing from the Microsoft Edge Insider site. The Chromium-powered Edge is available for iOS and Android users too.

Stephanie Drescher shared what has changed for the Edge team after switching to Chromium. It enables them to deliver and update the Edge browser across all supported versions of Windows, and to update the browser more frequently, as they are no longer tied to the operating system. The Edge team is not just using Chromium but also contributing all web platform enhancements back to Chromium by default; the team has already made 400+ commits to the Chromium project. Edge comes with support for cross-platform and installable progressive web apps directly from the browser.
The team's next focus area is improving the Windows experience in terms of accessibility, localization, scrolling, and touch. At Build 2019, Microsoft also announced its new WebView, which will be available for Win32 and UWP apps. Drescher said this “will give you the option of an evergreen Chromium platform via Edge, or the option to bring your own version for AppCompat, via a model that's similar to Electron.”

Moving on to dev tools: the browser has several new dev tools that are visually aligned with VS Code. The updates include dark mode on by default and control inputs, and the team is further exploring “more ways to align the experience between your browser dev tools and VS Code.” The browser's built-in tools can now inspect and debug any Microsoft Edge-powered web content, including PWAs, WebView, and so on. No doubt these are some amazing features to be excited about.

Edge has come to iOS and macOS; however, the question of whether it will support Linux in the future remains unanswered. Drescher said that the team has no plans right now to support Linux, but looking at the number of user requests for Linux support, they are starting to think about it.

What's new with Chrome

At I/O 2019, Google shared its vision for Chrome: making it "instant, powerful, and safe" to improve the overall browsing experience. To make Chrome faster and lighter, a bunch of improvements have been made to V8, Chrome's JavaScript engine. JavaScript memory usage is now down by 20% for real-world apps. After addressing the startup bottlenecks, Chrome's loading speed is now 50% better on low-end devices and 10% better across devices. Scrolling performance has also improved by 18%.

Along with these speed gains, the team has introduced a few features in the web platform that aim to take the burden away from developers. The lazy-loading mechanism reduces the initial payload to improve load time.
You just need to add loading="lazy" to the image or iframe element. The idea is simple: the web browser will not download an image or iframe that has the loading attribute until the user scrolls near it.

The Portals API, first showcased at I/O this year, aims to make navigation between sites and web pages smoother. Portals is very similar to iframes in that it allows web developers to embed remote content in their pages; the difference is that with Portals you will be able to navigate inside the content you are embedding.

As part of making Chrome more powerful, Google is actively working on bridging the capabilities gap between native and web under Project Fugu. It has already introduced two APIs, Web Share and Web Share Target, and plans to bring more capabilities such as a writable file API, event alarms, user idle detection, and more. As the name suggests, the Web Share API allows websites to invoke the native sharing capabilities of the host platform, so users can easily share a URL or text on pretty much any platform they want. Until now, sharing was restricted to native apps that had registered as share targets; with the Web Share Target API, installed web apps can also register with the underlying OS as a target to receive shared content.

Talking about the safety aspect: starting from version 67, Chrome comes with support for WebAuthn, a new authentication standard by the W3C. This API allows servers to integrate strong authenticators that are built into devices, for instance Windows Hello or Apple's Touch ID.

What's new with Brave

Edge, Chrome, and Brave share one thing in common: they are all Chromium-based. What sets Brave apart is the Basic Attention Token (BAT). Jonathan Sampson, representing Brave, said that we have seen a “Cambrian explosion” of cryptocurrency utility tokens, or blockchain assets, such as Bitcoin, Litecoin, and Ethereum.
Partnership with Coinbase

Previously, if you wanted to acquire these assets, there was only one way to do it: "mining", which meant a huge investment in expensive GPUs and power bills. Brave believes that the next step is to earn these assets primarily with your "attention", and its goal is to take users from mining to earning blockchain assets. As part of this goal, it has partnered with Coinbase, one of the prominent companies in the blockchain space. Users will get 10 dollars in the form of BAT just for learning about the state of digital advertising and what Brave and attention tokens are doing in that space. Through BAT, Brave provides its consumers with a direct way to support their content creators, and content creators can customize and personalize the entire experience by signing up on Brave's creators page.

Implementation changes in how BAT is sent to creators

The Brave team has also made some implementation changes to how this works. Previously, consumers could send these tokens to anyone; the tokens then went into an omnibus settlement wallet and stayed there until the creator verified with the program and demonstrated ownership of their web property. Only after all this would the creator get access to the tokens. Unfortunately, this could mean that some tokens had to sit in limbo for an indefinite amount of time. Now, the team has re-engineered the process to hold these tokens inside your wallet for up to 90 days. If and when the property is verified, the tokens are transmitted out; if the property is never verified, the tokens are released back into your wallet, and you can send them to another creator instead of letting them sit in the omnibus settlement wallet.
Sampson further added that “of course the entire process goes through the anonymize protocol so that neither Brave nor anybody else has any idea which websites you're visiting or to whom you are contributing support.”

Inner workings of Brave ads

To improve ad recommendations, Brave integrates a machine learning model. The feature is opt-in, so users decide when and how many ads they want to see in order to earn BAT from their attention. The ML model studies the user and learns about them each day. Every day, a catalog is downloaded to each user's device, and the individual machines churn away on that catalog to figure out which ads are relevant to that individual. Once relevant ads are found, the user sees a small operating-system notification. Brave sends 70% of the revenue made from users' attention back to the user in the form of BAT.

Brave Sync (beta)

The beta version of Brave Sync is available across platforms, from Windows, macOS, and Linux to Android and iOS. Like Brave Ads, this is an opt-in feature that allows you to automatically sync browsing data across devices. Right now it is in beta and supports syncing only bookmarks; in future releases, we can expect support for tabs, history, passwords, and autofill, as well as Brave Rewards. Once you enable it on one device, you just need to scan a QR code or enter a secret phrase to register another device for syncing.

Canary builds available

Like all the other browsers, Brave has also started to share its nightly and dev builds to give developers earlier insight into the work they are doing. You can access them through the download page.

These were some of the major updates discussed in the live stream. Intel and Samsung also talked about their contributions to the web, and Igalia's developer Brian Kardell talked about dark mode, pointer events, and more in WebKit. Watch the full event on YouTube for more details.
https://www.youtube.com/watch?v=olSQai4EUD8

Working with DataForm in Microsoft Silverlight 4

Packt
10 May 2010
9 min read
Displaying and editing an object using the DataForm

Letting a user work with groups of related data is a requirement of almost every application: displaying a group of people, editing product details, and so on. These groups of data are generally contained in forms, and the Silverlight DataForm (from the Silverlight Toolkit) is just that—a form. In the next few recipes, you'll learn how to work with this DataForm. For now, let's start off with the basics, that is, displaying and editing the data.

Getting ready

For this recipe, we're starting with a blank solution, but you do need to install the Silverlight Toolkit, as the assemblies containing the DataForm are offered through it. You can download and install it from http://www.codeplex.com/Silverlight/. You can find the completed solution in the Chapter05/Dataform_DisplayAndEdit_Completed folder in the code bundle that is available on the Packt website.

How to do it...

We're going to create a Person object, which will be displayed through a DataForm. To achieve this, we'll carry out the following steps:

Start a new Silverlight solution, name it DataFormExample, and add a reference to System.Windows.Controls.Data.DataForm.Toolkit (from the Silverlight Toolkit). Alternatively, you can drag the DataForm from the Toolbox to the design surface.

Open MainPage.xaml and add a namespace import statement at the top of this file (in the root element's tag). This will allow us to use the DataForm, which resides in the assembly that we've just referenced.

Add a DataForm to MainPage.xaml and name it myDataForm.
4. In the DataForm, set AutoEdit to False and CommandButtonsVisibility to All as shown in the following code:

<Grid x:Name="LayoutRoot">
    <Grid.RowDefinitions>
        <RowDefinition Height="40"></RowDefinition>
        <RowDefinition></RowDefinition>
    </Grid.RowDefinitions>
    <TextBlock Text="Working with the DataForm" Margin="10" FontSize="14"></TextBlock>
    <df:DataForm x:Name="myDataForm" AutoEdit="False" CommandButtonsVisibility="All" Grid.Row="1" Width="400" Height="300" Margin="10" HorizontalAlignment="Left" VerticalAlignment="Top"></df:DataForm>
</Grid>

5. Add a new class named Person to the Silverlight project, having ID, FirstName, LastName, and DateOfBirth as its properties. This class is shown in the following code. We will visualize an instance of the Person class using the DataForm.

public class Person
{
    public int ID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
}

6. Open MainPage.xaml.cs and add a person property of the Person type to it. Also, create a method named InitializePerson in which we'll initialize this property as shown in the following code:

public Person person { get; set; }

private void InitializePerson()
{
    person = new Person()
    {
        ID = 1,
        FirstName = "Kevin",
        LastName = "Dockx",
        DateOfBirth = new DateTime(1981, 5, 5)
    };
}

7. Add a call to InitializePerson in the constructor of MainPage.xaml.cs and set the CurrentItem property of the DataForm to the person, as shown in the following code:

InitializePerson();
myDataForm.CurrentItem = person;

You can now build and run your solution. When you do this, you'll see a DataForm that has automatically generated the necessary fields in order to display a person.

How it works...

To start off, we needed something to display in our DataForm: a Person entity. This is why we've created the Person class: it will be bound to the DataForm by setting the CurrentItem property to an object of type Person.
Doing this will make sure that the DataForm automatically generates the necessary fields. It looks at all the public properties of our Person object and generates the correct control depending on the type: a string will be displayed as a TextBox, a Boolean value will be displayed as a CheckBox, and so on.

As we have set the CommandButtonsVisibility property on the DataForm to All, we get an Edit icon in the command bar at the top of the DataForm. (Setting AutoEdit to False makes sure that we start in the display mode, rather than the edit mode.) When you click on the Edit icon, the DataForm shows the person in the editable mode (using the EditItemTemplate) and an OK button appears. Clicking on the OK button reverts the form to the regular display mode.

Do keep in mind that the changes you make to the person are persisted immediately in memory (in the case of a TextBox, when it loses focus). If necessary, you can write extra code to persist the Person object from memory to an underlying datastore by handling the ItemEditEnded event on the DataForm.

There's more...

At this moment, we've got a DataForm displaying a single item that you can either view or edit. But what if you want to cancel your edit? As of now, the Cancel button appears to be disabled. As the changes you make in the DataForm are immediately persisted to the underlying object in memory, cancelling the edit requires some extra business logic. Luckily, it's not hard to do. First of all, you'll want to implement the IEditableObject interface on the Person class, which will make sure that cancelling is possible. As a result, the Cancel button will no longer be disabled.
The following code is used to implement this:

public class Person : IEditableObject
{
    public int ID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }

    public void BeginEdit() {}
    public void CancelEdit() {}
    public void EndEdit() {}
}

This interface exposes three methods: BeginEdit, CancelEdit, and EndEdit. If needed, you can write extra business logic in these methods, which is exactly what we need to do. For most applications, you might want to implement only CancelEdit, which would then refetch the person from the underlying data store. In our example, we're going to solve this problem by using a different approach. (You can use this approach if you haven't got an underlying database from which your data can be refetched, or if you don't want to access the database again.) In the BeginEdit method, we save the current property values of the person. When the edit has been cancelled, we put them back to the way they were before. This is shown in the following code:

private Person tmpPerson; // backing field holding the saved state

public void BeginEdit()
{
    // save current values
    tmpPerson = new Person()
    {
        ID = this.ID,
        FirstName = this.FirstName,
        LastName = this.LastName,
        DateOfBirth = this.DateOfBirth
    };
}

public void CancelEdit()
{
    // reset values
    ID = tmpPerson.ID;
    FirstName = tmpPerson.FirstName;
    LastName = tmpPerson.LastName;
    DateOfBirth = tmpPerson.DateOfBirth;
}

Now, cancelling an edit is possible and it actually reverts to the previous property values.

More on DataForm behavior

The DataForm exposes various events such as BeginningEdit (when you begin to edit an item), EditEnding (occurs just before an item is saved), and EditEnded (occurs after an item has been saved). It also exposes properties that you can use to define how the DataForm behaves.

Validating a DataForm or a DataGrid

As you might have noticed, the DataForm automatically includes validation on your fields. For example, try inputting a string value into the ID field.
You'll see that an error message appears. This is beyond the scope of this recipe; more on this will be discussed in the Validating the DataForm recipe.

Managing the editing of an object on different levels

There are different levels at which you can manage the editing of an object. You can manage it at the control level by handling events such as BeginningEdit or ItemEditEnded in the DataForm. Besides that, you can also handle editing at the business level by implementing the IEditableObject interface and providing custom code for the BeginEdit, CancelEdit, or EndEdit methods in the class itself. Depending on the requirements of your application, you can use either of the levels or even both together.

See also

In this recipe, we've seen how the DataForm is generated automatically. For most applications, you'll require more control over how your fields are displayed, for example. The DataForm is highly customizable, both at the template level (through template customization) and in how the data is generated (through data annotations). If you want to learn about using the DataForm to display or edit a list of items rather than just one, have a look at the next recipe, Displaying and editing a collection using the DataForm.

Displaying and editing a collection using the DataForm

In the previous recipe, you learned how to work with the basic features of the DataForm. You can now visualize and edit an entity. But in most applications, this isn't enough. Often, you'll want an application that shows you a list of items with the ability to add a new item or delete an item from the list. You'll want the application to allow you to edit every item and provide an easy way of navigating between them. A good example of this would be an application that allows you to manage a list of employees. The DataForm can do all of this, and most of it is built-in. In this recipe, you'll learn how to achieve this.
Getting ready

For this recipe, we're starting with the basic setup that we completed in the previous recipe. If you didn't complete that recipe, you can find a starter solution in the Chapter05/Dataform_Collection_Starter folder in the code bundle that is available on the Packt website. The finished solution for this recipe can be found in the Chapter05/Dataform_Collection_Completed folder. In any case, you'll need to install the Silverlight Toolkit, as the assemblies containing the DataForm are offered through it. You can download and install it from http://www.codeplex.com/Silverlight/.
Packt
27 Jun 2013
4 min read

Building a Chat Application

Creating a project

To begin developing our chat application, we need to create an Opa project using the following Opa command:

opa create chat

This command will create an empty Opa project. It will also generate the required directories and files automatically. Let's have a brief look at what these source code files do:

controller.opa: This file serves as the entry point of the chat application; we start the web server in controller.opa
view.opa: This file serves as the user interface
model.opa: This is the model of the chat application; it defines the message, the network, and the chat room
style.css: This is an external stylesheet file
Makefile: This file is used to build the application

As we do not need database support in the chat application, we can remove --import-package stdlib.database.mongo from the FLAG option in the Makefile. Type make and make run to run the empty application.

Launching the web server

Let's begin with controller.opa, the entry point of our chat application, where we launch the web server. We have already discussed the function Server.start in the Server module section. In our chat application, we will use a handlers group to handle user requests.

Server.start(Server.http, [
  {resources: @static_resource_directory("resources")},
  {register: [{css:["/resources/css/style.css"]}]},
  {title:"Opa Chat", page: View.page }
])

So, what exactly are the arguments that we are passing to the Server.start function? The line {resources: @static_resource_directory("resources")} registers a resource handler that will serve the resource files in the resources directory. Next, the line {register: [{css:["/resources/css/style.css"]}]} registers an external CSS file, style.css. This permits us to use the styles defined in style.css throughout the application.
Finally, the line {title:"Opa Chat", page: View.page} registers a single-page handler that will dispatch all other requests to the function View.page. The server uses the default configuration Server.http and will run on port 8080.

Designing the user interface

When the application starts, all requests (except requests for resources) will be dispatched to the function View.page, which displays the chat page in the browser. Let's take a look at the view part; we define a module named View in view.opa.

import stdlib.themes.bootstrap.css

module View {
  function page(){
    user = Random.string(8)
    <div id=#title class="navbar navbar-inverse navbar-fixed-top">
      <div class=navbar-inner>
        <div id=#logo />
      </div>
    </div>
    <div id=#conversation class=container-fluid onready={function(_){Model.join(updatemsg)}} />
    <div id=#footer class="navbar navbar-fixed-bottom">
      <div class=input-append>
        <input type=text id=#entry class=input-xxlarge onnewline={broadcast(user)}/>
        <button class="btn btn-primary" onclick={broadcast(user)}>Post</button>
      </div>
    </div>
  }
  ...
}

The module View contains functions to display the page in the browser. In the first line, import stdlib.themes.bootstrap.css, we import the Bootstrap styles. This permits us to use Bootstrap markup in our code, such as navbar, navbar-fixed-top, and btn-primary. We also registered an external style.css file, so we can use the styles defined in style.css, such as conversation and footer. As we can see, the code in the function page follows almost the same syntax as HTML. As discussed earlier, we can use HTML freely in Opa code; HTML values have the predefined type xhtml in Opa.

Summary

In this article, we started by creating a project and launching the web server.
Packt
22 Oct 2009
18 min read

ASP.NET Social Networks—Making Friends (Part 2)

Implementing the presentation layer

Now that we have the base framework in place, we can start to discuss what it will take to put it all together.

Searching for friends

Let's see what it takes to implement a search for friends.

SiteMaster

Let's begin with searching for friends. We haven't covered much regarding the actual UI, and nothing regarding the master page of this site. To put it simply, we have added a text box and a button to the master page to take in a search phrase. When the button is clicked, this method in the MasterPage code behind is fired:

protected void ibSearch_Click(object sender, EventArgs e)
{
    _redirector.GoToSearch(txtSearch.Text);
}

As you can see, it simply calls the Redirector class and routes the user to the Search.aspx page, passing in the value of txtSearch (as a query string parameter in this case).

public void GoToSearch(string SearchText)
{
    Redirect("~/Search.aspx?s=" + SearchText);
}

Search

The Search.aspx page has no interface of its own. It expects a value to be passed in from the previously discussed text box in the master page. With this text phrase we hit our AccountRepository and perform a search using the Contains() operator. The returned list of Accounts is then displayed on the page. For the most part, this page is all about MVP (Model View Presenter) plumbing. Here is the repeater that displays all our data:

<%@ Register Src="~/UserControls/ProfileDisplay.ascx" TagPrefix="Fisharoo" TagName="ProfileDisplay" %>
...
<asp:Repeater ID="repAccounts" runat="server" OnItemDataBound="repAccounts_ItemDataBound">
    <ItemTemplate>
        <Fisharoo:ProfileDisplay ShowDeleteButton="false" ID="pdProfileDisplay" runat="server">
        </Fisharoo:ProfileDisplay>
    </ItemTemplate>
</asp:Repeater>

The fun stuff in this case comes in the form of the ProfileDisplay user control, which was created so that we have an easy way to display profile data in various places with one chunk of reusable code that allows us to make global changes.
A user control is like a small self-contained page that you can then insert into your page (or master page). It has its own UI and it has its own code behind (so make sure it also gets its own MVP plumbing!). Also, like a page, it is at the end of the day a simple object, which means that it can have properties, methods, and everything else that you might think to use. Once you have defined a user control you can use it in a few ways. You can programmatically load it using the LoadControl() method and then use it like you would use any other object in a page environment. Or like we did here, you can add a page declaration that registers the control for use in that page. You will notice that we specified where the source for this control lives. Then we gave it a tag prefix and a tag name (similar to using asp:Control). From that point onwards we can refer to our control in the same way that we can declare a TextBox! You should see that we have <Fisharoo:ProfileDisplay ... />. You will also notice that our tag has custom properties that are set in the tag definition. In this case you see ShowDeleteButton="false". 
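As a quick illustration of the programmatic route mentioned above, here is a minimal sketch of loading the same control with LoadControl(). The ProfileDisplay type and its ShowDeleteButton property are the ones defined in this chapter; the placeholder name phProfiles is hypothetical.

```csharp
// In a page's code-behind, e.g. in Page_Load:
ProfileDisplay pd = (ProfileDisplay)LoadControl("~/UserControls/ProfileDisplay.ascx");
pd.ShowDeleteButton = false; // same property we set declaratively in the markup
phProfiles.Controls.Add(pd); // phProfiles is a hypothetical asp:PlaceHolder on the page
```

Either way, once the control is on the page it behaves like any other object, with its own properties and methods.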
Here is the user control code in order of display, code behind, and the presenter: //UserControls/ProfileDisplay.ascx<%@ Import namespace="Fisharoo.FisharooCore.Core.Domain"%><%@ Control Language="C#" AutoEventWireup="true" CodeBehind="ProfileDisplay.ascx.cs" Inherits="Fisharoo.FisharooWeb.UserControls.ProfileDisplay" %><div style="float:left;"> <div style="height:130px;float:left;"> <a href="/Profiles/Profile.aspx?AccountID=<asp:Literal id='litAccountID' runat='server'></asp:Literal>"> <asp:Image style="padding:5px;width:100px;height:100px;" ImageAlign="Left" Width="100" Height="100" ID="imgAvatar" ImageUrl="~/images/ProfileAvatar/ProfileImage.aspx" runat="server" /></a> <asp:ImageButton ImageAlign="AbsMiddle" ID="ibInviteFriend" runat="server" Text="Become Friends" OnClick="lbInviteFriend_Click" ImageUrl="~/images/icon_friends.gif"></asp:ImageButton> <asp:ImageButton ImageAlign="AbsMiddle" ID="ibDelete" runat="server" OnClick="ibDelete_Click" ImageUrl="~/images/icon_close.gif" /><br /> <asp:Label ID="lblUsername" runat="server"></asp:Label><br /> <asp:Label ID="lblFirstName" runat="server"></asp:Label> <asp:Label ID="lblLastName" runat="server"></asp:Label><br /> Since: <asp:Label ID="lblCreateDate" runat="server"></asp:Label><br /> <asp:Label ID="lblFriendID" runat="server" Visible="false"></asp:Label> </div> </div>//UserControls/ProfileDisplay.ascx.csusing System;using System.Collections;using System.Configuration;using System.Data;using System.Linq;using System.Web;using System.Web.Security;using System.Web.UI;using System.Web.UI.HtmlControls;using System.Web.UI.WebControls;using System.Web.UI.WebControls.WebParts;using System.Xml.Linq;using Fisharoo.FisharooCore.Core.Domain;using Fisharoo.FisharooWeb.UserControls.Interfaces;using Fisharoo.FisharooWeb.UserControls.Presenters;namespace Fisharoo.FisharooWeb.UserControls{ public partial class ProfileDisplay : System.Web.UI.UserControl, IProfileDisplay { private ProfileDisplayPresenter _presenter; protected 
Account _account; protected void Page_Load(object sender, EventArgs e) { _presenter = new ProfileDisplayPresenter(); _presenter.Init(this); ibDelete.Attributes.Add("onclick","javascript:return confirm('Are you sure you want to delete this friend?')"); } public bool ShowDeleteButton { set { ibDelete.Visible = value; } } public bool ShowFriendRequestButton { set { ibInviteFriend.Visible = value; } } public void LoadDisplay(Account account) { _account = account; ibInviteFriend.Attributes.Add("FriendsID",_account.AccountID.ToString()); ibDelete.Attributes.Add("FriendsID", _account.AccountID.ToString()); litAccountID.Text = account.AccountID.ToString(); lblLastName.Text = account.LastName; lblFirstName.Text = account.FirstName; lblCreateDate.Text = account.CreateDate.ToString(); imgAvatar.ImageUrl += "?AccountID=" + account.AccountID.ToString(); lblUsername.Text = account.Username; lblFriendID.Text = account.AccountID.ToString(); } protected void lbInviteFriend_Click(object sender, EventArgs e) { _presenter = new ProfileDisplayPresenter(); _presenter.Init(this); _presenter.SendFriendRequest(Convert.ToInt32(lblFriendID.Text)); } protected void ibDelete_Click(object sender, EventArgs e) { _presenter = new ProfileDisplayPresenter(); _presenter.Init(this); _presenter.DeleteFriend(Convert.ToInt32(lblFriendID.Text)); } }}//UserControls/Presenter/ProfileDisplayPresenter.csusing System;using System.Data;using System.Configuration;using System.Linq;using System.Web;using System.Web.Security;using System.Web.UI;using System.Web.UI.HtmlControls;using System.Web.UI.WebControls;using System.Web.UI.WebControls.WebParts;using System.Xml.Linq;using Fisharoo.FisharooCore.Core;using Fisharoo.FisharooCore.Core.DataAccess;using Fisharoo.FisharooWeb.UserControls.Interfaces;using StructureMap;namespace Fisharoo.FisharooWeb.UserControls.Presenters{ public class ProfileDisplayPresenter { private IProfileDisplay _view; private IRedirector _redirector; private IFriendRepository 
_friendRepository;
        private IUserSession _userSession;

        public ProfileDisplayPresenter()
        {
            _redirector = ObjectFactory.GetInstance<IRedirector>();
            _friendRepository = ObjectFactory.GetInstance<IFriendRepository>();
            _userSession = ObjectFactory.GetInstance<IUserSession>();
        }

        public void Init(IProfileDisplay view)
        {
            _view = view;
        }

        public void SendFriendRequest(Int32 AccountIdToInvite)
        {
            _redirector.GoToFriendsInviteFriends(AccountIdToInvite);
        }

        public void DeleteFriend(Int32 FriendID)
        {
            if (_userSession.CurrentUser != null)
            {
                _friendRepository.DeleteFriendByID(_userSession.CurrentUser.AccountID, FriendID);
                HttpContext.Current.Response.Redirect(HttpContext.Current.Request.RawUrl);
            }
        }
    }
}

All this logic and display is very standard. You have the MVP plumbing, which makes up most of it. Beyond that, you will notice that the ProfileDisplay control has a LoadDisplay() method responsible for loading the UI for that control. In the Search page, this is done in the repAccounts_ItemDataBound() method:

protected void repAccounts_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
    if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem)
    {
        ProfileDisplay pd = e.Item.FindControl("pdProfileDisplay") as ProfileDisplay;
        pd.LoadDisplay((Account)e.Item.DataItem);
        if (_webContext.CurrentUser == null)
            pd.ShowFriendRequestButton = false;
    }
}

The ProfileDisplay control also has a couple of properties: one to show/hide the delete friend button and the other to show/hide the invite friend button. These buttons are not appropriate for every page the control is used in. In the search results page, we want to hide the Delete button, as the results are not necessarily friends; we would want to be able to invite them in that view. However, in a list of our friends, the Invite button (to invite a friend) would no longer be appropriate, as each of these users would already be a friend. The Delete button in this case would be more appropriate.
Clicking on the Invite button makes a call to the Redirector class and routes the user to the InviteFriends page. //UserControls/ProfileDisplay.ascx.cspublic void SendFriendRequest(Int32 AccountIdToInvite){ _redirector.GoToFriendsInviteFriends(AccountIdToInvite);}//Core/Impl/Redirector.cspublic void GoToFriendsInviteFriends(Int32 AccoundIdToInvite){ Redirect("~/Friends/InviteFriends.aspx?AccountIdToInvite=" + AccoundIdToInvite.ToString());} Inviting your friends This page allows us to manually enter email addresses of friends whom we want to invite. It is a standard From, To, Message format where the system specifies the sender (you), you specify who to send to and the message that you want to send. //Friends/InviteFriends.aspx<%@ Page Language="C#" MasterPageFile="~/SiteMaster.Master" AutoEventWireup="true" CodeBehind="InviteFriends.aspx.cs" Inherits="Fisharoo.FisharooWeb.Friends.InviteFriends" %><asp:Content ContentPlaceHolderID="Content" runat="server"> <div class="divContainer"> <div class="divContainerBox"> <div class="divContainerTitle">Invite Your Friends</div> <asp:Panel ID="pnlInvite" runat="server"> <div class="divContainerRow"> <div class="divContainerCellHeader">From:</div> <div class="divContainerCell"><asp:Label ID="lblFrom" runat="server"></asp:Label></div> </div> <div class="divContainerRow"> <div class="divContainerCellHeader">To:<br /><div class="divContainerHelpText">(use commas to<BR />separate emails)</div></div> <div class="divContainerCell"><asp:TextBox ID="txtTo" runat="server" TextMode="MultiLine" Columns="40" Rows="5"></asp:TextBox></div> </div> <div class="divContainerRow"> <div class="divContainerCellHeader">Message:</div> <div class="divContainerCell"><asp:TextBox ID="txtMessage" runat="server" TextMode="MultiLine" Columns="40" Rows="10"></asp:TextBox></div> </div> <div class="divContainerFooter"> <asp:Button ID="btnInvite" runat="server" Text="Invite" OnClick="btnInvite_Click" /> </div> </asp:Panel> <div class="divContainerRow"> <div 
class="divContainerCell"><br /><asp:Label ID="lblMessage" runat="server"> </asp:Label><br /><br /></div> </div> </div> </div></asp:Content> Running the code will display the following: This is a simple page, so the majority of the code for it is MVP plumbing. The most important part to notice here is that when the Invite button is clicked the presenter is notified to send the invitation. //Friends/InviteFriends.aspx.csusing System;using System.Collections;using System.Configuration;using System.Data;using System.Linq;using System.Web;using System.Web.Security;using System.Web.UI;using System.Web.UI.HtmlControls;using System.Web.UI.WebControls;using System.Web.UI.WebControls.WebParts;using System.Xml.Linq;using Fisharoo.FisharooWeb.Friends.Interface;using Fisharoo.FisharooWeb.Friends.Presenter;namespace Fisharoo.FisharooWeb.Friends{ public partial class InviteFriends : System.Web.UI.Page, IInviteFriends { private InviteFriendsPresenter _presenter; protected void Page_Load(object sender, EventArgs e) { _presenter = new InviteFriendsPresenter(); _presenter.Init(this); } protected void btnInvite_Click(object sender, EventArgs e) { _presenter.SendInvitation(txtTo.Text,txtMessage.Text); } public void DisplayToData(string To) { lblFrom.Text = To; } public void TogglePnlInvite(bool IsVisible) { pnlInvite.Visible = IsVisible; } public void ShowMessage(string Message) { lblMessage.Text = Message; } public void ResetUI() { txtMessage.Text = ""; txtTo.Text = ""; } }} Once this call is made we leap across to the presenter (more plumbing!). 
//Friends/Presenter/InviteFriendsPresenter.csusing System;using System.Data;using System.Configuration;using System.Linq;using System.Web;using System.Web.Security;using System.Web.UI;using System.Web.UI.HtmlControls;using System.Web.UI.WebControls;using System.Web.UI.WebControls.WebParts;using System.Xml.Linq;using Fisharoo.FisharooCore.Core;using Fisharoo.FisharooCore.Core.DataAccess;using Fisharoo.FisharooCore.Core.Domain;using Fisharoo.FisharooWeb.Friends.Interface;using StructureMap;namespace Fisharoo.FisharooWeb.Friends.Presenter{ public class InviteFriendsPresenter { private IInviteFriends _view; private IUserSession _userSession; private IEmail _email; private IFriendInvitationRepository _friendInvitationRepository; private IAccountRepository _accountRepository; private IWebContext _webContext; private Account _account; private Account _accountToInvite; public void Init(IInviteFriends view) { _view = view; _userSession = ObjectFactory.GetInstance<IUserSession>(); _email = ObjectFactory.GetInstance<IEmail>(); _friendInvitationRepository = ObjectFactory.GetInstance< IFriendInvitationRepository>(); _accountRepository = ObjectFactory.GetInstance<IAccountRepository>(); _webContext = ObjectFactory.GetInstance<IWebContext>(); _account = _userSession.CurrentUser; if (_account != null) { _view.DisplayToData(_account.FirstName + " " + _account.LastName + " &lt;" + _account.Email + "&gt;"); if (_webContext.AccoundIdToInvite > 0) { _accountToInvite = _accountRepository.GetAccountByID (_webContext.AccoundIdToInvite); if (_accountToInvite != null) { SendInvitation(_accountToInvite.Email, _account.FirstName + " " + _account.LastName + " would like to be your friend!"); _view.ShowMessage(_accountToInvite.Username + " has been sent a friend request!"); _view.TogglePnlInvite(false); } } } } public void SendInvitation(string ToEmailArray, string Message) { string resultMessage = "Invitations sent to the following recipients:<BR>"; resultMessage += _email.SendInvitations 
(_userSession.CurrentUser,ToEmailArray, Message); _view.ShowMessage(resultMessage); _view.ResetUI(); } }} The interesting thing here is the SendInvitation() method, which takes in a comma delimited array of emails and the message to be sent in the invitation. It then makes a call to the Email.SendInvitations() method. //Core/Impl/Email.cspublic string SendInvitations(Account sender, string ToEmailArray, string Message){ string resultMessage = Message; foreach (string s in ToEmailArray.Split(',')) { FriendInvitation friendInvitation = new FriendInvitation(); friendInvitation.AccountID = sender.AccountID; friendInvitation.Email = s; friendInvitation.GUID = Guid.NewGuid(); friendInvitation.BecameAccountID = 0; _friendInvitationRepository.SaveFriendInvitation(friendInvitation); //add alert to existing users alerts Account account = _accountRepository.GetAccountByEmail(s); if(account != null) { _alertService.AddFriendRequestAlert(_userSession.CurrentUser, account, friendInvitation.GUID, Message); } //TODO: MESSAGING - if this email is already in our system add a message through messaging system //if(email in system) //{ // add message to messaging system //} //else //{ // send email SendFriendInvitation(s, sender.FirstName, sender.LastName, friendInvitation.GUID.ToString(), Message); //} resultMessage += "• " + s + "<BR>"; } return resultMessage;} This method is responsible for parsing out all the emails, creating a new FriendInvitation, and sending the request via email to the person who was invited. It then adds an alert to the invited user if they have an Account. And finally we have to add a notification to the messaging system once it is built. Outlook CSV importer The Import Contacts page is responsible for allowing our users to upload an exported contacts file from MS Outlook into our system. Once they have imported their contacts, the user is allowed to select which email addresses are actually invited into our system. 
Importing contacts

As this page is made up of a couple of views, let's begin with the initial view.

//Friends/OutlookCsvImporter.aspx
<asp:Panel ID="pnlUpload" runat="server">
    <div class="divContainerTitle">Import Contacts</div>
    <div class="divContainerRow">
        <div class="divContainerCellHeader">Contacts File:</div>
        <div class="divContainerCell"><asp:FileUpload ID="fuContacts" runat="server" /></div>
    </div>
    <div class="divContainerRow">
        <div class="divContainerFooter"><asp:Button ID="btnUpload" Text="Upload & Preview Contacts" runat="server" OnClick="btnUpload_Click" /></div>
    </div>
    <br /><br />
    <div class="divContainerRow">
        <div class="divContainerTitle">How do I export my contacts from Outlook?</div>
        <div class="divContainerCell">
            <ol>
                <li>Open Outlook</li>
                <li>In the File menu choose Import and Export</li>
                <li>Choose export to a file and click next</li>
                <li>Choose comma separated values and click next</li>
                <li>Select your contacts and click next</li>
                <li>Browse to the location you want to save your contacts file</li>
                <li>Click finish</li>
            </ol>
        </div>
    </div>
</asp:Panel>

As you can see from the code, we are working with panels here. This panel is responsible for allowing a user to upload their contacts CSV file. It also gives the user some directions on how to go about exporting contacts from Outlook. This view has a file upload box that allows the user to browse for their CSV file, and a button to tell us when they are ready for the upload. There is a method in our presenter that handles the button click from the view.

//Friends/Presenter/OutlookCsvImporterPresenter.cs
public void ParseEmails(HttpPostedFile file)
{
    using (Stream s = file.InputStream)
    {
        StreamReader sr = new StreamReader(s);
        string contacts = sr.ReadToEnd();
        _view.ShowParsedEmail(_email.ParseEmailsFromText(contacts));
    }
}

This method is responsible for handling the upload process of the HttpPostedFile.
It puts the file reference into a StreamReader and then reads the stream into a string variable named contacts. Once we have the entire list of contacts, we can then call into our Email class and parse all the emails out.

//Core/Impl/Email.cs
public List<string> ParseEmailsFromText(string text)
{
    List<string> emails = new List<string>();
    string strRegex = @"\w+([-+.]\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*";
    Regex re = new Regex(strRegex, RegexOptions.Multiline);
    foreach (Match m in re.Matches(text))
    {
        string email = m.ToString();
        if (!emails.Contains(email))
            emails.Add(email);
    }
    return emails;
}

This method expects a string that contains some email addresses that we want to parse. It parses the emails using a regular expression (which we won't go into details about!). We then iterate through all the matches in the Regex and add the found email addresses to our list, provided they aren't already present. Once we have found all the email addresses, we return the list of unique addresses. The presenter then passes that list of parsed emails to the view.

Selecting contacts

Once we have handled the upload process and parsed out the emails, we need to display all the emails to the user so that they can select which ones they want to invite. Now, you could do several sneaky things here. Technically, the user has uploaded all of their email addresses to you. You have them. You could store them. You could invite every single address regardless of what the user wants. And while this might benefit your community over the short run, your users would eventually find out about your sneaky practice and your community would start to dwindle. Don't take advantage of your user's trust!
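The ParseEmailsFromText routine shown earlier is easy to exercise on its own. Here is a minimal console sketch using the same pattern and duplicate filtering; the class name and sample addresses are illustrative, and note the backslashes in the pattern, which are easily lost when code is copied from formatted pages:

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

class EmailParseDemo
{
    static List<string> ParseEmailsFromText(string text)
    {
        List<string> emails = new List<string>();
        string strRegex = @"\w+([-+.]\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*";
        Regex re = new Regex(strRegex, RegexOptions.Multiline);
        foreach (Match m in re.Matches(text))
        {
            string email = m.ToString();
            if (!emails.Contains(email)) // keep only unique addresses
                emails.Add(email);
        }
        return emails;
    }

    static void Main()
    {
        string contacts = "alice@example.com, Bob <bob@example.org>, alice@example.com";
        // The duplicate alice@example.com is filtered out, leaving two addresses.
        Console.WriteLine(string.Join(", ", ParseEmailsFromText(contacts)));
    }
}
```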
Here is the panel that displays the parsed addresses:

```aspx
//Friends/OutlookCsvImporter.aspx
<asp:Panel Visible="false" ID="pnlEmails" runat="server">
    <div class="divContainerTitle">Select Contacts</div>
    <div class="divContainerFooter"><asp:Button ID="btnInviteContacts1" runat="server" OnClick="btnInviteContacts_Click" Text="Invite Selected Contacts" /></div>
    <div class="divContainerCell" style="text-align:left;">
        <asp:CheckBoxList ID="cblEmails" RepeatColumns="2" runat="server"></asp:CheckBoxList>
    </div>
    <div class="divContainerFooter"><asp:Button ID="btnInviteContacts2" runat="server" OnClick="btnInviteContacts_Click" Text="Invite Selected Contacts" /></div>
</asp:Panel>
```

Notice that we have a checkbox list in our panel. This checkbox list is bound to the returned list of email addresses.

```csharp
public void ShowParsedEmail(List<string> Emails)
{
    pnlUpload.Visible = false;
    pnlResult.Visible = false;
    pnlEmails.Visible = true;
    cblEmails.DataSource = Emails;
    cblEmails.DataBind();
}
```

The output so far looks like this:

Now the user has a list of all the email addresses that they uploaded, and can go through it selecting the ones that they want to invite into our system. Once they have selected the emails they want to invite, they can click on the Invite button. We then iterate through all the items in the checkbox list to locate the selected items.

```csharp
protected void btnInviteContacts_Click(object sender, EventArgs e)
{
    string emails = "";
    foreach (ListItem li in cblEmails.Items)
    {
        if (li != null && li.Selected)
            emails += li.Text + ",";
    }
    // Guard against an empty selection before trimming the trailing comma
    if (emails.Length == 0)
        return;
    emails = emails.Substring(0, emails.Length - 1);
    _presenter.InviteContacts(emails);
}
```

Once we have gathered all the selected emails, we pass them to the presenter to run the invitation process.

```csharp
public void InviteContacts(string ToEmailArray)
{
    string result = _email.SendInvitations(_userSession.CurrentUser, ToEmailArray, "");
    _view.ShowInvitationResult(result);
}
```

The presenter promptly passes the selected items to the Email class to handle the invitations.
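As an aside, the concatenate-then-trim loop in btnInviteContacts_Click can be written more defensively with string.Join, which also handles an empty selection gracefully. Here is a sketch using plain tuples as stand-ins for the CheckBoxList's ListItem objects (the JoinSelected helper is hypothetical, not part of the site's codebase):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class SelectedEmailsDemo
{
    // Stand-in for cblEmails.Items; in the real page these would be ListItems.
    static string JoinSelected(IEnumerable<(string Text, bool Selected)> items)
    {
        // string.Join avoids the manual trailing-comma trim, and an empty
        // selection simply yields an empty string instead of throwing.
        return string.Join(",", items.Where(i => i.Selected).Select(i => i.Text));
    }

    static void Main()
    {
        var items = new[]
        {
            ("john.smith@example.com", true),
            ("jane-doe@example.org", false),
            ("bob@example.net", true)
        };
        Console.WriteLine(JoinSelected(items));
        // john.smith@example.com,bob@example.net
    }
}
```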
SendInvitations is the same method that we used in the last section to invite users.

```csharp
//Core/Impl/Email.cs
public string SendInvitations(Account sender, string ToEmailArray, string Message)
{...}
```

We then output the result, the list of email addresses that were invited, into the third display.

```aspx
<asp:Panel ID="pnlResult" runat="server" Visible="false">
    <div class="divContainerTitle">Invitations Sent!</div>
    <div class="divContainerCell">
        Invitations were sent to the following emails:<br />
        <asp:Label ID="lblMessage" runat="server"></asp:Label>
    </div>
</asp:Panel>
```