
How-To Tutorials - Programming

1081 Articles

Intro to Docker Part 2: Developing a Simple Application

Julian Gindi
30 Oct 2015
5 min read
In my last post, we learned some basic concepts related to Docker, along with a few basic operations for using Docker containers. In this post, we will develop a simple application using Docker. Along the way, we will learn how to use Dockerfiles and Docker's amazing 'compose' feature to link multiple containers together.

The Application

We will be building a simple clone of Reddit's very awesome and mysterious "The Button". The application will be written in Python using the Flask web framework and will use Redis as its storage backend. If you do not know Python or Flask, fear not: the code is very readable, and you are not required to understand it to follow along with the Docker-specific sections.

Getting Started

Before we get started, we need to create a few files and directories. First, go ahead and create a Dockerfile, a requirements.txt file (where we will specify project-specific dependencies), and a main app.py file:

```
touch Dockerfile requirements.txt app.py
```

Next, we will create a simple endpoint that returns "Hello World". Go ahead and edit your app.py file to look like this:

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def main():
    return 'Hello World!'

if __name__ == '__main__':
    app.run('0.0.0.0')
```

Now we need to tell Docker how to build a container containing all the dependencies and code needed to run the app. Edit your Dockerfile to look like this:

```
1  FROM python:2.7
2
3  RUN mkdir /code
4  WORKDIR /code
5
6  ADD requirements.txt /code/
7  RUN pip install -r requirements.txt
8
9  ADD . /code/
10
11 EXPOSE 5000
```

Before we move on, let me explain the basics of Dockerfiles.

Dockerfiles

A Dockerfile is a configuration file that specifies instructions on how to build a Docker container. I will now explain each line in the Dockerfile we just created (referencing individual lines by number):

Line 1: First, we specify the base image to use as our starting point (we discussed this in more detail in the last post). Here we are using a stock Python 2.7 image.

Line 3: Dockerfiles can contain a few 'directives' that dictate certain behaviors. RUN is one such directive. It does exactly what it sounds like: it runs an arbitrary command. Here, we are just making a working directory.

Line 4: We use WORKDIR to specify the main working directory.

Line 6: ADD allows us to selectively add files to the container during the build process. For now, we just need to add the requirements file so that Docker knows which dependencies to install.

Line 7: We use the RUN directive and Python's pip package manager to install all the needed dependencies.

Line 9: Here we add all the code in our current directory into the Docker container (ADD . /code/).

Line 11: Finally, we 'expose' the ports we will need to access. In this case, Flask will run on port 5000.

Building from a Dockerfile

We are almost ready to build an image from this Dockerfile, but first, let's specify the dependencies we will need in our requirements.txt file:

```
flask==0.10.1
redis==2.10.3
```

I am using specific versions here to ensure that your version will work just like mine does. Once we have all these pieces in place, we can build the image with the following command:

```
> docker build -t thebutton .
```

We are 'tagging' this image with an easy-to-remember name that we can use later. Once the build completes, we can run the container and see our message in the browser:

```
> docker run -p 5000:5000 thebutton python app.py
```

We are doing a few things here: the -p flag tells Docker to map port 5000 inside the container to port 5000 outside the container (this just makes our lives easier).
Next, we specify the image name (thebutton) and finally the command to run inside the container - python app.py - which will start the web server and serve our page. We are almost ready to view our page, but first we must discover which IP address the site will be on. For Linux-based systems, you can use localhost, but for Mac you will need to run boot2docker ip to discover the IP address to visit. Navigate to your site (in my case it's 192.168.59.103:5000) and you should see "Hello World" printed. Congrats! You are running your first site from inside a Docker container.

Putting It All Together

Now we are going to complete the app and use Docker Compose to launch the entire project for us. This will contain two containers: one running our Flask app, and another running an instance of Redis. The great thing about docker-compose is that you can specify a system to create and how to connect all the containers. Let's create our docker-compose.yml file now:

```yaml
redis:
  image: redis:2.8.19

web:
  build: .
  command: python app.py
  ports:
    - "5000:5000"
  links:
    - redis:redis
```

This file specifies the two containers (web and redis) and how to build each one (we are just using the stock redis image here). The web container is a bit more involved, since we first build the container using our local Dockerfile (the build: . line). Then we expose port 5000 and link the Redis container to our web container. The awesome thing about linking containers this way is that the web container automatically gets information about the redis container. In this case, there is an /etc/hosts entry called 'redis' that points to our Redis container. This allows us to configure Redis easily in our application:

```python
db = redis.StrictRedis('redis', 6379, 0)
```

To test this all out, you can grab the complete source here. All you need to run is docker-compose up, and then access the site the same way we did before. Congratulations! You now have all the tools you need to use Docker effectively!

About the author

Julian Gindi is a Washington, DC-based software and infrastructure engineer. He currently serves as Lead Infrastructure Engineer at [iStrategylabs](isl.co), where he does everything from system administration to designing and building deployment systems. He is most passionate about operating system design and implementation, and in his free time contributes to the Linux kernel.
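As an aside, here is a minimal sketch of how the finished app.py might use the linked Redis container. This is an illustrative guess rather than the code from the linked repository; the /press route and the presses key are made-up names used only for this sketch.

```python
from flask import Flask
import redis

app = Flask(__name__)

# 'redis' resolves to the linked container, courtesy of the links entry
# in docker-compose.yml.
db = redis.StrictRedis('redis', 6379, 0)

@app.route('/')
def main():
    # 'presses' is a hypothetical key name used for this sketch.
    count = int(db.get('presses') or 0)
    return 'The button has been pressed {0} times'.format(count)

@app.route('/press')
def press():
    db.incr('presses')
    return 'Thanks for pressing the button!'

if __name__ == '__main__':
    app.run('0.0.0.0')
```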


Team Project Setup

Packt
29 Oct 2015
5 min read
In this article by Tarun Arora and Ahmed Al-Asaad, authors of the book Microsoft Team Foundation Server Cookbook, you will learn about:

- Using Team Explorer to connect to Team Foundation Server
- Creating and setting up a new Team Project for a scrum team

(For more resources related to this topic, see here.)

Microsoft Visual Studio Team Foundation Server 2015 is the backbone of Microsoft's Application Lifecycle Management (ALM) solution, providing core services such as version control, work item tracking, reporting, and automated builds. Team Foundation Server helps organizations communicate and collaborate more effectively throughout the process of designing, building, testing, and deploying software, ultimately leading to increased productivity and team output, improved quality, and greater visibility into the application life cycle.

Team Foundation Server is Microsoft's on-premises application lifecycle management offering; Visual Studio Online is a collection of developer services that runs on Microsoft Azure and extends the development experience into the cloud. Team Foundation Server is very flexible and supports a broad spectrum of topologies. While a simple one-machine setup may suffice for small teams, you'll see enterprises using scaled-out, complex topologies. You'll find that TFS topologies are largely driven by the scope and scale of its use in an organization. Ensure that you have the details of your Team Foundation Server handy. Please refer to the Microsoft Visual Studio Licensing guide, available at the following link, to learn about the license requirements for Team Foundation Server: http://www.microsoft.com/en-gb/download/details.aspx?id=13350.

Using Team Explorer to connect to Team Foundation Server 2015 and GitHub

To build, plan, and track your software development project using Team Foundation Server, you'll need to connect the client of your choice to Team Foundation Server. In this recipe, we'll focus on connecting Team Explorer to Team Foundation Server.

Getting ready

Team Explorer is installed with each version of Visual Studio; alternatively, you can install Team Explorer from the Microsoft download center as a standalone client. When you start Visual Studio for the first time, you'll be asked to sign in with a Microsoft account, such as Live or Hotmail, and provide some basic registration information. You should choose a Microsoft account that best represents you. If you already have an MSDN account, it's recommended that you sign in with its associated Microsoft account. If you don't have a Microsoft account, you can create one for free. Logging in is advisable, but not mandatory.

How to do it...

1. Open Visual Studio 2015.
2. Click on the Team Toolbar and select Connect to Team Foundation Server.
3. In Team Explorer, click on Select Team Projects....
4. In the Connect to Team Foundation Server form, the dropdown shows a list of all the TFS servers you have connected to before. If you can't see the server you want to connect to in the dropdown, click on Servers to enter the details of the Team Foundation Server.
5. Click on Add, enter the details of your TFS server, and then click on OK. You may be required to enter login details to authenticate against the TFS server.
6. Click Close on the Add/Remove Team Foundation Server form. You should now see the details of your server in the Connect to Team Foundation Server form. At the bottom left, you'll see the user ID being used to establish this connection.
7. Click on Connect to complete the connection; this will navigate you back to Team Explorer.

At this point, you have successfully connected Team Explorer to Team Foundation Server.

Creating and setting up a new Team Project for a Scrum Team

Software projects require a logical container to store project artifacts such as work items, code, builds, releases, and documents. In Team Foundation Server, this logical container is referred to as a Team Project. Different teams follow different processes to organize, manage, and track their work. Team Projects can be customized to specific project delivery frameworks through process templates. This recipe explains how to create a new team project for a scrum team in Team Foundation Server.

Getting ready

The creation of a new Team Project needs to be triggered from Team Explorer. Before you can create a new Team Project, you need to connect Team Explorer to Team Foundation Server. The recipe Connecting Team Explorer to Team Foundation Server explains how this can be done. In order to create a new Team Project, you will need the following permissions:

- You must have the Create new projects permission on the TFS application tier. This permission is granted by adding users to the Project Collection Administrators TFS group. The Team Foundation Administrators global group also includes this permission.
- You must have the Create new team sites permission within the SharePoint site collection that corresponds to the TFS team project collection. This permission is granted by adding the user to a SharePoint group with Full Control rights on the SharePoint site collection.
- In order to use the SQL Server Reporting Services features, you must be a member of the Team Foundation Content Manager role in Reporting Services.

To verify whether you have the correct permissions, you can download the Team Foundation Server Administration Tool from CodePlex, available at https://tfsadmin.codeplex.com/. TFS Admin is an open source tool available under the Microsoft Public License (Ms-PL).

Summary

In this article, we have looked at setting up a Team Project in Team Foundation Server 2015. We started off by connecting Team Explorer to Team Foundation Server and GitHub. We then looked at creating a team project and setting up a scrum team.

Resources for Article:

Further resources on this subject:
- Introducing Liferay for Your Intranet [article]
- Preparing our Solution [article]
- Work Item Querying [article]
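For readers who prefer to script the connection rather than click through Team Explorer, here is a hedged sketch using the TFS client object model; it is not part of the recipe above. It assumes the Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.Server assemblies are referenced, and the collection URL shown is a placeholder.

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Server;

class ListTeamProjects
{
    static void Main()
    {
        // Placeholder URL; substitute your own server and collection name.
        var collectionUri = new Uri("http://tfsserver:8080/tfs/DefaultCollection");

        using (var collection =
            TfsTeamProjectCollectionFactory.GetTeamProjectCollection(collectionUri))
        {
            // Authenticates the connection, mirroring the Team Explorer login step.
            collection.EnsureAuthenticated();

            var css = collection.GetService<ICommonStructureService>();
            foreach (ProjectInfo project in css.ListProjects())
            {
                Console.WriteLine(project.Name);
            }
        }
    }
}
```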


C# Language Support for Asynchrony

Packt
28 Oct 2015
25 min read
In this article by Eugene Agafonov and Andrew Koryavchenko, the authors of the book Mastering C# Concurrency, we look at the Task Parallel Library in detail and at the C# language infrastructure that supports asynchronous calls. The Task Parallel Library makes it possible to combine asynchronous tasks and set dependencies between them. To get a clear understanding, in this article we will use this approach to solve a real problem: downloading images from Bing (the search engine). Also, we will do the following:

- Implement a standard synchronous approach
- Use the Task Parallel Library to create an asynchronous version of the program
- Use C# 5.0 built-in asynchrony support to make the code easier to read and maintain
- Simulate the C# asynchronous infrastructure with the help of iterators
- Learn about other useful features of the Task Parallel Library
- Make any C# type compatible with the built-in asynchronous keywords

(For more resources related to this topic, see here.)

Implementing the downloading of images from Bing

Every day, Bing.com publishes a background image that can be used as desktop wallpaper. There is an XML API to get information about these pictures, which can be found at http://www.bing.com/hpimagearchive.aspx.

Creating a simple synchronous solution

Let's try to write a program to download the last eight images from this site. We will start by defining objects to store image information. This is where a thumbnail image and its description will be stored:

```csharp
using System.Drawing;

public class WallpaperInfo {
  private readonly Image _thumbnail;
  private readonly string _description;

  public WallpaperInfo(Image thumbnail, string description) {
    _thumbnail = thumbnail;
    _description = description;
  }

  public Image Thumbnail {
    get { return _thumbnail; }
  }

  public string Description {
    get { return _description; }
  }
}
```

The next container type is for all the downloaded pictures and the time required to download and make the thumbnail images from the original pictures:

```csharp
public class WallpapersInfo {
  private readonly long _milliseconds;
  private readonly WallpaperInfo[] _wallpapers;

  public WallpapersInfo(long milliseconds, WallpaperInfo[] wallpapers) {
    _milliseconds = milliseconds;
    _wallpapers = wallpapers;
  }

  public long Milliseconds {
    get { return _milliseconds; }
  }

  public WallpaperInfo[] Wallpapers {
    get { return _wallpapers; }
  }
}
```

Now we need to create a loader class to download images from Bing. We need to define a Loader static class and follow with an implementation. Let's create a method that will make a thumbnail image from the source image stream:

```csharp
private static Image GetThumbnail(Stream imageStream) {
  using (imageStream) {
    var fullBitmap = Image.FromStream(imageStream);
    return new Bitmap(fullBitmap, 192, 108);
  }
}
```

To communicate via the HTTP protocol, it is recommended to use the System.Net.Http.HttpClient type from the System.Net.Http.dll assembly.
Let's create the following extension methods, which will allow us to use the POST HTTP method to download an image and get an opened stream:

```csharp
private static Stream DownloadData(this HttpClient client, string uri) {
  var response = client.PostAsync(uri, new StringContent(string.Empty)).Result;
  return response.Content.ReadAsStreamAsync().Result;
}

private static Task<Stream> DownloadDataAsync(this HttpClient client, string uri) {
  Task<HttpResponseMessage> responseTask = client.PostAsync(
    uri, new StringContent(string.Empty));
  return responseTask.ContinueWith(task =>
    task.Result.Content.ReadAsStreamAsync()).Unwrap();
}
```

To create the easiest implementation possible, we will implement the downloading without any asynchrony. Here, we will define the HTTP endpoints for the Bing API:

```csharp
private const string _catalogUri =
  "http://www.bing.com/hpimagearchive.aspx?format=xml&idx=0&n=8&mbl=1&mkt=en-ww";
private const string _imageUri =
  "http://bing.com{0}_1920x1080.jpg";
```

Then, we will start measuring the time required to finish the download and download an XML catalog that has information about the images we need:

```csharp
var sw = Stopwatch.StartNew();
var client = new HttpClient();
var catalogXmlString = client.DownloadString(_catalogUri);
```

Next, the XML string will be parsed into an XML document:

```csharp
var xDoc = XDocument.Parse(catalogXmlString);
```

Now, using LINQ to XML, we will query the information needed from the document and run the download process for each image:

```csharp
var wallpapers = xDoc
  .Root
  .Elements("image")
  .Select(e =>
    new {
      Desc = e.Element("copyright").Value,
      Url = e.Element("urlBase").Value
    })
  .Select(item =>
    new {
      item.Desc,
      FullImageData = client.DownloadData(
        string.Format(_imageUri, item.Url))
    })
  .Select(item =>
    new WallpaperInfo(
      GetThumbnail(item.FullImageData),
      item.Desc))
  .ToArray();

sw.Stop();
```

The first Select method call extracts the image URL and description from each image XML element that is a direct child of the root element. This information is contained inside the urlBase and copyright XML elements inside the image element. The second one downloads an image from the Bing site. The last Select method creates a thumbnail image and stores all the information needed inside a WallpaperInfo class instance.

To display the results, we need to create a user interface. Windows Forms is a simple and fast-to-implement technology, so we use it to show the results to the user. There is a button that runs the download, a panel to show the downloaded pictures, and a label that will show the time required to finish the download. Here is the implementation code.
It includes a calculation of the top coordinate for each element, code to display the images, and code to start the download process:

```csharp
private int GetItemTop(int height, int index) {
  return index * (height + 8) + 8;
}

private void RefreshContent(WallpapersInfo info) {
  _resultPanel.Controls.Clear();
  _resultPanel.Controls.AddRange(
    info.Wallpapers.SelectMany((wallpaper, i) => new Control[] {
      new PictureBox {
        Left = 4,
        Image = wallpaper.Thumbnail,
        AutoSize = true,
        Top = GetItemTop(wallpaper.Thumbnail.Height, i)
      },
      new Label {
        Left = wallpaper.Thumbnail.Width + 8,
        Top = GetItemTop(wallpaper.Thumbnail.Height, i),
        Text = wallpaper.Description,
        AutoSize = true
      }
    }).ToArray());

  _timeLabel.Text = string.Format("Time: {0}ms", info.Milliseconds);
}

private void _loadSyncBtn_Click(object sender, System.EventArgs e) {
  var info = Loader.SyncLoad();
  RefreshContent(info);
}
```

The result is a form showing the downloaded thumbnails and their descriptions. The time to download all these images should be a few seconds if the internet connection is broadband. Can we do this faster? We certainly can! Right now we download and process the images one by one, but we could perfectly well process each image in parallel.

Creating a parallel solution with the Task Parallel Library

With the Task Parallel Library, the code naturally splits into several stages that show the relationships between tasks:

- Load the images catalog XML from Bing
- Parse the XML document and get the information needed about the images
- Load each image's data from Bing
- Create a thumbnail image for each downloaded image

The process can be visualized as a dependency chart. HttpClient has a naturally asynchronous API, so we only need to combine everything together with the help of the Task.ContinueWith method:

```csharp
public static Task<WallpapersInfo> TaskLoad() {
  var sw = Stopwatch.StartNew();
  var downloadBingXmlTask = new HttpClient().GetStringAsync(_catalogUri);
  var parseXmlTask = downloadBingXmlTask.ContinueWith(task => {
    var xmlDocument = XDocument.Parse(task.Result);
    return xmlDocument.Root
      .Elements("image")
      .Select(e =>
        new {
          Description = e.Element("copyright").Value,
          Url = e.Element("urlBase").Value
        });
  });
  var downloadImagesTask = parseXmlTask.ContinueWith(
    task => Task.WhenAll(
      task.Result.Select(item => new HttpClient()
        .DownloadDataAsync(string.Format(_imageUri, item.Url))
        .ContinueWith(downloadTask => new WallpaperInfo(
          GetThumbnail(downloadTask.Result), item.Description)))))
    .Unwrap();
  return downloadImagesTask.ContinueWith(task => {
    sw.Stop();
    return new WallpapersInfo(sw.ElapsedMilliseconds, task.Result);
  });
}
```

The code has some interesting moments. The first task is created by the HttpClient instance, and it completes when the download process succeeds. We then attach a subsequent task, which uses the XML string downloaded by the previous task: we create an XML document from this string and extract the information needed. Now this is becoming more complicated. We want to create a task to download each image and continue until all these tasks complete successfully. So we use the LINQ Select method to run a download for each image that was defined in the XML catalog, and after the download process completes, we create a thumbnail image and store the information in a WallpaperInfo instance.
This creates IEnumerable<Task<WallpaperInfo>> as a result, and to wait for all these tasks to complete, we use the Task.WhenAll method. However, this is a task that is inside a continuation task, and the result is going to be of the Task<Task<WallpaperInfo[]>> type. To get the inner task, we use the Unwrap method, which has the following syntax:

```csharp
public static Task Unwrap(this Task<Task> task)
```

This can be used on any Task<Task> instance and will create a proxy task that represents the entire asynchronous operation properly. The last task, which stops the timer and returns the downloaded images, is quite straightforward. We have to add another button to the UI to run this implementation. Notice the implementation of the button click handler:

```csharp
private void _loadTaskBtn_Click(object sender, System.EventArgs e) {
  var info = Loader.TaskLoad();
  info.ContinueWith(task => RefreshContent(task.Result),
    CancellationToken.None,
    TaskContinuationOptions.None,
    TaskScheduler.FromCurrentSynchronizationContext());
}
```

Since the TaskLoad method is asynchronous, it returns immediately. To display the results, we have to define a continuation task. The default task scheduler will run a task's code on a thread pool worker thread. To work with UI controls, we have to run the code on the UI thread, so we use a task scheduler that captures the current synchronization context and runs the continuation task on it. Let's name the button Load using TPL and test the results. If your internet connection is fast, this implementation will download the images in parallel, much faster than the previous sequential download process.

If we look back at the code, we will see that it is quite hard to understand what it actually does. We can see how one task depends on another, but the original goal is unclear despite the code being very compact. Imagine what would happen if we tried to add exception handling here. We would have to append an additional continuation task with exception handling to each task, which would be much harder to read and understand. In a real-world program, it would be challenging to keep this task composition in mind and to maintain code written in such a paradigm.

Enhancing the code with C# 5.0 built-in support for asynchrony

Fortunately, C# 5.0 introduced the async and await keywords, which are intended to make asynchronous code look like synchronous code, and thus make the code easier to read and the program flow easier to understand. However, this is another abstraction, and it hides many of the things that happen under the hood from the programmer, which in several situations is not a good thing.
Now let's rewrite the previous code using the new C# 5.0 features:

```csharp
public static async Task<WallpapersInfo> AsyncLoad() {
  var sw = Stopwatch.StartNew();
  var client = new HttpClient();
  var catalogXmlString = await client.GetStringAsync(_catalogUri);
  var xDoc = XDocument.Parse(catalogXmlString);
  var wallpapersTask = xDoc
    .Root
    .Elements("image")
    .Select(e =>
      new {
        Description = e.Element("copyright").Value,
        Url = e.Element("urlBase").Value
      })
    .Select(async item =>
      new {
        item.Description,
        FullImageData = await client.DownloadDataAsync(
          string.Format(_imageUri, item.Url))
      });
  var wallpapersItems = await Task.WhenAll(wallpapersTask);
  var wallpapers = wallpapersItems.Select(
    item => new WallpaperInfo(
      GetThumbnail(item.FullImageData), item.Description));
  sw.Stop();
  return new WallpapersInfo(sw.ElapsedMilliseconds, wallpapers.ToArray());
}
```

Now the code looks almost like the first synchronous implementation. The AsyncLoad method has an async modifier and a Task<T> return value, and such methods must always return Task or be declared as void; this is enforced by the compiler. However, in the method's code, the type that is returned is just T. This is strange at first, but the method's return value will eventually be turned into Task<T> by the C# 5.0 compiler. The async modifier is necessary to use await inside the method. In the code above, there is an await inside a lambda expression, so we need to mark this lambda as async as well.

So what is going on when we use await inside our code? It does not always mean that the call is actually asynchronous. It can happen that by the time we call the method, the result is already available, so we just get the result and proceed further. However, the most common case is when we make an asynchronous call. In that case, we start, for example, downloading an XML string from Bing via HTTP and immediately return a task that is a continuation task and contains the rest of the code after the line with await.

To run this, we need to add another button named Load using async. We are going to use await in the button click event handler as well, so we need to mark it with the async modifier:

```csharp
private async void _loadAsyncBtn_Click(object sender, System.EventArgs e) {
  var info = await Loader.AsyncLoad();
  RefreshContent(info);
}
```

Now, if the code after await is being run in a continuation task, why is there no multithreaded access exception? The RefreshContent method runs in another task, but the C# compiler is aware of the synchronization context and generates code that executes the continuation task on the UI thread. The result should be as fast as the TPL implementation, but the code is much cleaner and easier to follow.

Last but not least is the possibility to put asynchronous method calls inside a try block. The C# compiler generates code that propagates the exception into the current context and unwraps the AggregateException instance to get the original exception from it. In C# 5.0, it was impossible to use await inside catch and finally blocks, but C# 6.0 introduced a new async/await infrastructure and this limitation was removed.
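To illustrate that last point, here is a minimal sketch, not taken from the book's sample project, of the click handler above with exception handling added; the compiler surfaces the original exception, so a plain catch block works as expected:

```csharp
private async void _loadAsyncBtn_Click(object sender, System.EventArgs e) {
  try {
    var info = await Loader.AsyncLoad();
    RefreshContent(info);
  }
  catch (HttpRequestException ex) {
    // The original exception is caught here, not an AggregateException.
    _timeLabel.Text = "Download failed: " + ex.Message;
  }
}
```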
Simulating the C# asynchronous infrastructure with iterators

To dig into the implementation details, it makes sense to look at the decompiled code of the AsyncLoad method:

```csharp
public static Task<WallpapersInfo> AsyncLoad() {
  Loader.<AsyncLoad>d__21 stateMachine;
  stateMachine.<>t__builder =
    AsyncTaskMethodBuilder<WallpapersInfo>.Create();
  stateMachine.<>1__state = -1;
  stateMachine
    .<>t__builder
    .Start<Loader.<AsyncLoad>d__21>(ref stateMachine);
  return stateMachine.<>t__builder.Task;
}
```

The method body was replaced by compiler-generated code that creates a special kind of state machine. We will not review the implementation details further here, because they are quite complicated and subject to change from version to version. However, what is going on is that the code gets divided into separate pieces at each line where await is present, and each piece becomes a separate state in the generated state machine. Then, a special System.Runtime.CompilerServices.AsyncTaskMethodBuilder structure creates a Task that represents the generated state machine workflow.

This state machine is quite similar to the one that is generated for iterator methods that leverage the yield keyword. In C# 6.0, the same universal code gets generated for code containing yield and await. To illustrate the general principles behind the generated code, we can use iterator methods to implement another version of the asynchronous image download from Bing.

Therefore, we can turn an asynchronous method into an iterator method that returns an IEnumerable<Task> instance. We replace each await with yield return, so that each iteration is returned as a Task. To run such a method, we need to execute each task and return the final result. This code can be considered an analogue of AsyncTaskMethodBuilder:

```csharp
private static Task<TResult> ExecuteIterator<TResult>(
  Func<Action<TResult>, IEnumerable<Task>> iteratorGetter) {
  return Task.Run(() => {
    var result = default(TResult);
    foreach (var task in iteratorGetter(res => result = res))
      task.Wait();
    return result;
  });
}
```

We iterate through each task and wait for its completion. Since we cannot use out and ref parameters in iterator methods, we use a lambda expression to return the result from each task. To make the code easier to understand, we have created a new container task and used a foreach loop; however, to be closer to the original implementation, we should get the first task, use the ContinueWith method to provide the next task to it, and continue until the last task. In that case, we would end up with one final task representing the entire sequence of asynchronous operations, but the code would become more complicated as well.

Since it is not possible to use the yield keyword inside a lambda expression in the current C# versions, we will implement the image download and thumbnail generation as a separate method:

```csharp
private static IEnumerable<Task> GetImageIterator(
  string url,
  string desc,
  Action<WallpaperInfo> resultSetter) {
  var loadTask = new HttpClient().DownloadDataAsync(
    string.Format(_imageUri, url));
  yield return loadTask;
  var thumbTask = Task.FromResult(GetThumbnail(loadTask.Result));
  yield return thumbTask;
  resultSetter(new WallpaperInfo(thumbTask.Result, desc));
}
```

It looks like common C# async code, with yield return used instead of the await keyword and resultSetter used instead of return. Notice the Task.FromResult method that we used to get a Task from the synchronous GetThumbnail method.
We could use Task.Run and put this operation on a separate worker thread, but that would be an ineffective solution; Task.FromResult allows us to get a Task that is already completed and has a result. If you use await with such a task, it will be translated into a synchronous call. The main code can be rewritten in the same way:

```csharp
private static IEnumerable<Task> GetWallpapersIterator(
  Action<WallpaperInfo[]> resultSetter) {
  var catalogTask = new HttpClient().GetStringAsync(_catalogUri);
  yield return catalogTask;
  var xDoc = XDocument.Parse(catalogTask.Result);
  var imagesTask = Task.WhenAll(xDoc
    .Root
    .Elements("image")
    .Select(e => new {
      Description = e.Element("copyright").Value,
      Url = e.Element("urlBase").Value
    })
    .Select(item => ExecuteIterator<WallpaperInfo>(
      resSetter => GetImageIterator(
        item.Url, item.Description, resSetter))));
  yield return imagesTask;
  resultSetter(imagesTask.Result);
}
```

This combines everything together:

```csharp
public static WallpapersInfo IteratorLoad() {
  var sw = Stopwatch.StartNew();
  var wallpapers = ExecuteIterator<WallpaperInfo[]>(
    GetWallpapersIterator)
      .Result;
  sw.Stop();
  return new WallpapersInfo(sw.ElapsedMilliseconds, wallpapers);
}
```

To run this, we will create one more button called Load using iterator. The button click handler just runs the IteratorLoad method and then refreshes the UI. This also works at about the same speed as the other asynchronous implementations. This example can help us understand the logic behind the C# code generation for asynchronous methods that use await. Of course, the real code is much more complicated, but the principles behind it remain the same.

Is the async keyword really needed?

A common question is why we need to mark methods as async at all. We have already mentioned iterator methods in C# and the yield keyword. They are very similar to async/await, and yet we do not need to mark iterator methods with any modifier. The C# compiler is able to determine that a method is an iterator method when it meets the yield return or yield break operators inside it. So the question is, why is it not the same with await and asynchronous methods?

The reason is that asynchrony support was introduced in a later C# version, and it is very important not to break any legacy code while changing the language. Imagine if some code used await as the name of a field or variable. If the C# designers had made await a keyword unconditionally, this old code would break and stop compiling. The current approach guarantees that if we do not mark a method with async, the old code will continue to work.

Fire-and-forget tasks

Besides Task and Task<T>, we can declare an asynchronous method as void. This is useful in the case of top-level event handlers, for example, the button click or text changed handlers in the UI. An event handler that returns a value is possible, but is very inconvenient to use and does not make much sense. So allowing async void methods makes it possible to use await inside such event handlers:

```csharp
private async void button1_Click(object sender, EventArgs e) {
  await SomeAsyncStuff();
}
```

It seems that nothing bad is happening, and the C# compiler generates almost the same code as for a Task-returning method, but there is an important catch related to exception handling. When an asynchronous method returns Task, exceptions are connected to this task and can be handled both by the TPL and by a try/catch block in case await is used.
However, if we have an async void method, we have no Task to attach the exceptions to, and those exceptions just get posted to the current synchronization context. These exceptions can be observed using AppDomain.UnhandledException or similar events in a GUI application, but this is very easy to miss and is not a good practice.

The other problem is that we cannot use a void-returning asynchronous method with await, since there is no returned task to await. We cannot compose such a method with other asynchronous tasks or have it participate in the program workflow. It is basically a fire-and-forget operation that we start, and then we have no way to control how it will proceed (unless we wrote the code for this explicitly).

Another problem is void-returning async lambda expressions. It is very hard to notice that a lambda returns void, and all the problems related to async void methods apply to such lambdas as well. Imagine that we want to run some operation in parallel. To achieve this, we can use the Parallel.ForEach method. To download some news in parallel, we could write code like this:

```csharp
Parallel.ForEach(Enumerable.Range(1, 10), async i => {
  var news = await newsClient.GetTopNews(i);
  newsCollection.Add(news);
});
```

However, this will not work, because the second parameter of the ForEach method is Action<T>, which is a void-returning delegate. Thus, we will spawn 10 download processes, but since we cannot wait for their completion, we abandon all the asynchronous operations we just started and ignore the results.

A general rule of thumb is to avoid using async void methods. If this is inevitable and there is an event handler, then always wrap the inner await method calls in try/catch blocks and provide exception handling.

Other useful TPL features

The Task Parallel Library has a large codebase and some useful features, such as Task.Unwrap or Task.FromResult, that are not very well known to developers. There are two more extremely useful methods that we have not mentioned yet. They are covered in the following sections.

Task.Delay

Often, it is required to wait for a certain amount of time in the code. One of the traditional ways to wait is to use the Thread.Sleep method. The problem is that Thread.Sleep blocks the current thread, and it is not asynchronous. Another disadvantage is that we cannot cancel the waiting if something has happened. To implement a solution for this, we would have to use system synchronization primitives such as an event, but this is not very easy to code. To keep the code simple, we can use the Task.Delay method:

```csharp
// Do something
await Task.Delay(1000);
// Do something
```

This method can be canceled with the help of the CancellationToken infrastructure and uses a system timer under the hood, so this kind of waiting is truly asynchronous.

Task.Yield

Sometimes we need a part of the code to be guaranteed to run asynchronously. For example, we may need to keep the UI responsive, or maybe we would like to implement a fine-grained scenario. In any case, as we already know, using await does not mean that the call will be asynchronous. If we want to return control immediately and run the rest of the code as a continuation task, we can use the Task.Yield method:

```csharp
// Do something
await Task.Yield();
// Do something
```

Task.Yield just causes a continuation to be posted on the current synchronization context, or, if the synchronization context is not available, the continuation will be posted on a thread pool worker thread.
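Before moving on, here is a small sketch, not from the book's code, of the cancellation point made in the Task.Delay section above: the delay is cancelled through a CancellationTokenSource, and the cancellation surfaces as a TaskCanceledException.

```csharp
var cts = new CancellationTokenSource();
cts.CancelAfter(200); // request cancellation after 200 ms

try {
  await Task.Delay(1000, cts.Token);
  Console.WriteLine("The delay completed normally");
}
catch (TaskCanceledException) {
  // The token was signalled before the 1000 ms delay elapsed.
  Console.WriteLine("The delay was cancelled");
}
```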
Implementing a custom awaitable type

Until now, we have only used Task with the await operator. However, it is not the only type that is compatible with await. Actually, the await operator can be used with every type that has a parameterless GetAwaiter method whose return type does the following:

- Implements the INotifyCompletion interface
- Contains an IsCompleted Boolean property
- Has a GetResult method with no parameters

This method can even be an extension method, so it is possible to extend existing types and add await compatibility to them. In this example, we will create such a method for the Uri type. This method will download content as a string via HTTP from the address provided in the Uri instance:

```csharp
private static TaskAwaiter<string> GetAwaiter(this Uri url) {
  return new HttpClient().GetStringAsync(url).GetAwaiter();
}

var content = await new Uri("http://google.com");
Console.WriteLine(content.Substring(0, 10));
```

If we run this, we will see the first 10 characters of the Google website content. As you may notice, here we used the Task type indirectly, returning the awaiter already provided for the Task type. We could implement an awaiter manually from scratch, but it really does not make any sense. To understand how this works, it is enough to create a custom wrapper around an already existing TaskAwaiter:

```csharp
struct DownloadAwaiter : INotifyCompletion {
  private readonly TaskAwaiter<string> _awaiter;

  public DownloadAwaiter(Uri uri) {
    Console.WriteLine("Start downloading from {0}", uri);
    var task = new HttpClient().GetStringAsync(uri);
    _awaiter = task.GetAwaiter();
    task.GetAwaiter().OnCompleted(() => Console.WriteLine(
      "download completed"));
  }

  public bool IsCompleted {
    get { return _awaiter.IsCompleted; }
  }

  public void OnCompleted(Action continuation) {
    _awaiter.OnCompleted(continuation);
  }

  public string GetResult() {
    return _awaiter.GetResult();
  }
}
```

With this code, we have customized the asynchronous execution so that it provides diagnostic information to the console. To get rid of TaskAwaiter entirely, it would be enough to replace the OnCompleted method with custom code that executes some operation and then the continuation provided to this method. To use this custom awaiter, we need to change GetAwaiter accordingly:

```csharp
private static DownloadAwaiter GetAwaiter(this Uri uri) {
  return new DownloadAwaiter(uri);
}
```

If we run this, we will see additional information on the console. This can be useful for diagnostics and debugging.

Summary

In this article, we have looked at the C# language infrastructure that supports asynchronous calls. We have covered the new C# keywords, async and await, and how we can use the Task Parallel Library with the new C# syntax. We have learned how C# generates code and creates a state machine that represents an asynchronous operation, and we implemented an analogous solution with the help of iterator methods and the yield keyword. Besides this, we have studied additional Task Parallel Library features and looked at how we can use await with any custom type.

Resources for Article:

Further resources on this subject:
- R – Classification and Regression Trees [article]
- Introducing the Boost C++ Libraries [article]
- Client and Server Applications [article]


Understanding Ranges

Packt
28 Oct 2015
40 min read
In this article by Michael Parker, author of the book Learning D, we will see that, since they were first introduced, ranges have become a pervasive part of D. It's possible to write D code and never need to create any custom ranges or algorithms yourself, but it helps tremendously to understand what they are, where they are used in Phobos, and how to get from a range to an array or another data structure. If you intend to use Phobos, you're going to run into them eventually. Unfortunately, some new D users have a difficult time understanding and using ranges. The aim of this article is to present ranges and functional styles in D from the ground up, so you can see they aren't some arcane secret understood only by a chosen few. Then, you can start writing idiomatic D early on in your journey. In this article, we lay the foundation with the basics of constructing and using ranges in two sections:

- Ranges defined
- Ranges in use

(For more resources related to this topic, see here.)

Ranges defined

In this section, we're going to explore what ranges are and see the concrete definitions of the different types of ranges recognized by Phobos. First, we'll dig into an example of the sort of problem ranges are intended to solve and, in the process, develop our own solution. This will help form an understanding of ranges from the ground up.

The problem

As part of an ongoing project, you've been asked to create a utility function, filterArray, that takes an array of any type and produces a new array containing all of the elements from the source array that pass a Boolean condition. The algorithm should be nondestructive, meaning it should not modify the source array at all. For example, given an array of integers as the input, filterArray could be used to produce a new array containing all of the even numbers from the source array.

It should be immediately obvious that a function template can handle the requirement to support any type. With a bit of thought and experimentation, a solution can soon be found to enable support for different Boolean expressions, perhaps a string mixin, a delegate, or both. After browsing the Phobos documentation for a bit, you come across a template that looks like it will help: std.functional.unaryFun. Its declaration is as follows:

```d
template unaryFun(alias fun, string parmName = "a");
```

The alias fun parameter can be a string representing an expression, or any callable type that accepts one argument. If it is the former, the name of the variable inside the expression should be parmName, which is "a" by default. The following snippet demonstrates this:

```d
int num = 10;
assert(unaryFun!("(a & 1) == 0")(num));
assert(unaryFun!("(x > 0)", "x")(num));
```

If fun is a callable type, then unaryFun is documented to alias itself to fun, and the parmName parameter is ignored.
The following snippet calls unaryFun first with a struct that implements opCall, then calls it again with a delegate literal:

```d
struct IsEven {
    bool opCall(int x) {
        return (x & 1) == 0;
    }
}
IsEven isEven;
assert(unaryFun!isEven(num));
assert(unaryFun!(x => x > 0)(num));
```

With this, you have everything you need to implement the utility function to spec:

```d
import std.functional;

T[] filterArray(alias predicate, T)(T[] source)
    if(is(typeof(unaryFun!predicate(source[0]))))
{
    T[] sink;
    foreach(t; source) {
        if(unaryFun!predicate(t))
            sink ~= t;
    }
    return sink;
}

unittest {
    auto ints = [1, 2, 3, 4, 5, 6, 7];
    auto even = ints.filterArray!(x => (x & 1) == 0)();
    assert(even == [2, 4, 6]);
}
```

The unittest verifies that the function works as expected. As a standalone implementation, it gets the job done and is quite likely good enough. But what if, later on down the road, someone decides to create more functions that perform specific operations on arrays in the same manner? The natural outcome of that is to use the output of one operation as the input for another, creating a chain of function calls to transform the original data.

The most obvious problem is that any such function that cannot perform its operation in place must allocate at least once every time it's called. This means chained operations on a single array will end up allocating memory multiple times. This is not the sort of habit you want to get into in any language, especially in performance-critical code, but in D, you also have to take the GC into account. Any given allocation could trigger a garbage collection cycle, so it's a good idea to program to the GC; don't be afraid of allocating, but do so only when necessary and keep it out of your inner loops.

In filterArray, the naïve appending can be improved upon, but the allocation can't be eliminated unless a second parameter is added to act as the sink. This allows the allocation strategy to be decided at the call site rather than by the function, but it leads to another problem. If all of the operations in a chain require a sink and the sink for one operation becomes the source for the next, then multiple arrays must be declared to act as sinks. This can quickly become unwieldy.

Another potential issue is that filterArray is eager, meaning that every time the function is called, the filtering takes place immediately. If all of the functions in a chain are eager, it becomes quite difficult to get around the need for allocations or multiple sinks. The alternative, lazy functions, do not perform their work at the time they are called, but rather at some future point. Not only does this make it possible to put off the work until the result is actually needed (if at all), it also opens the door to reducing the amount of copying or allocating needed by the operations in the chain. Everything could happen in one step at the end.

Finally, why should each operation be limited to arrays? Often, we want to execute an algorithm on the elements of a list, a set, or some other container, so why not support any collection of elements? By making each operation generic enough to work with any type of container, it's possible to build a library of reusable algorithms without the need to implement each algorithm for each type of container.

The solution

Now we're going to implement a more generic version of filterArray, called filter, which can work with any container type. It needs to avoid allocation and should also be lazy.
To facilitate this, the function should work with a well-defined interface that abstracts the container away from the algorithm. By doing so, it's possible to implement multiple algorithms that understand the same interface. It also takes the decision on whether or not to allocate completely out of the algorithms. The interface of the abstraction need not be an actual interface type. Template constraints can be used to verify that a given type meets the requirements.

You might have heard of duck typing. It originates from the old saying, "If it looks like a duck, swims like a duck, and quacks like a duck, then it's probably a duck." The concept is that if a given object instance has the interface of a given type, then it's probably an instance of that type. D's template constraints and compile-time capabilities easily allow for duck typing.

The interface

In looking for inspiration to define the new interface, it's tempting to turn to other languages like Java and C++. On one hand, we want to iterate the container elements, which brings to mind the iterator implementations in other languages. However, we also want to do a bit more than that, as demonstrated by the following chain of function calls:

```d
container.getType.algorithm1.algorithm2.algorithm3.toContainer();
```

Conceptually, the instance returned by getType will be consumed by algorithm1, meaning that inside the function, it will be iterated to the point where it can produce no more elements. But then, algorithm1 should return an instance of the same type, which can iterate over the same container, and which will in turn be consumed by algorithm2. The process repeats for algorithm3. This implies that instances of the new type should be able to be instantiated independently of the container they represent.

Moreover, given that D supports slicing, the role played by getType above could easily be filled by opSlice. Iteration need not always begin with the first element of a container and end with the last; any range of elements should be supported. In fact, there's really no reason for an actual container instance to exist at all in some cases. Imagine a random number generator; we should be able to plug one into the preceding function chain: just eliminate the container and replace getType with the generator instance. As long as it conforms to the interface we define, it doesn't matter that there is no concrete container instance backing it.

The short version is that we don't want to think solely in terms of iteration, as it's only a part of the problem we're trying to solve. We want a type that not only supports iteration, of either an actual container or a conceptual one, but one that also can be instantiated independently of any container, knows both its beginning and ending boundaries, and, in order to allow for lazy algorithms, can be used to generate new instances that know how to iterate over the same elements.

Considering these requirements, Iterator isn't a good fit as a name for the new type. Rather than naming it for what it does or how it's used, it seems more appropriate to name it for what it represents. There's more than one possible name that fits, but we'll go with Range (as in, a range of elements). That's it for the requirements and the type name. Now, moving on to the API.
For any algorithm that needs to sequentially iterate a range of elements from beginning to end, three basic primitives are required:

- There must be a way to determine whether or not any elements are available
- There must be a means to access the next element in the sequence
- There must be a way to advance the sequence so that another element can be made ready

Based on these requirements, there are several ways to approach naming the three primitives, but we'll just take a shortcut and use the same names used in D. The first primitive will be called empty and can be implemented either as a member function that returns bool or as a bool member variable. The second primitive will be called front, which again could be a member function or a variable, and which returns T, the element type of the range. The third primitive can only be a member function and will be called popFront, as conceptually it removes the current front from the sequence to ready the next element.

A range for arrays

Wrapping an array in the Range interface is quite easy. It looks like this:

```d
auto range(T)(T[] array) {
    struct ArrayRange(T) {
        private T[] _array;

        bool empty() @property {
            return _array.length == 0;
        }

        ref T front() {
            return _array[0];
        }

        void popFront() {
            _array = _array[1 .. $];
        }
    }
    return ArrayRange!T(array);
}
```

By implementing the range as a struct, there's no need to allocate GC memory for a new instance. The only member is a slice of the source array, which again avoids allocation. Look at the implementation of popFront. Rather than requiring a separate variable to track the current array index, it slices the first element out of _array so that the next element is always at index 0, consequently shortening the length of the slice by 1, so that after every item has been consumed, _array.length will be 0. This makes the implementation of both empty and front dead simple.

ArrayRange can be a Voldemort type because there is no need to declare its type in any algorithm it's passed to. As long as the algorithms are implemented as templates, the compiler can infer everything that needs to be known for them to work. Moreover, thanks to UFCS, it's possible to call this function as if it were an array property. Given an array called myArray, the following is valid:

```d
auto range = myArray.range;
```

Next, we need a template to go in the other direction. This needs to allocate a new array, walk the range, and store the result of each call to front in the new array. Its implementation is as follows:

```d
T[] array(T, R)(R range) @property {
    T[] ret;
    while(!range.empty) {
        ret ~= range.front;
        range.popFront();
    }
    return ret;
}
```

This can be called after any operation that produces a range in order to get an array. If the range comes at the end of one or more lazy operations, this will cause all of them to execute simply through the calls to popFront (we'll see how shortly). In that case, no allocations happen except as needed in this function when elements are appended to ret. Again, the appending strategy here is naïve, so there's room for improvement in order to reduce the potential number of allocations. Now it's time to implement an algorithm to make use of our new range interface.

The implementation of filter

The filter function isn't going to do any filtering at all. If that sounds counterintuitive, recall that we want the function to be lazy; all of the work should be delayed until it is actually needed.
The way to accomplish that is to wrap the input range in a custom range that has an internal implementation of the filtering algorithm. We'll call this wrapper FilteredRange. It will be a Voldemort type, local to the filter function. Before seeing the entire implementation, it will help to examine it in pieces, as there's a bit more to see here than with ArrayRange. FilteredRange has only one member:

```d
private R _source;
```

R is the type of the range that is passed to filter. The empty and front functions simply delegate to the source range, so we'll look at popFront next:

```d
void popFront() {
    _source.popFront();
    skipNext();
}
```

This always pops the front from the source range before running the filtering logic, which is implemented in the private helper function skipNext:

```d
private void skipNext() {
    while(!_source.empty && !unaryFun!predicate(_source.front))
        _source.popFront();
}
```

This function tests the result of _source.front against the predicate. If it doesn't match, the loop moves on to the next element, repeating the process until either a match is found or the source range is empty. So, imagine you have an array arr with the values [1, 2, 3, 4]. Given what we've implemented so far, what would be the result of the following chain?

```d
arr.range.filter!(x => (x & 1) == 0).front;
```

As mentioned previously, front delegates to _source.front. In this case, the source range is an instance of ArrayRange; its front returns _array[0]. Since popFront was never called at any point, the first value in the array was never tested against the predicate. Therefore, the return value is 1, a value that doesn't match the predicate. The first value returned by front should be 2, since it's the first even number in the array.

In order to make this behave as expected, FilteredRange needs to ensure the wrapped range is in a state such that either the first call to front will properly return a filtered value, or empty will return true, meaning there are no values in the source range that match the predicate. This is best done in the constructor:

```d
this(R source) {
    _source = source;
    skipNext();
}
```

Calling skipNext in the constructor ensures that the first element of the source range is tested against the predicate; however, it does mean that our filter implementation isn't completely lazy. In the extreme case where _source contains no values that match the predicate, it's actually going to be completely eager: the source elements will be consumed as soon as the range is instantiated. Not all algorithms will lend themselves to 100 percent laziness. No matter; what we have here is lazy enough. Wrapped up inside the filter function, the whole thing looks like this:

```d
import std.functional;

auto filter(alias predicate, R)(R source)
    if(is(typeof(unaryFun!predicate)))
{
    struct FilteredRange {
        private R _source;

        this(R source) {
            _source = source;
            skipNext();
        }

        bool empty() { return _source.empty; }

        auto ref front() {
            return _source.front;
        }

        void popFront() {
            _source.popFront();
            skipNext();
        }

        private void skipNext() {
            while(!_source.empty && !unaryFun!predicate(_source.front))
                _source.popFront();
        }
    }
    return FilteredRange(source);
}
```

It might be tempting to take the filtering logic out of the skipNext method and add it to front, which is another way to guarantee that it's performed on every element. Then no work would need to be done in the constructor, and popFront would simply become a wrapper for _source.popFront.
The problem with that approach is that front can potentially be called multiple times without calling popFront in between. Aside from the fact that it should return the same value each time, which can easily be accommodated, this still means the current element will be tested against the predicate on each call. That's unnecessary work. As a general rule, any work that needs to be done inside a range should happen as a result of calling popFront, leaving front to simply focus on returning the current element. The test With the implementation complete, it's time to put it through its paces. Here are a few test cases in a unittest block: unittest { auto arr = [10, 13, 300, 42, 121, 20, 33, 45, 50, 109, 18]; auto result = arr.range .filter!(x => x < 100 ) .filter!(x => (x & 1) == 0) .array!int(); assert(result == [10,42,20,50,18]); arr = [1,2,3,4,5,6]; result = arr.range.filter!(x => (x & 1) == 0).array!int; assert(result == [2, 4, 6]); arr = [1, 3, 5, 7]; auto r = arr.range.filter!(x => (x & 1) == 0); assert(r.empty); arr = [2,4,6,8]; result = arr.range.filter!(x => (x & 1) == 0).array!int; assert(result == arr); } Assuming all of this has been saved in a file called filter.d, the following will compile it for unit testing: dmd -unittest -main filter That should result in an executable called filter which, when executed, should print nothing to the screen, indicating a successful test run. Notice the test that calls empty directly on the returned range. Sometimes, we might not need to convert a range to a container at the end of the chain. For example, to print the results, it's quite reasonable to iterate a range directly. Why allocate when it isn't necessary? The real ranges The purpose of the preceding exercise was to get a feel of the motivation behind D ranges. We didn't develop a concrete type called Range, just an interface. D does the same, with a small set of interfaces defining ranges for different purposes. The interface we developed exactly corresponds to the basic kind of D range, called an input range, one of one of two foundational range interfaces in D (the upshot of that is that both ArrayRange and FilteredRange are valid input ranges, though, as we'll eventually see, there's no reason to use either outside of this article). There are also certain optional properties that ranges might have, which, when present, some algorithms might take advantage of. We'll take a brief look at the range interfaces now, then see more details regarding their usage in the next section. Input ranges This foundational range is defined to be anything from which data can be sequentially read via the three primitives empty, front, and popFront. The first two should be treated as properties, meaning they can be variables or functions. This is important to keep in mind when implementing any generic range-based algorithm yourself; calls to these two primitives should be made without parentheses. The three higher-order range interfaces, we'll see shortly, build upon the input range interface. To reinforce a point made earlier, one general rule to live by when crafting input ranges is that consecutive calls to front should return the same value until popFront is called; popFront prepares an element to be returned and front returns it. Breaking this rule can lead to unexpected consequences when working with range-based algorithms, or even foreach. Input ranges are somewhat special in that they are recognized by the compiler. The opApply enables iteration of a custom type with a foreach loop. 
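To make the comparison concrete, here is a minimal sketch of what an opApply-based type might look like; the IntList type is purely hypothetical and is not used anywhere else in this article:

struct IntList {
    private int[] _items;

    // The compiler calls opApply when an IntList is used with foreach.
    int opApply(int delegate(ref int) dg) {
        foreach(ref item; _items) {
            // A nonzero result means the foreach body executed break or return.
            if(auto result = dg(item))
                return result;
        }
        return 0;
    }
}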
An alternative is to provide an implementation of the input range primitives. When the compiler encounters a foreach loop, it first checks to see if the iterated instance is of a type that implements opApply. If not, it then checks for the input range interface and, if found, rewrites the loop. In a given range someRange, take for example the following loop: foreach(e; range) { ... } This is rewritten to something like this: for(auto __r = range; !__r.empty; __r.popFront()) { auto e = __r.front; ... } This has implications. To demonstrate, let's use the ArrayRange from earlier: auto ar = [1, 2, 3, 4, 5].range; foreach(n; ar) { writeln(n); } if(!ar.empty) writeln(ar.front); The last line prints 1. If you're surprised, look back up at the for loop that the compiler generates. ArrayRange is a struct, so when it's assigned to __r, a copy is generated. The slices inside, ar and __r, point to the same memory, but their .ptr and .length properties are distinct. As the length of the __r slice decreases, the length of the ar slice remains the same. When implementing generic algorithms that loop over a source range, it's not a good idea to assume the original range will not be consumed by the loop. If it's a class instead of struct, it will be consumed by the loop, as classes are references types. Furthermore, there are no guarantees about the internal implementation of a range. There could be struct-based ranges that are actually consumed in a foreach loop. Generic functions should always assume this is the case. Test if a given range type R is an input range: import std.range : isInputRange; static assert(isInputRange!R); There are no special requirements on the return value of the front property. Elements can be returned by value or by reference, they can be qualified or unqualified, they can be inferred via auto, and so on. Any qualifiers, storage classes, or attributes that can be applied to functions and their return values can be used with any range function, though it might not always make sense to do so. Forward ranges The most basic of the higher-order ranges is the forward range. This is defined as an input range that allows its current point of iteration to be saved via a primitive appropriately named save. Effectively, the implementation should return a copy of the current state of the range. For ranges that are struct types, it could be as simple as: auto save() { return this; } For ranges that are class types, it requires allocating a new instance: auto save() { return new MyForwardRange(this); } Forward ranges are useful for implementing algorithms that require a look ahead. For example, consider the case of searching a range for a pair of adjacent elements that pass an equality test: auto saved = r.save; if(!saved.empty) { for(saved.popFront(); !saved.empty; r.popFront(), saved.popFront()) { if(r.front == saved.front) return r; } } return saved; Because this uses a for loop and not a foreach loop, the ranges are iterated directly and are going to be consumed. Before the loop begins, a copy of the current state of the range is made by calling r.save. Then, iteration begins over both the copy and the original. The original range is positioned at the first element, and the call to saved.popFront in the beginning of the loop statement positions the saved range at the second element. As the ranges are iterated in unison, the comparison is always made on adjacent elements. 
If a match is found, r is returned, meaning that the returned range is positioned at the first element of a matching pair. If no match is found, saved is returned; since it's one element ahead of r, it will have been consumed completely and its empty property will be true. The preceding example is derived from a more generic implementation in Phobos, std.algorithm.findAdjacent. It can use any binary (two argument) Boolean condition to test adjacent elements and is constrained to only accept forward ranges. It's important to understand that calling save usually does not mean a deep copy, but it sometimes can. If we were to add a save function to the ArrayRange from earlier, we could simply return this; the array elements would not be copied. A class-based range, on the other hand, will usually perform a deep copy because it's a reference type. When implementing generic functions, you should never make the assumption that the range does not require a deep copy. For example, given a range r:

auto saved = r;      // INCORRECT!!
auto saved = r.save; // Correct.

If r is a class, the first line is almost certainly going to result in incorrect behavior; it certainly would in the preceding example loop.

To test if a given range R is a forward range:

import std.range : isForwardRange;
static assert(isForwardRange!R);

Bidirectional ranges

A bidirectional range is a forward range that includes the primitives back and popBack, allowing a range to be sequentially iterated in reverse. The former should be a property, the latter a function. Given a bidirectional range r, the following forms of iteration are possible:

foreach_reverse(e; r) writeln(e);

for(; !r.empty; r.popBack)
    writeln(r.back);

Like its cousin foreach, the foreach_reverse loop will be rewritten into a for loop that does not consume the original range; the for loop shown here does consume it.

Test whether a given range type R is a bidirectional range:

import std.range : isBidirectionalRange;
static assert(isBidirectionalRange!R);

Random-access ranges

A random-access range is a bidirectional range that supports indexing and is required to provide a length primitive unless it's infinite (two topics we'll discuss shortly). For custom range types, this is achieved via the opIndex operator overload. It is assumed that r[n] returns a reference to the (n+1)th element of the range, just as when indexing an array.

Test whether a given range R is a random-access range:

import std.range : isRandomAccessRange;
static assert(isRandomAccessRange!R);

Dynamic arrays can be treated as random-access ranges by importing std.array. This pulls functions into scope that accept dynamic arrays as parameters and allows them to pass all the isRandomAccessRange checks. This makes our ArrayRange from earlier obsolete. Often, when you need a random-access range, it's sufficient just to use an array instead of creating a new range type. However, char and wchar arrays (string and wstring) are not considered random-access ranges, so they will not work with any algorithm that requires one.

Getting a random-access range from char[] and wchar[]

Recall that a single Unicode character can be composed of multiple elements in a char or wchar array, which is an aspect of strings that would seriously complicate any algorithm implementation that needs to directly index the array. To get around this, the thing to do in a general case is to convert char[] and wchar[] into dchar[]. This can be done with std.utf.toUTF32, which encodes UTF-8 and UTF-16 strings into UTF-32 strings.
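As a rough sketch of that conversion (the exact return type can vary between Phobos versions, so auto is used here):

unittest {
    import std.utf : toUTF32;

    string s = "Résumé";   // 8 code units in UTF-8, but only 6 code points
    auto d = s.toUTF32();  // one dchar per code point
    assert(d.length == 6);
    assert(d[1] == 'é');   // indexing now corresponds to characters
}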
Alternatively, if you know you're only working with ASCII characters, you can use std.string.representation to get ubyte[] or ushort[] (on dstring, it returns uint[]).

Output ranges

The output range is the second foundational range type. It's defined to be anything that can be sequentially written to via the primitive put. Generally, it should be implemented to accept a single parameter, but the parameter could be a single element, an array of elements, a pointer to elements, or another data structure containing elements. When working with output ranges, never call the range's implementation of put directly; instead, use Phobos' utility function std.range.put. It will call the range's implementation internally, but it allows for a wider range of argument types. Given a range r and element e, it would look like this:

import std.range : put;
put(r, e);

The benefit here is if e is anything other than a single element, such as an array or another range, the global put does what is necessary to pull elements from it and put them into r one at a time. With this, you can define and implement a simple output range that might look something like this:

struct MyOutputRange(T) {
    private T[] _elements;
    void put(T elem) {
        _elements ~= elem;
    }
}

Now, you need not worry about calling put in a loop, or overloading it to accept collections of T. For example, let's have a look at the following code:

MyOutputRange!int range;
auto nums = [11, 22, 33, 44, 55];
import std.range : put;
put(range, nums);

Note that using UFCS here will cause compilation to fail, as the compiler will attempt to call MyOutputRange.put directly, rather than the utility function. However, it's fine to use UFCS when the first parameter is a dynamic array. This allows arrays to pass the isOutputRange predicate.

Test whether a given range R is an output range:

import std.range : isOutputRange;
static assert(isOutputRange!(R, E));

Here, E is the type of element accepted by R.put.

Optional range primitives

In addition to the five primary range types, some algorithms in Phobos are designed to look for optional primitives that can be used as an optimization or, in some cases, a requirement. There are predicate templates in std.range that allow the same options to be used outside of Phobos.

hasLength

Ranges that expose a length property can reduce the amount of work needed to determine the number of elements they contain. A great example is the std.range.walkLength function, which will determine and return the length of any range, whether it has a length primitive or not. Given a range that satisfies the std.range.hasLength predicate, the operation becomes a call to the length property; otherwise, the range must be iterated until it is consumed, incrementing a variable every time popFront is called. Generally, length is expected to be an O(1) operation. If any given implementation cannot meet that expectation, it should be clearly documented as such. For non-infinite random-access ranges, length is a requirement. For all others, it's optional.

isInfinite

An input range whose empty property is implemented as a compile-time value set to false is considered an infinite range.
For example, let's have a look at the following code:

struct IR {
    private uint _number;
    enum empty = false;
    auto front() { return _number; }
    void popFront() { ++_number; }
}

Here, empty is a manifest constant, but it could alternatively be implemented as follows:

static immutable empty = false;

The predicate template std.range.isInfinite can be used to identify infinite ranges. Any range that is always going to return false from empty should be implemented to pass isInfinite. Wrapper ranges (such as the FilteredRange we implemented earlier) and some functions might check isInfinite and customize an algorithm's behavior when it's true. Simply returning false from an empty function will break this, potentially leading to infinite loops or other undesired behavior.

Other options

There are a handful of other optional primitives and behaviors, as follows:

hasSlicing: This returns true for any forward range that supports slicing. There is a set of requirements specified by the documentation for finite versus infinite ranges and whether opDollar is implemented.
hasMobileElements: This is true for any input range whose elements can be moved around in memory (as opposed to copied) via the primitives moveFront, moveBack, and moveAt.
hasSwappableElements: This returns true if a range supports swapping elements through its interface. The requirements are different depending on the range type.
hasAssignableElements: This returns true if elements are assignable through range primitives such as front, back, or opIndex.

At http://dlang.org/phobos/std_range_primitives.html, you can find specific documentation for all of these tests, including any special requirements that must be implemented by a range type to satisfy them.

Ranges in use

The key concept to understand about ranges in the general case is that, unless they are infinite, they are consumable. In idiomatic usage, they aren't intended to be kept around, adding and removing elements to and from them as if they were some sort of container. A range is generally created only when needed, passed to an algorithm as input, then ultimately consumed, often at the end of a chain of algorithms. Even forward ranges and output ranges with their save and put primitives usually aren't intended to live long beyond an algorithm. That's not to say it's forbidden to keep a range around; some might even be designed for long life. For example, the random number generators in std.random are all ranges that are intended to be reused. However, idiomatic usage in D generally means lazy, fire-and-forget ranges that allow algorithms to operate on data from any source. For most programs, the need to deal with ranges directly should be rare; most code will be passing ranges to algorithms, then either converting the result to a container or iterating it with a foreach loop. Only when implementing custom containers and range-based algorithms is it necessary to implement a range or call a range interface directly. Still, understanding what's going on under the hood helps in understanding the algorithms in Phobos, even if you never need to implement a range or algorithm yourself. That's the focus of the remainder of this article.

Custom ranges

When implementing custom ranges, some thought should be given to the primitives that need to be supported and how to implement them. Since arrays support a number of primitives out of the box, it might be tempting to return a slice from a custom type, rather than a struct wrapping an array or something else.
While that might be desirable in some cases, keep in mind that a slice is also an output range and has assignable elements (unless it's qualified as const or immutable, but those can be cast away). In many cases, what's really wanted is an input range that can never be modified; one that allows iteration and prevents unwanted allocations. A custom range should be as lightweight as possible. If a container uses an array or pointer internally, the range should operate on a slice of the array, or a copy of the pointer, rather than a copy of the data. This is especially true for the save primitive of a forward iterator; it could be called more than once in a chain of algorithms, so an implementation that requires deep copying would be extremely suboptimal (not to mention problematic for a range that requires ref return values from front). Now we're going to implement two actual ranges that demonstrate two different scenarios. One is intended to be a one-off range used to iterate a container, and one is suited to sticking around for as long as needed. Both can be used with any of the algorithms and range operations in Phobos. Getting a range from a stack Here's a barebones, simple stack implementation with the common operations push, pop, top, and isEmpty (named to avoid confusion with the input range interface). It uses an array to store its elements, appending them in the push operation and decreasing the array length in the pop operation. The top of the stack is always _array[$-1]: struct Stack(T) { private T[] _array; void push(T element) { _array ~= element; } void pop() { assert(!isEmpty); _array.length -= 1; } ref T top() { assert(!isEmpty); return _array[$-1]; } bool isEmpty() { return _array.length == 0; } } Rather than adding an opApply to iterate a stack directly, we want to create a range to do the job for us so that we can use it with all of those algorithms. Additionally, we don't want the stack to be modified through the range interface, so we should declare a new range type internally. That might look like this: private struct Range { T[] _elements; bool empty() { return _elements.length == 0; } T front() { return _elements[$-1]; } void popFront() { _elements.length -= 1; } } Add this anywhere you'd like inside the Stack declaration. Note the iteration of popFront. Effectively, this range will iterate the elements backwards. Since the end of the array is the top of the stack, that means it's iterating the stack from the top to the bottom. We could also add back and popBack primitives that iterate from the bottom to the top, but we'd also have to add a save primitive since bidirectional ranges must also be forward ranges. Now, all we need is a function to return a Range instance: auto elements() { return Range(_array); } Again, add this anywhere inside the Stack declaration. A real implementation might also add the ability to get a range instance from slicing a stack. 
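For instance, such a slicing overload might look like the following; this is only a sketch of how the Stack above could be extended, not part of the original implementation:

// Inside the Stack declaration: allows stack[] to produce the same Range type.
auto opSlice() {
    return Range(_array);
}

With that in place, foreach(i; stack[]) would iterate the same elements as foreach(i; stack.elements).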
Now, test it out: Stack!int stack; foreach(i; 0..10) stack.push(i); writeln("Iterating..."); foreach(i; stack.elements) writeln(i); stack.pop(); stack.pop(); writeln("Iterating..."); foreach(i; stack.elements) writeln(i); One of the great side effects of this sort of range implementation is that you can modify the container behind the range's back and the range doesn't care: foreach(i; stack.elements) { stack.pop(); writeln(i); } writeln(stack.top); This will still print exactly what was in the stack at the time the range was created, but the writeln outside the loop will cause an assertion failure because the stack will be empty by then. Of course, it's still possible to implement a container that can cause its ranges not just to become stale, but to become unstable and lead to an array bounds error or an access violation or some such. However, D's slices used in conjunction with structs give a good deal of flexibility. A name generator range Imagine that we're working on a game and need to generate fictional names. For this example, let's say it's a music group simulator and the names are those of group members. We'll need a data structure to hold the list of possible names. To keep the example simple, we'll implement one that holds both first and last names: struct NameList { private: string[] _firstNames; string[] _lastNames; struct Generator { private string[] _first; private string[] _last; private string _next; enum empty = false; this(string[] first, string[] last) { _first = first; _last = last; popFront(); } string front() { return _next; } void popFront() { import std.random : uniform; auto firstIdx = uniform(0, _first.length); auto lastIdx = uniform(0, _last.length); _next = _first[firstIdx] ~ " " ~ _last[lastIdx]; } } public: auto generator() { return Generator(_firstNames, _lastNames); } } The custom range is in the highlighted block. It's a struct called Generator that stores two slices, _first and _last, which are both initialized in its only constructor. It also has a field called _next, which we'll come back to in a minute. The goal of the range is to provide an endless stream of randomly generated names, which means it doesn't make sense for its empty property to ever return true. As such, it is marked as an infinite range by the manifest constant implementation of empty that is set to false. This range has a constructor because it needs to do a little work to prepare itself before front is called for the first time. All of the work is done in popFront, which the constructor calls after the member variables are set up. Inside popFront, you can see that we're using the std.random.uniform function. By default, this function uses a global random number generator and returns a value in the range specified by the parameters, in this case 0 and the length of each array. The first parameter is inclusive and the second is exclusive. Two random numbers are generated, one for each array, and then used to combine a first name and a last name to store in the _next member, which is the value returned when front is called. Remember, consecutive calls to front without any calls to popFront should always return the same value. std.random.uniform can be configured to use any instance of one of the random number generator implementations in Phobos. It can also be configured to treat the bounds differently. For example, both could be inclusive, exclusive, or the reverse of the default. See the documentation at http://dlang.org/phobos/std_random.html for details. 
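Here is a quick sketch of both configurations; the specific generator and bounds chosen here are arbitrary examples, not something required by NameList:

unittest {
    import std.random : uniform, unpredictableSeed, Xorshift;

    auto rng = Xorshift(unpredictableSeed);
    auto a = uniform(0, 10, rng);  // [0, 10) drawn from a specific generator instead of the global one
    auto b = uniform!"[]"(0, 10);  // both bounds inclusive
    auto c = uniform!"(]"(0, 10);  // lower bound exclusive, upper bound inclusive
}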
The generator property of NameList returns an instance of Generator. Presumably, the names in a NameList would be loaded from a file on disk, or a database, or perhaps even imported at compile-time. It's perfectly fine to keep a single Generator instance handy for the life of the program as implemented. However, if the NameList instance backing the range supported reloading or appending, not all changes would be reflected in the range. In that scenario, it's better to go through generator every time new names need to be generated. Now, let's see how our custom range might be used:

auto nameList = NameList(
    ["George", "John", "Paul", "Ringo", "Bob", "Jimi", "Waylon",
     "Willie", "Johnny", "Kris", "Frank", "Dean", "Anne", "Nancy",
     "Joan", "Lita", "Janice", "Pat", "Dionne", "Whitney", "Donna", "Diana"],
    ["Harrison", "Jones", "Lennon", "Denver", "McCartney", "Simon",
     "Starr", "Marley", "Dylan", "Hendrix", "Jennings", "Nelson", "Cash",
     "Mathis", "Kristofferson", "Sinatra", "Martin", "Wilson", "Jett",
     "Baez", "Ford", "Joplin", "Benatar", "Boone", "Warwick", "Houston",
     "Sommers", "Ross"]
);

import std.range : take;
auto names = nameList.generator.take(4);
writeln("These artists want to form a new band:");
foreach(artist; names)
    writeln(artist);

First up, we initialize a NameList instance with two array literals, one of first names and one of last names. Next, we get to where the range is used: we call nameList.generator and then, using UFCS, pass the returned Generator instance to std.range.take. This function creates a new lazy range containing a number of elements, four in this case, from the source range. In other words, the result is the equivalent of calling front and popFront four times on the range returned from nameList.generator, but since it's lazy, the popping doesn't occur until the foreach loop. That loop produces four randomly generated names that are each written to standard output. One iteration yielded the following names for me:

These artists want to form a new band:
Dionne Wilson
Johnny Starr
Ringo Sinatra
Dean Kristofferson

Other considerations

The Generator range is infinite, so it doesn't need length. There should never be a need to index it, iterate it in reverse, or assign any values to it. It has exactly the interface it needs. But it's not always so obvious where to draw the line when implementing a custom range. Consider the interface for a range from a queue data structure. A basic queue implementation allows two operations to add and remove items: enqueue and dequeue (or push and pop if you prefer). It provides the self-describing properties empty and length. What sort of interface should a range from a queue implement? An input range with a length property is perhaps the most obvious, reflecting the interface of the queue itself. Would it make sense to add a save property? Should it also be a bidirectional range? What about indexing? Should the range be random-access? There are queue implementations out there in different languages that allow indexing, either through an operator overload or a function such as getElementAt. Does that make sense? Maybe. More importantly, if a queue doesn't allow indexing, does it make sense for a range produced from that queue to allow it? What about slicing? Or assignable elements? For our queue type at least, there are no clear-cut answers to these questions.
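As one possible answer only, and purely as an assumption about how such a queue might be built (an array-backed queue, which is not defined in this article), a conservative choice would mirror the queue's own interface: an input range plus a length property.

// A hypothetical range over an array-backed queue; it exposes no more than
// the queue itself does: sequential reads plus a length.
struct QueueRange(T) {
    private T[] _items;  // a slice of the queue's internal storage

    bool empty() { return _items.length == 0; }
    ref T front() { return _items[0]; }
    void popFront() { _items = _items[1 .. $]; }
    size_t length() { return _items.length; }
}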
A variety of factors come into play when choosing which range primitives to implement, including the internal data structure used to implement the queue, the complexity requirements of the primitives involved (indexing should be an O(1) operation), whether the queue was implemented to meet a specific need or is a more general-purpose data structure, and so on. A good rule of thumb is that if a range can be made a forward range, then it should be.

Custom algorithms

When implementing custom, range-based algorithms, it's not enough to just drop an input range interface onto the returned range type and be done with it. Some consideration needs to be given to the type of range used as input to the function and how its interface should affect the interface of the returned range. Consider the FilteredRange we implemented earlier, which provides the minimal input range interface. Given that it's a wrapper range, what happens when the source range is an infinite range? Let's look at it step by step. First, an infinite range is passed in to filter. Next, it's wrapped up in a FilteredRange instance that's returned from the function. The returned range is going to be iterated at some point, either directly by the caller or somewhere in a chain of algorithms. There's one problem, though: with a source range that's infinite, the FilteredRange instance can never be consumed. Because its empty property simply wraps that of the source range, it's always going to return false if the source range is infinite. However, since FilteredRange does not implement empty as a compile-time constant, it will never match the isInfinite predicate. This will cause any algorithm that makes that check to assume it's dealing with a finite range and, if iterating it, enter into an infinite loop. Imagine trying to track down that bug. One option is to prohibit infinite ranges with a template constraint, but that's too restrictive. The better way around this potential problem is to check the source range against the isInfinite predicate inside the FilteredRange implementation. Then, the appropriate form of the empty primitive of FilteredRange can be configured with conditional compilation:

import std.range : isInfinite;

static if(isInfinite!R)
    enum empty = false;
else
    bool empty() { return _source.empty; }

With this, FilteredRange will satisfy the isInfinite predicate when it wraps an infinite range, avoiding the infinite loop bug. Another good rule of thumb is that a wrapper range should implement as many of the primitives provided by the source range as it reasonably can. If the range returned by a function has fewer primitives than the one that went in, it is usable with fewer algorithms. But not all ranges can accommodate every primitive. Take FilteredRange as an example again. It could be configured to support the bidirectional interface, but that would have a bit of a performance impact, as the constructor would have to find the last element in the source range that satisfies the predicate in addition to finding the first, so that both front and back are primed to return the correct values. Rather than using conditional compilation, std.algorithm provides two functions, filter and filterBidirectional, so that users must explicitly choose to use the latter version. A bidirectional range passed to filter will produce a forward range, but the latter function maintains the bidirectional interface. The random-access interface, on the other hand, makes no sense on FilteredRange.
Any value taken from the range must satisfy the predicate, but if users can randomly index the range, they could quite easily get values that don't satisfy the predicate. It could work if the range was made eager rather than lazy. In that case, it would allocate new storage and copy all the elements from the source that satisfies the predicate, but that defeats the purpose of using lazy ranges in the first place. Summary In this article, we've taken an introductory look at ranges in D and how to implement them into containers and algorithms. For more information on ranges and their primitives and traits, see the documentation at http://dlang.org/phobos/std_range.html. Resources for Article: Further resources on this subject: Transactions and Operators [article] Using Protocols and Protocol Extensions [article] Hand Gesture Recognition Using a Kinect Depth Sensor [article]
Introduction to MapBox

Packt
26 Oct 2015
7 min read
In this article by Bill Kastanakis, author of the book MapBox Cookbook, he gives an introduction to MapBox. Most of the websites we visit every day use maps in order to display information about locations or points of interest to the user. It's amazing how this technology has evolved over the past decades. In the early days of the Internet, maps used to be static images. Users were unable to interact with maps, and they were limited to just displaying static information. Interactive maps were available only to mapping professionals and accessed via very expensive GIS software. Cartographers have used this type of software to create or improve maps, usually for an agency or an organization. Again, if the location information was to be made available to the public, there were only two options: static images or a printed version.

(For more resources related to this topic, see here.)

Improvements made to Internet technologies opened up several possibilities for interactive content. It was a natural transition for maps to become live, respond to search queries, and allow user interactions (such as panning and changing the zoom level). Mobile devices were just starting to evolve, and a new age of smartphones was just about to begin. It was natural for maps to become even more important to consumers. Interactive maps are now in their pockets. More importantly, they can tell the user's location. These maps also have the ability to display a great variety of data. In an age where smartphones and tablets have become location-aware, this information has become even more important to companies. They use it to improve user experience. From general-purpose websites (such as Google Maps) to more focused apps (such as Foursquare and Facebook), maps are now a crucial component in the digital world. The popularity of mapping technologies is increasing over the years. From free open source solutions to commercial services for web and mobile developers, and even services specialized for cartographers and visualization professionals, a number of services have become available to developers. Currently, developers can choose from a variety of services that will work better for their specific task, and best of all, if you don't have increased traffic requirements, most of them offer free plans for their consumers.

What is MapBox?

The issue with most of the solutions available is that they look extremely similar. Observing the most commonly used websites and services that implement a map, you can easily verify that they completely lack personality. Maps have the same colors and are presented with the same features, such as roads, buildings, and labels. Displaying road addresses, for instance, doesn't make sense for every website. Customizing maps is a tedious task and is the main reason why it's avoided. What if the map that is provided by a service is not working well with the color theme used in your website or app? MapBox is a service provider that allows users to select a variety of customization options. This is one of the most popular features that has set it apart from the competition. The power to fully customize your map in every detail, including the color theme, the features you want to present to the user, the information displayed, and so on, is indispensable.
MapBox provides you with tools to fully write CartoCSS, the language behind the MapBox customization, SDKs, and frameworks to integrate their maps into your website with minimal effort and a lot more tools to assist you in your task to provide a unique experience to your users. Data Let's see what MapBox has to offer, and we will begin with three available datasets: MapBox Streets is the core technology behind MapBox street data. It's powered by open street maps and has an extremely vibrant community of 1.5 million cartographers and users, which constantly refine and improve map data in real time, as shown in the following screenshot: MapBox Terrain is composed of datasets fetched from 24 datasets owned by 13 organizations. You will be able to access elevation data, hill shades, and topography lines, as shown in the following screenshot: MapBox Satellite offers high-resolution cloudless datasets with satellite imagery, as shown in the following image: MapBox Editor MapBox Editor is an online editor where you can easily create and customize maps. It's purpose is to easily customize the map color theme by choosing from presets or creating your own styles. Additionally, you can add features, such as Markers, Lines, or define areas using polygons. Maps are also multilingual; currently, there are four different language options to choose from when you work with MapBox Editor. Although adding data manually in MapBox Editor is handy, it also offers the ability to batch import data, and it supports the most commonly used formats. The user interface is strictly visual; no coding skills is needed in order to create, customize, and present a map. It is very ideal if you want to quickly create and share maps. The user interface also supports sharing to all the major platforms, such as WordPress, and embedding in forums or on a website using iFrames. CartoCSS CartoCSS is a powerful open source style sheet language developed by MapBox and is widely supported by several other mapping and visualization platforms. It's extremely similar to CSS, and if you ever used CSS, it will be very easy to adapt. Take a look at the following code: #layer { line-color: #C00; line-width: 1; } #layer::glow { line-color: #0AF; line-opacity: 0.5; line-width: 4; } TileMill TileMill is a free open source desktop editor that you can use to write CartoCSS and fully customize your maps. The customization is done by adding layers of data from various sources and then customizing the layer properties using CartoCSS, a CSS-like style sheet language. When you complete the editing of the map, you can then export the tiles and upload them to your MapBox account in order to use the map on your website. TileMill was used as a standard solution for this type of work, but it uses raster data. This changed recently with the introduction of MapBox Studio, which uses vector data. MapBox Studio MapBox Studio is the new open source toolbox that was created by the MapBox team to customize maps, and the plan is to slowly replace TileMill. The advantage is that it uses vector tiles instead of raster. Vector tiles are superior because they hold infinite detail; they are not dependent on the resolution found in a fixed size image. You can still use CartoCSS to customize the map, and as with TileMill, at any point, you can export and share the map on your website. The API and SDK Accessing MapBox data using various APIs is also very easy. You can use JavaScript, WebGL, or simply access the data using REST service calls. 
If you are into mobile development, they offer separate SDKs to develop native apps for iOS and Android that take advantage of the amazing MapBox technologies and customization while maintaining a native look and feel. MapBox allows you to use your own sources. You can import a custom dataset and overlay the data to Mapbox streets, terrains, or satellite. Another noteworthy feature is that you are not limited to fetching data from various sources, but you can also query the tile metadata. Summary In this article, we learned what Mapbox, Mapbox Editor, CartoCSS, TileMill and MapBox Studio is all about. Resources for Article: Further resources on this subject: Constructing and Evaluating Your Design Solution [article] Designing Site Layouts in Inkscape [article] Displaying SQL Server Data using a Linq Data Source [article]
Application Patterns

Packt
20 Oct 2015
9 min read
In this article by Marcelo Reyna, author of the book Meteor Design Patterns, we will cover application-wide patterns that share server- and client- side code. With these patterns, your code will become more secure and easier to manage. You will learn the following topic: Filtering and paging collections (For more resources related to this topic, see here.) Filtering and paging collections So far, we have been publishing collections without thinking much about how many documents we are pushing to the client. The more documents we publish, the longer it will take the web page to load. To solve this issue, we are going to learn how to show only a set number of documents and allow the user to navigate through the documents in the collection by either filtering or paging through them. Filters and pagination are easy to build with Meteor's reactivity. Router gotchas Routers will always have two types of parameters that they can accept: query parameters, and normal parameters. Query parameters are the objects that you will commonly see in site URLs followed by a question mark (<url-path>?page=1), while normal parameters are the type that you define within the route URL (<url>/<normal-parameter>/named_route/<normal-parameter-2>). It is a common practice to set query parameters on things such as pagination to keep your routes from creating URL conflicts. A URL conflict happens when two routes look the same but have different parameters. A products route such as /products/:page collides with a product detail route such as /products/:product-id. While both the routes are differently expressed because of the differences in their normal parameter, you arrive at both the routes using the same URL. This means that the only way the router can tell them apart is by routing to them programmatically. So the user would have to know that the FlowRouter.go() command has to be run in the console to reach either one of the products pages instead of simply using the URL. This is why we are going to use query parameters to keep our filtering and pagination stateful. Stateful pagination Stateful pagination is simply giving the user the option to copy and paste the URL to a different client and see the exact same section of the collection. This is important to make the site easy to share. Now we are going to understand how to control our subscription reactively so that the user can navigate through the entire collection. First, we need to set up our router to accept a page number. Then we will take this number and use it on our subscriber to pull in the data that we need. To set up the router, we will use a FlowRouter query parameter (the parameter that places a question mark next to the URL). Let's set up our query parameter: # /products/client/products.coffee Template.created "products", -> @autorun => tags = Session.get "products.tags" filter = page: Number(FlowRouter.getQueryParam("page")) or 0 if tags and not _.isEmpty tags _.extend filter, tags:tags order = Session.get "global.order" if order and not _.isEmpty order _.extend filter, order:order @subscribe "products", filter Template.products.helpers ... pages: current: -> FlowRouter.getQueryParam("page") or 0 Template.products.events "click .next-page": -> FlowRouter.setQueryParams page: Number(FlowRouter.getQueryParam("page")) + 1 "click .previous-page": -> if Number(FlowRouter.getQueryParam("page")) - 1 < 0 page = 0 else page = Number(FlowRouter.getQueryParam("page")) - 1 FlowRouter.setQueryParams page: page What we are doing here is straightforward. 
First, we extend the filter object with a page key that gets the current value of the page query parameter, and if this value does not exist, then it is set to 0. getQueryParam is a reactive data source, the autorun function will resubscribe when the value changes. Then we will create a helper for our view so that we can see what page we are on and the two events that set the page query parameter. But wait. How do we know when the limit to pagination has been reached? This is where the tmeasday:publish-counts package is very useful. It uses a publisher's special function to count exactly how many documents are being published. Let's set up our publisher: # /products/server/products_pub.coffee Meteor.publish "products", (ops={}) -> limit = 10 product_options = skip:ops.page * limit limit:limit sort: name:1 if ops.tags and not _.isEmpty ops.tags @relations collection:Tags ... collection:ProductsTags ... collection:Products foreign_key:"product" options:product_options mappings:[ ... ] else Counts.publish this,"products", Products.find() noReady:true @relations collection:Products options:product_options mappings:[ ... ] if ops.order and not _.isEmpty ops.order ... @ready() To publish our counts, we used the Counts.publish function. This function takes in a few parameters: Counts.publish <always this>,<name of count>, <collection to count>, <parameters> Note that we used the noReady parameter to prevent the ready function from running prematurely. By doing this, we generate a counter that can be accessed on the client side by running Counts.get "products". Now you might be thinking, why not use Products.find().count() instead? In this particular scenario, this would be an excellent idea, but you absolutely have to use the Counts function to make the count reactive, so if any dependencies change, they will be accounted for. Let's modify our view and helpers to reflect our counter: # /products/client/products.coffee ... Template.products.helpers pages: current: -> FlowRouter.getQueryParam("page") or 0 is_last_page: -> current_page = Number(FlowRouter.getQueryParam("page")) or 0 max_allowed = 10 + current_page * 10 max_products = Counts.get "products" max_allowed > max_products //- /products/client/products.jade template(name="products") div#products.template ... section#featured_products div.container div.row br.visible-xs //- PAGINATION div.col-xs-4 button.btn.btn-block.btn-primary.previous-page i.fa.fa-chevron-left div.col-xs-4 button.btn.btn-block.btn-info {{pages.current}} div.col-xs-4 unless pages.is_last_page button.btn.btn-block.btn-primary.next-page i.fa.fa-chevron-right div.clearfix br //- PRODUCTS +momentum(plugin="fade-fast") ... Great! Users can now copy and paste the URL to obtain the same results they had before. This is exactly what we need to make sure our customers can share links. If we had kept our page variable confined to a Session or a ReactiveVar, it would have been impossible to share the state of the webapp. Filtering Filtering and searching, too, are critical aspects of any web app. Filtering works similar to pagination; the publisher takes additional variables that control the filter. We want to make sure that this is stateful, so we need to integrate this into our routes, and we need to program our publishers to react to this. Also, the filter needs to be compatible with the pager. 
Let's start by modifying the publisher: # /products/server/products_pub.coffee Meteor.publish "products", (ops={}) -> limit = 10 product_options = skip:ops.page * limit limit:limit sort: name:1 filter = {} if ops.search and not _.isEmpty ops.search _.extend filter, name: $regex: ops.search $options:"i" if ops.tags and not _.isEmpty ops.tags @relations collection:Tags mappings:[ ... collection:ProductsTags mappings:[ collection:Products filter:filter ... ] else Counts.publish this,"products", Products.find filter noReady:true @relations collection:Products filter:filter ... if ops.order and not _.isEmpty ops.order ... @ready() To build any filter, we have to make sure that the property that creates the filter exists and _.extend our filter object based on this. This makes our code easier to maintain. Notice that we can easily add the filter to every section that includes the Products collection. With this, we have ensured that the filter is always used even if tags have filtered the data. By adding the filter to the Counts.publish function, we have ensured that the publisher is compatible with pagination as well. Let's build our controller: # /products/client/products.coffee Template.created "products", -> @autorun => ops = page: Number(FlowRouter.getQueryParam("page")) or 0 search: FlowRouter.getQueryParam "search" ... @subscribe "products", ops Template.products.helpers ... pages: search: -> FlowRouter.getQueryParam "search" ... Template.products.events ... "change .search": (event) -> search = $(event.currentTarget).val() if _.isEmpty search search = null FlowRouter.setQueryParams search:search page:null First, we have renamed our filter object to ops to keep things consistent between the publisher and subscriber. Then we have attached a search key to the ops object that takes the value of the search query parameter. Notice that we can pass an undefined value for search, and our subscriber will not fail, since the publisher already checks whether the value exists or not and extends filters based on this. It is always better to verify variables on the server side to ensure that the client doesn't accidentally break things. Also, we need to make sure that we know the value of that parameter so that we can create a new search helper under the pages helper. Finally, we have built an event for the search bar. Notice that we are setting query parameters to null whenever they do not apply. This makes sure that they do not appear in our URL if we do not need them. To finish, we need to create the search bar: //- /products/client/products.jade template(name="products") div#products.template header#promoter ... div#content section#features ... section#featured_products div.container div.row //- SEARCH div.col-xs-12 div.form-group.has-feedback input.input-lg.search.form-control(type="text" placeholder="Search products" autocapitalize="off" autocorrect="off" autocomplete="off" value="{{pages.search}}") span(style="pointer-events:auto; cursor:pointer;").form-control-feedback.fa.fa-search.fa-2x ... Notice that our search input is somewhat cluttered with special attributes. All these attributes ensure that our input is not doing the things that we do not want it to for iOS Safari. It is important to keep up with nonstandard attributes such as these to ensure that the site is mobile-friendly. You can find an updated list of these attributes here at https://developer.apple.com/library/safari/documentation/AppleApplications/Reference/SafariHTMLRef/Articles/Attributes.html. 
Summary This article covered how to control the amount of data that we publish. We also learned a pattern to build pagination that functions with filters as well, along with code examples. Resources for Article: Further resources on this subject: Building the next generation Web with Meteor[article] Quick start - creating your first application[article] Getting Started with Meteor [article]
Extracting Real-Time Wildfire Data from ArcGIS Server with the ArcGIS REST API

Packt
20 Oct 2015
6 min read
In this article by Eric Pimpler, the author of the book ArcGIS Blueprints, the ArcGIS platform, which contains a number of different products including ArcGIS Desktop, ArcGIS Pro, ArcGIS for Server, and ArcGIS Online, provides a robust environment in order to perform geographic analysis and mapping. Content produced by this platform can be integrated using the ArcGIS REST API and a programming language, such as Python. Many of the applications we build in this book use the ArcGIS REST API as a bridge to exchange information between software products.

(For more resources related to this topic, see here.)

We're going to start by developing a simple ArcGIS Desktop custom script tool in ArcToolbox that connects to an ArcGIS Server map service to retrieve real-time wildfire information. The wildfire information will be retrieved from a United States Geological Survey (USGS) map service that provides real-time wildfire data. We'll use the ArcGIS REST API and Python requests module to connect to the map service and request the data. The response from the map service will contain data that will be written to a feature class stored in a local geodatabase using the ArcPy data access module. This will all be accomplished inside a custom script tool attached to an ArcGIS Python toolbox.

In this article we will cover the following topics:

ArcGIS Desktop Python toolboxes
ArcGIS Server map and feature services
A Python requests module
A Python json module
ArcGIS REST API
ArcPy data access module (ArcPy.da)

Design

Before we start building the application, we'll spend some time planning what we'll build. This is a fairly simple application, but it serves to illustrate how ArcGIS Desktop and ArcGIS Server can easily be integrated using the ArcGIS REST API. In this application, we'll build an ArcGIS Python toolbox that serves as a container for a single tool named USGSDownload. The USGSDownload tool will use the Python requests, json, and ArcPy data modules to request real-time wildfire data from a USGS map service. The response from the map service will contain information, including the location of the fire, name of the fire, and some additional information that will then be written to a local geodatabase. The communication between the ArcGIS Desktop Python toolbox and ArcGIS Server map service is accomplished through the ArcGIS REST API and the Python language. Let's get started building the application.

Creating the ArcGIS Desktop Python toolbox

There are two ways to create toolboxes in ArcGIS: script tools in custom toolboxes and script tools in Python toolboxes. Python toolboxes encapsulate everything in one place: parameters, validation code and source code. This is not the case with custom toolboxes, which are created using a wizard and a separate script that processes the business logic. A Python toolbox functions like any other toolbox in ArcToolbox, but it is created entirely in Python and has a file extension of .pyt. It is created programmatically as a class named Toolbox. In this article, you will learn how to create a Python toolbox and add a tool. You'll only create the basic structure of the toolbox and tool that will ultimately connect to an ArcGIS Server map service containing wildfire data. In a later section, you'll complete the functionality of the tool by adding code that connects to the map service, downloads the current data, and inserts it into a feature class. Open ArcCatalog.
You can create a Python toolbox in a folder by right-clicking on the folder and selecting New | Python Toolbox. In ArcCatalog, there is a folder named Toolboxes and inside is a My Toolboxes folder, as seen in the following screenshot. Right-click on this folder and select New | Python Toolbox. The name of the toolbox is controlled by the file name. Name the toolbox InsertWildfires.pyt, as shown in the following screenshot:

The Python toolbox file (.pyt) can be edited in any text or code editor. By default, the code will open in Notepad. You can change this by setting the default editor for your script by going to Geoprocessing | Geoprocessing Options and the Editor section. You'll note in the Figure A: Geoprocessing options screenshot that I have set my editor to PyScripter, which is my preferred environment. You may want to change this to IDLE or whichever development environment you are currently using. For example, to find the path to the executable for the IDLE development environment, you can go to Start | All Programs | ArcGIS | Python 2.7 | IDLE, right-click on IDLE, and select Properties to display the properties window. Inside the Target text box, you should see a path to the executable, as seen in the following screenshot:

Copy and paste the path into the Editor and Debugger sections inside the Geoprocessing Options dialog, as shown in the following screenshot:

Figure A: Geoprocessing options

Right-click on InsertWildfires.pyt and select Edit. This will open the development environment you defined earlier, as seen in the following screenshot. Your environment will vary depending on the editor that you have defined. Remember that you will not be changing the name of the class, which is Toolbox. However, you will rename the Tool class to reflect the name of the tool you want to create. Each tool will have various methods, including __init__(), which is the constructor for the tool, along with getParameterInfo(), isLicensed(), updateParameters(), updateMessages(), and execute(). You can use the __init__() method to set initialization properties, such as the tool's label and description. Find the class named Tool in your code, change the name of this tool to USGSDownload, and set the label and description properties:

class USGSDownload(object):
    def __init__(self):
        """Define the tool (tool name is the name of the class)."""
        self.label = "USGS Download"
        self.description = "Download from USGS ArcGIS Server instance"
        self.canRunInBackground = False

You can use the Tool class as a template for other tools you'd like to add to the toolbox by copying and pasting the class and its methods. We're not going to do it in this article, but you need to be aware of this.

Mono to Micro-Services: Splitting that fat application

Xavier Bruhiere
16 Oct 2015
7 min read
As articles state everywhere, we're living in a fast-paced digital age. Project complexity, or business growth, challenges existing development patterns. That's why many developers are evolving from the monolithic application toward micro-services. Facebook is moving away from its big blue app. Soundcloud is embracing microservices. Yet this can be a daunting process, so why do it?

Scale: it is better to plug in new components than to dig into an ocean of code.
Split a complex problem into smaller ones, which are easier to solve and maintain.
Distribute work across independent teams.
Friendliness to open technologies: isolating a service in a container makes it straightforward to distribute and use, and it allows different, loosely coupled stacks to communicate.

Once upon a time, there was a fat code block called Intuition, my algorithmic trading platform. In this post, we will engineer a simplified version, divided into well-defined components.

Code Components

First, we're going to write the business logic, following the single responsibility principle, and one of my favorite code mantras: prefer composition over inheritance. The point is to identify key components of the problem, and code a specific solution for each of them. It will articulate our application around the collaboration of clear abstractions. As an illustration, start with the RandomAlgo class. Python tends to be the go-to language for data analysis and rapid prototyping. It is a great fit for our purpose.

import random


class RandomAlgo(object):
    """ Represent the algorithm flow.
    Heavily inspired from quantopian.com and processing.org """

    def initialize(self, params):
        """ Called once to prepare the algo. """
        self.threshold = params.get('threshold', 0.5)
        # As we will see later, we return here the data channels we're interested in
        return ['quotes']

    def event(self, data):
        """ This method is called every time a new batch of data is ready.
        :param data: {'sid': 'GOOG', 'quote': '345'}
        """
        # randomly choose to invest or not
        if random.random() > self.threshold:
            print('buying {0} of {1}'.format(data['quote'], data['sid']))

This implementation focuses on a single thing: detecting buy signals. But once you get such a signal, how do you invest your portfolio? This is the responsibility of a new component.

class Portfolio(object):

    def __init__(self, amount):
        """ Starting amount of cash we have. """
        self.cash = amount

    def optimize(self, data):
        """ We have a buy signal on this data. Tell us how much cash we should bet. """
        # We're still baby traders and we randomly choose what fraction of our available cash to invest
        to_invest = random.random() * self.cash
        self.cash = self.cash - to_invest
        return to_invest

Then we can improve our previous algorithm's event method, taking advantage of composition.

def initialize(self, params):
    # ...
    self.portfolio = Portfolio(params.get('starting_cash', 10000))

def event(self, data):
    # ...
    print('buying {0} of {1}'.format(self.portfolio.optimize(data), data['sid']))

Here are two simple components that produce readable and efficient code. Now we can develop more sophisticated portfolio optimizations without touching the algorithm internals. This is also a huge gain early in a project when we're not sure how things will evolve. Developers should only focus on this core logic. In the next section, we're going to unfold a separate part of the system. The communication layer will solve one question: how do we produce and consume events?

Inter-components messaging

Let's state the problem.
We want each algorithm to receive interesting events and publish its own data. This is the kind of challenge the Internet of Things (IoT) is tackling. We will find empirically that our modular approach allows us to pick the right tool, even within a priori unrelated fields. The code below leverages MQTT to bring M2M messaging to the application. Notice we're diversifying our stack with node.js. Indeed, it's one of the most convenient languages to deal with event-oriented systems (JavaScript, in general, is gaining some traction in the IoT space).

var mqtt = require('mqtt');

// connect to the broker, responsible for routing messages
// (thanks mosquitto)
var conn = mqtt.connect('mqtt://test.mosquitto.org');

conn.on('connect', function () {
  // we're up! Time to initialize the algorithm
  // and subscribe to interesting messages
});

// triggered on topics we're listening to
conn.on('message', function (topic, message) {
  console.log('received data:', message.toString());
  // Here, pass it to the algo for processing
});

That's neat! But we still need to connect this messaging layer with the actual Python algorithm. The RPC (Remote Procedure Call) protocol comes in handy for the task, especially with zerorpc. Here is the full implementation with more explanations.

// command-line interfaces made easy
var program = require('commander');
// the MQTT client for Node.js and the browser
var mqtt = require('mqtt');
// a communication layer for distributed systems
var zerorpc = require('zerorpc');
// import project properties
var pkg = require('./package.json');

// define the cli
program
  .version(pkg.version)
  .description(pkg.description)
  .option('-m, --mqtt [url]', 'mqtt broker address', 'mqtt://test.mosquitto.org')
  .option('-r, --rpc [url]', 'rpc server address', 'tcp://127.0.0.1:4242')
  .parse(process.argv);

// connect to mqtt broker
var conn = mqtt.connect(program.mqtt);
// connect to rpc peer, the actual python algorithm
var algo = new zerorpc.Client();
algo.connect(program.rpc);

conn.on('connect', function () {
  // connections are ready, initialize the algorithm
  var conf = { cash: 50000 };
  algo.invoke('initialize', conf, function(err, channels, more) {
    // the method returns an array of data channels the algorithm needs
    for (var i = 0; i < channels.length; i++) {
      console.log('subscribing to channel', channels[i]);
      conn.subscribe(channels[i]);
    }
  });
});

conn.on('message', function (topic, message) {
  console.log('received data:', message.toString());
  // make the algorithm process the incoming data
  algo.invoke('event', JSON.parse(message.toString()), function(err, res, more) {
    console.log('algo output:', res);
    // we're done
    algo.close();
    conn.end();
  });
});

The code above calls our algorithm's methods. Here is how to expose them over RPC.

import click, zerorpc

# ... algo code ...

@click.command()
@click.option('--addr', default='tcp://127.0.0.1:4242', help='address to bind rpc server')
def serve(addr):
    server = zerorpc.Server(RandomAlgo())
    server.bind(addr)
    click.echo(click.style('serving on {} ...'.format(addr), bold=True, fg='cyan'))
    # listen and serve
    server.run()


if __name__ == '__main__':
    serve()

At this point we are ready to run the app. Let's fire up three terminals, install the requirements, and make the machines trade.
sudo apt-get install curl libpython-dev libzmq-dev
# Install pip
curl https://bootstrap.pypa.io/get-pip.py | python
# Algorithm requirements
pip install zerorpc click
# Messaging requirements
npm init
npm install --save commander mqtt zerorpc
# Activate backend
python ma.py --addr tcp://127.0.0.1:4242
# Manipulate algorithm and serve messaging system
node app.js --rpc tcp://127.0.0.1:4242
# Publish messages
node_modules/.bin/mqtt pub -t 'quotes' -h 'test.mosquitto.org' -m '{"goog": 3.45}'

In this state, our implementation is over-engineered. But we designed a sustainable architecture to wire up small components, and from here we can extend the system. One can focus on algorithms without worrying about event plumbing. The corollary: switching to a new messaging technology won't affect the way we develop algorithms. We can even swap algorithms by changing the RPC address. A service discovery component could expose which backends are available and how to reach them. A project like octoblu adds device authentication, data sharing, and more. We could implement data sources that connect to live markets or databases, compute indicators like moving averages, and publish them to algorithms.

Conclusion

Given our API definition, a contributor can hack on any component without breaking the project as a whole. In a fast-paced environment, with constant iterations, this architecture can make or break products. This is especially true in the rising container world. Assuming we package each component into specialized containers, we smooth the way to a scalable infrastructure that we can test, distribute, deploy, and grow. Not sure where to start when it comes to containers and microservices? Visit our Docker page!

About the Author

Xavier Bruhiere is the CEO of Hive Tech. He contributes to many community projects, including Oculus Rift, Myo, Docker, and Leap Motion. In his spare time he enjoys playing tennis, the violin, and the guitar. You can reach him at @XavierBruhiere.

Understanding CRM Extendibility Architecture

Packt
16 Oct 2015
22 min read
 In this article by Mahender Pal, the author of the book Microsoft Dynamics CRM 2015 Application Design, we will see how Microsoft Dynamics CRM provides different components that can be highly extended to map our custom business requirements. Although CRM provides a rich set of features that help us execute different business operations without any modification. However, we can still extend its behavior and capabilities with the supported customization. (For more resources related to this topic, see here.) The following is the extendibility architecture of CRM 2015, where we can see how different components interact with each other and what are the components that can be extended with the help of CRM APIs: Extendibility Architecture Let's discuss these components one by one and the possible extendibility options for them. CRM databases During installation of CRM, two databases, organization and configuration, are created. The organization database is created with the name of organization_MSCRM and the configuration database is created with the name of MSCRM_CONFIG. The organization database contains complete organization-related data stored on different entities. For every entity in CRM, there is a corresponding table with the name of Entityname+Base. Although technically it is possible but any direct data modification in these tables are not supported. Any changes to CRM data should be done by using CRM APIs only. Adding indexes to the CRM database is supported, you can refer to https://msdn.microsoft.com/en-us/library/gg328350.aspx for more details on supported customizations. Apart from table, CRM also creates a special view for every entity with the name of Filtered+Entityname. These fields view provide data based on the user security role; so for example, if you are a sales person you will only get data while querying filtered views based on the sales person role. We use filtered views for writing custom reports for CRM. You can refer to more details on filtered views at https://technet.microsoft.com/en-us/library/dn531182.aspx. Entity relationship diagram can be downloaded from https://msdn.microsoft.com/en-us/library/jj602918.aspx for CRM 2015. The Platform Layer Platform layer works as middleware between CRM UI and database, it is responsible for executing inbuilt and custom business logic and moving data back and forth. When we browse a CRM application, the platform layer presents data that is available based on the current user security roles. When we develop and deploy custom component on the top of platform layer. Process Process is a way of implementing automation in CRM. We can set up process using process designer and also develop custom assemblies to enhance the capability of workflow designer and include custom steps. CRM web services CRM provides Windows Communication Foundation (WCF) based web services, which help us interact with organization data and metadata; so whenever we want to create or modify an entity's data or want to customize a CRM component's metadata, we need to utilize these web services. We can also develop our custom web services with the help of CRM web services if required. We will be discussing more about CRM web services in details in a later topic. Plugins Plugins are another way of extending the CRM capability. These are .NET assemblies that help us implement our custom business logic in the CRM platform. It helps us to execute our business logic before or after the main platform operation. 
We can also run our plugin on a transaction that is similar to a SQL transaction, which means if any operation failed, all the changes under transaction will rollback. We can setup asynchronous and synchronous plugins. Reporting CRM provides rich reporting capabilities. We have many out of box reports for every module such as sales, marketing, and service. We can also create new reports and customize existing reports in Visual Studio. While working with reports, we always utilize an entity-specific filtered view so that data can be exposed based on the user security role. We should never use a CRM table while writing reports. Custom reports can be developed using out of box report wizard or using Visual Studio. The report wizard helps us create reports by following a couple of screens where we can select an entity and filter the criteria for our report with different rendering and formatting options. We can create two types of reports in Visual Studio SSRS and FetchXML. Custom SSRS reports are supported on CRM on premise deployment whereas CRM online only FetchXML. You can refer to https://technet.microsoft.com/en-us/library/dn531183.aspx for more details on report development. Client extensions We can also extend the CRM application from the Web and Outlook client. We can also develop custom utility tools for it. Sitemap and Command bar editor add-ons are example of such applications. We can modify different CRM components such as entity structure, web resources, business rules, different type of web resources, and other components. CRM web services can be utilized to map custom requirements. We can make navigational changes from CRM clients by modifying Sitemap and Command Bar definition. Integrated extensions We can also develop custom extensions in terms of custom utility and middle layer to interact with CRM using APIs. It can be a portal application or any .NET or non .NET utility. CRM SDK comes with many tools that help us to develop these integrated applications. We will be discussing more on custom integration with CRM in a later topic. Introduction to Microsoft Dynamics CRM SDK Microsoft Dynamics CRM SDK contains resources that help us develop code for CRM. It includes different CRM APIs and helpful resources such as sample codes (both server side and client side) and a list of tools to facilitate CRM development. It provides a complete documentation of the APIs, methods, and their uses, so if you are a CRM developer, technical consultant, or solution architect, the first thing you need to make sure is to download the latest CRM SDK. You can download the latest version of CRM SDK from http://www.microsoft.com/en-us/download/details.aspx?id=44567. The following table talks about the different resources that come with CRM SDK: Name Descriptions Bin This folder contains all the assemblies of CRM. Resources This folder contains different resources such as data import maps, default entity ribbon XML definition, and images icons of CRM applications. SampleCode This folder contains all the server side and client side sample code that can help you get started with the CRM development. This folder also contains sample PowerShell commands. Schemas This folder contains XML schemas for CRM entities, command bars, and sitemap. These schemas can be imported in Visual Studio while editing the customization of the .xml file manually. Solutions This folder contains the CRM 2015 solution compatibility chart and one portal solution. 
Templates This folder contains the visual studio templates that can be used to develop components for a unified service desk and the CRM package deployment. Tools This folder contains tools that are shipped with CRM SDK such as the metadata browser that can used to get CRM entity metadata, plugin registration tool, web resource utility, and others. Walkthroughts This folder contains console and web portal applications. CrmSdk2015 This is the .chm help file. EntityMetadata This file contains entity metadata information. Message-entity support for plugins This is a very important file that will help you understand events available for entities to write custom business logic (plug-ins) Learning about CRM assemblies CRM SDK ships with different assemblies under the bin folder that we can use to write CRM application extension. We can utilize them to interact with CRM metadata and organization data. The following table provides details about the most common CRM assemblies: Name Details Microsoft.Xrm.Sdk.Deployment This assembly is used to work with the CRM organization. We can create, update, and delete organization assembly methods. Microsoft.Xrm.Sdk This is very important assembly as it contains the core methods and their details, this assembly is used for every CRM extension. This assembly contains different namespaces for different functionality, for example Query, which contains different classes to query CRM DB; Metadata, which help us interact with the metadata of the CRM application; Discovery, which help us interact with the discover service (we will be discussing the discovery services in a later topic); Messages, which provide classes for all CURD operation requests and responses with metadata classes. Microsoft.Xrm.Sdk.Workflow This assembly helps us extend the CRM workflows' capability. It contains methods and types which are required for writing custom workflow activity. This assembly contains the activities namespace, which is used by the CRM workflow designer. Microsoft.Crm.Sdk.Proxy This assembly contains all noncore requests and response messages. Microsoft.Xrm.Tooling This is a new assembly added in SDK. This assembly helps to write Windows client applications for CRM Microsoft.Xrm.Portal This assembly provides methods for portal development, which includes security management, cache management, and content management. Microsoft.Xrm.Client This is another assembly that is used in the CRM client application to communicate with CRM from the application. It contains connection classes that we can use to setup the connection using different CRM authentication methods. We will be working with these APIs in later topics. Understanding CRM web services Microsoft Dynamics CRM provides web service support, which can be used to work with CRM data or metadata. CRM web services are mentioned here. The deployment service The deployment service helps us work with organizations. Using this web service, we can create a new organization, delete, or update existing organizations. The discovery service Discovery services help us identify correct web service endpoints based on the user. Let's take an example where we have multiple CRM organizations, and we want to get a list of the organization where current users have access, so we can utilize discovery service to find out unique organization ID, endpoint URL and other details. We will be working with discovery service in a later topic. The organization service The organization service is used to work with CRM organization data and metadata. 
It has the CRUD method and other request and response messages. For example, if we want to create or modify any existing entity record, we can use organization service methods. The organization data service The organization data service is a RESTful service that we can use to get data from CRM. We can use this service's CRUD methods to work with data, but we can't use this service to work with CRM metadata. To work with CRM web services, we can use the following two programming models: Late bound Early bound Early bound In early bound classes, we use proxy classes which are generated by CrmSvcUtil.exe. This utility is included in CRM SDK under the SDKBin path. This utility generates classes for every entity available in the CRM system. In this programming model, a schema name is used to refer to an entity and its attributes. This provides intelligence support, so we don't need to remember the entity and attributes name; as soon as we type the first letter of the entity name, it will display all the entities with that name. We can use the following syntax to generate proxy class for CRM on premise: CrmSvcUtil.exe /url:http://<ServerName>/<organizationName>/XRMServices/2011/ Organization.svc /out:proxyfilename.cs /username:<username> /password:<password> /domain:<domainName> /namespace:<outputNamespace> /serviceContextName:<serviceContextName> The following is the code to generate proxy for CRM online: CrmSvcUtil.exe /url:https://orgname.api.crm.dynamics.com/XRMServices/2011/ Organization.svc /out:proxyfilename.cs /username:"[email protected]" /password:"myp@ssword! Organization service URLs can be obtained by navigating to Settings | Customization | Developer Resources. We are using CRM online for our demo. In case of CRM online, the organization service URL is dependent on the region where your organization is hosted. You can refer to https://msdn.microsoft.com/en-us/library/gg328127.aspx to get details about different CRM online regions. We can follow these steps to generate the proxy class for CRM online: Navigate to Developer Command Prompt under Visual Studio Tools in your development machine where visual studio is installed. Go to the Bin folder under CRM SDK and paste the preceding command: CrmSvcUtil.exe /url:https://ORGName.api.crm5.dynamics.com/XRMServices/2011/ Organization.svc /out:Xrm.cs /username:"[email protected]" /password:"password" CrmSVCUtil Once this file is generated, we can add this file to our visual studio solution. Late bound In the late bound programming model, we use the generic Entity object to refer to entities, which means that we can also refer an entity which is not part of the CRM yet. In this programming mode, we need to use logical names to refer to an entity and its attribute. No intelligence support is available during code development in case of late bound. The following is an example of using the Entity class: Entity AccountObj = new Entity("account"); Using Client APIs for a CRM connection CRM client API helps us connect with CRM easily from .NET applications. It simplifies the developer's task to setup connection with CRM using a simplified connection string. We can use this connection string to create a organization service object. The following is the setup to console applications for our demo: Connect to Visual Studio and go to File | New | Project. 
Select Visual C# | Console Application and fill CRMConnectiondemo under the Name textbox as shown in the following screenshot: Console app Make sure you have installed the .NET 4.5.2 and .NET 4.5.2 developer packs before creating sample applications. Right-click on References and add the following CRM SDK: Microsoft.Xrm.SDK Microsoft.Xrm.Client We also need to add the following .NET assemblies System.Runtime.Serialization System.Configuration Make sure to add the App.config file if not available under project. We need to right-click on Project Name | Add Item and add Application Configuration File as shown here: app.configfile We need to add a connection string to our app.config file; we are using CRM online for our demo application, so we will be using following connection string: <?xml version="1.0" encoding="UTF-8"?> <configuration> <connectionStrings> <add name="OrganizationService" connectionString="Url=https://CRMOnlineServerURL; [email protected]; Password=Password;" /> </connectionStrings> </configuration> Right-click on Project, select Add Existing File, and browse our file that we generated earlier to add to our console application. Now we can add two classes in our application—one for early bound and another for late bound and let's name them Earlybound.cs and Latebound.cs You can refer to https://msdn.microsoft.com/en-us/library/jj602970.aspx to connection string for other deployment type, if not using CRM online After adding the preceding classes, our solution structure should look like this: Working with organization web services Whenever we need to interact with CRM SDK, we need to use the CRM web services. Most of the time, we will be working with the Organization service to create and modify data. Organization services contains the following methods to interact with metadata and organization data, we will add these methods to our corresponding Earlybound.cs and Latebound.cs files in our console application. Create This method is used to create system or custom entity records. We can use this method when we want to create entity records using CRM SDK, for example, if we need to develop one utility for data import, we can use this method or we want to create lead record in dynamics from a custom website. This methods takes an entity object as a parameter and returns GUID of the record created. The following is an example of creating an account record with early and late bound. 
With different data types, we are setting some of the basic account entity fields in our code: Early bound: private void CreateAccount() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { Account accountObject = new Account { Name = "HIMBAP Early Bound Example", Address1_City = "Delhi", CustomerTypeCode = new OptionSetValue(3), DoNotEMail = false, Revenue = new Money(5000), NumberOfEmployees = 50, LastUsedInCampaign = new DateTime(2015, 3, 2) }; crmService.Create(accountObject); } } Late bound: private void Create() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { Entity accountObj = new Entity("account"); //setting string value accountObj["name"] = "HIMBAP"; accountObj["address1_city"] = "Delhi"; accountObj["accountnumber"] = "101"; //setting optionsetvalue accountObj["customertypecode"] = new OptionSetValue(3); //setting boolean accountObj["donotemail"] = false; //setting money accountObj["revenue"] = new Money(5000); //setting entity reference/lookup accountObj["primarycontactid"] = new EntityReference("contact", new Guid("F6954457- 6005-E511-80F4-C4346BADC5F4")); //setting integer accountObj["numberofemployees"] = 50; //Date Time accountObj["lastusedincampaign"] = new DateTime(2015, 05, 13); Guid AccountID = crmService.Create(accountObj); } } We can also use the create method to create primary and related entity in a single call, for example in the following call, we are creating an account and the related contact record in a single call: private void CreateRecordwithRelatedEntity() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { Entity accountEntity = new Entity("account"); accountEntity["name"] = "HIMBAP Technology"; Entity relatedContact = new Entity("contact"); relatedContact["firstname"] = "Vikram"; relatedContact["lastname"] = "Singh"; EntityCollection Related = new EntityCollection(); Related.Entities.Add(relatedContact); Relationship accountcontactRel = new Relationship("contact_customer_accounts"); accountEntity.RelatedEntities.Add(accountcontactRel, Related); crmService.Create(accountEntity); } } In the preceding code, first we created account entity objects, and then we created an object of related contact entity and added it to entity collection. After that, we added a related entity collection to the primary entity with the entity relationship name; in this case, it is contact_customer_accounts. After that, we passed our account entity object to create a method to create an account and the related contact records. When we will run this code, it will create the account as shown here: relatedrecord Update This method is used to update existing record properties, for example, we might want to change the account city or any other address information. This methods takes the entity object as the parameter, but we need to make sure to update the primary key field to update any record. 
The following are the examples of updating the account city and setting the state property: Early bound: private void Update() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { Account accountUpdate = new Account { AccountId = new Guid("85A882EE-A500- E511-80F9-C4346BAC0E7C"), Address1_City = "Lad Bharol", Address1_StateOrProvince = "Himachal Pradesh" }; crmService.Update(accountUpdate); } } Late bound: private void Update() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { Entity accountUpdate = new Entity("account"); accountUpdate["accountid"] = new Guid("85A882EE-A500- E511-80F9-C4346BAC0E7C"); accountUpdate["address1_city"] = " Lad Bharol"; accountUpdate["address1_stateorprovince"] = "Himachal Pradesh"; crmService.Update(accountUpdate); } } Similarly, to create method, we can also use the update method to update the primary entity and the related entity in a single call as follows: private void Updateprimaryentitywithrelatedentity() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { Entity accountToUpdate = new Entity("account"); accountToUpdate["name"] = "HIMBAP Technology"; accountToUpdate["websiteurl"] = "www.himbap.com"; accountToUpdate["accountid"] = new Guid("29FC3E74- B30B-E511-80FC-C4346BAD26CC");//replace it with actual account id Entity relatedContact = new Entity("contact"); relatedContact["firstname"] = "Vikram"; relatedContact["lastname"] = "Singh"; relatedContact["jobtitle"] = "Sr Consultant"; relatedContact["contactid"] = new Guid("2AFC3E74- B30B-E511-80FC-C4346BAD26CC");//replace it with actual contact id EntityCollection Related = new EntityCollection(); Related.Entities.Add(relatedContact); Relationship accountcontactRel = new Relationship("contact_customer_accounts"); accountToUpdate.RelatedEntities.Add (accountcontactRel, Related); crmService.Update(accountToUpdate); } } Retrieve This method is used to get data from the CRM based on the primary field, which means that this will only return one record at a time. This method has the following three parameter: Entity: This is needed to pass the logical name of the entity as fist parameter ID: This is needed to pass the primary ID of the record that we want to query Columnset: This is needed to specify the fields list that we want to fetch The following are examples of using the retrieve method Early bound: private void Retrieve() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { Account retrievedAccount = (Account)crmService.Retrieve (Account.EntityLogicalName, new Guid("7D5E187C-9344-4267- 9EAC-DD32A0AB1A30"), new ColumnSet(new string[] { "name" })); //replace with actual account id } } Late bound: private void Retrieve() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { Entity retrievedAccount = (Entity)crmService.Retrieve("account", new Guid("7D5E187C- 9344-4267-9EAC-DD32A0AB1A30"), new ColumnSet(new string[] { "name"})); } RetrieveMultiple The RetrieveMultiple method provides options to define our query object where we can define criteria to fetch records from primary and related entities. This method takes the query object as a parameter and returns the entity collection as a response. 
The following are examples of using retrievemulitple with the early and late bounds: Late Bound: private void RetrieveMultiple() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { QueryExpression query = new QueryExpression { EntityName = "account", ColumnSet = new ColumnSet("name", "accountnumber"), Criteria = { FilterOperator = LogicalOperator.Or, Conditions = { new ConditionExpression { AttributeName = "address1_city", Operator = ConditionOperator.Equal, Values={"Delhi"} }, new ConditionExpression { AttributeName="accountnumber", Operator=ConditionOperator.NotNull } } } }; EntityCollection entityCollection = crmService.RetrieveMultiple(query); foreach (Entity result in entityCollection.Entities) { if (result.Contains("name")) { Console.WriteLine("name ->" + result.GetAttributeValue<string>("name").ToString()); } } } Early Bound: private void RetrieveMultiple() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { QueryExpression RetrieveAccountsQuery = new QueryExpression { EntityName = Account.EntityLogicalName, ColumnSet = new ColumnSet("name", "accountnumber"), Criteria = new FilterExpression { Conditions = { new ConditionExpression { AttributeName = "address1_city", Operator = ConditionOperator.Equal, Values = { "Delhi" } } } } }; EntityCollection entityCollection = crmService.RetrieveMultiple(RetrieveAccountsQuery); foreach (Entity result in entityCollection.Entities) { if (result.Contains("name")) { Console.WriteLine("name ->" + result.GetAttributeValue<string> ("name").ToString()); } } } } Delete This method is used to delete entity records from the CRM database. This methods takes the entityname and primaryid fields as parameters: private void Delete() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { crmService.Delete("account", new Guid("85A882EE-A500-E511- 80F9-C4346BAC0E7C")); } } Associate This method is used to setup a link between two related entities. It has the following parameters: Entity Name: This is the logical name of the primary entity Entity Id: This is the primary entity records it field. Relationship: This is the name of the relationship between two entities Related Entities: This is the correction of references The following is an example of using this method with an early bound: private void Associate() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { EntityReferenceCollection referenceEntities = new EntityReferenceCollection(); referenceEntities.Add(new EntityReference("account", new Guid("38FC3E74-B30B-E511-80FC-C4346BAD26CC"))); // Create an object that defines the relationship between the contact and account (we want to setup primary contact) Relationship relationship = new Relationship("account_primary_contact"); //Associate the contact with the accounts. crmService.Associate("contact", new Guid("38FC3E74-B30B- E511-80FC-C4346BAD26CC "), relationship, referenceEntities); } } Disassociate This method is the reverse of the associate. It is used to remove a link between two entity records. This method takes the same setup of parameter as associate method takes. 
The following is an example of a disassociate account and contact record: private void Disassociate() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { EntityReferenceCollection referenceEntities = new EntityReferenceCollection(); referenceEntities.Add(new EntityReference("account", new Guid("38FC3E74-B30B-E511-80FC-C4346BAD26CC "))); // Create an object that defines the relationship between the contact and account. Relationship relationship = new Relationship("account_primary_contact"); //Disassociate the records. crmService.Disassociate("contact", new Guid("15FC3E74- B30B-E511-80FC-C4346BAD26CC "), relationship, referenceEntities); } } Execute Apart from the common method that we discussed, the execute method helps to execute requests that is not available as a direct method. This method takes a request as a parameter and returns the response as a result. All the common methods that we used previously can also be used as a request with the execute method. The following is an example of working with metadata and creating a custom event entity using the execute method: private void Usingmetadata() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { CreateEntityRequest createRequest = new CreateEntityRequest { Entity = new EntityMetadata { SchemaName = "him_event", DisplayName = new Label("Event", 1033), DisplayCollectionName = new Label("Events", 1033), Description = new Label("Custom entity demo", 1033), OwnershipType = OwnershipTypes.UserOwned, IsActivity = false, }, PrimaryAttribute = new StringAttributeMetadata { SchemaName = "him_eventname", RequiredLevel = new AttributeRequiredLevelManagedProperty(AttributeRequiredLevel.None), MaxLength = 100, FormatName = StringFormatName.Text, DisplayName = new Label("Event Name", 1033), Description = new Label("Primary attribute demo", 1033) } }; crmService.Execute(createRequest); } } In the preceding code, we have utilized the CreateEntityRequest class, which is used to create a custom entity. After executing the preceding code, we can check out the entity under the default solution by navigating to Settings | Customizations | Customize the System. You can refer to https://msdn.microsoft.com/en-us/library/gg309553.aspx to see other requests that we can use with the execute method. Testing the console application After adding the preceding methods, we can test our console application by writing a simple test method where we can call our CRUD methods, for example, in the following example, we have added method in our Earlybound.cs. public void EarlyboundTesting() { Console.WriteLine("Creating Account Record....."); CreateAccount(); Console.WriteLine("Updating Account Record....."); Update(); Console.WriteLine("Retriving Account Record....."); Retrieve(); Console.WriteLine("Deleting Account Record....."); Delete(); } After that we can call this method in Main method of Program.cs file like below: static void Main(string[] args) { Earlybound obj = new Earlybound(); Console.WriteLine("Testing Early bound"); obj.EarlyboundTesting(); } Press F5 to run your console application. Summary In this article, you learned about the Microsoft Dynamics CRM 2015 SDK feature. We discussed various options that are available in CRM SDK. You learned about the different CRM APIs and their uses. You learned about different programming models in CRM to work with CRM SDK using different methods of CRM web services, and we created a sample console application. 
Resources for Article: Further resources on this subject: Attracting Leads and Building Your List [article] PostgreSQL in Action [article] Auto updating child records in Process Builder [article]

Attracting Leads and Building Your List

Packt
14 Oct 2015
9 min read
In this article, by Paul Sokol, author of the book Infusionsoft Cookbook, we will learn how to create a Contact Us form. Also, we would learn building a lead magnet delivery. Infusionsoft invented a holistic business strategy and customer experience journey framework named Lifecycle Marketing. There are three phases in Lifecycle Marketing: Attract, Sell, and Wow. This article concerns itself with different tactics to attract and capture leads. Any business can use these recipes in one way or another. How you use them is up to you. Be creative! (For more resources related to this topic, see here.) Creating a Contact Us form Every website needs to have some method for people to make general inquiries. This is particularly important for service-based businesses that operate locally. If a website is missing a simple Contact Us form that means good leads from our hard-earned traffic are slipping away. Fixing this hole in our online presence creates another lead channel for the business. Getting ready We need to edit a new campaign and have some manner of getting a form on our site (either ourselves or via webmaster). How to do it... Drag out a new traffic source, a web form goal, and a sequence. Connect them together as shown in the following image and rename all elements for visual clarity: Double-click on the web form goal to edit its content. Add the following four fields to the form:      First Name      Last Name      Email      Phone (can be left as optional) The following screenshot shows these four fields: Create a custom Text Area field for inquiry comments. Add this custom field to the form using the Other snippet and leave as optional: Click on the Submit button to change the call to action. Change the Button Label button to Please Contact Me and select Center alignment; click on Save. Add a Title snippet above all the fields and provide some instruction for the visitor as follows: Click on the Thank-you Page tab at the top-left side of the page. Remove all elements and replace them with a single Title snippet with a confirmation message for the visitor: Click on the Draft button in the upper-right side of the page to change the form to Ready. Click on Back to Campaign in the upper-left side of the page and open the connected sequence. Drag out a new Task step. Connect and rename it appropriately: Double-click on the Task step and configure it accordingly. Don't forget to merge in any appropriate information or instructions for the end user: Click on the Draft button in the upper-right corner of the page to change the task to Ready. Click on Back to Sequence in the upper-left corner of the page. Click on the Draft button in the upper-right corner of the page to change the sequence to Ready. Click on Back to Campaign in the upper-left corner of the page and publish the campaign. Place the Contact Us form on our website. How it works... When a website visitor fills out the form, a task is created for someone to follow up with that visitor. There's more... For a better experience, add a request received e-mail in the post-form sequence to establish an inbox relationship. Be sure to respect their e-mail preferences as this kind of form submission isn't providing direct consent to be marketed to. This recipe out-of-the-box creates a dead end after the form submission. It is recommended to drive traffic from the thank you page somewhere else to capitalize on visitor momentum because they are very engaged after submitting a form. 
For example, we could point people to follow us on a particular social network, an FAQ page on our site, or our blog. We can merge any captured information onto the thank you page. Use this to create a personalized experience for your brand voice: We can add/remove form fields based on our needs. Just remember that a Contact Us form is for general inquiries and should be kept simple to reduce conversion friction; the fewer fields the better. If we want to segment inquiries based on their type, we can use a radio button to segment inquiry types without sacrificing a custom field because the form's radio buttons can be used within a decision node directly coming out of the form. See Also For a template similar to this recipe, download the Automate Contact Requests campaign from the Marketplace. Building a lead magnet delivery A lead magnet is exactly what it sounds like: it is something designed to attract new leads like a magnet. Offering some digital resource in exchange for contact information is a common example of a lead magnet. A lead magnet can take many different forms, such as: PDF E-book Slideshow Audio file This is by no means an exhaustive list either. Automating the delivery and follow-up of a lead magnet is a simple and very powerful way to save time and get organized. This recipe shows how to build a mechanism for capturing interested leads, delivering an online lead magnet via e-mail, and following up with people who download it. Getting ready We need to have the lead magnet hosted somewhere publicly that is accessible via a URL and be editing a new campaign. How to do it... Drag out a new traffic source, web form goal, link click goal, and two sequences. Connect them as shown in the following image and rename all elements for visual clarity: Create a campaign link and set it as the public download URL for the lead magnet. Double-click on the web form goal to edit its content. Design the form to include:     A Title snippet with the lead magnet's name and a call to action     A first name and e-mail field     A call to action The form should look as follows: Set a confirmation message driving the visitor to their e-mail on the thank you page: Mark the form as Ready, go Back to Campaign, and open the first sequence. Drag out a new Email step, connect, and rename it appropriately: Double-click on the Email step and write a simple delivery message. Make sure the download link(s) in the e-mail are using the campaign link. For best results, thank the person and tease some of the information contained in the lead magnet: Mark the e-mail as Ready and go Back to Sequence. Mark the Sequence as Ready to go Back to Campaign. Double-click on the link click goal. Check the download link within the e-mail and go Back to Campaign: Open the post-link click goal sequence. Drag out a Delay Timer, an Email step, and connect them accordingly. Configure the Delay Timer to wait 1 day then run in the morning and rename the Email step: Double-click on the Email step and write a simple download follow-up. Make sure it furthers the sales conversation, feels personal, and gives a clear next step: Mark the e-mail as Ready and go Back to Sequence. Mark the sequence as Ready and go Back to Campaign; publish the campaign. Place the lead magnet request form on our website. Promote this new offering across social media to drive some initial traffic. How it works... When a visitor fills out the lead magnet request form, Infusionsoft immediately sends them an e-mail with a download link for the lead magnet. 
Then, it waits until that person clicks on the download link. When this happens, Infusionsoft waits 1 day then sends a follow-up e-mail addressing the download behavior. There's more... If the lead magnet is less than 10 MB, we can upload it to Infusionsoft's file box and grab a hosted URL from there. If the lead magnet is more than 10 MB, use a cloud-based file-sharing service that offers public URLs such as Dropbox, Google Drive, or Box. Leveraging a campaign link ensures updating the resource is easy; especially if the link is used in multiple places. We can also use a campaign merge field for the lead magnet title to ensure scalability and easy duplication of this campaign. It is important the word Email is present in the form's Submit button. This primes them for inbox engagement and creates clear expectations for what will occur after they request the lead magnet. The download follow-up should get a conversation going and feel really personal. This tactic can bubble up hot clients; people appreciate it when others pay attention to them. For a more personal experience, the lead magnet delivery e-mail(s) can come from the company and the follow-up can come directly from an individual. Not everyone is going to download the lead magnet right away. Add extra reminder e-mails into the mix, one at three days and then one at a week, to ensure those who are genuinely interested don't slip through the cracks. Add a second form on the backend that collects addresses to ship a physical copy if appropriate. This would work well for a physical print of an e-book, a burned CD of an audio file, or a DVD of video content. This builds your direct mail database and helps further segment those who are most engaged and trusting. We can also leverage a second form to collect other information like a phone number or e-mail subscription preferences. Adding an image of the lead magnet to the page containing the request web form can boost conversions. Even if there is never a physical version, there are lots of tools out there to create a digital image of an e-book, CD, report, and more. This recipe is using a web form. We can also leverage a formal landing page at the beginning if desired. Although we can tag those who request the lead magnet, we don't have to because a Campaign Goal Completion report can show us all the people who have submitted the form. We would only need to tag them in instances where the goal completion needs to be universally searchable (for instance, doing an order search via a goal completion tag). Summary In this article, we learned how to make a Contact Us form. We also discussed one of the phases of Lifecycle Marketing (Attract) and learned how to build a lead magnet delivery. Resources for Article: Further resources on this subject: Asynchronous Communication between Components [article] Introducing JAX-RS API [article] The API in Detail [article]

Getting Places

Packt
13 Oct 2015
8 min read
In this article by Nafiul Islam, the author of Mastering Pycharm, we'll learn all about navigation. It is divided into three parts. The first part is called Omni, which deals with getting to anywhere from any place. The second is called Macro, which deals with navigating to places of significance. The third and final part is about moving within a file and it is called Micro. By the end of this article, you should be able to navigate freely and quickly within PyCharm, and use the right tool for the job to do so. Veteran PyCharm users may not find their favorite navigation tool mentioned or explained. This is because the methods of navigation described throughout this article will lead readers to discover their own tools that they prefer over others. (For more resources related to this topic, see here.) Omni In this section, we will discuss the tools that PyCharm provides for a user to go from anywhere to any place. You could be in your project directory one second, the next, you could be inside the Python standard library or a class in your file. These tools are generally slow or at least slower than more precise tools of navigation provided. Back and Forward The Back and Forward actions allow you to move your cursor back to the place where it was previously for more than a few seconds or where you've made edits. This information persists throughout sessions, so even if you exit the IDE, you can still get back to the positions that you were in before you quit. This falls into the Omni category because these two actions could potentially get you from any place within a file to any place within a file in your directory (that you have been to) to even parts of the standard library that you've looked into as well as your third-party Python packages. The Back and Forward actions are perhaps two of my most used navigation actions, and you can use Keymap. Or, one can simply click on the Navigate menu to see the keyboard shortcuts: Macro The difference between Macro and Omni is subtle. Omni allows you to go to the exact location of a place, even a place of no particular significance (say, the third line of a documentation string) in any file. Macro, on the other hand, allows you to navigate anywhere of significance, such as a function definition, class declaration, or particular class method. Go to definition or navigate to declaration Go to definition is the old name for Navigate to Declaration in PyCharm. This action, like the one previously discussed, could lead you anywhere—a class inside your project or a third party library function. What this action does is allow you to go to the source file declaration of a module, package, class, function, and so on. Keymap is once again useful in finding the shortcut for this particular action. Using this action will move your cursor to the file where the class or function is declared, may it be in your project or elsewhere. Just place your cursor on the function or class and invoke the action. Your cursor will now be directly where the function or class was declared. There is, however, a slight problem with this. If one tries to go to the declaration of a .so object, such as the datetime module or the select module, what one will encounter is a stub file (discussed in detail later). These are helper files that allow PyCharm to give you the code completion that it does. Modules that are .so files are indicated by a terminal icon, as shown here: Search Everywhere The action speaks for itself. You search for classes, files, methods, and even actions. 
Universally invoked using double Shift (pressing Shift twice in quick succession), this nifty action looks similar to any other search bar. Search Everywhere searches only inside your project, by default; however, one can also use it to search non-project items as well. Not using this option leads to faster search and a lower memory footprint. Search Everywhere is a gateway to other search actions available in PyCharm. In the preceding screenshot, one can see that Search Everywhere has separate parts, such as Recent Files and Classes. Each of these parts has a shortcut next to their section name. If you find yourself using Search Everywhere for Classes all the time, you might start using the Navigate Class action instead which is much faster. The Switcher tool The Switcher tool allows you to quickly navigate through your currently open tabs, recently opened files as well as all of your panels. This tool is essential since you always navigate between tabs. A star to the left indicates open tabs; everything else is a recently opened or edited file. If you just have one file open, Switcher will show more of your recently opened files. It's really handy this way since almost always the files that you want to go to are options in Switcher. The Project panel The Project panel is what I use to see the structure of my project as well as search for files that I can't find with Switcher. This panel is by far the most used panel of all, and for good reason. The Project panel also supports search; just open it up and start typing to find your file. However, the Project panel can give you even more of an understanding of what your code looks similar to if you have Show Members enabled. Once this is enabled, you can see the classes as well as the declared methods inside your files. Note that search works just like before, meaning that your search is limited to only the files/objects that you can see; if you collapse everything, you won't be able to search either your files or the classes and methods in them. Micro Micro deals with getting places within a file. These tools are perhaps what I end up using the most in my development. The Structure panel The Structure panel gives you a bird's eye view of the file that you are currently have your cursor on. This panel is indispensable when trying to understand a project that one is not familiar with. The yellow arrow indicates the option to show inherited fields and methods. The red arrow indicates the option to show field names, meaning if that it is turned off, you will only see properties and methods. The orange arrow indicates the option to scroll to and from the source. If both are turned on (scroll to and scroll from), where your cursor is will be synchronized with what method, field, or property is highlighted in the structure panel. Inherited fields are grayed out in the display. Ace Jump This is my favorite navigation plugin, and was made by John Lindquist who is a developer at JetBrains (creators of PyCharm). Ace Jump is inspired from the Emacs mode with the same name. It allows you to jump from one place to another within the same file. Before one can use Ace Jump, one has to install the plugin for it. Ace Jump is usually invoked using Ctrl or command + ; (semicolon). You can search for Ace Jump in Keymap as well, and is called Ace Jump. Once invoked, you get a small box in which you can input a letter. Choose a letter from the word that you want to navigate to, and you will see letters on that letter pop up immediately. 
If we were to hit D, the cursor would move to the position indicated by D. This might seem long winded, but it actually leads to really fast navigation. If we wanted to select the word indicated by the letter, then we'd invoke Ace Jump twice before entering a letter. This turns the Ace Jump box red. Upon hitting B, the named parameter rounding will be selected. Often, we don't want to go to a word, but rather the beginning or the end of a line. In order to do this, just hit invoke Ace Jump and then the left arrow for line beginnings or the right arrow for line endings. In this case, we'd just hit V to jump to the beginning of the line that starts with num_type. This is an example, where we hit left arrow instead of the right one, and we get line-ending options. Summary In this article, I discussed some of the best tools for navigation. This is by no means an exhaustive list. However, these tools will serve as a gateway to more precise tools available for navigation in PyCharm. I generally use Ace Jump, Back, Forward, and Switcher the most when I write code. The Project panel is always open for me, with the most used files having their classes and methods expanded for quick search. Resources for Article: Further resources on this subject: Enhancing Your Blog with Advanced Features [article] Adding a developer with Django forms [article] Deployment and Post Deployment [article]

Creating a graph application with Python, Neo4j, Gephi & Linkurious.js

Greg Roberts
12 Oct 2015
13 min read
I love Python, and to celebrate Packt Python week, I’ve spent some time developing an app using some of my favorite tools. The app is a graph visualization of Python and related topics, as well as showing where all our content fits in. The topics are all StackOverflow tags, related by their co-occurrence in questions on the site. The app is available to view at http://gregroberts.github.io/ and in this blog, I’m going to discuss some of the techniques I used to construct the underlying dataset, and how I turned it into an online application. Graphs, not charts Graphs are an incredibly powerful tool for analyzing and visualizing complex data. In recent years, many different graph database engines have been developed to make use of this novel manner of representing data. These databases offer many benefits over traditional, relational databases because of how the data is stored and accessed. Here at Packt, I use a Neo4j graph to store and analyze data about our business. Using the Cypher query language, it’s easy to express complicated relations between different nodes succinctly. It’s not just the technical aspect of graphs which make them appealing to work with. Seeing the connections between bits of data visualized explicitly as in a graph helps you to see the data in a different light, and make connections that you might not have spotted otherwise. This graph has many uses at Packt, from customer segmentation to product recommendations. In the next section, I describe the process I use to generate recommendations from the database. Make the connection For product recommendations, I use what’s known as a hybrid filter. This considers both content based filtering (product x and y are about the same topic) and collaborative filtering (people who bought x also bought y). Each of these methods has strengths and weaknesses, so combining them into one algorithm provides a more accurate signal. The collaborative aspect is straightforward to implement in Cypher. For a particular product, we want to find out which other product is most frequently bought alongside it. We have all our products and customers stored as nodes, and purchases are stored as edges. Thus, the Cypher query we want looks like this: MATCH (n:Product {title:’Learning Cypher’})-[r:purchased*2]-(m:Product) WITH m.title AS suggestion, count(distinct r)/(n.purchased+m.purchased) AS alsoBought WHERE m<>n RETURN* ORDER BY alsoBought DESC and will very efficiently return the most commonly also purchased product. When calculating the weight, we divide by the total units sold of both titles, so we get a proportion returned. We do this so we don’t just get the titles with the most units; we’re effectively calculating the size of the intersection of the two titles’ audiences relative to their overall audience size. The content side of the algorithm looks very similar: MATCH (n:Product {title:’Learning Cypher’})-[r:is_about*2]-(m:Product) WITH m.title AS suggestion, count(distinct r)/(length(n.topics)+length(m.topics)) AS alsoAbout WHERE m<>n RETURN * ORDER BY alsoAbout DESC Implicit in this algorithm is knowledge that a title is_about a topic of some kind. This could be done manually, but where’s the fun in that? In Packt’s domain there already exists a huge, well moderated corpus of technology concepts and their usage: StackOverflow. 
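Before moving on to the StackOverflow tags themselves, here is a minimal sketch of the blending step that makes the filter "hybrid". The titles, scores, and the equal 50/50 weighting below are illustrative assumptions of mine, not Packt's production values; in the real pipeline each dictionary would be populated from the results of the two Cypher queries above.

# Hypothetical per-title scores; in practice these would come from the
# alsoBought and alsoAbout Cypher queries shown above.
also_bought = {'Learning Neo4j': 0.31, 'Python Data Analysis': 0.22, 'Learning Pandas': 0.05}
also_about = {'Learning Neo4j': 0.12, 'Python Data Analysis': 0.40, 'Mastering D3.js': 0.08}

def hybrid_scores(collaborative, content, w_collab=0.5, w_content=0.5):
    """Blend two score dictionaries into one ranking, highest score first."""
    candidates = set(collaborative) | set(content)
    blended = {}
    for title in candidates:
        blended[title] = (w_collab * collaborative.get(title, 0.0) +
                          w_content * content.get(title, 0.0))
    return sorted(blended.items(), key=lambda pair: pair[1], reverse=True)

for title, score in hybrid_scores(also_bought, also_about):
    print('%.3f %s' % (score, title))

A title that only one of the filters has seen simply contributes a zero from the other, which is part of why combining the two signals is more robust than relying on either one alone.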
The tagging system on StackOverflow not only tells us about all the topics developers across the world are using, it also tells us how those topics are related, by looking at the co-occurrence of tags in questions. So in our graph, StackOverflow tags are nodes in their own right, which represent topics. These nodes are connected via edges, which are weighted to reflect their co-occurrence on StackOverflow:

edge_weight(n,m) = (Number of questions tagged with both n & m) / (Number of questions tagged with n or m)

So, to find topics related to a given topic, we could execute a query like this:

MATCH (n:StackOverflowTag {name:'Matplotlib'})-[r:related_to]-(m:StackOverflowTag)
RETURN n.name, r.weight, m.name
ORDER BY r.weight DESC LIMIT 10

Which would return the following:

   | n.name     | r.weight | m.name
---+------------+----------+--------------------
 1 | Matplotlib | 0.065699 | Plot
 2 | Matplotlib | 0.045678 | Numpy
 3 | Matplotlib | 0.029667 | Pandas
 4 | Matplotlib | 0.023623 | Python
 5 | Matplotlib | 0.023051 | Scipy
 6 | Matplotlib | 0.017413 | Histogram
 7 | Matplotlib | 0.015618 | Ipython
 8 | Matplotlib | 0.013761 | Matplotlib Basemap
 9 | Matplotlib | 0.013207 | Python 2.7
10 | Matplotlib | 0.012982 | Legend

There are many more complex relationships you can define between topics like this, too. You can infer directionality in the relationship by looking at the local network, or you could start constructing hypergraphs using the extensive StackExchange API. So we have our topics, but we still need to connect our content to them. To do this, I've used a two-stage process.

Step 1 – Parsing out the topics

We take all the copy (words) pertaining to a particular product as a document representing that product. This includes the title, chapter headings, and all the copy on the website. We use this because it's already been optimized for search, and should thus carry a fair representation of what the title is about. We then parse this document and keep all the words which match the topics we've previously imported.

# ...code for fetching all the copy for all the products
key_re = r'\W(%s)\W' % '|'.join(re.escape(i) for i in topic_keywords)
for i in documents:
    tags = re.findall(key_re, i['copy'])
    i['tags'] = map(lambda x: tag_lookup[x], tags)

Having done this for each product, we have a bag of words representing each product, where each word is a recognized topic.

Step 2 – Finding the information

From each of these documents, we want to know the topics which are most important for that document. To do this, we use the tf-idf algorithm. tf-idf stands for term frequency, inverse document frequency. The algorithm takes the number of times a term appears in a particular document, and divides it by the proportion of the documents that the word appears in. The term frequency factor boosts terms which appear often in a document, whilst the inverse document frequency factor gets rid of terms which are overly common across the entire corpus (for example, the term 'programming' is common in our product copy, and whilst most of the documents ARE about programming, this doesn't provide much discriminating information about each document). To do all of this, I use Python (obviously) and the excellent scikit-learn library. Tf-idf is implemented in the class sklearn.feature_extraction.text.TfidfVectorizer. This class has lots of options you can fiddle with to get more informative results.
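Before looking at the TfidfVectorizer configuration used in the real pipeline, a tiny hand-rolled version of the weighting may help make it concrete. This is just the textbook definition described above, applied to three made-up topic bags of my own; scikit-learn adds smoothing and l2 normalisation, so its exact numbers will differ.

import math
from collections import Counter

# Three made-up documents, already reduced to bags of recognised topic words
# as in step 1 above.
docs = [
    ['python', 'numpy', 'matplotlib', 'python'],
    ['python', 'django', 'rest'],
    ['python', 'numpy', 'scipy'],
]

# Document frequency: in how many documents does each term appear?
df = Counter()
for doc in docs:
    df.update(set(doc))

# tf-idf weight: (term count / document length) * log(total docs / document frequency)
for doc in docs:
    tf = Counter(doc)
    weights = dict((term, (count / float(len(doc))) * math.log(len(docs) / float(df[term])))
                   for term, count in tf.items())
    print(sorted(weights.items(), key=lambda kv: kv[1], reverse=True))

Note how 'python', which occurs in every document, ends up with a weight of zero, which is exactly the effect described above for overly common terms such as 'programming'.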
import sklearn.feature_extraction.text as skt tagger = skt.TfidfVectorizer(input = 'content', encoding = 'utf-8', decode_error = 'replace', strip_accents = None, analyzer = lambda x: x, ngram_range = (1,1), max_df = 0.8, min_df = 0.0, norm = 'l2', sublinear_tf = False) It’s a good idea to use the min_df & max_df arguments of the constructor so as to cut out the most common/obtuse words, to get a more informative weighting. The ‘analyzer’ argument tells it how to get the words from each document, in our case, the documents are already lists of normalized words, so we don’t need anything additional done. #create vectors of all the documents vectors = tagger.fit_transform(map(lambda x: x['tags'],rows)).toarray() #get back the topic names to map to the graph t_map = tagger.get_feature_names() jobs = [] for ind, vec in enumerate(vectors): features = filter(lambda x: x[1]>0, zip(t_map,vec)) doc = documents[ind] for topic, weight in features: job = ‘’’MERGE (n:StackOverflowTag {name:’%s’}) MERGE (m:Product {id:’%s’}) CREATE UNIQUE (m)-[:is_about {source:’tf_idf’,weight:%d}]-(n) ’’’ % (topic, doc[‘id’], weight) jobs.append(job) We then execute all of the jobs using Py2neo’s Batch functionality. Having done all of this, we can now relate products to each other in terms of what topics they have in common: MATCH (n:Product {isbn10:'1783988363'})-[r:is_about]-(a)-[q:is_about]-(m:Product {isbn10:'1783289007'}) WITH a.name as topic, r.weight+q.weight AS weight RETURN topic ORDER BY weight DESC limit 6 Which returns: | topic ---+------------------ 1 | Machine Learning 2 | Image 3 | Models 4 | Algorithm 5 | Data 6 | Python Huzzah! I now have a graph into which I can throw any piece of content about programming or software, and it will fit nicely into the network of topics we’ve developed. Take a breath So, that’s how the graph came to be. To communicate with Neo4j from Python, I use the excellent py2neo module, developed by Nigel Small. This module has all sorts of handy abstractions to allow you to work with nodes and edges as native Python objects, and then update your Neo instance with any changes you’ve made. The graph I’ve spoken about is used for many purposes across the business, and has grown in size and scope significantly over the last year. For this project, I’ve taken from this graph everything relevant to Python. I started by getting all of our content which is_about Python, or about a topic related to python: titles = [i.n for i in graph.cypher.execute('''MATCH (n)-[r:is_about]-(m:StackOverflowTag {name:'Python'}) return distinct n''')] t2 = [i.n for i in graph.cypher.execute('''MATCH (n)-[r:is_about]-(m:StackOverflowTag)-[:related_to]-(m:StackOverflowTag {name:'Python'}) where has(n.name) return distinct n''')] titles.extend(t2) then hydrated this further by going one or two hops down each path in various directions, to get a large set of topics and content related to Python. Visualising the graph Since I started working with graphs, two visualisation tools I’ve always used are Gephi and Sigma.js. Gephi is a great solution for analysing and exploring graphical data, allowing you to apply a plethora of different layout options, find out more about the statistics of the network, and to filter and change how the graph is displayed. Sigma.js is a lightweight JavaScript library which allows you to publish beautiful graph visualizations in a browser, and it copes very well with even very large graphs. 
Gephi has a great plugin which allows you to export your graph straight into a web page which you can host, share and adapt. More recently, Linkurious have made it their mission to bring graph visualization to the masses. I highly advise trying the demo of their product. It really shows how much value it’s possible to get out of graph based data. Imagine if your Customer Relations team were able to do a single query to view the entire history of a case or customer, laid out as a beautiful graph, full of glyphs and annotations. Linkurious have built their product on top of Sigma.js, and they’ve made available much of the work they’ve done as the open source Linkurious.js. This is essentially Sigma.js, with a few changes to the API, and an even greater variety of plugins. On Github, each plugin has an API page in the wiki and a downloadable demo. It’s worth cloning the repository just to see the things it’s capable of! Publish It! So here’s the workflow I used to get the Python topic graph out of Neo4j and onto the web. Use Py2neo to graph the subgraph of content and topics pertinent to Python, as described above Add to this some other topics linked to the same books to give a fuller picture of the Python “world” Add in topic-topic edges and product-product edges to show the full breadth of connections observed in the data Export all the nodes and edges to csv files Import node and edge tables into Gephi. The reason I’m using Gephi as a middle step is so that I can fiddle with the visualisation in Gephi until it looks perfect. The layout plugin in Sigma is good, but this way the graph is presentable as soon as the page loads, the communities are much clearer, and I’m not putting undue strain on browsers across the world! The layout of the graph has been achieved using a number of plugins. Instead of using the pre-installed ForceAtlas layouts, I’ve used the OpenOrd layout, which I feel really shows off the communities of a large graph. There’s a really interesting and technical presentation about how this layout works here. Export the graph into gexf format, having applied some partition and ranking functions to make it more clear and appealing. Now it’s all down to Linkurious and its various plugins! You can explore the source code of the final page to see all the details, but here I’ll give an overview of the different plugins I’ve used for the different parts of the visualisation: First instantiate the graph object, pointing to a container (note the CSS of the container, without this, the graph won’t display properly: <style type="text/css"> #container { max-width: 1500px; height: 850px; margin: auto; background-color: #E5E5E5; } </style> … <div id="container"></div> … <script> s= new sigma({ container: 'container', renderer: { container: document.getElementById('container'), type: 'canvas' }, settings: { … } }); sigma.parsers.gexf - used for (trivially!) importing a gexf file into a sigma instance sigma.parsers.gexf( 'static/data/Graph1.gexf', s, function(s) { //callback executed once the data is loaded, use this to set up any aspects of the app which depend on the data }); sigma.plugins.filter - Adds the ability to very simply hide nodes/edges based on a callback function which returns a boolean. This powers the filtering widgets on the page. 
<input class="form-control" id="min-degree" type="range" min="0" max="0" value="0"> … function applyMinDegreeFilter(e) { var v = e.target.value; $('#min-degree-val').textContent = v; filter .undo('min-degree') .nodesBy( function(n, options) { return this.graph.degree(n.id) >= options.minDegreeVal; },{ minDegreeVal: +v }, 'min-degree' ) .apply(); }; $('#min-degree').change(applyMinDegreeFilter); sigma.plugins.locate - Adds the ability to zoom in on a single node or collection of nodes. Very useful if you’re filtering a very large initial graph function locateNode (nid) { if (nid == '') { locate.center(1); } else { locate.nodes(nid); } }; sigma.renderers.glyphs - Allows you to add custom glyphs to each node. Useful if you have many types of node. Outro This application has been a very fun little project to build. The improvements to Sigma wrought by Linkurious have resulted in an incredibly powerful toolkit to rapidly generate graph based applications with a great degree of flexibility and interaction potential. None of this would have been possible were it not for Python. Python is my right (left, I’m left handed) hand which I use for almost everything. Its versatility and expressiveness make it an incredibly robust Swiss army knife in any data-analysts toolkit.

Running Firefox OS Simulators with WebIDE

Packt
12 Oct 2015
9 min read
In this article by Tanay Pant, the author of the book, Learning Firefox OS Application Development, you will learn how to use WebIDE and its features. We will start by installing Firefox OS simulators in the WebIDE so that we can run and test Firefox OS applications in it. Then, we will study how to install and create new applications with WebIDE. Finally, we will cover topics such as using developer tools for applications that run in WebIDE, and uninstalling applications in Firefox OS. In brief, we will go through the following topics: Getting to know about WebIDE Installing Firefox OS simulator Installing and creating new apps with WebIDE Using developer tools inside WebIDE Uninstalling applications in Firefox OS (For more resources related to this topic, see here.) Introducing WebIDE It is now time to have a peek at Firefox OS. You can test your applications in two ways, either by running it on a real device or by running it in Firefox OS Simulator. Let's go ahead with the latter option since you might not have a Firefox OS device yet. We will use WebIDE, which comes preinstalled with Firefox, to accomplish this task. If you haven't installed Firefox yet, you can do so from https://www.mozilla.org/en-US/firefox/new/. WebIDE allows you to install one or several runtimes (different versions) together. You can use WebIDE to install different types of applications, debug them using Firefox's Developer Tools Suite, and edit the applications/manifest using the built-in source editor. After you install Firefox, open WebIDE. You can open it by navigating to Tools | Web Developer | WebIDE. Let's now take a look at the following screenshot of WebIDE: You will notice that on the top-right side of your window, there is a Select Runtime option. When you click on it, you will see the Install Simulator option. Select that option, and you will see a page titled Extra Components. It presents a list of Firefox OS simulators. We will install the latest stable and unstable versions of Firefox OS. We installed two versions of Firefox OS because we would need both the latest and stable versions to test our applications in the future. After you successfully install both the simulators, click on Select Runtime. This will now show both the OS versions listed, as shown in the following screenshot:. Let's open Firefox OS 3.0. This will open up a new window titled B2G. You should now explore Firefox OS, take a look at its applications, and interact with them. It's all HTML, CSS and JavaScript. Wonderful, isn't it? Very soon, you will develop applications like these:` Installing and creating new apps using WebIDE To install or create a new application, click on Open App in the top-left corner of the WebIDE window. You will notice that there are three options: New App, Open Packaged App, and Open Hosted App. For now, think of Hosted apps like websites that are served from a web server and are stored online in the server itself but that can still use appcache and indexeddb to store all their assets and data offline, if desired. Packaged apps are distributed in a .zip format and they can be thought of as the source code of the website bundled and distributed in a ZIP file. Let's now head to the first option in the Open App menu, which is New App. Select the HelloWorld template, enter Project Name, and click on OK. After completing this, the WebIDE will ask you about the directory where you want to store the application. I have made a new folder named Hello World for this purpose on the desktop. 
Now, click on Open button and finally, click again on the OK button. This will prepare your app and show details, such as Title, Icon, Description, Location and App ID of your application. Note that beneath the app title, it says Packaged Web. Can you figure out why? As we discussed, it is because of the fact that we are not serving the application online, but from a packaged directory that holds its source code. This covers the right-hand side panel. In the left-hand side panel, we have the directory listing of the application. It contains an icon folder that holds different-sized icons for different screen resolutions It also contains the app.js file, which is the engine of the application and will contain the functionality of the application; index.html, which will contain the markup data for the application; and finally, the manifest.webapp file, which contains crucial information and various permissions about the application. If you click on any filename, you will notice that the file opens in an in-browser editor where you can edit the files to make changes to your application and save them from here itself. Let's make some edits in the application— in app.js and index.html. I have replaced World with Firefox everywhere to make it Hello Firefox. Let's make the same changes in the manifest file. The manifest file contains details of your application, such as its name, description, launch path, icons, developer information, and permissions. These details are used to display information about your application in the WebIDE and Firefox Marketplace. The manifest file is in JSON format. I went ahead and edited developer information in the application as well, to include my name and my website. After saving all the files, you will notice that the information of the app in the WebIDE has changed! It's now time to run the application in Firefox OS. Click on Select Runtime and fire up Firefox OS 3.0. After it is launched, click on the Play button in the WebIDE hovering on which is the prompt that says Install and Run. Doing this will install and launch the application on your simulator! Congratulations, you installed your first Firefox OS application! Using developer tools inside WebIDE WebIDE allows you to use Firefox's awesome developer tools for applications that run in the Simulator via WebIDE as well. To use them, simply click on the Settings icon (which looks like a wrench) beside the Install and Run icon that you had used to get the app installed and running. The icon says Debug App on hovering the cursor over it. Click on this to reveal developer tools for the app that is running via WebIDE. Click on Console, and you will see the message Hello Firefox, which we gave as the input in console.log() in the app.js file. Note that it also specifies the App ID of our application while displaying Hello Firefox. You may have noticed in the preceding illustration that I sent a command via the console alert('Hello Firefox'); and it simultaneously executed the instruction in the app running in the simulator. As you may have noticed, Firefox OS customizes the look and feel of components, such as the alert box (this is browser based). Our application is running in an iframe in Gaia. Every app, including the keyboard application, runs in an iframe for security reasons. You should go through these tools to get a hang of the debugging capabilities if you haven't done so already! 
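For reference, a minimal manifest.webapp along the lines described above might look like the following. The values here are placeholders I have filled in for our Hello Firefox example, not the file generated by the HelloWorld template, so treat it as an illustration of the fields (name, description, launch path, icons, developer information, and permissions) rather than something to copy verbatim; the developer URL in particular is a dummy value.

{
  "name": "Hello Firefox",
  "description": "A minimal example application",
  "launch_path": "/index.html",
  "icons": {
    "128": "/icon/128.png"
  },
  "developer": {
    "name": "Tanay Pant",
    "url": "http://example.com"
  },
  "permissions": {}
}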
One more important thing that you should keep in mind is that inline scripts (for example, <a href="#" onclick="alert(this)">Click Me</a>) are forbidden in Firefox OS apps, due to Content Security Policy (CSP) restrictions. CSP restrictions include the remote scripts, inline scripts, javascript URIs, function constructor, dynamic code execution, and plugins, such as Flash or Shockwave. Remote styles are also banned. Remote Web Workers and eval() operators are not allowed for security reasons and they show 400 error and security errors respectively upon usage. You are warned about CSP violations when submitting your application to the Firefox OS Marketplace. CSP warnings in the validator will not impact whether your app is accepted into the Marketplace. However, if your app is privileged and violates the CSP, you will be asked to fix this issue in order to get your application accepted. Browsing other runtime applications You can also take a look at the source code of the preinstalled/runtime apps that are present in Firefox OS or Gaia, to be precise. For example, the following is an illustration that shows how to open them: You can click on the Hello World button (in the same place where Open App used to exist), and this will show you the whole list of Runtime Apps as shown in the preceding illustration. I clicked on the Camera application and it showed me the source code of its main.js file. It's completely okay if you are daunted by the huge file. If you find these runtime applications interesting and want to contribute to them, then you can refer to Mozilla Developer Network's articles on developing Gaia, which you can find at https://developer.mozilla.org/en-US/Firefox_OS/Developing_Gaia. Our application looks as follows in the App Launcher of the operating system: Uninstalling applications in Firefox OS You can remove the project from WebIDE by clicking on the Remove Project button in the home page of the application. However, this will not uninstall the application from Firefox OS Simulator. The uninstallation system of the operating system is quite similar to iOS. You just have to double tap in OS X to get the Edit screen, from where you can click on the cross button on the top-left of the app icon to uninstall the app. You will then get a confirmation screen that warns you that all the data of the application will also be deleted along with the app. This will take you back to the Edit screen where you can click on Done to get back to the home screen. Summary In this article, you learned about WebIDE, how to install Firefox OS simulator in WebIDE, using Firefox OS and installing applications in it, and creating a skeleton application using WebIDE. You then learned how to use developer tools for applications that run in the simulator, browsing other preinstalled runtime applications present in Firefox OS. Finally, you learned about removing a project from WebIDE and uninstalling an application from the operating system. Resources for Article: Further resources on this subject: Learning Node.js for Mobile Application Development [Article] Introducing Web Application Development in Rails [Article] One-page Application Development [Article]

Swift Power and Performance

Packt
12 Oct 2015
14 min read
In this article by Kostiantyn Koval, author of the book, Swift High Performance, we will learn about Swift, its performance and optimization, and how to achieve high performance. (For more resources related to this topic, see here.) Swift speed I could guess you are interested in Swift speed and are probably wondering "How fast can Swift be?" Before we even start learning Swift and discovering all the good things about it, let's answer this right here and right now. Let's take an array of 100,000 random numbers, sort in Swift, Objective-C, and C by using a standard sort function from stdlib (sort for Swift, qsort for C, and compare for Objective-C), and measure how much time would it take. In order to sort an array with 100,000 integer elements, the following are the timings: Swift 0.00600 sec C 0.01396 sec Objective-C 0.08705 sec The winner is Swift! Swift is 14.5 times faster that Objective-C and 2.3 times faster than C. In other examples and experiments, C is usually faster than Objective-C and Swift is way faster. Comparing the speed of functions You know how functions and methods are implemented and how they work. Let's compare the performance and speed of global functions and different method types. For our test, we will use a simple add function. Take a look at the following code snippet: func add(x: Int, y: Int) -> Int { return x + y } class NumOperation { func addI(x: Int, y: Int) -> Int class func addC(x: Int, y: Int) -> Int static func addS(x: Int, y: Int) -> Int } class BigNumOperation: NumOperation { override func addI(x: Int, y: Int) -> Int override class func addC(x: Int, y: Int) -> Int } For the measurement and code analysis, we use a simple loop in which we call those different methods: measure("addC") { var result = 0 for i in 0...2000000000 { result += NumOperation.addC(i, y: i + 1) // result += test different method } print(result) } Here are the results. All the methods perform exactly the same. Even more so, their assembly code looks exactly the same, except the name of the function call: Global function: add(10, y: 11) Static: NumOperation.addS(10, y: 11) Class: NumOperation.addC(10, y: 11) Static subclass: BigNumOperation.addS(10, y: 11) Overridden subclass: BigNumOperation.addC(10, y: 11) Even though the BigNumOperation addC class function overrides the NumOperation addC function when you call it directly, there is no need for a vtable lookup. The instance method call looks a bit different: Instance: let num = NumOperation() num.addI(10, y: 11) Subclass overridden instance: let bigNum = BigNumOperation() bigNum.addI() One difference is that we need to initialize a class and create an instance of the object. In our example, this is not so expensive an operation because we do it outside the loop and it takes place only once. The loop with the calling instance method looks exactly the same. As you can see, there is almost no difference in the global function and the static and class methods. The instance method looks a bit different but it doesn't have any major impact on performance. Also, even though it's true for simple use cases, there is a difference between them in more complex examples. Let's take a look at the following code snippet: let baseNumType = arc4random_uniform(2) == 1 ? BigNumOperation.self : NumOperation.self for i in 0...loopCount { result += baseNumType.addC(i, y: i + 1) } print(result) The only difference we incorporated here is that instead of specifying the NumOperation class type in compile time, we randomly returned it at runtime. 
And because of this, the Swift compiler doesn't know what method should be called at compile time—BigNumOperation.addC or NumOperation.addC. This small change has an impact on the generated assembly code and performance. A summary of the usage of functions and methods Global functions are the simplest and give the best performance. Too many global functions, however, make the code hard to read and reason. Static type methods, which can't be overridden have the same performance as global functions, but they also provide a namespace (its type name), so our code looks clearer and there is no adverse effect on performance. Class methods, which can be overridden could lead to a decrease in performance, and they should be used when you need class inheritance. In other cases, static methods are preferred. The instance method operates on the instance of the object. Use instance methods when you need to operate on the data of that instance. Make methods final when you don't need to override them. This gives an extra tip for the compiler for optimization, and performance could be increased because of it. Intelligent code Because Swift is a static and strongly typed language, it can read, understand, and optimize code very well. It tries to avoid the execution of all unnecessary code. For a better explanation, let's take a look at this simple example: class Object { func nothing() { } } let object = Object() object.nothing() object.nothing() We create an instance of the Object class and call a nothing method. The nothing method is empty, and calling it does nothing. The Swift compiler understands this and removes those method calls. After this, we have only one line of code: let object = Object() The Swift compiler can also remove the objects created that are never used. It reduces memory usage and unnecessary function calls, which also reduces CPU usage. In our example, the object instance is not used after removing the nothing method call and the creation of object can be removed as well. In this way, Swift removes all three lines of code and we end up with no code to execute at all. Objective-C, in comparison, can't do this optimization. Because it has a dynamic runtime, the nothing method's implementation can be changed to do some work at runtime. That's why Objective-C can't remove empty method calls. This optimization might not seem like a big win but let's take a look at another—a bit more complex—example that uses more memory: class Object { let x: Int let y: Int let z: Int init(x: Int) { self.x = x self.y = x * 2 self.z = y * 2 } func nothing() { } } We have added some Int data to our Object class to increase memory usage. Now, the Object instance would use at least 24 bytes (3 * int size; Int uses 4 bytes in the 64 bit architecture). Let's also try to increase the CPU usage by adding more instructions, using a loop: for i in 0...1_000_000 { let object = Object(x: i) object.nothing() object.nothing() } print("Done") Integer literals can use the underscore sign (_) to improve readability. So, 1_000_000_000 is the same as 1000000000. Now, we have 3 million instructions and we would use 24 million bytes (about 24 MB). This is quite a lot for a type of operation that actually doesn't do anything. As you can see, we don't use the result of the loop body. For the loop body, Swift does the same optimization as in previous example and we end up with an empty loop: for i in 0...1_000_000 { } The empty loop can be skipped as well. 
As a result, we have saved 24 MB of memory usage and 3 million method calls. Dangerous functions There are some functions and instructions that sometimes don't provide any value for the application but the Swift compiler can't skip them because that could have a very negative impact on performance. Console print Printing a statement to the console is usually used for debugging purposes. The print and debugPrint instructions aren't removed from the application in release mode. Let's explore this code: for i in 0...1_000_000 { print(i) } The Swift compiler treats print and debugPrint as valid and important instructions that can't be skipped. Even though this code does nothing, it can't be optimized, because Swift doesn't remove the print statement. As a result, we have 1 million unnecessary instructions. As you can see, even very simple code that uses the print statement could decrease an application's performance very drastically. The loop with the 1_000_000 print statement takes 5 seconds, and that's a lot. It's even worse if you run it in Xcode; it would take up to 50 seconds. It gets all the more worse if you add a print instruction to the nothing method of an Object class from the previous example: func nothing() { print(x + y + z) } In that case, a loop in which we create an instance of Object and call nothing can't be eliminated because of the print instruction. Even though Swift can't eliminate the execution of that code completely, it does optimization by removing the creation instance of Object and calling the nothing method, and turns it into simple loop operation. The compiled code after optimization looks like this: // Initial Source Code for i in 0...1_000 { let object = Object(x: i) object.nothing() object.nothing() } // Optimized Code var x = 0, y = 0, z = 0 for i in 0...1_000_000 { x = i y = x * 2 z = y * 2 print(x + y + z) print(x + y + z) } As you can see, this code is far from perfect and has a lot of instructions that actually don't give us any value. There is a way to improve this code, so the Swift compiler does the same optimization as without print. Removing print logs To solve this performance problem, we have to remove the print statements from the code before compiling it. There are different ways of doing this. Comment out The first idea is to comment out all print statements of the code in release mode: //print("A") This will work but the next time when you want to enable logs, you will need to uncomment that code. This is a very bad and painful practice. But there is a better solution to it. Commented code is bad practice in general. You should be using a source code version control system, such as Git, instead. In this way, you can safely remove the unnecessary code and find it in the history if you need it later. Using a build configuration We can enable print only in debug mode. To do this, we will use a build configuration to conditionally exclude some code. First, we need to add a Swift compiler custom flag. To do this, select a project target and then go to Build Settings | Other Swift Flags. In the Swift Compiler - Custom Flags section and add the –D DEBUG flag for debug mode, like this: After this, you can use the DEBUG configuration flag to enable code only in debug mode. We will define our own print function. It will generate a print statement only in debug mode. 
In release mode, this function will be empty, and the Swift compiler will successfully eliminate it: func D_print(items: Any..., separator: String = " ", terminator: String = "n") { #if DEBUG print(items, separator: separator, terminator: terminator) #endif } Now, everywhere instead of print, we will use D_print: func nothing() { D_print(x + y + z) } You can also create a similar D_debugPrint function. Swift is very smart and does a lot of optimization, but we also have to make our code clear for people to read and for the compiler to optimize. Using a preprocessor adds complexity to your code. Use it wisely and only in situations when normal if conditions won't work, for instance, in our D_print example. Improving speed There are a few techniques that can simply improve code performance. Let's proceed directly to the first one. final You can create a function and property declaration with the final attribute. Adding the final attribute makes it non-overridable. The subclasses can't override that method or property. When you make a method non-overridable, there is no need to store it in vtable and the call to that function can be performed directly without any function address lookup in vtable: class Animal { final var name: String = "" final func feed() { } } As you have seen, final methods perform faster than non-final methods. Even such small optimization could improve an application's performance. It not only improves performance but also makes the code more secure. This way, you prevent a method from being overridden and prevent unexpected and incorrect behavior. Enabling the Whole Module Optimization setting would achieve very similar optimization results, but it's better to mark a function and property declaration explicitly as final, which would reduce the compiler's work and speed up the compilation. The compilation time for big projects with Whole Module Optimization could be up to 5 minutes in Xcode 7. Inline functions As you have seen, Swift can do optimization and inline some function calls. This way, there is no performance penalty for calling a function. You can manually enable or disable inline functions with the @inline attribute: @inline(__always) func someFunc () { } @inline(never) func someFunc () { } Even though you can manually control inline functions, it's usually better to leave it to the Swift complier to do this. Depending on the optimization settings, the Swift compiler applies different inlining techniques. The use-case for @inline(__always) would be very simple one-line functions that you always want to be inline. Value objects and reference objects There are many benefits of using immutable value types. Value objects make code not only safer and clearer but also faster. They have better speed and performance than reference objects; here is why. Memory allocation A value object can be allocated in the stack memory instead of the heap memory. Reference objects need to be allocated in the heap memory because they can be shared between many owners. Because value objects have only one owner, they can be allocated safely in the stack. Stack memory is way faster than heap memory. The second advantage is that value objects don't need reference counting memory management. As they can have only one owner, there is no such thing as reference counting for value objects. With Automatic Reference Counting (ARC) we don't think much about memory management, and it mostly looks transparent for us. 
Even though code looks the same when using reference objects and value objects, ARC adds extra retain and release method calls for reference objects. Avoiding Objective-C In most cases, Objective-C, with its dynamic runtime, performs slower than Swift. The interoperability between Swift and Objective-C is done so seamlessly that sometimes we may use Objective-C types and its runtime in the Swift code without knowing it. When you use Objective-C types in Swift code, Swift actually uses the Objective-C runtime for method dispatch. Because of that, Swift can't do the same optimization as for pure Swift types. Lets take a look at a simple example: for _ in 0...100 { _ = NSObject() } Let's read this code and make some assumptions about how the Swift compiler would optimize it. The NSObject instance is never used in the loop body, so we could eliminate the creation of an object. After that, we will have an empty loop; this can be eliminated as well. So, we remove all of the code from execution, but actually no code gets eliminated. This happens because Objective-C types use dynamic runtime method dispatch, called message sending. All standard frameworks, such as Foundation and UIKit, are written in Objective-C, and all types such as NSDate, NSURL, UIView, and UITableView use the Objective-C runtime. They do not perform as fast as Swift types, but we get all of these frameworks available for usage in Swift, and this is great. There is no way to remove the Objective-C dynamic runtime dispatch from Objective-C types in Swift, so the only thing we can do is learn how to use them wisely. Summary In this article, we covered many powerful features of Swift related to Swift's performance and gave some tips on how to solve performance-related issues. Resources for Article: Further resources on this subject: Flappy Swift[article] Profiling an app[article] Network Development with Swift [article]

Collaboration Using the GitHub Workflow

Packt
30 Sep 2015
12 min read
In this article by Achilleas Pipinellis, the author of the book GitHub Essentials, has come up with a workflow based on the features it provides and the power of Git. It has named it the GitHub workflow (https://guides.github.com/introduction/flow). In this article, we will learn how to work with branches and pull requests, which is the most powerful feature of GitHub. (For more resources related to this topic, see here.) Learn about pull requests Pull request is the number one feature in GitHub that made it what it is today. It was introduced in early 2008 and is being used extensively among projects since then. While everything else can be pretty much disabled in a project's settings (such as issues and the wiki), pull requests are always enabled. Why pull requests are a powerful asset to work with Whether you are working on a personal project where you are the sole contributor or on a big open source one with contributors from all over the globe, working with pull requests will certainly make your life easier. I like to think of pull requests as chunks of commits, and the GitHub UI helps you visualize clearer what is about to be merged in the default branch or the branch of your choice. Pull requests are reviewable with an enhanced diff view. You can easily revert them with a simple button on GitHub and they can be tested before merging, if a CI service is enabled in the project. The connection between branches and pull requests There is a special connection between branches and pull requests. In this connection, GitHub will automatically show you a button to create a new pull request if you push a new branch in your repository. As we will explore in the following sections, this is tightly coupled to the GitHub workflow, and GitHub uses some special words to describe the from and to branches. As per GitHub's documentation: The base branch is where you think changes should be applied, the head branch is what you would like to be applied. So, in GitHub terms, head is your branch, and base the branch you would like to merge into. Create branches directly in a project – the shared repository model The shared repository model, as GitHub aptly calls it, is when you push new branches directly to the source repository. From there, you can create a new pull request by comparing between branches, as we will see in the following sections. Of course, in order to be able to push to a repository you either have to be the owner or a collaborator; in other words you must have write access. Create branches in your fork – the fork and pull model Forked repositories are related to their parent in a way that GitHub uses in order to compare their branches. The fork and pull model is usually used in projects when one does not have write access but is willing to contribute. After forking a repository, you push a branch to your fork and then create a pull request in the source repository asking its maintainer to merge the changes. This is common practice to contribute to open source projects hosted on GitHub. You will not have access to their repository, but being open source, you can fork the public repository and work on your own copy. How to create and submit a pull request There are quite a few ways to initiate the creation of a pull request, as we you will see in the following sections. The most common one is to push a branch to your repository and let GitHub's UI guide you. Let's explore this option first. 
Use the Compare & pull request button Whenever a new branch is pushed to a repository, GitHub shows a quick button to create a pull request. In reality, you are taken to the compare page, as we will explore in the next section, but some values are already filled out for you. Let's create, for example, a new branch named add_gitignore where we will add a .gitignore file with the following contents: git checkout -b add_gitignore echo -e '.bundlen.sass-cachen.vendorn_site' > .gitignore git add .gitignore git commit -m 'Add .gitignore' git push origin add_gitignore Next, head over your repository's main page and you will notice the Compare & pull request button, as shown in the following screenshot: From here on, if you hit this button you will be taken to the compare page. Note that I am pushing to my repository following the shared repository model, so here is how GitHub greets me: What would happen if I used the fork and pull repository model? For this purpose, I created another user to fork my repository and followed the same instructions to add a new branch named add_gitignore with the same changes. From here on, when you push the branch to your fork, the Compare & pull request button appears whether you are on your fork's page or on the parent repository. Here is how it looks if you visit your fork: The following screenshot will appear, if you visit the parent repository: In the last case (captured in red), you can see from which user this branch came from (axil43:add_gitignore). In either case, when using the fork and pull model, hitting the Compare & pull request button will take you to the compare page with slightly different options: Since you are comparing across forks, there are more details. In particular, you can see the base fork and branch as well as the head fork and branch that are the ones you are the owner of. GitHub considers the default branch set in your repository to be the one you want to merge into (base) when the Create Pull Request button appears. Before submitting it, let's explore the other two options that you can use to create a pull request. You can jump to the Submit a pull request section if you like. Use the compare function directly As mentioned in the previous section, the Compare & pull request button gets you on the compare page with some predefined values. The button appears right after you push a new branch and is there only for a few moments. In this section, we will see how to use the compare function directly in order to create a pull request. You can access the compare function by clicking on the green button next to the branch drop-down list on a repository's main page: This is pretty powerful as one can compare across forks or, in the same repository, pretty much everything—branches, tags, single commits and time ranges. The default page when you land on the compare page is like the following one; you start by comparing your default branch with GitHub, proposing a list of recently created branches to choose from and compare: In order to have something to compare to, the base branch must be older than what you are comparing to. From here, if I choose the add_gitignore branch, GitHub compares it to a master and shows the diff along with the message that it is able to be merged into the base branch without any conflicts. Finally, you can create the pull request: Notice that I am using the compare function while I'm at my own repository. 
When comparing in a repository that is a fork of another, the compare function slightly changes and automatically includes more options as we have seen in the previous section. As you may have noticed the Compare & pull request quick button is just a shortcut for using compare manually. If you want to have more fine-grained control on the repositories and the branches compared, use the compare feature directly. Use the GitHub web editor So far, we have seen the two most well-known types of initiating a pull request. There is a third way as well: using entirely the web editor that GitHub provides. This can prove useful for people who are not too familiar with Git and the terminal, and can also be used by more advanced Git users who want to propose a quick change. As always, according to the model you are using (shared repository or fork and pull), the process is a little different. Let's first explore the shared repository model flow using the web editor, which means editing files in a repository that you own. The shared repository model Firstly, make sure you are on the branch that you wish to branch off; then, head over a file you wish to change and press the edit button with the pencil icon: Make the change you want in that file, add a proper commit message, and choose Create a new branch giving the name of the branch you wish to create. By default, the branch name is username-patch-i, where username is your username and i is an increasing integer starting from 1. Consecutive edits on files will create branches such as username-patch-1, username-patch-2, and so on. In our example, I decided to give the branch a name of my own: When ready, press the Propose file change button. From this moment on, the branch is created with the file edits you made. Even if you close the next page, your changes will not be lost. Let's skip the pull request submission for the time being and see how the fork and pull model works. The fork and pull model In the fork and pull model, you fork a repository and submit a pull request from the changes you make in your fork. In the case of using the web editor, there is a caveat. In order to get GitHub automatically recognize that you wish to perform a pull request in the parent repository, you have to start the web editor from the parent repository and not your fork. In the following screenshot, you can see what happens in this case: GitHub informs you that a new branch will be created in your repository (fork) with the new changes in order to submit a pull request. Hitting the Propose file change button will take you to the form to submit the pull request: Contrary to the shared repository model, you can now see the base/head repositories and branches that are compared. Also, notice that the default name for the new branch is patch-i, where i is an increasing integer number. In our case, this was the first branch created that way, so it was named patch-1. If you would like to have the ability to name the branch the way you like, you should follow the shared repository model instructions as explained in preceding section. Following that route, edit the file in your fork where you have write access, add your own branch name, hit the Propose file change button for the branch to be created, and then abort when asked to create the pull request. You can then use the Compare & pull request quick button or use the compare function directly to propose a pull request to the parent repository. 
One last thing to consider when using the web editor, is the limitation of editing one file at a time. If you wish to include more changes in the same branch that GitHub created for you when you first edited a file, you must first change to that branch and then make any subsequent changes. How to change the branch? Simply choose it from the drop-down menu as shown in the following screenshot: Submit a pull request So far, we have explored the various ways to initiate a pull request. In this section, we will finally continue to submit it as well. The pull request form is identical to the form when creating a new issue. If you have write access to the repository that you are making the pull request to, then you are able to set labels, milestone, and assignee. The title of the pull request is automatically filled by the last commit message that the branch has, or if there are multiple commits, it will just fill in the branch name. In either case, you can change it to your liking. In the following image, you can see the title is taken from the branch name after GitHub has stripped the special characters. In a sense, the title gets humanized: You can add an optional description and images if you deem proper. Whenever ready, hit the Create pull request button. In the following sections, we will explore how the peer review works. Peer review and inline comments The nice thing about pull requests is that you have a nice and clear view of what is about to get merged. You can see only the changes that matter, and the best part is that you can fire up a discussion concerning those changes. In the previous section, we submitted the pull request so that it can be reviewed and eventually get merged. Suppose that we are collaborating with a team and they chime in to discuss the changes. Let's first check the layout of a pull request. Summary In this article, we explored the GitHub workflow and the various ways to perform a pull request, as well as the many features GitHub provides to make that workflow even smoother. This is how the majority of open source projects work when there are dozens of contributors involved. Resources for Article: Further resources on this subject: Git Teaches – Great Tools Don't Make Great Craftsmen[article] Maintaining Your GitLab Instance[article] Configuration [article]