How-To Tutorials


How to perform Numeric Metric Aggregations with Elasticsearch

Pravin Dhandre
22 Feb 2018
7 min read
[box type="note" align="" class="" width=""]This article is an excerpt from the book Learning Elastic Stack 6.0 written by Pranav Shukla and Sharath Kumar M N . This book provides detailed coverage on fundamentals of each components of Elastic Stack, making it easy to search, analyze and visualize data across different sources in real-time.[/box] Today, we are going to demonstrate how to run numeric and statistical queries such as summation, average, count and various similar metric aggregations on Elastic Stack to serve a better analytics engine on your dataset. Metric aggregations   Metric aggregations work with numeric data, computing one or more aggregate metrics within the given context. The context could be a query, filter, or no query to include the whole index/type. Metric aggregations can also be nested inside other bucket aggregations. In this case, these metrics will be computed for each bucket in the bucket aggregations. We will start with simple metric aggregations without nesting them inside bucket aggregations. When we learn about bucket aggregations later in the chapter, we will also learn how to use metric aggregations inside bucket aggregations. We will learn about the following metric aggregations: Sum, average, min, and max aggregations Stats and extended stats aggregations Cardinality aggregation Let us learn about them one by one. Sum, average, min, and max aggregations Finding the sum of a field, the minimum value for a field, the maximum value for a field, or an average, are very common operations. For the people who are familiar with SQL, the query to find the sum would look like the following: SELECT sum(downloadTotal) FROM usageReport; The preceding query will calculate the sum of the downloadTotal field across all records in the table. This requires going through all records of the table or all records in the given context and adding the values of the given fields. In Elasticsearch, a similar query can be written using the sum aggregation. Let us understand the sum aggregation first. Sum aggregation Here is how to write a simple sum aggregation: GET bigginsight/_search { "aggregations": { 1 "download_sum": { 2 "sum": { 3 "field": "downloadTotal" 4 } } }, "size": 0 5 } The aggs or aggregations element at the top level should wrap any aggregation. Give a name to the aggregation; here we are doing the sum aggregation on the downloadTotal field and hence the name we chose is download_sum. You can name it anything. This field will be useful while looking up this particular aggregation's result in the response. We are doing a sum aggregation, hence the sum element. We want to do term aggregation on the downloadTotal field. Specify size = 0 to prevent raw search results from being returned. We just want aggregation results and not the search results in this case. Since we haven't specified any top level query elements, it matches all documents. We do not want any raw documents (or search hits) in the result. The response should look like the following: { "took": 92, ... "hits": { "total": 242836, 1 "max_score": 0, "hits": [] }, "aggregations": { 2 "download_sum": { 3 "value": 2197438700 4 } } } Let us understand the key aspects of the response. The key parts are numbered 1, 2, 3, and so on, and are explained in the following points: The hits.total element shows the number of documents that were considered or were in the context of the query. If there was no additional query or filter specified, it will include all documents in the type or index. 
The average, min, and max aggregations are very similar. Let's look at them briefly.

Average aggregation

The average aggregation finds the average across all documents in the querying context:

    GET bigginsight/_search
    {
      "aggregations": {
        "download_average": {            1
          "avg": {                       2
            "field": "downloadTotal"
          }
        }
      },
      "size": 0
    }

The only notable differences from the sum aggregation are as follows:

1. We chose a different name, download_average, to make it apparent that this aggregation computes an average.
2. The type of aggregation is avg, instead of the sum aggregation we used earlier.

The response structure is identical, but the value field now represents the average of the requested field. The min and max aggregations are exactly the same.

Min aggregation

Here is how we find the minimum value of the downloadTotal field in the entire index/type:

    GET bigginsight/_search
    {
      "aggregations": {
        "download_min": {
          "min": {
            "field": "downloadTotal"
          }
        }
      },
      "size": 0
    }

Let's finally look at the max aggregation as well.

Max aggregation

Here is how we find the maximum value of the downloadTotal field in the entire index/type:

    GET bigginsight/_search
    {
      "aggregations": {
        "download_max": {
          "max": {
            "field": "downloadTotal"
          }
        }
      },
      "size": 0
    }

These aggregations were really simple. Now let's look at the slightly more advanced, yet still simple, stats and extended stats aggregations.

Stats and extended stats aggregations

These aggregations compute several common statistics in a single request, without having to issue multiple requests. This saves resources on the Elasticsearch side as well, because the statistics are computed in a single pass rather than being requested multiple times. The client code also becomes simpler if you are interested in more than one of these statistics. Let's look at the stats aggregation first.

Stats aggregation

The stats aggregation computes the sum, average, min, max, and count of documents in a single pass:

    GET bigginsight/_search
    {
      "aggregations": {
        "download_stats": {
          "stats": {
            "field": "downloadTotal"
          }
        }
      },
      "size": 0
    }

The structure of the stats request is the same as the other metric aggregations we have seen so far, so nothing special is going on here. The response should look like the following:

    {
      "took": 4,
      ...,
      "hits": {
        "total": 242836,
        "max_score": 0,
        "hits": []
      },
      "aggregations": {
        "download_stats": {
          "count": 242835,
          "min": 0,
          "max": 241213,
          "avg": 9049.102065188297,
          "sum": 2197438700
        }
      }
    }

As you can see, the download_stats element of the response contains count, min, max, avg, and sum in a single response. This is very handy, as it reduces the overhead of multiple requests and also simplifies the client code. Let us look at the extended stats aggregation.
Extended stats aggregation

The extended stats aggregation returns a few more statistics in addition to the ones returned by the stats aggregation:

    GET bigginsight/_search
    {
      "aggregations": {
        "download_estats": {
          "extended_stats": {
            "field": "downloadTotal"
          }
        }
      },
      "size": 0
    }

The response looks like the following:

    {
      "took": 15,
      "timed_out": false,
      ...,
      "hits": {
        "total": 242836,
        "max_score": 0,
        "hits": []
      },
      "aggregations": {
        "download_estats": {
          "count": 242835,
          "min": 0,
          "max": 241213,
          "avg": 9049.102065188297,
          "sum": 2197438700,
          "sum_of_squares": 133545882701698,
          "variance": 468058704.9782911,
          "std_deviation": 21634.664429528162,
          "std_deviation_bounds": {
            "upper": 52318.43092424462,
            "lower": -34220.22679386803
          }
        }
      }
    }

It also returns the sum of squares, variance, standard deviation, and standard deviation bounds.

Cardinality aggregation

The count of unique values of a field can be found with the cardinality aggregation. It is similar to finding the result of a query such as the following:

    select count(*) from (select distinct username from usageReport) u;

Finding the cardinality, or the number of unique values, of a specific field is a very common requirement. If you have a click-stream from the different visitors of your website, for example, you may want to find out how many unique visitors you had in a given day, week, or month. Let us see how to find the count of unique users for which we have network traffic data:

    GET bigginsight/_search
    {
      "aggregations": {
        "unique_visitors": {
          "cardinality": {
            "field": "username"
          }
        }
      },
      "size": 0
    }

The cardinality aggregation response is just like that of the other metric aggregations:

    {
      "took": 110,
      ...,
      "hits": {
        "total": 242836,
        "max_score": 0,
        "hits": []
      },
      "aggregations": {
        "unique_visitors": {
          "value": 79
        }
      }
    }

To summarize, we learned how to perform a number of metric aggregations on numeric datasets and how Elasticsearch can easily serve as the engine of a powerful analytics application. If you found this tutorial useful, do check out the book Learning Elastic Stack 6.0 to examine the fundamentals of the Elastic Stack in detail and start developing solutions for problems like logging, site search, app search, metrics, and more.
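As a closing illustration (not from the book excerpt), the same aggregations can be issued from application code. The sketch below uses the official Python client, elasticsearch-py, against the bigginsight index from the examples; the connection URL is an assumption to adapt to your environment:

    from elasticsearch import Elasticsearch

    # Assumed local cluster; adjust the host for your setup.
    es = Elasticsearch(["http://localhost:9200"])

    body = {
        "size": 0,
        "aggregations": {
            "download_sum": {"sum": {"field": "downloadTotal"}},
            "unique_visitors": {"cardinality": {"field": "username"}},
        },
    }

    resp = es.search(index="bigginsight", body=body)
    print(resp["aggregations"]["download_sum"]["value"])
    print(resp["aggregations"]["unique_visitors"]["value"])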

Debugging in Vulkan

Packt
23 Nov 2016
16 min read
In this article by Parminder Singh, author of Learning Vulkan, we learn Vulkan debugging in order to avoid unpleasant mistakes. Vulkan allows you to perform debugging through validation layers. These validation layer checks are optional and can be injected into the system at runtime. Traditional graphics APIs perform validation right up front using some sort of error-checking mechanism, which is a mandatory part of the pipeline. This is indeed useful in the development phase, but actually, it is an overhead during the release stage because the validation bugs might have already been fixed at the development phase itself. Such compulsory checks cause the CPU to spend a significant amount of time in error checking. On the other hand, Vulkan is designed to offer maximum performance, where the optional validation process and debugging model play a vital role. Vulkan assumes the application has done its homework using the validation and debugging capabilities available at the development stage, and it can be trusted flawlessly at the release stage. In this article, we will learn the validation and debugging process of a Vulkan application. We will cover the following topics: Peeking into Vulkan debugging Understanding LunarG validation layers and their features Implementing debugging in Vulkan (For more resources related to this topic, see here.) Peeking into Vulkan debugging Vulkan debugging validates the application implementation. It not only surfaces the errors, but also other validations, such as proper API usage. It does so by verifying each parameter passed to it, warning about the potentially incorrect and dangerous API practices in use and reporting any performance-related warnings when the API is not used optimally. By default, debugging is disabled, and it's the application's responsibility to enable it. Debugging works only for those layers that are explicitly enabled at the instance level at the time of the instance creation (VkInstance). When debugging is enabled, it inserts itself into the call chain for the Vulkan commands the layer is interested in. For each command, the debugging visits all the enabled layers and validates them for any potential error, warning, debugging information, and so on. Debugging in Vulkan is simple. The following is an overview that describes the steps required to enable it in an application: Enable the debugging capabilities by adding the VK_EXT_DEBUG_REPORT_EXTENSION_NAME extension at the instance level. Define the set of the validation layers that are intended for debugging. For example, we are interested in the following layers at the instance and device level. For more information about these layer functionalities, refer to the next section: VK_LAYER_GOOGLE_unique_objects VK_LAYER_LUNARG_api_dump VK_LAYER_LUNARG_core_validation VK_LAYER_LUNARG_image VK_LAYER_LUNARG_object_tracker VK_LAYER_LUNARG_parameter_validation VK_LAYER_LUNARG_swapchain VK_LAYER_GOOGLE_threading The Vulkan debugging APIs are not part of the core command, which can be statically loaded by the loader. These are available in the form of extension APIs that can be retrieved at runtime and dynamically linked to the predefined function pointers. So, as the next step, the debug extension APIs vkCreateDebugReportCallbackEXT and vkDestroyDebugReportCallbackEXT are queried and linked dynamically. These are used for the creation and destruction of the debug report. 
Once the function pointers for the debug report are retrieved successfully, the former API (vkCreateDebugReportCallbackEXT) creates the debug report object. Vulkan returns the debug reports in a user-defined callback, which has to be linked to this API. Destroy the debug report object when debugging is no longer required.

Understanding LunarG validation layers and their features

The LunarG Vulkan SDK supports the following layers for debugging and validation purposes. The following points describe some of the layers, to help you understand the functionality they offer:

- VK_LAYER_GOOGLE_unique_objects: Non-dispatchable handles are not required to be unique; a driver may return the same handle for multiple objects that it considers equivalent. This behavior makes object tracking difficult, because it is not clear which object to reference at the time of deletion. This layer packs Vulkan objects into a unique identifier at the time of creation and unpacks them when the application uses them, ensuring proper object lifetime tracking at validation time. As per LunarG's recommendation, this layer must be last in the chain of validation layers, making it closest to the display driver.
- VK_LAYER_LUNARG_api_dump: This layer is helpful for knowing the parameter values passed to the Vulkan APIs. It prints all the data structure parameters along with their values.
- VK_LAYER_LUNARG_core_validation: This is used for validating and printing important pieces of information from the descriptor set, pipeline state, dynamic state, and so on. This layer tracks and validates GPU memory, object binding, and command buffers. It also validates the graphics and compute pipelines.
- VK_LAYER_LUNARG_image: This layer can be used for validating texture formats, render target formats, and so on. For example, it verifies whether the requested format is supported on the device, and whether the image view creation parameters are reasonable for the image that the view is being created for.
- VK_LAYER_LUNARG_object_tracker: This keeps track of object creation, use, and destruction, which is helpful in avoiding memory leaks. It also validates that a referenced object was properly created and is presently valid.
- VK_LAYER_LUNARG_parameter_validation: This validation layer ensures that all the parameters passed to an API are correct as per the specification and meet the required expectations. It checks whether the value of a parameter is consistent and within the valid usage criteria defined in the Vulkan specification. It also checks that the type field of a Vulkan control structure contains the value expected for a structure of that type.
- VK_LAYER_LUNARG_swapchain: This layer validates the use of the WSI swapchain extensions; for example, it checks whether the WSI extension is available before its functions are used, and validates that an image index is within the number of images in a swapchain.
- VK_LAYER_GOOGLE_threading: This is helpful in the context of thread safety. It checks the validity of multithreaded API usage and the simultaneous use of objects by calls running under multiple threads. It reports threading rule violations and enforces a mutex for such calls. It also allows the application to continue running without crashing, despite the reported threading problem.
- VK_LAYER_LUNARG_standard_validation: This enables all the standard layers in the correct order.
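If you do not need fine-grained control over individual layers, a minimal sketch of enabling just the meta-layer mentioned above could look like the following (whether the layer is present still depends on the installed SDK):

    #include <vector>

    // Request only the LunarG meta-layer; it pulls in the standard
    // validation layers in the correct order.
    std::vector<const char*> layerNames = {
        "VK_LAYER_LUNARG_standard_validation"
    };
    // layerNames is then passed to vkCreateInstance via
    // VkInstanceCreateInfo::ppEnabledLayerNames, as shown later in this article.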
For more information on validation layers, visit LunarG's official website: check out https://vulkan.lunarg.com/doc/sdk and specifically refer to the Validation layer details section.

Implementing debugging in Vulkan

Since debugging is exposed through validation layers, most of the core debugging implementation is done in the VulkanLayerAndExtension class (VulkanLED.h/.cpp). In this section, we will walk through the implementation that enables the debugging process in Vulkan.

The Vulkan debug facility is not part of the default core functionality. Therefore, in order to enable debugging and access the report callback, we need to add the necessary extensions and layers:

Extension: Add the VK_EXT_DEBUG_REPORT_EXTENSION_NAME extension at the instance level. This exposes the Vulkan debug APIs to the application:

    vector<const char *> instanceExtensionNames = {
        . . . .                 // other extensions
        VK_EXT_DEBUG_REPORT_EXTENSION_NAME,
    };

Layer: Define the following layers at the instance level to allow debugging at these layers:

    vector<const char *> layerNames = {
        "VK_LAYER_GOOGLE_threading",
        "VK_LAYER_LUNARG_parameter_validation",
        "VK_LAYER_LUNARG_device_limits",
        "VK_LAYER_LUNARG_object_tracker",
        "VK_LAYER_LUNARG_image",
        "VK_LAYER_LUNARG_core_validation",
        "VK_LAYER_LUNARG_swapchain",
        "VK_LAYER_GOOGLE_unique_objects"
    };

In addition to individually enabled validation layers, the LunarG SDK provides a special layer called VK_LAYER_LUNARG_standard_validation. This built-in metadata layer loads a standard set of validation layers in the optimal order, as listed here. It is a good choice if you do not need to be very specific about which layers to enable:

a) VK_LAYER_GOOGLE_threading
b) VK_LAYER_LUNARG_parameter_validation
c) VK_LAYER_LUNARG_object_tracker
d) VK_LAYER_LUNARG_image
e) VK_LAYER_LUNARG_core_validation
f) VK_LAYER_LUNARG_swapchain
g) VK_LAYER_GOOGLE_unique_objects

These layers are then supplied to the vkCreateInstance() API to enable them:

    VulkanApplication* appObj = VulkanApplication::GetInstance();
    appObj->createVulkanInstance(layerNames, instanceExtensionNames, title);

    // VulkanInstance::createInstance()
    VkResult VulkanInstance::createInstance(vector<const char *>& layers,
            std::vector<const char *>& extensionNames,
            char const*const appName)
    {
        . . .
        VkInstanceCreateInfo instInfo = {};

        // Specify the list of layer names to be enabled.
        instInfo.enabledLayerCount   = layers.size();
        instInfo.ppEnabledLayerNames = layers.data();

        // Specify the list of extensions to be used in the application.
        instInfo.enabledExtensionCount   = extensionNames.size();
        instInfo.ppEnabledExtensionNames = extensionNames.data();
        . . .
        vkCreateInstance(&instInfo, NULL, &instance);
    }

The available validation layers are very specific to the vendor and SDK version. Therefore, it is advisable to first check whether the layers are supported by the underlying implementation before passing them to the vkCreateInstance() API. This way, the application remains portable when run against another driver implementation. areLayersSupported() is a user-defined utility function that inspects the incoming layer names against the system-supported layers; the next sketch shows how that list is typically populated, and the implementation follows it.
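The layerPropertyList that areLayersSupported() consults is the book's own wrapper around the layer properties reported by the loader. A minimal sketch of how such a list is typically populated (using plain VkLayerProperties, without the book's wrapper struct) looks like this:

    #include <vector>
    #include <vulkan/vulkan.h>

    // Query the instance-level layers the loader/driver knows about.
    std::vector<VkLayerProperties> enumerateInstanceLayers() {
        uint32_t layerCount = 0;
        vkEnumerateInstanceLayerProperties(&layerCount, nullptr);     // first call: get the count

        std::vector<VkLayerProperties> layerProperties(layerCount);
        vkEnumerateInstanceLayerProperties(&layerCount,
                                           layerProperties.data());   // second call: fill the list
        return layerProperties;
    }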
The unsupported layers are reported to the application and removed from the layer names before they are fed to the system:

    // VulkanLED.cpp
    VkBool32 VulkanLayerAndExtension::areLayersSupported
                            (vector<const char *> &layerNames)
    {
        uint32_t checkCount = layerNames.size();
        uint32_t layerCount = layerPropertyList.size();
        std::vector<const char*> unsupportLayerNames;

        for (uint32_t i = 0; i < checkCount; i++) {
            VkBool32 isSupported = 0;
            for (uint32_t j = 0; j < layerCount; j++) {
                if (!strcmp(layerNames[i],
                        layerPropertyList[j].properties.layerName)) {
                    isSupported = 1;
                }
            }

            if (!isSupported) {
                std::cout << "No Layer support found, removed from layer: "
                          << layerNames[i] << endl;
                unsupportLayerNames.push_back(layerNames[i]);
            }
            else {
                cout << "Layer supported: " << layerNames[i] << endl;
            }
        }

        for (auto i : unsupportLayerNames) {
            auto it = std::find(layerNames.begin(), layerNames.end(), i);
            if (it != layerNames.end())
                layerNames.erase(it);
        }

        return true;
    }

The debug report is created using the vkCreateDebugReportCallbackEXT API. This API is not part of Vulkan's core commands; therefore, the loader is unable to link it statically. If you try to access it in the following manner, you will get an undefined symbol reference error:

    vkCreateDebugReportCallbackEXT(instance, NULL, NULL, NULL);

All the debug-related APIs need to be queried using the vkGetInstanceProcAddr() API and linked dynamically. The retrieved API reference is stored in a corresponding function pointer of type PFN_vkCreateDebugReportCallbackEXT. The VulkanLayerAndExtension::createDebugReportCallback() function retrieves the create and destroy debug APIs, as shown in the following implementation:

    /********* VulkanLED.h *********/

    // Declaration of the create and destroy function pointers
    PFN_vkCreateDebugReportCallbackEXT  dbgCreateDebugReportCallback;
    PFN_vkDestroyDebugReportCallbackEXT dbgDestroyDebugReportCallback;

    /********* VulkanLED.cpp *********/

    VulkanLayerAndExtension::createDebugReportCallback(){
        . . .
        // Get vkCreateDebugReportCallbackEXT API
        dbgCreateDebugReportCallback = (PFN_vkCreateDebugReportCallbackEXT)
            vkGetInstanceProcAddr(*instance, "vkCreateDebugReportCallbackEXT");
        if (!dbgCreateDebugReportCallback) {
            std::cout << "Error: GetInstanceProcAddr unable to locate "
                         "vkCreateDebugReportCallbackEXT function.\n";
            return VK_ERROR_INITIALIZATION_FAILED;
        }

        // Get vkDestroyDebugReportCallbackEXT API
        dbgDestroyDebugReportCallback = (PFN_vkDestroyDebugReportCallbackEXT)
            vkGetInstanceProcAddr(*instance, "vkDestroyDebugReportCallbackEXT");
        if (!dbgDestroyDebugReportCallback) {
            std::cout << "Error: GetInstanceProcAddr unable to locate "
                         "vkDestroyDebugReportCallbackEXT function.\n";
            return VK_ERROR_INITIALIZATION_FAILED;
        }
        . . .
    }

The vkGetInstanceProcAddr() API obtains instance-level extension functions dynamically; these functions are not exposed statically on a platform and need to be linked through this API at runtime. The following is the signature of this API:

    PFN_vkVoidFunction vkGetInstanceProcAddr(
        VkInstance     instance,
        const char*    name);

The following table describes the API fields:

    Parameter    Description
    instance     A VkInstance variable. If this variable is NULL, then name must be one of
                 vkEnumerateInstanceExtensionProperties, vkEnumerateInstanceLayerProperties,
                 or vkCreateInstance.
    name         The name of the API that needs to be queried for dynamic linking.
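Because every debug-related entry point goes through the same lookup-and-check sequence, it can be convenient to wrap it once. The following is a hypothetical helper, not part of the book's VulkanLayerAndExtension class:

    #include <iostream>
    #include <vulkan/vulkan.h>

    // Resolve an instance-level function pointer and report a missing symbol.
    template <typename T>
    T getInstanceProc(VkInstance instance, const char* name) {
        T fn = reinterpret_cast<T>(vkGetInstanceProcAddr(instance, name));
        if (!fn) {
            std::cout << "Error: unable to locate " << name << std::endl;
        }
        return fn;
    }

    // Example usage:
    // auto dbgCreate = getInstanceProc<PFN_vkCreateDebugReportCallbackEXT>(
    //         instance, "vkCreateDebugReportCallbackEXT");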
Using the dbgCreateDebugReportCallback() function pointer, create the debug report object and store the handle in debugReportCallback. The second parameter of the API accepts a VkDebugReportCallbackCreateInfoEXT control structure. This data structure defines the behavior of the debugging, such as what the debug information should include—errors, general warnings, information, performance-related warnings, debug information, and so on. In addition, it takes a reference to a user-defined function (debugFunction); this helps filter and print the debugging information once it is retrieved from the system. Here's the control structure used for creating the debug report:

    struct VkDebugReportCallbackCreateInfoEXT {
        VkStructureType                 sType;
        const void*                     pNext;
        VkDebugReportFlagsEXT           flags;
        PFN_vkDebugReportCallbackEXT    pfnCallback;
        void*                           pUserData;
    };

The following table describes the purpose of the mentioned structure fields:

    Parameter      Description
    sType          The type information of this control structure. It must be specified as
                   VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT.
    flags          Defines the kind of debugging information to be retrieved when debugging is on;
                   the flag values are listed next.
    pfnCallback    Refers to the function that filters and displays the debug messages.

The flags field is a bitwise combination of the following VkDebugReportFlagBitsEXT values:

    VK_DEBUG_REPORT_INFORMATION_BIT_EXT
    VK_DEBUG_REPORT_WARNING_BIT_EXT
    VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT
    VK_DEBUG_REPORT_ERROR_BIT_EXT
    VK_DEBUG_REPORT_DEBUG_BIT_EXT

The createDebugReportCallback function implements the creation of the debug report. First, it creates the VkDebugReportCallbackCreateInfoEXT control structure object and fills it with the relevant information. This primarily includes two things: first, assigning a user-defined function (pfnCallback) that will print the debug information received from the system (see the next point), and second, assigning the debugging flags (flags) in which the programmer is interested:

    /********* VulkanLED.h *********/

    // Handle of the debug report callback
    VkDebugReportCallbackEXT debugReportCallback;

    // Debug report callback create information control structure
    VkDebugReportCallbackCreateInfoEXT dbgReportCreateInfo = {};

    /********* VulkanLED.cpp *********/

    VulkanLayerAndExtension::createDebugReportCallback(){
        . . .
        // Define the debug report control structure and provide the
        // reference of 'debugFunction'; this function prints the
        // debug information on the console.
        dbgReportCreateInfo.sType       = VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
        dbgReportCreateInfo.pfnCallback = debugFunction;
        dbgReportCreateInfo.pUserData   = NULL;
        dbgReportCreateInfo.pNext       = NULL;
        dbgReportCreateInfo.flags       = VK_DEBUG_REPORT_WARNING_BIT_EXT |
                                          VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT |
                                          VK_DEBUG_REPORT_ERROR_BIT_EXT |
                                          VK_DEBUG_REPORT_DEBUG_BIT_EXT;

        // Create the debug report callback and store the handle
        // into 'debugReportCallback'
        result = dbgCreateDebugReportCallback
                    (*instance, &dbgReportCreateInfo, NULL, &debugReportCallback);
        if (result == VK_SUCCESS) {
            cout << "Debug report callback object created successfully\n";
        }
        return result;
    }

Define the debugFunction() function that prints the retrieved debug information in a user-friendly way.
It describes the type of debug information along with the reported message:

    VKAPI_ATTR VkBool32 VKAPI_CALL
    VulkanLayerAndExtension::debugFunction(
        VkFlags                     msgFlags,
        VkDebugReportObjectTypeEXT  objType,
        uint64_t                    srcObject,
        size_t                      location,
        int32_t                     msgCode,
        const char                  *pLayerPrefix,
        const char                  *pMsg,
        void                        *pUserData){

        if (msgFlags & VK_DEBUG_REPORT_ERROR_BIT_EXT) {
            std::cout << "[VK_DEBUG_REPORT] ERROR: [" << pLayerPrefix << "] Code"
                      << msgCode << ":" << pMsg << std::endl;
        }
        else if (msgFlags & VK_DEBUG_REPORT_WARNING_BIT_EXT) {
            std::cout << "[VK_DEBUG_REPORT] WARNING: [" << pLayerPrefix << "] Code"
                      << msgCode << ":" << pMsg << std::endl;
        }
        else if (msgFlags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT) {
            std::cout << "[VK_DEBUG_REPORT] INFORMATION: [" << pLayerPrefix << "] Code"
                      << msgCode << ":" << pMsg << std::endl;
        }
        else if (msgFlags & VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT) {
            cout << "[VK_DEBUG_REPORT] PERFORMANCE: [" << pLayerPrefix << "] Code"
                 << msgCode << ":" << pMsg << std::endl;
        }
        else if (msgFlags & VK_DEBUG_REPORT_DEBUG_BIT_EXT) {
            cout << "[VK_DEBUG_REPORT] DEBUG: [" << pLayerPrefix << "] Code"
                 << msgCode << ":" << pMsg << std::endl;
        }
        else {
            return VK_FALSE;
        }

        return VK_FALSE;
    }

The following table describes the various fields of the debugFunction() callback:

    Parameter       Description
    msgFlags        Specifies the type of debugging event that triggered the call, for example,
                    an error, a warning, a performance warning, and so on.
    objType         The type of the object that is being created or manipulated by the triggering call.
    srcObject       The handle of the object that is being created or manipulated by the triggering call.
    location        The place in the code describing the event.
    msgCode         The message code.
    pLayerPrefix    The layer responsible for triggering the debug event.
    pMsg            The debug message text.
    pUserData       Any application-specific user data specified to the callback through this field.

The debugFunction callback has a Boolean return value. Returning VK_FALSE tells the layers to continue the call chain and hand the command on even after an error has been reported, whereas returning VK_TRUE asks the validation layer to abort the call that triggered the report. It is often advisable to stop at the very first error: an error indicates that something unexpected has occurred, and letting the system keep running in these circumstances may lead to undefined results or further errors that can sometimes be completely meaningless. Aborting gives the developer a better chance to concentrate on and fix the reported error; continuing, in contrast, can leave the developer confronted with a confusing pile of follow-up errors.

In order to enable debugging at vkCreateInstance, provide dbgReportCreateInfo to the pNext field of VkInstanceCreateInfo:

    VkInstanceCreateInfo instInfo = {};
    . . .
    instInfo.pNext = &layerExtension.dbgReportCreateInfo;
    vkCreateInstance(&instInfo, NULL, &instance);

Finally, once the debugging is no longer in use, destroy the debug callback object:

    void VulkanLayerAndExtension::destroyDebugReportCallback(){
        VulkanApplication* appObj = VulkanApplication::GetInstance();
        dbgDestroyDebugReportCallback(instance, debugReportCallback, NULL);
    }

The following is the output from the implemented debug report. Your output may differ from this based on the GPU vendor and SDK provider.
Also, the explanation of the errors or warnings reported is very specific to the SDK itself. At a higher level, however, the specification holds; this means you can expect to see a debug report with warnings, information, debugging help, and so on, based on the debugging flags you have turned on.

Summary

This article was short, precise, and full of practical implementations. Working on Vulkan without debugging capabilities is like shooting in the dark. We know very well that Vulkan demands an appreciable amount of programming, and developers make mistakes for obvious reasons; they are humans after all. We learn from our mistakes, and debugging allows us to find and correct these errors. It also provides insightful information for building quality products.

Let's do a quick recap. We learned the Vulkan debugging process. We looked at the various LunarG validation layers and understood the roles and responsibilities offered by each one of them. Next, we added a few selected validation layers that we were interested in debugging. We also added the debug extension that exposes the debugging capabilities; without this, the API definitions could not be dynamically linked to the application. Then, we implemented the Vulkan create debug report callback and linked it to our debug reporting callback; this callback decorates the captured debug report in a user-friendly and presentable fashion. Finally, we implemented the API to destroy the debugging report callback object.

Further resources on this subject:

- Get your Apps Ready for Android N
- Multithreading with Qt
- Manage Security in Excel

Multiple Templates in Django

Packt
21 Oct 2009
13 min read
Considering the different approaches Though there are different approaches that can be taken to serve content in multiple formats, the best solution will be specific to your circumstances and implementation. Almost any approach you take will have maintenance overhead. You'll have multiple places to update when things change. As copies of your template files proliferate, a simple text change can become a large task. Some of the cases we'll look at don't require much consideration. Serving a printable version of a page, for example, is straightforward and easily accomplished. Putting a pumpkin in your site header at Halloween or using a heart background around Valentine's Day can make your site seem timely and relevant, especially if you are in a seasonal business. Other techniques, such as serving different templates to different browsers, devices, or user-agents might create serious debate among content authors. Since serving content to mobile devices is becoming a new standard of doing business, we'll make it the focus of this article. Serving mobile devices The Mobile Web will remind some old timers (like me!) of the early days of web design where we'd create different sites for Netscape and Internet Explorer. Hopefully, we take lessons from those days as we go forward and don't repeat our mistakes. Though we're not as apt to serve wholly different templates to different desktop browsers as we once were, the mobile device arena creates special challenges that require careful attention. One way to serve both desktop and mobile devices is a one-size-fits-all approach. Through carefully structured and semantically correct XHTML markup and CSS selectors identified to be applied to handheld output, you can do a reasonable job of making your content fit a variety of contexts and devices. However, this method has a couple of serious shortcomings. First, it does not take into account the limitations of devices for rich media presentation with Flash, JavaScript, DHTML, and AJAX as they are largely unsupported on all but the highest-end devices. If your site depends on any of these technologies, your users can get frustrated when trying to experience it on a mobile device. Also, it doesn't address the varying levels of CSS support by different mobile devices. What looks perfect on one device might look passable on another and completely unusable on a third because only some of the CSS rules were applied properly. It also does not take into account the potentially high bandwidth costs for large markup files and CSS for users who pay by the amount of data transferred. For example, putting display: none on an image doesn't stop a mobile device from downloading the file. It only prevents it from being shown. Finally, this approach doesn't tailor the experience to the user's circumstances. Users tend to be goal-oriented and have specific actions in mind when using the mobile web, and content designers should recognize that simply recreating the desktop experience on a smaller screen might not solve their needs. Limiting the information to what a mobile user is looking for and designing a simplified navigation can provide a better user experience. Adapting content You know your users best, and it is up to you to decide the best way to serve them. You may decide to pass on the one-size-fits-all approach and serve a separate mobile experience through content adaptation. 
The W3C's Mobile Web Initiative best practices guidelines suggest giving users the flexibility and freedom to choose their experience, and providing links between the desktop and mobile templates so that they can navigate between the two. It is generally not recommended to automatically redirect users on mobile devices to a mobile site unless you give them a way to access the full site. The dark side of this kind of content adaptation is that you will have a second set of template files to keep updated when you make site changes. It can also cause your visitors to search through different bookmarks to find the content they have saved. Before we get into multiple sites, let's start with some examples of showing alternative templates on our current site.

Setting up our example

Since we want to customize the output of our detail page based on the presence of a variable in the URL, we're going to use a view function instead of a generic view. Let us consider a press release application for a company website. The press release object will have a title, body, published date, and author name. In the root directory of your project (in the directory projects/mycompany), create the press application by using the startapp command:

    $ python manage.py startapp press

This will create a press folder in your site. Edit the mycompany/press/models.py file:

    from django.db import models

    class PressRelease(models.Model):
        title = models.CharField(max_length=100)
        body = models.TextField()
        pub_date = models.DateTimeField()
        author = models.CharField(max_length=100)

        def __unicode__(self):
            return self.title

Create a file called admin.py in the mycompany/press directory, adding these lines:

    from django.contrib import admin
    from mycompany.press.models import PressRelease

    admin.site.register(PressRelease)

Add the press and admin applications to your INSTALLED_APPS variable in the settings.py file:

    INSTALLED_APPS = (
        'django.contrib.auth',
        'django.contrib.admin',
        'django.contrib.contenttypes',
        'django.contrib.sessions',
        'django.contrib.sites',
        'mycompany.press',
    )

In the root directory of your project, run the syncdb command to add the new models to the database:

    $ python manage.py syncdb

We will be prompted to create a superuser; go ahead and create it. We can access the admin site by browsing to http://localhost:8000/admin/ and add data. Create your mycompany/press/urls.py file as shown (the detail pattern captures the press release ID with \d+):

    urlpatterns = patterns('',
        (r'detail/(?P<pid>\d+)/$', 'mycompany.press.views.detail'),
        (r'list/$', 'django.views.generic.list_detail.object_list',
            press_list_dict),
        (r'latest/$', 'mycompany.press.views.latest'),
        (r'$', 'django.views.generic.simple.redirect_to',
            {'url': '/press/list/'}),
    )

In your mycompany/press/views.py file, your detail view should look like this:

    from django.http import HttpResponse
    from django.shortcuts import get_object_or_404
    from django.template import loader, Context
    from mycompany.press.models import PressRelease

    def detail(request, pid):
        '''
        Accepts a press release ID and returns the detail page
        '''
        p = get_object_or_404(PressRelease, id=pid)
        t = loader.get_template('press/detail.html')
        c = Context({'press': p})
        return HttpResponse(t.render(c))
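The urls.py above references a press_list_dict dictionary that is passed to the object_list generic view but is not defined in this excerpt. A minimal sketch of what it might contain, assuming the queryset-style generic views of this Django era, is shown below; adjust the keys to your needs:

    from mycompany.press.models import PressRelease

    # Hypothetical options for the object_list generic view referenced in urls.py.
    press_list_dict = {
        'queryset': PressRelease.objects.all(),
        'allow_empty': True,
    }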
Let's jazz up our template a little more for the press release detail by adding some CSS to it. In mycompany/templates/press/detail.html, edit the file to look like this:

    <html>
    <head>
    <title>{{ press.title }}</title>
    <style type="text/css">
    body {
        text-align: center;
    }
    #container {
        margin: 0 auto;
        width: 70%;
        text-align: left;
    }
    .header {
        background-color: #000;
        color: #fff;
    }
    </style>
    </head>
    <body>
    <div id="container">
    <div class="header">
    <h1>MyCompany Press Releases</h1>
    </div>
    <div>
    <h2>{{ press.title }}</h2>
    <p>Author: {{ press.author }}<br/>
    Date: {{ press.pub_date }}<br/>
    </p>
    <p>{{ press.body }}</p>
    </div>
    </div>
    </body>
    </html>

Start your development server and point your browser to the URL http://localhost:8000/press/detail/1/. You should see the styled detail page, populated with whatever data you entered when you created your press release. If your press release detail page is serving correctly, you're ready to continue.

Remember that generic views can save us development time, but sometimes you'll need to use a regular view because you're doing something that requires a view function customized to the task at hand. The exercise we're about to do is one of those circumstances and, after going through it, you'll have a better idea of when to use one type of view over another.

Serving printable pages

One of the easiest approaches we will look at is serving an alternative version of a page based on the presence of a variable in the URL (aka a URL parameter). To serve a printable version of an article, for example, we can add ?printable to the end of the URL. To make it work, we'll add an extra step in our view to check the URL for this variable. If it exists, we'll load a printer-friendly template file. If it doesn't exist, we'll load the normal template file. Start by adding the highlighted lines to the detail function in the mycompany/press/views.py file:

    def detail(request, pid):
        '''
        Accepts a press release ID and returns the detail page
        '''
        p = get_object_or_404(PressRelease, id=pid)

        if request.GET.has_key('printable'):
            template_file = 'press/detail_printable.html'
        else:
            template_file = 'press/detail.html'

        t = loader.get_template(template_file)
        c = Context({'press': p})
        return HttpResponse(t.render(c))

We're looking at the request.GET object to see whether a query string parameter called printable was present in the current request. If it was, we load the press/detail_printable.html file; if not, we load the press/detail.html file. We've also changed the loader.get_template function to look up the template_file variable. To test our changes, we'll need to create a simple version of our template that has only minimal formatting. Create a new file called detail_printable.html in the mycompany/templates/press/ directory and add these lines to it:

    <html>
    <head>
    <title>{{ press.title }}</title>
    </head>
    <body>
    <h1>{{ press.title }}</h1>
    <p>Author: {{ press.author }}<br/>
    Date: {{ press.pub_date }}<br/>
    </p>
    <p>{{ press.body }}</p>
    </body>
    </html>

Now that we have both regular and printable templates, let's test our view. Point your browser to the URL http://localhost:8000/press/detail/1/, and you should see our original template as it was before. Change the URL to http://localhost:8000/press/detail/1/?printable and you should see our new printable template.
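One portability note that is not part of the original article: request.GET.has_key() only exists on Python 2. On Python 3 (and current Django versions), the same check is written with the in operator; a minimal sketch of the replacement for just that conditional:

    # Python 3 / modern Django equivalent of the has_key() check above.
    if 'printable' in request.GET:
        template_file = 'press/detail_printable.html'
    else:
        template_file = 'press/detail.html'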
Creating site themes

Depending on the audience and focus of your site, you may want to temporarily change its look for a season or holiday such as Halloween or Valentine's Day. This is easily accomplished by leveraging the power of the TEMPLATE_DIRS configuration setting. The TEMPLATE_DIRS variable in the settings.py file allows you to specify the location of the templates for your site, and it accepts multiple locations. When you specify multiple paths for your template files, Django will look for a requested template file in the first path and, if it doesn't find it, keep searching through the remaining paths until the file is located.

We can use this to our advantage by adding an override directory as the first element of the TEMPLATE_DIRS value. When we want to override a template with a special themed one, we'll add the file to the override directory. The next time the template loader tries to load the template, it will find it in the override directory and serve it. For example, let's say we want to override our press release page from the previous example. Recall that the view loaded the template like this (from mycompany/press/views.py):

    template_file = 'press/detail.html'
    t = loader.get_template(template_file)

When the template engine loads the press/detail.html template file, it gets it from the mycompany/templates/ directory, as specified in the mycompany/settings.py file:

    TEMPLATE_DIRS = (
        '/projects/mycompany/templates/',
    )

If we add an additional directory to our TEMPLATE_DIRS setting, Django will look in the new directory first:

    TEMPLATE_DIRS = (
        '/projects/mycompany/templates/override/',
        '/projects/mycompany/templates/',
    )

Now when the template is loaded, Django will first check for the file /projects/mycompany/templates/override/press/detail.html. If that file doesn't exist, it will go on to the next directory and look for /projects/mycompany/templates/press/detail.html. If you're using Windows, use Windows-style file paths such as c:/projects/mycompany/templates/ for these examples.

Therein lies the beauty. If we want to override our press release template, we simply drop an alternative version with the same file name into the override directory. When we're done using it, we just remove it from the override directory and the original version will be served again (or rename the file in the override directory to something other than detail.html). If you're concerned about the performance overhead of a nearly empty override directory that is constantly checked for template files, consider caching techniques as a potential solution (see the note at the end of this article).

Testing the template overrides

Let's create a template override to test the concept we just learned. In your mycompany/settings.py file, edit the TEMPLATE_DIRS setting to look like this:

    TEMPLATE_DIRS = (
        '/projects/mycompany/templates/override/',
        '/projects/mycompany/templates/',
    )

Create a directory called override at mycompany/templates/ and another directory underneath it called press. You should now have these directories:

    /projects/mycompany/templates/override/
    /projects/mycompany/templates/override/press/

Create a new file called detail.html in mycompany/templates/override/press/ and add these lines to the file:

    <html>
    <head>
    <title>{{ press.title }}</title>
    </head>
    <body>
    <h1>Happy Holidays</h1>
    <h2>{{ press.title }}</h2>
    <p>Author: {{ press.author }}<br/>
    Date: {{ press.pub_date }}<br/>
    </p>
    <p>{{ press.body }}</p>
    </body>
    </html>

You'll probably notice that this is just our printable detail template with an extra "Happy Holidays" line added to the top of it.
Point your browser to the URL http://localhost:8000/press/detail/1/ and you should see something like this: By creating a new press release detail template and dropping it in the override directory, we caused Django to automatically pick up the new template and serve it without us having to change the view. To change it back, you can simply remove the file from the override directory (or rename it). One other thing to notice is that if you add ?printable to the end of the URL, it still serves the printable version of the file we created earlier. Delete the mycompany/templates/override/ directory and any files in it as we won't need them again.
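On the caching remark made earlier: one option on later Django versions (1.2 onwards) is the cached template loader, which remembers the result of each template lookup so the override directory is only scanned once per template. This is a hedged sketch for an old-style settings.py; the exact loader names and availability depend on your Django version:

    # settings.py (Django 1.2+): wrap the normal loaders in the cached loader.
    TEMPLATE_LOADERS = (
        ('django.template.loaders.cached.Loader', (
            'django.template.loaders.filesystem.Loader',
            'django.template.loaders.app_directories.Loader',
        )),
    )

Note that with this loader in place, dropping a new file into the override directory is only picked up after the process restarts, which may or may not suit the theming workflow described above.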

Installing VirtualBox on Linux

Packt
14 Apr 2010
4 min read
Time for action – downloading and installing VirtualBox on Linux

OK, for this exercise you'll need a copy of Ubuntu Linux already installed on your PC. I chose Ubuntu because it's one of the friendliest Linux distributions available, as you will see in a moment. Before installing VirtualBox, you'll need to install two additional packages on your Ubuntu system.

1. Open a terminal window (Applications | Accessories | Terminal), and type sudo apt-get update, followed by Enter. If Ubuntu asks for your administrative password, type it, and hit Enter to continue.
2. Once the package list is updated, type sudo apt-get install dkms, and hit Enter; then type Y and hit Enter to install the DKMS package.
3. The other package needed before you can install VirtualBox is build-essential. This package contains all the compiling tools VirtualBox needs to build its kernel module. Type sudo apt-get install build-essential, and hit Enter. Then type Y, and hit Enter again to continue.
4. Wait for the $ prompt to show up again, type exit, and hit Enter to close the terminal window. Now you can proceed to install VirtualBox.
5. Open the Synaptic Package Manager (System | Administration | Synaptic Package Manager), and select the Settings | Repositories option in the menu bar (if Ubuntu asks for your administrative password, type it, and press Enter to continue).
6. The Software Sources dialog will appear. Click on the Other tab (on earlier Ubuntu versions the name of this tab is Third-Party Software) and then on the Add+ button.
7. Another dialog box will show up. Type deb http://download.virtualbox.org/virtualbox/debian karmic non-free in the APT line field, and click on the Add Source button. If you're not using Ubuntu 9.10 Karmic Koala, you'll need to change the APT line in this step; for example, if you're using Ubuntu 9.04 Jaunty Jackalope, replace the karmic part with jaunty. On the http://www.virtualbox.org/wiki/Linux_Downloads webpage, you'll find more information about installing VirtualBox on several Linux distributions and the APT line required for each available Ubuntu release.
8. The third-party software source for VirtualBox will now show up on the list.
9. Now open a terminal window (Applications | Accessories | Terminal), and type wget -q http://download.virtualbox.org/virtualbox/debian/sun_vbox.asc to download the Sun public key.
10. Go back to the Synaptic Package Manager, select the Authentication tab, and click on the Import Key File button.
11. The Import Key dialog will appear next. Select the sun_vbox.asc file you just downloaded, and click on the OK button to continue.
12. The Sun public key for VirtualBox should now appear on the list. You can now delete the Sun public key file you downloaded earlier.
13. Click on the Close button to return to the Synaptic Package Manager. If the Repositories Changed dialog shows up, select the Never show this message again checkbox, and click on Close to continue.
14. Now click on the Synaptic Package Manager's Reload button to update your package sources with the most recent VirtualBox version.
15. Once the Synaptic Package Manager finishes updating the package sources list, click on the Origin button located at the lower-left part of the window and select the download.virtualbox.org/non-free repository from the pane above this button.
16. Click on the most recent virtualbox-3.X package checkbox in the right-hand pane, and select the Mark for Installation option. When upgrading to a newer VirtualBox version, you must first completely remove the older version.
Then you'll be able to install the newest version without any hassles.

17. The Mark additional required changes? dialog box will appear next. Click on the Mark button to mark all the additional packages required to install VirtualBox.
18. Now click on the Apply button in the Synaptic Package Manager.
19. The Apply the following changes? dialog box will appear next. Make sure the Download package files only option is deselected, and click on the Apply button to start installing the required packages, along with VirtualBox.
20. The Synaptic Package Manager will start downloading the required packages and, when finished, it will install them along with VirtualBox.
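For readers who prefer the terminal over Synaptic, the same setup can be approximated entirely from the command line. This is a hedged sketch based on the repository line and key URL given above; the exact package name (virtualbox-3.1 here) is an assumption and depends on which 3.x release is current for your Ubuntu version:

    # Add the VirtualBox repository (replace 'karmic' with your Ubuntu release name).
    echo "deb http://download.virtualbox.org/virtualbox/debian karmic non-free" | \
        sudo tee /etc/apt/sources.list.d/virtualbox.list

    # Import the Sun public key and refresh the package lists.
    wget -q http://download.virtualbox.org/virtualbox/debian/sun_vbox.asc -O- | sudo apt-key add -
    sudo apt-get update

    # Install the current 3.x package (check 'apt-cache search virtualbox' for the exact name).
    sudo apt-get install virtualbox-3.1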

How to assemble a DIY selfie drone with Arduino and ESP8266

Vijin Boricha
29 May 2018
10 min read
Have you ever thought of something that can take a photo from the air, or perhaps take a selfie from it? How about we build a drone for taking selfies and recording videos from the air? Taking photos from the sky is one of the most exciting things in photography this year. You can shoot from helicopters, planes, or even from satellites. Unless you own a personal air vehicle or someone you know does, you know this is a costly affair sure to burn through your pockets. Drones can come in handy here. Have ever googled drone photography? If you did, I am sure you'd want to build or buy a drone for photography, because of the amazing views of the common subjects taken from the sky. Today, we will learn to build a drone for aerial photography and videography. This tutorial is an excerpt from Building Smart Drones with ESP8266 and Arduino written by Syed Omar Faruk Towaha. Assuming you know how to build your customized frame if not you can refer to our book, or you may buy HobbyKing X930 glass fiber frame and connect the parts together, as directed in the manual. However, I have a few suggestions to help you carry out a better assembly of the frame: Firstly, connect the motor mounted with the legs or wings or arms of the frame. Tighten them firmly, as they will carry and hold the most important equipment of the drone. Then, connect them to the base and, later other parts with firm connections. Now, we will calibrate our ESCs. We will take the signal cable from an ESC (the motor is plugged into the ESC; careful, don't connect the propeller) and connect it to the throttle pins on the radio. Make sure the transmitter is turned on and the throttle is in the lowest position. Now, plug the battery into the ESC and you will hear a beep. Now, gradually increase the throttle from the transmitter. Your motor will start spinning at any position. This is because the ESC is not calibrated. So, you need to tell the ESC where the high point and the low point of the throttle are. Disconnect the battery first. Increase the throttle of the transmitter to the highest position and power the ESC. Your ESC will now beep once and beep 3 times in every 4 seconds. Now, move the throttle to the bottommost position and you will hear the ESC beep as if it is ready and calibrated. Now, you can increase the throttle of the transmitter and will see from lower to higher, the throttle will work. Now, mount the motors, connect them to the ESCs, and then connect them to the ArduPilot, changing the pins gradually. Now, connect your GPS to the ArduPilot and calibrate it. Now, our drone is ready to fly. I would suggest you fly the drone for about 10-15 minutes before connecting the camera. Connecting the camera For a photography drone, connecting the camera and controlling the camera is one of the most important things. Your pictures and videos will be spoiled if you cannot adjust the camera and stabilize it properly. In our case, we will use a camera gimbal to hold the camera and move it from the ground. Choosing a gimbal The camera gimbal holds the camera for you and can move the camera direction according to your command. There are a number of camera gimbals out there. You can choose any type, depending on your demand and camera size and specification. If you want to use a DSLR camera, you should use a bigger gimbal and, if you use a point and shoot type camera or action camera, you may use small- or medium-sized gimbals. There are two types of gimbals, a brushless gimbal, and a standard gimbal. 
The standard gimbal has servo motors and gears. If you use an FPV camera, then a standard gimbal with a 2-axis manual mount is the best option. The standard gimbal is not heavy; it is lightweight and not expensive. The best thing is you will not need an external controller board for your standard camera gimbal. The brushless gimbal is for professional aero photographers. It is smooth and can shoot videos or photos with better quality. The brushless gimbal will need an external controller board for your drone and the brushless gimbal is heavier than the standard gimbal. Choosing the best gimbal is one of the hard things for a photographer, as the stabilization of the image is a must for photoshoots. If you cannot control the camera from the ground, then using a gimbal is worthless. The following picture shows a number of gimbals: After choosing your camera and the gimbal, the first thing is to mount the gimbal and the camera to the drone. Make sure the mount is firm, but not too hard, because it will make the camera shake while flying the drone. You may use the Styrofoam or rubber pieces that came with the gimbal to reduce the vibration and make the image stable. Configuring the camera with the ArduPilot Configuring the camera with the ArduPilot is easy. Before going any further, let us learn a few things about the camera gimbal's Euler angels: Tilt: This moves the camera sloping position (range -90 degrees to +90 degrees), it is the motion (clockwise-anticlockwise) with the vertical axis Roll: This is a motion ranging from 0 degrees to 360 degrees parallel to the horizontal axis Pan: This is the same type motion of roll ranging from 0 degrees to 360 degrees but in the vertical axis Shutter: This is a switch that triggers a click or sends a signal Firstly, we are going to use the standard gimbal. Basically, there are two servos in a standard gimbal. One is for pitch or tilt and another is for the roll. So, a standard gimbal gives you a two-dimensional motion with the camera viewpoint. Connection Follow these steps to connect the camera to the ArduPilot: Take the pitch servo's signal pin and connect it to the 11th pin of the ArduPilot (A11) and the roll signal to the 10th pin (A10). Make sure you connect only the signal (S pin) cable of the servos to the pin, not the other two pins (ground and the VCC). The signal cables must be connected to the innermost pins of the A11 and A10 pins (two pins make a raw; see the following picture for clarification): My suggestion is adding an extra battery for your gimbal's servos. If you want to connect your servo directly to the ArduPilot, your ArduPilot will not perform well, as the servos will draw power. Now, connect your ArduPilot to your PC using wire or telemetry. Go to the Initial Setup menu and, under Optional Hardware, you will find another option called Camera Gimbal. Click on this and you will see the following screen: For the Tilt, change the pin to RC11; for the Roll, change the pin to RC10; and for Shutter, change it to CH7. If you want to change the Tilt during the flight from the transmitter, you need to change the Input Ch of the Tilt. See the following screenshot: Now, you need to change an option in the Configuration | Extended Tuning page. Set Ch6 Opt to None, as in the following screenshot, and hit the Write Params button: We need to align the minimum and maximum PWM values for the servos of the gimbal. 
To do that, we can tilt the frame of the gimbal to the leftmost position and, from the transmitter, move the knob to the minimum position and start increasing it; your servo will start to move at some point, and at that moment you stop moving the knob. For the maximum calibration, move the Tilt to the rightmost position and do the same thing with the knob at the maximum position. Do the same thing for the pitch, with the forward and backward motion. We also need to level the gimbal for better performance. To do that, you need to keep the gimbal frame level with the ground and set the Camera Gimbal option, the Servo Limits, and the Angle Limits. Change them as per the level of the frame.
Controlling the camera
Controlling the camera to take selfies or record video is easy. You can use the shutter pin we used before, or the camera's mobile app, to control the camera. My suggestion is to use the camera's app to take shots, because you will get a live preview of what you are shooting and it will be easy to control the shots. However, if you want to use the Shutter button manually from the transmitter, then you can do this too. We have connected the RC7 pin for controlling a servo. You can use a servo or a receiver switch for your camera to manually trigger the shutter. To do that, you can buy a receiver-controlled on/off switch. You can use this switch for various purposes; clicking the shutter of your camera is one of them. Manually triggering the camera is easy. It is usually done with point-and-shoot cameras. To do that, you need to update the firmware of your camera. You can do this in many ways, but the easiest one will be discussed here. Your RECEIVER CONTROLLED ON/OFF SWITCH may look like the following: You can see five wires in the picture. The three wires together are, as usual, the pins of the servo motor. Take out the signal cable (in this case, the yellow cable) and connect it to the RC7 pin of the ArduPilot. Then, connect the positive to one of the thick red wires. Take the camera's data cable and connect the other thick wire to the positive of the USB cable; the negative wire will be connected to the negative of the three connected wires. Then, an output of the positive and negative wires will go to the battery (an external battery is suggested for the camera). To upgrade the camera firmware, you need to go to the camera's website and upgrade the firmware for the remote shutter option. In my case, the website is http://chdk.wikia.com/wiki/CHDK. I have downloaded it for a Canon point-and-shoot camera. You can also use action cameras for your drones. They are cheap and can be controlled remotely via mobile applications.
Flying and taking shots
Flying the photography drone is not that difficult. My suggestion is to lock the altitude and fly parallel to the ground. If you use a camera remote controller or an app, then it is really easy to take a photo or record a video. However, if you use the switch, as we discussed, then you need to connect your drone to Mission Planner via telemetry. Go to the flight data, right-click on the map, and then click the Trigger Camera Now option. It will trigger the camera's Shutter button and start recording or take a photo. You can do this when your drone is in a locked position and, using the timer, take a shot from above, which can be a selfie too. Let's try it. Let me know what happens and whether you like it or not.
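If you would like to experiment with the shutter-trigger idea on the bench before wiring it through the ArduPilot, the concept can be prototyped with a bare Arduino and a spare servo. The sketch below is only a rough illustration and is not from the original tutorial: the pin numbers, the 1700 microsecond threshold, and the servo angles are assumptions that you would adjust for your own receiver and shutter mechanism.

#include <Servo.h>

const int RC_INPUT_PIN = 7;            // receiver channel wired to the Arduino (assumed pin)
const int SHUTTER_SERVO_PIN = 9;       // servo arm that physically presses the shutter (assumed pin)
const int TRIGGER_THRESHOLD_US = 1700; // stick position treated as "take a shot" (assumed value)

Servo shutterServo;

void setup() {
  pinMode(RC_INPUT_PIN, INPUT);
  shutterServo.attach(SHUTTER_SERVO_PIN);
  shutterServo.write(0);               // rest position, away from the shutter button
}

void loop() {
  // Read the PWM pulse width coming from the receiver channel, in microseconds.
  unsigned long pulse = pulseIn(RC_INPUT_PIN, HIGH, 25000);

  if (pulse > TRIGGER_THRESHOLD_US) {
    shutterServo.write(45);            // press the shutter
    delay(300);                        // hold long enough for the camera to register the press
    shutterServo.write(0);             // release
    delay(1000);                       // simple debounce so one stick flick takes one shot
  }
}

Once you are happy with the mechanics, the same servo can be driven from the RC7 output described above instead of the Arduino.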
Next, learn to build other drones like a mission control drone or gliding drones from our book Building Smart Drones with ESP8266 and Arduino. Drones: Everything you ever wanted to know! How to build an Arduino based ‘follow me’ drone Tips and tricks for troubleshooting and flying drones safely


Basic Website using Node.js and MySQL database

Packt
14 Jul 2016
5 min read
In this article by Fernando Monteiro, author of the book Node.js 6.x Blueprints, we will understand some basic concepts of a Node.js application that uses a relational database (MySQL), and also look at some differences between the Object Document Mapper (ODM) used with MongoDB and the Object Relational Mapper (ORM) used by Sequelize and MySQL. For this, we will create a simple application and use the resources we have available, as Sequelize is a powerful middleware for the creation of models and the mapping of the database. We will also use another template engine, called Swig, and demonstrate how to add a template engine manually.
Creating the baseline applications
The first step is to create another directory; I'll use the root folder. Create a folder called chapter-02. Open your terminal/shell in this folder and type the express command: express --git Note that we are using only the --git flag this time; we will use another template engine, but we will install it manually.
Installing the Swig template engine
The first step is to change the default Express template engine to Swig, a pretty simple, flexible, and stable template engine that also offers us a syntax very similar to Angular, denoting expressions just by using double curly brackets {{ variableName }}. More information about Swig can be found on the official website at: http://paularmstrong.github.io/swig/docs/ Open the package.json file and replace the jade line with the following: "swig": "^1.4.2" Open your terminal/shell in the project folder and type: npm install Before we proceed, let's make some adjustments to app.js; we need to add the swig module. Open app.js and add the following code, right after the var bodyParser = require('body-parser'); line: var swig = require('swig'); Replace the default jade template engine line with the following code: var swig = new swig.Swig(); app.engine('html', swig.renderFile); app.set('view engine', 'html');
Refactoring the views folder
Let's change the views folder to the following new structure: views pages/ partials/ Remove the default jade files from the views folder. Create a file called layout.html inside the pages folder and place the following code: <!DOCTYPE html> <html> <head> </head> <body> {% block content %} {% endblock %} </body> </html> Create an index.html inside the views/pages folder and place the following code: {% extends 'layout.html' %} {% block title %}{% endblock %} {% block content %} <h1>{{ title }}</h1> Welcome to {{ title }} {% endblock %} Create an error.html page inside the views/pages folder and place the following code: {% extends 'layout.html' %} {% block title %}{% endblock %} {% block content %} <div class="container"> <h1>{{ message }}</h1> <h2>{{ error.status }}</h2> <pre>{{ error.stack }}</pre> </div> {% endblock %} We need to adjust the views path in app.js; replace the code on line 14 with the following code: // view engine setup app.set('views', path.join(__dirname, 'views/pages')); At this point, we have completed the first step towards our MVC application. In this example, we will use the MVC pattern in its full meaning: Model, View, Controller.
Creating the controllers folder
Create a folder called controllers inside the root project folder.
Create an index.js inside the controllers folder and place the following code: // Index controller exports.show = function(req, res) { // Show index content res.render('index', { title: 'Express' }); }; Edit the app.js file and replace the original index route app.use('/', routes); with the following code: app.get('/', index.show); Add the controller path to app.js on line 9, replacing the original code with the following code: // Inject index controller var index = require('./controllers/index'); Now it's time to check that everything works as expected: we run the application and check the result. Type the following command in your terminal/shell: npm start Check the following URL: http://localhost:3000; you'll see the welcome message of the Express framework.
Removing the default routes folder
Remove the routes folder and its content. Remove the user route from app.js (after the index controller, on line 31).
Adding partials files for head and footer
Inside views/partials, create a new file called head.html and place the following code: <meta charset="utf-8"> <title>{{ title }}</title> <link rel='stylesheet' href='https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.0.0-alpha.2/css/bootstrap.min.css'> <link rel="stylesheet" href="/stylesheets/style.css"> Inside views/partials, create a file called footer.html and place the following code: <script src='https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.1/jquery.min.js'></script> <script src='https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.0.0-alpha.2/js/bootstrap.min.js'></script> Now it is time to add the partials files to the layout.html page using the include tag. Open layout.html and add the following highlighted code: <!DOCTYPE html> <html> <head> {% include "../partials/head.html" %} </head> <body> {% block content %} {% endblock %} {% include "../partials/footer.html" %} </body> </html> Finally, we are prepared to continue with our project; at this point, our directory structure looks like the following image: Folder structure
Summary
In this article, we discussed the basic concepts of Node.js with a MySQL database, and we also saw how to replace the default Express template engine and use another resource, the Swig template library, to build a basic website. Resources for Article: Further resources on this subject: Exception Handling in MySQL for Python [article] Python Scripting Essentials [article] Splunk's Input Methods and Data Feeds [article]
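Before moving on to the next tutorial, it may help to see the app.js changes described above gathered in one place. The following excerpt is a sketch rather than the book's exact file: it assumes the default express-generator layout, so the surrounding middleware and the exact line positions are placeholders to adapt to your own project.

// app.js (excerpt) -- a consolidated sketch of the wiring described above
var express = require('express');
var path = require('path');
var bodyParser = require('body-parser');
var swig = require('swig');

// Inject index controller
var index = require('./controllers/index');

var app = express();

// view engine setup: Swig instead of the default jade
var swigEngine = new swig.Swig();
app.engine('html', swigEngine.renderFile);
app.set('view engine', 'html');
app.set('views', path.join(__dirname, 'views/pages'));

app.use(bodyParser.json());
app.use(express.static(path.join(__dirname, 'public')));

// index route handled by the controller instead of the removed routes folder
app.get('/', index.show);

module.exports = app;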

How to set up the Scala Plugin in IntelliJ IDE [Tutorial]

Pavan Ramchandani
26 Jun 2018
2 min read
The Scala Plugin is used to turn a normal IntelliJ IDEA installation into a convenient Scala development environment. In this article, we will discuss how to set up the Scala Plugin for the IntelliJ IDEA IDE. If you do not have IntelliJ IDEA, you can download it from the JetBrains website. By default, IntelliJ IDEA does not come with Scala features. The Scala Plugin adds those features, which means we can create Scala/Play projects, Scala applications, Scala worksheets, and more. The Scala Plugin covers the following technologies: Scala, Play Framework, SBT, and Scala.js. It supports three popular OS environments: Windows, Mac, and Linux.
Setting up the Scala Plugin for the IntelliJ IDE
Perform the following steps to install the Scala Plugin for the IntelliJ IDE so that we can develop our Scala-based projects: Open the IntelliJ IDE: Go to Configure at the bottom right and click on the Plugins option available in the drop-down, as shown here: This opens the Plugins window, as shown here: Now click on Install JetBrains plugins, as shown in the preceding screenshot. Next, type the word Scala in the search bar to see the Scala Plugin, as shown here: Click on the Install button to install the Scala Plugin for IntelliJ IDEA. Now restart IntelliJ IDEA to enable the Scala Plugin features. After we reopen IntelliJ IDEA, if we access the File | New Project option, we will see a Scala option in the New Project window, as shown in the following screenshot, for creating new Scala or Play Framework-based SBT projects: We can see the Play Framework option only in the IntelliJ IDEA Ultimate Edition. As we are using CE (Community Edition), we cannot see that option. It's now time to start Scala/Play application development using the IntelliJ IDE. You can start developing some Scala/Play-based applications. To summarize, we gained an understanding of the Scala Plugin and covered the installation steps for the Scala Plugin for IntelliJ. To learn more about solutions for taking a reactive programming approach with Scala, please refer to the book Scala Reactive Programming. What Scala 3.0 Roadmap looks like! Building Scalable Microservices Exploring Scala Performance
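As a quick follow-up to the setup steps above, one way to confirm that the plugin and its SBT integration work is to create a tiny SBT project and run it from the IDE. The snippet below is only a suggested smoke test and is not part of the original article; the project name and the Scala version shown are assumptions, so use whichever version your installation offers.

// build.sbt -- a minimal, hypothetical project definition
name := "scala-plugin-smoke-test"
version := "0.1.0"
scalaVersion := "2.12.8"

// src/main/scala/Main.scala
object Main extends App {
  // Prints a message so we can see the toolchain end to end.
  println("The Scala Plugin and SBT integration are working!")
}

If the project imports without errors and the Run action prints the message, the plugin is set up correctly.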


Access application data with Entity Framework in .NET Core [Tutorial]

Aaron Lazar
14 Aug 2018
14 min read
In this tutorial, we will get started with using the Entity Framework and create a simple console application to perform CRUD operations. The intent is to get started with EF Core and understand how to use it. Before we dive into coding, let us see the two development approaches that EF Core supports: Code-first Database-first These two paradigms have been supported for a very long time and therefore we will just look at them at a very high level. EF Core mainly targets the code-first approach and has limited support for the database-first approach, as there is no support for the visual designer or wizard for the database model out of the box. However, there are third-party tools and extensions that support this. The list of third-party tools and extensions can be seen at https://docs.microsoft.com/en-us/ef/core/extensions/. This tutorial has been extracted from the book .NET Core 2.0 By Example, by Rishabh Verma and Neha Shrivastava. In the code-first approach, we first write the code; that is, we first create the domain model classes and then, using these classes, EF Core APIs create the database and tables, using migration based on the convention and configuration provided. We will look at conventions and configurations a little later in this section. The following diagram illustrates the code-first approach: In the database-first approach, as the name suggests, we have an existing database or we create a database first and then use EF Core APIs to create the domain and context classes. As mentioned, currently EF Core has limited support for it due to a lack of tooling. So, our preference will be for the code-first approach throughout our examples. The reader can discover the third-party tools mentioned previously to learn more about the EF Core database-first approach as well. The following image illustrates the database-first approach: Building Entity Framework Core Console App Now that we understand the approaches and know that we will be using the code-first approach, let's dive into coding our getting started with EF Core console app. Before we do so, we need to have SQL Express installed in our development machine. If SQL Express is not installed, download the SQL Express 2017 edition from https://www.microsoft.com/en-IN/sql-server/sql-server-downloads and run the setup wizard. We will do the Basic installation of SQL Express 2017 for our learning purposes, as shown in the following screenshot: Our objective is to learn how to use EF Core and so we will not do anything fancy in our console app. We will just do simple Create Read Update Delete (CRUD) operations of a simple class called Person, as defined here: public class Person { public int Id { get; set; } public string Name { get; set; } public bool Gender { get; set; } public DateTime DateOfBirth { get; set; } public int Age { get { var age = DateTime.Now.Year - this.DateOfBirth.Year; if (DateTime.Now.DayOfYear < this.DateOfBirth.DayOfYear) { age = age - 1; } return age; } } } As we can see in the preceding code, the class has simple properties. To perform the CRUD operations on this class, let's create a console app by performing the following steps: Create a new .NET Core console project named GettingStartedWithEFCore, as shown in the following screenshot: Create a new folder named Models in the project node and add the Person class to this newly created folder. This will be our model entity class, which we will use for CRUD operations. Next, we need to install the EF Core package. 
Before we do that, it's important to know that EF Core provides support for a variety of databases. A few of the important ones are SQL Server, SQLite, and InMemory (for testing). The complete and comprehensive list can be seen at https://docs.microsoft.com/en-us/ef/core/providers/. We will be working with SQL Server on Windows for our learning purposes, so let's install the SQL Server package for Entity Framework Core. To do so, let's install the Microsoft.EntityFrameworkCore.SqlServer package from the NuGet Package Manager in Visual Studio 2017. Right-click on the project. Select Manage Nuget Packages and then search for Microsoft.EntityFrameworkCore.SqlServer. Select the matching result and click Install: Next, we will create a class called Context, as shown here: public class Context : DbContext { public DbSet<Person> Persons { get; set; } protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder) { //// Get the connection string from configuration optionsBuilder.UseSqlServer(@"Server=.\SQLEXPRESS;Database=PersonDatabase;Trusted_Connection=True;"); } protected override void OnModelCreating(ModelBuilder modelBuilder) { modelBuilder.Entity<Person>().Property(nameof(Person.Name)).IsRequired(); } } The class looks quite simple, but it has the following subtle and important things to make note of: The Context class derives from DbContext, which resides in the Microsoft.EntityFrameworkCore namespace. DbContext is an integral part of EF Core and if you have worked with EF, you will already be aware of it. An instance of DbContext represents a session with the database and can be used to query and save instances of your entities. DbContext is a combination of the Unit Of Work and Repository Patterns. Typically, you create a class that derives from DbContext and contains Microsoft.EntityFrameworkCore.DbSet properties for each entity in the model. If properties have a public setter, they are automatically initialized when the instance of the derived context is created. It contains a property named Persons (plural of the model class Person) of type DbSet<Person>. This will map to the Persons table in the underlying database. The class overrides the OnConfiguring method of DbContext and specifies the connection string to be used with the SQL Server database. The connection string should be read from the configuration file, appSettings.json, but for the sake of brevity and simplicity, it's hardcoded in the preceding code. The OnConfiguring method allows us to select and configure the data source to be used with a context using DbContextOptionsBuilder. Let's look at the connection string. Server= specifies the server. It can be .\SQLEXPRESS, .\SQLSERVER, .\LOCALDB, or any other instance name based on the installation you have done. Database= specifies the database name that will be created. Trusted_Connection=True specifies that we are using integrated security or Windows authentication. An enthusiastic reader should read the official Microsoft Entity Framework documentation on configuring the context at https://docs.microsoft.com/en-us/ef/core/miscellaneous/configuring-dbcontext.  The OnModelCreating method allows us to configure the model using the ModelBuilder Fluent API. This is the most powerful method of configuration and allows configuration to be specified without modifying the entity classes. The Fluent API configuration has the highest precedence and will override conventions and data annotations.
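As noted above, the connection string is hardcoded purely for brevity and would normally come from appSettings.json. The snippet below shows one possible way to do that rather than the book's implementation; it assumes the Microsoft.Extensions.Configuration.Json package (and its dependencies) is installed and that the JSON file sits in the working directory, so adjust the key name and the path to your project.

// appsettings.json (assumed content):
// {
//   "ConnectionStrings": {
//     "PersonDatabase": "Server=.\\SQLEXPRESS;Database=PersonDatabase;Trusted_Connection=True;"
//   }
// }

using System.IO;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;

public class Context : DbContext
{
    public DbSet<Person> Persons { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // Build configuration from appsettings.json instead of hardcoding the string.
        IConfiguration configuration = new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json", optional: false)
            .Build();

        optionsBuilder.UseSqlServer(
            configuration.GetConnectionString("PersonDatabase"));
    }

    // OnModelCreating from the article stays exactly as shown earlier and is omitted here.
}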
The Fluent API statement in the OnModelCreating method has the same effect as the following data annotation on the Name property in the Person class: [Required] public string Name { get; set; } The preceding point highlights the flexibility and configurability that EF Core brings to the table. EF Core uses a combination of conventions, attributes, and Fluent API statements to build a database model at runtime. All we have to do is perform actions on the model classes using a combination of these, and they will automatically be translated to appropriate changes in the database. Before we conclude this point, let's have a quick look at each of the different ways to configure a database model: EF Core conventions: The conventions in EF Core are comprehensive. They are the default rules by which EF Core builds a database model based on classes. A few of the simpler yet important default conventions are listed here: EF Core creates database tables for all DbSet<TEntity> properties in a Context class with the same name as that of the property. In the preceding example, the table name would be Persons based on this convention. EF Core creates tables for entities that are not included as DbSet properties but are reachable through reference properties in the other DbSet entities. If the Person class had a complex/navigation property, EF Core would have created a table for it as well. EF Core creates columns for all the scalar read-write properties of a class with the same name as the property by default. It uses the reference and collection properties for building relationships among corresponding tables in the database. In the preceding example, the scalar properties of Person correspond to columns in the Persons table. EF Core assumes a property named ID, or one that is suffixed with ID, to be the primary key. If the property is an integer type or Guid type, then EF Core also assumes it to be IDENTITY and automatically assigns a value when inserting the data. This is precisely what we will make use of in our example while inserting or creating a new Person. EF Core maps the data type of a database column based on the data type of the property defined in the C# class. A few of the mappings from C# data types to SQL Server column data types are listed in the following table:
C# data type -> SQL Server data type
int -> int
string -> nvarchar(Max)
decimal -> decimal(18,2)
float -> real
byte[] -> varbinary(Max)
datetime -> datetime
bool -> bit
byte -> tinyint
short -> smallint
long -> bigint
double -> float
There are many other conventions, and we can define custom conventions as well. For more details, please read the official Microsoft documentation at https://docs.microsoft.com/en-us/ef/core/modeling/. Attributes: Conventions are often not enough to map the class to database objects. In such scenarios, we can use attributes, called data annotation attributes, to get the desired results. The [Required] attribute that we have just seen is an example of a data annotation attribute. Fluent API: This is the most powerful way of configuring the model and can be used in addition to, or in place of, attributes. The code written in the OnModelCreating method is an example of a Fluent API statement. If we check now, there is no PersonDatabase database. So, we need to create the database from the model by adding a migration. EF Core includes different migration commands to create or update the database based on the model.
To do so in Visual Studio 2017, go to Tools | Nuget Package Manager | Package Manager Console, as shown in the following screenshot: This will open the Package Manager Console window. Select the Default Project as GettingStartedWithEFCore and type the following command: add-migration CreatePersonDatabase If you are not using Visual Studio 2017 and you are dependent on .NET Core CLI tooling, you can use the following command: dotnet ef migrations add CreatePersonDatabase We have not installed the Microsoft.EntityFrameworkCore.Design package, so it will give an error: Your startup project 'GettingStartedWithEFCore' doesn't reference Microsoft.EntityFrameworkCore.Design. This package is required for the Entity Framework Core Tools to work. Ensure your startup project is correct, install the package, and try again. So let's first go to the NuGet Package Manager and install this package. After successful installation of this package, if we run the preceding command again, we should be able to run the migrations successfully. It will also tell us the command to undo the migration by displaying the message To undo this action, use Remove-Migration. We should see the new files added in the Solution Explorer in the Migrations folder, as shown in the following screenshot: 8. Although we have migrations applied, we have still not created a database. To create the database, we need to run the following commands. In Visual Studio 2017: update-database –verbose In .NET Core CLI: dotnet ef database update If all goes well, we should have the database created with the Persons table (property of type DbSet<Person&gt;) in the database. Let's validate the table and database by using SQL Server Management Studio (SSMS). If SSMS is not installed in your machine, you can also use Visual Studio 2017 to view the database and table. Let's check the created database. In Visual Studio 2017, click on the View menu and select Server Explorer, as shown in the following screenshot: In Server Explorer, right-click on Data Connections and then select Add Connection. The Add Connection dialog will show up. Enter .\SQLEXPRESS in the Server name (since we installed SQL EXPRESS 2017) and select PersonDatabase as the database, as shown in the following screenshot: On clicking OK, we will see the database named PersonDatabase and if we expand the tables, we can see the Persons table as well as the _EFMigrationsHistory table. Notice that the properties in the Person class that had setters are the only properties that get transformed into table columns in the Persons table. Notice that the Age property is read-only in the class we created and therefore we do not see an age column in the database table, as shown in the following screenshot: This is the first migration to create a database. Whenever we add or update the model classes or configurations, we need to sync the database with the model using the add-migration and update-database commands. With this, we have our model class ready and the corresponding database created. The following image summarizes how the properties have been mapped from the C# class to the database table columns: Now, we will use the Context class to perform CRUD operations.  Let's go back to our Main.cs and write the following code. 
The code is well commented, so please go through the comments to understand the flow: class Program { static void Main(string[] args) { Console.WriteLine("Getting started with EF Core"); Console.WriteLine("We will do CRUD operations on Person class."); //// Lets create an instance of Person class. Person person = new Person() { Name = "Rishabh Verma", Gender = true, //// For demo true= Male, false = Female. Prefer enum in real cases. DateOfBirth = new DateTime(2000, 10, 23) }; using (var context = new Context()) { //// Context has strongly typed property named Persons which referes to Persons table. //// It has methods Add, Find, Update, Remove to perform CRUD among many others. //// Use AddRange to add multiple persons in once. //// Complete set of APIs can be seen by using F12 on the Persons property below in Visual Studio IDE. var personData = context.Persons.Add(person); //// Though we have done Add, nothing has actually happened in database. All changes are in context only. //// We need to call save changes, to persist these changes in the database. context.SaveChanges(); //// Notice above that Id is Primary Key (PK) and hence has not been specified in the person object passed to context. //// So, to know the created Id, we can use the below Id int createdId = personData.Entity.Id; //// If all goes well, person data should be persisted in the database. //// Use proper exception handling to discover unhandled exception if any. Not showing here for simplicity and brevity. createdId variable would now hold the id of created person. //// READ BEGINS Person readData = context.Persons.Where(j => j.Id == createdId).FirstOrDefault(); //// We have the data of person where Id == createdId, i.e. details of Rishabh Verma. //// Lets update the person data all together just for demonstarting update functionality. //// UPDATE BEGINS person.Name = "Neha Shrivastava"; person.Gender = false; person.DateOfBirth = new DateTime(2000, 6, 15); person.Id = createdId; //// For update cases, we need this to be specified. //// Update the person in context. context.Persons.Update(person); //// Save the updates. context.SaveChanges(); //// DELETE the person object. context.Remove(readData); context.SaveChanges(); } Console.WriteLine("All done. Please press Enter key to exit..."); Console.ReadLine(); } } With this, we have completed our sample app to get started with EF Core. I hope this simple example will set you up to start using EF Core with confidence and encourage you to start exploring it further. The detailed features of EF Core can be learned from the official Microsoft documentation available at https://docs.microsoft.com/en-us/ef/core/. If you're interested in learning more, head over to this book, .NET Core 2.0 By Example, by Rishabh Verma and Neha Shrivastava. How to build a chatbot with Microsoft Bot framework Working with Entity Client and Entity SQL Get to know ASP.NET Core Web API [Tutorial]
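Returning to the CRUD sample above for a moment: it keeps everything synchronous and, as its comments note, leaves out exception handling for brevity. The following variation is a rough sketch of how the create step might look with the async APIs and a basic try/catch; it is not from the book, and the error-handling policy shown (just writing to the console) is an assumption to adapt to your application.

using System;
using System.Threading.Tasks;

public static class ProgramAsync
{
    public static async Task CreatePersonAsync()
    {
        var person = new Person
        {
            Name = "Rishabh Verma",
            Gender = true,
            DateOfBirth = new DateTime(2000, 10, 23)
        };

        using (var context = new Context())
        {
            try
            {
                context.Persons.Add(person);

                // SaveChangesAsync frees the calling thread while the database round trip completes.
                await context.SaveChangesAsync();

                // EF Core writes the generated identity value back onto the entity.
                Console.WriteLine($"Created person with Id {person.Id}");
            }
            catch (Microsoft.EntityFrameworkCore.DbUpdateException ex)
            {
                // Raised when the INSERT fails, for example due to a constraint violation.
                Console.Error.WriteLine($"Could not save the person: {ex.Message}");
            }
        }
    }
}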


Implementing a non-blocking cross-service communication with WebClient[Tutorial]

Amrata Joshi
13 Feb 2019
10 min read
The  WebClient is the reactive replacement for the old RestTemplate.  However, in WebClient, we have a functional API that fits better with the reactive approach and offers built-in mapping to Project Reactor types such as Flux or Mono. This article is an excerpt taken from the book Hands-On Reactive Programming in Spring 5 written by Oleh Dokuka and Igor Lozynskyi. This book covers the difference between a reactive system and reactive programming, the basics of reactive programming in Spring 5 and much more. In this article, you will understand the basics of non-blocking cross-service communication with WebClient, reactive WebSocket API, server-side WebSocket API, and much more. WebClient.create("http://localhost/api") // (1) .get() // (2) .uri("/users/{id}", userId) // (3) .retrieve() // (4) .bodyToMono(User.class) // (5) .map(...) // (6) .subscribe(); // In the preceding example, we create a WebClient instance using a factory method called create, shown at point 1. Here, the create method allows us to specify the base URI, which is used internally for all future HTTP calls. Then, in order to start building a call to a remote server, we may execute one of the WebClient methods that sounds like an HTTP method. In the previous example, we used WebClient#get, shown at point (2). Once we call the WebClient#get method, we operate on the request builder instance and can specify the relative path in the uri method, shown at point (3). In addition to the relative path, we can specify headers, cookies, and a request body. However, for simplicity, we have omitted those settings in this case and moved on to composing the request by calling the retrieve or exchange methods. In this example, we use the retrieve method, shown at point (4). This option is useful when we are only interested in retrieving the body and performing further processing. Once the request is set up, we may use one of the methods that help us with the conversion of the response body. Here, we use the bodyToMono method, which converts the incoming payload of the User to Mono, shown at point (5). Finally, we can build the processing flow of the incoming response using the Reactor API, and execute the remote call by calling the subscribe method. WebClient follows the behavior described in the Reactive Streams specification. This means that only by calling the subscribe method will WebClient wire the connection and start sending the data to the remote server. Even though, in most cases, the most common response processing is body processing, there are some cases where we need to process the response status, headers, or cookies. 
For example, let's build a call to our password checking service and process the response status in a custom way using the WebClient API: class DefaultPasswordVerificationService // (1) implements PasswordVerificationService { // final WebClient webClient; // (2) // public DefaultPasswordVerificationService( // WebClient.Builder webClientBuilder // ) { // this.webClient = webClientBuilder // (2.1) .baseUrl("http://localhost:8080") // .build(); // } // @Override // (3) public Mono<Void> check(String raw, String encoded) { // return webClient // .post() // (3.1) .uri("/check") // .body(BodyInserters.fromPublisher( // (3.2) Mono.just(new PasswordDTO(raw, encoded)), // PasswordDTO.class // )) // .exchange() // (3.3) .flatMap(response -> { // (3.4) if (response.statusCode().is2xxSuccessful()) { // (3.5) return Mono.empty(); // } // else if(resposne.statusCode() == EXPECTATION_FAILD) { // return Mono.error( // (3.6) new BadCredentialsException(...) // ); // } // return Mono.error(new IllegalStateException()); // }); // } // } // The following numbered list describes the preceding code sample: This is the implementation of the PasswordVerificationService interface. This is the initialization of the WebClient instance. It is important to note that we use a WebClient instance per class here, so we do not have to initialize a new one on each execution of the check method. Such a technique reduces the need to initialize a new instance of WebClient and decreases the method's execution time. However, the default implementation of WebClient uses the Reactor-Netty HttpClient, which in default configurations shares a common pool of resources among all the HttpClient instances. Hence, the creation of a new HttpClient instance does not cost that much. Once the constructor of DefaultPasswordVerificationService is called, we start initializing webClient and use a fluent builder, shown at point (2.1), in order to set up the client. This is the implementation of the check method. Here, we use the webClient instance in order to execute a post request, shown at point (3.1). In addition, we send the body, using the body method, and prepare to insert it using the BodyInserters#fromPublisher factory method, shown in (3.2). We then execute the exchange method at point (3.3), which returns Mono<ClientResponse>. We may, therefore, process the response using the flatMap operator, shown in (3.4). If the password is verified successfully, as shown at point (3.5), the check method returns Mono.empty. Alternatively, in the case of an EXPECTATION_FAILED(417) status code, we may return the Mono of BadCredentialsExeception, as shown at point (3.6). As we can see from the previous example, in a case where it is necessary to process the status code, headers, cookies, and other internals of the common HTTP response, the most appropriate method is the exchange method, which returns ClientResponse. As mentioned, DefaultWebClient uses the Reactor-Netty HttpClient in order to provide asynchronous and non-blocking interaction with the remote server. However, DefaultWebClient is designed to be able to change the underlying HTTP client easily. For that purpose, there is a low-level reactive abstraction around the HTTP connection, which is called org.springframework.http.client.reactive.ClientHttpConnector. By default, DefaultWebClient is preconfigured to use ReactorClientHttpConnector, which is an implementation of the ClientHttpConnector interface. 
Starting from Spring WebFlux 5.1, there is a JettyClientHttpConnector implementation, which uses the reactive HttpClient from Jetty. In order to change the underlying HTTP client engine, we may use the WebClient.Builder#clientConnector method and pass the desired instance, which might be either a custom implementation or the existing one. In addition to the useful abstract layer, ClientHttpConnector may be used in a raw format. For example, it may be used for downloading large files, on-the-fly processing, or just simple byte scanning. We will not go into details about ClientHttpConnector; we will leave this for curious readers to look into themselves. Reactive WebSocket API We have now covered most of the new features of the new WebFlux module. However, one of the crucial parts of the modern web is a streaming interaction model, where both the client and server can stream messages to each other. In this section, we will look at one of the most well-known duplex protocols for duplex client-server communication, called WebSocket. Despite the fact that communication over the WebSocket protocol was introduced in the Spring Framework in early 2013 and designed for asynchronous message sending, the actual implementation still has some blocking operations. For instance, both writing data to I/O or reading data from I/O are still blocking operations and therefore both impact on the application's performance. Therefore, the WebFlux module has introduced an improved version of the infrastructure for WebSocket. WebFlux offers both client and server infrastructure. We are going to start by analyzing the server-side WebSocket and will then cover the client-side possibilities. Server-side WebSocket API WebFlux offers WebSocketHandler as the central interface for handling WebSocket connections. This interface has a method called handle, which accepts WebSocketSession. The WebSocketSession class represents a successful handshake between the client and server and provides access to information, including information about the handshake, session attributes, and the incoming stream of data. In order to learn how to deal with this information, let's consider the following example of responding to the sender with echo messages: class EchoWebSocketHandler implements WebSocketHandler { // (1) @Override // public Mono<Void> handle(WebSocketSession session) { // (2) return session // (3) .receive() // (4) .map(WebSocketMessage::getPayloadAsText) // (5) .map(tm -> "Echo: " + tm) // (6) .map(session::textMessage) // (7) .as(session::send); // (8) } // } As we can see from the previous example, the new WebSocket API is built on top of the reactive types from Project Reactor. Here, at point (1), we provide an implementation of the WebSocketHandler interface and override the handle method at point (2). Then, we use the WebSocketSession#receive method at point (3) in order to build the processing flow of the incoming WebSocketMessage using the Flux API. WebSocketMessage is a wrapper around DataBuffer and provides additional functionalities, such as translating the payload represented in bytes to text in point (5). Once the incoming message is extracted, we prepend to that text the "Echo: " suffix shown at point (6), wrap the new text message in the WebSocketMessage, and send it back to the client using the WebSocketSession#send method. Here, the send method accepts Publisher<WebSocketMessage> and returns Mono<Void> as the result. 
Therefore, using the as operator from the Reactor API, we may treat Flux as Mono<Void> and use session::send as a transformation function. Apart from the WebSocketHandler interface implementation, setting up the server-side WebSocket API requires configuring additional HandlerMapping and WebSocketHandlerAdapter instances. Consider the following code as an example of such a configuration: @Configuration // (1) public class WebSocketConfiguration { // @Bean // (2) public HandlerMapping handlerMapping() { // SimpleUrlHandlerMapping mapping = // new SimpleUrlHandlerMapping(); // (2.1) mapping.setUrlMap(Collections.singletonMap( // (2.2) "/ws/echo", // new EchoWebSocketHandler() // )); // mapping.setOrder(-1); // (2.3) return mapping; // } // @Bean // (3) public HandlerAdapter handlerAdapter() { // return new WebSocketHandlerAdapter(); // } // } The preceding example can be described as follows: This is the class that is annotated with @Configuration. Here, we have the declaration and setup of the HandlerMapping bean. At point (2.1), we create SimpleUrlHandlerMapping, which allows setup path-based mapping, shown at point (2.2), to WebSocketHandler. In order to allow SimpleUrlHandlerMapping to be handled prior to other HandlerMapping instances, it should be a higher priority. This is the declaration of the HandlerAdapter bean, which is WebSocketHandlerAdapter. Here, WebSocketHandlerAdapter plays the most important role, since it upgrades the HTTP connection to the WebSocket one and then calls the WebSocketHandler#handle method. Client-side WebSocket API Unlike the WebSocket module (which is based on WebMVC), WebFlux provides us with client-side support too. In order to send a WebSocket connection request, we have the WebSocketClient class. WebSocketClient has two central methods to execute WebSocket connections, as shown in the following code sample: public interface WebSocketClient { Mono<Void> execute( URI url, WebSocketHandler handler ); Mono<Void> execute( URI url, HttpHeaders headers, WebSocketHandler handler ); } As we can see, WebSocketClient uses the same WebSockeHandler interface in order to process messages from the server and send messages back. There are a few WebSocketClient implementations that are related to the server engine, such as the TomcatWebSocketClient implementation or the JettyWebSocketClient implementation. In the following example, we will look at ReactorNettyWebSocketClient: WebSocketClient client = new ReactorNettyWebSocketClient(); client.execute( URI.create("http://localhost:8080/ws/echo"), session -> Flux .interval(Duration.ofMillis(100)) .map(String::valueOf) .map(session::textMessage) .as(session::send) ); The preceding example shows how we can use ReactorNettyWebSocketClient to wire a WebSocket connection and start sending periodic messages to the server. To summarize, we learned the basics of non-blocking cross-service communication with WebClient, reactive WebSocket API, server-side WebSocket API, and much more. To know more about the reactive system and reactive programming, check out the book, Hands-On Reactive Programming in Spring 5 written by Oleh Dokuka and Igor Lozynskyi.  Getting started with React Hooks by building a counter with useState and useEffect Implementing Dependency Injection in Swift [Tutorial] Reactive programming in Swift with RxSwift and RxCocoa [Tutorial]
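As an addendum to the ClientHttpConnector discussion above: the article mentions the WebClient.Builder#clientConnector method but does not show it in code. A minimal sketch of swapping in a customized Reactor Netty engine might look like the following; it assumes Spring WebFlux 5.1 or later with Reactor Netty on the classpath, and the wiretap call is just one example of an engine-level option rather than a recommendation.

import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;

public class WebClientConnectorExample {

    public static WebClient buildClient() {
        // Customize the underlying Reactor Netty HttpClient, for example with wire-level logging.
        HttpClient httpClient = HttpClient.create().wiretap(true);

        // Plug the customized engine into WebClient through the ClientHttpConnector abstraction.
        return WebClient.builder()
                .baseUrl("http://localhost:8080")
                .clientConnector(new ReactorClientHttpConnector(httpClient))
                .build();
    }
}

The same builder method would accept a JettyClientHttpConnector instance if you preferred the Jetty engine mentioned above.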


Implementing 5 Common Design Patterns in JavaScript (ES8)

Richa Tripathi
01 May 2018
14 min read
In this tutorial, we'll see how common design patterns can be used as blueprints for organizing larger structures. Defining steps with template functions A template is a design pattern that details the order a given set of operations are to be executed in; however, a template does not outline the steps themselves. This pattern is useful when behavior is divided into phases that have some conceptual or side effect dependency that requires them to be executed in a specific order. Here, we'll see how to use the template function design pattern. We assume you already have a workspace that allows you to create and run ES modules in your browser for all the recipes given below: How to do it... Open your command-line application and navigate to your workspace. Create a new folder named 09-01-defining-steps-with-template-functions. Copy or create an index.html file that loads and runs a main function from main.js. Create a main.js file that defines a new abstract class named Mission: // main.js class Mission { constructor () { if (this.constructor === Mission) { throw new Error('Mission is an abstract class, must extend'); } } } Add a function named execute that calls three instance methods—determineDestination, determinPayload, and launch: // main.js class Mission { execute () { this.determinDestination(); this.determinePayload(); this.launch(); } } Create a LunarRover class that extends the Mission class: // main.js class LunarRover extends Mission {} Add a constructor that assigns name to an instance property: // main.js class LunarRover extends Mission constructor (name) { super(); this.name = name; } } Implement the three methods called by Mission.execute: // main.js class LunarRover extends Mission {} determinDestination() { this.destination = 'Oceanus Procellarum'; } determinePayload() { this.payload = 'Rover with camera and mass spectrometer.'; } launch() { console.log(` Destination: ${this.destination} Playload: ${this.payload} Lauched! Rover Will arrive in a week. `); } } Create a JovianOrbiter class that also extends the Mission class: // main.js class LunarRover extends Mission {} constructor (name) { super(); this.name = name; } determinDestination() { this.destination = 'Jovian Orbit'; } determinePayload() { this.payload = 'Orbiter with decent module.'; } launch() { console.log(` Destination: ${this.destination} Playload: ${this.payload} Lauched! Orbiter Will arrive in 7 years. `); } } Create a main function that creates both concrete mission types and executes them: // main.js export function main() { const jadeRabbit = new LunarRover('Jade Rabbit'); jadeRabbit.execute(); const galileo = new JovianOrbiter('Galileo'); galileo.execute(); } Start your Python web server and open the following link in your browser: http://localhost:8000/. The output should appear as follows: How it works... The Mission abstract class defines the execute method, which calls the other instance methods in a particular order. You'll notice that the methods called are not defined by the Mission class. This implementation detail is the responsibility of the extending classes. This use of abstract classes allows child classes to be used by code that takes advantage of the interface defined by the abstract class. In the template function pattern, it is the responsibility of the child classes to define the steps. When they are instantiated, and the execute method is called, those steps are then performed in the specified order. 
Ideally, we'd be able to ensure that Mission.execute was not overridden by any inheriting classes. Overriding this method works against the pattern and breaks the contract associated with it. This pattern is useful for organizing data-processing pipelines. The guarantee that these steps will occur in a given order means that, if side effects are eliminated, the instances can be organized more flexibly. The implementing class can then organize these steps in the best possible way. Assembling customized instances with builders The previous recipe shows how to organize the operations of a class. Sometimes, object initialization can also be complicated. In these situations, it can be useful to take advantage of another design pattern: builders. Now, we'll see how to use builders to organize the initialization of more complicated objects. How to do it... Open your command-line application and navigate to your workspace. Create a new folder named 09-02-assembling-instances-with-builders. Create a main.js file that defines a new class named Mission, which that takes a name constructor argument and assigns it to an instance property. Also, create a describe method that prints out some details: // main.js class Mission { constructor (name) { this.name = name; } describe () { console.log(` The ${this.name} mission will be launched by a ${this.rocket.name} rocket, and deliver a ${this.payload.name} to ${this.destination.name}. `); } } Create classes named Destination, Payload, and Rocket, which receive a name property as a constructor parameter and assign it to an instance property: // main.js class Destination { constructor (name) { this.name = name; } } class Payload { constructor (name) { this.name = name; } } class Rocket { constructor (name) { this.name = name; } }   Create a MissionBuilder class that defines the setMissionName, setDestination, setPayload, and setRocket methods: // main.js class MissionBuilder { setMissionName (name) { this.missionName = name; return this; } setDestination (destination) { this.destination = destination; return this; } setPayload (payload) { this.payload = payload; return this; } setRocket (rocket) { this.rocket = rocket; return this; } } Create a build method that creates a new Mission instance with the appropriate properties: // main.js class MissionBuilder { build () { const mission = new Mission(this.missionName); mission.rocket = this.rocket; mission.destination = this.destination; mission.payload = this.payload; return mission; } } Create a main function that uses MissionBuilder to create a new mission instance: // main.js export function main() { // build an describe a mission new MissionBuilder() .setMissionName('Jade Rabbit') .setDestination(new Destination('Oceanus Procellarum')) .setPayload(new Payload('Lunar Rover')) .setRocket(new Rocket('Long March 3B Y-23')) .build() .describe(); } Start your Python web server and open the following link in your browser: http://localhost:8000/. Your output should appear as follows: How it works... The builder defines methods for assigning all the relevant properties and defines a build method that ensures that each is called and assigned appropriately. Builders are like template functions, but instead of ensuring that a set of operations are executed in the correct order, they ensure that an instance is properly configured before returning. Because each instance method of MissionBuilder returns the this reference, the methods can be chained. 
The last line of the main function calls describe on the new Mission instance that is returned from the build method. Replicating instances with factories Like builders, factories are a way of organizing object construction. They differ from builders in how they are organized. Often, the interface of factories is a single function call. This makes factories easier to use, if less customizable, than builders. Now, we'll see how to use factories to easily replicate instances. How to do it... Open your command-line application and navigate to your workspace. Create a new folder named 09-03-replicating-instances-with-factories. Copy or create an index.html that loads and runs a main function from main.js. Create a main.js file that defines a new class named Mission. Add a constructor that takes a name constructor argument and assigns it to an instance property. Also, define a simple describe method: // main.js class Mission { constructor (name) { this.name = name; } describe () { console.log(` The ${this.name} mission will be launched by a ${this.rocket.name} rocket, and deliver a ${this.payload.name} to ${this.destination.name}. `); } } Create three classes named Destination, Payload, and Rocket, that take name as a constructor argument and assign it to an instance property: // main.js class Destination { constructor (name) { this.name = name; } } class Payload { constructor (name) { this.name = name; } } class Rocket { constructor (name) { this.name = name; } } Create a MarsMissionFactory object with a single create method that takes two arguments: name and rocket. This method should create a new Mission using those arguments: // main.js const MarsMissionFactory = { create (name, rocket) { const mission = new Mission(name); mission.destination = new Destination('Martian surface'); mission.payload = new Payload('Mars rover'); mission.rocket = rocket; return mission; } } Create a main method that creates and describes two similar missions: // main.js export function main() { // build an describe a mission MarsMissionFactory .create('Curiosity', new Rocket('Atlas V')) .describe(); MarsMissionFactory .create('Spirit', new Rocket('Delta II')) .describe(); } Start your Python web server and open the following link in your browser: http://localhost:8000/. Your output should appear as follows: How it works... The create method takes a subset of the properties needed to create a new mission. The remaining values are provided by the method itself. This allows factories to simplify the process of creating similar instances. In the main function, you can see that two Mars missions have been created, only differing in name and Rocket instance. We've halved the number of values needed to create an instance. This pattern can help reduce instantiation logic. In this recipe, we simplified the creation of different kinds of missions by identifying the common attributes, encapsulating those in the body of the factory function, and using arguments to supply the remaining properties. In this way, commonly used instance shapes can be created without additional boilerplate code. Processing a structure with the visitor pattern The patterns we've seen thus far organize the construction of objects and the execution of operations. The next pattern we'll look at is specially made to traverse and perform operations on hierarchical structures. Here, we'll be looking at the visitor pattern. How to do it... Open your command-line application and navigate to your workspace. 
Copy the 09-02-assembling-instances-with-builders folder to a new 09-04-processing-a-structure-with-the-visitor-pattern directory. Add a class named MissionInspector to main.js. Create a visitor method that calls a corresponding method for each of the following types: Mission, Destination, Rocket, and Payload: // main.js /* visitor that inspects mission */ class MissionInspector { visit (element) { if (element instanceof Mission) { this.visitMission(element); } else if (element instanceof Destination) { this.visitDestination(element); } else if (element instanceof Rocket) { this.visitRocket(element); } else if (element instanceof Payload) { this.visitPayload(element); } } } Create a visitMission method that logs out an ok message: // main.js class MissionInspector { visitMission (mission) { console.log('Mission ok'); mission.describe(); } } Create a visitDestination method that throws an error if the destination is not in an approved list: // main.js class MissionInspector { visitDestination (destination) { const name = destination.name.toLowerCase(); if ( name === 'mercury' || name === 'venus' || name === 'earth' || name === 'moon' || name === 'mars' ) { console.log('Destination: ', name, ' approved'); } else { throw new Error('Destination: '' + name + '' not approved at this time'); } } } Create a visitPayload method that throws an error if the payload isn't valid: // main.js class MissionInspector { visitPayload (payload) { const name = payload.name.toLowerCase(); const payloadExpr = /(orbiter)|(rover)/; if ( payloadExpr.test(name) ) { console.log('Payload: ', name, ' approved'); } else { throw new Error('Payload: '' + name + '' not approved at this time'); } } } Create a visitRocket method that logs out an ok message: // main.js class MissionInspector { visitRocket (rocket) { console.log('Rocket: ', rocket.name, ' approved'); } } Add an accept method to the Mission class that calls accept on its constituents, then tells visitor to visit the current instance: // main.js class Mission { // other mission code ... accept (visitor) { this.rocket.accept(visitor); this.payload.accept(visitor); this.destination.accept(visitor); visitor.visit(this); } } Add an accept method to the Destination class that tells visitor to visit the current instance: // main.js class Destination { // other mission code ... accept (visitor) { visitor.visit(this); } } Add an accept method to the Payload class that tells visitor to visit the current instance: // main.js class Payload { // other mission code ... accept (visitor) { visitor.visit(this); } } Add an accept method to the Rocket class that tells visitor to visit the current instance: // main.js class Rocket { // other mission code ... 
accept (visitor) { visitor.visit(this); } } Create a main function that creates different instances with the builder, visits them with the MissionInspector instance, and logs out any thrown errors: // main.js export function main() { // build an describe a mission const jadeRabbit = new MissionBuilder() .setMissionName('Jade Rabbit') .setDestination(new Destination('Moon')) .setPayload(new Payload('Lunar Rover')) .setRocket(new Rocket('Long March 3B Y-23')) .build(); const curiosity = new MissionBuilder() .setMissionName('Curiosity') .setDestination(new Destination('Mars')) .setPayload(new Payload('Mars Rover')) .setRocket(new Rocket('Delta II')) .build(); // expect error from Destination const buzz = new MissionBuilder() .setMissionName('Buzz Lightyear') .setDestination(new Destination('Too Infinity And Beyond')) .setPayload(new Payload('Interstellar Orbiter')) .setRocket(new Rocket('Self Propelled')) .build(); // expect error from payload const terraformer = new MissionBuilder() .setMissionName('Mars Terraformer') .setDestination(new Destination('Mars')) .setPayload(new Payload('Terraformer')) .setRocket(new Rocket('Light Sail')) .build(); const inspector = new MissionInspector(); [jadeRabbit, curiosity, buzz, terraformer].forEach((mission) => { try { mission.accept(inspector); } catch (e) { console.error(e); } }); } Start your Python web server and open the following link in your browser: http://localhost:8000/. Your output should appear as follows: How it works... The visitor pattern has two components. The visitor processes the subject objects and the subjects tell other related subjects about the visitor, and when the current subject should be visited. The accept method is required for each subject to receive a notification that there is a visitor. That method then makes two types of method call. The first is the accept method on its related subjects. The second is the visitor method on the visitor. In this way, the visitor traverses a structure by being passed around by the subjects. The visitor methods are used to process different types of node. In some languages, this is handled by language-level polymorphism. In JavaScript, we can use run-time type checks to do this. The visitor pattern is a good option for processing hierarchical structures of objects, where the structure is not known ahead of time, but the types of subjects are known. Using a singleton to manage instances Sometimes, there are objects that are resource intensive. They may require time, memory, battery power, or network usage that are unavailable or inconvenient. It is often useful to manage the creation and sharing of instances. Here, we'll see how to use singletons to manage instances. How to do it... Open your command-line application and navigate to your workspace. Create a new folder named 09-05-singleton-to-manage-instances. Copy or create an index.html that loads and runs a main function from main.js. Create a main.js file that defines a new class named Rocket. Add a constructor takes a name constructor argument and assigns it to an instance property: // main.js class Rocket { constructor (name) { this.name = name; } } Create a RocketManager object that has a rockets property. Add a findOrCreate method that indexes Rocket instances by the name property: // main.js const RocketManager = { rockets: {}, findOrCreate (name) { const rocket = this.rockets[name] || new Rocket(name); this.rockets[name] = rocket; return rocket; } } Create a main function that creates instances with and without the manager. 
Compare the instances and see whether they are identical: // main.js export function main() { const atlas = RocketManager.findOrCreate('Atlas V'); const atlasCopy = RocketManager.findOrCreate('Atlas V'); const atlasClone = new Rocket('Atlas V'); console.log('Copy is the same: ', atlas === atlasCopy); console.log('Clone is the same: ', atlas === atlasClone); } Start your Python web server and open the following link in your browser: http://localhost:8000/. Your output should appear as follows: How it works... The object stores references to the instances, indexed by the string value given with name. This map is created when the module loads, so it is persisted through the life of the program. The singleton is then able to look up the object and returns instances created by findOrCreate with the same name. Conserving resources and simplifying communication are primary motivations for using singletons. Creating a single object for multiple uses is more efficient in terms of space and time needed than creating several. Plus, having single instances for messages to be communicated through makes communication between different parts of a program easier. Singletons may require more sophisticated indexing if they are relying on more complicated data. You read an excerpt from a book written by Ross Harrison, titled ECMAScript Cookbook. This book contains over 70 recipes to help you improve your coding skills and solving practical JavaScript problems. 6 JavaScript micro optimizations you need to know Mozilla is building a bridge between Rust and JavaScript Behavior Scripting in C# and Javascript for game developers  
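The same instance-caching idea behind RocketManager is language-agnostic. The following is a minimal sketch, not taken from the book, that mirrors the findOrCreate recipe in Python; the Rocket and RocketManager names are simply carried over from the JavaScript version and everything else is illustrative.

class Rocket:
    def __init__(self, name):
        self.name = name

class RocketManager:
    """Caches Rocket instances by name so repeated lookups return the same object."""
    _rockets = {}

    @classmethod
    def find_or_create(cls, name):
        # Reuse an existing instance if one was already created for this name
        if name not in cls._rockets:
            cls._rockets[name] = Rocket(name)
        return cls._rockets[name]

if __name__ == "__main__":
    atlas = RocketManager.find_or_create("Atlas V")
    atlas_copy = RocketManager.find_or_create("Atlas V")
    atlas_clone = Rocket("Atlas V")
    print("Copy is the same:", atlas is atlas_copy)    # True - same cached instance
    print("Clone is the same:", atlas is atlas_clone)  # False - built outside the manager

As in the JavaScript recipe, the cache lives for the life of the module, so any part of the program that asks the manager for "Atlas V" receives the same shared instance.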
Implementing a WCF Service in the Real World

Packt
09 Jun 2010
18 min read
WCF is the acronym for Windows Communication Foundation. It is Microsoft's latest technology that enables applications in a distributed environment to communicate with each other. In this article by, Mike Liu, author of  WCF 4.0 Multi-tier Services Development with LINQ to Entities, we will create and test the WCF service by following these steps: Create the project using a WCF Service Library template Create the project using a WCF Service Application template Create the Service Operation Contracts Create the Data Contracts Add a Product Entity project Add a business logic layer project Call the business logic layer from the service interface layer Test the service Here ,In this article, we will learn how to separate the service interface layer from the business logic layer (Read more interesting articles on WCF 4.0 here.) Why layer a service? An important aspect of SOA design is that service boundaries should be explicit, which means hiding all the details of the implementation behind the service boundary. This includes revealing or dictating what particular technology was used. Furthermore, inside the implementation of a service, the code responsible for the data manipulation should be separated from the code responsible for the business logic. So in the real world, it is always good practice to implement a WCF service in three or more layers. The three layers are the service interface layer, the business logic layer, and the data access layer. Service interface layer: This layer will include the service contracts and operation contracts that are used to define the service interfaces that will be exposed at the service boundary. Data contracts are also defined to pass in and out of the service. If any exception is expected to be thrown outside of the service, then Fault contracts will also be defined at this layer. Business logic layer: This layer will apply the actual business logic to the service operations. It will check the preconditions of each operation, perform business activities, and return any necessary results to the caller of the service. Data access layer: This layer will take care of all of the tasks needed to access the underlying databases. It will use a specific data adapter to query and update the databases. This layer will handle connections to databases, transaction processing, and concurrency controlling. Neither the service interface layer nor the business logic layer needs to worry about these things. Layering provides separation of concerns and better factoring of code, which gives you better maintainability and the ability to split out layers into separate physical tiers for scalability. The data access code should be separated into its own layer that focuses on performing translation services between the databases and the application domain. Services should be placed in a separate service layer that focuses on performing translation services between the service-oriented external world and the application domain. The service interface layer will be compiled into a separate class assembly and hosted in a service host environment. The outside world will only know about and have access to this layer. Whenever a request is received by the service interface layer, the request will be dispatched to the business logic layer, and the business logic layer will get the actual work done. If any database support is needed by the business logic layer, it will always go through the data access layer. 
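The article builds these layers in C# with WCF, but the separation itself is language-agnostic. The following is a minimal, hedged sketch in Python (not part of the article; all class and method names are illustrative) showing how the service interface layer delegates to the business logic layer, which in turn is the only caller of the data access layer.

class ProductDataAccess:
    """Data access layer: the only layer that touches the database."""
    def get_product_row(self, product_id):
        # In a real system this would run a query; here we return a fake row.
        return {"id": product_id, "name": "fake product", "unit_price": 10.0}

class ProductLogic:
    """Business logic layer: preconditions and rules, no database details."""
    def __init__(self, data_access):
        self.data_access = data_access

    def get_product(self, product_id):
        if product_id <= 0:
            raise ValueError("product id must be positive")
        return self.data_access.get_product_row(product_id)

class ProductService:
    """Service interface layer: the only layer exposed at the service boundary."""
    def __init__(self, logic):
        self.logic = logic

    def handle_get_product(self, product_id):
        # Translate between the outside world and the application domain.
        row = self.logic.get_product(product_id)
        return {"ProductID": row["id"], "ProductName": row["name"], "UnitPrice": row["unit_price"]}

service = ProductService(ProductLogic(ProductDataAccess()))
print(service.handle_get_product(1))

Only ProductService is visible to callers; swapping the data access layer for a real database adapter would not change the service boundary at all, which is exactly the point of layering.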
Creating a new solution and project using WCF templates We need to create a new solution for this example and add a new WCF project to this solution. This time we will use the built-in Visual Studio WCF templates for the new project. Using the C# WCF service library template There are a few built-in WCF service templates within Visual Studio 2010; two of them are Visual Studio WCF Service Library and Visual Studio WCF Service Application. In this article, we will use the service library template. Follow these steps to create the RealNorthwind solution and the project using the service library template: Start Visual Studio 2010, select menu option File New | Project…|, and you will see the New Project dialog box. From this point onwards, we will create a completely new solution and save it in a different location. In the New Project window, specify Visual C# WCF | WCF| Service Library as the project template, RealNorthwindService as the (project) name, and RealNorthwind as the solution name. Make sure that the checkbox Create directory for solution is selected. Click on the OK button, and the solution is created with a WCF project inside it. The project already has an IService1.cs file to define a service interface and Service1.cs to implement the service. It also has an app.config file, which we will cover shortly. Using the C# WCF service application template Instead of using the Visual Studio WCF Service Library template to create our new WCF project, we can use the Visual Studio Service Application template to create the new WCF project. Because we have created the solution, we will add a new project using the Visual Studio WCF Service Application template. Right-click on the solution item in Solution Explorer, select menu option Add New Project…| from the context menu, and you will see the Add New Project dialog box. In the Add New Project window, specify Visual C# | WCF Service Application as the project template, RealNorthwindService2 as the (project) name, and leave the default location of C:SOAWithWCFandLINQProjectsRealNorthwind unchanged. Click on the OK button and the new project will be added to the solution.The project already has an IService1.cs file to define a service interface, and Service1.svc.cs to implement the service. It also has a Service1.svc file and a web.config file, which are used to host the new WCF service. It has also had the necessary references added to the project such as System.ServiceModel. You can follow these steps to test this service: Change this new project, RealNorthwindService2, to be the startup project(right-click on it from Solution Explorer and select Set as Startup Project). Then run it (Ctrl + F5 or F5). You will see that it can now run. You will see that ASP.NET Development Server has been started, and a browser is open listing all of the files under the RealNorthwindService2 project folder.Clicking on the Service1.svc file will open the metadata page of the WCF service in this project. If you have pressed F5 in the previous step to run this project, you might see a warning message box asking you if you want to enable debugging for the WCF service. As we said earlier, you can choose enable debugging or just run in the non-debugging mode. You may also have noticed that the WCF Service Host is started together with ASP.NET Development Server. This is actually another way of hosting a WCF service in Visual Studio 2010. 
It has been started at this point because, within the same solution, there is a WCF service project (RealNorthwindService) created using the WCF Service Library template. So far we have used two different Visual Studio WCF templates to create two projects. The first project, using the C# WCF Service Library template, is a more sophisticated one because this project is actually an application containing a WCF service, a hosting application (WcfSvcHost), and a WCF Test Client. This means that we don't need to write any other code to host it, and as soon as we have implemented a service, we can use the built-in WCF Test Client to invoke it. This makes it very convenient for WCF development. The second project, using the C# WCF Service Application template, is actually a website. This is the hosting application of the WCF service so you don't have to create a separate hosting application for the WCF service. As we have already covered them and you now have a solid understanding of these styles, we will not discuss them further. But keep in mind that you have this option, although in most cases it is better to keep the WCF service as clean as possible, without any hosting functionalities attached to it. To focus on the WCF service using the WCF Service Library template, we now need to remove the project RealNorthwindService2 from the solution. In Solution Explorer, right-click on the RealNorthwindService2 project item and select Remove from the context menu. Then you will see a warning message box. Click on the OK button in this message box and the RealNorthwindService2 project will be removed from the solution. Note that all the files of this project are still on your hard drive. You will need to delete them using Windows Explorer. Creating the service interface layer In this article, we will create the service interface layer contracts. Because two sample files have already been created for us, we will try to reuse them as much as possible. Then we will start customizing these two files to create the service contracts. Creating the service interfaces To create the service interfaces, we need to open the IService1.cs file and do the following: Change its namespace from RealNorthwindService to: MyWCFServices.RealNorthwindService Change the interface name from IService1 to IProductService. Don't be worried if you see the warning message before the interface definition line, as we will change the web.config file in one of the following steps. Change the first operation contract definition from this line: string GetData(int value); to this line: Product GetProduct(int id); Change the second operation contract definition from this line: CompositeType GetDataUsingDataContract(CompositeType composite); to this line: bool UpdateProduct(Product product); Change the filename from IService1.cs to IProductService.cs. With these changes, we have defined two service contracts. The first one can be used to get the product details for a specific product ID, while the second one can be used to update a specific product. The product type, which we used to define these service contracts, is still not defined. 
The content of the service interface for RealNorthwindService.ProductService should look like this now: using System;using System.Collections.Generic;using System.Linq;using System.Runtime.Serialization;using System.ServiceModel;using System.Text;namespace MyWCFServices.RealNorthwindService{ [ServiceContract] public interface IProductService { [OperationContract] Product GetProduct(int id); [OperationContract] bool UpdateProduct(Product product); // TODO: Add your service operations here }} This is not the whole content of the IProductService.cs file. The bottom part of this file should still have the class, CompositeType. Creating the data contracts Another important aspect of SOA design is that you shouldn't assume that the consuming application supports a complex object model. One part of the service boundary definition is the data contract definition for the complex types that will be passed as operation parameters or return values. For maximum interoperability and alignment with SOA principles, you should not pass any .NET-specific types such as DataSet or Exceptions across the service boundary. You should stick to fairly simple data structure objects such as classes with properties and backing member fields. You can pass objects that have nested complex types such as 'Customer with an Order collection'. However, you shouldn't make any assumption about the consumer being able to support object-oriented constructs such as inheritance or base-classes for interoperable web services. In our example, we will create a complex data type to represent a product object. This data contract will have five properties: ProductID, ProductName, QuantityPerUnit, UnitPrice, and Discontinued. These will be used to communicate with client applications. For example, a supplier may call the web service to update the price of a particular product or to mark a product for discontinuation. It is preferable to put data contracts in separate files within a separate assembly but, to simplify our example, we will put DataContract in the same file as the service contract. We will modify the file, IProductService.cs, as follows: Change the DataContract name from CompositeType to Product. Change the fields from the following lines: bool boolValue = true;string stringValue = "Hello "; to these seven lines: int productID;string productName;string quantityPerUnit;decimal unitPrice;bool discontinued; Delete the old boolValue and StringValue DataMember properties. Then, for each of the above fields, add a DataMember property. For example, for productID, we will have this DataMember property: [DataMember]public int ProductID{ get { return productID; } set { productID = value; }} A better way is to take advantage of the automatic property feature of C#, and add the following ProductID DataMember without defining the productID field: [DataMember]public int ProductID { get; set; } To save some space, we will use the latter format. So, we need to delete all of those field definitions and add an automatic property for each field, with the first letter capitalized. 
The data contract part of the finished service contract file, IProductService.cs,should now look like this: [DataContract]public class Product{ [DataMember] public int ProductID { get; set; } [DataMember] public string ProductName { get; set; } [DataMember] public string QuantityPerUnit { get; set; } [DataMember] public decimal UnitPrice { get; set; } [DataMember] public bool Discontinued { get; set; }} Implementing the service contracts To implement the two service interfaces that we defined, open the Service1.cs file and do the following: Change its namespace from RealNorthwindService to MyWCFServices.RealNorthwindService. Change the class name from Service1 to ProductService. Make it inherit from the IProductService interface, instead of IService1. The class definition line should be like this: public class ProductService : IProductService Delete the GetData and GetDataUsingDataContract methods. Add the following method, to get a product: public Product GetProduct(int id){ // TODO: call business logic layer to retrieve product Product product = new Product(); product.ProductID = id; product.ProductName = "fake product name from service layer"; product.UnitPrice = (decimal)10.0; return product;} In this method, we created a fake product and returned it to the client.Later, we will remove the hard-coded product from this method and call the business logic to get the real product. Add the following method to update a product: public bool UpdateProduct(Product product){ // TODO: call business logic layer to update product if (product.UnitPrice <= 0) return false; else return true;} Also, in this method, we don't update anything. Instead, we always return true if a valid price is passed in. Change the filename from Service1.cs to ProductService.cs. The content of the ProductService.cs file should be like this: using System;using System.Collections.Generic;using System.Linq;using System.Runtime.Serialization;using System.ServiceModel;using System.Text;namespace MyWCFServices.RealNorthwindService{ public class ProductService : IProductService { public Product GetProduct(int id) { // TODO: call business logic layer to retrieve product Product product = new Product(); product.ProductID = id; product.ProductName = "fake product name from service layer"; product.UnitPrice = (decimal)10; return product; } public bool UpdateProduct(Product product) { // TODO: call business logic layer to update product if (product.UnitPrice <= 0) return false; else return true; } }} Modifying the app.config file Because we have changed the service name, we have to make the appropriate changes to the configuration file. Note that when you rename the service, if you have used the refactor feature of Visual Studio, some of the following tasks may have been done by Visual Studio. Follow these steps to change the configuration file: Open the app.config file from Solution Explorer. Change all instances of the RealNorthwindService string except the one in baseAddress to MyWCFServices.RealNorthwindService. This is for the namespace change. Change the RealNorthwindService string in baseAddress to MyWCFServices/RealNorthwindService. Change all instances of the Service1 string to ProductService. This is for the actual service name change. Change the service address port from 8731 to 8080. This is to prepare for the client application, which we will create soon. You can also change Design_Time_Addresses to whatever address you want, or delete the baseAddress part from the service. This can be used to test your service locally. 
We will leave it unchanged for our example. The content of the app.config file should now look like this: <?xml version="1.0" encoding="utf-8" ?><configuration> <system.web> <compilation debug="true" /> </system.web> <!-- When deploying the service library project, the content of the config file must be added to the host's app.config file. System.Configuration does not support config files for libraries. --> <system.serviceModel> <services> <service name="MyWCFServices.RealNorthwindService. ProductService"> <endpoint address="" binding="wsHttpBinding" contract="MyWCFServices. RealNorthwindService.IProductService"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> <host> <baseAddresses> <add baseAddress="http://localhost:8080/Design_Time_ Addresses/MyWCFServices/ RealNorthwindService/ProductService/" /> </baseAddresses> </host> </service> </services> <behaviors> <serviceBehaviors> <behavior> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="True"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="False" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration> Testing the service using WCF Test Client Because we are using the WCF Service Library template in this example, we are now ready to test this web service. As we pointed out when creating this project, this service will be hosted in the Visual Studio 2010 WCF Service Host environment. To start the service, press F5 or Ctrl + F5. WcfSvcHost will be started and WCF Test Client is also started. This is a Visual Studio 2010 built-in test client for WCF Service Library projects. In order to run the WCF Test Client you have to log into your machine as a local administrator. You also have to start Visual Studio as an administrator because we have changed the service port from 8732 to 8080 (port 8732 is pre-registered but 8080 is not). Again, if you get an Access is denied error, make sure you run Visual Studio as an administrator (under Windows XP you need to log on as an administrator). Now from this WCF Test Client we can double-click on an operation to test it.First, let us test the GetProduct operation. Now the message Invoking Service… will be displayed in the status bar as the client is trying to connect to the server. It may take a while for this initial connection to be made as several things need to be done in the background. Once the connection has been established, a channel will be created and the client will call the service to perform the requested operation. Once the operation has been completed on the server side, the response package will be sent back to the client, and the WCF Test Client will display this response in the bottom panel. If you started the test client in debugging mode (by pressing F5), you can set a breakpoint at a line inside the GetProduct method in the RealNorthwindService.cs file, and when the Invoke button is clicked, the breakpoint will be hit so that you can debug the service as we explained earlier. However, here you don't need to attach to the WCF Service Host. Note that the response is always the same, no matter what product ID you use to retrieve the product. 
Specifically, the product name is hard-coded, as shown in the diagram. Moreover, from the client response panel, we can see that several properties of the Product object have been assigned default values. Also, because the product ID is an integer value from the WCF Test Client, you can only enter an integer for it. If a non-integer value is entered, when you click on the Invoke button, you will get an error message box to warn you that you have entered a value with the wrong type. Now let's test the operation, UpdateProduct. The Request/Response packages are displayed in grids by default but you have the option of displaying them in XML format. Just select the XML tab at the bottom of the right-side panel, and you will see the XML-formatted Request/Response packages. From these XML strings, you can see that they are SOAP messages. Besides testing operations, you can also look at the configuration settings of the web service. Just double-click on Config File from the left-side panel and the configuration file will be displayed in the right-side panel. This will show you the bindings for the service, the addresses of the service, and the contract for the service. What you see here for the configuration file is not an exact image of the actual configuration file. It hides some information such as debugging mode and service behavior, and includes some additional information on reliable sessions and compression mode. If you are satisfied with the test results, just close the WCF Test Client, and you will go back to Visual Studio IDE. Note that as soon as you close the client, the WCF Service Host is stopped. This is different from hosting a service inside ASP.NET Development Server, where ASP.NET Development Server still stays active even after you close the client.
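The WCF Test Client is not the only way to exercise the service. As a hedged sketch (not from the article), the service can also be called from Python with the third-party zeep SOAP library, assuming the service publishes its WSDL at the base address with ?wsdl appended and exposes a binding the client can speak (for example, an additional basicHttpBinding endpoint; the default wsHttpBinding may need extra client-side configuration). The URL below is just the article's base address with "?wsdl" added - adjust it to your own setup.

from zeep import Client

wsdl = ("http://localhost:8080/Design_Time_Addresses/MyWCFServices/"
        "RealNorthwindService/ProductService/?wsdl")
client = Client(wsdl)

product = client.service.GetProduct(1)          # maps to the GetProduct operation contract
print(product.ProductID, product.ProductName, product.UnitPrice)

ok = client.service.UpdateProduct(product)      # maps to the UpdateProduct operation contract
print("Update accepted:", ok)

This kind of external client is a useful sanity check that the metadata, bindings, and data contracts really are consumable from outside the .NET world.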
How to classify emails using deep neural networks after generating TF-IDF

Savia Lobo
21 Feb 2018
9 min read
[box type="note" align="" class="" width=""]This article is an excerpt taken from the book Natural Language Processing with Python Cookbook written by Krishna Bhavsar, Naresh Kumar, and Pratap Dangeti. This book will teach you how to efficiently use NLTK and implement text classification, identify parts of speech, tag words, and more. You will also learn how to analyze sentence structures and master lexical analysis, syntactic and semantic analysis, pragmatic analysis, and application of deep learning techniques.[/box] In this article, you will learn how to use deep neural networks to classify emails into one of the 20 pre-trained categories based on the words present in each email. This is a simple model to start with understanding the subject of deep learning and its applications on NLP. Getting ready The 20 newsgroups dataset from scikit-learn have been utilized to illustrate the concept. Number of observations/emails considered for analysis are 18,846 (train observations - 11,314 and test observations - 7,532) and its corresponding classes/categories are 20, which are shown in the following: >>> from sklearn.datasets import fetch_20newsgroups >>> newsgroups_train = fetch_20newsgroups(subset='train') >>> newsgroups_test = fetch_20newsgroups(subset='test') >>> x_train = newsgroups_train.data >>> x_test = newsgroups_test.data >>> y_train = newsgroups_train.target >>> y_test = newsgroups_test.target >>> print ("List of all 20 categories:") >>> print (newsgroups_train.target_names) >>> print ("n") >>> print ("Sample Email:") >>> print (x_train[0]) >>> print ("Sample Target Category:") >>> print (y_train[0]) >>> print (newsgroups_train.target_names[y_train[0]]) In the following screenshot, a sample first data observation and target class category has been shown. From the first observation or email we can infer that the email is talking about a two-door sports car, which we can classify manually into autos category which is 8. Note: Target value is 7 due to the indexing starts from 0), which is validating our understanding with actual target class 7. How to do it… Using NLP techniques, we have pre-processed the data for obtaining finalized word vectors to map with final outcomes spam or ham. Major steps involved are:    Pre-processing.    Removal of punctuations.    Word tokenization.    Converting words into lowercase.    Stop word removal.    Keeping words of length of at least 3.    Stemming words.    POS tagging.    Lemmatization of words: TF-IDF vector conversion. Deep learning model training and testing. Model evaluation and results discussion. How it works... The NLTK package has been utilized for all the pre-processing steps, as it consists of all the necessary NLP functionality under one single roof: # Used for pre-processing data >>> import nltk >>> from nltk.corpus import stopwords >>> from nltk.stem import WordNetLemmatizer >>> import string >>> import pandas as pd >>> from nltk import pos_tag >>> from nltk.stem import PorterStemmer The function written (pre-processing) consists of all the steps for convenience. However, we will be explaining all the steps in each section: >>> def preprocessing(text): The following line of the code splits the word and checks each character to see if it contains any standard punctuations, if so it will be replaced with a blank or else it just don't replace with blank: ... 
text2 = " ".join("".join([" " if ch in string.punctuation else ch for ch in text]).split()) The following code tokenizes the sentences into words based on whitespaces and puts them together as a list for applying further steps: ... tokens = [word for sent in nltk.sent_tokenize(text2) for word in nltk.word_tokenize(sent)] Converting all the cases (upper, lower and proper) into lower case reduces duplicates in corpus: ... tokens = [word.lower() for word in tokens] As mentioned earlier, Stop words are the words that do not carry much of weight in understanding the sentence; they are used for connecting words and so on. We have removed them with the following line of code: ... stopwds = stopwords.words('english') ... tokens = [token for token in tokens if token not in stopwds] Keeping only the words with length greater than 3 in the following code for removing small words which hardly consists of much of a meaning to carry; ... tokens = [word for word in tokens if len(word)>=3] Stemming applied on the words using Porter stemmer which stems the extra suffixes from the words: ... stemmer = PorterStemmer() ... tokens = [stemmer.stem(word) for word in tokens] POS tagging is a prerequisite for lemmatization, based on whether word is noun or verb or and so on. it will reduce it to the root word ... tagged_corpus = pos_tag(tokens) pos_tag function returns the part of speed in four formats for Noun and six formats for verb. NN - (noun, common, singular), NNP - (noun, proper, singular), NNPS - (noun, proper, plural), NNS - (noun, common, plural), VB - (verb, base form), VBD - (verb, past tense), VBG - (verb, present participle), VBN - (verb, past participle), VBP - (verb, present tense, not 3rd person singular), VBZ - (verb, present tense, third person singular) ... Noun_tags = ['NN','NNP','NNPS','NNS'] ... Verb_tags = ['VB','VBD','VBG','VBN','VBP','VBZ'] ... lemmatizer = WordNetLemmatizer() The following function, prat_lemmatize, has been created only for the reasons of mismatch between the pos_tag function and intake values of lemmatize function. If the tag for any word falls under the respective noun or verb tags category, n or v will be applied accordingly in lemmatize function: ... def prat_lemmatize(token,tag): ...      if tag in Noun_tags: ...          return lemmatizer.lemmatize(token,'n') ...      elif tag in Verb_tags: ...          return lemmatizer.lemmatize(token,'v') ...      else: ...          return lemmatizer.lemmatize(token,'n') After performing tokenization and applied all the various operations, we need to join it back to form stings and the following function performs the same: ... pre_proc_text =   " ".join([prat_lemmatize(token,tag) for token,tag in tagged_corpus]) ... return pre_proc_text Applying pre-processing on train and test data: >>> x_train_preprocessed = [] >>> for i in x_train: ... x_train_preprocessed.append(preprocessing(i)) >>> x_test_preprocessed = [] >>> for i in x_test: ... 
x_test_preprocessed.append(preprocessing(i)) # building TFIDF vectorizer >>> from sklearn.feature_extraction.text import TfidfVectorizer >>> vectorizer = TfidfVectorizer(min_df=2, ngram_range=(1, 2), stop_words='english', max_features= 10000,strip_accents='unicode', norm='l2') >>> x_train_2 = vectorizer.fit_transform(x_train_preprocessed).todense() >>> x_test_2 = vectorizer.transform(x_test_preprocessed).todense() After the pre-processing step has been completed, processed TF-IDF vectors have to be sent to the following deep learning code: # Deep Learning modules >>> import numpy as np >>> from keras.models import Sequential >>> from keras.layers.core import Dense, Dropout, Activation >>> from keras.optimizers import Adadelta,Adam,RMSprop >>> from keras.utils import np_utils The following image produces the output after firing up the preceding Keras code. Keras has been installed on Theano, which eventually works on Python. A GPU with 6 GB memory has been installed with additional libraries (CuDNN and CNMeM) for four to five times faster execution, with a choking of around 20% memory; hence only 80% memory out of 6 GB is available; The following code explains the central part of the deep learning model. The code is self- explanatory, with the number of classes considered 20, batch size 64, and number of epochs to train, 20: # Definition hyper parameters >>> np.random.seed(1337) >>> nb_classes = 20 >>> batch_size = 64 >>> nb_epochs = 20 The following code converts the 20 categories into one-hot encoding vectors in which 20 columns are created and the values against the respective classes are given as 1. All other classes are given as 0: >>> Y_train = np_utils.to_categorical(y_train, nb_classes) In the following building blocks of Keras code, three hidden layers (1000, 500, and 50 neurons in each layer respectively) are used, with dropout as 50% for each layer with Adam as an optimizer: #Deep Layer Model building in Keras #del model >>> model = Sequential() >>> model.add(Dense(1000,input_shape= (10000,))) >>> model.add(Activation('relu')) >>> model.add(Dropout(0.5)) >>> model.add(Dense(500)) >>> model.add(Activation('relu')) >>> model.add(Dropout(0.5)) >>> model.add(Dense(50)) >>> model.add(Activation('relu')) >>> model.add(Dropout(0.5)) >>> model.add(Dense(nb_classes)) >>> model.add(Activation('softmax')) >>> model.compile(loss='categorical_crossentropy', optimizer='adam') >>> print (model.summary()) The architecture is shown as follows and describes the flow of the data from a start of 10,000 as input. Then there are 1000, 500, 50, and 20 neurons to classify the given email into one of the 20 categories: The model is trained as per the given metrics: # Model Training >>> model.fit(x_train_2, Y_train, batch_size=batch_size, epochs=nb_epochs,verbose=1) The model has been fitted with 20 epochs, in which each epoch took about 2 seconds. The loss has been minimized from 1.9281 to 0.0241. 
By using CPU hardware instead of a GPU, the time required for training each epoch will increase, as a GPU massively parallelizes the computation with thousands of threads/cores: Finally, predictions are made on the train and test datasets to determine the accuracy, precision, and recall values: #Model Prediction >>> y_train_predclass = model.predict_classes(x_train_2,batch_size=batch_size) >>> y_test_predclass = model.predict_classes(x_test_2,batch_size=batch_size) >>> from sklearn.metrics import accuracy_score,classification_report >>> print ("\n\nDeep Neural Network - Train accuracy:"),(round(accuracy_score( y_train, y_train_predclass),3)) >>> print ("\nDeep Neural Network - Test accuracy:"),(round(accuracy_score( y_test,y_test_predclass),3)) >>> print ("\nDeep Neural Network - Train Classification Report") >>> print (classification_report(y_train,y_train_predclass)) >>> print ("\nDeep Neural Network - Test Classification Report") >>> print (classification_report(y_test,y_test_predclass)) It appears that the classifier is giving a good 99.9% accuracy on the train dataset and 80.7% on the test dataset. We learned the classification of emails using DNNs (deep neural networks) after generating TF-IDF vectors. If you found this post useful, do check out the book Natural Language Processing with Python Cookbook to learn how to further analyze sentence structures and apply various deep learning techniques.
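To make the TF-IDF step itself easier to picture, here is a small standalone sketch, not taken from the recipe, that runs TfidfVectorizer on a toy corpus; the documents are made up and only serve to show that each row is a document, each column a vocabulary term, and that rarer terms receive higher weights.

from sklearn.feature_extraction.text import TfidfVectorizer

toy_corpus = [
    "the rover landed on mars",
    "the orbiter circled mars",
    "stocks fell on monday",
]
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(toy_corpus)

# Vocabulary terms and the document-term weight matrix
print(vectorizer.get_feature_names_out())   # on older scikit-learn use get_feature_names()
print(tfidf.toarray().round(2))

Note also that recent standalone Keras/TensorFlow releases have removed the predict_classes method used above; if you hit an AttributeError there, np.argmax(model.predict(x), axis=1) returns the same class indices.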
Interactive dashboard with vRealize Operations Manager [Tutorial]

Vijin Boricha
03 Jul 2018
14 min read
Creating a dashboard is a relatively simple exercise, creating a good dashboard will require tuning and some tweaking. The tricky part is displaying the information needed on a single screen, this is one of the biggest challenges when creating a dashboard; placing all the relevant information on a single pane of glass. The number one goal when creating a dashboard is to get all the information across in a glance. This is an excerpt from Mastering vRealize Operations Manager - Second Edition written by Spas Kaloferov, Scott Norris, Christopher Slater. Out of the 46 widgets vRealize Operations 6.6 has available, we will only use a handful of them regularly. The most commonly used widgets, from experience, are the scoreboard, metric selector, heat map, object list and metric chart. The rest are generally only used for specific use cases. There are basically two types of dashboards that we can create, an interactive dashboard or a static dashboard. An interactive dashboard is typically used for troubleshooting or similar activities where you are expecting the user to interact with widgets to get the information they are after. A static or display dashboard typically uses self-providing widgets such as scoreboards and heatmaps that are designed for display monitors, or other situations where an administrator is keeping an eye on environment changes. Each of the widgets has the ability to be a self-provider that means we set the information we want to display directly in the widget. The other option is to set up interactions and have other widgets provide information based on an object or metric selection in another widget. In this article, we will focus on the interactive dashboard. We will be looking at creating a dashboard that looks at vSphere cluster information, which at a glance will show us the overall health and general cluster information an administrator would need. Working through this will give you the knowledge needed to create any type of dashboard. The dashboard we are about to create will show how to configure the more common widgets in a way that can be replicated on a greater scale. When creating a dashboard, you will generally go through the following steps: Start the New Dashboard wizard from the Actions menu. Configure the general dashboard settings. Add and configure individual widgets. (Optional) Configure widget interactions. (Optional) Configure dashboard navigation. Creating an interactive dashboard You can create a dashboard by using the New Dashboard wizard. Alternatively, you can clone an existing dashboard and modify the clone. Perform the following steps to create a new dashboard: Navigate to the Dashboards page, click Actions, and then click Create Dashboard, as shown in the following screenshot: Under Dashboard Configuration, we need to give it a meaningful name and provide a description. If you click Yes for the Is default setting, the dashboard appears on the homepage when you log in. By default, the Recommendations dashboard is the dashboard that appears on the home page when a user logs in. You can change the default dashboard. Next, we click on Widget List to bring up all the available widgets. Here we will click and drag the widgets we need from the left pane to the right. We will use the following: Object List Metric Picker Metric Chart Generic Scoreboard Heat Map You can arrange widgets in the dashboard by dragging them to the desired column position. 
The left pane and the right pane are collapsed so that you have more room for your dashboard workspace, which is the center pane. To edit the widgets, we click on the little pen icon sitting at the top of the widget. The Object List The Object List widget configuration options, as shown in the following screenshot, include some of the more common options, such as Title, Refresh Content, and Refresh Interval. Options also exist that are specific to this widget: Mode: You can select Self, Children, or Parent mode. This setting is used by widget interactions. Auto Select First Row: This enables you to select whether or not to start with the first row of data. Select which tags to filter: This enables you to select objects from an object tree to observe. For example, you can choose to observe information about objects managed by the vCenter Server instance named VCVA01. You can add different metrics using the Additional Column option during widget configuration. Using the Additional Column pane, you can add metrics that are specific for each object in the data grid columns. Perform the following steps to edit the Object List widget in our example dashboard: Click this on Object List and the Edit Object List window will appear. In here edit the Title, select On for Auto Select First Row, and select Cluster Compute Resource under Object Types Tag. We should have something similar to the following screenshot. Click Save to continue. With tag selection, multiple tags can be selected, if this is done then only objects that fall under both tag types will be shown in the widget. The next thing we want to do is click on Widget Interactions on the left pane; this is where we go to link the widgets, for example, we select a virtual machine from an object list widget and it would change any linked widgets to display the information of that object. We will see a Selected Object(s) with a drop-down list followed by a green arrow pointing to our widgets. This is saying that what we select in the drop-down list will be linked to the associated widget. Here our new Cluster List will feed Metric Picker, Scoreboard, and Heatmap, while Metric Picker will feed Metric Chart. Also we will notice that a widget like Metric Chart can be fed by more than one widget. Click APPLY INTERACTIONS and we should end up with something similar to the following screenshot: The Metric Picker Now, if we select a metric under the Metric Picker widget it should show the metric in the Metric Chart widget, as displayed in the following screenshot: Metric Picker will contain all the available metrics for the selected object, such as an ESXi host or a Virtual Machine. The Heatmap Next up, we will edit the Heatmap widget. For this example, we will use the Heatmap widget to display capacity remaining for the datastores attached to the vSphere cluster. This is the best way to see at a glance that none of the datastores are over 90% used or getting close. We need to make the following changes: Give the widget a new name describing what we are trying to achieve. Change Group By to Cluster Compute Resource - This is what we want the parent container to be. Change mode to Instance - This mode type is best used when interacting with other widgets to get its objects. Change Object Type to Datastore - This is the object that we want displayed. Change Attribute Kind to Disk Space | Capacity Remaining (%) - The metric of the object that we want to use. 
Change Colors around to 0 (Min), 20 (Max) - Because we really only want to know if a datastore is getting close to the threshold, minimizing the range will give us more granular colors. Change the colors around making it red on the left and green on the right. This is done by clicking on the little color square at each end and picking a new color. The reason this is done is because we have capacity remaining, so we need 0% remaining as red. Click Save and we should now have something similar to the following screenshot, with each square representing a datastore: Move the mouse over each box to reveal more detail of the object. The Scoreboard Time to modify the last widget. This one will be a little more complicated due to how we display what we want while being interactive. When we configured the widget interactions, we noticed that the scoreboard widget was populated automatically with a bunch of metrics, as shown in the following screenshot: Now, let's go back to our dashboard creation and edit the Scoreboard widget. We will notice quite a lot of configuration options compared to others, most of which are how the boxes are laid out, such as number of columns, box size, and rounding out decimals. What we want to do for this widget is: Name the scoreboard something meaningful Round the decimals to 1 - this cuts down the amount of decimal places returned on the displayed value Under Metric Configuration choose the Host-Util file from the drop-down list We should now see something similar to the following screenshot: But what about the object selection you may have noticed in the lower half of the scoreboards widget? These are only used if we make the widget a self-provider, which we can see as an option to the top left of the edit window. We can choose objects and metrics, but they are ignored when Self Provider is set to Off. If we now click Save we should see the new configuration of the scoreboard widget, as shown in the following screenshot:  I’ve also changed the Visual Theme to Original in the scoreboard widget configuration options to change the way the scoreboard visualizes the information. The scoreboard widget may not always display the information we necessarily need. To get the widget to display the information we want while continuing to be interactive to our selections in the Cluster List widgets, we have to create a metric configuration (XML) file. Metric Configuration Files (XML) A lot of the widgets are edited through the GUI with the objects and metrics we want displayed, but some require a metric configuration file to define what metrics the widget should display. Metric configuration files can create a custom set of metrics for the customization of supported widgets with meaningful data. Metric configuration files store the metric attribute keys in XML format. 
These widgets support customization using metric configuration files: Scoreboard Metric Chart Property List Rolling View Chart Sparkline Chart Topology Graph To keep this simple, we will configure four metrics to be displayed, which are: CPU usage for the cluster in % CPU demand for the cluster Memory ballooning CPU usage for the cluster in MHz Perform the following steps to create a metric configuration file: Open a text editor, add the following code, and save it as an XML file; in this case we will call it clusterexample.xml: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <AdapterKinds> <AdapterKind adapterKindKey="VMWARE"> <ResourceKind resourceKindKey="ClusterComputeResource"> <Metric attrkey="cpu|capacity_usagepct_average" label="CPU" unit="%" yellow="50" orange="75" red="90" /> <Metric attrkey="cpu|demandPct" label="CPU Demand" unit="%" yellow="50" orange="75" red="90" /> <Metric attrkey="cpu|usagemhz_average" label="CPU Usage" unit="GHz" yellow="8" orange="16" red="20" /> <Metric attrkey="mem|vmmemctl_average" label="Balloon Mem" unit="GB" yellow="100" orange="150" red="200" /> </ResourceKind> </AdapterKind> </AdapterKinds> Using WinSCP or another similar product, upload this file to the following location on the vRealize Operations 6.0 virtual appliance: /usr/lib/vmware-vcops/tomcat-web-app/webapps/vcops-web-ent/WEB-INF/classes/resources/reskndmetrics In this location, you will notice some built in sample XML files. Alternatively, you can create the XML file from the vRealize Operations user interface. To do so, navigate to Administration | Configuration, and then Metric Configuration. Now let's go back to our dashboard creation and edit the Scoreboard widget. Under Metric Configuration choose the clusterexmaple.xml file that we just created from the drop-down list. Click Save to save the configuration. We have now completed the new dashboard; click Save on the bottom right to save the dashboard. We can go back and edit this dashboard whenever we need to. This new dashboard will now be available on the home page, this is shown in the following screenshot: For the Scoreboard widget we have used an XML file so the widget will display the metrics we would like to see when an object is selected in another widget. How can we get the correct metric and adapter names to be used in this file? Glad you asked. The simplest way to get the correct information we need for that XML file is to create a non-interactive dashboard with the widget we require with all the information we want to display for our interactive one. For example, let's quickly create a temp dashboard with only one scoreboard widget and populate it with what we want by manually selecting the objects and metrics with self-provider set to yes: Create another dashboard and drag and drop a single scoreboard widget. Edit the scoreboard widget and configure it with all the information we would like. Search for an object in the middle pane and select the widgets we want in the right pane. Configure the box label and Measurement Unit. A thing to note here is that we have selected memory balloon metric as shown in the following screenshot, but we have given it a label of GB. This is because of a new feature in 6.0 it will automatically upscale the metrics when shown on a scoreboard, this also goes for datastore GB to TB, CPU MHz to GHz, and network throughput from KBps to MBps. Typically in 5.x we would create super metrics to make this happen. 
The downside to this is that the badge color still has to be set in the metrics base format. Save this dashboard once we have the metrics we want. Locate it under our dashboard list and select it, click on the little cog, and select Export Dashboards as shown in the following screenshot. This will automatically download a file called Dashboard<Date>.json. Open this file in a text editor and have a look through it and we will see all the information we require to write our XML interaction file. First off is our resourceKindKey and adapterKindKey, as shown in the following screenshot. These are pretty self-explanatory, resourceKind being Cluster resource, and adapter is the adapter that's collecting the metrics, in this case the inbuilt vCenter one called VMWARE. Next are our resources, as we can see from the following screenshot we have metricKey, which is the most important one as well as the color settings, unit, and the label: There it is, how we can get the information we require for XML files: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <AdapterKinds> <AdapterKind adapterKindKey="VMWARE"> <ResourceKind resourceKindKey="ClusterComputeResource"> <Metric attrkey="cpu|capacity_usagepct_average" label="CPU" unit="%" yellow="50" orange="75" red="90" /> <Metric attrkey="cpu|demandPct" label="CPU Demand" unit="%" yellow="50" orange="75" red="90" /> <Metric attrkey="cpu|usagemhz_average" label="CPU Usage" unit="GHz" yellow="8" orange="16" red="20" /> <Metric attrkey="mem|vmmemctl_average" label="Balloon Mem" unit="GB" yellow="100" orange="150" red="200" /> </ResourceKind> </AdapterKind> </AdapterKinds> Any widget with the setting Metric Configuration available can use the XML files you create. The XML format is as per the preceding code. An XML file can also have multiple Adapter kinds as there could be different adapter metrics that you require. Today, we learned to create a dashboard that is interactive based on selections made within widgets. You also unraveled the mystery of the metric configuration XML file and how to get the information you require into it. To know more about Super metrics and when to use it, check out this book Mastering vRealize Operations Manager - Second Edition. What to expect from vSphere 6.7 How to ace managing the Endpoint Operations Management Agent with vROps Troubleshooting techniques in vRealize Operations components
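If you maintain metric configuration files for many widgets, it can be handy to generate them rather than hand-edit the XML. The following is a hedged sketch, not from the book, that builds the same kind of file with Python's standard library; the adapter kind, resource kind, and metric keys are the ones used in the article, while the script itself and the output filename are illustrative.

import xml.etree.ElementTree as ET

metrics = [
    {"attrkey": "cpu|capacity_usagepct_average", "label": "CPU", "unit": "%",
     "yellow": "50", "orange": "75", "red": "90"},
    {"attrkey": "mem|vmmemctl_average", "label": "Balloon Mem", "unit": "GB",
     "yellow": "100", "orange": "150", "red": "200"},
]

# Build AdapterKinds > AdapterKind > ResourceKind > Metric, as in the article's XML
adapter_kinds = ET.Element("AdapterKinds")
adapter = ET.SubElement(adapter_kinds, "AdapterKind", adapterKindKey="VMWARE")
resource = ET.SubElement(adapter, "ResourceKind", resourceKindKey="ClusterComputeResource")
for m in metrics:
    ET.SubElement(resource, "Metric", **m)

ET.ElementTree(adapter_kinds).write("clusterexample.xml",
                                    encoding="UTF-8", xml_declaration=True)
print(ET.tostring(adapter_kinds, encoding="unicode"))

The generated file can then be uploaded or pasted into the Metric Configuration page exactly as described above.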
Building Motion Charts with Tableau

Ashwin Nair
31 Oct 2017
4 min read
[box type="info" align="" class="" width=""]The following is an excerpt from the book Tableau 10 Bootcamp, Chapter 2, Interactivity – written by Joshua N. Milligan and Donabel Santos. It offers intensive training on Data Visualization and Dashboarding with Tableau 10. In this article, we will learn how to build motion charts with Tableau.[/box] Tableau is an amazing platform for achieving incredible data discovery, analysis, and Storytelling. It allows you to build fully interactive dashboards and stories with your visualizations and insights so that you can share the data story with others. Creating Motion Charts with Tableau Let`s learn how to build motion charts with Tableau. A motion chart, as its name suggests, is a chart that displays the entire trail of changes in data over time by showing movement using the X and Y-axes. It is very much similar to the doodles in our notebooks which seem to come to life after flipping through the pages. It is amazing to see the same kind of movement in action in Tableau using the Pagesshelf. It is work that feels like play. On the Pages shelf, when you drop a field, Tableau creates a sequence of pages that filters the view for each value in that field. Tableau's page control allows us to flip pages, enabling us to see our view come to life. With three predefined speed settings, we can control the speed of the flip. The three settings include one that relates to the slowest speed, the others to the fastest speed. We can also format the marks and show the marks or trails, or both, using page control. In our viz, we have used a circle for marking each year. The circle that moves to a new position each year represents the specific country's new population value. These circles are all connected by trail lines that enable us to simulate a moving time series graph by setting the  mark and trail histories both to show in page control: Let's create an animated motion chart showing the population change over the years for a selected few countries: Open the Motion Chart worksheet and connect to the CO2 (Worldbank) data Source: Open Dimensions and drag Year to the Columns shelf. Open Measures and drag CO2 Emission to the Rows shelf. Right-click on the CO2 Emission axis, and change the title to CO2 Emission (metric tons per capita): In the Marks card, click on the dropdown to change the mark from Automatic to Circle. Open Dimensions and drag Country Name to Color in the Marks card. Also, drag Country Name to the Filter shelf from Dimensions Under the General tab of the Filter window, while the Select from list radio button is selected, select None. Select the Custom value list radio button, still under the General tab, and add China, Trinidad and Tobago, and United States: Click OK when done. This should close the Filter window. Open Dimensions and drag Year to Pages for adding a page control to the view. Click on the Show history checkbox to select it. Click on the drop-down beside Show history and perform the following steps: Select All for Marks to show history for Select Both for Show Using the Year page control, click on the forward arrow to play. This shows the change in the population of the three selected countries over the years. 
[box type="info" align="" class="" width=""]Tip -  In case you ever want to loopback the animation, you can click on the dropdown on the top-right of your page control card, and select Loop Playback:[/box] Note that Tableau Server does not support the animation effect that you see when working on motion charts with Tableau Desktop. Tableau strives for zero footprints when serving the charts and dashboards on the server so that there is no additional download to enable the functionalities. So, the play control does not work the same. No need to fret though. You can click manually on the slider and have a similar effect.  If you liked the above excerpt from the book Tableau 10 Bootcamp, check out the book to learn more data visualization techniques.
How to handle missing data in IBM SPSS Modeler

Amey Varangaonkar
21 Feb 2018
8 min read
[box type="note" align="" class="" width=""]The following excerpt is taken from the book IBM SPSS Modeler Essentials written by Keith McCormick and Jesus Salcedo. This book gets you up and running with the fundamentals of SPSS Modeler, a premium tool for data mining and predictive analytics.[/box] In today’s tutorial we will demonstrate how easy it is to work with missing values in a dataset using the SPSS Modeler. Missing data is different than other topics in data modeling that you cannot choose to ignore . This is because failing to make a choice just means you are using the default option for a procedure, which most of the time is not optimal. In fact, it is important to remember that every model deals with missing data in a certain way, and some modeling techniques handle missing data better than others. In SPSS Modeler, there are four types of missing data: Type of missing data Definition $Null$ value Applies only to numeric fields. This is a cell that is empty or has an illegal value White space Applies only to string fields. This is a cell that is empty or has spaces. Empty string Applies only to string fields. This is a cell that is empty. Empty string is a subset of white space Blank value This is predefined code, and it applies to any type of field The first step in dealing with missing data is to assess the type and amount of missing data for each field. Consider whether there is a pattern as to why data might be missing. This can help determine if missing values could have affected responses. Only then can we decide how to handle it. There are two problems associated with missing data, and these affect the quantity and quality of the data: Missing data reduces sample size (quantity) Responders may be different from non-responders (quality—there could be biased results) Ways to address missing data There are three ways to address missing data: Remove fields Remove cases Impute missing values It can be necessary at times to remove fields with a large proportion of missing values. The easiest way to remove fields is to use a Filter node (discussed later in the book), however you can also use the Data Audit node to do this. [box type="info" align="" class="" width=""]Note that in some cases missing data can be predictive of behavior, so it is important to assess the importance of a variable before removing a field.[/box] In some situations, it may be necessary to remove cases instead of fields. For example, you may be developing a predictive model to predict customers' purchasing behavior and you simply do not have enough information concerning new customers. The easiest way to remove cases would be to use a Select node (discussed in the next chapter); however, you can also use the Data Audit node to do this. Imputing missing values implies replacing values for fields. However, some people do not estimate values for categorical fields because it does not seem right. In general, it is easier to estimate missing values for numeric fields, such as age, where often analysts will use the mean, median, or mode. [box type="info" align="" class="" width=""]Note that it is not a good idea to estimate missing data if you are missing a large percentage of information for that field, because estimates will not be accurate. 
To close out of the Data Audit node, click OK to return to the stream canvas.

Defining missing values in the Type node

When working with missing data, the first thing you need to do is define the missing data so that Modeler knows it is there; otherwise Modeler will think that the missing data is just another value for the field (which, in some situations, it is, as in our dataset, but quite often this is not the case). Although the Data Audit node provides a report of missing values, blank values need to be defined within a Type node (or the Type tab of a source node) in order for them to be identified by the Data Audit node. The Type tab (or node) is the only place where users can define missing values (the Missing column).

[box type="info" align="" class="" width=""]Note that in the Type node, blank values and $null$ values are not shown; however, empty strings and white space are depicted by "" or " ".[/box]

To define blank values:

1. Edit the Var.File node.
2. Click on the Types tab.
3. Click on the Missing cell for the field Region.
4. Select Specify in the Missing column.
5. Click Define blanks.

Selecting Define blanks chooses Null and White space (remember, Empty string is a subset of white space, so it is also selected), and in this way these types of missing data are specified. To specify a predefined code, or blank value, you can add each individual value to a separate cell in the Missing values area, or you can enter a range of numeric values if they are consecutive.

6. Type "Not applicable" in the first Missing values cell.
7. Hit Enter.

We have now specified that "Not applicable" is a code for missing data for the field Region.

8. Click OK. In our dataset, we will only define one field as having missing data.
9. Click on the Clear Values button.
10. Click on the Read Values button.

The asterisk indicates that missing values have been defined for the field Region. Now Not applicable is no longer considered a valid value for the field Region, but it will still be shown in graphs and other output. However, models will now treat the category Not applicable as a missing value.

11. Click OK.

Imputing missing values with the Data Audit node

As we have seen, the Data Audit node allows you to identify missing values so that you can get a sense of how much missing data you have. However, the Data Audit node also allows you to remove fields or cases that have missing data, as well as providing several options for data imputation:

1. Rerun the Data Audit node. Note that the field Region only has 15,774 valid cases now, because we have correctly identified that the Not applicable category was a predefined code for missing data.
2. Click on the Quality tab.

We are not going to impute any missing values in this example because it is not necessary, but we will show you some of the options, since these will be useful in other situations. To impute missing values, you first need to specify when you want to impute them. For example:

3. Click in the Impute when cell for the field Region.
4. Select Blank & Null Values.

Now you need to specify how the missing values will be imputed.

5. Click in the Impute Method cell for the field Region.
6. Select Specify.

In this dialog box, you can specify which imputation method you want to use, and once you have chosen a method, you can then further specify details about the imputation. There are several imputation methods:

- Fixed uses the same value for all cases. This fixed value can be a constant, the mode, the mean, or a midpoint of the range (the options will vary depending on the measurement level of the field).
- Random uses a random (different) value based on a normal or uniform distribution. This allows there to be variation in the field with imputed values.
- Expression allows you to create your own equation to specify missing values.
- Algorithm uses a value predicted by a C&R Tree model.
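These methods are configured through the Data Audit node's dialogs rather than written by hand, but to make the ideas concrete, here is a small, hypothetical pandas/NumPy sketch (the DataFrame and its values are made up for illustration) that first declares the code "Not applicable" as missing, as the Type node steps above do, and then applies a Fixed (mode/mean) and a Random (normal-distribution) imputation:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)  # fixed seed so the random draws are repeatable

df = pd.DataFrame({
    "region": ["North", "Not applicable", "South", "Not applicable", "East"],
    "age":    [34.0, np.nan, 52.0, 41.0, np.nan],
})

# Define the blank value: treat the predefined code "Not applicable" as
# missing, just as listing it under Missing values in the Type node does.
df["region"] = df["region"].replace("Not applicable", np.nan)

# Fixed imputation for a categorical field: one constant for all missing
# cases (here the mode).
df["region_fixed"] = df["region"].fillna(df["region"].mode().iloc[0])

# Fixed imputation for a numeric field: one constant (here the mean).
df["age_fixed"] = df["age"].fillna(df["age"].mean())

# Random imputation: a different draw for each missing case, taken from a
# normal distribution fitted to the observed values, so the imputed field
# keeps some variation.
df["age_random"] = df["age"]
missing = df["age_random"].isna()
df.loc[missing, "age_random"] = rng.normal(df["age"].mean(),
                                           df["age"].std(),
                                           size=missing.sum())
```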
We are not going to impute any values now, so click Cancel. If we had selected an imputation method, we would then:

1. Click on the field Region to select it.
2. Click on the Generate menu.

The Generate menu of the Data Audit node allows you to remove fields, remove cases, or impute missing values, as follows:

- Missing Values Filter Node: This removes fields with too much missing data, or keeps fields with missing data so that you can investigate them further
- Missing Values Select Node: This removes cases with missing data, or keeps cases with missing data so that you can investigate them further
- Missing Values SuperNode: This imputes missing values

If we were going to impute values, we would then click Missing Values SuperNode. In this way you can impute missing values using SPSS Modeler, making your analysis much easier.

If you found our post useful, make sure to check out our book IBM SPSS Modeler Essentials, for more information on data mining and generating hidden insights using the popular SPSS Modeler tool.