
How-To Tutorials

It's Black Friday: But what's the business (and developer) cost of downtime?

Richard Gall
23 Nov 2018
4 min read
Black Friday is back and, as you've probably already noticed, with a considerable vengeance. According to Adobe Analytics data, online spending is predicted to hit $3.7 billion over this holiday season in the U.S., up from $2.9 billion in 2017. But while consumers clamour for deals and businesses reap the rewards, it's important to remember that there's a largely hidden plane of software engineering labour. Without this army of developers, consumers would most likely be hitting their devices in frustration, while business leaders would be missing tough revenue targets - so, as we enter Black Friday, let's pour one out for all those engineers on call, trying their best to keep eCommerce sites on their feet.

Here's to the software engineers keeping things running on Black Friday

Of course, the pain that hits on days like Black Friday and Cyber Monday can be minimised with smart planning and effective decision making long before those sales begin. However, for engineering teams that are under-resourced and lacking the right tools, that is simply impossible. This means that software engineers are left treading water, knowing that they're going to be sinking once those big days come around. It doesn't have to be like this. With smarter leadership and, indeed, more respect for the intensive work engineers put in to make websites and apps actually work, revenue-driving platforms can become more secure, resilient, and stable.

Chaos engineering platform Gremlin publishes the 'true cost of downtime'

This is the central argument of chaos engineering platform Gremlin, which we've covered a number of times this year. To coincide with Black Friday, the team has put together what they believe is the 'true cost of downtime'. On the one hand this is a good marketing hook for their chaos engineering platform, but, cynicism aside, it's also a good explanation of why the principles of chaos engineering can be so valuable from both a business and a developer perspective. Estimating the annual revenue of some of the biggest companies in the world, Gremlin has then created an interactive table to demonstrate what the cost of downtime for each of those businesses would be for the length of time you are on the page. For 20 minutes of downtime, Amazon.com would have lost a staggering $4.4 million. For Walgreens it's more than $80,000.

Gremlin provides some context to all this, saying:

"Enterprise commerce businesses typically rely on a complex microservices architecture, from fulfillment, to website security, ability to scale with holiday traffic, and payment processing - there is a lot that can go wrong and impact revenue, damage customer trust, and consume engineering time. If an ecommerce site isn't 100% online and performant, it's losing revenue."

"The holiday season is especially demanding for SREs working in ecommerce. Even the most skilled engineering teams can struggle to keep up with the demands of peak holiday traffic (i.e. Black Friday and Cyber Monday). Just going down for a few seconds can mean thousands in lost revenue, but for some sites, downtime can be exponentially more expensive."

For Gremlin, chaos engineering is clearly the answer to many of the problems days like Black Friday pose. While it might not work for every single organization, it's nevertheless true that failing to pay attention to the value of your applications and websites at an hour-by-hour level could be incredibly damaging.
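To make the scale of those figures concrete, here is a rough back-of-envelope sketch of the kind of estimate that sits behind such a table. The revenue figure and the assumption that sales are spread evenly across the year are hypothetical illustrations chosen for this sketch; they are not Gremlin's actual inputs or methodology:

```cpp
#include <iostream>
#include <iomanip>

int main() {
    // Hypothetical inputs, purely for illustration.
    const double annual_online_revenue_usd = 150e9;       // assumed yearly online revenue
    const double minutes_per_year = 365.0 * 24.0 * 60.0;  // ~525,600 minutes

    // Naive estimate: if revenue accrues evenly, the cost of an outage is
    // revenue-per-minute multiplied by the minutes spent offline.
    const double revenue_per_minute = annual_online_revenue_usd / minutes_per_year;
    const double downtime_minutes = 20.0;
    const double estimated_loss = revenue_per_minute * downtime_minutes;

    std::cout << std::fixed << std::setprecision(0)
              << "Estimated loss for " << downtime_minutes
              << " minutes of downtime: $" << estimated_loss << std::endl;
    return 0;
}
```

Real outages rarely behave this neatly - peak-hour traffic, abandoned carts, and damaged customer trust all skew the true figure upwards - which is exactly why the numbers for peak days are so alarming.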
With outages on Facebook, WhatsApp, and Instagram happening earlier this week, these problems aren't hidden away - they're in full view of the public. What does remain hidden, however, is the work and stress that goes into tackling these issues and ensuring things are working as they should be. Perhaps it's time to start learning the lessons of Black Friday - business revenues will be that little bit healthier, but engineers will also be that little bit happier.

Neural Style Transfer: Creating artificial art with deep learning and transfer learning

Bhagyashree R
23 Nov 2018
14 min read
Paintings require a special skill that only a few have mastered. Paintings present a complex interplay of content and style. Photographs, on the other hand, are a combination of perspectives and light. When the two are combined, the results are spectacular and surprising. This process is called artistic style transfer. In this tutorial, we will focus on leveraging deep learning along with transfer learning to build a neural style transfer system. This article walks you through the theoretical concepts around neural style transfer, loss functions, and optimization. Besides this, we will use a hands-on approach to implement our own neural style transfer model.

This article is an excerpt from a book written by Dipanjan Sarkar, Raghav Bali, and Tamoghna Ghosh titled Hands-On Transfer Learning with Python. To follow along with the article, you can find the code in the book's GitHub repository.

Understanding neural style transfer

Neural style transfer is the process of applying the style of a reference image to a specific target image such that the original content of the target image remains unchanged. Here, style is defined as the colours, patterns, and textures present in the reference image, while content is defined as the overall structure and higher-level components of the image. The main objective is to retain the content of the original target image while superimposing or adopting the style of the reference image on the target image.

To define this concept mathematically, consider three images: the original content (c), the reference style (s), and the generated image (g). We need a way to measure how different images c and g are in terms of their content, and likewise how different the generated image is from the style image in terms of its style features. Formally, the objective function for neural style transfer can be formulated as:

loss_total = α * loss_content(c, g) + β * loss_style(s, g)

Here, α and β are weights used to control the impact of the content and style components on the overall loss. This depiction can be simplified further and represented as:

loss = α * dist(content(Ic), content(Ig)) + β * dist(style(Is), style(Ig))

Here, we can define the following components from the preceding formula:

- dist is a norm function; for example, the L2 norm distance
- style(...) is a function to compute representations of style for the reference style and generated images
- content(...) is a function to compute representations of content for the original content and generated images
- Ic, Is, and Ig are the content, style, and generated images respectively

Thus, minimizing this loss causes style(Ig) to be close to style(Is), and content(Ig) to be close to content(Ic). This helps us achieve the necessary stipulations for effective style transfer. The loss function we will try to minimize consists of three parts; namely, the content loss, the style loss, and the total variation loss, which we will be talking about soon. The main steps for performing neural style transfer are as follows:

- Leverage VGG-16 to compute layer activations for the style, content, and generated images
- Use these activations to define the specific loss functions mentioned earlier
- Finally, use gradient descent to minimize the overall loss

Image preprocessing methodology

The first and foremost step towards implementing such a network is to preprocess the data, or images in this case.
The following code snippet shows some quick utilities to preprocess and post-process images for size and channel adjustments:

```python
import numpy as np
from keras.applications import vgg16
from keras.preprocessing.image import load_img, img_to_array


def preprocess_image(image_path, height=None, width=None):
    height = 400 if not height else height
    width = width if width else int(width * height / height)
    img = load_img(image_path, target_size=(height, width))
    img = img_to_array(img)
    img = np.expand_dims(img, axis=0)
    img = vgg16.preprocess_input(img)
    return img


def deprocess_image(x):
    # Remove zero-center by mean pixel
    x[:, :, 0] += 103.939
    x[:, :, 1] += 116.779
    x[:, :, 2] += 123.68
    # 'BGR'->'RGB'
    x = x[:, :, ::-1]
    x = np.clip(x, 0, 255).astype('uint8')
    return x
```

As we would be writing custom loss functions and manipulation routines, we would need to define certain placeholders. Remember that keras is a high-level library that utilizes tensor manipulation backends (like tensorflow, theano, and CNTK) to perform the heavy lifting. Thus, these placeholders provide high-level abstractions to work with the underlying tensor object. The following snippet prepares placeholders for style, content, and generated images, along with the input tensor for the neural network:

```python
from keras import backend as K

# This is the path to the image you want to transform.
TARGET_IMG = 'lotr.jpg'
# This is the path to the style image.
REFERENCE_STYLE_IMG = 'pattern1.jpg'

width, height = load_img(TARGET_IMG).size
img_height = 480
img_width = int(width * img_height / height)

target_image = K.constant(preprocess_image(TARGET_IMG,
                                           height=img_height,
                                           width=img_width))
style_image = K.constant(preprocess_image(REFERENCE_STYLE_IMG,
                                          height=img_height,
                                          width=img_width))

# Placeholder for our generated image
generated_image = K.placeholder((1, img_height, img_width, 3))

# Combine the 3 images into a single batch
input_tensor = K.concatenate([target_image, style_image, generated_image],
                             axis=0)
```

We will load the pre-trained VGG-16 model; that is, without the top fully-connected layers. The only difference here is that we would be providing the size dimensions of the input tensor for the model input. The following snippet helps us build the pre-trained model:

```python
model = vgg16.VGG16(input_tensor=input_tensor,
                    weights='imagenet',
                    include_top=False)
```

Building loss functions

In the Understanding neural style transfer section, we discussed that the problem with neural style transfer revolves around loss functions of content and style. In this section, we will define these loss functions.

Content loss

In any CNN-based model, activations from top layers contain more global and abstract information, and bottom layers will contain local information about the image. We would want to leverage the top layers of a CNN for capturing the right representations for the content of an image. Hence, for the content loss, considering we will be using the pre-trained VGG-16 model, we can define our loss function as the L2 norm (scaled and squared Euclidean distance) between the activations of a top layer (giving feature representations) computed over the target image, and the activations of the same layer computed over the generated image. Assuming we usually get feature representations relevant to the content of images from the top layers of a CNN, the generated image is expected to look similar to the base target image.
The following snippet shows the function to compute the content loss:

```python
def content_loss(base, combination):
    return K.sum(K.square(combination - base))
```

Style loss

As per A Neural Algorithm of Artistic Style by Gatys et al., we will be leveraging the Gram matrix and computing it over the feature representations generated by the convolution layers. The Gram matrix computes the inner product between the feature maps produced in any given conv layer. The inner product's terms are proportional to the co-variances of corresponding feature sets, and hence capture patterns of correlation between the features of a layer that tend to activate together. These feature correlations help capture relevant aggregate statistics of the patterns at a particular spatial scale, which correspond to the style, texture, and appearance, and not to the components and objects present in an image.

The style loss is thus defined as the scaled and squared Frobenius norm (Euclidean norm on a matrix) of the difference between the Gram matrices of the reference style and generated images. Minimizing this loss helps ensure that the textures found at different spatial scales in the reference style image will be similar in the generated image. Thus, the following snippet defines a style loss function based on a Gram matrix calculation:

```python
def style_loss(style, combination, height, width):

    def build_gram_matrix(x):
        features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
        gram_matrix = K.dot(features, K.transpose(features))
        return gram_matrix

    S = build_gram_matrix(style)
    C = build_gram_matrix(combination)
    channels = 3
    size = height * width
    return K.sum(K.square(S - C)) / (4. * (channels ** 2) * (size ** 2))
```

Total variation loss

It was observed that optimization to reduce only the style and content losses led to highly pixelated and noisy outputs. To counter this, the total variation loss was introduced. The total variation loss is analogous to a regularization loss; it is introduced to ensure spatial continuity and smoothness in the generated image and to avoid noisy, overly pixelated results. It is defined in the following function:

```python
def total_variation_loss(x):
    a = K.square(
        x[:, :img_height - 1, :img_width - 1, :] - x[:, 1:, :img_width - 1, :])
    b = K.square(
        x[:, :img_height - 1, :img_width - 1, :] - x[:, :img_height - 1, 1:, :])
    return K.sum(K.pow(a + b, 1.25))
```

Overall loss function

Having defined the components of the overall loss function for neural style transfer, the next step is to stitch these building blocks together. Since content and style information is captured by the CNN at different depths in the network, we need to apply and calculate each type of loss at the appropriate layers. We will be taking conv layers one to five for the style loss and setting appropriate weights for each layer. Here is the code snippet to build the overall loss function:

```python
# weights for the weighted average loss function
content_weight = 0.05
total_variation_weight = 1e-4

content_layer = 'block4_conv2'
style_layers = ['block1_conv2', 'block2_conv2', 'block3_conv3',
                'block4_conv3', 'block5_conv3']
style_weights = [0.1, 0.15, 0.2, 0.25, 0.3]

# initialize total loss
loss = K.variable(0.)
```
```python
# map each layer name to its symbolic output tensor
# (this mapping is assumed here; it is not shown in the excerpt above)
layers = dict([(layer.name, layer.output) for layer in model.layers])

# add content loss
layer_features = layers[content_layer]
target_image_features = layer_features[0, :, :, :]
combination_features = layer_features[2, :, :, :]
loss += content_weight * content_loss(target_image_features,
                                      combination_features)

# add style loss
for layer_name, sw in zip(style_layers, style_weights):
    layer_features = layers[layer_name]
    style_reference_features = layer_features[1, :, :, :]
    combination_features = layer_features[2, :, :, :]
    sl = style_loss(style_reference_features, combination_features,
                    height=img_height, width=img_width)
    loss += (sl * sw)

# add total variation loss
loss += total_variation_weight * total_variation_loss(generated_image)
```

Constructing a custom optimizer

The objective is to iteratively minimize the overall loss with the help of an optimization algorithm. In the paper by Gatys et al., optimization was done using the L-BFGS algorithm, an optimization algorithm based on Quasi-Newton methods that are popularly used for solving non-linear optimization problems and parameter estimation. This method usually converges faster than standard gradient descent.

We build an Evaluator class, based on patterns followed by Keras creator François Chollet, to compute both loss and gradient values in one pass instead of in independent and separate computations. It returns the loss value when called the first time and caches the gradients for the next call, which is more efficient than computing both independently. The following snippet defines the Evaluator class:

```python
# loss-and-gradients function assumed from the full example (not shown in the
# excerpt above): it evaluates the symbolic loss and its gradients with respect
# to the generated image in a single pass.
grads = K.gradients(loss, generated_image)[0]
fetch_loss_and_grads = K.function([generated_image], [loss, grads])


class Evaluator(object):

    def __init__(self, height=None, width=None):
        self.loss_value = None
        self.grads_values = None
        self.height = height
        self.width = width

    def loss(self, x):
        assert self.loss_value is None
        x = x.reshape((1, self.height, self.width, 3))
        outs = fetch_loss_and_grads([x])
        loss_value = outs[0]
        grad_values = outs[1].flatten().astype('float64')
        self.loss_value = loss_value
        self.grad_values = grad_values
        return self.loss_value

    def grads(self, x):
        assert self.loss_value is not None
        grad_values = np.copy(self.grad_values)
        self.loss_value = None
        self.grad_values = None
        return grad_values


evaluator = Evaluator(height=img_height, width=img_width)
```

Style transfer in action

The final piece of the puzzle is to use all the building blocks and perform style transfer in action! The following snippets outline how loss and gradients are evaluated. We also write back outputs at regular intervals (every 5 iterations, and so on) to see how neural style transfer transforms the images in consideration after a certain number of iterations:

```python
from scipy.optimize import fmin_l_bfgs_b
from imageio import imwrite
import time

result_prefix = 'st_res_' + TARGET_IMG.split('.')[0]
iterations = 20

# Run scipy-based optimization (L-BFGS) over the pixels of the
# generated image so as to minimize the neural style loss.
# This is our initial state: the target image.
# Note that `scipy.optimize.fmin_l_bfgs_b` can only process flat vectors.
```
```python
x = preprocess_image(TARGET_IMG, height=img_height, width=img_width)
x = x.flatten()

for i in range(iterations):
    print('Start of iteration', (i + 1))
    start_time = time.time()
    x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x,
                                     fprime=evaluator.grads, maxfun=20)
    print('Current loss value:', min_val)
    if (i + 1) % 5 == 0 or i == 0:
        # Save current generated image only every 5 iterations
        img = x.copy().reshape((img_height, img_width, 3))
        img = deprocess_image(img)
        fname = result_prefix + '_iter%d.png' % (i + 1)
        imwrite(fname, img)
        print('Image saved as', fname)
    end_time = time.time()
    print('Iteration %d completed in %ds' % (i + 1, end_time - start_time))
```

It must be pretty evident by now that neural style transfer is a computationally expensive task. For the set of images in consideration, each iteration took between 500 and 1,000 seconds on an Intel i5 CPU with 8 GB RAM (it is much faster on i7 or Xeon processors, though!). Using GPUs on a p2.x instance on AWS, each iteration takes a mere 25 seconds. The following output shows the loss and time taken for some of the iterations; the generated image is saved after every five iterations:

```
Start of iteration 1
Current loss value: 10028529000.0
Image saved as st_res_lotr_iter1.png
Iteration 1 completed in 28s
Start of iteration 2
Current loss value: 5671338500.0
Iteration 2 completed in 24s
Start of iteration 3
Current loss value: 4681865700.0
Iteration 3 completed in 25s
Start of iteration 4
Current loss value: 4249350400.0
.
.
.
Start of iteration 20
Current loss value: 3458219000.0
Image saved as st_res_lotr_iter20.png
Iteration 20 completed in 25s
```

Now you'll see how the neural style transfer model has performed style transfer for the content images in consideration. Remember that we saved checkpoint outputs after certain iterations for every pair of style and content images. We utilize matplotlib and skimage to load and view the style transfer magic performed by our system! We have used a still from the very popular Lord of the Rings movie as our content image, and a nice floral pattern-based artwork as our style image. In the following code snippet, we load the generated styled images after various iterations:

```python
from skimage import io
from matplotlib import pyplot as plt
%matplotlib inline

content_image = io.imread('lotr.jpg')
style_image = io.imread('pattern1.jpg')

iter1 = io.imread('st_res_lotr_iter1.png')
iter5 = io.imread('st_res_lotr_iter5.png')
iter10 = io.imread('st_res_lotr_iter10.png')
iter15 = io.imread('st_res_lotr_iter15.png')
iter20 = io.imread('st_res_lotr_iter20.png')

fig = plt.figure(figsize=(15, 15))
ax = fig.add_subplot(6, 3, 1)
ax.imshow(content_image)
t = ax.set_title('Original')

gen_images = [iter1, iter5, iter10, iter15, iter20]
gen_iters = [1, 5, 10, 15, 20]

for i, (img, it) in enumerate(zip(gen_images, gen_iters)):
    ax = fig.add_subplot(6, 3, i + 2)
    ax.imshow(img)
    t = ax.set_title('Iteration {}'.format(it))

plt.tight_layout()
fig.subplots_adjust(top=0.95)
t = fig.suptitle('LOTR Scene after Style Transfer')
```

The output showcases the original image and the generated styled images after every five iterations, followed by the final styled image at a higher resolution.
You can clearly see how the floral pattern textures and styles have slowly propagated into the original Lord of the Rings movie image, giving it a nice vintage look.

This article presented a very novel technique in the deep learning landscape, leveraging the power of deep learning to create art! We covered the core concepts of neural style transfer, how to represent and formulate the problem using an effective loss function, and how to leverage the power of transfer learning and pretrained models like VGG-16 to extract the right feature representations.

If you found this post useful, do check out the book Hands-On Transfer Learning with Python, which covers deep learning and transfer learning in detail. It also focuses on real-world examples and research problems using TensorFlow, Keras, and the Python ecosystem, with hands-on examples.

Generative Models in action: How to create a Van Gogh with Neural Artistic Style Transfer

"Deep learning is not an optimum solution for every problem faced": An interview with Valentino Zocca

OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners

Facebook's outgoing Head of communications and policy takes blame for hiring PR firm ‘Definers’ and reveals more

Melisha Dsouza
22 Nov 2018
4 min read
On 4th November, the New York Times published a scathing report on Facebook that threw the tech giant under scrutiny for its leadership morals. The report pointed out how Facebook has been following a strategy of 'delaying, denying and deflecting' the blame for all the controversies surrounding it. One of the recent scandals it was involved in was hiring a PR firm, called Definers, which did opposition research and shared content that criticized Facebook's rivals Google and Apple, diverting focus from the impact of Russian interference on Facebook. The firm also pushed the idea that liberal financier George Soros was behind a growing anti-Facebook movement.

Now, in a memo sent by Elliot Schrage (Facebook's outgoing Head of Communications and Policy) to Facebook employees and obtained by TechCrunch, he takes the blame for hiring Definers. Elliot Schrage, who announced in June, after the Cambridge Analytica scandal, that he was leaving, admitted that his team asked Definers to push negative narratives about Facebook's competitors. He also stated that Facebook asked Definers to conduct research on liberal financier George Soros. His argument was that after George Soros attacked Facebook in a speech at Davos, calling them a "menace to society", they wanted to determine if he had any financial motivation. According to the TechCrunch report, Elliot denied that the company asked the PR firm to distribute or create fake news.

"I knew and approved of the decision to hire Definers and similar firms. I should have known of the decision to expand their mandate," Schrage said in the memo. He further stressed that he is disappointed that a lot of the company's internal discussion has become public. According to the memo, "This is a serious threat to our culture and ability to work together in difficult times."

Saving Mark and Sheryl from additional finger-pointing, Schrage further added, "Over the past decade, I built a management system that relies on the teams to escalate issues if they are uncomfortable about any project, the value it will provide or the risks that it creates. That system failed here and I'm sorry I let you all down. I regret my own failure here."

In a follow-up note to the memo, Sheryl Sandberg (COO, Facebook) also shares accountability for hiring Definers. She says, "I want to be clear that I oversee our Comms team and take full responsibility for their work and the PR firms who work with us."

Conveniently enough, this memo comes after the announcement that Elliot is stepping down from his post at Facebook. Elliot's replacement, Facebook's new head of global policy and former U.K. Deputy Prime Minister Nick Clegg, will now be reviewing its work with all political consultants.

The entire scandal has led to harsh criticism from media commentators like Kara Swisher and from academics like Scott Galloway. On an episode of Pivot with Kara Swisher and Scott Galloway, Kara commented that "Sheryl Sandberg ... really comes off the worst in this story, although I still cannot stand the ability of people to pretend that this is not all Mark Zuckerberg's responsibility." She followed up with a jarring comment, stating, "He is the CEO. He has 60 percent. He's an adult, and they're treating him like this sort of adult boy king who doesn't know what's going on. It's ridiculous. He knows exactly what's going on."

Galloway added that since Sheryl had "written eloquently on personal loss and the important discussion around gender equality", these accomplishments gave her "unfair" protection, and that it might also be true that she will be "unfairly punished." He raised questions about both Mark's and Sheryl's leadership, saying, "Can you think of any individuals who have made so much money doing so much damage? I mean, they make tobacco executives look like Mister Rogers."

On 19th November, he tweeted a detailed theory on why Sandberg is still a part of Facebook: because "The Zuck can't be (fired)" and nobody wants to be the board that "fires the woman".

https://twitter.com/profgalloway/status/1064559077819326464

Here's another recent tweet thread from Scott, which is a sarcastic take on what a "Big Tech" company actually is:

https://twitter.com/profgalloway/status/1065315074259202048

Head over to CNBC to know more about this news.

What is Facebook hiding? New York Times reveals Facebook's insidious crisis management strategy

NYT Facebook exposé fallout: Board defends Zuckerberg and Sandberg; Media call and transparency report highlights

BuzzFeed Report: Google's sexual misconduct policy "does not apply retroactively to claims already compelled to arbitration"

OpenCV 4.0 releases with experimental Vulkan, G-API module and QR-code detector among others

Natasha Mathur
21 Nov 2018
2 min read
Two months after the OpenCV team announced the alpha release of OpenCV 4.0, the final version 4.0 of OpenCV is here. OpenCV 4.0 was announced last week and is now available as a C++11 library that requires a C++11-compliant compiler. This new release brings features such as a G-API module, a QR code detector, performance improvements, and DNN improvements, among others.

OpenCV is an open source library of programming functions mainly aimed at real-time computer vision. OpenCV is cross-platform and free for use under the open-source BSD license. Let's have a look at what's new in OpenCV 4.0.

New features

G-API: OpenCV 4.0 comes with a completely new module, opencv_gapi. G-API is an engine for very efficient image processing, based on lazy evaluation and on-the-fly construction of the processing graph.

QR code detector and decoder: OpenCV 4.0 comprises a QR code detector and decoder that has been added to the opencv/objdetect module, along with a live sample. The decoder is currently built on top of the Quirc library.

Kinect Fusion algorithm: The popular Kinect Fusion algorithm has been implemented, optimized for CPU and GPU (OpenCL), and integrated into the opencv_contrib/rgbd module. Kinect 2 support has also been updated in the opencv/videoio module to make the live samples work.

DNN improvements

- Support has been added for the Mask-RCNN model.
- A new integrated ONNX parser has been added.
- Support has been added for popular networks such as the YOLO object detection network.
- The performance of the DNN module in OpenCV 4.0 has improved when built with Intel DLDT support, by utilizing more layers from DLDT.
- OpenCV 4.0 comes with an experimental Vulkan backend for platforms where OpenCL is not available.

Performance improvements

In OpenCV 4.0, hundreds of basic kernels have been rewritten with the help of "wide universal intrinsics". Wide universal intrinsics map to SSE2, SSE4, AVX2, NEON, or VSX intrinsics, depending on the target platform and the compile flags. This leads to better performance, even for already-optimized functions. Support has also been added for IPP 2019 via the IPPICV component upgrade.

For more information, check out the official release notes.

Image filtering techniques in OpenCV

3 ways to deploy a QT and OpenCV application

OpenCV and Android: Making Your Apps See
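As a quick, hedged illustration of the new QR code API, a minimal sketch along the following lines should work against OpenCV 4.0; the input file name is a placeholder chosen for this example, and the snippet is not taken from the release notes:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Load an image that contains a QR code (hypothetical file name).
    cv::Mat img = cv::imread("qr_sample.jpg");
    if (img.empty()) {
        std::cerr << "Could not read the input image" << std::endl;
        return 1;
    }

    // QRCodeDetector lives in the objdetect module, new in OpenCV 4.0.
    cv::QRCodeDetector detector;
    cv::Mat bbox;  // will receive the corners of the detected QR code
    std::string data = detector.detectAndDecode(img, bbox);

    if (!data.empty())
        std::cout << "Decoded QR payload: " << data << std::endl;
    else
        std::cout << "No QR code detected" << std::endl;
    return 0;
}
```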

The US Department of Commerce wants to regulate export of AI and related products

Prasad Ramesh
21 Nov 2018
4 min read
This Monday, the Department of Commerce's Bureau of Industry and Security (BIS) published a proposal to control the export of AI from the USA. This move seems to lean towards restricting AI tech going out of the country in order to protect the national security of the USA.

The areas that come under the licensing proposal

Artificial intelligence, as we've seen in recent years, has great potential for both good and harm. The Department of Commerce in the United States is not taking any chances with it. The proposal lists many areas of AI that could potentially require a license to be exported to certain countries. Other than computer vision and natural language processing, military-specific products like adaptive camouflage and faceprints for surveillance are also listed in the proposal to restrict the export of AI. The major areas listed in the proposal are:

- Biotechnology, including genomic and genetic engineering
- Artificial intelligence (AI) and machine learning, including neural networks, computer vision, and natural language processing
- Position, Navigation, and Timing (PNT) technology
- Microprocessor technology, like stacked memory on chip
- Advanced computing technology, like memory-centric logic
- Data analytics technology, like data analytics by visualization and analysis algorithms
- Quantum information and sensing technology, like quantum computing, encryption, and sensing
- Logistics technology, like mobile electric power
- Additive manufacturing, like 3D printing
- Robotics, like micro drones and molecular robotics
- Brain-computer interfaces, like mind-machine interfaces
- Hypersonics, like flight control algorithms
- Advanced materials, like adaptive camouflage
- Advanced surveillance technologies, like faceprint and voiceprint technologies

David Edelman, a former adviser to ex-US president Barack Obama, said: "This is intended to be a shot across the bow, directed specifically at Beijing, in an attempt to flex their muscles on just how broad these restrictions could be."

Countries that could be affected by regulation on the export of AI

To determine the level of export controls, the department will consider the potential end-uses and end-users of the technology. The list of countries is not clear, but ones to which exports are already restricted, like embargoed countries, will be considered. China could also be one of them.

What does this mean for companies?

If your organization creates products in 'emerging technologies', there will be restrictions on the countries you can export to and also on disclosure of technology to foreign nationals in the United States. Depending on the criteria, non-US citizens might even need licenses to participate in the research and development of such technology. This will restrict non-US citizens from participating in, and taking anything back from, say, an advanced AI research project. If the new regulations go into effect, they will affect the security review of foreign investments across these areas. When the list of technologies is finalized, many types of foreign investments will be subject to review, and deals could be halted or undone.

Public views on academic research

In addition to commercial applications and products, this regulation could also be bad news for academic research.

https://twitter.com/jordanbharrod/status/1065047269282627584

https://twitter.com/BryanAlexander/status/1064941028795400193

Even Google Home, Amazon Alexa, and iRobot Roomba could be affected.

https://twitter.com/R_D/status/1064511113956655105

But it does not look like research papers will really be affected.
The document states that the Commerce Department does not intend to expand jurisdiction over 'fundamental research' in 'emerging technologies' that is intended to be published and that is not currently subject to the EAR as per § 734.8. But will this affect open-source technologies? We really hope not.

The deadline for comments is less than 30 days away

BIS has invited comments on the proposal for defining and categorizing emerging technologies, the impact of the controls on US technology leadership, and other topics. However, the short deadline of December 19, 2018 indicates their haste to implement licensing for the export of AI quickly. For more details, and to know where you can submit your comments, read the proposal.

The US Air Force lays groundwork towards artificial general intelligence based on hierarchical model of intelligence

Google open sources BERT, an NLP pre-training technique

Teaching AI ethics – Trick or Treat?

4 key findings from The State of JavaScript 2018 developer survey

Prasad Ramesh
20 Nov 2018
4 min read
Three JavaScript developers surveyed over 20,000 JavaScript developers to find out what's happening within the language and its huge ecosystem. From usage to satisfaction to learning habits, the State of JavaScript 2018 report offers another valuable insight into a community that is still going strong, despite the landscape continuing to change. You can check out the results of the State of JavaScript 2018 survey in detail here, but keep reading to find out 4 things we found interesting about it.

JavaScript developers love ES6 and TypeScript

ES6 and TypeScript were the most well received: 86.3% and 46.7% of developers respectively have used and would use these languages again. ClojureScript, Elm, and Flow, however, don't seem to pique many developers' interest these days (unsurprisingly).

React rules the front-end frameworks - Angular's popularity may be dwindling

There has been a big battle between a few frameworks on the front-end side of web development - namely between React, Vue, and Angular. The State of JavaScript 2018 survey suggests that React is winning out, with Vue in second position: 64.8% and 28.8% of developers said that they would use React and Vue.js respectively again. Vue is also growing in popularity - 46.6% of respondents expressed an interest in learning it. However, the news wasn't great for Angular - 33.8% of respondents said that they wouldn't use Angular again. Ember and Polymer were less well received, as more than 50% of the responses for both indicated no interest in learning them. Preact and Polymer, meanwhile, are perhaps still a little new on the scene: 28.1% and 18.5% of respondents had never even heard of these frameworks.

Vue.js 3.0 is ditching JavaScript for TypeScript. Learn more here.

Redux is the most used in the data layer - but JavaScript developers want to learn GraphQL

When it comes to data, Redux is the most popular library, with 47.2% of developers saying that they would use it again. GraphQL is second, with 20.4% of respondents vouching for it. But Redux shouldn't be complacent - 62.5% of developers also want to learn GraphQL. It looks like the Redux and GraphQL debate is going to continue well into 2019. What the consensus will be in 12 months' time is anyone's guess.

Why do React developers love Redux? Find out here.

Express.js popularity confirms Node.js as JavaScript's quiet hero

It was observed that there haven't been any major breakthroughs in this area in recent years. But that is, perhaps, a good thing when you consider the frantic pace of change in other areas of JavaScript. It probably also has a lot to do with the dominance of Node.js in this area. Express, a Node.js framework, is by far the most popular, with 64.7% of developers taking the survey saying they would use it again. Sadly, it appears Meteor is languishing despite its meteoric hype just a few years ago: 49.4% of developers had heard of it but said they had no interest in learning it.

In conclusion: the landscape is becoming more clearly defined, but the JavaScript developer role is changing

A few years ago, the JavaScript ecosystem was chaotic and almost incoherent. Every week seemed to bring a new framework demanding your attention.
It looks, as we move towards the end of the decade, that things are a lot different now - React has established itself at the forefront of the front end, while TypeScript appears to have embedded itself within the ecosystem too. With GraphQL also generating interest, and competing with Redux, we're seeing a clear shift in what JavaScript developers are doing, and what they're being asked to do. As the stack expands, managing data sources and building for speed and scalability is now a problem right at the heart of JavaScript development, not just on its fringes.
11 predictions for the future of programming

Guest Contributor
19 Nov 2018
12 min read
It's been over five decades since programming began pushing the boundaries of digital craftsmanship, and it is still doing so with no signs of stopping or slowing down. There is a new tool, framework, add-on, functionality, technology, or programming language breaking the Internet every now and then. Any adept programmer not only needs to be good at coding but also has to stay abreast of ongoing and upcoming happenings in the programming world. Just learning to code does not give you a big edge over others; by having a good idea of what's coming ahead, present steps can be planned effectively. Obviously, no one can perfectly forecast the future of computer programming, but that won't stop us from speculating, right? Here are 11 predictions for the future of programming that we think programmers should keep an eye on.

#1 Cloud native as the new default

Did you know that in order to serve a single search query, Google Search uses more than a thousand servers? All this is done in order to serve the right results. The cloud has been popular for the past decade, but it's destined to grow immensely in the future as more and more developers intend to use the cloud for a faster go-to-market. Tinkering in the cloud to build an app is so much easier compared to managing your own servers, as you don't have to buy new servers, maintain them, upgrade them, or add new servers as and when demand fluctuates. Web users are an impatient lot these days, so making web pages faster is the main goal for developers: 40% of people abandon a website that takes more than 3 seconds to load. More efficient algorithms save a few microseconds, whereas additional impetus is provided by rapidly developing, enhanced servers.

#2 IoT security concerns will escalate

IoT is a growing technological concept these days. The promising piece of tech has already made it to the market, although in a limited form. Any smart device is just like a computer or machine that can be hacked by feeding it a few simple malicious lines of code. So, the security of IoT devices is as important as their deployment. Or else, we will have to face dire consequences, as experienced recently in the form of a North Korean hacker charged for the WannaCry ransomware and a 16-year-old hacking into Apple's servers to access customer data. Programmers need to develop suspicious-activity-proof algorithms for IoT devices. Failing to do so will not only make the devices vulnerable to unintended use but also put the entire system at risk. Hence, with the growth of the IoT market, concern about its safety will also mushroom.

#3 Video content will continue to dominate the Web

In order to solve the dire glitches caused by plugins, the HTML standards committee started embedding video tags into HTML. Video tags are programmable by virtue of the fact that basic video tags respond to JavaScript commands. Earlier, video content was fixed: if you watched a video about dogs fighting cats, then you would be recommended just that - nothing more, nothing less. However, this is not the case anymore. It is the time of seamless canvas design, in which web designers figure out clever ways to deploy different video content. Doing so allows the user to steer the way in which a narrative unfolds, and it opens up new ways of interacting with video content. Now machine learning can deliver higher-quality streaming experiences that do not buffer as much as many existing systems.
More efficient codecs and better video compression are also playing a role in making video a better digital consumption medium. Again, programming makes this feasible, as video tags and iframes are part of the programming code.

#4 Consoles, consoles everywhere

Thanks to the groundbreaking progress in video game console technology, PCs are continuously being rejected in favour of gaming consoles. Living room consoles are just the start. With the concept of intelligent devices, makers of other household items are also looking to make their offerings smarter. Our hairdryers and toasters already boast digital memory, allowing them to remember our preferences. However, the time when these, and other household units as well, will start communicating with each other, that is, exchanging information on their own, is yet to come. All of these scenarios are only made possible by programming. As several programmers have already embarked on the journey towards achieving results in this direction, we might not be that far away from a time when the aforementioned scenario becomes a day-to-day reality.

#5 Data is important, data will be important

Data is the backbone of the network of networks, that is, the Internet. What we see, read, and hear over the gigantic web is data, loads and loads of it. However, data collection is not something new for humanity. Since antiquity, humans have collected and stored large chunks of data for churning out important information at some later time. With the passage of time, enriching and protecting data have become important. While the former is achieved by presenting data in the form of videos, pictures, pie charts, and so on, the latter is accomplished by adding SSL to websites and using better encryption techniques. Data processing has become just as important as the digital ecosphere itself. In the enterprise community, data gathering will branch out more elaborately into storing, curating, and parsing. Simply said, data is, and will remain, the undisputed champion of the digital world.

#6 Machine learning dominance

Machine learning is already flourishing and seeping into everyday enterprise and life. For example, machine learning algorithms are already finding a place in important automation code for big businesses, where they are used for big data projects. Languages like the R programming language and Python have enabled this proliferation of machine learning so far. What's amazing about machine learning is that it is slowly being integrated into modern life. It will soon become a common entity in a person's life, just like smartphones and IoT. Again, machine learning also requires the services of programming and code, of course. No code, no machine learning - at least for now. There is the rise of the machine-learning-as-a-service trend, which aims to remove or minimize programming. However, if we have learned anything from the history of web development, it is that even as drag-and-drop web design tools grew, professional web developers also grew in demand. We can expect to see a similar trend with machine learning as it continues down the path of democratization.

#7 User interface design will continue gaining popularity

The time when an Internet user was expected to use a keyboard and mouse is long gone. With each passing day, using a PC is preferred less and less. Apart from offices and college laboratories, PCs are gradually being replaced by other smart devices. As smartphones, tablets, living room consoles, and the like take on the world, the emphasis on UI has heightened.
A touch and a click on the screen are different things. With the advancement in technology, the former is given preference, because it's quick and convenient at the same time. Furthermore, face and fingerprint recognition are the new cool, and research on voice control is also advancing. Many brands have already introduced their very own virtual assistants, such as Amazon Alexa, Siri, and Google Assistant, which can recognize the demands of their users through mere voice commands and interaction. For example, Android 9 Pie comes with a number of UI alterations to stay relevant to the present UI scenario, including a new position for the volume controls and Material Theming. The latter is a built-in Android toolset meant for customizing the Material Design supported by Android. Again, designing a powerful user interface depends on great programming. A user interface needs not only to be robust but also to show signs of intuitiveness and interactivity. The stress on UI design will continue growing in the future. Some of the UI trends forecast for 2019 are the overlapping effect, functional animations, and contrast of fonts.

#8 Open source vs. closed development

Nearly all laptops run on proprietary software, but smartphones, with Android leading the race, are mostly open source. iOS is still closed, but it has a robust set of APIs on which developers can build their own empires. While open source software is something that anyone can tinker with, a closed development environment restricts third parties from accessing and toying with such a system. Among other differences between the two, a significant difference is in the quality of support, which is, obviously, better for closed source software. Open source is rocking the world, with new developers entering programming by tinkering with open source, whereas the closed environment is also growing tremendously because of personalization and security features. This is one hell of a competition.

#9 Autonomous transportation

Another industry that requires the services of programming is autonomous vehicles. Just yesterday, Waymo announced that their first driverless cars will be on the road commercially next month. So far, we have only seen some of the many accomplishments that a driverless mode of transportation can achieve. Though for now only cars make use of autonomous transportation algorithms, other means of transportation will soon join the parade. There are already crowdfunding projects for autonomous skateboards: known as the XTND Board, it is a lightweight electric vehicle meant to redefine commuting. Autonomous aircraft are already being used in the military, and pilotless airplane transportation may be just around the corner. All it requires is excellent programming code to let a vehicle know which route it should choose. So, flights might become autonomous after rides.

#10 The Law will redefine new limits

Writing code is like fixing something, setting up protocols. What a program will and won't do depends entirely on the coding. However, there are several ways to manipulate harmful programming code. There's a subtle analogy between programming code and law, and both have their own jurisdictions. Though there is a bright, sunny side to technological advancement, there's also a darker side that needs to be reviewed and regulated.
As the years pass from this point in time, programmers will face real-world challenges in assisting law and order to contain the malicious elements of society, on both the digital front and the real-world front. We have already seen how adding technology to law works. However, the other side is that it can also act as a tool to break the law(s). Cyberattacks, identity theft, and data laundering are some notable examples made possible by technology. This is a question which is also its own solution: in order to prevent such insincere acts, security personnel need to think like bypassers. This is where ethical hacking comes in. It is simply thinking and operating like a malicious hacker, but doing so for the right cause.

#11 Containers will continue to rule

Theoretically, there isn't a need for the so-called containers that are heavily deployed in modern-day programming. In theory, executable files can run anywhere, and the various requisite permissions, such as using hardware, are given by the OS; hence, there is, theoretically, no requirement for a container. However, this theoretical view treats all executables as the same, which is obviously not the general case. In reality, executables are different, and each one of them requires specific libraries to run. For instance, the WORA (Write Once, Run Anywhere) chant of Java fails owing to the fact that there are several different versions of virtual machines (VMs). Though using a comprehensive VM might solve the issue, that solution lacks practicality. On the other hand, sleek and lightweight containers win the preference. Containers are the solution to the reliability issues that arise when software is migrated from one computing environment to another. A container is simply a complete package that contains an entire runtime environment with the application, its dependencies and libraries, other required binaries, configuration files, and so on. When a container for a specific application has everything in it that the application requires to operate, the container becomes independent of the platform. Containers will continue to rule in the years ahead. If you are new to programming, you can check out programming terms for beginners to kickstart your coding journey.

These were the future predictions that we could think of. Do you want to add anything else? Please feel free to do so in the comments below.

Author Bio

Saurabh has worked globally for telecom and finance giants in various capacities. After working for a decade at Infosys and Sapient, he started his first startup, Leno, to solve a hyperlocal book-sharing problem. He is interested in product marketing and analytics. His latest venture, Hackr.io, recommends the best online programming and design courses for every programming language. All the tutorials are submitted and voted on by the programming community.

What we learned from IBM Research's '5 in 5' predictions presented at Think 2018.

"Deep learning is not an optimum solution for every problem faced": An interview with Valentino Zocca.

Why does the C programming language refuse to die?

Anatomy of an Azure function App [Tutorial]

Sugandha Lahoti
18 Nov 2018
8 min read
Azure Functions helps you easily run small pieces of code in the cloud without worrying about a whole application or the infrastructure to run it. With Azure Functions, you can use triggers to execute your code and bindings to simplify the input and output of your code. In this article, we will look at the anatomy and structure of an Azure Function.

This article is taken from the book Learning Azure Functions by Manisha Yadav and Mitesh Soni. In this book, you will learn techniques for scaling your Azure Functions and making the most of serverless architecture.

In this article, we will cover the following topics:

- Anatomy of Azure Functions
- Setting up a basic Azure Function

Anatomy of Azure Functions

Let's understand the different components or resources that are used while creating Azure Functions. The following image describes Functions in Azure under the different pricing plans:

Azure Function App

A function app is a collection of one or more functions that are managed together. All the functions in a Function App share the same pricing plan, which can be a Consumption plan or an App Service plan. When we utilize Visual Studio Team Services for Continuous Integration and Continuous Delivery using build and release definitions, the Function App is also shared. The way we manage different resources in Azure with an Azure Resource Group is similar to how we can manage multiple functions with the Function App.

Function code

In this article, we will consider a scenario where a photography competition is held and photographers need to upload photographs to the portal. The moment a photograph is uploaded, a thumbnail should be created immediately. The function code is the main content that executes and performs some operations, as shown below:

```javascript
var Jimp = require("jimp");

// JavaScript function must export a single function via module.exports
// To find the function and execute it
module.exports = (context, myBlob) => {
    // context is a must have parameter and first parameter always
    // context is used to pass data to and from the function
    // context name is not fixed; it can be anything

    // Read Photograph with Jimp
    Jimp.read(myBlob).then((image) => {
        // Manipulate Photograph
        // resize the Photograph. Jimp.AUTO can be passed as one of the values.
        image
            .resize(200, 200)
            .quality(40)
            .getBuffer(Jimp.MIME_JPEG, (error, stream) => {
                // Check for errors while processing the Photograph.
                if (error) {
                    // To print the message on log console
                    context.log('There was an error processing the Photograph.');
                    // To communicate with the runtime that function is finished
                    // to avoid timeout
                    context.done(error);
                } else {
                    // To print the message on log console
                    context.log('Successfully processed the Photograph');
                    // To communicate with the runtime that function is finished
                    // to avoid timeout
                    // Bind the stream to the output binding to create a new blob
                    context.done(null, stream);
                }
            });
    });
};
```

Function configuration

The function configuration defines the function bindings and other configuration settings.
It contains configuration such as the type of trigger and the paths for the blob containers:

```json
{
  "bindings": [
    {
      "name": "myBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "photographs/{name}",
      "connection": "origphotography2017_STORAGE",
      "dataType": "binary"
    },
    {
      "type": "blob",
      "name": "$return",
      "path": "thumbnails/{name}",
      "connection": "origphotography2017_STORAGE",
      "direction": "out"
    }
  ],
  "disabled": false
}
```

The function runtime uses this configuration file to decide which events to monitor and how to pass data to and from the function execution.

Function settings

We can limit the daily usage quota and application settings. We can enable Azure Function proxies and change the edit mode of our function app. The application settings in the Function App are similar to the application settings in Azure App Services. We can configure the .NET Framework v4.6 and Java versions, the platform, ARR affinity, remote debugging, the remote Visual Studio version, app settings, and connection strings.

Runtime

The runtime is responsible for executing the function code on the underlying WebJobs SDK host. In the next section, we will create our Function App and functions using the Azure Portal.

Setting up a basic Azure Function

Let's understand Azure Functions and create one in the Azure Portal:

1. Go to https://portal.azure.com.
2. Click on Function Apps in the left sidebar. There is no Function App available as of now.
3. Click on the plus (+) sign, search for Function Apps, and then click on Create.
4. Provide the App name, Subscription details, and an existing Resource Group. Select Consumption Plan as the Hosting Plan, then select a Location.
5. Select Create New in Storage and click on Create.
6. Now, go to Function Apps in the left sidebar and verify whether the recently created Function App is available in the list.
7. Click on the Function App and you can see the details related to the Subscription, Resource group, URL, Location, and App Service plan / pricing tier. You can stop or restart the function from the same pane. The Settings tab provides details on the Runtime version, Application settings, and the limit on daily usage. It also allows us to keep the Function App in Read/Write or Read Only mode. We can also enable deployment slots, a well-known feature of Azure App Services.
8. In the Platform features tab, we get different kinds of options for the Function App, such as MONITORING, NETWORKING, and DEPLOYMENT TOOLS. We will cover most of these features in this chapter and in upcoming chapters in detail.
9. Click on Properties and verify the different details that are available. There is a property named OUTBOUND IP ADDRESSES, which is useful if we need the IP addresses of the Function App for whitelisting.
10. Click on App Service plan and it will open the consumption plan in the pane.
11. On the left sidebar in the Azure Portal, go to Storage services and verify the storage accounts that are available.

What we want to achieve is that when we upload an image to a specific blob container, the function should be triggered immediately in the Function App, execute, and create a thumbnail in another blob container:

1. Create a new storage account.
2. Go to the Overview section of the Storage account and check all the available settings.
3. Click on the Containers section in the Storage account. There is no container available in the Storage account yet.
Click on + Container, fill in the Name and Access type, and click on OK. Similarly, create another container to store the thumbnails, and verify that both containers appear in the storage account.

Once we have all the components ready to achieve our main objective of creating a function that creates thumbnails of photographs, we can start creating the function.

Click on the Functions section in the Function App. Select Webhook + API and then choose a language. Click on Custom function so that we can utilize the already available templates. Select JavaScript as the Language and select the BlobTrigger template. Provide the name of our function, give the path to the source container, and select the Storage account connection. Look at the function and code available in the code editor in the Microsoft Azure Portal.

Before we write the actual code in the function, let's configure the triggers and outputs. Select the Blob parameter name, Storage account connection, and Path. Click on New Output and select Azure Blob Storage. Select the Blob parameter name, Storage account connection, and Path, and click on Save. Review the final output bindings, and click on the Advanced editor link to review function.json.

Now, paste the function code for creating a thumbnail into the Functions code editor.

Once everything is set and configured, let's upload a photograph to the photographs blob container. Click on the container and click on Upload. Select the photograph that needs to be uploaded to the container and click on Upload.

Go to Function Apps and check the logs. We may get an error: Cannot find module 'jimp' (a typical fix is sketched at the end of this article):

2017-06-30T16:54:18 Welcome, you are now connected to log-streaming service.
2017-06-30T16:54:41.202 Script for function 'photoProcessing' changed. Reloading.
2017-06-30T16:55:01.309 Function started (Id=411e4d84-5ef0-4ca9-b963-ed94c0ba8e84)
2017-06-30T16:55:01.371 Function completed (Failure, Id=411e4d84-5ef0-4ca9-b963-ed94c0ba8e84, Duration=59ms)
2017-06-30T16:55:01.418 Exception while executing function: Functions.photoProcessing. mscorlib: Error: Cannot find module 'jimp'
    at Function.Module._resolveFilename (module.js:455:15)
    at Function.Module._load (module.js:403:25)
    at Module.require (module.js:483:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (D:\home\site\wwwroot\photoProcessing\index.js:1:74)
    at Module._compile (module.js:556:32)
    at Object.Module._extensions..js (module.js:565:10)
    at Module.load (module.js:473:32)
    at tryModuleLoad (module.js:432:12)
    at Function.Module._load (module.js:424:3)

Thus, we created our first function that processes a photograph and creates a thumbnail for a photography competition scenario, and we saw the anatomy of Azure Functions. To learn more about how triggers can activate a function and how bindings can be used to output the results of a function, read our book Learning Azure Functions.

Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
Working with Azure container service cluster [Tutorial]
Implementing Identity Security in Microsoft Azure [Tutorial]
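A note on the error above (this is an addition, not part of the book excerpt): "Cannot find module 'jimp'" simply means the jimp npm package has not been installed in the Function App. One common way to resolve it is to add a package.json along the following lines to the function's folder and then run npm install, for example from the Kudu console; the package name and version shown are illustrative:

{
  "name": "photoprocessing",
  "version": "1.0.0",
  "dependencies": {
    "jimp": "^0.2.28"
  }
}

Once the dependency is installed, uploading a photograph to the photographs container again should produce a thumbnail in the thumbnails container instead of the module error.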


How to build Template Metaprogramming (TMP) using C++ [Tutorial]

Savia Lobo
18 Nov 2018
11 min read
The simplest way to put it is that metaprogramming is a technique that creates code by using code. In metaprogramming, we write a computer program that manipulates other programs and treats them as its data. In addition, templates are a compile-time mechanism in C++ that is Turing-complete, which means any computation expressible by a computer program can be computed, in some form, by a template metaprogram before runtime. Template metaprogramming also relies heavily on recursion and uses immutable variables. So, in metaprogramming, we create code that will run when the code is compiled.

This tutorial is an excerpt taken from the book Learning C++ Functional Programming, written by Wisnu Anggoro. In this book, you'll learn to apply functional programming techniques to C++ to build highly modular, testable, and reusable code. In this article, we will learn how to build Template Metaprogramming (TMP) in C++.

Preprocessing the code using a macro

To start our discussion on metaprogramming, let's go back to the era when the ANSI C programming language was popular. For simplicity, we used the C preprocessor by creating a macro. A C parameterized macro is also known as a metafunction, and it is one example of metaprogramming. Consider the following parameterized macro:

#define MAX(a,b) (((a) > (b)) ? (a) : (b))

Since the C++ programming language retains backward compatibility with the C language, we can compile the preceding macro using our C++ compiler. Let's create the code to consume the preceding macro, which will be as follows:

/* macro.cpp */
#include <iostream>
using namespace std;

// Defining macro
#define MAX(a,b) (((a) > (b)) ? (a) : (b))

auto main() -> int
{
    cout << "[macro.cpp]" << endl;

    // Initializing two int variables
    int x = 10;
    int y = 20;

    // Consuming the MAX macro
    // and assigning the result to the z variable
    int z = MAX(x,y);

    // Displaying the result
    cout << "Max number of " << x << " and " << y;
    cout << " is " << z << endl;

    return 0;
}

As we can see in the preceding macro.cpp code, we pass two arguments to the MAX macro since it is a parameterized macro, which means the parameters can be obtained from the user. If we run the preceding code, we should see the maximum of the two numbers (20) printed on the console.

Metaprogramming is code that runs at compile time. By using a macro in the preceding code, we can demonstrate that new code is generated from the MAX macro. The preprocessor parses the macro at compile time and brings in the new code. At compile time, the compiler modifies the code as follows:

auto main() -> int
{
    // same code
    // ...

    int z = (((x) > (y)) ? (x) : (y)); // <-- Notice this section

    // same code
    // ...

    return 0;
}

Besides one-line macros, we can also generate multiline macro metafunctions. To achieve this, we use the backslash character at the end of each line. Let's suppose we need to swap two values.
We can create a parameterized macro named SWAP and consume it as in the following code:

/* macroswap.cpp */
#include <iostream>
using namespace std;

// Defining a multiline macro
#define SWAP(a,b) { \
    (a) ^= (b); \
    (b) ^= (a); \
    (a) ^= (b); \
}

auto main() -> int
{
    cout << "[macroswap.cpp]" << endl;

    // Initializing two int variables
    int x = 10;
    int y = 20;

    // Displaying original variable values
    cout << "before swapping" << endl;
    cout << "x = " << x << ", y = " << y;
    cout << endl << endl;

    // Consuming the SWAP macro
    SWAP(x,y);

    // Displaying swapped variable values
    cout << "after swapping" << endl;
    cout << "x = " << x << ", y = " << y;
    cout << endl;

    return 0;
}

As we can see in the preceding code, we create a multiline preprocessor macro and use a backslash character at the end of each line. Each time we invoke the SWAP parameterized macro, it is replaced with the implementation of the macro. If we run the preceding code, the console shows the values of x and y before and after the swap.

Now that we have a basic understanding of metaprogramming, especially of metafunctions, we can move further into the next topics.

We use parentheses around each variable in every macro implementation because the preprocessor simply replaces our code with the implementation of the macro. Let's suppose we have the following macro:

#define MULTIPLY(a,b) (a * b)

It won't be a problem if we pass plain numbers as the parameters. However, a problem occurs if we pass an expression as an argument. For instance, if we use the MULTIPLY macro as follows:

MULTIPLY(x+2,y+5);

then the preprocessor will replace it with (x+2*y+5). This happens because the macro just replaces the a variable with the x + 2 expression and the b variable with the y + 5 expression, without any additional parentheses. And because multiplication has higher precedence than addition, we effectively get the following result:

(x+2y+5)

And that is not what we expect. As a result, the best approach is to use parentheses around each parameter of the macro.

Dissecting template metaprogramming in the Standard Library

The Standard Library provided with the C++ language consists mostly of templates containing incomplete functions, which are used to generate complete functions. Template metaprogramming uses C++ templates to generate C++ types and code at compile time.

Let's pick one of the classes in the Standard Library--the Array class. In the Array class, we can define a data type for it. When we instantiate the array, the compiler actually generates the code for an array of the data type we define. Now, let's try to build a simple Array template implementation as follows:

template<typename T>
class Array
{
    T element;
};

Then, we instantiate the char and int arrays as follows:

Array<char> arrChar;
Array<int> arrInt;

What the compiler does is create these two implementations of the template based on the data types we define. Although we won't see this in the code, the compiler actually creates the following code:

class ArrayChar
{
    char element;
};

class ArrayInt
{
    int element;
};

ArrayChar arrChar;
ArrayInt arrInt;

As we can see in the preceding code snippet, template metaprogramming is code that creates other code at compile time.

Building the template metaprogramming

Before we go further in the template metaprogramming discussion, let's look at the skeleton that builds a template metaprogram. There are four factors that form template metaprogramming--type, value, branch, and recursion.
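Before digging into those factors, it is worth closing the loop on the MULTIPLY example above. The following parenthesized version is a small addition to the excerpt (not from the book) showing the advice in practice:

#define MULTIPLY(a,b) ((a) * (b))

With this definition, MULTIPLY(x+2,y+5) expands to ((x+2) * (y+5)), so the two sums are multiplied as intended.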
In this topic, we will dig into the factors that form the template.

Adding a value to the variable in the template

In the macro preprocessor, we explicitly manipulate the source code; in that case, the macro (metafunction) manipulates the source code. In contrast, we work with types in C++ template metaprogramming. This means the metafunction is a function that works with types. So, the better approach in template metaprogramming is to work with type parameters only, when possible.

When we talk about variables in template metaprogramming, they are actually not variables, since their values cannot be modified. What we need from a variable is its name, so we can access it. Because we will code with types, the named values are typedefs, as we can see in the following code snippet:

struct ValueDataType
{
    typedef int valueDataType;
};

By using the preceding code, we store the int type in the valueDataType alias so that we can access the data type through the valueDataType name. If we need to store a value instead of a data type, we can use an enum so that the value becomes a data member of the enum itself. Let's take a look at the following code snippet if we want to store a value:

struct ValuePlaceHolder
{
    enum
    {
        value = 1
    };
};

Based on the preceding code snippet, we can now access the value member to fetch its value.

Mapping a function to the input parameters

We can add variables to the template metaprogram. Now, what we have to do next is retrieve the user parameters and map them to a function. Let's suppose we want to develop a Multiplexer function that multiplies two values, and we have to use template metaprogramming. The following code snippet can be used to solve this problem:

template<int A, int B>
struct Multiplexer
{
    enum
    {
        result = A * B
    };
};

As we can see in the preceding code snippet, the template requires two arguments, A and B, from the user, and it uses them to compute the value of the result member by multiplying the two parameters. We can access the result member using the following code:

int i = Multiplexer<2, 3>::result;

If we run the preceding code snippet, the i variable will store 6, since it calculates 2 times 3.

Choosing the correct process based on the condition

When we have more than one function, we have to choose one over the others based on certain conditions. We can construct a conditional branch by providing two alternative specializations of the template class, as shown here:

template<typename A, typename B>
struct CheckingType
{
    enum
    {
        result = 0
    };
};

template<typename X>
struct CheckingType<X, X>
{
    enum
    {
        result = 1
    };
};

As we can see in the preceding template code, we have two templates that have X and A/B as their types. When the template has only a single type, that is, typename X, it means that the two types we compare (CheckingType<X, X>) are exactly the same. Otherwise, the two data types are different. The following code snippet can be used to consume the two preceding templates:

if (CheckingType<UnknownType, int>::result)
{
    // run the function if the UnknownType is int
}
else
{
    // otherwise run any other function
}

As we can see in the preceding code snippet, we try to compare the UnknownType data type with the int type. The UnknownType data type might come from another process. We can then decide the next process we want to run by comparing these two types using templates.

Up to here, you might wonder how template metaprogramming will help us with code optimization.
Soon we will use template metaprogramming to optimize code. However, we first need to discuss the other things that will solidify our knowledge of template metaprogramming. For now, please be patient and keep reading.

Repeating the process recursively

We have successfully added values and data types to the template, then created a branch to decide the next process based on the current condition. Another thing we have to consider in the basic template is repeating a process. However, since the variables in the template are immutable, we cannot iterate over a sequence. Let's suppose we are developing a template to calculate a factorial value. The first thing we have to do is develop a general template that passes the I value to the function, as follows:

template <int I>
struct Factorial
{
    enum
    {
        value = I * Factorial<I-1>::value
    };
};

As we can see in the preceding code, we can obtain the value of the factorial by running the following code:

Factorial<I>::value;

In the preceding code, I is an integer number. Next, we have to develop a template specialization to ensure that the recursion doesn't end up in an infinite loop. We can create the following template that passes zero (0) as a parameter to it:

template <>
struct Factorial<0>
{
    enum
    {
        value = 1
    };
};

Now we have a pair of templates that will generate the value of the factorial at compile time. The following is sample code that obtains the value of Factorial(10) at compile time:

int main()
{
    int fact10 = Factorial<10>::value;
}

If we run the preceding code, we will get 3628800 as the result of the factorial of 10 (a compile-time check with static_assert is sketched at the end of this article).

Thus, in this post, we learned how to build Template Metaprogramming with the C++ programming language. If you've enjoyed reading this post and want to know more about flow control with template metaprogramming and more, do check out the book Learning C++ Functional Programming.

Introduction to R Programming Language and Statistical Environment
Use Rust for web development [Tutorial]
Boost 1.68.0, a set of C++ source libraries, is released, debuting YAP!
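As a small addition (not part of the book excerpt), a static_assert, available since C++11, is a convenient way to confirm that the Factorial templates above really are evaluated at compile time: if the value were not a compile-time constant, the program would not compile. A minimal, self-contained sketch:

#include <iostream>

// The recursive template and its terminating specialization, as defined above
template <int I>
struct Factorial
{
    enum { value = I * Factorial<I - 1>::value };
};

template <>
struct Factorial<0>
{
    enum { value = 1 };
};

// Evaluated entirely by the compiler; compilation fails if the value is wrong
static_assert(Factorial<10>::value == 3628800,
              "Factorial<10> must be computed at compile time");

int main()
{
    std::cout << Factorial<10>::value << std::endl; // prints 3628800
    return 0;
}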


Observability as code, secrets as a service, and chaos katas: ThoughtWorks outlines key engineering techniques to trial and assess

Richard Gall
14 Nov 2018
5 min read
ThoughtWorks has just published vol. 19 of its essential Radar report. As always, it's a vital insight into what's beginning to emerge in the technology field. In the techniques quadrant of its Radar, there were some really interesting new entries. Let's take a look at some of them now, so you can better plan and evaluate your roadmap and skill set for 2019.

8 of the best new techniques you should be trialling (according to ThoughtWorks)

1% canary: a way to build better feedback loops

This sounds like a weird one, but the concept is simple. It's essentially about building a quick feedback loop to a tiny segment of customers - say, 1%. This can allow engineering teams to learn things quickly and make changes to other aspects of the project as it evolves.

Bounded buy: a smarter way to buy out-of-the-box software solutions

Bounded buy mitigates the scope creep that can cause headaches for businesses dealing with out-of-the-box software. It means those responsible for purchasing software focus only on solutions that are modular, with each 'piece' directly connecting to a particular department's needs or workflow.

Crypto shredding: securing sensitive data

Crypto shredding is a method of securing data that might otherwise be easily replicated or copied. Essentially, it overwrites sensitive data with encryption keys which can easily be removed or deleted. It adds an extra layer of control over a large data set - a technique that could be particularly useful in a field like healthcare.

Four key metrics: focus on what's most important to build a high performance team

Building a high performance team can be challenging. Accelerate, the team behind the State of DevOps report, highlighted key drivers that engineers and team leaders should focus on: lead time, deployment frequency, mean time to restore (MTTR), and change fail percentage. According to ThoughtWorks, "each metric creates a virtuous cycle and focuses the teams on continuous improvement."

Observability as code: breaking through the limits of traditional monitoring tools

Observability has emerged as a bit of a buzzword over the last 12 months. But in the context of microservices, and increased complexity in software architecture, it is nevertheless important. However, the means through which you 'do' observability - a range of monitoring tools and dashboards - can be limiting when it comes to making adjustments and replicating dashboards. This is why treating observability as code is going to become increasingly important. It makes sense - if infrastructure as code is the dominant way we think about building software, why shouldn't it be the way we monitor it too?

Run cost as architecture fitness function

There's a wide assumption that serverless can save you money. This is true when you're starting out, or want to do something quickly, but it's less true as you scale up. If you're using serverless functions repeatedly, you're likely to be paying a lot - more than if you had a slightly less fashionable cloud or on-premise server. To combat this complacency, you should instead watch how much services cost against the benefit they deliver. It seems obvious, but it's easy to miss if you've just got excited about going serverless.

Secrets as a service

Without wishing to dampen what sounds incredibly cool, secrets as a service is ultimately an elaborate password manager. It can help organizations more easily decouple credentials and API keys from their source code, a move which should ensure improved security - and simplicity.
By using credential rotation, organizations can be much better prepared to tackle and mitigate any security issues. AWS has Secrets Manager, while HashiCorp's Vault offers similar functionality.

Security chaos engineering

In the last edition of the Radar, security chaos engineering was in the assess phase - which means ThoughtWorks thought it was worth looking at, but maybe too early to deploy. With volume 19, security chaos engineering has moved into trial. Clearly, while chaos engineering more broadly has seen slower adoption, it would seem that over the last 12 months the security field has taken chaos engineering to heart.

2 new software engineering techniques to assess

Chaos katas

If chaos engineering is finding it hard to gain mainstream adoption, perhaps chaos katas are the way forward. This is essentially a technique that helps engineers deploy chaos practices in their respective domains using the training approach known as kata - a Japanese word that refers to a set of choreographed movements. In this context, the 'katas' are a set of code patterns that implement failures in a structured way, which engineers can then identify and explore. This is essentially a bottom-up way of doing chaos engineering that also gives engineers a deeper insight into their software infrastructure.

Infrastructure configuration scanner

The question of who should manage your infrastructure is still a tricky one, with plenty of conflicting perspectives. However, from a productivity and agility perspective, putting the infrastructure in the hands of engineers makes a lot of sense. Of course, this could feel like an extra burden - but with an infrastructure configuration scanner, like Scout2 or Watchmen, engineers can ensure that everything is configured correctly.

Software engineering techniques need to maintain simplicity as complexity increases

There's clearly a diverse range of techniques on the ThoughtWorks Radar. Ultimately, however, the picture that emerges is one where efficiency and observability are key. A crucial part of software engineering will be managing increased complexity and developing new tools and processes to instil some degree of simplicity and clarity. Was there anything ThoughtWorks missed?

Amazon splits HQ2 between New York and Washington, D.C. after making 200+ cities compete over a year; public sentiments largely negative

Sugandha Lahoti
14 Nov 2018
4 min read
Yesterday, Amazon announced that it will split its second headquarters between New York City and a suburb of Washington, D.C.: specifically, Long Island City, in Queens, and Crystal City, in Arlington, Virginia. Per Amazon, the company will invest $5 billion and create more than 50,000 jobs across the two new headquarters locations, with more than 25,000 employees each in New York City and Arlington. The new locations will join Seattle as the company's three headquarters in North America.

How did it all start?

The stage was set fourteen months ago when Amazon invited North American cities to compete to host its second headquarters. Every year, American cities and states spend up to $90 billion in tax breaks and cash grants to urge companies to move among states. After considering applications from nearly 238 cities, Amazon shocked everyone by finally choosing New York and D.C., two of the most sought-after cities in America. And to top that, the incentives the company will receive as part of the deals are sky-high. Amazon announced that, in New York, it will receive up to $1.2 billion in a refundable tax credit, tied to the creation of jobs, and a $325 million cash development grant. The company will meanwhile earn up to $573 million in cash grants for the Arlington investment if it creates the promised jobs.

https://twitter.com/ByRosenberg/status/1062394439102943232

What do the people think?

Amazon's decision was met with backlash and outrage from concerned citizens as well as other people on the internet, who found the decision extremely concerning.

https://twitter.com/Ocasio2018/status/1062204614496403457
https://twitter.com/SteveCase/status/1062399854905909248

Queens residents are also "outraged" at Amazon's plans, according to Rep.-elect Alexandria Ocasio-Cortez.

https://twitter.com/Ocasio2018/status/1062203458227503104

Many people have also released statements and drafted open letters to Jeff Bezos disagreeing with Amazon's decision and highlighting the problems associated with it.

https://twitter.com/NYCSpeakerCoJo/status/1062384477861748736

David Heinemeier Hansson, the creator of Ruby on Rails, has written an open letter to Jeff Bezos calling the Amazon HQ2 process demeaning, if not outright cruel. "At a time when politicians are viewed as more inept, more suspicious, and more corrupt than ever, you made city after city grovel in front of your selection committee. They debased themselves in a futile attempt to appeal to your grace and mercy, and you showed them little. The losers ended up worse than where they started, and even the winners may well too." He wrote, "At some point people are going to have had enough, and when they figure out a way to channel that discontent into political action, they're going to come looking for the heads of those that did them the most egregious wrongs."

How to control corporate giveaways?

"We need a national truce, both within states and between states," said Amy Liu, the director of the Metropolitan Policy Program at the Brookings Institution. "There should be no more poaching of private companies with public funds." The Atlantic highlights three ways the government could take control of corporate giveaways: First, Congress could pass a national law banning this sort of corporate bribery. Second, Congress could make corporate subsidies less valuable by threatening to tax state or local incentives as a special kind of income.
Finally, the federal government could actively discourage the culture of corporate subsidies through strict enforcement measures.

In another striking move, Democratic Assemblyman Ron Kim has announced a bill to block the Amazon deal and redirect the taxpayer subsidies earmarked for Jeff Bezos into reducing student debt. "Giving Jeff Bezos hundreds of millions of dollars is an immoral waste of taxpayers' money when it's crystal clear that the money would create more jobs and more economic growth when it is used to relieve student debt," said Kim.

Read Amazon's official press release to learn more about the new headquarters and the incentives Amazon will receive for them.

Amazon addresses employees dissent regarding the company's law enforcement policies at an all-staff meeting, in a first
Jeff Bezos: Amazon will continue to support U.S. Defense Department
Amazon increases the minimum wage of all employees in the US and UK


Brute forcing HTTP applications and web applications using Nmap [Tutorial]

Savia Lobo
11 Nov 2018
6 min read
Many home routers, IP webcams, and web applications still rely on HTTP authentication these days, and we, as system administrators or penetration testers, need to make sure that the system or user accounts are not using weak credentials. Now, thanks to the NSE script http-brute, we can perform robust dictionary attacks against HTTP basic, digest, and NTLM authentication.

This article is an excerpt taken from the book Nmap: Network Exploration and Security Auditing Cookbook - Second Edition, written by Paulino Calderon. This book includes the basic usage of Nmap and related tools like Ncat, Ncrack, Ndiff, and Zenmap, and much more. In this article, we will learn how to perform brute force password auditing against web servers that are using HTTP authentication, and also against popular and custom web applications, with Nmap.

Brute forcing HTTP applications

How to do it...

Use the following Nmap command to perform brute force password auditing against a resource protected by HTTP's basic authentication:

$ nmap -p80 --script http-brute <target>

The results will return all the valid accounts that were found (if any):

PORT   STATE SERVICE REASON
80/tcp open  http    syn-ack
| http-brute:
|   Accounts
|     admin:secret => Valid credentials
|   Statistics
|_    Performed 603 guesses in 7 seconds, average tps: 86

How it works...

The Nmap options -p80 --script http-brute tell Nmap to launch the http-brute script against the web server running on port 80. This script was originally committed by Patrik Karlsson, and it was created to launch dictionary attacks against URIs protected by HTTP authentication. The http-brute script uses, by default, the database files usernames.lst and passwords.lst located at /nselib/data/ to try each password for every user, hoping to find a valid account.

There's more...

The script http-brute depends on the NSE libraries unpwdb and brute. Read Appendix B, Brute Force Password Auditing Options, for more information.

To use different username and password lists, set the arguments userdb and passdb:

$ nmap -p80 --script http-brute --script-args userdb=/var/usernames.txt,passdb=/var/passwords.txt <target>

To quit after finding one valid account, use the argument brute.firstOnly:

$ nmap -p80 --script http-brute --script-args brute.firstOnly <target>

By default, http-brute uses Nmap's timing template to set the following timeout limits:

-T3, -T2, -T1: 10 minutes
-T4: 5 minutes
-T5: 3 minutes

To set a different timeout limit, use the argument unpwdb.timelimit. To run it indefinitely, set it to 0:

$ nmap -p80 --script http-brute --script-args unpwdb.timelimit=0 <target>
$ nmap -p80 --script http-brute --script-args unpwdb.timelimit=60m <target>

Brute modes

The brute library supports different modes that alter the combinations used in the attack. The available modes are:

user: In this mode, for each user listed in userdb, every password in passdb will be tried:

$ nmap --script http-brute --script-args brute.mode=user <target>

pass: In this mode, for each password listed in passdb, every user in userdb will be tried:

$ nmap --script http-brute --script-args brute.mode=pass <target>

creds: This mode requires the additional argument brute.credfile:

$ nmap --script http-brute --script-args brute.mode=creds,brute.credfile=./creds.txt <target>

Brute forcing web applications

Performing brute force password auditing against web applications is an essential step in evaluating the password strength of system accounts.
There are powerful tools such as THC Hydra, but Nmap offers great flexibility as it is fully configurable and contains a database of popular web applications, such as WordPress, Joomla!, Django, Drupal, MediaWiki, and WebSphere.

How to do it...

Use the following Nmap command to perform brute force password auditing against web applications using forms:

$ nmap --script http-form-brute -p 80 <target>

If credentials are found, they will be shown in the results:

PORT   STATE SERVICE REASON
80/tcp open  http    syn-ack
| http-form-brute:
|   Accounts
|     user:secret - Valid credentials
|   Statistics
|_    Performed 60023 guesses in 467 seconds, average tps: 138

How it works...

The Nmap options -p80 --script http-form-brute tell Nmap to launch the http-form-brute script against the web server running on port 80. This script was originally committed by Patrik Karlsson, and it was created to launch dictionary attacks against authentication systems based on web forms. The script automatically attempts to detect the form fields required to authenticate, and it internally uses a database of popular web applications to help during the form detection phase.

There's more...

The script http-form-brute depends on the correct detection of the form fields. Often, you will be required to manually set, via script arguments, the names of the fields holding the username and password variables. If the script argument http-form-brute.passvar is set, form detection will not be performed:

$ nmap -p80 --script http-form-brute --script-args http-form-brute.passvar=contrasenia,http-form-brute.uservar=usuario <target>

In a similar way, you will often need to set the script arguments http-form-brute.onsuccess or http-form-brute.onfailure to define the success/error messages returned when attempting to authenticate:

$ nmap -p80 --script http-form-brute --script-args http-form-brute.onsuccess=Exito <target>

Brute forcing WordPress installations

If you are targeting a popular application, remember to check whether there are any NSE scripts specialized in attacking it. For example, WordPress installations can be audited with the script http-wordpress-brute:

$ nmap -p80 --script http-wordpress-brute <target>

To set the number of threads, use the script argument http-wordpress-brute.threads:

$ nmap -p80 --script http-wordpress-brute --script-args http-wordpress-brute.threads=5 <target>

If the server has virtual hosting, set the host field using the argument http-wordpress-brute.hostname:

$ nmap -p80 --script http-wordpress-brute --script-args http-wordpress-brute.hostname="ahostname.wordpress.com" <target>

To set a different login URI, use the argument http-wordpress-brute.uri:

$ nmap -p80 --script http-wordpress-brute --script-args http-wordpress-brute.uri="/hidden-wp-login.php" <target>

To change the names of the POST variables that store the usernames and passwords, set the arguments http-wordpress-brute.uservar and http-wordpress-brute.passvar:

$ nmap -p80 --script http-wordpress-brute --script-args http-wordpress-brute.uservar=usuario,http-wordpress-brute.passvar=pasguord <target>

Brute forcing Joomla! installations

Another good example of a specialized NSE brute force script is http-joomla-brute. This script is designed to perform brute force password auditing against Joomla! installations. By default, our generic brute force script for HTTP will fail against the Joomla! CMS since the application dynamically generates a security token, but this NSE script will automatically fetch it and include it in the login requests.
Use the following Nmap command to launch the script:

$ nmap -p80 --script http-joomla-brute <target>

To set the number of threads, use the script argument http-joomla-brute.threads:

$ nmap -p80 --script http-joomla-brute --script-args http-joomla-brute.threads=5 <target>

To change the names of the POST variables that store the login information, set the arguments http-joomla-brute.uservar and http-joomla-brute.passvar:

$ nmap -p80 --script http-joomla-brute --script-args http-joomla-brute.uservar=usuario,http-joomla-brute.passvar=pasguord <target>

To summarize, we learned how to perform brute force password auditing against web servers and custom web applications with Nmap. If you've enjoyed reading this post, do check out our book, Nmap: Network Exploration and Security Auditing Cookbook - Second Edition, to learn about Lua programming and NSE script development, which will allow you to further extend the power of Nmap.

Discovering network hosts with 'TCP SYN' and 'TCP ACK' ping scans in Nmap [Tutorial]
Introduction to the Nmap Scripting Engine
Exploring the Nmap Scripting Engine API and Libraries
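As a closing example (an addition, not from the book), the generic userdb and passdb arguments shown earlier for http-brute also work with the specialized scripts, since they are built on the same brute/unpwdb libraries. Assuming custom wordlists at the hypothetical paths below, a WordPress audit with five threads could look like this:

$ nmap -p80 --script http-wordpress-brute --script-args userdb=/var/usernames.txt,passdb=/var/passwords.txt,http-wordpress-brute.threads=5 <target>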


Discovering network hosts with 'TCP SYN' and 'TCP ACK' ping scans in Nmap [Tutorial]

Savia Lobo
09 Nov 2018
8 min read
Ping scans are used for detecting live hosts in networks. Nmap's default ping scan (-sP) sends TCP SYN, TCP ACK, and ICMP packets to determine if a host is responding, but if a firewall is blocking these requests, the host will be treated as offline. Fortunately, Nmap supports a scanning technique named the TCP SYN ping scan that is very handy for probing different ports in an attempt to determine if a host is online, or at least has more permissive filtering rules. Similar to the TCP SYN ping scan, the TCP ACK ping scan is used to determine if a host is responding. It can be used to detect hosts that block SYN packets or ICMP echo requests, but it will most likely be blocked by modern firewalls that track connection states, because it sends bogus TCP ACK packets associated with non-existing connections.

This article is an excerpt taken from the book Nmap: Network Exploration and Security Auditing Cookbook - Second Edition, written by Paulino Calderon. In this book, you will be introduced to the most powerful features of Nmap and related tools, common security auditing tasks for local and remote networks, web applications, databases, mail servers, and much more. This post will talk about the TCP SYN and TCP ACK ping scans and their related options.

Discovering network hosts with TCP SYN ping scans

How to do it...

Open your terminal and enter the following command:

# nmap -sn -PS <target>

You should see the list of hosts found in the target range using TCP SYN ping scanning:

# nmap -sn -PS 192.168.0.1/24
Nmap scan report for 192.168.0.1
Host is up (0.060s latency).
Nmap scan report for 192.168.0.2
Host is up (0.0059s latency).
Nmap scan report for 192.168.0.3
Host is up (0.063s latency).
Nmap scan report for 192.168.0.5
Host is up (0.062s latency).
Nmap scan report for 192.168.0.7
Host is up (0.063s latency).
Nmap scan report for 192.168.0.22
Host is up (0.039s latency).
Nmap scan report for 192.168.0.59
Host is up (0.00056s latency).
Nmap scan report for 192.168.0.60
Host is up (0.00014s latency).
Nmap done: 256 IP addresses (8 hosts up) scanned in 8.51 seconds

How it works...

The -sn option tells Nmap to skip the port scanning phase and only perform host discovery. The -PS flag tells Nmap to use a TCP SYN ping scan. This type of ping scan works in the following way: Nmap sends a TCP SYN packet to port 80. If the port is closed, the host responds with an RST packet. If the port is open, the host responds with a TCP SYN/ACK packet indicating that a connection can be established; afterward, an RST packet is sent to reset this connection. The CIDR /24 in 192.168.0.1/24 is used to indicate that we want to scan all of the 256 IPs in our local network.

There's more...

TCP SYN ping scans can be very effective for determining if hosts are alive on networks. Although Nmap sends more probes by default, this is configurable. Now it is time to learn more about discovering hosts with TCP SYN ping scans.

Privileged versus unprivileged TCP SYN ping scans

Running a TCP SYN ping scan as an unprivileged user who can't send raw packets makes Nmap use the connect() system call to send the TCP SYN packet. In this case, Nmap distinguishes a SYN/ACK packet when the function returns successfully, and an RST packet when it receives an ECONNREFUSED error message.

Firewalls and traffic filtering

A lot of systems are protected by some kind of traffic filtering, so it is important to always try different ping scanning techniques.
In the following example, we will scan a host that is online but gets marked as offline because it sits behind a traffic filtering system that does not allow TCP ACK or ICMP requests:

# nmap -sn 0xdeadbeefcafe.com
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 4.68 seconds

# nmap -sn -PS 0xdeadbeefcafe.com
Nmap scan report for 0xdeadbeefcafe.com (52.20.139.72)
Host is up (0.062s latency).
rDNS record for 52.20.139.72: ec2-52-20-139-72.compute-1.amazonaws.com
Nmap done: 1 IP address (1 host up) scanned in 0.10 seconds

During a TCP SYN ping scan, Nmap uses the SYN/ACK and RST responses to determine if the host is responding. It is important to note that some firewalls are configured to drop RST packets. In this case, the TCP SYN ping scan will fail unless we send the probes to an open port:

# nmap -sn -PS80 <target>

You can set the port list to be used with -PS (port list or range) as follows:

# nmap -sn -PS80,21,53 <target>
# nmap -sn -PS1-1000 <target>
# nmap -sn -PS80,100-1000 <target>

Discovering hosts with TCP ACK ping scans

How to do it...

Open your terminal and enter the following command:

# nmap -sn -PA <target>

The result is a list of hosts that responded to the TCP ACK packets sent and are, therefore, online:

# nmap -sn -PA 192.168.0.1/24
Nmap scan report for 192.168.0.1
Host is up (0.060s latency).
Nmap scan report for 192.168.0.60
Host is up (0.00014s latency).
Nmap done: 256 IP addresses (2 hosts up) scanned in 6.11 seconds

How it works...

The -sn option tells Nmap to skip the port scan phase and only perform host discovery. The -PA flag tells Nmap to use a TCP ACK ping scan. A TCP ACK ping scan works in the following way: Nmap sends an empty TCP packet with the ACK flag set to port 80 (the default port, but an alternate port list can be assigned). If the host is offline, it should not respond to this request. Otherwise, it will return an RST packet and will be treated as online. RST packets are sent because the TCP ACK packet sent is not associated with an existing valid connection.

There's more...

TCP ACK ping scans use port 80 by default, but this behavior can be configured. This scanning technique also requires privileges to create raw packets. Now we will learn more about the scan's limitations and configuration options.

Privileged versus unprivileged TCP ACK ping scans

TCP ACK ping scans need to be run as a privileged user. Otherwise, a connect() system call is used to send an empty TCP SYN packet. Hence, TCP ACK ping scans will not use the TCP ACK technique previously discussed when run as an unprivileged user; a TCP SYN ping scan will be performed instead.

Selecting ports in TCP ACK ping scans

In addition, you can select the ports to be probed using this technique by listing them after the -PA flag:

# nmap -sn -PA21,22,80 <target>
# nmap -sn -PA80-150 <target>
# nmap -sn -PA22,1000-65535 <target>

Discovering hosts with UDP ping scans

Ping scans are used to determine if a host is responding and can be considered online. UDP ping scans have the advantage of being capable of detecting systems behind firewalls with strict TCP filtering that have left UDP exposed. This next recipe describes how to perform a UDP ping scan with Nmap and its related options.

How to do it...
Open your terminal and enter the following command:

# nmap -sn -PU <target>

Nmap will determine if the target is reachable using a UDP ping scan:

# nmap -sn -PU scanme.nmap.org
Nmap scan report for scanme.nmap.org (45.33.32.156)
Host is up (0.13s latency).
Other addresses for scanme.nmap.org (not scanned): 2600:3c01::f03c:91ff:fe18:bb2f
Nmap done: 1 IP address (1 host up) scanned in 7.92 seconds

How it works...

The -sn option tells Nmap to skip the port scan phase but perform host discovery. In combination with the -PU flag, Nmap uses UDP ping scanning. The technique used by a UDP ping scan works as follows: Nmap sends an empty UDP packet to port 40125. If the host is online, it should return an ICMP port unreachable error. If the host is offline, various ICMP error messages could be returned.

There's more...

Services that do not respond to empty UDP packets will generate false positives when probed. These services will simply ignore the UDP packets, and the host will be incorrectly marked as offline. Therefore, it is important that we select ports that are closed for better results.

Selecting ports in UDP ping scans

To specify the ports to be probed, add them after the -PU flag, as follows:

# nmap -sn -PU1337,11111 scanme.nmap.org
# nmap -sn -PU1337 scanme.nmap.org
# nmap -sn -PU1337-1339 scanme.nmap.org

Thus, in this post, we saw how network hosts can be discovered using TCP SYN and TCP ACK ping scans. If you've enjoyed reading this post and want to learn how to discover hosts using other ping scans, such as ICMP, SCTP INIT, and IP protocol ping scans, head over to our book, Nmap: Network Exploration and Security Auditing Cookbook - Second Edition.

Docker Multi-Host Networking Experiments on Amazon AWS
Hosting the service in IIS using the TCP protocol
FreeRTOS affected by 13 vulnerabilities in its TCP/IP stack
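One practical note (an addition, not from the book): the probe types covered above can be combined in a single host discovery run, so that a host filtered against one probe type may still answer another. A sketch using only the flags introduced in this article:

# nmap -sn -PS80,443 -PA80 -PU <target>

Here -PS and -PA send TCP SYN and TCP ACK probes to the listed ports, while -PU sends a UDP probe to the default port mentioned above; Nmap marks the host as online if any of the probes gets a response.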

#GoogleWalkout demanded a ‘truly equitable culture for everyone’; Pichai shares a “comprehensive” plan for employees to safely report sexual harassment

Melisha Dsouza
09 Nov 2018
4 min read
Last week, 20,000 Google employees, along with temps, vendors, and contractors, walked out to protest the discrimination, racism, and sexual harassment that they encountered at Google's workplace. This global walkout by Google workers was a response to the New York Times report on Google published last month, which described how the company shielded senior executives accused of sexual misconduct. Yesterday, Google addressed these demands in a note written by Sundar Pichai to employees. He admits that they have "not always gotten everything right in the past" and that they are "sincerely sorry" for the same. This supposedly 'comprehensive' plan will provide more transparency into how employees raise concerns and how Google will handle them.

Here are some of the major changes that caught our attention:

Following suit after Uber and Microsoft, Google has eliminated forced arbitration in cases of sexual harassment.

To foster more transparency in reporting a sexual harassment case, employees can now bring a support person to meetings with HR.

Google is planning to update and expand its mandatory sexual harassment training, which will now be conducted annually instead of once every two years. If an employee fails to complete the training, they will receive a one-rating dock in the employee performance review system. This applies to senior management as well, where they could be downgraded from 'exceeds expectation' to 'meets expectation'.

Google will increase its focus on diversity, equity, and inclusion in 2019, through hiring, progression, and retention, in order to create a more inclusive culture for everyone.

Google found that one of the most common factors among the harassment complaints is that the perpetrator had been under the influence of alcohol (~20% of cases). Restating the policy, the plan mentions that excessive consumption of alcohol is not permitted when an employee is at work, performing Google business, or attending a Google-related event, whether onsite or offsite. Going forward, all leaders at the company will be expected to create teams, events, offsites, and environments in which excessive alcohol consumption is strongly discouraged. They will be expected to follow the two-drink rule.

Although the plan is a step towards making workplace conditions more stable, it does leave out some of the more fundamental concerns related to structural changes raised by the organizers of the Google Walkout; for example, the structural inequity that separates 'full time' employees from contract workers. Contract workers make up more than half of Google's workforce and perform essential roles across the company. However, they receive few of the benefits associated with tech company employment. They are also largely women, people of color, immigrants, and people from working class backgrounds.

"We demand a truly equitable culture, and Google leadership can achieve this by putting employee representation on the board and giving full rights and protections to contract workers, our most vulnerable workers, many of whom are Black and Brown women." - Google Walkout organizer Stephanie Parker

Google's plan to bring transparency to the workplace looks like a positive step towards improving its workplace culture. It will be interesting to see how the plan works out for Google's employees, as well as whether other organizations use it as an example for maintaining a peaceful workplace environment for their workers.
You can head over to Medium.com to read the #GoogleWalkout organizers' response to the update. Head over to Pichai's blog post for details on the announcement itself.

Technical and hidden debts in machine learning - Google engineers give their perspective
90% of Google Play apps contain third-party trackers, share user data with Alphabet, Facebook, Twitter, etc: Oxford University Study
OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?


UN on Web Summit 2018: How we can create a safe and beneficial digital future for all

Bhagyashree R
07 Nov 2018
4 min read
On Monday, at the opening ceremony of Web Summit 2018, Antonio Guterres, the secretary general of the United Nations (UN), spoke about the benefits and challenges that come with cutting-edge technologies. Guterres highlighted that the pace of change is so quick that trends such as blockchain, IoT, and artificial intelligence can move from the cutting edge to the mainstream in no time.

Guterres was quick to pay tribute to technological innovation, detailing some of the ways it is helping UN organizations improve the lives of people all over the world. For example, UNICEF is now able to map connectivity for schools in remote areas, and the World Food Programme is using blockchain to make transactions more secure, efficient, and transparent. But these innovations nevertheless pose risks and create new challenges that we need to overcome.

Three key technological challenges the UN wants to tackle

Guterres identified three key challenges for the planet. Together they help inform a broader plan of what needs to be done.

The social impact of the third and fourth industrial revolutions

With the introduction of new technologies, in the next few decades we will see the creation of thousands of new jobs. These will be very different from what we are used to today and will likely require retraining and upskilling. This will be critical as many traditional jobs will be automated. Guterres believes that the consequences of unemployment caused by automation could be incredibly disruptive - maybe even destructive - for societies. He further added that we are not preparing fast enough to match the speed of these growing technologies. As a solution to this, Guterres said: "We will need to make massive investments in education but a different sort of education. What matters now is not to learn things but learn how to learn things."

While many professionals will be able to acquire the skills to become employable in the future, some will inevitably be left behind. To minimize the impact of these changes, safety nets will be essential to help millions of citizens transition into this new world and bring new meaning and purpose into their lives.

Misuse of the internet

The internet has connected the world in ways people wouldn't have thought possible a generation ago. But it has also opened up a whole new channel for hate speech, fake news, censorship, and control. The internet certainly isn't creating many of the challenges facing civic society on its own - but it won't be able to solve them on its own either. On this, Guterres said: "We need to mobilise the government, civil society, academia, scientists in order to be able to avoid the digital manipulation of elections, for instance, and create some filters that are able to block hate speech to move and to be a factor of the instability of societies."

The problem of control

Automation and AI pose risks that exceed the challenges of the third and fourth industrial revolutions. They also create urgent ethical dilemmas, forcing us to ask exactly what artificial intelligence should be used for. Smarter weapons might be a good idea if you're an arms manufacturer, but there needs to be a wider debate that takes in broader concerns and issues. "The weaponization of artificial intelligence is a serious danger and the prospects of machines that have the capacity by themselves to select and destroy targets is creating enormous difficulties or will create enormous difficulties," Guterres remarked. His solution might seem radical, but it's also simple: ban them.
He went on to explain: "To avoid the escalation in conflict and guarantee that international military laws and human rights are respected in the battlefields, machines that have the power and the discretion to take human lives are politically unacceptable, are morally repugnant and should be banned by international law."

How we can address these problems

Typical forms of regulation can help to a certain extent, as in the case of weaponization. But these cases are limited. In the majority of circumstances, technologies move so fast that legislation simply cannot keep up in any meaningful way. This is why we need to create platforms where governments, companies, academia, and civil society can come together to discuss and find ways that allow digital technologies to be "a force for good". You can watch Antonio Guterres' full talk on YouTube.

Tim Berners-Lee is on a mission to save the web he invented
MEPs pass a resolution to ban "Killer robots"
In 5 years, machines will do half of our job tasks of today; 1 in 2 employees need reskilling/upskilling now - World Economic Forum survey