
How-To Tutorials


How to navigate files in a Vue app using the Dropbox API

Pravin Dhandre
05 Jun 2018
13 min read
The Dropbox API is a set of HTTP endpoints that help apps integrate with Dropbox easily. It allows developers to work with files stored in Dropbox and provides access to advanced functionality such as full-text search, thumbnails, and sharing. In this article we will use the Dropbox API functionalities listed below to navigate and query your files and folders:

Loading and querying the Dropbox API
Listing the directories and files from your Dropbox account
Adding a loading state to your app
Using Vue animations

To get started you will need a Dropbox account; then follow the steps detailed in this article. If you don't have one, sign up and add a few dummy files and folders. The content of your Dropbox doesn't matter, but having folders to navigate through will help with understanding the code.

This article is an excerpt from a book written by Mike Street titled Vue.js 2.x by Example. This book puts Vue.js into a real-world context, guiding you through example projects to help you build Vue.js applications from scratch.

Getting started - loading the libraries

Create a new HTML page for your app to run in. Create the HTML structure required for a web page and include your app view wrapper. It's called #app here, but call it whatever you want - just remember to update the JavaScript. As our app code is going to get quite chunky, make a separate JavaScript file and include it at the bottom of the document. You will also need to include Vue and the Dropbox API SDK. As before, you can either reference the remote files or download a local copy of the library files. Download a local copy for both speed and compatibility reasons. Include your three JavaScript files at the bottom of your HTML file. Create your app.js and initialize a new Vue instance, using the el property to mount the instance onto the ID in your view.

Creating a Dropbox app and initializing the SDK

Before we interact with the Vue instance, we need to connect to the Dropbox API through the SDK. This is done via an API key generated by Dropbox to keep track of what is connecting to your account; for this, Dropbox requires you to create a custom Dropbox app. Head to the Dropbox developers area and select Create your app. Choose Dropbox API and select either a restricted folder or full access. This depends on your needs, but for testing, choose Full Dropbox. Give your app a name and click the button Create app. Generate an access token for your app. To do so, when viewing the app details page, click the Generate button under Generated access token. This will give you a long string of numbers and letters - copy and paste that into your editor and store it as a variable at the top of your JavaScript. In this article the API key will be referred to as XXXX. Now that we have our API key, we can access the files and folders from our Dropbox. Initialize the API and pass your accessToken variable to the accessToken property of the Dropbox API. We now have access to Dropbox via the dbx variable. We can verify our connection to Dropbox is working by connecting and outputting the contents of the root path. This code uses JavaScript promises, which are a way of adding actions to code without requiring callback functions. Take note of the first line, particularly the path variable. This lets us pass in a folder path to list the files and folders within that directory.
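The book's code listings were published as images and are not reproduced in this excerpt; as a rough sketch of the setup just described (the XXXX token is a placeholder, and depending on your SDK version the constructor may be Dropbox.Dropbox rather than Dropbox), it might look like this:

```js
// Minimal sketch, assuming the Vue and Dropbox SDK <script> tags are already included.
const accessToken = 'XXXX'; // the generated access token from your Dropbox app

// Initialize the SDK with the access token
const dbx = new Dropbox({
  accessToken: accessToken
});

// Verify the connection by listing the contents of the root path.
// An empty string means the root of your Dropbox; '/images' would list that folder instead.
dbx.filesListFolder({ path: '' })
  .then(response => {
    console.log(response.entries);
  })
  .catch(error => {
    console.error(error);
  });
```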
For example, if you had a folder called images in your Dropbox, you could change the parameter value to /images and the file list returned would be the files and folders within that directory. Open your JavaScript console and check the output; you should get an array containing several objects - one for each file or folder in the root of your Dropbox.

Displaying your data and using Vue to get it

Now that we can retrieve our data using the Dropbox API, it's time to retrieve it within our Vue instance and display it in our view. This app is going to be built entirely using components so we can take advantage of the compartmentalized data and methods. It also means the code is modular and shareable, should you want to integrate it into other apps. We are also going to take advantage of the native Vue created() function - we'll cover when it gets triggered in a bit.

Create the component

First off, create your custom HTML element, <dropbox-viewer>, in your view. Create a <script> template block at the bottom of the page for our HTML layout. Initialize your component in your app.js file, pointing it to the template ID. Viewing the app in the browser should show the heading from the template. The next step is to integrate the Dropbox API into the component.

Retrieve the Dropbox data

Create a new method called dropbox. In there, move the code that calls the Dropbox class and returns the instance. This will now give us access to the Dropbox API through the component by calling this.dropbox(). We are also going to integrate our API key into the component. Create a data function that returns an object containing your access token. Update the dropbox method to use the local version of the key. We now need to add the ability for the component to get the directory list. For this, we are going to create another method that takes a single parameter—the path. This will give us the ability later to request the structure of a different path or folder if required. Use the code provided earlier, changing the dbx variable to this.dropbox(). Update the Dropbox filesListFolder function to accept the path parameter passed in, rather than a fixed value. Running this app in the browser will show the Dropbox heading, but won't retrieve any folders because the methods have not been called yet.

The Vue life cycle hooks

This is where the created() function comes in. The created() function gets called once the Vue instance has initialized the data and methods, but has yet to mount the instance on the HTML component. There are several other functions available at various points in the life cycle; more about these can be read at Alligator.io. Using the created() function gives us access to the methods and data while being able to start our retrieval process as Vue is mounting the app. The time between these various stages is split-second, but every moment counts when it comes to performance and creating a quick app. There is no point waiting for the app to be fully mounted before processing data if we can start the task early. Create the created() function on your component and call the getFolderStructure method, passing in an empty string for the path to get the root of your Dropbox. Running the app now in your browser will output the folder list to your console, which should give the same result as before. We now need to display our list of files in the view. To do this, we are going to create an empty array in our component and populate it with the result of our Dropbox query.
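Again as a hedged sketch rather than the book's exact listing (the template ID and the method bodies are assumptions based on the description above), the component at this stage might look like this:

```js
// Hypothetical sketch of the dropbox-viewer component described above.
Vue.component('dropbox-viewer', {
  template: '#dropbox-viewer-template',

  data() {
    return {
      accessToken: 'XXXX' // your generated token
    };
  },

  methods: {
    // Return a Dropbox SDK instance using the component's own token
    dropbox() {
      return new Dropbox({ accessToken: this.accessToken });
    },

    // Retrieve the contents of a given path ('' means the root of your Dropbox)
    getFolderStructure(path) {
      this.dropbox().filesListFolder({ path: path })
        .then(response => {
          console.log(response.entries);
        })
        .catch(error => {
          console.error(error);
        });
    }
  },

  // Runs once data and methods are initialized, before the instance is mounted
  created() {
    this.getFolderStructure('');
  }
});

new Vue({
  el: '#app'
});
```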
This has the advantage of giving Vue a variable to loop through in the view, even before it has any content.

Displaying the Dropbox data

Create a new property in your data object titled structure, and assign this to an empty array. In the response function of the folder retrieval, assign response.entries to this.structure. Leave the console.log in place, as we will need to inspect the entries to work out what to output in our template. We can now update our view to display the folders and files from your Dropbox. As the structure array is available in our view, create a <ul> with a repeatable <li> looping through the structure. As we are now adding a second element, and Vue requires templates to have a single containing element, wrap your heading and list in a <div>. Viewing the app in the browser will show a number of empty bullet points, while the array appears in the JavaScript console. To work out which fields and properties you can display, expand the array in the JavaScript console and then expand each object. You should notice that each object has a collection of similar properties and a few that vary between folders and files. The first property, .tag, helps us identify whether the item is a file or a folder. Both types then have the following properties in common:

id: A unique identifier used by Dropbox
name: The name of the file or folder, irrespective of where the item is
path_display: The full path of the item, with the case matching that of the files and folders
path_lower: Same as path_display, but all lowercase

Items with a .tag of file also contain several more fields for us to display:

client_modified: The date when the file was added to Dropbox
content_hash: A hash of the file, used for identifying whether it is different from a local or remote copy. More can be read about this on the Dropbox website
rev: A unique identifier of the version of the file
server_modified: The last time the file was modified on Dropbox
size: The size of the file in bytes

To begin with, we are going to display the name of the item and the size, if present. Update the list item to show these properties.

More file meta information

To make our file and folder view a bit more useful, we can add richer content and metadata to files such as images. These details are made available by enabling the include_media_info option in the Dropbox API. Head back to your getFolderStructure method and add the parameter after path, with some new lines added for readability. Inspecting the results from this new API call will reveal the media_info key for videos and images. Expanding this will reveal several more pieces of information about the file, for example, dimensions. If you want to add these, you will need to check that the media_info object exists before displaying the information. Try updating the path when retrieving the data from Dropbox. For example, if you have a folder called images, change the this.getFolderStructure parameter to /images. If you're not sure what the path is, analyze the data in the JavaScript console and copy the value of the path_lower attribute of a folder.

Formatting the file sizes

With the file size being output in plain bytes, it can be quite hard for a user to decipher. To combat this, we can add a formatting method to output a file size which is more user-friendly, for example displaying 1kb instead of 1024.
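Before moving on to the size formatting, here is a rough sketch of how the view template described in this section might look; the template ID, the markup, and the media_info check are assumptions rather than the original listing:

```html
<!-- Hypothetical sketch of the view and component template described above -->
<div id="app">
  <dropbox-viewer></dropbox-viewer>
</div>

<script type="text/x-template" id="dropbox-viewer-template">
  <div>
    <h1>Dropbox</h1>
    <ul>
      <!-- Loop through the entries returned by filesListFolder -->
      <li v-for="entry in structure">
        <strong>{{ entry.name }}</strong>
        <!-- Only files carry a size -->
        <span v-if="entry.size"> - {{ entry.size }} bytes</span>
        <!-- Dimensions are only present when include_media_info returns them -->
        <span v-if="entry.media_info">
          ({{ entry.media_info.metadata.dimensions.width }} x {{ entry.media_info.metadata.dimensions.height }})
        </span>
      </li>
    </ul>
  </div>
</script>
```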
First, create a new key on the data object that contains an array of units called byteSizes. This is what will get appended to the figure, so feel free to make these properties either lowercase or full words, for example, megabyte. Next, add a new method, bytesToSize, to your component. This will take one parameter of bytes and output a formatted string with the unit at the end. We can now utilize this method in our view.

Add a loading screen

The last step of this tutorial is to make a loading screen for our app. This will tell the user the app is loading, should the Dropbox API be running slowly (or you have a lot of data to show!). The theory behind this loading screen is fairly basic. We will set a loading variable to true by default, which then gets set to false once the data has loaded. Based on the result of this variable, we will utilize view attributes to show, and then hide, an element with the loading text or animation in it, and also reveal the loaded data list. Create a new key in the data object titled isLoading. Set this variable to true by default. Within the getFolderStructure method on your component, set the isLoading variable to false. This should happen within the promise, after you have set the structure. We can now utilize this variable in our view to show and hide a loading container. Create a new <div> before the unordered list containing some loading text. Feel free to add a CSS animation or an animated gif—anything to let the user know the app is retrieving data. We now need to show the loading div only while the app is loading, and the list only once the data has loaded. As this is just one change to the DOM, we can use the v-if directive. To give you the freedom of rearranging the HTML, add the attribute to both elements instead of using v-else. To show or hide, we just need to check the status of the isLoading variable. We can prepend an exclamation mark to the list's directive so it only shows if the app is not loading. Our app should now show the loading container once mounted, and then show the list once the app data has been gathered.

Animating between states

As a nice enhancement for the user, we can add some transitions between components and states. Helpfully, Vue includes some built-in transition effects. Working with CSS, these transitions allow you to add fades, swipes, and other CSS animations easily when DOM elements are being inserted. More information about transitions can be found in the Vue documentation. The first step is to add the Vue custom HTML <transition> element. Wrap both your loading and list elements with separate transition elements and give each an attribute of name with a value of fade. Now add the corresponding CSS to either the head of your document or a separate style sheet, if you already have one. With the transition element, Vue adds and removes various CSS classes based on the state and time of the transition. All of these begin with the name passed in via the attribute and are appended with the current stage of the transition. Try the app in your browser; you should notice the loading container fading out and the file list fading in. Although, in this basic example, the list jumps up once the fading has completed, it's an example to help you understand using transitions in Vue.

We learned to make a Dropbox viewer that lists files and folders from a Dropbox account, and we also used Vue animations for navigation. Do check out the book Vue.js 2.x by Example to start integrating remote data into your web applications swiftly.
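To recap in rough form, pulling together everything described in this walkthrough, the component might now look something like the sketch below; the byteSizes values, the rounding, and other details are my own choices where the original listing was not reproduced. The fade transition mentioned above relies on Vue adding classes such as fade-enter-active, fade-leave-active, fade-enter, and fade-leave-to, which is where the opacity CSS would go.

```js
// Hypothetical recap sketch - an approximation of the component, not the book's exact code.
Vue.component('dropbox-viewer', {
  template: '#dropbox-viewer-template',

  data() {
    return {
      accessToken: 'XXXX',
      structure: [],
      isLoading: true,
      byteSizes: ['Bytes', 'KB', 'MB', 'GB', 'TB']
    };
  },

  methods: {
    dropbox() {
      return new Dropbox({ accessToken: this.accessToken });
    },

    getFolderStructure(path) {
      this.dropbox().filesListFolder({
        path: path,
        include_media_info: true
      })
        .then(response => {
          this.structure = response.entries;
          this.isLoading = false; // hide the loading div and reveal the list
        })
        .catch(error => console.error(error));
    },

    // Convert a raw byte count into a readable string, e.g. 1024 -> "1 KB"
    bytesToSize(bytes) {
      if (bytes === 0) {
        return '0 Bytes';
      }
      const i = Math.floor(Math.log(bytes) / Math.log(1024));
      return Math.round(bytes / Math.pow(1024, i)) + ' ' + this.byteSizes[i];
    }
  },

  created() {
    this.getFolderStructure('');
  }
});
```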
Building a real-time dashboard with Meteor and Vue.js
Building your first Vue.js 2 Web application
Installing and Using Vue.js


How TFLearn makes building TensorFlow models easier

Savia Lobo
04 Jun 2018
7 min read
Today, we will introduce you to TFLearn, and will create layers and models that are directly useful in any model implementation with TensorFlow. TFLearn is a modular library in Python that is built on top of core TensorFlow.

Note: This article is an excerpt taken from the book Mastering TensorFlow 1.x written by Armando Fandango. In this book, you will learn how to build TensorFlow models to work with multilayer perceptrons using Keras, TFLearn, and R.

TIP: TFLearn is different from the TensorFlow Learn package, which is also known as TF Learn (with one space between TF and Learn). It is available at the following link, and the source code is available on GitHub.

TFLearn can be installed in Python 3 with the following command:

pip3 install tflearn

Note: To install TFLearn in other environments or from source, please refer to the following link: http://tflearn.org/installation/

The simple workflow in TFLearn is as follows:

1. Create an input layer first.
2. Pass the input object to create further layers.
3. Add the output layer.
4. Create the net using an estimator layer such as regression.
5. Create a model from the net created in the previous step.
6. Train the model with the model.fit() method.
7. Use the trained model to predict or evaluate.

Creating the TFLearn Layers

Let us learn how to create the layers of the neural network models in TFLearn.

Create an input layer first:

input_layer = tflearn.input_data(shape=[None, num_inputs])

Pass the input object to create further layers:

layer1 = tflearn.fully_connected(input_layer, 10, activation='relu')
layer2 = tflearn.fully_connected(layer1, 10, activation='relu')

Add the output layer:

output = tflearn.fully_connected(layer2, n_classes, activation='softmax')

Create the final net from the estimator layer such as regression:

net = tflearn.regression(output,
                         optimizer='adam',
                         metric=tflearn.metrics.Accuracy(),
                         loss='categorical_crossentropy')

TFLearn provides several classes for layers, described in the following sub-sections.

TFLearn core layers

TFLearn offers the following layers in the tflearn.layers.core module:

input_data: This layer is used to specify the input layer for the neural network.
fully_connected: This layer is used to specify a layer where all the neurons are connected to all the neurons in the previous layer.
dropout: This layer is used to specify dropout regularization. The input elements are scaled by 1/keep_prob while keeping the expected sum unchanged.
custom_layer: This layer is used to specify a custom function to be applied to the input. This class wraps our custom function and presents the function as a layer.
reshape: This layer reshapes the input into the output of the specified shape.
flatten: This layer converts the input tensor to a 2D tensor.
activation: This layer applies the specified activation function to the input tensor.
single_unit: This layer applies the linear function to the inputs.
highway: This layer implements the fully connected highway function.
one_hot_encoding: This layer converts numeric labels to their binary vector one-hot encoded representations.
time_distributed: This layer applies the specified function to each time step of the input tensor.
multi_target_data: This layer creates and concatenates multiple placeholders, specifically used when the layers use targets from multiple sources.
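Putting the seven workflow steps and the layer-creation code above together, a minimal end-to-end sketch might look like the following; the MNIST data loading, the input size of 784, and the hyperparameters are illustrative assumptions rather than the book's exact listing:

```python
# Hypothetical end-to-end sketch of the TFLearn workflow described above.
import tflearn
import tflearn.datasets.mnist as mnist  # example dataset helper, used here for illustration

# Load example data (flattened 28x28 images, one-hot labels)
X_train, Y_train, X_test, Y_test = mnist.load_data(one_hot=True)

num_inputs = 784
n_classes = 10

# 1-3. Input, hidden, and output layers
input_layer = tflearn.input_data(shape=[None, num_inputs])
layer1 = tflearn.fully_connected(input_layer, 10, activation='relu')
layer2 = tflearn.fully_connected(layer1, 10, activation='relu')
output = tflearn.fully_connected(layer2, n_classes, activation='softmax')

# 4. Estimator layer
net = tflearn.regression(output, optimizer='adam',
                         metric=tflearn.metrics.Accuracy(),
                         loss='categorical_crossentropy')

# 5. Model
model = tflearn.DNN(net)

# 6. Training
model.fit(X_train, Y_train, n_epoch=10, batch_size=100,
          show_metric=True, run_id='dense_model')

# 7. Evaluation
score = model.evaluate(X_test, Y_test)
print('Test accuracy:', score[0])
```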
TFLearn convolutional layers

TFLearn offers the following layers in the tflearn.layers.conv module:

conv_1d: This layer applies 1D convolutions to the input data
conv_2d: This layer applies 2D convolutions to the input data
conv_3d: This layer applies 3D convolutions to the input data
conv_2d_transpose: This layer applies the transpose of conv_2d to the input data
conv_3d_transpose: This layer applies the transpose of conv_3d to the input data
atrous_conv_2d: This layer computes a 2-D atrous convolution
grouped_conv_2d: This layer computes a depth-wise 2-D convolution
max_pool_1d: This layer computes 1-D max pooling
max_pool_2d: This layer computes 2-D max pooling
avg_pool_1d: This layer computes 1-D average pooling
avg_pool_2d: This layer computes 2-D average pooling
upsample_2d: This layer applies the row- and column-wise 2-D repeat operation
upscore_layer: This layer implements the upscore operation as specified in http://arxiv.org/abs/1411.4038
global_max_pool: This layer implements the global max pooling operation
global_avg_pool: This layer implements the global average pooling operation
residual_block: This layer implements the residual block used to create deep residual networks
residual_bottleneck: This layer implements the residual bottleneck block for deep residual networks
resnext_block: This layer implements the ResNeXt block

TFLearn recurrent layers

TFLearn offers the following layers in the tflearn.layers.recurrent module:

simple_rnn: This layer implements the simple recurrent neural network model
bidirectional_rnn: This layer implements the bi-directional RNN model
lstm: This layer implements the LSTM model
gru: This layer implements the GRU model

TFLearn normalization layers

TFLearn offers the following layers in the tflearn.layers.normalization module:

batch_normalization: This layer normalizes the output of activations of the previous layer for each batch
local_response_normalization: This layer implements local response normalization
l2_normalize: This layer applies L2 normalization to the input tensors

TFLearn embedding layers

TFLearn offers only one layer in the tflearn.layers.embedding_ops module:

embedding: This layer implements the embedding function for a sequence of integer IDs or floats

TFLearn merge layers

TFLearn offers the following layers in the tflearn.layers.merge_ops module:

merge_outputs: This layer merges a list of tensors into a single tensor, generally used to merge output tensors of the same shape
merge: This layer merges a list of tensors into a single tensor; you can specify the axis along which the merge needs to be done

TFLearn estimator layers

TFLearn offers only one layer in the tflearn.layers.estimator module:

regression: This layer implements linear or logistic regression

While creating the regression layer, you can specify the optimizer and the loss and metric functions. TFLearn offers the following optimizer functions as classes in the tflearn.optimizers module:

SGD
RMSProp
Adam
Momentum
AdaGrad
Ftrl
AdaDelta
ProximalAdaGrad
Nesterov

Note: You can create custom optimizers by extending the tflearn.optimizers.Optimizer base class.

TFLearn offers the following metric functions as classes or ops in the tflearn.metrics module:

Accuracy or accuracy_op
Top_k or top_k_op
R2 or r2_op
WeightedR2 or weighted_r2_op
binary_accuracy_op

Note: You can create custom metrics by extending the tflearn.metrics.Metric base class.
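As a quick, hedged illustration of how a few of the convolutional and core layers listed above fit together with the estimator layer, consider the following sketch; the shapes and hyperparameters are arbitrary choices for illustration, not taken from the book:

```python
# Hypothetical sketch combining some of the conv and core layers listed above.
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression

# 28x28 grayscale images as input
net = input_data(shape=[None, 28, 28, 1])
net = conv_2d(net, 32, 3, activation='relu')   # 32 filters, 3x3 kernel
net = max_pool_2d(net, 2)                      # 2x2 max pooling
net = conv_2d(net, 64, 3, activation='relu')
net = max_pool_2d(net, 2)
net = fully_connected(net, 128, activation='relu')
net = dropout(net, 0.8)                        # keep_prob of 0.8
net = fully_connected(net, 10, activation='softmax')
net = regression(net, optimizer='adam', loss='categorical_crossentropy')

model = tflearn.DNN(net)
# model.fit(X_train, Y_train, n_epoch=5, batch_size=100, show_metric=True)
```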
TFLearn provides the following loss functions, known as objectives, in the tflearn.objectives module:

softmax_categorical_crossentropy
categorical_crossentropy
binary_crossentropy
weighted_crossentropy
mean_square
hinge_loss
roc_auc_score
weak_cross_entropy_2d

While specifying the input, hidden, and output layers, you can specify the activation functions to be applied to the output. TFLearn provides the following activation functions in the tflearn.activations module:

linear
tanh
sigmoid
softmax
softplus
softsign
relu
relu6
leaky_relu
prelu
elu
crelu
selu

Creating the TFLearn Model

Create the model from the net created in the previous step (step 4 in the Creating the TFLearn Layers section):

model = tflearn.DNN(net)

Types of TFLearn models

TFLearn offers two different classes of models:

DNN (Deep Neural Network) model: This class allows you to create a multilayer perceptron from the network that you have created from the layers
SequenceGenerator model: This class allows you to create a deep neural network that can generate sequences

Training the TFLearn Model

After creating the model, train it with the model.fit() method:

model.fit(X_train, Y_train,
          n_epoch=n_epochs,
          batch_size=batch_size,
          show_metric=True,
          run_id='dense_model')

Using the TFLearn Model

Use the trained model to predict or evaluate:

score = model.evaluate(X_test, Y_test)
print('Test accuracy:', score[0])

The complete code for the TFLearn MNIST classification example is provided in the notebook ch-02_TF_High_Level_Libraries. The output from the TFLearn MNIST example is as follows:

Training Step: 5499  | total loss: 0.42119 | time: 1.817s
| Adam | epoch: 010 | loss: 0.42119 - acc: 0.8860 -- iter: 54900/55000
Training Step: 5500  | total loss: 0.40881 | time: 1.820s
| Adam | epoch: 010 | loss: 0.40881 - acc: 0.8854 -- iter: 55000/55000
--
Test accuracy: 0.9029

Note: You can get more information about TFLearn from the following link: http://tflearn.org/.

To summarize, we got to know about TFLearn and the different TFLearn layers and models. If you found this post useful, do check out the book Mastering TensorFlow 1.x to explore advanced features of TensorFlow 1.x, and gain insight into TensorFlow Core, Keras, TF Estimators, TFLearn, TF Slim, Pretty Tensor, and Sonnet.

TensorFlow.js 0.11.1 releases!
How to Build TensorFlow Models for Mobile and Embedded devices
Distributed TensorFlow: Working with multiple GPUs and servers


Data cleaning is the worst part of data analysis, say data scientists

Amey Varangaonkar
04 Jun 2018
5 min read
The year was 2012. Harvard Business Review had famously declared the role of data scientist the 'sexiest job of the 21st century'. Companies were slowly working with more data than ever before. The real, actionable value of the data that could be used for commercial purposes was slowly beginning to be uncovered. Someone who could derive these actionable insights from the data was needed. The demand for data scientists was higher than ever.

Fast forward to 2018 - more data has been collected in the last 2 years than ever before. Data scientists are still in high demand, and the need for insights is higher than ever. There has been one significant change, though - the process of deriving insights has become more complex. If you ask data scientists, the initial phase of this process, which involves data cleansing, has become a lot more cumbersome. So much so, that it is no longer a myth that data scientists spend almost 80% of their time cleaning and readying the data for analysis.

Why data cleaning is a nightmare

In the recently conducted Packt Skill-Up survey, we asked data professionals what the worst part of the data analysis process was, and a staggering 50% responded with data cleaning (Source: Packt Skill Up Survey). We dived deep into this, and tried to understand why many data science professionals have this common feeling of dislike towards data cleaning, or scrubbing, as many call it. Read the Skill Up report in full. Sign up to our weekly newsletter and download the PDF for free.

There is no consistent data format

Organizations these days work with a lot of data. Some of it is in a structured, readily understandable format. This kind of data is usually quite easy to clean, parse, and analyze. However, some of the data is really messy and cannot be used as-is for analysis. This includes missing data, irregularly formatted data, and irrelevant data which is not worth analyzing at all. There is also the problem of working with unstructured data, which needs to be pre-processed to extract the data worth analyzing. Audio or video files, email messages, presentations, XML documents, and web pages are some classic examples of this.

There's too much data to be cleaned

The volume of data that businesses deal with on a day-to-day basis is on the scale of terabytes or even petabytes. Making sense of all this data, coming from a variety of sources and in different formats, is, undoubtedly, a huge task. There is a whole host of tools designed to ease this process today, but it remains an incredibly tricky challenge to sift through large volumes of data and prepare it for analysis.

Data cleaning is tricky and time-consuming

Data cleansing can be quite an exhaustive and time-consuming task, especially for data scientists. Cleaning the data requires removing duplicates, removing or replacing missing entries, correcting misfielded values, ensuring consistent formatting, and a host of other tasks which take a considerable amount of time. Once the data is cleaned, it needs to be placed in a secure location. Also, a log of the entire process needs to be kept to ensure the right data goes through the right process. All of this requires data scientists to create a well-designed data scrubbing framework to avoid the risk of repetition. Much of this is grunt work and requires a lot of manual effort. Sadly, there are no tools on the market which can effectively automate this process end to end.
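To make the routine tasks listed above concrete, here is a small, hypothetical pandas sketch; the file name and column names are invented purely for illustration:

```python
# Hypothetical illustration of a few routine cleaning steps mentioned above.
import pandas as pd

df = pd.read_csv('survey_responses.csv')   # made-up input file

# Remove exact duplicate rows
df = df.drop_duplicates()

# Replace missing numeric entries with the column median; drop rows missing a key field
df['age'] = df['age'].fillna(df['age'].median())
df = df.dropna(subset=['respondent_id'])

# Enforce consistent formatting for a categorical column
df['country'] = df['country'].str.strip().str.title()

# Keep a simple log of what was done
print(f"{len(df)} rows remain after cleaning")
df.to_csv('survey_responses_clean.csv', index=False)
```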
Outsourcing the process is expensive

Given that data cleaning is a rather tedious job, many businesses think of outsourcing the task to third-party vendors. While this reduces a lot of time and effort on the company's end, it definitely increases the cost of the overall process. Many small and medium-scale businesses may not be able to afford this, and thus are heavily reliant on their data scientists to do the job for them.

You can hate it, but you cannot ignore it

It is quite obvious that data scientists need clean, ready-to-analyze data if they are to extract actionable business insights from it. Some data scientists equate data cleaning to donkey work, suggesting there's not a lot of innovation involved in the process. However, others believe data cleaning is rather important, and pay special attention to it, given that once it is done right, most of the problems in data analysis are solved. It is very difficult to take advantage of the intrinsic value offered by a dataset if it does not adhere to the quality standards set by the business, making data cleaning a crucial component of the data analysis process.

Now that you know why data cleaning is essential, why not dive deeper into the technicalities? Check out our book Practical Data Wrangling for expert tips on turning your noisy data into relevant, insight-ready information using R and Python.

Read more:
Cleaning Data in PDF Files
30 common data science terms explained
How to create a strong data science project portfolio that lands you a job


Have Microservices killed the monolithic architecture? Maybe not!

Aaron Lazar
04 Jun 2018
6 min read
Microservices have been growing in popularity over the past few years - since 2014, to be precise. Honestly speaking, they weren't that popular until around 2016 - take a look at the steep rise in the curve (Source: Google Trends). The breakout has happened over the past few years, and there are quite a few factors contributing to their growth, like the cloud, distributed architectures, and so on.

Microservices allow for a clearer and more refined architecture, with services built to work in isolation, without affecting the resilience and robustness of the application in any way. But does that mean that the monolith is dead and only microservices reign? Let's find out, shall we?

To those of you who participated in this year's survey, thank you for taking the time out to share such valuable information. For those of you who don't know what the survey is all about, it's something we do every year, where thousands of developers, architects, managers, and admins share their insights with us, and we share our findings with the community. This year's survey was as informative as the last, if not more! We had developers tell us so much about what they're doing, where they see technology heading, and what tools and techniques they use to stay relevant at what they do. So we took the opportunity and asked our respondents a question about the topic under discussion.

Revelations

If I asked a developer in 2018 what they thought the response would be, they'd instantly say that a majority would be for microservices (Source: Packt Skill Up Survey 2018). If you were the one who guessed the answer was going to be Yes, give yourself a firm pat on the back! It's great to see that 1,603 people are throwing their hands up in the air and building microservices. On the other hand, it's possible that it's purely their manager's decision (see how this forms a barrier to achieving business goals). Anyway, I was particularly concerned about the remaining 314 people who said 'No' (those who skipped answering, now is your chance to say something in the comments section below!).

Why no microservices?

I thought I'd analyse the possibilities as to why one wouldn't want to use the microservices pattern in their application architecture. It's not like developers are migrating from monoliths to microservices just because everyone else is doing it. Like any other architectural decision, there are several factors that need to be taken into consideration before making the switch. So here's what I thought were some reasons why developers are sticking to monoliths.

#1 One troll vs many elves: Complex times

Imagine you could be attacked by one troll or a hundred house elves. Which situation would you choose to be in, if avoiding both isn't an option? I don't know about you, but I'd choose the troll any day! Keeping the troll's size aside, I'd be better off knowing I had one large enemy in front of me, rather than being surrounded by a hundred miniature ones. The same goes for microservices. More services means more complexity, and more issues that could crop up. For developers, more services means that they would need to run or connect to all of them on their machine. Although there are tools that help solve this problem, you have to admit that it's a task to run all the services together as a whole application. On the other hand, Ops professionals are tasked with monitoring and keeping all these services up and running.
#2 We lack the expertise

Let alone having Developer Rockstars or Admin Ninjas (oops, I shouldn't be using those words now - find out why), if your organisation lacks experienced professionals, you've got a serious problem. Consider an organisation that has been having issues developing and managing a monolith itself: there's no guarantee that it will be able to manage a microservices-based application more effectively. It's a matter of the organisation having enough hands-on skills to perform these tasks. Those skills are tough to acquire, and it's not simple for organisations to find the right talent.

#3 Tower of Babel: Communication gaps

In a monolith, communication happens within the application itself and the network channels exist internally. However, this isn't the case for a microservices architecture, as inter-service communication is necessary to keep everything running in tandem. This results in multiple points of failure, complicating things. To minimise failure, each service has a certain number of retries when trying to establish communication with another. When scaled up, these retries add load on the database, with communication formats having to follow strict rules to avoid adding complexity back in again. It's a vicious circle!

#4 Rebuilding a monolith

When you build an application based on the microservices architecture, you may benefit a great deal from robustness and reliability. However, microservices together form a large, complicated system, which can be managed by orchestration platforms like Kubernetes. If individual teams are managing clusters of these services, though, it's quite likely that orchestration, deployment, and management of such a system will be a pain.

#5 Burning in dependency hell

Microservices are notorious for inviting developers to build services in various languages and then glue them together. While this is an advantage to a certain extent, it complicates dependency management in the entire application. Moreover, dependencies get even more complicated when versions of tools don't receive instantaneous support as they are updated. You and your team can go crazy keeping track of the versions and dependencies that need to be managed to maintain the smooth functioning of your application.

So while the microservice architecture is hot, it is not always the best option, and teams can actually end up making things worse if they choose to make the change unprepared. Yes, the cloud does benefit much more when applications are deployed as services rather than as a monolith, but the renowned/infamous "lift and shift" method still exists and works when needed. Ultimately, if you think past the hype, the monolith is not really dead yet and is in fact still being deployed and run in several organisations. Finally, I want to stress that it's critical that developers and architects take a well-informed decision, keeping in mind all the above factors, before they choose an architecture. Like they say, "With great power comes great responsibility" - that's exactly what great architecture is all about, rather than just jumping on the bandwagon.

Building Scalable Microservices
Why microservices and DevOps are a match made in heaven
What is a multi layered software architecture?


1 in 3 developers want to be entrepreneurs. What does it take to make a successful transition?

Fatema Patrawala
04 Jun 2018
9 min read
Much of the change we have seen over the last decade has been driven by a certain type of person: part developer, part entrepreneur. From Steve Jobs to Bill Gates, Larry Page to Sergey Brin, Mark Zuckerberg to Elon Musk - the list is long. These people have built their entrepreneurial success on software. In doing so, they've changed the way we live, and changed the way we look at the global tech landscape. Part of the reason software is eating the world is the entrepreneurial spirit of people like these. Silicon Valley is a testament to this! That is why a combination of tech and leadership skills in developers is the holy grail even today.

From this perspective, we weren't surprised to find that a significant number of developers working today have entrepreneurial ambitions. In this year's Skill Up survey, we asked respondents what they hope to be doing in the next 5 years. A little more than one third of the respondents said they would like to be working as the founder of their own company (Source: Packt Skill Up Survey).

But being a successful entrepreneur requires a completely new set of skills from those that make one a successful developer. It requires a different mindset. True, certain skills you might already possess. Others, though, you're going to have to acquire. Alongside those skills, realizing the power of purpose and values will be the key factor in building a successful venture. You will need to clearly articulate why you're doing something and what you're doing. In this way you will attract and keep customers who clearly understand the purpose of your business. Let us see what it takes to transition from a developer to a successful entrepreneur.

Think customer first, then product

Whatever you are thinking right now, think bigger! Developers love making things. And that's a good place to begin. In tech, ideas often originate from technical conversations and problem solving. "Let's do X on Z platform because the Y APIs make it possible". This approach works sometimes, but it isn't a sustainable business model. That's because customers don't think like that. Their thought process is more like this: "Is X for me?", "Will X solve my problem?", "Will X save me time or money?". Think about the customer first and then begin developing your product. You can't create a product and only then check whether it is in demand - that sounds obvious, but it does happen. Do your research, think about the market, and get to know your customer first. In other words, to be a successful entrepreneur, you need to go one step further - yes, care about your product, but think about it from a long-term perspective. Think big picture: what's going to bring real value to the lives of other people? Search for opportunities to solve problems in other people's lives, adopting a larger and more market-oriented approach. Adding a bit of domain knowledge like market research, product-market fit, market positioning, and so on to your to-do list should be a vital first step in your journey.

Remember, being a good technician is not enough

Most developers are comfortable with, and probably excel at, being good technicians. This means you're good at writing beautiful code, producing something tangible, and cranking away on each task, moving one step closer to the project launch date. Great. But it takes more than a technician to run a successful business. It's critical to look ahead into both the near and the long term. What's going to provide the best ROI next year?
Where do I want to be in 5 years' time? To answer this you need to be organized and focused. It's critically important to determine your goals and objectives. Simply put, you need to evolve from being a problem solver and great executioner into a visionary and strategic thinker. The developer-entrepreneur is a rare breed of doer-thinker.

Be passionate and focused

Perseverance is the single most important trait found in successful entrepreneurs. As a developer you are primed to think logically and come to conclusions, which is a great trait to possess in most cases. However, in entrepreneurship there will be times when you will do all the right things, meet the right people, and yet meet with failure. To get your company off the ground, you need to be passionate and excited about what you're working on. This enthusiasm will often be the only thing to carry you through late nights, countless setbacks, and tough situations. You must stay open, even if logic dictates otherwise. Most startups fail either because they read the market wrong or because they didn't stay long enough in the race. You also need to know how to focus closely on the very next step to get closer to your ultimate goal. There will be many distracting forces when trying to build a business; focusing on one particular task will not be easy, and you will need to religiously master this skill.

Become amazing at networking

It truly isn't only about what you know or what you have developed. Sure, what you know is important. But what's far more important is who you know. There's a reason why certain people can make so much progress in such a brief period. These business networking power players command the room by bringing the right people together. As an entrepreneur, if there's one thing that you should focus on, it's becoming a truly skilled business networker. Imagine having an idea for a business that's so wonderful that you can pick up the phone and call four or five people who can help you turn that idea into a reality.

Sharpen your strategic and critical thinking skills

As an entrepreneur, it's essential to possess sharp critical thinking skills. When you think critically, you ask the hard, tactical questions while expanding the lens to see the wider picture. Without thinking critically, it's hard to assess whether the most creative application that you have developed really has a life out in the world. You are always going to love what you have created, but it is important to wear multiple hats and think from the users' perspective. Get it reviewed by the industry stalwarts in your community to catch that silly but important area you could have missed while planning. Alongside critical thinking, it is also important that you strategize and plan things out for the smooth functioning of your business.

Find and manage the right people

In reality, businesses are like living creatures, whose organs all need to work in harmony. There is no major organ more critical than another, and a failure in one system can bring down all the others. Developers build things, marketers get customers, and salespeople sell products. The trick is to find and employ the right people for your business if you are to create a sustainable venture. Your ideas and thinking will need to align with the ideas of the people that you are working with. Only by learning to leverage employees, vendors, and other resources can you build a scalable and sustainable company.
Learn the art of selling an idea

Every entrepreneur needs to play the role of a salesperson, whether they like it or not. To build a successful business venture you will need to sell your ideas, products, or services to customers, investors, or employees. You should be ready to work and be there when customers are ready to buy. Alternatively, you should also know how to let go and move on when they are not. The ability to convince others that you are going to provide them the maximum product value will help you crack that mission-critical deal.

Be flexible. Create contingency plans

Things rarely go as planned in software development. Project scope creeps, client expectations rise, or bugs always seem to mysteriously appear. Even the most seasoned developers can't predict all possible scenarios, so they have to be ready with contingencies. This is also true in the world of business start-ups. Despite the best-laid business plans, entrepreneurs need to be prepared for the unexpected and be able to roll with the punches. You need to be prepared with an option A, B, or C. Running a business is like sea surfing: you've got to be nimble enough, both in your thinking and your operations, to ride the waves, high and low.

Always measure performance

"If you can't measure it, you can't improve it." Peter Drucker's point feels obvious but is worth bearing in mind always. If you can't measure something, you'll never improve it. Measuring the key functions of your business will help you scale it faster. Otherwise you risk running your firm blindly, without any navigational path to guide it.

Closing Comments

Becoming an entrepreneur and starting your own business is one of life's most challenging and rewarding journeys. Having said that, for all of the perks that the entrepreneurial path offers, it's far from being all roses. Being an entrepreneur means being a warrior. It means being clever, hungry, and often a ruthless competitor. If you are a developer turned entrepreneur, we'd love to hear your take on the topic and your own personal journey. Share your comments below or write to us at [email protected].

Developers think managers don't know enough about technology. And that's hurting business.
96% of developers believe developing soft skills is important
Don't call us ninjas or rockstars, say developers


Visualizing BigQuery Data with Tableau

Sugandha Lahoti
04 Jun 2018
8 min read
Tableau is an interactive data visualization tool that can be used to create business intelligence dashboards. Much like most business intelligence tools, it can be used to pull and manipulate data from a number of sources. The difference is its dedication to helping users create insightful data visualizations. Tableau's drag-and-drop interface makes it easy for users to explore data via elegant charts. It also includes an in-memory engine in order to speed up calculations on extremely large data sets. In today's tutorial, we will be using Tableau Desktop for visualizing BigQuery data.

Note: This article is an excerpt from the book Learning Google BigQuery, written by Thirukkumaran Haridass and Eric Brown. This book is a comprehensive guide to mastering Google BigQuery to get intelligent insights from your Big Data.

The following section explains how to use Tableau Desktop Edition to connect to BigQuery and get the data from BigQuery to create visuals. After opening Tableau Desktop, select Google BigQuery under the Connect To a Server section on the left; then enter your login credentials for BigQuery. At this point, all the tables in your dataset should be displayed on the left. You can drag and drop the table you are interested in using to the middle section labeled Drop Tables Here. In this case, we want to query the Google Analytics BigQuery test data, so we will click where it says New Custom SQL and enter the following query in the dialog:

SELECT trafficsource.medium AS Medium, COUNT(visitId) AS Visits
FROM `google.com:analytics-bigquery.LondonCycleHelmet.ga_sessions_20130910`
GROUP BY Medium

Now we can click on Update Now to view the first 10,000 rows of our data. We can also do some simple transformations on our columns, such as changing string values to dates, among many others. At the bottom, click on the tab titled Sheet 1 to enter the worksheet view. Tableau's interface allows users to simply drag and drop dimensions and metrics from the left side of the report into the central part to create simple text charts, with a feel much like Excel's pivot chart functionality. This makes Tableau easy to transition to for Excel users. From the Dimensions section on the left-hand-side navigation, drag and drop the Medium dimension into the sheet section. Then drag the Visits metric from the Measures section on the left-hand-side navigation to the Text sub-section in the Marks section. This will create a simple text chart with the data from the original query. On the right, click on the button marked Show Me. This should bring up a screen with icons for each graph type that can be created in Tableau. Tableau helps by shading graph types that are not available based on the data that is currently selected in the report. It will also make suggestions based on the data available. In this case, a bar chart has been preselected for us, as our data is a text dimension and a numeric metric. Click on the bar chart. Once clicked, the default sideways bar chart will appear with the data we have selected. Click on Swap Rows and Columns in the icon bar at the top of the screen to flip the chart from horizontal to vertical.

Map charts in Tableau

One of Tableau's strengths is its ease of use when creating a number of different types of charts. This is especially true when creating maps, which can be very painful to create using other tools. Here is the way to create a simple map in Tableau using BigQuery public data.
The first few steps are the same as in the preceding example. After opening Tableau Desktop, select Google BigQuery under the Connect To a Server section on the left; then enter your login credentials for BigQuery. At this point, all the tables in your dataset should be displayed on the left-hand side. Click where it says New Custom SQL and enter the following query in the dialog:

SELECT zipcode, SUM(population) AS population
FROM `bigquery-public-data.census_bureau_usa.population_by_zip_2010`
GROUP BY zipcode
ORDER BY population DESC

This data is from the United States Census of 2010. The query returns all zip codes in the USA, sorted from most populous to least populous. At the bottom, click on the tab titled Sheet 1 to enter the worksheet view. Double-click on the zipcode dimension in the Dimensions section on the left navigation. Clicking on a dimension of zip codes (or any other properly formatted location dimension, such as latitude/longitude, country names, state names, and so on) will automatically create a map in Tableau. Drag the population metric from the Measures section on the left navigation and drop it on the Color tab in the Marks section. The map will now show the most populous zip codes shaded darker than the less populous ones. The map chart also includes zoom features in order to make dealing with large maps easy. In the top-left corner of the map, there is a magnifying glass icon that holds the map zoom features. Clicking on the arrow at the bottom of this icon opens more features. The icon with a rectangle and a magnifying glass is the selection tool (the first icon to the right of the arrow when hovering over the arrow). Click on this icon and then on the map to select a section of the map to be zoomed into. After zooming into the California area of the United States, for example, the map shows the areas of the state that are the most populous.

Create a word cloud in Tableau

Word clouds are great visualizations for finding the words that are most referenced in books, publications, and social media. This section will cover creating a word cloud in Tableau using BigQuery public data. The first few steps are the same as in the preceding examples. After opening Tableau Desktop, select Google BigQuery under the Connect To a Server section on the left; then enter your login credentials for BigQuery. At this point, all the tables in your dataset should be displayed on the left. Click where it says New Custom SQL and enter the following query in the dialog:

SELECT word, SUM(word_count) AS word_count
FROM `bigquery-public-data.samples.shakespeare`
GROUP BY word
ORDER BY word_count DESC

The dataset is from the works of William Shakespeare. The query returns a list of all words in his works, along with a count of the times each word appears across the works. At the bottom, click on the tab titled Sheet 1 to enter the worksheet view. In the Dimensions section, drag and drop the word dimension onto the Text tab in the Marks section. In the Measures section, drag and drop the word_count measure onto the Size tab in the Marks section. There will now be two tabs used in the Marks section. Right-click on the Size tab labeled word and select Measure | Count. This will create what is called a tree map. In this example, there are far too many words in the list for the visualization to be useful. Drag and drop the word_count measure from the Measures section to the Filters section. When prompted with How do you want to filter on word_count, select Sum and click on Next.
Select At Least for your condition and type 2000 in the dialog. Click on OK. This will return only those words that have a word count of at least 2,000. Use the dropdown in the Marks card to select Text. Drag and drop the word_count measure from the Measures section to the Color tab in the Marks section. This will color each word based on the count for that word. You should be left with a color-coded word cloud.

Other charts can now be created as individual worksheet tabs. Tabs can then be combined to make what Tableau calls a dashboard. The process of creating a dashboard here is a bit more cumbersome than creating a dashboard in Google Data Studio, but Tableau offers a great deal more customization for its dashboards. This, coupled with all the other features it offers, makes Tableau a much more attractive option, especially for enterprise users.

We learnt about various features of Tableau and how to use it for visualizing BigQuery data. To know about other third-party tools for reporting and visualization purposes, such as R and Google Data Studio, check out the book Learning Google BigQuery.

Tableau is the most powerful and secure end-to-end analytics platform - Interview Insights
Tableau 2018.1 brings new features to help organizations easily scale analytics
Getting started with Data Visualization in Tableau
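As a closing aside on the word cloud example, if you would rather push the 2,000-word-count filter into BigQuery instead of Tableau's Filters shelf, a variation of the query along these lines should work; this variant is my own sketch, not from the book:

SELECT word, SUM(word_count) AS word_count
FROM `bigquery-public-data.samples.shakespeare`
GROUP BY word
HAVING SUM(word_count) >= 2000
ORDER BY word_count DESC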

Implementing fuzzy logic to bring AI characters alive in Unity based 3D games

Kunal Chaudhari
01 Jun 2018
16 min read
Fuzzy logic is a fantastic way to represent the rules of your game in a more nuanced way. Perhaps more so than other concepts, fuzzy logic is a very math-heavy topic. Most of the information can be represented purely by mathematical functions. For the sake of teaching the important concepts as they apply to Unity, most of the math has been simplified and implemented using Unity's built-in features. In this tutorial, we will take a look at the concepts behind fuzzy logic systems and implement them in your AI system. Implementing fuzzy logic will make your game characters more believable and reflect real-world attributes.

This article is an excerpt from a book written by Ray Barrera, Aung Sithu Kyaw, and Thet Naing Swe titled Unity 2017 Game AI Programming - Third Edition. This book will help you leverage the power of artificial intelligence to program smart entities for your games.

Defining fuzzy logic

The simplest way to define fuzzy logic is by comparison to binary logic. Generally, transition rules are looked at as true or false, 0 or 1, values. Is something visible? Is it at least a certain distance away? Even in instances where multiple values were being evaluated, all of the values had exactly two outcomes; thus, they were binary. In contrast, fuzzy values represent a much richer range of possibilities, where each value is represented as a float rather than an integer. We stop looking at values as 0 or 1, and we start looking at them as 0 to 1.

A common example used to describe fuzzy logic is temperature. Fuzzy logic allows us to make decisions based on non-specific data. I can step outside on a sunny Californian summer's day and ascertain that it is warm, without knowing the temperature precisely. Conversely, if I were to find myself in Alaska during the winter, I would know that it is cold, again without knowing the exact temperature. These concepts of cold, cool, warm, and hot are fuzzy ones. There is a good amount of ambiguity as to at what point we go from warm to hot. Fuzzy logic allows us to model these concepts as sets and determine their validity or truth by using a set of rules.

When people make decisions, they have some gray areas. That is to say, it's not always black and white. The same concept applies to agents that rely on fuzzy logic. Say you hadn't eaten in a few hours, and you were starting to feel a little hungry. At which point are you hungry enough to go grab a snack? You could look at the time right after a meal as 0, and 1 would be the point where you approach starvation.

When making decisions, there are many factors that determine the ultimate choice. This leads into another aspect of fuzzy logic controllers—they can take into account as much data as necessary. Let's continue to look at our "should I eat?" example. We've only considered one value for making that decision, which is the time since you last ate. However, there are other factors that can affect this decision, such as how much energy you're expending and how lazy you are at that particular moment. Or am I the only one to use that as a deciding factor? Either way, you can see how multiple input values can affect the output, which we can think of as the "likeliness to have another meal". Fuzzy logic systems can be very flexible due to their generic nature. You provide input, the fuzzy logic provides an output. What that output means to your game is entirely up to you.
We've primarily looked at how the inputs would affect a decision, which, in reality, is taking the output and using it in a way the computer, our agent, can understand. However, the output can also be used to determine how much of something to do, how fast something happens, or for how long something happens. For example, imagine your agent is a car in a sci-fi racing game that has a "nitro-boost" ability that lets it expend a resource to go faster. Our 0 to 1 value can represent a normalized amount of time for it to use that boost or perhaps a normalized amount of fuel to use.

Picking fuzzy systems over binary systems

As with most things in game programming, we must evaluate the requirements of our game and the technology and hardware limitations when deciding on the best way to tackle a problem. As you might imagine, there is a performance cost associated with going from a simple yes/no system to a more nuanced fuzzy logic one, which is one of the reasons we may opt out of using it. Of course, being a more complex system doesn't necessarily always mean it's a better one. There will be times when you just want the simplicity and predictability of a binary system because it may fit your game better. While there is some truth to the old adage, "the simpler, the better", one should also take into account the saying, "everything should be made as simple as possible, but not simpler". Though the quote is widely attributed to Albert Einstein, the father of relativity, it's not entirely clear who said it. The important thing to consider is the meaning of the quote itself. You should make your AI as simple as your game needs it to be, but not simpler. Pac-Man's AI works perfectly for the game; it's simple enough. However, that same simplicity would be out of place in a modern shooter or strategy game.

Using fuzzy logic

Once you understand the simple concepts behind fuzzy logic, it's easy to start thinking of the many ways in which it can be useful. In reality, it's just another tool in our belt, and each job requires different tools. Fuzzy logic is great at taking some data, evaluating it in a similar way to how a human would (albeit in a much simpler way), and then translating the data back to information that is usable by the system. Fuzzy logic controllers have several real-world use cases. Some are more obvious than others, and while these are by no means one-to-one comparisons to our usage in game AI, they serve to illustrate a point:

Heating, ventilation, and air conditioning (HVAC) systems: The temperature example used when talking about fuzzy logic is not only a good theoretical approach to explaining fuzzy logic, but also a very common real-world example of fuzzy logic controllers in action.

Automobiles: Modern automobiles come equipped with very sophisticated computerized systems, from the air conditioning system (again), to fuel delivery, to automated braking systems. In fact, putting computers in automobiles has resulted in far more efficient systems than the old binary systems that were sometimes used.

Your smartphone: Ever notice how your screen dims and brightens depending on how much ambient light there is? Modern smartphone operating systems look at ambient light, the color of the data being displayed, and the current battery life to optimize screen brightness.

Washing machines: Not my washing machine necessarily, as it's quite old, but most modern washers (from the last 20 years) make some use of fuzzy logic.
Load size, water dirtiness, temperature, and other factors are taken into account from cycle to cycle to optimize water use, energy consumption, and time. If you take a look around your house, there is a good chance you'll find a few interesting uses of fuzzy logic, and I mean besides your computer, of course. While these are neat uses of the concept, they're not particularly exciting or game-related. I'm partial to games involving wizards, magic, and monsters, so let's look at a more relevant example. Implementing a simple fuzzy logic system For this example, we're going to use my good friend, Bob, the wizard. Bob lives in an RPG world, and he has some very powerful healing magic at his disposal. Bob has to decide when to cast this magic on himself based on his remaining health points (HPs). In a binary system, Bob's decision-making process might look like this: if(healthPoints <= 50) { CastHealingSpell(me); } We see that Bob's health can be in one of two states—above 50, or not. Nothing wrong with that, but let's have a look at what the fuzzy version of this same scenario might look like, starting with determining Bob's health status: Before the panic sets in upon seeing charts and values that may not quite mean anything to you right away, let's dissect what we're looking at. Our first impulse might be to try to map the probability that Bob will cast a healing spell to how much health he is missing. That would, in simple terms, just be a linear function. Nothing really fuzzy about that—it's a linear relationship, and while it is a step above a binary decision in terms of complexity, it's still not truly fuzzy. Enter the concept of a membership function. It's key to our system, as it allows us to determine how true a statement is. In this example, we're not simply looking at raw values to determine whether or not Bob should cast his spell; instead, we're breaking it up into logical chunks of information for Bob to use in order to determine what his course of action should be. In this example, we're comparing three statements and evaluating not only how true each one is, but which is the most true: Bob is in a critical condition Bob is hurt Bob is healthy If you're into official terminology, we call this determining the degree of membership to a set. Once we have this information, our agent can determine what to do with it next. At a glance, you'll notice it's possible for two statements to be true at a time. Bob can be in a critical condition and hurt. He can also be somewhat hurt and a little bit healthy. You're free to pick the thresholds for each, but, in this example, let's evaluate these statements as per the preceding graph. The vertical value represents the degree of truth of a statement as a normalized float (0 to 1): At 0 percent health, we can see that the critical statement evaluates to 1. It is absolutely true that Bob is critical when his health is gone. At 40 percent health, Bob is hurt, and that is the truest statement. At 100 percent health, the truest statement is that Bob is healthy. Anything outside of these absolutely true statements is squarely in fuzzy territory. For example, let's say Bob's health is at 65 percent. In that same chart, we can visualize it like this: The vertical line drawn through the chart at 65 represents Bob's health. As we can see, it intersects both sets, which means that Bob is a little bit hurt, but he's also kind of healthy. At a glance, we can tell, however, that the vertical line intercepts the Hurt set at a higher point in the graph. 
We can take this to mean that Bob is more hurt than he is healthy. To be specific, Bob is 37.5 percent hurt, 12.5 percent healthy, and 0 percent critical. Let's take a look at this in code; open up our FuzzySample scene in Unity. The hierarchy will look like this: The important game object to look at is Fuzzy Example. This contains the logic that we'll be looking at. In addition to that, we have our Canvas containing all of the labels and the input field and button that make this example work. Lastly, there's the Unity-generated EventSystem and Main Camera, which we can disregard. There isn't anything special going on with the setup for the scene, but it's a good idea to become familiar with it, and you are encouraged to poke around and tweak it to your heart's content after we've looked at why everything is there and what it all does. With the Fuzzy Example game object selected, the inspector will look similar to the following image: Our sample implementation is not necessarily something you'll take and implement in your game as it is, but it is meant to illustrate the previous points in a clear manner. We use Unity's AnimationCurve for each different set. It's a quick and easy way to visualize the very same lines in our earlier graph. Unfortunately, there is no straightforward way to plot all the lines in the same graph, so we use a separate AnimationCurve for each set. In the preceding screenshot, they are labeled Critical, Hurt, and Healthy. The neat thing about these curves is that they come with a built-in method to evaluate them at a given point (t). For us, t does not represent time, but rather the amount of health Bob has. As in the preceding graph, the Unity example looks at a HP range of 0 to 100. These curves also provide a simple user interface for editing the values. You can simply click on the curve in the inspector. That opens up the curve editing window. You can add points, move points, change tangents, and so on, as shown in the following screenshot: Unity's curve editor window Our example focuses on triangle-shaped sets. That is, linear graphs for each set. You are by no means restricted to this shape, though it is the most common. You could use a bell curve or a trapezoid, for that matter. To keep things simple, we'll stick to the triangle. You can learn more about Unity's AnimationCurve editor at http://docs.unity3d.com/ScriptReference/AnimationCurve.html. The rest of the fields are just references to the different UI elements used in code that we'll be looking at later in this chapter. The names of these variables are fairly self-explanatory, however, so there isn't much guesswork to be done here. Next, we can take a look at how the scene is set up. If you play the scene, the game view will look something similar to the following screenshot: A simple UI to demonstrate fuzzy values We can see that we have three distinct groups, representing each question from the "Bob, the wizard" example. How healthy is Bob, how hurt is Bob, and how critical is Bob? For each set, upon evaluation, the value that starts off as 0 true will dynamically adjust to represent the actual degree of membership. There is an input box in which you can type a percentage of health to use for the test. No fancy controls are in place for this, so be sure to enter a value from 0 to 100. For the sake of consistency, let's enter a value of 65 into the box and then press the Evaluate! button. This will run some code, look at the curves, and yield the exact same results we saw in our graph earlier. 
While this shouldn't come as a surprise (the math is what it is, after all), there are fewer things more important in game programming than testing your assumptions, and sure enough, we've tested and verified our earlier statement. After running the test by hitting the Evaluate! button, the game scene will look similar to the following screenshot: This is how Bob is doing at 65 percent health Again, the values turn out to be 0.125 (or 12.5 percent) healthy and 0.375 (or 37.5 percent) hurt. At this point, we're still not doing anything with this data, but let's take a look at the code that's handling everything: using UnityEngine; using UnityEngine.UI; using System.Collections; public class FuzzySample1 : MonoBehaviour { private const string labelText = "{0} true"; public AnimationCurve critical; public AnimationCurve hurt; public AnimationCurve healthy; public InputField healthInput; public Text healthyLabel; public Text hurtLabel; public Text criticalLabel; private float criticalValue = 0f; private float hurtValue = 0f; private float healthyValue = 0f; We start off by declaring some variables. The labelText is simply a constant we use to plug into our label. We replace {0} with the real value. Next, we declare the three AnimationCurve variables that we mentioned earlier. Making these public or otherwise accessible from the inspector is key to being able to edit them visually (though it is possible to construct curves by code), which is the whole point of using them. The following four variables are just references to UI elements that we saw earlier in the screenshot of our inspector, and the last three variables are the actual float values that our curves will evaluate into: private void Start () { SetLabels(); } /* * Evaluates all the curves and returns float values */ public void EvaluateStatements() { if (string.IsNullOrEmpty(healthInput.text)) { return; } float inputValue = float.Parse(healthInput.text); healthyValue = healthy.Evaluate(inputValue); hurtValue = hurt.Evaluate(inputValue); criticalValue = critical.Evaluate(inputValue); SetLabels(); } The Start() method doesn't require much explanation. We simply update our labels here so that they initialize to something other than the default text. The EvaluateStatements() method is much more interesting. We first do some simple null checking for our input string. We don't want to try and parse an empty string, so we return out of the function if it is empty. As mentioned earlier, there is no check in place to validate that you've input a numerical value, so be sure not to accidentally input a non-numerical value or you'll get an error. For each of the AnimationCurve variables, we call the Evaluate(float t) method, where we replace t with the parsed value we get from the input field. In the example we ran, that value would be 65. Then, we update our labels once again to display the values we got. The code looks similar to this: /* * Updates the GUI with the evluated values based * on the health percentage entered by the * user. */ private void SetLabels() { healthyLabel.text = string.Format(labelText, healthyValue); hurtLabel.text = string.Format(labelText, hurtValue); criticalLabel.text = string.Format(labelText, criticalValue); } } We simply take each label and replace the text with a formatted version of our labelText constant that replaces the {0} with the real value. To summarize, we learned how fuzzy logic is used in the real world, and how it can help illustrate vague concepts in a way binary systems cannot. 
We also learned to implement our own fuzzy logic controllers using the concepts of member functions, degrees of membership, and fuzzy sets.  If you enjoyed this excerpt, check out the book Unity 2017 Game AI Programming - Third Edition, to build exciting and richer games by mastering advanced Artificial Intelligence concepts in Unity. Unity Machine Learning Agents: Transforming Games with Artificial Intelligence Put your game face on! Unity 2018.1 is now available How to create non-player Characters (NPC) with Unity 2018
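As a closing note on the wizard example: the sample stops at computing the three membership values without acting on them. One very rough way Bob could turn those values into a decision is a simple "highest membership wins" rule. This is an assumption for illustration only, not part of the book's sample:

// Crude defuzzification: report whichever statement is most true.
// Purely illustrative -- the action strings are placeholders.
string DecideAction(float criticalValue, float hurtValue, float healthyValue)
{
    if (criticalValue >= hurtValue && criticalValue >= healthyValue)
        return "cast healing spell";   // Bob is mostly critical
    if (hurtValue >= healthyValue)
        return "sip a potion";         // Bob is mostly hurt
    return "keep adventuring";         // Bob is mostly healthy
}

A full fuzzy controller would combine the sets with rules and a proper defuzzification step, but even this crude version already behaves more gradually than the binary healthPoints <= 50 check from earlier.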

Developers think managers don’t know enough about technology. And that’s hurting business.

Fatema Patrawala
01 Jun 2018
7 min read
It's not hard to find jokes online about management not getting software. There has long been a perception that those making key business decisions don't actually understand the technology and software that is at the foundation of just about every organization's operations. Now, research has confirmed that the management and engineering divide is real. In this year's Skill Up survey that we ran among 8,000 developers, we found that more than 60% of developers believe they know more about technology than their manager.

Source: Packtpub Skill Up Survey 2018

Developer perceptions on the topic aren't simply a question of ego; they're symptomatic of some of the barriers to business success. 42% of the respondents listed management's lack of technical knowledge as a barrier to success. It also appears as one of the top three organizational barriers to achieving business goals.

Source: Packtpub Skill Up Survey 2018

To dissect the technical challenges faced by organizations, we also asked respondents to pick the top technical barriers to success. As can be seen from the graph below, a lot of the barriers directly or indirectly relate to management's understanding of technology. Take, for example, management's decisions to continue with legacy systems, investment or divestment in certain projects, or the choice of training programs and vendors.

Source: Packtpub Skill Up Survey 2018

Management tends to weigh decisions based on the magnitude of investment against returns in the immediate or medium term. Also, unless there is hard evidence of performance benefits or cost savings, management is generally wary of approving new projects or spending more on existing ones. This approach is generally robust and has saved businesses precious dollars by curbing pet projects and unruly experiments and research. However, with technology, things are not always so straightforward. One day some tool is the talk of the town (think Adobe Flash) and everyone seems to be learning it or buying it, and then a few months or a couple of years down the line, it has gone completely off the radar. Conversely, something that didn't exist yesterday or was present in some obscure research lab (think self-driving tech, gene editing, robotics) is now changing the rules of the game, and businesses whose leadership teams have had their ears on the ground topple everyone else, including time-tested veterans. Early adopters and early jumpers make the most of tech trends. This requires those in a position to make decisions within organizations to be aware of the changing tech landscape, to the extent that they can predict what's going to replace the current reigning tech and in what timeframe. It requires that management is aware of what's happening in adjacent industries or even in seemingly unrelated ones. Who knew Unity (game platform), Nvidia (chipmaker), or Google (search engine) would enter the auto industry, all thanks to self-driving tech? While these are some of the overarching factors, let us look at each one of them in detail.

Why do developers believe there is a management knowledge gap?

Here are a few reasons developers give to justify this view:

Rapid pace of technology change: The rapid rate of technology change is significantly impacting IT strategy. Not only are there plenty of emerging technology trends, from AI to cloud, they're all coming at the same time, and even affecting each other.
It's clear that keeping up with the rate of digital advancement, for example automation, harnessing big data, emerging technologies, and cyber security, will pose a significant challenge for leaders and senior management, adding a whole new layer of complexity as they try to stay ahead of the competition and innovate.

Balancing strategic priorities while complying with changing regulations: Another major challenge for senior management is to balance strategic priorities with the regulatory demands of the industry. In 2018, GDPR has been setting a new benchmark for the protection of consumer data rights by making organisations more accountable. Governed by GDPR, organisations and senior management will now be responsible for guarding every piece of information connected to an individual. In order to be GDPR compliant, management will begin introducing the correct security protocols in their business processes. This will include encryption, two-factor authentication, and key management strategies to avoid severe legal, financial, and reputational consequences. To make the right decisions, they will need to be technically competent enough to understand the strengths and limitations of the tools and techniques involved in the compliance process.

Finding the right IT talent: Identifying the right talent with the skill sets that you need is a big challenge for senior management. They are constantly trying to find and hire IT talent, such as skilled data scientists and app developers, to accommodate and capitalize on emerging trends in cloud and the API economy. The team has to take care to bring in the right people and let them create magic with their development skills. Alongside this, they also need to reinvent how they manage, attract, retain, motivate, and compensate these folks. Responses to this Quora question highlight that it can be a difficult process for managers to go through a lengthy recruitment cycle. And the worst feeling is when, after all the effort, the candidate declines the offer for another lucrative one.

So much promising technology, so little time: Time is tight in business and tech. Keeping pace with how quickly innovative and promising technologies crop up is easier said than done. There are so many interesting technologies out there, and there's so little time to implement them fast enough. Before anyone can choose a technology that might work for the company, a new product appears on the horizon. Once you see something you like, there's always something else popping up. While managers are working on a particular project to make all the parts work together for an outstanding customer experience, it takes time to do so and to implement these technologies. When juggling all of these moving parts, managers are always looking for technologies and ways to implement great things faster. That's the major reason why companies have a CTO, a VP of engineering, and a CEO operating separately at their own levels and departments.

Murphy's law of unforeseen IT problems: One of the biggest problems when you're working in tech is Murphy's Law. This is the law that states, "Anything that can go wrong, will -- at the worst possible moment." It doesn't matter how hard we have worked, how strong the plan is, or how many times things are tested. You get to doing the project and if something has to go wrong, it will. There are times we face IT problems that we don't see coming. It doesn't matter how much you try to plan -- stuff happens.
When management doesn’t properly understand technology it’s often hard for them to appreciate how problems arise and how long it can take to solve them. That puts pressure on engineers and developers which can make managing projects even harder. Overcoming perfectionism with an agile mindset: Senior management often wants things done yesterday, and they want it done perfectly. Of course, this is impossible. While Agile can help improve efficiency in the development process, perfectionism is anathema to Agile. It’s about delivering quickly and consistently, not building something perfect and then deploying it. Getting management to understand this is a challenge for engineers - good management teams will understand Agile and what the trade offs are. At the forefront of everyone’s mind should be what the customer needs and what is going to benefit the business. Concluding with Dilbert comic for a lighter note. Source With purpose, process, and changing technologies, managers need to change in the way they function and manage. People don't leave companies, they leave bad managers and the same could be said true for technical workers. They don't leave bad companies they leave non-technical managers who make bad technical decisions. Don’t call us ninjas or rockstars, say developers 96% of developers believe developing soft skills is important  

Building RESTful web services with Kotlin

Natasha Mathur
01 Jun 2018
9 min read
Kotlin has been eating up the Java world. It has already become a hit in the Android Ecosystem which was dominated by Java and is welcomed with open arms. Kotlin is not limited to Android development and can be used to develop server-side and client-side web applications as well. Kotlin is 100% compatible with the JVM so you can use any existing frameworks such as Spring Boot, Vert.x, or JSF for writing Java applications. In this tutorial, we will learn how to implement RESTful web services using Kotlin. This article is an excerpt from the book 'Kotlin Programming Cookbook', written by, Aanand Shekhar Roy and Rashi Karanpuria. Setting up dependencies for building RESTful services In this recipe, we will lay the foundation for developing the RESTful service. We will see how to set up dependencies and run our first SpringBoot web application. SpringBoot provides great support for Kotlin, which makes it easy to work with Kotlin. So let's get started. We will be using IntelliJ IDEA and Gradle build system. If you don't have that, you can get it from https://www.jetbrains.com/idea/. How to do it… Let's follow the given steps to set up the dependencies for building RESTful services: First, we will create a new project in IntelliJ IDE. We will be using the Gradle build system for maintaining dependency, so create a Gradle project: When you have created the project, just add the following lines to your build.gradle file. These lines of code contain spring-boot dependencies that we will need to develop the web app: buildscript { ext.kotlin_version = '1.1.60' // Required for Kotlin integration ext.spring_boot_version = '1.5.4.RELEASE' repositories { jcenter() } dependencies { classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version" // Required for Kotlin integration classpath "org.jetbrains.kotlin:kotlin-allopen:$kotlin_version" // See https://kotlinlang.org/docs/reference/compiler-plugins.html#kotlin-spring-compiler-plugin classpath "org.springframework.boot:spring-boot-gradle-plugin:$spring_boot_version" } } apply plugin: 'kotlin' // Required for Kotlin integration apply plugin: "kotlin-spring" // See https://kotlinlang.org/docs/reference/compiler-plugins.html#kotlin-spring-compiler-plugin apply plugin: 'org.springframework.boot' jar { baseName = 'gs-rest-service' version = '0.1.0' } sourceSets { main.java.srcDirs += 'src/main/kotlin' } repositories { jcenter() } dependencies { compile "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version" // Required for Kotlin integration compile 'org.springframework.boot:spring-boot-starter-web' testCompile('org.springframework.boot:spring-boot-starter-test') } Let's now create an App.kt file in the following directory hierarchy: It is important to keep the App.kt file in a package (we've used the college package). Otherwise, you will get an error that says the following: ** WARNING ** : Your ApplicationContext is unlikely to start due to a `@ComponentScan` of the default package. The reason for this error is that if you don't include a package declaration, it considers it a "default package," which is discouraged and avoided. Now, let's try to run the App.kt class. 
We will put the following code in place to test if it's running:

@SpringBootApplication
open class App {
}

fun main(args: Array<String>) {
    SpringApplication.run(App::class.java, *args)
}

Now run the project; if everything goes well, you will see output with the following line at the end:

Started AppKt in 5.875 seconds (JVM running for 6.445)

We now have our application running on our embedded Tomcat server. If you go to http://localhost:8080, you will see an error as follows: the preceding error is a 404 error, and the reason for that is we haven't told our application to do anything when a user is on the / path.

Creating a REST controller

In the previous recipe, we learned how to set up dependencies for creating RESTful services. Finally, we launched our backend on the http://localhost:8080 endpoint but got a 404 error as our application wasn't configured to handle requests at that path (/). We will start from that point and learn how to create a REST controller. Let's get started! We will be using IntelliJ IDE for coding purposes. For setting up the environment, refer to the previous recipe. You can also find the source in the repository at https://gitlab.com/aanandshekharroy/kotlin-webservices.

How to do it…

In this recipe, we will create a REST controller that will fetch us information about students in a college. We will be using an in-memory database using a list to keep things simple.

Let's first create a Student class having a name and roll number properties:

package college

class Student() {
    lateinit var roll_number: String
    lateinit var name: String

    constructor(roll_number: String, name: String): this() {
        this.roll_number = roll_number
        this.name = name
    }
}

Next, we will create the StudentDatabase class, which will act as a database for the application:

@Component
class StudentDatabase {
    private val students = mutableListOf<Student>()
}

Note that we have annotated the StudentDatabase class with @Component, which means its lifecycle will be controlled by Spring (because we want it to act as a database for our application). We also need a @PostConstruct annotation, because it's an in-memory database that is destroyed when the application closes, and we would like to have a filled database whenever the application launches. So we will create an init method, which will add a few items into the "database" at startup time:

@PostConstruct
private fun init() {
    students.add(Student("2013001", "Aanand Shekhar Roy"))
    students.add(Student("2013165", "Rashi Karanpuria"))
}

Now, we will create a few other methods that will help us deal with our database:

getStudents: Gets the list of students present in our database:

fun getStudents() = students

addStudent: This method will add a student to our database:

fun addStudent(student: Student): Boolean {
    students.add(student)
    return true
}

Now let's put this database to use. We will be creating a REST controller that will handle the requests. We will create a StudentController and annotate it with @RestController. Using @RestController is simple, and it's the preferred method for creating MVC RESTful web services. Once created, we need to provide our database using Spring dependency injection, for which we will need the @Autowired annotation. Here's how our StudentController looks:

@RestController
class StudentController {
    @Autowired
    private lateinit var database: StudentDatabase
}

Now we will set our response to the / path. We will show the list of students in our database. For that, we will simply create a method that lists out students.
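One thing worth noting: the /student/{roll_number} endpoint we add a little later calls a getStudentWithRollNumber() method on StudentDatabase, which this excerpt never shows. A minimal sketch of what that helper could look like (the implementation is an assumption; only the method name comes from the later call) is:

// Possible implementation of the lookup used by the /student/{roll_number} endpoint.
// Returns null when no student matches the given roll number.
fun getStudentWithRollNumber(roll_number: String): Student? =
    students.firstOrNull { it.roll_number == roll_number }

With a helper along those lines assumed, let's get back to the controller method that lists out students.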
We will need to annotate it with @RequestMapping and provide parameters such as path and request method (GET, POST, and such): @RequestMapping("", method = arrayOf(RequestMethod.GET)) fun students() = database.getStudents() This is what our controller looks like now. It is a simple REST controller: package college import org.springframework.beans.factory.annotation.Autowired import org.springframework.web.bind.annotation.RequestMapping import org.springframework.web.bind.annotation.RequestMethod import org.springframework.web.bind.annotation.RestController @RestController class StudentController { @Autowired private lateinit var database: StudentDatabase @RequestMapping("", method = arrayOf(RequestMethod.GET)) fun students() = database.getStudents() } Now when you restart the server and go to http://localhost:8080, we will see the response as follows: As you can see, Spring is intelligent enough to provide the response in the JSON format, which makes it easy to design APIs. Now let's try to create another endpoint that will fetch a student's details from a roll number: @GetMapping("/student/{roll_number}") fun studentWithRollNumber( @PathVariable("roll_number") roll_number:String) = database.getStudentWithRollNumber(roll_number) Now, if you try the http://localhost:8080/student/2013001 endpoint, you will see the given output: {"roll_number":"2013001","name":"Aanand Shekhar Roy"} Next, we will try to add a student to the database. We will be doing it via the POST method: @RequestMapping("/add", method = arrayOf(RequestMethod.POST)) fun addStudent(@RequestBody student: Student) = if (database.addStudent(student)) student else throw Exception("Something went wrong") There's more… So far, our server has been dependent on IDE. We would definitely want to make it independent of an IDE. Thanks to Gradle, it is very easy to create a runnable JAR just with the following: ./gradlew clean bootRepackage The preceding command is platform independent and uses the Gradle build system to build the application. Now, you just need to type the mentioned command to run it: java -jar build/libs/gs-rest-service-0.1.0.jar You can then see the following output as before: Started AppKt in 4.858 seconds (JVM running for 5.548) This means your server is running successfully. Creating the Application class for Spring Boot The SpringApplication class is used to bootstrap our application. We've used it in the previous recipes; we will see how to create the Application class for Spring Boot in this recipe. We will be using IntelliJ IDE for coding purposes. To set up the environment, read previous recipes, especially the Setting up dependencies for building RESTful services recipe. How to do it… If you've used Spring Boot before, you must be familiar with using @Configuration, @EnableAutoConfiguration, and @ComponentScan in your main class. These were used so frequently that Spring Boot provides a convenient @SpringBootApplication alternative. The Spring Boot looks for the public static main method, and we will use a top-level function outside the Application class. If you noted, while setting up the dependencies, we used the kotlin-spring plugin, hence we don't need to make the Application class open. 
Here's an example of the Spring Boot application: package college import org.springframework.boot.SpringApplication import org.springframework.boot.autoconfigure.SpringBootApplication @SpringBootApplication class Application fun main(args: Array<String>) { SpringApplication.run(Application::class.java, *args) } The Spring Boot application executes the static run() method, which takes two parameters and starts a autoconfigured Tomcat web server when Spring application is started. When everything is set, you can start the application by executing the following command: ./gradlew bootRun If everything goes well, you will see the following output in the console: This is along with the last message—Started AppKt in xxx seconds. This means that your application is up and running. In order to run it as an independent server, you need to create a JAR and then you can execute as follows: ./gradlew clean bootRepackage Now, to run it, you just need to type the following command: java -jar build/libs/gs-rest-service-0.1.0.jar We learned how to set up dependencies for building RESTful services, creating a REST controller, and creating the application class for Spring boot. If you are interested in learning more about Kotlin then be sure to check out the 'Kotlin Programming Cookbook'. Build your first Android app with Kotlin 5 reasons to choose Kotlin over Java Getting started with Kotlin programming Forget C and Java. Learn Kotlin: the next universal programming language
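If you'd like to exercise the endpoints from this recipe without a browser, something along the following lines should work against the running server (the JSON field names match the Student class above; the values are just examples):

curl http://localhost:8080/
curl http://localhost:8080/student/2013001
curl -X POST -H "Content-Type: application/json" -d '{"roll_number":"2013999","name":"Test Student"}' http://localhost:8080/add

The first command lists all students, the second fetches a single student by roll number, and the third posts a new student to the /add endpoint.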

Network programming 101 with GAWK (GNU AWK)

Pavan Ramchandani
31 May 2018
12 min read
In today's tutorial, we will learn about the networking aspects, for example working with TCP/IP for both client-side and server-side. We will also explore HTTP services to help you get going with networking in AWK. This tutorial is an excerpt from a book written by Shiwang Kalkhanda, titled Learning AWK Programming. The AWK programming language was developed as a pattern-matching language for text manipulation; however, GAWK has advanced features, such as file-like handling of network connections. We can perform simple TCP/IP connection handling in GAWK with the help of special filenames. GAWK extends the two-way I/O mechanism used with the |& operator to simple networking using these special filenames that hide the complex details of socket programming to the programmer. The special filename for network communication is made up of multiple fields, all of which are mandatory. The following is the syntax of creating a filename for network communication: /net-type/protocol/local-port/remote-host/remote-port Each field is separated from another with a forward slash. Specifying all of the fields is mandatory. If any of the field is not valid for any protocol or you want the system to pick a default value for that field, it is set as 0. The following list illustrates the meaning of different fields used in creating the file for network communication: net-type: Its value is inet4 for IPv4, inet6 for IPv6, or inet to use the system default (which is generally IPv4). protocol: It is either tcp or udp for a TCP or UDP IP connection. It is advised you use the TCP protocol for networking. UDP is used when low overhead is a priority. local-port: Its value decides which port on the local machine is used for communication with the remote system. On the client side, its value is generally set to 0 to indicate any free port to be picked up by the system itself. On the server side, its value is other than 0 because the service is provided to a specific publicly known port number or service name, such as http, smtp, and so on. remote-host: It is the remote hostname which is to be at the other end of the connection. For the server side, its value is set to 0 to indicate the server is open for all other hosts for connection. For the client side, its value is fixed to one remote host and hence, it is always different from 0. This name can either be represented through symbols, such as www.google.com, or numbers, 123.45.67.89. remote-port: It is the port on which the remote machine will communicate across the network. For clients, its value is other than 0, to indicate to which port they are connecting to the remote machine. For servers, its value is the port on which they want connection from the client to be established. We can use a service name here such as ftp, http, or a port number such as 80, 21, and so on. TCP client and server (/inet/tcp) TCP gaurantees that data is received at the other end and in the same order as it was transmitted, so always use TCP. In the following example, we will create a tcp-server (sender) to send the current date time of the server to the client. The server uses the strftime() function with the coprocess operator to send to the GAWK server, listening on the 8080 port. The remote host and remote port could be any client, so its value is kept as 0. 
The server connection is closed by passing the special filename to the close() function for closing the file as follows: $ vi tcpserver.awk #TCP-Server BEGIN { print strftime() |& "/inet/tcp/8080/0/0" close("/inet/tcp/8080/0/0") } Now, open one Terminal and run this program before running the client program as follows: $ awk -f tcpserver.awk Next, we create the tcpclient (receiver) to receive the data sent by the tcpserver. Here, we first create the client connection and pass the received data to the getline() using the coprocess operator. Here the local-port value is set to 0 to be automatically chosen by the system, the remote-host is set to the localhost, and the remote-port is set to the tcp-server port, 8080. After that, the received message is printed, using the print $0 command, and finally, the client connection is closed using the close command, as follows: $ vi tcpclient.awk #TCP-client BEGIN { "/inet/tcp/0/localhost/8080" |& getline print $0 close("/inet/tcp/0/localhost/8080") } Now, execute the tcpclient program in another Terminal as follows : $ awk -f tcpclient.awk The output of the previous code is as follows : Fri Feb 9 09:42:22 IST 2018 UDP client and server ( /inet/udp ) The server and client programs that use the UDP protocol for communication are almost identical to their TCP counterparts, with the only difference being that the protocol is changed to udp from tcp. So, the UDP-server and UDP-client program can be written as follows: $ vi udpserver.awk #UDP-Server BEGIN { print strftime() |& "/inet/udp/8080/0/0" "/inet/udp/8080/0/0" |& getline print $0 close("/inet/udp/8080/0/0") } $ awk -f udpserver.awk Here, only one addition has been made to the client program. In the client, we send the message hello from client ! to the server. So when we execute this program on the receiving Terminal, where the udpclient.awk program is run, we get the remote system date time. And on the Terminal where the udpserver.awk program is run, we get the hello message from the client: $ vi udpclient.awk #UDP-client BEGIN { print "hello from client!" |& "/inet/udp/0/localhost/8080" "/inet/udp/0/localhost/8080" |& getline print $0 close("/inet/udp/0/localhost/8080") } $ awk -f udpclient.awk GAWK can be used to open direct sockets only. Currently, there is no way to access services available over an SSL connection such as https, smtps, pop3s, imaps, and so on. Reading a web page using HttpService To read a web page, we use the Hypertext Transfer Protocol (HTTP ) service which runs on port number 80. First, we redefine the record separators RS and ORS because HTTP requires CR-LF to separate lines. The program requests to the IP address 35.164.82.168 ( www.grymoire.com ) of a static website which, in turn, makes a GET request to the web page: http://35.164.82.168/Unix/donate.html . HTTP calls the GET request, a method which tells the web server to transmit the web page donate.html. The output is stored in the getline function using the co-process operator and printed on the screen, line by line, using the while loop. Finally, we close the http service connection. 
The following is the program to retrieve the web page: $ vi view_webpage.awk BEGIN { RS=ORS="rn" http = "/inet/tcp/0/35.164.82.168/80" print "GET http://35.164.82.168/Unix/donate.html" |& http while ((http |& getline) > 0) print $0 close(http) } $ awk -f view_webpage.awk Upon executing the program, it fills the screen with the source code of the page on the screen as follows: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML lang="en-US"> <HEAD> <TITLE> Welcome to The UNIX Grymoire!</TITLE> <meta name="keywords" content="grymoire, donate, unix, tutorials, sed, awk"> <META NAME="Description" CONTENT="Please donate to the Unix Grymoire" > <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <link href="myCSS.css" rel="stylesheet" type="text/css"> <!-- Place this tag in your head or just before your close body tag --> <script type="text/javascript" src="https://apis.google.com/js/plusone.js"></script> <link rel="canonical" href="http://www.grymoire.com/Unix/donate.html"> <link href="myCSS.css" rel="stylesheet" type="text/css"> ........ ........ Profiling in GAWK Profiling of code is done for code optimization. In GAWK, we can do profiling by supplying a profile option to GAWK while running the GAWK program. On execution of the GAWK program with that option, it creates a file with the name awkprof.out. Since GAWK is performing profiling of the code, the program execution is up to 45% slower than the speed at which GAWK normally executes. Let's understand profiling by looking at some examples. In the following example, we create a program that has four functions; two arithmetic functions, one function prints an array, and one function calls all of them. Our program also contains two BEGIN and two END statements. First, the BEGIN and END statement and then it contains a pattern action rule, then the second BEGIN and END statement, as follows: $ vi codeprof.awk func z_array(){ arr[30] = "volvo" arr[10] = "bmw" arr[20] = "audi" arr[50] = "toyota" arr["car"] = "ferrari" n = asort(arr) print "Array begins...!" print "=====================" for ( v in arr ) print v, arr[v] print "Array Ends...!" print "=====================" } function mul(num1, num2){ result = num1 * num2 printf ("Multiplication of %d * %d : %dn", num1,num2,result) } function all(){ add(30,10) mul(5,6) z_array() } BEGIN { print "First BEGIN statement" print "=====================" } END { print "First END statement " print "=====================" } /maruti/{print $0 } BEGIN { print "Second BEGIN statement" print "=====================" all() } END { print "Second END statement" print "=====================" } function add(num1, num2){ result = num1 + num2 printf ("Addition of %d + %d : %dn", num1,num2,result) } $ awk -- prof -f codeprof.awk cars.dat The output of the previous code is as follows: First BEGIN statement ===================== Second BEGIN statement ===================== Addition of 30 + 10 : 40 Multiplication of 5 * 6 : 30 Array begins...! ===================== 1 audi 2 bmw 3 ferrari 4 toyota 5 volvo Array Ends...! ===================== maruti swift 2007 50000 5 maruti dezire 2009 3100 6 maruti swift 2009 4100 5 maruti esteem 1997 98000 1 First END statement ===================== Second END statement ===================== Execution of the previous program also creates a file with the name awkprof.out. 
If we want to create this profile file with a custom name, then we can specify the filename as an argument to the --profile option as follows: $ awk --prof=codeprof.prof -f codeprof.awk cars.dat Now, upon execution of the preceding code we get a new file with the name codeprof.prof. Let's try to understand the contents of the file codeprof.prof created by the profiles as follows: # gawk profile, created Fri Feb 9 11:01:41 2018 # BEGIN rule(s) BEGIN { 1 print "First BEGIN statement" 1 print "=====================" } BEGIN { 1 print "Second BEGIN statement" 1 print "=====================" 1 all() } # Rule(s) 12 /maruti/ { # 4 4 print $0 } # END rule(s) END { 1 print "First END statement " 1 print "=====================" } END { 1 print "Second END statement" 1 print "=====================" } # Functions, listed alphabetically 1 function add(num1, num2) { 1 result = num1 + num2 1 printf "Addition of %d + %d : %dn", num1, num2, result } 1 function all() { 1 add(30, 10) 1 mul(5, 6) 1 z_array() } 1 function mul(num1, num2) { 1 result = num1 * num2 1 printf "Multiplication of %d * %d : %dn", num1, num2, result } 1 function z_array() { 1 arr[30] = "volvo" 1 arr[10] = "bmw" 1 arr[20] = "audi" 1 arr[50] = "toyota" 1 arr["car"] = "ferrari" 1 n = asort(arr) 1 print "Array begins...!" 1 print "=====================" 5 for (v in arr) { 5 print v, arr[v] } 1 print "Array Ends...!" 1 print "=====================" } This profiling example explains the various basic features of profiling in GAWK. They are as follows: The first look at the file from top to bottom explains the order of the program in which various rules are executed. First, the BEGIN rules are listed followed by the BEGINFILE rule, if any. Then pattern-action rules are listed. Thereafter, ENDFILE rules and END rules are printed. Finally, functions are listed in alphabetical order. Multiple BEGIN and END rules retain their places as separate identities. The same is also true for the BEGINFILE and ENDFILE rules. The pattern-action rules have two counts. The first number, to the left of the rule, tells how many times the rule's pattern was tested for the input file/record. The second number, to the right of the rule's opening left brace, with a comment, shows how many times the rule's action was executed when the rule evaluated to true. The difference between the two indicates how many times the rules pattern evaluated to false. If there is an if-else statement then the number shows how many times the condition was tested. At the right of the opening left brace for its body is a count showing how many times the condition was true. The count for the else statement tells how many times the test failed.  The count at the beginning of a loop header (for or while loop) shows how many times the loop conditional-expression was executed. In user-defined functions, the count before the function keyword tells how many times the function was called. The counts next to the statements in the body show how many times those statements were executed. The layout of each block uses C-style tabs for code alignment. Braces are used to mark the opening and closing of a code block, similar to C-style. Parentheses are used as per the precedence rule and the structure of the program, but only when needed. Printf or print statement arguments are enclosed in parentheses, only if the statement is followed by redirection. 
GAWK also gives leading comments before rules, such as before BEGIN and END rules, BEGINFILE and ENDFILE rules, and pattern-action rules and before functions. GAWK provides standard representation in a profiled version of the program. GAWK also accepts another option, --pretty-print. The following is an example of a pretty-printing AWK program: $ awk --pretty-print -f codeprof.awk cars.dat When GAWK is called with pretty-print, the program generates awkprof.out, but this time without any execution counts in the output. Pretty-print output also preserves any original comments if they are given in a program while the profile option omits the original program’s comments. The file created on execution of the program with --pretty-print option is as follows: # gawk profile, created Fri Feb 9 11:04:19 2018 # BEGIN rule(s) BEGIN { print "First BEGIN statement" print "=====================" } BEGIN { print "Second BEGIN statement" print "=====================" all() } # Rule(s) /maruti/ { print $0 } # END rule(s) END { print "First END statement " print "=====================" } END { print "Second END statement" print "=====================" } # Functions, listed alphabetically function add(num1, num2) { result = num1 + num2 printf "Addition of %d + %d : %dn", num1, num2, result } function all() { add(30, 10) mul(5, 6) z_array() } function mul(num1, num2) { result = num1 * num2 printf "Multiplication of %d * %d : %dn", num1, num2, result } function z_array() { arr[30] = "volvo" arr[10] = "bmw" arr[20] = "audi" arr[50] = "toyota" arr["car"] = "ferrari" n = asort(arr) print "Array begins...!" print "=====================" for (v in arr) { print v, arr[v] } print "Array Ends...!" print "=====================" } To summarize, we looked at the basics of network programming and GAWK's built-in command line debugger. Do check out the book Learning AWK Programming to know more about the intricacies of AWK programming for text processing. 20 ways to describe programming in 5 words What is Mob Programming?

A really basic guide to batch file programming

Richard Gall
31 May 2018
3 min read
Batch file programming is a way of making a computer do things simply by creating, yes, you guessed it, a batch file. It's a way of doing things you might ordinarily do in the command prompt, but it automates some tasks, which means you don't have to write so much code. If it sounds straightforward, that's because it is, generally. Which is why it's worth learning...

Batch file programming is a good place to start learning how computers work

Of course, if you already know your way around batch files, I'm sure you'll agree it's a good way for someone relatively experienced in software to get to know their machine a little better. If you know someone that you think would get a lot from learning batch file programming, share this short guide with them!

Why would I write a batch script?

There are a number of reasons you might write batch scripts. It's particularly useful for resolving network issues, installing a number of programs on different machines, even organizing files and folders on your computer. Imagine you have a recurring issue; with a batch file you can solve it quickly and easily wherever you are, without having to write copious lines of code in the command line. Or maybe your desktop simply looks like a mess; with a little knowledge of batch file programming you can clean things up without too much effort.

How to write a batch file

Clearly, batch file programming can make your life a lot easier. Let's take a look at the key steps to begin writing batch scripts.

Step 1: Open your text editor

Batch file programming is really about writing commands - so you'll need your text editor open to begin. Notepad, WordPad, it doesn't matter!

Step 2: Begin writing code

As we've already seen, batch file programming is really about writing commands for your computer. The code is essentially the same as what you would write in the command prompt. Here are a few batch file commands you might want to know to get started:

ipconfig - presents network information like your IP and MAC address.
start "" [website] - opens a specified website in your browser.
rem - used if you want to make a comment or remark in your code (that is, for documentation purposes).
pause - as you'd expect, pauses the script so it can be read before it continues.
echo - displays text in the command prompt.
%%a - refers to every file in a given folder.
if - a conditional command.

The list of batch file commands is pretty long. There are plenty of other resources with an exhaustive list of commands you can use, but a good place to begin is this page on Wikipedia.

Step 3: Save your batch file

Once you've written your commands in the text editor, you'll then need to save your document as a batch file. Title it, and suffix it with the .bat extension. You'll also need to make sure save as type is set to 'All files'.

That's basically it when it comes to batch file programming. Of course, there are some complex things you can do, but once you know the basics, getting into the code is where you can start to experiment. A small end-to-end example is sketched below.

Read next: Jupyter and Python scripting, Python Scripting Essentials
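Putting the three steps above together, here is a small example batch file that tidies up a desktop by moving loose .txt files into a folder. The folder name and messages are made up purely for illustration; adjust the paths to suit your own machine.

@echo off
rem Tidy the desktop by moving loose text files into a single folder
echo Cleaning up the desktop...

rem Create the target folder if it does not already exist
if not exist "%USERPROFILE%\Desktop\TextFiles" mkdir "%USERPROFILE%\Desktop\TextFiles"

rem Move every .txt file found on the desktop into that folder
for %%a in ("%USERPROFILE%\Desktop\*.txt") do move "%%a" "%USERPROFILE%\Desktop\TextFiles"

echo Done.
pause

Save it with a name such as cleanup.bat (remembering to set the file type to 'All files') and double-click it to run.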

Tips and tricks to optimize your responsive web design

Sugandha Lahoti
31 May 2018
11 min read
Loosely put, website optimization refers to the activities and processes that improve your website's user experience and visibility while reducing the costs associated with hosting your website. In this article, we will learn tips and techniques for client-side optimization. This article is an excerpt from Mastering Bootstrap 4 - Second Edition by Benjamin Jakobus and Jason Marah. In this book, you will learn to build a customized Bootstrap website from scratch, optimize your website, and integrate it with third-party frameworks.

CSS optimization

Before we even consider compression, minification, and file concatenation, we should think about the ways in which we can simplify and optimize our existing style sheet without using third-party tools. Of course, we should have striven for an optimal style sheet to begin with, and in many aspects we did. However, our style sheet still leaves room for improvement.

Inline styles are bad

After reading this article, if you only remember one thing, then let it be that inline styles are bad. Period. Avoid using them whenever possible. Why? That's because not only will they make your website impossible to maintain as the website grows, they also take up precious bytes as they force you to repeat the same rules over and over. Consider the following code piece:

<div class="carousel-inner" role="listbox">
    <div style="height: 400px" class="carousel-item active">
        <img class="d-block img-fluid" src="images/brazil.png" data-modal-picture="#carousel-modal">
        <div class="carousel-caption">
            Brazil
        </div>
    </div>
    <div style="height: 400px" class="carousel-item">
        <img class="d-block img-fluid" src="images/datsun.png" data-modal-picture="#carousel-modal">
        <div class="carousel-caption">
            Datsun 260Z
        </div>
    </div>
    <div style="height: 400px" class="carousel-item">
        <img class="d-block img-fluid" src="images/skydive.png" data-modal-picture="#carousel-modal">
        <div class="carousel-caption">
            Skydive
        </div>
    </div>
</div>

Note how the rule for defining an item's height, style="height: 400px", is repeated three times, once for each of the three items. That's an additional 21 characters (or 21 bytes, assuming that our document is UTF-8) for each additional image. Multiplying 3*21 gives us 63 bytes, and 21 more bytes for every new image that you want to add. Not to mention that if you ever want to update the height of the images, you will need to manually update the style attribute for every single image. The solution is, of course, to replace the inline styles with an appropriate class. Let's go ahead and define a class rule that applies to every carousel item:

.carousel-item {
    height: 400px;
}

Now let's go ahead and remove the style attributes from the markup:

<div class="carousel-inner" role="listbox">
    <div class="carousel-item active">
        <img class="d-block img-fluid" src="images/brazil.png" data-modal-picture="#carousel-modal">
        <div class="carousel-caption">
            Brazil
        </div>
    </div>
    <div class="carousel-item">
        <img class="d-block img-fluid" src="images/datsun.png" data-modal-picture="#carousel-modal">
        <div class="carousel-caption">
            Datsun 260Z
        </div>
    </div>
    <div class="carousel-item">
        <img class="d-block img-fluid" src="images/skydive.png" data-modal-picture="#carousel-modal">
        <div class="carousel-caption">
            Skydive
        </div>
    </div>
</div>

That's great!
Not only is our CSS now easier to maintain, but we also shaved 29 bytes off our website (the original inline styles required 63 bytes; our new class definition, however, requires only 34 bytes). Yes, this does not seem like much, especially in the world of high-speed broadband, but remember that your website will grow and every byte adds up. Avoid Long identifiers and class names The longer your strings, the larger your files. It's a no-brainer. As such, long identifier and class names naturally increase the size of your web page. Of course, extremely short class or identifier names tend to lack meaning and therefore will make it more difficult (if not impossible) to maintain your page. As such, one should strive for an ideal balance between length and expressiveness. Of course, even better than shortening identifiers is removing them altogether. One handy technique of removing these is to use hierarchical selection. Have a look at an events pagination code piece. For example, we are using the services-events-content identifier within our pagination logic, as follows: $('#services-events-pagination').bootpag({ total: 10 }).on("page", function(event, num){ $('#services-events-content div').hide(); var current_page = '#page-' + num; $(current_page).show(); }); To denote the services content, we broke the name of our identifier into three parts, namely, services, events, and content. Our markup is as follows: <div id="services-events-content"> <div id="page-1"> <h3>My Sample Event #1</h3> ... </div> </div> Let's try and get rid of this identifier altogether by observing two characteristics of our Events section: The services-events-content is an indirect descendent of a div with the id services-events. We cannot remove this id as it is required for the menu to work. The element with the id services-events-content is itself a div. If we were to remove its id, we could also remove the entire div. As such, we do not need a second identifier to select the pages that we wish to hide. Instead, all that we need to do is select the div within the div that is within the div that is assigned the id services-events. How do we express this as a CSS selector? It's easy—use #services-events div div div. Also, as such, our pagination logic is updated as follows: $('#services-events-pagination').bootpag({ total: 10 }).on("page", function(event, num){ $('#services-events div div div').hide(); var current_page = '#page-' + num; $(current_page).show(); }); Now, save and refresh. What's that? As you clicked on a page, the pagination control disappeared; that's because we are now hiding all div elements that are two div elements down from the element with the id services-events. Move the pagination control div outside its parent element. Our markup should now look as follows: <div role="tabpanel" class="tab-pane active" id="services-events"> <div class="container"> <div class="row"> <div id="page-1"> <h3>My Sample Event #1</h3> <h3>My Sample Event #2</h3> </div> <div id="page-2"> <h3>My Sample Event #3</h3> </div> </div> <div id="services-events-pagination"></div> </div> </div> Now save and refresh. That's better! Last but not least, let's update the css. 
Take the following code into consideration:

#services-events-content div {
  display: none;
}
#services-events-content div img {
  margin-top: 0.5em;
  margin-right: 1em;
}
#services-events-content {
  height: 15em;
  overflow-y: scroll;
}

Replace this code with the following:

#services-events div div div {
  display: none;
}
#services-events div div div img {
  margin-top: 0.5em;
  margin-right: 1em;
}
#services-events div div div {
  height: 15em;
  overflow-y: scroll;
}

That's it, we have simplified our style sheet and saved some bytes in the process! However, we have not really improved the performance of our selector. jQuery executes selectors from right to left, hence executing the last selector first. In this example, jQuery will first scan the complete DOM to discover all div elements (last selector executed first), then apply a filter to return only those elements that are div with a div parent, and finally keep only the ones that have the element with the id services-events as an ancestor. While we can't really improve the performance of the selector in this case, we can still simplify our code further by adding a class to each page:

<div id="page-1" class="page">...</div>
<div id="page-2" class="page">...</div>
<div id="page-3" class="page">...</div>

Then, all we need to do is select by the given class: $('#services-events div.page').hide();. Alternatively, knowing that this is equal to the DOM element within the .on callback, we can do the following in order to prevent us from iterating through the whole DOM: $(this).parents('#services-events').find('.page').hide(); The final code will look as follows:

$('#services-events-pagination').bootpag({
  total: 10
}).on("page", function(event, num) {
  $(this).parents('#services-events').find('.page').hide();
  $('#page-' + num).show();
});

Note a micro-optimization in the preceding code: there was no need for us to create that var in memory. Hence, the last line changes to $('#page-' + num).show();.
To prevent us from repeating these declarations, we should simply group them (reducing the code from 274 characters to 181 characters): .navbar-myphoto .dropdown-menu > a:hover, .navbar-myphoto .dropdown-menu > a:focus, .navbar-myphoto .dropdown-menu > .active > a:focus { color: gray; background-color: #504747; } Voilà! We just saved 93 bytes! (assuming UTF-8 encoding). Rendering times When optimizing your style rules, the number of bytes should not be your only concern. In fact, it comes secondary to the rendering time of your web page. CSS rules affect the amount of work that is required by the browser to render your page. As such, some rules are more expensive than others. For example, changing the color of an element is cheaper than changing its margin. The reason for this is that a change in color only requires your browser to draw the new pixels. While drawing itself is by no means a cheap operation, changing the margin of an element requires much more effort. Your browser needs to both recalculate the page layout and also draw the changes. Optimizing your page's rendering times is a complex topic, and as such is beyond the scope of this post. However, we recommend that you take a look at http://csstriggers.com/. This site provides a concise overview of the costs involved when updating a given CSS property. Minifying CSS and JavaScript Now it is time to look into minification. Minification is the process of removing redundant characters from a file without altering the actual information contained within it. In other words, minifying the css file will reduce its overall size, while leaving the actual CSS style rules intact. This is achieved by stripping out any whitespace characters within our file. Stripping out whitespace characters has the obvious result that our CSS is now practically unreadable and impossible to maintain. As such, minified style sheets should only be used when serving a page (that is, during production), and not during development. Clearly, minifying your style sheet manually would be an incredibly time-consuming (and hence pointless) task. Therefore, there exist many tools that will do the job for us. One such tool is npm minifier. Visit https://www.npmjs.com/package/minifier for more. Let's go ahead and install it: sudo npm install -g minifier Once installed, we can minify our style sheet by typing the following command: minify path-to-myphoto.css Here, path-to-myphoto.css represents the path to the MyPhoto style sheet. Go ahead and execute the command. Once minification is complete, you should see the Minification complete message. A new CSS file (myphoto.min.css) will have been created inside the directory containing the myphoto.css file. The new file should be 2,465 bytes. Our original myphoto.css file is 3,073 bytes. Minifying our style sheet just reduced the number of bytes to send by roughly 19%! We touched upon the basics of website optimization and testing. In the follow-up article, we will see how to use the build tool Grunt to automate the more common and mundane optimization tasks. To build responsive, dynamic, and mobile-first applications on the web with Bootstrap 4, check out this book  Mastering Bootstrap 4 - Second Edition. Get ready for Bootstrap v4.1; Web developers to strap up their boots How to use Bootstrap grid system for responsive website design? Bootstrap 4 Objects, Components, Flexbox, and Layout


We must change how we think about AI, urge AI founding fathers

Neil Aitken
31 May 2018
9 min read
In Manhattan, nearly 15,000 taxis make around 30 journeys each, per day. That's nearly half a million paid trips. The yellow cabs are part of the never-ending, slow progression of vehicles which churn through the streets of New York. The good news is, after a century of worsening traffic, congestion is about to be ameliorated, at least to a degree.

Researchers at MIT announced this week that they have developed an algorithm to optimise the way taxis find their customers. Their product is allegedly so efficient that it can reduce the required number of cabs (for now, the ones with human drivers) in Manhattan by a third. That's a non-trivial improvement. The trick, apparently, is to use the cabs as a hustler might cue the ball in pool, lining up the next pick-up to start where the last drop-off ended.

The technology behind the improvement offered by the MIT research team is the same one that is behind most of the incredible technology news stories of the last three years: Artificial Intelligence. AI is now a part of most of the digital interactions we have. It fuels the recommendation engines in YouTube, Spotify and Netflix. It shows you products you might like in Google's search results and on Amazon's homepage. Undoubtedly, AI is the hot topic of the time, as you cannot possibly have failed to notice.

How AI was created – and nearly died

AI was, until recently, a long forgotten scientific curiosity, employed seriously only in sci-fi movies. The technology fell into a 'winter', a time when AI-related projects couldn't get funding and decision makers had given up on the technology, in the late 1980s. It was at that time that much of the fundamental work which underpins today's AI, concepts like neural networks and backpropagation, was codified.

Artificial Intelligence is now enjoying a rebirth. Almost every new idea funded by venture capitalists has AI baked in. The potential excites business owners, especially those involved in the technology sphere, and scares governments in equal measure. It offers better profits and the potential for mass unemployment as if they are two sides of the same coin. It is a once-in-a-generation technology improvement, similar to air conditioning, the mass-produced motor car and the smartphone, in that it can be applied to all aspects of the economy at the same time. Just as the iPhone has propelled telecommunications technology forward and created billions of dollars of sales for phone companies selling mobile data plans, AI is fueling totally new businesses and making existing operations significantly more efficient.

Behind the fanfare associated with AI, however, lies a simple truth. Today's AI algorithms use what's called 'narrow' or 'domain specific' intelligence. In simple terms, each current AI implementation is specific to the job it is given. IBM trained their AI system 'Watson' to beat human contestants at 'Jeopardy!'. When Google want to build an 'AI product' that can beat a living counterpart at the Chinese board game 'Go', they create a new AI system. And so on. A new task requires a new AI system.

Judea Pearl, inventor of Bayesian networks and Turing Awardee
On AI systems that can move from predicting what will happen to what will cause something

Now, one of the people behind those original concepts from the 1980s which underpin today's AI solutions is back with an even bigger idea that might push AI forward.
Judea Pearl, Chancellor's professor of computer science and statistics at UCLA, and a distinguished visiting professor at the Technion, Israel Institute of Technology, was awarded the Turing Award 30 years ago. The award was given to him for the Bayesian mathematical models which gave modern AI its strength. Pearl's fundamental contribution to computer science was in providing the logic and decision-making framework for computers to operate under uncertainty. Some say it was he who provided the spark which thawed that AI winter. Today, he laments the current state of AI, concerned that the field has evolved very little in the three decades since his important theory was presented.

Pearl likens current AI implementations to simple tools which can tell you what's likely to come next, based on the recognition of a familiar pattern. For example, a medical AI algorithm might be able to look at X-rays of a human chest and 'discern' that the patient has, or does not have, lung cancer based on patterns it has learnt from its training datasets. The AI in this scenario doesn't 'know' what lung cancer is or what a tumor is. Importantly, it is a very long way from understanding that smoking can cause the affliction.

What's needed in AI next, says Pearl, is a critical difference: AIs which are evolved to the point where they can determine not just what will happen next, but what will cause it. It's a fundamental improvement, of the same magnitude as his earlier contributions. Causality – what Pearl is proposing – is one of the most basic units of scientific thought and progress. The ability to conduct a repeatable experiment, showing that A caused B, in multiple locations, and have independent peers review the results is one of the fundamentals of establishing truth. In his most recent publication, 'The Book of Why', Pearl outlines how we can get AI from where it is now to where it can develop an understanding of these causal relationships. He believes the first step is to cement the building blocks of reality – 'what is a lung', 'what is smoke' – and that we'll be able to do that in the next 10 years.

Geoff Hinton, Inventor of backprop and capsule nets
On AI which more closely mimics the human brain

Geoff Hinton was the mind behind backpropagation, another of the fundamental technologies which has brought AI to the point it is at today. To progress AI, however, he says we might have to start all over again. Hinton has developed (and produced two papers for the University of Toronto to articulate) a new way of training AI systems, involving something he calls 'Capsule Networks' – a concept he's been working on for 30 years, in an effort to improve the capabilities of the backpropagation algorithms he developed.

Capsule networks operate in a manner similar to the human brain. When we see an image, our brain breaks it down into its components and processes them in parallel. Some brain neurons recognise edges through contrast differences. Others look for corners by examining the points at which edges intersect. Capsule networks are similar, several acting on a picture at one time, identifying, for example, an ear or a nose on an animal, irrespective of the angle from which it is being viewed. This is a big deal: CNNs (convolutional neural networks), the set of AI algorithms most often used in image and video recognition systems, can recognize images as well as humans do, but they find it hard to recognize images if the viewing angle is changed.
It’s too early to judge whether capsule networks are the key to the next step in the AI revolution, but in many tasks, capsule networks are identifying images faster and more accurately than current capabilities allow.

Andrew Ng, Chief Scientist at Baidu
On AI that can learn without humans

Andrew Ng is the co-inventor of Google Brain, the team and project that Alphabet put together in 2011 to explore Artificial Intelligence. He now works for Baidu, China's most successful search engine – analogous in size and scope to Google in the rest of the world. At the moment, he heads up Baidu's Silicon Valley AI research facility. Beyond concerns over potential job displacement caused by AI, an issue so significant he says it is perhaps all we should be thinking about when it comes to Artificial Intelligence, he suggests that, in the future, the most progress will be made when AI systems can teach themselves without human involvement.

At the moment, training an AI, even on something that, to us, is simple, such as what a cat looks like, is a complicated process. The procedure involves 'supervised learning.' It's shown a lot of pictures (when they did this at Google, they used 10 million images), some of which are cats - labelled appropriately by humans. Once a sufficient level of 'education' has been undertaken, the AI can then accurately label cats, most of the time. Ng thinks supervision is problematic; he describes it as having an Achilles heel in the form of the quantity of data that is required. To go beyond current capabilities, says Ng, will require a completely new type of technology – one which can learn through 'unsupervised learning', that is, machines learning from data that has not been classified by humans.

Progress on unsupervised learning is slow. At both Baidu and Google, engineers are focussing on constrained versions of unsupervised learning, such as training AI systems to learn about a human face and then using them to create a face themselves. The activity requires that the AI develops what we would call an 'internal representation' of a face – something which is required in any unsupervised learning. Other avenues to train without supervision include, ingeniously, pitting an AI system against a computer game – an environment in which it receives feedback (through points awarded in the game) for 'constructive' activities, but within which it is not taught directly by a human.

Next generation AI depends on 'scrubbing away' existing assumptions

Artificial Intelligence, as it stands, will deliver economy-wide efficiency improvements, the likes of which we have not seen in decades. It seems incredible to think that the field is still in its infancy when it can deliver such substantial benefits – like reduced traffic congestion, lower carbon emissions and saved time in New York taxis. But it is. Isaac Asimov, who developed his own ideas about how Artificial Intelligence might be governed by simple rules, said "Your assumptions are your windows on the world. Scrub them off every once in a while, or the light won't come in." The author should rest assured. Between them, Pearl, Hinton and Ng are each taking revolutionary approaches to elevate AI beyond even the incredible heights it has reached, and starting without reference to the concepts which have brought us this far.
5 polarizing Quotes from Professor Stephen Hawking on artificial intelligence
Toward Safe AI – Maximizing your control over Artificial Intelligence
Decoding the Human Brain for Artificial Intelligence to make smarter decisions

Java Multithreading: How to synchronize threads to implement critical sections and avoid race conditions

Fatema Patrawala
30 May 2018
13 min read
One of the most common situations in concurrent programming occurs when more than one execution thread shares a resource. In a concurrent application, it is normal for multiple threads to read or write the same data structure or have access to the same file or database connection. These shared resources can provoke error situations or data inconsistency, and we have to implement some mechanism to avoid these errors. Such situations are called race conditions, and they occur when different threads have access to the same shared resource at the same time. The final result then depends on the order of execution of the threads, and most of the time it is incorrect. You can also have problems with change visibility: if a thread changes the value of a shared variable, the change may only be written to the local cache of that thread; other threads will not see the change (they will only be able to see the old value).

We present to you a Java multithreading tutorial taken from the book Java 9 Concurrency Cookbook - Second Edition, written by Javier Fernández González.

The solution to these problems lies in the concept of a critical section. A critical section is a block of code that accesses a shared resource and can't be executed by more than one thread at the same time. To help programmers implement critical sections, Java (and almost all programming languages) offers synchronization mechanisms. When a thread wants access to a critical section, it uses one of these synchronization mechanisms to find out whether any other thread is executing the critical section. If not, the thread enters the critical section. If so, the thread is suspended by the synchronization mechanism until the thread that is currently executing the critical section ends it. When more than one thread is waiting for a thread to finish the execution of a critical section, the JVM chooses one of them and the rest wait for their turn.

The Java language offers two basic synchronization mechanisms:

The synchronized keyword
The Lock interface and its implementations

In this article, we explore the use of the synchronized keyword as a synchronization mechanism in Java. So let's get started:

Synchronizing a method

In this recipe, you will learn how to use one of the most basic methods of synchronization in Java, that is, the use of the synchronized keyword to control concurrent access to a method or a block of code. All synchronized statements (used on methods or blocks of code) use an object reference. Only one thread can execute a method or block of code protected by the same object reference. When you use the synchronized keyword with a method, the object reference is implicit. When you use the synchronized keyword in one or more methods of an object, only one execution thread will have access to all these methods. If another thread tries to access any method declared with the synchronized keyword on the same object, it will be suspended until the first thread finishes the execution of the method. In other words, every method declared with the synchronized keyword is a critical section, and Java only allows the execution of one of the critical sections of an object at a time. In this case, the object reference used is the object itself, represented by the this keyword. Static methods have a different behavior.
Only one execution thread will have access to one of the static methods declared with the synchronized keyword, but a different thread can access other non-static methods of an object of that class. You have to be very careful with this point because two threads can access two different synchronized methods if one is static and the other is not. If both methods change the same data, you can have data inconsistency errors. In this case, the object reference used is the class object. When you use the synchronized keyword to protect a block of code, you must pass an object reference as a parameter. Normally, you will use the this keyword to reference the object that executes the method, but you can use other object references as well. Normally, these objects will be created exclusively for this purpose. You should keep the objects used for synchronization private. For example, if you have two independent attributes in a class shared by multiple threads, you must synchronize access to each variable; however, it wouldn't be a problem if one thread is accessing one of the attributes and the other accessing a different attribute at the same time. Take into account that if you use the own object (represented by the this keyword), you might interfere with other synchronized code (as mentioned before, the this object is used to synchronize the methods marked with the synchronized keyword). In this recipe, you will learn how to use the synchronized keyword to implement an application simulating a parking area, with sensors that detect the following: when a car or a motorcycle enters or goes out of the parking area, an object to store the statistics of the vehicles being parked, and a mechanism to control cash flow. We will implement two versions: one without any synchronization mechanisms, where we will see how we obtain incorrect results, and one that works correctly as it uses the two variants of the synchronized keyword. The example of this recipe has been implemented using the Eclipse IDE. If you use Eclipse or a different IDE, such as NetBeans, open it and create a new Java project. How to do it... Follow these steps to implement the example: First, create the application without using any synchronization mechanism. Create a class named ParkingCash with an internal constant and an attribute to store the total amount of money earned by providing this parking service: public class ParkingCash { private static final int cost=2; private long cash; public ParkingCash() { cash=0; } Implement a method named vehiclePay() that will be called when a vehicle (a car or motorcycle) leaves the parking area. It will increase the cash attribute: public void vehiclePay() { cash+=cost; } Finally, implement a method named close() that will write the value of the cash attribute in the console and reinitialize it to zero: public void close() { System.out.printf("Closing accounting"); long totalAmmount; totalAmmount=cash; cash=0; System.out.printf("The total amount is : %d", totalAmmount); } } Create a class named ParkingStats with three private attributes and the constructor that will initialize them: public class ParkingStats { private long numberCars; private long numberMotorcycles; private ParkingCash cash; public ParkingStats(ParkingCash cash) { numberCars = 0; numberMotorcycles = 0; this.cash = cash; } Then, implement the methods that will be executed when a car or motorcycle enters or leaves the parking area. 
When a vehicle leaves the parking area, cash should be incremented:

public void carComeIn() {
  numberCars++;
}

public void carGoOut() {
  numberCars--;
  cash.vehiclePay();
}

public void motoComeIn() {
  numberMotorcycles++;
}

public void motoGoOut() {
  numberMotorcycles--;
  cash.vehiclePay();
}

Finally, implement two methods to obtain the number of cars and motorcycles in the parking area, respectively.

Create a class named Sensor that will simulate the movement of vehicles in the parking area. It implements the Runnable interface and has a ParkingStats attribute, which will be initialized in the constructor:

public class Sensor implements Runnable {
  private ParkingStats stats;

  public Sensor(ParkingStats stats) {
    this.stats = stats;
  }

Implement the run() method. In this method, simulate that two cars and a motorcycle enter and then leave the parking area. Every sensor will perform this action 10 times:

@Override
public void run() {
  for (int i = 0; i < 10; i++) {
    stats.carComeIn();
    stats.carComeIn();
    try {
      TimeUnit.MILLISECONDS.sleep(50);
    } catch (InterruptedException e) {
      e.printStackTrace();
    }
    stats.motoComeIn();
    try {
      TimeUnit.MILLISECONDS.sleep(50);
    } catch (InterruptedException e) {
      e.printStackTrace();
    }
    stats.motoGoOut();
    stats.carGoOut();
    stats.carGoOut();
  }
}

Finally, implement the main method. Create a class named Main with the main() method. It needs ParkingCash and ParkingStats objects to manage parking:

public class Main {
  public static void main(String[] args) {
    ParkingCash cash = new ParkingCash();
    ParkingStats stats = new ParkingStats(cash);
    System.out.printf("Parking Simulator\n");

Then, create the Sensor tasks. Use the availableProcessors() method (which returns the number of processors available to the JVM, normally equal to the number of cores in the processor) to calculate the number of sensors our parking area will have. Create the corresponding Thread objects and store them in an array:

int numberSensors = 2 * Runtime.getRuntime().availableProcessors();
Thread threads[] = new Thread[numberSensors];
for (int i = 0; i < numberSensors; i++) {
  Sensor sensor = new Sensor(stats);
  Thread thread = new Thread(sensor);
  thread.start();
  threads[i] = thread;
}

Then wait for the finalization of the threads using the join() method:

for (int i = 0; i < numberSensors; i++) {
  try {
    threads[i].join();
  } catch (InterruptedException e) {
    e.printStackTrace();
  }
}

Finally, write the statistics of the parking area:

System.out.printf("Number of cars: %d\n", stats.getNumberCars());
System.out.printf("Number of motorcycles: %d\n", stats.getNumberMotorcycles());
cash.close();
  }
}

In our case, we executed the example on a four-core processor, so we will have eight Sensor tasks. Each task performs 10 iterations, and in each iteration, three vehicles enter the parking area and the same three vehicles go out. Therefore, each Sensor task will simulate 30 vehicles. If everything goes well, the final stats will show the following:

There are no cars in the parking area, which means that all the vehicles that came into the parking area have moved out.
Eight Sensor tasks were executed, where each task simulated 30 vehicles and each vehicle was charged 2 dollars; therefore, the total amount of cash earned was 480 dollars.

When you execute this example, each time you will obtain different results, and most of them will be incorrect. The following screenshot shows an example: We had race conditions, and the different shared variables accessed by all the threads gave incorrect results.
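To see why the numbers come out wrong, it helps to strip the problem down to its core. The following is a minimal sketch, not taken from the book's code (the class name, thread count, and iteration count are illustrative assumptions), showing how unsynchronized updates to a shared counter lose increments when several threads run at once. This is the same kind of race condition the parking simulator suffers from:

public class RaceConditionDemo {

  // Shared, unsynchronized counter. The compound action "read, add, write"
  // behind cash += 2 can interleave between threads, losing updates.
  private static long cash = 0;

  public static void main(String[] args) throws InterruptedException {
    int numberOfThreads = 8;
    Thread[] threads = new Thread[numberOfThreads];

    for (int i = 0; i < numberOfThreads; i++) {
      threads[i] = new Thread(() -> {
        for (int j = 0; j < 100000; j++) {
          cash += 2; // not atomic: two threads can read the same old value
        }
      });
      threads[i].start();
    }

    for (Thread thread : threads) {
      thread.join();
    }

    // Expected: 8 * 100000 * 2 = 1,600,000. With the race condition, the
    // printed total is usually lower and changes from one run to the next.
    System.out.printf("Total cash: %d\n", cash);
  }
}

Moving cash += 2 into a synchronized method (or a synchronized block on a private lock object) makes the total deterministic, which is exactly what the next step of the recipe does with vehiclePay().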
Let's modify the previous code using the synchronized keyword to solve these problems: First, add the synchronized keyword to the vehiclePay() method of the ParkingCash class: public synchronized void vehiclePay() { cash+=cost; } Then, add a synchronized block of code using the this keyword to the close() method: public void close() { System.out.printf("Closing accounting"); long totalAmmount; synchronized (this) { totalAmmount=cash; cash=0; } System.out.printf("The total amount is : %d",totalAmmount); } Now add two new attributes to the ParkingStats class and initialize them in the constructor of the class: private final Object controlCars, controlMotorcycles; public ParkingStats (ParkingCash cash) { numberCars=0; numberMotorcycles=0; controlCars=new Object(); controlMotorcycles=new Object(); this.cash=cash; } Finally, modify the methods that increment and decrement the number of cars and motorcycles, including the synchronized keyword. The numberCars attribute will be protected by the controlCars object, and the numberMotorcycles attribute will be protected by the controlMotorcycles object. You must also synchronize the getNumberCars() and getNumberMotorcycles() methods with the associated reference object: public void carComeIn() { synchronized (controlCars) { numberCars++; } } public void carGoOut() { synchronized (controlCars) { numberCars--; } cash.vehiclePay(); } public void motoComeIn() { synchronized (controlMotorcycles) { numberMotorcycles++; } } public void motoGoOut() { synchronized (controlMotorcycles) { numberMotorcycles--; } cash.vehiclePay(); } Execute the example now and see the difference when compared to the previous version. How it works... The following screenshot shows the output of the new version of the example. No matter how many times you execute it, you will always obtain the correct result: Let's see the different uses of the synchronized keyword in the example: First, we protected the vehiclePay() method. If two or more Sensor tasks call this method at the same time, only one will execute it and the rest will wait for their turn; therefore, the final amount will always be correct. We used two different objects to control access to the car and motorcycle counters. This way, one Sensor task can modify the numberCars attribute and another Sensor task can modify the numberMotorcycles attribute at the same time; however, no two Sensor tasks will be able to modify the same attribute at the same time, so the final value of the counters will always be correct. Finally, we also synchronized the getNumberCars() and getNumberMotorcycles() methods. Using the synchronized keyword, we can guarantee correct access to shared data in concurrent applications. As mentioned at the introduction of this recipe, only one thread can access the methods of an object that uses the synchronized keyword in their declaration. If thread (A) is executing a synchronized method and thread (B) wants to execute another synchronized method of the same object, it will be blocked until thread (A) is finished. But if thread (B) has access to different objects of the same class, none of them will be blocked. When you use the synchronized keyword to protect a block of code, you use an object as a parameter. JVM guarantees that only one thread can have access to all the blocks of code protected with this object (note that we always talk about objects, not classes). We used the TimeUnit class as well. 
The TimeUnit class is an enumeration with the following constants: DAYS, HOURS, MICROSECONDS, MILLISECONDS, MINUTES, NANOSECONDS, and SECONDS. These indicate the units of time we pass to the sleep method. In our case, we let the thread sleep for 50 milliseconds.

There's more...

The synchronized keyword penalizes the performance of the application, so you must only use it on methods that modify shared data in a concurrent environment. If you have multiple threads calling a synchronized method, only one will execute it at a time while the others remain waiting. If the operation doesn't use the synchronized keyword, all the threads can execute the operation at the same time, reducing the total execution time. If you know that a method will not be called by more than one thread, don't use the synchronized keyword. In any case, if the class is designed for multithreaded access, it should always be correct. You must promote correctness over performance. Also, you should include documentation in methods and classes in relation to their thread safety.

You can use recursive calls with synchronized methods. As the thread already has access to the synchronized methods of an object, it can call other synchronized methods of that object, including the method that is being executed. It won't have to acquire access to the synchronized methods again.

We can use the synchronized keyword to protect access to a block of code instead of an entire method. We should use the synchronized keyword in this way to protect access to shared data, leaving the rest of the operations out of this block and obtaining better performance for the application. The objective is to have the critical section (the block of code that can be accessed only by one thread at a time) as short as possible. Also, avoid calling blocking operations (for example, I/O operations) inside a critical section. In this recipe, we used the synchronized keyword to protect access to the instructions that update the vehicle counters and the cash, leaving the longer operations that don't use shared data outside the block. When you use the synchronized keyword in this way, you must pass an object reference as a parameter. Only one thread can access the synchronized code (blocks or methods) of this object. Normally, we will use the this keyword to reference the object that is executing the method:

synchronized (this) {
  // Java code
}

To summarize, we learnt to use the synchronized keyword in Java to implement a synchronization mechanism for multithreaded code. You read an excerpt from the book Java 9 Concurrency Cookbook - Second Edition. This book will help you master the art of fast, effective Java development with the power of concurrent and parallel programming.

Concurrency programming 101: Why do programmers hang by a thread?
How to create multithreaded applications in Qt
Getting Inside a C++ Multithreaded Application


Don't call us ninjas or rockstars, say developers

Richard Gall
30 May 2018
5 min read
Words like 'ninja' and 'rockstar' have been flying around the tech world for some time now. Data revealed by recruitment website Indeed at the end of 2017 showed that the term 'rockstar' has increased in job postings by 19% since 2015. We seem to live in a world where 'sexing up' job roles has become the norm. And when top talent is hard to come by, it makes sense. The words offer some degree of status to candidates and imply the organizations behind them are forward-thinking. But it's starting to get boring. In this year's Skill Up survey, 57% of respondents said they didn't like creative terms like 'rockstar', 'ninja' and 'wizard' - only 26% said they actually liked the terms. While words like these might be boring, they can be harmful too. In an age of spin and fake news, using language to dress up and redefine things can have an impact we might not expect.

Sign up to the Packt Hub weekly newsletter and receive a free PDF of this year's Skill Up report.

Using words like rockstar and ninja pits developers against each other

The industry's insistence on using these words in everything from recruitment to conferences cultivates a bizarre class system within tech. When we start calling people rockstars, it suggests something about the role they play within a company or engineering team. It says 'these people are doing something really exciting' while everyone else, presumably, isn't. While it's true that hierarchies are part and parcel of any modern organization, this superficial labeling isn't helpful. 'Collaboration' and 'agile' are buzzwords that are as overused as ninja and rockstar, but at least they offer something practical and positive. And let's be honest - collaborating is important if we're to build better software and have better working lives.

The unforeseen impact on developer mental health

An unforeseen effect of these words could be a negative impact on mental health in the tech industry. We already know that burnout is becoming a common occurrence, as tech professionals are being overworked and pushed to breaking point. While this is particularly true of startup culture, where engineers are driven to develop products on incredibly tight schedules as owners seek investment and investors look for signs of growth, the trend is spreading more widely. Arguably, the gap between management and engineers is playing into this. The results of innovation look shiny and exciting, but the actual work - which can, as we know, be as repetitive, boring and hard as it can be enjoyable - isn't properly understood.

Ninjas, rockstars and the commodification of technical skill

It has become a truism that communication is important when it comes to tech. Words like ninja and rockstar are making communication hard - they undermine our ability to communicate. They conceal the work that engineers actually do. It's great that they shine a spotlight on technical professionals and skills, but they do so in a way that is actually quite euphemistic. It hints at the value of the skills, but fails to engage with why these skills are important, how you develop them, and how they can be properly leveraged. More specifically, words like 'ninja' and 'rockstar' turn technical expertise into a product. They make skills marketable; they turn knowledge into a commodity. Good for you if that helps you earn a little more money in your next job; even better if it lands you a book deal or a spot speaking at a conference. But all you're really doing is taking advantage of the technical skills bubble.
It won't last forever, and in the long run it probably won't be good news for the industry. Some people will be overvalued, while others will be undervalued.

Ninjas, rockstars and open source culture

These irritating euphemisms come from a number of different sources. Tech recruitment has played a big part, as companies try to attract top tech talent. So has modern corporate culture, which has been trying to loosen its proverbial tie for the last decade. But it's also worth noting that rockstars and ninjas come out of open source culture. This is a culture that is celebrated for its lack of authority. However, with this decline in authority, a space opens up for 'experts' to take the lead. In the past, we might have quaintly referred to these people as 'community figures'. As open source has moved mainstream, the commodification of technical expertise found form in the tech ninja and rockstar. But while rockstars and ninjas appear to be the shining lights of open source culture, they might also be damaging to it. If open source culture is 'led' by a number of people the world begins referring to as rockstars, the very foundations of it begin to move. It's no longer quite as open as it used to be. True, perhaps we need rockstars and ninjas in open source. These are people that evangelize about certain projects. These are people who can offer unique and useful perspectives on important debates and issues in their respective fields, right? Well, sort of. It is important for people to discuss new ideas and pioneer new ways of doing things. But this doesn't mean we need to sex things up. After all, certain people aren't more entitled to an opinion just because they have a book deal. Yes, they have experience, but it's important that communities don't get left behind as the tech industry chases after the dream of relentless innovation. Of course, it's great to talk about it, but we've all got to do the work too.