How-To Tutorials


There and back again: Decrypting Bitcoin's 2017 journey from $1000 to $20000

Ashwin Nair
28 Dec 2017
7 min read
Lately, Bitcoin has emerged as the most popular topic of discussion among colleagues, friends and family. The conversations more or less have a similar theme, filled with inane theories around what Bitcoin is and a growing fear of missing out on Bitcoin mania. To be fair, who would want to miss out on an opportunity to grow their money by more than 1000%? That's the return posted by Bitcoin since the start of the year. At the time of writing this article, Bitcoin is at $15,000 with a market cap of over $250 billion. To put the hype in context, Bitcoin is now valued higher than 90% of the companies in the S&P 500 list.

Supposedly invented by an anonymous group or individual under the alias Satoshi Nakamoto in 2009, Bitcoin has been seen as a digital currency that the internet might not need but one that it deserves. Satoshi's vision was to create a robust electronic payment system that functions smoothly without the need for a trusted third party. This was achievable with the help of Blockchain, a digital ledger which records transactions in an indelible fashion and is distributed across multiple nodes. This ensures that no transaction is altered or deleted, thus completely eliminating the need for a trusted third party.

For all the excitement around Bitcoin and the growing interest in owning one, dissecting its roller coaster journey from the sub-$1000 level to $15,000 this year seemed pretty exciting.

Started the year with a bang: global uncertainties leading to a Bitcoin rally (Oct 2016 to Jan 2017)

Global uncertainties played a big role in driving Bitcoin's price, and boy, 2016 was full of them! From Brexit to Trump winning the US elections, and major economic commotion in the form of the devaluation of China's Yuan and India's demonetization drive, all of it led investors to seek shelter in Bitcoin.

The first blip (Jan 2017 to March 2017)

China as a country has had a major impact in determining Bitcoin's fate. In early 2017, China contributed over 80% of Bitcoin transactions, indicating the amount of power Chinese traders and investors had in controlling Bitcoin's price. However, the People's Bank of China's closer inspection of the exchanges revealed several irregularities in their transactions and business practices, which eventually led to officials halting withdrawals of Bitcoin and taking stringent steps against cryptocurrency exchanges.

Source: Bloomberg

On the path to becoming mainstream and gaining support from Ethereum (Mar 2017 to June 2017)

During this phase, we saw the rise of another cryptocurrency, Ethereum, a close digital cousin of Bitcoin. Like Bitcoin, Ethereum is also built on top of Blockchain technology, allowing it to run a decentralized public network. However, Ethereum's capabilities extend beyond being a cryptocurrency and help developers build and deploy any kind of decentralized application. Ethereum's valuation rose from $20 to $375 in this period, which was quite beneficial for Bitcoin, as every reference to Ethereum mentioned Bitcoin as well, whether it was to explain what Ethereum is or how it could take the number 1 cryptocurrency spot in the future. This, coupled with the rise in Blockchain's popularity, increased Bitcoin's visibility within the USA, and the media started observing politicians, celebrities and other prominent personalities speaking about Bitcoin. Bitcoin also received a major boost from Chinese exchanges, where withdrawals of the cryptocurrency resumed after nearly a four-month freeze.
All these factors led to Bitcoin crossing an all-time high of $2500, up by more than 150% since the start of the year.

The curious case of the fork (June 2017 to September 2017)

The month of July saw the cryptocurrency market cap witness a sharp decline, with questions being raised about the price volatility and whether Bitcoin's rally for the year was over. We can, of course, now confidently debunk that question. Though there hasn't been any proven rationale behind the decline, one of the reasons seems to be profit booking following months of steep rally in Bitcoin's valuation. Another major factor that might have driven the price collapse was an unresolved dispute among leading members of the Bitcoin community over how to overcome the growing problem of Bitcoin being slow and expensive.

With the growing usage of Bitcoin, its performance in terms of transaction time had slowed down. Due to its limited block size, Bitcoin's network could only execute around 7 transactions per second, compared to the VISA network, which can do over 1,600. Transaction fees also increased substantially, to around $5.00 per transaction, with settlement often taking hours and even days. This put Bitcoin's flaws in the spotlight when compared with the cost and transaction times of services offered by competitors such as PayPal.

Source: Coinmarketcap

The poor performance of Bitcoin led investors to opt for other cryptocurrencies. The above graph shows how Bitcoin's dominance fell substantially compared to other cryptocurrencies such as Ethereum and Litecoin during this time. With the core community still unable to come to a consensus on how to improve performance and update the software, the prospect of a "fork" was raised. A fork is a change in the underlying software protocol of Bitcoin that makes previous rules valid or invalid; there are two types of blockchain forks, soft forks and hard forks. Around August, the community announced it would go ahead with a hard fork in the form of Bitcoin Cash. This news was, surprisingly, taken in a positive manner, leading to Bitcoin rebounding strongly and reaching new heights of around $4000.

Once bitten, twice shy: China (September 2017 to October 2017)

Source: Zerohedge

The month of September saw another setback for Bitcoin due to measures taken by the People's Bank of China. This time, the PBoC banned initial coin offerings (ICOs), prohibiting the practice of building and selling cryptocurrency to investors or using it to finance startup projects within China. Based on a report by the National Committee of Experts on Internet Financial Security Technology, Chinese investors were involved in 2.6 billion Yuan worth of ICOs in January-June 2017, reflecting China's exposure to Bitcoin.

My precious (October 2017 to December 2017)

Source: Manmanroe

During the last quarter, Bitcoin's surge has shocked even hardcore Bitcoin fanatics. Everything seems to be going right for Bitcoin at the moment. While at the start of the year China was the major contributor to the hike in Bitcoin's valuation, the momentum now seems to have shifted to a more sensible and responsible market, Japan, which has embraced Bitcoin in quite a positive manner. As you can see from the graph below, Japan now accounts for more than 50% of transactions, compared to the much smaller share of the USA.
Besides Japan, we are also seeing positive developments in countries such as Russia and India, which are looking to legalize cryptocurrency usage. Moreover, the level of interest in Bitcoin from institutional investors is at its peak. All these factors resulted in Bitcoin crossing the five-digit mark for the first time in November 2017 and touching an all-time high of close to $20,000 in December 2017. Since that record high, Bitcoin has been witnessing a crash-and-rebound phenomenon in the last two weeks of December. From a record high of $20,000 to $11,000 and now back at $15,000, Bitcoin is still a volatile digital currency if one is looking for quick price appreciation.

Despite the valuation dilemma and the price volatility, one thing is sure: the world is warming up to the idea of cryptocurrencies and even owning one. There are already several predictions being made about how Bitcoin's astronomical growth is going to continue in 2018. However, Bitcoin needs to overcome several challenges before it can replace traditional currency and be widely accepted in banking practices. Besides the rise of other cryptocurrencies such as Ethereum, Litecoin, and Bitcoin Cash, which are looking to dethrone Bitcoin from the #1 spot, there are broader issues the Bitcoin community should prioritize, such as how to curb the environmental effect of Bitcoin's mining activities, and how to work with countries on smoother reforms and a regulatory roadmap, so that people actually start using Bitcoin instead of just looking at it as a tool for making a quick buck.


All coding and no sleep makes Jack/Jill a dull developer, research confirms

Vincy Davis
03 May 2019
3 min read
In recent years, the software engineering community has been interested in human habits that can play a role in developers' productivity. Researchers D. Fucci (HITeC and the University of Hamburg), G. Scanniello and S. Romano (DiMIE, University of Basilicata), and N. Juristo (Technical University of Madrid) have published a paper, "Need for Sleep: the Impact of a Night of Sleep Deprivation on Novice Developers' Performance", that investigates how sleep deprivation can affect developers' productivity.

What was the experiment?

The researchers performed a quasi-experiment with 45 undergraduate students in Computer Science at the University of Basilicata in Italy. The participants were asked to work on a programming task that required them to use the popular agile practice of test-first development (TFD). The students were divided into two groups: a treatment group of 23 students who were asked to skip sleep the night before the experiment, and a control group of the remaining students, who slept normally the night before. The conceptual model and the operationalization of the constructs investigated is shown below. Image source: Research paper

Outcome of the experiment

The result of the experiment indicated that sleep deprivation has a negative effect on the capacity of software developers to produce a software solution that meets given requirements. In particular, novice developers who forewent one night of sleep wrote code that was approximately 50% more likely not to fulfill the functional requirements than code produced by developers under normal sleep conditions. Another observation was that sleep deprivation decreased developers' productivity on the development task and hindered their ability to apply the test-first development (TFD) practice. The researchers also found that sleep-deprived novice developers had to make more fixes to syntactic mistakes in their source code.

In the wake of this paper, experienced developers are recollecting their earlier sleep-deprived programming days; some are even regretting them. https://twitter.com/zhenghaooo/status/1121937715413434369

Recently, the Chinese '996' work routine has come into the picture, wherein tech companies expect their employees to work from 9 am to 9 pm, 6 days a week, leading to 60+ hours of work per week. This kind of work culture deprives developers of any work-life balance, encourages the habit of skipping sleep, and thus decreases their productivity. A user on Reddit declares sleep to be the key to being a productive coder and not burning out. Another user added, "There's a culture in university computer science around programming for 30+ hours straight (hackathons). I've participated and pulled off some pretty cool things in 48 hours of feverish keyboard whacking and near-constant swearing, but I'd rather stab myself repeatedly with a thumbtack than repeat that experience."

It's high time that companies focus more on the 'quality' of work rather than insisting that developers work long hours, which in turn reduces their productivity. It is clear from this research paper that a single night without sleep can certainly affect one's quality of work. To know more about the experiment, head over to the research paper.
Microsoft and GitHub employees come together to stand with the 996.ICU repository
Jack Ma defends the extreme "996 work culture" in Chinese tech firms
Dorsey meets Trump privately to discuss how to make public conversation "healthier and more civil" on Twitter


Train a convolutional neural network in Keras and improve it with data augmentation [Tutorial]

Amey Varangaonkar
23 Aug 2018
10 min read
In this article, we will see how convolutional layers work and how to use them. We will also see how you can build your own convolutional neural network in Keras to build better, more powerful deep neural networks and solve computer vision problems, and how we can improve this network using data augmentation. For a better understanding of the concepts, we will be using the well-known CIFAR-10 dataset, created by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The following article has been taken from the book Deep Learning Quick Reference, written by Mike Bernico.

Adding inputs to the network

The CIFAR-10 dataset is made up of 60,000 32 x 32 color images that belong to 10 classes, with 6,000 images per class. We'll be using 50,000 images as a training set, 5,000 images as a validation set, and 5,000 images as a test set. The input tensor for the convolutional neural network will be (N, 32, 32, 3), which we will pass to the build_network function. The following code is used to build the network:

def build_network(num_gpu=1, input_shape=None):
    inputs = Input(shape=input_shape, name="input")

Getting the output

The output of this model will be a class prediction, from 0-9, so we will use a 10-node softmax. We will use the following code to define the output:

output = Dense(10, activation="softmax", name="softmax")(d2)

Cost function and metrics

Earlier, we used categorical cross-entropy as the loss function for a multi-class classifier. This is just another multi-class classifier, so we can continue using categorical cross-entropy as our loss function and accuracy as a metric. We've moved on to using images as input, but luckily our cost function and metrics remain unchanged.

Working with convolutional layers

We're going to use two convolutional layers, with batch normalization and max pooling. This is going to require us to make quite a few choices, which of course we could choose to search as hyperparameters later. It's always better to get something working first, though; as the popular computer scientist and mathematician Donald Knuth would say, premature optimization is the root of all evil. We will use the following code snippet to define the two convolutional blocks:

# convolutional block 1
conv1 = Conv2D(64, kernel_size=(3,3), activation="relu", name="conv_1")(inputs)
batch1 = BatchNormalization(name="batch_norm_1")(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2), name="pool_1")(batch1)

# convolutional block 2
conv2 = Conv2D(32, kernel_size=(3,3), activation="relu", name="conv_2")(pool1)
batch2 = BatchNormalization(name="batch_norm_2")(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2), name="pool_2")(batch2)

So, clearly, we have two convolutional blocks here that consist of a convolutional layer, a batch normalization layer, and a pooling layer. In the first block, I'm using 64 3 x 3 filters with relu activations, valid (no) padding, and a stride of 1. Batch normalization doesn't require any parameters and isn't really trainable. The pooling layer is using 2 x 2 pooling windows, valid padding, and a stride of 2 (the dimension of the window). The second block is very much the same; however, I'm halving the number of filters to 32. While there are many knobs we could turn in this architecture, the one I would tune first is the kernel size of the convolutions. Kernel size tends to be an important choice.
In fact, some modern neural network architectures, such as Google's Inception, allow us to use multiple filter sizes in the same convolutional layer.

Getting the fully connected layers

After two rounds of convolution and pooling, our tensors have gotten relatively small and deep. After pool_2, the output dimension is (n, 6, 6, 32). We have, in these convolutional layers, hopefully extracted relevant image features that this 6 x 6 x 32 tensor represents. To classify images using these features, we will connect this tensor to a few fully connected layers before we go to our final output layer. In this example, I'll use a 512-neuron fully connected layer, a 256-neuron fully connected layer, and finally the 10-neuron output layer. I'll also be using dropout to help prevent overfitting, but only a very little bit! The code for this process is given as follows for your reference:

from keras.layers import Flatten, Dense, Dropout

# fully connected layers
flatten = Flatten()(pool2)
fc1 = Dense(512, activation="relu", name="fc1")(flatten)
d1 = Dropout(rate=0.2, name="dropout1")(fc1)
fc2 = Dense(256, activation="relu", name="fc2")(d1)
d2 = Dropout(rate=0.2, name="dropout2")(fc2)

I haven't previously mentioned the flatten layer. The flatten layer does exactly what its name suggests: it flattens the n x 6 x 6 x 32 tensor into an n x 1152 vector, which will serve as the input to the fully connected layers.

Working with multi-GPU models in Keras

Many cloud computing platforms can provision instances that include multiple GPUs. As our models grow in size and complexity, you might want to be able to parallelize the workload across multiple GPUs. This can be a somewhat involved process in native TensorFlow, but in Keras, it's just a function call. Build your model as normal, as shown in the following code:

model = Model(inputs=inputs, outputs=output)

Then, we just pass that model to keras.utils.multi_gpu_model, with the help of the following code:

model = multi_gpu_model(model, num_gpu)

In this example, num_gpu is the number of GPUs we want to use.
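The training code in the next section refers to a data dictionary (data["train_X"], data["val_X"], and so on), the image-size constants, and a callbacks list that this excerpt does not define. As a rough sketch only, assuming the 50,000/5,000/5,000 split described earlier is produced by halving the Keras CIFAR-10 test set, the missing pieces might be prepared like this; the load_data helper and the particular callbacks are illustrative choices of ours, not the author's:

# Hypothetical setup code -- not from the book; shown only so the fit() calls
# below have something concrete to refer to.
from keras.datasets import cifar10
from keras.utils import to_categorical
from keras.callbacks import EarlyStopping, TensorBoard

IMG_HEIGHT, IMG_WIDTH, CHANNELS = 32, 32, 3

def load_data():
    (train_X, train_y), (test_X, test_y) = cifar10.load_data()
    # scale pixels to [0, 1] and one-hot encode the 10 class labels
    train_X = train_X.astype("float32") / 255.0
    test_X = test_X.astype("float32") / 255.0
    train_y = to_categorical(train_y, 10)
    test_y = to_categorical(test_y, 10)
    # assumed split: half of the 10,000 test images used for validation
    return {"train_X": train_X, "train_y": train_y,
            "val_X": test_X[:5000], "val_y": test_y[:5000],
            "test_X": test_X[5000:], "test_y": test_y[5000:]}

data = load_data()

# illustrative callbacks; the author's actual callback list may differ
callbacks = [EarlyStopping(monitor="val_loss", patience=10),
             TensorBoard(log_dir="./logs")]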
Training the model

Putting the model together, and incorporating our new cool multi-GPU feature, we come up with the following architecture:

def build_network(num_gpu=1, input_shape=None):
    inputs = Input(shape=input_shape, name="input")

    # convolutional block 1
    conv1 = Conv2D(64, kernel_size=(3,3), activation="relu", name="conv_1")(inputs)
    batch1 = BatchNormalization(name="batch_norm_1")(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2), name="pool_1")(batch1)

    # convolutional block 2
    conv2 = Conv2D(32, kernel_size=(3,3), activation="relu", name="conv_2")(pool1)
    batch2 = BatchNormalization(name="batch_norm_2")(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2), name="pool_2")(batch2)

    # fully connected layers
    flatten = Flatten()(pool2)
    fc1 = Dense(512, activation="relu", name="fc1")(flatten)
    d1 = Dropout(rate=0.2, name="dropout1")(fc1)
    fc2 = Dense(256, activation="relu", name="fc2")(d1)
    d2 = Dropout(rate=0.2, name="dropout2")(fc2)

    # output layer
    output = Dense(10, activation="softmax", name="softmax")(d2)

    # finalize and compile
    model = Model(inputs=inputs, outputs=output)
    if num_gpu > 1:
        model = multi_gpu_model(model, num_gpu)
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=["accuracy"])
    return model

We can use this to build our model:

model = build_network(num_gpu=1, input_shape=(IMG_HEIGHT, IMG_WIDTH, CHANNELS))

And then we can fit it, as you'd expect:

model.fit(x=data["train_X"], y=data["train_y"],
          batch_size=32,
          epochs=200,
          validation_data=(data["val_X"], data["val_y"]),
          verbose=1,
          callbacks=callbacks)

As we train this model, you will notice that overfitting is an immediate concern. Even with a relatively modest two convolutional layers, we're already overfitting a bit. You can see the effects of overfitting from the following graphs. It's no surprise: 50,000 observations is not a lot of data, especially for a computer vision problem. In practice, computer vision problems benefit from very large datasets. In fact, Chen Sun showed that additional data tends to help computer vision models linearly with the log of the data volume (https://arxiv.org/abs/1707.02968). Unfortunately, we can't really go and find more data in this case. But maybe we can make some. Let's talk about data augmentation next.

Using data augmentation

Data augmentation is a technique where we apply transformations to an image and use both the original image and the transformed images to train on. Imagine we had a training set with a cat in it. If we were to apply a horizontal flip to that image, we'd get a mirrored version of the same cat. It is exactly the same image, of course, but we can use both the original and the transformation as training examples. This isn't quite as good as two separate cats in our training set; however, it does allow us to teach the computer that a cat is a cat regardless of the direction it's facing. In practice, we can do a lot more than just a horizontal flip. We can vertically flip (when it makes sense), shift, and randomly rotate images as well. This allows us to artificially amplify our dataset and make it seem bigger than it is. Of course, you can only push this so far, but it's a very powerful tool in the fight against overfitting when little data exists.

What is the Keras ImageDataGenerator?

Not so long ago, the only way to do image augmentation was to code up the transforms and apply them randomly to the training set, saving the transformed images to disk as we went (uphill, both ways, in the snow).
Luckily for us, Keras now provides an ImageDataGenerator class that can apply transformations on the fly as we train, without having to hand-code the transformations. We can create a data generator object from ImageDataGenerator by instantiating it like this:

def create_datagen(train_X):
    data_generator = ImageDataGenerator(
        rotation_range=20,
        width_shift_range=0.02,
        height_shift_range=0.02,
        horizontal_flip=True)
    data_generator.fit(train_X)
    return data_generator

In this example, I'm using shifts, rotation, and horizontal flips. I'm using only very small shifts; through experimentation, I found that larger shifts were too much and my network wasn't actually able to learn anything. Your experience will vary as your problem does, but I would expect larger images to be more tolerant of shifting. In this case, we're using 32-pixel images, which are quite small.

Training with a generator

If you haven't used a generator before, it works like an iterator. Every time you call the ImageDataGenerator .flow() method, it will produce a new training minibatch, with random transformations applied to the images it was fed. The Keras Model class comes with a .fit_generator() method that allows us to fit with a generator rather than a given dataset:

model.fit_generator(data_generator.flow(data["train_X"], data["train_y"], batch_size=32),
                    steps_per_epoch=len(data["train_X"]) // 32,
                    epochs=200,
                    validation_data=(data["val_X"], data["val_y"]),
                    verbose=1,
                    callbacks=callbacks)

Here, we've replaced the traditional x and y parameters with the generator. Most importantly, notice the steps_per_epoch parameter. You can sample with replacement any number of times from the training set, and you can apply random transformations each time. This means that we can use more minibatches each epoch than we have data. Here, I'm going to only sample as many batches as I have observations, but that isn't required. We can and should push this number higher if we can.

Before we wrap things up, let's look at how beneficial image augmentation is in this case. As you can see, just a little bit of image augmentation really helped us out. Not only is our overall accuracy higher, but our network is overfitting much more slowly. If you have a computer vision problem with just a little bit of data, image augmentation is something you'll want to do.

We saw the benefits and ease of training a convolutional neural network from scratch using Keras and then improving that network using data augmentation. If you found the above article useful, make sure you check out the book Deep Learning Quick Reference for more information on modeling and training various different types of deep neural networks with ease and efficiency.

Top 5 Deep Learning Architectures
CapsNet: Are Capsule networks the antidote for CNNs kryptonite?
What is a CNN?


Migrating a MySQL table using Oracle SQL Developer 1.5

Packt
29 Jan 2010
4 min read
Oracle SQL Developer is a stand-alone graphical database development tool that connects to Oracle as well as third-party databases, and it can be used to perform a variety of tasks, from running simple queries to migrating databases from third-party vendor products to Oracle. This article by Dr. Jayaram Krishnaswamy shows how the reader may use Oracle's most recent tool, Oracle SQL Developer 1.5, to work with the MySQL database. An example of migrating a table in MySQL to Oracle 10G XE is also described.

The Oracle SQL Developer tool has steadily improved from its beginnings in version 1.1; the earlier versions are briefly explained here. The latest version, SQL Developer 1.5.4, released in March 2009, is described in this article. The SQL Developer tool [1.5.4.59.40] bundle can be downloaded from Oracle's web site, Oracle Technology Products. When you unzip the bundle, you are ready to start using the tool. You may get an even more recent version, as it is continuously updated. It is assumed that you have a MySQL server that you can connect to and that you have the required credentials. The MySQL server used in developing this article was installed as part of the XAMPP bundle. Readers will benefit from reading the earlier MySQL articles 1, 2, 3 on the Packt site.

Connecting to MySQL

Out of the box, Oracle SQL Developer 1.5.4 only supports Oracle and MS Access. The product documentation clearly says that it can connect to other database products; this article will show how this is achieved. In order to install products from Oracle, you must have a username and password for an Oracle Web Account.

Bring up the Oracle SQL Developer application by clicking the executable. The program starts up, and after a while the user interface gets displayed as shown. Right-click on Connection; the New Connection page opens as shown, displaying the default connection to the resident Oracle 10G XE server. Click the menu item Help and choose "Check for Updates". This brings up the wizard displaying the Welcome screen as shown in the next figure. Click Next. The "Source" page of the wizard shows up as shown. The update for Oracle SQL Developer is already chosen. Place a check mark against "Third Party SQL Developer Extensions". You can choose to install by looking for updates on the internet or from a downloaded bundle, if it exists. First try the internet, and click Next. This brings up the "Updates" page of the wizard as shown in the next figure. Read the warning on this window: the extensions are available but are not evaluated by Oracle. The details of the available extensions are as follows:

The OrindaBuild Java Code Generator, version 6.1.20090331, shown in the next figure.
The JTDS JDBC Driver, version 11.1.58.17, shown in the next figure.
The MySQL JDBC driver, shown in the next figure.
The last one is a patch for Oracle SQL Developer that fixes some of the import, LDAP, and performance issues, as shown.

For this article, only the JTDS JDBC driver for MS SQL Server and the MySQL JDBC options were checked. The license agreements are for the JTDS drivers. Click Next. The license agreements must be accepted: click I Agree, then click Next. This is the download step of the wizard. To proceed further you must have the Oracle Web Account username and password; here you have the option to sign up as well. After a while, the new extensions are downloaded as shown in the next figure. Click Finish to close the wizard. You need to restart SQL Developer to complete the installation of the extensions.
Click Yes on the "Confirm Exit" window that shows up. Now, when you click New Connection to create a new connection, the "New / Select Database Connection" window is displayed as shown. You can see that other third-party databases have been added to the window. Choose the tab for MySQL. Fill in the required details, as shown in the next figure, appropriate for your MySQL installation. You must provide a name for the connection; here the connection is named My_MySQL. The credentials must be provided as shown, or as appropriate for your installation. The port is the default designated for this server when you install the product. You may accept the other defaults on this page and click Test. The word "success" gets displayed in the status label at the bottom left, and the connection name and connection details get added to the page shown above.


How TFLearn makes building TensorFlow models easier

Savia Lobo
04 Jun 2018
7 min read
Today, we will introduce you to TFLearn, and will create layers and models that are directly beneficial in any model implementation with TensorFlow. TFLearn is a modular library in Python that is built on top of core TensorFlow.

[box type="note" align="" class="" width=""]This article is an excerpt taken from the book Mastering TensorFlow 1.x written by Armando Fandango. In this book, you will learn how to build TensorFlow models to work with multilayer perceptrons using Keras, TFLearn, and R.[/box]

TIP: TFLearn is different from the TensorFlow Learn package, which is also known as TF Learn (with one space between TF and Learn). It is available at the following link, and the source code is available on GitHub.

TFLearn can be installed in Python 3 with the following command:

pip3 install tflearn

Note: To install TFLearn in other environments or from source, please refer to the following link: http://tflearn.org/installation/

The simple workflow in TFLearn is as follows:

1. Create an input layer first.
2. Pass the input object to create further layers.
3. Add the output layer.
4. Create the net using an estimator layer such as regression.
5. Create a model from the net created in the previous step.
6. Train the model with the model.fit() method.
7. Use the trained model to predict or evaluate.

Creating the TFLearn Layers

Let us learn how to create the layers of the neural network models in TFLearn:

1. Create an input layer first:

input_layer = tflearn.input_data(shape=[None, num_inputs])

2. Pass the input object to create further layers:

layer1 = tflearn.fully_connected(input_layer, 10, activation='relu')
layer2 = tflearn.fully_connected(layer1, 10, activation='relu')

3. Add the output layer:

output = tflearn.fully_connected(layer2, n_classes, activation='softmax')

4. Create the final net from an estimator layer such as regression:

net = tflearn.regression(output,
                         optimizer='adam',
                         metric=tflearn.metrics.Accuracy(),
                         loss='categorical_crossentropy')

TFLearn provides several classes for layers, which are described in the following sub-sections.

TFLearn core layers

TFLearn offers the following layers in the tflearn.layers.core module:

input_data: This layer is used to specify the input layer for the neural network.
fully_connected: This layer is used to specify a layer where all the neurons are connected to all the neurons in the previous layer.
dropout: This layer is used to specify dropout regularization. The input elements are scaled by 1/keep_prob while keeping the expected sum unchanged.
custom_layer: This layer is used to specify a custom function to be applied to the input. This class wraps our custom function and presents the function as a layer.
reshape: This layer reshapes the input into the output of the specified shape.
flatten: This layer converts the input tensor to a 2D tensor.
activation: This layer applies the specified activation function to the input tensor.
single_unit: This layer applies the linear function to the inputs.
highway: This layer implements the fully connected highway function.
one_hot_encoding: This layer converts numeric labels to their binary vector one-hot encoded representations.
time_distributed: This layer applies the specified function to each time step of the input tensor.
multi_target_data: This layer creates and concatenates multiple placeholders, specifically used when the layers use targets from multiple sources.
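Before looking at the remaining layer types, here is a minimal end-to-end sketch (ours, not the book's) that strings the seven workflow steps together using TFLearn's bundled MNIST loader; the layer widths, optimizer settings, and epoch count are illustrative choices only:

# Illustrative sketch of the workflow above; hyperparameters are arbitrary.
import tflearn
import tflearn.datasets.mnist as mnist

X, Y, testX, testY = mnist.load_data(one_hot=True)

input_layer = tflearn.input_data(shape=[None, 784])                    # step 1
layer1 = tflearn.fully_connected(input_layer, 10, activation='relu')   # step 2
layer2 = tflearn.fully_connected(layer1, 10, activation='relu')
output = tflearn.fully_connected(layer2, 10, activation='softmax')     # step 3
net = tflearn.regression(output, optimizer='adam',
                         metric=tflearn.metrics.Accuracy(),
                         loss='categorical_crossentropy')              # step 4
model = tflearn.DNN(net)                                               # step 5
model.fit(X, Y, n_epoch=10, batch_size=100,
          validation_set=(testX, testY), show_metric=True)             # step 6
print('Test accuracy:', model.evaluate(testX, testY)[0])               # step 7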
TFLearn convolutional layers

TFLearn offers the following layers in the tflearn.layers.conv module:

conv_1d: This layer applies 1D convolutions to the input data.
conv_2d: This layer applies 2D convolutions to the input data.
conv_3d: This layer applies 3D convolutions to the input data.
conv_2d_transpose: This layer applies the transpose of conv_2d to the input data.
conv_3d_transpose: This layer applies the transpose of conv_3d to the input data.
atrous_conv_2d: This layer computes a 2-D atrous convolution.
grouped_conv_2d: This layer computes a depth-wise 2-D convolution.
max_pool_1d: This layer computes 1-D max pooling.
max_pool_2d: This layer computes 2-D max pooling.
avg_pool_1d: This layer computes 1-D average pooling.
avg_pool_2d: This layer computes 2-D average pooling.
upsample_2d: This layer applies the row- and column-wise 2-D repeat operation.
upscore_layer: This layer implements the upscore as specified in http://arxiv.org/abs/1411.4038.
global_max_pool: This layer implements the global max pooling operation.
global_avg_pool: This layer implements the global average pooling operation.
residual_block: This layer implements the residual block to create deep residual networks.
residual_bottleneck: This layer implements the residual bottleneck block for deep residual networks.
resnext_block: This layer implements the ResNeXt block.

TFLearn recurrent layers

TFLearn offers the following layers in the tflearn.layers.recurrent module:

simple_rnn: This layer implements the simple recurrent neural network model.
bidirectional_rnn: This layer implements the bi-directional RNN model.
lstm: This layer implements the LSTM model.
gru: This layer implements the GRU model.

TFLearn normalization layers

TFLearn offers the following layers in the tflearn.layers.normalization module:

batch_normalization: This layer normalizes the output of activations of previous layers for each batch.
local_response_normalization: This layer implements the LR normalization.
l2_normalization: This layer applies L2 normalization to the input tensors.

TFLearn embedding layers

TFLearn offers only one layer in the tflearn.layers.embedding_ops module:

embedding: This layer implements the embedding function for a sequence of integer IDs or floats.

TFLearn merge layers

TFLearn offers the following layers in the tflearn.layers.merge_ops module:

merge_outputs: This layer merges the list of tensors into a single tensor, generally used to merge output tensors of the same shape.
merge: This layer merges the list of tensors into a single tensor; you can specify the axis along which the merge needs to be done.

TFLearn estimator layers

TFLearn offers only one layer in the tflearn.layers.estimator module:

regression: This layer implements linear or logistic regression.

While creating the regression layer, you can specify the optimizer and the loss and metric functions. TFLearn offers the following optimizer functions as classes in the tflearn.optimizers module: SGD, RMSprop, Adam, Momentum, AdaGrad, Ftrl, AdaDelta, ProximalAdaGrad, and Nesterov.

Note: You can create custom optimizers by extending the tflearn.optimizers.Optimizer base class.

TFLearn offers the following metric functions as classes or ops in the tflearn.metrics module: Accuracy or accuracy_op, Top_k or top_k_op, R2 or r2_op, WeightedR2 or weighted_r2_op, and binary_accuracy_op.

Note: You can create custom metrics by extending the tflearn.metrics.Metric base class.
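As a quick illustration of how these layer classes fit together before we move on to the loss and activation functions, here is a small convolutional network (our example, not the book's) assembled from the core, convolutional, and estimator layers listed above; the filter counts and sizes are arbitrary:

# Illustrative only: a small CNN built from the TFLearn layer classes above.
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression

net = input_data(shape=[None, 28, 28, 1])
net = conv_2d(net, 32, 3, activation='relu')   # 32 filters of size 3 x 3
net = max_pool_2d(net, 2)
net = conv_2d(net, 64, 3, activation='relu')
net = max_pool_2d(net, 2)
net = fully_connected(net, 128, activation='relu')
net = dropout(net, 0.8)                        # keep_prob, per the dropout layer above
net = fully_connected(net, 10, activation='softmax')
net = regression(net, optimizer='adam', loss='categorical_crossentropy')
model = tflearn.DNN(net)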
TFLearn provides the following loss functions, known as objectives, in the tflearn.objectives module: softmax_categorical_crossentropy, categorical_crossentropy, binary_crossentropy, weighted_crossentropy, mean_square, hinge_loss, roc_auc_score, and weak_cross_entropy_2d.

While specifying the input, hidden, and output layers, you can specify the activation functions to be applied to the output. TFLearn provides the following activation functions in the tflearn.activations module: linear, tanh, sigmoid, softmax, softplus, softsign, relu, relu6, leaky_relu, prelu, elu, crelu, and selu.

Creating the TFLearn Model

Create the model from the net created in the previous step (step 4 in the Creating the TFLearn Layers section):

model = tflearn.DNN(net)

Types of TFLearn models

TFLearn offers two different classes of models:

DNN (Deep Neural Network) model: This class allows you to create a multilayer perceptron from the network that you have created from the layers.
SequenceGenerator model: This class allows you to create a deep neural network that can generate sequences.

Training the TFLearn Model

After creating the model, train it with the model.fit() method:

model.fit(X_train, Y_train,
          n_epoch=n_epochs,
          batch_size=batch_size,
          show_metric=True,
          run_id='dense_model')

Using the TFLearn Model

Use the trained model to predict or evaluate:

score = model.evaluate(X_test, Y_test)
print('Test accuracy:', score[0])

The complete code for the TFLearn MNIST classification example is provided in the notebook ch-02_TF_High_Level_Libraries. The output from the TFLearn MNIST example is as follows:

Training Step: 5499  | total loss: 0.42119 | time: 1.817s
| Adam | epoch: 010 | loss: 0.42119 - acc: 0.8860 -- iter: 54900/55000
Training Step: 5500  | total loss: 0.40881 | time: 1.820s
| Adam | epoch: 010 | loss: 0.40881 - acc: 0.8854 -- iter: 55000/55000
--
Test accuracy: 0.9029

Note: You can get more information about TFLearn from the following link: http://tflearn.org/.

To summarize, we got to know about TFLearn and the different TFLearn layers and models. If you found this post useful, do check out the book Mastering TensorFlow 1.x to explore advanced features of TensorFlow 1.x and gain insight into TensorFlow Core, Keras, TF Estimators, TFLearn, TF Slim, Pretty Tensor, and Sonnet.

TensorFlow.js 0.11.1 releases!
How to Build TensorFlow Models for Mobile and Embedded devices
Distributed TensorFlow: Working with multiple GPUs and servers


Deep learning in R

Packt
15 Feb 2016
12 min read
As the title suggests, in this article we will be taking a look at some of the deep learning models in R. Some of the pioneering advancements in neural network research in the last decade have opened up a new frontier in machine learning that is generally known as deep learning. The general definition of deep learning is: a class of machine learning techniques where many layers of information processing stages in hierarchical supervised architectures are exploited for unsupervised feature learning and for pattern analysis/classification. The essence of deep learning is to compute hierarchical features or representations of the observational data, where higher-level features or factors are defined from lower-level ones. Although there are many similar definitions and architectures for deep learning, two common elements in all of them are: multiple layers of nonlinear information processing, and supervised or unsupervised learning of feature representations at each layer from the features learned at the previous layer. The initial works on deep learning were based on multilayer neural network models. Recently, many other forms of models have also been used, such as deep kernel machines and deep Q-networks.

Researchers have experimented with multilayer neural networks even in previous decades. However, two reasons limited any progress with learning using such architectures. The first is that learning the parameters of the network is a nonconvex optimization problem, and starting from random initial conditions, one often gets stuck at poor local minima. The second is that the associated computational requirements were huge. A breakthrough for the first problem came when Geoffrey Hinton developed a fast algorithm for learning a special class of neural networks called deep belief nets (DBN). We will describe DBNs in more detail in the later sections. The high computational power requirements were met with the advancement of computing using general-purpose graphical processing units (GPGPUs).

What made deep learning so popular for practical applications is the significant improvement in accuracy achieved in automatic speech recognition and computer vision. For example, the word error rate in automatic speech recognition of switchboard conversational speech had reached a saturation of around 40% after years of research. However, using deep learning, the word error rate was reduced dramatically to close to 10% in a matter of a few years. Another well-known example is how a deep convolutional neural network achieved the lowest error rate of 15.3% in the 2012 ImageNet Large Scale Visual Recognition Challenge, compared to state-of-the-art methods whose best error rate was 26.2%.

In this article, we will describe one class of deep learning models called deep belief networks. Interested readers are requested to read the book by Li Deng and Dong Yu for a detailed understanding of the various methods and applications of deep learning. We will also illustrate the use of DBNs with the R package darch.

Restricted Boltzmann machines

A restricted Boltzmann machine (RBM) is a two-layer network (a bipartite graph), in which one layer is a visible layer (v) and the second layer is a hidden layer (h).
All nodes in the visible layer and all nodes in the hidden layer are connected by undirected edges, and there are no connections between nodes in the same layer. An RBM is characterized by the joint distribution of the states of all visible units $v = \{v_1, v_2, \ldots, v_M\}$ and the states of all hidden units $h = \{h_1, h_2, \ldots, h_N\}$, given by:

$$P(v, h|\theta) = \frac{\exp(-E(v, h|\theta))}{Z}$$

Here, $E(v, h|\theta)$ is called the energy function, and $Z = \sum_{v}\sum_{h} \exp(-E(v, h|\theta))$ is the normalization constant, known as the partition function in Statistical Physics nomenclature.

There are mainly two types of RBMs. In the first one, both v and h are Bernoulli random variables. In the second type, h is a Bernoulli random variable whereas v is a Gaussian random variable. For the Bernoulli RBM, the energy function is given by:

$$E(v, h|\theta) = -\sum_{i=1}^{M}\sum_{j=1}^{N} W_{ij} v_i h_j - \sum_{i=1}^{M} b_i v_i - \sum_{j=1}^{N} a_j h_j$$

Here, $W_{ij}$ represents the weight of the edge between nodes $v_i$ and $h_j$; $b_i$ and $a_j$ are bias parameters for the visible and hidden layers, respectively. For this energy function, the exact expressions for the conditional probabilities can be derived as follows:

$$p(h_j = 1|v, \theta) = \sigma\Big(\sum_{i=1}^{M} W_{ij} v_i + a_j\Big), \qquad p(v_i = 1|h, \theta) = \sigma\Big(\sum_{j=1}^{N} W_{ij} h_j + b_i\Big)$$

Here, $\sigma(x)$ is the logistic function $1/(1+\exp(-x))$. If the input variables are continuous, one can use the Gaussian RBM; its energy function (with unit-variance visible units) is given by:

$$E(v, h|\theta) = -\sum_{i=1}^{M}\sum_{j=1}^{N} W_{ij} v_i h_j + \frac{1}{2}\sum_{i=1}^{M} (v_i - b_i)^2 - \sum_{j=1}^{N} a_j h_j$$

Also, in this case, the conditional probabilities of $v_i$ and $h_j$ become:

$$p(h_j = 1|v, \theta) = \sigma\Big(\sum_{i=1}^{M} W_{ij} v_i + a_j\Big), \qquad p(v_i|h, \theta) = \mathcal{N}\Big(\sum_{j=1}^{N} W_{ij} h_j + b_i,\; 1\Big)$$

That is, $v_i$ follows a normal distribution with mean $\sum_{j=1}^{N} W_{ij} h_j + b_i$ and variance 1.

Now that we have described the basic architecture of an RBM, how is it trained? If we try the standard approach of taking the gradient of the log-likelihood, we get the following update rule:

$$\frac{\partial \log P(v|\theta)}{\partial W_{ij}} = \mathbb{E}_{data}(v_i h_j) - \mathbb{E}_{model}(v_i h_j)$$

Here, $\mathbb{E}_{data}(v_i h_j)$ is the expectation of $v_i h_j$ computed using the dataset, and $\mathbb{E}_{model}(v_i h_j)$ is the same expectation computed using the model. However, one cannot use this exact expression for updating weights because $\mathbb{E}_{model}(v_i h_j)$ is difficult to compute. The first breakthrough to solve this problem, and hence to train deep neural networks, came when Hinton and his team proposed an algorithm called Contrastive Divergence (CD). The essence of the algorithm is described in the next paragraph.

The idea is to approximate $\mathbb{E}_{model}(v_i h_j)$ using values of $v_i$ and $h_j$ generated by Gibbs sampling from the conditional distributions mentioned previously. One scheme of doing this is as follows:

1. Initialize $v^{t=0}$ from the dataset.
2. Find $h^{t=0}$ by sampling from the conditional distribution $h^{t=0} \sim p(h|v^{t=0})$.
3. Find $v^{t=1}$ by sampling from the conditional distribution $v^{t=1} \sim p(v|h^{t=0})$.
4. Find $h^{t=1}$ by sampling from the conditional distribution $h^{t=1} \sim p(h|v^{t=1})$.

Once we have the values of $v^{t=1}$ and $h^{t=1}$, we use $v_i^{t=1} h_j^{t=1}$, the product of the ith component of $v^{t=1}$ and the jth component of $h^{t=1}$, as an approximation for $\mathbb{E}_{model}(v_i h_j)$. This is called the CD-1 algorithm. One can generalize this to use the values from the kth step of Gibbs sampling, which is known as the CD-k algorithm. One can easily see the connection between RBMs and Bayesian inference: since the CD algorithm is like a posterior density estimate, one could say that RBMs are trained using a Bayesian inference approach. Although the Contrastive Divergence algorithm looks simple, one needs to be very careful in training RBMs, otherwise the model can overfit. Readers who are interested in using RBMs in practical applications should refer to the technical report where this is discussed in detail.

Deep belief networks

One can stack several RBMs, one on top of another, such that the values of the hidden units in layer n-1 ($h_{i,n-1}$) become the values of the visible units in the nth layer ($v_{i,n}$), and so on. The resulting network is called a deep belief network. Before continuing with DBNs, the sketch below shows what a single CD-1 update looks like in code.
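Here is a small, self-contained NumPy sketch of one Contrastive Divergence (CD-1) step for a Bernoulli RBM, as described above. This is purely illustrative code of ours, not the darch implementation; the function name, learning rate, and toy sizes are arbitrary:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, a, b, lr=0.1):
    """One CD-1 step for a Bernoulli RBM on a minibatch v0 (rows are samples).

    W: (M, N) weights, a: (N,) hidden biases, b: (M,) visible biases.
    """
    # positive phase: p(h | v0) and a binary sample h0
    p_h0 = sigmoid(v0 @ W + a)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # negative phase: reconstruct v1 ~ p(v | h0), then compute p(h1 | v1)
    p_v1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + a)
    # approximate E_data(v_i h_j) - E_model(v_i h_j) and update the parameters
    n = v0.shape[0]
    W += lr * (v0.T @ p_h0 - v1.T @ p_h1) / n
    a += lr * (p_h0 - p_h1).mean(axis=0)
    b += lr * (v0 - v1).mean(axis=0)
    return W, a, b

# toy usage: 4 visible units, 2 hidden units, a minibatch of 8 binary vectors
W = 0.01 * rng.standard_normal((4, 2))
a = np.zeros(2)
b = np.zeros(4)
v_batch = rng.integers(0, 2, size=(8, 4)).astype(float)
W, a, b = cd1_update(v_batch, W, a, b)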
The deep belief network was one of the main architectures used in early deep learning networks for pretraining. The idea of pretraining a NN is the following: in the standard three-layer (input-hidden-output) NN, one can start with random initial values for the weights and, using the backpropagation algorithm, find a good minimum of the log-likelihood function. However, when the number of layers increases, a straightforward application of backpropagation does not work, because, starting from the output layer, as we compute the gradient values for the layers deeper inside, their magnitude becomes very small. This is called the gradient vanishing problem. As a result, the network gets trapped in some poor local minimum. Backpropagation still works if we start from the neighborhood of a good minimum. To achieve this, a DNN is often pretrained in an unsupervised way using a DBN: instead of starting from random values of weights, first train a DBN in an unsupervised way and use its weights as the initial weights for a corresponding supervised DNN. It was seen that such DNNs pretrained using DBNs perform much better.

The layer-wise pretraining of a DBN proceeds as follows. Start with the first RBM and train it using the input data in the visible layer and the CD algorithm (or one of its better recent variants). Then, stack a second RBM on top of this; for this RBM, use values sampled from the hidden layer of the first RBM as the values for its visible layer. Continue this process for the desired number of layers. The outputs of the hidden units from the top layer can also be used as inputs for training a supervised model. For this, add a conventional NN layer at the top of the DBN, with the desired number of classes as the number of output nodes; the input for this NN would be the output from the top layer of the DBN. This is called the DBN-DNN architecture. Here, the DBN's role is to automatically generate highly efficient features (the output of the top layer of the DBN) from the input data for the supervised NN in the top layer. The architecture of a five-layer DBN-DNN for a binary classification task is shown in the following figure; the last layer is trained using the backpropagation algorithm in a supervised manner for the two classes c1 and c2. We will illustrate the training and classification with such a DBN-DNN using the darch R package.

The darch R package

The darch package, written by Martin Drees, is one of the R packages with which one can begin doing deep learning in R. It implements the DBN described in the previous section. The package can be downloaded from https://cran.r-project.org/web/packages/darch/index.html. The main class in the darch package implements deep architectures and provides the ability to train them with Contrastive Divergence and fine-tune them with backpropagation, resilient backpropagation, and conjugate gradients. New instances of the class are created with the newDArch constructor, which is called with the following arguments: a vector containing the number of nodes in each layer, the batch size, a Boolean variable indicating whether to use the ff package for computing weights and outputs, and the name of the function for generating the weight matrices.

Let us create a network having two input units, four hidden units, and one output unit:

> install.packages("darch")  # one time
> library(darch)
> darch <- newDArch(c(2,4,1), batchSize = 2, genWeightFunc = generateWeights)
INFO [2015-07-19 18:50:29] Constructing a darch with 3 layers.
INFO [2015-07-19 18:50:29] Generating RBMs.
INFO [2015-07-19 18:50:29] Construct new RBM instance with 2 visible and 4 hidden units.
INFO [2015-07-19 18:50:29] Construct new RBM instance with 4 visible and 1 hidden units.

Let us train the DBN with a toy dataset. We are using this because training any realistic example would take a long time, hours if not days. Let us create an input dataset containing two columns and four rows:

> inputs <- matrix(c(0,0,0,1,1,0,1,1), ncol=2, byrow=TRUE)
> outputs <- matrix(c(0,1,1,0), nrow=4)

Now, let us pretrain the DBN using the input data:

> darch <- preTrainDArch(darch, inputs, maxEpoch=1000)

We can have a look at the weights learned at any layer using the getLayerWeights() function. Let us see what the hidden layer looks like:

> getLayerWeights(darch, index=1)
[[1]]
          [,1]        [,2]      [,3]      [,4]
[1,]  8.167022   0.4874743 -7.563470 -6.951426
[2,]  2.024671 -10.7012389  1.313231  1.070006
[3,] -5.391781   5.5878931  3.254914  3.000914

Now, let's do backpropagation for supervised learning. For this, we need to first set the layer functions to sigmoidUnitDerivatives:

> layers <- getLayers(darch)
> for(i in length(layers):1){
    layers[[i]][[2]] <- sigmoidUnitDerivative
  }
> setLayers(darch) <- layers
> rm(layers)

Finally, the following two lines perform the backpropagation:

> setFineTuneFunction(darch) <- backpropagation
> darch <- fineTuneDArch(darch, inputs, outputs, maxEpoch=1000)

We can see the prediction quality of the DBN on the training data itself by running darch as follows:

> darch <- getExecuteFunction(darch)(darch, inputs)
> outputs_darch <- getExecOutputs(darch)
> outputs_darch[[2]]
             [,1]
[1,] 9.998474e-01
[2,] 4.921130e-05
[3,] 9.997649e-01
[4,] 3.796699e-05

Comparing with the actual output, the DBN has predicted the wrong output for the first and second input rows. Since this example was just to illustrate how to use the darch package, we are not worried about the 50% accuracy here.

Other deep learning packages in R

Although there are some other deep learning packages in R, such as deepnet and RcppDL, compared with libraries in other languages such as Cuda (C++) and Theano (Python), R does not yet have good native libraries for deep learning. The only other available package is a wrapper for the Java-based deep learning open source project H2O. This R package, h2o, allows running H2O via its REST API from within R. Readers who are interested in serious deep learning projects and applications should use H2O via the h2o package in R. One needs to install H2O on the machine to use h2o.

Summary

We have learned about one of the latest advances in neural networks, called deep learning. It can be used to solve many problems, such as computer vision and natural language processing, that involve highly cognitive elements. Artificial intelligence systems using deep learning have been able to achieve accuracies comparable to human intelligence in tasks such as speech recognition and image classification. To know more about Bayesian modeling in R, check out Learning Bayesian Models with R (https://www.packtpub.com/big-data-and-business-intelligence/learning-bayesian-models-r). You can also check out our other R books, Data Analysis with R (https://www.packtpub.com/big-data-and-business-intelligence/data-analysis-r) and Machine Learning with R - Second Edition (https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-r-second-edition).

Further resources on this subject:
Working with Data – Exploratory Data Analysis [article]
Big Data Analytics [article]
Deep learning in R [article]

Splunk's Input Methods and Data Feeds

Packt
30 May 2016
13 min read
This article, crafted by Ashish Kumar Yadav, has been picked from the Advanced Splunk book. The book helps you get in touch with a great data science tool named Splunk. The big data world is an ever-expanding forte, and it is easy to get lost in the enormousness of the machine data at your disposal. The Advanced Splunk book will provide you with the necessary resources and the trail to get you to the other end of the machine data. While the book emphasizes Splunk, it also discusses its close association with the Python language and tools like R and Tableau that are needed for better analytics and visualization. (For more resources related to this topic, see here.)

Splunk supports numerous ways to ingest data on its server. Human-readable machine data from various sources can be uploaded using data input methods such as files, directories, TCP/UDP, or scripts, indexed on the Splunk Enterprise server, and analytics and insights can be derived from it.

Data sources

Uploading data to Splunk is one of the most important parts of analytics and visualization of data. If data is not properly parsed, timestamped, or broken into events, it can be difficult to analyze and get proper insight from it. Splunk can be used to analyze and visualize data from various domains, such as IT security, networking, mobile devices, telecom infrastructure, media and entertainment devices, storage devices, and many more. The machine-generated data from different sources can be of different formats and types, and hence it is very important to parse data into the best format to get the required insight from it.

Splunk supports machine-generated data of various types and structures, and the following screenshot shows the common types of data that come with inbuilt support in Splunk Enterprise. The most important point about these sources is that if the data source is from the following list, then the preconfigured settings and configurations already stored in Splunk Enterprise are applied. This helps in getting the data parsed into the best and most suitable format of events and timestamps to enable faster searching, analytics, and better visualization. The following screenshot lists the common data sources supported by Splunk Enterprise.

Structured data

Machine-generated data is generally structured, and in some cases it can be semi-structured. Some of the types of structured data are EXtensible Markup Language (XML), JavaScript Object Notation (JSON), comma-separated values (CSV), tab-separated values (TSV), and pipe-separated values (PSV). Any format of structured data can be uploaded to Splunk. However, if the data is in any of the preceding formats, then predefined settings and configuration can be applied directly by choosing the respective source type while uploading the data or by configuring it in the inputs.conf file.

The preconfigured settings for any of the preceding structured formats are very generic. It often happens that machine logs are customized structured logs; in that case, additional settings will be required to parse the data. For example, there are various types of XML; we list two types here. In the first type, there is a <note> tag at the start and </note> at the end, and in between there are parameters and their values. In the second type, there are two levels of hierarchy: the XML has a <library> tag along with <book> tags, and between the <book> and </book> tags there are parameters and their values.
The first type is as follows:

<note>
  <to>Jack</to>
  <from>Michael</from>
  <heading>Test XML Format</heading>
  <body>This is one of the formats of XML!</body>
</note>

The second type is shown in the following code snippet:

<Library>
  <book category="Technical">
    <title lang="en">Splunk Basic</title>
    <author>Jack Thomas</author>
    <year>2007</year>
    <price>520.00</price>
  </book>
  <book category="Story">
    <title lang="en">Jungle Book</title>
    <author>Rudyard Kipling</author>
    <year>1984</year>
    <price>50.50</price>
  </book>
</Library>

Similarly, there can be many types of customized XML generated by machines. To parse the different types of structured data, Splunk Enterprise comes with inbuilt settings and configuration defined for the source the data comes from. For example, logs received from a web server are also structured logs, and they can be in a JSON, CSV, or simple text format. So, depending on the specific source, Splunk tries to make the user's job easier by providing the best settings and configuration for many common sources of data. Some of the most common sources of data are web servers, databases, operating systems, network security devices, and various other applications and services.

Web and cloud services

The most commonly used web servers are Apache and Microsoft IIS. Linux-based web services are typically hosted on Apache servers, and Windows-based web services on IIS. The logs generated by Linux web servers are simple plain text files, whereas Microsoft IIS log files can be in the W3C-extended log file format or stored in a database in the ODBC log file format. Cloud services such as Amazon AWS, S3, and Microsoft Azure can be connected directly and configured to forward data to Splunk Enterprise. The Splunk app store has many technology add-ons that can be used to create data inputs that send data from cloud services to Splunk Enterprise.

When uploading log files from web services such as Apache, Splunk provides a preconfigured source type that parses the data into the best format and makes it available for visualization. Suppose the user wants to upload Apache error logs to the Splunk server; the user then chooses apache_error from the Web category of Source type, as shown in the following screenshot. On choosing this option, the following configuration is applied to the data to be uploaded:

- The event break is configured on the regular expression pattern ^[ so that the log file is broken into separate events whenever [ occurs at the start of a line (^).
- The timestamp is identified in the [%A %B %d %T %Y] format, where:
  - %A is the day of the week; for example, Monday
  - %B is the month; for example, January
  - %d is the day of the month; for example, 1
  - %T is the time in the %H:%M:%S format
  - %Y is the year; for example, 2016
- Various other settings are applied, such as maxDist, which controls how much the logs are allowed to vary from the definition in the source type, along with settings such as category, description, and others.

Any new settings required for our needs can be added using the New Settings option available below the Settings section. After making the changes, the settings can either be saved as a new source type or used to update the existing source type.

IT operations and network security

Splunk Enterprise has many applications on the Splunk app store that specifically target IT operations and network security.
Splunk is a widely accepted tool for intrusion detection, network and information security, fraud and theft detection, user behaviour analytics, and compliance. Splunk Enterprise applications provide inbuilt support for the Cisco Adaptive Security Appliance (ASA) firewall, Cisco syslog, Call Detail Records (CDR) logs, and one of the most popular intrusion detection applications, Snort. The Splunk app store has many technology add-ons to get data from various security devices such as firewalls, routers, DMZs, and others. The app store also has Splunk applications that show graphical insights and analytics over the data uploaded from various IT and security devices.

Databases

Splunk Enterprise has inbuilt support for databases such as MySQL, Oracle syslog, and IBM DB2. Apart from this, there are technology add-ons on the Splunk app store to fetch data from Oracle and MySQL databases. These technology add-ons can be used to fetch, parse, and upload data from the respective database to the Splunk Enterprise server. Various types of data can be available from a single source; take MySQL as an example. There can be error log data, query logging data, MySQL server health and status log data, or MySQL data stored in the form of databases and tables. In other words, a huge variety of data can be generated from the same source, and Splunk provides support for all of it. Inbuilt configurations for MySQL error logs, MySQL slow queries, and MySQL database logs are already defined to make it easier to configure inputs for data generated from these sources.

Application and operating system data

The Splunk input source types include inbuilt configuration for Linux dmesg, syslog, security logs, and various other logs available from the Linux operating system. Apart from Linux, Splunk also provides configuration settings for data input of logs from Windows and iOS systems. It also provides default settings for Log4j-based logging from Java, PHP, and .NET enterprise applications. Splunk also supports data from many other applications, such as Ruby on Rails, Catalina, WebSphere, and others. Splunk Enterprise provides predefined configuration for various applications, databases, operating systems, and cloud and virtual environments to enrich the respective data with better parsing and event breaking, and thus derive better insight from the available data. Application sources whose settings are not available in Splunk Enterprise can often be covered by apps or add-ons from the Splunk app store.

Data input methods

Splunk Enterprise supports data input through numerous methods. Data can be sent to Splunk via files and directories, TCP, UDP, scripts, or universal forwarders.

Files and directories

Splunk Enterprise provides an easy interface for uploading data via files and directories. Files can be uploaded manually from the Splunk web interface, or a file can be monitored for changes in content so that new data is uploaded to Splunk whenever it is written to the file. Splunk can also be configured to upload multiple files, either by uploading all the files in one shot or by monitoring a directory for new files so that data is indexed on Splunk whenever it arrives in the directory. Data in any format from any source can be uploaded to Splunk as long as it is human readable, that is, no proprietary tools are needed to read the data.
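To make the file-and-directory input concrete, the following is a minimal sketch, not taken from the book, of an application writing JSON-formatted events to a log file that a Splunk file or directory monitor could pick up. The log path and field names are illustrative assumptions, not values prescribed by Splunk.

```csharp
using System;
using System.IO;

class JsonFileLogger
{
    // Hypothetical log location; a Splunk file/directory monitor would be pointed at this path.
    private const string LogPath = @"C:\logs\orders\events.log";

    static void Main()
    {
        // Write one JSON event per line; Splunk can break events on line boundaries
        // and parse the ISO-8601 timestamp.
        string line = string.Format(
            "{{\"time\":\"{0}\",\"level\":\"INFO\",\"message\":\"order created\",\"orderId\":{1}}}",
            DateTime.UtcNow.ToString("o"), 1042);

        Directory.CreateDirectory(Path.GetDirectoryName(LogPath));
        File.AppendAllText(LogPath, line + Environment.NewLine);
    }
}
```

How cleanly Splunk breaks such a file into events and timestamps depends on the source type configuration described earlier.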
Splunk Enterprise even supports uploading compressed file formats such as .zip and .tar.gz that contain multiple log files.

Network sources

Splunk supports both TCP and UDP for getting data from network sources. It can monitor any network port for incoming data and then index it on Splunk. For data from network sources, it is generally recommended that you use a universal forwarder to send data to Splunk, as the universal forwarder buffers the data to avoid data loss if there is any issue on the Splunk server.

Windows data

Splunk Enterprise provides direct configuration to access data from a Windows system. It supports both local and remote collection of various data types and sources from Windows. Splunk has predefined input methods and settings to parse event logs, performance monitoring reports, registry information, and host, network, and print monitoring from both local and remote Windows systems.

So, data of different formats from different sources can be sent to Splunk using whichever input method suits the data and the source, and new data inputs can also be created using Splunk apps or technology add-ons available on the Splunk app store.

Adding data to Splunk—new interfaces

Splunk Enterprise introduced new interfaces to accept data from resource-constrained, lightweight devices for the Internet of Things. Splunk Enterprise version 6.3 supports HTTP Event Collector and REST and JSON APIs for data collection. HTTP Event Collector is a very useful interface that can be used to send data to the Splunk Enterprise server from your existing application without using any forwarder. HTTP APIs are available for .NET, Java, Python, and almost every other programming language, so forwarding data from an existing application written in a specific language becomes straightforward. As an example, say you are the developer of an Android application and you want to know which features users actually use, which screens cause problems, and what the overall usage pattern of your application is. In the code of your Android application, you can use the REST APIs to forward the logging data to the Splunk Enterprise server. The only important point to note is that the data needs to be sent in a JSON payload envelope. The advantage of using HTTP Event Collector is that the data can be sent to Splunk without any third-party tools or extra configuration, and we can easily derive insights, analytics, and visualizations from it.

HTTP Event Collector and configuration

HTTP Event Collector is used by configuring it from the Splunk Web console, after which event data sent over HTTP can be indexed in Splunk using the REST API.

HTTP Event Collector

HTTP Event Collector (EC) provides an API with an endpoint that can be used to send log data from applications into Splunk Enterprise. It supports both HTTP and HTTPS for secure connections. The following features of HTTP Event Collector make adding data to Splunk Enterprise easier:

- It is very lightweight in terms of memory and resource usage, and can therefore be used on resource-constrained and lightweight devices as well.
- Events can be sent directly from anywhere, such as web servers, mobile devices, and IoT devices, without any configuration or installation of forwarders.
- It is a token-based JSON API that doesn't require you to save user credentials in the code or in the application settings; authentication is handled by the tokens used in the API.
- It is easy to configure EC from the Splunk Web console: enable HTTP EC and define the token, and you are ready to accept data on Splunk Enterprise.
- It supports both HTTP and HTTPS, and hence it is very secure.
- It supports GZIP compression and batch processing.
- HTTP EC is highly scalable: it can be used in a distributed environment and behind a load balancer to crunch and index millions of events per second.

Summary

In this article, we walked through the various data input methods and data sources supported by Splunk. We also looked at HTTP Event Collector, a new feature added in Splunk 6.3 for data collection via REST that encourages the use of Splunk for IoT. The data sources and input methods supported by Splunk go beyond those of a generic tool, and HTTP Event Collector is an added advantage compared to other data analytics tools.

Resources for Article:

Further resources on this subject:

The Splunk Interface [article]
The Splunk Web Framework [article]
Introducing Splunk [article]
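As a closing illustration of the HTTP Event Collector workflow described above, here is a minimal, hedged sketch of sending a single event to HEC from .NET. It is not taken from the article; the host, port, endpoint path, and token shown are assumptions based on a typical HEC setup and must be replaced with values from your own Splunk configuration.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class HecSender
{
    static async Task Main()
    {
        // Assumed values: replace with your own Splunk host and HEC token.
        const string endpoint = "https://splunk.example.com:8088/services/collector/event";
        const string token = "YOUR-HEC-TOKEN";

        // HEC expects a JSON envelope with the actual event data under an "event" field.
        const string payload =
            "{\"sourcetype\":\"myapp\",\"event\":{\"message\":\"feature used\",\"screen\":\"checkout\"}}";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Splunk", token);

            var content = new StringContent(payload, Encoding.UTF8, "application/json");
            HttpResponseMessage response = await client.PostAsync(endpoint, content);

            Console.WriteLine((int)response.StatusCode + ": " +
                await response.Content.ReadAsStringAsync());
        }
    }
}
```

In a real application you would reuse a single HttpClient instance and handle TLS certificate validation according to your environment.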

Creating a NHibernate session to access database within ASP.NET

Packt
14 May 2010
7 min read
NHibernate is an open source object-relational mapper, or simply put, a way to rapidly retrieve data from your database into standard .NET objects. This article teaches you how to create NHibernate sessions, which use database sessions to retrieve and store data into the database. In this article by Aaron B. Cure, author of Nhibernate 2 Beginner's Guide we'll talk about: What is an NHibernate session? How does it differ from a regular database session? Retrieving and committing data Session strategies for ASP.NET (Read more interesting articles on Nhibernate 2 Beginner's Guide here.) What is an NHibernate session? Think of an NHibernate session as an abstract or virtual conduit to the database. Gone are the days when you have to create a Connection, open the Connection, pass the Connection to a Command object, create a DataReader from the Command object, and so on. With NHibernate, we ask the SessionFactory for a Session object, and that's it. NHibernate handles all of the "real" sessions to the database, connections, pooling, and so on. We reap all the benefits without having to know the underlying intricacies of all of the database backends we are trying to connect to. Time for action – getting ready Before we actually connect to the database, we need to do a little "housekeeping". Just a note, if you run into trouble (that is, your code doesn't work like the walkthrough), then don't panic. See the troubleshooting section at the end of this Time for action section. Before we get started, make sure that you have all of the Mapping and Common files and that your Mapping files are included as "Embedded Resources". Your project should look as shown in the following screenshot: The first thing we need to do is create a new project to use to create our sessions. Right-click on the Solution 'Ordering' and click on Add | New Project. For our tests, we will use a Console Application and name it Ordering.Console. Use the same location as your previous project. Next, we need to add a few references. Right-click on the References folder and click on Add Reference. In VB.NET, you need to right-click on the Ordering.Console project, and click on Add Reference. Select the Browse tab, and navigate to the folder that contains your NHibernate dlls. You should have six files in this folder. Select the NHibernate.dll, Castle.Core.dll, Castle.DynamicProxy2.dll, Iesi.Collections.dll, log4net.dll, and NHibernate.ByteCode.Castle.dll files, and click on OK to add them as references to the project. Right-click on the References folder (or the project folder in VB.NET), and click on Add Reference again. Select the Projects tab, select the Ordering.Data project, and click on OK to add the data tier as a reference to our console application. The last thing we need to do is create a configuration object. We will discuss configuration in a later chapter, so for now, it would suffice to say that this will give us everything we need to connect to the database. Your current Program.cs file in the Ordering.Console application should look as follows: using System;using System.Collections.Generic;using System.Text;namespace Ordering.Console{ class Program { static void Main(string[] args) { } }} Or, if you are using VB.NET, your Module1.vb file will look as follows: Module Module1 Sub Main() End SubEnd Module At the top of the file, we need to import a few references to make our project compile. 
Right above the namespace or Module declarations, add the using/Imports statements for NHibernate, NHibernate.Cfg, and Ordering.Data: using NHibernate;using NHibernate.Cfg;using Ordering.Data; In VB.NET you need to use the Imports keyword as follows: Imports NHibernateImports NHibernate.CfgImports Ordering.Data Inside the Main() block, we want to create the configuration object that will tell NHibernate how to connect to the database. Inside your Main() block, add the following code: Configuration cfg = new Configuration();cfg.Properties.Add(NHibernate.Cfg.Environment.ConnectionProvider, typeof(NHibernate.Connection.DriverConnectionProvider) .AssemblyQualifiedName); cfg.Properties.Add(NHibernate.Cfg.Environment.Dialect, typeof(NHibernate.Dialect.MsSql2008Dialect) .AssemblyQualifiedName); cfg.Properties.Add(NHibernate.Cfg.Environment.ConnectionDriver, typeof(NHibernate.Driver.SqlClientDriver) .AssemblyQualifiedName); cfg.Properties.Add(NHibernate.Cfg.Environment.ConnectionString, "Server= (local)SQLExpress;Database= Ordering;Trusted_Connection=true;"); cfg.Properties.Add(NHibernate.Cfg.Environment. ProxyFactoryFactoryClass, typeof (NHibernate.ByteCode.LinFu.ProxyFactoryFactory) .AssemblyQualifiedName); cfg.AddAssembly(typeof(Address).AssemblyQualifiedName); For a VB.NET project, add the following code: Dim cfg As New Configuration()cfg.Properties.Add(NHibernate.Cfg.Environment. _ ConnectionProvider, GetType(NHibernate.Connection. _ DriverConnectionProvider).AssemblyQualifiedName) cfg.Properties.Add(NHibernate.Cfg.Environment.Dialect, _ GetType(NHibernate.Dialect.MsSql2008Dialect). _ AssemblyQualifiedName) cfg.Properties.Add(NHibernate.Cfg.Environment.ConnectionDriver, _ GetType(NHibernate.Driver.SqlClientDriver). _ AssemblyQualifiedName) cfg.Properties.Add(NHibernate.Cfg.Environment.ConnectionString, _ "Server= (local)SQLExpress;Database=Ordering; _ Trusted_Connection=true;") cfg.Properties.Add(NHibernate.Cfg.Environment. _ ProxyFactoryFactoryClass, GetType _ (NHibernate.ByteCode.LinFu.ProxyFactoryFactory). _ AssemblyQualifiedName) cfg.AddAssembly(GetType(Address).AssemblyQualifiedName) Lastly, right-click on the Ordering.Console project, and select Set as Startup Project, as shown in the following screenshot: Press F5 or Debug | Start Debugging and test your project. If everything goes well, you should see a command prompt window pop up and then go away. Congratulations! You are done! However, it is more than likely you will get an error on the line that says cfg.AddAssembly(). This line instructs NHibernate to "take all of my HBM.xml files and compile them". This is where we will find out how well we handcoded our HBM.xml files. The most common error that will show up is MappingException was unhandled. If you get a mapping exception, then see the next step for troubleshooting tips. Troubleshooting: NHibernate will tell us where the errors are and why they are an issue. The first step to debug these issues is to click on the View Detail link under Actions on the error pop up. This will bring up the View Detail dialog, as shown in the following screenshot: If you look at the message, NHibernate says that it Could not compile the mapping document: Ordering.Data.Mapping.Address.hbm.xml. So now we know that the issue is in our Address.hbm.xml file, but this is not very helpful. If we look at the InnerException, it says "Problem trying to set property type by reflection". 
Still not a specific issue, but if we click on the + next to the InnerException, I can see that there is an InnerException on this exception. The second InnerException says "class Ordering.Data.Address, Ordering.Data, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null not found while looking for property: Id". Now we are getting closer. It has something to do with the ID property. But wait, there is another InnerException. This InnerException says "Could not find a getter for property 'Id' in class 'Ordering.Data.Address'". How could that be? Looking at my Address.cs class, I see: using System;using System.Collections.Generic;using System.Text;namespace Ordering.Data{ public class Address { }} Oops! Apparently I stubbed out the class, but forgot to add the actual properties. I need to put the rest of the properties into the file, which looks as follows: using System;using System.Collections.Generic;using System.Text; namespace Ordering.Data{ public class Address { #region Constructors public Address() { } public Address(string Address1, string Address2, string City, string State, string Zip) : this() { this.Address1 = Address1; this.Address2 = Address2; this.City = City; this.State = State; this.Zip = Zip; } #endregion #region Properties private int _id; public virtual int Id { get { return _id; } set { _id = value; } } private string _address1; public virtual string Address1 { get { return _address1; } set { _address1 = value; } } private string _address2; public virtual string Address2 { get { return _address2; } set { _address2 = value; } } private string _city; public virtual string City { get { return _city; } set { _city = value; } } private string _state; public virtual string State { get { return _state; } set { _state = value; } } private string _zip; public virtual string Zip { get { return _zip; } set { _zip = value; } } private Contact _contact; public virtual Contact Contact { get { return _contact; } set { _contact = value; } } #endregion }} By continuing to work my way through the errors that are presented in the configuration and starting the project in Debug mode, I can handle each exception until there are no more errors. What just happened? We have successfully created a project to test out our database connectivity, and an NHibernate Configuration object which will allow us to create sessions, session factories, and a whole litany of NHibernate goodness!
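The walkthrough stops at building the Configuration object, so as a forward-looking illustration (not taken from the book), here is a minimal sketch of how that cfg object could be used to open a session and persist an Address, assuming the mappings now compile cleanly. The class name, method name, and sample values are invented for this example.

```csharp
using NHibernate;
using NHibernate.Cfg;
using Ordering.Data;

public static class AddressRepositoryDemo
{
    // Expects the fully populated Configuration object built in the walkthrough above.
    public static void SaveAddress(Configuration cfg)
    {
        ISessionFactory factory = cfg.BuildSessionFactory();

        using (ISession session = factory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            var address = new Address("123 Main St", "Suite 4", "Denver", "CO", "80014");
            session.Save(address);
            tx.Commit();
        }
    }
}
```

Note that the ISessionFactory is expensive to build; in a real application it should be created once at startup and reused for every session.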

Exploring and Interacting with Materials using Blueprints

Packt
23 Jul 2015
16 min read
In this article by Brenden Sewell, author of the book Blueprints Visual Scripting for Unreal Engine, we will cover the following topics: Exploring materials Creating our first Blueprint When setting out to develop a game, one of the first steps toward exploring your idea is to build a prototype. Fortunately, Unreal Engine 4 and Blueprints make it easier than ever to quickly get the essential gameplay functionality working so that you can start testing your ideas sooner. To develop some familiarity with the Unreal editor and Blueprints, we will begin by prototyping simple gameplay mechanics using some default assets and a couple of Blueprints. (For more resources related to this topic, see here.) Exploring materials Earlier, we set for ourselves the goal of changing the color of the cylinder when it is hit by a projectile. To do so, we will need to change the actor's material. A material is an asset that can be added to an actor's mesh (which defines the physical shape of the actor) to create its look. You can think of a material as paint applied on top of an actor's mesh or shape. Since an actor's material determines its color, one method of changing the color of an actor is to replace its material with a material of a different color. To do this, we will first be creating a material of our own. It will make an actor appear red. Creating materials We can start by creating a new folder inside the FirstPersonBP directory and calling it Materials. Navigate to the newly created folder and right-click inside the empty space in the content browser, which will generate a new asset creation popup. From here, select Material to create a new material asset. You will be prompted to give the new material a name, which I have chosen to call TargetRed. Material Properties and Blueprint Nodes Double-click on TargetRed to open a new editor tab for editing the material, like this: You are now looking at Material Editor, which shares many features and conventions with Blueprints. The center of this screen is called the grid, and this is where we will place all the objects that will define the logic of our Blueprints. The initial object you see in the center of the grid screen, labeled with the name of the material, is called a node. This node, as seen in the previous screenshot, has a series of input pins that other material nodes can attach to in order to define its properties. To give the material a color, we will need to create a node that will give information about the color to the input labeled Base Color on this node. To do so, right-click on empty space near the node. A popup will appear, with a search box and a long list of expandable options. This shows all the available Blueprint node options that we can add to this Material Blueprint. The search box is context sensitive, so if you start typing the first few letters of a valid node name, you will see the list below shrink to include only those nodes that include those letters in the name. The node we are looking for is called VectorParameter, so we start typing this name in the search box and click on the VectorParameter result to add that node to our grid: A vector parameter in the Material Editor allows us to define a color, which we can then attach to the Base Color input on the tall material definition node. We first need to give the node a color selection. Double-click on the black square in the middle of the node to open Color Picker. 
We want to give our target a bright red color when it is hit, so either drag the center point in the color wheel to the red section of the wheel, or fill in the RGB or Hex values manually. When you have selected the shade of red you want to use, click on OK. You will notice that the black box in your vector parameter node has now turned red. To help ourselves remember what parameter or property of the material our vector parameter will be defining, we should name it color. You can do this by ensuring that you have the vector parameter node selected (indicated by a thin yellow highlight around the node), and then navigating to the Details panel in the editor. Enter a value for Parameter Name, and the node label will change automatically: The final step is to link our color vector parameter node to the base material node. With Blueprints, you can connect two nodes by clicking and dragging one output pin to another node's input pin. Input pins are located on the left-hand side of a node, while output pins are always located to the right. The thin line that connects two nodes that have been connected in this way is called a wire. For our material, we need to click and drag a wire from the top output pin of the color node to the Base Color input pin of the material node, as shown in the following screenshot: Adding substance to our material We can optionally add some polish to our material by taking advantage of some of the other input pins on the material definition node. 3D objects look unrealistic with flat, single color materials applied, but we can add additional reflectiveness and depth by setting a value for the materials Metallic and Roughness inputs. To do so, right click in empty grid space and type scalar into the search box. The node we are looking for is called ScalarParameter. Once you have a scalar parameter node, select it, and go to the Details panel. A scalar parameter takes a single float value (a number with decimal values). Set Default Value to 0.1, as we want any additive effects to our material to be subtle. We should also change Parameter Name to Metallic. Finally, we click and drag the output pin from our Metallic node to the Metallic input pin of the material definition node. We want to make an additional connection to the Roughness parameter, so right-click on the Metallic node we just created and select Duplicate. This will generate a copy of that node, without the wire connection. Select this duplicate Metallic node and then change the Parameter Name field in the Details panel to Roughness. We will keep the same default value of 0.1 for this node. Now click and drag the output pin from the Roughness node to the Roughness input pin of the Material definition node. The final result of our Material Blueprint should look like what is shown in the following screenshot: We have now made a shiny red material. It will ensure that our targets will stand out when they are hit. Click on the Save button in the top-left corner of the editor to save the asset, and click again on the tab labeled FirstPersonExampleMap to return to your level. Creating our first Blueprint We now have a cylinder in the world, and the material we would like to apply to the cylinder when shot. The final piece of the interaction will be the game logic that evaluates that the cylinder has been hit, and then changes the material on the cylinder to our new red material. In order to create this behavior and add it to our cylinder, we will have to create a Blueprint. 
There are multiple ways of creating a Blueprint, but to save a couple of steps, we can create the Blueprint and directly attach it to the cylinder we created in a single click. To do so, make sure you have the CylinderTarget object selected in the Scene Outliner panel, and click on the blue Blueprint/Add Script button at the top of the Details panel. You will then see a path selection window. For this project, we will store all our Blueprints in the Blueprints folder, inside the FirstPersonBP folder. Since this is the Blueprint for our CylinderTarget actor, leaving the name of the Blueprint as the default, CylinderTarget_Blueprint, is appropriate. CylinderTarget_Blueprint should now appear in your content browser, inside the Blueprints folder. Double-click on this Blueprint to open a new editor tab for it.

We will now be looking at the Viewport view of our cylinder. From here, we can manipulate some of the default properties of our actor, or add more components, each of which can contain its own logic to make the actor more complex. For now, though, the Event Graph is all we need for creating a simple Blueprint attached to the actor directly. To open it, click on the tab labeled Event Graph above the Viewport window.

Exploring the Event Graph panel

The Event Graph panel should look very familiar, as it shares most of the same visual and functional elements as the Material Editor we used earlier. By default, the event graph opens with three unlinked event nodes that are currently unused. An event refers to some action in the game that acts as a trigger for a Blueprint to do something. Most of the Blueprints you will create follow this structure: Event (when) | Conditionals (if) | Actions (do). This can be worded as follows: when something happens, check whether X, Y, and Z are true, and if so, do this sequence of actions. A real-world example of this might be a Blueprint that determines whether or not I have fired a gun. The flow is like this: WHEN the trigger is pulled, IF there is ammo left in the gun, DO fire the gun.

The three event nodes that are present in our graph by default are three of the most commonly used event triggers:

- Event Begin Play triggers actions when the player first begins playing the game.
- Event Actor Begin Overlap triggers actions when another actor begins touching or overlapping the space containing the existing actor controlled by the Blueprint.
- Event Tick triggers attached actions every time a new frame of visual content is displayed on the screen during gameplay. The number of frames shown per second will vary depending on the power of the computer running the game, and this will in turn affect how often Event Tick triggers its actions.

We want to trigger a "change material" action on our target every time a projectile hits it. While we could use the Event Actor Begin Overlap node to detect when a projectile object overlaps the cylinder mesh of our target, we will simplify things by detecting only when another actor has hit our target actor. Let's start with a clean slate by clicking and dragging a selection box over all the default events and hitting the Delete key on the keyboard to delete them.

Detecting a hit

To create our hit detection event, right-click on empty graph space and type hit in the search box. The Event Hit node is what we are looking for, so select it when it appears in the search results. Event Hit triggers actions every time another actor hits the actor controlled by this Blueprint.
Once you have the Event Hit node on the graph, you will notice that Event Hit has a number of multicolored output pins originating from it. The first thing to notice is the white triangle pin that is in the top-right corner of the node. This is the execution pin, which determines the next action to be taken in a sequence. Linking the execution pins of different nodes together enables the basic functionality of all Blueprints. Now that we have the trigger, we need to find an action that will enable us to change the material of an actor. Click and drag a wire from the execution pin to the empty space to the right of the node. Dropping a wire into empty space like this will generate a search window, allowing you to create a node and attach it to the pin you are dragging from in a single operation. In the search window that appears, make sure that the Context Sensitive box is checked. This will limit the results in the search window to only those nodes that can actually be attached to the pin you dragged to generate the search window. With Context Sensitive checked, type set material in the search box. The node we want to select is called Set Material (StaticMeshComponent). If you cannot find the node you are searching for in the context-sensitive search, try unchecking Context Sensitive to find it from the complete list of node options. Even if the node is not found in the context-sensitive search, there is still a possibility that the node can be used in conjunction with the node you are attempting to attach it to. The actions in the Event Hit node can be set like this: Swapping a material Once you have placed the Set Material node, you will notice that it is already connected via its input execution pin to the Event Hit node's output execution pin. This Blueprint will now fire the Set Material action whenever the Blueprint's actor hits another actor. However, we haven't yet set up the material that will be called when the Set Material action is called. Without setting the material, the action will fire but not produce any observable effect on the cylinder target. To set the material that will be called, click on the drop-down field labeled Select Asset underneath Material inside the Set Material node. In the asset finder window that appears, type red in the search box to find the TargetRed material we created earlier. Clicking on this asset will attach it to the Material field inside the Set Material node. We have now done everything we need with this Blueprint to turn the target cylinder red, but before the Blueprint can be saved, it must be compiled. Compiling is the process used to convert the developer-friendly Blueprint language into machine instructions that tell the computer what operations to perform. This is a hands-off process, so we don't need to concern ourselves with it, except to ensure that we always compile our Blueprint scripts after we assemble them. To do so, hit the Compile button in the top-left corner of the editor toolbar, and then click on Save. Now that we have set up a basic gameplay interaction, it is wise to test the game to ensure that everything is happening the way we expect. Click on the Play button, and a game window will appear directly above the Blueprint Editor. Try both shooting and running into the CylinderTarget actor you created. Improving the Blueprint When we run the game, we see that the cylinder target does change colors upon being hit by a projectile fired from the player's gun. 
This is the beginning of a framework of gameplay that can be used to get enemies to respond to the player's actions. However, you also might have noticed that the target cylinder changes color even when the player runs into it directly. Remember that we wanted the cylinder target to become red only when hit by a player projectile, and not because of any other object colliding with it. Unforeseen results like this are common whenever scripting is involved, and the best way to avoid them is to check your work by playing the game as you construct it as often as possible. To fix our Blueprint so that the cylinder target changes color only in response to a player projectile, return to the CylinderTarget_Blueprint tab and look at the Event Hit node again. The remaining output pins on the Event Hit node are variables that store data about the event that can be passed to other nodes. The color of the pins represents the kind of data variable it passes. Blue pins pass objects, such as actors, whereas red pins contain a boolean (true or false) variable. You will learn more about these pin types as we get into more complicated Blueprints; for now, we only need to concern ourselves with the blue output pin labeled Other, which contains the data about which other actor performed the hitting to fire this event. This will be useful in order for us to ensure that the cylinder target changes color only when hit by a projectile fired from the player, rather than changing color because of any other actors that might bump into it. To ensure that we are only triggering in response to a player projectile hit, click and drag a wire from the Other output pin to empty space. In this search window, type projectile. You should see some results that look similar to the following screenshot. The node we are looking for is called Cast To FirstPersonProjectile: FirstPersonProjectile is a Blueprint included in Unreal Engine 4's First Person template that controls the behavior of the projectiles that are fired from your gun. This node uses casting to ensure that the action attached to the execution pin of this node occurs only if the actor hitting the cylinder target matches the object referenced by the casting node. When the node appears, you should already see a blue wire between the Other output pin of the Event Hit node and the Object pin of the casting node. If not, you can generate it manually by clicking and dragging from one pin to the other. You should also remove the connections between the Event Hit and Set Material node execution pins so that the casting node can be linked between them. Removing a wire between two pins can be done by holding down the Alt key and clicking on a pin. Once you have linked the three nodes, the event graph should look like what is shown in the following screenshot: Now compile, save, and click on the play button again to test. This time, you should notice that the cylinder target retains its default color when you walk up and touch it, but does turn red when fired upon by your gun. Summary In this article, the skills you have learned will serve as a strong foundation for building more complex interactive behavior. You may wish to spend some additional time modifying your prototype to include a more appealing layout, or feature faster moving targets. 
One of the greatest benefits of Blueprint's visual scripting is the speed at which you can test new ideas, and each additional skill that you learn will unlock exponentially more possibilities for game experiences that you can explore and prototype. Resources for Article: Further resources on this subject: Creating a Brick Breaking Game [article] Configuration and Handy Tweaks for UDK [article] Unreal Development Toolkit: Level Design HQ [article]

Ragdoll Physics

Packt
19 Feb 2016
5 min read
In this article we will learn how to apply Ragdoll physics to a character. (For more resources related to this topic, see here.) Applying Ragdoll physics to a character Action games often make use of Ragdoll physics to simulate the character's body reaction to being unconsciously under the effect of a hit or explosion. In this recipe, we will learn how to set up and activate Ragdoll physics to our character whenever she steps in a landmine object. We will also use the opportunity to reset the character's position and animations a number of seconds after that event has occurred. Getting ready For this recipe, we have prepared a Unity Package named Ragdoll, containing a basic scene that features an animated character and two prefabs, already placed into the scene, named Landmine and Spawnpoint. The package can be found inside the 1362_07_08 folder. How to do it... To apply ragdoll physics to your character, follow these steps: Create a new project and import the Ragdoll Unity Package. Then, from the Project view, open the mecanimPlayground level. You will see the animated book character and two discs: Landmine and Spawnpoint. First, let's set up our Ragdoll. Access the GameObject | 3D Object | Ragdoll... menu and the Ragdoll wizard will pop-up. Assign the transforms as follows:     Root: mixamorig:Hips     Left Hips: mixamorig:LeftUpLeg     Left Knee: mixamorig:LeftLeg     Left Foot: mixamorig:LeftFoot     Right Hips: mixamorig:RightUpLeg     Right Knee: mixamorig:RightLeg     Right Foot: mixamorig:RightFoot     Left Arm: mixamorig:LeftArm     Left Elbow: mixamorig:LeftForeArm     Right Arm: mixamorig:RightArm     Right Elbow: mixamorig:RightForeArm     Middle Spine: mixamorig:Spine1     Head: mixamorig:Head     Total Mass: 20     Strength: 50 Insert image 1362OT_07_45.png From the Project view, create a new C# Script named RagdollCharacter.cs. Open the script and add the following code: using UnityEngine; using System.Collections; public class RagdollCharacter : MonoBehaviour { void Start () { DeactivateRagdoll(); } public void ActivateRagdoll(){ gameObject.GetComponent<CharacterController> ().enabled = false; gameObject.GetComponent<BasicController> ().enabled = false; gameObject.GetComponent<Animator> ().enabled = false; foreach (Rigidbody bone in GetComponentsInChildren<Rigidbody>()) { bone.isKinematic = false; bone.detectCollisions = true; } foreach (Collider col in GetComponentsInChildren<Collider>()) { col.enabled = true; } StartCoroutine (Restore ()); } public void DeactivateRagdoll(){ gameObject.GetComponent<BasicController>().enabled = true; gameObject.GetComponent<Animator>().enabled = true; transform.position = GameObject.Find("Spawnpoint").transform.position; transform.rotation = GameObject.Find("Spawnpoint").transform.rotation; foreach(Rigidbody bone in GetComponentsInChildren<Rigidbody>()){ bone.isKinematic = true; bone.detectCollisions = false; } foreach (CharacterJoint joint in GetComponentsInChildren<CharacterJoint>()) { joint.enableProjection = true; } foreach(Collider col in GetComponentsInChildren<Collider>()){ col.enabled = false; } gameObject.GetComponent<CharacterController>().enabled = true; } IEnumerator Restore(){ yield return new WaitForSeconds(5); DeactivateRagdoll(); } } Save and close the script. Attach the RagdollCharacter.cs script to the book Game Object. Then, select the book character and, from the top of the Inspector view, change its tag to Player. From the Project view, create a new C# Script named Landmine.cs. 
Open the script and add the following code: using UnityEngine; using System.Collections; public class Landmine : MonoBehaviour { public float range = 2f; public float force = 2f; public float up = 4f; private bool active = true; void OnTriggerEnter ( Collider collision ){ if(collision.gameObject.tag == "Player" && active){ active = false; StartCoroutine(Reactivate()); collision.gameObject.GetComponent<RagdollCharacter>().ActivateRagdoll(); Vector3 explosionPos = transform.position; Collider[] colliders = Physics.OverlapSphere(explosionPos, range); foreach (Collider hit in colliders) { if (hit.GetComponent<Rigidbody>()) hit.GetComponent<Rigidbody>().AddExplosionForce(force, explosionPos, range, up); } } } IEnumerator Reactivate(){ yield return new WaitForSeconds(2); active = true; } } Save and close the script. Attach the script to the Landmine Game Object. Play the scene. Using the WASD keyboard control scheme, direct the character to the Landmine Game Object. Colliding with it will activate the character's Ragdoll physics and apply an explosion force to it. As a result, the character will be thrown away to a considerable distance and will no longer be in the control of its body movements, akin to a ragdoll. How it works... Unity's Ragdoll Wizard assigns, to selected transforms, the components Collider, Rigidbody, and Character Joint. In conjunction, those components make ragdoll physics possible. However, those components must be disabled whenever we want our character to be animated and controlled by the player. In our case, we switch those components on and off using the RagdollCharacter script and its two functions: ActivateRagdoll() and DeactivateRagdoll(), the latter including instructions to re-spawn our character in the appropriate place. For the testing purposes, we have also created the Landmine script, which calls RagdollCharacter script's function named ActivateRagdoll(). It also applies an explosion force to our ragdoll character, throwing it outside the explosion site. There's more... Instead of resetting the character's transform settings, you could have destroyed its gameObject and instantiated a new one over the respawn point using Tags. For more information on this subject, check Unity's documentation at: http://docs.unity3d.com/ScriptReference/GameObject.FindGameObjectsWithTag.html. Summary In this article we learned how to apply Ragdoll physics to a character. We also learned how to setup Ragdoll for the character of the game. To learn more please refer to the following books: Learning Unity 2D Game Development by Examplehttps://www.packtpub.com/game-development/learning-unity-2d-game-development-example. Unity Game Development Blueprintshttps://www.packtpub.com/game-development/unity-game-development-blueprints. Getting Started with Unityhttps://www.packtpub.com/game-development/getting-started-unity. Resources for Article: Further resources on this subject: Using a collider-based system [article] Looking Back, Looking Forward [article] The Vertex Functions [article]
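The There's more... note above suggests destroying the character and instantiating a fresh prefab instead of resetting its transform. As a rough sketch of that idea, not part of the recipe itself, the following script assumes a characterPrefab field assigned in the Inspector and reuses the scene's Spawnpoint object; both are assumptions rather than assets defined in the recipe.

```csharp
using UnityEngine;

public class CharacterRespawner : MonoBehaviour
{
    // Assumed to be assigned in the Inspector with the character prefab.
    public GameObject characterPrefab;

    public void Respawn(GameObject oldCharacter)
    {
        // The recipe's scene already contains a Spawnpoint object we can reuse.
        Transform spawn = GameObject.Find("Spawnpoint").transform;

        Destroy(oldCharacter);
        Instantiate(characterPrefab, spawn.position, spawn.rotation);
    }
}
```

The trade-off is that a freshly instantiated prefab loses any runtime state the old character carried, so this approach suits games where the character resets completely on respawn.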

4 things in tech that might die in 2019

Richard Gall
19 Dec 2018
10 min read
If you’re in and around the tech industry, you’ve probably noticed that hype is an everyday reality. People spend a lot of time talking about what trends and technologies are up and coming and what people need to be aware of - they just love it. Perhaps second only to the fashion industry, the tech world moves through ideas quickly, with innovation piling up upon the next innovation. For the most part, our focus is optimistic: what is important? What’s actually going to shape the future? But with so much change there are plenty of things that disappear completely or simply shift out of view. Some of these things may have barely made an impression, others may have been important but are beginning to be replaced with other, more powerful, transformative and relevant tools. So, in the spirit of pessimism, here is a list of some of the trends and tools that might disappear from view in 2019. Some of these have already begun to sink, while others might leave you pondering whether I’ve completely lost my marbles. Of course, I am willing to be proven wrong. While I will not be eating my hat or any other item of clothing, I will nevertheless accept defeat with good grace in 12 months time. Blockchain Let’s begin with a surprise. You probably expected Blockchain to be hyped for 2019, but no, 2019 might, in fact, be the year that Blockchain dies. Let’s consider where we are right now: Blockchain, in itself, is a good idea. But so far all we’ve really had our various cryptocurrencies looking ever so slightly like pyramid schemes. Any further applications of Blockchain have, by and large, eluded the tech world. In fact, it’s become a useful sticker for organizations looking to raise funds - there are examples of apps out there that support Blockchain backed technologies in the early stages of funding which are later dropped as the company gains support. And it’s important to note that the word Blockchain doesn’t actually refer to one thing - there are many competing definitions as this article on The Verge explains so well. At risk of sounding flippant, Blockchain is ultimately a decentralized database. The reason it’s so popular is precisely because there is a demand for a database that is both scalable and available to a variety of parties - a database that isn’t surrounded by the implicit bureaucracy and politics that even the most prosaic ones do. From this perspective, it feels likely that 2019 will be a search for better ways of managing data - whether that includes Blockchain in its various forms remains to be seen. What you should learn instead of Blockchain A trend that some have seen as being related to Blockchain is edge computing. Essentially, this is all about decentralized data processing at the ‘edge’ of a network, as opposed to within a centralized data center (say, for example, cloud). Understanding the value of edge computing could allow us to better realise what Blockchain promises. Learn edge computing with Azure IoT Development Cookbook. It’s also worth digging deeper into databases - understanding how we can make these more scalable, reliable, and available, are essentially the tasks that anyone pursuing Blockchain is trying to achieve. So, instead of worrying about a buzzword, go back to what really matters. Get to grips with new databases. Learn with Seven NoSQL Databases in a Week Why I could be wrong about Blockchain There’s a lot of support for Blockchain across the industry, so it might well be churlish to dismiss it at this stage. 
Blockchain certainly does offer a completely new way of doing things, and there are potentially thousands of use cases. If you want to learn Blockchain, check out these titles: Mastering Blockchain, Second Edition Foundations of Blockchain Blockchain for Enterprise   Hadoop and big data If Blockchain is still receiving considerable hype, then big data has been slipping away quietly for the last couple of years. Of course, it hasn’t quite disappeared - data is now a normal part of reality. It’s just that trends like artificial intelligence and cloud have emerged to take its place and place even greater emphasis on what we’re actually doing with that data, and how we’re doing it. Read more: Why is Hadoop dying? With this change in emphasis, we’ve also seen the slow death of Hadoop. In a world that increasingly cloud native, it simply doesn’t make sense to run data on a cluster of computers - instead, leveraging public cloud makes much more sense. You might, for example, use Amazon S3 to store your data and then Spark, Flink, or Kafka for processing. Of course, the advantages of cloud are well documented. But in terms of big data, cloud allows for much greater elasticity in terms of scale, greater speed, and makes it easier to perform machine learning thanks to in built features that a number of the leading cloud vendors provide. What you should learn instead of Hadoop The future of big data largely rests in tools like Spark, Flink and Kafka. But it’s important to note it’s not really just about a couple of tools. As big data evolves, focus will need to be on broader architectural questions about what data you have, where it needs to be stored and how it should be used. Arguably, this is why ‘big data’ as a concept will lose valence with the wider community - it will still exist, but will be part of parcel of everyday reality, it won’t be separate from everything else we do. Learn the tools that will drive big data in the future: Apache Spark 2: Data Processing and Real-Time Analytics [Learning Path] Apache Spark: Tips, Tricks, & Techniques [Video] Big Data Processing with Apache Spark Learning Apache Flink Apache Kafka 1.0 Cookbook Why I could be wrong about Hadoop Hadoop 3 is on the horizon and could be the saving grace for Hadoop. Updates suggest that this new iteration is going to be much more amenable to cloud architectures. Learn Hadoop 3: Apache Hadoop 3 Quick Start Guide Mastering Hadoop 3         R 12 to 18 months ago debate was raging over whether R or Python was the best language for data. As we approach the end of 2018, that debate seems to have all but disappeared, with Python finally emerging as the go-to language for anyone working with data. There are a number of reasons for this: Python has the best libraries and frameworks for developing machine learning models. TensorFlow, for example, which runs on top of Keras, makes developing pretty sophisticated machine and deep learning systems relatively easy. R, however, simply can’t match Python in this way. With this ease comes increased adoption. If people want to ‘get into’ machine learning and artificial intelligence, Python is the obvious choice. This doesn’t mean R is dead - instead, it will continue to be a language that remains relevant for very specific use cases in research and data analysis. If you’re a researcher in a university, for example, you’ll probably be using R. But it at least now has to concede that it will never have the reach or levels of growth that Python has. 
What you should learn instead of R This is obvious - if you’re worried about R’s flexibility and adaptability for the future, you need to learn Python. But it’s certainly not the only option when it comes to machine learning - the likes of Scala and Go could prove useful assets on your CV, for machine learning and beyond. Learn a new way to tackle contemporary data science challenges: Python Machine Learning - Second Edition Hands-on Supervised Machine Learning with Python [Video] Machine Learning With Go Scala for Machine Learning - Second Edition       Why I could be wrong about R R is still an incredibly useful language when it comes to data analysis. Particularly if you’re working with statistics in a variety of fields, it’s likely that it will remain an important part of your skill set for some time. Check out these R titles: Getting Started with Machine Learning in R [Video] R Data Analysis Cookbook - Second Edition Neural Networks with R         IoT IoT is a term that has been hanging around for quite a while now. But it still hasn’t quite delivered on the hype that it originally received. Like Blockchain, 2019 is perhaps IoT’s make or break year. Even if it doesn’t develop into the sort of thing it promised, it could at least begin to break down into more practical concepts - like, for example edge computing. In this sense, we’d stop talking about IoT as if it were a single homogenous trend about to hit the modern world, but instead a set of discrete technologies that can produce new types of products, and complement existing (literal) infrastructure. The other challenge that IoT faces in 2019 is that the very concept of a connected world depends upon decision making - and policy - beyond the world of technology and business. If, for example, we’re going to have smart cities, there needs to be some kind of structure in place on which some degree of digital transformation can take place. Similarly, if every single device is to be connected in some way, questions will need to be asked about how these products are regulated and how this data is managed. Essentially, IoT is still a bit of a wild west. Given the year of growing scepticism about technology, major shifts are going to be unlikely over the next 12 months. What to learn One way of approaching IoT is instead to take a step back and think about the purpose of IoT, and what facets of it are most pertinent to what you want to achieve. Are you interested in collecting and analyzing data? Or developing products that have in built operational intelligence. Once you think about it from this perspective, IoT begins to sound less like a conceptual behemoth, and something more practical and actionable. Why I could be wrong about IoT Immediate shifts in IoT might be slow, but it could begin to pick up speed in organizations that understand it could have a very specific value. In this sense, IoT is a little like Blockchain - it’s only really going to work if we can move past the hype, and get into the practical uses of different technologies. Check out some of our latest IoT titles: Internet of Things Programming Projects Industrial Internet Application Development Introduction to Internet of Things [Video] Alexa Skills Projects       Does anything really die in tech? You might be surprised at some of the entries on this list - others, not so much. But either way, it’s worth pointing out that ultimately nothing ever really properly disappears in tech. 
From a legacy perspective change and evolution often happens slowly, and in terms of innovation buzzwords and hype don’t simply vanish, they mature and influence developments in ways we might not have initially expected. What will really be important in 2019 is to be alive to these shifts, and give yourself the best chance of taking advantage of change when it really matters.

Working with the Basic Components That Make Up a Three.js Scene

Packt
17 Oct 2013
22 min read
(For more resources related to this topic, see here.)
Creating a scene
We know that for a scene to show anything, we need three types of components:
Camera: It determines what is rendered on the screen
Lights: They have an effect on how materials are shown and are used when creating shadow effects
Objects: These are the main objects that are rendered from the perspective of the camera: cubes, spheres, and so on
The THREE.Scene() object serves as the container for all these different objects. This object itself doesn't have too many options and functions.
Basic functionality of the scene
The best way to explore the functionality of the scene is by looking at an example. I'll use this example to explain the various functions and options that a scene has. When we open this example in the browser, the output will look something like the following screenshot:
Even though the scene looks somewhat empty, it already contains a couple of objects. By looking at the following source code, we can see that we've used the Scene.add(object) function from the THREE.Scene() object to add a THREE.Mesh (the ground plane that you see), a THREE.SpotLight, and a THREE.AmbientLight object. The THREE.Camera object is added automatically by the Three.js library when you render the scene, but can also be added manually if you prefer.

var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 1000);
...
var planeGeometry = new THREE.PlaneGeometry(60, 40, 1, 1);
var planeMaterial = new THREE.MeshLambertMaterial({color: 0xffffff});
var plane = new THREE.Mesh(planeGeometry, planeMaterial);
...
scene.add(plane);
var ambientLight = new THREE.AmbientLight(0x0c0c0c);
scene.add(ambientLight);
...
var spotLight = new THREE.SpotLight(0xffffff);
...
scene.add(spotLight);

Before we look deeper into the THREE.Scene() object, I'll first explain what you can do in the demonstration, and after that we'll look at some code. Open this example in your browser and look at the controls at the upper-right corner as you can see in the following screenshot:
With these controls you can add a cube to the scene, remove the last added cube from the scene, and show all the current objects that the scene contains. The last entry in the control section shows the current number of objects in the scene. What you'll probably notice when you start up the scene is that there are already four objects in the scene. These are the ground plane, the ambient light, the spot light, and the camera that we had mentioned earlier. In the following code fragment, we'll look at each of the functions in the control section and start with the easiest one: the addCube() function:

this.addCube = function() {
  var cubeSize = Math.ceil((Math.random() * 3));
  var cubeGeometry = new THREE.CubeGeometry(cubeSize, cubeSize, cubeSize);
  var cubeMaterial = new THREE.MeshLambertMaterial({color: Math.random() * 0xffffff});
  var cube = new THREE.Mesh(cubeGeometry, cubeMaterial);
  cube.castShadow = true;
  cube.name = "cube-" + scene.children.length;
  cube.position.x = -30 + Math.round((Math.random() * planeGeometry.width));
  cube.position.y = Math.round((Math.random() * 5));
  cube.position.z = -20 + Math.round((Math.random() * planeGeometry.height));
  scene.add(cube);
  this.numberOfObjects = scene.children.length;
};

This piece of code should be pretty easy to read by now. Not many new concepts are introduced here.
When you click on the addCube button, a new THREE.CubeGeometry instance is created with a random size between zero and three. Besides a random size, the cube also gets a random color and position in the scene. A new thing in this piece of code is that we also give the cube a name by using the name attribute. Its name is set to cube- followed by the number of objects currently in the scene (shown by the scene.children.length property). So you'll get names like cube-1, cube-2, cube-3, and so on. A name can be useful for debugging purposes, but can also be used to directly find an object in your scene. If you use the Scene.getChildByName(name) function, you can directly retrieve a specific object and, for instance, change its location. You might wonder what the last line in the previous code snippet does. The numberOfObjects variable is used by our control GUI to list the number of objects in the scene. So whenever we add or remove an object, we set this variable to the updated count. The next function that we can call from the control GUI is removeCube and, as the name implies, clicking on this button removes the last added cube from the scene. The following code snippet shows how this function is defined:

this.removeCube = function() {
  var allChildren = scene.children;
  var lastObject = allChildren[allChildren.length - 1];
  if (lastObject instanceof THREE.Mesh) {
    scene.remove(lastObject);
    this.numberOfObjects = scene.children.length;
  }
}

To add an object to the scene we will use the add() function. To remove an object from the scene we use the not very surprising remove() function. In the given code fragment we have used the children property from the THREE.Scene() object to get the last object that was added. We also need to check whether that object is a Mesh object in order to avoid removing the camera and the lights. After we've removed the object, we will once again update the GUI property that holds the number of objects in the scene. The final button on our GUI is labeled as outputObjects. You've probably already clicked on it and nothing seemed to happen. What this button does is print out all the objects that are currently in our scene and output them to the web browser Console as shown in the following screenshot: The code to output information to the Console log makes use of the built-in console object as shown:

this.outputObjects = function() {
  console.log(scene.children);
}

This is great for debugging purposes; especially when you name your objects, it's very useful for finding issues and problems with a specific object in your scene.
For instance, the properties of the cube-17 object will look like the following code snippet:

__webglActive: true
__webglInit: true
_modelViewMatrix: THREE.Matrix4
_normalMatrix: THREE.Matrix3
_vector: THREE.Vector3
castShadow: true
children: Array[0]
eulerOrder: "XYZ"
frustumCulled: true
geometry: THREE.CubeGeometry
id: 20
material: THREE.MeshLambertMaterial
matrix: THREE.Matrix4
matrixAutoUpdate: true
matrixRotationWorld: THREE.Matrix4
matrixWorld: THREE.Matrix4
matrixWorldNeedsUpdate: false
name: "cube-17"
parent: THREE.Scene
position: THREE.Vector3
properties: Object
quaternion: THREE.Quaternion
receiveShadow: false
renderDepth: null
rotation: THREE.Vector3
rotationAutoUpdate: true
scale: THREE.Vector3
up: THREE.Vector3
useQuaternion: false
visible: true
__proto__: Object

So far we've seen the following scene-related functionality:
Scene.add(): This method adds an object to the scene
Scene.remove(): This method removes an object from the scene
Scene.children: This property returns a list of all the children in the scene
Scene.getChildByName(): This method gets a specific object from the scene by using the name attribute
These are the most important scene-related functions, and most often you won't need any more. There are, however, a couple of helper functions that could come in handy, and I'd like to show them based on the code that handles the cube rotation. We use a render loop to render the scene. Let's look at the render loop for this example:

function render() {
  stats.update();
  scene.traverse(function(e) {
    if (e instanceof THREE.Mesh && e != plane) {
      e.rotation.x += controls.rotationSpeed;
      e.rotation.y += controls.rotationSpeed;
      e.rotation.z += controls.rotationSpeed;
    }
  });
  requestAnimationFrame(render);
  renderer.render(scene, camera);
}

Here we can see that the THREE.Scene.traverse() function is being used. We can pass a function as an argument to the traverse() function. This passed in function will be called for each child of the scene. In the render() function, we will use the traverse() function to update the rotation for each of the cube instances (we will explicitly ignore the ground plane). We could also have done this by iterating over the children property array by using a for loop. Before we dive into the Mesh and Geometry object details, I'd like to show you two interesting properties that you can set on the Scene object: fog and overrideMaterial.
Adding the fog effect to the scene
The fog property lets you add a fog effect to the complete scene. The farther an object is, the more it will be hidden from sight. The following screenshot shows how the fog property is enabled: Enabling the fog property is really easy to do in the Three.js library. Just add the following line of code after you've defined your scene:

scene.fog = new THREE.Fog(0xffffff, 0.015, 100);

Here we are defining a white fog (0xffffff). The last two properties can be used to tune how the mist will appear. The 0.015 value sets the near property and the 100 value sets the far property. With these properties you can determine where the mist will start and how fast it will get denser. There is also a different way to set the mist for the scene; for this you will have to use the following definition:

scene.fog = new THREE.FogExp2(0xffffff, 0.015);

This time we don't specify the near and far properties, but just the color and the mist density. It's best to experiment a bit with these properties in order to get the effect that you want.
Using the overrideMaterial property
The last property that we will discuss for the scene is the overrideMaterial property, which is used to override the materials of all the objects. When you use this property as shown in the following code snippet, all the objects that you add to the scene will make use of the same material:

scene.overrideMaterial = new THREE.MeshLambertMaterial({color: 0xffffff});

The scene will be rendered as shown in the following screenshot: In the earlier screenshot, you can see that all the cube instances are rendered by using the same material and color. In this example we've used a MeshLambertMaterial object as the material. With this material type, you can create non-shiny looking objects which will respond to the lights that you add to the scene. In this section we've looked at the first of the core concepts of the Three.js library: the scene. The most important thing to remember about the scene is that it is basically a container for all the objects, lights, and cameras that you want to use while rendering. The following table summarizes the most important functions and attributes of the Scene object:
add(object): Adds an object to the scene. You can also use this function, as we'll see later, to create groups of objects.
children: Returns a list of all the objects that have been added to the scene, including the camera and lights.
getChildByName(name): When you create an object, you can give it a distinct name by using the name attribute. The Scene object has a function that you can use to directly return an object with a specific name.
remove(object): If you've got a reference to an object in the scene, you can also remove it from the scene by using this function.
traverse(function): The children attribute returns a list of all the children in the scene. With the traverse() function we can also access these children by passing in a callback function.
fog: This property allows you to set the fog for the scene. It will render a haze that hides the objects that are far away.
overrideMaterial: With this property you can force all the objects in the scene to use the same material.
In the next section we'll look closely at the objects that you can add to the scene.
Working with the Geometry and Mesh objects
In each of the examples so far you've already seen the geometries and meshes that are being used. For instance, to add a sphere object to the scene we did the following:

var sphereGeometry = new THREE.SphereGeometry(4, 20, 20);
var sphereMaterial = new THREE.MeshBasicMaterial({color: 0x7777ff});
var sphere = new THREE.Mesh(sphereGeometry, sphereMaterial);

We have defined the shape of the object, its geometry, what this object looks like, its material, and combined all of these in a mesh that can be added to a scene. In this section we'll look a bit more closely at what the Geometry and Mesh objects are. We'll start with the geometry.
The properties and functions of a geometry
The Three.js library comes with a large set of out-of-the-box geometries that you can use in your 3D scene. Just add a material, create a mesh variable, and you're pretty much done. The following screenshot, from example 04-geometries.html, shows a couple of the standard geometries available in the Three.js library: Later we'll explore all the basic and advanced geometries that the Three.js library has to offer. For now, we'll go into more detail on what the geometry variable actually is.
A geometry in Three.js, and in most other 3D libraries, is basically a collection of points in a 3D space and a number of faces connecting all those points together. Take, for example, a cube: A cube has eight corners. Each of these corners can be defined as a combination of x, y, and z coordinates. So, each cube has eight points in a 3D space. In the Three.js library, these points are called vertices. A cube has six sides, with one vertex at each corner. In the Three.js library, each of these sides is called a face. When you use one of the Three.js library-provided geometries, you don't have to define all the vertices and faces yourself. For a cube you only need to define the width, height, and depth. The Three.js library uses that information and creates a geometry with eight vertices at the correct positions and the correct faces. Even though you'd normally use the Three.js library-provided geometries, or generate them automatically, you can still create geometries completely by hand by defining the vertices and faces. This is shown in the following code snippet:

var vertices = [
  new THREE.Vector3(1, 3, 1),
  new THREE.Vector3(1, 3, -1),
  new THREE.Vector3(1, -1, 1),
  new THREE.Vector3(1, -1, -1),
  new THREE.Vector3(-1, 3, -1),
  new THREE.Vector3(-1, 3, 1),
  new THREE.Vector3(-1, -1, -1),
  new THREE.Vector3(-1, -1, 1)
];
var faces = [
  new THREE.Face3(0, 2, 1),
  new THREE.Face3(2, 3, 1),
  new THREE.Face3(4, 6, 5),
  new THREE.Face3(6, 7, 5),
  new THREE.Face3(4, 5, 1),
  new THREE.Face3(5, 0, 1),
  new THREE.Face3(7, 6, 2),
  new THREE.Face3(6, 3, 2),
  new THREE.Face3(5, 7, 0),
  new THREE.Face3(7, 2, 0),
  new THREE.Face3(1, 3, 4),
  new THREE.Face3(3, 6, 4),
];
var geom = new THREE.Geometry();
geom.vertices = vertices;
geom.faces = faces;
geom.computeCentroids();
geom.mergeVertices();

This code shows you how to create a simple cube. We have defined the points that make up this cube in the vertices array. These points are connected to create triangular faces and are stored in the faces array. For instance, the new THREE.Face3(0,2,1) element creates a triangular face by using the points 0, 2, and 1 from the vertices array. In this example we have used a THREE.Face3 element to define the six sides of the cube, that is, two triangles for each face. In previous versions of the Three.js library, you could also use a quad instead of a triangle. A quad uses four vertices instead of three to define the face. Whether using quads or triangles is better is a heated debate in the 3D modeling world. Basically, using quads is often preferred during modeling, since they can be enhanced and smoothed more easily than triangles. For rendering and game engines, though, working with triangles is easier since every shape can be rendered as a triangle. Using these vertices and faces, we can now create our custom geometry, and use it to create a mesh. I've created an example that you can use to play around with the position of the vertices. In example 05-custom-geometry.html, you can change the position of all the vertices of a cube. This is shown in the following screenshot: This example, which uses the same setup as all our other examples, has a render loop. Whenever you change one of the properties in the drop-down control box, the cube is rendered correctly based on the changed position of one of the vertices. This isn't something that works out-of-the-box. For performance reasons, the Three.js library assumes that the geometry of a mesh won't change during its lifetime.
To get our example to work we need to make sure that the following is added to the code in the render loop:

mesh.geometry.vertices = vertices;
mesh.geometry.verticesNeedUpdate = true;
mesh.geometry.computeFaceNormals();

In the first line of the given code snippet, we point the vertices of the mesh that you see on the screen to an array of the updated vertices. We don't need to reconfigure the faces, since they are still connected to the same points as they were before. After we've set the updated vertices, we need to tell the geometry that the vertices need to be updated. We can do this by setting the verticesNeedUpdate property of the geometry to true. Finally we will do a recalculation of the faces to update the complete model by using the computeFaceNormals() function. The last geometry functionality that we'll look at is the clone() function. We had mentioned that the geometry defines the form, the shape of an object, and combined with a material we can create an object that can be added to the scene to be rendered by the Three.js library. With the clone() function, as the name implies, we can make a copy of the geometry and, for instance, use it to create a different mesh with a different material. In the same example, that is, 05-custom-geometry.html, you can see a clone button at the top of the control GUI, as seen in the following screenshot: If you click on this button, a clone will be made of the geometry as it currently is, and a new object is created with a different material and is added to the scene. The code for this is rather trivial, but is made a bit more complex because of the materials that I have used. Let's take a step back and first look at the code that was used to create the green material for the cube:

var materials = [
  new THREE.MeshLambertMaterial({ opacity: 0.6, color: 0x44ff44, transparent: true }),
  new THREE.MeshBasicMaterial({ color: 0x000000, wireframe: true })
];

As you can see, I didn't use a single material, but an array of two materials. The reason is that besides showing a transparent green cube, I also wanted to show you the wireframe, since that shows very clearly where the vertices and faces are located. The Three.js library, of course, supports the use of multiple materials when creating a mesh. You can use the SceneUtils.createMultiMaterialObject() function for this as shown:

var mesh = THREE.SceneUtils.createMultiMaterialObject(geom, materials);

What the Three.js library does in this function is that it doesn't create one THREE.Mesh instance, but it creates one for each material that you have specified, and puts all of these meshes in a group. This group can be used in the same manner that you've used for the Scene object. You can add meshes, get meshes by name, and so on. For instance, to add shadows to all the children in this group, we will do the following:

mesh.children.forEach(function(e) { e.castShadow = true });

Now back to the clone() function that we were discussing earlier:

this.clone = function() {
  var cloned = mesh.children[0].geometry.clone();
  var materials = [
    new THREE.MeshLambertMaterial({ opacity: 0.6, color: 0xff44ff, transparent: true }),
    new THREE.MeshBasicMaterial({ color: 0x000000, wireframe: true })
  ];
  var mesh2 = THREE.SceneUtils.createMultiMaterialObject(cloned, materials);
  mesh2.children.forEach(function(e) { e.castShadow = true });
  mesh2.translateX(5);
  mesh2.translateZ(5);
  mesh2.name = "clone";
  scene.remove(scene.getChildByName("clone"));
  scene.add(mesh2);
}

This piece of JavaScript is called when the clone button is clicked on.
Here we clone the geometry of the first child of the cube. Remember, the mesh variable contains two children: a mesh that uses the MeshLambertMaterial and a mesh that uses the MeshBasicMaterial. Based on this cloned geometry, we will create a new mesh, aptly named mesh2. We can move this new mesh by using the translate() function, remove the previous clone (if present), and add the clone to the scene. That's enough on geometries for now.
The functions and attributes for a mesh
We've already learned that, in order to create a mesh, we need a geometry and one or more materials. Once we have a mesh, we can add it to the scene, and it is rendered. There are a couple of properties that you can use to change where and how this mesh appears in the scene. In the first example, we'll look at the following set of properties and functions:
position: Determines the position of this object relative to the position of its parent. Most often the parent of an object is a THREE.Scene() object.
rotation: With this property you can set the rotation of an object around any of its axes.
scale: This property allows you to scale the object around its x, y, and z axes.
translateX(amount): Moves the object by the specified amount along the x axis.
translateY(amount): Moves the object by the specified amount along the y axis.
translateZ(amount): Moves the object by the specified amount along the z axis.
As always, we have an example ready for you that'll allow you to play around with these properties. If you open up the 06-mesh-properties.html example in your browser, you will get a drop-down menu where you can alter all these properties and directly see the result, as shown in the following screenshot: Let me walk you through them; I'll start with the position property. We've already seen this property a couple of times, so let's quickly address it. With this property you can set the x, y, and z coordinates of the object. The position of an object is relative to its parent object, which usually is the scene that you have added the object to. We can set an object's position property in three different ways; each coordinate can be set directly as follows:

cube.position.x = 10;
cube.position.y = 3;
cube.position.z = 1;

But we can also set all of them at once:

cube.position.set(10, 3, 1);

There is also a third option. The position property is a THREE.Vector3 object. This means that we can also do the following to set this object:

cube.position = new THREE.Vector3(10, 3, 1);

I want to make a quick sidestep before looking at the other properties of this mesh. I had mentioned that this position is set relative to the position of its parent. In the previous section on THREE.Geometry, we made use of the THREE.SceneUtils.createMultiMaterialObject() function to create a multimaterial object. I had explained that this doesn't really return a single mesh, but a group that contains a mesh based on the same geometry for each material. In our case, it is a group that contains two meshes. If we change the position of one of the meshes that is created, you can clearly see that there really are two distinct objects. However, if we now move the created group around, the offset will remain the same. These two meshes are shown in the following screenshot: Ok, the next one on the list is the rotation property. You've already seen this property being used a couple of times in this article. With this property, you can set the rotation of the object around one of its axes.
You can set this value in the same manner as we did for the position property. A complete rotation, as you might remember from math class, is two pi. The following code snippet shows how to configure this:

cube.rotation.x = 0.5 * Math.PI;
cube.rotation.set(0.5 * Math.PI, 0, 0);
cube.rotation = new THREE.Vector3(0.5 * Math.PI, 0, 0);

You can play around with this property by using the 06-mesh-properties.html example. The next property on our list is one that we haven't talked about: scale. The name pretty much sums up what you can do with this property. You can scale the object along a specific axis. If you set the scale to values smaller than one, the object will shrink as shown: When you use values larger than one, the object will become larger as shown in the screenshot that follows: The last part of the mesh that we'll look at in this article is the translate functionality. With translate, you can also change the position of an object, but instead of defining the absolute position of where you want the object to be, you will define where the object should move to, relative to its current position. For instance, we've got a sphere object that is added to a scene and its position has been set to (1,2,3). Next, we will translate the object along its x axis by translateX(4). Its position will now be (5,2,3). If we want to restore the object to its original position we will call translateX(-4). In the 06-mesh-properties.html example, there is a menu tab called translate. From there you can experiment with this functionality. Just set the translate values for the x, y, and z axes, and click on the translate button. You'll see that the object is being moved to a new position based on these three values.
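To make the scale and translate behavior concrete, here is a minimal sketch; the cube variable stands for any mesh created earlier in this article, and the numbers are purely illustrative:

// assume cube is a THREE.Mesh that has already been added to the scene
cube.position.set(1, 2, 3);   // absolute position, relative to its parent
cube.scale.set(2, 2, 0.5);    // twice as wide and deep, half as tall
cube.translateX(4);           // relative move: with no rotation applied, position is now (5, 2, 3)
cube.translateX(-4);          // move back to the original (1, 2, 3)

Because translate works relative to the current position, calling it twice moves the object twice, whereas setting position to the same values repeatedly has no further effect.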

Working with Forms in Dynamics AX: Part 1

Packt
07 Jan 2010
12 min read
Introduction
Normally, forms are created using the AOT by creating a form object and adding form controls like tabs, tab pages, grids, groups, data fields, images, and others. Form behavior is controlled by its properties or the code in its member methods. The behavior and layout of form controls are also controlled by their properties and the code in their member methods. Although it is very rare, forms can also be created dynamically from code. In this article, we will cover various aspects of using Dynamics AX forms. We start with building Dynamics AX dialogs, which actually are dynamic forms, and explain how to handle their events. The article will also show how to add dynamic controls to existing forms, how to build dynamic forms from scratch, how to make forms modal, and how to change the appearance of all application forms with a few lines of code. This article also discusses the usage of splitters, tree controls, creating checklists, saving last user selections, modifying the application version, and other things.
Creating Dialogs
Dialogs are a way to present users with a simple input form. They are commonly used for small user tasks like filling in report values, running batch jobs, presenting only the most important fields to the user when creating a new record, etc. Dialogs are normally created from X++ code without storing the actual layout in the AOT. The application class Dialog is used to build dialogs. Other application classes like DialogField, DialogGroup, DialogTabPage, and so on, are used to create dialog controls. One of the common ways is to use dialogs within RunBase framework classes that need user input. In this example, we will see how to build a dialog from code using the RunBase framework class. The dialog will contain customer table fields shown in different groups and tabs for creating a new record. There will be two tab pages, General and Details. The first page will have Customer account and Name input controls. The second page will be divided into two groups, Setup and Payment, with relevant fields. The actual record will not be created, as it is out of scope of this example. However, for demonstration purposes, the information specified by the user will be displayed in the Infolog.
How to do it...
Open the AOT, and create a new class CustCreate with the following code:

class CustCreate extends RunBase
{
    DialogField    fieldAccount;
    DialogField    fieldName;
    DialogField    fieldGroup;
    DialogField    fieldCurrency;
    DialogField    fieldPaymTermId;
    DialogField    fieldPaymMode;
    CustAccount    custAccount;
    CustName       custName;
    CustGroupId    custGroupId;
    CurrencyCode   currencyCode;
    CustPaymTermId paymTermId;
    CustPaymMode   paymMode;
}

public container pack()
{
    return connull();
}

public boolean unpack(container packedClass)
{
    return true;
}

protected Object dialog()
{
    Dialog        dialog;
    DialogTabPage tabGeneral;
    DialogTabPage tabDetails;
    DialogGroup   groupCustomer;
    DialogGroup   groupPayment;
    ;
    dialog = super();
    dialog.caption("Customer information");
    tabGeneral = dialog.addTabPage("General");
    fieldAccount = dialog.addField(typeid(CustVendAC), "Customer account");
    fieldName = dialog.addField(typeid(CustName));
    tabDetails = dialog.addTabPage("Details");
    groupCustomer = dialog.addGroup("Setup");
    fieldGroup = dialog.addField(typeid(CustGroupId));
    fieldCurrency = dialog.addField(typeid(CurrencyCode));
    groupPayment = dialog.addGroup("Payment");
    fieldPaymTermId = dialog.addField(typeid(CustPaymTermId));
    fieldPaymMode = dialog.addField(typeid(CustPaymMode));
    return dialog;
}

public boolean getFromDialog()
{
    ;
    custAccount = fieldAccount.value();
    custName = fieldName.value();
    custGroupId = fieldGroup.value();
    currencyCode = fieldCurrency.value();
    paymTermId = fieldPaymTermId.value();
    paymMode = fieldPaymMode.value();
    return true;
}

public void run()
{
    ;
    info("You have entered customer information:");
    info(strfmt("Account: %1", custAccount));
    info(strfmt("Name: %1", custName));
    info(strfmt("Group: %1", custGroupId));
    info(strfmt("Currency: %1", currencyCode));
    info(strfmt("Terms of payment: %1", paymTermId));
    info(strfmt("Method of payment: %1", paymMode));
}

static void main(Args _args)
{
    CustCreate custCreate = new CustCreate();
    ;
    if (custCreate.prompt())
    {
        custCreate.run();
    }
}

To test the dialog, run the class. The following form should appear with the General tab page open initially: When you click on the Details tab page, you will see the following screen: Enter some information into the fields, and click OK. The results are displayed in the Infolog:
How it works...
Firstly, we create a new class CustCreate. By extending it from RunBase, we utilize the standard approach of running such kinds of dialogs. RunBase will also automatically add the required buttons to the dialog. Then we declare class member variables, which will be used later. DialogField type variables are actual user input fields. The rest are used to store the values returned from user input. The pack() and unpack() methods are normally used to convert an object to a container, which is a format to store an object in the user cache (SysLastValue) or to transfer it between the Server and Client tiers. RunBase requires those two methods to be present in all its subclasses. In this example, we are not using any of the pack()/unpack() features, but because those methods are mandatory, we return an empty container from pack() and true from unpack(). The layout of the actual dialog is constructed in the dialog() member method. Here, we define local variables for the dialog itself, tab pages, and groups. Those variables, as opposed to the dialog fields, do not store any value to be processed further. The super() of the RunBase framework creates the initial dialog object for us. The object is created using the Dialog application class.
The class actually uses the Dynamics AX form as a base, automatically adds the relevant controls, including OK and Cancel buttons, and presents it to the user as a dialog. Additional dialog controls are added to the dialog by using the addField(), addGroup(), and addTabPage() methods. There are more methods to add different types of controls like addText(), addImage(), addMenuItemButton(), and others. All controls have to be added to the dialog object directly. Adding an input control to groups or tabs is done by calling addField() right after addGroup() or addTabPage(). In the example above, we add tab pages, groups, and fields in logical sequence, so every control appears in the right position. The method returns a prepared dialog object for further processing. Values from the dialog controls are assigned to variables by calling the value() member method of DialogField. If a dialog is used within the RunBase framework, as in this example, the best place to assign dialog control values to variables is the getFromDialog() member method. RunBase calls this method right after the user clicks OK. The main processing is done in run(). For demonstration purposes, this example contains only variable output to the Infolog. In order to make this class runnable, the static method main() has to be created. Here, we create a new CustCreate object, invoke the user dialog by calling prompt(), and once the user finishes entering customer details by clicking OK, we call run() to process the data.
Handling dialog events
Sometimes, the user interface requires us to change the status of a field, depending on the status of another field. For example, if the user marks the Show filter checkbox, another field, Filter, appears or becomes enabled. In standard Dynamics AX forms, this can be done using the input control event modified(). But sometimes such features are required on dialogs where handling events is not that straightforward. Very often, I find myself in a situation where existing dialogs need to be adjusted to support events. The easiest way of doing that is of course to build a form in the AOT, which will replace the original dialog. But in cases when the existing dialog is complex enough, probably a more cost-effective solution would be to implement dialog event handling. It is not as flexible as AOT forms, but in most cases it does the job. In this recipe, we will create a dialog very similar to the previous one, but instead of entering the customer number, we will be able to select it from a list. Once the customer is selected, the rest of the fields will be filled automatically by the system from the customer record.
How to do it...
In the AOT, create a new class named CustSelect with the following code:

class CustSelect extends RunBase
{
    DialogField fieldAccount;
    DialogField fieldName;
    DialogField fieldGroup;
    DialogField fieldCurrency;
    DialogField fieldPaymTermId;
    DialogField fieldPaymMode;
}

public container pack()
{
    return connull();
}

public boolean unpack(container packedClass)
{
    return true;
}

protected Object dialog()
{
    Dialog        dialog;
    DialogTabPage tabGeneral;
    DialogTabPage tabDetails;
    DialogGroup   groupCustomer;
    DialogGroup   groupPayment;
    ;
    dialog = super();
    dialog.caption("Customer information");
    dialog.allowUpdateOnSelectCtrl(true);
    tabGeneral = dialog.addTabPage("General");
    fieldAccount = dialog.addField(typeid(CustAccount), "Customer account");
    fieldName = dialog.addField(typeid(CustName));
    fieldName.enabled(false);
    tabDetails = dialog.addTabPage("Details");
    groupCustomer = dialog.addGroup("Setup");
    fieldGroup = dialog.addField(typeid(CustGroupId));
    fieldCurrency = dialog.addField(typeid(CurrencyCode));
    fieldGroup.enabled(false);
    fieldCurrency.enabled(false);
    groupPayment = dialog.addGroup("Payment");
    fieldPaymTermId = dialog.addField(typeid(CustPaymTermId));
    fieldPaymMode = dialog.addField(typeid(CustPaymMode));
    fieldPaymTermId.enabled(false);
    fieldPaymMode.enabled(false);
    return dialog;
}

public void dialogSelectCtrl()
{
    CustTable custTable;
    ;
    custTable = CustTable::find(fieldAccount.value());
    fieldName.value(custTable.Name);
    fieldGroup.value(custTable.CustGroup);
    fieldCurrency.value(custTable.Currency);
    fieldPaymTermId.value(custTable.PaymTermId);
    fieldPaymMode.value(custTable.PaymMode);
}

static void main(Args _args)
{
    CustSelect custSelect = new CustSelect();
    ;
    if (custSelect.prompt())
    {
        custSelect.run();
    }
}

Run the class, select any customer from the list, and move the cursor to the next control. Notice how the rest of the fields were populated automatically with the customer information: When you click on the Details tab page, you will see the details as in the following screenshot:
How it works...
The new class CustSelect is a copy of CustCreate from the previous recipe with a few changes. In its declaration, we leave all the DialogField declarations. We remove all other variables apart from Customer account. The Customer account input control is the only editable field on the dialog, so we have to keep it for storing its value. The methods pack()/unpack() remain the same as we are not using any of their features. In the dialog() member method, we call allowUpdateOnSelectCtrl() with the argument true to enable input control event handling. We also disable all fields apart from Customer account by calling enabled() with the parameter false for every field. The member method dialogSelectCtrl() of the RunBase class is called every time the user modifies any input control in the dialog. It is the place where we have to add all the required code to ensure that, in our case, all controls are populated with the correct data from the customer record, once the Customer account is chosen. The static main() method ensures that we can run this class.
There's more...
Usage of dialogSelectCtrl() sometimes might appear a bit limited as this method is only invoked when the dialog control loses its focus. No other events can be controlled, and it can become messy if more controls need to be processed. Actually, this method is called from the selectControl() of the form, which is used as a base for the dialog. As mentioned earlier, dialogs created using the Dialog class are actually forms, which are dynamically created during runtime.
So in order to extend event handling functionality on dialogs, we should utilize form event handling features. The Dialog class does not provide direct access to form event handling functions, but we can easily access the form object within the dialog. Although we cannot create the usual event handling methods on runtime form controls, we can override this behavior. Let's modify the previous example to include more events. We will add an event on the second tab page, which is triggered once the page is activated. First, we have to override the dialogPostRun() method on the CustSelect class:

public void dialogPostRun(DialogRunbase dialog)
{
    ;
    dialog.formRun().controlMethodOverload(true);
    dialog.formRun().controlMethodOverloadObject(this);
    super(dialog);
}

Here, we enable event overriding on the form after it is fully created and is ready for displaying on the screen. We also pass the CustSelect object as an argument to controlMethodOverloadObject() to make sure that the form "knows" where the overridden events are located. Next, we have to create the method that overrides the tab page event:

void TabPg_2_pageActivated()
{
    ;
    info('Tab page activated');
}

The method name consists of the control name and the event name joined with an underscore. But before creating such methods, we first have to get the name of the runtime control. This is because the dialog form is created dynamically, and Dynamics AX defines control names automatically without allowing the user to choose them. In this example, I have temporarily added the following code to the bottom of dialog(), which displayed the name of the Details tab page control when the class was executed:

info(tabDetails.name());

Now, run the class again, and select the Details tab page. The message should be displayed in the Infolog.
Creating dynamic menu buttons
Normally, Dynamics AX forms are created in the AOT by adding various controls to the form's design and do not change during runtime. But besides that, Dynamics AX allows developers to add controls dynamically during form runtime. Probably, you have already noticed that the Document handling form in the standard Dynamics AX application has a nice option to create a new record by clicking the New button and selecting the desired document type from the list. This feature does not add any new functionality to the application, but it provides an alternative way of quickly creating a new record and it makes the form more user-friendly. The content of this button is actually generated dynamically during the initialization of the form and may vary depending on the document handling setup. There might be other cases when such features can be used. For example, dynamic menu buttons could be used to display a list of statuses, which depends on the type of the selected record. In this recipe, we will explore the code behind this feature. As an example, we will modify the Ledger budget button on the Chart of accounts form to display a list of available budget models relevant only for the selected ledger account. That means the list is going to be generated dynamically and may be different for different accounts.

Ubuntu User Interface Tweaks

Packt
01 Oct 2009
10 min read
I have spent time on all of the major desktop operating systems and Linux is by far the most customizable. The GNOME desktop environment, the default environment of Ubuntu and many other Linux distributions, is a very simple yet very customizable interface. I have spent a lot of time around a lot of Linux users and rarely do I find two desktops that look the same. Whether it is simple desktop background customizations or much more complex UI alterations, GNOME allows you to make your desktop your own. Just like any other environment that you're going to find yourself in for an extended period of time, you're going to want to make it your own. The GNOME desktop offers this ability in a number of ways. First of all I'll cover perhaps the more obvious methods, and then I'll move to the more complex. As mentioned in the introduction, by the end of this article you'll know how to automate (script) the customization of your desktop down to the very last detail. This is perfect for those that find themselves reinstalling their machines on a regular basis.
Appearance
GNOME offers a number of basic customizations within the System menu. To use the "Appearance Preferences" tool simply navigate to:
System > Preferences > Appearance
You'll find that the main screen allows you to change your basic theme. The theme includes the environment color scheme, icon set and window bordering. This is often one of the very first things that users will change on a new installation. Of the default theme selections I generally prefer "Clearlooks" over the default Ubuntu brown color. The next tab allows you to set your background. This is the graphic, color or gradient that you want to appear on your desktop. This is also a very common customization. More often than not, users will turn to third-party graphics for this setting. A great place to find user-generated desktop content is the http://gnome-look.org website. It is dedicated to user-generated artwork for the GNOME and Ubuntu desktop. On the third tab you'll find Fonts. I have found that fonts do play a very important role in the look of your desktop. For the longest time I didn't bother with customizing my fonts, but after being introduced to a few that I like, it is a must-have in my desktop customization list. My personal preference is to use the "Droid Sans" font, at 10pt for all settings. I think this is a very clean, crisp font design that really changes the desktop look and feel. If you'd like to try out this font set you'll have to install it. This can be done using:
sudo aptitude install ttf-droid
Another noticeable customization to your desktop on the Fonts tab is the Rendering option. For laptops you'll definitely want to select the bottom-right option, "Subpixel Smoothing (LCDs)". You should notice a change right away when you check the box. Finally, the Details button on the Fonts tab can make great improvements to the overall look. This is where you can set your font resolution. I highly suggest setting this value to 96 dots per inch (dpi). Recent versions of Ubuntu try to dynamically detect the preferred dpi. Unfortunately I haven't had the best of luck with this new feature, so I've continued to set it manually. I think you'll notice a change if your system is on something other than 96. Setting the font to "Droid Sans" 10pt and the resolution to 96 dpi is one of the biggest visual changes that I make to my system! The final tab in the Appearance tool is the Interface.
This tab allows you to customize simple things like whether or not your Applications menu should display icons. Personally, I have found that I like the default settings, but I would suggest trying a few customizations and finding out what you like. If you've followed the suggestions so far I'm sure your desktop likely looks a lot different than it did out of the box. By changing the theme, desktop background, font and dpi you may have already made drastic changes. I'd like to also share with you some of the additional changes that I make, which will help demonstrate some of the more advanced, little-known features of the GNOME desktop.
gconf-editor
A default Ubuntu system comes with a behind-the-scenes tool called the "gconf-editor". This is basically a graphical editor for your GNOME configuration settings. At first use it can be a bit confusing, but once you figure out where and how to find your preferred settings it becomes much easier. To launch the gconf-editor press the key combination ALT-F2 and type:
gconf-editor
I have heard people compare this tool to Microsoft's registry tool, but I assure you that it is far less complicated! It simply stores GNOME configuration and application settings. It even includes the changes that you made above! Anytime you make a change to the graphical interface it gets stored, and this tool is a graphical way to view those changes. Let's change something else, this time using the gconf-editor. Another of my favorite interface customizations involves the panels. By default you have two panels, one at the top and one at the bottom of your screen. I prefer to have both panels at the top of my screen, and I like them to be a bit smaller than they are out of the box. Here is how we would make that change using the gconf-editor. Navigate to Edit > Search and search for bottom_panel or top_panel. I will start with bottom_panel. You should come up with a few results, the first one being /apps/panel/toplevels/bottom_panel_screen0. You can now customize the color, size, auto-hide feature and much more of your panel. If you find orientation, double-click the entry, and change the value to "top", you'll find that your panel instantly moves to the top of the screen. You may want to alter the size entry while you're in there as well. Make a note of the Key name that you see for each item. These will come in handy a little bit later. A few other settings that you might find interesting are Nautilus desktop settings such as:
computer_icon_visible
home_icon_visible
network_icon_visible
trash_icon_visible
volumes_visible
These are simple check-box settings, activating or deactivating an option upon click. Basically these allow you to toggle the computer, home, network or trash icons on your desktop. I prefer to make sure each of these is turned off. The only one that I do like to keep on is volumes_visible. Try this out yourself and see what you prefer.
Automation
Earlier I mentioned that you'll want to make note of the Key Name for the settings that you're playing with. It is these names that allow us to automate, or script, the customization of our desktop environment. After putting a little bit of time into finding the key names for each of the customizations that I like, I am now able to completely customize every aspect of my desktop by running a simple script! Let me give you a few examples.
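Before scripting anything, it can help to confirm that a key path is correct by reading its current value back from the command line. This check is my own suggestion rather than part of the original workflow, but gconftool-2 supports it with the --get option:
gconftool-2 --get /apps/panel/toplevels/bottom_panel_screen0/orientation
If the command prints the value you saw in the gconf-editor (for example, bottom), you know the key name is right and any value you set against it will land in the correct place.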
Above we found that the key name for the bottom panel was:
/apps/panel/toplevels/bottom_panel_screen0
The key name specifically for the orientation was:
/apps/panel/toplevels/bottom_panel_screen0/orientation
The value we changed was top or bottom. We can now make this change from the command line by typing:
gconftool-2 -s --type string /apps/panel/toplevels/bottom_panel_screen0/orientation top
Let us see a few more examples; these will change the font settings for each entry that we saw in the Appearance menu:
gconftool-2 -s --type string /apps/nautilus/preferences/desktop_font "Droid Sans 10"
gconftool-2 -s --type string /apps/metacity/general/titlebar_font "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/monospace_font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/document_font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/font_name "Droid Sans 10"
You may or may not have made these changes manually, but just think about the time you could save on your next Ubuntu installation by pasting in these five commands instead! I will warn you though, once you start making a list of gconftool-2 commands it's hard to stop. Considering how simple it is to make environment changes using simple commands, why not list everything! I'd like to share the script that I use to make my preferred changes. You'll likely want to edit the values to match your preferences.
#!/bin/bash
#
# customize GNOME interface
# ([email protected])
#
gconftool-2 -s --type string /apps/nautilus/preferences/desktop_font "Droid Sans 10"
gconftool-2 -s --type string /apps/metacity/general/titlebar_font "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/monospace_font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/document_font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/icon_theme "gnome-brave"
gconftool-2 -s --type bool /apps/nautilus/preferences/always_use_browser true
gconftool-2 -s --type bool /apps/nautilus/desktop/computer_icon_visible false
gconftool-2 -s --type bool /apps/nautilus/desktop/home_icon_visible false
gconftool-2 -s --type bool /apps/nautilus/desktop/network_icon_visible false
gconftool-2 -s --type bool /apps/nautilus/desktop/trash_icon_visible false
gconftool-2 -s --type bool /apps/nautilus/desktop/volumes_visible true
gconftool-2 -s --type bool /apps/nautilus-open-terminal/desktop_opens_home_dir true
gconftool-2 -s --type bool /apps/gnome-do/preferences/Do/Platform/Linux/TrayIconPreferences/StatusIconVisible true
gconftool-2 -s --type bool /apps/gnome-do/preferences/Do/CorePreferences/QuietStart true
gconftool-2 -s --type bool /apps/gnome-terminal/profiles/Default/default_show_menubar false
gconftool-2 -s --type string /apps/gnome-terminal/profiles/Default/font "Droid Sans Mono 10"
gconftool-2 -s --type string /apps/gnome-terminal/profiles/Default/scrollbar_position "hidden"
gconftool-2 -s --type string /apps/gnome/interface/gtk_theme "Shiki-Brave"
gconftool-2 -s --type string /apps/gnome/interface/icon_theme "gnome-brave"
gconftool-2 -s --type int /apps/panel/toplevels/bottom_panel_screen0/size 23
gconftool-2 -s --type int /apps/panel/toplevels/top_panel_screen0/size 23
Summary
By saving the above script into a file called "gnome-setup" and running it after a fresh installation I'm able to update my theme, fonts, visible and non-visible icons, gnome-do preferences, gnome-terminal preferences and much more within seconds. My desktop actually feels like my desktop again! I find that maintaining a simple file like this greatly eases the customization of my desktop environment and lets me focus on getting things done. I no longer spend an hour tweaking each little setting to make my machine my home again. I install, run my script, and get to work!
If you have read this article you may be interested to view:
Compiling and Running Handbrake in Ubuntu
Control of File Types in Ubuntu
Ubuntu 9.10: How To Upgrade
Install GNOME-Shell on Ubuntu 9.10 "Karmic Koala"
Five Years of Ubuntu
What's New In Ubuntu 9.10 "Karmic Koala"
Create a Local Ubuntu Repository using Apt-Mirror and Apt-Cacher

Elegant RESTful Client in Python for Exposing Remote Resources

Xavier Bruhiere
12 Aug 2015
6 min read
Product Hunt addicts like me might have noticed how often a "developer" tab is available on landing pages. More and more modern products offer a special entry point tailored for coders who want deeper interaction, beyond the standard end-user experience. Twitter, Myo, and Estimote are great examples of technologies an engineer could leverage for their own tool/product. And Application Programming Interfaces (APIs) make it possible. Companies design them as a communication contract between the developer and their product. We can discern Representational State Transfer (RESTful) APIs from programmatic ones. The latter usually offer deeper technical integration, while the former try to abstract most of the product's complexity behind intuitive remote resources (more on that later). The resulting simplicity owes a lot to the HTTP protocol and turns out to be trickier than one thinks. Both RESTful servers and clients often underestimate the value of HTTP's historical rules or the challenges behind network failures. In this article I will dump my latest experience in building an HTTP+JSON API client. We are going to build a small framework in Python to interact with well-designed third-party services. One should get out of it a consistent starting point for new projects, like remotely controlling one's car!
Stack and Context
Before diving in, let's state an important assumption: the APIs our client will call are well designed. They enforce RFC standards, conventions and consistent resources. Sometimes, however, the real world throws ugly interfaces at us. Always read the documentation (if any) and deal with it. The choice of Python should be seen as a minor implementation consideration. Nevertheless, it will bring us the powerful requests package and a nice REPL to manually explore remote services. Its popularity also suggests we are likely to be able to integrate the resulting package into a future project. To keep things practical, requests will hit Consul HTTP endpoints, providing us with a handy interface for our infrastructure. Consul, as a whole, is a tool for discovering and configuring services in your infrastructure. Just download the appropriate binary, move it into your $PATH, and start a new server:
consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -node consul-server
We also need Python 3.4 or 2.7, pip installed and, then, to download the single dependency we mentioned earlier with pip install requests==2.7.0. Now let's have a conversation with an API!
Sending requests
APIs expose resources for manipulation through HTTP verbs. Say we need to retrieve the nodes in our cluster; the Consul documentation requires us to perform a GET /v1/catalog/nodes.

import requests

def http_get(resource, payload=None):
    """ Perform an HTTP GET request against the given endpoint. """
    # Avoid dangerous default function argument `{}`
    payload = payload or {}
    # versioning an API guarantees compatibility
    endpoint = '{}/{}/{}'.format('http://localhost:8500', 'v1', resource)
    return requests.get(
        endpoint,
        # attach parameters to the url, like `&foo=bar`
        params=payload,
        # tell the API we expect to parse JSON responses
        headers={'Accept': 'application/vnd.consul+json; version=1'})

Providing Consul is running on the same host, we get the following result.
In [4]: res = http_get('catalog/nodes')
In [5]: res.json()
Out[5]: [{'Address': '172.17.0.1', 'Node': 'consul-server'}]
Awesome: a few lines of code gave us really convenient access to Consul information. Let's leverage OOP to abstract the nodes resource further.
Mapping resources
The idea is to consider a Catalog class whose attributes are Consul API resources. A little bit of Python magic offers an elegant way to achieve that.

class Catalog(object):

    # url specific path
    _path = 'catalog'

    def __getattr__(self, name):
        """ Extend built-in method to add support for attributes related to endpoints.
        Example: agent.members runs GET /v1/agent/members
        """
        # Default behavior
        if name in self.__dict__:
            return self.__dict__[name]
        # Dynamic attribute based on the property name
        else:
            return http_get('/'.join([self._path, name]))

It might seem a little cryptic if you are not familiar with Python's built-in object methods, but the usage is crystal clear:
In [47]: catalog_ = Catalog()
In [48]: catalog_.nodes.json()
Out[48]: [{'Address': '172.17.0.1', 'Node': 'consul-server'}]
The really nice benefit of this approach is that we become very productive in supporting new resources. Just rename the previous class ClientFactory and profit.

class Status(ClientFactory):
    _path = 'status'

In [58]: status_ = Status()
In [59]: status_.peers.json()
Out[59]: ['172.17.0.1:8300']
But... what if the resource we call does not exist? And, although we provide a header with Accept: application/json, what if we actually don't get back a JSON object or reach our rate limit?
Reading responses
Let's challenge our current implementation against those questions.
In [61]: status_.not_there
Out[61]: <Response [404]>
In [68]: # ok, that's a consistent response
In [69]: # 404 HTTP code means the resource wasn't found on server-side
In [69]: status_.not_there.json()
---------------------------------------------------------------------------
StopIteration Traceback (most recent call last)
...
ValueError: Expecting value: line 1 column 1 (char 0)
Well, that's not safe at all. We're going to wrap our HTTP calls with a decorator in charge of inspecting the API response.

def safe_request(fct):
    """ Return Go-like data (i.e. actual response and possible error) instead of raising errors. """
    def inner(*args, **kwargs):
        data, error = {}, None
        try:
            res = fct(*args, **kwargs)
        except requests.exceptions.ConnectionError as error:
            return None, {'message': str(error), 'id': -1}

        if res.status_code == 200 and res.headers['content-type'] == 'application/json':
            # expected behavior
            data = res.json()
        elif res.status_code == 206 and res.headers['content-type'] == 'application/json':
            # partial response, return as-is
            data = res.json()
        else:
            # something went wrong
            error = {'id': res.status_code, 'message': res.reason}

        return res, error
    return inner

# update our old code
@safe_request
def http_get(resource):
    # ...

This implementation still requires us to check for errors instead of using the data right away. But we are dealing with the network, and unexpected failures will happen. Being aware of them without crashing, and without wrapping every resource with try/except, is a working compromise.
In [71]: res, err = status_.not_there
In [72]: print(err)
{'id': 404, 'message': 'Not Found'}
Conclusion
We just covered an opinionated Python abstraction for programmatically exposing remote resources. Subclassing the objects above allows one to quickly interact with new services, through command line tools or an interactive prompt. Yet, we only worked with the GET method. Most APIs allow resource deletion (DELETE), update (PUT) or creation (POST), to name a few HTTP verbs.
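As a quick sketch of how the same pattern could extend beyond GET (this http_put helper is my own illustration, not part of the original design), a write operation only needs a different requests call behind the same decorator:

@safe_request
def http_put(resource, payload=None):
    """ Perform an HTTP PUT request against the given endpoint. """
    endpoint = '{}/{}/{}'.format('http://localhost:8500', 'v1', resource)
    # send the payload as a JSON body; adapt this to what the target endpoint expects
    return requests.put(
        endpoint,
        json=payload or {},
        headers={'Accept': 'application/vnd.consul+json; version=1'})

The same shape works for requests.post and requests.delete, and the decorator keeps the Go-like (response, error) contract for every verb.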
Other future work could involve:
authentication
a smarter HTTP code handler when dealing with forbidden, rate-limiting, or internal server error responses
Given the incredible services that have emerged lately (IBM Watson, Docker, ...), building API clients is an increasingly productive option for developing innovative projects.
About the Author
Xavier Bruhiere is a Lead Developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high-intensity sports.