
How-To Tutorials


How to ace managing the Endpoint Operations Management Agent with vROps

Vijin Boricha
30 May 2018
10 min read
In this tutorial, you will learn the functionality of the Endpoint Operations Management Agent and how to manage it effectively. You can install the Endpoint Operations Management Agent from a tar.gz or .zip archive, or from an operating system-specific installer for Windows or for Linux-like systems that support RPM. The agent can come bundled with or without a JRE. This is an excerpt from Mastering vRealize Operations Manager - Second Edition, written by Spas Kaloferov, Scott Norris, and Christopher Slater.

Installing the Agent

Before you start, make sure you have vRealize Operations user credentials with enough permissions to deploy the agent. Let's see how we can install the agent on both Linux and Windows machines.

Manually installing the Agent on a Linux Endpoint

Perform the following steps to install the Agent on a Linux machine:

1. Download the appropriate distributable package from https://my.vmware.com and copy it over to the machine. The Linux .rpm file should be installed with the root credentials.
2. Log in to the machine and run the following command to install the agent (in this example, we are installing on CentOS):

   [root]# rpm -Uvh <DistributableFileName>

3. Start the Endpoint Operations Management Agent service. Only when the service starts does it generate a token and ask for the server's details:

   service epops-agent start

The previous command will prompt you to enter the following information about the vRealize Operations instance:

- vRealize Operations server hostname or IP: If the server is on the same machine as the agent, you can enter localhost. If a firewall is blocking traffic from the agent to the server, allow the traffic to that address through the firewall.
- Default port: Specify the SSL port on the vRealize Operations server to which the agent must connect. The default port is 443.
- Certificate to trust: You can configure Endpoint Operations Management agents to trust the root or intermediate certificate, to avoid having to reconfigure all agents if the certificate on the analytics nodes and remote collectors is modified.
- vRealize Operations credentials: Enter the name of a vRealize Operations user with agent manager permissions.

The agent token is generated by the Endpoint Operations Management Agent. It is only created the first time the endpoint is registered to the server. The name of the token is random and unique, and is based on the name and IP of the machine. If the agent is reinstalled on the endpoint, the same token is used, unless it is deleted manually.

Provide the necessary information when prompted, then go into the vRealize Operations UI, navigate to Administration | Configuration | Endpoint Operations, and select the Agents tab. After it is deployed, the Endpoint Operations Management Agent starts sending monitoring data to vRealize Operations, where the data is collected by the Endpoint Operations Management adapter. You can also verify the collection status of the Endpoint Operations Management Agent for your virtual machine on the Inventory Explorer page. If you encounter issues during the installation and configuration, you can check the /opt/vmware/epops-agent/log/agent.log log file for more information. We will examine this log file in more detail later in this book.

Manually installing the Agent on a Windows Endpoint

On a Windows machine, the Endpoint Operations Management Agent is installed as a service.
The following steps outline how to install the Agent via the .zip archive bundle, but you can also install it via an executable file. Perform the following steps to install the Agent on a Windows machine:

1. Download the agent .zip file. Make sure the file version matches your Microsoft Windows OS, and extract the archive.
2. Open Command Prompt and navigate to the bin folder within the extracted folder.
3. Run the following command to install the agent:

   ep-agent.bat install

4. After the install is complete, start the agent by executing:

   ep-agent.bat start

If this is the first time you are installing the agent, a setup process will be initiated. If you have specified configuration values in the agent properties file, those will be used. Upon starting the service, it will prompt you for the same values as it did when we installed it on a Linux machine. Enter the following information about the vRealize Operations instance:

- vRealize Operations server hostname or IP
- Default port
- Certificate to trust
- vRealize Operations credentials

If the agent is reinstalled on the endpoint, the same token is used, unless it is deleted manually. After it is deployed, the Endpoint Operations Management Agent starts sending monitoring data to vRealize Operations, where the data is collected by the Endpoint Operations Management adapter. You can verify the collection status of the Endpoint Operations Management Agent for your virtual machine on the Inventory Explorer page: because the agent is collecting data, the collection status of the agent is green. Alternatively, you can also verify the collection status by navigating to Administration | Configuration | Endpoint Operations and selecting the Agents tab.

Automated agent installation using vRealize Automation

In a typical environment, the Endpoint Operations Management Agent will not be installed on every virtual machine or physical server; usually, only those servers holding critical applications or services will be monitored using the agent. For a small number of servers, manually installing and updating the Endpoint Operations Management Agent can be done with reasonable administrative effort. If we have an environment where we need to install the agent on many VMs, and we are using a provisioning and automation engine with application-provisioning capabilities, such as Microsoft System Center Configuration Manager or VMware vRealize Automation, it is not recommended to install the agent in the VM template that will be cloned. If for whatever reason you have to do so, do not start the Endpoint Operations Management Agent, or remove the Endpoint Operations Management token and data before the cloning takes place. If the Agent is started before the cloning, a client token is created and all clones will appear as the same object in vRealize Operations.

In the following example, we will show how to install the Agent via vRealize Automation as part of the provisioning process. I will not go into too much detail on how to create blueprints in vRealize Automation, but I will give you the basic steps to create a software component that will install the agent and add it to a blueprint. Perform the following steps to create a software component that will install the Endpoint Operations Management Agent and add it to a vRealize Automation blueprint:

1. Open the vRealize Automation portal page, navigate to Design and the Software Components tab, and click New to create a new software component.
2. On the General tab, fill in the information.
3. Click Properties and click New. Add the following six properties, all of type String, and for each of them set Encrypted: No, Overridable: Yes, Required: No, Computed: No:

   - RPMNAME: The name of the Agent RPM package
   - SERVERIP: The IP of the vRealize Operations server/cluster
   - SERVERLOGIN: A username with enough permissions in vRealize Operations
   - SERVERPWORD: The password for that user
   - SRVCRTTHUMB: The thumbprint of the vRealize Operations certificate
   - SSLPORT: The port on which to communicate with vRealize Operations

   These values will be used to configure the Agent once it is installed.

4. Click Actions and configure the Install, Configure, and Start actions as follows. All three use the Bash script type, and Reboot remains unchecked.

   Install:

   rpm -i http://<FQDNOfTheServerWhereTheRPMFileIsLocated>/$RPMNAME

   Configure:

   sed -i "s/#agent.setup.serverIP=localhost/agent.setup.serverIP=$SERVERIP/" /opt/vmware/epops-agent/conf/agent.properties
   sed -i "s/#agent.setup.serverSSLPort=443/agent.setup.serverSSLPort=$SSLPORT/" /opt/vmware/epops-agent/conf/agent.properties
   sed -i "s/#agent.setup.serverLogin=username/agent.setup.serverLogin=$SERVERLOGIN/" /opt/vmware/epops-agent/conf/agent.properties
   sed -i "s/#agent.setup.serverPword=password/agent.setup.serverPword=$SERVERPWORD/" /opt/vmware/epops-agent/conf/agent.properties
   sed -i "s/#agent.setup.serverCertificateThumbprint=/agent.setup.serverCertificateThumbprint=$SRVCRTTHUMB/" /opt/vmware/epops-agent/conf/agent.properties

   Start:

   /sbin/service epops-agent start
   /sbin/chkconfig epops-agent on

5. Click Ready to Complete and click Finish.
6. Edit the blueprint to which you want the agent to be added. From Categories, select Software Components, then select the Endpoint Operations Management software component and drag and drop it onto the machine type (virtual machine) where you want the agent to be installed.
7. Click Save to save the blueprint.

This completes the necessary steps to automate the Endpoint Operations Management Agent installation as a software component in vRealize Automation. With every deployment of the blueprint, after cloning, the Agent will be installed and uniquely identified to vRealize Operations.

Reinstalling the agent

When reinstalling the agent, various elements are affected, such as:

- Already-collected metrics
- Identification tokens that enable a reinstalled agent to report on the previously discovered objects on the server

The following locations contain agent-related data which remains when the agent is uninstalled:

- The data folder, containing the keystore
- The epops-token platform token file, which is created before agent registration

Data continuity will not be affected when the data folder is deleted. This is not the case with the epops-token file: if you delete the token file, the agent will not be synchronized with the previously discovered objects upon a new installation.
Reinstalling the agent on a Linux Endpoint

Perform the following steps to reinstall the agent on a Linux machine:

1. If you are completely removing the agent and are not going to reinstall it, delete the data directory by running:

   $ rm -rf /opt/epops-agent/data

2. If you are completely removing the agent and are not going to reinstall or upgrade it, delete the epops-token file:

   $ rm /etc/vmware/epops-token

3. Run the following command to uninstall the Agent:

   $ yum remove <AgentFullName>

4. If you are completely removing the agent and are not going to reinstall it, make sure to delete the agent object from the Inventory Explorer in vRealize Operations.

You can now install the agent again, in the same way as we did earlier in this book.

Reinstalling the Agent on a Windows Endpoint

Perform the following steps to reinstall the Agent on a Windows machine:

1. From the CLI, change the directory to the agent bin directory:

   cd C:\epops-agent\bin

2. Stop the agent by running:

   C:\epops-agent\bin> ep-agent.bat stop

3. Remove the agent service by running:

   C:\epops-agent\bin> ep-agent.bat remove

4. If you are completely removing the agent and are not going to reinstall it, delete the data directory by running:

   C:\epops-agent> rd /s data

5. If you are completely removing the agent and are not going to reinstall it, delete the epops-token file using the CLI or Windows Explorer:

   C:\epops-agent> del "C:\ProgramData\VMware\EP Ops agent\epops-token"

6. Uninstall the agent from the Control Panel.
7. If you are completely removing the agent and are not going to reinstall it, make sure to delete the agent object from the Inventory Explorer in vRealize Operations.

You can now install the agent again, in the same way as we did earlier in this book.

You've become familiar with the Endpoint Operations Management Agent and what it can do. We also discussed how to install and reinstall it on both Linux and Windows operating systems. To learn more about vSphere and vRealize Automation workload placement, check out the book Mastering vRealize Operations Manager - Second Edition.

Read next:

- VMware vSphere storage, datastores, snapshots
- What to expect from vSphere 6.7
- KVM Networking with libvirt


How to assemble a DIY selfie drone with Arduino and ESP8266

Vijin Boricha
29 May 2018
10 min read
Have you ever thought of something that can take a photo from the air, or perhaps take a selfie from up there? How about we build a drone for taking selfies and recording videos from the air? Taking photos from the sky is one of the most exciting things in photography this year. You can shoot from helicopters, planes, or even from satellites, but unless you or someone you know owns a personal air vehicle, this is a costly affair that is sure to burn through your pockets. Drones can come in handy here. Have you ever googled drone photography? If you have, I am sure you'll want to build or buy a drone for photography, because of the amazing views of common subjects taken from the sky. Today, we will learn to build a drone for aerial photography and videography. This tutorial is an excerpt from Building Smart Drones with ESP8266 and Arduino, written by Syed Omar Faruk Towaha.

We assume you know how to build your customized frame; if not, you can refer to our book, or you may buy a HobbyKing X930 glass fiber frame and connect the parts together as directed in the manual. However, I have a few suggestions to help you carry out a better assembly of the frame: firstly, connect the motor mounts to the legs, wings, or arms of the frame. Tighten them firmly, as they will carry and hold the most important equipment of the drone. Then, connect them to the base and, later, the other parts, with firm connections.

Now, we will calibrate our ESCs:

1. Take the signal cable from an ESC (the motor is plugged into the ESC; careful, don't connect the propeller) and connect it to the throttle pins on the radio. Make sure the transmitter is turned on and the throttle is in the lowest position.
2. Plug the battery into the ESC and you will hear a beep. Now, gradually increase the throttle from the transmitter. Your motor will start spinning at some arbitrary throttle position. This is because the ESC is not calibrated, so you need to tell the ESC where the high point and the low point of the throttle are.
3. Disconnect the battery. Increase the throttle of the transmitter to the highest position and power the ESC. Your ESC will now beep once and then beep three times every four seconds.
4. Move the throttle to the bottommost position and you will hear the ESC beep, indicating it is ready and calibrated. Now, when you increase the throttle on the transmitter, you will see that it works across the whole range, from low to high.
5. Mount the motors, connect them to the ESCs, and then connect the ESCs to the ArduPilot, changing the pins one by one.
6. Connect your GPS to the ArduPilot and calibrate it.

Now our drone is ready to fly. I would suggest you fly the drone for about 10-15 minutes before connecting the camera.

Connecting the camera

For a photography drone, connecting the camera and controlling it are among the most important things. Your pictures and videos will be spoiled if you cannot adjust the camera and stabilize it properly. In our case, we will use a camera gimbal to hold the camera and move it from the ground.

Choosing a gimbal

The camera gimbal holds the camera for you and can change the camera's direction according to your commands. There are a number of camera gimbals out there. You can choose any type, depending on your needs and your camera's size and specification. If you want to use a DSLR camera, you should use a bigger gimbal, and if you use a point-and-shoot or action camera, you may use a small- or medium-sized gimbal. There are two types of gimbal: a brushless gimbal and a standard gimbal.
The standard gimbal has servo motors and gears. If you use an FPV camera, then a standard gimbal with a 2-axis manual mount is the best option. The standard gimbal is lightweight and not expensive, and the best thing is that you will not need an external controller board for it. The brushless gimbal is for professional aerial photographers. It is smooth and can shoot videos or photos with better quality, but it requires an external controller board and is heavier than the standard gimbal. Choosing the best gimbal is one of the hardest decisions for a photographer, as stabilization of the image is a must for photo shoots. If you cannot control the camera from the ground, then using a gimbal is worthless.

After choosing your camera and gimbal, the first thing to do is mount the gimbal and the camera on the drone. Make sure the mount is firm, but not too hard, because that will make the camera shake while flying the drone. You may use the Styrofoam or rubber pieces that came with the gimbal to reduce the vibration and make the image stable.

Configuring the camera with the ArduPilot

Configuring the camera with the ArduPilot is easy. Before going any further, let us learn a few things about the camera gimbal's Euler angles:

- Tilt: This moves the camera's sloping position (range -90 degrees to +90 degrees); it is the clockwise-anticlockwise motion with respect to the vertical axis.
- Roll: This is a motion ranging from 0 degrees to 360 degrees, parallel to the horizontal axis.
- Pan: This is the same type of motion as roll, ranging from 0 degrees to 360 degrees, but about the vertical axis.
- Shutter: This is a switch that triggers a click or sends a signal.

Firstly, we are going to use the standard gimbal. Basically, there are two servos in a standard gimbal: one is for pitch or tilt, and the other is for roll. So, a standard gimbal gives you two-dimensional motion of the camera viewpoint.

Connection

Follow these steps to connect the camera to the ArduPilot:

1. Take the pitch servo's signal pin and connect it to the 11th pin of the ArduPilot (A11), and the roll signal to the 10th pin (A10). Make sure you connect only the signal (S pin) cable of each servo to the pin, not the other two pins (ground and VCC). The signal cables must be connected to the innermost pins of A11 and A10 (two pins make a row). My suggestion is to add an extra battery for your gimbal's servos; if you connect the servos directly to the ArduPilot, it will not perform well, as the servos will draw power from it.
2. Connect your ArduPilot to your PC using a wire or telemetry. Go to the Initial Setup menu and, under Optional Hardware, you will find an option called Camera Gimbal. Click on it.
3. For the Tilt, change the pin to RC11; for the Roll, change the pin to RC10; and for Shutter, change it to CH7. If you want to change the Tilt during the flight from the transmitter, you need to change the Input Ch of the Tilt.
4. Now, you need to change an option on the Configuration | Extended Tuning page. Set Ch6 Opt to None and hit the Write Params button.

We also need to align the minimum and maximum PWM values for the servos of the gimbal.
To do that, tilt the frame of the gimbal to the leftmost position and, from the transmitter, move the knob to the minimum position and start increasing it; the moment your servo starts to move, stop moving the knob. For the maximum calibration, move the Tilt to the rightmost position and do the same thing with the knob at the maximum position. Do the same for the pitch, with the forward and backward motion. We also need to level the gimbal for better performance. To do that, keep the gimbal frame level with the ground and set the Camera Gimbal option, the Servo Limits, and the Angle Limits, changing them as per the level of the frame.

Controlling the camera

Controlling the camera to take selfies or record video is easy. You can use the shutter pin we configured before, or the camera's mobile app. My suggestion is to use the camera's app to take shots, because you will get a live preview of what you are shooting and it will be easy to control the shots. However, if you want to trigger the Shutter manually from the transmitter, you can do that too. We have connected the RC7 pin for controlling a servo; you can use a servo or a receiver-controlled switch for your camera to manually trigger the shutter. To do that, you can buy a receiver-controlled on/off switch. You can use this switch for various purposes, and clicking the shutter of your camera is one of them.

Manually triggering the camera is easy. It is usually done with point-and-shoot cameras, and you may need to update the firmware of your camera. This can be done in many ways, but the easiest one is discussed here. Your receiver-controlled on/off switch will typically have five wires: the three wires together are, as usual, the pins of a servo connector. Take the signal cable (in this case, the yellow cable) and connect it to the RC7 pin of the ArduPilot. Then, connect the positive to one of the thick red wires. Take the camera's data cable and connect the other thick wire to the positive of the USB cable, and connect the negative wire to the negative of the three connected wires. Then, an output of the positive and negative wires will go to the battery (an external battery is suggested for the camera). To upgrade the camera firmware, you need to go to the camera's website and upgrade the firmware for the remote shutter option. In my case, the website is http://chdk.wikia.com/wiki/CHDK; I downloaded the firmware for a Canon point-and-shoot camera. You can also use action cameras for your drones. They are cheap and can be controlled remotely via mobile applications.

Flying and taking shots

Flying the photography drone is not that difficult. My suggestion is to lock the altitude and fly parallel to the ground. If you use a camera remote controller or an app, then it is really easy to take a photo or record a video. However, if you use the switch, as we discussed, then you need to connect your drone to Mission Planner via telemetry. Go to the flight data, right-click on the map, and then click the Trigger Camera Now option. It will trigger the camera's Shutter button and start recording or take a photo. You can do this while your drone is in a locked position and, using the timer, take a shot from above, which can be a selfie too. Let's try it. Let me know what happens and whether you like it or not.
Next, learn to build other drones, like a mission control drone or a gliding drone, from the book Building Smart Drones with ESP8266 and Arduino.

Read next:

- Drones: Everything you ever wanted to know!
- How to build an Arduino based 'follow me' drone
- Tips and tricks for troubleshooting and flying drones safely


How to build a deep convolutional GAN using TensorFlow and Keras

Savia Lobo
29 May 2018
13 min read
In this tutorial, we will learn to build both simple and deep convolutional GAN models with the help of TensorFlow and Keras deep learning frameworks. [box type="note" align="" class="" width=""]This article is an excerpt taken from the book Mastering TensorFlow 1.x written by Armando Fandango.[/box] Simple GAN with TensorFlow For building the GAN with TensorFlow, we build three networks, two discriminator models, and one generator model with the following steps: Start by adding the hyper-parameters for defining the network: # graph hyperparameters g_learning_rate = 0.00001 d_learning_rate = 0.01 n_x = 784 # number of pixels in the MNIST image # number of hidden layers for generator and discriminator g_n_layers = 3 d_n_layers = 1 # neurons in each hidden layer g_n_neurons = [256, 512, 1024] d_n_neurons = [256] # define parameter ditionary d_params = {} g_params = {} activation = tf.nn.leaky_relu w_initializer = tf.glorot_uniform_initializer b_initializer = tf.zeros_initializer Next, define the generator network: z_p = tf.placeholder(dtype=tf.float32, name='z_p', shape=[None, n_z]) layer = z_p # add generator network weights, biases and layers with tf.variable_scope('g'): for i in range(0, g_n_layers): w_name = 'w_{0:04d}'.format(i) g_params[w_name] = tf.get_variable( name=w_name, shape=[n_z if i == 0 else g_n_neurons[i - 1], g_n_neurons[i]], initializer=w_initializer()) b_name = 'b_{0:04d}'.format(i) g_params[b_name] = tf.get_variable( name=b_name, shape=[g_n_neurons[i]], initializer=b_initializer()) layer = activation( tf.matmul(layer, g_params[w_name]) + g_params[b_name]) # output (logit) layer i = g_n_layers w_name = 'w_{0:04d}'.format(i) g_params[w_name] = tf.get_variable( name=w_name, shape=[g_n_neurons[i - 1], n_x], initializer=w_initializer()) b_name = 'b_{0:04d}'.format(i) g_params[b_name] = tf.get_variable( name=b_name, shape=[n_x], initializer=b_initializer()) g_logit = tf.matmul(layer, g_params[w_name]) + g_params[b_name] g_model = tf.nn.tanh(g_logit) Next, define the weights and biases for the two discriminator networks that we shall build: with tf.variable_scope('d'): for i in range(0, d_n_layers): w_name = 'w_{0:04d}'.format(i) d_params[w_name] = tf.get_variable( name=w_name, shape=[n_x if i == 0 else d_n_neurons[i - 1], d_n_neurons[i]], initializer=w_initializer()) b_name = 'b_{0:04d}'.format(i) d_params[b_name] = tf.get_variable( name=b_name, shape=[d_n_neurons[i]], initializer=b_initializer()) #output (logit) layer i = d_n_layers w_name = 'w_{0:04d}'.format(i) d_params[w_name] = tf.get_variable( name=w_name, shape=[d_n_neurons[i - 1], 1], initializer=w_initializer()) b_name = 'b_{0:04d}'.format(i) d_params[b_name] = tf.get_variable( name=b_name, shape=[1], initializer=b_initializer()) Now using these parameters, build the discriminator that takes the real images as input and outputs the classification: # define discriminator_real # input real images x_p = tf.placeholder(dtype=tf.float32, name='x_p', shape=[None, n_x]) layer = x_p with tf.variable_scope('d'): for i in range(0, d_n_layers): w_name = 'w_{0:04d}'.format(i) b_name = 'b_{0:04d}'.format(i) layer = activation( tf.matmul(layer, d_params[w_name]) + d_params[b_name]) layer = tf.nn.dropout(layer,0.7) #output (logit) layer i = d_n_layers w_name = 'w_{0:04d}'.format(i) b_name = 'b_{0:04d}'.format(i) d_logit_real = tf.matmul(layer, d_params[w_name]) + d_params[b_name] d_model_real = tf.nn.sigmoid(d_logit_real)  Next, build another discriminator network, with the same parameters, but providing the output of generator as 
input: # define discriminator_fake # input generated fake images z = g_model layer = z with tf.variable_scope('d'): for i in range(0, d_n_layers): w_name = 'w_{0:04d}'.format(i) b_name = 'b_{0:04d}'.format(i) layer = activation( tf.matmul(layer, d_params[w_name]) + d_params[b_name]) layer = tf.nn.dropout(layer,0.7) #output (logit) layer i = d_n_layers w_name = 'w_{0:04d}'.format(i) b_name = 'b_{0:04d}'.format(i) d_logit_fake = tf.matmul(layer, d_params[w_name]) + d_params[b_name] d_model_fake = tf.nn.sigmoid(d_logit_fake) Now that we have the three networks built, the connection between them is made using the loss, optimizer and training functions. While training the generator, we only train the generator's parameters and while training the discriminator, we only train the discriminator's parameters. We specify this using the var_list parameter to the optimizer's minimize() function. Here is the complete code for defining the loss, optimizer and training function for both kinds of network: g_loss = -tf.reduce_mean(tf.log(d_model_fake)) d_loss = -tf.reduce_mean(tf.log(d_model_real) + tf.log(1 - d_model_fake)) g_optimizer = tf.train.AdamOptimizer(g_learning_rate) d_optimizer = tf.train.GradientDescentOptimizer(d_learning_rate) g_train_op = g_optimizer.minimize(g_loss, var_list=list(g_params.values())) d_train_op = d_optimizer.minimize(d_loss, var_list=list(d_params.values()))  Now that we have defined the models, we have to train the models. The training is done as per the following algorithm: For each epoch: For each batch: get real images x_batch generate noise z_batch train discriminator using z_batch and x_batch generate noise z_batch train generator using z_batch The complete code for training from the notebook is as follows: n_epochs = 400 batch_size = 100 n_batches = int(mnist.train.num_examples / batch_size) n_epochs_print = 50 with tf.Session() as tfs: tfs.run(tf.global_variables_initializer()) for epoch in range(n_epochs): epoch_d_loss = 0.0 epoch_g_loss = 0.0 for batch in range(n_batches): x_batch, _ = mnist.train.next_batch(batch_size) x_batch = norm(x_batch) z_batch = np.random.uniform(-1.0,1.0,size=[batch_size,n_z]) feed_dict = {x_p: x_batch,z_p: z_batch} _,batch_d_loss = tfs.run([d_train_op,d_loss], feed_dict=feed_dict) z_batch = np.random.uniform(-1.0,1.0,size=[batch_size,n_z]) feed_dict={z_p: z_batch} _,batch_g_loss = tfs.run([g_train_op,g_loss], feed_dict=feed_dict) epoch_d_loss += batch_d_loss epoch_g_loss += batch_g_loss if epoch%n_epochs_print == 0: average_d_loss = epoch_d_loss / n_batches average_g_loss = epoch_g_loss / n_batches print('epoch: {0:04d} d_loss = {1:0.6f} g_loss = {2:0.6f}' .format(epoch,average_d_loss,average_g_loss)) # predict images using generator model trained x_pred = tfs.run(g_model,feed_dict={z_p:z_test}) display_images(x_pred.reshape(-1,pixel_size,pixel_size)) We printed the generated images every 50 epochs: As we can see the generator was producing just noise in epoch 0, but by epoch 350, it got trained to produce much better shapes of handwritten digits. You can try experimenting with epochs, regularization, network architecture and other hyper-parameters to see if you can produce even faster and better results. 
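Note that the training loop above relies on a few helpers and variables that are defined elsewhere in the book's notebook and are not part of this excerpt: the MNIST input pipeline, n_z, pixel_size, z_test, norm(), and display_images(). A minimal sketch of what they might look like is shown below; the exact definitions in the notebook may differ, so treat the helper bodies as assumptions rather than the author's code.

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data

# Assumed setup shared by the GAN examples (not shown in the excerpt).
mnist = input_data.read_data_sets('./mnist', one_hot=True)
pixel_size = 28       # MNIST images are 28 x 28 pixels
n_z = 256             # noise vector size, matching the (None, 256) z_in layer shown later
z_test = np.random.uniform(-1.0, 1.0, size=[8, n_z])   # fixed noise for preview images

def norm(x):
    # Rescale pixels from [0, 1] to [-1, 1] so real images match the
    # generator's tanh output range (an assumed, common choice).
    return x * 2.0 - 1.0

def display_images(images):
    # Show a row of generated digits for visual inspection.
    fig, axes = plt.subplots(1, len(images), figsize=(len(images), 1))
    for ax, img in zip(axes, images):
        ax.imshow(img, cmap='gray')
        ax.axis('off')
    plt.show()
```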
Simple GAN with Keras Now let us implement the same model in Keras:  The hyper-parameter definitions remain the same as the last section: # graph hyperparameters g_learning_rate = 0.00001 d_learning_rate = 0.01 n_x = 784 # number of pixels in the MNIST image # number of hidden layers for generator and discriminator g_n_layers = 3 d_n_layers = 1 # neurons in each hidden layer g_n_neurons = [256, 512, 1024] d_n_neurons = [256]  Next, define the generator network: # define generator g_model = Sequential() g_model.add(Dense(units=g_n_neurons[0], input_shape=(n_z,), name='g_0')) g_model.add(LeakyReLU()) for i in range(1,g_n_layers): g_model.add(Dense(units=g_n_neurons[i], name='g_{}'.format(i) )) g_model.add(LeakyReLU()) g_model.add(Dense(units=n_x, activation='tanh',name='g_out')) print('Generator:') g_model.summary() g_model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adam(lr=g_learning_rate) ) This is what the generator model looks like: In the Keras example, we do not define two discriminator networks as we defined in the TensorFlow example. Instead, we define one discriminator network and then stitch the generator and discriminator network into the GAN network. The GAN network is then used to train the generator parameters only, and the discriminator network is used to train the discriminator parameters: # define discriminator d_model = Sequential() d_model.add(Dense(units=d_n_neurons[0], input_shape=(n_x,), name='d_0' )) d_model.add(LeakyReLU()) d_model.add(Dropout(0.3)) for i in range(1,d_n_layers): d_model.add(Dense(units=d_n_neurons[i], name='d_{}'.format(i) )) d_model.add(LeakyReLU()) d_model.add(Dropout(0.3)) d_model.add(Dense(units=1, activation='sigmoid',name='d_out')) print('Discriminator:') d_model.summary() d_model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.SGD(lr=d_learning_rate) ) This is what the discriminator models look: Discriminator: _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= d_0 (Dense) (None, 256) 200960 _________________________________________________________________ leaky_re_lu_4 (LeakyReLU) (None, 256) 0 _________________________________________________________________ dropout_1 (Dropout) (None, 256) 0 _________________________________________________________________ d_out (Dense) (None, 1) 257 ================================================================= Total params: 201,217 Trainable params: 201,217 Non-trainable params: 0 _________________________________________________________________ Next, define the GAN Network, and turn the trainable property of the discriminator model to false, since GAN would only be used to train the generator: # define GAN network d_model.trainable=False z_in = Input(shape=(n_z,),name='z_in') x_in = g_model(z_in) gan_out = d_model(x_in) gan_model = Model(inputs=z_in,outputs=gan_out,name='gan') print('GAN:') gan_model.summary() gan_model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adam(lr=g_learning_rate) ) This is what the GAN model looks: GAN: _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= z_in (InputLayer) (None, 256) 0 _________________________________________________________________ sequential_1 (Sequential) (None, 784) 1526288 _________________________________________________________________ sequential_2 (Sequential) (None, 1) 201217 
================================================================= Total params: 1,727,505 Trainable params: 1,526,288 Non-trainable params: 201,217 _________________________________________________________________  Great, now that we have defined the three models, we have to train the models. The training is as per the following algorithm: For each epoch: For each batch: get real images x_batch generate noise z_batch generate images g_batch using generator model combine g_batch and x_batch into x_in and create labels y_out set discriminator model as trainable train discriminator using x_in and y_out generate noise z_batch set x_in = z_batch and labels y_out = 1 set discriminator model as non-trainable train gan model using x_in and y_out, (effectively training generator model) For setting the labels, we apply the labels as 0.9 and 0.1 for real and fake images respectively. Generally, it is suggested that you use label smoothing by picking a random value from 0.0 to 0.3 for fake data and 0.8 to 1.0 for real data. Here is the complete code for training from the notebook: n_epochs = 400 batch_size = 100 n_batches = int(mnist.train.num_examples / batch_size) n_epochs_print = 50 for epoch in range(n_epochs+1): epoch_d_loss = 0.0 epoch_g_loss = 0.0 for batch in range(n_batches): x_batch, _ = mnist.train.next_batch(batch_size) x_batch = norm(x_batch) z_batch = np.random.uniform(-1.0,1.0,size=[batch_size,n_z]) g_batch = g_model.predict(z_batch) x_in = np.concatenate([x_batch,g_batch]) y_out = np.ones(batch_size*2) y_out[:batch_size]=0.9 y_out[batch_size:]=0.1 d_model.trainable=True batch_d_loss = d_model.train_on_batch(x_in,y_out) z_batch = np.random.uniform(-1.0,1.0,size=[batch_size,n_z]) x_in=z_batch y_out = np.ones(batch_size) d_model.trainable=False batch_g_loss = gan_model.train_on_batch(x_in,y_out) epoch_d_loss += batch_d_loss epoch_g_loss += batch_g_loss if epoch%n_epochs_print == 0: average_d_loss = epoch_d_loss / n_batches average_g_loss = epoch_g_loss / n_batches print('epoch: {0:04d} d_loss = {1:0.6f} g_loss = {2:0.6f}' .format(epoch,average_d_loss,average_g_loss)) # predict images using generator model trained x_pred = g_model.predict(z_test) display_images(x_pred.reshape(-1,pixel_size,pixel_size)) We printed the results every 50 epochs, up to 350 epochs: The model slowly learns to generate good quality images of handwritten digits from the random noise. There are so many variations of the GANs that it will take another book to cover all the different kinds of GANs. However, the implementation techniques are almost similar to what we have shown here. Deep Convolutional GAN with TensorFlow and Keras In DCGAN, both the discriminator and generator are implemented using a Deep Convolutional Network: 1.  
In this example, we decided to implement the generator as the following network: Generator: _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= g_in (Dense) (None, 3200) 822400 _________________________________________________________________ g_in_act (Activation) (None, 3200) 0 _________________________________________________________________ g_in_reshape (Reshape) (None, 5, 5, 128) 0 _________________________________________________________________ g_0_up2d (UpSampling2D) (None, 10, 10, 128) 0 _________________________________________________________________ g_0_conv2d (Conv2D) (None, 10, 10, 64) 204864 _________________________________________________________________ g_0_act (Activation) (None, 10, 10, 64) 0 _________________________________________________________________ g_1_up2d (UpSampling2D) (None, 20, 20, 64) 0 _________________________________________________________________ g_1_conv2d (Conv2D) (None, 20, 20, 32) 51232 _________________________________________________________________ g_1_act (Activation) (None, 20, 20, 32) 0 _________________________________________________________________ g_2_up2d (UpSampling2D) (None, 40, 40, 32) 0 _________________________________________________________________ g_2_conv2d (Conv2D) (None, 40, 40, 16) 12816 _________________________________________________________________ g_2_act (Activation) (None, 40, 40, 16) 0 _________________________________________________________________ g_out_flatten (Flatten) (None, 25600) 0 _________________________________________________________________ g_out (Dense) (None, 784) 20071184 ================================================================= Total params: 21,162,496 Trainable params: 21,162,496 Non-trainable params: 0 The generator is a stronger network having three convolutional layers followed by tanh activation. 
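The excerpt prints only the generator's summary rather than the code that builds it. Reading the layer names, output shapes, and parameter counts off that summary, a Keras definition along the following lines would reproduce the same architecture. This is a reconstruction, not the book's listing: the 5 x 5 kernel size is inferred from the parameter counts, and the ReLU activations in the hidden layers are an assumption (only the final tanh is stated in the text).

```python
from keras.models import Sequential
from keras.layers import Dense, Activation, Reshape, UpSampling2D, Conv2D, Flatten

n_z = 256  # noise dimension, matching the z_in shape in the GAN summary

g_model = Sequential(name='g')
g_model.add(Dense(5 * 5 * 128, input_shape=(n_z,), name='g_in'))
g_model.add(Activation('relu', name='g_in_act'))          # activation type assumed
g_model.add(Reshape((5, 5, 128), name='g_in_reshape'))
for i, filters in enumerate([64, 32, 16]):
    # Each block doubles the spatial size, then applies a 5x5 convolution
    # (kernel size inferred from the parameter counts in the summary).
    g_model.add(UpSampling2D(size=(2, 2), name='g_{}_up2d'.format(i)))
    g_model.add(Conv2D(filters, kernel_size=(5, 5), padding='same',
                       name='g_{}_conv2d'.format(i)))
    g_model.add(Activation('relu', name='g_{}_act'.format(i)))
g_model.add(Flatten(name='g_out_flatten'))
g_model.add(Dense(784, activation='tanh', name='g_out'))  # 28*28 flattened image
g_model.summary()                                         # should match the table above
```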
We define the discriminator network as follows: Discriminator: _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= d_0_reshape (Reshape) (None, 28, 28, 1) 0 _________________________________________________________________ d_0_conv2d (Conv2D) (None, 28, 28, 64) 1664 _________________________________________________________________ d_0_act (Activation) (None, 28, 28, 64) 0 _________________________________________________________________ d_0_maxpool (MaxPooling2D) (None, 14, 14, 64) 0 _________________________________________________________________ d_out_flatten (Flatten) (None, 12544) 0 _________________________________________________________________ d_out (Dense) (None, 1) 12545 ================================================================= Total params: 14,209 Trainable params: 14,209 Non-trainable params: 0 _________________________________________________________________  The GAN network is composed of the discriminator and generator as demonstrated previously: GAN: _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= z_in (InputLayer) (None, 256) 0 _________________________________________________________________ g (Sequential) (None, 784) 21162496 _________________________________________________________________ d (Sequential) (None, 1) 14209 ================================================================= Total params: 21,176,705 Trainable params: 21,162,496 Non-trainable params: 14,209 _________________________________________________________________ When we run this model for 400 epochs, we get the following output: As you can see, the DCGAN is able to generate high-quality digits starting from epoch 100 itself. The DGCAN has been used for style transfer, generation of images and titles and for image algebra, namely taking parts of one image and adding that to parts of another image. We built a simple GAN in TensorFlow and Keras and applied it to generate images from the MNIST dataset. We also built a DCGAN where the generator and discriminator consisted of convolutional networks. Do check out the book Mastering TensorFlow 1.x  to explore advanced features of TensorFlow 1.x and obtain in-depth knowledge of TensorFlow for solving artificial intelligence problems. 5 reasons to learn Generative Adversarial Networks (GANs) in 2018 Implementing a simple Generative Adversarial Network (GANs) Getting to know Generative Models and their types


How to integrate Firebase on Android/iOS applications natively

Savia Lobo
28 May 2018
26 min read
In this tutorial, you'll see Firebase integration within a native context, basically over an iOS and Android application. You will also implement some of the basic, as well as advanced features, that are found in any modern mobile application in both, Android and iOS ecosystems. So let's get busy! This article is an excerpt taken from the book,' Firebase Cookbook', written by Houssem Yahiaoui. Implement the pushing and retrieving of data from Firebase Real-time Database We're going to start first with Android and see how we can manage this feature: First, head to your Android Studio project. Now that you have opened your project, let's move on to integrating the Real-time Database. In your project, head to the Menu bar, navigate to Tools | Firebase, and then select Realtime Database. Now click Save and retrieve data. Since we've already connected our Android application to Firebase, let's now add the Firebase Real-time Database dependencies locally by clicking on the Add the Realtime Database to your app button. This will give you a screen that looks like the following screenshot:  Figure 1: Android Studio  Firebase integration section Click on the Accept Changes button and the gradle will add these new dependencies to your gradle file and download and build the project. Now we've created this simple wish list application. It might not be the most visually pleasing but will serve us well in this experiment with TextEdit, a Button, and a ListView. So, in our experiment we want to do the following: Add a new wish to our wish list Firebase Database See the wishes underneath our ListView Let's start with adding that list of data to our Firebase. Now head to your MainActivity.java file of any other activity related to your project and add the following code: //[*] UI reference. EditText wishListText; Button addToWishList; ListView wishListview; // [*] Getting a reference to the Database Root. DatabaseReference fRootRef = FirebaseDatabase.getInstance().getReference(); //[*] Getting a reference to the wishes list. DatabaseReference wishesRef = fRootRef.child("wishes"); protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); //[*] UI elements wishListText = (EditText) findViewById(R.id.wishListText); addToWishList = (Button) findViewById(R.id.addWishBtn); wishListview = (ListView) findViewById(R.id.wishsList); } @Override protected void onStart() { super.onStart(); //[*] Listening on Button click event addToWishList.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { //[*] Getting the text from our EditText UI Element. String wish = wishListText.getText().toString(); //[*] Pushing the Data to our Database. 
wishesRef.push().setValue(wish); AlertDialog alertDialog = new AlertDialog.Builder(MainActivity.this).create(); alertDialog.setTitle("Success"); alertDialog.setMessage("wish was added to Firebase"); alertDialog.show(); } }); } In the preceding code, we're doing the following: Getting a reference to our UI elements Since everything in Firebase starts with a reference, we're grabbing ourselves a reference to the root element in our database We are getting another reference to the wishes child method from the root reference Over the OnCreate() method, we are binding all the UI-based references to the actual UI widgets Over the OnStart() method, we're doing the following: Listening to the button click event and grabbing the EditText content Using the wishesRef.push().setValue() method to push the content of the EditText automatically to Firebase, then we are displaying a simple AlertDialog as the UI preferences However, the preceding code is not going to work. This is strange since everything is well configured, but the problem here is that the Firebase Database is secured out of the box with authorization rules. So, head to Database | RULES and change the rules there, and then publish. After that is done, the result will look similar to the following screenshot:  Figure 2: Firebase Real-time Database authorization section After saving and launching the application, the pushed data result will look like this: Figure 3: Firebase Real-time Database after adding a new wish to the wishes collection Firebase creates the child element in case you didn't create it yourself. This is great because we can create and implement any data structure we want, however, we want. Next, let's see how we can retrieve the data we sent. Move back to your onStart() method and add the following code lines: wishesRef.addChildEventListener(new ChildEventListener() { @Override public void onChildAdded(DataSnapshot dataSnapshot, String s) { //[*] Grabbing the data Snapshot String newWish = dataSnapshot.getValue(String.class); wishes.add(newWish); adapter.notifyDataSetChanged(); } @Override public void onChildChanged(DataSnapshot dataSnapshot, String s) {} @Override public void onChildRemoved(DataSnapshot dataSnapshot) {} @Override public void onChildMoved(DataSnapshot dataSnapshot, String s) {} @Override public void onCancelled(DatabaseError databaseError) {} }); Before you implement the preceding code, go to the onCreate() method and add the following line underneath the UI widget reference: //[*] Adding an adapter. adapter = new ArrayAdapter<String>(this, R.layout.support_simple_spinner_dropdown_item, wishes); //[*] Wiring the Adapter wishListview.setAdapter(adapter);  Preceding that, in the variable declaration, simply add the following declaration: ArrayList<String> wishes = new ArrayList<String>(); ArrayAdapter<String> adapter; So, in the preceding code, we're doing the following: Adding a new ArrayList and an adapter for ListView changes. We're wiring everything in the onCreate() method. Wiring an addChildEventListener() in the wishes Firebase reference. Grabbing the data snapshot from the Firebase Real-time Database that is going to be fired whenever we add a new wish, and then wiring the list adapter to notify the wishListview which is going to update our Listview content automatically. Congratulations! You've just wired and exploited the Real-time Database functionality and created your very own wishes tracker. 
Now, let's see how we can create our very own iOS wishes tracker application using nothing but Swift and Firebase: Head directly to and fire up Xcode, and let's open up the project, where we integrated Firebase. Let's work on the feature. Edit your Podfile and add the following line: pod 'Firebase/Database' This will download and install the Firebase Database dependencies locally, in your very own awesome wishes tracker application. There are two view controllers, one for the wishes table and the other one for adding a new wish to the wishes list, the following represents the main wishes list view.   Figure 4: iOS application wishes list view Once we click on the + sign button in the Header, we'll be navigated with a segueway to a new ViewModal, where we have a text field where we can add our new wish and a button to push it to our list.         Figure 5: Wishes iOS application, in new wish ViewModel Over  addNewWishesViewController.swift, which is the view controller for adding the new wish view, after adding the necessary UITextField, @IBOutlet and the button @IBAction, replace the autogenerated content with the following code lines: import UIKit import FirebaseDatabase class newWishViewController: UIViewController { @IBOutlet weak var wishText: UITextField //[*] Adding the Firebase Database Reference var ref: FIRDatabaseReference? override func viewDidLoad() { super.viewDidLoad() ref = FIRDatabase.database().reference() } @IBAction func addNewWish(_ sender: Any) { let newWish = wishText.text // [*] Getting the UITextField content. self.ref?.child("wishes").childByAutoId().setValue( newWish!) presentedViewController?.dismiss(animated: true, completion:nil) } } In the preceding code, besides the self-explanatory UI element code, we're doing the following: We're using the FIRDatabaseReference and creating a new Firebase reference, and we're initializing it with viewDidLoad(). Within the addNewWish IBAction (function), we're getting the text from the UITextField, calling for the "wishes" child, then we're calling childByAutoId(), which will create an automatic id for our data (consider it a push function, if you're coming from JavaScript). We're simply setting the value to whatever we're going to get from the TextField. Finally, we're dismissing the current ViewController and going back to the TableViewController which holds all our wishes. Implementing anonymous authentication Authentication is one of the most tricky, time-consuming and tedious tasks in any web application. and of course, maintaining the best practices while doing so is truly a hard job to maintain. For mobiles, it's even more complex, because if you're using any traditional application it will mean that you're going to create a REST endpoint, an endpoint that will take an email and password and return either a session or a token, or directly a user's profile information. In Firebase, things are a bit different and in this recipe, we're going to see how we can use anonymous authentication—we will explain that in a second. You might wonder, but why? The why is quite simple: to give users an anonymous temporal, to protect data and to give users an extra taste of your application's inner soul. So let's see how we can make that happen. How to do it... We will first see how we can implement anonymous authentication in Android: Fire up your Android Studio. 
Before doing anything, we need to get some dependencies first, speaking, of course, of the Firebase Auth library that can be downloaded by adding this line to the build.gradle file under the dependencies section: compile 'com.google.firebase:firebase-auth:11.0.2' Now simply Sync and you will be good to start adding Firebase Authentication logic. Let us see what we're going to get as a final result: Figure 6: Android application: anonymous login application A simple UI with a button and a TextView, where we put our user data after a successful authentication process. Here's the code for that simple UI: <?xml version="1.0" encoding="utf-8"?> <android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/ apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context="com.hcodex.anonlogin.MainActivity"> <Button android:id="@+id/anonLoginBtn" android:layout_width="289dp" android:layout_height="50dp" android:text="Anounymous Login" android:layout_marginRight="8dp" app:layout_constraintRight_toRightOf="parent" android:layout_marginLeft="8dp" app:layout_constraintLeft_toLeftOf="parent" android:layout_marginTop="47dp" android:onClick="anonLoginBtn" app:layout_constraintTop_toBottomOf= "@+id/textView2" app:layout_constraintHorizontal_bias="0.506" android:layout_marginStart="8dp" android:layout_marginEnd="8dp" /> <TextView android:id="@+id/textView2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Firebase Anonymous Login" android:layout_marginLeft="8dp" app:layout_constraintLeft_toLeftOf="parent" android:layout_marginRight="8dp" app:layout_constraintRight_toRightOf="parent" app:layout_constraintTop_toTopOf="parent" android:layout_marginTop="80dp" /> <TextView android:id="@+id/textView3" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Profile Data" android:layout_marginTop="64dp" app:layout_constraintTop_toBottomOf= "@+id/anonLoginBtn" android:layout_marginLeft="156dp" app:layout_constraintLeft_toLeftOf="parent" /> <TextView android:id="@+id/profileData" android:layout_width="349dp" android:layout_height="175dp" android:layout_marginBottom="28dp" android:layout_marginEnd="8dp" android:layout_marginLeft="8dp" android:layout_marginRight="8dp" android:layout_marginStart="8dp" android:layout_marginTop="8dp" android:text="" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintHorizontal_bias="0.526" app:layout_constraintLeft_toLeftOf="parent" app:layout_constraintRight_toRightOf="parent" app:layout_constraintTop_toBottomOf= "@+id/textView3" /> </android.support.constraint.ConstraintLayout> Now, let's see how we can wire up our Java code: //[*] Step 1 : Defining Logic variables. FirebaseAuth anonAuth; FirebaseAuth.AuthStateListener authStateListener; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); anonAuth = FirebaseAuth.getInstance(); setContentView(R.layout.activity_main); }; //[*] Step 2: Listening on the Login button click event. 
public void anonLoginBtn(View view) { anonAuth.signInAnonymously() .addOnCompleteListener( this, new OnCompleteListener<AuthResult>() { @Override public void onComplete(@NonNull Task<AuthResult> task) { if(!task.isSuccessful()) { updateUI(null); } else { FirebaseUser fUser = anonAuth.getCurrentUser(); Log.d("FIRE", fUser.getUid()); updateUI(fUser); } }); } } //[*] Step 3 : Getting UI Reference private void updateUI(FirebaseUser user) { profileData = (TextView) findViewById( R.id.profileData); profileData.append("Anonymous Profile Id : n" + user.getUid()); } Now, let's see how we can implement anonymous authentication on iOS: What we'll achieve in this test is the following : Figure 7: iOS application, anonymous login application   Before doing anything, we need to download and install the Firebase authentication dependency first. Head directly over to your Podfile and the following line: pod 'Firebase/Auth' Then simply save the file, and on your terminal, type the following command: ~> pod install This will download the needed dependency and configure our application accordingly. Now create a simple UI with a button and after configuring your UI button IBAction reference, let's add the following code: @IBAction func connectAnon(_ sender: Any) { Auth.auth().signInAnonymously() { (user, error) in if let anon = user?.isAnonymous { print("i'm connected anonymously here's my id (user?.uid)") } } } How it works... Let's digest the preceding code: We're defining some basic logic variables; we're taking basically a TextView, where we'll append our results and define the Firebase  anonAuth variable. It's of FirebaseAuth type, which is the starting point for any authentication strategy that we might use. Over onCreate, we're initializing our Firebase reference and fixing our content view. We're going to authenticate our user by clicking a button bound with the anonLoginBtn() method. Within it, we're simply calling for the signInAnonymously() method, then if incomplete, we're testing if the authentication task is successful or not, else we're updating our TextEdit with the user information. We're using the updateUI method to simply update our TextField. Pretty simple steps. Now simply build and run your project and test your shiny new features. Implementing password authentication on iOS Email and password authentication is the most common way to authenticate anyone and it can be a major risk point if done wrong. Using Firebase will remove that risk and make you think of nothing but the UX that you will eventually provide to your users. In this recipe, we're going to see how you can do this on iOS. How to do it... Let's suppose you've created your awesome UI with all text fields and buttons and wired up the email and password IBOutlets and the IBAction login button. Let's see the code behind the awesome, quite simple password authentication process: import UIKit import Firebase import FirebaseAuth class EmailLoginViewController: UIViewController { @IBOutlet weak var emailField: UITextField! @IBOutlet weak var passwordField: UITextField! 
    override func viewDidLoad() {
        super.viewDidLoad()
    }

    @IBAction func loginEmail(_ sender: Any) {
        if self.emailField.text == "" || self.passwordField.text == "" {
            //[*] Prompt an error
            let alertController = UIAlertController(title: "Error",
                message: "Please enter an email and password.",
                preferredStyle: .alert)
            let defaultAction = UIAlertAction(title: "OK", style: .cancel, handler: nil)
            alertController.addAction(defaultAction)
            self.present(alertController, animated: true, completion: nil)
        } else {
            FIRAuth.auth()?.signIn(withEmail: self.emailField.text!,
                password: self.passwordField.text!) { (user, error) in
                if error == nil {
                    //[*] TODO: Navigate to the application home page.
                } else {
                    //[*] Alert in case we have an error.
                    let alertController = UIAlertController(title: "Error",
                        message: error?.localizedDescription,
                        preferredStyle: .alert)
                    let defaultAction = UIAlertAction(title: "OK", style: .cancel, handler: nil)
                    alertController.addAction(defaultAction)
                    self.present(alertController, animated: true, completion: nil)
                }
            }
        }
    }
}

How it works... Let's digest the preceding code: We're simply adding some IBOutlets and the IBAction login button. In the loginEmail function, we're doing two things: If the user didn't provide an email or a password, we prompt them with an error alert indicating that both fields are required. Otherwise, we call the FIRAuth.auth()?.signIn() function, which in this case takes an email and a password. Then we test whether we have any errors; only if there are none might we navigate to the app home screen or do anything else we want. If there is an error, we prompt the user with the authentication error message. And as simple as that, we're done. The User object is returned as well, so you may do any additional processing with the name, email, and much more. Implementing password authentication on Android To make things easier in terms of Android, we're going to use the awesome Firebase Auth UI. Using the Firebase Auth UI will save a lot of hassle when it comes to building the actual user interface and handling the different intent calls between the application activities. Let's see how we can integrate and use it for our needs. Let's start by configuring our project and downloading all the necessary dependencies. Head to your build.gradle file and copy/paste the following entry: compile 'com.firebaseui:firebase-ui-auth:3.0.0' Now, simply sync and you will be good to start. How to do it... Now, let's see how we can make the functionality work: Declare the FirebaseAuth reference, plus add another variable that we will need later on: FirebaseAuth auth; private static final int RC_SIGN_IN = 17; Now, inside your onCreate method, add the following code:

auth = FirebaseAuth.getInstance();
if (auth.getCurrentUser() != null) {
    Log.d("Auth", "Logged in successfully");
} else {
    startActivityForResult(
        AuthUI.getInstance()
            .createSignInIntentBuilder()
            .setAvailableProviders(Arrays.asList(
                new AuthUI.IdpConfig.Builder(AuthUI.EMAIL_PROVIDER).build()))
            .build(),
        RC_SIGN_IN);
}
findViewById(R.id.logoutBtn).setOnClickListener(this);

Now, in your activity, implement the View.OnClickListener interface.
So your class will look like the following: public class MainActivity extends AppCompatActivity implements View.OnClickListener {} After that, implement the onClick function as shown here:

@Override
public void onClick(View v) {
    if (v.getId() == R.id.logoutBtn) {
        AuthUI.getInstance().signOut(this)
            .addOnCompleteListener(new OnCompleteListener<Void>() {
                @Override
                public void onComplete(@NonNull Task<Void> task) {
                    Log.d("Auth", "Logged out successfully");
                    // TODO: perform a custom operation.
                }
            });
    }
}

In the end, implement the onActivityResult method as shown in the following code block:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == RC_SIGN_IN) {
        if (resultCode == RESULT_OK) {
            // User is in!
            Log.d("Auth", auth.getCurrentUser().getEmail());
        } else {
            // User is not authenticated
            Log.d("Auth", "Not Authenticated");
        }
    }
}

Now build and run your project. You will have a similar interface to that shown in the following screenshot: Figure 8: Android authentication using email/password: email picker This interface will be shown in case you're not authenticated, and your application will list all the saved accounts on your device. If you click on the NONE OF THE ABOVE button, you will be prompted with the following interface: Figure 9: Android authentication email/password: adding new email When you add your email and click on the NEXT button, the API will look for a user with that email among your application's users. If such an email is present, you will be authenticated, but if it's not the case, you will be redirected to the Sign-up activity as shown in the following screenshot: Figure 10: Android authentication: creating a new account, with email/password/name Next, you will add your name and password. And with that, you will create a new account and you will be authenticated. How it works... From the preceding code, it's clear that we didn't create any user interface. The Firebase UI is so powerful, so let's explore what happens: The setAvailableProviders method takes a list of providers; those providers will differ based on your needs, so it can be any email provider, Google, Facebook, or any other provider that Firebase supports. The main difference is that each provider has its own configuration and necessary dependencies that you will need to support the functionality. Also, if you've noticed, we're setting up a logout button. We created this button mainly to log out our users and added a click listener to it. The idea here is that when you click on it, the application performs the sign-out operation. Then you add your custom intent, which can vary from a redirect to closing the application. If you noticed, we're implementing the onActivityResult special function, and this one will be our main listening point whenever we connect to or disconnect from the application. Within it, we can perform different operations, from redirection to displaying toasts, to anything that you can think of. Implementing Google Sign-in authentication Google authentication is the process of logging in/creating an account using nothing but your existing Google account. It's easy, fast, and intuitive and removes a lot of the hassle we usually face when we register for any web/mobile application. I'm talking basically about form filling.
Using Firebase Google Sign-in authentication, we can manage such functionality; plus, we get the user's basic metadata, such as the display name, picture URL, and more. In this recipe, we're going to see how we can implement Google Sign-in functionality for both Android and iOS. Before doing any coding, it's important to do some basic configuration in our Firebase project console. Head directly to your Firebase project Console | Authentication | SIGN-IN METHOD | Google, simply activate the switch, and follow the instructions there in order to get the client ID. Please notice that Google Sign-in is automatically configured for iOS, but for Android, we will need to do some custom configuration. Let us first look at getting ready for Android to implement Google Sign-in authentication: Before we start implementing the authentication functionality, we will need to install some dependencies first, so please head to your build.gradle file, paste the following, and then sync your build: compile 'com.google.firebase:firebase-auth:11.4.2' compile 'com.google.android.gms:play-services-auth:11.4.2' The two dependency versions are interdependent, which means that whenever you install them, you have to provide the same version for both dependencies. Moving on to getting ready in iOS for the implementation of Google Sign-in authentication: In iOS, we will need to install a couple of dependencies, so please go and edit your Podfile and add the following lines underneath your already present dependencies, if you have any: pod 'Firebase/Auth' pod 'GoogleSignIn' Now, in your terminal, type the following command: ~> pod install This command will install the required dependencies and configure your project accordingly. How to do it... First, let us take a look at how we will implement this recipe in Android: Now, after installing our dependencies, we will need to create the UI for our calls.
To do that, simply copy and paste the following special button XML code into your layout:

<com.google.android.gms.common.SignInButton
    android:id="@+id/gbtn"
    android:layout_width="368dp"
    android:layout_height="wrap_content"
    android:layout_marginLeft="16dp"
    android:layout_marginTop="30dp"
    app:layout_constraintLeft_toLeftOf="parent"
    app:layout_constraintTop_toTopOf="parent"
    android:layout_marginRight="16dp"
    app:layout_constraintRight_toRightOf="parent" />

The result will be this: Figure 11: Google Sign-in button after the declaration After doing that, let's see the code behind it:

SignInButton gBtn;
FirebaseAuth mAuth;
GoogleApiClient mGoogleApiClient;
private final static int RC_SIGN_IN = 3;
FirebaseAuth.AuthStateListener mAuthListener;

@Override
protected void onStart() {
    super.onStart();
    mAuth.addAuthStateListener(mAuthListener);
}

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    mAuth = FirebaseAuth.getInstance();
    gBtn = (SignInButton) findViewById(R.id.gbtn);
    gBtn.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            signIn();
        }
    });

    mAuthListener = new FirebaseAuth.AuthStateListener() {
        @Override
        public void onAuthStateChanged(@NonNull FirebaseAuth firebaseAuth) {
            if (firebaseAuth.getCurrentUser() != null) {
                AlertDialog alertDialog = new AlertDialog.Builder(MainActivity.this).create();
                alertDialog.setTitle("User");
                alertDialog.setMessage("I have a user logged in");
                alertDialog.show();
            }
        }
    };

    mGoogleApiClient = new GoogleApiClient.Builder(this)
        .enableAutoManage(this, new GoogleApiClient.OnConnectionFailedListener() {
            @Override
            public void onConnectionFailed(@NonNull ConnectionResult connectionResult) {
                Toast.makeText(MainActivity.this, "Something went wrong", Toast.LENGTH_SHORT).show();
            }
        })
        .addApi(Auth.GOOGLE_SIGN_IN_API, gso)
        .build();
}

GoogleSignInOptions gso = new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)
    .requestEmail()
    .build();

private void signIn() {
    Intent signInIntent = Auth.GoogleSignInApi.getSignInIntent(mGoogleApiClient);
    startActivityForResult(signInIntent, RC_SIGN_IN);
}

@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == RC_SIGN_IN) {
        GoogleSignInResult result = Auth.GoogleSignInApi.getSignInResultFromIntent(data);
        if (result.isSuccess()) {
            // Google Sign-In was successful, authenticate with Firebase
            GoogleSignInAccount account = result.getSignInAccount();
            firebaseAuthWithGoogle(account);
        } else {
            Toast.makeText(MainActivity.this, "Connection Error", Toast.LENGTH_SHORT).show();
        }
    }
}

private void firebaseAuthWithGoogle(GoogleSignInAccount account) {
    AuthCredential credential = GoogleAuthProvider.getCredential(account.getIdToken(), null);
    mAuth.signInWithCredential(credential)
        .addOnCompleteListener(this, new OnCompleteListener<AuthResult>() {
            @Override
            public void onComplete(@NonNull Task<AuthResult> task) {
                if (task.isSuccessful()) {
                    // Sign in success, update UI with the signed-in user's information
                    Log.d("TAG", "signInWithCredential:success");
                    FirebaseUser user = mAuth.getCurrentUser();
                    Log.d("TAG", user.getDisplayName());
                } else {
                    Log.w("TAG", "signInWithCredential:failure", task.getException());
                    Toast.makeText(MainActivity.this, "Authentication failed.", Toast.LENGTH_SHORT).show();
                }
                // ...
            }
        });
}

Then, simply build and launch your application, click on the authentication button, and you will be greeted with the following screen: Figure 12: Account picker, after clicking on the Google Sign-in button. Next, simply pick the account you want to connect with, and then you will be greeted with an alert, finishing the authentication process. Now we will take a look at the implementation of our recipe in iOS: Before we do anything, let's import the Google Sign-in module as follows: import GoogleSignIn After that, let's add our Google Sign-in button; to do so, go to your login page ViewController and add the following lines of code:

//Google sign in
let googleBtn = GIDSignInButton()
googleBtn.frame = CGRect(x: 16, y: 50, width: view.frame.width - 32, height: 50)
view.addSubview(googleBtn)
GIDSignIn.sharedInstance().uiDelegate = self

The frame positioning is for my own needs; you can use it or modify the dimensions to suit your application's needs. Now, after adding the lines above, we will get an error. This is due to our ViewController not conforming to GIDSignInUIDelegate, so in order to make Xcode happier, let's add it to our ViewController declaration so it looks like the following: class ViewController: UIViewController, FBSDKLoginButtonDelegate, GIDSignInUIDelegate {} Now, if you build and run your project, you will get the following: Figure 13: iOS application after configuring the Google Sign-in button Now, if you click on the Sign in button, you will get an exception. The reason for that is that the Sign in button is asking for the clientID, so to fix that, go to your AppDelegate file and complete the following import: import GoogleSignIn Next, add the following line of code within application: didFinishLaunchingWithOptions, as shown below: GIDSignIn.sharedInstance().clientID = FirebaseApp.app()?.options.clientID If you build and run the application now, then click on the Sign in button, nothing will happen. Why? Because iOS doesn't know how and where to navigate to next. So now, in order to fix that issue, go to your GoogleService-Info.plist file, copy the value of the REVERSED_CLIENT_ID, then go to your project configuration. Inside the Info section, scroll down to the URL types, add a new URL type, and paste the value inside the URL Schemes field: Figure 14: Xcode Firebase URL scheme added, to finish the Google Sign-in behavior Next, within application: open URL options, add the following line: GIDSignIn.sharedInstance().handle(url, sourceApplication: options[UIApplicationOpenURLOptionsKey.sourceApplication] as? String, annotation: options[UIApplicationOpenURLOptionsKey.annotation]) This will simply handle the transition to the URL we already specified within the URL schemes. Next, if you build and run your application, tap on the Sign in button and you will be redirected via SFSafariViewController to the Google Sign-in page, as shown in the following screenshot: Figure 15: iOS Google account picker after clicking on the Sign-in button With that, the outbound part of the authentication process is done, but what will happen when you select your account and authorize the application? Typically, you need to go back to your application with all the needed profile information, don't you? Well, for now, that is not the case, so let's fix that.
Go back to the AppDelegate file and do the following: Add the GIDSignInDelegate to the app delegate declaration Add the following line to application: didFinishLaunchingWithOptions: GIDSignIn.sharedInstance().delegate = self This will simply let us go back to the application with all the tokens we need to finish the authentication process with Firebase. Next, we need to implement the sign function that belongs to the GIDSignInDelegate; this function will be called once we're successfully authenticated:

func sign(_ signIn: GIDSignIn!, didSignInFor user: GIDGoogleUser!, withError error: Error!) {
    if let err = error {
        print("Can't connect to Google: \(err)")
        return
    }
    print("we're using google sign in", user)
}

Now, once you're fully authenticated, you will receive the success message in your terminal. Now we can simply integrate our Firebase authentication logic. Complete the following import: import FirebaseAuth Next, inside the same sign function, add the following:

guard let authentication = user.authentication else { return }
let credential = GoogleAuthProvider.credential(withIDToken: authentication.idToken,
                                               accessToken: authentication.accessToken)
Auth.auth().signIn(with: credential, completion: { (user, error) in
    if let error = error {
        print("[*] Can't connect to Firebase, with error:", error)
    }
    print("we have a user", user?.displayName)
})

This code will use the successfully logged-in user's token and call the Firebase Authentication logic to create a new Firebase user. Now we can retrieve the basic profile information that Firebase delivers. How it works... Let's explain what we did in the Android section: We activated authentication using our Google account from the Firebase project console. We also installed the required dependencies, from Firebase Auth to Google Play services. After finishing the setup, we gained the ability to create that awesome Google Sign-in special button, and we also gave it an ID for easy access. We created references for the SignInButton and FirebaseAuth. Let's now explain what we just did in the iOS section: We used the GIDSignInButton in order to create the branded Google Sign-in button, and we added it to our ViewController. Inside the AppDelegate, we made a couple of configurations so we could retrieve the ClientID that the button needed to connect to our application. For our button to work, we used the information stored in GoogleService-Info.plist and created an app link within our application so we could navigate to our connection page. Once everything was set, we were introduced to our application authorization page, where we authorized the application and chose the account we wanted to use to connect. In order to get back all the required tokens and account information, we needed to go back to the AppDelegate file and implement the GIDSignInDelegate. Within it, we can access all the account-related tokens and information once we are successfully authenticated. Within the implemented sign function, we fed our regular Firebase authentication signIn method with all the necessary tokens and information. When we built and ran the application again and signed in, we found the account used to authenticate listed among the Firebase authenticated accounts. To summarize, we learned how to integrate Firebase within a native context, basically over an iOS and Android application. If you've enjoyed reading this, do check out 'Firebase Cookbook' for recipes to help you understand features of Firebase and implement them in your existing web or mobile applications.
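One loose end worth noting from the Android anonymous-login example: it declared a FirebaseAuth.AuthStateListener field (authStateListener) but never registered it. The following is a minimal, hypothetical sketch of how that listener could be wired into the same MainActivity; the log tag and messages are placeholders rather than code from the book:

// Hypothetical wiring for the authStateListener field declared earlier.
// Register the listener in onStart() and remove it in onStop().
@Override
protected void onStart() {
    super.onStart();
    authStateListener = new FirebaseAuth.AuthStateListener() {
        @Override
        public void onAuthStateChanged(@NonNull FirebaseAuth firebaseAuth) {
            FirebaseUser user = firebaseAuth.getCurrentUser();
            if (user != null) {
                Log.d("FIRE", "Signed in as " + user.getUid());
            } else {
                Log.d("FIRE", "Signed out");
            }
        }
    };
    anonAuth.addAuthStateListener(authStateListener);
}

@Override
protected void onStop() {
    super.onStop();
    if (authStateListener != null) {
        anonAuth.removeAuthStateListener(authStateListener);
    }
}

Registering in onStart() and removing in onStop() keeps the listener tied to the activity lifecycle, so you always get a callback when the signed-in state changes while the screen is visible.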
Using the Firebase Real-Time Database How to integrate Firebase with NativeScript for cross-platform app development Build powerful progressive web apps with Firebase  
How to optimize MySQL 8 servers and clients
Amey Varangaonkar
28 May 2018
11 min read
Our article focuses on optimization for MySQL 8 database servers and clients. We start with optimizing the server, followed by optimizing MySQL 8 client-side entities. It is most relevant to database administrators, to ensure performance and scalability across multiple servers. It would also help developers prepare scripts (which includes setting up the database) and users run MySQL for development and testing to maximize productivity. Note: The following excerpt is taken from the book MySQL 8 Administrator's Guide, written by Chintan Mehta, Ankit Bhavsar, Hetal Oza and Subhash Shah. In this book, the authors present hands-on techniques for tackling the common and not-so-common issues that come up in the different administration-related tasks in MySQL 8. Optimizing disk I/O There are quite a few ways to configure storage devices to devote more and faster storage hardware to the database server. A major performance bottleneck is disk seeking (finding the correct place on the disk to read or write content). When the amount of data grows large enough to make caching impossible, the problem with disk seeks becomes apparent. We need at least one disk seek operation to read, and several disk seek operations to write things in large databases where the data access is done more or less randomly. We should regulate or minimize the disk seek times using appropriate disks. In order to resolve the disk seek performance issue, we can increase the number of available disk spindles, symlink the files to different disks, or stripe the disks. The following are the details: Using symbolic links: When using symbolic links, we can create Unix symbolic links for index and data files. The symlink points from the default location in the data directory to another disk in the case of MyISAM tables. These links may also be striped. This improves the seek and read times. The assumption is that the disk is not used concurrently for other purposes. Symbolic links are not supported for InnoDB tables. However, we can place InnoDB data and log files on different physical disks. Striping: In striping, we have many disks. We put the first block on the first disk, the second block on the second disk, and so on. The N-th block goes on the (N % number-of-disks) disk. If the stripe size is perfectly aligned, the normal data size will be less than the stripe size. This will help to improve the performance. Striping is dependent on the stripe size and the operating system. In an ideal case, we would benchmark the application with different stripe sizes. The speed difference while striping depends on the parameters we have used, like stripe size. The difference in performance also depends on the number of disks. We have to choose whether we want to optimize for random access or sequential access. To gain reliability, we may decide to set up striping and mirroring (RAID 0+1). RAID stands for Redundant Array of Independent Drives. This approach needs 2 x N drives to hold N drives of data. With good volume management software, we can manage this setup efficiently. There is another approach to it, as well. Depending on how critical the type of data is, we may vary the RAID level. For example, we can store really important data, such as host information and logs, on a RAID 0+1 or RAID N disk, whereas we can store semi-important data on a RAID 0 disk. In the case of RAID N, parity bits are used to ensure the integrity of the data stored on each drive.
So, RAID N becomes a problem if we have too many write operations to be performed. The time required to update the parity bits in this case is high. If it is not important to maintain when the file was last accessed, we can mount the file system with the -o noatime option. This option skips updates to the last access time on the file system, which reduces the disk seek time. We can also make the file system update asynchronously. Depending upon whether the file system supports it, we can set the -o async option. Using Network File System (NFS) with MySQL While using a Network File System (NFS), varying issues may occur, depending on the operating system and the NFS version. The following are the details: Data inconsistency is one issue with an NFS system. It may occur because of messages received out of order or lost network traffic. We can use TCP with hard and intr mount options to avoid these issues. MySQL data and log files may get locked and become unavailable for use if placed on NFS drives. If multiple instances of MySQL access the same data directory, it may result in locking issues. Improper shutdown of MySQL or a power outage are other reasons for filesystem locking issues. The latest version of NFS supports advisory and lease-based locking, which helps in addressing the locking issues. Still, it is not recommended to share a data directory among multiple MySQL instances. Maximum file size limitations must be understood to avoid any issues. With NFS 2, only the lower 2 GB of a file is accessible by clients. NFS 3 clients support larger files. The maximum file size depends on the local file system of the NFS server. Optimizing the use of memory In order to improve the performance of database operations, MySQL allocates buffers and caches memory. By default, MySQL's configuration is designed so that the server can start on a virtual machine (VM) with about 512 MB of RAM. We can modify the default configuration for MySQL to run on limited memory systems. The following list describes the ways to optimize MySQL memory: The memory area which holds cached InnoDB data for tables, indexes, and other auxiliary buffers is known as the InnoDB buffer pool. The buffer pool is divided into pages. The pages hold multiple rows. The buffer pool is implemented as a linked list of pages for efficient cache management. Rarely used data is removed from the cache using an algorithm. Buffer pool size is an important factor for system performance. The innodb_buffer_pool_size system variable defines the buffer pool size. InnoDB allocates the entire buffer pool size at server startup. 50 to 75 percent of system memory is recommended for the buffer pool size. With MyISAM, all threads share the key buffer. The key_buffer_size system variable defines the size of the key buffer. The index file is opened once for each MyISAM table opened by the server. For each concurrent thread that accesses the table, the data file is opened once. A table structure, column structures for each column, and a 3 x N sized buffer are allocated for each concurrent thread. The MyISAM storage engine maintains an extra row buffer for internal use. The optimizer estimates the reading of multiple rows by scanning. The storage engine interface enables the optimizer to provide information about the record buffer size. The size of the buffer can vary depending on the size of the estimate. In order to take advantage of row pre-fetching, InnoDB uses a variable size buffering capability. It reduces the overhead of latching and B-tree navigation.
Memory mapping can be enabled for all MyISAM tables by setting the myisam_use_mmap system variable to 1. The size of an in-memory temporary table can be defined by the tmp_table_size system variable. The maximum size of the heap table can be defined using the max_heap_table_size system variable. If the in-memory table becomes too large, MySQL automatically converts the table from in-memory to on-disk. The storage engine for an on-disk temporary table is defined by the internal_tmp_disk_storage_engine system variable. MySQL comes with the MySQL performance schema. It is a feature to monitor MySQL execution at low levels. The performance schema dynamically allocates memory by scaling its memory use to the actual server load, instead of allocating memory upon server startup. The memory, once allocated, is not freed until the server is restarted. Thread-specific space is required for each thread that the server uses to manage client connections. The stack size is governed by the thread_stack system variable. The connection buffer is governed by the net_buffer_length system variable. A result buffer is also governed by net_buffer_length. The connection buffer and result buffer start with net_buffer_length bytes, but enlarge up to max_allowed_packet bytes, as needed. All threads share the same base memory. All join clauses are executed in a single pass. Most of the joins can be executed without a temporary table. Temporary tables are memory-based hash tables. Temporary tables that contain BLOB data and tables with large row lengths are stored on disk. A read buffer is allocated for each request that performs a sequential scan on a table. The size of the read buffer is determined by the read_buffer_size system variable. MySQL closes all tables that are not in use at once when the FLUSH TABLES or mysqladmin flush-tables commands are executed. It marks all in-use tables to be closed when the current thread execution finishes. This frees in-use memory. FLUSH TABLES returns only after all tables have been closed. It is possible to monitor the MySQL performance schema and sys schema for memory usage. Before we can execute commands for this, we have to enable memory instruments on the MySQL performance schema. It can be done by updating the ENABLED column of the performance schema setup_instruments table. The following is the query to view available memory instruments in MySQL:

mysql> SELECT * FROM performance_schema.setup_instruments WHERE NAME LIKE '%memory%';

This query will return hundreds of memory instruments. We can narrow it down by specifying a code area. The following is an example to limit results to InnoDB memory instruments:

mysql> SELECT * FROM performance_schema.setup_instruments WHERE NAME LIKE '%memory/innodb%';

The following is the configuration to enable memory instruments:

performance-schema-instrument='memory/%=COUNTED'

The following is an example to query memory instrument data in the memory_summary_global_by_event_name table in the performance schema:

mysql> SELECT * FROM performance_schema.memory_summary_global_by_event_name WHERE EVENT_NAME LIKE 'memory/innodb/buf_buf_pool'\G
EVENT_NAME: memory/innodb/buf_buf_pool
COUNT_ALLOC: 1
COUNT_FREE: 0
SUM_NUMBER_OF_BYTES_ALLOC: 137428992
SUM_NUMBER_OF_BYTES_FREE: 0
LOW_COUNT_USED: 0
CURRENT_COUNT_USED: 1
HIGH_COUNT_USED: 1
LOW_NUMBER_OF_BYTES_USED: 0
CURRENT_NUMBER_OF_BYTES_USED: 137428992
HIGH_NUMBER_OF_BYTES_USED: 137428992

It summarizes data by EVENT_NAME.
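As a side note before the sys schema example that follows: the memory-related variables discussed above are usually set in the server option file rather than at runtime. A purely illustrative my.cnf fragment is shown below; the values are placeholders and have to be sized to the RAM actually available on your host:

[mysqld]
innodb_buffer_pool_size = 6G     # roughly 50-75% of RAM on a dedicated database host
key_buffer_size         = 256M   # only used for MyISAM index blocks
tmp_table_size          = 64M    # in-memory temporary table limit
max_heap_table_size     = 64M    # keep aligned with tmp_table_size
read_buffer_size        = 256K   # per-thread sequential scan buffer
thread_stack            = 256K   # per-thread stack size

After changing any of these values, verify their effect with the performance schema and sys schema queries described in this section.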
The following is an example of querying the sys schema to aggregate currently allocated memory by code area: mysql> SELECT SUBSTRING_INDEX(event_name,'/',2) AS code_area, sys.format_bytes(SUM(current_alloc)) AS current_alloc FROM sys.x$memory_global_by_current_bytes GROUP BY SUBSTRING_INDEX(event_name,'/',2) ORDER BY SUM(current_alloc) DESC; Performance benchmarking We must consider the following factors when measuring performance: While measuring the speed of a single operation or a set of operations, it is important to simulate a scenario in the case of a heavy database workload for benchmarking In different environments, the test results may be different Depending on the workload, certain MySQL features may not help with performance MySQL 8 supports measuring the performance of individual statements. If we want to measure the speed of any SQL expression or function, the BENCHMARK() function is used. The following is the syntax for the function: BENCHMARK(loop_count, expression) The output of the BENCHMARK function is always zero. The speed can be measured by the line printed by MySQL in the output. The following is an example: mysql> select benchmark(1000000, 1+1); From the preceding example , we can find that the time taken to calculate 1+1 for 1000000 times is 0.15 seconds. Other aspects involved in optimizing MySQL servers and clients include optimizing locking operations, examining thread information and more. To know about these techniques, you may check out the book MySQL 8 Administrator’s Guide. SQL Server recovery models to effectively backup and restore your database Get SQL Server user management right 4 Encryption options for your SQL Server
Testing Single Page Applications (SPAs) using Vue.js developer tools
Pravin Dhandre
25 May 2018
8 min read
Testing, especially for big applications, is paramount – especially when deploying your application to a development environment. Whether you choose unit testing or browser automation, there are a host of articles and books available on the subject. In this tutorial, we have covered the usage of Vue developer tools to test Single Page Applications. We will also touch upon other alternative tools like Nightwatch.js, Selenium, and TestCafe for testing. This article is an excerpt from a book written by Mike Street, titled Vue.js 2.x by Example.  Using the Vue.js developer tools The Vue developer tools are available for Chrome and Firefox and can be downloaded from GitHub. Once installed, they become an extension of the browser developer tools. For example, in Chrome, they appear after the Audits tab. The Vue developer tools will only work when you are using Vue in development mode. By default, the un-minified version of Vue has the development mode enabled. However, if you are using the production version of the code, the development tools can be enabled by setting the devtools variable to true in your code: Vue.config.devtools = true We've been using the development version of Vue, so the dev tools should work with all three of the SPAs we have developed. Open the Dropbox example and open the Vue developer tools. Inspecting Vue components data and computed values The Vue developer tools give a great overview of the components in use on the page. You can also drill down into the components and preview the data in use on that particular instance. This is perfect for inspecting the properties of each component on the page at any given time. For example, if we inspect the Dropbox app and navigate to the Components tab, we can see the <Root> Vue instance and we can see the <DropboxViewer> component. Clicking this will reveal all of the data properties of the component – along with any computed properties. This lets us validate whether the structure is constructed correctly, along with the computed path property: Drilling down into each component, we can access individual data objects and computed properties. Using the Vue developer tools for inspecting your application is a much more efficient way of validating data while creating your app, as it saves having to place several console.log() statements. Viewing Vuex mutations and time-travel Navigating to the next tab, Vuex, allows us to watch store mutations taking place in real time. Every time a mutation is fired, a new line is created in the left-hand panel. This element allows us to view what data is being sent, and what the Vuex store looked like before and after the data had been committed. It also gives you several options to revert, commit, and time-travel to any point. Loading the Dropbox app, several structure mutations immediately populate within the left-hand panel, listing the mutation name and the time they occurred. This is the code pre-caching the folders in action. Clicking on each one will reveal the Vuex store state – along with a mutation containing the payload sent. The state display is after the payload has been sent and the mutation committed. To preview what the state looked like before that mutation, select the preceding option: On each entry, next to the mutation name, you will notice three symbols that allow you to carry out several actions and directly mutate the store in your browser: Commit this mutation: This allows you to commit all the data up to that point. 
This will remove all of the mutations from the dev tools and update the Base State to this point. This is handy if there are several mutations occurring that you wish to keep track of. Revert this mutation: This will undo the mutation and all mutations after this point. This allows you to carry out the same actions again and again without pressing refresh or losing your current place. For example, when adding a product to the basket in our shop app, a mutation occurs. Using this would allow you to remove the product from the basket and undo any following mutations without navigating away from the product page. Time-travel to this state: This allows you to preview the app and state at that particular mutation, without reverting any mutations that occur after the selected point. The mutations tab also allows you to commit or revert all mutations at the top of the left-hand panel. Within the right-hand panel, you can also import and export a JSON encoded version of the store's state. This is particularly handy when you want to re-test several circumstances and instances without having to reproduce several steps. Previewing event data The Events tab of the Vue developer tools works in a similar way to the Vuex tab, allowing you to inspect any events emitted throughout your app. Changing the filters in this app emits an event each time the filter type is updated, along with the filter query: The left-hand panel again lists the name of the event and the time it occurred. The right panel contains information about the event, including its component origin and payload. This data allows you to ensure the event data is as you expected it to be and, if not, helps you locate where the event is being triggered. The Vue dev tools are invaluable, especially as your JavaScript application gets bigger and more complex. Open the shop SPA we developed and inspect the various components and Vuex data to get an idea of how this tool can help you create applications that only commit mutations they need to and emit the events they have to. Testing your Single Page Application The majority of Vue testing suites revolve around having command-line knowledge and creating a Vue application using the CLI (command-line interface). Along with creating applications in frontend-compatible JavaScript, Vue also has a CLI that allows you to create applications using component-based files. These are files with a .vue extension and contain the template HTML along with the JavaScript required for the component. They also allow you to create scoped CSS – styles that only apply to that component. If you chose to create your app using the CLI, all of the theory and a lot of the practical knowledge you have learned in this book can easily be ported across. Command-line unit testing Along with component files, the Vue CLI allows you to integrate more easily with command-line unit testing tools, such as Jest, Mocha, Chai, and TestCafe (https://testcafe.devexpress.com/). For example, TestCafe allows you to specify several different kinds of tests, from checking whether content exists to clicking buttons to test functionality. An example of a TestCafe test checking to see if the filtering component in our first app contains the word Filter would be:

test('The filtering contains the word "filter"', async testController => {
    const filterSelector = await new Selector('body > #app > form > label:nth-child(1)');
    await testController.expect(filterSelector.innerText).eql('Filter');
});

This test would then equate to true or false.
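For context, the snippet above leaves out the surrounding boilerplate that a TestCafe test file needs. A fuller, self-contained sketch is shown below; the page URL and the file name are assumptions for a locally served build of the app, not details taken from the book:

// filter.test.js - illustrative only; adjust the URL and selector to your build.
import { Selector } from 'testcafe';

fixture('Filtering component')
    .page('http://localhost:8080/');

test('The filtering contains the word "filter"', async testController => {
    const filterSelector = Selector('body > #app > form > label:nth-child(1)');
    await testController.expect(filterSelector.innerText).contains('Filter');
});

You would then run it with a command along the lines of npx testcafe chrome filter.test.js.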
Unit tests are generally written in conjunction with components themselves, allowing components to be reused and tested in isolation. This allows you to check that external factors have no bearing on the output of your tests. Most command-line JavaScript testing libraries will integrate with Vue.js; there is a great list available in the awesome Vue GitHub repository (https://github.com/vuejs/awesome-vue#test). Browser automation The alternative to using command-line unit testing is to automate your browser with a testing suite. This kind of testing is still triggered via the command line, but rather than integrating directly with your Vue application, it opens the page in the browser and interacts with it like a user would. A popular tool for doing this is Nightwatch.js (http://nightwatchjs.org/). You may use this suite for opening your shop and interacting with the filtering component or product list ordering and comparing the result. The tests are written in very colloquial English and are not restricted to being on the same domain name or file network as the site to be tested. The library is also language agnostic – working for any website regardless of what it is built with. The example Nightwatch.js gives on their website is for opening Google and ensuring the first result of a Google search for rembrandt van rijn is the Wikipedia entry: module.exports = { 'Demo test Google' : function (client) { client .url('http://www.google.com') .waitForElementVisible('body', 1000) .assert.title('Google') .assert.visible('input[type=text]') .setValue('input[type=text]', 'rembrandt van rijn') .waitForElementVisible('button[name=btnG]', 1000) .click('button[name=btnG]') .pause(1000) .assert.containsText('ol#rso li:first-child', 'Rembrandt - Wikipedia') .end(); } }; An alternative to Nightwatch is Selenium (http://www.seleniumhq.org/). Selenium has the advantage of having a Firefox extension that allows you to visually create tests and commands. We covered usage of Vue.js dev tools and learned to build automated tests for your web applications. If you found this tutorial useful, do check out the book Vue.js 2.x by Example and get complete knowledge resource on the process of building single-page applications with Vue.js. Building your first Vue.js 2 Web application 5 web development tools will matter in 2018
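As a complement to the browser-automation approach, a component-level unit test written with Jest and vue-test-utils could look roughly like the sketch below; the component name and import path are invented for illustration and are not part of the book's example app:

// filter.spec.js - illustrative only; the component name and path are assumptions.
import { shallowMount } from '@vue/test-utils';
import FilterComponent from '@/components/FilterComponent.vue';

describe('FilterComponent', () => {
  it('renders the Filter label', () => {
    const wrapper = shallowMount(FilterComponent);
    expect(wrapper.text()).toContain('Filter');
  });
});

Because the component is mounted in isolation, the test stays fast and is unaffected by the rest of the application.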
How to Scaffold a New module in Odoo 11
Sugandha Lahoti
25 May 2018
2 min read
The latest version of Odoo ERP, Odoo 11, brings a plethora of features to Odoo targeting business application development. The market for Odoo is growing enormously and if you have thought about developing in Odoo, now is the best time to start. This hands-on video course, Odoo 11 Development Essentials, by Riste Kabranov, will help you get started with Odoo to build powerful applications. What is Scaffolding? With Scaffolding, you can automatically create a skeleton structure to simplify bootstrapping of new modules in Odoo. Since it's an automatic process, you don't need to spend effort setting up the basic structure or looking up the starting requirements. Odoo has a scaffold command that creates the skeleton for a new module based on a template. By default, the new module is created in the current working directory, but we can provide a specific directory in which to create the module by passing it as an additional parameter. A step-by-step guide to scaffold a new module in Odoo 11: Step 1 In the first step, you need to navigate to /opt/odoo/odoo and create a folder named custom_addons. Step 2 In the second step, you scaffold a new module into the custom_addons folder. For this: Locate odoo-bin. Use ./odoo-bin scaffold module_name folder_name to scaffold a new empty module. Check if the new module is there and contains all the files needed. Check out the video for a more detailed walkthrough! This video tutorial has been taken from Odoo 11 Development Essentials. To learn how to build and customize business applications with Odoo, buy the full video course. ERP tool in focus: Odoo 11 Building Your First Odoo Application Top 5 free Business Intelligence tools
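To make Steps 1 and 2 above concrete, here is a rough shell transcript; the module name my_module is made up, and the exact file listing may differ slightly between Odoo builds:

$ cd /opt/odoo/odoo
$ mkdir custom_addons
$ ./odoo-bin scaffold my_module custom_addons
$ ls custom_addons/my_module
__init__.py  __manifest__.py  controllers  demo  models  security  views

Once the skeleton is in place, add the custom_addons directory to the addons_path option in your Odoo configuration so the new module shows up in the Apps list.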
How to create and configure an Azure Virtual Machine
Gebin George
25 May 2018
13 min read
Creating virtual machines on Azure gives you on-demand, high-scale, secure, virtualized infrastructure using Windows Server. Virtual machines help you deploy and scale applications easily. In this article, we will learn how to run an Azure Virtual Machine. This tutorial is an excerpt from the book, Hands-On Networking with Azure, written by Mohamed Waly. This book will help you efficiently monitor, diagnose, and troubleshoot Azure Networking. Creating an Azure VM is a very straightforward process – all you have to do is follow the given steps: Navigate to the Azure portal and search for Virtual Machines, as shown in the following screenshot: Figure 3.1: Searching for Virtual Machines Once the VM blade is opened, you can click on +Add to create a new VM, as shown in the following screenshot: Figure 3.2: Virtual Machines blade Once you have clicked on +Add, a new blade will pop up where you have to search for and select the desired OS for the VM, as shown in the following screenshot: Figure 3.3: Searching for Windows Server 2016 OS for the VM Once the OS is selected, you need to select the deployment model, whether that be Resource Manager or Classic, as shown in the following screenshot: Figure 3.4: Selecting the deployment model Once the deployment model is selected, a new blade will pop up where you have to specify the following: Name: Specify the name of the VM. VM disk type: Specify whether the disk type will be SSD or HDD. Consider that SSD will offer consistent, low-latency performance, but will incur more charges. Note that this option is not available for the Classic model in this blade, but is available in the Configure optional features blade. User name: Specify the username that will be used to log on to the VM. Password: Specify the password, which must be between 12 and 123 characters long and must contain three of the following: one lowercase character, one uppercase character, one number, and one special character that is not '\' or '-'. Subscription: This specifies the subscription that will be charged for the VM usage. Resource group: This specifies the resource group within which the VM will exist. Location: Specify the location in which the VM will be created. It is recommended that you select the nearest location to you. Save money: Here, you specify whether you own Windows Server licenses with active Software Assurance (SA). If you do, Azure Hybrid Benefit is recommended to save compute costs. For more information, you can check the Azure Hybrid Benefit documentation. Figure 3.5: Configure the VM basic settings Once you have clicked on OK, a new blade will pop up where you have to specify the VM size by browsing the VM series and selecting the one that fulfils your needs, as shown in the following screenshot: Figure 3.6: Select the VM size Once the VM size has been specified, you need to specify the following settings: Availability set: This option provides high availability for the VM by placing the VMs of the same application within the same availability set. Here, the VMs will be in different fault and update domains, granting the VMs high availability (up to 99.95% of Azure SLA). Use managed disks: Enable this feature to have Azure automatically manage the availability of disks to provide data redundancy and fault tolerance without creating and managing storage accounts on your own. This setting is not available in the Classic model. Virtual network: Specify the virtual network to which you want to assign the VM.
Subnet: Select the subnet within the virtual network that you specified earlier to assign the VM to. Public IP address: Either select an existing public IP address or create a new one. Network security group (firewall): Select the NSG you want to assign to the VM NIC. This is called endpoints in the Classic model. Extensions: You can add more features to the VM using extensions, such as configuration management, antivirus protection, and so on. Auto-shutdown: Specify whether you want to shut down your VM daily or not; if you do, you can set a schedule. Considering that this option will help you saving compute cost especially for dev and test scenarios. This is not available in the Classic model. Notification before shutdown: Check this if you enabled Auto-shutdown and want to subscribe for notifications before the VM shuts down. This is not available in the Classic model. Boot diagnostics: This captures serial console output and screenshots of the VM running on a host to help diagnose start up issues. Guest OS diagnostics: This obtains metrics for the VM every minute; you can use these metrics to create alerts and stay informed of your applications. Diagnostics storage account: This is where metrics are written, so you can analyze them with your own tools. Figure 3.7: Specify more settings for the VM Enabling Boot diagnostics and Guest diagnostics will incur more charges since the diagnostics will need a dedicated storage account to store their data. Finally, once you are done with its settings, Azure will validate those you have specified and summarize them, as shown in the following screenshot: Figure 3.8: VM Settings Summary Once clicked on, Create the VM will start the creation process, and within minutes the VM will be created. Once the VM is created, you can navigate to the Virtual Machines blade to open the VM that has been created, as shown in the following screenshot: Figure 3.9: The created VM overview To connect to the VM, click on Connect, where a pre-configured RDP file with the required VM information will be downloaded. Open the RDP file. You will be asked to enter the username and password you specified for the VM during its configuration, as shown in the following screenshot: Figure 3.10: Entering the VM credentials Voila! You should now be connected to the VM. Azure VMs networking There are many network configurations that can be done for the VM. You can add additional NICs, change the private IP address, set a private or public IP address to be either static or dynamic, and you can change the inbound and outbound security rules. Adding inbound and outbound rules Adding inbound and outbound security rules to the VM NIC is a very simple process; all you need to do is follow these steps: Navigate to the desired VM. Scroll down to Networking, under SETTINGS, as shown in the following screenshot: Figure 3.11: VM networking settings To add inbound and outbound security rules, you have to click on either Add inbound or Add outbound. Once clicked on, a new blade will pop up where you have to specify settings using the following fields: Service: The service specifies the destination protocol and port range for this rule. Here, you can choose a predefined service, such as RDP or SSH, or provide a custom port range. Port ranges: Here, you need to specify a single port, a port range, or a comma-separated list of single ports or port ranges. Priority: Here, you enter the desired priority value. Name: Specify a name for the rule here. 
Description: Write a description for the rule that relates to it here. Figure 3.12: Adding an inbound rule Once you have clicked OK, the rule will be applied. Note that the same process applies when adding an outbound rule. Adding an additional NIC to the VM Adding an additional NIC starts from the same blade as adding inbound and outbound rules. To add an additional NIC, you have to follow the given steps: Before adding an additional NIC to the VM, you need to make sure that the VM is in a Stopped (Deallocated) status. Navigate to Networking on the desired VM. Click on Attach network interface, and a new blade will pop up. Here, you have to either create a network interface or select an existing one. If you are selecting an existing interface, simply click on OK and you are done. If you are creating a new interface, click on Create network interface, as shown in the following screenshot: Figure 3.13: Attaching network interface A new blade will pop up where you have to specify the following: Name: The name of the new NIC. Virtual network: This field will be grayed out because you cannot attach a VM's NIC to different virtual networks. Subnet: Select the desired subnet within the virtual network. Private IP address assignment: Specify whether you want to allocate this IP dynamically or statically. Network security group: Specify an NSG to be assigned to this NIC. Private IP address (IPv6): If you want to assign an IPv6 to this NIC, check this setting. Subscription: This field will be grayed out because you cannot have a VM's NIC in a different subscription. Resource group: Specify the resource group to which the NIC will exist. Location: This field will be grayed out because you cannot have VM NICs in different locations. Figure 3.14: Specify the NIC settings Once you are done, click Create. Once the network interface is created, you will return to the previous blade. Here, you need to specify the NIC you just created and click on OK, as shown in the following screenshot: Figure 3.15: Attaching the NIC Configuring the NICs The Network Interface Cards (NICs) include some configuration that you might be interested in. They are as follows: To navigate to the desired NIC, you can search for the network interfaces blade, as shown in the following screenshot: Figure 3.16: Searching for network interfaces blade Then, the blade will pop up, from which you can select the desired NIC, as shown in the following screenshot: Figure 3.17: Select the desired NIC You can also navigate back to the VM via | Networking | and then click on the desired NIC, as shown in the following screenshot: Figure 3.18: The VM NIC To configure the NIC, you need to follow the given steps: Once the NIC blade is opened, navigate to IP configurations, as shown in the following screenshot: Figure 3.19: NIC blade overview To enable IP forwarding, click on Enabled and then click Save. Enabling this feature will cause the NIC to receive traffic that is not destined to its own IP address. Traffic will be sent with a different source IP. To add another IP to the NIC, click on Add, and a new blade will pop up, for which you have to specify the following: Name: The name of the IP. Type: This field will be grayed out because a primary IP already exists. Therefore, this one will be secondary. Allocation: Specify whether the allocation method is static or dynamic. IP address: Enter the static IP address that belongs to the same subnet that the NIC belongs to. 
If you have selected dynamic allocation, you cannot enter the IP address statically. Public IP address: Specify whether or not you need a public IP address for this IP configuration. If you do, you will be asked to configure the required settings. Figure 3.20: Configure the IP configuration settings Click on Configure required settings for the public IP address and a new blade will pop up from which you can select an existing public IP address or create a new one, as shown in the following screenshot: Figure 3.21: Create a new public IP address Click on OK and you will return to the blade, as shown in Figure 3.20, with the following warning: Figure 3.22: Warning for adding a new IP address In this case, you need to plan for the addition of a new IP address to ensure that the time the VM is restarted is not during working hours. Azure VNets considerations for Azure VMs Building VMs in Azure is a common task, but to do this task well, and to make it operate properly, you need to understand the considerations of Azure VNets for Azure VMs. These considerations are as follows: Azure VNets enable you to bring your own IPv4/IPv6 addresses and assign them to Azure VMs, statically or dynamically You do not have access to the role that acts as DHCP or provides IP addresses; you can only control the ranges you want to use in the form of address ranges and subnets Installing a DHCP role on one of the Azure VMs is currently unsupported; this is because Azure does not use traditional Layer-2 or Layer-3 topology, and instead uses Layer-3 traffic with tunneling to emulate a Layer-2 LAN Private IP addresses can be used for internal communication; external communication can be done via public IP addresses You can assign multiple private and public IP addresses to a single VM You can assign multiple NICs to a single VM By default, all the VMs within the same virtual network can communicate with each other, unless otherwise specified by an NSG on a subnet within this virtual network The network security group (NSG) can sometimes cause an overhead; without this overhead, however, all VMs within the same subnet would communicate with each other By default, an inbound security rule is created for remote desktops for Windows-based VMs, and SSH for Linux-based VMs The inbound security rules are first applied on the NSG of the subnet and then the VM NIC NSG – for example, if the subnet's NSG allows HTTP traffic, it will pass through it; however, it may not reach its destination if the VM NIC NSG does not allow it The outbound security rules are applied for the VM NIC NSG first, and then applied on the subnet NSG Multiple NICs assigned to a VM can exist in different subnets Azure VMs with multiple NICs in the same availability set do not have to have the same number of NICs, but the VMs must have at least two NICs When you attach an NIC to a VM, you need to ensure that they exist in the same location and subscription The NIC and the VNet must exist in the same subscription and location The NIC's MAC address cannot be changed until the VM to which the NIC is assigned is deleted Once the VM is created, you cannot change the VNet to which it is assigned; however, you can change the subnet to which the VM is assigned You cannot attach an existing NIC to a VM during its creation, but you can add an existing NIC as an additional NIC By default, a dynamic public IP address is assigned to the VM during creation, but this address will change if the VM is stopped or deleted; to ensure it will not change, you need to ensure 
its IP address is static. In a multi-NIC VM, the NSG that is applied to one NIC does not affect the others.
If you found this post useful, do check out the book Hands-On Networking with Azure, to design and implement Azure networking for Azure VMs.
Read more:
Introducing Azure Sphere – A secure way of running your Internet of Things devices
Learn Azure Serverless computing for free: Download e-book
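As a companion to the portal walkthrough above, the same NIC attachment can be scripted. The following Azure CLI sketch is not from the book: the resource group, VM, virtual network, and subnet names (myRG, myVM, myVNet, backend) are placeholders, and it assumes the CLI is already signed in to the right subscription:
# The VM must be deallocated before an extra NIC can be attached, as noted earlier.
az vm deallocate --resource-group myRG --name myVM
# Create the new NIC in an existing subnet of the same virtual network and location.
az network nic create --resource-group myRG --name myVM-nic2 --vnet-name myVNet --subnet backend
# Attach it to the VM, then power the VM back on.
az vm nic add --resource-group myRG --vm-name myVM --nics myVM-nic2
az vm start --resource-group myRG --name myVM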

Getting started with Digital forensics using Autopsy

Savia Lobo
24 May 2018
10 min read
Digital forensics involves the preservation, acquisition, documentation, analysis, and interpretation of evidence from various storage media types. It is not only limited to laptops, desktops, tablets, and mobile devices but also extends to data in transit which is transmitted across public or private networks. In this tutorial, we will cover how one can carry out digital forensics with Autopsy. Autopsy is a digital forensics platform and graphical interface to the sleuth kit and other digital forensics tools. This article is an excerpt taken from the book, 'Digital Forensics with Kali Linux', written by Shiva V.N. Parasram. Let's proceed with the analysis using the Autopsy browser by first getting acquainted with the different ways to start Autopsy. Starting Autopsy Autopsy can be started in two ways. The first uses the Applications menu by clicking on Applications | 11 - Forensics | autopsy: Alternatively, we can click on the Show applications icon (last item in the side menu) and type autopsy into the search bar at the top-middle of the screen and then click on the autopsy icon: Once the autopsy icon is clicked, a new terminal is opened showing the program information along with connection details for opening The Autopsy Forensic Browser. In the following screenshot, we can see that the version number is listed as 2.24 with the path to the Evidence Locker folder as /var/lib/autopsy: To open the Autopsy browser, position the mouse over the link in the terminal, then right-click and choose Open Link, as seen in the following screenshot: Creating a new case To create a new case, follow the given steps: When the Autopsy Forensic Browser opens, investigators are presented with three options. Click on NEW CASE: Enter details for the Case Name, Description, and Investigator Names. For the Case Name, I've entered SP-8-dftt, as it closely matches the image name (8-jpeg-search.dd), which we will be using for this investigation. Once all information is entered, click NEW CASE: Several investigator name fields are available, as there may be instances where several investigators may be working together. The locations of the Case directory and Configuration file are displayed and shown as created.  It's important to take note of the case directory location, as seen in the screenshot: Case directory (/var/lib/autopsy/SP-8-dftt/) created. Click ADD HOST to continue: Enter the details for the Host Name (name of the computer being investigated) and the Description of the host. Optional settings: Time zone: Defaults to local settings, if not specified Timeskew Adjustment: Adds a value in seconds to compensate for time differences Path of Alert Hash Database: Specifies the path of a created database of known bad hashes Path of Ignore Hash Database: Specifies the path of a created database of known good hashes similar to the NIST NSRL: Click on the ADD HOST button to continue. Once the host is added and directories are created, we add the forensic image we want to analyze by clicking the ADD IMAGE button: Click on the ADD IMAGE FILE button to add the image file: To import the image for analysis, the full path must be specified. On my machine, I've saved the image file (8-jpeg-search.dd) to the Desktop folder. As such, the location of the file would be /root/Desktop/ 8-jpeg-search.dd. For the Import Method, we choose Symlink. This way the image file can be imported from its current location (Desktop) to the Evidence Locker without the risks associated with moving or copying the image file. 
If you are presented with the following error message, ensure that the specified image location is correct and that the forward slash (/) is used: Upon clicking Next, the Image File Details are displayed. To verify the integrity of the file, select the radio button for Calculate the hash value for this image, and select the checkbox next to Verify hash after importing? The File System Details section also shows that the image is of a ntfs partition. Click on the ADD button to continue: After clicking the ADD button in the previous screenshot, Autopsy calculates the MD5 hash and links the image into the evidence locker. Press OK to continue: At this point, we're just about ready to analyze the image file. If there are multiple cases listed in the gallery area from any previous investigations you may have worked on, be sure to choose the 8-jpeg-search.dd file and case: Before proceeding, we can click on the IMAGE DETAILS option. This screen gives detail such as the image name, volume ID, file format, file system, and also allows for the extraction of ASCII, Unicode, and unallocated data to enhance and provide faster keyword searches. Click on the back button in the browser to return to the previous menu and continue with the analysis: Before clicking on the ANALYZE button to start our investigation and analysis, we can also verify the integrity of the image by creating an MD5 hash, by clicking on the IMAGE INTEGRITY button: Several other options exist such as FILE ACTIVITY TIMELINES, HASH DATABASES, and so on. We can return to these at any point in the investigation. After clicking on the IMAGE INTEGRITY button, the image name and hash are displayed. Click on the VALIDATE button to validate the MD5 hash: The validation results are displayed in the lower-left corner of the Autopsy browser window: We can see that our validation was successful, with matching MD5 hashes displayed in the results. Click on the CLOSE button to continue. To begin our analysis, we click on the ANALYZE button: Analysis using Autopsy Now that we've created our case, added host information with appropriate directories, and added our acquired image, we get to the analysis stage. After clicking on the ANALYZE button (see the previous screenshot), we're presented with several options in the form of tabs, with which to begin our investigation: Let's look at the details of the image by clicking on the IMAGE DETAILS tab. In the following snippet, we can see the Volume Serial Number and the operating system (Version) listed as Windows XP: Next, we click on the FILE ANALYSIS tab. This mode opens into File Browsing Mode, which allows the examination of directories and files within the image. Directories within the image are listed by default in the main view area: In File Browsing Mode, directories are listed with the Current Directory specified as C:/. For each directory and file, there are fields showing when the item was WRITTEN, ACCESSED, CHANGED, and CREATED, along with its size and META data: WRITTEN: The date and time the file was last written to ACCESSED: The date and time the file was last accessed (only the date is accurate) CHANGED: The date and time the descriptive data of the file was modified CREATED: The data and time the file was created META: Metadata describing the file and information about the file: For integrity purposes, MD5 hashes of all files can be made by clicking on the GENERATE MD5 LIST OF FILES button. 
Investigators can also make notes about files, times, anomalies, and so on, by clicking on the ADD NOTE button: The left pane contains four main features that we will be using: Directory Seek: Allows for the searching of directories File Name Search: Allows for the searching of files by Perl expressions or filenames ALL DELETED FILES: Searches the image for deleted files EXPAND DIRECTORIES: Expands all directories for easier viewing of contents By clicking on EXPAND DIRECTORIES, all contents are easily viewable and accessible within the left pane and main window. The + next to a directory indicates that it can be further expanded to view subdirectories (++) and their contents: To view deleted files, we click on the ALL DELETED FILES button in the left pane. Deleted files are marked in red and also adhere to the same format of WRITTEN, ACCESSED, CHANGED, and CREATED times. From the following screenshot, we can see that the image contains two deleted files: We can also view more information about this file by clicking on its META entry. By viewing the metadata entries of a file (last column to the right), we can also view the hexadecimal entries for the file, which may give the true file extensions, even if the extension was changed. In the preceding screenshot, the second deleted file (file7.hmm) has a peculiar file extension of .hmm. Click on the META entry (31-128-3) to view the metadata: Under the Attributes section, click on the first cluster labelled 1066 to view header information of the file: We can see that the first entry is .JFIF, which is an abbreviation for JPEG File Interchange Format. This means that the file7.hmm file is an image file but had its extension changed to .hmm. Sorting files Inspecting the metadata of each file may not be practical with large evidence files. For such an instance, the FILE TYPE feature can be used. This feature allows for the examination of existing (allocated), deleted (unallocated), and hidden files. Click on the FILE TYPE tab to continue: Click Sort files into categories by type (leave the default-checked options as they are) and then click OK to begin the sorting process: Once sorting is complete, a results summary is displayed. In the following snippet, we can see that there are five Extension Mismatches: To view the sorted files, we must manually browse to the location of the output folder, as Autopsy 2.4 does not support viewing of sorted files. To reveal this location, click on View Sorted Files in the left pane: The output folder locations will vary depending on the information specified by the user when first creating the case, but can usually be found at /var/lib/autopsy/<case name>/<host name>/output/sorter-vol#/index.html. Once the index.html file has been opened, click on the Extension Mismatch link: The five listed files with mismatched extensions should be further examined by viewing metadata content, with notes added by the investigator. Reopening cases in Autopsy Cases are usually ongoing and can easily be restarted by starting Autopsy and clicking on OPEN CASE: In the CASE GALLERY, be sure to choose the correct case name and, from there, continue your examination: To recap, we looked at forensics using the Autopsy Forensic Browser with The Sleuth Kit. Compared to individual tools, Autopsy has case management features and supports various types of file analysis, searching, and sorting of allocated, unallocated, and hidden files. Autopsy can also perform hashing on a file and directory levels to maintain evidence integrity. 
If you enjoyed reading this article, do check out 'Digital Forensics with Kali Linux' to take your forensic abilities and investigations to a professional level, catering to all aspects of a digital forensic investigation from hashing to reporting.
Read next:
What is Digital Forensics?
IoT Forensics: Security in an always connected world where things talk
Working with Forensic Evidence Container Recipes
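Since Autopsy is a browser front end to The Sleuth Kit, the same checks can also be reproduced from a terminal. The commands below are a rough companion sketch rather than part of the walkthrough: the image path matches the example above, 31 is the MFT entry taken from the 31-128-3 META address of file7.hmm, and the recovered file name is made up:
# Verify image integrity, as the IMAGE INTEGRITY button does
md5sum /root/Desktop/8-jpeg-search.dd
# Show file system details (NTFS in this case), similar to IMAGE DETAILS
fsstat /root/Desktop/8-jpeg-search.dd
# Recursively list allocated and deleted files; deleted entries are flagged with an asterisk
fls -r /root/Desktop/8-jpeg-search.dd
# Inspect the metadata of the renamed file7.hmm, as the META view does
istat /root/Desktop/8-jpeg-search.dd 31
# Extract its content so the JFIF header can be confirmed
icat /root/Desktop/8-jpeg-search.dd 31 > file7.recovered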

What matters on an engineering resume? Hacker Rank report says skills, not certifications

Richard Gall
24 May 2018
6 min read
Putting together an engineering resume can be a real headache. What should you include? How can you best communicate your experience and skills? Software engineers are constantly under pressure to deliver new projects and fix problems while learning new skills. Documenting the complexity of developer life in a straightforward and marketable manner is a challenge to say the least. Luckily, hiring managers and tech recruiters today recognize just how difficult communicating skill and competency in an engineering resume can be. A report by Hacker Rank revealed that the things that feature on a resume aren't that highly valued by recruiters and hiring managers. However, skills does remain top of the agenda: the question, really, is about how we demonstrate and communicate those skills. The quality of your previous experience matters on an engineering resume Hacker Rank found that hiring managers and tech recruiters value previous experience over everything else. 77% of survey respondents said previous experience was one of the 3 most important qualifications before a formal interview. In second place was years of experience with 46%. The difference between the two is subtle but important; it's offers a useful takeaway for engineers creating an engineering resume. Essentially, the quality of your experience is more important than the quantity of your experience. You need to make sure you communicate the details of your employment experiences. It sounds obvious but it's worth stating: applying for an engineering job isn't just a competition based on who has the most experience. You should explain the nature of the projects you are working on. The skills you used are essential, but being clear about how the project supported wider strategic or tactical goals is also important. This demonstrates not only your skills, but also your contextual awareness. It suggests to a hiring manager or recruiter you not only have the competence, but that you are also a team player with commercial awareness. Certifications aren't that important on your resume One of the most interesting insights from the Hacker Rank report was that both hiring managers and recruiters don't really care about certifications any more. Less than 16% listed it as one of the 3 most important things they look at during the recruitment process. Does this mean, then, that the certification is well and truly over? At this stage, it's hard to tell. But it does point to a wider cultural change that probably has a lot to do with open source. Because change is built into the reality of open source software, certifications are never going to be able to keep up with what's new and important. The things you learned to pass one year will likely be out of date the next. It probably also says something about the nature of technical roles today. Years ago, engineers would start a job knowing what they were going to be using. The toolchains and tech stacks would be relatively stable and consistent. In this context, certification was like a license, proving you understood the various components of a given tool or suite of tools. But today, it's more important for engineers to prove that they are both adaptable and capable of solving a range of different problems. With that in mind, it's essential to demonstrate your flexibility on your engineering resume. Make it clear that you're able to learn new things quickly, and that you can adapt your skill set to the problems you need to solve. You don't need to look good on paper to get the job... 
but it's going to help Hacker Rank's research also revealed that 75% of recruiters and hiring managers have hired people they initially thought didn't look good on paper. But that doesn't necessarily mean you should stop working on your resume. If anything, what this shows is that if you get your resume right, you could really catch someone's attention. You need to consider everything in your resume. Traditional resumes have a pretty clear structure, whatever job you're applying for, but if Hacker Rank's research tells us anything, it's that a an engineering resume requires a slightly different approach. Personal projects are more important than your portfolio on an engineering resume A further insight from Hacker Rank's report suggests one way you might adopt a different approach to your resume. Responding to the same question as the one we looked at above, 37% said personal projects were one of the 3 most important factors in determining whether to invite a candidate to interview. By contrast, only 22% said portfolio. This seems strange - surely a portfolio offers a deeper insight into someone's professional experience. Personal projects are more like hobbies, right? Personal projects actually tell you so much more about a candidate than a portfolio. A portfolio is largely determined by the work you have been doing. What's more, it's not always that easy to communicate value in a portfolio. Equally, if you've been badly managed, or faced a lack of support, your portfolio might not actually be a good reflection of how good you really are. Personal projects give you an insight into how a person thinks. It shows recruiters what makes an engineer tick. In the workplace your scope for creativity and problem solving might well be limited. With personal projects you're free to test out ideas try new tools. You're able to experiment. So, when you're putting together an engineering resume, make sure you dedicate some time outlining your personal projects. Consider these sorts of questions: Why did you start a project? What did you find interesting? What did you learn? Engineering skills still matter Just because the traditional resume appears to have fallen out of favor, it doesn't mean your skills don't matter. In fact, skill matters more than ever. For a third of hiring managers, skill assessments are the area they want to invest in. This would allow them to judge a candidate's competencies and skills much more effectively than simply looking at a resume. As we've seen, things like personal projects are valuable because they demonstrate skills in a way that is often difficult. They not only prove you have the technical skills you say you have, they also provide a good indication of how you think and how you might approach solving problems. They can help illustrate how you deploy those skills. And when its so easy to learn how to write lines of code (no bad thing, true), showing how you think and apply your skills is a sure fire way to make sure you stand out from the crowd. Read next: How to assess your tech team's skills Are technical skills overrated when hiring tech pros?  ‘Soft’ Skills Every Data Pro Needs

Build powerful progressive web apps with Firebase

Savia Lobo
24 May 2018
19 min read
Progressive Web Apps describe a collection of technologies, design concepts, and Web APIs. They work in tandem to provide an app-like experience on the mobile web. In this tutorial, we'll discuss diverse recipes and show what it takes to turn any application into a powerful, fully-optimized progressive web application using Firebase magic. This article is an excerpt taken from the book,' Firebase Cookbook', written by Houssem Yahiaoui. In this book, you will learn how to create cross-platform mobile apps, integrate Firebase in native platforms, and learn how to monetize your mobile applications using Admob for Android and iOS. Integrating Node-FCM in NodeJS server We'll see how we can fully integrate the FCM (Firebase Cloud Messaging) with any Nodejs application. This process is relatively easy and straightforward, and we will see how it can be done in a matter of minutes. How to do it... Let's ensure that our working environment is ready for our project, so let's install a couple of dependencies in order to ensure that everything will run smoothly: Open your terminal--in case you are using macOS/Linux--or your cmd--if you're using Windows--and write down the following command: ~> npm install fcm-push --save By using this command, we are downloading and installing the fcm-push library locally. If this is executed, it will help with managing the process of sending web push notification to our users. The NodeJS ecosystem has npm as part of it. So in order to have it on your working machine, you will have to download NodeJS and configure your system within. Here's the official NodeJS link; download and install the suitable version to your system: https://nodejs.org/en/download/. Now that we have successfully installed the library locally, in order to integrate it with our applications and start using it, we will just need another line of code. Within your local development project, create a new file that will host the functionality and simply implement the following code: const FCM = require('fcm-push'); Congratulations! We're typically done. In subsequent recipes, we'll see how to manage our way through the sending process and how we can successfully send our user's web push notification. Implementing service workers Service workers were, in fact, the missing piece in the web scenes of the past. They are what give us the feel of reactivity to any state that a web application, after integrating, can have from, for example, a notification message, an offline-state (no internet connection) and more. In this recipe, we'll see how we can integrate service worker into our application. Service workers files are event-driven, so everything that will happen inside them will be event based. Since it's JavaScript, we can always hook up listeners to any event. To do that, we will want to give a special logic knowing that if you will use this logic, the event will behave in its default way. You can read more about service workers and know how you can incorporate them--to achieve numerous awesome features to your application that our book won't cover--from https://developers.google.com/web/fundamentals/getting-started/primers/service-workers. How to do it... Services workers will live in the browser, so including them within your frontend bundle is the most suitable place for them. Keeping that in mind, let's create a file called firebase-messaging-sw.js and manifest.json file. 
The JavaScript file will be our service worker file and will host all major workload, and the JSON file will simply be a metadata config file. After that, ensure that you also create app.js file, where this file will be the starting point for our Authorization and custom UX.  We will look into the importance of each file individually later on, but for now, go back to the firebase-messaging-sw.js file and write the following: //[*] Importing Firebase Needed Dependencies importScripts('https://www.gstatic.com/firebasejs/ 3.5.2/firebase-app.js'); importScripts('https://www.gstatic.com/firebasejs/ 3.5.2/firebase-messaging.js'); // [*] Firebase Configurations var config = { apiKey: "", authDomain: "", databaseURL: "", storageBucket: "", messagingSenderId: "" }; //[*] Initializing our Firebase Application. firebase.initializeApp(config); // [*] Initializing the Firebase Messaging Object. const messaging = firebase.messaging(); // [*] SW Install State Event. self.addEventListener('install', function(event) { console.log("Install Step, let's cache some files =D"); }); // [*] SW Activate State Event. self.addEventListener('activate', function(event) { console.log('Activated!', event);}); Within any service worker file, the install event is always fired first. Within this event, we can handle and add custom logic to any event we want. This can range from saving a local copy of our application in the browser cache to practically anything we want. Inside your metadata file, which will be the manifest.json files, write the following line: { "name": "Firebase Cookbook", "gcm_sender_id": "103953800507" } How it works... For this to work, we're doing the following: Importing using importScripts while considering this as the script tag with the src attribute in HTML, the firebase app, and the messaging libraries. Then, we're introducing our Firebase config object; we've already discussed where you can grab that object content in the past chapters. Initializing our Firebase app with our config file. Creating a new reference from the firebase.messaging library--always remember that everything in Firebase starts with a reference. We're listening to the install and activate events and printing some stdout friendly message to the browser debugger. Also, within our manifest.json file, we're adding the following metadata: The application name (optional). The gcm_sender_id with the given value. Keep in mind that this value will not change for any new project you have or will create in the future. The gcm_sender_id added line might get deprecated in the future, so keep an eye on that. Implementing sending/receiving registration using Socket.IO By now, we've integrated our FCM server and made our service worker ready to host our awesome custom logic. Like we mentioned, we're about to send web push notifications to our users to expand and enlarge their experience without application. Lately, web push notification is being considered as an engagement feature that any cool application nowadays, ranging from Facebook and Twitter to numerous e-commerce sites, is making good use of. So in the first approach, let's see how we can make it happen with Socket.IO. In order to make the FCM server aware of any client--basically, a browser--Browser OEM has what we call a registration_id. This token is a unique token for every browser that will represent our clients by their browsers and needs to be sent to the FCM server. Each browser generates its own registration_id token. 
So if your user uses, for instance, chrome for their first interaction with the server and they used firefox for their second experience, the web push notification message won't be sent, and they need to send another token so that they can be notified. How to do it... Now, go back to your NodeJS project that was created in the first recipe. Let's download the node.js socket.io dependency: ~> npm install express socket.io --save Socket.io is also event-based and lets us create our custom event for everything we have, plus some native one for connections. Also, we've installed ExpressJS for configuration sake in order to create a socket.io server. Now after doing that, we need to configure our socket.io server using express. In this case, do as shown in the following code: const express = require('express'); const app = express(); app.io = require('socket.io')(); // [*] Configuring our static files. app.use(express.static('public/')); // [*] Configuring Routes. app.get('/', (req, res) => { res.sendFile(__dirname + '/public/index.html'); }); // [*] Configuring our Socket Connection. app.io.on('connection', socket => { console.log('Huston ! we have a new connection ...'); }) So let's discuss the preceding code, where we are simply doing the following: Importing express and creating a new express app. Using the power of object on the fly, we've included the socket.io package over a new sub-object over the express application. This integration makes our express application now support the usage of  socket.io over the Application. We're indicating that we want to use the public folder as our static files folder, which will host our HTML/CSS/Javascript/IMG resources. We're listening to the connection, which will be fired once we have a new upcoming client. Once a new connection is up, we're printing a friendly message over our console. Let's configure our frontend. Head directly to your index.html files located in the public folder, and add the following line at the head of your page: <script src="/socket.io/socket.io.js"></script> The socket.io.js file will be served in application launch time, so don't worry much if you don't have one locally. Then, at the bottom of our index.html file before the closing of our <body> tag, add the following: <script> var socket = io.connect('localhost:3000'); </script> In the preceding script, we're connecting our frontend to our socket.io backend. Supposedly, our server is in port 3000; this way, we ensured that our two applications are running in sync. Head to your app.js files--created in the earlier recipes; create and import it if you didn't do so already--and introduce the following: //[*] Importing Firebase Needed Dependencies importScripts('https://www.gstatic.com/firebasejs/ 3.5.2/firebase-app.js'); importScripts('https://www.gstatic.com/firebasejs/ 3.5.2/firebase-messaging.js'); // [*] Firebase Configurations var config = { apiKey: "", authDomain: "", databaseURL: "", storageBucket: "", messagingSenderId: "" }; //[*] Initializing our Firebase Application. firebase.initializeApp(config); // [*] Initializing the Firebase Messaging Object. const messaging = firebase.messaging(); Everything is great; don't forget to import the app.js file within your index.html file. We will now see how we can grab our registration_id token next: As I explained earlier, the registration token is unique per browser. However, in order to get it, you should know that this token is considered a privacy issue. 
To ensure that they do not fall into the wrong hands, it's not available everywhere and to get it, you need to ask for the Browser User Permission. Since you can use it in any particular application, let's see how we can do that. The registration_id token will be considered as security and privacy threat to the user in case this token has been compromised, because once the attacker or the hacker gets the tokens, they can send or spam users with push messages that might have a malicious intent within, so safely keeping and saving those tokens is a priority. Now, within your app.js files that we created early on, let's add the following lines of code underneath the Firebase messaging reference: messaging.requestPermission() .then(() => { console.log("We have permission !"); return messaging.getToken(); }) .then((token) => { console.log(token); socket.emit("new_user", token); }) .catch(function(err) { console.log("Huston we have a problem !",err); }); We've sent out token using the awesome feature of socket.io. In order to get it now, let's simply listen to the same event and hope that we will get some data over our NodeJS backend. We will now proceed to learn about receiving registration token: Back to the app.js file, inside our connection event, let's add the following code: socket.on('new_user', (endpoint) => { console.log(endpoint); //TODO : Add endpoint aka.registration_token, to secure place. }); Our socket.io logic will look pretty much as shown in the following code block: // [*] Configuring our Socket Connection. app.io.on('connection', socket => { console.log('Huston ! we have a new connection ...'); socket.on('new_user', (endpoint) => { console.log(endpoint); //TODO : Add endpoint aka.registration_token, to secure place. }); }); Now, we have our bidirectional connection set. Let's grab that regitration_token and save it somewhere safe for further use. How it works... In the first part of the recipe, we did the following: Importing using importScripts considering it a script tag with src attribute in HTML, the firebase app, and messaging libraries. Then, we're introducing our Firebase Config object. We've already discussed where you can grab that object content in the previous chapters. Initializing our Firebase app with our config file. Creating a new reference from firebase.messaging library--always remember that everything in Firebase starts with a reference. Let's discuss what we did previously in the section where we talked about getting the registration_token value: We're using the Firebase messaging reference that we created earlier and executing the requestPermission() function, which returns a promise. Next--assuming that you're following along with this recipe--you will do the following over your page. After launching the development server, you will get a request from your page (Figure 1): Now, let's get back to the code. If we allow the notification, the promise resolver will be executed and then we will return the registration_token value from themessaging.getToken(). Then, we're sending that token over a socket and emitting that with an even name of new_user. Socket.io uses web sockets and like we explained before, sockets are event based. So in order to have a two-way connection between nodes, we need to emit, that is, send an event after giving it a name and listening to the same event with that name. Remember the socket.io event's name because we'll incorporate that into our further recipes. 
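The TODO left in the new_user handler still needs a home for those tokens. In production a database is the right place, but as a minimal sketch (not the book's implementation) you can de-duplicate them in memory and flush them to a JSON file; the tokens.json path and the saveToken name are made up for illustration:
const fs = require('fs');
const TOKEN_FILE = './tokens.json'; // hypothetical path; keep it outside the public/ folder
let tokens = new Set();
try {
  tokens = new Set(JSON.parse(fs.readFileSync(TOKEN_FILE, 'utf8')));
} catch (err) {
  // first run: no token file yet
}
function saveToken(endpoint) {
  if (!endpoint || tokens.has(endpoint)) return; // ignore empty or already-known tokens
  tokens.add(endpoint);
  fs.writeFileSync(TOKEN_FILE, JSON.stringify([...tokens]));
}
With that in place, the listener inside the connection event simply becomes socket.on('new_user', (endpoint) => saveToken(endpoint));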
Implementing sending/receiving registration using post requests In a different approach from the one used in Socket.io, let's explore the other way around using post requests. This means that we'll use a REST API that will handle all that for us. At the same time, it will handle saving the registration_token value in a secure place as well. So let's see how we can configure that. How to do it... First, let's start writing our REST API. We will do that by creating an express post endpoint. This endpoint will be porting our data to the server, but before that, let's install some dependencies using the following line: ~> npm install express body-parser --save Let's discuss what we just did: We're using npm to download ExpressJS locally to our development directory. Also, we're downloading body-parser. This is an ExpressJS middleware that will host all post requests data underneath the body subobject. This module is quite a common package in the NodeJS community, and you will find it pretty much everywhere. Now, let's configure our application. Head to the app.js file and add the following code lines there: const express = require('express'); const app = express(); const bodyParser = require('body-parser'); // [*] Configuring Body Parser. app.use(bodyParser.json()); // [*] Configuring Routes. app.post('/regtoken', (req, res) => { let reg_token =req.body.regtoken; console.log(reg_token); //TODO : Create magic while saving this token in secure place. }); In the preceding code, we're doing the following: Importing our dependencies including ExpressJS and BodyParser. In the second step, we're registering the body parser middleware. You can read more about body parser and how to properly configure it to suit your needs from https://github.com/expressjs/body-parser. Next, we're creating an express endpoint or a route. This route will host our custom logic to manage the retrieval of the registration token sent from our users. Now, let's see how we can send the registration token to form our user's side. In this step, you're free to use any HTTP client you seek. However, in order to keep things stable, we'll use the browser native fetch APIs. Since we've managed to fully configure our routes, it can host the functionality we want. Let's see how we can get the registration_token value and send it to our server using post request and the native browser HTTP client named fetch: messaging.requestPermission() .then(() => { console.log("We have permission !"); return messaging.getToken(); }) .then((token) => { console.log(token); //[*] Sending the token fetch("http://localhost:3000/regtoken", { method: "POST" }).then((resp) => { //[*] Handle Server Response. }) .catch(function(err) { //[*] Handle Server Error. }) }) .catch(function(err) { console.log("Huston we have a problem !", err); }); How it works... Let's discuss what we just wrote in the preceding code: We're using the Firebase messaging reference that we created earlier and executing the requestPermission() function that returns a promise. Next, supposing that you're following along with this recipe, after launching the development server you will get the following authorization request from your page: Moving back to the code, if we allow the notification, the promise resolver will be executed and then we will return the registration_token value from the messaging.getToken(). Next, we're using the Fetch API given it a URL as a first parameter and a method name that is post and handling the response and error. 
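One detail worth flagging: as written, the fetch() call above posts to /regtoken without a body, so req.body.regtoken on the server would be undefined. The sketch below shows the same call with the token actually included, matching the route and field name defined earlier; the Content-Type header is what lets bodyParser.json() populate req.body:
fetch("http://localhost:3000/regtoken", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ regtoken: token }) // 'token' is the value returned by messaging.getToken()
})
.then((resp) => {
  //[*] Handle Server Response.
  console.log("Registration token delivered:", resp.status);
})
.catch((err) => {
  //[*] Handle Server Error.
  console.log("Huston we have a problem !", err);
});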
Since we saw the different approaches to exchange data between the client and the server, in the next recipe, we will see how we can receive web push notification messages from the server. Receiving web push notification messages We can definitely say that things are on a good path. We managed to configure our message to the server in the past recipes. Now, let's work on getting the message back from the server in a web push message. This is a proven way to gain more leads and regain old users. It is a sure way to re-engage your users, and success stories do not lie. Facebook, Twitter, and e-commerce websites are the living truth on how a web push message can make a difference in your ecosystem and your application in general. How to do it... Let's see how we can unleash the power of push messages. The API is simple and the way to do has never been easier, so let's how we can do that! Let's write down the following code over our firebase-messaging-sw.js file: // [*] Special object let us handle our Background Push Notifications messaging.setBackgroundMessageHandler(function(payload) { return self.registration.showNotification(payload.data.title, body: payload.data.body); }); Let's explain the preceding code: We're using the already created messaging object created using the Firebase messaging library, and we're calling. the setBackgroundMessageHandler() function. This will mean that we will catch all the messages that we will keep receiving in the background. We're using the service worker object represented in the self-object, and we're calling the showNotification() function and passing it some parameters. The first parameter is the title, and we're grabbing it from the server; we'll see how we can get it in just a second. The second parameter is the body of the message. Now, we've prepared our frontend to received messages. Let's send them from the server, and we will see how we can do that using the following code: var fcm = new FCM('<FCM_CODE>'); var message = { to: data.endpoint, // required fill with device token or topics notification: { title: data.payload.title, body: data.payload.body } }; fcm.send(message) .then(function(response) { console.log("Successfully sent with response: ", response); }) .catch(function(err) { console.log("Something has gone wrong!"); console.error(err); }) }); The most important part is FCM_CODE. You can grab it in the Firebase console by going to the Firebase Project Console and clicking on the Overview tab (Figure 3): Then, go to the CLOUD MESSAGING tab and copy and paste the Server Key in the section (Figure 4): How it works... Now, let's discuss the code that what we just wrote: The preceding code can be put everywhere, which means that you can send push notifications in every part of your application. We're composing our notification message by putting the registration token and the information we want to send. We're using the fcm.send() method in order to send our notification to the wanted users. Congratulations! We're done; now go and test your awesome new functionality! Implementing custom notification messages In the previous recipes, we saw how we can send a normal notification. Let's add some controllers and also learn how to add some pictures to it so that we can prettify it a bit. How to do it... Now write the following code and add it to our earlier code over the messaging.setBackgroundMessageHandler() function. 
So, the end result will look something like this: // [*] Special object let us handle our Background Push Notifications messaging.setBackgroundMessageHandler(function(payload) { const notificationOptions = { body: payload.data.msg, icon: "images/icon.jpg", actions: [ { action : 'like', title: 'Like', image: '<link-to-like-img>' }, { action : 'dislike', title: 'Dislike', image: '<link-to-like-img>' } ] } self.addEventListener('notificationclick', function(event) { var messageId = event.notification.data; event.notification.close(); if (event.action === 'like') { console.log("Going to like something !"); } else if (event.action === 'dislike') { console.log("Going to dislike something !"); } else { console.log("wh00t !"); } }, false); return self.registration.showNotification( payload.data.title,notificationOptions); }); How it works... Let's discuss what we've done so far: We added the notificationOptions object that hosts some of the required metadata, such as the body of the message and the image. Also, in this case, we're adding actions, which means we'll add custom buttons to our notification message. These will range from title to image, the most important part is the action name. Next, we listened on notificationclick, which will be fired each time one of the actions will be selected. Remember the action field we added early on; it'll be the differentiation point between all actions we might add. Then, we returned the notification and showed it using the showNotification() function. We saw how to build powerful progressive applications using Firebase. If you've enjoyed reading this tutorial, do check out, 'Firebase Cookbook', for creating serverless applications with Firebase Cloud Functions or turning your traditional applications into progressive apps with Service workers. What is a progressive web app? Windows launches progressive web apps… that don’t yet work on mobile Hybrid Mobile apps: What you need to know
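One last note: setBackgroundMessageHandler() only fires while the page is not in focus. If you also want to react to pushes that arrive while the user has the tab open, the messaging object exposes onMessage(), which you can hook up in app.js. The snippet below is a small sketch rather than part of the recipe, and the fallback title is invented:
// app.js -- messages received while the page is focused bypass the service worker and land here.
messaging.onMessage(function(payload) {
  console.log("Message received in the foreground:", payload);
  var title = (payload.notification && payload.notification.title) || "Firebase Cookbook";
  var body = (payload.notification && payload.notification.body) || "";
  // Permission was already granted via requestPermission(), so the notification can be shown directly.
  new Notification(title, { body: body });
});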

How to build an Arduino based 'follow me' drone

Vijin Boricha
23 May 2018
9 min read
In this tutorial, we will learn how to train the drone to do something or give the drone artificial intelligence by coding from scratch. There are several ways to build Follow Me-type drones. We will learn easy and quick ways in this article. Before going any further, let's learn the basics of a Follow Me drone. This is a book excerpt from Building Smart Drones with ESP8266 and Arduino written by Syed Omar Faruk Towaha. If you are a hardcore programmer and hardware enthusiast, you can build an Arduino drone, and make it a Follow Me drone by enabling a few extra features. For this section, you will need the following things: Motors ESCs Battery Propellers Radio-controller Arduino Nano HC-05 Bluetooth module GPS MPU6050 or GY-86 gyroscope. Some wires Connections are simple: You need to connect the motors to the ESCs, and ESCs to the battery. You can use a four-way connector (power distribution board) for this, like in the following diagram: Now, connect the radio to the Arduino Nano with the following pin configuration: Arduino pin Radio pin D3 CH1 D5 CH2 D2 CH3 D4 CH4 D12 CH5 D6 CH6 Now, connect the Gyroscope to the Arduino Nano with the following configuration: Arduino pin Gyroscope pin 5V 5V GND GND A4 SDA A5 SCL You are left with the four wires of the ESC signals; let's connect them to the Arduino Nano now, as shown in the following configuration: Arduino pin Motor signal pin D7 Motor 1 D8 Motor 2 D9 Motor 3 D10 Motor 4 Our connection is almost complete. Now we need to power the Arduino Nano and the ESCs. Before doing that, making common the ground means connecting both the wired to the ground. Before going any further, we need to upload the code to the brain of our drone, which is the Arduino Nano. The code is a little bit big. I am going to explain the code after installing the necessary library. You will need a library installed to the Arduino library folder before going to the programming part. The library's name is PinChangeInt. Install the library and write the code for the drone. The full code can be found at Github. Let's explain the code a little bit. In the code, you will find lots of functions with calculations. For our gyroscope, we needed to define all the axes, sensor data, pin configuration, temperature synchronization data, I2C data, and so on. In the following function, we have declared two structures for the accel and gyroscope data with all the directions: typedef union accel_t_gyro_union { struct { uint8_t x_accel_h; uint8_t x_accel_l; uint8_t y_accel_h; uint8_t y_accel_l; uint8_t z_accel_h; uint8_t z_accel_l; uint8_t t_h; uint8_t t_l; uint8_t x_gyro_h; uint8_t x_gyro_l; uint8_t y_gyro_h; uint8_t y_gyro_l; uint8_t z_gyro_h; uint8_t z_gyro_l; } reg; struct { int x_accel; int y_accel; int z_accel; int temperature; int x_gyro; int y_gyro; int z_gyro; } value; }; In the void setup() function of our code, we have declared the pins we have connected to the motors: myservoT.attach(7); //7-TOP myservoR.attach(8); //8-Right myservoB.attach(9); //9 - BACK myservoL.attach(10); //10 LEFT We also called our test_gyr_acc() and test_radio_reciev() functions, for testing the gyroscope and receiving data from the remote respectively. In our test_gyr_acc() function. 
In our test_gyr_acc() function, we have checked if it can detect our gyroscope sensor or not and set a condition if there is an error to get gyroscope data then to set our pin 13 high to get a signal: void test_gyr_acc() { error = MPU6050_read (MPU6050_WHO_AM_I, &c, 1); if (error != 0) { while (true) { digitalWrite(13, HIGH); delay(300); digitalWrite(13, LOW); delay(300); } } } We need to calibrate our gyroscope after testing if it connected. To do that, we need the help of mathematics. We will multiply both the rad_tilt_TB and rad_tilt_LR by 2.4 and add it to our x_a and y_a respectively. then we need to do some more calculations to get correct x_adder and the y_adder: void stabilize() { P_x = (x_a + rad_tilt_LR) * 2.4; P_y = (y_a + rad_tilt_TB) * 2.4; I_x = I_x + (x_a + rad_tilt_LR) * dt_ * 3.7; I_y = I_y + (y_a + rad_tilt_TB) * dt_ * 3.7; D_x = x_vel * 0.7; D_y = y_vel * 0.7; P_z = (z_ang + wanted_z_ang) * 2.0; I_z = I_z + (z_ang + wanted_z_ang) * dt_ * 0.8; D_z = z_vel * 0.3; if (P_z > 160) { P_z = 160; } if (P_z < -160) { P_z = -160; } if (I_x > 30) { I_x = 30; } if (I_x < -30) { I_x = -30; } if (I_y > 30) { I_y = 30; } if (I_y < -30) { I_y = -30; } if (I_z > 30) { I_z = 30; } if (I_z < -30) { I_z = -30; } x_adder = P_x + I_x + D_x; y_adder = P_y + I_y + D_y; } We then checked that our ESCs are connected properly with the escRead() function. We also called elevatorRead() and aileronRead() to configure our drone's elevator and the aileron. We called test_radio_reciev() to test if the radio we have connected is working, then we called check_radio_signal() to check if the signal is working. We called all the stated functions from the void loop() function of our Arduino code. In the void loop() function, we also needed to configure the power distribution of the system. We added a condition, like the following: if(main_power > 750) { stabilize(); } else { zero_on_zero_throttle(); } We also set a boundary; if main_power is greater than 750 (which is a stabling value for our case), then we stabilize the system or we call zero_on_zero_throttle(), which initializes all the values of all the directions. After uploading this, you can control your drone by sending signals from your remote control. Now, to make it a Follow Me drone, you need to connect a Bluetooth module or a GPS. You can connect your smartphone to the drone by using a Bluetooth module (HC-05 preferred) or another Bluetooth module as master-slave usage. And, of course, to make the drone follow you, you need the GPS. So, let's connect them to our drone. To connect the Bluetooth module, follow the following configuration: Arduino pin Bluetooth module pin TX RX RX TX 5V 5V GND GND See the following diagram for clarification: For the GPS, connect it as shown in the following configuration: Arduino pin GPS pin D11 TX D12 RX GND GND 5V 5V See the following diagram for clarification: Since all the sensors usages 5V power, I would recommend using an external 5V power supply for better communication, especially for the GPS. If we use the Bluetooth module, we need to make the drone's module the slave module and the other module the master module. To do that, you can set a pin mode for the master and then set the baud rate to at least 38,400, which is the minimum operating baud rate for the Bluetooth module. Then, we need to check if one module can hear the other module. 
For that, we can write our void loop() function as follows: if(Serial.available() > 0) { state = Serial.read(); } if (state == '0') { digitalWrite(Pin, LOW); state = 0; } else if (state == '1') { digitalWrite(Pin, HIGH); state = 0; } And do the opposite for the other module, connecting it to another Arduino. Remember, you only need to send and receive signals, so refrain from using other utilities of the Bluetooth module, for power consumption and swiftness. If we use the GPS, we need to calibrate the compass and make it able to communicate with another GPS module. We need to read the long value from the I2C, as follows: float readLongFromI2C() { unsigned long tmp = 0; for (int i = 0; i < 4; i++) { unsigned long tmp2 = Wire.read(); tmp |= tmp2 << (i*8); } return tmp; } float readFloatFromI2C() { float f = 0; byte* p = (byte*)&f; for (int i = 0; i < 4; i++) p[i] = Wire.read(); return f; } Then, we have to get the geo distance, as follows, where DEGTORAD is a variable that changes degree to radian: float geoDistance(struct geoloc &a, struct geoloc &b) { const float R = 6371000; // Earth radius float p1 = a.lat * DEGTORAD; float p2 = b.lat * DEGTORAD; float dp = (b.lat-a.lat) * DEGTORAD; float dl = (b.lon-a.lon) * DEGTORAD; float x = sin(dp/2) * sin(dp/2) + cos(p1) * cos(p2) * sin(dl/2) * sin(dl/2); float y = 2 * atan2(sqrt(x), sqrt(1-x)); return R * y; } We also need to write a function for the Geo bearing, where lat and lon are latitude and longitude respectively, gained from the raw data of the GPS sensor: float geoBearing(struct geoloc &a, struct geoloc &b) { float y = sin(b.lon-a.lon) * cos(b.lat); float x = cos(a.lat)*sin(b.lat) - sin(a.lat)*cos(b.lat)*cos(b.lon-a.lon); return atan2(y, x) * RADTODEG; } You can also use a mobile app to communicate with the GPS and make the drone move with you. Then the process is simple. Connect the GPS to your drone and get the TX and RX data from the Arduino and spread it through the radio and receive it through the telemetry, and then use the GPS from the phone with DroidPlanner or Tower. You also need to add a few lines in the main code to calibrate the compass. You can see the previous calibration code. The calibration of the compass varies from location to location. So, I would suggest you use the try-error method. In the following section, I will discuss how you can use an ESP8266 to make a GPS tracker that can be used with your drone. We learned to build a Follow Me-type drone and also used DroidPlanner 2 and Tower to configure it. Know more about using a smartphone to enable the follow me feature of ArduPilot and GPS tracker using ESP8266 from this book Building Smart Drones with ESP8266 and Arduino. Read More
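To tie the GPS helpers together, here is a rough sketch (not from the book) of how the drone-side code could use geoDistance() and geoBearing() to decide when to chase the pilot. It assumes the geoloc struct holds lat and lon in degrees; droneLoc, pilotLoc, followPilot(), and the 5-metre radius are illustrative names and values:
// Hypothetical follow-me check built on the helpers above.
struct geoloc droneLoc;            // filled from the drone's own GPS fix
struct geoloc pilotLoc;            // filled from the phone or second GPS over Bluetooth/telemetry
const float FOLLOW_RADIUS_M = 5.0; // hold position while the pilot is closer than this
void followPilot() {
  float dist = geoDistance(droneLoc, pilotLoc);   // metres between drone and pilot
  float bearing = geoBearing(droneLoc, pilotLoc); // heading from drone to pilot, in degrees
  if (dist > FOLLOW_RADIUS_M) {
    // Steer toward 'bearing' here, for example by nudging the tilt/yaw setpoints used in stabilize().
    Serial.print("Follow: ");
    Serial.print(dist);
    Serial.print(" m at ");
    Serial.println(bearing);
  }
}
Called from loop() once fresh fixes have been parsed, this keeps the drone stationary inside the radius and only commands movement when the pilot walks away.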

Is web development dying?

Richard Gall
23 May 2018
7 min read
It's not hard to find people asking whether web development is dying. A quick search throws up questions on Quora, Reddit, and other forums. "Is web development a dying profession or does it just smell funny?" asks one Reddit user. The usual suspects in the world of content (Forbes et al) have responded with their own takes and think pieces on whether web development is dead. And why wouldn't they? I, for one, would never miss out on an opportunity to write something with an outlandish and provocative headline for clicks. So, is web development dying or simply very unwell? Why do people think web development is dying? The question might seem a bit overwrought, but there are good reasons for people to ask the question. One reason is that getting a website has never been easier or cheaper. Think about it: if you want to create a content site, it doesn't take much to set one up with WordPress. You barely need to be technically literate, let alone a developer. Similarly, if you want an eCommerce store there are plenty of off-the-shelf solutions that allow people to start running an online business with very little work at all. Even if you do want a custom solution, you can now do that pretty cheaply. On the Treehouse forums, one user comments that thanks to sites like SquareSpace, businesses can now purchase a website for less than £100 (about $135). The commenter remarks that whereas he'd typically charge around £3000 for a complete website build, potential clients are coming back puzzled as to why he would think they'd spend so much when they could get the same result for a fraction of the price. From a professional perspective, this sort of anecdotal evidence indicates that it's becoming more and more difficult to be successful in web development. For all the talk around 'learning to code' and the digital economy, maybe building websites isn't the best area to get into. Web development is getting easier When people say web development is dying, they might actually be saying that there isn't as much money in it any more. If freelancers are struggling to charge the rates that they used to, that's because there is someone out there who is going to do it for a lot less money. The reason for this isn't that there's a new generation of web developers able to subsist on a paltry sum of money. It's actually getting a lot easier. Aside from solutions like WordPress and Shopify, the task of building websites from scratch (sort of scratch) is now easier than it has ever been. Are templates killing web development? Templates make everything easy for web developers and designers. Why would you want to do much more than drag and drop templates if you could? If the result looks good and does the job, then why spend time doing more? The more you do yourself, the more you're likely to break things. And the more you break things the more you've got to fix. Of course, templates are lowering the barrier to entry into web development and design. And while we shouldn't be precious about new web developers entering the industry, it is understandable that many experienced web developers are anxious about what the future might hold. From this perspective, templates aren't killing web development, but they are changing what the profession looks like. And without wishing to sound euphemistic, this is both a challenge and an opportunity for everyone in web development. Whether you're experienced or new to the industry, these changes mean people are going to have to adapt. 
Web development isn't dying, it's fragmenting The way web developers are going to have to adapt is by choosing what path they want to take in their career. Web development as we've always known it is, perhaps well and truly dead. Instead, it's fragmenting into specialized areas; design on the one hand, and full-stack on the other. This means your skill set needs to be unique. In a world where building websites takes very little skill or technical knowledge, specific expertise is vital. This is something journalist Andrew Pierno noted in a blog post on Medium. Pierno writes:  ...we are in a scenario where the web developer no longer has the skill set to build that interesting differentiator anymore, particularly if the main value prop is around A.I, computer vision, machine learning, AR, VR, blockchain, etc. Building websites is no longer remarkable - as we've seen, people that can do it are ubiquitous. But building a native application; that's not quite so easy. Building a mobile app that uses computer vision to compare you to Renaissance paintings - that's even harder to do. These are the sorts of things that are going to be valuable - and these are the sorts of things that web developers are going to need to learn how to do. Full-stack development and the expansion of the developer skill set In his piece, Pierno argues that the scope of the web developers role is shrinking. However, I don't think that's quite right. Yes, it might be fragmenting, but the scope of, say, full-stack development, is huge. In fact, full-stack developers need to know a huge range of technologies and tools. If they're to differentiate themselves in the job market, as Pierno suggests they should, they need to know machine learning, they need to know mobile, databases, and maybe even Blockchain. From this perspective, it's not hard to see how the 'web' part of web development might be dying. To some extent, as the web becomes more ubiquitous and less of a rarefied 'space' in people's lives, the more we have to get into the detail of how we utilize the technologies around it. Web development's decline is design's gain If web development as a discipline is dying, that's only going to make design more important. If, as we saw earlier, building websites is going to become a free for all for just about anyone with an internet connection and enough confidence, standards and quality might start to slip. That means the value of someone who understands good design will be higher than ever. As a web developer you might disappear into the ether of everyone else out there. But if you market yourself as a designer, someone who understands the intricacies of UI and UX implicitly, you immediately start to look a little different. Think of it like a sandwich shop - anyone can start making sandwiches. But to make a great sandwich shop, the type that wins awards and the type that people want to Instagram, requires extra attention to detail. It demands more skill and more culinary awareness. Maybe web development is dying, but maybe it just needs to change Clearly, what we call web development is very different in 2018 than what it was 5 years ago. There are a huge number of reasons for this, but perhaps the most important is that it doesn't really make sense to talk about 'the web' any more. Because 'the web' is now an outdated concept, perhaps web development needs to die. Maybe we're holding on to something which is only going to play into the hands of poor design and poor quality software. 
It might even damage the careers of talented engineers and designers.

You could make a pretty good comparison between 'the web' and 'big data'. Even reading those words feels oddly outdated today, but they're still at the center of the tech landscape. Big data, for example, is everywhere - it's hard to imagine our lives outside of it, but it doesn't make sense to talk about it in the abstract. Instead, what's interesting is how it's applied, and how engineers make data accessible, usable, and secure. The same is true of the web. It's not dead, but it has certainly assumed a slightly different form. And web development might well be dying, but the world will always need developers and designers. It's simply time to adapt.

Read next
Why is everyone talking about JavaScript fatigue?
Is novelty ruining web development?

Working with Unity Variables to script powerful Unity 2017 games

Amarabha Banerjee
23 May 2018
12 min read
In this tutorial, you will learn how to work with the different variables available with the Unity 2017 platform. We will show you how to use these variables through use cases in order to script powerful Unity games. This article is an excerpt from the book Learning C# by Developing Games with Unity 2017, written by Micael DaGraca and Greg Lukosek.

Unity, the most popular game engine of our generation, is a preferred choice among game developers because of the flexibility it provides to code and script a game in C#. To understand and leverage the power of C# in your games, it is necessary to get a proper understanding of how C# coding works. We are going to show you exactly that in the sections given below.

Writing C# statements properly

When you do normal writing, it's in the form of a sentence, with a period used to end the sentence. When you write a line of code, it's called a statement, with a semicolon used to end the statement. This is necessary because the console reads the code one line at a time, and the semicolon tells the console that the line of code is over and that it can jump to the next line. (This is happening so fast that it looks like the computer is reading all of them at the same time, but it isn't.) When we start learning how to code, forgetting this detail is very common, so don't forget to check for this error if the code isn't working.

The code for a C# statement does not have to be on a single line, as shown in the following example:

public int number1 = 2;

The statement can be on several lines. Whitespace and carriage returns are ignored, so, if you really want to, you can write it as follows:

public
int
number1 =
2;

However, I do not recommend writing your code like this because it's terrible to read code that is formatted like the preceding code. Nevertheless, there will be times when you'll have to write statements that are longer than one line. Unity won't care. It just needs to see the semicolon at the end.

Understanding component properties in Unity's Inspector

GameObjects have some components that make them behave in a certain way. For instance, select Main Camera and look at the Inspector panel. One of the components is the camera. Without that component, it will cease being a camera. It would still be a GameObject in your scene, just no longer a functioning camera.

Variables become component properties

When we refer to components, we are basically referring to the available functions of a GameObject. For example, the human body has many functions, such as talking, moving, and observing. Now let's say that we want the human body to move faster. What is the function linked to that action? Movement. So, in order to make our body move faster, we would need to create a script that has access to the movement component and then use that to make the body move faster. Just like in real life, different GameObjects can also have different components; for example, the camera component can only be accessed from a camera. There are plenty of components that already exist that were created by Unity's programmers, but we can also write our own components. This means that all the properties that we see in the Inspector are just variables of some type. They simply store data that will be used by some method.
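To make the idea concrete, here is a minimal sketch of such a movement script. The class name playerMovement, the moveSpeed field, and the value 5 are illustrative assumptions that follow the naming convention discussed in the next section; they are not code from the book:

using UnityEngine;

// Attach this script to a GameObject. The public field below shows up
// in the Inspector as the property "Move Speed" and can be tuned there.
public class playerMovement : MonoBehaviour
{
    public float moveSpeed = 5f;    // becomes a component property

    void Update()
    {
        // Move the GameObject forward every frame, using whatever value
        // is currently set for Move Speed in the Inspector.
        transform.Translate(Vector3.forward * moveSpeed * Time.deltaTime);
    }
}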
Unity changes script and variable names slightly

When we create a script, one of the first things that we need to do is give a name to the script, and it's always good practice to use a name that identifies the content of the script. For example, if we are creating a script that is used to control the player movement, ideally that would be the name of the script. The best practice is to write playerMovement, where the first word is uncapitalized and the second one is capitalized. This is the standard way Unity developers name scripts and variables.

Now let's say that we created a script named playerMovement. After assigning that script to a GameObject, we'll see in the Inspector panel that Unity adds a space to separate the words of the name: Player Movement. Unity does this modification to variable names too, where, for example, a variable named number1 is shown as Number 1 and number2 as Number 2. Unity capitalizes the first letter as well. These changes improve readability in the Inspector.

Changing a property's value in the Inspector panel

There are two situations where you can modify a property value:

During the Play mode
During the development stage (not in the Play mode)

When you are in the Play mode, you will see that your changes take effect immediately in real time. This is great when you're experimenting and want to see the results. Write down any changes that you want to keep, because when you stop the Play mode, any changes you made will be lost.

When you are in the Development mode, changes that you make to the property values will be saved by Unity. This means that if you quit Unity and start it again, the changes will be retained. Of course, you won't see the effect of your changes until you click Play.

The changes that you make to the property values in the Inspector panel do not modify your script. The only way your script can be changed is by you editing it in the script editor (MonoDevelop). The values shown in the Inspector panel override any values you might have assigned in your script. If you want to undo the changes you've made in the Inspector panel, you can reset the values to the default values assigned in your script. Click on the cog icon (the gear) on the far right of the component script, and then select Reset.

Displaying public variables in the Inspector panel

You might still be wondering what the word public at the beginning of a variable statement means:

public int number1 = 2;

We mentioned it before. It means that the variable will be visible and accessible. It will be visible as a property in the Inspector panel so that you can manipulate the value stored in the variable. The word also means that it can be accessed from other scripts using the dot syntax.

Private variables

Not all variables need to be public. If there's no need for a variable to be changed in the Inspector panel or accessed from other scripts, it doesn't make sense to clutter the Inspector panel with needless properties. In LearningScript, perform the following steps:

Change line 6 to this: private int number1 = 2;
Then change line 7 to the following: int number2 = 9;
Save the file
In Unity, select Main Camera

You will notice in the Inspector panel that both properties, Number 1 and Number 2, are gone.

Line 6:

private int number1 = 2;

The preceding line explicitly states that the number1 variable has to be private. Therefore, the variable is no longer a property in the Inspector panel.
It is now a private variable for storing data.

Line 7:

int number2 = 9;

The number2 variable is no longer visible as a property either, but you didn't specify it as private. If you don't explicitly state whether a variable is public or private, it will be private by default in C#. It is good coding practice to explicitly state whether a variable will be public or private. So now, when you click Play, the script works exactly as it did before. You just can't manipulate the values manually in the Inspector panel anymore.
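As a quick recap, here is a minimal sketch of what LearningScript could look like after these two edits. The two variable declarations match the statements above, while the surrounding class body and the Start() method are illustrative assumptions rather than the book's exact listing:

using UnityEngine;

public class LearningScript : MonoBehaviour
{
    private int number1 = 2;   // explicitly private: no longer shown in the Inspector
    int number2 = 9;           // no modifier: implicitly private in C#

    void Start()
    {
        // Both variables are still usable from code; they simply can no
        // longer be edited as properties in the Inspector panel.
        Debug.Log(number1 + number2);
    }
}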
Naming Unity variables properly

As we explored previously, naming a script or variable is a very important step. It won't change the way that the code runs, but it will help us to stay organized and, by using best practices, we are avoiding errors and saving time trying to find the piece of code that isn't working.

Always use meaningful names to store your variables. If you don't do that, six months down the line, you will be lost. I'm going to exaggerate here a bit to make a point. Let's say you name a variable as shown in this code:

public bool areRoadConditionsPerfect = true;

That's a descriptive name. In other words, you know what it means just by reading the variable. So 10 years from now, when you look at that name, you'll know exactly what I meant in the previous comment. Now suppose that instead of areRoadConditionsPerfect, you had named this variable as shown in the following code:

public bool perfect = true;

Sure, you know what perfect is, but would you know that it refers to perfect road conditions? I know that right now you'll understand it because you just wrote it, but six months down the line, after writing hundreds of other scripts for all sorts of different projects, you'll look at this word and wonder what you meant. You'll have to read several lines of code you wrote to try to figure it out. You may look at the code and wonder who in their right mind would write such terrible code.

So, take your time to write descriptive code that even a stranger can look at and understand. Believe me, in six months or probably less time, you will be that stranger. Using meaningful names for variables and methods is helpful not only for you but also for any other game developer who will be reading your code. Whether or not you work in a team, you should always write easy-to-read code.

Beginning variable names with lowercase

You should begin a variable name with a lowercase letter because it helps distinguish between a class name and a variable name in your code. There are some other guides in the C# documentation as well, but we don't need to worry about them at this stage. Component names (class names) begin with an uppercase letter. For example, it's easy to know that Transform is a class and transform is a variable. There are, of course, exceptions to this general rule, and every programmer has a preferred way of using lowercase, uppercase, and perhaps an underscore to begin a variable name. In the end, you will have to decide upon a naming convention that you like. If you read the Unity forums, you will notice that there are some heated debates on naming variables. In this book, I will show you my preferred way, but you can use whatever is more comfortable for you.

Using multiword variable names

Let's use the same example again, as follows:

public bool areRoadConditionsPerfect = true;

You can see that the variable name is actually four words squeezed together. Since a variable name can be only one word, begin the first word with a lowercase letter and then capitalize the first letter of every additional word. This greatly helps create descriptive names that the viewer is still able to read. There's a term for this: it's called camel casing. I have already mentioned that for public variables, Unity's Inspector will separate each word and capitalize the first one. Go ahead! Add the previous statement to LearningScript and see what Unity does with it in the Inspector panel.

Declaring a variable and its type

Every variable that we want to use in a script must be declared in a statement. What does that mean? Well, before Unity can use a variable, we have to tell Unity about it first. Okay then, what are we supposed to tell Unity about the variable? There are only three absolute requirements to declare a variable, and they are as follows:

We have to specify the type of data that a variable can store
We have to provide a name for the variable
We have to end the declaration statement with a semicolon

The following is the syntax we use to declare a variable:

typeOfData nameOfTheVariable;

Let's use one of the LearningScript variables as an example; the following is how we declare a variable with the bare minimum requirements:

int number1;

This is what we have:

Requirement #1 is the type of data that number1 can store, which in this case is an int, meaning an integer
Requirement #2 is a name, which is number1
Requirement #3 is the semicolon at the end

The second requirement of naming a variable has already been discussed. The third requirement of ending a statement with a semicolon has also been discussed. The first requirement of specifying the type of data will be covered next. The following is what we know about this bare minimum declaration as far as Unity is concerned:

There's no public modifier, which means it's private by default
It won't appear in the Inspector panel or be accessible from other scripts
The value stored in number1 defaults to zero

We discussed working with Unity 2017 variables and how you can start using them to create fun-filled games effectively. If you liked this article, be sure to go through the book Learning C# by Developing Games with Unity 2017 to create exciting games with C# and Unity 2017.

Read More
Unity 2D & 3D game kits simplify Unity game development for beginners
Build a Virtual Reality Solar System in Unity for Google Cardboard
Unity Machine Learning Agents: Transforming Games with Artificial Intelligence


Documenting RESTful Java Web Services using Swagger

Fatema Patrawala
22 May 2018
14 min read
In the early days, many software solution providers did not really pay attention to documenting their RESTful web APIs. However, many API vendors soon realized the need for a good API documentation solution, and today you will find a variety of approaches for describing, producing, consuming, and visualizing RESTful web services. In this tutorial, we will explore Swagger, which offers a specification and a complete framework implementation for describing, producing, consuming, and visualizing RESTful web services. The Swagger framework works with many popular programming languages, such as Java, Scala, Clojure, Groovy, JavaScript, and .NET. This tutorial is an extract taken from the book RESTful Java Web Services - Third Edition, written by Bogunuva Mohanram Balachandar. This book will help you master core REST concepts and create RESTful web services in Java.

A glance at the market adoption of Swagger

The greatest strength of Swagger is its powerful API platform, which satisfies the client, documentation, and server needs. The Swagger UI framework serves as the documentation and testing utility. Its support for different languages and its matured tooling support have really grabbed the attention of many API vendors, and it seems to be the one with the most traction in the community today.

Swagger is built using Scala. This means that when you package your application, you need to include the entire Scala runtime in your build, which may considerably increase the size of your deployable artifact (the EAR or WAR file). That said, Swagger is improving with each release. For example, the Swagger 2.0 release allows you to use YAML for describing APIs. So, keep a watch on this framework.

The Swagger framework has the following three major components:

Server: This component hosts the RESTful web API descriptions for the services that the clients want to use
Client: This component uses the RESTful web API descriptions from the server to provide an automated interfacing mechanism to invoke the REST APIs
User interface: This part of the framework reads a description of the APIs from the server, renders it as a web page, and provides an interactive sandbox to test the APIs

A quick overview of the Swagger structure

Let's take a quick look at the Swagger file structure before moving further. The Swagger 1.x file contents that describe the RESTful APIs are represented as JSON objects. From the Swagger 2.0 release onwards, you can also use the YAML format to describe the RESTful web APIs. This section discusses the Swagger file contents represented as JSON. The basic constructs that we'll discuss in this section for JSON are also applicable for the YAML representation of APIs, although the syntax differs.

When using the JSON structure for describing the REST APIs, the Swagger file uses a Swagger object as the root document object for describing the APIs. Here is a quick overview of the various properties that you will find in a Swagger object.

The following properties describe the basic information about the RESTful web application:

swagger: This property specifies the Swagger version.
info: This property provides metadata about the API.
host: This property locates the server where the APIs are hosted.
basePath: This property is the base path for the API. It is relative to the value set for the host field.
schemes: This property specifies the transfer protocol for a RESTful web API, such as HTTP and HTTPS.
The following properties allow you to specify the default values at the application level, which can be optionally overridden for each operation:

consumes: This property specifies the default internet media types that the APIs consume. It can be overridden at the API level.
produces: This property specifies the default internet media types that the APIs produce. It can be overridden at the API level.
securityDefinitions: This property globally defines the security schemes for the document. These definitions can be referred to from the security schemes specified for each API.

The following properties describe the operations (REST APIs):

paths: This property specifies the path to the API or the resources. The path must be appended to the basePath property in order to get the full URI.
definitions: This property specifies the input and output entity types for operations on REST resources (APIs).
parameters: This property specifies the parameter details for an operation.
responses: This property specifies the response type for an operation.
security: This property specifies the security schemes in order to execute this operation.
externalDocs: This property links to the external documentation.

A complete Swagger 2.0 specification is available at Github. The Swagger specification was donated to the Open API initiative, which aims to standardize the format of the API specification and bring uniformity in usage. Swagger 2.0 was enhanced as part of the Open API initiative, and a new specification, OAS 3.0 (Open API Specification 3.0), was released in July 2017. The tools are still being worked on to support OAS 3.0. I encourage you to explore the Open API Specification 3.0 available at Github.

Overview of Swagger APIs

The Swagger framework consists of many sub-projects in the Git repository, each built with a specific purpose. Here is a quick summary of the key projects:

swagger-spec: This repository contains the Swagger specification and the project in general.
swagger-ui: This project is an interactive tool to display and execute the Swagger specification files. It is based on swagger-js (the JavaScript library).
swagger-editor: This project allows you to edit the YAML files. It is released as a part of Swagger 2.0.
swagger-core: This project provides the Scala and Java libraries to generate the Swagger specifications directly from code. It supports JAX-RS, the Servlet APIs, and the Play framework.
swagger-codegen: This project provides a tool that can read the Swagger specification files and generate the client and server code that consumes and produces the specifications.

In the next section, you will learn how to use the swagger-core project offerings to generate the Swagger file for a JAX-RS application.

Generating Swagger from JAX-RS

Both the WADL and RAML tools that we discussed in the previous sections use the JAX-RS annotations metadata to generate the documentation for the APIs. The Swagger framework does not fully rely on the JAX-RS annotations but offers a set of proprietary annotations for describing the resources. This helps in the following scenarios:

The Swagger core annotations provide more flexibility for generating documentation compliant with the Swagger specifications
They allow you to use Swagger for generating documentation for web components that do not use the JAX-RS annotations, such as servlets and servlet filters

The Swagger annotations are designed to work with JAX-RS, improving the quality of the API documentation generated by the framework.
Note that the swagger-core project is currently supported on the Jersey and Restlet implementations. If you are considering any other runtime for your JAX-RS application, check the respective product manual and ensure the support before you start using Swagger for describing APIs.

Some of the commonly used Swagger annotations are as follows:

The @com.wordnik.swagger.annotations.Api annotation marks a class as a Swagger resource. Note that only classes that are annotated with @Api will be considered for generating the documentation. The @Api annotation is used along with class-level JAX-RS annotations such as @Produces and @Path.

Annotations that declare an operation are as follows:

@com.wordnik.swagger.annotations.ApiOperation: This annotation describes a resource method (operation) that is designated to respond to HTTP action methods, such as GET, PUT, POST, and DELETE.
@com.wordnik.swagger.annotations.ApiParam: This annotation is used for describing parameters used in an operation. This is designed for use in conjunction with the JAX-RS parameters, such as @Path, @PathParam, @QueryParam, @HeaderParam, @FormParam, and @BeanParam.
@com.wordnik.swagger.annotations.ApiImplicitParam: This annotation allows you to define the operation parameters manually. You can use this to override the @PathParam or @QueryParam values specified on a resource method with custom values. If you have multiple implicit parameters for an operation, wrap them with @ApiImplicitParams.
@com.wordnik.swagger.annotations.ApiResponse: This annotation describes the status codes returned by an operation. If you have multiple responses, wrap them by using @ApiResponses.
@com.wordnik.swagger.annotations.ResponseHeader: This annotation describes a header that can be provided as part of the response.
@com.wordnik.swagger.annotations.Authorization: This annotation is used within either Api or ApiOperation to describe the authorization scheme used on a resource or an operation.

Annotations that declare API models are as follows:

@com.wordnik.swagger.annotations.ApiModel: This annotation describes the model objects used in the application.
@com.wordnik.swagger.annotations.ApiModelProperty: This annotation describes the properties present in the ApiModel object.

A complete list of the Swagger core annotations is available at Github.

Having learned the basics of Swagger, it is time for us to move on and build a simple example to get a feel for the real-life use of Swagger in a JAX-RS application. As always, this example uses the Jersey implementation of JAX-RS.

Specifying dependency to Swagger

To use Swagger in your Jersey 2 application, specify the dependency to the swagger-jersey2-jaxrs jar. If you use Maven for building the source, the dependency to the swagger-core library will look as follows:

<dependency>
    <groupId>com.wordnik</groupId>
    <artifactId>swagger-jersey2-jaxrs</artifactId>
    <version>1.5.1-M1</version>
    <!-- Use the appropriate version here; 1.5.x supports the Swagger 2.0 spec -->
</dependency>

You should be careful while choosing the swagger-core version for your product. Note that swagger-core 1.3 produces the Swagger 1.2 definitions, whereas swagger-core 1.5 produces the Swagger 2.0 definitions. The next step is to hook the Swagger provider components into your Jersey application.
This is done by configuring the Jersey servlet (org.glassfish.jersey.servlet.ServletContainer) in web.xml, as shown here:

<servlet>
    <servlet-name>jersey</servlet-name>
    <servlet-class>
        org.glassfish.jersey.servlet.ServletContainer
    </servlet-class>
    <init-param>
        <param-name>jersey.config.server.provider.packages</param-name>
        <param-value>
            com.wordnik.swagger.jaxrs.json,
            com.packtpub.rest.ch7.swagger
        </param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>jersey</servlet-name>
    <url-pattern>/webresources/*</url-pattern>
</servlet-mapping>

To enable the Swagger documentation features, it is necessary to load the Swagger framework provider classes from the com.wordnik.swagger.jaxrs.listing package. The package names of the JAX-RS resource classes and provider components are configured as the value for the jersey.config.server.provider.packages init parameter. The Jersey framework scans through the configured packages for identifying the resource classes and provider components during the deployment of the application. Map the Jersey servlet to a request URI so that it responds to the REST resource calls that match the URI.

If you prefer not to use web.xml, you can also use the custom application subclass for (programmatically) specifying all the configuration entries discussed here. To try this option, refer to Github.

Configuring the Swagger definition

After specifying the Swagger provider components, the next step is to configure and initialize the Swagger definition. This is done by configuring the com.wordnik.swagger.jersey.config.JerseyJaxrsConfig servlet in web.xml, as follows:

<servlet>
    <servlet-name>Jersey2Config</servlet-name>
    <servlet-class>
        com.wordnik.swagger.jersey.config.JerseyJaxrsConfig
    </servlet-class>
    <init-param>
        <param-name>api.version</param-name>
        <param-value>1.0.0</param-value>
    </init-param>
    <init-param>
        <param-name>swagger.api.basepath</param-name>
        <param-value>http://localhost:8080/hrapp/webresources</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>

Here is a brief overview of the initialization parameters used for JerseyJaxrsConfig:

api.version: This parameter specifies the API version for your application
swagger.api.basepath: This parameter specifies the base path for your application

With this step, we have finished all the configuration entries for using Swagger in a JAX-RS (Jersey 2 implementation) application. In the next section, we will see how to use the Swagger metadata annotations on a JAX-RS resource class for describing the resources and operations.

Adding a Swagger annotation on a JAX-RS resource class

Let's revisit the DepartmentResource class used in the previous sections. In this example, we will enhance the DepartmentResource class by adding the Swagger annotations discussed earlier. We use @Api to mark DepartmentResource as the Swagger resource.
The @ApiOperation annotation describes the operation exposed by the DepartmentResource class:

import com.wordnik.swagger.annotations.Api;
import com.wordnik.swagger.annotations.ApiOperation;
import com.wordnik.swagger.annotations.ApiParam;
import com.wordnik.swagger.annotations.ApiResponse;
import com.wordnik.swagger.annotations.ApiResponses;
// Other imports are removed for brevity

@Stateless
@Path("departments")
@Api(value = "/departments", description = "Get departments details")
public class DepartmentResource {

    @ApiOperation(value = "Find department by id",
        notes = "Specify a valid department id",
        response = Department.class)
    @ApiResponses(value = {
        @ApiResponse(code = 400, message = "Invalid department id supplied"),
        @ApiResponse(code = 404, message = "Department not found") })
    @GET
    @Path("{id}")
    @Produces("application/json")
    public Department findDepartment(
        @ApiParam(value = "The department id", required = true)
        @PathParam("id") Integer id) {
        return findDepartmentEntity(id);
    }
    // Rest of the code is removed for brevity
}

To view the Swagger documentation, build the source and deploy it to the server. Once the application is deployed, you can navigate to http://<host>:<port>/<application-name>/<application-path>/swagger.json to view the Swagger resource listing in the JSON format. The Swagger URL for this example will look like the following:

http://localhost:8080/hrapp/webresources/swagger.json

The following sample Swagger representation is for the DepartmentResource class discussed in this section:

{
  "swagger": "2.0",
  "info": {
    "version": "1.0.0",
    "title": ""
  },
  "host": "localhost:8080",
  "basePath": "/hrapp/webresources",
  "tags": [
    { "name": "user" }
  ],
  "schemes": [ "http" ],
  "paths": {
    "/departments/{id}": {
      "get": {
        "tags": [ "user" ],
        "summary": "Find department by id",
        "description": "",
        "operationId": "loginUser",
        "produces": [ "application/json" ],
        "parameters": [
          {
            "name": "id",
            "in": "path",
            "description": "The department id",
            "required": true,
            "type": "integer",
            "format": "int32"
          }
        ],
        "responses": {
          "200": {
            "description": "successful operation",
            "schema": { "$ref": "#/definitions/Department" }
          },
          "400": { "description": "Invalid department id supplied" },
          "404": { "description": "Department not found" }
        }
      }
    }
  },
  "definitions": {
    "Department": {
      "properties": {
        "departmentId": { "type": "integer", "format": "int32" },
        "departmentName": { "type": "string" },
        "_persistence_shouldRefreshFetchGroup": { "type": "boolean" }
      }
    }
  }
}

As mentioned at the beginning of this section, from the Swagger 2.0 release onwards the framework supports the YAML representation of APIs. You can access the YAML representation by navigating to swagger.yaml. For instance, in the preceding example, the following URI gives you the YAML file:

http://<host>:<port>/<application-name>/<application-path>/swagger.yaml

The complete source code for this example is available at the Packt website. You can download the example from the Packt website link that we mentioned at the beginning of this book, in the Preface section. In the downloaded source code, see the rest-chapter7-service-doctools/rest-chapter7-jaxrs2swagger project.
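To give a feel for the YAML representation mentioned above, here is a hand-translated sketch of (part of) the same description. It mirrors the JSON sample rather than actual output captured from swagger.yaml, so the details may differ slightly from what the framework generates:

swagger: "2.0"
info:
  version: "1.0.0"
  title: ""
host: "localhost:8080"
basePath: /hrapp/webresources
schemes:
  - http
paths:
  /departments/{id}:
    get:
      summary: Find department by id
      produces:
        - application/json
      parameters:
        - name: id
          in: path
          description: The department id
          required: true
          type: integer
          format: int32
      responses:
        "200":
          description: successful operation
          schema:
            $ref: "#/definitions/Department"
        "400":
          description: Invalid department id supplied
        "404":
          description: Department not found
definitions:
  Department:
    properties:
      departmentId:
        type: integer
        format: int32
      departmentName:
        type: string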
Generating a Java client from Swagger

The Swagger framework is packaged with the Swagger code generation tool as well (swagger-codegen-cli), which allows you to generate client libraries by parsing the Swagger documentation file. You can download the swagger-codegen-cli.jar file from the Maven central repository by searching for swagger-codegen-cli at search.maven.org. Alternatively, you can clone the Git repository and build the source locally by executing mvn install.

Once you have swagger-codegen-cli.jar locally available, run the following command to generate the Java client for the REST API described in Swagger:

java -jar swagger-codegen-cli.jar generate -i <Input-URI-or-File-location-for-swagger.json> -l <client-language-to-generate> -o <output-directory>

The following example illustrates the use of this tool:

java -jar swagger-codegen-cli-2.1.0-M2.jar generate -i http://localhost:8080/hrapp/webresources/swagger.json -l java -o generated-sources/java

When you run this tool, it scans through the RESTful web API description available at http://localhost:8080/hrapp/webresources/swagger.json and generates a Java client source in the generated-sources/java folder. Note that the Swagger code generation process uses the mustache templates for generating the client source. If you are not happy with the generated source, Swagger lets you specify your own mustache template files. Use the -t flag to specify your template folder. To learn more, refer to the README.md file at Github.

To learn more, grab the latest edition of RESTful Java Web Services to build robust, scalable, and secure RESTful web services using Java APIs.

Read next
Getting started with Django RESTful Web Services
How to develop RESTful web services in Spring
Testing RESTful Web Services with Postman