
How-To Tutorials

6719 Articles

Apple joins the Thread Group, signaling its Smart Home ambitions with HomeKit, Siri and other IoT products

Bhagyashree R
09 Aug 2018
3 min read
Apple is now a part of the Thread Group's list of members, alongside its top rivals, Nest (a subsidiary of Google) and Amazon. This indicates some advancements in its HomeKit software framework and in inter-device communication between iOS devices.

What is the Thread Group?

The Thread Group is a non-profit organization that has developed the network protocol Thread, with the aim of being the best way to connect and control IoT products. These are the features that enable it to do so:

Mesh networking: Thread uses a mesh network design, connecting hundreds of products securely and reliably, which also means no single point of failure.

Secure: It provides security at the network and application layers. To ensure only authorized devices join the network, it uses product install codes. It uses AES encryption to close security holes that exist in other wireless protocols, together with a smartphone-era authentication scheme.

Battery friendly: Based on the power-efficient IEEE 802.15.4 MAC/PHY, it ensures extremely low power consumption. Short messaging, a streamlined routing protocol, and the use of low-power wireless system-on-chips also make it battery friendly.

Based on IPv6: It is interoperable by design, using proven, open standards and IPv6 technology with 6LoWPAN (short for IPv6 over Low-Power Wireless Personal Area Networks) as the foundation. 6LoWPAN is an IPv6-based low-power wireless personal area network comprising devices that conform to the IEEE 802.15.4-2003 standard.

Scalable: It can scale up to 250+ devices in a single network supporting multiple hops.

What does this membership bring to Apple?

The company has not revealed its plans yet, but nothing is stopping us from imagining what it could possibly do with Thread. According to a redditor, the following are some potential uses of Thread by Apple:

HomeKit currently uses WiFi and Bluetooth as its wireless protocols. WiFi is very power hungry and Bluetooth is short-ranged. Thread's mesh network and power-efficient design could solve this problem.

Apple only allows certain products to operate on battery, requiring others to be plugged into power constantly, HomeKit cameras, for instance. Critical to both Apple and extended-use home devices, Thread promises "extremely low power consumption."

With Thread, Apple could expand the number of IoT smart home devices the HomePod is capable of connecting to.

With the support of Thread, iOS devices could guarantee better inter-device Siri communications, more reliable Continuity features, and secure geo-fencing.

Apple joining the group could mean that it becomes open to more hardware for HomeKit and also becomes more reasonable from a cost perspective in the smart home area.

Apple releases iOS 12 beta 2 with screen time and battery usage updates among others
macOS Mojave: Apple updates the Mac experience for 2018
Apple stocks soar just shy of $1 Trillion market cap as revenue hits $53.3 Billion in Q3 earnings 2018


Why Golang is the fastest growing language on GitHub

Sugandha Lahoti
09 Aug 2018
4 min read
Google's Go language, or alternatively Golang, is currently one of the fastest growing programming languages in the software industry. Its speed, simplicity, and reliability make it the perfect choice for all kinds of developers. Now, its popularity has gained further momentum. According to a report, Go is the fastest growing language on GitHub in Q2 of 2018. Go has grown almost 7% overall, with a 1.5% change from the previous quarter.

Source: Madnight.github.io

What makes Golang so popular?

A person was quoted on Reddit saying, "What I would have done in Python, Ruby, C, C# or C++, I'm now doing in Go." Such is the impact of Go. Let's see what makes Golang so popular.

Go is cross-platform, so you can target an operating system of your choice when compiling a piece of code.

Go offers a native concurrency model that is unlike most mainstream programming languages. Go relies on a concurrency model called CSP (Communicating Sequential Processes). Instead of locking variables to share memory, Golang allows you to communicate the value stored in your variable from one thread to another (a short illustrative sketch of this follows at the end of this article).

Go has a fairly mature set of packages of its own. Once you install Go, you can build production-level software that covers a wide range of use cases, from RESTful web APIs to encryption software, before needing to consider any third-party packages.

Go code typically compiles to a single native binary, which basically makes deploying an application written in Go as easy as copying the application file to the destination server.

Go is also rapidly being adopted as the go-to cloud-native language, and by leading projects like Docker and Ethereum. Its concurrency features and easy deployment make it a popular choice for cloud development.

Can Golang replace Python?

Reddit is abuzz with people sharing their thoughts about whether Golang could replace Python. A user commented that "Writing a utility script is quicker in Go than in Python or JS. Not quicker as in performance, but in terms of raw development speed." Another Reddit user pointed out three reasons not to use Python in a Reddit discussion, Why are people ditching python for go?:

Dynamic compilation of Python can result in errors that exist in code but are in fact not detected.

CPython really is very slow; very specifically, procedures that are invoked multiple times are not optimized to run more quickly in future runs (unlike in pypy); they always run at the same slow speed.

Python has a terrible distribution story; it's really hard to ship all your Python dependencies onto a new system.

Go addresses those points pretty sharply. It has a good distribution story with static binaries. It has a repeatable build process, and it's pretty fast.

In the same discussion, however, a user nicely sums it up saying, "There is nothing wrong with python except maybe that it is not statically typed and can be a bit slow, which also depends on the use case. Go is the new kid on the block, and while Go is nice, it doesn't have nearly as many libraries as python does. When it comes to stable, mature third-party packages, it can't beat python at the moment."

If you're still thinking about whether or not to begin coding with Go, here's a quirky rendition of the popular song Let it Go from Disney's Frozen to inspire you. Write in Go! Write in Go!

Go Cloud is Google's bid to establish Golang as the go-to language of cloud
Writing test functions in Golang [Tutorial]
How Concurrency and Parallelism works in Golang [Tutorial]
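To make the CSP point above concrete, here is a small, self-contained Go sketch (not from the report, just an illustration): values are passed between goroutines over channels instead of being shared behind locks, which is exactly the "communicate the value from one thread to another" idea described above.

package main

import "fmt"

// worker receives jobs over one channel and sends results back over another,
// so goroutines never share memory directly.
func worker(jobs <-chan int, results chan<- int) {
    for j := range jobs {
        results <- j * j
    }
}

func main() {
    jobs := make(chan int, 5)
    results := make(chan int, 5)

    // three concurrent workers
    for w := 0; w < 3; w++ {
        go worker(jobs, results)
    }

    // send the work, then close the channel to signal there is no more
    for i := 1; i <= 5; i++ {
        jobs <- i
    }
    close(jobs)

    // collect the squared results
    for i := 0; i < 5; i++ {
        fmt.Println(<-results)
    }
}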


Share projects and environment on Anaconda cloud [Tutorial]

Natasha Mathur
09 Aug 2018
7 min read
When a small group of developers works on the same project, there is a need to share programs, commands, datasets, and working environments, and Anaconda Cloud can be used for this. Usually, we can save our data on other people's servers. With Anaconda Cloud, users can use the platform to save and share packages, notebooks, projects, and environments. Public projects and notebooks are free; at the moment, private plans start at $7 per month. Anaconda Cloud also allows users to create or distribute software packages.

In this article, we will learn about Anaconda Cloud and how to share projects and environments on Anaconda. This article is an excerpt from the book 'Hands-On Data Science with Anaconda' written by Dr. Yuxing Yan and James Yan. So, let's get started!

Firstly, for a Windows version of Anaconda, click All Programs | Anaconda, and then choose Anaconda Cloud. After double-clicking on Cloud, the welcome screen will appear. Based on the information presented by the welcome screen, we know that we need an account with Anaconda before we can use it. After login, we can explore the cloud dashboard. For example, if you double-click on Installing your first package, you will get more information on Anaconda Cloud. We do not need to be logged in, or even need a cloud account, to search for public packages, download them, and install them. We need an account only to access private packages without a token or to share our packages with others. For Anaconda Cloud, users can use the platform to save and share projects and environments.

Sharing projects in Anaconda

First, let's look at the definition of a project. A project is a folder that contains an anaconda-project.yml configuration file together with scripts (code), notebooks, datasets, and other files. We can turn a folder into a project by adding a configuration file named anaconda-project.yml to the folder. The configuration file can include the following sections: commands, variables, services, downloads, packages, channels, and environment specifications. Data scientists can use projects to encapsulate data science work and make it easily portable. A project is usually compressed into a .tar.bz2 file for sharing and storing.

Anaconda Project automates setup steps so that people with whom you share projects can run your projects with the following single command:

anaconda-project run

To install Anaconda Project, type the following:

conda install anaconda-project

Anaconda Project encapsulates data science projects and makes them easily portable. It automates setup steps such as installing the right packages, downloading files, setting environment variables, and running commands. Project makes it easy to reproduce your work, share projects with others, and run them on different platforms. It also simplifies deployment to servers. Anaconda projects run the same way on your machine, on another user's machine, or when deployed to a server. Traditional build scripts such as setup.py automate the building of the project – going from source code to something runnable – while Project automates running the project, taking build artifacts, and doing any necessary setup before executing them. We can use Project on Windows, macOS, and Linux. Project is supported and offered by Anaconda Inc® and contributors under a three-clause BSD license. Project sharing will save us a great deal of time, since other developers will not spend too much time on work that has already been done.

Here is the procedure:

Build up your project.
Log in to Anaconda.
From the project's directory on your computer, type the following command: anaconda-project upload. Alternatively, from Anaconda Navigator, in the Projects tab, upload via the bottom-right Upload to Anaconda Cloud.

Projects can be any directory of code and assets. Often, projects will contain notebooks or Bokeh applications, for example. Here, we show how to generate a project called project01. First, we want to move to the correct location. Assume that we choose c:/temp/. The key command is given here:

anaconda-project init --directory project01

Both commands, together with the corresponding output, are shown here:

$ cd c:/temp/
$ anaconda-project init --directory project01
Create directory 'c:\temp\project01'? y
Project configuration is in c:\temp\project01\anaconda-project.yml

We can also turn any existing directory into a project by switching to the directory and then running anaconda-project init without options or arguments. We can use MS Word to open anaconda-project.yml (the first couple of lines are shown here):

# This is an Anaconda project file.
#
# Here you can describe your project and how to run it.
# Use `anaconda-project run` to run the project.
# The file is in YAML format, please see http://www.yaml.org/start.html for more.
#
# Set the 'name' key to name your project
#
name: project01
#
# Set the 'icon' key to give your project an icon
#
icon:
#
# Set a one-sentence-or-so 'description' key with project details
#
description:
#
# In the commands section, list your runnable scripts, notebooks, and other code.
# Use `anaconda-project add-command` to add commands.
#

There are two ways to share our projects with others. First, we can archive the project by issuing the following command:

anaconda-project archive project01.zip

Then, we email the ZIP file to our colleague or others. The second way of sharing a project is to use Anaconda Cloud. Log in to Anaconda Cloud first. From the project's directory on our computer, type anaconda-project upload, or, from Anaconda Navigator, in the Projects tab, upload via the bottom-right Upload to Anaconda Cloud.

Now that we're done looking at how you can share projects, let's find out how you can share environments with your partner.

Sharing of environments

In terms of computer software, an operating environment or integrated applications environment is the environment in which users can execute software. Usually, such an environment consists of a user interface and an API. To a certain degree, the term platform could be viewed as its synonym. There are many reasons why we might want to share our environment with someone else. For example, they may want to re-create a test that we have done. To allow them to quickly reproduce our environment, with all of its packages and versions, give them a copy of your environment.yml file.

Depending on the operating system, we have the following methods to export our environment file. Note that if we already have an environment.yml file in our current directory, it will be overwritten during this task. There are different ways to activate the myenv environment depending on the system used. For Windows users, in the Anaconda prompt, type the following command:

activate myenv

On macOS and Linux, in the Terminal window, issue the following command:

source activate myenv

Note that we replace myenv with the name of the environment. To export our active environment to a new file, type the following:

conda env export > environment.yml

To share, we can simply email or copy the exported environment.yml file to the other person. On the other hand, in order to remove an environment, run the following code in the Terminal window or at an Anaconda prompt:

conda remove --name myenv --all

Alternatively, we can specify the name, as shown here:

conda env remove --name myenv

To verify that the environment has been removed, run the following command:

conda info --envs

A consolidated sketch of the full export and re-creation round trip follows at the end of this article.

In this tutorial, we discussed Anaconda Cloud. Some topics included how to share different projects over different platforms and how to share your working environments. If you found this post useful, be sure to check out the book 'Hands-On Data Science with Anaconda' to learn more about replicating others' environments locally and downloading packages from Anaconda.

Anaconda 5.2 releases!
Anaconda Enterprise version 5.1.1 released!
10 reasons why data scientists love Jupyter notebooks
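As promised above, here is the full share-and-re-create round trip collected in one place. This is a minimal sketch: myenv is a placeholder name, the newer conda activate syntax is shown alongside the older activate/source activate commands used above, and conda env create is the standard command for rebuilding an environment from an environment.yml file.

# on your machine: activate the environment and export its definition
conda activate myenv          # or: activate myenv (Windows) / source activate myenv (macOS, Linux)
conda env export > environment.yml

# send environment.yml by email or copy it over, then on the other machine:
conda env create -f environment.yml
conda activate myenv

# optional: confirm the environment now exists
conda info --envs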


Creating interactive Unity character animations and avatars [Tutorial]

Amarabha Banerjee
08 Aug 2018
10 min read
The Legacy Animation System from early Unity versions is still used for a wide range of things, such as animating the color of a light or other simple animations on 3D objects in a scene, as well as animating skinned characters for certain kinds of games. In this tutorial, we will look at the basic settings for the Legacy Animation System. Then, we will step into the new animation system, gaining an understanding of the ThirdPersonCharacter prefab and looking at the difference between the in-place and Root Motion animation methods available within Animator. If you want to dive deep into developing cutting-edge, modern-day Unity games, then this piece is for you. We will deal with character animations using Unity today.

This article is an excerpt from the book Unity 2017 Game Development Essentials written by Tommaso Lintrami.

Importing character models and animations

To import a model rig or an animation, just drag the model file into the Assets folder of your project. When you select the file in the Project view, you can edit the Import Settings in the Inspector panel. Please refer to the updated Unity online manual for a full description of the available import options: https://docs.unity3d.com/Manual/FBXImporter-Model.html.

Importing animations using multiple model files

The common way to import animations in Unity is to follow a naming convention scheme that is recognized automatically. You basically create, or ask the artist to create, separate model files and name them with the modelName@animationName.fbx convention. For example, for a model called Warrior_legacy, you could import separate idle, walk, jump, and attack animations using files named Warrior_legacy@offensive_idle.fbx, Warrior_legacy@jumping.fbx, Warrior_legacy@standard_run_inPlace.fbx, and Warrior_legacy@walking_inPlace.fbx. Only the animation data from these files will be used, even if the original files are exported with mesh data from the animation software package. In the editor's Project view, the .fbx suffix is not shown in the preview, but can still be seen in the bottom line of the view.

Unity automatically imports all the files, collects all the animation clips from them, and associates them with the file without the @ symbol. In the example above, the Warrior_legacy.fbx file will be set up to reference offensive_idle, jumping, running_inPlace, and sword_and_shield_walk_inPlace. To export the base rig, simply export a model file from your favorite digital content creation package with no animations ticked in the FBX exporter (for example, Warrior_legacy.fbx), and export the four animation clips as separate Warrior_legacy@animationName.fbx files by exporting the desired keyframes for each one of them (enabling animation in the graphic package's FBX export dialog). When imported in Unity, we will select the main rig file (Warrior_legacy.fbx) and set its Rig type to Legacy.

Setting up the animation

We need to instruct Unity on how we want to play these animation clips; for instance, we certainly want the walk, idle, and running animation clips to play in a loop, while the jump and attack animation clips should play in a single shot. Choose the idle animation clip in the Project view folder where the legacy animation resides and then switch to the Animations tab in the Inspector. Set the Wrap Mode to PingPong in both the top and bottom parts of the panel. In many cases, you might also want to create an additional in-between loop frame by checking the Add Loop Frame option.

The Add Loop Frame option is needed to avoid an ugly animation loop when the first and last frames of the animation are too different from each other; the additional in-between frame interpolates between the two to obtain a good loop for the Animation Clip. Click on the Apply button at the bottom of the panel to apply the changes.

Now, drag the Warrior_legacy.fbx main file into the scene. You should see a new GameObject with an Animation component attached, with all the reference clips already set up, and with the first clip specified to play at start when the Play On Awake checkbox is selected in the component (the default). You can look at the final result for this part in the Chapter5_legacy Unity scene in the book's code project folder.

Building the Player with Animator

The Animator component was introduced in Unity 4 to replace the older Legacy Animation System. If you are completely new to Unity, you should start directly with the new animation system, and consider the old one as still being good for many things, not only related to character animation. Animator introduced many cool things that were only partially available (and only through coding) with the old animation system. In the code folder, under Chapter 5-6-7/Models/Characters, you will find three folders for the warrior model rig. One is meant for the old Legacy Animation component, and the other two are for use with the Animator.

The new system is made up of a new animation component (the Animator), a powerful state machine that controls the whole animation process, and the Avatar configuration system. The Animator component will be mapped to a corresponding avatar and to an Animator Controller asset file, which can be created, like other files, from the Project view and edited in the Animator window.

What is an avatar in Unity?

When an .fbx 3D model file with a skeleton made of joints/bones is imported in Unity, if you expand the file in the Project view, you will see, among the various parts of it, an avatar's icon. For example, the Warrior_FinalAvatar is automatically created for the Warrior_Final.fbx rigged model. When importing a rigged model (an FBX model with a skeleton or bones and, optionally, animations), Unity will configure it automatically for a Generic avatar. A Generic avatar is meant for any kind of non-human character rig, such as animals, non-biped monsters, plants, and so on. Typically, for your biped/humanoid characters, you want to switch the default import flag for the Rig Animation Type to Humanoid.

The term biped comes from the Latin words bi (two) and ped (foot); this 3D animation-specific term indicates an anthropomorphic/humanoid character standing and walking on two legs. The name was introduced into 3D animation by 3D Studio Max, where Biped was the term used by Character Studio to manage a rigged human character and its animations.

As the default import setting for the Rig is Generic, we will switch to Humanoid for all the .fbx files in the Warrior_Mecanim_InPlace folder, with the only exclusion being the non-rigged Warrior_final_non-rigged.fbx sample model mentioned earlier.

Configuring an avatar

Now, hit the Configure button, and the actual scene and the Inspector will be temporarily replaced with the avatar configuration until the Done button is clicked and the editor returns to the previously loaded scene.

Because the model included in the book's code is already a Mecanim-ready rig, you can just click on the Done button. On the left, the Scene view temporarily switches to show the Avatar configuration, while the Inspector on the right shows the configuration options.

Most of the time, and in this case, you will not set the mapping for hands, fingers, and eyes separately, so the first of the four tabs (Body, Head, Left Hand, and Right Hand) will be enough for our purpose. The head is usually mapped for independent eyeball movement and/or jaw movement to allow basic character speech movement whenever your game needs any of these features.

A quick note on lip sync: lip sync is an advanced technique where a 3D character's face will change and animate its mouth and eyes when a certain audio file is playing. Unity doesn't support lip sync out of the box, but it can be done in many ways with external libraries and an appropriate model rig. From Unity 4.5 onward, animation Blend Shapes are supported, allowing facial muscle gestures to be embedded in the .fbx model and used in real time by the application. This technique is more modern than standard lip sync for game characters' speech; in both cases, a library or a fair amount of coding would be needed to make the character speak correctly when the corresponding audio file is played.

Hands mapping will be used only when your characters need fine finger movements, and hence will not be covered here. The best scenario for this is a game where characters perform a lot of specific actions with many different tools (guns, hammers, knives, hacking, or maybe just making gestures while talking during a cinematic cut scene). Another example would be an avatar for virtual reality, where the Leap Motion, data gloves, or similar devices are used to track the hands of users with the 3 phalanges of their 10 fingers.

If the rig you are importing is not Mecanim-ready, this is the place to map your bones to the appropriate spots on the green diagram in the Inspector, which is subdivided into body, head, left hand, and right hand. To configure an avatar from a model that was not rigged following Mecanim's skeleton rules, we have the following two options:

Use the auto-mapping feature, which will try to automatically map the bones for you.
Manually map the bones of your model to the corresponding spots on the diagram.

The avatar configuration Inspector panel shows the skeleton's main bones mapped to the avatar. The Automap feature, accessible from the drop-down menu at the bottom-left part of the avatar Inspector, can automatically assign the bones of your models to the correct ones for a Mecanim rig. This is mainly performed by reading bone names and analyzing the structure of the skeleton, and it is not 100% rig-proof, so you might need some tweaking (manual mapping) of your custom character models. As you can see, there are also Load and Save options to store this mapping. This is useful if you have a whole bunch of rigged character models all done with the same skeleton naming convention. The Clear option will clear up all the current bone mapping. The Pose drop-down menu is needed only if you want to enforce the T-pose, or sample the actual pose; it is rarely needed, but can help fix occasional modeler/3D artist mistakes or create variations of an avatar.

We discussed creating Unity character animations and how they will help you build interactive games with Unity.
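Going back to the Legacy Animation System from the first part of this tutorial, here is a minimal, illustrative C# sketch of driving the imported legacy clips from a script rather than relying only on Play On Awake. It is not part of the book's code, and the clip names offensive_idle and jumping are assumptions based on the files discussed earlier, so swap in the clip names used in your own project.

using UnityEngine;

// Attach to the GameObject that carries the legacy Animation component.
public class WarriorLegacyController : MonoBehaviour
{
    private Animation anim;

    void Start()
    {
        anim = GetComponent<Animation>();

        // Looping clips use PingPong, matching the Wrap Mode set in the Inspector.
        anim["offensive_idle"].wrapMode = WrapMode.PingPong;
        anim.Play("offensive_idle");
    }

    void Update()
    {
        // One-shot clips such as the jump are cross-faded on demand.
        if (Input.GetKeyDown(KeyCode.Space))
        {
            anim.CrossFade("jumping", 0.2f);
        }
    }
}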
Check out the book Unity 2017 Game Development Essentials for hands-on game development in Unity 2017.

Unite Berlin 2018 Keynote: Unity partners with Google, launches ML-Agents Toolkit 0.4, Project MARS
AI for Unity game developers: How to emulate real-world senses in your NPC agent
Working with Unity Variables to script powerful Unity 2017 games


Diffractive Deep Neural Network (D2NN): UCLA-developed AI device can identify objects at the speed of light

Bhagyashree R
08 Aug 2018
3 min read
Researchers at the University of California, Los Angeles (UCLA) have developed a 3D-printed, all-optical deep learning architecture called the Diffractive Deep Neural Network (D2NN). D2NN is a deep learning neural network physically formed by multiple layers of diffractive surfaces that work in collaboration to optically perform an arbitrary function. While the inference/prediction of the physical network is all-optical, the learning part that leads to its design is done through a computer.

How does D2NN work?

A computer-simulated design was created first; then, with the help of a 3D printer, the researchers created very thin polymer wafers. The uneven surface of the wafers helps diffract light coming from the object in different directions. The layers are composed of tens of thousands of artificial neurons, or tiny pixels, through which the light travels. Together, these layers form an "optical network" that shapes how incoming light travels through them. The network is able to identify an object because the light coming from the object is diffracted mostly toward a single pixel that is assigned to that type of object. The network was then trained using a computer to identify the objects in front of it by learning the pattern of diffracted light each object produces as the light from that object passes through the device.

What are its advantages?

Scalable: It can easily be scaled up using numerous high-throughput and large-area 3D fabrication methods, such as soft lithography, additive manufacturing, and wide-field optical components and detection systems.

Easily reconfigurable: D2NN can easily be improved by adding 3D-printed layers or replacing some of the existing layers with newly trained ones.

Lightning speed: Once the device is trained, it works at the speed of light.

Efficient: No energy is consumed to run the device.

Cost-effective: The device can be reproduced for less than $50, making it very cost-effective.

What are the areas it can be used in?

Image analysis
Feature detection
Object classification
It can also enable new microscope or camera designs that can perform unique imaging tasks.

This new AI device could find applications in medical technologies, data-intensive tasks, robotics, security, and any other application where image and video data are essential. Refer to UCLA's official news article to know more in detail. Also, you can refer to the paper All-optical machine learning using diffractive deep neural networks.

OpenAI builds reinforcement learning based system giving robots human like dexterity
Datasets and deep learning methodologies to extend image-based applications to videos
AutoAugment: Google's research initiative to improve deep learning performance


Implementing Unity game engine and assets for 2D game development [Tutorial]

Amarabha Banerjee
08 Aug 2018
9 min read
The rise of mobile platforms has been in part thanks to their popularity with indie developers, who prefer the short development cycles. The most prevalent medium on mobile is 2D, and Unity has a host of features that support 2D game development, including Sprite editing and packing, as well as physics specifically designed for 2D games. In this tutorial, we will look at setting up the Unity game engine and assets for 2D games.

This article is an excerpt from the book Unity 2017 Game Development Essentials written by Tommaso Lintrami.

Setting up the scene and preparing game assets

Create a new scene from the main menu by navigating to Assets | Create | Scene, and name it ParallaxGame. In this new scene, we will set up, step by step, all the elements for our 2D game prototype. First of all, we will switch the camera setting in the Scene view to 2D by clicking on the 2D button. As you can see, the Scene view camera is now orthographic. You can't rotate it as you wish, as you can with the 3D camera.

Of course, we will want to change this setting on our Main Camera as well. We also want to change the Orthographic Size to 4.5 to have the correct view of the scene. For the background, instead of a Skybox, we will choose a very dark or black color as the clear color. While the Clipping Planes distances are important for setting the size of the frustum cone of a 3D Perspective camera (inside which everything will be rendered by the engine), here we only need to set the Orthographic Size to 4.5 to have the correct distance of the 2D camera from the scene.

When these settings are done, proceed by importing Chapter2-3-4.unitypackage into the project. You can either double-click on the package file with Unity open, or use the top menu: Assets | Import | Custom Package. If you haven't imported all the materials from the book's code already, be sure to include the Sprites subfolder. After the import, look in the Sprites/Parallax/DarkCave folder in the Project view and you will find some images imported as Textures (the default).

The first thing we want to do now is to change the import settings of these images in the Inspector, from Texture to Sprite (2D and UI). To do so, select all the images in the Project view in the Sprites/Parallax/DarkCave folder, all except the _reference_main_post file, which is just a picture used as a reference of what the game level should look like. The Max Size setting is hidden (-) because we have a multi-selection of image files. After having made the multiple selection, in the Inspector, we will do the following:

Set the Texture Type option to Sprite (2D and UI). By default, images are imported as Textures; to import them as Sprites, this type must be set.

Uncheck the Generate Mip Maps option, as we don't need MIP maps for this project because we are not going to look at the Sprites from a distant point of view; for example, games with a zoom-in/zoom-out feature (like the original Grand Theft Auto 2D game) would need this setting checked.

Set Max Size to the maximum allowed. To ensure that you import all the images at their maximum resolution, set this to 8192. This is the maximum resolution size for an image on a modern PC, imported as a Sprite or Texture. We set it so high because most of the background images we have in the collection are around 6,000 pixels wide.

Click on the Apply button to apply these changes to all the images that were selected. The Project view then shows the content of the folder after the images have been set to Sprite in the Import Settings.

Placing the prefabs in the game

Unity can place prefabs in the game in many ways; the usual, visual method is to drag a stored prefab or another kind of file/object directly into the scene. Before dragging in the Sprites we imported, we will create an empty GameObject and rename it ParallaxCave. We will drag the layer images we just imported as Sprites, one by one, from the Project view (pointing at the Assets/Chapters2-3-4/Sprites/Background/DarkCave folder) into the Scene view, or more simply, directly into the Hierarchy view as children of our ParallaxCave GameObject. You can't drag all of them at once because Unity will prompt you to save an animation filename for the selected collection of Sprites; we will see this later for our character and for the collectable graphics.

The ParallaxCave GameObject and its children are shown in blue because this GameObject is stored as a prefab. When the link with the prefab is broken by a modification, the GameObject in the Hierarchy will become black again. When you see a red GameObject in the scene, it means that the prefab file that was linked to that GameObject was deleted.

Importing and placing background layers

In any game engine, 2D elements, such as Sprites, are rendered following a sort order; this order is also called the z-order because it is a way to express depth, or to cope with the missing z axis, in a two-dimensional context. The sort order is assigned an integer number which can be positive or negative; 0 is the middle point of this draw order. Ideally, a sort order of zero expresses the middle ground, where the player will act, or a layer near it. All positive numbers will render the Sprite element in front of the other elements with a lower number.

The graphic set we are going to use was taken from the Open Game Art website at http://opengameart.org. For simplicity, the provided background image files are named with a number within parentheses, for example, middleground(z1), which means that this image should be rendered with a z sort order of 1. Change the Sorting Order property of the Sprite Renderer component on each child object under ParallaxCave according to the value in the parentheses at the end of their filenames. This will rearrange the graphics into the appropriate sorted order. After we place and set the correct layer order for all the images, we should arrange and scale the layers to end up with something like the reference image provided in the Assets/Chapters2-3-4/Sprites/Background/DarkCave/ folder. You can take a look at the final result for this part at any time by saving the current scene and loading the Chapter3_start.unity scene. On the optimization side, Sprites can be packed together with the Sprite Packer into a single atlas texture (a single image containing a whole group of Sprites).

Implementing parallax scrolling

Parallax scrolling is a graphic technique where the background content (that is, an image) is moved at a different speed than the foreground content while scrolling. The technique was derived from the multiplane camera technique used in traditional animation since the 1930s.

Parallax scrolling was popular in the 1980s and early 1990s and first saw the light in video games such as Moon Patrol and Jungle Hunt, both released in 1982. On such a display system, a game can produce parallax by simply changing each layer's position by a different amount in the same direction. Layers that move more quickly are perceived to be closer to the virtual camera (a tiny script-based sketch of this idea appears at the end of this article). Layers can be placed in front of the playfield, the layer containing the objects with which the player interacts, for various reasons, such as to provide increased dimension, obscure some of the action of the game, or distract the player.

Here follows a short list of the first parallax scrolling games which made video game history:

Moon Patrol (Atari, 1982) https://youtu.be/HBOKWCpwGfM https://en.wikipedia.org/wiki/Moon_Patrol
Shadow of the Beast (Psygnosis, 1989) https://youtu.be/w6Osnolfxqw https://en.wikipedia.org/wiki/Shadow_of_the_Beast
Super Mario World (Nintendo, 1990) https://www.youtube.com/watch?v=htFJTiVH5Ao https://en.wikipedia.org/wiki/Super_Mario_World
Sonic The Hedgehog (Sega, 1991) https://youtu.be/dws4ij2IFH4 https://en.wikipedia.org/wiki/Sonic_the_Hedgehog_(1991_video_game)

Making it last forever

There are many roads we could take to make the hero's run last forever and to achieve parallax scrolling. You can find a lot of different ready-made solutions in the Asset Store, and there are also many General Public License (GPL) open source pieces of code written in C that we could take inspiration from.

Using the Asset Store

I chose FreeParallax from the Asset Store because it is powerful, free, and a well-written piece of code. Also, the modifications needed to achieve our game prototype on this class are very few. Let's download and import the system from the Asset Store. First, navigate to http://u3d.as/bvv and click on the Open in Unity button to allow Unity to open this entry in the Asset Store window. You can, alternatively, search for the package directly in Unity by opening the store from the top menu: Windows | Asset Store (recommended). In the search box, type parallax and also choose FREE ONLY. You should now find the correct entry, the Free Parallax for Unity (2D) package. You can now download the package and import it into your project straight away.

We saw how to set up the Unity game engine and assets for 2D games. Check out the book Unity 2017 Game Development Essentials to know more ways of creating interactive 2D games.

Unite Berlin 2018 Keynote: Unity partners with Google, launches ML-Agents Toolkit 0.4, Project MARS
AI for Unity game developers: How to emulate real-world senses in your NPC agent
Working with Unity Variables to script powerful Unity 2017 games
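Before reaching for the Asset Store package, it can help to see how little code the core idea needs. The following is a minimal, illustrative C# sketch of parallax scrolling; it is not the FreeParallax package and not part of the book's code. Each layer is shifted by a fraction of the camera's horizontal movement, so layers with a smaller factor appear farther away. The field names and the example factor values are assumptions.

using UnityEngine;

// Attach to an empty GameObject and fill the arrays in the Inspector.
public class SimpleParallax : MonoBehaviour
{
    public Transform[] layers;       // background layers, back to front
    public float[] speedFactors;     // e.g. 0.1 for far layers, 0.8 for near ones
    public Transform cam;            // the 2D camera

    private Vector3 lastCamPos;

    void Start()
    {
        lastCamPos = cam.position;
    }

    void LateUpdate()
    {
        // How far the camera moved horizontally since the last frame.
        float deltaX = cam.position.x - lastCamPos.x;

        // Move each layer by a fraction of that distance.
        for (int i = 0; i < layers.Length; i++)
        {
            layers[i].position += Vector3.right * deltaX * speedFactors[i];
        }

        lastCamPos = cam.position;
    }
}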

Create machine learning pipelines using unsupervised AutoML [Tutorial]

Sunith Shetty
07 Aug 2018
11 min read
AutoML uses unsupervised algorithms for performing an automated process of algorithm selection, hyperparameter tuning, iterative modeling, and model assessment. When your dataset doesn't have a target variable, you can use clustering algorithms to explore it, based on different characteristics. These algorithms group examples together so that each group will have examples as similar as possible to each other, but dissimilar to examples in other groups.

Since you mostly don't have labels when you are performing such analysis, there is a performance metric that you can use to examine the quality of the resulting separation found by the algorithm. It is called the Silhouette Coefficient. The Silhouette Coefficient will help you to understand two things:

Cohesion: Similarity within clusters
Separation: Dissimilarity among clusters

It will give you a value between 1 and -1, with values close to 1 indicating well-formed clusters. Clustering algorithms are used to tackle many different tasks, such as finding similar users, songs, or images, detecting key trends and changes in patterns, and understanding community structures in social networks.

This tutorial deals with using unsupervised machine learning algorithms for creating machine learning pipelines. The code files for this article are available on Github. This article is an excerpt from a book written by Sibanjan Das and Umit Mert Cakmak titled Hands-On Automated Machine Learning.

Commonly used clustering algorithms

There are two types of commonly used clustering algorithms: distance-based and probabilistic models. For example, k-means and Density-Based Spatial Clustering of Applications with Noise (DBSCAN) are distance-based algorithms, whereas the Gaussian mixture model is probabilistic. Distance-based algorithms may use a variety of distance measures, where Euclidean distance metrics are usually used. Probabilistic algorithms will assume that there is a generative process with a mixture of probability distributions with unknown parameters, and the goal is to calculate these parameters from the data.

Since there are many clustering algorithms, picking the right one depends on the characteristics of your data. For example, k-means works with centroids of clusters, and this requires the clusters in your data to be evenly sized and convexly shaped. This means that k-means will not work well on elongated clusters or irregularly shaped manifolds. When the clusters in your data are not evenly sized or convexly shaped, you may want to use DBSCAN to cluster areas of any shape.

Knowing a thing or two about your data will bring you closer to finding the right algorithms, but what if you don't know much about your data? Many times, when you are performing exploratory analysis, it might be hard to get your head around what's happening. If you find yourself in this kind of situation, an automated unsupervised ML pipeline can help you to understand the characteristics of your data better. Be careful when you perform this kind of analysis, though; the actions you take later will be driven by the results you see, and this could quickly send you down the wrong path if you are not cautious.
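As a quick, standalone illustration of the Silhouette Coefficient before the chapter builds its full pipeline below, the following minimal sketch (not part of the book's code) fits k-means on synthetic blob data and prints the score; well-separated blobs give a value close to 1.

from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# three well-separated synthetic clusters
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.5, random_state=0)

# cluster the points and score the separation
labels = KMeans(n_clusters=3, random_state=0).fit_predict(X)
print("Silhouette Coefficient: %0.3f" % silhouette_score(X, labels))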
Creating sample datasets with sklearn

In sklearn, there are some useful ways to create sample datasets for testing algorithms:

# Importing necessary libraries for visualization
import matplotlib.pyplot as plt
import seaborn as sns

# Set context helps you to adjust things like label size, lines and various elements
# Try "notebook", "talk" or "paper" instead of "poster" to see how it changes
sns.set_context('poster')

# set_color_codes will affect how colors such as 'r', 'b', 'g' will be interpreted
sns.set_color_codes()

# Plot keyword arguments will allow you to set things like size or line width to be used in charts.
plot_kwargs = {'s': 10, 'linewidths': 0.1}

import numpy as np
import pandas as pd

# Pprint will better output your variables in console for readability
from pprint import pprint

# Creating sample dataset using sklearn samples_generator
from sklearn.datasets.samples_generator import make_blobs
from sklearn.preprocessing import StandardScaler

# Make blobs will generate isotropic Gaussian blobs
# You can play with arguments like center of blobs, cluster standard deviation
centers = [[2, 1], [-1.5, -1], [1, -1], [-2, 2]]
cluster_std = [0.1, 0.1, 0.1, 0.1]

# Sample data will help you to see your algorithms behavior
X, y = make_blobs(n_samples=1000, centers=centers, cluster_std=cluster_std, random_state=53)

# Plot generated sample data
plt.scatter(X[:, 0], X[:, 1], **plot_kwargs)
plt.show()

We get a plot of four tight blobs from the preceding code. cluster_std will affect the amount of dispersion. Change it to [0.4, 0.5, 0.6, 0.5] and try again:

cluster_std = [0.4, 0.5, 0.6, 0.5]

X, y = make_blobs(n_samples=1000, centers=centers, cluster_std=cluster_std, random_state=53)
plt.scatter(X[:, 0], X[:, 1], **plot_kwargs)
plt.show()

We get a more dispersed plot from the preceding code. Now it looks more realistic!

Let's write a small class with helpful methods to create unsupervised experiments. First, you will use the fit_predict method to apply one or more clustering algorithms on the sample dataset:

class Unsupervised_AutoML:

    def __init__(self, estimators=None, transformers=None):
        self.estimators = estimators
        self.transformers = transformers
        pass

The Unsupervised_AutoML class will initialize with a set of estimators and transformers. The second class method will be fit_predict:

    def fit_predict(self, X, y=None):
        """
        fit_predict will train given estimator(s) and predict cluster membership for each sample
        """

        # This dictionary will hold predictions for each estimator
        predictions = []
        performance_metrics = {}

        for estimator in self.estimators:
            labels = estimator['estimator'](*estimator['args'], **estimator['kwargs']).fit_predict(X)
            estimator['estimator'].n_clusters_ = len(np.unique(labels))
            metrics = self._get_cluster_metrics(estimator['estimator'].__name__, estimator['estimator'].n_clusters_, X, labels, y)
            predictions.append({estimator['estimator'].__name__: labels})
            performance_metrics[estimator['estimator'].__name__] = metrics

        self.predictions = predictions
        self.performance_metrics = performance_metrics

        return predictions, performance_metrics

The fit_predict method uses the _get_cluster_metrics method to get the performance metrics, which is defined in the following code block:

    # Printing cluster metrics for given arguments
    def _get_cluster_metrics(self, name, n_clusters_, X, pred_labels, true_labels=None):
        from sklearn.metrics import homogeneity_score, completeness_score, v_measure_score, adjusted_rand_score, adjusted_mutual_info_score, silhouette_score

        print("""################## %s metrics #####################""" % name)
        if len(np.unique(pred_labels)) >= 2:

            silh_co = silhouette_score(X, pred_labels)

            if true_labels is not None:

                h_score = homogeneity_score(true_labels, pred_labels)
                c_score = completeness_score(true_labels, pred_labels)
                vm_score = v_measure_score(true_labels, pred_labels)
                adj_r_score = adjusted_rand_score(true_labels, pred_labels)
                adj_mut_info_score = adjusted_mutual_info_score(true_labels, pred_labels)

                metrics = {"Silhouette Coefficient": silh_co,
                           "Estimated number of clusters": n_clusters_,
                           "Homogeneity": h_score,
                           "Completeness": c_score,
                           "V-measure": vm_score,
                           "Adjusted Rand Index": adj_r_score,
                           "Adjusted Mutual Information": adj_mut_info_score}

                for k, v in metrics.items():
                    print("\t%s: %0.3f" % (k, v))

                return metrics

            metrics = {"Silhouette Coefficient": silh_co,
                       "Estimated number of clusters": n_clusters_}

            for k, v in metrics.items():
                print("\t%s: %0.3f" % (k, v))

            return metrics

        else:
            print("\t# of predicted labels is {}, can not produce metrics.\n".format(np.unique(pred_labels)))

The _get_cluster_metrics method calculates metrics such as homogeneity_score, completeness_score, v_measure_score, adjusted_rand_score, adjusted_mutual_info_score, and silhouette_score. These metrics will help you to assess how well the clusters are separated and also measure the similarity within and between clusters.

K-means algorithm in action

You can now apply the KMeans algorithm to see how it works:

from sklearn.cluster import KMeans

estimators = [{'estimator': KMeans, 'args': (), 'kwargs': {'n_clusters': 4}}]

unsupervised_learner = Unsupervised_AutoML(estimators)

You can see the estimators:

unsupervised_learner.estimators

These will output the following:

[{'args': (),
  'estimator': sklearn.cluster.k_means_.KMeans,
  'kwargs': {'n_clusters': 4}}]

You can now invoke fit_predict to obtain predictions and performance_metrics:

predictions, performance_metrics = unsupervised_learner.fit_predict(X, y)

Metrics will be written to the console:

################## KMeans metrics #####################
Silhouette Coefficient: 0.631
Estimated number of clusters: 4.000
Homogeneity: 0.951
Completeness: 0.951
V-measure: 0.951
Adjusted Rand Index: 0.966
Adjusted Mutual Information: 0.950

You can always print metrics later:

pprint(performance_metrics)

This will output the name of the estimator and its metrics:

{'KMeans': {'Silhouette Coefficient': 0.9280431207593165, 'Estimated number of clusters': 4, 'Homogeneity': 1.0, 'Completeness': 1.0, 'V-measure': 1.0, 'Adjusted Rand Index': 1.0, 'Adjusted Mutual Information': 1.0}}

Let's add another class method to plot the clusters of the given estimator and predicted labels:

    # plot_clusters will visualize the clusters given predicted labels
    def plot_clusters(self, estimator, X, labels, plot_kwargs):

        palette = sns.color_palette('deep', np.unique(labels).max() + 1)
        colors = [palette[x] if x >= 0 else (0.0, 0.0, 0.0) for x in labels]

        plt.scatter(X[:, 0], X[:, 1], c=colors, **plot_kwargs)
        plt.title('{} Clusters'.format(str(estimator.__name__)), fontsize=14)
        plt.show()

Let's see the usage:

plot_kwargs = {'s': 12, 'linewidths': 0.1}

unsupervised_learner.plot_clusters(KMeans, X, unsupervised_learner.predictions[0]['KMeans'], plot_kwargs)

You get the clusters plot from the preceding block. In this example, clusters are evenly sized and clearly separated from each other but, when you are doing this kind of exploratory analysis, you should try different hyperparameters and examine the results. You will write a wrapper function later to apply a list of clustering algorithms and their hyperparameters to examine the results. For now, let's see one more example with k-means where it does not work well.

When clusters in your dataset have different statistical properties, such as differences in variance, k-means will fail to identify the clusters correctly:

X, y = make_blobs(n_samples=2000, centers=5, cluster_std=[1.7, 0.6, 0.8, 1.0, 1.2], random_state=220)

# Plot sample data
plt.scatter(X[:, 0], X[:, 1], **plot_kwargs)
plt.show()

Although this sample dataset is generated with five centers, it's not that obvious from the plot, and there might be four clusters as well:

from sklearn.cluster import KMeans

estimators = [{'estimator': KMeans, 'args': (), 'kwargs': {'n_clusters': 4}}]

unsupervised_learner = Unsupervised_AutoML(estimators)

predictions, performance_metrics = unsupervised_learner.fit_predict(X, y)

Metrics in the console are as follows:

################## KMeans metrics #####################
Silhouette Coefficient: 0.549
Estimated number of clusters: 4.000
Homogeneity: 0.729
Completeness: 0.873
V-measure: 0.795
Adjusted Rand Index: 0.702
Adjusted Mutual Information: 0.729

KMeans clusters are plotted as follows:

plot_kwargs = {'s': 12, 'linewidths': 0.1}

unsupervised_learner.plot_clusters(KMeans, X, unsupervised_learner.predictions[0]['KMeans'], plot_kwargs)

In the resulting plot, points between the red (dark gray) and bottom-green (light gray) clusters seem to form one big cluster. K-means calculates the centroid based on the mean value of the points surrounding that centroid. Here, you need a different approach.

The DBSCAN algorithm in action

DBSCAN is one of the clustering algorithms that can deal with non-flat geometry and uneven cluster sizes. Let's see what it can do:

from sklearn.cluster import DBSCAN

estimators = [{'estimator': DBSCAN, 'args': (), 'kwargs': {'eps': 0.5}}]

unsupervised_learner = Unsupervised_AutoML(estimators)

predictions, performance_metrics = unsupervised_learner.fit_predict(X, y)

Metrics in the console are as follows:

################## DBSCAN metrics #####################
Silhouette Coefficient: 0.231
Estimated number of clusters: 12.000
Homogeneity: 0.794
Completeness: 0.800
V-measure: 0.797
Adjusted Rand Index: 0.737
Adjusted Mutual Information: 0.792

DBSCAN clusters are plotted as follows:

plot_kwargs = {'s': 12, 'linewidths': 0.1}

unsupervised_learner.plot_clusters(DBSCAN, X, unsupervised_learner.predictions[0]['DBSCAN'], plot_kwargs)

In the resulting plot, the conflict between the red (dark gray) and bottom-green (light gray) clusters from the k-means case seems to be gone, but what's interesting here is that some small clusters appeared and some points were not assigned to any cluster based on their distance. DBSCAN has the eps (epsilon) hyperparameter, which is related to the proximity needed for points to be in the same neighborhood; you can play with that parameter to see how the algorithm behaves.

When you are doing this kind of exploratory analysis, where you don't know much about the data, visual clues are always important, because metrics can mislead you, since not every clustering algorithm can be assessed using similar metrics.

To summarize, we looked at many different aspects of choosing a suitable ML pipeline for a given problem, and you gained a better understanding of how unsupervised algorithms may suit your needs for a given problem. To have a clearer understanding of the different aspects of automated machine learning, and how to incorporate automation tasks using practical datasets, check out the book Hands-On Automated Machine Learning.
Read more:
Google announces Cloud TPUs on the Cloud Machine Learning Engine (ML Engine)
How machine learning as a service is transforming cloud
Selecting Statistical-based Features in Machine Learning application


Build Java EE containers using Docker [Tutorial]

Aaron Lazar
07 Aug 2018
7 min read
Containers are changing the way we build and deliver software. They are also the essential glue for DevOps and the way to take CI/CD to another level. Put them together and you will have one of the most powerful environments in IT. But can Java EE take advantage of it? Of course! If an application server is an abstraction of Java EE applications, containers are an abstraction of the server, and once you have them built into a standard such as Docker, you have the power to use such tools to manage an application server. This article is an extract from the book Java EE 8 Cookbook, authored by Elder Moraes. This article will show you how to put your Java EE application inside a container. Since day one, Java EE has been based on containers. If you doubt it, just have a look at this diagram: Java EE architecture: https://docs.oracle.com/javaee/6/tutorial/doc/bnacj.html It belongs to Oracle's official documentation for Java EE 6 and, actually, has been much the same architecture since the times of Sun. If you pay attention, you will notice that there are different containers: a web container, an EJB container, and an application client container. In this architecture, it means that the applications developed with those APIs will rely on many features and services provided by the container. When we take the Java EE application server and put it inside a Docker container, we are doing the same thing— it is relying on some of the features and services provided by the Docker environment. This recipe will show you how to deliver a Java EE application in a container bundle, which is called an appliance. Installing Docker First, of course, you need the Docker platform installed in your environment. There are plenty of options, so I suggest you check this link and get more details: And if you are not familiar with Docker commands, I recommend you have a look at this beautiful cheat sheet: You'll also need to create an account at Docker Hub so you can store your own images. Check it out. It's free for public images. Building Java EE Container To build your Java EE container, you'll first need a Docker image. To build it, you'll need a Dockerfile such as this: FROM openjdk:8-jdk ENV GLASSFISH_HOME /usr/local/glassfish ENV PATH ${GLASSFISH_HOME}/bin:$PATH ENV GLASSFISH_PKG latest-glassfish.zip ENV GLASSFISH_URL https://download.oracle.com/glassfish/5.0/nightly/latest-glassfish.zip RUN mkdir -p ${GLASSFISH_HOME} WORKDIR ${GLASSFISH_HOME} RUN set -x && curl -fSL ${GLASSFISH_URL} -o ${GLASSFISH_PKG} && unzip -o $GLASSFISH_PKG && rm -f $GLASSFISH_PKG && mv glassfish5/* ${GLASSFISH_HOME} && rm -Rf glassfish5 RUN addgroup glassfish_grp && adduser --system glassfish && usermod -G glassfish_grp glassfish && chown -R glassfish:glassfish_grp ${GLASSFISH_HOME} && chmod -R 777 ${GLASSFISH_HOME} COPY docker-entrypoint.sh / RUN chmod +x /docker-entrypoint.sh USER glassfish ENTRYPOINT ["/docker-entrypoint.sh"] EXPOSE 4848 8080 8181 CMD ["asadmin", "start-domain", "-v"] This image will be our base image from which we will construct other images in this chapter. Now we need to build it: docker build -t eldermoraes/gf-javaee-jdk8 . 
Go ahead and push it to your Docker Registry at Docker Hub: docker push eldermoraes/gf-javaee-jdk8 Now you can create another image by customizing the previous one, and then put your app on it: FROM eldermoraes/gf-javaee-jdk8 ENV DEPLOYMENT_DIR ${GLASSFISH_HOME}/glassfish/domains/domain1/autodeploy/ COPY app.war ${DEPLOYMENT_DIR} In the same folder, we have a Java EE application file (app.war) that will be deployed inside the container. Check the See also section to download all the files. Once you save your Dockerfile, you can build your image: docker build -t eldermoraes/gf-javaee-cookbook . Now you can create the container: docker run -d --name gf-javaee-cookbook -h gf-javaee-cookbook -p 80:8080 -p 4848:4848 -p 8686:8686 -p 8009:8009 -p 8181:8181 eldermoraes/gf-javaee-cookbook Wait a few seconds and open this URL in your browser: http://localhost/app How to work with Dockerfile Let's understand our first Dockerfile: FROM openjdk:8-jdk This FROM keyword will ask Docker to pull the openjdk:8-jdk image, but what does it mean? It means that there's a registry somewhere where your Docker will find prebuilt images. If there's no image registry in your local environment, it will search for it in Docker Hub, the official and public Docker registry in the cloud. And when you say that you are using a pre-built image, it means that you don't need to build, in our case, the whole Linux container from scratch. There's already a template that you can rely on: ENV GLASSFISH_HOME /usr/local/glassfish ENV PATH ${GLASSFISH_HOME}/bin:$PATH ENV GLASSFISH_PKG latest-glassfish.zip ENV GLASSFISH_URL https://download.oracle.com/glassfish/5.0/nightly/latest-glassfish.zip RUN mkdir -p ${GLASSFISH_HOME} WORKDIR ${GLASSFISH_HOME} Here are just some environment variables to help with the coding. RUN set -x && curl -fSL ${GLASSFISH_URL} -o ${GLASSFISH_PKG} && unzip -o $GLASSFISH_PKG && rm -f $GLASSFISH_PKG && mv glassfish5/* ${GLASSFISH_HOME} && rm -Rf glassfish5 The RUN clause in Dockerfiles execute some bash commands inside the container when it has been created. Basically, what is happening here is that GlassFish is being downloaded and then prepared in the container: RUN addgroup glassfish_grp && adduser --system glassfish && usermod -G glassfish_grp glassfish && chown -R glassfish:glassfish_grp ${GLASSFISH_HOME} && chmod -R 777 ${GLASSFISH_HOME} For safety, we define the user that will hold the permissions for GlassFish files and processes: COPY docker-entrypoint.sh / RUN chmod +x /docker-entrypoint.sh Here we are including a bash script inside the container to perform some GlassFish administrative tasks: #!/bin/bash if [[ -z $ADMIN_PASSWORD ]]; then ADMIN_PASSWORD=$(date| md5sum | fold -w 8 | head -n 1) echo "##########GENERATED ADMIN PASSWORD: $ADMIN_PASSWORD ##########" fi echo "AS_ADMIN_PASSWORD=" > /tmp/glassfishpwd echo "AS_ADMIN_NEWPASSWORD=${ADMIN_PASSWORD}" >> /tmp/glassfishpwd asadmin --user=admin --passwordfile=/tmp/glassfishpwd change-admin-password --domain_name domain1 asadmin start-domain echo "AS_ADMIN_PASSWORD=${ADMIN_PASSWORD}" > /tmp/glassfishpwd asadmin --user=admin --passwordfile=/tmp/glassfishpwd enable-secure-admin asadmin --user=admin stop-domain rm /tmp/glassfishpwd exec "$@" After copying the bash file into the container, we go to the final block: USER glassfish ENTRYPOINT ["/docker-entrypoint.sh"] EXPOSE 4848 8080 8181 CMD ["asadmin", "start-domain", "-v"] The USER clause defines the user that will be used from this point in the file. 
It's great because from there, all the tasks will be done by the glassfish user. The ENTRYPOINT clause will execute the docker-entrypoint.sh script. The EXPOSE clause will define the ports that will be available for containers that use this image. And finally, the CMD clause will call the GlassFish script that will initialize the container. Now let's understand our second Dockerfile: FROM eldermoraes/gf-javaee-jdk8 We need to take into account the same considerations about the prebuilt image, but now the image was made by you. Congratulations! ENV DEPLOYMENT_DIR ${GLASSFISH_HOME}/glassfish/domains/domain1/autodeploy/ Here, we are building an environment variable to help with the deployment. It's done in the same way as for Linux systems: COPY app.war ${DEPLOYMENT_DIR} This COPY command will literally copy the app.war file to the folder defined in the DEPLOYMENT_DIR environment variable. From here, you are ready to build an image and create a container. The image builder is self-explanatory: docker build -t eldermoraes/gf-javaee-cookbook . Let's check the docker run command: docker run -d --name gf-javaee-cookbook -h gf-javaee-cookbook -p 80:8080 -p 4848:4848 -p 8686:8686 -p 8009:8009 -p 8181:8181 eldermoraes/gf-javaee-cookbook If we break it down, this is what the various elements of the command mean: -h: Defines the host name of the container. -p: Defines which ports will be exposed and how it will be done. It is useful, for example, when more than one container is using the same port by default—you just use them differently. eldermoraes/gf-javaee-cookbook: The reference to the image you just built. So now you've successfully built a container for your Java EE application, in Docker. If you found this tutorial helpful and would like to learn more, head over to the Packt store and get the book Java EE 8 Cookbook, authored by Elder Moraes. Oracle announces a new pricing structure for Java Design a RESTful web API with Java [Tutorial] How to convert Java code into Kotlin
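To sanity check the appliance after the docker run command, a couple of plain Docker commands are enough. The following is a minimal sketch that assumes the container name used above (gf-javaee-cookbook) and the /app context root; adjust both if your names differ:

docker logs gf-javaee-cookbook | grep "GENERATED ADMIN PASSWORD"   # grab the generated admin password from the entrypoint script
curl -i http://localhost/app                                       # the application should answer over the mapped port 80
docker stop gf-javaee-cookbook && docker rm gf-javaee-cookbook     # clean up when you are done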


Write your first Gradle build script to start automating your project [Tutorial]

Savia Lobo
06 Aug 2018
15 min read
When we develop software, we write, compile, test, package, and finally, distribute the code. We can automate these steps by using a build system. The big advantage is that we have a repeatable sequence of steps. The build system will always follow the steps that we have defined, so we can concentrate on writing the actual code and don't have to worry about the other steps. Gradle is one such build system. It is a tool for build automation. With Gradle, we can automate compiling, testing, packaging, and deployment of our software or any other types of projects. Gradle is flexible, but has sensible defaults for most projects. This means that we can rely on the defaults if we don't want something special, but we can still use the flexibility to adapt a build to certain custom needs. Gradle is already used by large open source projects such as Spring, Hibernate, and Grails. Enterprise companies such as LinkedIn and Netflix also use Gradle. In this article, we will explain what Gradle is and how to use it in our development projects. This Gradle tutorial is an excerpt taken from 'Gradle Effective Implementations Guide - Second Edition', written by Hubert Klein Ikkink. Take a look at some of Gradle's features.

Declarative builds and convention over configuration

Gradle uses a Domain Specific Language (DSL) based on Groovy to declare builds. The DSL provides a flexible language that can be extended by us. As the DSL is based on Groovy, we can write Groovy code to describe a build and use the power and expressiveness of the Groovy language. Groovy is a language for the Java Virtual Machine (JVM), like Java and Scala. Groovy makes it easy to work with collections, has closures, and a lot of useful features. The syntax is closely related to the Java syntax. In fact, we could write a Groovy class file with Java syntax and it will compile. However, using the Groovy syntax makes it easier to express the code intent and we need less boilerplate code than with Java. To get the most out of Gradle, it is best to also learn the basics of the Groovy language, but it is not necessary in order to start writing Gradle scripts. Gradle is designed to be a build language and not a rigid framework. The Gradle core itself is written in Java and Groovy. To extend Gradle, we can use Java and Groovy to write our custom code. We can even write our custom code in Scala if we want to. Gradle provides support for Java, Groovy, Scala, web, and OSGi projects out of the box. These projects have sensible convention-over-configuration settings that we probably already use ourselves. However, we have the flexibility to change these configuration settings if required for our projects.

Support for Ant Tasks and Maven repositories

Gradle supports Ant Tasks and projects. We can import an Ant build and reuse all the tasks. However, we can also write Gradle tasks dependent on Ant Tasks. The integration also applies for properties, paths, and so on. Maven and Ivy repositories are supported to publish or fetch dependencies. So, we can continue to use any repository infrastructure that we already have.

Incremental builds in Gradle

With Gradle, we have incremental builds. This means the tasks in a build are only executed if necessary. For example, a task to compile source code will first check whether the sources have changed since the last execution of the task. If the sources have changed, the task is executed; but if the sources haven't changed, the execution of the task is skipped and the task is marked as being up to date.
Gradle supports this mechanism for a lot of provided tasks. However, we can also use this for tasks that we write ourselves. Multi-project builds Gradle has great support for multi-project builds. A project can simply be dependent on other projects or be a dependency of other projects. We can define a graph of dependencies among projects, and Gradle can resolve these dependencies for us. We have the flexibility to define our project layout as we want. Gradle has support for partial builds. This means that Gradle will figure out whether a project, which our project depends on, needs to be rebuild or not. If the project needs rebuilding, Gradle will do this before building our own project. Gradle Wrapper The Gradle Wrapper allows us to execute Gradle builds even if Gradle is not installed on a computer. This is a great way to distribute source code and provide the build system with it so that the source code can be built. Also in an enterprise environment, we can have a zero-administration way for client computers to build the software. We can use the wrapper to enforce a certain Gradle version to be used so that the whole team is using the same version. We can also update the Gradle version for the wrapper, and the whole team will use the newer version as the wrapper code is checked in to version control. Free and open source Gradle is an open source project and it is licensed under the Apache License (ASL). Getting started with Gradle In this section, we will download and install Gradle before writing our first Gradle build script. Before we install Gradle, we must make sure that we have a Java Development SE Kit (JDK) installed on our computer. Gradle requires JDK 6 or higher. Gradle will use the JDK found on the path of our computer. We can check this by running the following command on the command line: $ java -version Although Gradle uses Groovy, we don't have to install Groovy ourselves. Gradle bundles the Groovy libraries with the distribution and will ignore a Groovy installation that is already available on our computer. Gradle is available on the Gradle website at http://www.gradle.org/downloads. From this page, we can download the latest release of Gradle. We can also download an older version if we want. We can choose among three different distributions to download. We can download the complete Gradle distribution with binaries, sources, and documentation; or we can only download the binaries; or we can only download the sources. To get started with Gradle, we will download the standard distribution with the binaries, sources, and documentation. At the time of writing this book, the current release is 2.12. Installing Gradle Gradle is packaged as a ZIP file for one of the three distributions. So when we have downloaded the Gradle full-distribution ZIP file, we must unzip the file. After unpacking the ZIP file we have: Binaries in the bin directory Documentation with the user guide, Groovy DSL, and API documentation in the doc directory A lot of samples in the samples directory Source code of Gradle in the src directory Supporting libraries for Gradle in the lib directory A directory named init.d, where we can store Gradle scripts that need to be executed each time we run Gradle Once we have unpacked the Gradle distribution to a directory, we can open a command prompt. We go to the directory where we have installed Gradle. 
To check our installation, we run gradle -v and get an output with the used JDK and library versions of Gradle, as follows:

$ gradle -v
------------------------------------------------------------
Gradle 2.12
------------------------------------------------------------
Build time: 2016-03-14 08:32:03 UTC
Build number: none
Revision: b29fbb64ad6b068cb3f05f7e40dc670472129bc0
Groovy: 2.4.4
Ant: Apache Ant(TM) version 1.9.3 compiled on December 23 2013
JVM: 1.8.0_66 (Oracle Corporation 25.66-b17)
OS: Mac OS X 10.11.3 x86_64

Here, we can check whether the displayed version is the same as the distribution version that we have downloaded from the Gradle website. To run Gradle on our computer, we only have to add $GRADLE_HOME/bin to our PATH environment variable. Once we have done that, we can run the gradle command from every directory on our computer. If we want to add JVM options to Gradle, we can use the JAVA_OPTS and GRADLE_OPTS environment variables. JAVA_OPTS is a commonly used environment variable name to pass extra parameters to a Java application. Gradle also uses the GRADLE_OPTS environment variable to pass extra arguments to Gradle. Both environment variables are used, so we can even set them both with different values. This is mostly used to set, for example, an HTTP proxy or extra memory options.

Installing with SDKMAN!

Software Development Kit Manager (SDKMAN!) is a tool to manage versions of software development kits such as Gradle. Once we have installed SDKMAN!, we can simply use the install command and SDKMAN! downloads Gradle and makes sure that it is added to our $PATH variable. SDKMAN! is available for Unix-like systems, such as Linux, Mac OSX, and Cygwin (on Windows). First, we need to install SDKMAN! with the following command in our shell:

$ curl -s get.sdkman.io | bash

Next, we can install Gradle with the install command:

$ sdk install gradle
Downloading: gradle 2.12
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 354 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 42.6M 100 42.6M 0 0 1982k 0 0:00:22 0:00:22 --:--:-- 3872k
Installing: gradle 2.12
Done installing!
Do you want gradle 2.12 to be set as default? (Y/n): Y
Setting gradle 2.12 as default.

If we have multiple versions of Gradle, it is very easy to switch between versions with the use command:

$ sdk use gradle 2.12
Using gradle version 2.12 in this shell.

Writing our first build script

We now have a running Gradle installation. It is time to create our first Gradle build script. Gradle uses the concept of projects to define a related set of tasks. A Gradle build can have one or more projects. A project is a very broad concept in Gradle, but it is mostly a set of components that we want to build for our application. A project has one or more tasks. A task is a unit of work that needs to be executed by the build. Examples of tasks are compiling source code, packaging class files into a JAR file, running tests, and deploying the application. We now know a task is a part of a project, so to create our first task, we also create our first Gradle project. We use the gradle command to run a build. Gradle will look for a file named build.gradle in the current directory. This file is the build script for our project. We define the tasks that need to be executed in this build script file. We create a new build.gradle file and open this in a text editor.
We type the following code to define our first Gradle task: task helloWorld << { println 'Hello world.' } With this code, we will define a helloWorld task. The task will print the words Hello world. to the console. The println is a Groovy method to print text to the console and is basically a shorthand version of the System.out.println Java method. The code between the brackets is a closure. A closure is a code block that can be assigned to a variable or passed to a method. Java doesn't support closures, but Groovy does. As Gradle uses Groovy to define the build scripts, we can use closures in our build scripts. The << syntax is, technically speaking, an operator shorthand for the leftShift()method, which actually means add to. Therefore, here we are defining that we want to add the closure (with the println 'Hello world' statement) to our task with the helloWorld name. First, we save build.gradle, and with the gradle helloWorld command, we execute our build: $ gradle helloWorld :helloWorld Hello world. BUILD SUCCESSFUL Total time: 2.384 secs This build could be faster, please consider using the Gradle Daemon: https://docs.gradle.org/2.12/userguide/gradle_daemon.html The first line of output shows our line Hello world. Gradle adds some more output such as the fact that the build was successful and the total time of the build. As Gradle runs in the JVM, every time we run a Gradle build, the JVM must be also started. The last line of the output shows a tip that we can use the Gradle daemon to run our builds. The Gradle daemon keeps Gradle running in memory so that we don't get the penalty of starting the JVM each time we run Gradle. This drastically speeds up the execution of tasks. We can run the same build again, but only with the output of our task using the Gradle --quiet or -q command-line option. Gradle will suppress all messages except error messages. When we use the --quiet (or -q) option, we get the following output: $ gradle --quiet helloWorld Hello world. Understanding the Gradle graphical user interface Finally, we take a look at the --gui command-line option. With this option, we start a graphical shell for our Gradle builds. Until now, we used the command line to start a task. With the Gradle GUI, we have a graphical overview of the tasks in a project and we can execute them by simply clicking on the mouse. To start the GUI, we invoke the following command: $ gradle --gui A window is opened with a graphical overview of our task tree. We only have one task that one is shown in the task tree, as we can see in the following screenshot: The output of running a task is shown at the bottom of the window. When we start the GUI for the first time, the tasks task is executed and we see the output in the window. Task tree The Task Tree tab shows projects and tasks found in our build project. We can execute a task by double-clicking on the task name. By default, all the tasks are shown, but we can apply a filter to show or hide certain projects and tasks. The Edit filter button opens a new dialog window where we can define the tasks and properties that are a part of the filter. The Toggle filter button makes the filter active or inactive. We can also right-click on the project and task names. This opens a context menu where we can choose to execute the task, add it to the favorites, hide it (adds it to the filter), or edit the build file. If we have associated the .gradle extension to a text editor in our operating system, then the editor is opened with the content of the build script. 
These options can be seen in the following screenshot:

Favorites

The Favorites tab stores tasks we want to execute regularly. We can add a task by right-clicking on the task in the Task Tree tab and selecting the Add To Favorites menu option, or if we have opened the Favorites tab, we can select the Add button and manually enter the project and task name that we want to add to our favorites list. We can see the Add Favorite dialog window in the following screenshot:

Command line

On the Command Line tab, we can enter any Gradle command that we would normally enter on the command prompt. The command can be added to Favorites as well. We see the Command Line tab contents in the following image:

Setup

The last tab is the Setup tab. Here, we can change the project directory, which is set to the current directory by default. With the GUI, we can select the logging level from the Log Level select box with the different log levels. We can choose debug, info, lifecycle, and error as log levels. The error log level only shows errors and is the least verbose, while debug is the most verbose log level. The lifecycle log level is the default log level. Here, we can also set how detailed the exception stack trace information should be. In the Stack Trace Output section, we can choose from the following three options:

Exceptions Only: This is for only showing the exceptions when they occur, which is the default value
Standard Stack Trace (-s): This is for showing more stack trace information for the exceptions
Full Stack Trace (-S): This is for the most verbose stack trace information for exceptions

If we enable the Only Show Output When Error Occurs option, then we only get output from the build process if the build fails. Otherwise, we don't get any output. Finally, we can define a different way to start Gradle for the build with the Use Custom Gradle Executor option. For example, we can define a different batch or script file with extra setup information to run the build process. The following screenshot shows the Setup tab page and all the options that we can set:

We learned to install Gradle on our computers and write our first Gradle build script with a simple task. We also looked at the Gradle GUI and how we can use it to run Gradle build scripts. If you've enjoyed this post, do check out this book 'Gradle Effective Implementations Guide - Second Edition' to know more about how to use Gradle for Java projects.

That '70s language: AWK programming
26 new Java 9 enhancements you will love
Slow down to learn how to code faster
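A side note if you are on a newer Gradle version: the << (leftShift) notation used earlier was deprecated in later Gradle releases in favor of a doLast block. The following build.gradle sketch shows the same idea written with doLast plus a project property; the userName property name is just an illustrative choice, not something Gradle defines:

task greet {
    description = 'Prints a configurable greeting.'
    doLast {
        // -PuserName=<value> on the command line overrides the default
        def userName = project.hasProperty('userName') ? project.property('userName') : 'world'
        println "Hello, ${userName}."
    }
}

Running gradle -q greet -PuserName=Gradle should then print Hello, Gradle.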


Using statistical tools in Wireshark for packet analysis [Tutorial]

Vijin Boricha
06 Aug 2018
9 min read
One of Wireshark's strengths is its statistical tools. When using Wireshark, we have various types of tools, starting from the simple tools for listing end-nodes and conversations, to the more sophisticated tools such as flow and I/O graphs. In this article, we will look at the simple tools in Wireshark that provide us with basic network statistics i.e; who talks to whom over the network, what are the chatty devices, what packet sizes run over the network, and so on. To start statistics tools, start Wireshark, and choose Statistics from the main menu. This article is an excerpt from Network Analysis using Wireshark 2 Cookbook - Second Edition written by Nagendra Kumar Nainar, Yogesh Ramdoss, Yoram Orzach. Using the statistics for capture file properties menu In this recipe, we will learn how to get general information from the data that runs over the network. The capture file properties in Wireshark 2 replaces the summary menu in Wireshark 1. Start Wireshark, click on Statistics. How to do it... From the Statistics menu, choose Capture File Properties: What you will get is the Capture File Properties window (displayed in the following screenshot). As you can see in the following screenshot, we have the following: File: Provides file data, such as filename and path, length, and so on Time: Start time, end time, and duration of capture Capture: Hardware information for the PC that Wireshark is installed on Interfaces: Interface information—the interface registry identifier on the left, if capture filter is turned on, interface type and packet size limit Statistics: General capture statistics, including captured and displayed packets: How it works... This menu simply gives a summary of the filtered data properties and the capture statistics (average packets or bytes per second) when someone wants to learn the capture statistics. Using the statistics for protocol hierarchy menu In this recipe, we will learn how to get protocol hierarchy information of the data that runs over the network. Start Wireshark, click on Statistics. How to do it... From the Statistics menu, choose Protocol Hierarchy: What you will get is data about the protocol distribution in the captured file. You will get the protocol distribution of the captured data. The partial screenshot displayed here depicts the statistics of packets captured on a per-protocol basis: What you will get is the Protocol Hierarchy window: Protocol: The protocol name Percent Packets: The percentage of protocol packets from the total captured packets Packets: The number of protocol packets from the total captured packets Percent Bytes: The percentage of protocol bytes from the total captured packets Bytes: The number of protocol bytes from the total captured packets Bit/s: The bandwidth of this protocol, in relation to the capture time End Packets: The absolute number of packets of this protocol (for the highest protocol in the decode file) End Bytes: The absolute number of bytes of this protocol (for the highest protocol in the decode file) End Bit/s: The bandwidth of this protocol, relative to the capture packets and time (for the highest protocol in the decode file) The end columns counts when the protocol is the last protocol in the packet (that is, when the protocol comes at the end of the frame). These can be TCP packets with no payload (for example, SYN packets) which carry upper layer protocols. 
That is why you see a zero count for Ethernet, IPv4, and UDP end packets; there are no frames where those protocols are the last protocol in the frame. In this file example, we can see two interesting issues: We can see 1,842 packets of DHCPv6. If IPv6 and DHCPv6 are not required, disable it. We see more than 200,000 checkpoint high availability (CPHA) packets, 74.7% of which are sent over the network we monitored. These are synchronization packets that are sent between two firewalls working in a cluster, updating session tables between the firewalls. Such an amount of packets can severely influence performance. The solution for this problem is to configure a dedicated link between the firewalls so that session tables will not influence the network. How it works... Simply, it calculates statistics over the captured data. Some important things to notice: The percentage always refers to the same layer protocols. For example, in the following screenshot, we see that logical link control has 0.5% of the packets that run over Ethernet, IPv6 has 1.0%, IPv4 has 88.8% of the packets, ARP has 9.6% of the packets and even the old Cisco ISK has 0.1 %—a total of 100 % of the protocols over layer 2 Ethernet. On the other hand, we see that TCP has 75.70% of the data, and inside TCP, only 12.74% of the packets are HTTP, and that is almost it. This is because Wireshark counts only the packets with the HTTP headers. It doesn't count, for example, the ACK packets, data packets, and so on: Using the statistics for conversations menu In this recipe, we will learn how to get conversation information of the data that runs over the network. Start Wireshark, click on Statistics. How to do it... From the Statistics menu, choose Conversations: The following window will come up: You can choose between layer 2 Ethernet statistics, layer 3 IP statistics, or layer 4 TCP or UDP statistics. You can use this statistics tools for: On layer 2 (Ethernet): To find and isolate broadcast storms On layer 3/layer 4 (TCP/IP): To connect in parallel to the internet router port, and check who is loading the line to the ISP If you see that there is a lot of traffic going out to port 80 (HTTP) on a specific IP address on the internet, you just have to copy the address to your browser and find the website that is most popular with your users. If you don't get anything, simply go to a standard DNS resolution website (search Google for DNS lookup) and find out what is loading your internet line. For viewing IP addresses as names, you can check the Name resolution checkbox for name resolution (1 in the previous screenshot). For seeing the name resolution, you will first have to enable it by choosing View | Name Resolution | Enable for Network layer. You can also limit the conversations statistics to a display filter by checking the Limit to display filter checkbox (2). In this way, statistics will be presented on all the packets passing the display filter. A new feature in Wireshark version 2 is the graph feature, marked as (5) in the previous screenshot. When you choose a specific line in the TCP conversations statistics and click Graph..., it brings you to the TCP time/sequence (tcptrace) stream graph. To copy table data, click on the Copy button (3). In TCP or UDP, you can mark a specific line, and then click on the Follow Stream... button (4). This will define a display filter that will show you the specific stream of data. 
As you can see in the following screenshot, you can also right-click a line and choose to prepare or apply a filter, or to colorize a data stream:

We also see that, unlike the previous Wireshark version, in which we saw all types of protocols in the upper tabs, here we can choose which protocols to see; only the identified protocols are presented by default.

How it works...

A network conversation is the traffic between two specific endpoints. For example, an IP conversation is all the traffic between two IP addresses, and TCP conversations present all TCP connections.

Using the statistics for endpoints menu

In this recipe, we will learn how to get endpoint statistics information of the captured data. Start Wireshark and click on Statistics.

How to do it...

To view the endpoint statistics, follow these steps: From the Statistics menu, choose Endpoints: The following window will come up: In this window, you will be able to see layer 2, 3, and 4 endpoints, which are Ethernet, IP, and TCP or UDP. From the left-hand side of the window you can see (here is an example for the TCP tab):

Endpoint IP address and port number on this host
Total packets sent, and bytes received from and to this host
Packets to the host (Packets A → B) and bytes to the host (Bytes A → B)
Packets from the host (Packets B → A) and bytes from the host (Bytes B → A)
The Latitude and Longitude columns, applicable when GeoIP is configured

At the bottom of the window we have the following checkboxes:

Name resolution: Provides name resolution in cases where it is configured in the name resolution under the view menu.
Limit to display filter: To show statistics only for the display filter configured on the main window.
Copy: Copies the list values to the clipboard in CSV or YAML format.
Map: In cases where GeoIP is configured, shows the geographic information on the geographical map.

How it works...

Quite simply, it gives statistics on all the endpoints Wireshark has discovered. It can be any situation, such as the following:

Few Ethernet (even one) end nodes (that is, MAC addresses), with many IP end nodes (that is, IP addresses); this will be the case where, for example, we have a router that sends/receives packets from many remote devices.
Few IP end nodes with many TCP end nodes; this will be the case for many TCP connections per host. It can be a regular operation of a server with many connections, and it could also be a kind of attack that comes through the network (SYN attack).

We learned about Wireshark's basic statistics tools and how you can leverage those for network analysis. Get over 100 recipes to analyze and troubleshoot network problems using Wireshark 2 from this book Network Analysis using Wireshark 2 Cookbook - Second Edition.

What's new in Wireshark 2.6?
Wireshark for analyzing issues & malicious emails in POP, IMAP, and SMTP [Tutorial]
Capturing Wireshark Packets
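If you prefer working from the command line, the same statistics are also exposed by tshark, the terminal version of Wireshark, through its -z options. The following sketch assumes a capture file named capture.pcap; the three commands roughly mirror the Protocol Hierarchy, Conversations, and Endpoints windows described above:

tshark -r capture.pcap -q -z io,phs          # protocol hierarchy statistics
tshark -r capture.pcap -q -z conv,tcp        # TCP conversations
tshark -r capture.pcap -q -z endpoints,ip    # IP endpoints

The -q flag suppresses the per-packet output so that only the statistics tables are printed at the end.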

Reactive Extensions: Ways to create RxJS Observables [Tutorial]

Sugandha Lahoti
03 Aug 2018
10 min read
Reactive programming is a paradigm where the main focus is on working with an asynchronous data flow. Reactive Extensions allow you to work with asynchronous data streams. Reactive Extensions is an agnostic framework (this means it has implementations for several languages) and can be used on other platforms (such as RxJava, RxSwift, and so on). This makes learning Reactive Extensions (and functional reactive programming) really useful, as you can use it to improve your code on different platforms. One of these implementations, Reactive Extensions for JavaScript (RxJS), is a reactive streams library for JavaScript that can be used both in the browser and on the server side using Node.js. RxJS is a library for reactive programming using Observables. Observables provide support for passing messages between publishers and subscribers in your application. Observables offer significant benefits over other techniques for event handling, asynchronous programming, and handling multiple values. In this article, we will learn about the different types of observables in the context of RxJS and a few different ways of creating them. This article is an excerpt from the book Mastering Reactive JavaScript, written by Erich de Souza Oliveira. RxJS lets you have even more control over the source of your data. We will learn about the different flavors of Observables and how we can better control their life cycle.

Installing RxJS

RxJS is divided into modules. This way, you can create your own bundle with only the modules you're interested in. However, we will always use the official bundle with all the contents from RxJS; by doing so, we'll not have to worry about whether a certain module exists in our bundle or not. So, let's follow the steps described here to install RxJS. To install it on your server, just run the following command inside a node project:

npm i rx@4.1.0 --save

To add it to an HTML page, just paste the following code snippet inside your HTML:

<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/4.1.0/rx.all.js"> </script>

For those using other package managers, you can install RxJS from Bower or NuGet. If you're running inside a node program, you need to have the RxJS library in each JavaScript file that you want to use. To do this, add the following line to the beginning of your JavaScript file:

var Rx = require('rx');

The preceding line will be omitted in all examples, as we expect you to have added it before testing the sample code.

Creating an observable

Here we will see a list of methods to create an observable from common event sources. This is not an exhaustive list, but it contains the most important ones. You can see all the available methods on the RxJS GitHub page (https://github.com/Reactive-Extensions/RxJS).

Creating an observable from iterable objects

We can create an observable from iterable objects using the from() method. An iterable in JavaScript can be an array (or an array-like object) or other iterables added in ES6 (such as Set() and Map()). The from() method has the following signature:

Rx.Observable.from(iterable,[mapFunction],[context],[scheduler]);

Usually, you will pass only the first argument.
Others arguments are optional; you can see them here: iterable: This is the iterable object to be converted into an observable (can be an array, set, map, and so on) mapFunction: This is a function to be called for every element in the array to map it to a different value context: This object is to be used when mapFunction is provided scheduler: This is used to iterate the input sequence Don't worry if you don't know what a scheduler is. We will see how it changes our observables, but we will discuss it later in this chapter. For now, focus only on the other arguments of this function. Now let's see some examples on how we can create observables from iterables. To create an observable from an array, you can use the following code: Rx.Observable .from([0,1,2]) .subscribe((a)=>console.log(a)); This code prints the following output: 0 1 2 Now let's introduce a minor change in our code, to add the mapFunction argument to it, instead of creating an observable to propagate the elements of this array. Let's use mapFunction to propagate the double of each element of the following array: Rx.Observable .from([0,1,2], (a) => a*2) .subscribe((a)=>console.log(a)); This prints the following output: 0 2 4 We can also use this method to create an observable from an arguments object. To do this, we need to run from() in a function. This way, we can access the arguments object of the function. We can implement it with the following code: var observableFromArgumentsFactory = function(){ return Rx.Observable.from(arguments); }; observableFromArgumentsFactory(0,1,2) .subscribe((a)=>console.log(a)); If we run this code, we will see the following output: 0 1 2 One last usage of this method is to create an observable from either Set() or Map(). These data structures were added to ES6. We can implement it for a set as follows: var set = new Set([0,1,2]); Rx.Observable .from(set) .subscribe((a)=>console.log(a)); This code prints the following output: 0 1 2 We can also use a map as an argument for the from() method, as follows: var map = new Map([['key0',0],['key1',1],['key2',2]]); Rx.Observable .from(map) .subscribe((a)=>console.log(a)); This prints all the key-value tuples on this map: [ 'key0', 0 ] [ 'key1', 1 ] [ 'key2', 2 ] All observables created from this method are cold observables. As discussed before, this means it fires the same sequence for all the observers. To test this behavior, create an observable and add an Observer to it; add another observer to it after a second: var observable = Rx.Observable.from([0,1,2]); observable.subscribe((a)=>console.log('first subscriber receives => '+a)); setTimeout(()=>{ observable.subscribe((a)=>console.log('second subscriber receives => '+a)); },1000); If you run this code, you will see the following output in your console, showing both the subscribers receiving the same data as expected: first subscriber receives => 0 first subscriber receives => 1 first subscriber receives => 2 second subscriber receives => 0 second subscriber receives => 1 second subscriber receives => 2 Creating an observable from a sequence factory Now that we have discussed how to create an observable from a sequence, let's see how we can create an observable from a sequence factory. RxJS has a built-in method called generate() that lets you create an observable from an iteration (such as a for() loop). 
This method has the following signature: Rx.Observable.generate(initialState, conditionFunction, iterationFunction, resultFactory, [scheduler]); In this method, the only optional parameter is the last one. A brief description of all the parameters is as follows: initialState: This can be any object, it is the first object used in the iteration conditionFunction: This is a function with the condition to stop the iteration iterationFunction: This is a function to be used on each element to iterate resultFactory: This is a function whose return is passed to the sequence scheduler: This is an optional scheduler Before checking out an example code for this method, let's see some code that implements one of the most basic constructs in a program: a for() loop. This is used to generate an array from an initial value to a final value. We can produce this array with the following code: var resultArray=[]; for(var i=0;i < 3;i++){ resultArray.push(i) } console.log(resultArray); This code prints the following output: [0,1,2] When you create a for() loop, you basically give to it the following: an initial state (the first argument), the condition to stop the iteration (the second argument), how to iterate over the value (the third argument), and what to do with the value (block). Its usage is very similar to the generate() method. Let's do the same thing, but using the generate() method and creating an observable instead of an array: Rx.Observable.generate( 0, (i) => i<3, (i) => i+1, (i) => i ).subscribe((i) => console.log(i)); This code will print the following output: 0 1 2 Creating an observable using range () Another common source of data for observables are ranges. With the range() method, we can easily create an observable for a sequence of values in a range. The range() method has the following signature: Rx.Observable.range(first, count, [scheduler]); The last parameter in the following list is the only optional parameter in this method: first: This is the initial integer value in the sequence count: This is the number of sequential integers to be iterated from the beginning of the sequence scheduler: This is used to generate the values We can create an observable using a range with the following code: Rx.Observable .range(0, 4) .subscribe((i)=>console.log(i)); This prints the following output: 0 1 2 3 Creating an observable using period of time In the previous chapter, we discussed how to create timed sequences in bacon.js. In RxJS, we have two different methods to implement observables emitting values with a given interval. The first method is interval(). This method emits an infinite sequence of integers starting from one every x milliseconds; it has the following signature: Rx.Observable.interval(interval, [scheduler]); The interval parameter is mandatory, and the second argument is optional: interval: This is an integer number to be used as the interval between the values of this sequence scheduler: This is used to generate the values Run the following code: Rx.Observable .interval(1000) .subscribe((i)=> console.log(i)); You will see an output as follows; you will have to stop your program (hitting Ctrl+C) or it will keep sending events: 0 1 2 The interval() method sends the first value of the sequence after the given period of interval and keeps sending values after each interval. RxJS also has a method called timer(). This method lets you specify a due time to start the sequence or even generate an observable of only one value emitted after the due time has elapsed. 
It has the following signature: Rx.Observable.timer(dueTime, [interval], [scheduler]); Here are the parameters: dueTime: This can be a date object or an integer. If it is a date object, then it means it is the absolute time to start the sequence; if it is an integer, then it specifies the number of milliseconds to wait for before you could send the first element of the sequence. interval: This is an integer denoting the time between the elements. If it is not specified, it generates only one event. scheduler: This is used to produce the values. We can create an observable from the timer() method with the following code: Rx.Observable .timer(1000,500) .subscribe((i)=> console.log(i)); You will see an output that will be similar to the following; you will have to stop your program or it will keep sending events: 0 1 2 We can also use this method to generate only one value and finish the sequence. We can do this omitting the interval parameter, as shown in the following code: Rx.Observable .timer(1000) .subscribe((i)=> console.log(i)); If you run this code, it will only print 0 in your console and finish. We learned about various RxJS Observables and a few different ways of creating them. Read the book,  Mastering Reactive JavaScript, to create powerful applications using RxJs without compromising performance. (RxJS) Observable for Promise Users: Part (RxJS) Observable for Promise Users : Part 2 Angular 6 is here packed with exciting new features!
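One practical note if you try the interval() and timer() examples above: they never complete on their own, so in small test scripts it is handy to cap the sequence. Here is a minimal sketch using the take() operator from the same RxJS 4 bundle; the count of three values is an arbitrary choice:

var Rx = require('rx');

// Emit a value every 500 ms, but complete after the first three values
Rx.Observable
  .interval(500)
  .take(3)
  .subscribe(
    (i) => console.log('next => ' + i),
    (err) => console.error('error => ' + err),
    () => console.log('completed')
  );

This prints next => 0, next => 1, next => 2, and then completed, instead of running forever.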


Building microservices from a monolith Java EE app [Tutorial]

Aaron Lazar
03 Aug 2018
11 min read
Microservices are one of the top buzzwords these days. It's easy to understand why: in a growing software industry where the amount of services, data, and users increases crazily, we really need a way to build and deliver faster, decoupled, and scalable solutions. In this tutorial, we'll help you get started with microservices or go deeper into your ongoing project. This article is an extract from the book Java EE 8 Cookbook, authored by Elder Moraes. One common question that I have heard dozens of times is, "how do I break down my monolith into microservices?", or, "how do I migrate from a monolith approach to microservices?" Well, that's what this recipe is all about. Getting ready with monolith and microservice projects For both monolith and microservice projects, we will use the same dependency: <dependency> <groupId>javax</groupId> <artifactId>javaee-api</artifactId> <version>8.0</version> <scope>provided</scope> </dependency> Working with entities and beans First, we need the entities that will represent the data kept by the application. Here is the User entity: @Entity public class User implements Serializable { private static final long serialVersionUID = 1L; @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; @Column private String name; @Column private String email; public User(){ } public User(String name, String email) { this.name = name; this.email = email; } public Long getId() { return id; } public void setId(Long id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } } Here is the UserAddress entity: @Entity public class UserAddress implements Serializable { private static final long serialVersionUID = 1L; @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; @Column @ManyToOne private User user; @Column private String street; @Column private String number; @Column private String city; @Column private String zip; public UserAddress(){ } public UserAddress(User user, String street, String number, String city, String zip) { this.user = user; this.street = street; this.number = number; this.city = city; this.zip = zip; } public Long getId() { return id; } public void setId(Long id) { this.id = id; } public User getUser() { return user; } public void setUser(User user) { this.user = user; } public String getStreet() { return street; } public void setStreet(String street) { this.street = street; } public String getNumber() { return number; } public void setNumber(String number) { this.number = number; } public String getCity() { return city; } public void setCity(String city) { this.city = city; } public String getZip() { return zip; } public void setZip(String zip) { this.zip = zip; } } Now we define one bean to deal with the transaction over each entity. 
Here is the UserBean class: @Stateless public class UserBean { @PersistenceContext private EntityManager em; public void add(User user) { em.persist(user); } public void remove(User user) { em.remove(user); } public void update(User user) { em.merge(user); } public User findById(Long id) { return em.find(User.class, id); } public List<User> get() { CriteriaBuilder cb = em.getCriteriaBuilder(); CriteriaQuery<User> cq = cb.createQuery(User.class); Root<User> pet = cq.from(User.class); cq.select(pet); TypedQuery<User> q = em.createQuery(cq); return q.getResultList(); } } Here is the UserAddressBean class: @Stateless public class UserAddressBean { @PersistenceContext private EntityManager em; public void add(UserAddress address){ em.persist(address); } public void remove(UserAddress address){ em.remove(address); } public void update(UserAddress address){ em.merge(address); } public UserAddress findById(Long id){ return em.find(UserAddress.class, id); } public List<UserAddress> get() { CriteriaBuilder cb = em.getCriteriaBuilder(); CriteriaQuery<UserAddress> cq = cb.createQuery(UserAddress.class); Root<UserAddress> pet = cq.from(UserAddress.class); cq.select(pet); TypedQuery<UserAddress> q = em.createQuery(cq); return q.getResultList(); } } Finally, we build two services to perform the communication between the client and the beans. Here is the UserService class: @Path("userService") public class UserService { @EJB private UserBean userBean; @GET @Path("findById/{id}") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public Response findById(@PathParam("id") Long id){ return Response.ok(userBean.findById(id)).build(); } @GET @Path("get") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public Response get(){ return Response.ok(userBean.get()).build(); } @POST @Path("add") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public Response add(User user){ userBean.add(user); return Response.accepted().build(); } @DELETE @Path("remove/{id}") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public Response remove(@PathParam("id") Long id){ userBean.remove(userBean.findById(id)); return Response.accepted().build(); } } Here is the UserAddressService class: @Path("userAddressService") public class UserAddressService { @EJB private UserAddressBean userAddressBean; @GET @Path("findById/{id}") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public Response findById(@PathParam("id") Long id){ return Response.ok(userAddressBean.findById(id)).build(); } @GET @Path("get") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public Response get(){ return Response.ok(userAddressBean.get()).build(); } @POST @Path("add") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public Response add(UserAddress address){ userAddressBean.add(address); return Response.accepted().build(); } @DELETE @Path("remove/{id}") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public Response remove(@PathParam("id") Long id){ userAddressBean.remove(userAddressBean.findById(id)); return Response.accepted().build(); } } Now let's break it down! Building microservices from the monolith Our monolith deals with User and UserAddress. So we will break it down into three microservices: A user microservice A user address microservice A gateway microservice A gateway service is an API between the application client and the services. 
Using it allows you to simplify this communication, also giving you the freedom of doing whatever you like with your services without breaking the API contracts (or at least minimizing it). The user microservice The User entity, UserBean, and UserService will remain exactly as they are in the monolith. Only now they will be delivered as a separated unit of deployment. The user address microservice The UserAddress classes will suffer just a single change from the monolith version, but keep their original APIs (that is great from the point of view of the client). Here is the UserAddress entity: @Entity public class UserAddress implements Serializable { private static final long serialVersionUID = 1L; @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; @Column private Long idUser; @Column private String street; @Column private String number; @Column private String city; @Column private String zip; public UserAddress(){ } public UserAddress(Long user, String street, String number, String city, String zip) { this.idUser = user; this.street = street; this.number = number; this.city = city; this.zip = zip; } public Long getId() { return id; } public void setId(Long id) { this.id = id; } public Long getIdUser() { return idUser; } public void setIdUser(Long user) { this.idUser = user; } public String getStreet() { return street; } public void setStreet(String street) { this.street = street; } public String getNumber() { return number; } public void setNumber(String number) { this.number = number; } public String getCity() { return city; } public void setCity(String city) { this.city = city; } public String getZip() { return zip; } public void setZip(String zip) { this.zip = zip; } } Note that User is no longer a property/field in the UserAddress entity, but only a number (idUser). We will get into more details about it in the following section. 
The gateway microservice First, we create a class that helps us deal with the responses: public class GatewayResponse { private String response; private String from; public String getResponse() { return response; } public void setResponse(String response) { this.response = response; } public String getFrom() { return from; } public void setFrom(String from) { this.from = from; } } Then, we create our gateway service: @Consumes(MediaType.APPLICATION_JSON) @Path("gatewayResource") @RequestScoped public class GatewayResource { private final String hostURI = "http://localhost:8080/"; private Client client; private WebTarget targetUser; private WebTarget targetAddress; @PostConstruct public void init() { client = ClientBuilder.newClient(); targetUser = client.target(hostURI + "ch08-micro_x_mono-micro-user/"); targetAddress = client.target(hostURI + "ch08-micro_x_mono-micro-address/"); } @PreDestroy public void destroy(){ client.close(); } @GET @Path("getUsers") @Produces(MediaType.APPLICATION_JSON) public Response getUsers() { WebTarget service = targetUser.path("webresources/userService/get"); Response response; try { response = service.request().get(); } catch (ProcessingException e) { return Response.status(408).build(); } GatewayResponse gatewayResponse = new GatewayResponse(); gatewayResponse.setResponse(response.readEntity(String.class)); gatewayResponse.setFrom(targetUser.getUri().toString()); return Response.ok(gatewayResponse).build(); } @POST @Path("addAddress") @Produces(MediaType.APPLICATION_JSON) public Response addAddress(UserAddress address) { WebTarget service = targetAddress.path("webresources/userAddressService/add"); Response response; try { response = service.request().post(Entity.json(address)); } catch (ProcessingException e) { return Response.status(408).build(); } return Response.fromResponse(response).build(); } } As we receive the UserAddress entity in the gateway, we have to have a version of it in the gateway project too. For brevity, we will omit the code, as it is the same as in the UserAddress project. Transformation to microservices The monolith application couldn't be simpler: just a project with two services using two beans to manage two entities. The microservices So we split the monolith into three projects (microservices): the user service, the user address service, and the gateway service. The user service classes remained unchanged after the migration from the monolith version. So there's nothing to comment on. The UserAddress class had to be changed to become a microservice. The first change was made on the entity. Here is the monolith version: @Entity public class UserAddress implements Serializable { ... @Column @ManyToOne private User user; ... public UserAddress(User user, String street, String number, String city, String zip) { this.user = user; this.street = street; this.number = number; this.city = city; this.zip = zip; } ... public User getUser() { return user; } public void setUser(User user) { this.user = user; } ... } Here is the microservice version: @Entity public class UserAddress implements Serializable { ... @Column private Long idUser; ... public UserAddress(Long user, String street, String number, String city, String zip) { this.idUser = user; this.street = street; this.number = number; this.city = city; this.zip = zip; } public Long getIdUser() { return idUser; } public void setIdUser(Long user) { this.idUser = user; } ... 
} Note that in the monolith version, user was an instance of the User entity: private User user; In the microservice version, it became a number: private Long idUser; This happened for two main reasons: In the monolith, we have the two tables in the same database (User and UserAddress), and they both have physical and logical relationships (foreign key). So it makes sense to also keep the relationship between both the objects. The microservice should have its own database, completely independent from the other services. So we choose to keep only the user ID, as it is enough to load the address properly anytime the client needs. This change also resulted in a change in the constructor. Here is the monolith version: public UserAddress(User user, String street, String number, String city, String zip) Here is the microservice version: public UserAddress(Long user, String street, String number, String city, String zip) This could lead to a change of contract with the client regarding the change of the constructor signature. But thanks to the way it was built, it wasn't necessary. Here is the monolith version: public Response add(UserAddress address) Here is the microservice version: public Response add(UserAddress address) Even if the method is changed, it could easily be solved with @Path annotation, or if we really need to change the client, it would be only the method name and not the parameters (which used to be more painful). Finally, we have the gateway service, which is our implementation of the API gateway design pattern. Basically it is the one single point to access the other services. The nice thing about it is that your client doesn't need to care about whether the other services changed the URL, the signature, or even whether they are available. The gateway will take care of them. The bad part is that it is also on a single point of failure. Or, in other words, without the gateway, all services are unreachable. But you can deal with it using a cluster, for example. So now you've built a microservice in Java EE code, that was once a monolith! If you found this tutorial helpful and would like to learn more, head over to this book Java EE 8 Cookbook, authored by Elder Moraes. Oracle announces a new pricing structure for Java Design a RESTful web API with Java [Tutorial] How to convert Java code into Kotlin
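Once the three microservices are deployed, you can exercise the gateway with plain curl calls. The sketch below assumes the gateway is deployed under a context root named ch08-micro_x_mono-micro-gateway and exposes its resources under webresources, following the same convention as the user and address services; replace the context root with whatever your deployment actually uses:

# List users through the gateway (forwarded to the user microservice)
curl http://localhost:8080/ch08-micro_x_mono-micro-gateway/webresources/gatewayResource/getUsers

# Add an address through the gateway (forwarded to the user address microservice)
curl -X POST -H "Content-Type: application/json" \
  -d '{"idUser":1,"street":"Main Street","number":"100","city":"Springfield","zip":"12345"}' \
  http://localhost:8080/ch08-micro_x_mono-micro-gateway/webresources/gatewayResource/addAddress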

Building custom views in vRealize Operations Manager [Tutorial]

Vijin Boricha
02 Aug 2018
11 min read
A view in vRealize Operations Manager consists of the view type, subject, and data components. In this tutorial, the view is a trend view, which gets its data from the CPU Demand (%) metric. The subject is the object type that a view is associated with, and a view presents the data of the subject. For example, if the selected object is a host and you select the view named Host CPU Demand (%) Trend View, the result is a trend of the host's CPU demand over a period of time. Today we will walk through the parts needed to define and build custom views in vRealize Operations Manager, and learn to apply them to real-world situations. This article is an excerpt from Mastering vRealize Operations Manager – Second Edition, written by Spas Kaloferov, Scott Norris, and Christopher Slater.

Adding Name and description fields

Although it might seem obvious, the first thing you need to define when creating a view is the name and description. Before you dismiss this requirement and simply enter My View or Scott's Test, note that the name and description fields are very useful in defining the scope and target of the view. This is because many views are not really designed to be run/applied on the subject itself, but rather on one of its parents. This is especially true for lists and distributions, which we will cover below.

What are the different view types?

The presentation is the format the view is created in and how the information is displayed. The following types of views are available:

List: A list view provides tabular data about specific objects in the environment that correspond to the selected view.
Summary: A summary view presents tabular information about the current use of resources in the environment.
Trend: A trend view uses historic data to generate trends and forecasts for resource use and availability in the environment.
Distribution: A distribution view provides aggregated data about resource distribution in the monitored environment. Pie charts or bar charts are used to present the data.
Text: A text view displays text that you provide when you create the view.
Image: An image view allows you to insert a static image.

List

A list is one of the simplest presentation types to use and understand, and at the same time one of the most useful. A list provides a tabular layout of values for each data type, with the ability to provide an aggregation row, such as sum or average, at the end. Lists are the most useful presentation type for a large number of objects, and are able to provide information in the form of metrics and/or properties. Lists are also the most commonly used presentation when showing a collection of objects relative to its parent. An example of a list can be found in the following screenshot:

Summary

A summary is similar to a list; however, the rows are the data types (rather than the objects) and the columns are aggregated values of all children of that subject type. Unlike a list, a summary field is compulsory, as the individual objects are not presented in the view. The summary view type is probably the least commonly used, but it is useful when you simply care about the end result and not the detail of how it was calculated.
The following example shows Datastore Space Usage from the cluster level; information such as the average used GB across each Datastore can be displayed without the need to show each Datastore present in a list: Although it will be discussed in more detail in the next chapter, the availability of creating simple summary views of child resources has partially removed the need for creating super metrics for simply rolling up data to parent objects. Trend A trend view is a line graph representation of metrics showing historical data that can be used to generate trends and forecasts. Unlike some of the other presentation types, a trend can only show data from that subject type. As such, trend views do not filter up to parent objects. A trend view, in many ways, is similar to a standard metric chart widget with a set of combined preconfigured data types, with one major exception. The trend view has the ability to forecast data into the future for a specified period of time, as well as show the trend line for historical data for any object type. This allows the trend view to provide detailed and useful capacity planning data for any object in the vRealize Operations inventory. When selecting the data types to use in the view, it is recommended that, if multiple data types are used, that they support the same unit of measurement. Although this is not a requirement, views that have different unit types on the same scale are relatively hard to compare. An example of a trend view is shown as follows: Distribution A distribution view is a graphical representation of aggregated data which shows how resources fall within those aggregation groups. This essentially means that vRealize Operations finds a way of graphically representing a particular metric or property for a group of objects. In this example, it is the distribution of VM OS types in a given vSphere cluster. A distribution like a summary is very useful in displaying a small amount of information about a large number of objects. Distribution views can also be shown as bar charts. In this example, the distribution of Virtual Machine Memory Configuration Distribution is shown in a given vSphere cluster. This view can help spot virtual machines configured with a large amount of memory. An important point when creating distribution views is that the subject must be a child of the preview or target object. This means that you can only see a view for the distribution on one of the subject's parent objects. Both visualization methods essentially group the subjects into buckets, with the number of buckets and their values based on the distribution type. The three distribution types are as follows: Dynamic distribution: vRealize Operations automatically determines how many buckets to create based on an interval, a min/max value, or a logarithmic equation. When dealing with varying data values, this is generally the recommended display. Manual distribution: Allows the administrator to manually set the range of each bucket in the display. Discrete distribution: Used for displaying exact values of objects rather than ranges. A discrete distribution is recommended if most objects only have a few possible values, such as properties or other binary values. Text and images The text and image views are used to insert static text or image content for the purpose of reports and dashboards. They allow an administrator to add context to a report in combination with the dynamic views that are inserted when the reports are generated. 
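Returning to the distribution types for a moment: because bucketing can be hard to picture without the product in front of you, here is a small, purely conceptual Python sketch of the idea. It is not vRealize Operations code or its API; the sample memory values, the interval, and the bucket ranges are made-up illustrations of how dynamic, manual, and discrete grouping differ.

# Conceptual illustration only: NOT vRealize Operations code, just a sketch of
# how the three distribution types group values into buckets.
from collections import Counter

vm_memory_gb = [1, 2, 2, 4, 4, 4, 8, 8, 16, 32, 64]

def dynamic_buckets(values, interval):
    """Group values into evenly sized ranges derived from the data itself."""
    counts = Counter((v // interval) * interval for v in values)
    return {f"{low}-{low + interval - 1} GB": n for low, n in sorted(counts.items())}

def manual_buckets(values, ranges):
    """Group values into ranges chosen by the administrator."""
    result = {f"{lo}-{hi} GB": 0 for lo, hi in ranges}
    for v in values:
        for lo, hi in ranges:
            if lo <= v <= hi:
                result[f"{lo}-{hi} GB"] += 1
                break
    return result

def discrete_buckets(values):
    """One bucket per exact value -- useful for properties or binary settings."""
    return dict(sorted(Counter(values).items()))

print(dynamic_buckets(vm_memory_gb, interval=16))
print(manual_buckets(vm_memory_gb, ranges=[(0, 4), (5, 16), (17, 64)]))
print(discrete_buckets(vm_memory_gb))

The point of the sketch is only that dynamic bucketing derives its ranges from the data, manual bucketing uses ranges you pick, and discrete bucketing keeps one bucket per exact value, which is why it suits properties and other low-cardinality data.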
Adding Subjects to View Although the subjects are generally selected after the presentation, it makes sense to describe them first. The subject is the base object for which the view shows information. In other words, the subject is the object type that the data is coming from for the view. Any object type from any adapter can be selected. It is important to keep in mind that you may be designing a view for a parent object, however, the subject is actually the data of a child object. For example, if you wish to list all the Datastore free space in a vSphere Cluster itemized by Datastore, the subject will be a Datastore, not a Cluster Compute Resource. This is because although the list will always be viewed in the context of a cluster, the data listed is from Datastore objects themselves. When selecting a subject, an option is provided to select multiple object types. If this is done, only data that is common to both types will be available. Adding Data to View Data is the content that makes up the view based on the selected subject. The type of data that can be added and any additional options available depend on the select presentation type. An important feature with views is that they are able to display and filter based on properties, and not just standard metrics. This is particularly useful when filtering a list or distribution group. For example, the following screenshot shows badge information in a given vSphere Cluster, as long as they contain a vSphere tag of BackupProtectedVM. This allows a view to be filtered only to virtual machines that are deployed and managed by vRealize Automation: Adding Visibility layer One of the most useful features about views is that you have the ability to decide where they show up and where they can be linked from. The visibility layer defines where you can see a hyperlink to a view in vRealize Operations based on a series of checkboxes. The visibility step is broken into three categories, which are Availability, Further Analysis, and Blacklist, as shown in the following screenshot: Subsequently, you can also make the view available in a dashboard. To make this view available inside a dashboard, you can either edit an existing one or create a new dashboard by navigating to Home, Actions, then Create Dashboard. You can add the desired view within your dashboard configuration. Availability The availability checkboxes allow an administrator to devise how their view can be used and if there are cases where they wish to restrict its availability: Dashboard through the view widget: The view widget allows any created view to be displayed on a dashboard. This essentially allows an unlimited amount of data types to be displayed on the classic dashboards, with the flexibility of the different presentation types. Report template creation and modification: This setting allows views to be used in reports. If you are creating views explicitly to use in reports, ensure this box is checked. Details tab in the environment: The Details tab in the environment is the default location where administrators will use views. It is also the location where the Further Analysis links will take an administrator if selected. In most cases, it is recommended that this option be enabled, unless a view is not yet ready to be released to other users. Further Analysis The Further Analysis checkbox is a feature that allows an administrator to link views that they have created to the minor badges in the Object Analysis tab. 
Although this feature may seem comparatively small, it allows users to create relevant views for certain troubleshooting scenarios and link them directly to where administrators will be working. This allows administrators to leverage views more quickly for troubleshooting rather than simply jumping to the All Metrics tab and looking for dynamic threshold breaches. Blacklist The blacklist allows administrators to ensure that views cannot be used against certain object types. This is useful if you want to ensure that a view is only partially promoted up to a parent and not, for example, to a grandparent. How to Delete a View Views show up in multiple places. When you're tempted to delete a view, ask yourself: Do I want to delete this entire view, or do I just want to no longer show it in one part of the UI? Don't delete a view when you just want to hide it in one part of the UI. When you delete a view, areas in the UI that use the view are adjusted: Report templates: The view is removed from the report template Dashboards: The view widget displays the message The view does not exist Further Analysis panel of badges on the Analysis tab: The link to the view is removed Details > Views tab for the selected object: The view is removed from the list vRealize Operations will display a message informing you that deleting the view will modify the report templates that are using the view. Now, you have learned the new powerful features available in views and the different view presentation types. To know more about handling alerts and notifications in vRealize Operations, check out this book Mastering vRealize Operations Manager - Second Edition. VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service Introducing VMware Integrated OpenStack (VIO) 5.0, a new Infrastructure-as-a-Service (IaaS) cloud Are containers the end of virtual machines?

IoT project: Design a Multi-Robot Cooperation model with Swarm Intelligence [Tutorial]

Sugandha Lahoti
02 Aug 2018
7 min read
Collective intelligence (CI) is shared or group intelligence that emerges from the collaboration, collective efforts, and competition of many individuals, and appears in consensus decision making. Swarm intelligence (SI) is a subset of collective intelligence and describes the collective behavior of decentralized, self-organized systems, natural or artificial. In this tutorial, we will talk about how to design a multi-robot cooperation model using swarm intelligence. This article is an excerpt from Intelligent IoT Projects in 7 Days by Agus Kurniawan. In this book, you will learn how to build your own intelligent Internet of Things projects.

What is swarm intelligence

Swarm intelligence is inspired by the collective behavior of social animal colonies such as ants, birds, wasps, and honey bees. These animals work together to achieve a common goal. Swarm intelligence phenomena can be found all around us; you can see it in the following image of a school of fish swimming in formation, captured by a photographer in Cabo Pulmo:

Image source: http://octavioaburto.com/cabo-pulmo

Using insights from swarm intelligence studies, swarm intelligence is applied to coordination among autonomous robots. Each robot can be described as a self-organizing system, and each one negotiates with the others on how to achieve the goal. There are various algorithms that implement swarm intelligence. The following is a list of swarm intelligence types that researchers and developers apply to their problems:

Particle swarm optimization
Ant system
Ant colony system
Bees algorithm
Bacterial foraging optimization algorithm

The Particle Swarm Optimization (PSO) algorithm is inspired by the social foraging behavior of some animals, such as the flocking behavior of birds and the schooling behavior of fish. A sample PSO implementation in Python can be found at https://gist.github.com/btbytes/79877. This program needs the numpy library. numpy (Numerical Python) is a package for scientific computing with Python. Your computer should already have Python installed; if not, you can download and install it from https://www.python.org. If your computer does not have numpy, you can install it by typing this command in the terminal (Linux and macOS):

$ pip install numpy

On Windows, refer to https://www.scipy.org/install.html to install numpy.

You can copy the following code into your editor.
Save it as code_1.py and then run it on your computer using the terminal:

from numpy import array
from random import random
from math import sin, sqrt

iter_max = 10000
pop_size = 100
dimensions = 2
c1 = 2
c2 = 2
err_crit = 0.00001

class Particle:
    pass

def f6(param):
    '''Schaffer's F6 function'''
    para = param * 10
    para = param[0:2]
    num = (sin(sqrt((para[0] * para[0]) + (para[1] * para[1])))) * (sin(sqrt((para[0] * para[0]) + (para[1] * para[1])))) - 0.5
    denom = (1.0 + 0.001 * ((para[0] * para[0]) + (para[1] * para[1]))) * (1.0 + 0.001 * ((para[0] * para[0]) + (para[1] * para[1])))
    f6 = 0.5 - (num / denom)
    errorf6 = 1 - f6
    return f6, errorf6

# initialize the particles
particles = []
for i in range(pop_size):
    p = Particle()
    p.params = array([random() for i in range(dimensions)])
    p.fitness = 0.0
    p.v = 0.0
    particles.append(p)

# let the first particle be the global best
gbest = particles[0]
err = 999999999

# reset the iteration counter before the optimization loop
i = 0
while i < iter_max:
    for p in particles:
        fitness, err = f6(p.params)
        if fitness > p.fitness:
            p.fitness = fitness
            p.best = p.params
        if fitness > gbest.fitness:
            gbest = p
        v = p.v + c1 * random() * (p.best - p.params) + c2 * random() * (gbest.params - p.params)
        p.params = p.params + v
    i += 1
    if err < err_crit:
        break
    # progress bar. '.' = 10%
    if i % (iter_max // 10) == 0:
        print('.')

print('\nParticle Swarm Optimisation\n')
print('PARAMETERS\n', '-' * 9)
print('Population size : ', pop_size)
print('Dimensions      : ', dimensions)
print('Error Criterion : ', err_crit)
print('c1              : ', c1)
print('c2              : ', c2)
print('function        :  f6')
print('RESULTS\n', '-' * 7)
print('gbest fitness   : ', gbest.fitness)
print('gbest params    : ', gbest.params)
print('iterations      : ', i + 1)

# print every particle's parameters from the iteration process
for p in particles:
    print('params: %s, fitness: %s, best: %s' % (p.params, p.fitness, p.best))

You can run this program by typing this command:

$ python code_1.py

This program will generate the PSO output parameters based on the input. You can see the PARAMETERS values in the program output. At the end of the code, we print the parameters of every particle from the iteration process.

Introducing multi-robot cooperation

Communicating and negotiating among robots is challenging. We should ensure our robots avoid collisions while they are moving; meanwhile, these robots should achieve their goals collectively. For example, Keisuke Uto has created a multi-robot implementation in which the robots arrange themselves into a specific formation. To get the correct robot formation, the system uses a camera to detect the current robot formation, and each robot is labeled so that the system can identify it. By implementing image processing, Keisuke shows how multiple robots create a formation using multi-robot cooperation. If you are interested, you can read about the project at https://www.digi.com/blog/xbee/multi-robot-formation-control-by-self-made-robots/.

Designing a multi-robot cooperation model using swarm intelligence

A multi-robot cooperation model enables several robots to work collectively to achieve a specific purpose. Getting multi-robot cooperation right is challenging, and several aspects should be considered in order to get an optimized implementation. The objective, hardware, pricing, and algorithm can all have an impact on your multi-robot design. In this section, we will review some key aspects of designing multi-robot cooperation. This is important, since developing a robot needs multi-disciplinary skills.
Define objectives

The first step in developing multi-robot swarm intelligence is to define the objectives. We should state clearly what the goal of the multi-robot implementation is. For instance, we can develop a multi-robot system for soccer games, or for finding and fighting fires. After defining the objectives, we can continue to gather all the material needed to achieve them: the robot platform, sensors, and algorithms are components that we should have.

Selecting a robot platform

The robot platform is the MCU model that will be used. There are several MCU platforms that you can use for a multi-robot implementation. Arduino, Raspberry Pi, ESP8266, ESP32, TI LaunchPad, and BeagleBone are examples of MCU platforms that could be applied to your case. Sometimes, you may need to consider price when deciding on a robot platform. Some researchers and makers build their robot devices with minimal hardware to get optimized functionality, and they also share their hardware and software designs. I recommend you visit Open Robotics, https://www.osrfoundation.org, to explore robot projects that might fit your problem. Alternatively, you can consider using robot kits. Using a kit means you don't need to solder electronic components; it is ready to use. You can find robot kits in online stores such as Pololu (https://www.pololu.com), SparkFun (https://www.sparkfun.com), DFRobot (https://www.dfrobot.com), and Makeblock (http://www.makeblock.com). You can see my robots from Pololu and DFRobot here:

Selecting the algorithm for swarm intelligence

The choice of algorithm, especially for swarm intelligence, should be connected to the kind of robot platform that is used. We already know that some robot hardware has computational limitations, and applying complex algorithms to computationally limited devices can quickly drain the battery. You must research the best parameters for implementing multi-robot systems.

Implementing swarm intelligence in swarm robots can be described as in the following figure. A swarm robot system performs sensing to gather information about its environment, including detecting the presence of peer robots. By combining inputs from sensors and peers, we can actuate the robots based on the result of our swarm intelligence computation. Actuation can take the form of movement or other actions; a minimal code sketch of this loop follows the related links below.

We designed a multi-robot cooperation model using swarm intelligence. To learn how to create more smart IoT projects, check out the book Intelligent IoT Projects in 7 Days.

AI-powered Robotics: Autonomous machines in the making
How to assemble a DIY selfie drone with Arduino and ESP8266
Tips and tricks for troubleshooting and flying drones safely
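Picking up the sensing, combining, and actuation flow described above, here is a minimal, purely conceptual Python sketch. It is not tied to any particular robot kit or MCU; the SwarmRobot class, the stubbed sensor reading, the peer position broadcast, and the simple cohesion rule are all illustrative assumptions, and a real implementation would read actual sensors and drive real motors.

# Conceptual sketch only (not tied to any specific robot platform): one
# iteration of the sense -> combine -> actuate loop described above.
import random

class SwarmRobot:
    def __init__(self, robot_id, position):
        self.robot_id = robot_id
        self.position = position          # (x, y) in arbitrary units
        self.velocity = (0.0, 0.0)

    def sense(self):
        # Stand-in for reading real sensors (distance, light, camera, ...).
        return {"obstacle_ahead": random.random() < 0.1}

    def combine(self, peer_positions):
        # Very simple cohesion rule: steer toward the average peer position.
        if not peer_positions:
            return (0.0, 0.0)
        avg_x = sum(p[0] for p in peer_positions) / len(peer_positions)
        avg_y = sum(p[1] for p in peer_positions) / len(peer_positions)
        return (avg_x - self.position[0], avg_y - self.position[1])

    def actuate(self, sensed, steer, gain=0.1):
        # Stop for obstacles, otherwise move a small step toward the swarm.
        if sensed["obstacle_ahead"]:
            self.velocity = (0.0, 0.0)
        else:
            self.velocity = (gain * steer[0], gain * steer[1])
        self.position = (self.position[0] + self.velocity[0],
                         self.position[1] + self.velocity[1])

    def step(self, peer_positions):
        self.actuate(self.sense(), self.combine(peer_positions))

robots = [SwarmRobot(i, (random.uniform(0, 10), random.uniform(0, 10))) for i in range(5)]
for _ in range(20):
    positions = [r.position for r in robots]
    for r in robots:
        r.step([p for p in positions if p is not r.position])
print([tuple(round(c, 2) for c in r.position) for r in robots])

The design point the sketch illustrates is that each robot only needs its own sensor data plus lightweight peer information (here, positions) to decide its next move, which is what keeps the system decentralized and self-organizing.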

Implementing React Component Lifecycle methods [Tutorial]

Sugandha Lahoti
01 Aug 2018
14 min read
All the React component’s lifecycle methods can be split into four phases: initialization, mounting, updating and unmounting. The process where all these stages are involved is called the component’s lifecycle and every React component goes through it. React provides several methods that notify us when a certain stage of this process occurs. These methods are called the component’s lifecycle methods and they are invoked in a predictable order. In this article we will learn about the lifecycle of React components and how to write code that responds to lifecycle events. We'll kick things off with a brief discussion on why components need a lifecycle. And then we will implement several example components that will  initialize their properties and state using these methods. This article is an excerpt from React and React Native by Adam Boduch.  Why components need a lifecycle React components go through a lifecycle, whether our code knows about it or not. Rendering is one of the lifecycle events in a React component. For example, there are lifecycle events for when the component is about to be mounted into the DOM, for after the component has been mounted to the DOM, when the component is updated, and so on. Lifecycle events are yet another moving part, so you'll want to keep them to a minimum. Some components do need to respond to lifecycle events to perform initialization, render heuristics, or clean up after the component when it's unmounted from the DOM. The following diagram gives you an idea of how a component flows through its lifecycle, calling the corresponding methods in turn: These are the two main lifecycle flows of a React component. The first happens when the component is initially rendered. The second happens whenever the component is re-rendered. However, the componentWillReceiveProps() method is only called when the component's properties are updated. This means that if the component is re-rendered because of a call to setState(), this lifecycle method isn't called, and the flow starts with shouldComponentUpdate() instead. The other lifecycle method that isn't included in this diagram is componentWillUnmount(). This is the only lifecycle method that's called when a component is about to be removed. Initializing properties and state In this section, you'll see how to implement initialization code in React components. This involves using lifecycle methods that are called when the component is first created. First, we'll walk through a basic example that sets the component up with data from the API. Then, you'll see how state can be initialized from properties, and also how state can be updated as properties change. Fetching component data One of the first things you'll want to do when your components are initialized is populate their state or properties. Otherwise, the component won't have anything to render other than its skeleton markup. For instance, let's say you want to render the following user list component: import React from 'react'; import { Map as ImmutableMap } from 'immutable'; // This component displays the passed-in "error" // property as bold text. If it's null, then // nothing is rendered. const ErrorMessage = ({ error }) => ImmutableMap() .set(null, null) .get( error, (<strong>{error}</strong>) ); // This component displays the passed-in "loading" // property as italic text. If it's null, then // nothing is rendered. 
const LoadingMessage = ({ loading }) => ImmutableMap() .set(null, null) .get( loading, (<em>{loading}</em>) ); export default ({ error, loading, users, }) => ( <section> { /* Displays any error messages... */ } <ErrorMessage error={error} /> { /* Displays any loading messages, while waiting for the API... */ } <LoadingMessage loading={loading} /> { /* Renders the user list... */ } <ul> {users.map(i => ( <li key={i.id}>{i.name}</li> ))} </ul> </section> ); There are three pieces of data that this JSX relies on: loading: This message is displayed while fetching API data error: This message is displayed if something goes wrong users: Data fetched from the API There's also two helper components used here: ErrorMessage and LoadingMessage. They're used to format the error and the loading state, respectively. However, if error or loading are null, neither do we want to render anything nor do we want to introduce imperative logic into these simple functional components. This is why we're using a cool little trick with Immutable.js maps. First, we create a map that has a single key-value pair. The key is null, and the value is null. Second, we call get() with either an error or a loading property. If the error or loading property is null, then the key is found and nothing is rendered. The trick is that get() accepts a second parameter that's returned if no key is found. This is where we pass in our truthy value and avoid imperative logic all together. This specific component is simple, but the technique is especially powerful when there are more than two possibilities. How should we go about making the API call and using the response to populate the users collection? The answer is to use a container component, introduced in the preceding chapter that makes the API call and then renders the UserList component: import React, { Component } from 'react'; import { fromJS } from 'immutable'; import { users } from './api'; import UserList from './UserList'; export default class UserListContainer extends Component { state = { data: fromJS({ error: null, loading: 'loading...', users: [], }), } // Getter for "Immutable.js" state data... get data() { return this.state.data; } // Setter for "Immutable.js" state data... set data(data) { this.setState({ data }); } // When component has been rendered, "componentDidMount()" // is called. This is where we should perform asynchronous // behavior that will change the state of the component. // In this case, we're fetching a list of users from // the mock API. componentDidMount() { users().then( (result) => { // Populate the "users" state, but also // make sure the "error" and "loading" // states are cleared. this.data = this.data .set('loading', null) .set('error', null) .set('users', fromJS(result.users)); }, (error) => { // When an error occurs, we want to clear // the "loading" state and set the "error" // state. this.data = this.data .set('loading', null) .set('error', error); } ); } render() { return ( <UserList {...this.data.toJS()} /> ); } } Let's take a look at the render() method. It's sole job is to render the <UserList> component, passing in this.state as its properties. The actual API call happens in the componentDidMount() method. This method is called after the component is mounted into the DOM. This means that <UserList> will have rendered once, before any data from the API arrives. But this is fine, because we've set up the UserListContainer state to have a default loading message, and UserList will display this message while waiting for API data. 
Once the API call returns with data, the users collection is populated, causing the UserList to re-render itself, only this time, it has the data it needs. So, why would we want to make this API call in componentDidMount() instead of in the component constructor, for example? The rule-of-thumb here is actually very simple to follow. Whenever there's asynchronous behavior that changes the state of a React component, it should be called from a lifecycle method. This way, it's easy to reason about how and when a component changes state. Let's take a look at the users() mock API function call used here: // Returns a promise that's resolved after 2 // seconds. By default, it will resolve an array // of user data. If the "fail" argument is true, // the promise is rejected. export function users(fail) { return new Promise((resolve, reject) => { setTimeout(() => { if (fail) { reject('epic fail'); } else { resolve({ users: [ { id: 0, name: 'First' }, { id: 1, name: 'Second' }, { id: 2, name: 'Third' }, ], }); } }, 2000); }); } It simply returns a promise that's resolved with an array after 2 seconds. Promises are a good tool for mocking things like API calls because this enables you to use more than simple HTTP calls as a data source in your React components. For example, you might be reading from a local file or using some library that returns promises that resolve data from unknown sources. Here's what the UserList component renders when the loading state is a string, and the users state is an empty array: Here's what it renders when loading is null and users is non-empty: I can't promise that this is the last time I'm going to make this point in the book, but I'll try to keep it to a minimum. I want to hammer home the separation of responsibilities between the UserListContainer and the UserList components. Because the container component handles the lifecycle management and the actual API communication, this enables us to create a very generic user list component. In fact, it's a functional component that doesn't require any state, which means this is easy to reuse throughout our application. Initializing state with properties The preceding example showed you how to initialize the state of a container component by making an API call in the componentDidMount() lifecycle method. However, the only populated part of the component state is the users collection. You might want to populate other pieces of state that don't come from API endpoints. For example, the error and loading state messages have default values set when the state is initialized. This is great, but what if the code that is rendering UserListContainer wants to use a different loading message? You can achieve this by allowing properties to override the default state. Let's build on the UserListContainer component: import React, { Component } from 'react'; import { fromJS } from 'immutable'; import { users } from './api'; import UserList from './UserList'; class UserListContainer extends Component { state = { data: fromJS({ error: null, loading: null, users: [], }), } // Getter for "Immutable.js" state data... get data() { return this.state.data; } // Setter for "Immutable.js" state data... set data(data) { this.setState({ data }); } // Called before the component is mounted into the DOM // for the first time. componentWillMount() { // Since the component hasn't been mounted yet, it's // safe to change the state by calling "setState()" // without causing the component to re-render. 
this.data = this.data .set('loading', this.props.loading); } // When component has been rendered, "componentDidMount()" // is called. This is where we should perform asynchronous // behavior that will change the state of the component. // In this case, we're fetching a list of users from // the mock API. componentDidMount() { users().then( (result) => { // Populate the "users" state, but also // make sure the "error" and "loading" // states are cleared. this.data = this.data .set('loading', null) .set('error', null) .set('users', fromJS(result.users)); }, (error) => { // When an error occurs, we want to clear // the "loading" state and set the "error" // state. this.data = this.data .set('loading', null) .set('error', error); } ); } render() { return ( <UserList {...this.data.toJS()} /> ); } } UserListContainer.defaultProps = { loading: 'loading...', }; export default UserListContainer; You can see that loading no longer has a default string value. Instead, we've introduced defaultProps, which provide default values for properties that aren't passed in through JSX markup. The new lifecycle method we've added is componentWillMount(), and it uses the loading property to initialize the state. Since the loading property has a default value, it's safe to just change the state. However, calling setState() (via this.data) here doesn't cause the component to re-render itself. The method is called before the component mounts, so the initial render hasn't happened yet. Let's see how we can pass state data to UserListContainer now: import React from 'react'; import { render } from 'react-dom'; import UserListContainer from './UserListContainer'; // Renders the component with a "loading" property. // This value ultimately ends up in the component state. render(( <UserListContainer loading="playing the waiting game..." /> ), document.getElementById('app') ); Pretty cool, right? Just because the component has state, doesn't mean that we can't be flexible and allow for customization of this state. We'll look at one more variation on this theme—updating component state through properties. Here's what the initial loading message looks like when UserList is first rendered: Updating state with properties You've seen how the componentWillMount() and componentDidMount() lifecycle methods help get your component the data it needs. There's one more scenario that we should consider here—re-rendering the component container. Let's take a look at a simple button component that tracks the number of times it's been clicked: import React from 'react'; export default ({ clicks, disabled, text, onClick, }) => ( <section> { /* Renders the number of button clicks, using the "clicks" property. */ } <p>{clicks} clicks</p> { /* Renders the button. It's disabled state is based on the "disabled" property, and the "onClick()" handler comes from the container component. */} <button disabled={disabled} onClick={onClick} > {text} </button> </section> ); Now, let's implement a container component for this feature: import React, { Component } from 'react'; import { fromJS } from 'immutable'; import MyButton from './MyButton'; class MyFeature extends Component { state = { data: fromJS({ clicks: 0, disabled: false, text: '', }), } // Getter for "Immutable.js" state data... get data() { return this.state.data; } // Setter for "Immutable.js" state data... set data(data) { this.setState({ data }); } // Sets the "text" state before the initial render. 
// If a "text" property was provided to the component, // then it overrides the initial "text" state. componentWillMount() { this.data = this.data .set('text', this.props.text); } // If the component is re-rendered with new // property values, this method is called with the // new property values. If the "disabled" property // is provided, we use it to update the "disabled" // state. Calling "setState()" here will not // cause a re-render, because the component is already // in the middle of a re-render. componentWillReceiveProps({ disabled }) { this.data = this.data .set('disabled', disabled); } // Click event handler, increments the "click" count. onClick = () => { this.data = this.data .update('clicks', c => c + 1); } // Renders the "<MyButton>" component, passing it the // "onClick()" handler, and the state as properties. render() { return ( <MyButton onClick={this.onClick} {...this.data.toJS()} /> ); } } MyFeature.defaultProps = { text: 'A Button', }; export default MyFeature; The same approach as the preceding example is taken here. Before the component is mounted, set the value of the text state to the value of the text property. However, we also set the text state in the componentWillReceiveProps() method. This method is called when property values change, or in other words, when the component is re-rendered. Let's see how we can re-render this component and whether or not the state behaves as we'd expect it to: import React from 'react'; import { render as renderJSX } from 'react-dom'; import MyFeature from './MyFeature'; // Determines the state of the button // element in "MyFeature". let disabled = true; function render() { // Toggle the state of the "disabled" property. disabled = !disabled; renderJSX( (<MyFeature {...{ disabled }} />), document.getElementById('app') ); } // Re-render the "<MyFeature>" component every // 3 seconds, toggling the "disabled" button // property. setInterval(render, 3000); render(); Sure enough, everything goes as planned. Whenever the button is clicked, the click counter is updated. But as you can see, <MyFeature> is re-rendered every 3 seconds, toggling the disabled state of the button. When the button is re-enabled and clicking resumes, the counter continues from where it left off. Here is what the MyButton component looks like when first rendered: Here's what it looks like after it has been clicked a few times and the button has moved into a disabled state: We learned about the lifecycle of React components. We also discussed why React components need a lifecycle. It turns out that React can't do everything automatically for us, so we need to write some code that's run at the appropriate time during the components' lifecycles. To know more about how to take the concepts of React and apply them to building Native UIs using React Native, read this book React and React Native. What is React.js and how does it work? What is the Reactive Manifesto? Is React Native is really a Native framework?