
How-To Tutorials

6719 Articles

How to perform exception handling in Python with ‘try, catch and finally’

Guest Contributor
10 Dec 2019
9 min read
An integral part of using Python is the art of handling exceptions. There are two broad categories of exception: built-in exceptions and user-defined exceptions. When an error such as a division by zero or a failure to open a file interrupts the normal flow of a program, the interpreter saves the state of execution at the moment of the error and hands control to a special function or block of code called an exception handler. The handler fixes or reports the problem, which allows the program to continue based on the data saved up to that point.

(Diagram source: Eyehunts tutorial)

Exception handling in Python is much like it is in Java. Code that might raise an exception is embedded in a try block. Where Java uses catch clauses to catch exceptions, Python uses clauses that begin with except. Python also lets you raise exceptions deliberately, including custom-made ones, with the raise statement, which forces a specified exception to occur.

Reasons to use exceptions

Errors are always to be expected when writing a Python program, so the program needs a backup mechanism to handle any errors it encounters; without one, an unhandled error can crash the program completely. Equipping a Python program with an exception mechanism means setting out, in advance, a backup plan for each error situation that could arise while it executes.

Catching exceptions in Python

The try statement is used for handling exceptions in Python. A try clause wraps the particular, critical operation that can raise an exception, and the code that handles the exception is written inside the except clause. What to do once the exception has been caught is left to the programmer. As an example, consider a program that loops until the user enters an integer with a valid reciprocal (a sketch of this loop appears a little further below). The part of the code that can trigger an exception is placed inside the try block. If no exception is raised, the except block is skipped and the normal flow of execution continues; if an exception is raised, it is caught by the except block. The exception that occurred can be named using the exc_info() function from the sys module, and the loop then asks the user to make another attempt. An unexpected value such as 'a' or '1.3' triggers a ValueError, while entering 0 leads to a ZeroDivisionError.

Exception handling in Python: try, except and finally

Suspicious code that may raise an exception is placed inside the try block, and the code dedicated to handling any raised exception is placed inside the except block. In outline, a try and except statement in Python looks like this:

try:
    # operational/suspicious code
except SomeException:
    # code to handle the exception

How they work in Python: the statements in the try block are executed first, and the interpreter checks whether any exception occurs within that code. If no exception occurs, the except block (containing the exception-handling statements) is skipped. If an exception is raised and it matches the name given after except (SomeException here), the except block handles it and the program is able to continue.
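A minimal sketch of the reciprocal loop described above might look like the following, assuming Python 3; the prompt text and variable names are illustrative, but the behaviour matches the description: the loop repeats until a valid reciprocal is entered and reports the exception type via sys.exc_info().

import sys

# Loop until the user enters an integer with a valid reciprocal.
while True:
    try:
        num = int(input("Enter an integer: "))
        reciprocal = 1 / num
        break
    except:
        # sys.exc_info() returns (type, value, traceback) for the active exception.
        print("Oops!", sys.exc_info()[0], "occurred.")
        print("Please try again.")

print("The reciprocal of", num, "is", reciprocal)

Entering 'a' or '1.3' raises a ValueError inside int(), while entering 0 raises a ZeroDivisionError at the division, so both paths exercise the except clause before the loop prompts again.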
If the raised exception does not correspond to any of the handlers in the except clauses, program execution halts and the error that describes it is reported.

Defining except without an exception

An except clause does not have to name an exception; a bare except is capable of handling every possible type of exception, regardless of which one is raised. The trade-off is that it keeps you ignorant of whether, and which, exception was actually raised, so it should be used deliberately. With an optional else clause added, the statements are laid out as follows:

try:
    # do your operations here
except:
    # if any exception is raised, execute these statements
else:
    # if no exception is raised, execute these statements

Here is an example of catching an exception while working with a file, which is useful when the intention is to read a file that may not exist:

try:
    fp = open('example.txt', 'r')
except:
    print('File is not found')
else:
    fp.close()

This example deals with opening example.txt. If the file is not found or cannot be opened, the except block runs and the message 'File is not found' is printed; otherwise, the file is closed.

Defining an except clause for multiple exceptions

A single try statement can deal with multiple exceptions by letting the programmer specify several different exception handlers; as a matter of good practice, name the particular exceptions you expect in your code. During execution, if the interpreter finds a matching exception, the code written under that except clause is executed. One way to handle several related exceptions together is to list them as a tuple within a single except clause, as the following outline shows (a runnable sketch of this appears just below):

try:
    # do something
except (Exception1, Exception2, ..., ExceptionN):
    # handle any of the listed exceptions
    pass
except:
    # handle all other exceptions
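As a concrete illustration of the tuple form just shown, here is a small, self-contained sketch, assuming Python 3; the sample values and messages are invented for the example:

# Handling several exception types with a single except clause.
values = ['10', '0', 'abc']

for text in values:
    try:
        number = int(text)                                # may raise ValueError
        print(f"Reciprocal of {text} is {1 / number}")    # may raise ZeroDivisionError
    except (ValueError, ZeroDivisionError) as error:
        # Both failure modes are handled by the same block.
        print(f"Could not compute a reciprocal for {text!r}: {error}")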
Exception handling in Python using the try-finally clause

Apart from combining try and except, it is also a good idea to pair a try block with a finally block. The finally block carries all the statements that must be executed regardless of whether an exception was raised in the try block. One benefit of this is that it gives you a reliable place to release external resources and clear any cached state, whatever the outcome of the operation. Here is the pseudo-code for a try...finally clause:

try:
    # perform operations
finally:
    # these statements are always executed

Defining exceptions in a try...finally block

The example given below closes a file once all the operations on it are completed:

try:
    fp = open("example.txt", 'r')
    # file operations
finally:
    fp.close()

The try statement in Python comes with this optional finally clause, and the code under it is executed under any circumstances; it is usually used to release additional external resources. It is not unusual for developers to be connected to a remote data centre over a network, or to be working with a file or a graphical user interface, and such situations require them to clean up the resources they used. Even when the use of those resources yields successful results, such post-execution steps are considered good practice. Actions such as shutting down the GUI, closing a file, or disconnecting from the network belong in the finally block, which guarantees that this code is executed no matter what exceptions are raised. The file operations example below illustrates this very well:

try:
    f = open("test.txt", encoding='utf-8')
    # perform file operations
finally:
    f.close()

Or, in simpler schematic terms:

try:
    # your operations here;
    # an exception may cause the rest of this block to be skipped
finally:
    # this is always executed

Constructing such a block is a better way to ensure the file is closed even if an exception has taken place. Note that an else clause cannot be used with a bare try...finally block like the one defined above; else is only permitted when the try statement also has at least one except clause.

Understanding user-defined exceptions

Python users can create their own exceptions by deriving classes from the built-in standard exceptions. There are instances where displaying specific information to users is crucial, especially upon catching an exception; in such cases, it is best to create a class that is subclassed from RuntimeError. The try block then raises the user-defined exception, which is caught in the except block, and the variable e is bound to the instance of the Networkerror class. Below is the class definition:

class Networkerror(RuntimeError):
    def __init__(self, arg):
        self.args = (arg,)

Once the class is defined, the exception can be raised and caught as follows:

try:
    raise Networkerror("Bad hostname")
except Networkerror as e:
    print(e.args)

Key points to remember

An exception is an error that occurs while the program is executing; such events are expected to happen relatively infrequently. As the examples above show, the most common exceptions include division by zero, attempting to access a non-existent file, and adding two incompatible types. Put a try statement around code when you are not sure whether or not it will raise an exception, and consider specifying an else block alongside the try-except statement, which triggers only when no exception is raised in the try block.
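To tie the pieces together, here is a short, self-contained sketch, assuming Python 3, that combines try, except, else, and finally around a simplified variant of the Networkerror class defined above; the connect() helper and the host name are invented purely for illustration:

class Networkerror(RuntimeError):
    """User-defined exception for network-related failures."""


def connect(hostname):
    # Stand-in for a real connection attempt: reject empty or blank-containing names.
    if not hostname or " " in hostname:
        raise Networkerror(f"Bad hostname: {hostname!r}")
    return f"connection to {hostname}"


try:
    conn = connect("example.com")
except Networkerror as err:
    print("Could not connect:", err)
else:
    print("Opened", conn)            # runs only if no exception was raised
finally:
    print("Cleaning up resources")   # runs whether or not an exception occurred

Changing the argument to something like "bad host" exercises the except path instead of the else path, while the finally message is printed in both cases.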
Author bio

Shahid Mansuri co-founded Peerbits, one of the leading software development companies in the USA, in 2011; the company provides Python development services. Under his leadership, Peerbits used Python on a project to embed reports and research on a platform, giving every user access to both the freely available dashboard and the dashboard that was exclusively available. His visionary leadership and flamboyant management style have yielded fruitful results for the company. He believes in sharing his strong knowledge base, with a learned concentration on entrepreneurship and business.

Introducing Spleeter, a TensorFlow-based Python library that extracts voice and sound from any music track
Fake Python libraries removed from PyPI when caught stealing SSH and GPG keys, reports ZDNet
There's more to learning programming than just writing code


Implementing Horizontal Pod Autoscaling in Kubernetes [Tutorial]

Savia Lobo
18 Jul 2019
18 min read
When we use Kubernetes deployments to deploy our pod workloads, it is simple to scale the number of replicas used by our applications up and down using the kubectl scale command. However, if we want our applications to automatically respond to changes in their workloads and scale to meet demand, then Kubernetes provides us with Horizontal Pod Autoscaling.

This article is an excerpt taken from the book Kubernetes on AWS written by Ed Robinson. In this book, you will start by learning about Kubernetes' powerful abstractions - Pods and Services - that make managing container deployments easy.

Horizontal Pod Autoscaling allows us to define rules that will scale the number of replicas up or down in our deployments based on CPU utilization and, optionally, other custom metrics. Before we are able to use Horizontal Pod Autoscaling in our cluster, we need to deploy the Kubernetes metrics server; this server provides endpoints that are used to discover CPU utilization and other metrics generated by our applications. In this article, you will learn how to use horizontal pod autoscaling to automatically scale your applications and to automatically provision and terminate EC2 instances.

Deploying the metrics server

Before we can make use of Horizontal Pod Autoscaling, we need to deploy the Kubernetes metrics server to our cluster. This is because the Horizontal Pod Autoscaling controller makes use of the metrics provided by the metrics.k8s.io API, which is provided by the metrics server. While some installations of Kubernetes may install this add-on by default, in our EKS cluster we will need to deploy it ourselves.

There are a number of ways to deploy add-on components to your cluster. If you are using Helm to manage applications on your cluster, you could use the stable/metrics-server chart. For simplicity, we are just going to deploy the metrics server manifests using kubectl.

I like to integrate deploying add-ons such as the metrics server and kube2iam with the process that provisions the cluster, as I see them as integral parts of the cluster infrastructure. But if you are going to use a tool like Helm to manage deploying applications to your cluster, then you might prefer to manage everything running on your cluster with the same tool. The decision you take really depends on the processes you and your team adopt for managing your cluster and the applications that run on it.

The metrics server is developed in its own GitHub repository; you will find the manifests required to deploy it in the deploy directory of that repository. Start by cloning the configuration from GitHub. The metrics server began supporting the authentication methods provided by EKS in version 0.0.3, so make sure the manifests you have use at least that version.

You will find a number of manifests in the deploy/1.8+ directory. The auth-reader.yaml and auth-delegator.yaml files configure the integration of the metrics server with the Kubernetes authorization infrastructure. The resource-reader.yaml file configures a role that gives the metrics server the permissions to read resources from the API server, in order to discover the nodes that pods are running on. The metrics-server-deployment.yaml and metrics-server-service.yaml files define the deployment used to run the metrics server itself and a service that makes it possible to access it.
Finally, the metrics-apiservice.yaml file defines an APIService resource that registers the metrics.k8s.io API group with the Kubernetes API server aggregation layer; this means that requests to the API server for the metrics.k8s.io group will be proxied to the metrics server service.

Deploying these manifests with kubectl is simple: just submit all of them to the cluster with kubectl apply:

$ kubectl apply -f deploy/1.8+

You should see a message about each of the resources being created on the cluster. If you are using a tool like Terraform to provision your cluster, you might use it to submit the manifests for the metrics server when you create your cluster.

Verifying the metrics server and troubleshooting

Before we continue, we should take a moment to check that our cluster and the metrics server are correctly configured to work together. After the metrics server is running on your cluster and has had a chance to collect metrics from the cluster (give it a minute or so), you should be able to use the kubectl top command to see the resource usage of the pods and nodes in your cluster. Start by running kubectl top nodes. If you see output like this, then the metrics server is configured correctly and is collecting metrics from your nodes:

$ kubectl top nodes
NAME             CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
ip-10-3-29-209   20m          1%     717Mi           19%
ip-10-3-61-119   24m          1%     1011Mi          28%

If you see an error message, there are a number of troubleshooting steps you can follow. You should start by describing the metrics server deployment and checking that one replica is available:

kubectl -n kube-system describe deployment metrics-server

If it is not, you should debug the created pod by running kubectl -n kube-system describe pod. Look at the events to see why the server is not available. Make sure that you are running at least version 0.0.3 of the metrics server. If the metrics server is running correctly and you still see errors when running kubectl top, the issue is that the APIService registered with the aggregation layer is not configured correctly. Check the events output at the bottom of the information returned when you run kubectl describe apiservice v1beta1.metrics.k8s.io. One common issue is that the EKS control plane cannot connect to the metrics server service on port 443.

Autoscaling pods based on CPU usage

Once the metrics server has been installed into our cluster, we will be able to use the metrics API to retrieve information about the CPU and memory usage of the pods and nodes in our cluster; using the kubectl top command is a simple example of this. The Horizontal Pod Autoscaler can also use this same metrics API to gather information about the current resource usage of the pods that make up a deployment.

Let's look at an example: we are going to deploy a sample application that uses a lot of CPU under load, then configure a Horizontal Pod Autoscaler to scale up extra replicas of this pod to provide extra capacity when CPU utilization exceeds a target level. The application we will be deploying as an example is a simple Ruby web application that can calculate the nth number in the Fibonacci sequence; it uses a simple recursive algorithm and is not very efficient (perfect for us to experiment with autoscaling). The deployment for this application is very simple.
It is important to set resource limits for CPU because the target CPU utilization is based on a percentage of this limit:

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fib
  labels:
    app: fib
spec:
  selector:
    matchLabels:
      app: fib
  template:
    metadata:
      labels:
        app: fib
    spec:
      containers:
      - name: fib
        image: errm/fib
        ports:
        - containerPort: 9292
        resources:
          limits:
            cpu: 250m
            memory: 32Mi

We are not specifying a number of replicas in the deployment spec; when we first submit this deployment to the cluster, the number of replicas will therefore default to 1. This is good practice when creating a deployment whose replicas we intend to be adjusted by a Horizontal Pod Autoscaler, because it means that if we use kubectl apply to update the deployment later, we won't override the replica value the Horizontal Pod Autoscaler has set (inadvertently scaling the deployment down or up). Let's deploy this application to the cluster:

kubectl apply -f deployment.yaml

You could run kubectl get pods -l app=fib to check that the application started up correctly. We will create a service so that we are able to access the pods in our deployment; requests will be proxied to each of the replicas, spreading the load:

service.yaml

kind: Service
apiVersion: v1
metadata:
  name: fib
spec:
  selector:
    app: fib
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9292

Submit the service manifest to the cluster with kubectl:

kubectl apply -f service.yaml

We are going to configure a Horizontal Pod Autoscaler to control the number of replicas in our deployment. The spec defines how we want the autoscaler to behave; we have defined here that we want the autoscaler to maintain between 1 and 10 replicas of our application and achieve a target average CPU utilization of 60% across those replicas. When CPU utilization falls below 60%, the autoscaler will adjust the replica count of the targeted deployment down; when it goes above 60%, replicas will be added:

hpa.yaml

kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: fib
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: fib
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 60

Create the autoscaler with kubectl:

kubectl apply -f hpa.yaml

The kubectl autoscale command is a shortcut to create a HorizontalPodAutoscaler; running kubectl autoscale deployment fib --min=1 --max=10 --cpu-percent=60 would create an equivalent autoscaler.

Once you have created the Horizontal Pod Autoscaler, you can see a lot of interesting information about its current state with kubectl describe:

$ kubectl describe hpa fib
Name:               fib
Namespace:          default
CreationTimestamp:  Sat, 15 Sep 2018 14:32:46 +0100
Reference:          Deployment/fib
Metrics:            ( current / target )
  resource cpu:     0% (1m) / 60%
Min replicas:       1
Max replicas:       10
Deployment pods:    1 current / 1 desired

Now that we have set up our Horizontal Pod Autoscaler, we should generate some load on the pods in our deployment to illustrate how it works. In this case, we are going to use the ab (Apache Bench) tool to repeatedly ask our application to compute the thirtieth Fibonacci number:

load.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: fib-load
  labels:
    app: fib
    component: load
spec:
  template:
    spec:
      containers:
      - name: fib-load
        image: errm/ab
        args: ["-n1000", "-c4", "fib/30"]
      restartPolicy: OnFailure

This job uses ab to make 1,000 requests to the endpoint (with a concurrency of 4).
Submit the job to the cluster, then observe the state of the Horizontal Pod Autoscaler:

kubectl apply -f load.yaml
watch kubectl describe hpa fib

Once the load job has started to make requests, the autoscaler will scale up the deployment in order to handle the load:

Name:               fib
Namespace:          default
CreationTimestamp:  Sat, 15 Sep 2018 14:32:46 +0100
Reference:          Deployment/fib
Metrics:            ( current / target )
  resource cpu:     100% (251m) / 60%
Min replicas:       1
Max replicas:       10
Deployment pods:    2 current / 2 desired

Autoscaling pods based on other metrics

The metrics server provides APIs that the Horizontal Pod Autoscaler can use to gain information about the CPU and memory utilization of pods in the cluster. It is possible to target a utilization percentage, as we did for the CPU metric, or to target an absolute value, as we do here for the memory metric:

hpa.yaml

kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: fib
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: fib
  metrics:
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 20M

The Horizontal Pod Autoscaler also allows us to scale on other metrics provided by more comprehensive metrics systems. Kubernetes allows metrics APIs to be aggregated for custom and external metrics.

Custom metrics are metrics other than CPU and memory that are associated with a pod. You might, for example, use an adapter that allows you to use metrics that a system like Prometheus has collected from your pods. This can be very beneficial if you have more detailed metrics available about the utilization of your application, for example, a forking web server that exposes a count of busy worker processes, or a queue processing application that exposes metrics about the number of items currently enqueued.

External metrics adapters provide information about resources that are not associated with any object within Kubernetes, for example, an external queuing system such as the AWS SQS service. On the whole, it is simpler if your applications can themselves expose metrics about the resources they depend on rather than relying on an external metrics adapter, as it can be hard to limit access to particular external metrics, whereas custom metrics are tied to a particular Pod, so Kubernetes can limit access to only those users and processes that need to use them.

Autoscaling the cluster

The capabilities of the Kubernetes Horizontal Pod Autoscaler allow us to add and remove pod replicas from our applications as their resource usage changes over time. However, this makes no difference to the capacity of our cluster. If our pod autoscaler is adding pods to handle an increase in load, then eventually we might run out of space in our cluster, and additional pods would fail to be scheduled. If there is a decrease in the load on our application and the pod autoscaler removes pods, then we are paying AWS for EC2 instances that will sit idle.

When we created our cluster in Chapter 7, A Production-Ready Cluster, we deployed the cluster nodes using an autoscaling group, so we should be able to use this to grow and shrink the cluster as the needs of the applications deployed to it change over time. Autoscaling groups have built-in support for scaling the size of the cluster based on the average CPU utilization of the instances.
This, however, is not really suitable when dealing with a Kubernetes cluster, because the workloads running on each node of our cluster might be quite different, so the average CPU utilization is not really a very good proxy for the free capacity of the cluster. Thankfully, in order to schedule pods to nodes effectively, Kubernetes keeps track of the capacity of each node and the resources requested by each pod. By utilizing this information, we can automate scaling the cluster to match the size of the workload.

The Kubernetes autoscaler project provides a cluster autoscaler component for some of the main cloud providers, including AWS. The cluster autoscaler can be deployed to our cluster quite simply. As well as being able to add instances to our cluster, the cluster autoscaler is also able to drain the pods from and then terminate instances when the capacity of the cluster can be reduced.

Deploying the cluster autoscaler

Deploying the cluster autoscaler to our cluster is quite simple, as it just requires a simple pod to be running; all we need for this is a Kubernetes deployment. In order for the cluster autoscaler to update the desired capacity of our autoscaling group, we need to give it permissions via an IAM role. If you are using kube2iam, we can specify this role for the cluster autoscaler pod via an appropriate annotation:

cluster_autoscaler.tf

data "aws_iam_policy_document" "eks_node_assume_role_policy" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = ["${aws_iam_role.node.arn}"]
    }
  }
}

resource "aws_iam_role" "cluster_autoscaler" {
  name               = "EKSClusterAutoscaler"
  assume_role_policy = "${data.aws_iam_policy_document.eks_node_assume_role_policy.json}"
}

data "aws_iam_policy_document" "autoscaler" {
  statement {
    actions = [
      "autoscaling:DescribeAutoScalingGroups",
      "autoscaling:DescribeAutoScalingInstances",
      "autoscaling:DescribeTags",
      "autoscaling:SetDesiredCapacity",
      "autoscaling:TerminateInstanceInAutoScalingGroup"
    ]
    resources = ["*"]
  }
}

resource "aws_iam_role_policy" "cluster_autoscaler" {
  name   = "cluster-autoscaler"
  role   = "${aws_iam_role.cluster_autoscaler.id}"
  policy = "${data.aws_iam_policy_document.autoscaler.json}"
}

In order to deploy the cluster autoscaler to our cluster, we will submit a deployment manifest using kubectl, and we will use Terraform's templating system to produce the manifest. We create a service account that is used by the autoscaler to connect to the Kubernetes API:

cluster_autoscaler.tpl

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
  name: cluster-autoscaler
  namespace: kube-system

The cluster autoscaler needs to read information about the current resource usage of the cluster, and it needs to be able to evict pods from nodes that need to be removed from the cluster and terminated. The cluster-autoscaler ClusterRole below provides the required permissions for these actions.
The following is the code continuation for cluster_autoscaler.tpl:

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
- apiGroups: [""]
  resources: ["events","endpoints"]
  verbs: ["create", "patch"]
- apiGroups: [""]
  resources: ["pods/eviction"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["pods/status"]
  verbs: ["update"]
- apiGroups: [""]
  resources: ["endpoints"]
  resourceNames: ["cluster-autoscaler"]
  verbs: ["get","update"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["watch","list","get","update"]
- apiGroups: [""]
  resources: ["pods","services","replicationcontrollers","persistentvolumeclaims","persistentvolumes"]
  verbs: ["watch","list","get"]
- apiGroups: ["extensions"]
  resources: ["replicasets","daemonsets"]
  verbs: ["watch","list","get"]
- apiGroups: ["policy"]
  resources: ["poddisruptionbudgets"]
  verbs: ["watch","list"]
- apiGroups: ["apps"]
  resources: ["statefulsets"]
  verbs: ["watch","list","get"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["watch","list","get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler
subjects:
- kind: ServiceAccount
  name: cluster-autoscaler
  namespace: kube-system

Note that the cluster autoscaler stores state information in a ConfigMap, so it needs permission to read from and write to it. The following Role allows that. This is the code continuation for cluster_autoscaler.tpl:

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["cluster-autoscaler-status"]
  verbs: ["delete","get","update"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-autoscaler
subjects:
- kind: ServiceAccount
  name: cluster-autoscaler
  namespace: kube-system

Finally, let's consider the manifest for the cluster autoscaler deployment itself. The cluster autoscaler pod contains a single container running the cluster autoscaler control loop. You will notice that we are passing some configuration to the cluster autoscaler as command-line arguments. Most importantly, the --node-group-auto-discovery flag allows the autoscaler to operate on autoscaling groups with the kubernetes.io/cluster/<cluster_name> tag. This is convenient because we don't have to explicitly configure the autoscaler with our cluster autoscaling group.

If your Kubernetes cluster has nodes in more than one availability zone and you are running pods that rely on being scheduled to a particular zone (for example, pods that are making use of EBS volumes), it is recommended to create an autoscaling group for each availability zone that you plan to use. If you use one autoscaling group that spans several zones, then the cluster autoscaler will be unable to specify the availability zone of the instances that it launches.
Here is the code continuation for cluster_autoscaler.tpl:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      annotations:
        iam.amazonaws.com/role: ${iam_role}
      labels:
        app: cluster-autoscaler
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
      - image: k8s.gcr.io/cluster-autoscaler:v1.3.3
        name: cluster-autoscaler
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 300Mi
        command:
        - ./cluster-autoscaler
        - --v=4
        - --stderrthreshold=info
        - --cloud-provider=aws
        - --skip-nodes-with-local-storage=false
        - --expander=least-waste
        - --node-group-auto-discovery=asg:tag=kubernetes.io/cluster/${cluster_name}
        env:
        - name: AWS_REGION
          value: ${aws_region}
        volumeMounts:
        - name: ssl-certs
          mountPath: /etc/ssl/certs/ca-certificates.crt
          readOnly: true
        imagePullPolicy: "Always"
      volumes:
      - name: ssl-certs
        hostPath:
          path: "/etc/ssl/certs/ca-certificates.crt"

Finally, we render the templated manifest by passing in the variables for the AWS region, cluster name, and IAM role, and submit the rendered file to Kubernetes using kubectl. The following Terraform configuration does this:

data "aws_region" "current" {}

data "template_file" "cluster_autoscaler" {
  template = "${file("${path.module}/cluster_autoscaler.tpl")}"

  vars {
    aws_region   = "${data.aws_region.current.name}"
    cluster_name = "${aws_eks_cluster.control_plane.name}"
    iam_role     = "${aws_iam_role.cluster_autoscaler.name}"
  }
}

resource "null_resource" "cluster_autoscaler" {
  triggers = {
    manifest_sha1 = "${sha1("${data.template_file.cluster_autoscaler.rendered}")}"
  }

  provisioner "local-exec" {
    command = "kubectl --kubeconfig=${local_file.kubeconfig.filename} apply -f -<<EOF\n${data.template_file.cluster_autoscaler.rendered}\nEOF"
  }
}

Thus, by understanding how Kubernetes assigns Quality of Service classes to your pods based on the resource requests and limits that you assign to them, you can control precisely how your pods are managed. By ensuring your critical applications, such as web servers and databases, run with the Guaranteed class, you can ensure that they will perform consistently and suffer minimal disruption when pods need to be rescheduled.

If you have enjoyed reading this post, head over to our book, Kubernetes on AWS, for tips on deploying and managing applications, keeping your cluster and applications secure, and ensuring that your whole system is reliable and resilient to failure.

Low Carbon Kubernetes Scheduler: A demand side management solution that consumes electricity in low grid carbon intensity areas
A vulnerability discovered in Kubernetes kubectl cp command can allow malicious directory traversal attack on a targeted system
Kubernetes 1.15 releases with extensibility around core Kubernetes APIs, cluster lifecycle stability, and more!


8 Reasons why architects love API driven architecture

Aaron Lazar
07 Jun 2018
6 min read
Everyday, we see a new architecture popping up, being labeled as a modern architecture for application development. That’s what happened with Microservices in the beginning, and then all went for a toss when they were termed as a design pattern rather than an architecture on a whole. APIs are growing in popularity and are even being used as a basis to draw out the architecture of applications. We’re going to try and understand what some of the top factors are, which make Architects (and Developers) appreciate API driven architectures over the other “modern” and upcoming architectures. Before we get to the reasons, let’s understand where I’m coming from in the first place. So, we recently published our findings from the Skill Up survey that we conducted for 8,000 odd IT pros. We asked them various questions ranging from what their favourite tools were, to whether they felt they knew more than what their managers did. Of the questions, one of them was directed to find out which of the modern architectures interested them the most. The choices were among Chaos Engineering, API Driven Architecture and Evolutionary Architecture. Source: Skill Up 2018 From the results, it's evident that they’re more inclined towards API driven Architecture. Or maybe, those who didn’t really find the architecture of their choice among the lot, simply chose API driven to be the best of the lot. But why do architects love API driven development? Anyway, I’ve been thinking about it a bit and thought I would come up with a few reasons as to why this might be so. So here goes… Reason #1: The big split between the backend and frontend Also known as Split Stack Development, API driven architecture allows for the backend and frontend of the application to be decoupled. This allows developers and architects to mitigate any dependencies that each end might have or rather impose on the other. Instead of having the dependencies, each end communicates with the other via APIs. This is extremely beneficial in the sense that each end can be built in completely different tools and technologies. For example, the backend could be in Python/Java, while the front end is built in JavaScript. Reason #2: Sensibility in scalability When APIs are the foundation of an architecture, it enables the organisation to scale the app by simply plugging in services as and when needed, instead of having to modify the app itself. This is a great way to plugin and plugout functionality as and when needed without disrupting the original architecture. Reason #3: Parallel Development aka Agile When different teams work on the front and back end of the application, there’s no reason for them to be working together. That doesn’t mean they don’t work together at all, rather, what I mean is that the only factor they have to agree upon is the API structure and nothing else. This is because of Reason #1, where both layers of the architecture are disconnected or decoupled. This enables teams to be more flexible and agile when developing the application. It is only at the testing and deployment stages that the teams will collaborate more. Reason #4: API as a product This is more of a business case, rather than developer centric, but I thought I should add it in anyway. So, there’s something new that popped up on the Thoughtworks Radar, a few months ago - API-as-a-product.  As a matter of fact, you could consider this similar to API-as-a-Service. Organisations like Salesforce have been offering their services in the form of APIs. 
For example, suppose you’re using Salesforce CRM and you want to extend the functionality, all you need to do is use the APIs for extending the system. Google is another good example of a company that offers APIs as products. This is a great way to provide extensibility instead of having a separate application altogether. Individual APIs or groups of them can be priced with subscription plans. These plans contain not only access to the APIs themselves, but also a defined number of calls or data that is allowed. Reason #5: Hiding underlying complexity In an API driven architecture, all components that are connected to the API are modular, exist on their own and communicate via the API. The modular nature of the application makes it easier to test and maintain. Moreover, if you’re using or consuming someone else’s API, you needn’t learn/decipher the entire code’s working, rather you can just plug in the API and use it. That reduces complexity to a great extent. Reason #6: Business Logic comes first API driven architecture allows developers to focus on the Business Logic, rather than having to worry about structuring the application. The initial API structure is all that needs to be planned out, after which each team goes forth and develops the individual APIs. This greatly reduces development time as well. Reason #7: IoT loves APIs API architecture makes for a great way to build IoT applications, as IoT needs a great deal of scalability. An application that is built on a foundation of APIs is a dream for IoT developers as devices can be easily connected to the mother app. I expect everything to be connected via APIs in the next 5 years. If it doesn’t happen, you can always get back at me in the comments section! ;) Reason #8: APIs and DevOps are a match made in Heaven APIs allow for a more streamlined deployment pipeline, while also eliminating the production of duplicate assets by development teams. Moreover, deployments can reach production a lot faster through these slick pipelines, thus increasing efficiency and reducing costs by a great deal. The merger of DevOps and API driven architecture, however, is not a walk in the park, as it requires a change in mindset. Teams need to change culturally, to become enablers of reusable, self-service consumption. The other side of the coin Well, there’s always two sides to the coin, and there are some drawbacks to API driven architecture. For starters, you’ll have APIs all over the place! While that was the point in the first place, it becomes really tedious to manage all those APIs. Secondly, when you have things running in parallel, you require a lot of processing power - more cores, more infrastructure. Another important issue is regarding security. With so many cyber attacks, and privacy breaches, an API driven architecture only invites trouble with more doors for hackers to open. So apart from the above flipside, those were some of the reasons I could think of, as to why Architects would be interested in an API driven architecture. APIs give customers, i.e both internal and external stakeholders, the freedom to leverage enterprise’s assets, while customizing as required. In a way, APIs aren’t just ways to offer integration and connectivity for large enterprise apps. Rather, they should be looked at as a way to drive faster and more modern software architecture and delivery. What are web developers favorite front-end tools? The best backend tools in web development The 10 most common types of DoS attacks you need to know


Harrison Ferrone explains why C# is the preferred programming language for building games in Unity

Sugandha Lahoti
16 Dec 2019
6 min read
C# is one of the most popular programming languages which is used to create games in the Unity game engine. Experiences (games, AR/VR apps, etc) built with Unity have reached nearly 3 billion devices worldwide and were installed 24 billion times in the last 12 months. We spoke to Harrison Ferrone, software engineer, game developer, creative technologist and author of the book, “Learning C# by Developing Games with Unity 2019”. We talked about why C# is used for game designing, the recent Unity 2019.2 release, and some tips and tricks tips for those developing games with Unity. On C# and Game development Why is C# is widely-used to create games? How does it compare to C++? How is C# being used in other areas such as mobile and web development? I think Unity chose to move forward with C# instead of Javascript or Boo because of its learning curve and its history with Microsoft. [Boo was one of the three scripting languages for the Unity game engine until it was dropped in 2014]. In my experience, C# is easier to learn than languages like C++, and that accessibility is a huge draw for game designers and programmers in general. With Xamarin mobile development and ASP.NET web applications in the mix, there’s really no stopping the C# language any time soon. What are C# scripts? How are they useful for creating games with Unity? C# scripts are the code files that store behaviors in Unity, powering everything the engine does. While there are a lot of new tools that will allow a developer to make a game without them, scripts are still the best way to create custom actions and interactions within a game space. Editor’s Tip: To get started with how to create a C# script in Unity, you can go through Chapter 1 of Harrison Ferrone’s book Learning C# by Developing Games with Unity 2019. On why Harrison wrote his book, Learning C# by Developing Games with Unity 2019 Tell us the motivation behind writing your book Learning C# by Developing Games with Unity 2019. Why is developing Unity games a good way to learn the C# programming language? Why do you prefer Unity over other game engines? My main motivation for writing the book was two-fold. First, I always wanted to be a writer, so marrying my love for technology with a lifelong dream was a no-brainer. Second, I wanted to write a beginner’s book that would stay true to a beginner audience, always keeping them in mind. In terms of choosing games as a medium for learning, I’ve found that making something interesting and novel while learning a new skill-set leads to greater absorption of the material and more overall enjoyment. Unity has always been my go-to engine because its interface is highly intuitive and easy to get started with. You have 3 years of experience building iOS applications in Swift. You also have a number of articles and tutorials on the same on the Ray Wenderlich website. Recently, you started branching out into C++ and Unreal Engine 4. How did you get into game design and Unity development? What made you interested in building games?  I actually got into Game design and Unity development first, before all the iOS and Swift experience. It was my major in university, and even though I couldn’t find a job in the game industry right after I graduated, I still held onto it as a passion. On developing games The latest release of Unity, Unity 2019.2 has a number of interesting features such as ProBuilder, Shader Graph, and effects, 2D Animation, Burst Compiler, etc. What are some of your favorite features in this release? 
What are your expectations from Unity 2019.3?  I’m really excited about ProBuilder in this release, as it’s a huge time saver for someone as artistically challenged as I am. I think tools like this will level the playing field for independent developers who may not have access to the environment or level builders. What are some essential tips and tricks that a game developer must keep in mind when working in Unity? What are the do’s and don’ts? I’d say the biggest thing to keep in mind when working with Unity is the component architecture that it’s built on. When you’re writing your own scripts, think about how they can be separated into their individual functions and structure them like that - with purpose. There’s nothing worse than having a huge, bloated C# script that does everything under the sun and attaching it to a single game object in your project, then realizing it really needs to be separated into its component parts. What are the biggest challenges today in the field of game development? What is your advice for those developing games using C#? Reaching the right audience is always challenge number one in any industry, and game development is no different. This is especially true for indie game developers as they have to always be mindful of who they are making their game for and purposefully design and program their games accordingly. As far as advice goes, I always say the same thing - learn design patterns and agile development methodologies, they will open up new avenues for professional programming and project management. Rust has been touted as one of the successors of the C family of languages. The present state of game development in Rust is also quite encouraging. What are your thoughts on Rust for game dev? Do you think major game engines like Unity and Unreal will support Rust for game development in the future? I don’t have any experience with Rust, but major engines like Unity and Unreal are unlikely to adopt a new language because of the huge cost associated with a changeover of that magnitude. However, that also leaves the possibility open for another engine to be developed around Rust in the future that targets games, mobile, and/or web development. About the Author Harrison Ferrone was born in Chicago, IL, and raised all over. Most days, you can find him creating instructional content for LinkedIn Learning and Pluralsight, or tech editing for the Ray Wenderlich website. After a few years as an iOS developer at small start-ups, and one Fortune 500 company, he fell into a teaching career and never looked back. Throughout all this, he's bought many books, acquired a few cats, worked abroad, and continually wondered why Neuromancer isn't on more course syllabi. You can follow him on Linkedin, and GitHub.


How to handle backup and recovery with PostgreSQL 11 [Tutorial]

Amrata Joshi
02 Mar 2019
11 min read
If you are running a PostgreSQL setup, there are basically two major methods of performing backups:

Logical dumps (extracting an SQL script that represents your data)
Transaction log shipping

The idea behind transaction log shipping is to archive the binary changes made to the database. Most people claim that transaction log shipping is the only real way to do backups. However, in my opinion, this is not necessarily true. Many people rely on pg_dump to simply extract a textual representation of the data. Interestingly, pg_dump is also the oldest method of creating a backup and has been around since the very early days of the PostgreSQL project (transaction log shipping was added much later). Every PostgreSQL administrator will become familiar with pg_dump sooner or later, so it is important to know how it really works and what it does.

This article is an excerpt taken from the book Mastering PostgreSQL 11 - Second Edition by Hans-Jürgen Schönig. In this book, you will learn the approach to get to grips with advanced PostgreSQL 11 features and SQL functions, master replication and failover techniques, configure database security, and more. In this article, you will learn the process of partially dumping data, restoring backups, saving global data, and much more.

Running pg_dump

The first thing we want to do is create a simple textual dump:

[hs@linuxpc ~]$ pg_dump test > /tmp/dump.sql

This is the most simplistic backup you can imagine. Basically, pg_dump logs in to the local database instance, connects to a database called test, and starts to extract all the data, which will then be sent to stdout and redirected to the file. The beauty here is that the standard output gives you all the flexibility of a Unix system: you can easily compress the data using a pipe, or do whatever else you want to do with it.

In some cases, you might want to run pg_dump as a different user. All PostgreSQL client programs support a consistent set of command-line parameters to pass user information. If you just want to set the user, use the -U flag as follows:

[hs@linuxpc ~]$ pg_dump -U whatever_powerful_user test > /tmp/dump.sql

The following set of parameters can be found in all PostgreSQL client programs:

...
Connection options:
  -d, --dbname=DBNAME      database to dump
  -h, --host=HOSTNAME      database server host or socket directory
  -p, --port=PORT          database server port number
  -U, --username=NAME      connect as specified database user
  -w, --no-password        never prompt for password
  -W, --password           force password prompt (should happen automatically)
  --role=ROLENAME          do SET ROLE before dump
...

You can just pass the information you want to pg_dump, and if you have enough permissions, PostgreSQL will fetch the data. The important thing here is to see how the program really works. Basically, pg_dump connects to the database and opens a large repeatable read transaction that simply reads all the data. Remember, repeatable read ensures that PostgreSQL creates a consistent snapshot of the data, which does not change throughout the transaction. In other words, a dump is always consistent: no foreign keys will be violated. The output is a snapshot of the data as it was when the dump started. Consistency is a key factor here; it also implies that changes made to the data while the dump is running won't make it into the backup anymore. A dump simply reads everything, so there are no separate permissions needed to be able to dump something: as long as you can read it, you can back it up.
Passing passwords and connection information

If you take a close look at the connection parameters shown in the previous section, you will notice that there is no way to pass a password to pg_dump. You can enforce a password prompt, but you cannot pass the parameter to pg_dump using a command-line option. The reason for this is simply that the password might show up in the process table and be visible to other people. The question now is: if pg_hba.conf, which is on the server, enforces a password, how can the client program provide it? There are various means of doing this, some of which are as follows:

Making use of environment variables
Making use of .pgpass
Using service files

In this section, we will learn about all three methods.

Extracting subsets of data

Up until now, we have seen how to dump an entire database. However, this is not always what we want; in many cases, we just want to extract a subset of tables or schemas. Fortunately, pg_dump can help us do that, providing a number of switches:

-a: It only dumps the data and does not dump the data structure
-s: It dumps the data structure but skips the data
-n: It only dumps a certain schema
-N: It dumps everything but excludes certain schemas
-t: It only dumps certain tables
-T: It dumps everything but certain tables (this can make sense if you want to exclude logging tables and so on)

Partial dumps can be very useful in order to speed things up considerably.

Handling various formats

So far, we have seen that pg_dump can be used to create text files. The problem here is that a text file can only be replayed completely: if we have saved an entire database, we can only replay the entire thing. In most cases, this is not what we want. Therefore, PostgreSQL has additional formats that offer more functionality. At this point, four formats are supported:

-F, --format=c|d|t|p    output file format (custom, directory, tar, plain text (default))

We have already seen plain, which is just normal text. On top of that, we can use the custom format. The idea behind the custom format is to have a compressed dump, including a table of contents. Here are two ways to create a custom format dump:

[hs@linuxpc ~]$ pg_dump -Fc test > /tmp/dump.fc
[hs@linuxpc ~]$ pg_dump -Fc test -f /tmp/dump.fc

In addition to the table of contents, the compressed dump has one more advantage: it is a lot smaller. The rule of thumb is that a custom format dump is around 90% smaller than the database instance you are about to back up. Of course, this is highly dependent on the number of indexes, but for many database applications, this rough estimation will hold true. Once the backup is created, we can inspect the backup file:

[hs@linuxpc ~]$ pg_restore --list /tmp/dump.fc
;
; Archive created at 2018-11-04 15:44:56 CET
;     dbname: test
;     TOC Entries: 18
;     Compression: -1
;     Dump Version: 1.12-0
;     Format: CUSTOM
;     Integer: 4 bytes
;     Offset: 8 bytes
;     Dumped from database version: 11.0
;     Dumped by pg_dump version: 11.0
;
; Selected TOC Entries:
;
3103; 1262 16384 DATABASE - test hs
3; 2615 2200 SCHEMA - public hs
3104; 0 0 COMMENT - SCHEMA public hs
1; 3079 13350 EXTENSION - plpgsql
3105; 0 0 COMMENT - EXTENSION plpgsql
187; 1259 16391 TABLE public t_test hs
...

Note that pg_restore --list will return the table of contents of the backup. Using a custom format is a good idea, as the backup will shrink in size. However, there's more: the -Fd option will create a backup in the directory format.
Instead of a single file, you will now get a directory containing a couple of files: [hs@linuxpc ~]$ mkdir /tmp/backup [hs@linuxpc ~]$ pg_dump -Fd test -f /tmp/backup/ [hs@linuxpc ~]$ cd /tmp/backup/ [hs@linuxpc backup]$ ls -lh total 86M -rw-rw-r--. 1 hs hs 85M Jan 4 15:54 3095.dat.gz -rw-rw-r--. 1 hs hs 107 Jan 4 15:54 3096.dat.gz -rw-rw-r--. 1 hs hs 740K Jan 4 15:54 3097.dat.gz -rw-rw-r--. 1 hs hs 39 Jan 4 15:54 3098.dat.gz -rw-rw-r--. 1 hs hs 4.3K Jan 4 15:54 toc.dat One advantage of the directory format is that we can use more than one core to perform the backup. In the case of a plain or custom format, only one process will be used by pg_dump. The directory format changes that rule. The following example shows how we can tell pg_dump to use four cores (jobs): [hs@linuxpc backup]$ rm -rf * [hs@linuxpc backup]$ pg_dump -Fd test -f /tmp/backup/ -j 4 The more objects in our database, the more of a chance there is for a potential speedup. Replaying backups Having a backup is pointless unless you have tried to actually replay it. Fortunately, this is easy to do. If you have created a plain text backup, simply take the SQL file and execute it. The following example shows how that can be done: psql your_db < your_file.sql A plain text backup is simply a text file containing everything. We can always simply replay a text file. If you have decided on a custom format or directory format, you can use pg_restore to replay the backup. Additionally, pg_restore allows you to do all kinds of fancy things such as replaying just part of a database and so on. In most cases, however, you will simply replay the entire database. In this example, we will create an empty database and just replay a custom format dump: [hs@linuxpc backup]$ createdb new_db [hs@linuxpc backup]$ pg_restore -d new_db -j 4 /tmp/dump.fc Note that pg_restore will add data to an existing database. If your database is not empty, pg_restore might error out but continue. Again, -j is used to throw up more than one process. In this example, four cores are used to replay the data; however, this only works when more than one table is being replayed. If you are using a directory format, you can simply pass the name of the directory instead of the file. As far as performance is concerned, dumps are a good solution if you are working with small or medium amounts of data. There are two major downsides: We will get a snapshot, so everything since the last snapshot will be lost Rebuilding a dump from scratch is comparatively slow compared to binary copies because all of the indexes have to be rebuilt Handling global data In the previous sections, we learned about pg_dump and pg_restore, which are two vital programs when it comes to creating backups. The thing is, pg_dump creates database dumps—it works on the database level. If we want to back up an entire instance, we have to make use of pg_dumpall or dump all of the databases separately. Before we dig into that, it makes sense to see how pg_dumpall works: pg_dumpall > /tmp/all.sql Let's see, pg_dumpall will connect to one database after the other and send stuff to standard out, where you can process it with Unix. Note that pg_dumpall can be used just like pg_dump. However, it has some downsides. It does not support a custom or directory format, and therefore does not offer multicore support. This means that we will be stuck with one thread. However, there is more to pg_dumpall. Keep in mind that users live on the instance level. 
If you create a normal database dump, you will get all of the permissions, but you won't get all of the CREATE USER statements. Those globals are not included in a normal dump—they will only be extracted by pg_dumpall. If we only want the globals, we can run pg_dumpall using the -g option:

pg_dumpall -g > /tmp/globals.sql

In most cases, you will want to run pg_dumpall -g along with a custom or directory format dump of every database, so that the entire instance is covered. A simple backup script might look like this:

#!/bin/sh

BACKUP_DIR=/tmp/
pg_dumpall -g > $BACKUP_DIR/globals.sql

for x in $(psql -c "SELECT datname FROM pg_database WHERE datname NOT IN ('postgres', 'template0', 'template1')" postgres -A -t)
do
    pg_dump -Fc $x > $BACKUP_DIR/$x.fc
done

It will first dump the globals and then loop through the list of databases to extract them one by one in a custom format. To summarize, in this article, we learned about creating backups and dumps in general. To know more about streaming replication and binary backups, check out our book Mastering PostgreSQL 11 - Second Edition.

Handling backup and recovery in PostgreSQL 10 [Tutorial]
Understanding SQL Server recovery models to effectively backup and restore your database
Saving backups on cloud services with ElasticSearch plugins
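Coming back to the password-handling options listed at the beginning of this article, the two simplest ones are the PGPASSWORD environment variable and the .pgpass file. Both are standard PostgreSQL client features; the host, port, and password shown below are only placeholders:

# Option 1: environment variable (handy for one-off runs, not recommended for permanent use)
PGPASSWORD=secret pg_dump -h localhost -U hs test > /tmp/dump.sql

# Option 2: a .pgpass file in the home directory
# format: hostname:port:database:username:password
echo "localhost:5432:*:hs:secret" >> ~/.pgpass
chmod 600 ~/.pgpass    # the file is ignored unless its permissions are restrictive
pg_dump -h localhost -U hs test > /tmp/dump.sql

The .pgpass route is usually preferred for scripts, because the password never appears on the command line or in the process table.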

How to build a cartpole game using OpenAI Gym

Savia Lobo
10 Mar 2018
11 min read
[box type="note" align="" class="" width=""]This article is an excerpt taken from the book Mastering TensorFlow 1.x written by Armando Fandango. In this book, you will learn advanced features of TensorFlow1.x, such as distributed TensorFlow with TF Clusters, deploy production models with TensorFlow Serving, and more. [/box] Today, we will help you understand OpenAI Gym and how to apply the basics of OpenAI Gym onto a cartpole game. OpenAI Gym 101 OpenAI Gym is a Python-based toolkit for the research and development of reinforcement learning algorithms. OpenAI Gym provides more than 700 opensource contributed environments at the time of writing. With OpenAI, you can also create your own environment. The biggest advantage is that OpenAI provides a unified interface for working with these environments, and takes care of running the simulation while you focus on the reinforcement learning algorithms. Note : The research paper describing OpenAI Gym is available here: http://arxiv.org/abs/1606.01540 You can install OpenAI Gym using the following command: pip3  install  gym Note: If the above command does not work, then you can find further help with installation at the following link: https://github.com/openai/ gym#installation  Let us print the number of available environments in OpenAI Gym: all_env  =  list(gym.envs.registry.all()) print('Total  Environments  in  Gym  version  {}  :  {}' .format(gym.     version     ,len(all_env))) Total  Environments  in  Gym  version  0.9.4  :  777  Let us print the list of all environments: for  e  in  list(all_env): print(e) The partial list from the output is as follows: EnvSpec(Carnival-ramNoFrameskip-v0) EnvSpec(EnduroDeterministic-v0) EnvSpec(FrostbiteNoFrameskip-v4) EnvSpec(Taxi-v2) EnvSpec(Pooyan-ram-v0) EnvSpec(Solaris-ram-v4) EnvSpec(Breakout-ramDeterministic-v0) EnvSpec(Kangaroo-ram-v4) EnvSpec(StarGunner-ram-v4) EnvSpec(Enduro-ramNoFrameskip-v4) EnvSpec(DemonAttack-ramDeterministic-v0) EnvSpec(TimePilot-ramNoFrameskip-v0) EnvSpec(Amidar-v4) Each environment, represented by the env object, has a standardized interface, for example: An env object can be created with the env.make(<game-id-string>) function by passing the id string. Each env object contains the following main functions: The step() function takes an action object as an argument and returns four objects: observation: An object implemented by the environment, representing the observation of the environment. reward: A signed float value indicating the gain (or loss) from the previous action. done: A Boolean value representing if the scenario is finished. info: A Python dictionary object representing the diagnostic information. The render() function creates a visual representation of the environment. The reset() function resets the environment to the original state. Each env object comes with well-defined actions and observations, represented by action_space and observation_space. One of the most popular games in the gym to learn reinforcement learning is CartPole. In this game, a pole attached to a cart has to be balanced so that it doesn't fall. The game ends if either the pole tilts by more than 15 degrees or the cart moves by more than 2.4 units from the center. The home page of OpenAI.com emphasizes the game in these words: The small size and simplicity of this environment make it possible to run very quick experiments, which is essential when learning the basics. The game has only four observations and two actions. The actions are to move a cart by applying a force of +1 or -1. 
The observations are the position of the cart, the velocity of the cart, the angle of the pole, and the rotation rate of the pole. However, knowledge of the semantics of observation is not necessary to learn to maximize the rewards of the game. Now let us load a popular game environment, CartPole-v0, and play it with stochastic control:  Create the env object with the standard make function: env  =  gym.make('CartPole-v0')  The number of episodes is the number of game plays. We shall set it to one, for now, indicating that we just want to play the game once. Since every episode is stochastic, in actual production runs you will run over several episodes and calculate the average values of the rewards. Additionally, we can initialize an array to store the visualization of the environment at every timestep: n_episodes  =  1 env_vis  =  []  Run two nested loops—an external loop for the number of episodes and an internal loop for the number of timesteps you would like to simulate for. You can either keep running the internal loop until the scenario is done or set the number of steps to a higher value. At the beginning of every episode, reset the environment using env.reset(). At the beginning of every timestep, capture the visualization using env.render(). for  i_episode  in  range(n_episodes): observation  =  env.reset() for  t  in  range(100): env_vis.append(env.render(mode  =  'rgb_array')) print(observation) action  =  env.action_space.sample() observation,  reward,  done,  info  =  env.step(action) if  done: print("Episode  finished  at  t{}".format(t+1)) break  Render the environment using the helper function: env_render(env_vis)  The code for the helper function is as follows: def  env_render(env_vis): plt.figure() plot  =  plt.imshow(env_vis[0]) plt.axis('off') def  animate(i): plot.set_data(env_vis[i]) anim  =  anm.FuncAnimation(plt.gcf(), animate, frames=len(env_vis), interval=20, repeat=True, repeat_delay=20) display(display_animation(anim,  default_mode='loop')) We get the following output when we run this example: [-0.00666995  -0.03699492  -0.00972623    0.00287713] [-0.00740985    0.15826516  -0.00966868  -0.29285861] [-0.00424454  -0.03671761  -0.01552586  -0.00324067] [-0.0049789    -0.2316135    -0.01559067    0.28450351] [-0.00961117  -0.42650966  -0.0099006    0.57222875] [-0.01814136  -0.23125029    0.00154398    0.27644332] [-0.02276636  -0.0361504    0.00707284  -0.01575223] [-0.02348937    0.1588694       0.0067578    -0.30619523] [-0.02031198  -0.03634819    0.00063389  -0.01138875] [-0.02103895    0.15876466    0.00040612  -0.3038716  ] [-0.01786366    0.35388083  -0.00567131  -0.59642642] [-0.01078604    0.54908168  -0.01759984  -0.89089036] [    1.95594914e-04   7.44437934e-01    -3.54176495e-02    -1.18905344e+00] [ 0.01508435 0.54979251 -0.05919872 -0.90767902] [ 0.0260802 0.35551978 -0.0773523 -0.63417465] [ 0.0331906 0.55163065 -0.09003579 -0.95018025] [ 0.04422321 0.74784161 -0.1090394 -1.26973934] [ 0.05918004 0.55426764 -0.13443418 -1.01309691] [ 0.0702654 0.36117014 -0.15469612 -0.76546874] [ 0.0774888 0.16847818 -0.1700055 -0.52518186] [ 0.08085836 0.3655333 -0.18050913 -0.86624457] [ 0.08816903 0.56259197 -0.19783403 -1.20981195] Episode  finished  at  t22 It took 22 time-steps for the pole to become unbalanced. At every run, we get a different time-step value because we picked the action scholastically by using env.action_space.sample(). 
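As an aside, the action_space and observation_space objects described earlier can be printed and inspected directly. The following is a minimal sketch using the same Gym version and API as in this article:

import gym

env = gym.make('CartPole-v0')
print(env.action_space)             # Discrete(2): push the cart to the left (0) or to the right (1)
print(env.observation_space)        # Box(4,): cart position, cart velocity, pole angle, pole rotation rate
print(env.observation_space.high)   # upper bounds of the four observations
print(env.observation_space.low)    # lower bounds of the four observations

For CartPole-v0, the action space is Discrete(2) and the observation space is a four-dimensional Box, matching the description above.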
Since the game results in a loss so quickly, randomly picking an action and applying it is probably not the best strategy. There are many algorithms for finding solutions to keeping the pole straight for a longer number of time-steps that you can use, such as Hill Climbing, Random Search, and Policy Gradient. Note: Some of the algorithms for solving the Cartpole game are available at the following links: https://openai.com/requests-for-research/#cartpole http://kvfrans.com/simple-algoritms-for-solving-cartpole/ https://github.com/kvfrans/openai-cartpole Applying simple policies to a cartpole game So far, we have randomly picked an action and applied it. Now let us apply some logic to picking the action instead of random chance. The third observation refers to the angle. If the angle is greater than zero, that means the pole is tilting right, thus we move the cart to the right (1). Otherwise, we move the cart to the left (0). Let us look at an example: We define two policy functions as follows: def  policy_logic(env,obs): return  1  if  obs[2]  >  0  else  0 def  policy_random(env,obs): return  env.action_space.sample() Next, we define an experiment function that will run for a specific number of episodes; each episode runs until the game is lost, namely when done is True. We use rewards_max to indicate when to break out of the loop as we do not wish to run the experiment forever: def  experiment(policy,  n_episodes,  rewards_max): rewards=np.empty(shape=(n_episodes)) env  =  gym.make('CartPole-v0') for  i  in  range(n_episodes): obs  =  env.reset() done  =  False episode_reward  =  0 while  not  done: action  =  policy(env,obs) obs,  reward,  done,  info  =  env.step(action) episode_reward  +=  reward if  episode_reward  >  rewards_max: break rewards[i]=episode_reward print('Policy:{},  Min  reward:{},  Max  reward:{}' .format(policy.     name     , min(rewards), max(rewards))) We run the experiment 100 times, or until the rewards are less than or equal to rewards_max, that is set to 10,000: n_episodes  =  100 rewards_max  =  10000 experiment(policy_random,  n_episodes,  rewards_max) experiment(policy_logic,  n_episodes,  rewards_max) We can see that the logically selected actions do better than the randomly selected ones, but not that much better: Policy:policy_random,  Min  reward:9.0,  Max  reward:63.0,  Average  reward:20.26 Policy:policy_logic,  Min  reward:24.0,  Max  reward:66.0,  Average  reward:42.81 Now let us modify the process of selecting the action further—to be based on parameters. The parameters will be multiplied by the observations and the action will be chosen based on whether the multiplication result is zero or one. Let us modify the random search method in which we initialize the parameters randomly. The code looks as follows: def  policy_logic(theta,obs): #  just  ignore  theta return  1  if  obs[2]  >  0  else  0 def  policy_random(theta,obs): return  0  if  np.matmul(theta,obs)  <  0  else  1 def  episode(env,  policy,  rewards_max): obs  =  env.reset() done  =  False episode_reward  =  0 if  policy.   
name          in  ['policy_random']: theta  =  np.random.rand(4)  *  2  -  1 else: theta  =  None while  not  done: action  =  policy(theta,obs) obs,  reward,  done,  info  =  env.step(action) episode_reward  +=  reward if  episode_reward  >  rewards_max: break return  episode_reward def  experiment(policy,  n_episodes,  rewards_max): rewards=np.empty(shape=(n_episodes)) env  =  gym.make('CartPole-v0') for  i  in  range(n_episodes): rewards[i]=episode(env,policy,rewards_max) #print("Episode  finished  at  t{}".format(reward)) print('Policy:{},  Min  reward:{},  Max  reward:{},  Average  reward:{}' .format(policy.     name     , np.min(rewards), np.max(rewards), np.mean(rewards))) n_episodes  =  100 rewards_max  =  10000 experiment(policy_random,  n_episodes,  rewards_max) experiment(policy_logic,  n_episodes,  rewards_max) We can see that random search does improve the results: Policy:policy_random,  Min  reward:8.0,  Max  reward:200.0,  Average reward:40.04 Policy:policy_logic,  Min  reward:25.0,  Max  reward:62.0,  Average  reward:43.03 With the random search, we have improved our results to get the max rewards of 200. On average, the rewards for random search are lower because random search tries various bad parameters that bring the overall results down. However, we can select the best parameters from all the runs and then, in production, use the best parameters. Let us modify the code to train the parameters first: def  policy_logic(theta,obs): #  just  ignore  theta return  1  if  obs[2]  >  0  else  0 def  policy_random(theta,obs): return  0  if  np.matmul(theta,obs)  <  0  else  1 def  episode(env,policy,  rewards_max,theta): obs  =  env.reset() done  =  False episode_reward  =  0 while  not  done: action  =  policy(theta,obs) obs,  reward,  done,  info  =  env.step(action) episode_reward  +=  reward if  episode_reward  >  rewards_max: break return  episode_reward def  train(policy,  n_episodes,  rewards_max): env  =  gym.make('CartPole-v0') theta_best  =  np.empty(shape=[4]) reward_best  =  0 for  i  in  range(n_episodes): if  policy.   name          in  ['policy_random']: theta  =  np.random.rand(4)  *  2  -  1 else: theta  =  None reward_episode=episode(env,policy,rewards_max,  theta) if  reward_episode  >  reward_best: reward_best  =  reward_episode theta_best  =  theta.copy() return  reward_best,theta_best def  experiment(policy,  n_episodes,  rewards_max,  theta=None): rewards=np.empty(shape=[n_episodes]) env  =  gym.make('CartPole-v0') for  i  in  range(n_episodes): rewards[i]=episode(env,policy,rewards_max,theta) #print("Episode  finished  at  t{}".format(reward)) print('Policy:{},  Min  reward:{},  Max  reward:{},  Average  reward:{}' .format(policy.     
name     , np.min(rewards), np.max(rewards), np.mean(rewards))) n_episodes  =  100 rewards_max  =  10000 reward,theta  =  train(policy_random,  n_episodes,  rewards_max) print('trained  theta:  {},  rewards:  {}'.format(theta,reward)) experiment(policy_random,  n_episodes,  rewards_max,  theta) experiment(policy_logic,  n_episodes,  rewards_max) We train for 100 episodes and then use the best parameters to run the experiment for the random search policy: n_episodes  =  100 rewards_max  =  10000 reward,theta  =  train(policy_random,  n_episodes,  rewards_max) print('trained  theta:  {},  rewards:  {}'.format(theta,reward)) experiment(policy_random,  n_episodes,  rewards_max,  theta) experiment(policy_logic,  n_episodes,  rewards_max) We find the that the training parameters gives us the best results of 200: trained  theta:  [-0.14779543               0.93269603    0.70896423   0.84632461],  rewards: 200.0 Policy:policy_random,  Min  reward:200.0,  Max  reward:200.0,  Average reward:200.0 Policy:policy_logic,  Min  reward:24.0,  Max  reward:63.0,  Average  reward:41.94 We may optimize the training code to continue training until we reach a maximum reward. To summarize, we learnt the basics of OpenAI Gym and also applied it onto a cartpole game for relevant output.   If you found this post useful, do check out this book Mastering TensorFlow 1.x  to build, scale, and deploy deep neural network models using star libraries in Python.
How to compute Discrete Fourier Transform (DFT) using SciPy

Pravin Dhandre
02 Mar 2018
5 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book co-authored by L. Felipe Martins, Ruben Oliva Ramos and V Kishore Ayyadevara titled SciPy Recipes. This book provides numerous recipes to tackle day-to-day challenges associated with scientific computing and data manipulation using SciPy stack.[/box] Today, we will compute Discrete Fourier Transform (DFT) and inverse DFT using SciPy stack. In this article, we will focus majorly on the syntax and the application of DFT in SciPy assuming you are well versed with the mathematics of this concept. Discrete Fourier Transforms   A discrete Fourier transform transforms any signal from its time/space domain into a related signal in frequency domain. This allows us to not only analyze the different frequencies of the data, but also enables faster filtering operations, when used properly. It is possible to turn a signal in a frequency domain back to its time/spatial domain, thanks to inverse Fourier transform (IFT). How to do it… To follow with the example, we need to continue with the following steps: The basic routines in the scipy.fftpack module compute the DFT and its inverse, for discrete signals in any dimension—fft, ifft (one dimension), fft2, ifft2 (two dimensions), and fftn, ifftn (any number of dimensions). Verify all these routines assume that the data is complex valued. If we know beforehand that a particular dataset is actually real-valued, and should offer realvalued frequencies, we use rfft and irfft instead, for a faster algorithm. In order to complete with this, these routines are designed so that composition with their inverses always yields the identity. The syntax is the same in all cases, as follows: fft(x[, n, axis, overwrite_x]) The first parameter, x, is always the signal in any array-like form. Note that fft performs one-dimensional transforms. This means that if x happens to be two-dimensional, for example, fft will output another two-dimensional array, where each row is the transform of each row of the original. We can use columns instead, with the optional axis parameter. The rest of the parameters are also optional; n indicates the length of the transform and overwrite_x gets rid of the original data to save memory and resources. We usually play with the n integer when we need to pad the signal with zeros or truncate it. For a higher dimension, n is substituted by shape (a tuple) and axis by axes (another tuple). To better understand the output, it is often useful to shift the zero frequencies to the center of the output arrays with ifftshift. The inverse of this operation, ifftshift, is also included in the module. How it works… The following code shows some of these routines in action when applied to a checkerboard: import numpy from scipy.fftpack import fft,fft2, fftshift import matplotlib.pyplot as plt B=numpy.ones((4,4)); W=numpy.zeros((4,4)) signal = numpy.bmat("B,W;W,B") onedimfft = fft(signal,n=16) twodimfft = fft2(signal,shape=(16,16)) plt.figure() plt.gray() plt.subplot(121,aspect='equal') plt.pcolormesh(onedimfft.real) plt.colorbar(orientation='horizontal') plt.subplot(122,aspect='equal') plt.pcolormesh(fftshift(twodimfft.real)) plt.colorbar(orientation='horizontal') plt.show() Note how the first four rows of the one-dimensional transform are equal (and so are the last four), while the two-dimensional transform (once shifted) presents a peak at the origin and nice symmetries in the frequency domain. 
In the following screenshot, which has been obtained from the previous code, the image on the left is the fft and the one on the right is the fft2 of a 2 x 2 checkerboard signal:

Computing the discrete Fourier transform (DFT) of a data series using the FFT algorithm
In this section, we will see how to compute the discrete Fourier transform and some of its applications. How to do it… In the following table, we will see the parameters to create a data series using the FFT algorithm: How it works… The following code computes the DFT of a short complex exponential:

np.fft.fft(np.exp(2j * np.pi * np.arange(8) / 8))
array([ -3.44505240e-16 +1.14383329e-17j,
         8.00000000e+00 -5.71092652e-15j,
         2.33482938e-16 +1.22460635e-16j,
         1.64863782e-15 +1.77635684e-15j,
         9.95839695e-17 +2.33482938e-16j,
         0.00000000e+00 +1.66837030e-15j,
         1.14383329e-17 +1.22460635e-16j,
        -1.64863782e-15 +1.77635684e-15j])

In this example, real input has an FFT that is Hermitian, that is, symmetric in the real part and anti-symmetric in the imaginary part, as described in the numpy.fft documentation.

import matplotlib.pyplot as plt
t = np.arange(256)
sp = np.fft.fft(np.sin(t))
freq = np.fft.fftfreq(t.shape[-1])
plt.plot(freq, sp.real, freq, sp.imag)
[<matplotlib.lines.Line2D object at 0x...>, <matplotlib.lines.Line2D object at 0x...>]
plt.show()

The following screenshot shows how we represent the results:

Computing the inverse DFT of a data series
In this section, we will learn how to compute the inverse DFT of a data series. How to do it… In this section, we will see how to compute the inverse Fourier transform. The returned complex array contains y(0), y(1), ..., y(n-1), where y(j) = (1/n) * sum over k of x(k) * exp(2*pi*i*j*k/n). How it works… Here, we compute the inverse DFT of a short series:

np.fft.ifft([0, 4, 0, 0])
array([ 1.+0.j, 0.+1.j, -1.+0.j, 0.-1.j])

Create and plot a band-limited signal with random phases:

import matplotlib.pyplot as plt
t = np.arange(400)
n = np.zeros((400,), dtype=complex)
n[40:60] = np.exp(1j*np.random.uniform(0, 2*np.pi, (20,)))
s = np.fft.ifft(n)
plt.plot(t, s.real, 'b-', t, s.imag, 'r--')
plt.legend(('real', 'imaginary'))
plt.show()

Then we represent it, as shown in the following screenshot:

We successfully explored how to transform signals from the time or space domain into the frequency domain and vice versa, allowing you to analyze frequencies in detail. If you found this tutorial useful, do check out the book SciPy Recipes to get hands-on recipes to perform various data science tasks with ease.
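As a brief addendum, the rfft and irfft routines mentioned at the start of this article can be checked with a simple round trip on real-valued data; the snippet below is a minimal sketch:

import numpy
from scipy.fftpack import rfft, irfft

x = numpy.array([1.0, 2.0, 1.0, -1.0, 1.5])
y = rfft(x)                           # spectrum of a real-valued signal, in packed real format
print(numpy.allclose(irfft(y), x))    # True: composing with the inverse recovers the signal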

How to call an Azure function from an ASP.NET Core MVC application

Aaron Lazar
03 May 2018
10 min read
In this tutorial, we'll learn how to call an Azure Function from an ASP.NET Core MVC application. [box type="shadow" align="" class="" width=""]This article is an extract from the book C# 7 and .NET Core Blueprints, authored by Dirk Strauss and Jas Rademeyer. This book is a step-by-step guide that will teach you essential .NET Core and C# concepts with the help of real-world projects.[/box] We will get started with creating an ASP.NET Core MVC application that will call our Azure Function to validate an email address entered into a login screen of the application: This application does no authentication at all. All it is doing is validating the email address entered. ASP.NET Core MVC authentication is a totally different topic and not the focus of this post. In Visual Studio 2017, create a new project and select ASP.NET Core Web Application from the project templates. Click on the OK button to create the project. This is shown in the following screenshot: On the next screen, ensure that .NET Core and ASP.NET Core 2.0 is selected from the drop-down options on the form. Select Web Application (Model-View-Controller) as the type of application to create. Don't bother with any kind of authentication or enabling Docker support. Just click on the OK button to create your project: After your project is created, you will see the familiar project structure in the Solution Explorer of Visual Studio: Creating the login form For this next part, we can create a plain and simple vanilla login form. For a little bit of fun, let's spice things up a bit. Have a look on the internet for some free login form templates: I decided to use a site called colorlib that provided 50 free HTML5 and CSS3 login forms in one of their recent blog posts. The URL to the article is: https://colorlib.com/wp/html5-and-css3-login-forms/. I decided to use Login Form 1 by Colorlib from their site. Download the template to your computer and extract the ZIP file. Inside the extracted ZIP file, you will see that we have several folders. Copy all the folders in this extracted ZIP file (leave the index.html file as we will use this in a minute): Next, go to the solution for your Visual Studio application. In the wwwroot folder, move or delete the contents and paste the folders from the extracted ZIP file into the wwwroot folder of your ASP.NET Core MVC application. Your wwwroot folder should now look as follows: 4. Back in Visual Studio, you will see the folders when you expand the wwwroot node in the CoreMailValidation project. 5. I also want to focus your attention to the Index.cshtml and _Layout.cshtml files. We will be modifying these files next: Open the Index.cshtml file and remove all the markup (except the section in the curly brackets) from this file. Paste the HTML markup from the index.html file from the ZIP file we extracted earlier. Do not copy the all the markup from the index.html file. Only copy the markup inside the <body></body> tags. 
Your Index.cshtml file should now look as follows: @{ ViewData["Title"] = "Login Page"; } <div class="limiter"> <div class="container-login100"> <div class="wrap-login100"> <div class="login100-pic js-tilt" data-tilt> <img src="images/img-01.png" alt="IMG"> </div> <form class="login100-form validate-form"> <span class="login100-form-title"> Member Login </span> <div class="wrap-input100 validate-input" data-validate="Valid email is required: [email protected]"> <input class="input100" type="text" name="email" placeholder="Email"> <span class="focus-input100"></span> <span class="symbol-input100"> <i class="fa fa-envelope" aria-hidden="true"></i> </span> </div> <div class="wrap-input100 validate-input" data-validate="Password is required"> <input class="input100" type="password" name="pass" placeholder="Password"> <span class="focus-input100"></span> <span class="symbol-input100"> <i class="fa fa-lock" aria-hidden="true"></i> </span> </div> <div class="container-login100-form-btn"> <button class="login100-form-btn"> Login </button> </div> <div class="text-center p-t-12"> <span class="txt1"> Forgot </span> <a class="txt2" href="#"> Username / Password? </a> </div> <div class="text-center p-t-136"> <a class="txt2" href="#"> Create your Account <i class="fa fa-long-arrow-right m-l-5" aria-hidden="true"></i> </a> </div> </form> </div> </div> </div> The code for this chapter is available on GitHub here: Next, open the Layout.cshtml file and add all the links to the folders and files we copied into the wwwroot folder earlier. Use the index.html file for reference. You will notice that the _Layout.cshtml file contains the following piece of code—@RenderBody(). This is a placeholder that specifies where the Index.cshtml file content should be injected. If you are coming from ASP.NET Web Forms, think of the _Layout.cshtml page as a master page. Your Layout.cshtml markup should look as follows: <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>@ViewData["Title"] - CoreMailValidation</title> <link rel="icon" type="image/png" href="~/images/icons/favicon.ico" /> <link rel="stylesheet" type="text/css" href="~/vendor/bootstrap/css/bootstrap.min.css"> <link rel="stylesheet" type="text/css" href="~/fonts/font-awesome-4.7.0/css/font-awesome.min.css"> <link rel="stylesheet" type="text/css" href="~/vendor/animate/animate.css"> <link rel="stylesheet" type="text/css" href="~/vendor/css-hamburgers/hamburgers.min.css"> <link rel="stylesheet" type="text/css" href="~/vendor/select2/select2.min.css"> <link rel="stylesheet" type="text/css" href="~/css/util.css"> <link rel="stylesheet" type="text/css" href="~/css/main.css"> </head> <body> <div class="container body-content"> @RenderBody() <hr /> <footer> <p>© 2018 - CoreMailValidation</p> </footer> </div> <script src="~/vendor/jquery/jquery-3.2.1.min.js"></script> <script src="~/vendor/bootstrap/js/popper.js"></script> <script src="~/vendor/bootstrap/js/bootstrap.min.js"></script> <script src="~/vendor/select2/select2.min.js"></script> <script src="~/vendor/tilt/tilt.jquery.min.js"></script> <script> $('.js-tilt').tilt({ scale: 1.1 }) </script> <script src="~/js/main.js"></script> @RenderSection("Scripts", required: false) </body> </html> If everything worked out right, you will see the following page when you run your ASP.NET Core MVC application. The login form is obviously totally non-functional: However, the login form is totally responsive. 
If you had to reduce the size of your browser window, you will see the form scale as your browser size reduces. This is what you want. If you want to explore the responsive design offered by Bootstrap, head on over to https://getbootstrap.com/ and go through the examples in the documentation:   The next thing we want to do is hook this login form up to our controller and call the Azure Function we created to validate the email address we entered. Let's look at doing that next. Hooking it all up To simplify things, we will be creating a model to pass to our controller: Create a new class in the Models folder of your application called LoginModel and click on the Add button:  2. Your project should now look as follows. You will see the model added to the Models folder: The next thing we want to do is add some code to our model to represent the fields on our login form. Add two properties called Email and Password: namespace CoreMailValidation.Models { public class LoginModel { public string Email { get; set; } public string Password { get; set; } } } Back in the Index.cshtml view, add the model declaration to the top of the page. This makes the model available for use in our view. Take care to specify the correct namespace where the model exists: @model CoreMailValidation.Models.LoginModel @{ ViewData["Title"] = "Login Page"; } The next portion of code needs to be written in the HomeController.cs file. Currently, it should only have an action called Index(): public IActionResult Index() { return View(); } Add a new async function called ValidateEmail that will use the base URL and parameter string of the Azure Function URL we copied earlier and call it using an HTTP request. I will not go into much detail here, as I believe the code to be pretty straightforward. All we are doing is calling the Azure Function using the URL we copied earlier and reading the return data: private async Task<string> ValidateEmail(string emailToValidate) { string azureBaseUrl = "https://core-mail- validation.azurewebsites.net/api/HttpTriggerCSharp1"; string urlQueryStringParams = $"? code=/IS4OJ3T46quiRzUJTxaGFenTeIVXyyOdtBFGasW9dUZ0snmoQfWoQ ==&email={emailToValidate}"; using (HttpClient client = new HttpClient()) { using (HttpResponseMessage res = await client.GetAsync( $"{azureBaseUrl}{urlQueryStringParams}")) { using (HttpContent content = res.Content) { string data = await content.ReadAsStringAsync(); if (data != null) { return data; } else return ""; } } } } Create another public async action called ValidateLogin. Inside the action, check to see if the ModelState is valid before continuing. For a nice explanation of what ModelState is, have a look at the following article—https://www.exceptionnotfound.net/asp-net-mvc-demystified-modelstate/. We then do an await on the ValidateEmail function, and if the return data contains the word false, we know that the email validation failed. A failure message is then passed to the TempData property on the controller. The TempData property is a place to store data until it is read. It is exposed on the controller by ASP.NET Core MVC. The TempData property uses a cookie-based provider by default in ASP.NET Core 2.0 to store the data. To examine data inside the TempData property without deleting it, you can use the Keep and Peek methods. To read more on TempData, see the Microsoft documentation here: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/app-state?tabs=aspnetcore2x. 
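To tie these pieces together, here is a rough sketch of what such a ValidateLogin action might look like. The [HttpPost] attribute, message strings, and redirect target are illustrative assumptions and may differ from the book's final code:

[HttpPost]
public async Task<IActionResult> ValidateLogin(LoginModel model)
{
    // Only continue if the posted model passed validation
    if (!ModelState.IsValid)
    {
        return View("Index", model);
    }

    // Call the Azure Function through the helper shown earlier
    string data = await ValidateEmail(model.Email);

    if (data.Contains("false"))
    {
        // Validation failed: stash a message for the next request
        TempData["Message"] = "The email address entered is not valid.";
        return RedirectToAction("Index");
    }

    // Validation passed: in a real application, authenticate here
    TempData["Message"] = "You are logged in.";
    return RedirectToAction("Index");
}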
If the email validation passed, then we know that the email address is valid and we can do something else. Here, we are simply saying that the user is logged in. In reality, we would perform some sort of authentication here and then route to the correct controller. So now you know how to call an Azure Function from an ASP.NET Core application. If you found this tutorial helpful and you'd like to learn more, go ahead and pick up the book C# 7 and .NET Core Blueprints.

What is ASP.NET Core?
Why ASP.NET makes building apps for mobile and web easy
How to dockerize an ASP.NET Core application

Configuring and securing PYTHON LDAP Applications Part 2

Packt
23 Oct 2009
14 min read
This is the second article in the article mini-series on Python LDAP applications by Matt Butcher. For first part please visit this link. In this article we will see some of the LDAP operations such as compare operation, search operation. We will also see how to change an LDAP password. The LDAP Compare Operation One of the simplest LDAP operations to perform is the compare operation. The LDAP compare operation takes a DN, an attribute name, and an attribute value and checks the directory to see if the given DN has an attribute with the given attribute name, and the given attribute value. If it returns true then there is a match, and if false then otherwise. The Python-LDAP API supports LDAP compare operations through the LDAPObject's compare() and compare_s() functions. The synchronous function is simple. It takes three string parameters (DN, attribute name, and asserted value), and returns 0 for false, and 1 for true: >>> dn = 'uid=matt,ou=users,dc=example,dc=com'>>> attr_name = 'sn'>>> attr_val = 'Butcher'>>> con.compare_s(dn, attr_name, attr_val)1 In this case, we check the DN uid=matt,ou=user,dc=example,dc=com to see if the surname (sn) has the value Butcher. It does, so the method returns 1. But let's set the attr_val to a different surname, one that the record does not contain: >>> attr_val = 'Smith'>>> con.compare_s(dn, attr_name, attr_val)0>>> Since the record identified by the DN uid=matt,ou=users,dc=example,dc=com does not have an SN attribute with the value Smith, this method returns 0, false. Historically, Python has treated the boolean value False with 0, and numeric values greater than zero as boolean True. So it is possible to use a compare like this: if con.compare_s(dn, attr_name, attr_val): print "Match"else: print "No match." If compare_s() returns 1, this will print Match. If it returns 0, it will print No match. Let's take a quick look, now, at the asynchronous version of the compare operation, compare(). As we saw in the section on binding, the asynchronous version starts the operation in a new thread, and then immediately returns control to the program, not waiting for the operation to complete. Later, the result of the operation can be examined using the LDAPObject's result() method. Running the compare() method is almost identical to the synchronized version, with the difference being the value returned: >>> retval = con.compare( dn, attr_name, attr_val )>>> print retval15 Here, we run a compare() method, storing the identification number for the returned information in the variable retval. Finding out the value of the returned information is a little trickier than one might guess. Any attempt to retrieve the result of a compare operation using the result() method will raise an exception. But, this is not a sign that the application has encountered an error. Instead, the exception itself indicates whether the compare operation returned true or false. 
For example, let's fetch the result for the previous operation in the way we might expect: >>> print con.result( retval )Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.5/site-packages/ldap/ldapobject.py", line 405, in result res_type,res_data,res_msgid = self.result2(msgid,all,timeout) File "/usr/lib/python2.5/site-packages/ldap/ldapobject.py", line 409, in result2 res_type, res_data, res_msgid, srv_ctrls = self.result3 (msgid,all,timeout) File "/usr/lib/python2.5/site-packages/ldap/ldapobject.py", line 415, in result3 rtype, rdata, rmsgid, serverctrls = self._ldap_call (self._l.result3,msgid,all,timeout) File "/usr/lib/python2.5/site-packages/ldap/ldapobject.py", line 94, in _ldap_call result = func(*args,**kwargs)ldap.COMPARE_TRUE: {'info': '', 'desc': 'Compare True'} What is going on here? Attempting to retrieve the value resulted in an exception being thrown. As we can see from the last line, the exception raised was COMPARE_TRUE. Why? The developers of the Python-LDAP API worked around a difficulty in the standard LDAP C API by providing the results of the compare operation in the form of raised exceptions. Thus, the way to retrieve information from the asynchronous form of compare is with a try/except block: >>> retval = con.compare( dn, attr_name, attr_val )>>> try: ... con.result( retval )...... except ldap.COMPARE_TRUE:... print "Returned TRUE."...... except ldap.COMPARE_FALSE:... print "Returned FALSE."... Returned TRUE. In this example, we use the raised exception to determine whether the compare returned true, which raises the COMPARE_TRUE exception, or returned false, which raises COMPARE_FALSE. Performing compare operations is fairly straightforward, even with the nuances of the asynchronous version. The next operation we will examine is search. The Search Operation LDAP servers are intended as high read, low write databases, which means that it is expected that most operations that the server handles will be “read” operations that do not modify the contents of the directory information tree. And the main operation for reading a directory, as we have seen throughout this book, is the LDAP search operation. As a reminder, the LDAP search operation typically requires five parameters: The base DN, which indicates where in the directory information tree the search should start. The scope, which indicates how deeply the search should delve into the directory information tree. The search filter, which indicates which entries should be considered matches. The attribute list, which indicates which attributes of a matching record should be returned. A flag indicating whether attribute values should be returned (the Attrs Only flag). There are other additional parameters, such as time and size limits, and special client or server controls, but those are less frequently used. Once a search is processed, the server will return a bundle of information including the status of the search, all of the matching records (with the appropriate attributes), and, occasionally, error messages indicate some outstanding condition on the server. When writing Python-LDAP code to perform searches, we will need to handle all of these issues. In the Python-LDAP API, there are three (functional) variations of the search function: search() search_s() search_st() The first is the asynchronous form, and the second is the synchronous form. 
The third is a special form of the synchronous form that allows the programmer to add on a hard time limit in which the client must respond. There are two other versions of the search method, search_ext() and search_ext_s(). These two provide parameter placeholders for passing client and server extension mechanisms, but such extension handling is not yet functional, so neither of these functions is performatively different than the three above. We will begin by looking at the second method, search_s(). The search_s() function of the LDAPObject has two required parameters (Base DN and scope), and three optional parameters (search filter, attribute list, and the attrs only flag). Here, we will do a simple search for a list of surnames for all of the users in our directory information tree. For this, we will not need to set the attrs only flag (which is off by default, and, when turned on, will not return the attribute values). But we will need the other four parameters: Base DN: The users branch, ou=users,dc=example,dc=com Scope: Subtree (ldap.SCOPE_SUBTREE) Filter: Any person objects, (objectclass=person) Attributes: Surname (sn) Now we can perform our search in the Python interpreter: >>> import ldap>>> dn = "uid=matt,ou=users,dc=example,dc=com">>> pw = "secret">>> >>> con = ldap.initialize('ldap://localhost')>>> con.simple_bind_s( dn, pw )(97, [])>>>>>> base_dn = 'ou=users,dc=example,dc=com'>>> filter = '(objectclass=person)'>>> attrs = ['sn']>>> >>> con.search_s( base_dn, ldap.SCOPE_SUBTREE, filter, attrs )[('uid=matt,ou=Users,dc=example,dc=com', {'sn': ['Butcher']}),('uid=barbara,ou=Users,dc=example,dc=com', {'sn': ['Jensen']}),('uid=adam,ou=Users,dc=example,dc=com', {'sn': ['Smith']}),('uid=dave,ou=Users,dc=example,dc=com', {'sn': ['Hume']}),('uid=manny,ou=Users,dc=example,dc=com', {'sn': ['Kant']}),('uid=cicero,ou=Users,dc=example,dc=com', {'sn': ['Tullius']}),('uid=mary,ou=Users,dc=example,dc=com', {'sn': ['Wollstonecraft']}),('uid=thomas,ou=Users,dc=example,dc=com', {'sn': ['Hobbes']})]>>> The first seven lines should look familiar – there is nothing in these lines not covered in the previous sections. Next, we declare variables for the Base DN (base_dn), filter (filter), and attributes (attrs). While base_dn and filter are strings, attrs requires a list. In our case, it is a list with one member: ['sn']. Safe FiltersIf you are generating the LDAP filter dynamically (or letting users specify the filter), then you may want to use the escape_filter_chars() and filter_format() functions in the ldap.filter module to keep your filter strings safely escaped. We don't need to create a variable for the scope, since all of the available scopes (subtree, base, and onelevel) are available as constants in the ldap module: ldap.SCOPE_SUBTREE, ldap.SCOPE_BASE, and ldap.SCOPE_ONELEVEL. The line highlighted above shows the search, and the lines following – that big long messy conglomeration of tuples, dicts, and lists – is the result returned from the server. Strictly speaking, the result returned from search_s() is a list of tuples, where each tuple contains a DN string, and a dict of attributes. Each dict of attributes has a string key (the attribute name), and a list of string values. While this data structure is compact, it is not particularly easy to work with. For a complex data structure like this, it can be useful to create some wrapper objects to make use of this information a little more intuitive. 
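For example, walking the raw structure directly looks like this; the snippet below is a minimal sketch that reuses the connection and search parameters from the example above:

results = con.search_s(base_dn, ldap.SCOPE_SUBTREE, filter, attrs)
for entry_dn, entry_attrs in results:
    # each entry is a (DN string, attribute dictionary) tuple
    print "DN:", entry_dn
    for attr_name, values in entry_attrs.items():
        for value in values:
            print "    %s: %s" % (attr_name, value)

Wrapping each (DN, attributes) tuple in a small class, as we do next, keeps this kind of access in one place.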
The ldaphelper Helper Module To better work with LDAP results, we will create a simple package with just one class. This will be our ldaphelper module, stored in ldaphelper.py: import ldiffrom StringIO import StringIOfrom ldap.cidict import cidictdef get_search_results(results): """Given a set of results, return a list of LDAPSearchResult objects. """ res = [] if type(results) == tuple and len(results) == 2 : (code, arr) = results elif type(results) == list: arr = results if len(results) == 0: return res for item in arr: res.append( LDAPSearchResult(item) ) return resclass LDAPSearchResult: """A class to model LDAP results. """ dn = '' def __init__(self, entry_tuple): """Create a new LDAPSearchResult object.""" (dn, attrs) = entry_tuple if dn: self.dn = dn else: return self.attrs = cidict(attrs) def get_attributes(self): """Get a dictionary of all attributes. get_attributes()->{'name1':['value1','value2',...], 'name2: [value1...]} """ return self.attrs def set_attributes(self, attr_dict): """Set the list of attributes for this record. The format of the dictionary should be string key, list of string alues. e.g. {'cn': ['M Butcher','Matt Butcher']} set_attributes(attr_dictionary) """ self.attrs = cidict(attr_dict) def has_attribute(self, attr_name): """Returns true if there is an attribute by this name in the record. has_attribute(string attr_name)->boolean """ return self.attrs.has_key( attr_name ) def get_attr_values(self, key): """Get a list of attribute values. get_attr_values(string key)->['value1','value2'] """ return self.attrs[key] def get_attr_names(self): """Get a list of attribute names. get_attr_names()->['name1','name2',...] """ return self.attrs.keys() def get_dn(self): """Get the DN string for the record. get_dn()->string dn """ return self.dn def pretty_print(self): """Create a nice string representation of this object. pretty_print()->string """ str = "DN: " + self.dn + "n" for a, v_list in self.attrs.iteritems(): str = str + "Name: " + a + "n" for v in v_list: str = str + " Value: " + v + "n" str = str + "========" return str def to_ldif(self): """Get an LDIF representation of this record. to_ldif()->string """ out = StringIO() ldif_out = ldif.LDIFWriter(out) ldif_out.unparse(self.dn, self.attrs) return out.getvalue() This is a large chunk of code to take in at once, but the function of it is easy to describe. Remember, to use a Python module, you must make sure that the module is in the interpreter's path. See the official Python documentation (http://python.org) for more information. The package has two main components: the get_search_results() function, and the LDAPSearchResult class. The get_search_results() function simply takes the results from a search (either the synchronous ones, or the results from an asynchronous one, fetched with result()) and converts the results to a list of LDAPSearchResult objects. An LDAPSearchResults object provides some convenience methods for getting information about a record. The get_dn() method returns the record's DN, and the following methods all provide access to the attributes or the record: get_dn(): return the string DN for this record. get_attributes(): get a dictionary of all of the attributes. The keys  are attribute name strings, and the values are lists of attribute value  strings. set_attributes(): takes a dictionary with attribute names for keys, and  lists of attribute values for the value field. 
has_attribute(): takes a string attribute name and returns true if that attribute name is in the dict  of attributes returned. get_attr_values(): given an attribute name, this returns all of the  values for that attribute (or none if that attribute does not exist). get_attr_names(): returns a list of all of the attribute names for this  record. pretty_print(): returns a formatted string presentation of the record. to_ldif(): returns an LDIF formatted representation of the record. This object doesn't add much to the original returned data. It just makes it a little easier to access. Attribute NamesLDAP attributes can have multiple names. The attribute for surnames has two names: surname and sn (though most LDAP directory entries use sn). Either one might be returned by the server. To make your application aware of this difference, you can use the ldap.schema package to get schema information. The Case Sensitivity Gotcha There is one noteworthy detail in the code above. The search operation returns the attributes in a dictionary. The Python dictionary is case sensitive; the key TEST is different than the key test. This exemplifies a minor problem in dealing with LDAP information. Standards-compliant LDAP implementations treat some information in a case-insensitive way. The following items are, as a rule, treated as case-insensitive: Object class names: inetorgperson is treated as being the same as inetOrgPerson. Attribute Names: givenName is treated as being the same as givenname. Distinguished Names: DNs are case-insensitive, though the all-lower-case version of a DN is called  Normalized Form. The main area where this problem surfaces is in retrieving information from a search. Since the attributes are returned in a dict, they are, by default, treated as case-sensitive. For example, attrs.has_key('objectclass') will return False if the object class attribute name is spelled objectClass. To resolve this problem, the Python-LDAP developers created a case-insensitive dictionary implementation (ldap.cidict.cidict). This cidict class is used above to wrap the returned attribute dictionary. Make sure you do something similar in your own code, or you may end up with false misses when you look for attributes in a case-sensitive way, e.g. when you look for givenName in an entry where the attribute name is in the form givenname.
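To see the case-insensitive dictionary in action, consider this short sketch:

from ldap.cidict import cidict

record_attrs = cidict({'objectClass': ['inetOrgPerson'], 'givenName': ['Matt']})
print record_attrs.has_key('objectclass')   # True, despite the difference in capitalization
print record_attrs['GIVENNAME']             # ['Matt']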

Building RESTful web services with Kotlin

Natasha Mathur
01 Jun 2018
9 min read
Kotlin has been eating up the Java world. It has already become a hit in the Android Ecosystem which was dominated by Java and is welcomed with open arms. Kotlin is not limited to Android development and can be used to develop server-side and client-side web applications as well. Kotlin is 100% compatible with the JVM so you can use any existing frameworks such as Spring Boot, Vert.x, or JSF for writing Java applications. In this tutorial, we will learn how to implement RESTful web services using Kotlin. This article is an excerpt from the book 'Kotlin Programming Cookbook', written by, Aanand Shekhar Roy and Rashi Karanpuria. Setting up dependencies for building RESTful services In this recipe, we will lay the foundation for developing the RESTful service. We will see how to set up dependencies and run our first SpringBoot web application. SpringBoot provides great support for Kotlin, which makes it easy to work with Kotlin. So let's get started. We will be using IntelliJ IDEA and Gradle build system. If you don't have that, you can get it from https://www.jetbrains.com/idea/. How to do it… Let's follow the given steps to set up the dependencies for building RESTful services: First, we will create a new project in IntelliJ IDE. We will be using the Gradle build system for maintaining dependency, so create a Gradle project: When you have created the project, just add the following lines to your build.gradle file. These lines of code contain spring-boot dependencies that we will need to develop the web app: buildscript { ext.kotlin_version = '1.1.60' // Required for Kotlin integration ext.spring_boot_version = '1.5.4.RELEASE' repositories { jcenter() } dependencies { classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version" // Required for Kotlin integration classpath "org.jetbrains.kotlin:kotlin-allopen:$kotlin_version" // See https://kotlinlang.org/docs/reference/compiler-plugins.html#kotlin-spring-compiler-plugin classpath "org.springframework.boot:spring-boot-gradle-plugin:$spring_boot_version" } } apply plugin: 'kotlin' // Required for Kotlin integration apply plugin: "kotlin-spring" // See https://kotlinlang.org/docs/reference/compiler-plugins.html#kotlin-spring-compiler-plugin apply plugin: 'org.springframework.boot' jar { baseName = 'gs-rest-service' version = '0.1.0' } sourceSets { main.java.srcDirs += 'src/main/kotlin' } repositories { jcenter() } dependencies { compile "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version" // Required for Kotlin integration compile 'org.springframework.boot:spring-boot-starter-web' testCompile('org.springframework.boot:spring-boot-starter-test') } Let's now create an App.kt file in the following directory hierarchy: It is important to keep the App.kt file in a package (we've used the college package). Otherwise, you will get an error that says the following: ** WARNING ** : Your ApplicationContext is unlikely to start due to a `@ComponentScan` of the default package. The reason for this error is that if you don't include a package declaration, it considers it a "default package," which is discouraged and avoided. Now, let's try to run the App.kt class. 
We will put the following code to test if it's running: @SpringBootApplication open class App { } fun main(args: Array<String>) { SpringApplication.run(App::class.java, *args) } Now run the project; if everything goes well, you will see output with the following line at the end: Started AppKt in 5.875 seconds (JVM running for 6.445) We now have our application running on our embedded Tomcat server. If you go to http://localhost:8080, you will see an error as follows: The preceding error is 404 error and the reason for that is we haven't told our application to do anything when a user is on the / path. Creating a REST controller In the previous recipe, we learned how to set up dependencies for creating RESTful services. Finally, we launched our backend on the http://localhost:8080 endpoint but got 404 error as our application wasn't configured to handle requests at that path (/). We will start from that point and learn how to create a REST controller. Let's get started! We will be using IntelliJ IDE for coding purposes. For setting up of the environment, refer to the previous recipe. You can also find the source in the repository at https://gitlab.com/aanandshekharroy/kotlin-webservices. How to do it… In this recipe, we will create a REST controller that will fetch us information about students in a college. We will be using an in-memory database using a list to keep things simple: Let's first create a Student class having a name and roll number properties: package college class Student() { lateinit var roll_number: String lateinit var name: String constructor( roll_number: String, name: String): this() { this.roll_number = roll_number this.name = name } } Next, we will create the StudentDatabase endpoint, which will act as a database for the application: @Component class StudentDatabase { private val students = mutableListOf<Student>() } Note that we have annotated the StudentDatabase class with @Component, which means its lifecycle will be controlled by Spring (because we want it to act as a database for our application). We also need a @PostConstruct annotation, because it's an in-memory database that is destroyed when the application closes. So we would like to have a filled database whenever the application launches. So we will create an init method, which will add a few items into the "database" at startup time: @PostConstruct private fun init() { students.add(Student("2013001","Aanand Shekhar Roy")) students.add(Student("2013165","Rashi Karanpuria")) } Now, we will create a few other methods that will help us deal with our database: getStudent: Gets the list of students present in our database: fun getStudents()=students addStudent: This method will add a student to our database: fun addStudent(student: Student): Boolean { students.add(student) return true } Now let's put this database to use. We will be creating a REST controller that will handle the request. We will create a StudentController and annotate it with @RestController. Using @RestController is simple, and it's the preferred method for creating MVC RESTful web services. Once created, we need to provide our database using Spring dependency injection, for which we will need the @Autowired annotation. Here's how our StudentController looks: @RestController class StudentController { @Autowired private lateinit var database: StudentDatabase } Now we will set our response to the / path. We will show the list of students in our database. For that, we will simply create a method that lists out students. 
We will need to annotate it with @RequestMapping and provide parameters such as path and request method (GET, POST, and such): @RequestMapping("", method = arrayOf(RequestMethod.GET)) fun students() = database.getStudents() This is what our controller looks like now. It is a simple REST controller: package college import org.springframework.beans.factory.annotation.Autowired import org.springframework.web.bind.annotation.RequestMapping import org.springframework.web.bind.annotation.RequestMethod import org.springframework.web.bind.annotation.RestController @RestController class StudentController { @Autowired private lateinit var database: StudentDatabase @RequestMapping("", method = arrayOf(RequestMethod.GET)) fun students() = database.getStudents() } Now when you restart the server and go to http://localhost:8080, we will see the response as follows: As you can see, Spring is intelligent enough to provide the response in the JSON format, which makes it easy to design APIs. Now let's try to create another endpoint that will fetch a student's details from a roll number: @GetMapping("/student/{roll_number}") fun studentWithRollNumber( @PathVariable("roll_number") roll_number:String) = database.getStudentWithRollNumber(roll_number) Now, if you try the http://localhost:8080/student/2013001 endpoint, you will see the given output: {"roll_number":"2013001","name":"Aanand Shekhar Roy"} Next, we will try to add a student to the database. We will be doing it via the POST method: @RequestMapping("/add", method = arrayOf(RequestMethod.POST)) fun addStudent(@RequestBody student: Student) = if (database.addStudent(student)) student else throw Exception("Something went wrong") There's more… So far, our server has been dependent on IDE. We would definitely want to make it independent of an IDE. Thanks to Gradle, it is very easy to create a runnable JAR just with the following: ./gradlew clean bootRepackage The preceding command is platform independent and uses the Gradle build system to build the application. Now, you just need to type the mentioned command to run it: java -jar build/libs/gs-rest-service-0.1.0.jar You can then see the following output as before: Started AppKt in 4.858 seconds (JVM running for 5.548) This means your server is running successfully. Creating the Application class for Spring Boot The SpringApplication class is used to bootstrap our application. We've used it in the previous recipes; we will see how to create the Application class for Spring Boot in this recipe. We will be using IntelliJ IDE for coding purposes. To set up the environment, read previous recipes, especially the Setting up dependencies for building RESTful services recipe. How to do it… If you've used Spring Boot before, you must be familiar with using @Configuration, @EnableAutoConfiguration, and @ComponentScan in your main class. These were used so frequently that Spring Boot provides a convenient @SpringBootApplication alternative. The Spring Boot looks for the public static main method, and we will use a top-level function outside the Application class. If you noted, while setting up the dependencies, we used the kotlin-spring plugin, hence we don't need to make the Application class open. 
Here's an example of the Spring Boot application:

package college

import org.springframework.boot.SpringApplication
import org.springframework.boot.autoconfigure.SpringBootApplication

@SpringBootApplication
class Application

fun main(args: Array<String>) {
    SpringApplication.run(Application::class.java, *args)
}

The Spring Boot application executes the static run() method, which takes two parameters and starts an auto-configured Tomcat web server when the Spring application starts. When everything is set, you can start the application by executing the following command:

./gradlew bootRun

If everything goes well, the console output will end with the message Started AppKt in xxx seconds, which means that your application is up and running. In order to run it as an independent server, you need to create a JAR, which you can do as follows:

./gradlew clean bootRepackage

Now, to run it, you just need to type the following command:

java -jar build/libs/gs-rest-service-0.1.0.jar

We learned how to set up dependencies for building RESTful services, create a REST controller, and create the Application class for Spring Boot. If you are interested in learning more about Kotlin, be sure to check out the 'Kotlin Programming Cookbook'.
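One method the recipes above rely on but never define is StudentDatabase.getStudentWithRollNumber(), which the /student/{roll_number} endpoint calls. A minimal sketch of how it could look against the in-memory list used throughout is shown below; this is our assumption, and the book's actual implementation may differ, for example in how it handles an unknown roll number.

// Inside StudentDatabase: return the first student whose roll number matches.
// first { } throws NoSuchElementException when nothing matches, which Spring
// surfaces as a server error, mirroring the simple error handling used in addStudent.
fun getStudentWithRollNumber(roll_number: String): Student =
    students.first { it.roll_number == roll_number }

Switching to firstOrNull and mapping a null result to a 404 response would be friendlier to API clients, but the one-liner above keeps the sketch consistent with the rest of the recipe.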

CUPS: How to Manage Multiple Printers

Packt
23 Oct 2009
7 min read
Configuring Printer Classes By default there are no printer classes set up. You will need to define them. The following are some of the criteria you can use to define printer classes: Printer Type: Printer type can be a PostScript or non-PostScript printer. Location: The location can describe the printer's place; for example the printer is placed on the third floor of the building. Department: Printer classes can also be defined on the basis of the department to which the printer belongs. The printer class might contain several printers that are used in a particular order. CUPS always checks for an available printer in the order in which printers were added to a class. Therefore, if you want a high-speed printer to be accessed first, you would add the high-speed printer to the class before you add a low-speed printer. This way, the high-speed printer can handle as many print requests as possible, and the low-speed printer would be reserved as a backup printer when the high-speed printer is in use. It is not compulsory to add printers in classes. There are a few important tasks that you need to do to manage and configure printer classes. Printer classes can themselves be members of other classes. So it is possible for you to define printer classes for high availability for printing. Once you configure the printer class, you can print to the printer class in the same way that you print to a single printer. Features and Advantages Here are some of the features and advantages of printer classes in CUPS: Even if a printer is a member of a class, it can still be accessed directly by users if you allow it. However, you can make individual printers reject jobs while groups accept them. As the system administrator, you have control over how printers in classes can be used. The replacement of printers within the class can easily be done. Let's understand this with the help of an example. You have a network consisting of seven computers running Linux, all having CUPS installed. You want to change printers assigned to the class. You can remove a printer and add a new one to the class in less than a minute. The entire configuration required is done as all other computers get their default printing routes updated in another 30 seconds. It takes less than one minute for the whole change—less time than a laser printer takes to warm up. A company is having the following type of printers with their policy as: A class for B/W laser printers that anybody can print on A class for draft color printers that anybody can print on, but with restrictions on volume A class for precision color printers that is unblocked only under the administrator's supervision CUPS provide the means for centralizing printers, and users will only have to look for a printer in a single place It provides the means for printing on another Ethernet segment without allowing normal Windows to broadcast traffic to get across and clutter up the network bandwidth It makes sure that the person printing from his desk on the second floor of the other building doesn't get stuck because the departmental printer on the ground floor of this building has run out of paper and his print job has got redirected to the standby printer All of these printers hang off Windows machines, and would be available directly for other computers running under Windows. However, we get the following advantages by providing them through CUPS on a central router: Implicit Class CUPS also supports the special type of printer class called as implicit class. 
These implicit classes work just like printer classes, but they are created automatically based on the available "printers and printer classes" on the network. CUPS identifies printers with identical configurations intelligently, and has the client machines send their print jobs to the first available printer. If one or more printers go down, the jobs are automatically redirected to the servers that are running, providing fail-safe printing. Managing Printer Classes Through Command-Line You can perform this task only by using the lpadmin -c command. Jobs sent to a printer class are forwarded to the first available printer in the printer class. Adding a Printer to a Class You can run the following command with the –p and -c options to add a printer to a class: $sudo lpadmin –p cupsprinter –c cupsclass The above example shows that the printer cupsprinter has been added to printer class cupsclass: You can verify whether the printers are in a printer class: $lpstat -c cupsclass Removing a Printer from a Class You need to run lpadmin command with –p and –r options to remove printer from a class. If all the printers from a class are removed, then that class can get deleted automatically. $sudo lpadmin –p cupsprinter –r cupsclass The above example shows that the printer cupsprinter has been removed from the printer class, cupsclass: Removing a Class To remove a class, you can run the lpadmin command with the –x option: $sudo lpadmin -x cupsclass The above command will remove cupsclass. Managing Printer Classes Through CUPS Web Interface Like printers, and groups of printers, printer classes can also be managed by the CUPS web interface. In the web interface, CUPS displays a tab called Classes, which has all the options to manage the printer classes. You can get this tab directly by visiting the following URL: http://localhost:631/classes If no classes are defined, then the screen will appear as follows which shows the search and sorting options: Adding a New Printer Class A printer class can be added using the Add Class option in the Administration tab. It is useful to have a helpful description in the Name field to identify your class. You can add the additional information regarding the printer class under the Description field that would be seen by users when they select this printer class for a job. The Location field can be used to help you group a set of printers logically and thus help you identify different classes. In the following figure, we are adding all black and white printers into one printer class. The Members box will be pre-populated with a list of all printers that have been added to CUPS. Select the appropriate printers for your class and it will be ready for use. Once your class is added, you can manage it using the Classes tab. Most of the options here are quite similar to the ones for managing individual printers, as CUPS treats each class as a single entity. In the Classes tab, we can see following options with each printer class: Stop Class Clicking on Stop Class changes the status of all the printers in that class to "stop". When a class is stopped, this option changes to Start Class. This changes the status of all of the printers to "idle". Now, they are once again ready to receive print jobs. Reject Jobs Clicking on Reject Jobs changes the status of all the printers in that class to "reject jobs". 
When a class is in this state, this option changes to Accept Jobs, which changes the status of all of the printers to "accept jobs" so that they are once again ready to accept print jobs.
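Pulling the command-line pieces together, a typical session for building and using a small class of black-and-white printers might look like this (the printer and class names are placeholders for your own queues):

# Add two existing printers to a new class named bw-laser
sudo lpadmin -p laser-floor1 -c bw-laser
sudo lpadmin -p laser-floor2 -c bw-laser

# Confirm which printers belong to the class
lpstat -c bw-laser

# Print to the class; CUPS forwards the job to the first available member
lp -d bw-laser report.pdf

# Swap a printer out of the class later on
sudo lpadmin -p laser-floor1 -r bw-laser
sudo lpadmin -p laser-floor3 -c bw-laser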


Flexible Layouts with Swift and UIStackView

Milton Moura
04 Jan 2016
12 min read
In this post we will build a Sign In and Password Recovery form with a single flexible layout, using Swift and the UIStackView class, which has been available since the release of the iOS 9 SDK. By taking advantage of UIStackView's properties, we will dynamically adapt to the device's orientation and show / hide different form components with animations. The source code for this post can the found in this github repository. Auto Layout Auto Layout has become a requirement for any application that wants to adhere to modern best practices of iOS development. When introduced in iOS 6, it was optional and full visual support in Interface Builder just wasn't there. With the release of iOS 8 and the introduction of Size Classes, the tools and the API improved but you could still dodge and avoid Auto Layout. But now, we are at a point where, in order to fully support all device sizes and split-screen multitasking on the iPad, you must embrace it and design your applications with a flexible UI in mind. The problem with Auto Layout Auto Layout basically works as an linear equation solver, taking all of the constraints defined in your views and subviews, and calculates the correct sizes and positioning for them. One disadvantage of this approach is that you are obligated to define, typically, between 2 to 6 constraints for each control you add to your view. With different constraint sets for different size classes, the total number of constraints increases considerably and the complexity of managing them increases as well. Enter the Stack View In order to reduce this complexity, the iOS 9 SDK introduced the UIStackView, an interface control that serves the single purpose of laying out collections of views. A UIStackView will dynamically adapt its containing views' layout to the device's current orientation, screen sizes and other changes in its views. You should keep the following stack view properties in mind: The views contained in a stack view can be arranged either Vertically or Horizontally, in the order they were added to the arrangedSubviews array. You can embed stack views within each other, recursively. The containing views are laid out according to the stack view's [distribution](...) and [alignment](...) types. These attributes specify how the view collection is laid out across the span of the stack view (distribution) and how to align all subviews within the stack view's container (alignment). Most properties are animatable and inserting / deleting / hiding / showing views within an animation block will also be animated. Even though you can use a stack view within an UIScrollView, don't try to replicate the behaviour of an UITableView or UICollectionView, as you'll soon regret it. Apple recommends that you use UIStackView for all cases, as it will seriously reduce constraint overhead. Just be sure to judiciously use compression and content hugging priorities to solve possible layout ambiguities. A Flexible Sign In / Recover Form The sample application we'll build features a simple Sign In form, with the option for recovering a forgotten password, all in a single screen. When tapping on the "Forgot your password?" button, the form will change, hiding the password text field and showing the new call-to-action buttons and message labels. By canceling the password recovery action, these new controls will be hidden once again and the form will return to it's initial state. 1. Creating the form This is what the form will look like when we're done. 
Let's start by creating a new iOS > Single View Application template. Then, we add a new UIStackView to the ViewController and add some constraints for positioning it within its parent view. Since we want a full screen width vertical form, we set its axis to .Vertical, the alignment to .Fill and the distribution to .FillProportionally, so that individual views within the stack view can grow bigger or smaller, according to their content.    class ViewController : UIViewController    {        let formStackView = UIStackView()        ...        override func viewDidLoad() {            super.viewDidLoad()                       // Initialize the top-level form stack view            formStackView.axis = .Vertical            formStackView.alignment = .Fill            formStackView.distribution = .FillProportionally            formStackView.spacing = 8            formStackView.translatesAutoresizingMaskIntoConstraints = false                       view.addSubview(formStackView)                       // Anchor it to the parent view            view.addConstraints(                NSLayoutConstraint.constraintsWithVisualFormat("H:|-20-[formStackView]-20-|", options: [.AlignAllRight,.AlignAllLeft], metrics: nil, views: ["formStackView": formStackView])            )            view.addConstraints(                NSLayoutConstraint.constraintsWithVisualFormat("V:|-20-[formStackView]-8-|", options: [.AlignAllTop,.AlignAllBottom], metrics: nil, views: ["formStackView": formStackView])            )            ...        }        ...    } Next, we'll add all the fields and buttons that make up our form. We'll only present a couple of them here as the rest of the code is boilerplate. In order to refrain UIStackView from growing the height of our inputs and buttons as needed to fill vertical space, we add height constraints to set the maximum value for their vertical size.    class ViewController : UIViewController    {        ...        var passwordField: UITextField!        var signInButton: UIButton!        var signInLabel: UILabel!        var forgotButton: UIButton!        var backToSignIn: UIButton!        var recoverLabel: UILabel!        var recoverButton: UIButton!        ...               override func viewDidLoad() {            ...                       
// Add the email field            let emailField = UITextField()            emailField.translatesAutoresizingMaskIntoConstraints = false            emailField.borderStyle = .RoundedRect            emailField.placeholder = "Email Address"            formStackView.addArrangedSubview(emailField)                       // Make sure we have a height constraint, so it doesn't change according to the stackview auto-layout            emailField.addConstraints(                NSLayoutConstraint.constraintsWithVisualFormat("V:[emailField(<=30)]", options: [.AlignAllTop, .AlignAllBottom], metrics: nil, views: ["emailField": emailField])             )                       // Add the password field            passwordField = UITextField()            passwordField.translatesAutoresizingMaskIntoConstraints = false            passwordField.borderStyle = .RoundedRect            passwordField.placeholder = "Password"            formStackView.addArrangedSubview(passwordField)                       // Make sure we have a height constraint, so it doesn't change according to the stackview auto-layout            passwordField.addConstraints(                 NSLayoutConstraint.constraintsWithVisualFormat("V:[passwordField(<=30)]", options: .AlignAllCenterY, metrics: nil, views: ["passwordField": passwordField])            )            ...        }        ...    } 2. Animating by showing / hiding specific views By taking advantage of the previously mentioned properties of UIStackView, we can transition from the Sign In form to the Password Recovery form by showing and hiding specific field and buttons. We do this by setting the hidden property within a UIView.animateWithDuration block.    class ViewController : UIViewController    {        ...        // Callback target for the Forgot my password button, animates old and new controls in / out        func forgotTapped(sender: AnyObject) {            UIView.animateWithDuration(0.2) { [weak self] () -> Void in                self?.signInButton.hidden = true                self?.signInLabel.hidden = true                self?.forgotButton.hidden = true                self?.passwordField.hidden = true                self?.recoverButton.hidden = false                self?.recoverLabel.hidden = false                self?.backToSignIn.hidden = false            }        }               // Callback target for the Back to Sign In button, animates old and new controls in / out        func backToSignInTapped(sender: AnyObject) {            UIView.animateWithDuration(0.2) { [weak self] () -> Void in                self?.signInButton.hidden = false                self?.signInLabel.hidden = false                self?.forgotButton.hidden = false                self?.passwordField.hidden = false                self?.recoverButton.hidden = true                self?.recoverLabel.hidden = true                self?.backToSignIn.hidden = true            }        }        ...    } 3. Handling different Size Classes Because we have many vertical input fields and buttons, space can become an issue when presenting in a compact vertical size, like the iPhone in landscape. To overcome this, we add a stack view to the header section of the form and change its axis orientation between Vertical and Horizontal, according to the current active size class.    override func viewDidLoad() {        ...        
// Initialize the header stack view, that will change orientation type according to the current size class        headerStackView.axis = .Vertical        headerStackView.alignment = .Fill        headerStackView.distribution = .Fill        headerStackView.spacing = 8        headerStackView.translatesAutoresizingMaskIntoConstraints = false        ...    }       // If we are presenting in a Compact Vertical Size Class, let's change the header stack view axis orientation    override func willTransitionToTraitCollection(newCollection: UITraitCollection, withTransitionCoordinator coordinator: UIViewControllerTransitionCoordinator) {        if newCollection.verticalSizeClass == .Compact {            headerStackView.axis = .Horizontal        } else {            headerStackView.axis = .Vertical        }    } 4. The flexible form layout So, with a couple of UIStackViews, we've built a flexible form only by defining a few height constraints for our input fields and buttons, with all the remaining constraints magically managed by the stack views. Here is the end result: Conclusion We have included in the sample source code a view controller with this same example but designed with Interface Builder. There, you can clearly see that we have less than 10 constraints, on a layout that could easily have up to 40-50 constraints if we had not used UIStackView. Stack Views are here to stay and you should use them now if you are targeting iOS 9 and above. About the author Milton Moura (@mgcm) is a freelance iOS developer based in Portugal. He has worked professionally in several industries, from aviation to telecommunications and energy and is now fully dedicated to creating amazing applications using Apple technologies. With a passion for design and user interaction, he is also very interested in new approaches to software development. You can find out more at http://defaultbreak.com
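A closing note for readers building against newer SDKs: the code in this post uses Swift 2-era API names. Under Swift 4 and later, the show/hide animation from step 2 reads roughly as follows (a sketch of the renamed calls only; the logic is unchanged, and names such as .Vertical and hidden become .vertical and isHidden throughout):

func forgotTapped(_ sender: Any) {
    // Animate hiding the sign-in controls and revealing the recovery ones
    UIView.animate(withDuration: 0.2) { [weak self] in
        self?.signInButton.isHidden = true
        self?.signInLabel.isHidden = true
        self?.forgotButton.isHidden = true
        self?.passwordField.isHidden = true
        self?.recoverButton.isHidden = false
        self?.recoverLabel.isHidden = false
        self?.backToSignIn.isHidden = false
    }
}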


Installing NumPy, SciPy, matplotlib, and IPython

Packt Editorial Staff
12 Oct 2014
7 min read
This article written by Ivan Idris, author of the book, Python Data Analysis, will guide you to install NumPy, SciPy, matplotlib, and IPython. We can find a mind map describing software that can be used for data analysis at https://www.xmind.net/m/WvfC/. Obviously, we can't install all of this software in this article. We will install NumPy, SciPy, matplotlib, and IPython on different operating systems. [box type="info" align="" class="" width=""]Packt has the following books that are focused on NumPy: NumPy Beginner's Guide Second Edition, Ivan Idris NumPy Cookbook, Ivan Idris Learning NumPy Array, Ivan Idris [/box] SciPy is a scientific Python library, which supplements and slightly overlaps NumPy. NumPy and SciPy, historically shared their codebase but were later separated. matplotlib is a plotting library based on NumPy. IPython provides an architecture for interactive computing. The most notable part of this project is the IPython shell. Software used The software used in this article is based on Python, so it is required to have Python installed. On some operating systems, Python is already installed. You, however, need to check whether the Python version is compatible with the software version you want to install. There are many implementations of Python, including commercial implementations and distributions. [box type="note" align="" class="" width=""]You can download Python from https://www.python.org/download/. On this website, we can find installers for Windows and Mac OS X, as well as source archives for Linux, Unix, and Mac OS X.[/box] The software we will install has binary installers for Windows, various Linux distributions, and Mac OS X. There are also source distributions, if you prefer that. You need to have Python 2.4.x or above installed on your system. Python 2.7.x is currently the best Python version to have because most Scientific Python libraries support it. Python 2.7 will be supported and maintained until 2020. After that, we will have to switch to Python 3. Installing software and setup on Windows Installing on Windows is, fortunately, a straightforward task that we will cover in detail. You only need to download an installer, and a wizard will guide you through the installation steps. We will give steps to install NumPy here. The steps to install the other libraries are similar. The actions we will take are as follows: Download installers for Windows from the SourceForge website (refer to the following table). The latest release versions may change, so just choose the one that fits your setup best. Library URL Latest Version NumPy http://sourceforge.net/projects/numpy/files/ 1.8.1 SciPy http://sourceforge.net/projects/scipy/files/ 0.14.0 matplotlib http://sourceforge.net/projects/matplotlib/files/ 1.3.1 IPython http://archive.ipython.org/release/ 2.0.0 Choose the appropriate version. In this example, we chose numpy-1.8.1-win32-superpack-python2.7.exe. Open the EXE installer by double-clicking on it. Now, we can see a description of NumPy and its features. Click on the Next button.If you have Python installed, it should automatically be detected. If it is not detected, maybe your path settings are wrong. Click on the Next button if Python is found; otherwise, click on the Cancel button and install Python (NumPy cannot be installed without Python). Click on the Next button. This is the point of no return. Well, kind of, but it is best to make sure that you are installing to the proper directory and so on and so forth. Now the real installation starts. 
This may take a while. [box type="note" align="" class="" width=""]The situation around installers is rapidly evolving. Other alternatives exist in various stage of maturity (see https://www.scipy.org/install.html). It might be necessary to put the msvcp71.dll file in your C:Windowssystem32 directory. You can get it from http://www.dll-files.com/dllindex/dll-files.shtml?msvcp71.[/box] Installing software and setup on Linux Installing the recommended software on Linux depends on the distribution you have. We will discuss how you would install NumPy from the command line, although, you could probably use graphical installers; it depends on your distribution (distro). The commands to install matplotlib, SciPy, and IPython are the same – only the package names are different. Installing matplotlib, SciPy, and IPython is recommended, but optional. Most Linux distributions have NumPy packages. We will go through the necessary steps for some of the popular Linux distros: Run the following instructions from the command line for installing NumPy on Red Hat: $ yum install python-numpy To install NumPy on Mandriva, run the following command-line instruction: $ urpmi python-numpy To install NumPy on Gentoo run the following command-line instruction: $ sudo emerge numpy To install NumPy on Debian or Ubuntu, we need to type the following: $ sudo apt-get install python-numpy The following table gives an overview of the Linux distributions and corresponding package names for NumPy, SciPy, matplotlib, and IPython. Linux distribution NumPy SciPy matplotlib IPython Arch Linux python-numpy python-scipy python-matplotlib Ipython Debian python-numpy python-scipy python-matplotlib Ipython Fedora numpy python-scipy python-matplotlib Ipython Gentoo dev-python/numpy scipy matplotlib ipython OpenSUSE python-numpy, python-numpy-devel python-scipy python-matplotlib ipython Slackware numpy scipy matplotlib ipython Installing software and setup on Mac OS X You can install NumPy, matplotlib, and SciPy on the Mac with a graphical installer or from the command line with a port manager such as MacPorts, depending on your preference. Prerequisite is to install XCode as it is not part of OS X releases. We will install NumPy with a GUI installer using the following steps: We can get a NumPy installer from the SourceForge website http://sourceforge.net/projects/numpy/files/. Similar files exist for matplotlib and SciPy. Just change numpy in the previous URL to scipy or matplotlib. IPython didn't have a GUI installer at the time of writing. Download the appropriate DMG file usually the latest one is the best.Another alternative is the SciPy Superpack (https://github.com/fonnesbeck/ScipySuperpack). Whichever option you choose it is important to make sure that updates which impact the system Python library don't negatively influence already installed software by not building against the Python library provided by Apple. Open the DMG file (in this example, numpy-1.8.1-py2.7-python.org-macosx10.6.dmg). Double-click on the icon of the opened box, the one having a subscript that ends with .mpkg. We will be presented with the welcome screen of the installer. Click on the Continue button to go to the Read Me screen, where we will be presented with a short description of NumPy. Click on the Continue button to the License the screen. Read the license, click on the Continue button and then on the Accept button, when prompted to accept the license. Continue through the next screens and click on the Finish button at the end. 
Alternatively, we can install NumPy, SciPy, matplotlib, and IPython through the MacPorts route, with Fink or Homebrew. The following installation steps shown, installs all these packages. [box type="info" align="" class="" width=""]For installing with MacPorts, type the following command: sudo port install py-numpy py-scipy py-matplotlib py- ipython [/box] Installing with setuptools If you have pip you can install NumPy, SciPy, matplotlib and IPython with the following commands. pip install numpy pip install scipy pip install matplotlib pip install ipython It may be necessary to prepend sudo to these commands, if your current user doesn't have sufficient rights on your system. Summary In this article, we installed NumPy, SciPy, matplotlib and IPython on Windows, Mac OS X and Linux. Resources for Article: Further resources on this subject: Plotting Charts with Images and Maps Importing Dynamic Data Python 3: Designing a Tasklist Application
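Whichever installation route you choose, a quick sanity check is to import each package from a Python session and print its version string; the exact numbers will depend on the releases you installed:

import numpy
import scipy
import matplotlib
import IPython

# Each package exposes its release number as __version__
print(numpy.__version__)
print(scipy.__version__)
print(matplotlib.__version__)
print(IPython.__version__)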

Understand how to access the Dark Web with Tor Browser [Tutorial]

Savia Lobo
16 Feb 2019
8 min read
According to the Tor Project website: “Tor is free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security. The Tor network is a group of volunteer-operated servers that allows people to improve their privacy and security on the Internet. Tor's users employ this network by connecting through a series of virtual tunnels rather than making a direct connection, thus allowing both organizations and individuals to share information over public networks without compromising their privacy. Along the same line, Tor is an effective censorship circumvention tool, allowing its users to reach otherwise blocked destinations or content. Tor can also be used as a building block for software developers to create new communication tools with built-in privacy features.” This article is an excerpt taken from the book, Hands-On Dark Web Analysis written by Sion Retzkin. In this book, you will learn how to install operating systems and Tor Browser for privacy, security, and anonymity while accessing them. In this article, we will understand what Tor and the Tor browser is and how to install it in several ways. Tor (which is an acronym for The Onion Router, by the way) is a privacy-focused network that hides your traffic, by routing it through multiple random servers on the Tor network. So, instead of the packets that make up your communication with another party (person or organization), going from point A to B directly, using Tor, they will jump all over the place, between multiple servers, before reaching point B, hiding the trail. Additionally, the packets that make up the traffic (or communication) in the Tor network are wrapped in special layers, which only show the previous server or step that the packet came from, and the next step, hiding the entire route effectively. Tor Browser is a web browser, based on Firefox that was created for the purpose of accessing the Tor network, securely and privately. Even if you use Tor, this doesn't mean that you're secure. Why is that? Because Tor Browser has software vulnerabilities, the same as every other browser. It's also based on Firefox, so it inherits some of its vulnerabilities from there as well. You can minimize attack vectors by applying common security sense, and by employing various tools to try to limit or prevent malicious activity, related to infecting the Tor Browser or the host running it. Installing Tor on Linux Let's start with a classic installation, by accessing the Tor Project website, via a browser. The default browser that ships with Ubuntu is Firefox, which is what we'll use. Although you might think that this would be the best way to install Tor Browser, it's actually the least secure, since the Tor Project website is continuously targeted by hackers and might have any number of security or privacy issues on it. Instead of just downloading Tor Browser and immediately installing it (which is dangerous), you can either download the file and verify its hash (to verify that it is indeed the correct one), or you could install it through other methods, for example, via the Terminal, by using Linux commands, or from the Ubuntu Software Center. 
We'll start by going over the steps to download Tor Browser from the Tor Project website: After booting your Linux installation, open your browser Enter the following address and navigate to it: https://www.torproject.org/download/download-easy.html.en#linux. Notice that the URL takes you directly to the Linux download section of the Tor Project website. I usually prefer this direct method, rather than starting with Google (or any other search engine), searching for Tor, and then accessing the Tor Project website, since, as you may know, Google collects information about users accessing it, and the whole idea of this book is to maintain our privacy and security. Also, always verify that you're accessing the Tor Project website via HTTPS. Choose the correct architecture (32 or 64 bit), and click the Download link. You'll be able to choose what you want to do with the file—open it with Ubuntu's Archive Manager, or save it to a location on the disk: Downloading Tor Browser   Again, the quickest way to go would be to open the compressed file, but the more secure way would be to download the file and to verify its hash, before doing anything else. The Tor Project provides GNU Privacy Guard (GPG) signature files, with each version of Tor Browser. You will need to install GnuPG on your Linux OS, if it isn't there already, in order to be able to verify the hash of the browser package. To do so, just open the Terminal and type in the following: sudo apt install gnupg Enter your password when required, and the installation will commence. Most Linux installations already include gnupg, as can be seen in the following screenshot: Installing GnuPG    After installing GnuPG, you need to import the key that signed the package. According to the Tor Project website, the Tor Browser import key is 0x4e2C6e8793298290. The Tor Project updates and changes the keys from time to time, so you can always navigate to:  https://www.torproject.org/docs/verifying-signatures.html.en to find the current import key if the one in the book doesn't work. The command to import the key is as follows: gpg --keyserver pool.sks-keyservers.net --recv-keys 0x4e2C6e8793298290 This is followed by this: gpg --fingerprint 0x4e2C6e8793298290 This will tell you whether the key fingerprint is correct. You should see the following: Verify key fingerprint Now, you need to download the .asc file, which is found on the Tor Browser Downloads page, next to the relevant package of the browser (it appears as sig, short for signature): ASC file location   You can find the Tor Browser download page here: https://www.torproject.org/projects/torbrowser.html Now, you can verify the signature of the package, using the ASC file. To do so, enter the following command in the Terminal: gpg --verify tor-browser-linux64-7.5.6_en-US.tar.xz.asc tor-browser-linux64-7.5.6_en-US.tar.xz Note the 64 that I marked in bold. If your OS is 32-bit, change the number to 32. The result you should get is as follows: Verifying the signature   After verifying the hash (signature) of the Tor Browser package, you can install it. You can do so by either: Double-clicking the Tor Browser package file (which will open up the Archive Manager program), clicking Extract, and choosing the location of your choice. Right-clicking the file and choosing Extract here or Extract to and choosing a location. After extracting, perform the following steps: Navigate to the location you defined. Double-click on the Start-tor-browser.desktop file to launch Tor Browser. 
Press Trust and Launch in the window that appears: Launching Tor   Notice that the filename and icon changed to Tor Browser. Press Connect and you will be connected to the Tor network, and will be able to browse it, using Tor Browser: Connecting to Tor   Before we discuss using Tor Browser, let's talk about alternative ways to install it, for example, by using the Ubuntu Software application. Start by clicking on the Ubuntu Software icon: Ubuntu Software Search for Tor Browser, then click on the relevant result: Tor Browser in Ubuntu Software Then, click Install. After entering your password, the installation process will start. When it ends, click Launch to start Tor Browser. Installing Tor Browser via the Terminal, from the downloaded package Another way to install Tor is to use commands, via the Terminal. There are several ways to do so, as follows: First, download the required Tor Browser package from the website Verify the download, as we discussed before, and then keep the Terminal open Navigate to the location where you downloaded Tor, by entering the following command: cd path/Tor_Browser_Directory For example, note the following: cd /downloads/tor-browser_en_US Then, launch Tor Browser by running the following: ./start-tor-browser.desktop Never launch Tor as root (or with the sudo command). Installing the Tor Browser entirely via the Terminal Next, we'll discuss how to install Tor entirely via the Terminal: First, launch the Terminal, as before. Then, execute the following command: sudo apt install torbrowser-launcher This command will install the Tor Browser. We need root access to install an app, not to launch it. You can then run Tor by executing the following command: ./start-tor-browser.desktop Thus, in this post, we talked about Tor, Tor Browser, how to install it in several ways, and how to use it. If you've enjoyed this post and want to know more about the concept of the Deep Web and the Dark Web and their significance in the security sector, head over to the book  Hands-On Dark Web Analysis. Tor Project gets its first official mobile browser for Android, the privacy-friendly Tor Browser Tor Browser 8.0 powered by Firefox 60 ESR released How to create a desktop application with Electron [Tutorial]
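As a compact recap, the download-verify-extract flow from the Terminal looks roughly like this; the version number, key ID, and directory name match the ones used above, so substitute the current values from the Tor Project website when you follow along:

# Import the Tor Browser signing key and check its fingerprint
gpg --keyserver pool.sks-keyservers.net --recv-keys 0x4e2C6e8793298290
gpg --fingerprint 0x4e2C6e8793298290

# Verify the downloaded package against its detached signature
gpg --verify tor-browser-linux64-7.5.6_en-US.tar.xz.asc tor-browser-linux64-7.5.6_en-US.tar.xz

# Extract and launch (never as root)
tar -xf tor-browser-linux64-7.5.6_en-US.tar.xz
cd tor-browser_en_US   # directory name as used earlier in this tutorial; adjust if yours differs
./start-tor-browser.desktop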


Manipulating jQuery tables

Packt
24 Oct 2009
20 min read
In this article by Karl Swedberg and Jonathan Chaffer, we will use an online bookstore as our model website, but the techniques we cook up can be applied to a wide variety of other sites as well, from weblogs to portfolios, from market-facing business sites to corporate intranets. In this article, we will use jQuery to apply techniques for increasing the readability, usability, and visual appeal of tables, though we are not dealing with tables used for layout and design. In fact, as the web standards movement has become more pervasive in the last few years, table-based layout has increasingly been abandoned in favor of CSS‑based designs. Although tables were often employed as a somewhat necessary stopgap measure in the 1990s to create multi-column and other complex layouts, they were never intended to be used in that way, whereas CSS is a technology expressly created for presentation. But this is not the place for an extended discussion on the proper role of tables. Suffice it to say that in this article we will explore ways to display and interact with tables used as semantically marked up containers of tabular data. For a closer look at applying semantic, accessible HTML to tables, a good place to start is Roger Johansson's blog entry, Bring on the Tables at www.456bereastreet.com/archive/200410/bring_on_the_tables/. Some of the techniques we apply to tables in this article can be found in plug‑ins such as Christian Bach's Table Sorter. For more information, visit the jQuery Plug‑in Repository at http://jQuery.com/plugins. Sorting One of the most common tasks performed with tabular data is sorting. In a large table, being able to rearrange the information that we're looking for is invaluable. Unfortunately, this helpful operation is one of the trickiest to put into action. We can achieve the goal of sorting in two ways, namely Server-Side Sorting and JavaScript Sorting. Server-Side Sorting A common solution for data sorting is to perform it on the server side. Data in tables often comes from a database, which means that the code that pulls it out of the database can request it in a given sort order (using, for example, the SQL language's ORDER BY clause). If we have server-side code at our disposal, it is straightforward to begin with a reasonable default sort order. Sorting is most useful when the user can determine the sort order. A common idiom is to make the headers of sortable columns into links. These links can go to the current page, but with a query string appended indicating the column to sort by: <table id="my-data">   <tr>     <th class="name"><a href="index.php?sort=name">Name</a></th>     <th class="date"><a href="index.php?sort=date">Date</a></th>   </tr>   ... </table> The server can react to the query string parameter by returning the database contents in a different order. Preventing Page Refreshes This setup is simple, but requires a page refresh for each sort operation. As we have seen, jQuery allows us to eliminate such page refreshes by using AJAX methods. If we have the column headers set up as links as before, we can add jQuery code to change those links into AJAX requests: $(document).ready(function() {   $('#my-data .name a').click(function() {     $('#my-data').load('index.php?sort=name&type=ajax');     return false;   });   $('#my-data .date a').click(function() {     $('#my-data').load('index.php?sort=date&type=ajax');     return false;   }); }); Now when the anchors are clicked, jQuery sends an AJAX request to the server for the same page. 
We add an additional parameter to the query string so that the server can determine that an AJAX request is being made. The server code can be written to send back only the table itself, and not the surrounding page, when this parameter is present. This way we can take the response and insert it in place of the table. This is an example of progressiveenhancement. The page works perfectly well without any JavaScript at all, as the links for server-side sorting are still present. When JavaScript is present, however, the AJAX hijacks the page request and allows the sort to occur without a full page load. JavaScript Sorting There are times, though, when we either don't want to wait for server responses when sorting, or don't have a server-side scripting language available to us. A viable alternative in this case is to perform the sorting entirely on the browser using JavaScript client-side scripting. For example, suppose we have a table listing books, along with their authors, release dates, and prices: <table class="sortable">   <thead>     <tr>       <th></th>       <th>Title</th>       <th>Author(s)</th>       <th>Publish&nbsp;Date</th>       <th>Price</th>     </tr>   </thead>   <tbody>     <tr>       <td>         <img src="../covers/small/1847192386.png" width="49"              height="61" alt="Building Websites with                                                 Joomla! 1.5 Beta 1" />       </td>       <td>Building Websites with Joomla! 1.5 Beta 1</td>       <td>Hagen Graf</td>       <td>Feb 2007</td>       <td>$40.49</td>     </tr>     <tr>       <td><img src="../covers/small/1904811620.png" width="49"                height="61" alt="Learning Mambo: A Step-by-Step                Tutorial to Building Your Website" /></td>       <td>Learning Mambo: A Step-by-Step Tutorial to Building Your           Website</td>       <td>Douglas Paterson</td>       <td>Dec 2006</td>       <td>$40.49</td>     </tr>     ...   </tbody> </table> We'd like to turn the table headers into buttons that sort by their respective columns. Let us look into ways of doing this.   Row Grouping Tags Note our use of the <thead> and <tbody> tags to segment the data into row groupings. Many HTML authors omit these implied tags, but they can prove useful in supplying us with more convenient CSS selectors to use. For example, suppose we wish to apply typical even/odd row striping to this table, but only to the body of the table: $(document).ready(function() {   $('table.sortable tbody tr:odd').addClass('odd');   $('table.sortable tbody tr:even').addClass('even'); }); This will add alternating colors to the table, but leave the header untouched: Basic Alphabetical Sorting Now let's perform a sort on the Titlecolumn of the table. We'll need a class on the table header cell so that we can select it properly: <thead>   <tr>     <th></th>    <th class="sort-alpha">Title</th>     <th>Author(s)</th>     <th>Publish&nbsp;Date</th>     <th>Price</th>   </tr> </thead> To perform the actual sort, we can use JavaScript's built in .sort()method. It does an in‑place sort on an array, and can take a function as an argument. This function compares two items in the array and should return a positive or negative number depending on the result. 
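As a quick standalone illustration of that comparator contract before we apply it to table rows (this snippet is ours, not from the book):

// Without a comparator, .sort() compares items as strings
[10, 2, 1].sort();                    // [1, 10, 2]

// A comparator returns a negative, zero, or positive number to place a before or after b
[10, 2, 1].sort(function(a, b) {
  return a - b;                       // numeric ascending: [1, 2, 10]
});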
Our initial sort routine looks like this: $(document).ready(function() {   $('table.sortable').each(function() {     var $table = $(this);     $('th', $table).each(function(column) {       if ($(this).is('.sort-alpha')) {         $(this).addClass('clickable').hover(function() {           $(this).addClass('hover');         }, function() {           $(this).removeClass('hover');         }).click(function() {           var rows = $table.find('tbody > tr').get();           rows.sort(function(a, b) {             var keyA = $(a).children('td').eq(column).text()                                                       .toUpperCase();             var keyB = $(b).children('td').eq(column).text()                                                       .toUpperCase();             if (keyA < keyB) return -1;             if (keyA > keyB) return 1;             return 0;           });           $.each(rows, function(index, row) {             $table.children('tbody').append(row);           });         });       }     });   }); }); The first thing to note is our use of the .each() method to make iteration explicit. Even though we could bind a click handler to all headers with the sort-alpha class just by calling $('table.sortable th.sort-alpha').click(), this wouldn't allow us to easily capture a crucial bit of information&#x97;the column index of the clicked header. Because .each() passes the iteration index into its callback function, we can use it to find the relevant cell in each row of the data later. Once we have found the header cell, we retrieve an array of all of the data rows. This is a great example of how .get()is useful in transforming a jQuery object into an array of DOM nodes; even though jQuery objects act like arrays in many respects, they don't have any of the native array methods available, such as .sort(). With .sort() at our disposal, the rest is fairly straightforward. The rows are sorted by comparing the textual contexts of the relevant table cell. We know which cell to look at because we captured the column index in the enclosing .each() call. We convert the text to uppercase because string comparisons in JavaScript are case-sensitive and we wish our sort to be case-insensitive. Finally, with the array sorted, we loop through the rows and reinsert them into the table. Since .append() does not clone nodes, this moves them rather than copying them. Our table is now sorted. This is an example of progressive enhancement's counterpart, gracefuldegradation. Unlike with the AJAX solution discussed earlier, we cannot make the sort work without JavaScript, as we are assuming the server has no scripting language available to it in this case. The JavaScript is required for the sort to work, so by adding the "clickable" class only through code, we make sure not to indicate with the interface that sorting is even possible unless the script can run. The page degrades into one that is still functional, albeit without sorting available. We have moved the actual rows around, hence our alternating row colors are now out of whack: We need to reapply the row colors after the sort is performed. 
We can do this by pulling the coloring code out into a function that we call when needed: $(document).ready(function() {   var alternateRowColors = function($table) {     $('tbody tr:odd', $table).removeClass('even').addClass('odd');     $('tbody tr:even', $table).removeClass('odd').addClass('even');   };     $('table.sortable').each(function() {     var $table = $(this);     alternateRowColors($table);     $('th', $table).each(function(column) {       if ($(this).is('.sort-alpha')) {         $(this).addClass('clickable').hover(function() {           $(this).addClass('hover');         }, function() {           $(this).removeClass('hover');         }).click(function() {           var rows = $table.find('tbody > tr').get();           rows.sort(function(a, b) {             var keyA = $(a).children('td').eq(column).text()                                                       .toUpperCase();             var keyB = $(b).children('td').eq(column).text()                                                       .toUpperCase();             if (keyA < keyB) return -1;             if (keyA > keyB) return 1;             return 0;           });           $.each(rows, function(index, row) {             $table.children('tbody').append(row);           });           alternateRowColors($table);         });       }     });   }); }); This corrects the row coloring after the fact, fixing our issue:   The Power of Plug-ins The alternateRowColors()function that we wrote is a perfect candidate to become a jQuery plug-in. In fact, any operation that we wish to apply to a set of DOM elements can easily be expressed as a plug-in. In this case, we only need to modify our existing function a little bit: jQuery.fn.alternateRowColors = function() {   $('tbody tr:odd', this).removeClass('even').addClass('odd');   $('tbody tr:even', this).removeClass('odd').addClass('even');   return this; }; We have made three important changes to the function. It is defined as a new property of jQuery.fn rather than as a standalone function. This registers the function as a plug-in method. We use the keyword this as a replacement for our $table parameter. Within a plug-in method, thisrefers to the jQuery object that is being acted upon. Finally, we return this at the end of the function. The return value makes our new method chainable. More information on writing jQuery plug-ins can be found in Chapter 10 of our book Learning jQuery. There we will discuss making a plug-in ready for public consumption, as opposed to the small example here that is only to be used by our own code. With our new plug-in defined, we can call $table.alternateRowColors(), which is a more natural jQuery syntax, intead of alternateRowColors($table). Performance Concerns Our code works, but is quite slow. The culprit is the comparator function, which is performing a fair amount of work. This comparator will be called many times during the course of a sort, which means that every extra moment it spends on processing will be magnified. The actual sort algorithm used by JavaScript is not defined by the standard. It may be a simple sort like a bubble sort (worst case of Θ(n2) in computational complexity terms) or a more sophisticated approach like quick sort (which is Θ(n log n) on average). In either case doubling the number of items increases the number of times the comparator function is called by more than double. The remedy for our slow comparator is to pre-compute the keys for the comparison. 
We begin with the slow sort function: rows.sort(function(a, b) {   keyA = $(a).children('td').eq(column).text().toUpperCase();   keyB = $(b).children('td').eq(column).text().toUpperCase();   if (keyA < keyB) return -1;   if (keyA > keyB) return 1;   return 0; }); $.each(rows, function(index, row) {   $table.children('tbody').append(row); }); We can pull out the key computation and do that in a separate loop: $.each(rows, function(index, row) {   row.sortKey = $(row).children('td').eq(column).text().toUpperCase(); }); rows.sort(function(a, b) {   if (a.sortKey < b.sortKey) return -1;   if (a.sortKey > b.sortKey) return 1;   return 0; }); $.each(rows, function(index, row) {   $table.children('tbody').append(row);   row.sortKey = null; }); In the new loop, we are doing all of the expensive work and storing the result in a new property. This kind of property, attached to a DOM element but not a normal DOM attribute, is called an expando.This is a convenient place to store the key since we need one per table row element. Now we can examine this attribute within the comparator function, and our sort is markedly faster.  We set the expando property to null after we're done with it to clean up after ourselves. This is not necessary in this case, but is a good habit to establish because expando properties left lying around can be the cause of memory leaks. For more information, see Appendix C.   Finessing the Sort Keys Now we want to apply the same kind of sorting behavior to the Author(s) column of our table. By adding the sort-alpha class to its table header cell, the Author(s)column can be sorted with our existing code. But ideally authors should be sorted by last name, not first. Since some books have multiple authors, and some authors have middle names or initials listed, we need outside guidance to determine what part of the text to use as our sort key. We can supply this guidance by wrapping the relevant part of the cell in a tag: <tr>   <td>     <img src="../covers/small/1847192386.png" width="49" height="61"             alt="Building Websites with Joomla! 1.5 Beta 1" /></td>   <td>Building Websites with Joomla! 1.5 Beta 1</td>   <td>Hagen <span class="sort-key">Graf</span></td>   <td>Feb 2007</td>   <td>$40.49</td> </tr> <tr>   <td>     <img src="../covers/small/1904811620.png" width="49" height="61"          alt="Learning Mambo: A Step-by-Step Tutorial to Building                                                 Your Website" /></td>   <td>     Learning Mambo: A Step-by-Step Tutorial to Building Your Website   </td>   <td>Douglas <span class="sort-key">Paterson</span></td>   <td>Dec 2006</td>   <td>$40.49</td> </tr> <tr>   <td>     <img src="../covers/small/1904811299.png" width="49" height="61"                   alt="Moodle E-Learning Course Development" /></td>   <td>Moodle E-Learning Course Development</td>   <td>William <span class="sort-key">Rice</span></td>   <td>May 2006</td>   <td>$35.99</td> </tr> Now we have to modify our sorting code to take this tag into account, without disturbing the existing behavior for the Titlecolumn, which is working well. 
By prepending the marked sort key to the key we have previously calculated, we can sort first on the last name if it is called out, but on the whole string as a fallback: $.each(rows, function(index, row) {   var $cell = $(row).children('td').eq(column);   row.sortKey = $cell.find('.sort-key').text().toUpperCase()                                   + ' ' + $cell.text().toUpperCase(); }); Sorting by the Author(s)column now uses the last name:     If two last names are identical, the sort uses the entire string as a tiebreaker for positioning.
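The same key-based approach extends to the remaining columns. The Price column, for instance, calls for a numeric key rather than an alphabetical one. The sketch below mirrors the book's sort-alpha convention with a hypothetical sort-numeric class; it is our illustration of the idea, not the book's own code, and would sit alongside the existing branch inside the column .each() loop:

if ($(this).is('.sort-numeric')) {
  $(this).addClass('clickable').click(function() {
    var rows = $table.find('tbody > tr').get();
    // Strip currency symbols and commas so "$40.49" becomes the number 40.49
    $.each(rows, function(index, row) {
      var text = $(row).children('td').eq(column).text();
      row.sortKey = parseFloat(text.replace(/[^0-9.]/g, ''));
    });
    rows.sort(function(a, b) {
      return a.sortKey - b.sortKey;
    });
    $.each(rows, function(index, row) {
      $table.children('tbody').append(row);
      row.sortKey = null;
    });
    alternateRowColors($table);
  });
}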