What's the difference between OAuth 1.0 and OAuth 2.0?

Pavan Ramchandani
13 Jun 2018
11 min read

The OAuth protocol specifies a process for resource owners to authorize third-party applications to access their server resources without sharing their credentials. This tutorial will take you through the OAuth protocol and introduce you to the offerings of OAuth 2.0 in a practical manner. This article is an excerpt from a book written by Balachandar Bogunuva Mohanram, titled RESTful Java Web Services, Second Edition.

Consider a scenario where Jane (the user of an application) wants to let an application access her private data, which is stored with a third-party service provider. Before OAuth 1.0 or other similar protocols, such as Google AuthSub and FlickrAuth, if Jane wanted to let a consumer service use her data stored on some third-party service provider, she would need to give her user credentials to the consumer service so that it could access data from the third-party service via appropriate service calls. Instead of Jane passing her login information to multiple consumer applications, OAuth 1.0 solves this problem by letting the consumer applications request authorization from the service provider on Jane's behalf. Jane does not divulge her login information; authorization is granted by the service provider, where both her data and credentials are stored. The consumer application (or consumer service) only receives an authorization token that can be used to access data from the service provider. Note that the user (Jane) has full control of the transaction and can invalidate the authorization token at any time during the signup process, or even after the two services have been used together.

The typical example used to explain OAuth 1.0 is that of a service provider that stores pictures on the web (let's call the service StorageInc) and a fictional consumer service that is a picture printing service (let's call the service PrintInc). On its own, PrintInc is a full-blown web service, but it does not offer picture storage; its business is only printing pictures. For convenience, PrintInc has created a web service that lets its users download their pictures from StorageInc for printing. This is what happens when a user (the resource owner) decides to use PrintInc (the client application) to print his/her images stored in StorageInc (the service provider):

1. The user creates an account in PrintInc. Let's call the user Jane, to keep things simple.
2. PrintInc asks whether Jane wants to use her pictures stored in StorageInc and presents a link to get the authorization to download her pictures (the protected resources). Jane is the resource owner here.
3. Jane decides to let PrintInc connect to StorageInc on her behalf and clicks on the authorization link.
4. Both PrintInc and StorageInc have implemented the OAuth protocol, so StorageInc asks Jane whether she wants to let PrintInc use her pictures. If she says yes, then StorageInc asks Jane to provide her username and password. Note, however, that her credentials are being used at StorageInc's site, and PrintInc has no knowledge of her credentials.
5. Once Jane provides her credentials, StorageInc passes PrintInc an authorization token, which is stored as a part of Jane's account on PrintInc.
6. Now, we are back at PrintInc's web application, and Jane can print any of her pictures stored in StorageInc's web service.

Finally, every time Jane wants to print more pictures, all she needs to do is come back to PrintInc's website and download her pictures from StorageInc without providing the username and password again, as she has already authorized these two web services to exchange data on her behalf. The preceding example clearly portrays the authorization flow in the OAuth 1.0 protocol.

Before getting deeper into OAuth 1.0, here is a brief overview of the common terminologies and roles that we saw in this example:

- Client (consumer): This refers to an application (service) that tries to access a protected resource on behalf of the resource owner and with the resource owner's consent. A client can be a business service, mobile, web, or desktop application. In the previous example, PrintInc is the client application.
- Server (service provider): This refers to an HTTP server that understands the OAuth protocol. It accepts and responds to the requests authenticated with the OAuth protocol from various client applications (consumers). If you relate this with the previous example, StorageInc is the service provider.
- Protected resource: Protected resources are resources hosted on servers (the service providers) that are access-restricted. The server validates all incoming requests and grants access to the resource, as appropriate.
- Resource owner: This refers to an entity capable of granting access to a protected resource. Mostly, it refers to an end user who owns the protected resource. In the previous example, Jane is the resource owner.
- Consumer key and secret (client credentials): These two strings are used to identify and authenticate the client application (the consumer) making the request.
- Request token (temporary credentials): This is a temporary credential provided by the server when the resource owner authorizes the client application to use the resource. As the next step, the client will send this request token to the server to get authorized. On successful authorization, the server returns an access token. The access token is explained next.
- Access token (token credentials): The server returns an access token to the client when the client submits the temporary credentials obtained from the server during the resource grant approval by the user. The access token is a string that identifies a client that requests protected resources. Once the access token is obtained, the client passes it along with each resource request to the server. The server can then verify the identity of the client by checking this access token.

The following sequence diagram shows the interactions between the various parties involved in the OAuth 1.0 protocol. You can get more information about the OAuth 1.0 protocol here.

What is OAuth 2.0?

OAuth 2.0 is the latest release of the OAuth protocol, mainly focused on simplifying client-side development. Note that OAuth 2.0 is a completely new protocol, and this release is not backwards compatible with OAuth 1.0. It offers specific authorization flows for web applications, desktop applications, mobile phones, and living room devices. The following are some of the major improvements in OAuth 2.0, as compared to the previous release:

- The complexity involved in signing each request: OAuth 1.0 mandates that the client must generate a signature on every API call to the server resource using the token secret. On the receiving end, the server must regenerate the same signature, and the client will be given access only if both the signatures match.
OAuth 2.0 requires neither the client nor the server to generate any signature for securing the messages. Security is enforced via the use of TLS/SSL (HTTPS) for all communication.
- Addressing non-browser client applications: Many features of OAuth 1.0 were designed around the way a web client application interacts with the inbound and outbound messages. This has proven to be inefficient with non-browser clients such as on-device mobile applications. OAuth 2.0 addresses this issue by accommodating more authorization flows suitable for specific client needs that do not use any web UI, such as on-device (native) mobile applications or API services. This makes the protocol very flexible.
- The separation of roles: OAuth 2.0 clearly defines the roles of all parties involved in the communication, such as the client, resource owner, resource server, and authorization server. The specification is clear on which parts of the protocol are expected to be implemented by the resource owner, authorization server, and resource server.
- The short-lived access token: Unlike in the previous version, the access token in OAuth 2.0 can contain an expiration time, which improves security and reduces the chances of illegal access.
- The refresh token: OAuth 2.0 offers a refresh token that can be used for getting a new access token on the expiry of the current one, without going through the entire authorization process again.

Before we get into the details of OAuth 2.0, let's take a quick look at how OAuth 2.0 defines roles for each party involved in the authorization process. Though you might have seen similar roles while discussing OAuth 1.0 in the last section, OAuth 1.0 does not clearly define which part of the protocol is expected to be implemented by each one:

- The resource owner: This refers to an entity capable of granting access to a protected resource. In a real-life scenario, this can be an end user who owns the resource.
- The resource server: This hosts the protected resources. The resource server validates and authorizes the incoming requests for the protected resource by contacting the authorization server.
- The client (consumer): This refers to an application that tries to access protected resources on behalf of the resource owner. It can be a business service, mobile, web, or desktop application.
- Authorization server: This, as the name suggests, is responsible for authorizing the client that needs access to a resource. After successful authentication, the access token is issued to the client by the authorization server. In a real-life scenario, the authorization server may be either the same as the resource server or a separate entity altogether. The OAuth 2.0 specification does not really enforce anything on this part.

It would be interesting to learn how these entities talk with each other to complete the authorization flow. The following is a quick summary of the authorization flow in a typical OAuth 2.0 implementation. Let's understand the flow in more detail:

1. The client application requests authorization to access the protected resources from the resource owner (user). The client can either make the authorization request directly to the resource owner or go via the authorization server by redirecting the resource owner to the authorization server endpoint.
2. The resource owner authenticates and authorizes the resource access request from the client application and returns the authorization grant to the client.

The authorization grant type returned by the resource owner depends on the type of client application that tries to access the OAuth-protected resource. Note that the OAuth 2.0 protocol defines four types of grants in order to authorize access to protected resources.

3. The client application requests an access token from the authorization server by passing the authorization grant along with other details for authentication, such as the client ID, client secret, and grant type.
4. On successful authentication, the authorization server issues an access token (and, optionally, a refresh token) to the client application.
5. The client application requests the protected resource (RESTful web API) from the resource server by presenting the access token for authentication.
6. On successful authentication of the client request, the resource server returns the requested resource.

The sequence of interactions that we just discussed is very high level. Depending upon the grant type used by the client, the details of the interaction may change. The following section will help you understand the basics of grant types.

Understanding grant types in OAuth 2.0

Grant types in the OAuth 2.0 protocol are, in essence, different ways to authorize access to protected resources using different security credentials (for each type). The OAuth 2.0 protocol defines four types of grants, as listed here; each can be used in different scenarios, as appropriate:

- Authorization code: This is obtained from the authorization server instead of directly requesting it from the resource owner. In this case, the client directs the resource owner to the authorization server, which returns the authorization code to the client. This is very similar to OAuth 1.0, except that the cryptographic signing of messages is not required in OAuth 2.0. (A minimal token-request sketch for this grant type appears at the end of this excerpt.)
- Implicit: This grant is a simplified version of the authorization code grant type flow. In the implicit grant flow, the client is issued an access token directly as the result of the resource owner's authorization. This is less secure, as the client is not authenticated. It is commonly used for client-side devices, such as mobile devices, where the client credentials cannot be stored securely.
- Resource owner password credentials: The resource owner's credentials, such as username and password, are used by the client to obtain the access token directly during the authorization flow. The access token is used thereafter for accessing resources. This grant type is only used with trusted client applications and is suitable for legacy applications that use HTTP basic authentication to incrementally transition to OAuth 2.0.
- Client credentials: These are used directly for getting access tokens. This grant type is used when the client is also the resource owner. It is commonly used for embedded services and backend applications, where the client has an account (direct access rights).

Read Next:
Understanding OAuth Authentication methods - Tutorial
OAuth 2.0 – Gaining Consent - Tutorial
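To make the authorization code grant discussed above a little more concrete, here is a minimal sketch of the token request a client might send once it has received the authorization code. It is not from the book: the endpoint URL, client ID, and client secret are hypothetical placeholders, the parameter names come from the OAuth 2.0 specification (RFC 6749), and the code assumes a JavaScript runtime where fetch and Buffer are available (for example, Node.js 18+).

```javascript
// Minimal sketch: exchanging an authorization code for an access token
// (authorization code grant). Endpoint, client ID, and secret are placeholders.
async function exchangeAuthorizationCode(code) {
  const tokenEndpoint = 'https://auth.example.com/oauth2/token'; // hypothetical

  const body = new URLSearchParams({
    grant_type: 'authorization_code',
    code: code,                                       // code returned by the authorization server
    redirect_uri: 'https://app.example.com/callback', // must match the authorization request
    client_id: 'my-client-id'
  });

  const response = await fetch(tokenEndpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      // Client authentication via HTTP Basic, one of the options the spec allows
      'Authorization': 'Basic ' + Buffer.from('my-client-id:my-client-secret').toString('base64')
    },
    body: body.toString()
  });

  // Typical response fields: access_token, token_type, expires_in, refresh_token
  const token = await response.json();
  return token.access_token;
}
```

The access token returned here is then presented to the resource server on every request, and the optional refresh token is what lets the client obtain a new access token without repeating the whole flow.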

5 ways to create a connection to the Qlik Engine [Tip]

Amey Varangaonkar
13 Jun 2018
8 min read

With mashups or web apps, the Qlik Engine sits outside of your project and is not accessible or loaded by default. The first step before doing anything else is to create a connection with the Qlik Engine, after which you can open a session and perform further actions on an app, such as:

- Opening a document/app
- Making selections
- Retrieving visualizations and apps

To use the Qlik Engine API, open a WebSocket to the engine. The way you do this differs depending on whether you are working with Qlik Sense Enterprise or Qlik Sense Desktop. In this article, we will elaborate on how you can achieve a connection to the Qlik Engine and the benefits of doing so. The following excerpt has been taken from the book Mastering Qlik Sense, authored by Martin Mahler and Juan Ignacio Vitantonio.

Creating a connection

To create a connection using WebSockets, you first need to establish a new WebSocket communication line. To open a WebSocket to the engine, use one of the following URIs:

- Qlik Sense Enterprise: wss://server.domain.com:4747/app/ or wss://server.domain.com[/virtual proxy]/app/
- Qlik Sense Desktop: ws://localhost:4848/app

Creating a connection using WebSockets

In the case of Qlik Sense Desktop, all you need to do is define a WebSocket variable, including its connection string, in the following way:

```javascript
var ws = new WebSocket("ws://localhost:4848/app/");
```

Once the connection is opened (check for the ws.onopen event), you can call additional methods on the engine using ws.send(). This example will retrieve the number of available documents in my Qlik Sense Desktop environment and append them to an HTML list:

```html
<html>
<body>
  <ul id='docList'>
  </ul>
</body>
</html>
<script>
var ws = new WebSocket("ws://localhost:4848/app/");
var request = {
  "handle": -1,
  "method": "GetDocList",
  "params": {},
  "outKey": -1,
  "id": 2
}
ws.onopen = function(event){
  ws.send(JSON.stringify(request));
  // Receive the response
  ws.onmessage = function (event) {
    var response = JSON.parse(event.data);
    if(response.method != 'OnConnected'){
      var docList = response.result.qDocList;
      var list = '';
      docList.forEach(function(doc){
        list += '<li>' + doc.qDocName + '</li>';
      })
      document.getElementById('docList').innerHTML = list;
    }
  }
}
</script>
```

The preceding example will produce the following output in your browser if you have Qlik Sense Desktop running in the background. All Engine methods and calls can be tested in a user-friendly way by exploring the Qlik Engine in the Dev Hub.

A single WebSocket connection can be associated with only one engine session (consisting of the app context plus the user). If you need to work with multiple apps, you must open a separate WebSocket for each one.

If you wish to create a WebSocket connection directly to an app, you can extend the configuration URL to include the application name or, in the case of Qlik Sense Enterprise, the GUID. You can then use the methods from the app class and any other classes as you continue to work with objects within the app:

```javascript
var ws = new WebSocket("ws://localhost:4848/app/MasteringQlikSense.qvf");
```

Creating a connection to the Qlik Server Engine

Connecting to the engine in a Qlik Sense server environment is a little bit different, as you will need to take care of authentication first.
Authentication is handled in different ways, depending on how you have set up your server configuration, with the most common ones being:

- Ticketing
- Certificates
- Header authentication

Authentication also depends on where the code that is interacting with the Qlik Engine is running. If your code is running on a trusted computer, authentication can be performed in several ways, depending on how your installation is configured and where the code is running:

- If you are running the code from a trusted computer, you can use certificates, which first need to be exported via the QMC
- If the code is running in a web browser, or certificates are not available, then you must authenticate via the virtual proxy of the server

Creating a connection using certificates

Certificates can be considered a seal of trust that allows you to communicate with the Qlik Engine directly, with full permission. As such, only backend solutions should ever have access to certificates, and you should guard how you distribute them carefully. To connect using certificates, you first need to export them via the QMC, which is a relatively easy thing to do. Once they are exported, you need to copy them to the folder where your project is located and reference them using the following code:

```html
<html>
<body>
  <h1>Mastering QS</h1>
</body>
<script>
// Node.js modules (this kind of certificate-based connection is meant for backend solutions)
var fs = require('fs');
var path = require('path');
var WebSocket = require('ws');

var certPath = path.join('C:', 'ProgramData', 'Qlik', 'Sense', 'Repository', 'Exported Certificates', '.Local Certificates');
var certificates = {
  cert: fs.readFileSync(path.resolve(certPath, 'client.pem')),
  key: fs.readFileSync(path.resolve(certPath, 'client_key.pem')),
  root: fs.readFileSync(path.resolve(certPath, 'root.pem'))
};

// Open a WebSocket using the engine port (rather than going through the proxy)
var ws = new WebSocket('wss://server.domain.com:4747/app/', {
  ca: certificates.root,
  cert: certificates.cert,
  key: certificates.key,
  headers: {
    'X-Qlik-User': 'UserDirectory=internal; UserId=sa_engine'
  }
});

ws.onopen = function (event) {
  // Call your methods
}
</script>
```

Creating a connection using the Mashup API

Now, while connecting to the engine is a fundamental step to start interacting with Qlik, connecting via WebSockets is very low level. For advanced use cases, the Mashup API is one way to help you get up to speed with a more developer-friendly abstraction layer. The Mashup API utilizes the qlik interface as an external interface to Qlik Sense, used for mashups and for including Qlik Sense objects in external web pages.

To load the qlik module, you first need to ensure RequireJS is available in your main project file. You will then have to specify the URL of your Qlik Sense environment, as well as the prefix of the virtual proxy, if there is one:

```html
<html>
<body>
  <h1>Mastering QS</h1>
</body>
</html>
<script src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.5/require.min.js"></script>
<script>
//Prefix is used for when a virtual proxy is used with the browser.
var prefix = window.location.pathname.substr(0, window.location.pathname.toLowerCase().lastIndexOf("/extensions") + 1);

//Config for retrieving the qlik.js module from the Qlik Sense Server
var config = {
  host: window.location.hostname,
  prefix: prefix,
  port: window.location.port,
  isSecure: window.location.protocol === "https:"
};

require.config({
  baseUrl: (config.isSecure ? "https://" : "http://") + config.host + (config.port ? ":" + config.port : "") + config.prefix + "resources"
});

require(["js/qlik"], function (qlik) {
  qlik.setOnError(function (error) {
    console.log(error);
  });
  //Open an App
  var app = qlik.openApp('MasteringQlikSense.qvf', config);
});
</script>
```

Once you have created the connection to an app, you can start leveraging the full API by conveniently creating HyperCubes, connecting to fields, passing selections, retrieving objects, and much more (a small sketch appears at the end of this excerpt). The Mashup API is intended for browser-based projects where authentication is handled in the same way as if you were going to open Qlik Sense. If you wish to use the Mashup API, or some parts of it, with a backend solution, you need to take care of authentication first.

Creating a connection using enigma.js

Enigma is Qlik's open source promise wrapper for the engine. You can use enigma directly when you're in the Mashup API, or you can load it as a separate module. When you are writing code from within the Mashup API, you can retrieve the correct schema directly from the list of available modules which are loaded together with qlik.js, via 'autogenerated/qix/engine-api'. The following example will connect to a demo app using enigma.js:

```javascript
define(function () {
  return function () {
    require(['qlik', 'enigma', 'autogenerated/qix/engine-api'], function (qlik, enigma, schema) {
      //The base config with all details filled in
      var config = {
        schema: schema,
        appId: "My Demo App.qvf",
        session: {
          host: "localhost",
          port: 4848,
          prefix: "",
          unsecure: true,
        },
      }
      //Now that we have a config, use that to connect to the QIX service.
      enigma.getService("qix", config).then(function (qlik) {
        //Open App
        qlik.global.openApp(config.appId).then(function (app) {
          //Create SessionObject for FieldList
          app.createSessionObject({
            qFieldListDef: {
              qShowSystem: false,
              qShowHidden: false,
              qShowSrcTables: true,
              qShowSemantic: true,
              qShowDerivedFields: true
            },
            qInfo: {
              qId: "FieldList",
              qType: "FieldList"
            }
          }).then(function (list) {
            return list.getLayout();
          }).then(function (listLayout) {
            return listLayout.qFieldList.qItems;
          }).then(function (fieldItems) {
            console.log(fieldItems);
          });
        });
      });
    });
  };
});
```

It's essential to also load the correct schema whenever you load enigma.js. The schema is a collection of the available API methods that can be utilized in each version of Qlik Sense. This means your schema needs to be in sync with your Qlik Sense version.

Thus, we see it is fairly easy to create a stable connection with the Qlik Engine API. If you liked the above excerpt, make sure you check out the book Mastering Qlik Sense to learn more tips and tricks on working with different kinds of data using Qlik Sense and extracting useful business insights.

How Qlik Sense is driving self-service Business Intelligence
Overview of a Qlik Sense® Application’s Life Cycle
What we learned from Qlik Qonnections 2018
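As a follow-up to the Mashup API discussion above, here is a small sketch (not from the book) of what "leveraging the full API" can look like once qlik.openApp() has returned an app object: embedding an existing visualization and passing a selection. The object ID, field name, and value are placeholders, and the method names (app.getObject, app.field(...).selectValues) follow the Qlik Sense Capability API; verify the exact signatures against the documentation for your Qlik Sense version.

```javascript
require(["js/qlik"], function (qlik) {
  var config = {
    host: window.location.hostname,
    prefix: "/",
    port: window.location.port,
    isSecure: window.location.protocol === "https:"
  };

  var app = qlik.openApp('MasteringQlikSense.qvf', config);

  // Render an existing chart into a placeholder <div id="QV01"> on the page.
  // 'hypotheticalObjectId' stands in for a real object ID from the app.
  app.getObject('QV01', 'hypotheticalObjectId');

  // Apply a selection; embedded objects bound to the app update automatically.
  app.field('Country').selectValues([{ qText: 'Germany' }], false, false);
});
```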

Design a RESTful web API with Java [Tutorial]

Pavan Ramchandani
12 Jun 2018
12 min read

In today's tutorial, you will learn to design REST services. We will break down the key design considerations you need to make when building RESTful web APIs. In particular, we will focus on the core elements of the REST architecture style:

- Resources and their identifiers
- Interaction semantics for RESTful APIs (HTTP methods)
- Representation of resources
- Hypermedia controls

This article is an excerpt from a book written by Balachandar Bogunuva Mohanram, titled RESTful Java Web Services, Second Edition. This book will help you build robust, scalable and secure RESTful web services, making use of the JAX-RS and Jersey framework extensions. Let's start by discussing the guidelines for identifying resources in a problem domain.

Richardson Maturity Model: Leonard Richardson has developed a model to help with assessing the compliance of a service to the REST architecture style. The model defines four levels of maturity, starting from level 0, with level 3 as the highest maturity level. The maturity levels are decided considering the aforementioned principle elements of the REST architecture.

Identifying resources in the problem domain

The basic steps that you need to take while building a RESTful web API for a specific problem domain are:

- Identify all possible objects in the problem domain. This can be done by identifying all the key nouns in the problem domain. For example, if you are building an application to manage employees in a department, the obvious nouns are department and employee.
- The next step is to identify the objects that can be manipulated using CRUD operations. These objects can be classified as resources. Note that you should be careful while choosing resources. Based on the usage pattern, you can classify resources as top-level and nested resources (which are the children of a top-level resource). Also, there is no need to expose all resources for use by the client; expose only those resources that are required for implementing the business use case.

Transforming operations to HTTP methods

Once you have identified all resources, as the next step, you may want to map the operations defined on the resources to the appropriate HTTP methods. The most commonly used HTTP methods (verbs) in RESTful web APIs are POST, GET, PUT, and DELETE. Note that there is no one-to-one mapping between the CRUD operations defined on the resources and the HTTP methods. Understanding the concepts of idempotent and safe operations will help you use the correct HTTP method.

An operation is called idempotent if multiple identical requests produce the same result. Similarly, an idempotent RESTful web API will always produce the same result on the server irrespective of how many times the request is executed with the same parameters; however, the response may change between requests. An operation is called safe if it does not modify the state of the resources. Check out the following table:

| Method  | Idempotent | Safe |
|---------|------------|------|
| GET     | YES        | YES  |
| OPTIONS | YES        | YES  |
| HEAD    | YES        | YES  |
| POST    | NO         | NO   |
| PATCH   | NO         | NO   |
| PUT     | YES        | NO   |
| DELETE  | YES        | NO   |

Here are some tips for identifying the most appropriate HTTP method for the operations that you want to perform on the resources:

GET: You can use this method for reading a representation of a resource from the server. According to the HTTP specification, GET is a safe operation, which means that it is only intended for retrieving data, not for making any state changes. As this is an idempotent operation, multiple identical GET requests will behave in the same manner.
A GET method can return the 200 OK HTTP response code on the successful retrieval of resources. If there is any error, it can return an appropriate status code, such as 404 NOT FOUND or 400 BAD REQUEST.

DELETE: You can use this method for deleting resources. On successful deletion, DELETE can return the 200 OK status code. According to the HTTP specification, DELETE is an idempotent operation. Note that when you call DELETE on the same resource for the second time, the server may return the 404 NOT FOUND status code since it was already deleted, which is different from the response for the first request. The change in response for the second call is perfectly valid here. However, multiple DELETE calls on the same resource produce the same result (state) on the server.

PUT: According to the HTTP specification, this method is idempotent. When a client invokes the PUT method on a resource, the resource available at the given URL is completely replaced with the resource representation sent by the client. When a client uses the PUT request on a resource, it has to send all the available properties of the resource to the server, not just the partial data that was modified within the request. You can use PUT to create or update a resource if all attributes of the resource are available with the client. This makes sure that the server state does not change with multiple PUT requests. On the other hand, if you send partial resource content in a PUT request multiple times, there is a chance that some other clients might have updated some attributes that are not present in your request. In such cases, the server cannot guarantee that the state of the resource on the server will remain identical when the same request is repeated, which breaks the idempotency rule.

POST: This method is not idempotent. You can use the POST method to create or update resources when you do not know all the available attributes of a resource. For example, consider a scenario where the identifier field for an entity resource is generated at the server when the entity is persisted in the data store. You can use the POST method for creating such resources, as the client does not have an identifier attribute while issuing the request. Here is a simplified example that illustrates this scenario. In this example, the employeeID attribute is generated on the server:

```
POST hrapp/api/employees HTTP/1.1
Host: packtpub.com

{employee entity resource in JSON}
```

On the successful creation of a resource, it is recommended to return the status of 201 Created and the location of the newly created resource. This allows the client to access the newly created resource later (with server-generated attributes). The sample response for the preceding example will look as follows:

```
201 Created
Location: hrapp/api/employees/1001
```

Best practice: Use caching only for idempotent and safe HTTP methods, as others have an impact on the state of the resources.

Understanding the difference between PUT and POST

A common question that you will encounter while designing a RESTful web API is: when should you use the PUT and POST methods? Here's the simplified answer: You can use PUT for creating or updating a resource when the client has the full resource content available. In this case, all values are with the client and the server does not generate a value for any of the fields. You will use POST for creating or updating a resource if the client has only partial resource content available.
Note that you are losing idempotency support with POST. An idempotent method means that you can call the same API multiple times without changing the state. This is not true for the POST method; each POST method call may result in a server state change. PUT is idempotent, and POST is not. If you have strong customer demands, you can support both methods and let the client choose the suitable one on the basis of the use case.

Naming RESTful web resources

Resources are a fundamental concept in RESTful web services. A resource represents an entity that is accessible via the URI that you provide. The URI, which refers to a resource (and which is known as a RESTful web API), should have a logically meaningful name. Having meaningful names improves the intuitiveness of the APIs and, thereby, their usability. Some of the widely followed recommendations for naming resources are shown here:

- It is recommended that you use nouns to name both resources and the path segments that will appear in the resource URI. You should avoid using verbs for naming resources and resource path segments. Using nouns to name a resource improves the readability of the corresponding RESTful web API, particularly when you are planning to release the API over the internet for the general public.
- You should always use plural nouns to refer to a collection of resources. Make sure that you are not mixing up singular and plural nouns while forming the REST URIs. For instance, to get all departments, the resource URI must look like /departments. If you want to read a specific department from the collection, the URI becomes /departments/{id}. Following the convention, the URI for reading the details of the HR department identified by id=10 should look like /departments/10. The following table illustrates how you can map the HTTP methods (verbs) to the operations defined for the departments resource:

| Resource        | GET                              | POST                    | PUT                        | DELETE                   |
|-----------------|----------------------------------|-------------------------|----------------------------|--------------------------|
| /departments    | Get all departments              | Create a new department | Bulk update on departments | Delete all departments   |
| /departments/10 | Get the HR department with id=10 | Not allowed             | Update the HR department   | Delete the HR department |

- While naming resources, use specific names over generic names. For instance, to read all programmers' details of a software firm, it is preferable to have a resource URI of the form /programmers (which tells you about the type of resource), over the much more generic form /employees. This improves the intuitiveness of the APIs by clearly communicating the type of resources that it deals with.
- Keep the resource names that appear in the URI in lowercase to improve the readability of the resulting resource URI. Resource names may include hyphens; avoid using underscores and other punctuation.
- If the entity resource is represented in the JSON format, field names used in the resource must conform to the following guidelines:
  - Use meaningful names for the properties
  - Follow the camel case naming convention: the first letter of the name is in lowercase, for example, departmentName
  - The first character must be a letter, an underscore (_), or a dollar sign ($), and the subsequent characters can be letters, digits, underscores, and/or dollar signs
  - Avoid using the reserved JavaScript keywords
- If a resource is related to another resource(s), use a subresource to refer to the child resource. You can use the path parameter in the URI to connect a subresource to its base resource. For instance, the resource URI path to get all employees belonging to the HR department (with id=10) will look like /departments/10/employees.
To get the details of the employee with id=200 in the HR department, you can use the following URI: /departments/10/employees/200. The resource path URI may contain plural nouns representing a collection of resources, followed by a singular resource identifier to return a specific resource item from the collection. This pattern can repeat in the URI, allowing you to drill down a collection for reading a specific item. For instance, the following URI represents an employee resource identified by id=200 within the HR department: /departments/hr/employees/200.

Although the HTTP protocol does not place any limit on the length of the resource URI, it is recommended not to exceed 2,000 characters because of the restriction set by many popular browsers.

Best practice: Avoid using actions or verbs in the URI, as it refers to a resource.

Using HATEOAS in response representation

Hypermedia as the Engine of Application State (HATEOAS) refers to the use of hypermedia links in the resource representations. This architectural style lets the clients dynamically navigate to the desired resource by traversing the hypermedia links present in the response body. There is no universally accepted single format for representing links between two resources in JSON.

Hypertext Application Language

The Hypertext Application Language (HAL) is a promising proposal that sets the conventions for expressing hypermedia controls (such as links) with JSON or XML. Currently, this proposal is in the draft stage. It mainly describes two concepts for linking resources:

- Embedded resources: This concept provides a way to embed another resource within the current one. In the JSON format, you will use the _embedded attribute to indicate the embedded resource.
- Links: This concept provides links to associated resources. In the JSON format, you will use the _links attribute to link resources.

Here is the link to this proposal: http://tools.ietf.org/html/draft-kelly-json-hal-06. It defines the following properties for each resource link:

- href: This property indicates the URI to the target resource representation
- templated: This property would be true if the URI value for href has any PATH variable (template) inside it
- title: This property is used for labeling the URI and for documentation purposes
- hreflang: This property specifies the language for the target resource
- name: This property is used for uniquely identifying a link

The following example demonstrates how you can use the HAL format for describing the department resource containing hyperlinks to the associated employee resources. This example uses JSON HAL for representing resources, which is represented using the application/hal+json media type:

```
GET /departments/10 HTTP/1.1
Host: packtpub.com
Accept: application/hal+json

HTTP/1.1 200 OK
Content-Type: application/hal+json

{
  "_links": {
    "self": { "href": "/departments/10" },
    "employees": { "href": "/departments/10/employees" },
    "employee": { "href": "/employees/{id}", "templated": true }
  },
  "_embedded": {
    "manager": {
      "_links": { "self": { "href": "/employees/1700" } },
      "firstName": "Chinmay",
      "lastName": "Jobinesh",
      "employeeId": "1700"
    }
  },
  "departmentId": 10,
  "departmentName": "Administration"
}
```

To summarize, we discussed the details of designing RESTful web APIs, including identifying the resources, using HTTP methods, and naming the web resources. Additionally, we were introduced to the Hypertext Application Language.
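Building on the HAL example above, here is a minimal sketch of how a client can consume that representation by following the hypermedia links instead of hard-coding URIs. It is not from the book: the host is a placeholder, and the code assumes a JavaScript runtime where fetch is available (a browser or Node.js 18+).

```javascript
// Fetch a department, then follow its 'employees' link from _links (HATEOAS).
async function printDepartmentEmployees(departmentUri) {
  // e.g. departmentUri = 'https://api.example.com/departments/10' (placeholder host)
  const deptResponse = await fetch(departmentUri, {
    headers: { 'Accept': 'application/hal+json' }
  });
  const department = await deptResponse.json();

  // The next URI comes from the response body, not from client-side knowledge
  const employeesUri = new URL(department._links.employees.href, departmentUri);

  const empResponse = await fetch(employeesUri, {
    headers: { 'Accept': 'application/hal+json' }
  });
  const employees = await empResponse.json();

  console.log(department.departmentName, employees);
}
```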
Read More:
Getting started with Django RESTful Web Services
Testing RESTful Web Services with Postman
Documenting RESTful Java web services using Swagger

Building VR experiences with React VR 2.0: How to create a maze that's new every time you play

Sunith Shetty
12 Jun 2018
16 min read

In today's tutorial, we will examine the functionality required to build a simple maze. There are a few ways we could build a maze. The most straightforward way would be to fire up our 3D modeler package (say, Blender) and create a labyrinth out of polygons. This would work fine and could be very detailed. However, it would also be very boring. Why? The first time we get through the maze will be exciting, but after a few tries, you'll know the way through. When we construct VR experiences, we usually want people to visit often and have fun every time.

This tutorial is an excerpt from a book written by John Gwinner titled Getting Started with React VR. In this book, you will learn how to create amazing 360 and virtual reality content that runs directly in your browser.

A modeled labyrinth would be boring. Life is too short to do boring things. So, we want to generate a maze randomly. This way, you can change the maze every time so that it'll be fresh and different. The way to do that is through random numbers; to ensure that the maze doesn't shift around us, we want to actually do it with pseudo-random numbers. To start doing that, we'll need a basic application created. Please go to your VR directory and create an application called 'WalkInAMaze':

```
react-vr init WalkInAMaze
```

Almost random: pseudo-random number generators

To have any replay value, or to be able to compare scores between people, we really need a pseudo-random number generator. The basic JavaScript Math.random() is not a seedable pseudo-random generator; it gives you a totally different sequence every time. We need a pseudo-random number generator that takes a seed value. If you give the same seed to the random number generator, it will generate the same sequence of random numbers. (They aren't completely random, but they are very close.) Random number generators are a complex topic; for example, they are used in cryptography, and if your random number generator isn't completely random, someone could break your code. We aren't so worried about that; we just want repeatability.

Although the UI for this may be a bit beyond the scope of this book, creating the maze in a way that clicking on Refresh won't generate a totally different maze is really a good thing and will avoid frustration on the part of the user. This will also allow two users to compare scores; we could persist a board number for the maze and show this. This may be out of scope for our book; however, having a predictable maze will help immensely during development. If it wasn't for this, you might get lost while working on your world. (Well, probably not, but it makes testing easier.)

Including library code from other projects

Up to this point, I've shown you how to create components in React VR (or React). JavaScript interestingly has a historical issue with includes. With C++, Java, or C#, you can include a file in another file or make a reference to a file in a project. After doing that, everything in those other files, such as functions, classes, and global properties (variables), is then usable from the file that you've issued the include statement in. With a browser, the concept of "including" JavaScript is a little different. With Node.js, we use package.json to indicate what packages we need.
To bring those packages into our code, we will use the following syntax in your .js files:

```javascript
var MersenneTwister = require('mersenne-twister');
```

Then, instead of using Math.random(), we will create a new random number generator and pass a seed, as follows:

```javascript
var rng = new MersenneTwister(this.props.Seed);
```

From this point on, you just call rng.random() instead of Math.random(). We can just use npm install <package> and the require statement for properly formatted packages. Much of this can be done for you by executing the npm command:

```
npm install mersenne-twister --save
```

Remember the --save option, which updates our manifest in the project. While we are at it, we can install another package we'll need later:

```
npm install react-vr-gaze-button --save
```

Now that we have a good random number generator, let's use it to complicate our world.

The Maze render()

How do we build a maze? I wanted to develop some code that dynamically generates the maze; anyone could model it in a package, but a VR world should be living. Having code that can dynamically build a maze of any size (to a point) will allow repeat playing of your world. There are a number of JavaScript packages out there for printing mazes. I took one that seemed to be everywhere, in the public domain, on GitHub and modified it for HTML. This app consists of two parts: maze.html and makeMaze.js. Neither is React, but it is JavaScript. It works fairly well, although the numbers don't really represent exactly how wide it is.

First, I made sure that only one x was displaying, both vertically and horizontally. This will not print well (lines are usually taller than wide), but we are building a virtually real maze, not a paper maze. The maze that we generate with the files at maze.html (localhost:8081/vr/maze.html) and the JavaScript file, makeMaze.js, will now look like this:

```
x1xxxxxxx x x x xxx x x x x x x x x xxxxx x x x x x x x x x x x x 2 xxxxxxxxx
```

It is a little hard to read, but you can count the squares vs. xs. Don't worry, it's going to look a lot fancier. Now that we have the HTML version of a maze working, we'll start building the hedges. This is a slightly larger piece of code than I expected, so I broke it into pieces and loaded the Maze object onto GitHub rather than pasting the entire code here, as it's long. You can find a link for the source at: http://bit.ly/VR_Chap11

Adding the floors and type checking

One of the things that looks odd with a 360 pano background, as we've talked about before, is that you can seem to "float" against the ground. One fix, other than fixing the original image, is to simply add a floor. This is what we did with the Space Gallery, and it looks pretty good, as we were assuming we were floating in space anyway.

For this version, let's import a ground square. We could use a large square that would encompass the entire maze; we'd then have to resize it if the size of the maze changes. I decided to use a smaller cube and alter it so that it's "underneath" every cell of the maze. This would allow us some leeway in the future to rotate the squares for worn paths, water traps, or whatever. To make the floor, we will use a simple cube object that I altered slightly and is UV mapped. I used Blender for this. We also import a Hedge model, and a Gem, which will represent where we can teleport to.

Inside Maze.js we added the following code:

```javascript
import Hedge from './Hedge.js';
import Floor from './Hedge.js';
import Gem from './Gem.js';
```

Then, inside Maze.js we could instantiate our floor with the code:

```javascript
<Floor X={-2} Y={-4}/>
```

Notice that we don't use 'vr/components/Hedge.js' when we do the import; we're inside Maze.js. However, in index.vr.js, to include the Maze, we do need:

```javascript
import Maze from './vr/components/Maze.js';
```

It's slightly more complicated, though. In our code, the Maze builds the data structures when props have changed; when moving, if the maze needs rendering again, it simply loops through the data structure and builds a collection (mazeHedges) with all of the floors, teleport targets, and hedges in it. Given this, to create the floors, the line in Maze.js is actually:

```javascript
mazeHedges.push(<Floor {...cellLoc} />);
```

Here is where I ran into two big problems, and I'll show you what happened so that you can avoid these issues. Initially, I was bashing my head against the wall trying to figure out why my floors looked like hedges. This one is pretty easy: we imported Floor from the Hedge.js file. The floors will look like hedges (did you notice this in my preceding code? If so, I did this on purpose as a learning experience. Honest). This is an easy fix. Make sure that you code import Floor from './floor.js'; note that Floor is not type-checked. (It is, after all, JavaScript.) I thought this was odd, as the hedge.js file exports a Hedge object, not a Floor object, but be aware that you can rename objects as you import them.

The second problem I had was more of a simple goof that is easy to make if you aren't really thinking in React. You may run into this. JavaScript is a lovely language, but sometimes I miss a strongly typed language. Here is what I did:

```javascript
<Maze SizeX='4' SizeZ='4' CellSpacing='2.1' Seed='7' />
```

Inside the maze.js file, I had code like this:

```javascript
for (var j = 0; j < this.props.SizeX + 2; j++) {
```

After some debugging, I found out that the value of j was going from 0 to 42. Why did it get 42 instead of 6? The reason was simple. We need to fully understand JavaScript to program complex apps. The mistake was in initializing SizeX to be '4'; this makes it a string variable. When evaluating the loop condition, JavaScript takes the string '4', concatenates 2 onto it to get the string '42', and then compares j numerically against 42, so j runs up toward 42 instead of 6 (a short sketch of the fix appears at the end of this excerpt). When this happens, very weird things follow.

When we were building the Space Gallery, we could easily use the '5.1' values for the input to the box:

```javascript
<Pedestal MyX='0.0' MyZ='-5.1'/>
```

Then, later, use the transform statement below inside the class:

```javascript
transform: [{ translate: [this.props.MyX, -1.7, this.props.MyZ] }]
```

React/JavaScript will put the string values into this.props.MyX, then realize it needs a number, and then quietly do the conversion. However, when you get more complicated objects, such as our maze generation, you won't get away with this. Remember that your code isn't "really" JavaScript. It's processed. At the heart, this processing is fairly simple, but the implications can be a killer. Pay attention to what you code. With a loosely typed language such as JavaScript, with React on top, any mistakes you make will be quietly converted to something you didn't intend. You are the programmer. Program correctly.

So, back to the maze. The Hedge and Floor are straightforward copies of the initial Gem code.
Let's take a look at our starting Gem, although note that it gets a lot more complicated later (and in your source files):

```javascript
import React, { Component } from 'react';
import { asset, Box, Model, Text, View } from 'react-vr';

export default class Gem extends Component {
  constructor() {
    super();
    this.state = {
      Height: -3
    };
  }
  render() {
    return (
      <Model
        source={{
          gltf2: asset('TeleportGem.gltf'),
        }}
        style={{
          transform: [{ translate: [this.props.X, this.state.Height, this.props.Z] }]
        }}
      />
    );
  }
}
```

The Hedge and Floor are essentially the same thing. (We could have made a prop be the file loaded, but we want a different behavior for the Gem, so we will edit this file extensively.)

To run this sample, first, we should have created a directory, as you have before, called WalkInAMaze. Once you do this, download the files from the Git source for this part of the article (http://bit.ly/VR_Chap11). Once you've created the app, copied the files, and fired it up (go to the WalkInAMaze directory and type npm start), you should see something like this once you look around, except there is a bug. This is what the maze should look like (if you use the file 'MazeHedges2DoubleSided.gltf' in Hedge.js, in the <Model> statement).

Now, how did we get those neat-looking hedges in the game? (OK, they are pretty low poly, but it is still pushing it.) One of the nice things about the pace of improvement on web standards is their new features. Instead of just the .obj file format, React VR now has the capability to load glTF files.

Using the glTF file format for models

glTF files are a new file format that works pretty naturally with WebGL. There are exporters for many different CAD packages. The reason I like glTF files is that getting a proper export is fairly straightforward. Lightwave OBJ files are an industry standard, but in the case of React, not all of the options are imported. One major one is transparency. The OBJ file format allows it, but as of the time of writing this book, it wasn't an option. Many other graphics shaders that modern hardware can handle can't be described with the OBJ file format. This is why glTF files are the next best alternative for WebVR. It is a modern and evolving format, and work is being done to enhance the capabilities and make a fairly good match between what WebGL can display and what glTF can export.

This book is, however, about interacting with the world, so I'll only give a brief mention of how to export glTF files, and I will provide the objects, especially the Hedge, as glTF models. The nice thing with glTF from the modeling side is that if you use their material specifications, for example, for Blender, then you don't have to worry that the export won't be quite right. Today's Physically Based Rendering (PBR) tends to use the metallic/roughness model, and these import better than trying to figure out how to convert PBR materials into the OBJ file's specular lighting model. Here is the metallic-looking Gem that I'm using as the gaze point.

Using the glTF metallic roughness model, we can assign the texture maps that programs such as Substance Designer calculate and import them easily. The resulting figures look metallic where they are supposed to be metallic and dull where the paint still holds on. I didn't use ambient occlusion here, as this is a very convex model; something with more surface depressions would look fantastic with ambient occlusion. It would also look great with architectural models, for example, furniture.

To convert your models, there is user documentation at http://bit.ly/glTFExporting. You will need to download and install the Blender glTF exporter. Or, you can just download the files I have already converted. If you do the export, in brief, you do the following steps:

1. Download the files from http://bit.ly/gLTFFiles. You will need the gltf2_Principled.blend file, assuming that you are on a newer version of Blender.
2. In Blender, open your file, then link to the new materials. Go to File->Link, then choose the gltf2_Principled.blend file. Once you do that, drill into "NodeTree" and choose either glTF Metallic Roughness (for metal) or glTF Specular Glossiness for other materials.
3. Choose the object you are going to export; make sure that you choose the Cycles renderer.
4. Open the Node Editor in a window. Scroll down to the bottom of the Node Editor window, and make sure that the box Use Nodes is checked.
5. Add the node via the nodal menu: Add->Group->glTF Specular Glossiness or Metallic Roughness.
6. Once the node is added, go to Add->Texture->Image Texture. Add as many image textures as you have image maps, then wire them up. You should end up with something similar to this diagram.

To export the models, I recommend that you disable camera export and combine the buffers, unless you think you will be exporting several models that share geometry or materials. The export options I used are as follows.

Now, to include the exported glTF object, use the <Model> component as you would with an OBJ file, except you have no MTL file. The materials are all described inside the .gltf file. To include the exported glTF object, you just put the filename as a gltf2 prop in the <Model>:

```javascript
<Model
  source={{ gltf2: asset('TeleportGem2.gltf'), }}
  ...
```

To find out more about these options and processes, you can go to the glTF export website. This site also includes tutorials on major CAD packages and the all-important glTF shaders (for example, the Blender model I showed earlier). I have loaded several .obj files and .gltf files so you can experiment with different combinations of low poly and transparency.

When glTF support was added in React VR version 2.0.0, I was very excited, as transparency maps are very important for a lot of VR models, especially vegetation; just like our hedges. However, it turns out there is a bug in WebGL or three.js that does not render the transparency properly. As a result, I have gone with a low-polygon version in the files on the GitHub site; the pictures above were taken with the file MazeHedges2DoubleSided.gltf in Hedges.js (in vr/components).

If you get 404 errors, check the paths in the glTF file. It depends on which exporter you use: if you are working with Blender, the gltf2 exporter from the Khronos group calculates the path correctly, but the one from Kupoman has options, and you could export the wrong paths.

We discussed important mechanics of props, state, and events. We also discussed how to create a maze using pseudo-random number generators to make sure that our props and state didn't change chaotically. To know more about how to create, move around in, and make worlds react to us in a Virtual Reality world, including basic teleport mechanics, do check out the book Getting Started with React VR.

Read More:
Google Daydream powered Lenovo Mirage Solo hits the market
Google open sources Seurat to bring high precision graphics to Mobile VR
Oculus Go, the first standalone VR headset, arrives!
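Since the string-versus-number trap described earlier in this excerpt is easy to hit, here is a small sketch (not from the book) of the two usual ways around it. The Maze component and its SizeX/SizeZ/CellSpacing/Seed props are the article's; the hedgeCount helper is just an illustration.

```javascript
// Option 1: pass numbers, not strings, from index.vr.js:
//   <Maze SizeX={4} SizeZ={4} CellSpacing={2.1} Seed={7} />
// The braces make the props real numbers, so SizeX + 2 is 6, not the string '42'.

// Option 2: coerce defensively before doing arithmetic on a prop.
function hedgeCount(sizeXProp) {
  var sizeX = parseInt(sizeXProp, 10); // '4' and 4 both become the number 4
  return sizeX + 2;                    // always 6, never '42'
}

console.log(hedgeCount('4')); // 6
console.log(hedgeCount(4));   // 6
```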

How to prevent errors while using utilities for loading data in Teradata

Pravin Dhandre
11 Jun 2018
9 min read

In today's tutorial, we will assist you to overcome the errors that arise while loading, deleting, or updating large volumes of data using Teradata utilities.

This article is an excerpt from Teradata Cookbook, co-authored by Abhinav Khandelwal and Rajsekhar Bhamidipati. This book provides recipes to simplify the daily tasks performed by database administrators (DBAs), along with providing efficient data warehousing solutions in the Teradata database system.

Resolving FastLoad error 2652

When data is being loaded via FastLoad, a table lock is placed on the target table. This means that the table is unavailable for any other operation. A lock on a table is only released when FastLoad encounters the END LOADING command, which terminates phase 2, the so-called application phase. FastLoad may get terminated in phase 1 due to any of the following reasons:

- The load script results in failure (error code 8 or 12)
- The load script is aborted by an admin or some other session
- FastLoad fails due to a bad record or file
- Forgetting to add the END LOADING statement in the script

If so, it keeps a lock on the table, which needs to be released manually. In this recipe, we will see the steps to release FastLoad locks.

Getting ready

Identify the table on which FastLoad has ended prematurely and which is in a locked state. You need to have valid credentials for the Teradata database. Execute the dummy FastLoad script from the same user, or a user which has write access to the locked table. A user requires the following privileges/rights in order to execute FastLoad:

- SELECT and INSERT (and CREATE and DROP, or DELETE) access to the target or loading table
- CREATE and DROP TABLE on the error tables
- SELECT, INSERT, UPDATE, and DELETE for the user PUBLIC on the restart log table (SYSADMIN.FASTLOG); there will be a row in the FASTLOG table for each FastLoad job that has not completed in the system

How to do it...

1. Open a notepad and create the following script:

```
.LOGON 127.0.0.1/dbc,dbc;                      /* Valid system name and credentials for your system */
.DATABASE Database_Name;                       /* Database that contains the locked table */
BEGIN LOADING locked_table                     /* Table that is getting the 2652 error */
    ERRORFILES errortable_name, uv_tablename;  /* Same error table names as in the failed script */
.END LOADING;                                  /* End phase 2 and release the lock */
.LOGOFF;
```

2. Save it as dummy_fl.txt.
3. Open the Windows Command Prompt and execute it using the fastload command, as shown in the following screenshot. This dummy script, with no INSERT statement, should release the lock on the target table.
4. Execute a SELECT on the locked table to see if the lock has been released.

How it works...

As FastLoad is designed to work only on empty tables, it becomes necessary that the loading of the table finishes in one go. If the load script errors out prematurely in phase 2, without encountering the END LOADING command, it leaves a lock on the loading table. FastLoad locks can't be released via the HUT utility, as there is no technical lock on the table. To execute FastLoad, the following are required:

- Log table: FastLoad puts its progress information in the fastlog table.
- Empty table: FastLoad needs the table to be empty before inserting rows into it.
- Two error tables: FastLoad requires two error tables to be created; you just need to name them, and no DDL is required.
Resolving MLOAD error 2571

MLOAD works in five phases, unlike FastLoad, which works in only two, and it can fail in either phase three or phase four. The five phases are:

1. Preliminary: Basic setup. Syntax checking, establishing sessions with the Teradata database, creation of the error tables (two per target table), and creation of the work tables and the log table are done in this phase.
2. DML transaction phase: The request is parsed through the PE and a step plan is generated. The steps and DML are then sent to the AMPs and stored in the appropriate work tables for each target table. The input data sent later is stored in these work tables and applied to the target tables afterwards.
3. Acquisition phase: Unsorted data is sent to the AMPs in blocks of 64 K. Rows are hashed by PI and sent to the appropriate AMPs. The utility places locks on the target tables in preparation for the application phase.
4. Application phase: Changes are applied to the target tables and NUSI subtables. The lock on the table is held throughout this phase.
5. Cleanup phase: If the error code of every step is 0, MLOAD completes successfully and releases all the locks on the specified tables. In that case, all empty error tables, the work tables, and the log table are dropped.

Getting ready

Identify the table that is affected by error 2571. Make sure no host utility is running against this table and that the load job for it is in a failed state.

How to do it...

1. Check Viewpoint for any active utility job on this table. If you find an active job, let it complete.
2. If there is a reason you need to release the lock anyway, first abort all the sessions of the host utility from Viewpoint. Ask your administrator to do this.
3. Execute the following command:

RELEASE MLOAD <databasename.tablename>;

4. If you get a "not able to release MLOAD lock" error, execute the following command:

/* Release lock in application phase */
RELEASE MLOAD <databasename.tablename> IN APPLY;

5. Once the locks are released, you need to drop all the associated error tables, the log table, and the work tables (a sketch with typical default names follows this list).
6. Re-execute MLOAD after correcting the error.
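As a rough sketch, if the failed job used the support-table names that MultiLoad generates by default (ET_, UV_, and WT_ prefixes on the target table name) and a log table named in the script's .LOGTABLE statement, the cleanup would look like the following. The exact names depend on your load script, so treat these as placeholders:

/* Drop the MultiLoad support tables after RELEASE MLOAD (placeholder names) */
DROP TABLE <databasename>.ET_<tablename>;   /* acquisition-phase error table */
DROP TABLE <databasename>.UV_<tablename>;   /* uniqueness-violation error table */
DROP TABLE <databasename>.WT_<tablename>;   /* work table */
DROP TABLE <databasename>.<logtable_name>;  /* restart log table named in .LOGTABLE */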
How it works...

The MLOAD utility places locks in the table headers to alert other utilities that a MultiLoad is in session for the table. They include:

- Acquisition lock: DML allows all; DDL allows DROP only
- Application lock: DML allows SELECT with ACCESS only; DDL allows DROP only

There's more...

If the RELEASE statement still gives an error and does not release the lock on the table, you need to use SELECT with an ACCESS lock to copy the content of the locked table into a new table, and then drop the locked tables. If you start receiving the error 7446 Mload table %ID cannot be released because NUSI exists, you need to drop all the NUSIs on the table and use ALTER TABLE to change it to no fallback to accomplish the task.

Resolving failure 7547

This error is associated with the UPDATE statement, which could be plain SQL or part of an MLOAD job. At times, while updating a set of rows in a table, the update fails with Failure 7547 Target row updated by multiple source rows. This happens when you update the target with multiple rows from the source, which means duplicate values are present in the source table.

Getting ready

Let's create sample volatile tables and insert values into them. After that, we will execute the UPDATE command, which will fail with 7547.

1. Create a target table with the following DDL and insert values into it:

/* TARGET TABLE */
CREATE VOLATILE TABLE accounts
(
CUST_ID INTEGER,
CUST_NAME VARCHAR(25),
Sal INTEGER
) PRIMARY INDEX (CUST_ID)
ON COMMIT PRESERVE ROWS;

INSERT INTO accounts VALUES (1,'will',2000);
INSERT INTO accounts VALUES (2,'bekky',2800);
INSERT INTO accounts VALUES (3,'himesh',4000);

2. Create a source table with the following DDL and insert values into it:

/* SOURCE TABLE */
CREATE VOLATILE TABLE Hr_payhike
(
CUST_ID INTEGER,
CUST_NAME VARCHAR(25),
Sal_hike INTEGER
) PRIMARY INDEX (CUST_ID)
ON COMMIT PRESERVE ROWS;

INSERT INTO Hr_payhike VALUES (1,'will',2030);
INSERT INTO Hr_payhike VALUES (1,'bekky',3800);
INSERT INTO Hr_payhike VALUES (3,'himesh',7000);

3. Execute the MLOAD script. The following snippet shows only the update part, which is what fails:

/* Snippet from MLOAD update */
UPDATE ACC
FROM accounts ACC, Hr_payhike SUPD
SET Sal = SUPD.Sal_hike
WHERE ACC.CUST_ID = SUPD.CUST_ID;

Failure: Target row updated by multiple source rows

How to do it...

1. Check for duplicate values in the source table:

/* Check for duplicate values in the source table */
SELECT CUST_ID, COUNT(*)
FROM Hr_payhike
GROUP BY 1
ORDER BY 2 DESC;

2. The output shows that CUST_ID = 1 has two rows, and that is what causes the error: while updating the target table, the optimizer cannot tell which source row it should apply. Whose salary should be used, Will's or Bekky's?
3. To resolve the error, execute the following update query:

/* Update part of MLOAD */
UPDATE ACC
FROM accounts ACC,
(
SELECT CUST_ID, CUST_NAME, Sal_hike
FROM Hr_payhike
QUALIFY ROW_NUMBER() OVER (PARTITION BY CUST_ID ORDER BY CUST_NAME, Sal_hike DESC) = 1
) SUPD
SET Sal = SUPD.Sal_hike
WHERE ACC.CUST_ID = SUPD.CUST_ID;

Now the update runs without error.

How it works...

The failure happens when you update the target with multiple rows from the source. If you have defined a primary index column for your target and those columns appear in the update query's condition, this error will occur. As an alternative resolution, you can delete the duplicates from the source table itself and execute the original update without any modification; if the source data can't be changed, then you need to change the update statement as shown above.

To summarize, we have successfully learned how to overcome, and prevent, errors while using utilities for loading data into the database. You can also check out the Teradata Cookbook for more than 100 recipes on enterprise data warehousing solutions.

2018 is the year of graph databases. Here's why.
6 reasons to choose MySQL 8 for designing database solutions
Amazon Neptune, AWS' cloud graph database, is now generally available

3 ways to use Indexes in Teradata to improve database performance

Pravin Dhandre
11 Jun 2018
15 min read
In this tutorial, we will design indexes that help improve the query performance of the Teradata database management system. This article is an excerpt from Teradata Cookbook, co-authored by Abhinav Khandelwal and Rajsekhar Bhamidipati. The book will teach you to tackle problems related to efficient querying, stored procedure searching, and navigation techniques in a Teradata database.

Creating a partitioned primary index to improve performance

A PPI (partitioned primary index) is a type of index that enables users to set up databases that provide performance benefits from data locality, while retaining the benefits of scalability inherent in the hash architecture of the Teradata database. This is achieved by hashing rows to different virtual AMPs, as is done with a normal PI, but also by creating local partitions within each virtual AMP. We will see how a PPI improves the performance of a query.

Getting ready

You need to connect to the Teradata database. Let's create a table and insert data into it using the following DDL. This will be a non-partitioned table:

/* NON PPI TABLE DDL */
CREATE VOLATILE TABLE EMP_SAL_NONPPI
(
id INT,
Sal INT,
dob DATE,
o_total INT
) PRIMARY INDEX (id)
ON COMMIT PRESERVE ROWS;

INSERT into EMP_SAL_NONPPI VALUES (1001,2500,'2017-09-01',890);
INSERT into EMP_SAL_NONPPI VALUES (1002,5500,'2017-09-10',890);
INSERT into EMP_SAL_NONPPI VALUES (1003,500,'2017-09-02',890);
INSERT into EMP_SAL_NONPPI VALUES (1004,54500,'2017-09-05',890);
INSERT into EMP_SAL_NONPPI VALUES (1005,900,'2017-09-23',890);
INSERT into EMP_SAL_NONPPI VALUES (1006,8900,'2017-08-03',890);
INSERT into EMP_SAL_NONPPI VALUES (1007,8200,'2017-08-21',890);
INSERT into EMP_SAL_NONPPI VALUES (1008,6200,'2017-08-06',890);
INSERT into EMP_SAL_NONPPI VALUES (1009,2300,'2017-08-12',890);
INSERT into EMP_SAL_NONPPI VALUES (1010,9200,'2017-08-15',890);

Let's check the explain plan of the following query, which selects data based on the dob column:

/* Select on NONPPI table */
SELECT * from EMP_SAL_NONPPI where dob <= 2017-08-01;

As seen in the following explain plan, an all-rows scan can be costly in terms of CPU and I/O if the table has millions of rows:

Explain SELECT * from EMP_SAL_NONPPI where dob <= 2017-08-01;
/* EXPLAIN PLAN of SELECT */
1) First, we do an all-AMPs RETRIEVE step from DBC.EMP_SAL_NONPPI by way of an all-rows scan with a condition of ("DBC.EMP_SAL_NONPPI.dob <= DATE '1900-12-31'") into Spool 1 (group_amps), which is built locally on the AMPs. The size of Spool 1 is estimated with no confidence to be 4 rows (148 bytes). The estimated time for this step is 0.04 seconds.
2) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 0.04 seconds.

Let's see how we can enable partition retrieval for the same query.

How to do it...

1. Connect to the Teradata database using SQLA or Studio.
2. Create the following table with the data.
We will define a PPI on the dob column:

/* Partition table */
CREATE VOLATILE TABLE EMP_SAL_PPI
(
id INT,
Sal INT,
dob DATE,
o_total INT
) PRIMARY INDEX (id)
PARTITION BY RANGE_N (dob BETWEEN DATE '2017-01-01' AND DATE '2017-12-01' EACH INTERVAL '1' DAY)
ON COMMIT PRESERVE ROWS;

INSERT into EMP_SAL_PPI VALUES (1001,2500,'2017-09-01',890);
INSERT into EMP_SAL_PPI VALUES (1002,5500,'2017-09-10',890);
INSERT into EMP_SAL_PPI VALUES (1003,500,'2017-09-02',890);
INSERT into EMP_SAL_PPI VALUES (1004,54500,'2017-09-05',890);
INSERT into EMP_SAL_PPI VALUES (1005,900,'2017-09-23',890);
INSERT into EMP_SAL_PPI VALUES (1006,8900,'2017-08-03',890);
INSERT into EMP_SAL_PPI VALUES (1007,8200,'2017-08-21',890);
INSERT into EMP_SAL_PPI VALUES (1008,6200,'2017-08-06',890);
INSERT into EMP_SAL_PPI VALUES (1009,2300,'2017-08-12',890);
INSERT into EMP_SAL_PPI VALUES (1010,9200,'2017-08-15',890);

3. Let's execute the same query on the new partitioned table:

/* SELECT on PPI table */
sel * from EMP_SAL_PPI where dob <= 2017-08-01;

The data is now accessed using only a single partition, as shown in the following explain plan:

/* EXPLAIN PLAN */
1) First, we do an all-AMPs RETRIEVE step from a single partition of SYSDBA.EMP_SAL_PPI with a condition of ("SYSDBA.EMP_SAL_PPI.dob = DATE '2017-08-01'") with a residual condition of ("SYSDBA.EMP_SAL_PPI.dob = DATE '2017-08-01'") into Spool 1 (group_amps), which is built locally on the AMPs. The size of Spool 1 is estimated with no confidence to be 1 row (37 bytes). The estimated time for this step is 0.04 seconds.
-> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 0.04 seconds.

How it works...

A partitioned PI helps improve the performance of a query by avoiding a full table scan. A PPI works the same way as a primary index for data distribution, but additionally creates partitions according to the ranges or cases specified in the table definition. There are four types of PPI that can be created on a table:

1. Case partitioning:

/* CASE partition */
CREATE TABLE SALES_CASEPPI
(
ORDER_ID INTEGER,
CUST_ID INTEGER,
ORDER_DT DATE
) PRIMARY INDEX (ORDER_ID)
PARTITION BY CASE_N(ORDER_ID < 101, ORDER_ID < 201, ORDER_ID < 501, NO CASE, UNKNOWN);

2. Range-based partitioning:

/* Range partition table */
CREATE VOLATILE TABLE EMP_SAL_PPI
(
id INT,
Sal INT,
dob DATE,
o_total INT
) PRIMARY INDEX (id)
PARTITION BY RANGE_N (dob BETWEEN DATE '2017-01-01' AND DATE '2017-12-01' EACH INTERVAL '1' DAY)
ON COMMIT PRESERVE ROWS;

3. Multi-level partitioning:

CREATE TABLE SALES_MLPPI_TABLE
(
ORDER_ID INTEGER NOT NULL,
CUST_ID INTEGER,
ORDER_DT DATE
) PRIMARY INDEX (ORDER_ID)
PARTITION BY (RANGE_N(ORDER_DT BETWEEN DATE '2017-08-01' AND DATE '2017-12-31' EACH INTERVAL '1' DAY),
CASE_N (ORDER_ID < 1001, ORDER_ID < 2001, ORDER_ID < 3001, NO CASE, UNKNOWN));

4. Character-based partitioning:

/* CHAR partition */
CREATE TABLE SALES_CHAR_PPI
(
ORDR_ID INTEGER,
EMP_NAME VARCHAR(30)
) PRIMARY INDEX (ORDR_ID)
PARTITION BY CASE_N (
EMP_NAME LIKE 'A%', EMP_NAME LIKE 'B%', EMP_NAME LIKE 'C%', EMP_NAME LIKE 'D%',
EMP_NAME LIKE 'E%', EMP_NAME LIKE 'F%', NO CASE, UNKNOWN);

A PPI not only helps improve the performance of queries, but also helps with table maintenance.
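Before moving on, it can be handy to confirm how the rows actually land in partitions. The system-derived PARTITION column can be queried on any PPI table; the following quick check (not part of the original recipe) counts rows per populated partition:

/* Row counts per populated partition of the PPI table */
SELECT PARTITION AS partition_no, COUNT(*) AS row_count
FROM EMP_SAL_PPI
GROUP BY 1
ORDER BY 1;

Empty partitions take up no space, so only partitions that actually contain rows show up in the output.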
There are, however, certain performance considerations to keep in mind when creating a PPI on a table:

- If the partitioning column is not part of the WHERE clause along with the primary index, primary index access can be slower, because every partition may have to be probed (the sketch after this list shows a query that gives the optimizer both)
- The partitioning column must be chosen carefully in order to gain the maximum benefit
- Drop unneeded secondary indexes or value-ordered join indexes
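As a small illustration of the first point (a sketch based on the tables above, not taken from the book), including both the primary index value and the partitioning column in the WHERE clause lets the optimizer go straight to one partition on one AMP:

/* PI value plus partitioning column: single-AMP, single-partition access */
SELECT *
FROM EMP_SAL_PPI
WHERE id = 1004
  AND dob = DATE '2017-09-05';

With only id in the WHERE clause, the row hash still identifies a single AMP, but without the dob predicate that AMP may have to probe every partition for the row hash.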
Creating a join index to improve performance

A join index is a data structure that contains data from one or more tables, with or without aggregation. In this recipe, we will see how join indexes help improve the performance of queries.

Getting ready

You need to connect to the Teradata database using SQLA or Studio. Let's create a table and insert data into it:

CREATE TABLE td_cookbook.EMP_SAL
(
id INT,
DEPT varchar(25),
emp_Fname varchar(25),
emp_Lname varchar(25),
emp_Mname varchar(25),
status INT
) primary index(id);

INSERT into td_cookbook.EMP_SAL VALUES (1,'HR','Anikta','lal','kumar',1);
INSERT into td_cookbook.EMP_SAL VALUES (2,'HR','Anik','kumar','kumar',2);
INSERT into td_cookbook.EMP_SAL VALUES (3,'IT','Arjun','sharma','lal',1);
INSERT into td_cookbook.EMP_SAL VALUES (4,'SALES','Billa','Suti','raj',2);
INSERT into td_cookbook.EMP_SAL VALUES (4,'IT','Koyd','Loud','harlod',1);
INSERT into td_cookbook.EMP_SAL VALUES (2,'HR','Harlod','lal','kumar',1);

Further, we will create a single-table join index with a primary index that differs from the base table's primary index.

How to do it...

The following are the steps to create a join index to improve performance:

1. Connect to the Teradata database using SQLA or Studio.
2. Check the explain plan for the following query:

/* SELECT on base table */
EXPLAIN SELECT id,dept,emp_Fname,emp_Lname,status from td_cookbook.EMP_SAL where id=4;

1) First, we do a single-AMP RETRIEVE step from td_cookbook.EMP_SAL by way of the primary index "td_cookbook.EMP_SAL.id = 4" with no residual conditions into Spool 1 (one-amp), which is built locally on that AMP. The size of Spool 1 is estimated with low confidence to be 2 rows (118 bytes). The estimated time for this step is 0.02 seconds.
-> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 0.02 seconds.

With a WHERE clause on id, the system queries the table using the primary index of the base table, which is id. However, if a user wants to query the table on the emp_Fname column, an all-rows scan will occur, which degrades the performance of the query.

3. Now, we will create a join index using emp_Fname as the primary index:

/* Join Index */
CREATE JOIN INDEX td_cookbook.EMP_JI AS
SELECT id,emp_Fname,emp_Lname,status,emp_Mname,dept
FROM td_cookbook.EMP_SAL
PRIMARY INDEX(emp_Fname);

4. Let's collect statistics on the join index:

/* Collect stats on JI */
COLLECT STATS td_cookbook.EMP_JI COLUMN emp_Fname;

5. Now, we check the explain plan of the query with a WHERE clause on the emp_Fname column:

Explain sel id,dept,emp_Fname,emp_Lname,status from td_cookbook.EMP_SAL where emp_Fname='ankita';

1) First, we do a single-AMP RETRIEVE step from td_cookbook.EMP_JI by way of the primary index "td_cookbook.EMP_JI.emp_Fname = 'ankita'" with no residual conditions into Spool 1 (one-amp), which is built locally on that AMP. The size of Spool 1 is estimated with low confidence to be 2 rows (118 bytes). The estimated time for this step is 0.02 seconds.
-> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 0.02 seconds.

In the EXPLAIN, you can see that the optimizer uses the join index instead of the base table when the query uses the emp_Fname column.

How it works...

Query performance improves any time a join index can be used instead of the base tables. A join index is most useful when its columns can satisfy, or cover, most or all of the requirements in a query. For example, the optimizer may consider using a covering index instead of performing a merge join. When all the queried columns can be satisfied by a join index, it is called a covered query. Covering indexes improve the speed of join queries; the extent of improvement can be dramatic, especially for queries involving complex, large-table, and multiple-table joins, and depends on how often an index is appropriate to a query. There are a few more join index variants that can be used in Teradata:

- Aggregate-table join index: A type of join index which pre-joins and summarizes aggregated tables without requiring any physical summary tables. It refreshes automatically whenever the base table changes. Only COUNT and SUM are permitted, and DISTINCT is not permitted. Use FLOAT as the data type for COUNT and SUM columns to avoid overflow:

/* AGGREGATE JOIN INDEX */
CREATE JOIN INDEX Agg_Join_Index AS
SELECT Cust_ID, Order_ID, SUM(Sales_north)  -- Aggregate column
FROM sales_table
GROUP BY 1,2
Primary Index(Cust_ID);

- Sparse join index: When a WHERE clause is applied in a join index, it is known as a sparse join index. By limiting the number of rows included in the join index, it reduces the index's size. It is also useful for UPDATE statements where the index is highly selective:

/* SPARSE JOIN INDEX */
CREATE JOIN INDEX Sparse_Join_Index AS
SELECT Cust_ID, Order_ID, SUM(Sales_north)  -- Aggregate column
FROM sales_table
where Order_id = 1  -- WHERE clause
GROUP BY 1,2
Primary Index(Cust_ID);
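One practical point the recipe does not cover: a join index has to be dropped before its base table can be loaded with FastLoad or MultiLoad, and it can be recreated once the load finishes. A minimal sketch:

/* Capture the definition first so the index can be recreated after a bulk load */
SHOW JOIN INDEX td_cookbook.EMP_JI;

/* Drop it before running FastLoad/MultiLoad against td_cookbook.EMP_SAL */
DROP JOIN INDEX td_cookbook.EMP_JI;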
Creating a hash index to improve performance

Hash indexes are designed to improve query performance like join indexes, especially single-table join indexes, and, in addition, they enable you to avoid accessing the base table. The syntax for a hash index is as follows:

/* Hash index syntax */
CREATE HASH INDEX <hash-index-name> [, <fallback-option>]
(<column-name-list1>) ON <base-table>
[BY (<partition-column-name-list2>)]
[ORDER BY <index-sort-spec>];

Getting ready

You need to connect to the Teradata database. Let's create a table and insert data into it using the following DDL:

/* Create table with data */
CREATE TABLE td_cookbook.EMP_SAL
(
id INT,
DEPT varchar(25),
emp_Fname varchar(25),
emp_Lname varchar(25),
emp_Mname varchar(25),
status INT
) primary index(id);

INSERT into td_cookbook.EMP_SAL VALUES (1,'HR','Anikta','lal','kumar',1);
INSERT into td_cookbook.EMP_SAL VALUES (2,'HR','Anik','kumar','kumar',2);
INSERT into td_cookbook.EMP_SAL VALUES (3,'IT','Arjun','sharma','lal',1);
INSERT into td_cookbook.EMP_SAL VALUES (4,'SALES','Billa','Suti','raj',2);
INSERT into td_cookbook.EMP_SAL VALUES (4,'IT','Koyd','Loud','harlod',1);
INSERT into td_cookbook.EMP_SAL VALUES (2,'HR','Harlod','lal','kumar',1);

How to do it...

1. Connect to the Teradata database using SQLA or Studio.
2. Check the explain plan of the following query:

/* EXPLAIN of SELECT */
Explain sel id,emp_Fname from td_cookbook.EMP_SAL;

1) First, we lock td_cookbook.EMP_SAL for read on a reserved RowHash to prevent global deadlock.
2) Next, we lock td_cookbook.EMP_SAL for read.
3) We do an all-AMPs RETRIEVE step from td_cookbook.EMP_SAL by way of an all-rows scan with no residual conditions into Spool 1 (group_amps), which is built locally on the AMPs. The size of Spool 1 is estimated with high confidence to be 6 rows (210 bytes). The estimated time for this step is 0.04 seconds.
4) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 0.04 seconds.

3. Now let's create a hash index on the EMP_SAL table:

/* Hash index */
CREATE HASH INDEX td_cookbook.EMP_HASH_inx (id, DEPT) ON td_cookbook.EMP_SAL
BY (id)
ORDER BY HASH (id);

4. Let's check the explain plan of a query on the hash index columns after the hash index creation:

/* Select after hash index */
EXPLAIN SELECT id,dept from td_cookbook.EMP_SAL;

1) First, we lock td_cookbook.EMP_HASH_INX for read on a reserved RowHash to prevent global deadlock.
2) Next, we lock td_cookbook.EMP_HASH_INX for read.
3) We do an all-AMPs RETRIEVE step from td_cookbook.EMP_HASH_INX by way of an all-rows scan with no residual conditions into Spool 1 (group_amps), which is built locally on the AMPs. The size of Spool 1 is estimated with high confidence to be 6 rows (210 bytes). The estimated time for this step is 0.04 seconds.
4) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 0.04 seconds.

This time, the retrieve step reads from the hash index rather than the base table.

How it works...

Points to consider about this hash index definition:

- Each hash index row contains the id and DEPT columns. Specifying id explicitly is not strictly necessary, since it is the primary index of the base table and would therefore be included automatically.
- The BY clause indicates that the rows of this index will be distributed by the hash value of id.
- The ORDER BY clause indicates that the index rows will be ordered on each AMP by the hash value of id.
- The columns specified in the BY clause must be part of the columns that make up the hash index, and the BY clause is used together with the ORDER BY clause.
- Unlike join indexes, hash indexes can only be defined on a single table.

We explored how to create different types of indexes to get maximum performance out of your database queries. If you found this article useful, do check out the book Teradata Cookbook and gain confidence in running a wide variety of data analytics and developing applications for the Teradata environment.

Why MongoDB is the most popular NoSQL database today
Why Oracle is losing the Database Race
Using the Firebase Real-Time Database

Extension functions in Kotlin: everything you need to know

Aaron Lazar
08 Jun 2018
8 min read
Kotlin is a rapidly rising programming language. It offers developers the simplicity and effectiveness to develop robust and lightweight applications. Kotlin offers great functional programming support, and one of its best features in this respect is, hands down, extension functions. Extension functions are great because they let you add new functions to existing types. This is especially useful when you're working with Android and you want to add extra functions to the framework classes. In this article, we'll see what extension functions are and how they're a blessing in disguise! This article has been extracted from the book Functional Kotlin, by Mario Arias and Rivu Chakraborty. The book bridges the language gap for Kotlin developers by showing you how to create and consume functional constructs in Kotlin.

fun String.sendToConsole() = println(this)

fun main(args: Array<String>) {
    "Hello world! (from an extension function)".sendToConsole()
}

To add an extension function to an existing type, you write the function's name next to the type's name, joined by a dot (.). In our example, we add an extension function (sendToConsole()) to the String type. Inside the function's body, this refers to the instance of the String type (in this extension function, String is the receiver type). Apart from the dot (.) and this, extension functions have the same syntax rules and features as a normal function. Indeed, behind the scenes, an extension function is a normal function whose first parameter is a value of the receiver type. So, our sendToConsole() extension function is equivalent to the next code:

fun sendToConsole(string: String) = println(string)

sendToConsole("Hello world! (from a normal function)")

So, in reality, we aren't modifying a type with new functions. Extension functions are a very elegant way to write utility functions: easy to write, fun to use, and nice to read, a win-win. This also means that extension functions have one restriction: they can't access private members of this, in contrast with a proper member function, which can access everything inside the instance:

class Human(private val name: String)

fun Human.speak(): String = "${this.name} makes a noise" //Cannot access 'name': it is private in 'Human'

Invoking an extension function is the same as invoking a normal function: with an instance of the receiver type (which will be referenced as this inside the extension), invoke the function by name.
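To illustrate the utility-function style a little further (a sketch of ours, not from the book; the surroundWith name is made up for the example), an extension can take regular parameters and default values just like any other function:

fun String.surroundWith(prefix: String, suffix: String = prefix) = "$prefix$this$suffix"

fun main(args: Array<String>) {
    println("extension functions".surroundWith("*"))       // *extension functions*
    println("extension functions".surroundWith("<", ">"))  // <extension functions>
}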
Extension functions and inheritance

There is a big difference between member functions and extension functions when we talk about inheritance. The open class Canine has a subclass, Dog. A standalone function, printSpeak, receives a parameter of type Canine and prints the result of the function speak(): String:

open class Canine {
    open fun speak() = "<generic canine noise>"
}

class Dog : Canine() {
    override fun speak() = "woof!!"
}

fun printSpeak(canine: Canine) {
    println(canine.speak())
}

Open classes with open methods (member functions) can be extended and can alter their behavior. Invoking the speak function acts differently depending on the type of your instance. The printSpeak function can be invoked with an instance of any class that is-a Canine, either Canine itself or any subclass:

printSpeak(Canine())
printSpeak(Dog())

If we execute this code, we see the following on the console:

<generic canine noise>
woof!!

Although both are Canine, the behavior of speak is different in the two cases, as the subclass overrides the parent implementation. But with extension functions, many things are different. As in the previous example, Feline is an open class extended by the Cat class. But speak is now an extension function:

open class Feline

fun Feline.speak() = "<generic feline noise>"

class Cat : Feline()

fun Cat.speak() = "meow!!"

fun printSpeak(feline: Feline) {
    println(feline.speak())
}

Extension functions don't need to be marked as override, because we aren't overriding anything:

printSpeak(Feline())
printSpeak(Cat())

If we execute this code, we see the following on the console:

<generic feline noise>
<generic feline noise>

In this case, both invocations produce the same result. Although it seems confusing at first, once you analyze what is happening it becomes clear: we're invoking the Feline.speak() function twice, because each parameter that we pass to the printSpeak(Feline) function is a Feline:

open class Primate(val name: String)

fun Primate.speak() = "$name: <generic primate noise>"

open class GiantApe(name: String) : Primate(name)

fun GiantApe.speak() = "${this.name} :<scary 100db roar>"

fun printSpeak(primate: Primate) {
    println(primate.speak())
}

printSpeak(Primate("Koko"))
printSpeak(GiantApe("Kong"))

If we execute this code, we see the following on the console:

Koko: <generic primate noise>
Kong: <generic primate noise>

In this case, it is still the same behavior as in the previous examples, but using the right value for name. Speaking of which, we can reference name with name or with this.name; both are valid.
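To see the same static dispatch without going through printSpeak, you can assign one Cat instance to two differently typed variables (a small sketch of ours, not from the book, reusing the Feline, Cat, and speak declarations above):

fun main(args: Array<String>) {
    val cat: Cat = Cat()
    val feline: Feline = cat      // same object, declared as Feline
    println(cat.speak())          // meow!!
    println(feline.speak())       // <generic feline noise>
}

The extension to call is chosen at compile time from the declared type of the expression, not from the runtime type of the object, which is why printSpeak(Feline) always ends up in Feline.speak().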
Extension functions as members

Extension functions can be declared as members of a class. An instance of the class in which extension functions are declared is called the dispatch receiver. The Caregiver open class internally defines extension functions for two different classes, Feline and Primate:

open class Caregiver(val name: String) {
    open fun Feline.react() = "PURRR!!!"

    fun Primate.react() = "*$name plays with ${this@Caregiver.name}*"

    fun takeCare(feline: Feline) {
        println("Feline reacts: ${feline.react()}")
    }

    fun takeCare(primate: Primate){
        println("Primate reacts: ${primate.react()}")
    }
}

Both extension functions are meant to be used inside an instance of Caregiver. Indeed, it is good practice to mark member extension functions as private if they aren't open. In the case of Primate.react(), we are using the name value from Primate and the name value from Caregiver. To access members with a name conflict, the extension receiver (this) takes precedence, and to access members of the dispatch receiver, the qualified this syntax must be used (this@Caregiver.name above). Other members of the dispatch receiver that don't have a name conflict can be used without a qualified this.

Don't get confused by the various meanings of this that we have already covered:

- Inside a class, this means the instance of that class
- Inside an extension function, this means the instance of the receiver type, like the first parameter in our utility-style function, but with a nicer syntax:

class Dispatcher {
    val dispatcher: Dispatcher = this

    fun Int.extension(){
        val receiver: Int = this
        val dispatcher: Dispatcher = this@Dispatcher
    }
}

Going back to our zoo example, we instantiate a Caregiver, a Cat, and a Primate, and we invoke the function Caregiver.takeCare with both animal instances:

val adam = Caregiver("Adam")
val fulgencio = Cat()
val koko = Primate("Koko")

adam.takeCare(fulgencio)
adam.takeCare(koko)

If we execute this code, we see the following on the console:

Feline reacts: PURRR!!!
Primate reacts: *Koko plays with Adam*

Any zoo needs a veterinary surgeon. The class Vet extends Caregiver:

open class Vet(name: String): Caregiver(name) {
    override fun Feline.react() = "*runs away from $name*"
}

We override the Feline.react() function with a different implementation. We are also using the Vet class's name directly, as the Feline class doesn't have a name property:

val brenda = Vet("Brenda")

listOf(adam, brenda).forEach { caregiver ->
    println("${caregiver.javaClass.simpleName} ${caregiver.name}")
    caregiver.takeCare(fulgencio)
    caregiver.takeCare(koko)
}

After which, we get the following output:

Caregiver Adam
Feline reacts: PURRR!!!
Primate reacts: *Koko plays with Adam*
Vet Brenda
Feline reacts: *runs away from Brenda*
Primate reacts: *Koko plays with Brenda*

Extension functions with conflicting names

What happens when an extension function has the same name as a member function? The Worker class has a function work(): String and a private function rest(): String. We also have extension functions with the same signatures, work and rest:

class Worker {
    fun work() = "*working hard*"
    private fun rest() = "*resting*"
}

fun Worker.work() = "*not working so hard*"

fun <T> Worker.work(t:T) = "*working on $t*"

fun Worker.rest() = "*playing video games*"

Declaring an extension function with the same signature as a member function isn't a compilation error, just a warning: Extension is shadowed by a member: public final fun work(): String. It is legal to declare such a function, but the member function always takes precedence, so the extension function is never invoked. This behavior changes when the member function is private; in that case, the extension function takes precedence. It is also possible to overload an existing member function with an extension function:

val worker = Worker()

println(worker.work())
println(worker.work("refactoring"))
println(worker.rest())

On execution, work() invokes the member function, while work(String) and rest() invoke the extension functions:

*working hard*
*working on refactoring*
*playing video games*

Extension functions for objects

In Kotlin, objects are a type, therefore they can have functions, including extension functions (among other things, such as extending interfaces). We can add a buildBridge extension function to the object Builder:

object Builder {
}

fun Builder.buildBridge() = "A shiny new bridge"

This also includes companion objects. The class Designer has two inner objects, the companion object and the Desk object:

class Designer {
    companion object {
    }

    object Desk {
    }
}

fun Designer.Companion.fastPrototype() = "Prototype"

fun Designer.Desk.portofolio() = listOf("Project1", "Project2")

Calling these functions works like calling any normal object member function:

Designer.fastPrototype()
Designer.Desk.portofolio().forEach(::println)

So there you have it! You now know how to take advantage of extension functions in Kotlin. If you found this tutorial helpful and would like to learn more, head on over to purchase the full book, Functional Kotlin, by Mario Arias and Rivu Chakraborty.

Forget C and Java. Learn Kotlin: the next universal programming language
5 reasons to choose Kotlin over Java
Building chat application with Kotlin using Node.js, the powerful Server-side JavaScript platform

Upgrading, packaging, and publishing your React VR app

Sunith Shetty
08 Jun 2018
19 min read
It is fun to develop and experience virtual worlds at home. Eventually, though, you want the world to see your creation, and to do that we need to package and publish our app. In the course of development, upgrades to React may come along; before publishing, you will need to decide whether to "code freeze" and ship with a stable version, or upgrade to a new version. This is a design decision. In today's tutorial, we will learn to upgrade React VR and bundle the code in order to publish it on the web. This article is an excerpt from a book written by John Gwinner titled Getting Started with React VR. This book will get you well-versed with Virtual Reality (VR) and React VR components to create your own VR apps.

One of the neat things, although it can be frustrating, is that web projects are frequently updated. There are a couple of different ways to do an upgrade:

- You can install/create a new app with the same name, then go to your old app and copy everything over. This is a facelift upgrade, or rip and replace.
- You can do an update. Mostly, this is an update to package.json, after which you delete node_modules and rebuild it. This is an upgrade in place.

It is up to you which method you use, but the major difference is that an upgrade in place is somewhat easier (no source code to modify and copy), although it may or may not work. A facelift upgrade also relies on you using the correct react-vr-cli. There is a notice that runs whenever you run React VR from the Command Prompt that will tell you whether it's old. The error or warning about an upgrade may fly by quickly; the command takes a while to run, so you may go away for a cup of coffee. Pay attention to red lines, seriously.

To do an upgrade in place, you will typically get an update notification from Git if you have subscribed to the project. If you haven't, go to http://bit.ly/ReactVR, create an account (if you don't have one already), and click on the eyeball icon to join the watch list. Then, you will get an email every time there is an upgrade. We will cover the most straightforward way to do an upgrade, the upgrade in place, first.

Upgrading in place

How do you know what version of React you have installed? From a Node.js prompt, type this:

npm list react-vr

Also, check the version of react-vr-web:

npm list react-vr-web

Check the version of react-vr-cli (the command-line interface, really only used for creating the hello world app):

npm list react-vr-cli

Check the version of ovrui (Open VR's user interface):

npm list ovrui

You can check these against the versions in the documentation. If you've subscribed to React VR on GitHub (and you should!), then you will get an email telling you that there is an upgrade. Note that the CLI will also tell you if it is out of date, although this only applies when you are creating a new application (folder/website). The release notes are at http://bit.ly/VRReleases. There, you will find instructions to upgrade. The upgrade instructions usually have you do the following:

1. Delete your node_modules directory.
2. Open your package.json file.
3. Update react-vr, react-vr-web, and ovrui to the new version number, for example, 2.0.0.
4. Update react to "a.b.c".
5. Update react-native to "~d.e.f".
6. Update three to "^g.h.k".
7. Run npm install or yarn.

Note the ~ and ^ symbols: ~version means approximately equivalent to version, and ^version means compatible with version.
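As a rough illustration (our own, not from the React VR release notes), this is how npm interprets those prefixes for the kinds of versions used above:

// "~0.48.0"  matches >=0.48.0 and <0.49.0  (patch updates only)
// "^0.87.0"  matches >=0.87.0 and <0.88.0  (for 0.x versions, caret stays within the minor version)
// "^2.1.0"   matches >=2.1.0  and <3.0.0   (for 1.0.0 and above, caret allows minor and patch updates)
// "2.0.0"    matches exactly 2.0.0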
This helps, as you may have other packages that want particular versions of react-native and three. To get the values of {a...k}, refer to the release notes. I have also found that you may need to include these modules in the devDependencies section of package.json:

"react-devtools": "^2.5.2",
"react-test-renderer": "16.0.0",

You may see this error:

module.js:529
throw err;
^
Error: Cannot find module './node_modules/react-native/packager/blacklist'

If you do, make the following change in the rn-cli.config.js file in your project's root folder. Replace the line:

var blacklist = require('./node_modules/react-native/packager/blacklist');

with:

var blacklist = require('./node_modules/metro-bundler/src/blacklist');

Third-party dependencies

If you have been experimenting and adding modules with npm install <something>, you may find, after an upgrade, that things do not work. The package.json file also needs to know about all the additional packages you installed during experimentation. This is the project (npm) way to ensure that Node.js knows we need a particular piece of software. If you have this issue, you'll need to either repeat the install with the --save parameter, or edit the dependencies section in your package.json file:

{
  "name": "WalkInAMaze",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node -e \"console.log('open browser at http://localhost:8081/vr/\\n\\n');\" && node node_modules/react-native/local-cli/cli.js start",
    "bundle": "node node_modules/react-vr/scripts/bundle.js",
    "open": "node -e \"require('xopen')('http://localhost:8081/vr/')\"",
    "devtools": "react-devtools",
    "test": "jest"
  },
  "dependencies": {
    "ovrui": "~2.0.0",
    "react": "16.0.0",
    "react-native": "~0.48.0",
    "three": "^0.87.0",
    "react-vr": "~2.0.0",
    "react-vr-web": "~2.0.0",
    "mersenne-twister": "^1.1.0"
  },
  "devDependencies": {
    "babel-jest": "^19.0.0",
    "babel-preset-react-native": "^1.9.1",
    "jest": "^19.0.2",
    "react-devtools": "^2.5.2",
    "react-test-renderer": "16.0.0",
    "xopen": "1.0.0"
  },
  "jest": {
    "preset": "react-vr"
  }
}

Again, this is the manual way; a better way is to use npm install <package> --save. The --save qualifier saves the new package you've installed in package.json. The manual edits can be handy to ensure that you've got the right versions if you get a version mismatch. If you mess around with installing and removing enough packages, you will eventually mess up your modules. If you get errors even after removing node_modules, issue these commands:

npm cache clean --force
npm start -- --reset-cache

The cache clean won't do it by itself; you need the reset-cache as well, otherwise the problem packages will still be saved, even if they don't physically exist!

Really broken upgrades – rip and replace

If, however, after all that work, your upgrade still does not work, all is not lost. We can do a rip and replace upgrade. Note that this is sort of a "last resort", but it does work fairly well. Follow these steps:

1. Ensure that your react-vr-cli package is up to date, globally:

[F:\ReactVR]npm install react-vr-cli -g
C:\Users\John\AppData\Roaming\npm\react-vr -> C:\Users\John\AppData\Roaming\npm\node_modules\react-vr-cli\index.js
+ react-vr-cli@<version>
updated 8 packages in 2.83s

This is important because, when there is a new version of React, you may not have the most up-to-date react-vr-cli. It will tell you when you use it that there is a newer version out, but that line frequently scrolls by; if you don't notice it, you can spend a lot of time trying to install an updated version, to no avail.
npm generates a lot of verbiage, but it is important to read what it says, especially lines formatted in red.

2. Ensure that all CLI (DOS) windows, editing sessions, Node.js running CLIs, and so on, are closed. (You shouldn't need to reboot, however; just close everything that is using the old directory.)
3. Rename the old code to MyAppName140 (add a version number to the end of the old react-vr directory).
4. Create the application, using react-vr init MyAppName, in other words, the original app name.
5. The next step is easiest using a diff program (refer to http://bit.ly/WinDiff). I use Beyond Compare, but there are other ones too. Choose one and install it, if needed.
6. Compare the two directories, .\MyAppName (new) and .\MyAppName140, and see what files have changed.
7. Move over any new files from your old app, including assets (you can probably copy over the entire static_assets folder).
8. Merge any files that have changed, except package.json. Generally, you will need to merge these files: index.vr.js and client.js (if you changed it).
9. For package.json, see what lines have been added and install those packages in the new app via npm install <missed package> --save, or start the app and see what is missing.
10. Remove any files seeded by the hello world app, such as chess-world.jpg (unless you are using that background, of course).
11. Usually, you don't change the rn-cli.config.js file (unless you modified the seeded version).

Most code will move directly over. Ensure that you change the application name if you changed the directory name, but with the preceding directions, you won't have to. The preceding list of upgrade steps may be slightly easier when there are massive changes to React VR, although it will require some picking through source files. The source is pretty straightforward, so this should be easy in practice. I have found that these techniques work best when the automatic upgrade did not work. As mentioned earlier, the time to do a major upgrade is probably not right before publishing the app, unless there is some new feature you need; you want to adequately test your app to ensure that there aren't any bugs. I'm including the upgrade steps here all the same, just not as something to do right before publishing.

Getting your code ready to publish

Honestly, you should never put off organizing your clothes until... oh, wait, we're talking about code. You should never put off organizing your code until the night you want to ship it. Even the code you think is throwaway may end up in production. Learn good coding habits and style from the beginning.

Good code organization

Good code, from the very start, is very important for many reasons:

- If your code uses sloppy indentation, it's more difficult to read. Many code editors, such as Visual Studio Code, Atom, and WebStorm, will format code for you, but don't rely on these tools.
- Poor naming conventions can hide problems. An improper case on variables can hide problems, such as using this.State instead of this.state.
- Most of the time spent coding, as much as 80%, is in maintenance. If you can't read the code, you can't maintain it.
- When you're a starting-out programmer, you frequently think you'll always be able to read your own code, but when you pick up a piece years later, say "Who wrote this junk?", and then realize it was you, you will quit doing things like a, b, c, d variable names and the like.
- Most software at some point is maintained, read, copied, or used by someone other than the author.
- Most programmers think code standards are for "the other guy," yet complain when they have to code well. Who then does?
- Most programmers will immediately ask for the code documentation and roll their eyes when they don't find it. I usually ask to see the documentation they wrote for their last project; every programmer I've hired usually gives me a deer-in-the-headlights look. This is why I usually require good comments in the code.

A good comment is not something like this:

//count from 99 to 1
for (i=99; i>0; i--)
...

A good comment is this:

//we are counting bottles of beer
for (i=99; i>0; i--)
...

Cleaning the lint trap (checking code standards)

When you wash clothes, the lint builds up and will eventually clog your washing machine or dryer, or cause a fire. In the PC world, old code, poorly typed names, and the like can also build up. Refactoring is one way to clean up the code. I highly recommend that you use some form of version control, such as Git or Bitbucket, to check in your code; while refactoring, it's quite possible to totally mess up your code, and if you don't use version control, you may lose a lot of work.

A great way to do a code review of your work, before you publish, is to use a linter. Linters go through your code and point out problems (crud), improper syntax, and things that may work differently than you intend, and they generally try to pick up your room after you, like your mom does. While you might not like it when your mom does that, these tools are invaluable. Computers are, after all, very picky, so why not use the machines against each other? One of the most common ways to let software check your JavaScript is a program called ESLint. You can read about it at http://bit.ly/JSLinter. To install ESLint, you do it via npm like most packages: npm install eslint --save-dev. The --save-dev option puts a requirement in your project while you are developing; once you've published your app, you won't need to pack the ESLint information with your project.

There are a number of other things you need to do to get ESLint to work properly; read the configuration pages and go through the tutorials. A lot depends on which IDE you use (you can use ESLint with Visual Studio, for example). Once you've installed ESLint, you need to configure a local configuration file. Do this with eslint --init. The --init command displays a prompt that asks how to configure the rules it will follow. It will ask a series of questions, including what style to use. AirBNB is fairly common, although you can use others; there's no wrong choice. If you are working for a company, they may already have standards, so check with management. One of the prompts will ask if you need React.
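If you would rather skip the prompts, the generated file can also be written by hand. The following is a minimal sketch of an .eslintrc.json, assuming you chose the AirBNB style and installed eslint-config-airbnb and eslint-plugin-react (plus their peer dependencies); the exact rules you enable are entirely up to you:

{
  "extends": ["airbnb", "plugin:react/recommended"],
  "env": {
    "browser": true,
    "es6": true
  },
  "parserOptions": {
    "ecmaFeatures": { "jsx": true }
  },
  "rules": {
    "no-plusplus": "error",
    "eqeqeq": "error"
  }
}

The two rules shown here simply enforce the style points discussed in the next section (avoiding ++/-- and using === instead of ==).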
React VR coding style

Coding style can be nearly religious, but in the JavaScript and React world, some standards are very common. AirBNB has a good, fairly well-regarded style guide at http://bit.ly/JStyle. For React VR, some style options to consider are as follows:

- Use lowercase for the first letter of a variable name. In other words, this.props.currentX, not this.props.CurrentX, and don't use underscores (this is called camelCase).
- Use PascalCase only when naming constructors or classes. As you're using PascalCase for files, make the filename match the class, so import MyClass from './MyClass'.
- Be careful about 0 vs {0}. In general, learn JavaScript and React.
- Always use const or let to declare variables, to avoid polluting the global namespace.
- Avoid using ++ and --. This one was hard for me, being a C++ programmer. Hopefully, by the time you read this, I've fixed it in the source examples; if not, do as I say, not as I do!
- Learn the difference between == and ===, and use them properly; this is another thing that is new for C++ and C# programmers.

In general, I highly recommend that you pore over these coding styles and use a linter when you write your code.

Third-party dependencies

For your published website/application to work reliably, we also need to update package.json; this is the "project" way to ensure that Node.js knows we need a particular piece of software. We will edit the dependencies section to add the last line, mersenne-twister (in the book this line is in bold, which obviously won't show up in a plain text editor):

{
  "name": "WalkInAMaze",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node -e \"console.log('open browser at http://localhost:8081/vr/\\n\\n');\" && node node_modules/react-native/local-cli/cli.js start",
    "bundle": "node node_modules/react-vr/scripts/bundle.js",
    "open": "node -e \"require('xopen')('http://localhost:8081/vr/')\"",
    "devtools": "react-devtools",
    "test": "jest"
  },
  "dependencies": {
    "ovrui": "~2.0.0",
    "react": "16.0.0",
    "react-native": "~0.48.0",
    "three": "^0.87.0",
    "react-vr": "~2.0.0",
    "react-vr-web": "~2.0.0",
    "mersenne-twister": "^1.1.0"
  },
  "devDependencies": {
    "babel-jest": "^19.0.0",
    "babel-preset-react-native": "^1.9.1",
    "jest": "^19.0.2",
    "react-devtools": "^2.5.2",
    "react-test-renderer": "16.0.0",
    "xopen": "1.0.0"
  },
  "jest": {
    "preset": "react-vr"
  }
}

This is the manual way; a better way is to use npm install <package> --save. The --save qualifier saves the new package you've installed into package.json. The manual edits can be handy to ensure that you've got the right versions if you get a version mismatch. If you mess around with installing and removing enough packages, you will eventually mess up your modules. If you get errors, even after removing node_modules, issue these commands:

npm start -- --reset-cache
npm cache clean --force

The cache clean won't do it by itself; you need the reset-cache as well, otherwise the problem packages will still be saved, even if they don't physically exist!

Bundling for publishing on the web

Assuming that you have your project dependencies set up correctly, to get your project to run from a web server (typically through an ISP or service provider) you need to "bundle" it. React VR has a script that packages everything up into just a few files. Note, of course, that your desktop machine counts as a "web server", although I wouldn't recommend that you expose your development machine to the web. The better way to have other people experience your new virtual reality is to bundle it and put it on a commercial web service.

Packaging React VR for release on a website

The basic process is easy with the script React VR provides:

1. Go to the VR directory where you normally run npm start, and run the npm run bundle command.
2. Go to your website the same way you normally upload files, and create a directory called vr.
3. In your project directory, in our case f:\ReactVR\WalkInAMaze, find the following files in the vr\build folder: client.bundle.js and index.bundle.js.
4. Copy those to your website.
5. Make a directory called static_assets and copy all of the files that your app uses from AppName\static_assets into the new static_assets folder.
6. Ensure that you have MIME mapping set up for all of your content; in particular, .obj, .mtl, and .gltf files may need new mappings.
Check with your web server documentation:

- For glTF files, use model/gltf-binary
- Any .bin files used by glTF should be application/octet-stream
- For .obj files, I've used application/octet-stream
- The official list is at http://bit.ly/MimeTypes
- Very generally, application/octet-stream will send the files "exactly" as they are on the server, so it is a sort of general-purpose "catch all"

7. Copy the index.html from the root of your application to the directory on your website where you are publishing the app; in our case, it's the vr directory, so the file sits alongside the two .js files.
8. Modify index.html as follows (note the change to ./index.vr):

<html>
<head>
<title>WalkInAMaze</title>
<style>body { margin: 0; }</style>
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">
</head>
<body>
<!-- When you're ready to deploy your app, update this line to point to your compiled client.bundle.js -->
<script src="./client.bundle?platform=vr"></script>
<script>
// Initialize the React VR application
ReactVR.init(
// When you're ready to deploy your app, update this line to point to
// your compiled index.bundle.js
'./index.vr.bundle?platform=vr&dev=false',
// Attach it to the body tag
document.body
);
</script>
</body>
</html>

Note that for a production release, which means you're pointing to a prebuilt bundle on a static web server and not the React Native bundler, the dev and platform flags actually won't do anything, so there's no difference between dev=true, dev=false, or even dev=foobar.

Obtaining releases and attribution

If you used any assets from anywhere on the web, ensure that you have the proper release. For example, many Daz3D or Poser models do not include the rights to publish the geometry information; including these on your website as an OBJ or glTF file may be a violation of that agreement. Someone could fairly easily download the model, or nearly all the geometry, and then use it for something else. I am not a lawyer; you should check with wherever you get your models to ensure that you have permission and, if necessary, attribute properly. Attribution licenses are a little difficult in a VR world, unless you embed the attribution into a graphic somewhere; as we've seen, adding text can sometimes be distracting, and you will always have scale issues. If you embed a VR world in a page with <iframe>, you can always give proper attribution on the HTML side. However, this isn't really VR.

Checking image sizes and using content delivery sites

Some of the images you use, especially the ones in a <pano> statement, can be quite large. You may need to optimize these for proper web speed and responsiveness. This is a fairly general topic, but one thing that can help is a content delivery network (CDN), especially if your world will be a high-volume one. Adding a CDN to your web server is easy. You host your asset files from a separate location, and you pass the root directory as the assetRoot in the ReactVR.init() call. For example, if your files were hosted at https://cdn.example.com/vr_assets/, you would change the method call in index.html to include the following third argument:

ReactVR.init(
'./index.bundle.js?platform=vr&dev=false',
document.body,
{ assetRoot: 'https://cdn.example.com/vr_assets/' }
);

Optimizing your models

If you were watching the web console, you may have noticed the same model being loaded over and over. That is not necessarily the most efficient approach.
Consider other techniques, such as passing a model to the various child components as a prop. Polygon decimation is another technique that is very valuable in optimizing models for the web and VR. With the glTF file format, you can use normal maps and still make a low-polygon model look like a high-resolution one. Techniques for doing this are well documented in the game development field, and they really do work well. You should also optimize models so as not to include unseen geometry. If you are showing a car model with blacked-out windows, for example, there is no need to load engine and interior detail (unless the windows are transparent). This sounds obvious, although I found that the lamp I used to illustrate the lighting examples had almost triple the number of polygons needed; the glass lamp shade had inner and outer polygons that were inside the model.

We learned how to do version upgrades and, if need be, how to do rip and replace upgrades. We further discussed when to do an upgrade and how to publish the app on the web. If you are interested in how to include existing high-performance web code in a VR app, you can refer to the book Getting Started with React VR.

Build a Virtual Reality Solar System in Unity for Google Cardboard
Understanding the hype behind Magic Leap's New Augmented Reality Headsets
Leap Motion open sources its $100 augmented reality headset, North Star

Working with the Vue-router plugin for SPAs

Pravin Dhandre
07 Jun 2018
5 min read
Single-Page Applications (SPAs) are web applications that load a single HTML page and update that page dynamically based on user interaction with the app. These SPAs use AJAX and HTML5 to create fluid, responsive web applications without constant page reloads. In this tutorial, we show a step-by-step approach to installing the extremely powerful Vue-router plugin for building Single-Page Applications. This article is an excerpt from a book written by Mike Street, titled Vue.js 2.x by Example.

Similar to how you add Vue and Vuex to your applications, you can either include the library directly from unpkg, or head to the following URL and download a local copy for yourself: https://unpkg.com/Vue-router. Add the JavaScript to a new HTML document, along with Vue and your application's JavaScript, and create an application container element, your view, as well. In these examples, the Vue-router JavaScript file is saved as router.js.

Initialize VueRouter in your JavaScript application

We are now ready to add VueRouter and utilize its power. Before we do that, however, we need to create some very simple components which we can load and display based on the URL. As we are going to be loading the components with the router, we don't need to register them with Vue.component; instead, we create JavaScript objects with the same properties as a Vue component. For this first exercise, we are going to create two pages: a Home page and an About page. Found on most websites, these should help give you context as to what is loading where and when.

Create two templates in your HTML page for us to use. Don't forget to encapsulate all your content in one "root" element (represented here by wrapping div tags). You also need to ensure you declare the templates before your application JavaScript is loaded. We've created a Home page template, with the id of homepage, and an About page, containing some placeholder text from lorem ipsum, with the id of about. Then create two components in your JavaScript which reference these two templates.

The next step is to give the router a placeholder in which to render the components in the view. This is done by using a custom <router-view> HTML element. Using this element gives you control over where your content will render. It allows us to have a header and footer right in the app view, without needing to deal with messy templates or include the components themselves. Add a header, main, and footer element to your app. Give yourself a logo in the header and credits in the footer; in the main HTML element, place the router-view placeholder. Everything in the app view is optional, except the router-view, but it gives you an idea of how the router HTML element can be implemented into a site structure.

The next stage is to initialize the Vue-router and instruct Vue to use it. Create a new instance of VueRouter and add it to the Vue instance, similar to how we added Vuex in the previous section. We now need to tell the router about our routes (or paths), and which component it should load when it encounters each one. Create an object inside the Vue-router instance with a key of routes and an array as the value. This array needs to include an object for each route. Each route object contains a path and a component key. The path is a string of the URL that you want to load the component on. Vue-router serves up components on a first-come-first-served basis.
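Since the code screenshots from the original article are not reproduced here, the following is a minimal consolidated sketch of the setup just described. The template ids (homepage, about) match the text; everything else (element ids, file names) is an assumption:

<div id="app">
  <header><div class="logo">Logo</div></header>
  <main>
    <router-view></router-view>
  </main>
  <footer><p>Credits</p></footer>
</div>

<script type="text/x-template" id="homepage">
  <div><h1>Home</h1></div>
</script>
<script type="text/x-template" id="about">
  <div><h1>About</h1><p>Lorem ipsum dolor sit amet.</p></div>
</script>

<script src="https://unpkg.com/vue"></script>
<script src="router.js"></script>
<script>
  // Plain objects with the same properties as Vue components;
  // they are not registered via Vue.component because the router loads them.
  const Home = { template: '#homepage' };
  const About = { template: '#about' };

  const router = new VueRouter({
    routes: [
      { path: '/', component: Home },
      { path: '/about', component: About }
    ]
  });

  // Pass the router instance to Vue, much like adding Vuex
  new Vue({
    el: '#app',
    router
  });
</script>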
Vue-router serves up components on a first-come-first-served basis. For example, if there are several routes with the same path, the first one encountered is used. Ensure each route has the beginning slash; this tells the router it is a root page and not a sub-page.

Press save and view your app in the browser. You should be presented with the content of the Home template component. If you observe the URL, you will notice that on page load a hash and forward slash (#/) are appended to the path. This is the router creating a method for browsing the components and utilizing the address bar. If you change this to the path of your second route, #/about, you will see the contents of the About component.

Vue-router is also able to use the JavaScript history API to create prettier URLs. For example, yourdomain.com/index.html#/about would become yourdomain.com/about. This is activated by adding mode: 'history' to your VueRouter instance.

With this, you should now be familiar with Vue-router and how to initialize it for creating new routes and paths for different web pages on your website. Do check out the book Vue.js 2.x by Example to start developing building blocks for a successful e-commerce website.

What is React.js and how does it work?
Why has Vue.js become so popular?
Building a real-time dashboard with Meteor and Vue.js


8 Reasons why architects love API driven architecture

Aaron Lazar
07 Jun 2018
6 min read
Every day, we see a new architecture popping up and being labeled as a modern architecture for application development. That's what happened with microservices in the beginning, and then it all went for a toss when they were termed a design pattern rather than an architecture as a whole. APIs are growing in popularity and are even being used as a basis to draw out the architecture of applications. We're going to try and understand some of the top factors that make architects (and developers) appreciate API driven architectures over the other "modern" and upcoming architectures.

Before we get to the reasons, let's understand where I'm coming from in the first place. We recently published our findings from the Skill Up survey that we conducted with 8,000-odd IT pros. We asked them various questions ranging from what their favourite tools were, to whether they felt they knew more than what their managers did. One of the questions was aimed at finding out which of the modern architectures interested them the most. The choices were Chaos Engineering, API Driven Architecture, and Evolutionary Architecture.

Source: Skill Up 2018

From the results, it's evident that they're more inclined towards API driven architecture. Or maybe those who didn't really find the architecture of their choice among the lot simply chose API driven to be the best of the lot. But why do architects love API driven development? I've been thinking about it a bit and thought I would come up with a few reasons as to why this might be so. So here goes…

Reason #1: The big split between the backend and frontend

Also known as Split Stack Development, API driven architecture allows the backend and frontend of the application to be decoupled. This allows developers and architects to mitigate any dependencies that each end might have, or rather impose on, the other. Instead of having those dependencies, each end communicates with the other via APIs. This is extremely beneficial in the sense that each end can be built in completely different tools and technologies. For example, the backend could be in Python/Java, while the front end is built in JavaScript.

Reason #2: Sensibility in scalability

When APIs are the foundation of an architecture, they enable the organisation to scale the app by simply plugging in services as and when needed, instead of having to modify the app itself. This is a great way to plug in and plug out functionality as and when needed without disrupting the original architecture.

Reason #3: Parallel development, aka Agile

When different teams work on the front and back end of the application, there's no reason for them to be working together constantly. That doesn't mean they don't work together at all; rather, the only factor they have to agree upon is the API structure and nothing else. This is because of Reason #1, where both layers of the architecture are disconnected or decoupled. This enables teams to be more flexible and agile when developing the application. It is only at the testing and deployment stages that the teams will collaborate more.

Reason #4: API as a product

This is more of a business case rather than a developer-centric one, but I thought I should add it in anyway. There's something new that popped up on the Thoughtworks Radar a few months ago: API-as-a-product. As a matter of fact, you could consider this similar to API-as-a-Service. Organisations like Salesforce have been offering their services in the form of APIs.
For example, suppose you’re using Salesforce CRM and you want to extend the functionality, all you need to do is use the APIs for extending the system. Google is another good example of a company that offers APIs as products. This is a great way to provide extensibility instead of having a separate application altogether. Individual APIs or groups of them can be priced with subscription plans. These plans contain not only access to the APIs themselves, but also a defined number of calls or data that is allowed. Reason #5: Hiding underlying complexity In an API driven architecture, all components that are connected to the API are modular, exist on their own and communicate via the API. The modular nature of the application makes it easier to test and maintain. Moreover, if you’re using or consuming someone else’s API, you needn’t learn/decipher the entire code’s working, rather you can just plug in the API and use it. That reduces complexity to a great extent. Reason #6: Business Logic comes first API driven architecture allows developers to focus on the Business Logic, rather than having to worry about structuring the application. The initial API structure is all that needs to be planned out, after which each team goes forth and develops the individual APIs. This greatly reduces development time as well. Reason #7: IoT loves APIs API architecture makes for a great way to build IoT applications, as IoT needs a great deal of scalability. An application that is built on a foundation of APIs is a dream for IoT developers as devices can be easily connected to the mother app. I expect everything to be connected via APIs in the next 5 years. If it doesn’t happen, you can always get back at me in the comments section! ;) Reason #8: APIs and DevOps are a match made in Heaven APIs allow for a more streamlined deployment pipeline, while also eliminating the production of duplicate assets by development teams. Moreover, deployments can reach production a lot faster through these slick pipelines, thus increasing efficiency and reducing costs by a great deal. The merger of DevOps and API driven architecture, however, is not a walk in the park, as it requires a change in mindset. Teams need to change culturally, to become enablers of reusable, self-service consumption. The other side of the coin Well, there’s always two sides to the coin, and there are some drawbacks to API driven architecture. For starters, you’ll have APIs all over the place! While that was the point in the first place, it becomes really tedious to manage all those APIs. Secondly, when you have things running in parallel, you require a lot of processing power - more cores, more infrastructure. Another important issue is regarding security. With so many cyber attacks, and privacy breaches, an API driven architecture only invites trouble with more doors for hackers to open. So apart from the above flipside, those were some of the reasons I could think of, as to why Architects would be interested in an API driven architecture. APIs give customers, i.e both internal and external stakeholders, the freedom to leverage enterprise’s assets, while customizing as required. In a way, APIs aren’t just ways to offer integration and connectivity for large enterprise apps. Rather, they should be looked at as a way to drive faster and more modern software architecture and delivery. What are web developers favorite front-end tools? The best backend tools in web development The 10 most common types of DoS attacks you need to know

Implementing feedforward networks with TensorFlow

Aarthi Kumaraswamy
07 Jun 2018
12 min read
Deep feedforward networks, also called feedforward neural networks, are sometimes also referred to as Multilayer Perceptrons (MLPs). The goal of a feedforward network is to approximate some function f∗. For example, for a classifier, y=f∗(x) maps an input x to a label y. A feedforward network defines a mapping from input to label, y=f(x;θ), and learns the value of the parameter θ that results in the best function approximation. This tutorial is an excerpt from the book Neural Network Programming with Tensorflow by Manpreet Singh Ghotra and Rajdeep Dua. With this book, learn how to implement more advanced neural networks like CNNs, RNNs, GANs, deep belief networks, and others in Tensorflow.

How do feedforward networks work?

Feedforward networks are a conceptual stepping stone on the path to recurrent networks, which power many natural language applications. Feedforward neural networks are called networks because they compose together many different functions. These functions are composed in a directed acyclic graph, and the model is associated with a directed acyclic graph describing how the functions are composed together. For example, three functions f(1), f(2), and f(3) can be connected to form f(x) = f(3)(f(2)(f(1)(x))). These chain structures are the most commonly used structures of neural networks. In this case, f(1) is called the first layer of the network, f(2) is called the second layer, and so on. The overall length of the chain gives the depth of the model; it is from this terminology that the name deep learning arises. The final layer of a feedforward network is called the output layer.

(Figure: Diagram showing various functions activated on input x to form a neural network)

These networks are called neural because they are inspired by neuroscience. Each hidden layer is a vector, and the dimensionality of these hidden layers determines the width of the model.

Implementing feedforward networks with TensorFlow

Feedforward networks can be easily implemented using TensorFlow by defining placeholders for hidden layers, computing the activation values, and using them to calculate predictions. Let's take an example of classification with a feedforward network:

X = tf.placeholder("float", shape=[None, x_size])
y = tf.placeholder("float", shape=[None, y_size])
weights_1 = initialize_weights((x_size, hidden_size), stddev)
weights_2 = initialize_weights((hidden_size, y_size), stddev)
sigmoid = tf.nn.sigmoid(tf.matmul(X, weights_1))
y_pred = tf.matmul(sigmoid, weights_2)   # use a separate name so the label placeholder y is not overwritten

Once the predicted value tensor has been defined, we calculate the cost function:

cost = tf.reduce_mean(tf.nn.OPERATION_NAME(labels=<actual value>, logits=<predicted value>))
updates_sgd = tf.train.GradientDescentOptimizer(sgd_step).minimize(cost)

Here, OPERATION_NAME could be one of the following:

tf.nn.sigmoid_cross_entropy_with_logits: Calculates sigmoid cross entropy on incoming logits and labels:

sigmoid_cross_entropy_with_logits(
    _sentinel=None,
    labels=None,
    logits=None,
    name=None
)

_sentinel: Used to prevent positional parameters. Internal, do not use.
labels: A tensor of the same type and shape as logits.
logits: A tensor of type float32 or float64.

The formula implemented is (x = logits, z = labels): max(x, 0) - x * z + log(1 + exp(-abs(x))).
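As a quick sanity check on that formula, the following snippet is not from the book; it is a small sketch written against the TensorFlow 1.x-era API used throughout this excerpt, comparing the op's output with the formula computed by hand on two toy values:

import tensorflow as tf  # TensorFlow 1.x, matching the tf.Session-style code in this excerpt

logits = tf.constant([2.0, -1.0])   # x in the formula
labels = tf.constant([1.0, 0.0])    # z in the formula

op_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
# max(x, 0) - x * z + log(1 + exp(-abs(x))), written out with basic ops
manual_loss = tf.maximum(logits, 0.0) - logits * labels + tf.log(1.0 + tf.exp(-tf.abs(logits)))

with tf.Session() as sess:
    print(sess.run(op_loss))      # approximately [0.1269  0.3133]
    print(sess.run(manual_loss))  # should match the line above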
tf.nn.softmax: Performs softmax activation on the incoming tensor. This only normalizes to make sure all the probabilities in a tensor row add up to one. It cannot be directly used as a classification loss.

softmax = exp(logits) / reduce_sum(exp(logits), dim)

logits: A non-empty tensor. Must be one of the following types: half, float32, or float64.
dim: The dimension softmax will be performed on. The default is -1, which indicates the last dimension.
name: A name for the operation (optional).

tf.nn.log_softmax: Calculates the log of the softmax function, which is numerically more stable than computing the softmax and taking its log separately. This function is also just a normalization function.

log_softmax(
    logits,
    dim=-1,
    name=None
)

logits: A non-empty tensor. Must be one of the following types: half, float32, or float64.
dim: The dimension softmax will be performed on. The default is -1, which indicates the last dimension.
name: A name for the operation (optional).

tf.nn.softmax_cross_entropy_with_logits:

softmax_cross_entropy_with_logits(
    _sentinel=None,
    labels=None,
    logits=None,
    dim=-1,
    name=None
)

_sentinel: Used to prevent positional parameters. For internal use only.
labels: Each row labels[i] must be a valid probability distribution.
logits: Unscaled log probabilities.
dim: The class dimension. Defaults to -1, which is the last dimension.
name: A name for the operation (optional).

This op computes softmax cross entropy between logits and labels. While the classes are mutually exclusive, their probabilities need not be; all that is required is that each row of labels is a valid probability distribution. For exclusive labels (where one and only one class is true at a time), use sparse_softmax_cross_entropy_with_logits.

tf.nn.sparse_softmax_cross_entropy_with_logits:

sparse_softmax_cross_entropy_with_logits(
    _sentinel=None,
    labels=None,
    logits=None,
    name=None
)

labels: Tensor of shape [d_0, d_1, ..., d_(r-1)] (where r is the rank of labels and result) and dtype int32 or int64. Each entry in labels must be an index in [0, num_classes). Other values will raise an exception when this operation is run on the CPU, and return NaN for the corresponding loss and gradient rows on the GPU.
logits: Unscaled log probabilities of shape [d_0, d_1, ..., d_(r-1), num_classes] and dtype float32 or float64.

This op computes sparse softmax cross entropy between logits and labels. The probability of a given label is considered exclusive. Soft classes are not allowed, and the labels vector must provide a single specific index for the true class for each row of logits.

tf.nn.weighted_cross_entropy_with_logits:

weighted_cross_entropy_with_logits(
    targets,
    logits,
    pos_weight,
    name=None
)

targets: A tensor of the same type and shape as logits.
logits: A tensor of type float32 or float64.
pos_weight: A coefficient to use on the positive examples.

This is similar to sigmoid_cross_entropy_with_logits(), except that pos_weight allows a trade-off of recall and precision by up- or down-weighting the cost of a positive error relative to a negative error.
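To make the difference between the dense and sparse variants concrete, here is a small illustrative snippet (not from the book; it assumes the same TensorFlow 1.x API). The same single example is expressed once as a one-hot probability row and once as a plain class index, and both ops should return the same loss:

import tensorflow as tf

logits = tf.constant([[2.0, 0.5, 0.3]])         # unscaled scores for one example, three classes
onehot_labels = tf.constant([[1.0, 0.0, 0.0]])  # class 0 as a probability distribution
index_labels = tf.constant([0])                 # class 0 as an index

dense_loss = tf.nn.softmax_cross_entropy_with_logits(labels=onehot_labels, logits=logits)
sparse_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=index_labels, logits=logits)

with tf.Session() as sess:
    print(sess.run(dense_loss))   # roughly [0.34] for this example
    print(sess.run(sparse_loss))  # same value as above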
Analyzing the Iris dataset with a Tensorflow feedforward network

Let's look at a feedforward example using the Iris dataset. You can download the dataset from https://github.com/ml-resources/neuralnetwork-programming/blob/ed1/ch02/iris/iris.csv and the target labels from https://github.com/ml-resources/neuralnetwork-programming/blob/ed1/ch02/iris/target.csv. In the Iris dataset, we will use 150 rows of data made up of 50 samples from each of three Iris species: Iris setosa, Iris virginica, and Iris versicolor.

(Figure: Petal geometry compared from three iris species: Iris Setosa, Iris Virginica, and Iris Versicolor)

In the dataset, each row contains data for each flower sample: sepal length, sepal width, petal length, petal width, and flower species. Flower species are stored as integers, with 0 denoting Iris setosa, 1 denoting Iris versicolor, and 2 denoting Iris virginica.

First, we will create a run() function that takes three parameters: hidden layer size h_size, standard deviation for weights stddev, and step size of Stochastic Gradient Descent sgd_step:

def run(h_size, stddev, sgd_step)

Input data loading is done using the genfromtxt function in numpy. The Iris data loaded has a shape of L: 150 and W: 4. Data is loaded into the all_X variable. Target labels are loaded from target.csv into all_y with a shape of L: 150, W: 3:

def load_iris_data():
    from numpy import genfromtxt
    data = genfromtxt('iris.csv', delimiter=',')
    target = genfromtxt('target.csv', delimiter=',').astype(int)
    # Prepend the column of 1s for bias
    L, W = data.shape
    all_X = np.ones((L, W + 1))
    all_X[:, 1:] = data
    num_labels = len(np.unique(target))
    all_y = np.eye(num_labels)[target]
    return train_test_split(all_X, all_y, test_size=0.33, random_state=RANDOMSEED)

Once data is loaded, we initialize the weights matrices based on x_size, y_size, and h_size, with the standard deviation passed to the run() method. Here, x_size = 5, y_size = 3, and h_size = 128 (or any other number chosen for neurons in the hidden layer):

# Size of Layers
x_size = train_x.shape[1]   # Input nodes: 4 features and 1 bias
y_size = train_y.shape[1]   # Outcomes (3 iris flowers)

# variables
X = tf.placeholder("float", shape=[None, x_size])
y = tf.placeholder("float", shape=[None, y_size])
weights_1 = initialize_weights((x_size, h_size), stddev)
weights_2 = initialize_weights((h_size, y_size), stddev)

Next, we make the prediction using sigmoid as the activation function defined in the forward_propagation() function:

def forward_propagation(X, weights_1, weights_2):
    sigmoid = tf.nn.sigmoid(tf.matmul(X, weights_1))
    y = tf.matmul(sigmoid, weights_2)
    return y

First, the sigmoid output is calculated from input X and weights_1. This is then used to calculate y as a matrix multiplication of sigmoid and weights_2:

y_pred = forward_propagation(X, weights_1, weights_2)
predict = tf.argmax(y_pred, dimension=1)

Next, we define the cost function and optimization using gradient descent. Let's look at the GradientDescentOptimizer being used. It is defined in the tf.train.GradientDescentOptimizer class and implements the gradient descent algorithm. To construct an instance, we use the following constructor and pass sgd_step as a parameter:

# constructor for GradientDescentOptimizer
__init__(
    learning_rate,
    use_locking=False,
    name='GradientDescent'
)

The arguments passed are explained here:

learning_rate: A tensor or a floating point value. The learning rate to use.
use_locking: If True, use locks for update operations.
name: Optional name prefix for the operations created when applying gradients. The default name is "GradientDescent".

The following code implements the cost function:

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_pred))
updates_sgd = tf.train.GradientDescentOptimizer(sgd_step).minimize(cost)

Next, we will implement the following steps:

1. Initialize the TensorFlow session: sess = tf.Session().
2. Initialize all the variables using tf.initialize_all_variables(); run the returned op in the session.
3. Iterate over the steps (1 to 50); for each example in train_x and train_y, execute updates_sgd.
4. Calculate the train_accuracy and test_accuracy.
We stored the accuracy for each step in a list so that we could plot a graph:

init = tf.initialize_all_variables()
steps = 50
sess.run(init)
x = np.arange(steps)
test_acc = []
train_acc = []
print("Step, train accuracy, test accuracy")

for step in range(steps):
    # Train with each example
    for i in range(len(train_x)):
        sess.run(updates_sgd, feed_dict={X: train_x[i: i + 1], y: train_y[i: i + 1]})

    train_accuracy = np.mean(np.argmax(train_y, axis=1) ==
                             sess.run(predict, feed_dict={X: train_x, y: train_y}))
    test_accuracy = np.mean(np.argmax(test_y, axis=1) ==
                            sess.run(predict, feed_dict={X: test_x, y: test_y}))

    print("%d, %.2f%%, %.2f%%" % (step + 1, 100. * train_accuracy, 100. * test_accuracy))
    test_acc.append(100. * test_accuracy)
    train_acc.append(100. * train_accuracy)

Code execution

Let's run this code for an h_size of 128, a standard deviation of 0.1, and an sgd_step of 0.01:

def run(h_size, stddev, sgd_step):
    ...

def main():
    run(128, 0.1, 0.01)

if __name__ == '__main__':
    main()

The preceding code outputs a graph which plots the steps versus the test and train accuracy.

Let's compare the change in SGD steps and its effect on training accuracy. The following code is very similar to the previous code example, but we will rerun it for multiple SGD steps to see how SGD steps affect accuracy levels:

def run(h_size, stddev, sgd_steps):
    ....
    test_accs = []
    train_accs = []
    time_taken_summary = []
    for sgd_step in sgd_steps:
        start_time = time.time()
        updates_sgd = tf.train.GradientDescentOptimizer(sgd_step).minimize(cost)
        sess = tf.Session()
        init = tf.initialize_all_variables()
        steps = 50
        sess.run(init)
        x = np.arange(steps)
        test_acc = []
        train_acc = []
        print("Step, train accuracy, test accuracy")
        for step in range(steps):
            # Train with each example
            for i in range(len(train_x)):
                sess.run(updates_sgd, feed_dict={X: train_x[i: i + 1], y: train_y[i: i + 1]})
            train_accuracy = np.mean(np.argmax(train_y, axis=1) ==
                                     sess.run(predict, feed_dict={X: train_x, y: train_y}))
            test_accuracy = np.mean(np.argmax(test_y, axis=1) ==
                                    sess.run(predict, feed_dict={X: test_x, y: test_y}))
            print("%d, %.2f%%, %.2f%%" % (step + 1, 100. * train_accuracy, 100. * test_accuracy))
            #x.append(step)
            test_acc.append(100. * test_accuracy)
            train_acc.append(100. * train_accuracy)
        end_time = time.time()
        diff = end_time - start_time
        time_taken_summary.append((sgd_step, diff))
        t = [np.array(test_acc)]
        t.append(train_acc)
        train_accs.append(train_acc)

The output of the preceding code will be an array with training and test accuracy for each SGD step value. In our example, we called the run function with SGD step values of [0.01, 0.02, 0.03]:

def main():
    sgd_steps = [0.01, 0.02, 0.03]
    run(128, 0.1, sgd_steps)

if __name__ == '__main__':
    main()

The resulting plot shows how training accuracy changes with sgd_steps. For an SGD value of 0.03, it reaches a higher accuracy faster, as the step size is larger.

In this post, we built our first neural network, which was feedforward only, and used it for classifying the contents of the Iris dataset. You enjoyed a tutorial from the book Neural Network Programming with Tensorflow. To implement advanced neural networks like CNNs, RNNs, GANs, deep belief networks, and others in Tensorflow, grab your copy today!

Neural Network Architectures 101: Understanding Perceptrons
How to Implement a Neural Network with Single-Layer Perceptron
Deep Learning Algorithms: How to classify Irises using multi-layer perceptrons


AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior

Kunal Chaudhari
06 Jun 2018
19 min read
An AI character system needs to be aware of its environment: where the obstacles are, where the enemy is, whether the enemy is visible in the player's sight, and so on. The quality of our Non-Player Character's (NPC's) AI completely depends on the information it can get from the environment. Nothing breaks the level of immersion in a game like an NPC getting stuck behind a wall. Based on the information the NPC can collect, the AI system can decide which logic to execute in response to that data. If the sensory systems do not provide enough data, or the AI system is unable to properly take action on that data, the agent can begin to glitch, or behave in a way contrary to what the developer, or more importantly the player, would expect. Some games have become infamous for their comically bad AI glitches, and it's worth a quick internet search to find some videos of AI glitches for a good laugh.

In this article, we'll learn to implement AI behavior using the concept of a sensory system similar to what living entities have. We will learn the basics of sensory systems, along with some of the different sensory systems that exist. You are reading an extract from Unity 2017 Game AI Programming - Third Edition, written by Ray Barrera, Aung Sithu Kyaw, and Thet Naing Swe.

Basic sensory systems

Our agent's sensory systems should believably emulate real-world senses such as vision, sound, and so on, to build a model of its environment, much like we do as humans. Have you ever tried to navigate a room in the dark after shutting off the lights? It gets more and more difficult as you move from your initial position when you turned the lights off, because your perspective shifts and you have to rely more and more on your fuzzy memory of the room's layout. While our senses rely on and take in a constant stream of data to navigate the environment, our agent's AI is a lot more forgiving, giving us the freedom to examine the environment at predetermined intervals. This allows us to build a more efficient system in which we can focus only on the parts of the environment that are relevant to the agent.

The concept of a basic sensory system is that there will be two components, Aspect and Sense. Our AI characters will have senses, such as perception, smell, and touch. These senses will look out for specific aspects such as enemies and bandits. For example, you could have a patrol guard AI with a perception sense that's looking for other game objects with an enemy aspect, or it could be a zombie entity with a smell sense looking for other entities with an aspect defined as a brain. For our demo, this is basically what we are going to implement: a base interface called Sense that will be implemented by other custom senses. In this article, we'll implement perspective and touch senses. Perspective is what animals use to see the world around them. If our AI character sees an enemy, we want to be notified so that we can take some action. Likewise with touch, when an enemy gets too close, we want to be able to sense that, almost as if our AI character can hear that the enemy is nearby. Then we'll write a minimal Aspect class that our senses will be looking for.

Cone of sight

A raycast is a feature in Unity that allows you to determine which objects are intersected by a line cast from a point in a given direction. While this is a fairly efficient way to handle visual detection in a simple way, it doesn't accurately model the way vision works for most entities.
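As a point of reference before we look at the cone approach, a bare-bones line-of-sight raycast in Unity could look something like the following sketch. This is not code from the book's sample project; the class name, the public field, and the use of a Player tag are assumptions for illustration only:

using UnityEngine;

public class LineOfSight : MonoBehaviour
{
    public float viewDistance = 100.0f;

    void Update()
    {
        // Assumes the player object is tagged "Player", as the tank will be later in this article
        GameObject player = GameObject.FindGameObjectWithTag("Player");
        if (player == null)
        {
            return;
        }

        Vector3 direction = player.transform.position - transform.position;
        RaycastHit hit;

        // The ray stops at the first collider it meets, so a wall between us and the player blocks the check
        if (Physics.Raycast(transform.position, direction, out hit, viewDistance))
        {
            if (hit.collider.gameObject == player)
            {
                print("Player is in line of sight");
            }
        }
    }
}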
An alternative to using the line of sight is using a cone-shaped field of vision. As the following figure illustrates, the field of vision is literally modeled using a cone shape. This can be in 2D or 3D, as appropriate for your type of game:

The preceding figure illustrates the concept of a cone of sight. In this case, beginning with the source, that is, the agent's eyes, the cone grows, but becomes less accurate with distance, as represented by the fading color of the cone. The actual implementation of the cone can vary from a basic overlap test to a more complex realistic model, mimicking eyesight. In a simple implementation, it is only necessary to test whether an object overlaps with the cone of sight, ignoring distance or periphery. A complex implementation mimics eyesight more closely; as the cone widens away from the source, the field of vision grows, but the chance of getting to see things toward the edges of the cone diminishes compared to those near the center of the source.

Hearing, feeling, and smelling using spheres

One very simple yet effective way of modeling sounds, touch, and smell is via the use of spheres. For sounds, for example, we can imagine the center as being the source and the loudness dissipating the farther from the center the listener is. Inversely, the listener can be modeled instead of, or in addition to, the source of the sound. The listener's hearing is represented by a sphere, and the sounds closest to the listener are more likely to be "heard." We can modify the size and position of the sphere relative to our agent to accommodate feeling and smelling. The following figure represents our sphere and how our agent fits into the setup:

As with sight, the probability of an agent registering the sensory event can be modified, based on the distance from the sensor or as a simple overlap event, where the sensory event is always detected as long as the source overlaps the sphere.

Expanding AI through omniscience

In a nutshell, omniscience is really just a way to make your AI cheat. While your agent doesn't necessarily know everything, it simply means that they can know anything. In some ways, this can seem like the antithesis to realism, but often the simplest solution is the best solution. Allowing our agent access to seemingly hidden information about its surroundings or other entities in the game world can be a powerful tool to provide an extra layer of complexity. In games, we tend to model abstract concepts using concrete values. For example, we may represent a player's health with a numeric value ranging from 0 to 100. Giving our agent access to this type of information allows it to make realistic decisions, even though having access to that information is not realistic. You can also think of omniscience as your agent being able to use the force or sense events in your game world without having to physically experience them. While omniscience is not necessarily a specific pattern or technique, it's another tool in your toolbox as a game developer to cheat a bit and make your game more interesting by, in essence, bending the rules of AI, and giving your agent data that they may not otherwise have had access to through physical means.

Getting creative with sensing

While cones, spheres, and lines are among the most basic ways an agent can see, hear, and perceive their environment, they are by no means the only ways to implement these senses. If your game calls for other types of sensing, feel free to combine these patterns.
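As a rough illustration of the sphere idea described above, an overlap test for "hearing" could be sketched in Unity with Physics.OverlapSphere. This snippet is not part of the book's sample project; the class name, radius value, and the reuse of the Aspect component (which is only introduced later in this article) are assumptions:

using UnityEngine;

public class HearingSense : MonoBehaviour
{
    public float hearingRadius = 10.0f;

    void Update()
    {
        // Everything whose collider overlaps the sphere around the listener counts as "heard"
        Collider[] heard = Physics.OverlapSphere(transform.position, hearingRadius);
        foreach (Collider other in heard)
        {
            Aspect aspect = other.GetComponent<Aspect>();
            if (aspect != null && aspect.aspectType == Aspect.AspectTypes.PLAYER)
            {
                print("Heard something with the PLAYER aspect");
            }
        }
    }
}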
Want to use a cylinder or a sphere to represent a field of vision? Go for it. Want to use boxes to represent the sense of smell? Sniff away! Using the tools at your disposal, come up with creative ways to model sensing in terms relative to your player. Combine different approaches to create unique gameplay mechanics for your games by mixing and matching these concepts. For example, a magic-sensitive but blind creature could completely ignore a character right in front of them until they cast or receive the effect of a magic spell. Maybe certain NPCs can track the player using smell, and walking through a collider marked water can clear the scent from the player so that the NPC can no longer track him. As you progress through the book, you'll be given all the tools to pull these and many other mechanics off: sensing, decision-making, pathfinding, and so on. As we cover some of these techniques, start thinking about creative twists for your game.

Setting up the scene

In order to get started with implementing the sensing system, you can jump right into the example provided for this article, or set up the scene yourself by following these steps:

1. Create a few barriers to block the line of sight from our AI character to the tank. These will be short but wide cubes grouped under an empty game object called Obstacles.
2. Add a plane to be used as a floor.
3. Add a directional light so that we can see what is going on in our scene.

As you can see in the example, there is a target 3D model, which we use for our player, and we represent our AI agent using a simple cube. We will also have a Target object to show us where the tank will move to in our scene. For simplicity, our example provides a point light as a child of the Target so that we can easily see our target destination in the game view. Our scene hierarchy will look similar to the following screenshot after you've set everything up correctly:

Now we will position the tank, the AI character, and the walls randomly in our scene. Increase the size of the plane to something that looks good. Fortunately, in this demo, our objects float, so nothing will fall off the plane. Also, be sure to adjust the camera so that we can have a clear view of the following scene:

With the essential setup out of the way, we can begin tackling the code for driving the various systems.

Setting up the player tank and aspect

Our Target object is a simple sphere game object with the mesh renderer removed so that we end up with only the Sphere Collider. Look at the following code in the Target.cs file:

using UnityEngine;

public class Target : MonoBehaviour
{
    public Transform targetMarker;

    void Start () { }

    void Update ()
    {
        int button = 0;
        //Get the point of the hit position when the mouse is being clicked
        if (Input.GetMouseButtonDown(button))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hitInfo;
            if (Physics.Raycast(ray.origin, ray.direction, out hitInfo))
            {
                Vector3 targetPosition = hitInfo.point;
                targetMarker.position = targetPosition;
            }
        }
    }
}

You'll notice we left in an empty Start method in the code. While there is a cost in having empty Start, Update, and other MonoBehaviour events that don't do anything, we can sometimes choose to leave the Start method in during development, so that the component shows an enable/disable toggle in the inspector. Attach this script to our Target object, which is what we assigned in the inspector to the targetMarker variable.
The script detects the mouse click event and then, using a raycast, it detects the mouse click point on the plane in 3D space. After that, it updates the Target object to that position in the world space in the scene. A raycast is a feature of the Unity Physics API that shoots a virtual ray from a given origin towards a given direction, and returns data on any colliders hit along the way.

Implementing the player tank

Our player tank is the simple tank model with a kinematic rigid body component attached. The rigid body component is needed in order to generate trigger events whenever we do collision detection with any AI characters. The first thing we need to do is to assign the tag Player to our tank. The isKinematic flag in Unity's Rigidbody component makes it so that external forces are ignored, so that you can control the Rigidbody entirely from code or from an animation, while still having access to the Rigidbody API.

The tank is controlled by the PlayerTank script, which we will create in a moment. This script retrieves the target position on the map and updates its destination point and direction accordingly. The code in the PlayerTank.cs file is as follows:

using UnityEngine;

public class PlayerTank : MonoBehaviour
{
    public Transform targetTransform;
    public float targetDistanceTolerance = 3.0f;

    private float movementSpeed;
    private float rotationSpeed;

    // Use this for initialization
    void Start ()
    {
        movementSpeed = 10.0f;
        rotationSpeed = 2.0f;
    }

    // Update is called once per frame
    void Update ()
    {
        if (Vector3.Distance(transform.position, targetTransform.position) < targetDistanceTolerance)
        {
            return;
        }

        Vector3 targetPosition = targetTransform.position;
        targetPosition.y = transform.position.y;
        Vector3 direction = targetPosition - transform.position;

        Quaternion tarRot = Quaternion.LookRotation(direction);
        transform.rotation = Quaternion.Slerp(transform.rotation, tarRot, rotationSpeed * Time.deltaTime);
        transform.Translate(new Vector3(0, 0, movementSpeed * Time.deltaTime));
    }
}

The preceding screenshot shows us a snapshot of our script in the inspector once applied to our tank. This script queries the position of the Target object on the map and updates its destination point and direction accordingly. After we assign this script to our tank, be sure to assign our Target object to the targetTransform variable.

Implementing the Aspect class

Next, let's take a look at the Aspect.cs class. Aspect is a very simple class with just one public enum of type AspectTypes called aspectType. That's all of the variables we need in this component. Whenever our AI character senses something, we'll check the aspectType to see whether it's the aspect that the AI has been looking for. The code in the Aspect.cs file looks like this:

using UnityEngine;

public class Aspect : MonoBehaviour
{
    public enum AspectTypes
    {
        PLAYER,
        ENEMY,
    }

    public AspectTypes aspectType;
}

Attach this aspect script to our player tank and set the aspectType to PLAYER, as shown in the following screenshot:

Creating an AI character

Our NPC will be roaming around the scene in a random direction. It'll have the following two senses:

- The perspective sense will check whether the tank aspect is within a set visible range and distance
- The touch sense will detect if the enemy aspect has collided with its box collider, which we'll be adding to the tank in a later step

Because our player tank will have the PLAYER aspect type, the NPC will be looking for any aspectType not equal to its own.
The code in the Wander.cs file is as follows:

using UnityEngine;

public class Wander : MonoBehaviour
{
    private Vector3 targetPosition;
    private float movementSpeed = 5.0f;
    private float rotationSpeed = 2.0f;
    private float targetPositionTolerance = 3.0f;
    private float minX;
    private float maxX;
    private float minZ;
    private float maxZ;

    void Start()
    {
        minX = -45.0f;
        maxX = 45.0f;
        minZ = -45.0f;
        maxZ = 45.0f;

        //Get Wander Position
        GetNextPosition();
    }

    void Update()
    {
        if (Vector3.Distance(targetPosition, transform.position) <= targetPositionTolerance)
        {
            GetNextPosition();
        }

        Quaternion targetRotation = Quaternion.LookRotation(targetPosition - transform.position);
        transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, rotationSpeed * Time.deltaTime);
        transform.Translate(new Vector3(0, 0, movementSpeed * Time.deltaTime));
    }

    void GetNextPosition()
    {
        targetPosition = new Vector3(Random.Range(minX, maxX), 0.5f, Random.Range(minZ, maxZ));
    }
}

The Wander script generates a new random position in a specified range whenever the AI character reaches its current destination point. The Update method will then rotate our enemy and move it toward this new destination. Attach this script to our AI character so that it can move around in the scene. The Wander script is rather simplistic.

Using the Sense class

The Sense class is the interface of our sensory system that the other custom senses can implement. It defines two virtual methods, Initialize and UpdateSense, which will be implemented in custom senses, and are executed from the Start and Update methods, respectively. Virtual methods are methods that can be overridden using the override modifier in derived classes. Unlike abstract methods, virtual methods do not require that you override them. The code in the Sense.cs file looks like this:

using UnityEngine;

public class Sense : MonoBehaviour
{
    public bool enableDebug = true;
    public Aspect.AspectTypes aspectName = Aspect.AspectTypes.ENEMY;
    public float detectionRate = 1.0f;

    protected float elapsedTime = 0.0f;

    protected virtual void Initialize() { }
    protected virtual void UpdateSense() { }

    // Use this for initialization
    void Start ()
    {
        elapsedTime = 0.0f;
        Initialize();
    }

    // Update is called once per frame
    void Update ()
    {
        UpdateSense();
    }
}

The basic properties include its detection rate to execute the sensing operation, as well as the name of the aspect it should look for. This script will not be attached to any of our objects since we'll be deriving from it for our actual senses.

Giving a little perspective

The perspective sense will detect whether a specific aspect is within its field of view and visible distance. If it sees anything, it will take the specified action, which in this case is to print a message to the console.
The code in the Perspective.cs file looks like this:

using UnityEngine;

public class Perspective : Sense
{
    public int fieldOfView = 45;
    public int viewDistance = 100;

    private Transform playerTransform;
    private Vector3 rayDirection;

    protected override void Initialize()
    {
        playerTransform = GameObject.FindGameObjectWithTag("Player").transform;
    }

    protected override void UpdateSense()
    {
        elapsedTime += Time.deltaTime;
        if (elapsedTime >= detectionRate)
        {
            DetectAspect();
        }
    }

    //Detect perspective field of view for the AI Character
    void DetectAspect()
    {
        RaycastHit hit;
        rayDirection = playerTransform.position - transform.position;
        if ((Vector3.Angle(rayDirection, transform.forward)) < fieldOfView)
        {
            // Detect if player is within the field of view
            if (Physics.Raycast(transform.position, rayDirection, out hit, viewDistance))
            {
                Aspect aspect = hit.collider.GetComponent<Aspect>();
                if (aspect != null)
                {
                    //Check the aspect
                    if (aspect.aspectType != aspectName)
                    {
                        print("Enemy Detected");
                    }
                }
            }
        }
    }
    // The class continues with OnDrawGizmos, shown below

We need to implement the Initialize and UpdateSense methods that will be called from the Start and Update methods of the parent Sense class, respectively. In the DetectAspect method, we first check the angle between the player and the AI's current facing direction. If it's in the field of view range, we shoot a ray in the direction that the player tank is located. The ray length is the value of the visible distance property. The Raycast method will return when it first hits another object. This way, even if the player is in the visible range, the AI character will not be able to see it if it's hidden behind a wall. We then check for an Aspect component, and it will return true only if the object that was hit has an Aspect component and its aspectType is different from its own.

The OnDrawGizmos method draws lines based on the perspective field of view angle and viewing distance so that we can see the AI character's line of sight in the editor window during play testing. Attach this script to our AI character and be sure that the aspect type is set to ENEMY. This method can be illustrated as follows:

    void OnDrawGizmos()
    {
        if (playerTransform == null)
        {
            return;
        }

        Debug.DrawLine(transform.position, playerTransform.position, Color.red);

        Vector3 frontRayPoint = transform.position + (transform.forward * viewDistance);

        //Approximate perspective visualization
        Vector3 leftRayPoint = frontRayPoint;
        leftRayPoint.x += fieldOfView * 0.5f;

        Vector3 rightRayPoint = frontRayPoint;
        rightRayPoint.x -= fieldOfView * 0.5f;

        Debug.DrawLine(transform.position, frontRayPoint, Color.green);
        Debug.DrawLine(transform.position, leftRayPoint, Color.green);
        Debug.DrawLine(transform.position, rightRayPoint, Color.green);
    }
}

Touching is believing

The next sense we'll be implementing is Touch.cs, which triggers when the player tank entity is within a certain area near the AI entity. Our AI character has a box collider component and its IsTrigger flag is on. We need to implement the OnTriggerEnter event, which will be called whenever another collider enters the collision area of this game object's collider. Since our tank entity also has collider and rigid body components, collision events will be raised as soon as the colliders of the AI character and player tank collide.

Unity provides two other trigger events besides OnTriggerEnter: OnTriggerExit and OnTriggerStay. Use these to detect when a collider leaves a trigger, and to fire off every frame that a collider is inside the trigger, respectively.
The code in the Touch.cs file is as follows:

using UnityEngine;

public class Touch : Sense
{
    void OnTriggerEnter(Collider other)
    {
        Aspect aspect = other.GetComponent<Aspect>();
        if (aspect != null)
        {
            //Check the aspect
            if (aspect.aspectType != aspectName)
            {
                print("Enemy Touch Detected");
            }
        }
    }
}

Our sample NPC and tank have BoxCollider components on them already. The NPC has its sensor collider set to IsTrigger = true. If you're setting up the scene on your own, make sure you add the BoxCollider component yourself, and that it covers a wide enough area to trigger easily for testing purposes. Our trigger can be seen in the following screenshot:

The previous screenshot shows the box collider on our enemy AI that we'll use to trigger the touch sense event. In the following screenshot, we can see how our AI character is set up:

For demo purposes, we just print out that the enemy aspect has been detected by the touch sense, but in your own games, you can implement any events and logic that you want.

Testing the results

Hit play in the Unity editor and move the player tank near the wandering AI NPC by clicking on the ground to direct the tank to move to the clicked location. You should see the Enemy touch detected message in the console log window whenever our AI character gets close to our player tank:

The previous screenshot shows an AI agent with touch and perspective senses looking for another aspect. Move the player tank in front of the NPC, and you'll get the Enemy detected message. If you go to the editor view while running the game, you should see the debug lines being rendered. This is because of the OnDrawGizmos method implemented in the Perspective sense class.

To summarize, we introduced the concept of using sensors and implemented two distinct senses, perspective and touch, for our AI character. If you enjoyed this excerpt, check out the book Unity 2017 Game AI Programming - Third Edition, to explore the brand-new features in Unity 2017.

How to use arrays, lists, and dictionaries in Unity for 3D game development
How to create non-player Characters (NPC) with Unity 2018


Troubleshooting techniques in vRealize Operations components

Vijin Boricha
06 Jun 2018
11 min read
vRealize Operations Manager helps in managing the health, efficiency, and compliance of virtualized environments. In this tutorial, we cover the latest troubleshooting techniques for vRealize Operations. Let's see what we can do to monitor and troubleshoot the key vRealize Operations components and services. This is an excerpt from Mastering vRealize Operations Manager - Second Edition, written by Spas Kaloferov, Scott Norris, and Christopher Slater.

Troubleshooting Services

Let's first take a look into some of the most important vRealize Operations services.

The Apache2 service

vRealize Operations internally uses the Apache2 web server to host the Admin UI, Product UI, and Suite API. Here is a useful service log location:

- access_log & error_log (location: /storage/log/var/log/apache2): stores log information for the Apache service

The Watchdog service

Watchdog is a vRealize Operations service that maintains the necessary daemons/services and attempts to restart them as necessary should there be any failure. The vcops-watchdog is a Python script that runs every 5 minutes by means of the vcops-watchdog-daemon, with the purpose of monitoring the various vRealize Operations services, including CaSA. The Watchdog service performs the following checks:

- PID file of the service
- Service status

Here is a useful service log location:

- vcops-watchdog.log (location: /usr/lib/vmware-vcops/user/log/vcops-watchdog/): stores log information for the Watchdog service

The Collector service

The Collector service sends a heartbeat to the controller every 30 seconds. By default, the Collector will wait for 30 minutes for adapters to synchronize. The collector properties, including enabling or disabling Self Protection, can be configured in the collector.properties file located in /usr/lib/vmware-vcops/user/conf/collector. Here is a useful service log location:

- collector.log (location: /storage/log/vcops/log/): stores log information for the Collector service

The Controller service

The Controller service is part of the analytics engine. The controller does the decision making on where new objects should reside, and it manages the storage and retrieval of the inventory of the objects within the system. The Controller service controls the following:

- Monitoring the collector status every minute
- How long a deleted resource is available in the inventory
- How long a non-existing resource is stored in the database

Here is a useful service file location:

- controller.properties (location: /usr/lib/vmware-vcops/user/conf/controller/): stores properties information for the Controller service

Databases

As we learned, vRealize Operations contains quite a few databases, all of which are of great importance for the function of the product. Let's take a deeper look into those databases.

Cassandra DB

Currently, Cassandra DB stores the following information:

- User Preferences and Config
- Alerts definitions
- Customizations
- Dashboards, Policies, and Views
- Reports and Licensing
- Shard Maps
- Activities

Cassandra stores all the information which we see in the content folder; basically, any settings which are applied globally. You are able to log into the Cassandra database from any analytics node; the information is the same across nodes. There are two ways to connect to the Cassandra database:

- cqlsh (configured via cqlshrc): a command-line tool used to get the data within Cassandra in a SQL-like fashion (inbuilt)
- The nodetool command-line tool
To connect to the DB with cqlsh, run the following from the command line (SSH):

$VMWARE_PYTHON_BIN $ALIVE_BASE/cassandra/apache-cassandra-2.1.8/bin/cqlsh --ssl --cqlshrc $ALIVE_BASE/user/conf/cassandra/cqlshrc

Once you are connected to the DB, navigate to the globalpersistence key space using the following command:

vcops_user@cqlsh> use globalpersistence ;

The nodetool command-line tool

Once you are logged on to the Cassandra DB, you can run the following commands to see information:

- Describe tables: lists all the relations (tables) in the current database instance
- Describe <table_name>: lists the content of that particular table
- Exit: exits the Cassandra command line
- select command: selects any column data from a table
- delete command: deletes any column data from a table

Some of the important tables in Cassandra are:

- activity_2_tbl: stores all the activities
- tbl_2a8b303a3ed03a4ebae2700cbfae90bf: stores the shard mapping information of an object (the table name may differ in each environment)
- supermetric: stores the defined super metrics
- policy: stores all the defined policies
- Auth: stores all the user details in the cluster
- global_settings: all the configured global settings are stored here
- namespace_to_classtype: informs what type of data is stored in which table under Cassandra
- symptomproblemdefinition: all the defined symptoms
- certificates: stores all the adapter and data source certificates

The Cassandra database has the following configuration files:

- Cassandra.yaml: /usr/lib/vmware-vcops/user/conf/cassandra/
- vcops_cassandra.properties: /usr/lib/vmware-vcops/user/conf/cassandra/
- Cassandra conf scripts: /usr/lib/vmware_vcopssuite/utilities/vmware/vcops/cassandra/

The Cassandra.yaml file stores certain information such as the default location to save data (/storage/db/vcops/cassandra/data). The file contains information about all the nodes. When a new node joins the cluster, it refers to this file to make sure it contacts the right node (the master node). It also has all the SSL certificate information.

Cassandra is started and stopped via the CaSA service, but just because CaSA is running does not mean that the Cassandra service is necessarily running. The service command to check the status of the Cassandra DB service from the command line (SSH) is:

service vmware-vcops status cassandra

The Cassandra cassandraservice.sh is located in:

$VCOPS_BASE/cassandra/bin/

Here are some useful Cassandra DB log locations:

- System.log (location: /usr/lib/vmware-vcops/user/log/cassandra): stores all the Cassandra-related activities
- watchdog_monitor.log (location: /usr/lib/vmware-vcops/user/log/cassandra): stores Cassandra Watchdog logs
- wrapper.log (location: /usr/lib/vmware-vcops/user/log/cassandra): stores Cassandra start- and stop-related logs
- configure.log (location: /usr/lib/vmware-vcops/user/log/cassandra): stores logs related to the Python scripts of vROps
- initial_cluster_setup.log (location: /usr/lib/vmware-vcops/user/log/cassandra): stores logs related to the Python scripts of vROps
- validate_cluster.log (location: /usr/lib/vmware-vcops/user/log/cassandra): stores logs related to the Python scripts of vROps

To enable debug logging for the Cassandra database, edit the logback.xml XML file located in /usr/lib/vmware-vcops/user/conf/cassandra/ and change <root level="INFO"> to <root level="DEBUG">. Note that System.log will not show the debug logs for Cassandra.
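Putting the pieces above together, a typical interactive session could look roughly like the following sketch. The prompts and table contents will differ per environment; only the commands themselves come from the tables above:

# open a CQL shell on any analytics node (command as given above)
$VMWARE_PYTHON_BIN $ALIVE_BASE/cassandra/apache-cassandra-2.1.8/bin/cqlsh --ssl --cqlshrc $ALIVE_BASE/user/conf/cassandra/cqlshrc

vcops_user@cqlsh> use globalpersistence ;
vcops_user@cqlsh:globalpersistence> describe tables ;               -- list all tables in the key space
vcops_user@cqlsh:globalpersistence> describe supermetric ;          -- show the definition of one table
vcops_user@cqlsh:globalpersistence> select * from supermetric limit 5 ;
vcops_user@cqlsh:globalpersistence> exit ;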
If you experience any of the following issues, it may be an indicator that the database has performance issues, and you may need to take the appropriate steps to resolve them:

- Relationship changes are not reflected immediately
- Deleted objects still show up in the tree
- Views and Reports are slow to process and take a long time to open
- Logging on to the Product UI takes a long time for any user
- An alert was supposed to trigger but it never happened

Cassandra can tolerate 5 ms of latency at any given point of time.

You can validate the cluster by running the following command from the command line (SSH). First, navigate to the following folder:

/usr/lib/vmware-vcopssuite/utilities/

Run the following command to validate the cluster:

$VMWARE_PYTHON_BIN -m vmware.vcops.cassandra.validate_cluster <IP_ADDRESS_1 IP_ADDRESS_2>

You can also use nodetool to perform a health check, and possibly resolve database load issues. The command to check the load status of the activity tables in vRealize Operations version 6.3 and older is as follows:

$VCOPS_BASE/cassandra/apache-cassandra-2.1.8/bin/nodetool --port 9008 status

For the 6.5 release and newer, VMware added the requirement of using a 'maintenanceAdmin' user along with a password file. The new command is as follows:

$VCOPS_BASE/cassandra/apache-cassandra-2.1.8/bin/nodetool -p 9008 --ssl -u maintenanceAdmin --password-file /usr/lib/vmware-vcops/user/conf/jmxremote.password status

Regardless of which method you choose to perform the health check, if any of the nodes have over 600 MB of load, you should consult with VMware Global Support Services on the next steps to take and how to alleviate the load issues.

Central (Repl DB)

The Postgres database was introduced in 6.1, and it has two instances in version 6.6. The Central Postgres DB, also called repl, and the Alerts/HIS Postgres DB, also called data, are two separate database instances under the database called vcopsdb. The central DB exists only on the master and the master replica nodes when HA is enabled. It is accessible via port 5433 and it is located in /storage/db/vcops/vpostgres/repl. Currently, the database stores the resources inventory.

You can connect to the central DB from the command line (SSH). Log in on the analytics node you wish to connect to and run:

su - postgres

The command should not prompt for a password if run as root. Once logged in, connect to the database instance by running:

/opt/vmware/vpostgres/current/bin/psql -d vcopsdb -p 5433

The service command to start the central DB from the command line (SSH) is:

service vpostgres-repl start

Here is a useful log location:

- postgresql-<xx>.log (location: /storage/db/vcops/vpostgres/data/pg_log): provides information on Postgres database cleanup and other disk-related information

Alerts/HIS (Data) DB

The Alerts DB is called data on all the data nodes, including the master and master replica nodes. It was also introduced in 6.1. Starting from 6.2, the Historical Inventory Service xDB was merged with the Alerts DB. It is accessible via port 5432 and it is located in /storage/db/vcops/vpostgres/data. Currently, the database stores:

- Alerts and alarm history
- History of resource property data
- History of resource relationships

You can connect to the Alerts DB from the command line (SSH). Log in on the analytics node you wish to connect to and run:

su - postgres

The command should not prompt for a password if run as root.
Once logged in, connect to the database instance by running:

/opt/vmware/vpostgres/current/bin/psql -d vcopsdb -p 5432

The service command to start the Alerts DB from the command line (SSH) is:

service vpostgres start

FSDB

The File System Database (FSDB) contains all raw time series metrics and super metrics data for the discovered resources. What is FSDB in vRealize Operations Manager?

- FSDB is a GemFire server and runs inside the analytics JVM.
- FSDB in vRealize Operations uses the Sharding Manager to distribute data (new objects) between nodes. (We will discuss what vRealize Operations cluster nodes are later in this chapter.)
- The File System Database is available on all the nodes of a vROps cluster deployment. It has its own properties file.
- FSDB stores data (time series data) collected by adapters, as well as data which is generated/calculated (system, super, badge, CIQ metrics, and so on) based on analysis of that data.

If you are troubleshooting FSDB performance issues, you should start from the Self Performance Details dashboard, more precisely the FSDB Data Processing widget. We covered both of these earlier in this chapter. You can also take a look at the metrics provided by the FSDB Metric Picker. You can access it by navigating to the Environment tab, vRealize Operations Clusters, selecting a node, and selecting vRealize Operations Manager FSDB. Then, select the All Metrics tab.

You can check the synchronization state of the FSDB to determine the overall health of the cluster by running the following command from the command line (SSH):

$VMWARE_PYTHON_BIN /usr/lib/vmware-vcops/tools/vrops-platform-cli/vrops-platform-cli.py getShardStateMappingInfo

By restarting the FSDB, you can trigger synchronization of all the data by getting missing data from other FSDBs. Synchronization takes place only when you have vRealize Operations HA configured.

Here are some useful log locations for FSDB:

- Analytics-<UUID>.log (location: /usr/lib/vmware-vcops/user/log): used to check the Sharding Module functionality; can trace which objects have synced
- ShardingManager_<UUID>.log (location: /usr/lib/vmware-vcops/user/log): can be used to get the total time the sync took
- fsdb-accessor-<UUID>.log (location: /usr/lib/vmware-vcops/user/log): provides information on FSDB database cleanup and other disk-related information

Platform-cli

Platform-cli is a tool by which we can get information from various databases, including the GemFire cache, Cassandra, and the Alerts/HIS persistence databases. To run this Python script, you need to run the following command:

$VMWARE_PYTHON_BIN /usr/lib/vmware-vcops/tools/vrops-platform-cli/vrops-platform-cli.py

The following example of using this command will list all the resources in ascending order and also show you which shard each is stored on:

$VMWARE_PYTHON_BIN /usr/lib/vmware-vcops/tools/vrops-platform-cli/vrops-platform-cli.py getResourceToShardMapping

We discussed how to troubleshoot some of the most important components, such as services and databases, along with troubleshooting failures in the upgrade process. To know more about self-monitoring dashboards and infrastructure compliance, check out the book Mastering vRealize Operations Manager - Second Edition.

What to expect from vSphere 6.7
Introduction to SDN – Transformation from legacy to SDN
Learning Basic PowerCLI Concepts

Building VR objects in React VR 2.0: Getting started with polygons in Blender

Sunith Shetty
05 Jun 2018
11 min read
A polygon is an n-sided object composed of vertices (points), edges, and faces. A face can face in or out or be double-sided. For most real-time VR, we use single-sided polygons; we noticed this when we first placed a plane in the world, where, depending on the orientation, you may not see it. In today's tutorial, we will understand why polygons are the best way to present real-time graphics.
To really show how this all works, I'm going to show the internal format of an OBJ file. Normally, you won't hand edit these; we are beyond the days of VR constructed with a few thousand polygons (my first VR world had a train that represented downloads, and it had six polygons, each point lovingly crafted by hand), so hand editing things isn't necessary, but you may need to edit the OBJ files to include the proper paths or make changes your modeler may not do natively, so let's dive in!
This article is an excerpt from a book written by John Gwinner titled Getting Started with React VR. In this book, you'll gain a deeper understanding of Virtual Reality and a full-fledged VR app to add to your profile.
Polygons are constructed by creating points in 3D space and connecting them with faces. You can consider that vertices are connected by lines (most modelers work this way), but in the native WebGL that React VR is based on, it's really just faces. The points don't really exist by themselves, but more or less "anchor" the corners of the polygon. For example, here is a simple triangle, modeled in Blender:
In this case, I have constructed a triangle with three vertices and one face (with just a flat color, in this case green). The edges, shown in yellow or a lighter shade, are there for the convenience of the modeler and won't be explicitly rendered. Here is what the triangle looks like inside our gallery:
If you look closely in the Blender photograph, you'll notice that the object is not centered in the world. When it exports, it will export with the translations that you have applied in Blender. This is why the triangle is slightly off center on the pedestal. The good news is that we are in outer space, floating in orbit, and therefore do not have to worry about gravity. (React VR does not have a physics engine, although it is straightforward to add one.)
The second thing you may notice is that the yellow lines (lighter gray lines in print) around the triangle in Blender do not persist in the VR world. This is because the file is exported as one face, which connects three vertices.
The plural of vertex is vertices, not vertexes. If someone asks you about vertexes, you can laugh at them almost as much as when someone pronounces Bézier curve as "bez ee er." Ok, to be fair, I did that once; now I always say Beh zee a.
Okay, all levity aside, now let's make it look more interesting than a flat green triangle. This is done through something usually called texture mapping. Honestly, the terms "textures" and "materials" often get swapped around interchangeably, although lately they have sort of settled down to materials meaning anything about an object's physical appearance except its shape; a material could be how shiny it is, how transparent it is, and so on. A texture is usually just the colors of the object (tile is red, skin may have freckles) and is therefore usually called a texture map, which is represented with a JPG, TGA, or other image format. There is no real cross-software file format for materials or shaders (which are usually computer code that represents the material).
When it comes time to render, there are some shader languages that are standard, although these are not always used in CAD programs. You will need to learn what your CAD program uses, and become proficient in how it handles materials (and texture maps). This is far beyond the scope of this book.
The OBJ file format (which is what React VR usually uses) allows the use of several different texture maps to properly construct the material. It also can indicate the material itself via parameters coded in the file. First, let's take a look at what the triangle consists of. We imported OBJ files via the Model keyword:
<Model
  source={{
    obj: asset('OneTri.obj'),
    mtl: asset('OneTri.mtl'),
  }}
  style={{
    transform: [
      { translate: [ -0, -1, -5. ] },
      { scale: .1 },
    ]
  }}
/>
First, let's open the MTL (material) file (as the .obj file uses the .mtl file). The OBJ file format was developed by Wavefront:
# Blender MTL File: 'OneTri.blend'
# Material Count: 1
newmtl BaseMat
Ns 96.078431
Ka 1.000000 1.000000 1.000000
Kd 0.040445 0.300599 0.066583
Ks 0.500000 0.500000 0.500000
Ke 0.000000 0.000000 0.000000
Ni 1.000000
d 1.000000
illum 2
A lot of this is housekeeping, but the important things are the following parameters:
Ka: Ambient color, in RGB format
Kd: Diffuse color, in RGB format
Ks: Specular color, in RGB format
Ns: Specular exponent, from 0 to 1,000
d: Transparency (d meant dissolved). Note that WebGL cannot normally show refractive materials, or display real volumetric materials and raytracing, so d is simply the percentage of how much light is blocked. 1 (the default) is fully opaque. Note that d in the .obj specification works for illum mode 2.
Tr: Alternate representation of transparency; 0 is fully opaque.
illum <#> (a number from 0 to 10). Not all illumination models are supported by WebGL. The current list is:
0. Color on and Ambient off.
1. Color on and Ambient on.
2. Highlight on (and colors) <= this is the normal setting.
There are other illumination modes, but they are currently not used by WebGL. This, of course, could change.
Ni is optical density. This is important for CAD systems, but the chances of it being supported in VR without a lot of tricks are pretty low. Computers and video cards get faster and faster all the time though, so maybe optical density and real-time raytracing will be supported in VR eventually, thanks to Moore's law (statistically, computing power roughly doubles every two years or so).
Very important: Make sure you include the "lit" keyword with all of your model declarations, otherwise the loader will assume you have only an emissive (glowing) object and will ignore most of the parameters in the material file! YOU HAVE BEEN WARNED. It'll look very weird and you'll be completely confused. Don't ask me why I know!
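To make that warning concrete, here is a hedged sketch of the earlier Model declaration with the lit flag added; the asset names and transform values are simply the ones used in this example, so treat them as placeholders for your own files.
<Model
  // 'lit' tells the loader to light the model normally; without it the model is treated as emissive
  lit
  source={{
    obj: asset('OneTri.obj'),
    mtl: asset('OneTri.mtl'),
  }}
  style={{
    transform: [
      { translate: [0, -1, -5] },
      { scale: 0.1 },
    ]
  }}
/>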
The OBJ file itself has a description of the geometry. These are not usually something you can hand edit, but it's useful to see the overall structure. For the simple object shown before, it's quite manageable:
# Blender v2.79 (sub 0) OBJ File: 'OneTri.blend'
# www.blender.org
mtllib OneTri.mtl
o Triangle
v -7.615456 0.218278 -1.874056
v -4.384528 15.177612 -6.276536
v 4.801097 2.745610 3.762014
vn -0.445200 0.339900 0.828400
usemtl BaseMat
s off
f 3//1 2//1 1//1
First, you see a comment (marked with #) that tells you what software made it, and the name of the original file. This can vary. The mtllib is a call out to the particular material file that we already looked at.
The o lines (and g lines, if there is a group) define the name of the object and group; although React VR doesn't really use these (currently), in most modeling packages this will be listed in the hierarchy of objects. The v and vn keywords are where it gets interesting, although these are still not something visible. The v keyword creates a vertex in x, y, z space. The vertices built will later be connected into polygons. The vn establishes the normal for those objects, and vt will create the texture coordinates of the same points. More on texture coordinates in a bit.
The usemtl BaseMat establishes what material, specified in your .mtl file, will be used for the following faces.
The s off means smoothing is turned off. Smoothing and vertex normals can make objects look smooth, even if they are made with very few polygons. For example, take a look at these two teapots; the first is without smoothing. Looks pretty computer-graphics like, right? Now, have a look at the same teapot with the "s 1" parameter specified throughout, and normals included in the file. This is pretty normal (pun intended); what I mean is most CAD software will compute normals for you. You can make normals smooth or sharp, and add edges where needed. This adds detail without excess polygons and is fast to render. The smooth teapot looks much more real, right? Well, we haven't seen anything yet! Let's discuss texture.
I didn't use to like sushi because of the texture. We're not talking about that kind of texture. Texture mapping is a lot like taking a piece of Christmas wrapping paper and putting it around an odd-shaped object. Just like when you get that weird looking present at Christmas and don't know quite what to do, sometimes doing the wrapping doesn't have a clear right way to do it. Boxes are easy, but most interesting objects aren't always a box. I found this picture online with the caption "I hope it's an X-Box."
The "wrapping" is done via U, V coordinates in the CAD system. Let's take a look at a triangle with proper UV coordinates. We then go get our wrapping paper; that is to say, we take an image file we are going to use as the texture, like this:
We then wrap that in our CAD program by specifying this as a texture map. We'll then export the triangle and put it in our world. You would probably have expected to see "left and bottom" on the texture map. Taking a closer look in our modeling package (Blender still), we see that the default UV mapping (using Blender's standard tools) tries to use as much of the texture map as possible, but from an artistic standpoint, it may not be what we want.
This is not to show that Blender is "yer doin' it wrong" but to make the point that you've got to check the texture mapping before you export. Also, if you are attempting to import objects without U,V coordinates, double-check them! If you are hand editing an .mtl file and your textures are not showing up, double-check your .obj file and make sure you have vt lines; if you don't, the texture will not show up. This means the U,V coordinates for the texture mapping were not set.
Texture mapping is non-trivial; there is quite an art to it, and entire books have been written about texturing and lighting. Having said that, you can get pretty far with Blender and any OBJ file if you've downloaded something from the internet and want to make it look a little better. We'll show you how to fix it. The end goal is to get a UV map that is more usable and efficient.
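As a rough illustration of what those vt lines look like when they are present, here is a hypothetical fragment of a textured version of the triangle. The file names, UV values, and texture image are invented for illustration, but map_Kd in the .mtl file and the vertex/texture/normal face indices in the .obj file are standard Wavefront syntax.
# OneTriTextured.mtl (hypothetical)
newmtl TexturedMat
Kd 1.000000 1.000000 1.000000
map_Kd WrappingPaper.jpg
# map_Kd points the diffuse color at an image file, which is stretched across the UVs

# OneTriTextured.obj (hypothetical, geometry trimmed)
mtllib OneTriTextured.mtl
v -7.615456 0.218278 -1.874056
v -4.384528 15.177612 -6.276536
v 4.801097 2.745610 3.762014
vt 0.000000 0.000000
vt 1.000000 0.000000
vt 0.500000 1.000000
vn -0.445200 0.339900 0.828400
usemtl TexturedMat
f 3/3/1 2/2/1 1/1/1
# each face index is vertex/texture/normal, so every corner now carries a UV coordinate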
Not all OBJ file exporters export proper texture maps, and .obj files you may find online may or may not have UVs set. You can use Blender to fix the unwrapping of your model. We have several good Blender books to give you a head start on it. You can also use your favorite CAD modeling program, such as Max, Maya, Lightwave, Houdini, and so on. This is important, so I'll mention it again in an info box. If you already use a different polygon modeler or CAD package, you don't have to learn Blender; your program will undoubtedly work fine. You can skim this section. If you don't want to learn Blender anyway, you can download all of the files that we construct from the GitHub link. You'll need some of the image files if you do work through the examples. Files for this article are at: http://bit.ly/VR_Chap7.
To summarize, we learned the basics of polygon modeling with Blender, got to know the importance of polygon budgets, how to export those models, and the details of the OBJ/MTL file formats. To know more about how to make virtual worlds look real, do check out this book Getting Started with React VR.
Top 7 modern Virtual Reality hardware systems
Types of Augmented Reality targets
Unity plugins for augmented reality application development

Asking if good developers can be great entrepreneurs is like asking if moms can excel at work and motherhood

Aarthi Kumaraswamy
05 Jun 2018
13 min read
You heard me right. Asking, ‘can developers succeed at entrepreneurship?’ is like asking ‘if women should choose between work and having children’. ‘Can you have a successful career and be a good mom?’ is a question that many well-meaning acquaintances still ask me. You see I am a new mom, (not a very new one, my son is 2.5 years old now). I’m also the managing editor of this site since its inception last year when I rejoined work after a maternity break. In some ways, the Packt Hub feels like running a startup too. :) Now how did we even get to this question? It all started with the results of this year's skill up survey. Every year we conduct a survey amongst developers to know the pulse of the industry and to understand their needs, aspirations, and apprehensions better so that we can help them better to skill up. One of the questions we asked them this year was: ‘Where do you see yourself in the next 5 years?’ To this, an overwhelming one third responded stating they want to start a business. Surveys conducted by leading consultancies, time and again, show that only 2 or 3 in 10 startups survive. Naturally, a question that kept cropping up in our editorial discussions after going through the above response was: Can developers also succeed as entrepreneurs? To answer this, first let’s go back to the question: Can you have a successful career and be a good mom? The short answer is, Yes, it is possible to be both, but it will be hard work. The long answer is, This path is not for everyone. As a working mom, you need to work twice as hard, befriend uncertainty and nurture a steady support system that you trust both at work and home to truly flourish. At times, when you see your peers move ahead in their careers or watch stay-at-home moms with their kids, you might feel envy, inadequacy or other unsavory emotions in between. You need superhuman levels of mental, emotional and physical stamina to be the best version of yourself to move past such times with grace, and to truly appreciate the big picture: you have a job you love and a kid who loves you. But what has my experience as a working mom got to do anything with developers who want to start their own business? You’d be surprised to see the similarities. Starting a business is, in many ways, like having a baby. There is a long incubation period, then the painful launch into the world followed by sleepless nights of watching over the business so it doesn’t choke on itself while you were catching a nap. And you are doing this for the first time too, even if you have people giving you advice. Then there are those sustenance issues to look at and getting the right people in place to help the startup learn to turn over, sit up, stand up, crawl and then finally run and jump around till it makes you go mad with joy and apprehension at the same time thinking about what’s next in store now. How do I scale my business? Does my business even need me? To me, being a successful developer, writer, editor or any other profession, for that matter, is about being good at what you do (write code, write stories, spot raw talent, and bring out the best in others etc) and enjoying doing it immensely. It requires discipline, skill, expertise, and will. To see if the similarities continue, let’s try rewriting my statement for a developer turned entrepreneur. Can you be a good developer and a great entrepreneur? This path is not for everyone. 
As a developer-turned-entrepreneur, you need to work twice as hard as your friends who have a full-time job, to make it through the day wearing many hats and putting out workplace fires that have got nothing to do with your product development. You need a steady support system both at work and home to truly flourish. At times, when you see others move ahead in their careers or listen to entrepreneurs who have sold their business to larger companies or just got VC funded, you might feel envy, selfishness, inadequacy or any other emotion in between. You need superhuman levels of maturity to move past such times, and to truly appreciate the big picture: you built something incredible and now you are changing the world, even if it is just one customer at a time with your business. Now that we sail on the same boat, here are the 5 things I learned over the last year as a working mom that I hope you as a developer-entrepreneur will find useful. I'd love to hear your take on them in the comments below.
#1. Become a productivity master. Compartmentalize your roles and responsibilities into chunks spread across the day. Ruthlessly edit out clutter from your life.
Your life changed forever when your child (business) was born. What worked for you as a developer may not work as an entrepreneur. The sooner you accept this, the faster you will see the differences and adapt accordingly. For example, I used to work quite late into the night and wake up late in the morning before my son was born. I also used to binge watch TV shows during weekends. Now I wake up as early as 4.30 AM so that I can finish off the household chores for the day and get my son ready for play school by 9 AM. I don't think about work at all at this time. I must accommodate my son during lunch break and have a super short lunch myself. But apart from that, I am solely focused on the work at hand during office hours and don't think about anything else. My day ends at 11 PM. Now I am more choosy about what I watch as I have very little time for leisure. I instead prefer to 'do' things on weekends, like learning something new, spending time bonding with my son, or even catching up on sleep. This spring-cleaning and compartmentalization took a while to become a habit, but it is worth the effort. Truth be told, I still occasionally fall back into binging, like I am doing right now with this article, writing it at 12 AM on a Saturday morning because I'm in a flow.
As a developer-entrepreneur, you might be tempted to spend most of your day doing what you love, i.e., developing or creating something, because it is familiar territory and you excel at it. But doing so at the cost of your business will cost you your business, sooner than you think. Resist the urge and instead organize your day and week such that you spend adequate time on key aspects of your business, including product development. Make a note of everything you do for a few days and then decide what's not worth doing and what else you can do in its place. Have room only for things that matter to you, which enrich your business goals and quality of life.
#2. Don't aim to create the best, aim for good enough. Ship the minimum viable product (MVP).
All new parents want the best for their kids, from the diaper they use to the food they eat and the toys they play with.
They can get carried away buying stuff that they think their child needs, only to have a storeroom full of unused baby items. What I've realized is that babies actually need very little stuff. They just need basic food, regular baths, clean diapers (you could even get away without those if you are up for the challenge!), sleep, play and lots of love (read mommy and daddy time, not gifts). This is also true for developers who've just started up. They want to build a unique product that the world has never seen before and they want to ship the perfect version with great features and excellent customer reviews. But the truth is, your first product is, in fact, a prototype built on your assumption of what your customer wants. For all you know, you may be wrong about not just your customer's needs but even who your customer is. This is why a proper market study is key to product development. Even then, the goal should be to identify the key features that will make your product viable. Ship your MVP, seek quick feedback, iterate and build a better product in the next version. This way you haven't unnecessarily sunk upfront costs, you're ahead of your competitors and you are better at estimating customer needs as well. Simplicity is the highest form of sophistication. You need to find just the right elements to keep in your product and your business model. As Michelangelo would put it, "chip away the rest".
#3. There is no shortcut to success. Don't compromise on quality or your values.
The advice to ship a good enough product is not permission to cut corners. Since their very first day, babies watch and learn from us. They are keen observers and great mimics. They will feel, talk and do what we feel, say and do, even if they don't understand any of the words or gestures. I think they do understand emotions. One more reason for us to be better role models for our children. The same is true in a startup. The culture mimics the leader and it quickly sets in across the organization. As a developer, you may have worked long hours, even into the night, to get that piece of code working and felt a huge rush from the accomplishment. But as an entrepreneur, remember that you are being watched and emulated by those who work for you. You are what they want to be when they 'grow up'. Do you want a crash-and-burn culture at your startup? This is why it is crucial that you clarify your business's goals, purpose, and values to everyone in your company, even if it just has one other person working there. It is even more important that you practice what you preach. Success is an outcome of right actions taken via the right habits cultivated.
#4. You can't do everything yourself and you can't be everywhere. You need to trust others to do their job and appreciate them for a job well done. This also means you hire the right people first.
It takes a village to raise a child, they say. And it is very true, especially in families where both parents work. It would've been impossible for me to give my 100% at work if I had to keep checking in on how my son is doing with his grandparents, or if I refused the support my husband offers in sharing household chores. Just because you are an exceptional developer doesn't mean you should keep micromanaging product development at your startup.
As a founder, you have a larger duty towards your entire business and customers. While it is good to check how your product is developing, your pure developer days are behind you. Find people better than you at this job and surround yourself with people who are good at what they do, and who share your values, for key functions of your startup. Products succeed because of developers; businesses succeed because their leader knew when to get out of the way and when to step in.
#5. Customer first, product next, ego last.
This has been the toughest lesson so far. It looks deceptively simple in theory but is hard to practice in everyday life. As developers and engineers, we are primed to find solutions. We also see things in binaries: success and failure, right and wrong, good and bad. This is a great quality to possess to build original products. However, it is also the Achilles heel of the developer turned entrepreneur. In business, things are seldom in black and white. Putting the product first can be detrimental to knowing your customers. For example, you build a great product, but the customer is not using it as you intended it to be used. Do you train your customers to correct their ways, or do you unearth the underlying triggers that contribute to that customer behavior and alter your product's vision? The answer is, 'it depends'. And your job as an entrepreneur is to enable your team to find the factors that influence your decision on the subject; it is not to find the answer yourself or make a decision based on your experience. You need to listen more than you talk to truly put your customer first. To do that, you need to acknowledge that you don't know everything. Put your ego last. Make it easy for customers to share their views with you directly. Acknowledge that your product or service exists to serve your customers' needs. It needs to evolve with your customer.
Find yourself a good mentor or two who you respect and want to emulate. They will be your sounding board and the light at the end of the tunnel during your darkest hours (you will have many of those, I guarantee). Be grateful for your support network of family, friends, and colleagues and make sure to let them know how much you appreciate them for playing their part in your success. If you have the right partner to start the business, jump in with both feet. Most tech startups have a higher likelihood of succeeding when they have more than one founder, probably because the demands of tech and business are better balanced on more than one pair of shoulders.
How we frame our questions is a big part of the problem. Reframing them makes us see different perspectives, thereby changing our mindsets. Instead of asking 'can working moms have it all?', what if we asked 'what can organizations do to help working moms achieve work-life balance?', 'how do women cope with competing demands at work and home?' Instead of asking 'can developers be great entrepreneurs?', better questions to ask are 'what can developers do to start a successful business?', 'what can we learn from those who have built successful companies?' Keep an open mind; the best ideas may come from the unlikeliest sources!
1 in 3 developers wants to be an entrepreneur. What does it take to make a successful transition?
Developers think managers don't know enough about technology. And that's hurting business.
96% of developers believe developing soft skills is important