
How-To Tutorials

The NServiceBus Architecture

Packt
25 Aug 2014
11 min read
In this article by Rich Helton, the author of Mastering NServiceBus and Persistence, we will focus on the NServiceBus (NSB) architecture. We will discuss the different message and storage types supported in NSB, introduce some of its tools and advantages, and look conceptually at how the pieces fit together, backing up the discussion with code examples.

NSB is the cornerstone of automation. As an Enterprise Service Bus (ESB), NSB is the most popular C# ESB solution. NSB is a framework that provides many of the benefits of implementing a service-oriented architecture (SOA). It uses an IBus and its ESB bus to handle messages between NSB services, without having to create custom interaction; this messaging between endpoints is what creates the bus. The services, which are autonomous Windows processes, use both Windows and NSB hosting services. NSB hosting services provide extra functionality, such as creating endpoints; setting up Microsoft Message Queuing (MSMQ), the Distributed Transaction Coordinator (DTC) for transactions across queues, subscription storage for publish/subscribe message information, and NSB sagas; and much more. Deploying these pieces for messaging manually can lead to errors, and a lot of work is involved in getting it correct. NSB takes care of provisioning the pieces it needs.

NSB is not a frontend framework, such as Microsoft's Model-View-Controller (MVC). It is not an Object-Relational Mapper (ORM), such as Microsoft's Entity Framework, mapping objects to SQL Server tables. It is also not a web service framework, such as Microsoft's Windows Communication Foundation (WCF). NSB is a framework that provides the communication and support for services to communicate with each other, and an end-to-end workflow to process all of these pieces.

Benefits of NSB

NSB provides many components needed for automation that are only found in ESBs. ESBs provide the following:

- Separation of duties: From the frontend to the backend, by allowing the frontend to fire a message to a service and continue with its processing, not worrying about the results until it needs an update. You can also separate workflow responsibilities by separating NSB services: one service could be used to send payments to a bank, and another service can be used to feed the current status of the payment back to the MVC-EF database so that a user may see the status of their payment.
- Message durability: Messages are saved in queues between services, so that if the services are stopped, they can resume from the messages saved in the queues when they are restarted. This is done so that the messages persist, until told otherwise.
- Workflow retries: Messages, or endpoints, can be told to retry a number of times until they completely fail and send an error. The error message is automatically routed to an error queue. For instance, a web service message can be sent to a bank, and it can be set to retry the web service every 5 minutes for 20 minutes before giving up completely. This is useful while fixing network or server issues. (A configuration sketch follows this list.)
- Monitoring: NSB's ServicePulse can keep a check on the heartbeat of its services. Other monitoring checks can easily be performed on NSB queues to report the number of messages.
- Encryption: Messages between services and endpoints can be easily encrypted.
- High availability: Multiple services, or subscribers, could be processing the same or similar messages from various services that live on different servers. When one server, or a service, goes down, others that are already running can be made available to take over.
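As a concrete sketch of the retry scenario above (retrying for roughly 20 minutes before giving up), NSB exposes second-level retries through App.config. The section name below follows NServiceBus 4.x conventions, and the values are an assumption chosen to match the example:

<configSections>
  <section name="SecondLevelRetriesConfig"
           type="NServiceBus.Config.SecondLevelRetriesConfig, NServiceBus.Core" />
</configSections>

<!-- Retry a failed message a few more rounds before it lands in the error
     queue; the delay between rounds grows by TimeIncrease each time. -->
<SecondLevelRetriesConfig Enabled="true"
                          TimeIncrease="00:05:00"
                          NumberOfRetries="4" />

With these settings, a message that keeps failing is abandoned after four second-level rounds and routed to the error queue automatically.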
More on endpoints

In service-to-service interaction, messages are transmitted in the form of XML through queues that are normally part of Microsoft Server, such as MSMQ, SQL Server (SQL queuing), or even Microsoft Azure queues for cloud computing. There are other endpoints that services use to process resources that are not part of service-to-service communication. These endpoints are used to process commands and messages as well, for instance, sending a file to non-NSB-hosted services, sending SFTP files to non-NSB-hosted services, or sending web services, such as payments, to non-NSB services. Even when the other end of these communications is a non-NSB-hosted service, NSB offers a lot of integrity by checking how these endpoints were processed. NSB provides information on whether a web service was processed or not, with or without errors, provides feedback and monitoring, and maintains the records through queues. It also provides saga patterns to feed the outcome back to the originating NSB service, while storing everything that has happened between the services. In many NSB services, an audit queue is used to keep a backup of each message that was processed successfully, and the error queue is used to keep track of any message that was not processed successfully.

The application security perspective

From the application security perspective, OWASP's top ten list of concerns, available at https://www.owasp.org/index.php/Top_10_2013-Top_10, seems to always revolve around injection (such as SQL injection), broken authentication, and cross-site scripting (XSS). Once an organization puts a product in production, it usually has policies in place for the company's security personnel to scan the product at will. Not all organizations have these policies in place, but once an organization attaches its product to the Internet, there are armies of hackers who may try various methods to attack the site, depending on whether there is money to be gained. Money in this new economy comes in the form of using a site as a proxy to stage other attacks, or grabbing usernames and passwords that a user may have for a different system in order to acquire the user's identity or financial information. Many companies have gone bankrupt over the last decades believing that they were secure.

NSB offers processing pieces on the backend that would normally sit behind a firewall, which provides some protection. Firewalls provide some protection, as do Intrusion Detection Systems (IDSes), but there is so much white noise from viruses and scans that many real attacks may go unnoticed, except by very skilled antihackers. NSB offers additional layers of security by using queuing and messaging: the messages can be encrypted, and the queues may be set for limited authorization by production administrators. (A sketch of property-level message encryption follows.)
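To make the encryption point concrete, here is a minimal C# sketch of how a message property can be marked for encryption in NSB. The message and property names are hypothetical, and an encryption service (for example, the built-in Rijndael one) must be configured on both endpoints for this to take effect:

// SubmitPayment is a hypothetical command; CardNumber travels encrypted
// once an encryption service is configured for the endpoints.
public class SubmitPayment : ICommand
{
    public WireEncryptedString CardNumber { get; set; } // encrypted on the wire
    public decimal Amount { get; set; }                 // sent as-is
}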
NSB hosting versus self-hosting

NServiceBus.Host is an executable that deploys the NSB service. When the NSB service is compiled, it turns into a Windows DLL that may contain all the configuration settings for the IBus. If additional settings are needed for the endpoint's configuration that are not coded in the IBus's configuration, they can be supplied in the Host command. However, NServiceBus.Host need not be used to create the program that runs in NServiceBus. As a developer, you can create a console program that is run by the Windows Task Scheduler, or even create your own services that run the NSB IBus code as an endpoint. Not using the NSB hosting engine is normally referred to as self-hosting.

The NServiceBus host streamlines service development and deployment, allows you to change technologies without code changes, and is administrator-friendly when setting permissions and accounts. It will deploy your application as an NSB-hosted solution, and it can also add configurations to your program at the NServiceBus.Host.exe command line. If you develop a program with the NServiceBus.Host reference, you can use EndpointConfig.cs to define your IBus configuration in code, or add it as part of the command line, instead of creating your own Program.cs that would do much of the same work with more code. When debugging with the NServiceBus.Host reference, the Visual Studio project builds a Windows DLL that is run by the NServiceBus.Host.exe command.

The NServiceBus.Host.exe command line supports deploying Windows services as NSB-hosted services. These configurations are typically referred to as the profile under which the service runs. Here are some of the common profiles (a command-line sketch follows the list):

- MultiSite: This turns on the gateway.
- Master: This makes the endpoint a "master node endpoint". This means that it runs the gateway for multisite interaction, the timeout manager, and the distributor. It also starts a worker that is enlisted with the distributor. It cannot be combined with the Worker or Distributor profiles.
- Worker: This makes the current endpoint enlist as a worker with its distributor running on the master node. It cannot be combined with the Master or Distributor profiles.
- Distributor: This starts the endpoint only as a distributor. This means that the endpoint does no actual work and only distributes the load among its enlisted workers. It cannot be combined with the Master and Worker profiles.
- Performance counters: This turns on the NServiceBus-specific performance counters. Performance counters are installed by default when you run a Production profile.
- Lite: This keeps everything in memory with the most detailed logging.
- Integration: This uses technologies closer to production, but without a scale-out option and with less logging. It is used in testing.
- Production: This uses scale-out-friendly technologies and minimal file-based logging. It is used in production.
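As a sketch of that command line (switch names follow the NSB 4.x host; the service name is a placeholder), running an endpoint under a given profile and installing it as a Windows service look roughly like this:

.\NServiceBus.Host.exe NServiceBus.Lite
.\NServiceBus.Host.exe /install /serviceName:"MyEndpoint" NServiceBus.Production

The first command runs the endpoint interactively under the Lite profile; the second installs it as a Windows service that starts under the Production profile.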
Using PowerShell commands

Many items can be managed in the Package Manager Console of Visual Studio 2012. Just as we add commands to the NServiceBus.Host.exe file to extend profiles and configurations, we may also use the VS2012 Package Manager to extend some of the functionality while debugging and testing. We will use the ScaleOut solution discussed later just to double-check that the performance counters are installed correctly. First, we need to make sure that the PowerShell cmdlets are installed correctly. We do this by using Package Manager:

1. Install the package: Install-Package NServiceBus.PowerShell
2. Import the module: Import-Module .\packages\NServiceBus.PowerShell.4.3.0\lib\net40\NServiceBus.PowerShell.dll
3. Test the counters: Test-NServiceBusPerformanceCountersInstallation

The "Import module" step depends on where NServiceBus.PowerShell.dll was installed during the "Install package" step; the Install-Package command adds the DLL into a package directory related to the solution. We can find out more about the PowerShell cmdlets at http://docs.particular.net/nservicebus/managing-nservicebus-using-powershell and by reviewing the help section of Package Manager. There, we see that we can insert configurations into App.config when we look at the help section, PM> get-help about_NServiceBus.

Message exchange patterns

Let's discuss the various exchange patterns now.

The publish/subscribe pattern

One of the biggest benefits of using ESB technology is the publish/subscribe message pattern; refer to http://en.wikipedia.org/wiki/Publish-subscribe_pattern. The publish/subscribe pattern has a publisher that sends messages to a queue, say an MSMQ queue named MyPublisher. Subscribers, say Subscriber1 and Subscriber2, listen for messages on the queue that they are defined to take from it. If MyPublisher cannot process the messages, it returns them to the queue or to an error queue, based on the reason it could not process the message. The associations between subscribers and the queues they watch are called endpoint mappings; the publisher's endpoint mapping usually defaults to the project's name. This concept is the cornerstone of understanding NSB and ESBs. No messages are removed unless a service explicitly removes them; therefore, no messages are lost, and all are accounted for by the services. The configuration data is saved to the database. The subscribers can also respond back to MyPublisher through the queue with messages indicating whether or not everything was all right. So why is this important? Because all the messages can then be accounted for, and feedback can be provided to all the services.

A service here is a Windows service that is created and hosted by the NSB host program. It could also be a Windows console program or even an MVC program, but the service program is always up and running on the server, continuously checking queues and processing messages sent to it from other endpoints. These messages could be commands, such as instructions to go and check whether a remote server is still running, or data messages, such as a particular payment to be sent to the bank through a web service. In NSB, we formalize that events are used in publish/subscribe, and commands are used in the request-response message exchange pattern. A Windows server could host many services, so some of these services could just be standing by, waiting to take over if one service is not responding, or processing messages simultaneously. This provides very high availability.
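As a minimal sketch of the publish/subscribe pattern in NSB-style C# (v4-era API; the message and handler names are hypothetical, not from the article):

// An event, published by one endpoint and consumed by any subscribers.
public class PaymentProcessed : IEvent
{
    public string PaymentId { get; set; }
}

// Publisher side: a handler processes a command, then announces the outcome.
public class ProcessPaymentHandler : IHandleMessages<ProcessPayment>
{
    public IBus Bus { get; set; } // injected by the NSB host

    public void Handle(ProcessPayment message)
    {
        // ... send the payment to the bank, then:
        Bus.Publish(new PaymentProcessed { PaymentId = message.PaymentId });
    }
}

// Subscriber side: any endpoint whose endpoint mapping points at the
// publisher's queue receives the event.
public class PaymentProcessedHandler : IHandleMessages<PaymentProcessed>
{
    public void Handle(PaymentProcessed message)
    {
        Console.WriteLine("Payment " + message.PaymentId + " completed");
    }
}

Here, ProcessPayment would be a command class implementing ICommand, and the subscriber's App.config would map the PaymentProcessed message type to the publisher's queue.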

Common QlikView script errors

Packt
22 Nov 2013
3 min read
QlikView error messages displayed during the running of the script, during reload, or just after the script is run are key to understanding what errors are contained in your code. After an error is detected and the error dialog appears, review the error, and click on OK or Cancel in the Script Error dialog box. If you have the debugger open, click on Close, then click on Cancel in the Sheet Properties dialog. Re-enter the Script Editor and examine your script to fix the error.

Errors can come up as a result of syntax, formula, or expression errors, join errors, circular logic, or any number of other issues in your script. The following are a few common error messages you will encounter when developing your QlikView script.

The first one, illustrated by a screenshot in the original article, is the syntax error we received when running code that missed a comma after Sales. This is a common syntax error. It's a little cryptic, but the error is contained in the code snippet that is displayed. The error dialog does not tell you exactly where it expected a comma, but with practice, you will spot the error quickly.

The next error is a circular reference error. This error will be handled automatically by QlikView. You can choose to accept QlikView's fix of loosening one of the tables in the circular reference (view the data model in Table Viewer for more information on which table is loosened, or check the Tables tab of the Document Properties dialog to find out which table is marked Loosely Coupled). Alternatively, you can choose another table to be loosely coupled on the Tables tab of Document Properties, or you can go back into the script and fix the circular reference itself.

Another common issue is an unknown statement error, which can be caused by a mistake in writing your script: missed commas, colons, semicolons, brackets, or quotation marks, or an improperly written formula. In the case illustrated in the article, the error has encountered an unknown statement, namely a Customers line that QlikView is attempting to interpret as Customers Load *…. The fix for this error is to add a colon after Customers, so that it reads Customers: (a corrected script sketch appears at the end of this article).

There are instances when a load script will fail silently. Attempting to store a QVD or CSV to a file that is locked by another user viewing it is one such error. Another example is when you have two fields with the same name in your load statement. The debugger can help you find the script lines in which the silent error is present.

Summary

In this article we learned about QlikView error messages displayed during script execution.
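As promised above, here is what the corrected load looks like in context. This is a minimal sketch, with the table and field names entirely hypothetical:

Customers:
LOAD CustomerID,
     CustomerName,
     Country
FROM Customers.qvd (qvd);

Without the Customers: label (note the colon), QlikView tries to parse Customers itself as a statement and raises the unknown statement error.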

App and web development in 2019: What we loved and what mattered

Richard Gall
17 Dec 2019
10 min read
For app and web developers, the world at the end of the decade is very different to the one that began it. Sure, change is inevitable, but the way the discipline(s) have evolved in just a matter of years (arguably the most significant changes came in the latter half of the decade) is a mark of how technologies, business needs, customer expectations, and harsh economic realities have conspired to shape and remold our notion of what software development actually looks like. Full-stack, cloud-native, DevOps (and maybe even 'NoOps'): all these things have been shaping the way app and web developers work over the last ten years. And in 2019 it feels like that new world is beginning to settle into a specific pattern. Many of the trends and technologies that really defined 2019 are, in truth, trends that have been nascent and emerging for a number of years.

Cloud and microservices

When cloud first emerged - at some point much earlier this decade - it was largely just about resource efficiency. The idea was to ditch your on-premises servers and move instead to a model whereby you rent server space from big vendors. Okay, perhaps that's a somewhat crude summation; but it's nevertheless the case that cloud was primarily a field dealt with by administrators and IT professionals, rather than developers. Today, of course, cloud is having a very real impact on the way developers work, giving a degree of agility and flexibility in how software is deployed and managed. With cloud partnering nicely with microservices - which allow developers to break down an application into constituent parts - it's easy to see how these two trends are getting many app and web developers excited. They shorten the development lifecycle and allow developers to get closer to their code as it runs in production.

Learn cloud development - explore Packt's range of cloud bundles. Pick up 5 for $25 throughout our $5 campaign. An essential resource for microservices development: Microservices Development Cookbook. $5 for the rest of December and into January.

Go and Rust

The growth of Go and Rust throughout 2019 (okay, and a bit before that too) is directly related to the increasing importance of cloud and microservices in software development. Although JavaScript has been taken beyond the browser, it isn't the best programming language for building high-performance applications; that's where the likes of Go and Rust have been taking over a not insignificant slice of the collective developer imagination. Both languages share a similar history (as this article nicely details); at a fundamental level, moreover, both also aim to build on C++, but with accessibility and safety in mind (C++ has long had a reputation for being both complicated and sometimes vulnerable to bugs and security issues). Go is likely to continue to grow at a faster rate than Rust: it's a lot easier to use, so for web and app developers with experience in Java or JavaScript, it's a much gentler learning curve. But this isn't to say that Rust won't remain a fixture for developers. Consistently ranked the 'most loved' language in Stack Overflow surveys, as developers seek relentless improvements to performance alongside watertight reliability and security, Rust will remain an important language in a fast-changing development world.

Search Packt's extensive selection of Go eBooks and videos - $5 throughout December and into the new year. Visit the Packt store. Learn Rust with Rust Programming Cookbook.
WebAssembly

It's impossible to talk about web and application development without mentioning WebAssembly. Arguably the full implications of WebAssembly are yet to be realised (indeed, at ReactConf 2019, Richard Feldman suggested that it was unlikely to initiate a wholesale transformation of the web - that, he believes, will take a few more years), but 2019 has been a year when it has properly started to make many developers sit up and take notice. But why is WebAssembly so exciting? Essentially, it allows you to run code on the web using multiple languages at a speed that's almost akin to native applications. Indeed, WebAssembly is making languages like Rust more attractive to web developers. If WebAssembly is a bridge between Rust and JavaScript, Rust immediately becomes more attractive to developers who previously would have paid very little attention to it. If 2019 was the year more developers decided to take note of WebAssembly, 2020 will be the year when we start to see increased adoption.

Learn WebAssembly is $5 throughout this year's $5 campaign. Get it here.

State management: Redux, Flux, Vuex…

For many years, MVC (Model-View-Controller) was the dominant model for managing application state. However, as applications have grown in complexity, it has become more and more difficult for us to establish a 'single source of truth' inside our apps. That can impact performance and can also make them harder to maintain on the development side. To tackle this, we've started to see a number of different patterns and frameworks emerging to help us manage application state. The growth of React has been instrumental here - as a very lightweight library it gives developers the freedom to manage application state however they choose - and it's worth noting that Flux architecture was developed by Facebook to complement the library.

Watch: Why do React developers love Redux for state management? https://www.youtube.com/watch?v=7YzgZA_hA48

Following Flux we've also had Redux and Vuex - all of them, each with subtly different approaches, have become an essential aspect of modern web and app development. And while they might not have first emerged in 2019, it feels as though the state management discourse has hit heights that it previously had not. If you haven't yet had time to dive into this topic, it's well worth making sure you commit to it in 2020.

Learning React with Redux and Flux [Video] is $5 - purchase it here on the Packt store. Learn Vuex with Vuex Quick Start Guide.

Functional programming

Functional programming is on the rise. This doesn't, however, mean that purely functional languages like Haskell and Lisp are dominating the programming language landscape - in fact, it's been said that JavaScript is now the language most used for functional programming (even though it isn't a functional language). Functional programming is popular because it can help minimize complexity and make it easier to test and reuse code. When you're dealing with a dense codebase that grows and grows as your application scales, this is immensely valuable. It's also worth placing functional programming in the context of managing application state. Insofar as functional programming allows you to be specific in determining how different parts of a component should interact with one another, the function is a theoretical abstraction that makes it easier to get to grips with managing the state of a complex and dynamic application.
Get to grips with functional programming and discover how to leverage its power. Read Mastering Functional Programming.

The new JavaScript framework boom

I'm not sure whether JavaScript fatigue is over. On the one hand the space has coalesced around a handful of core tools and frameworks - React, GraphQL, Node.js, among a couple of others - but on the other hand, the last year (and a bit) has been characterized by many other small projects developed to support these core tools. So, while it's maybe a little bit easier to parse the JavaScript ecosystem at a pretty high level of abstraction than it was in the past, at a deeper level you have a range of tools that are designed for very specific purposes or to be used alongside some of those frameworks and tools just mentioned. Tools ranging from Koa.js (for Node), to Polymer, Nuxt, Next, Gatsby, Hugo, Vuelidate (to name just a random assortment) are all vying for developer mindshare. You could say that many of these tools are 'second-order' frameworks and libraries - they don't fundamentally change the way you think about development but instead make it easier to do specific things. It's for this reason that I'm reluctant to suggest that JavaScript fatigue will return to its former glory - this new JavaScript framework boom is very much geared towards productivity and immediate gains rather than overhauling the way you build applications because of some principled belief in the 'right' or 'best' way to do things.

Learn Nuxt: pick up Build a News Feed with Nuxt 2 and Firestore [Video] for $5 before the end of the year. Get to grips with Next.js with Next.js Quick Start Guide. Learn Koa with Hands-on Server-Side Development with Koa.js [Video]. Learn Gatsby with GatsbyJS: Build a PWA Blog with GraphQL, React, and WordPress [Video].

GraphQL

Much of this decade has been dominated by REST when it comes to APIs. But just as the so-called 'API economy' has gone into overdrive, GraphQL has come on the scene. Adoption has been rapid, with many developers turning to it because it allows them to handle more complex and sophisticated requests at scale without writing long and confusing lines of code. This isn't to say, of course, that GraphQL has all but killed REST. Instead, it's more the case that GraphQL has been found to be a better tool for managing APIs in specific domains than REST. If you're dealing with APIs that are complex in terms of the number of entities and the relationships between them, then GraphQL can prove immensely useful.

Find out how to put GraphQL to use. Pick up GraphQL Projects for $5 for the rest of December and into January.

React Hooks (and Vue Hooks)

Launched with React 16.8, React Hooks "let you use state and other React features without writing a class" (that's from the project's site). That's a good thing, because building components with a class can sometimes be somewhat inelegant. For a better explanation of the 'point' of React Hooks you could do a lot worse than this article. Vue Hooks is part of Vue 3.0, which won't be officially released until early next year. But the fact that both leading frontend frameworks are taking similar approaches to improve the developer experience demonstrates that they're responding to a need for more flexibility and control over large projects. It means 2019 has been the year that both tools have hit maturity in the web development space.

Learn how React Hooks work with Packt's new React Hooks video.

Conclusion

The web and app development world is becoming difficult to parse.
A few years ago discussion and debate really centered on frameworks; today it feels like there are many other elements to consider. Part of this is symptomatic of a slow DevOps revolution - the gap between build and production is smaller than it has ever been, and developers now have a significant degree of accountability and responsibility for things that were the preserve of different breeds of engineers and IT professionals. Perhaps that story is a bit of a simplification - however, it’s hard to dispute that the web and app developer skill set is incredibly diverse. That means there are an array of options and opportunities out there for those developers looking to push their careers forward, but it also means that they’ll need to do some serious decision making about what they want to do and how they want to do it.

Speech2Face: A neural network that “imagines” faces from hearing voices. Is it too soon to worry about ethnic profiling?

Savia Lobo
28 May 2019
8 min read
Last week, researchers from MIT CSAIL and Google AI published a study on reconstructing a facial image of a person from a short audio recording of that person speaking, in their paper titled "Speech2Face: Learning the Face Behind a Voice". The researchers designed and trained a neural network on millions of natural Internet/YouTube videos of people speaking. During training, they demonstrated that the model learns voice-face correlations that allow it to produce images capturing various physical attributes of the speakers, such as age, gender, and ethnicity. The entire training was done in a self-supervised manner, by utilizing the natural co-occurrence of faces and speech in Internet videos, without the need to model attributes explicitly. They further evaluated and numerically quantified how well Speech2Face reconstructs faces directly from audio, and how closely the results resemble the true face images of the speakers. For this, they tested their model both qualitatively and quantitatively on the AVSpeech dataset and the VoxCeleb dataset.

The Speech2Face model

The researchers utilized the VGG-Face model, a face recognition model pre-trained on a large-scale face dataset called DeepFace, and extracted a 4096-D face feature from the penultimate layer (fc7) of the network. These face features were shown to contain enough information to reconstruct the corresponding face images while being robust to many of the aforementioned variations. The Speech2Face pipeline consists of two main components: 1) a voice encoder, which takes a complex spectrogram of speech as input and predicts a low-dimensional face feature that would correspond to the associated face; and 2) a face decoder, which takes the face feature as input and produces an image of the face in a canonical form (frontal-facing, with a neutral expression). During training, the face decoder is fixed, and only the voice encoder is trained to predict the face feature.

How were the facial features evaluated?

To quantify how well different facial attributes are captured in Speech2Face reconstructions, the researchers tested different aspects of the model.

Demographic attributes: The researchers used Face++, a leading commercial service for computing facial attributes. They evaluated and compared age, gender, and ethnicity by running the Face++ classifiers on the original images and the Speech2Face reconstructions. The Face++ classifiers return either "male" or "female" for gender, a continuous number for age, and one of four values, "Asian", "black", "India", or "white", for ethnicity.

Craniofacial attributes: The researchers evaluated craniofacial measurements commonly used in the literature for capturing ratios and distances in the face. They computed the correlation between the face-to-face (F2F) reconstructions and the corresponding Speech2Face (S2F) reconstructions, with face landmarks computed using the DEST library. There is a statistically significant (that is, p < 0.001) positive correlation for several measurements. In particular, the highest correlation is measured for the nasal index (0.38) and nose width (0.35), features indicative of nose structures that may affect a speaker's voice.

Feature similarity: The researchers further tested how well a person can be recognized from the face features predicted from speech. They first directly measured the cosine distance between the predicted features and the true features obtained from the original face image of the speaker.
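A quick sketch of that feature-similarity measurement, the cosine distance between a speech-predicted feature and the true fc7 feature, might look like this in Python with numpy. This is illustrative only, not the authors' code, and the random vectors stand in for real 4096-D VGG-Face features:

import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity between two feature vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return 1.0 - float(np.dot(a, b))

predicted = np.random.randn(4096)     # feature predicted from speech
ground_truth = np.random.randn(4096)  # feature from the speaker's photo
print(cosine_distance(predicted, ground_truth))

# Retrieval flavor: rank a database of face features by distance
# to the speech-predicted feature and take the closest match.
database = np.random.randn(5000, 4096)
distances = np.array([cosine_distance(predicted, f) for f in database])
print(int(np.argmin(distances)))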
A table in the paper shows the average error over 5,000 test images for predictions using 3s and 6s audio segments. Using longer audio clips exhibits consistent improvement in all error metrics, which further evidences the qualitative improvement the researchers observed. They also evaluated how accurately the true speaker could be retrieved from a database of face images. To do so, they took the speech of a person, predicted the face feature using the Speech2Face model, and queried the database by computing the distances from the predicted feature to the face features of all images in the database.

Ethical considerations with the Speech2Face model

The researchers note that the training data is a collection of educational videos from YouTube and does not represent the world population equally; hence, the model may be affected by this uneven distribution of data. They also highlight that "if a certain language does not appear in the training data, our reconstructions will not capture well the facial attributes that may be correlated with that language". "In our experimental section, we mention inferred demographic categories such as 'White' and 'Asian'. These are categories defined and used by a commercial face attribute classifier and were only used for evaluation in this paper. Our model is not supplied with and does not make use of this information at any stage", the paper mentions. They also warn that any further investigation or practical use of this technology should be carefully tested to ensure that the training data is representative of the intended user population. "If that is not the case, more representative data should be broadly collected", the researchers state.

Limitations of the Speech2Face model

In order to test the stability of the Speech2Face reconstruction, the researchers used faces from different speech segments of the same person, taken from different parts within the same video, and from a different video. The reconstructed face images were consistent within and between the videos. They further probed the model with an Asian male example speaking the same sentence in English and Chinese, to qualitatively test the effect of language and accent. While having the same reconstructed face in both cases would be ideal, the model inferred different faces based on the spoken language. In other examples, the model was able to successfully factor out the language, reconstructing a face with Asian features even though the speaker was speaking in English with no apparent accent. "In general, we observed mixed behaviors and a more thorough examination is needed to determine to which extent the model relies on language. More generally, the ability to capture the latent attributes from speech, such as age, gender, and ethnicity, depends on several factors such as accent, spoken language, or voice pitch. Clearly, in some cases, these vocal attributes would not match the person's appearance", the researchers state in the paper.

Speech2Cartoon: converting the generated image into cartoon faces

The face images reconstructed from speech may also be used for generating personalized cartoons of speakers from their voices. The researchers used Gboard, the keyboard app available on Android phones, which is also capable of analyzing a selfie image to produce a cartoon-like version of the face.
Such cartoon re-rendering of the face may be useful as a visual representation of a person during a phone or video-conferencing call when the person's identity is unknown or the person prefers not to share their picture. The reconstructed faces may also be used directly, to assign faces to machine-generated voices used in home devices and virtual assistants. https://twitter.com/NirantK/status/1132880233017761792

A user on Hacker News commented, "This paper is a neat idea, and the results are interesting, but not in the way I'd expected. I had hoped it would the domain of how much person-specific information this can deduce from a voice, e.g. lip aperture, overbite, size of the vocal tract, openness of the nares. This is interesting from a speech perception standpoint. Instead, it's interesting more in the domain of how much social information it can deduce from a voice. This appears to be a relatively efficient classifier for gender, race, and age, taking voice as input." "I'm sure this isn't the first time it's been done, but it's pretty neat to see it in action, and it's a worthwhile reminder: If a neural net is this good at inferring social, racial, and gender information from audio, humans are even better. And the idea of speech as a social construct becomes even more relevant", they further added.

This recent study is interesting considering that it takes AI to another level, predicting a face just from audio recordings, without even the need for DNA. However, there can be certain repercussions, especially when it comes to security: one could misuse such technology by impersonating someone else and cause trouble. It will be interesting to see how this study develops in the near future. To learn more about the Speech2Face model in detail, head over to the research paper.

Building your first chatbot using Chatfuel with no code [Tutorial]

Bhagyashree R
09 Oct 2018
12 min read
Building chatbots is fun. Although chatbots are just another kind of software, they are very different in terms of the expectations that they create in users. Chatbots are conversational. This ability to process language makes them project a kind of human personality and intelligence, whether we as developers intend it or not. To develop software with a personality and intelligence is quite challenging and therefore interesting.

This tutorial is an excerpt from a book written by Srini Janarthanam titled Hands-On Chatbots and Conversational UI Development. In this article, we will explore a popular tool called Chatfuel, and learn how to build a chatbot from scratch. Chatfuel is a tool that enables you to build a chatbot without having to code at all. It is a web-based tool with a GUI editor that allows the user to build a chatbot in a modular fashion.

Getting started with Chatfuel

Let's get started. Go to Chatfuel's website to create an account:

1. Click GET STARTED FOR FREE. Remember, the Chatfuel toolkit is currently free to use. This will lead you to one of two options: if you are logged into Facebook, it will ask for permission to link your Chatfuel account to your Facebook account; if you are not logged in, it will ask you to log into Facebook first before asking for permission. Chatfuel links to Facebook to deploy bots, so it requires permission to use your Facebook account.
2. Authorize Chatfuel to receive information about you and to be your Pages manager.

That's it! You are all set to build your very first bot.

Building your first bot

Chatfuel bots can be published on two deployment platforms: Facebook Messenger and Telegram. Let us build a chatbot for Facebook Messenger first. In order to do that, we need to create a Facebook Page, because every chatbot on Facebook Messenger needs to be attached to a page. Here is how we can build a Facebook Page:

1. Go to https://www.facebook.com/pages/create/.
2. Click the category appropriate to the page content. In our case, we will use Brand or Product and choose App Page.
3. Give the page a name. In our case, let's use Get_Around_Edinburgh. Note that Facebook does not make it easy to change page names, so choose wisely.
4. Once the page is created, you will see Chatfuel asking for permission to connect to the page. Click CONNECT TO PAGE.

You will be taken to the bot editor. The name of the bot is set to My First Bot. It has a Messenger URL, which you can see by the side of the name; Messenger URLs start with m.me. You might notice that the bot also comes with a built-in Welcome message. On the left, you see the main menu with a number of options, with the Build option selected by default.

Click the Messenger URL to start your first conversation with the bot. This will open Facebook Messenger in your browser tab. To start the conversation, click the Get Started button at the bottom of the chat window. There you go! Your conversation has just started, and the bot has sent you a welcome message. Notice how it greets you with your name: that is because you have given the bot access to your info on Facebook. Now that you have built your first bot and had a conversation with it, give yourself a pat on the back. Welcome to the world of chatbots!

Adding conversational flow

Let's start building our bot:

1. On the welcome block, click the default text and edit it. Hovering the mouse around the block reveals options such as deleting the card, rearranging the order of the cards, and adding new cards between existing cards.
2. Delete the Main menu button.
3. Add a Text card. Let's add a follow-up text card and ask the user a question.
4. Add buttons for user responses: click ADD BUTTON and type in the name of the button. Ignore block names for now; since they are incomplete, they will appear in red. Remember, you can add up to three buttons to a text card.
5. Button responses need to be tied to blocks, so that when users hit a button the chatbot knows what to do or say. Let's add a few blocks. To add a new block, click ADD BLOCK in the Bot Structure tab. This creates a new untitled block. On the right side, fill in the name of the block. Repeat the same for each block you want to build.
6. Now, go back to the buttons and specify the block names to connect to: click the button, choose Blocks, and provide the name of the block.
7. For each block you created, add content by adding appropriate cards. Remember, each block can have more than one card; each card will appear as a response, one after another.
8. Repeat the preceding steps to add more blocks and connect them to buttons of other blocks. When you are done, you can test the bot by clicking the TEST THIS CHATBOT button in the top-right corner of the editor.

You should now see the new welcome message with buttons for responses. Go on and click one of them to have a conversation. Great! You now have a bot with a conversational flow.

Handling navigation

How can the user and the chatbot navigate through the conversation? How do they respond to each other and move the conversation forward? In this section, we will examine the devices that facilitate conversation flow.

Buttons

Buttons are a way to let users respond to the chatbot in an unambiguous manner. You can add buttons to text, gallery, and list cards. Buttons have a label and a payload: the label is what the user gets to see, and the payload is what happens in the backend when the user clicks the button. A button can take one of four types of payloads: next block, URL, phone number, or share:

- The next block is identified by the name of the block. This tells the chatbot which block to execute when the button is pressed.
- The URL can be specified if the chatbot is to open a web page in the embedded web browser. Since the browser is embedded, the size of the window can also be specified.
- The phone number can be specified if the chatbot is to make a voice call to someone.
- The share option can be used in cards such as lists and galleries to share the card with other contacts of the user.

Go to Block cards

Buttons can be used to navigate the user from one block to another; however, the user has to push the button to enable navigation. There may be circumstances where the navigation needs to happen automatically. For instance, if the chatbot is giving the user step-by-step instructions on how to do something, it can be built by putting all the cards (one step of information per card) in one block. However, it might be a good idea to put them in different blocks for the sake of modularity. In such a case, we need to provide the user a next-step button to move on to the next step. In Chatfuel, we can use the Go to Block card to address this problem. A Go to Block card can be placed at the end of any block to take the chatbot to another block. Once the chatbot executes all the cards in a block, it moves to another block automatically, without any user intervention. Using Go to Block cards, we can build the chatbot in a modular fashion.
To add a Go to Block card at the end of a block, choose ADD A CARD, click the + icon, and choose the Go to Block card. Fill in the block name for redirection. Redirections can also be made random and conditional. By choosing the random option, we can make the chatbot choose one of the mentioned blocks randomly. This adds a bit of uncertainty to the conversation; however, it needs to be used very carefully, because the context of the conversation may become tricky to maintain. Conditional redirections can be used if the context needs to be checked before the redirection is done. Let's revisit this option after we discuss context.

Managing context

In any conversation, the context of the conversation needs to be managed. Context can be maintained by creating a local cache where the information transferred between the two dialogue partners can be stored. For instance, the user may tell the chatbot their food preferences, and this information can be stored in context for future reference if not used immediately. Another instance is a conversation where the user is asking questions about a story; these questions may be incomplete and may need to be interpreted in terms of the information available in the context. In this section, we will explore how context can be recorded and utilized during the conversation in Chatfuel.

Let's take the task of finding a restaurant as part of your tour guide chatbot. The conversation between the chatbot and the user might go as follows:

User: Find a restaurant
Bot: Ok. Where?
User: City center.
Bot: Ok. Any cuisine that you fancy?
User: Indian
Bot: Ok. Let me see... I found a few Indian restaurants in the city center. Here they are.

In the preceding conversation, up until the last bot utterance, the bot needs to save the information locally. When it has gathered all the information it needs, it can go off and search the database with appropriate parameters. Notice that it also needs to use that information in generating utterances dynamically. Let's explore how to do these two things—dynamically generating utterances and searching the database. First, we need to build the conversational flow to take the user through the conversation, just as we discussed in the Next steps section. Let's assume that the user clicks the Find_a_restaurant button on the welcome block, and build the basic blocks with text messages and buttons to navigate through the conversation.

User input cards

As you can imagine, building the blocks for every cuisine and location combination can become a laborious task. Let's try to build the same functionality in another way—forms. In order to use forms, the User Input card needs to be used. Let's create a new block called Restaurant_search and add a User Input card to it. To add a User Input card, click ADD A CARD, click the + icon, and select the User Input card. Add all the questions you want to ask the user under MESSAGE TO USER. The answers to each of these questions can be saved to variables. Name the variables against every question; these variables are always denoted with double curly brackets (for example, {{restaurant_location}}).

Information provided by the user can also be validated before acceptance. In case the required information is a phone number, email address, or a number, it can be validated by choosing the appropriate format for the input information. After the User Input card, let's add a Go to Block card to redirect the flow to the results page, and add a block where we present the results. In the results block, the variables holding information can be used in chatbot utterances; these will be dynamically replaced from the context while the conversation is happening, as shown in the example below.
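For instance, the text card in the results block can embed those variables directly. Here, {{restaurant_location}} comes from the User Input card above, while {{restaurant_cuisine}} is a hypothetical second variable saved the same way:

Ok. Let me see... I found a few {{restaurant_cuisine}} restaurants in {{restaurant_location}}. Here they are.

When the card is rendered, Chatfuel substitutes the current values from the user's context, reproducing the final bot utterance from the sample dialogue.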
In the original tutorial, a screenshot shows the conversation so far on Messenger.

Setting user attributes

In addition to the User Input cards, there is another way to save information in context: the Set Up User Attribute card. Using this card, you can set context-specific information at any point during the conversation. To add this card, choose ADD A CARD, click the + icon, and choose the Set Up User Attribute card. For example, a user-likes-history variable can be set to true when the user asks for historical attractions. This information can later be used to drive the conversation (as used in the Go to Block card) or to provide recommendations. Variables that are already in the context can be reset to new values or to no value at all. To clear the value of a variable, use the special NOT SET value from the drop-down menu that appears as soon as you try to fill in a value for the variable. You can also set or reset more than one variable in a card.

Default contextual variables

Besides defining your own contextual variables, you can also use a list of predefined variables. The information contained in these variables includes the following:

- Information obtained from the deployment platform (that is, Facebook), including the user's name, gender, time zone, and locale
- Contextual information, such as the last pushed button, the last visited block name, and so on

To get a full list of variables, create a new text card and type {{. This will open a drop-down menu with a list of variables you can choose from; this list also includes the variables created by you. As with the developer-defined variables, these built-in variables can be used in text messages and in conditional redirections using the Go to Block cards.

Congratulations! In this tutorial, you have started a journey toward building awesome chatbots. Using tour guiding as the use case, we explored a variety of chatbot design and development topics along the way. If you found this post useful, do check out the book, Hands-On Chatbots and Conversational UI Development, which will help you explore the world of conversational user interfaces.

Human-readable Rules with Drools JBoss Rules 5.0 (Part 1)

Packt
20 Oct 2009
6 min read
Domain Specific Language

The domain in this sense represents the business area (for example, life insurance or billing). Rules are expressed with the terminology of the problem domain. This means that domain experts can understand, validate, and modify these rules more easily. You can think of a DSL as a translator: it defines how to translate sentences from the problem-specific terminology into rules. The translation process is defined in a .dsl file; the sentences themselves are stored in a .dslr file. The result of this process must be a valid .drl file. A simple DSL might look like:

[condition][]There is a Customer with firstName {name}=$customer : Customer(firstName == {name})
[consequence][]Greet Customer=System.out.println("Hello " + $customer.getFirstName());

Code listing 1: Simple DSL file, simple.dsl.

The code listing above contains only two lines (each begins with [). However, because the lines are too long, they wrap, effectively creating four lines. This will be the case in most of the code listings. When you are using the Drools Eclipse plugin to write this DSL, enter the text before the first equals sign into the field called Language expression, the text after the equals sign into Rule mapping, leave the object field blank, and select the correct scope.

The previous DSL defines two DSL mappings. They map a DSLR sentence to a DRL rule. The first one translates to a condition that matches a Customer object with the specified first name. The first name is captured into a variable called name, which is then used in the rule condition. The second line translates to a greeting message that is printed on the console. The following .dslr file can be written based on the previous DSL:

package droolsbook.dsl;
import droolsbook.bank.model.*;

expander simple.dsl

rule "hello rule"
    when
        There is a Customer with firstName "David"
    then
        Greet Customer
end

Code listing 2: Simple .dslr file (simple.dslr) with a rule that greets a customer named David.

As can be seen, the structure of a .dslr file is the same as the structure of a .drl file; only the rule conditions and consequences differ. Another thing to note is the line containing expander simple.dsl. It informs Drools how to translate the sentences in this file into valid rules. Drools reads the simple.dslr file and tries to translate/expand each line by applying all mappings from the simple.dsl file (it does this in a single-pass process, line by line, from top to bottom). The order of lines is important in a .dsl file. Please note that one condition/consequence must be written on one line, otherwise the expansion won't work (for example, the condition after the when clause, from the rule above, must be on one line). When you are writing .dslr files, consider using the Drools Eclipse plugin. It provides a special editor for .dslr files that has an editing mode and a read-only mode for viewing the resulting .drl file. A simple DSL editor is provided as well.

This translation process happens in memory and no .drl file is physically stored. We can now run this example. First of all, a knowledge base must be created from the simple.dsl and simple.dslr files. KnowledgeBuilder acts as the translator: it takes the .dslr file and, based on the .dsl file, creates the DRL. This DRL is then used as normal (we don't see it; it's internal to KnowledgeBuilder).
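The original article showed the result of this translation as a screenshot. Expanding Code listing 2 by hand with the mappings from Code listing 1, the DRL that KnowledgeBuilder generates internally would look roughly like this (a reconstruction; the generated rule may differ in minor details):

package droolsbook.dsl;

import droolsbook.bank.model.*;

rule "hello rule"
    when
        $customer : Customer(firstName == "David")
    then
        System.out.println("Hello " + $customer.getFirstName());
end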
The implementation is as follows:

private KnowledgeBase createKnowledgeBaseFromDSL() throws Exception {
    KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder();
    builder.add(ResourceFactory.newClassPathResource("simple.dsl"), ResourceType.DSL);
    builder.add(ResourceFactory.newClassPathResource("simple.dslr"), ResourceType.DSLR);

    if (builder.hasErrors()) {
        throw new RuntimeException(builder.getErrors().toString());
    }

    KnowledgeBase knowledgeBase = KnowledgeBaseFactory.newKnowledgeBase();
    knowledgeBase.addKnowledgePackages(builder.getKnowledgePackages());
    return knowledgeBase;
}

Code listing 3: Creating a knowledge base from .dsl and .dslr files.

The .dsl and subsequently the .dslr files are passed into KnowledgeBuilder. The rest is similar to what we've seen before.

DSL as an interface

DSLs can also be looked at as another level of indirection between your .drl files and business requirements (the original article illustrates this as a dependency diagram). At the top are the business requirements as defined by the business analyst. These requirements are represented as DSL sentences (the .dslr file). The DSL then represents the interface between the DSL sentences and the rule implementation (the .drl file) and the domain model. For example, we can change the transformation to make the resulting rules more efficient without changing the language. Further, we can change the language, for example, to make it more user-friendly, without changing the rules. All of this can be done just by changing the .dsl file.

DSL for validation rules

We'll rewrite the three usually implemented object/field required rules as follows:

- If the Customer does not have an address, then display a warning message
- If the Customer does not have a phone number or it is blank, then display an error message
- If the Account does not have an owner, then display an error message for the Account

We can clearly see that all of them operate on some object (Customer/Account), test its property (address/phone/owner), and display a message (warning/error), possibly with some context (account). Our validation.dslr file might look like the following code:

expander validation.dsl

rule "address is required"
    when
        The Customer does not have address
    then
        Display warning
end

rule "phone number is required"
    when
        The Customer does not have phone number or it is blank
    then
        Display error
end

rule "account owner is required"
    when
        The Account does not have owner
    then
        Display error for Account
end

Code listing 4: First DSL approach at defining the required object/field rules (validation.dslr file).

The conditions could be mapped like this:

[condition][]The {object} does not have {field}=${object} : {object}( {field} == null )

Code listing 5: validation.dsl.

This covers the address and account conditions completely. For the phone number rule, we have to add the following mapping at the beginning of the validation.dsl file:

[condition][] or it is blank = == "" ||

Code listing 6: Mapping that checks for a blank phone number.

As it stands, the phone number condition will be expanded to:

$Customer : Customer( phone number == "" || == null )

Code listing 7: Unfinished phone number condition.

To correct it, phone number has to be mapped to phoneNumber. This can be done by adding the following at the end of the validation.dsl file:

[condition][]phone number=phoneNumber

Code listing 8: Phone number mapping.

The conditions are working. Now, let's focus on the consequences.
The following mappings will do the job:

[consequence][]Display {message_type} for {object}={message_type}( kcontext, ${object} );
[consequence][]Display {message_type}={message_type}( kcontext );

Code listing 9: Consequence mappings.

The three validation rules now expand to their .drl representation, reconstructed below.
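Applying Code listings 5, 6, 8, and 9 to the sentences in Code listing 4 should yield roughly the following rules. This is a reconstruction; warning and error are assumed to be helper functions defined elsewhere in the validation package:

rule "address is required"
  when
    $Customer : Customer( address == null )
  then
    warning( kcontext );
end

rule "phone number is required"
  when
    $Customer : Customer( phoneNumber == "" || == null )
  then
    error( kcontext );
end

rule "account owner is required"
  when
    $Account : Account( owner == null )
  then
    error( kcontext, $Account );
end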
IoT project: Design a Multi-Robot Cooperation model with Swarm Intelligence [Tutorial]

Sugandha Lahoti
02 Aug 2018
7 min read
Collective intelligence (CI) is shared or group intelligence that emerges from the collaboration, collective efforts, and competition of many individuals, and appears in consensus decision making. Swarm intelligence (SI) is a subset of collective intelligence and describes the collective behavior of decentralized, self-organized systems, natural or artificial. In this tutorial, we will look at how to design a multi-robot cooperation model using swarm intelligence. This article is an excerpt from Intelligent IoT Projects in 7 Days by Agus Kurniawan. In this book, you will learn how to build your own intelligent Internet of Things projects.

What is swarm intelligence

Swarm intelligence is inspired by the collective behavior of social animal colonies such as ants, birds, wasps, and honey bees. These animals work together to achieve a common goal. Swarm intelligence phenomena can be found in our environment, for example in deep-sea animals, as in the photograph of a school of fish swimming in formation captured in Cabo Pulmo (image source: http://octavioaburto.com/cabo-pulmo).

Drawing on swarm intelligence studies, these techniques are applied to coordinate autonomous robots. Each robot can be described as a self-organizing system, and each one negotiates with the others on how to achieve the goal. There are various algorithms that implement swarm intelligence. The following is a list of swarm intelligence types that researchers and developers apply to their problems:

Particle swarm optimization
Ant system
Ant colony system
Bees algorithm
Bacterial foraging optimization algorithm

The Particle Swarm Optimization (PSO) algorithm is inspired by the social foraging behavior of some animals, such as the flocking behavior of birds and the schooling behavior of fish. A sample PSO implementation in Python can be found at https://gist.github.com/btbytes/79877. This program needs the numpy library. numpy (Numerical Python) is a package for scientific computing with Python. Your computer should have Python installed; if not, you can download and install it from https://www.python.org. If your computer does not have numpy, you can install it by typing this command in a terminal (Linux and Mac platforms):

$ pip install numpy

For the Windows platform, refer to https://www.scipy.org/install.html to install numpy. You can copy the following code into your editor.
Save it as code_1.py and then run it on your computer using the terminal:

from numpy import array
from random import random
from math import sin, sqrt

iter_max = 10000
pop_size = 100
dimensions = 2
c1 = 2
c2 = 2
err_crit = 0.00001

class Particle:
    pass

def f6(param):
    '''Schaffer's F6 function'''
    para = param[0:2]
    num = (sin(sqrt((para[0] * para[0]) + (para[1] * para[1])))) * \
          (sin(sqrt((para[0] * para[0]) + (para[1] * para[1])))) - 0.5
    denom = (1.0 + 0.001 * ((para[0] * para[0]) + (para[1] * para[1]))) * \
            (1.0 + 0.001 * ((para[0] * para[0]) + (para[1] * para[1])))
    f6 = 0.5 - (num / denom)
    errorf6 = 1 - f6
    return f6, errorf6

# initialize the particles
particles = []
for i in range(pop_size):
    p = Particle()
    p.params = array([random() for _ in range(dimensions)])
    p.fitness = 0.0
    p.best = p.params
    p.v = 0.0
    particles.append(p)

# let the first particle be the global best
gbest = particles[0]
err = 999999999
i = 0
while i < iter_max:
    for p in particles:
        fitness, err = f6(p.params)
        if fitness > p.fitness:
            p.fitness = fitness
            p.best = p.params
        if fitness > gbest.fitness:
            gbest = p
        v = p.v + c1 * random() * (p.best - p.params) \
              + c2 * random() * (gbest.params - p.params)
        p.params = p.params + v
    i += 1
    if err < err_crit:
        break
    # progress bar: '.' = 10%
    if i % (iter_max / 10) == 0:
        print '.'

print '\nParticle Swarm Optimisation\n'
print 'PARAMETERS\n', '-' * 9
print 'Population size : ', pop_size
print 'Dimensions      : ', dimensions
print 'Error Criterion : ', err_crit
print 'c1              : ', c1
print 'c2              : ', c2
print 'function        :  f6'
print 'RESULTS\n', '-' * 7
print 'gbest fitness   : ', gbest.fitness
print 'gbest params    : ', gbest.params
print 'iterations      : ', i + 1

## Uncomment to print particles
#for p in particles:
#    print 'params: %s, fitness: %s, best: %s' % (p.params, p.fitness, p.best)

You can run this program by typing this command:

$ python code_1.py

This program generates the PSO output parameters based on the input; you can see the PARAMETERS values in the program output. The commented-out loop at the end of the code can be enabled to print every particle's parameters after the iteration process.

Introducing multi-robot cooperation

Communicating and negotiating among robots is challenging. We should ensure our robots avoid collisions while they are moving; meanwhile, they should achieve their goals collectively. For example, Keisuke Uto has created a multi-robot implementation in which robots arrange themselves into a specific formation. They take input from their cameras: to obtain the correct robot formation, the system uses a camera to detect the current formation, and each robot is labeled so that the system can identify it. By applying image processing, Keisuke shows how multiple robots create a formation using multi-robot cooperation. If you are interested, you can read about the project at https://www.digi.com/blog/xbee/multi-robot-formation-control-by-self-made-robots/.

Designing a multi-robot cooperation model using swarm intelligence

A multi-robot cooperation model enables a group of robots to work collectively to achieve a specific purpose. Achieving multi-robot cooperation is challenging; several aspects should be considered in order to get an optimized implementation. The objective, hardware, pricing, and algorithm can all have an impact on your multi-robot design. In this section, we will review some key aspects of designing multi-robot cooperation. This is important, since developing a robot requires multi-disciplinary skills.
Define objectives

The first step in developing multi-robot swarm intelligence is to define the objectives. We should state clearly what the goal of the multi-robot implementation is. For instance, we can develop a multi-robot system for soccer games or to find and fight fire. After defining the objectives, we can continue to gather all the materials to achieve them: the robot platform, sensors, and algorithms are components that we should have.

Selecting a robot platform

The robot platform is the MCU model that will be used. There are several MCU platforms that you can use for a multi-robot implementation. Arduino, Raspberry Pi, ESP8266, ESP32, TI LaunchPad, and BeagleBone are examples of MCU platforms that can probably be applied to your case. Sometimes, you may need to consider the price parameter when deciding on a robot platform. Some researchers and makers build their robot devices with minimal hardware to get optimized functionality, and they also share their hardware and software designs. I recommend you visit Open Robotics, https://www.osrfoundation.org, to explore robot projects that might fit your problem. Alternatively, you can consider using robot kits. Using a kit means you don't need to solder electronic components; it is ready to use. You can find robot kits in online stores such as Pololu (https://www.pololu.com), SparkFun (https://www.sparkfun.com), DFRobot (https://www.dfrobot.com), and Makeblock (http://www.makeblock.com).

Selecting the algorithm for swarm intelligence

The choice of algorithm, especially for swarm intelligence, should be connected to the robot platform being used. We already know that some robot hardware has computational limitations; applying complex algorithms to computationally limited devices can drain the hardware's battery. You must research the best parameters for implementing a multi-robot system.

Implementing swarm intelligence in swarm robots can be described as a loop (a minimal sketch appears at the end of this article). A swarm robot system performs sensing to gather information about its environment, including detecting the presence of peer robots. By combining inputs from sensors and peers, we can actuate the robots based on the result of our swarm intelligence computation. Actuation can be movement and other actions.

We designed a multi-robot cooperation model using swarm intelligence. To learn how to create more smart IoT projects, check out the book Intelligent IoT Projects in 7 Days.

AI-powered Robotics: Autonomous machines in the making
How to assemble a DIY selfie drone with Arduino and ESP8266
Tips and tricks for troubleshooting and flying drones safely
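As promised above, here is a minimal Python sketch of the sensing, computation, and actuation loop. All names (robot, read_sensors, receive_peer_states, pso_step, drive) are hypothetical placeholders, not APIs from the book:

def swarm_loop(robot):
    # Runs on each robot; swarm behavior emerges from all robots
    # executing the same loop while exchanging state with their peers.
    while robot.active:
        local_obs = robot.read_sensors()           # sensing: environment plus peer detection
        peer_states = robot.receive_peer_states()  # communication/negotiation with peers
        # swarm intelligence computation, e.g. a PSO-style update that
        # blends the robot's own best-known state with the swarm's best
        velocity = pso_step(local_obs, peer_states, robot.state)
        robot.drive(velocity)                      # actuation: movement or other actions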

PyTorch 1.0 preview release is production ready with torch.jit, c10d distributed library, C++ API

Aarthi Kumaraswamy
02 Oct 2018
4 min read
Back in May, the PyTorch team shared their roadmap for the PyTorch 1.0 release and highlighted that this most anticipated version would not only continue to provide stability and simplicity of use to its users, but would also make the framework production ready while offering a hassle-free migration experience. Today, Facebook announced the release of PyTorch 1.0 RC1. The official announcement states, "PyTorch 1.0 accelerates the workflow involved in taking breakthrough research in artificial intelligence to production deployment. With deeper cloud service support from Amazon, Google, and Microsoft, and tighter integration with technology providers ARM, Intel, IBM, NVIDIA, and Qualcomm, developers can more easily take advantage of PyTorch's ecosystem of compatible software, hardware, and developer tools. The more software and hardware that is compatible with PyTorch 1.0, the easier it will be for AI developers to quickly build, train, and deploy state-of-the-art deep learning models."

PyTorch is an open source, Python-based deep learning framework that provides powerful GPU acceleration. PyTorch is known for advanced indexing and functions, its imperative style, integration support, and API simplicity. These are key reasons why developers prefer PyTorch for research and hackability. On the downside, it has struggled with adoption in production environments. The PyTorch team acknowledged this in their roadmap and have worked on improving this aspect significantly in PyTorch 1.0, not just by improving the library but also by enriching the ecosystem through partnerships with key software and hardware vendors.

"One of its biggest downsides has been production-support. What we mean by production-support is the countless things one has to do to models to run them efficiently at massive scale:

exporting to C++-only runtimes for use in larger projects
optimizing mobile systems on iPhone, Android, Qualcomm and other systems
using more efficient data layouts and performing kernel fusion to do faster inference (saving 10% of speed or memory at scale is a big win)
quantized inference (such as 8-bit inference)"

stated the PyTorch team in their roadmap post. Below are some key highlights of this major milestone for PyTorch.

JIT

The JIT is a set of compiler tools for bridging the gap between research in PyTorch and production. It includes a language called Torch Script (a subset of Python), and two ways (tracing mode and script mode) in which existing code can be made compatible with the JIT; a minimal tracing sketch appears at the end of this article. Torch Script code can be aggressively optimized, and it can be serialized for later use with the new C++ API, which doesn't depend on Python at all.

torch.distributed new "C10D" library

The torch.distributed package and the torch.nn.parallel.DistributedDataParallel module are backed by the new "C10D" library. The main highlights of the new library are:

C10D is performance driven and operates entirely asynchronously for all backends: Gloo, NCCL, and MPI.
Significant distributed data parallel performance improvements, especially for slower networks such as ethernet-based hosts.
Async support for all distributed collective operations in the torch.distributed package.
Send and recv support in the Gloo backend.

C++ Frontend [API Unstable]

The C++ frontend is a pure C++ interface to the PyTorch backend that follows the API and architecture of the established Python frontend. It is intended to enable research in high-performance, low-latency, and bare-metal C++ applications.
It provides equivalents to torch.nn, torch.optim, torch.data, and other components of the Python frontend. The C++ frontend is marked as "API Unstable" as part of PyTorch 1.0. This means it is ready to be used for building research applications, but still has some open construction sites that will stabilize over the next month or two. In other words, it is not yet ready for use in production.

N-dimensional empty tensors, a collection of new operators inspired by numpy and scipy, and new distributions such as the Weibull, negative binomial, and multivariate log gamma distributions have been introduced. There have also been a lot of breaking changes, bug fixes, and other improvements made to PyTorch 1.0. For more details, read the official announcement and the official release notes for PyTorch.

What is PyTorch and how does it work?
Build your first neural network with PyTorch [Tutorial]
Is Facebook-backed PyTorch better than Google's TensorFlow?
Can a production-ready Pytorch 1.0 give TensorFlow a tough time?
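As promised above, here is a minimal sketch of the tracing workflow. It assumes the PyTorch 1.0 preview is installed; the module itself is an arbitrary example, not code from the announcement:

import torch
import torch.nn.functional as F

class TwoLayerNet(torch.nn.Module):
    def __init__(self):
        super(TwoLayerNet, self).__init__()
        self.fc1 = torch.nn.Linear(4, 8)
        self.fc2 = torch.nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(F.relu(self.fc1(x)))

model = TwoLayerNet()
example = torch.rand(1, 4)

# Tracing mode: run the module once with an example input and record
# the executed operations as a Torch Script graph.
traced = torch.jit.trace(model, example)

# The traced module can be serialized and later loaded without a
# Python dependency, for example from the new C++ API.
traced.save("two_layer_net.pt")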

Highcharts Configurations

Packt
21 Jan 2015
53 min read
This article is written by Joe Kuan, the author of Learning Highcharts 4. All Highcharts graphs share the same configuration structure, and it is crucial for us to become familiar with the core components. However, it is not possible to go through all the configurations within the book. In this article, we will explore the functional properties that are most used and demonstrate them with examples. We will learn how Highcharts manages layout, and then explore how to configure axes, specify single series and multiple series data, followed by looking at formatting and styling tool tips in both JavaScript and HTML. After that, we will get to know how to polish our charts with various types of animations and apply color gradients. Finally, we will explore the drilldown interactive feature. In this article, we will cover the following topics:

Understanding Highcharts layout
Framing the chart with axes

(For more resources related to this topic, see here.)

Configuration structure

In the Highcharts configuration object, the components at the top level represent the skeleton structure of a chart. The following is a list of the major components that are covered in this article:

chart: This has configurations for the top-level chart properties such as layouts, dimensions, events, animations, and user interactions
series: This is an array of series objects (consisting of data and specific options) for single and multiple series, where the series data can be specified in a number of ways
xAxis/yAxis/zAxis: This has configurations for all the axis properties such as labels, styles, range, intervals, plot lines, plot bands, and backgrounds
tooltip: This has the layout and format style configurations for the series data tool tips
drilldown: This has configurations for drilldown series and the ID field associated with the main series
title/subtitle: This has the layout and style configurations for the chart title and subtitle
legend: This has the layout and format style configurations for the chart legend
plotOptions: This contains all the plotting options, such as display, animation, and user interactions, for common series and specific series types
exporting: This has configurations that control the layout and the function of print and export features

For reference information concerning all configurations, go to http://api.highcharts.com.

Understanding Highcharts' layout

Before we start to learn how Highcharts layout works, it is imperative that we understand some basic concepts first. First, set a border around the plot area. To do that we can set the plotBorderWidth and plotBorderColor options in the chart section, as follows:

chart: {
    renderTo: 'container',
    type: 'spline',
    plotBorderWidth: 1,
    plotBorderColor: '#3F4044'
},

The second border is set around the Highcharts container. Next, we extend the preceding chart section with additional settings:

chart: {
    renderTo: 'container',
    ....
    borderColor: '#a1a1a1',
    borderWidth: 2,
    borderRadius: 3
},

This sets the container border color with a width of 2 pixels and a corner radius of 3 pixels. As we can see, there is a border around the container, and this is the boundary that the Highcharts display cannot exceed. By default, Highcharts displays have three different areas: spacing, labeling, and plot area. The plot area is the area inside the inner rectangle that contains all the plot graphics.
The labeling area is the area where labels such as title, subtitle, axis title, legend, and credits go, around the plot area, so that it is between the edge of the plot area and the inner edge of the spacing area. The spacing area is the area between the container border and the outer edge of the labeling area. The following screenshot shows three different kinds of areas. A gray dotted line is inserted to illustrate the boundary between the spacing and labeling areas. Each chart label position can be operated in one of the following two layouts: Automatic layout: Highcharts automatically adjusts the plot area size based on the labels' positions in the labeling area, so the plot area does not overlap with the label element at all. Automatic layout is the simplest way to configure, but has less control. This is the default way of positioning the chart elements. Fixed layout: There is no concept of labeling area. The chart label is specified in a fixed location so that it has a floating effect on the plot area. In other words, the plot area side does not automatically adjust itself to the adjacent label position. This gives the user full control of exactly how to display the chart. The spacing area controls the offset of the Highcharts display on each side. As long as the chart margins are not defined, increasing or decreasing the spacing area has a global effect on the plot area measurements in both automatic and fixed layouts. Chart margins and spacing settings In this section, we will see how chart margins and spacing settings have an effect on the overall layout. Chart margins can be configured with the properties margin, marginTop, marginLeft, marginRight, and marginBottom, and they are not enabled by default. Setting chart margins has a global effect on the plot area, so that none of the label positions or chart spacing configurations can affect the plot area size. Hence, all the chart elements are in a fixed layout mode with respect to the plot area. The margin option is an array of four margin values covered for each direction, the same as in CSS, starting from north and going clockwise. Also, the margin option has a lower precedence than any of the directional margin options, regardless of their order in the chart section. Spacing configurations are enabled by default with a fixed value on each side. These can be configured in the chart section with the property names spacing, spacingTop, spacingLeft, spacingBottom, and spacingRight. In this example, we are going to increase or decrease the margin or spacing property on each side of the chart and observe the effect. The following are the chart settings:             chart: {                renderTo: 'container',                type: ...                marginTop: 10,                marginRight: 0,                spacingLeft: 30,                spacingBottom: 0            }, The following screenshot shows what the chart looks like: The marginTop property fixes the plot area's top border 10 pixels away from the container border. It also changes the top border into fixed layout for any label elements, so the chart title and subtitle float on top of the plot area. The spacingLeft property increases the spacing area on the left-hand side, so it pushes the y axis title further in. As it is in automatic layout (without declaring marginLeft), it also pushes the plot area's west border in. Setting marginRight to 0 will override all the default spacing on the chart's right-hand side and change it to fixed layout mode. 
Finally, setting spacingBottom to 0 makes the legend touch the lower bar of the container, so it also stretches the plot area downwards. This is because the bottom edge is still in automatic layout, even though spacingBottom is set to 0.

Chart label properties

Chart labels such as xAxis.title, yAxis.title, legend, title, subtitle, and credits share common property names, as follows:

align: This is for the horizontal alignment of the label. Possible keywords are 'left', 'center', and 'right'. For the axis title, they are 'low', 'middle', and 'high'.
floating: This gives the label position a floating effect on the plot area. Setting this to true causes the label position to have no effect on the adjacent plot area's boundary.
margin: This is the margin setting between the label and the side of the plot area adjacent to it. Only certain label types have this setting.
verticalAlign: This is for the vertical alignment of the label. The keywords are 'top', 'middle', and 'bottom'.
x: This is for horizontal positioning in relation to alignment.
y: This is for vertical positioning in relation to alignment.

As for the labels' x and y positioning, they are not used for absolute positioning within the chart. They are designed for fine adjustment of the label alignment. (The original article includes a diagram of the coordinate directions, where the center represents the label location.)

We can experiment with these properties with a simple example of the align and y position settings, by placing both the title and subtitle next to each other. The title is shifted to the left with align set to 'left', whereas the subtitle alignment is set to 'right'. In order to make both titles appear on the same line, we change the subtitle's y position to 15, which is the same as the title's default y value:

title: {
    text: 'Web browsers ...',
    align: 'left'
},
subtitle: {
    text: 'From 2008 to present',
    align: 'right',
    y: 15
},

With this configuration, both titles are aligned on the same line. In the following subsections, we will experiment with how changes in alignment for each label element affect the layout behavior of the plot area.

Title and subtitle alignments

Title and subtitle have the same layout properties; the only differences are the default values, and that title has the margin setting. Specifying verticalAlign with any value changes the label from the default automatic layout to fixed layout (it internally switches floating to true). However, manually setting the subtitle's floating property to false does not switch it back to automatic layout. The following is an example with the title in automatic layout and the subtitle in fixed layout:

title: {
    text: 'Web browsers statistics'
},
subtitle: {
    text: 'From 2008 to present',
    verticalAlign: 'top',
    y: 60
},

The verticalAlign property for the subtitle is set to 'top', which switches the layout into fixed layout, and the y offset is increased to 60. The y offset pushes the subtitle's position further down. Because the plot area is no longer in an automatic layout relationship with the subtitle, the top border of the plot area goes above the subtitle. However, the plot area is still in automatic layout with respect to the title, so the title remains above the plot area.

Legend alignment

Legends show different behavior for the verticalAlign and align properties. Apart from setting the alignment to 'center', all other settings in verticalAlign and align remain in automatic positioning.
The following is an example of a legend located on the right-hand side of the chart. The verticalAlign property is switched to the middle of the chart, where the horizontal align is set to 'right':           legend: {                align: 'right',                verticalAlign: 'middle',                layout: 'vertical'          }, The layout property is assigned to 'vertical' so that it causes the items inside the legend box to be displayed in a vertical manner. As we can see, the plot area is automatically resized for the legend box: Note that the border decoration around the legend box is disabled in the newer version. To display a round border around the legend box, we can add the borderWidth and borderRadius options using the following:           legend: {                align: 'right',                verticalAlign: 'middle',                layout: 'vertical',                borderWidth: 1,                borderRadius: 3          }, Here is the legend box with a round corner border: Axis title alignment Axis titles do not use verticalAlign. Instead, they use the align setting, which is either 'low', 'middle', or 'high'. The title's margin value is the distance between the axis title and the axis line. The following is an example of showing the y-axis title rotated horizontally instead of vertically (which it is by default) and displayed on the top of the axis line instead of next to it. We also use the y property to fine-tune the title location:             yAxis: {                title: {                    text: 'Percentage %',                    rotation: 0,                    y: -15,                    margin: -70,                    align: 'high'                },                min: 0            }, The following is a screenshot of the upper-left corner of the chart showing that the title is aligned horizontally at the top of the y axis. Alternatively, we can use the offset option instead of margin to achieve the same result. Credits alignment Credits is a bit different from other label elements. It only supports the align, verticalAlign, x, and y properties in the credits.position property (shorthand for credits: { position: … }), and is also not affected by any spacing setting. Suppose we have a graph without a legend and we have to move the credits to the lower-left area of the chart, the following code snippet shows how to do it:             legend: {                enabled: false            },            credits: {                position: {                   align: 'left'                },                text: 'Joe Kuan',                href: 'http://joekuan.wordpress.com'            }, However, the credits text is off the edge of the chart, as shown in the following screenshot: Even if we move the credits label to the right with x positioning, the label is still a bit too close to the x axis interval label. We can introduce extra spacingBottom to put a gap between both labels, as follows:             chart: {                   spacingBottom: 30,                    ....            },            credits: {                position: {                   align: 'left',                   x: 20,                   y: -7                },            },            .... The following is a screenshot of the credits with the final adjustments: Experimenting with an automatic layout In this section, we will examine the automatic layout feature in more detail. 
For the sake of simplifying the example, we will start with only the chart title and without any chart spacing settings:

chart: {
    renderTo: 'container',
    // border and plotBorder settings
    borderWidth: 2,
    .....
},
title: {
    text: 'Web browsers statistics'
},

From the preceding example, the chart title appears as expected between the container and the plot area's borders. The space between the title and the top border of the container comes from the default spacingTop setting for the spacing area (a default value of 10 pixels). The gap between the title and the top border of the plot area is the default setting for title.margin, which is 15 pixels.

By setting spacingTop in the chart section to 0, the chart title moves up next to the container's top border, and the plot area is automatically expanded upwards. If we then set title.margin to 0, the plot area border moves further up, so the height of the plot area increases further. As you may notice, there is still a gap of a few pixels between the top border and the chart title. This is actually due to the default value of the title's y position setting, which is 15 pixels, large enough for the default title font size. The following is the chart configuration for setting all the spaces between the container and the plot area to 0:

chart: {
    renderTo: 'container',
    // border and plotBorder settings
    .....
    spacingTop: 0
},
title: {
    text: null,
    margin: 0,
    y: 0
}

If we set title.y to 0, all the gap between the top edge of the plot area and the top container edge closes up; the chart title is then no longer visible, as it has been shifted above the container. Interestingly, if we work backwards to the first example, the default distance between the top of the plot area and the top of the container is calculated as:

spacingTop + title.margin + title.y = 10 + 15 + 15 = 40

Therefore, changing any of these three variables automatically adjusts the plot area's distance from the top container bar. Each of these offset variables has its own purpose in the automatic layout. Spacing is for the gap between the container and the chart content; thus, if we want to display a chart nicely spaced with other elements on a web page, spacing elements should be used. Equally, if we want to use a specific font size for the label elements, we should consider adjusting the y offset, so that the labels keep their distance and do not interfere with other components in the chart.

Experimenting with a fixed layout

In the preceding section, we learned how the plot area dynamically adjusts itself. In this section, we will see how we can manually position the chart labels. First, we start with the example code from the beginning of the Experimenting with an automatic layout section and set the chart title's verticalAlign to 'bottom', as follows:

chart: {
    renderTo: 'container',
    // border and plotBorder settings
    .....
},
title: {
    text: 'Web browsers statistics',
    verticalAlign: 'bottom'
},

The chart title is moved to the bottom of the chart, next to the lower border of the container.
Notice that this setting has changed the title into floating mode; more importantly, the legend still remains in the default automatic layout of the plot area: Be aware that we haven't specified spacingBottom, which has a default value of 15 pixels in height when applied to the chart. This means that there should be a gap between the title and the container bottom border, but none is shown. This is because the title.y position has a default value of 15 pixels in relation to spacing. According to the diagram in the Chart label properties section, this positive y value pushes the title towards the bottom border; this compensates for the space created by spacingBottom. Let's make a bigger change to the y offset position this time to show that verticalAlign is floating on top of the plot area:  title: {     text: 'Web browsers statistics',     verticalAlign: 'bottom',     y: -90 }, The negative y value moves the title up, as shown here: Now the title is overlapping the plot area. To demonstrate that the legend is still in automatic layout with regard to the plot area, here we change the legend's y position and the margin settings, which is the distance from the axis label:                legend: {                   margin: 70,                   y: -10               }, This has pushed up the bottom side of the plot area. However, the chart title still remains in fixed layout and its position within the chart hasn't been changed at all after applying the new legend setting, as shown in the following screenshot: By now, we should have a better understanding of how to position label elements, and their layout policy relating to the plot area. Framing the chart with axes In this section, we are going to look into the configuration of axes in Highcharts in terms of their functional area. We will start off with a plain line graph and gradually apply more options to the chart to demonstrate the effects. Accessing the axis data type There are two ways to specify data for a chart: categories and series data. For displaying intervals with specific names, we should use the categories field that expects an array of strings. Each entry in the categories array is then associated with the series data array. Alternatively, the axis interval values are embedded inside the series data array. Then, Highcharts extracts the series data for both axes, interprets the data type, and formats and labels the values appropriately. 
The following is a straightforward example showing the use of categories:     chart: {        renderTo: 'container',        height: 250,        spacingRight: 20    },    title: {        text: 'Market Data: Nasdaq 100'    },    subtitle: {        text: 'May 11, 2012'    },    xAxis: {        categories: [ '9:30 am', '10:00 am', '10:30 am',                       '11:00 am', '11:30 am', '12:00 pm',                       '12:30 pm', '1:00 pm', '1:30 pm',                       '2:00 pm', '2:30 pm', '3:00 pm',                       '3:30 pm', '4:00 pm' ],         labels: {             step: 3         }     },     yAxis: {         title: {             text: null         }     },     legend: {         enabled: false     },     credits: {         enabled: false     },     series: [{         name: 'Nasdaq',         color: '#4572A7',         data: [ 2606.01, 2622.08, 2636.03, 2637.78, 2639.15,                 2637.09, 2633.38, 2632.23, 2632.33, 2632.59,                 2630.34, 2626.89, 2624.59, 2615.98 ]     }] The preceding code snippet produces a graph that looks like the following screenshot: The first name in the categories field corresponds to the first value, 9:30 am, 2606.01, in the series data array, and so on. Alternatively, we can specify the time values inside the series data and use the type property of the x axis to format the time. The type property supports 'linear' (default), 'logarithmic', or 'datetime'. The 'datetime' setting automatically interprets the time in the series data into human-readable form. Moreover, we can use the dateTimeLabelFormats property to predefine the custom format for the time unit. The option can also accept multiple time unit formats. This is for when we don't know in advance how long the time span is in the series data, so each unit in the resulting graph can be per hour, per day, and so on. The following example shows how the graph is specified with predefined hourly and minute formats. The syntax of the format string is based on the PHP strftime function:     xAxis: {         type: 'datetime',          // Format 24 hour time to AM/PM          dateTimeLabelFormats: {                hour: '%I:%M %P',              minute: '%I %M'          }               },     series: [{         name: 'Nasdaq',         color: '#4572A7',         data: [ [ Date.UTC(2012, 4, 11, 9, 30), 2606.01 ],                  [ Date.UTC(2012, 4, 11, 10), 2622.08 ],                   [ Date.UTC(2012, 4, 11, 10, 30), 2636.03 ],                  .....                ]     }] Note that the x axis is in the 12-hour time format, as shown in the following screenshot: Instead, we can define the format handler for the xAxis.labels.formatter property to achieve a similar effect. Highcharts provides a utility routine, Highcharts.dateFormat, that converts the timestamp in milliseconds to a readable format. In the following code snippet, we define the formatter function using dateFormat and this.value. The keyword this is the axis's interval object, whereas this.value is the UTC time value for the instance of the interval:     xAxis: {         type: 'datetime',         labels: {             formatter: function() {                 return Highcharts.dateFormat('%I:%M %P', this.value);             }         }     }, Since the time values of our data points are in fixed intervals, they can also be arranged in a cut-down version. 
All we need is to define the starting point of time, pointStart, and the regular interval between them, pointInterval, in milliseconds: series: [{     name: 'Nasdaq',     color: '#4572A7',     pointStart: Date.UTC(2012, 4, 11, 9, 30),     pointInterval: 30 * 60 * 1000,     data: [ 2606.01, 2622.08, 2636.03, 2637.78,             2639.15, 2637.09, 2633.38, 2632.23,             2632.33, 2632.59, 2630.34, 2626.89,             2624.59, 2615.98 ] }] Adjusting intervals and background We have learned how to use axis categories and series data arrays in the last section. In this section, we will see how to format interval lines and the background style to produce a graph with more clarity. We will continue from the previous example. First, let's create some interval lines along the y axis. In the chart, the interval is automatically set to 20. However, it would be clearer to double the number of interval lines. To do that, simply assign the tickInterval value to 10. Then, we use minorTickInterval to put another line in between the intervals to indicate a semi-interval. In order to distinguish between interval and semi-interval lines, we set the semi-interval lines, minorGridLineDashStyle, to a dashed and dotted style. There are nearly a dozen line style settings available in Highcharts, from 'Solid' to 'LongDashDotDot'. Readers can refer to the online manual for possible values. The following is the first step to create the new settings:             yAxis: {                 title: {                     text: null                 },                 tickInterval: 10,                 minorTickInterval: 5,                 minorGridLineColor: '#ADADAD',                 minorGridLineDashStyle: 'dashdot'            } The interval lines should look like the following screenshot: To make the graph even more presentable, we add a striping effect with shading using alternateGridColor. Then, we change the interval line color, gridLineColor, to a similar range with the stripes. The following code snippet is added into the yAxis configuration:                 gridLineColor: '#8AB8E6',                 alternateGridColor: {                     linearGradient: {                         x1: 0, y1: 1,                         x2: 1, y2: 1                     },                     stops: [ [0, '#FAFCFF' ],                              [0.5, '#F5FAFF'] ,                              [0.8, '#E0F0FF'] ,                              [1, '#D6EBFF'] ]                   } The following is the graph with the new shading background: The next step is to apply a more professional look to the y axis line. We are going to draw a line on the y axis with the lineWidth property, and add some measurement marks along the interval lines with the following code snippet:                  lineWidth: 2,                  lineColor: '#92A8CD',                  tickWidth: 3,                  tickLength: 6,                  tickColor: '#92A8CD',                  minorTickLength: 3,                  minorTickWidth: 1,                  minorTickColor: '#D8D8D8' The tickWidth and tickLength properties add the effect of little marks at the start of each interval line. We apply the same color on both the interval mark and the axis line. Then we add the ticks minorTickLength and minorTickWidth into the semi-interval lines in a smaller size. 
This gives a nice measurement mark effect along the axis, as shown in the following screenshot: Now, we apply a similar polish to the xAxis configuration, as follows:            xAxis: {                type: 'datetime',                labels: {                    formatter: function() {                        return Highcharts.dateFormat('%I:%M %P', this.value);                    },                },                gridLineDashStyle: 'dot',                gridLineWidth: 1,                tickInterval: 60 * 60 * 1000,                lineWidth: 2,                lineColor: '#92A8CD',                tickWidth: 3,                tickLength: 6,                tickColor: '#92A8CD',            }, We set the x axis interval lines to the hourly format and switch the line style to a dotted line. Then, we apply the same color, thickness, and interval ticks as on the y axis. The following is the resulting screenshot: However, there are some defects along the x axis line. To begin with, the meeting point between the x axis and y axis lines does not align properly. Secondly, the interval labels at the x axis are touching the interval ticks. Finally, part of the first data point is covered by the y-axis line. The following is an enlarged screenshot showing the issues: There are two ways to resolve the axis line alignment problem, as follows: Shift the plot area 1 pixel away from the x axis. This can be achieved by setting the offset property of xAxis to 1. Increase the x-axis line width to 3 pixels, which is the same width as the y-axis tick interval. As for the x-axis label, we can simply solve the problem by introducing the y offset value into the labels setting. Finally, to avoid the first data point touching the y-axis line, we can impose minPadding on the x axis. What this does is to add padding space at the minimum value of the axis, the first point. The minPadding value is based on the ratio of the graph width. In this case, setting the property to 0.02 is equivalent to shifting along the x axis 5 pixels to the right (250 px * 0.02). The following are the additional settings to improve the chart:     xAxis: {         ....         labels: {                formatter: ...,                y: 17         },         .....         minPadding: 0.02,         offset: 1     } The following screenshot shows that the issues have been addressed: As we can see, Highcharts has a comprehensive set of configurable variables with great flexibility. Using plot lines and plot bands In this section, we are going to see how we can use Highcharts to place lines or bands along the axis. We will continue with the example from the previous section. Let's draw a couple of lines to indicate the day's highest and lowest index points on the y axis. The plotLines field accepts an array of object configurations for each plot line. There are no width and color default values for plotLines, so we need to specify them explicitly in order to see the line. The following is the code snippet for the plot lines:       yAxis: {               ... 
,               plotLines: [{                    value: 2606.01,                    width: 2,                    color: '#821740',                    label: {                        text: 'Lowest: 2606.01',                        style: {                            color: '#898989'                        }                    }               }, {                    value: 2639.15,                    width: 2,                    color: '#4A9338',                    label: {                        text: 'Highest: 2639.15',                        style: {                            color: '#898989'                        }                    }               }]         } The following screenshot shows what it should look like: We can improve the look of the chart slightly. First, the text label for the top plot line should not be next to the highest point. Second, the label for the bottom line should be remotely covered by the series and interval lines, as follows: To resolve these issues, we can assign the plot line's zIndex to 1, which brings the text label above the interval lines. We also set the x position of the label to shift the text next to the point. The following are the new changes:              plotLines: [{                    ... ,                    label: {                        ... ,                        x: 25                    },                    zIndex: 1                    }, {                    ... ,                    label: {                        ... ,                        x: 130                    },                    zIndex: 1               }] The following graph shows the label has been moved away from the plot line and over the interval line: Now, we are going to change the preceding example with a plot band area that shows the index change between the market's opening and closing values. The plot band configuration is very similar to plot lines, except that it uses the to and from properties, and the color property accepts gradient settings or color code. We create a plot band with a triangle text symbol and values to signify a positive close. Instead of using the x and y properties to fine-tune label position, we use the align option to adjust the text to the center of the plot area (replace the plotLines setting from the above example):               plotBands: [{                    from: 2606.01,                    to: 2615.98,                    label: {                        text: '▲ 9.97 (0.38%)',                        align: 'center',                        style: {                            color: '#007A3D'                        }                    },                    zIndex: 1,                    color: {                        linearGradient: {                            x1: 0, y1: 1,                            x2: 1, y2: 1                        },                        stops: [ [0, '#EBFAEB' ],                                 [0.5, '#C2F0C2'] ,                                 [0.8, '#ADEBAD'] ,                                 [1, '#99E699']                        ]                    }               }] The triangle is an alt-code character; hold down the left Alt key and enter 30 in the number keypad. See http://www.alt-codes.net for more details. This produces a chart with a green plot band highlighting a positive close in the market, as shown in the following screenshot: Extending to multiple axes Previously, we ran through most of the axis configurations. 
Here, we explore how we can use multiple axes, which are just an array of objects containing axis configurations. Continuing from the previous stock market example, suppose we now want to include another market index, Dow Jones, along with Nasdaq. However, both indices are different in nature, so their value ranges are vastly different. First, let's examine the outcome by displaying both indices with the common y axis. We change the title, remove the fixed interval setting on the y axis, and include data for another series:             chart: ... ,             title: {                 text: 'Market Data: Nasdaq & Dow Jones'             },             subtitle: ... ,             xAxis: ... ,             credits: ... ,             yAxis: {                 title: {                     text: null                 },                 minorGridLineColor: '#D8D8D8',                 minorGridLineDashStyle: 'dashdot',                 gridLineColor: '#8AB8E6',                 alternateGridColor: {                     linearGradient: {                         x1: 0, y1: 1,                         x2: 1, y2: 1                     },                     stops: [ [0, '#FAFCFF' ],                              [0.5, '#F5FAFF'] ,                              [0.8, '#E0F0FF'] ,                              [1, '#D6EBFF'] ]                 },                 lineWidth: 2,                 lineColor: '#92A8CD',                 tickWidth: 3,                 tickLength: 6,                 tickColor: '#92A8CD',                 minorTickLength: 3,                 minorTickWidth: 1,                 minorTickColor: '#D8D8D8'             },             series: [{               name: 'Nasdaq',               color: '#4572A7',               data: [ [ Date.UTC(2012, 4, 11, 9, 30), 2606.01 ],                          [ Date.UTC(2012, 4, 11, 10), 2622.08 ],                           [ Date.UTC(2012, 4, 11, 10, 30), 2636.03 ],                          ...                        ]             }, {               name: 'Dow Jones',               color: '#AA4643',               data: [ [ Date.UTC(2012, 4, 11, 9, 30), 12598.32 ],                          [ Date.UTC(2012, 4, 11, 10), 12538.61 ],                           [ Date.UTC(2012, 4, 11, 10, 30), 12549.89 ],                          ...                        ]             }] The following is the chart showing both market indices: As expected, the index changes that occur during the day have been normalized by the vast differences in value. Both lines look roughly straight, which falsely implies that the indices have hardly changed. Let us now explore putting both indices onto separate y axes. We should remove any background decoration on the y axis, because we now have a different range of data shared on the same background. The following is the new setup for yAxis:            yAxis: [{                  title: {                     text: 'Nasdaq'                 },               }, {                 title: {                     text: 'Dow Jones'                 },                 opposite: true             }], Now yAxis is an array of axis configurations. The first entry in the array is for Nasdaq and the second is for Dow Jones. This time, we display the axis title to distinguish between them. The opposite property is to put the Dow Jones y axis onto the other side of the graph for clarity. Otherwise, both y axes appear on the left-hand side. 
The next step is to align indices from the y-axis array to the series data array, as follows:             series: [{                 name: 'Nasdaq',                 color: '#4572A7',                 yAxis: 0,                 data: [ ... ]             }, {                 name: 'Dow Jones',                 color: '#AA4643',                 yAxis: 1,                 data: [ ... ]             }]          We can clearly see the movement of the indices in the new graph, as follows: Moreover, we can improve the final view by color-matching the series to the axis lines. The Highcharts.getOptions().colors property contains a list of default colors for the series, so we use the first two entries for our indices. Another improvement is to set maxPadding for the x axis, because the new y-axis line covers parts of the data points at the high end of the x axis:             xAxis: {                 ... ,                 minPadding: 0.02,                 maxPadding: 0.02                 },             yAxis: [{                 title: {                     text: 'Nasdaq'                 },                 lineWidth: 2,                 lineColor: '#4572A7',                 tickWidth: 3,                 tickLength: 6,                 tickColor: '#4572A7'             }, {                 title: {                     text: 'Dow Jones'                 },                 opposite: true,                 lineWidth: 2,                 lineColor: '#AA4643',                 tickWidth: 3,                 tickLength: 6,                 tickColor: '#AA4643'             }], The following screenshot shows the improved look of the chart: We can extend the preceding example and have more than a couple of axes, simply by adding entries into the yAxis and series arrays, and mapping both together. The following screenshot shows a 4-axis line graph: Summary In this article, major configuration components were discussed and experimented with, and examples shown. By now, we should be comfortable with what we have covered already and ready to plot some of the basic graphs with more elaborate styles. Resources for Article: Further resources on this subject: Theming with Highcharts [article] Integrating with other Frameworks [article] Highcharts [article]

Transferring Data from MS Access 2003 to SQL Server 2008

Packt
07 Oct 2009
3 min read
Launching the Import and Export Wizard

Click Start | All Programs | SQL Server 2008 and click on Import and Export Data (32 bit). This brings up the "Welcome to the SQL Server Import and Export Wizard" page. Make sure you read the explanation on this window.

Connecting to the Source and Destination databases

You will connect to the source and the destination. In this example, the data to be transferred is in an MS Access 2003 database: a table in the Northwind.mdb file available with MS Access will be used as the source. The destination is the tempdb database (or the master database). These are chosen for no particular reason, and you could even create a new database on the server at this point.

Connecting to the Source

Click Next. The default window gets displayed, with the data source set to SQL Server Native Client 10.0. As we are transferring from MS Access 2003 to SQL Server 2008, we need an appropriate data source: click on the drop-down handle for the data source and choose Microsoft Access. The SQL Server Import and Export Wizard changes accordingly. Click on the Browse... button to locate the Northwind.mdb file on your machine. For the username, use the default, Admin. Click the Advanced button; it opens the Data Link Properties window, where you can test the connection as well as look at other connection-related properties using the Connection and Advanced tabs. Click OK in the Microsoft Data Link message box as well as in the Data Link Properties window. Click Next.

Choose Destination

The Choose a Destination page of the wizard gets displayed. The destination (SQL Server Native Client 10.0) and the server name that automatically show up are defaults; these may be different in your case. Accept them. The authentication information is also correct (the SQL Server is configured for Windows Authentication). Click on the drop-down handle for the Database field, presently displaying <default>, choose tempdb, and click on the Next button.

Choosing a table or query for the data to be transferred

You can transfer data in tables or views by copying them over, or you may create a query to collect the data you want transferred. In this article, both options will be described.

Copying a table over to the destination

The Specify Table Copy or Query page of the wizard gets displayed. First, we will copy data from a table; later, we will see how to write a query to do the data transfer. Click Next. The Select Source Tables and Views page gets displayed. When you place a check mark against, say, Customers, the Destination column gets an entry for the selection you made. We assume that just this table is transferred; however, you can see that you could transfer all of them in one shot. This window also helps you with two more important things: first, you can Preview... the data you are transferring; second, you can edit the mappings, the scheme of how source columns map to destination columns.
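Once the wizard has run, a quick sanity check from a query window confirms the transfer. The snippet below assumes the Customers table was copied into tempdb under its default name (the standard Northwind sample contains 91 customers):

USE tempdb;
GO
-- The row count should match the Customers table in Northwind.mdb
SELECT COUNT(*) AS TransferredRows FROM dbo.Customers;
-- Spot-check a few rows and columns
SELECT TOP 5 CustomerID, CompanyName, Country FROM dbo.Customers;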
Getting started with Spring Python

Packt
10 Dec 2010
12 min read
Spring Python for Python developers

You have already picked one of the most popular and technically powerful dynamic languages in which to develop software: Python. Spring Python makes it even easier to solve the common problems encountered by Python developers every day.

Exploring Spring Python's non-invasive nature

Spring Python has a non-invasive nature, which means it is easy to adopt the parts that meet your needs without rewriting huge blocks of code. For example, Pyro (http://pyro.sourceforge.net) is a third-party library that provides an easy way to make remote procedure calls. To demonstrate the Spring way of non-invasiveness, let's code a simple service, publish it as a web service using Pyro's API, and then publish it using Spring Python's wrapper around Pyro. This will show the difference in how Spring Python simplifies API access for us, and how it makes the third-party library easier to use without as much rework to our own code. First, let's write a simple service that parses out the parameters from a web request string:

class ParamParser(object):
    def parse_web_parms(self, parm):
        return [tuple(p.split("=")) for p in parm.split("&")]

Now we can write a simple, functional piece of code that uses our service in order to have a working version:

parser = ParamParser()
parser.parse_web_parms("pages=5&article=Spring_Python")

This just instantiates ParamParser and calls the function. To make this a useful internet service, it needs to be instantiated on a central server and configured to listen for calls from clients. The next step is to advertise it as a web service using the API of Pyro, which will make it reachable by multiple Pyro clients. To do this, we define a daemon that will host our service on port 9000 and initialize it:

import Pyro.core

daemon = Pyro.core.Daemon(host="localhost", port=9000)
Pyro.core.initServer()

Next, we create a Pyro object instance to act as a proxy to our service, as well as an instance of our ParamParser. We configure the proxy to delegate all method calls to our service:

pyro_proxy = Pyro.core.ObjBase()
parser = ParamParser()
pyro_proxy.delegateTo(parser)

Finally, we register the pyro_proxy object with the daemon and start a listen-dispatch loop so that it is ready to handle requests:

daemon.connect(pyro_proxy, "mywebservice")
daemon.requestLoop(True)

When we run this server code, an instance of our ParamParser is created and advertised at PYROLOC://localhost:9000/mywebservice. To make this service complete, we need to create a Pyro client that will call into our service. The proxy seamlessly transfers Python objects over the wire using the Pyro library, in this case the tuple of request parameters:

url_base = "PYROLOC://localhost:9000"
client_proxy = Pyro.core.getProxyForURI(
    url_base + "/mywebservice")
print client_proxy.parse_web_parms(
    "pages=5&article=Spring_Python")

The Pyro library is easy to use. One key factor is that our ParamParser never gets tightly coupled to the Pyro machinery used to serve it to remote clients. However, it is very invasive. What if we had already developed a simple application on a single machine, with lots of methods making use of our utility? To convert our application into a client-server application, we would have to rewrite it to use the Pyro client proxy pattern everywhere it was called. If we missed any instances, we would have bugs to clean up. If we had written automated tests, they would have to be rewritten as well.
Converting a simple, one-machine application into a multi-node application can quickly generate a lot of work. That is where Spring Python comes in. It provides a different way of creating objects, which makes it easy for us to replace a local object with a remoting mechanism such as Pyro. Let's utilize Spring Python's container to create our parser and also to serve it up with Pyro:

from springpython.config import PythonConfig
from springpython.config import Object
from springpython.remoting.pyro import PyroServiceExporter
from springpython.remoting.pyro import PyroProxyFactory

class WebServiceContainer(PythonConfig):
    def __init__(self):
        super(WebServiceContainer, self).__init__()

    @Object(lazy_init=True)
    def my_web_server(self):
        return PyroServiceExporter(service=ParamParser(),
            service_name="mywebservice", service_port=9000)

    @Object(lazy_init=True)
    def my_web_client(self):
        myService = PyroProxyFactory()
        myService.service_url = "PYROLOC://localhost:9000/mywebservice"
        return myService

With this container definition, it is easy to write both a server application and a client application. To spin up one instance of our Pyro server, we use the following code:

from springpython.context import ApplicationContext

container = ApplicationContext(WebServiceContainer())
container.get_object("my_web_server")

The client application looks very similar:

from springpython.context import ApplicationContext

container = ApplicationContext(WebServiceContainer())
myService = container.get_object("my_web_client")
myService.parse_web_parms("pages=5&article=Spring_Python")

The Spring Python container works by holding all the definitions for creating key objects. We create an instance of the container, ask it for a specific object, and then use it. At first glance this looks like just as much (if not more) code than using the Pyro API directly, so why is it considered less invasive? Looking at the last block of code, we can see that we are no longer creating the parser or the Pyro proxy ourselves. Instead, we rely on the container to create them for us. The Spring Python container decouples the creation of our parser, whether it's for a local application or joined up remotely over Pyro. The server application doesn't know that it is being exported as a Pyro service, because all that information is stored in the WebServiceContainer; any changes made to the container definition aren't seen by the server application code. The same can be said for the client: by putting creation of the client inside the container, we don't have to know whether we are getting an instance of our service or a proxy. This means that further changes can be made inside the container definition without impacting our client and server apps, and it makes it easy to split the server and client calls into separate scripts run in separate instances of Python, or on separate nodes in our enterprise.

This demonstrates how it is possible to mix remoting into an existing application. By using this pattern of delegating the creation of key objects to the container, it is easy to start with simple object creation and then layer on useful services such as remoting. Later in this book, we will also see how this makes it easy to add other services, like transactions and security. Thanks to Spring Python's open-ended design, we can easily create new services and add them on without having to alter the original framework.
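To make the decoupling concrete, here is a small sketch, not from the original article, of a container variant that serves the parser locally with no remoting at all; LocalContainer and my_parser are illustrative names, and it assumes the ParamParser class defined earlier is in scope. Code that asks the container for my_parser stays exactly the same when the definition is later swapped for a PyroProxyFactory:

from springpython.config import PythonConfig, Object
from springpython.context import ApplicationContext

class LocalContainer(PythonConfig):
    def __init__(self):
        super(LocalContainer, self).__init__()

    @Object
    def my_parser(self):
        # Swap this body for a PyroProxyFactory later;
        # code calling container.get_object("my_parser") won't notice.
        return ParamParser()

container = ApplicationContext(LocalContainer())
parser = container.get_object("my_parser")
print parser.parse_web_parms("pages=5&article=Spring_Python")

The calling code depends only on the object's name in the container, which is exactly what lets the container definition evolve from local object to remote proxy without touching the callers.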
Adding in some useful templates

In addition to the non-invasive ability to mix in services, Spring Python has several utilities that ease the use of low-level APIs through a template pattern. The template pattern captures a logical flow of steps: what occurs at each step is customizable by the developer, while the overall sequence stays the same.

One example where a template is useful is writing a SQL query. Coding SQL queries by hand using Python's database API (http://www.python.org/dev/peps/pep-0249) is very tedious: we must properly handle errors and harvest the results. The extra code involved in connecting things together and handling issues is commonly referred to as plumbing code, and the more plumbing code we have to maintain, the higher the cost. An application with dozens or hundreds of queries can become unwieldy, even cost prohibitive, to maintain. Let's look at the following code to see how Python's database API functions. We only have to write the setup code once (the connect call below uses the standard MySQLdb keyword arguments):

### One time setup
import MySQLdb

conn = MySQLdb.connect(user="me", passwd="secret",
                       host="localhost", db="springpython")

For these examples, assume a simple TvShow value object:

class TvShow(object):
    def __init__(self, title, airDate, episodeNumber, writer):
        self.title = title
        self.airDate = airDate
        self.episodeNumber = episodeNumber
        self.writer = writer

Now let's use Python's database API to perform a single query:

### Repeated for every query
def get_tv_shows():
    cursor = conn.cursor()
    results = []
    try:
        cursor.execute("""select title, air_date, episode_number, writer
                          from tv_shows where name = %s""", ("Monty Python",))
        for row in cursor.fetchall():
            tvShow = TvShow(title=row[0], airDate=row[1],
                            episodeNumber=row[2], writer=row[3])
            results.append(tvShow)
    finally:
        try:
            cursor.close()
        except Exception:
            pass
        conn.close()
    return results

The specialized code we wrote to look up TV shows is contained in the execute statement and in the part that creates an instance of TvShow. The rest is just plumbing code needed to handle errors, manage the database cursor, and iterate over the results.

This may not look like much, but have you ever developed an application with just one SQL query? With dozens or even hundreds of queries, having to repeatedly code these steps becomes overwhelming. Spring Python's DatabaseTemplate lets us inject just the query string and a row mapper, reducing the total amount of code we need to write. We need a slightly different setup than before:

"""One time setup"""
from springpython.database.core import *
from springpython.database.factory import *

connectionFactory = MySQLConnectionFactory(username="me", password="secret",
                                           hostname="localhost", db="springpython")

We also need to define a mapping that generates our TvShow objects:

class TvShowMapper(RowMapper):
    def map_row(self, row, metadata=None):
        return TvShow(title=row[0], airDate=row[1],
                      episodeNumber=row[2], writer=row[3])

With all this setup, we can now create an instance of DatabaseTemplate and use it to execute the same query with a much smaller footprint:

dt = DatabaseTemplate(connectionFactory)

"""Repeated for each query"""
results = dt.query("""select title, air_date, episode_number, writer
                      from tv_shows where name = %s""",
                   ("Monty Python",), TvShowMapper())

This example shows how we can replace 19 lines of code with a single statement using Spring Python's template solution. Object Relational Mappers (ORMs) have sprung up in response to the low-level nature of ANSI SQL. Many applications have simple object persistence requirements, and many of us would prefer working on code rather than database design.
By having a tool to help with the schema management work, these ORMs have been a great productivity boost. But they are not necessarily the answer for every use case. Some queries are very complex, involving information spread across many tables, complex calculations, or the decoding of specific values. Also, many legacy systems are denormalized and don't fit the paradigm ORMs were originally designed to handle. The complexity of such queries can require working around, or even against, ORM-based solutions, making them not worth the effort.

To alleviate the frustration of working with SQL, Spring Python's DatabaseTemplate greatly simplifies writing SQL while giving you complete freedom in mapping the results into objects, dictionaries, and tuples. DatabaseTemplate can easily augment your application whether or not you are already using an ORM. That way, simple object persistence can be managed with the ORM, while complex queries can be handed over to Spring Python's DatabaseTemplate, resulting in a nice blend of productive, functional code.

Other templates, such as TransactionTemplate, relieve you of the burden of dealing with the low-level idioms needed to code transactions correctly. Later in this book, we will learn how easy it is to add transactions to our code both programmatically and declaratively. Applying the services you need and abstracting away low-level APIs is a key part of the Spring way, and it lets us focus our time and effort on our customer's business requirements instead of our own technical ones.

Using the various components we just looked at, it isn't too hard to develop a simple Pyro service that serves up TV shows from a relational database:

from springpython.context import ApplicationContext
from springpython.database.core import *
from springpython.database.factory import *
from springpython.config import *
from springpython.remoting.pyro import *

class TvShow(object):
    # simple value object assumed by the mapper (not defined in the original excerpt)
    def __init__(self, title, airDate, episodeNumber, writer):
        self.title = title
        self.airDate = airDate
        self.episodeNumber = episodeNumber
        self.writer = writer

class TvShowMapper(RowMapper):
    def map_row(self, row, metadata=None):
        return TvShow(title=row[0], airDate=row[1],
                      episodeNumber=row[2], writer=row[3])

class TvShowService(object):
    def __init__(self):
        self.connFactory = MySQLConnectionFactory(username="me",
            password="secret", hostname="localhost", db="springpython")
        self.dt = DatabaseTemplate(self.connFactory)

    def get_tv_shows(self):
        return self.dt.query("""select title, air_date, episode_number, writer
                                from tv_shows where name = %s""",
                             ("Monty Python",), TvShowMapper())

class TvShowContainer(PythonConfig):
    def __init__(self):
        super(TvShowContainer, self).__init__()

    @Object(lazy_init=True)
    def web_server(self):
        return PyroServiceExporter(service=TvShowService(),
            service_name="tvshows", service_port=9000)

    @Object(lazy_init=True)
    def web_client(self):
        myService = PyroProxyFactory()
        myService.service_url = "PYROLOC://localhost:9000/tvshows"
        return myService

if __name__ == "__main__":
    container = ApplicationContext(TvShowContainer())
    container.get_object("web_server")

By querying the database for TV shows and serving them up through Pyro, this block of code demonstrates how easy it is to use these powerful modules together without tangling them up. It is much easier to maintain software over time when things are kept simple and separated.

We just took a quick walk through SQL and Pyro and examined their low-level APIs. Many low-level APIs require a certain sequence of steps to be used properly. We just looked at a SQL query; the same need for templates exists with database transactions and LDAP calls.
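Since transactions were just mentioned, here is a rough sketch of how the transaction case follows the same template shape. It assumes the connectionFactory and tv_shows table from the examples above, and the class and method names follow Spring Python's transaction module as documented; treat the details as a sketch rather than a definitive recipe:

from springpython.database.core import DatabaseTemplate
from springpython.database.transaction import (ConnectionFactoryTransactionManager,
                                               TransactionTemplate,
                                               TransactionCallback)

tx_manager = ConnectionFactoryTransactionManager(connectionFactory)
tx_template = TransactionTemplate(tx_manager)

class ArchiveShows(TransactionCallback):
    # Two statements that must succeed or fail together;
    # the template supplies the begin/commit/rollback plumbing.
    def do_in_transaction(self, status):
        dt = DatabaseTemplate(connectionFactory)
        dt.update("insert into tv_show_archive select * from tv_shows where name = %s",
                  ("Monty Python",))
        dt.update("delete from tv_shows where name = %s", ("Monty Python",))
        return "archived"

print tx_template.execute(ArchiveShows())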
By capturing the flow of the API in a Spring Python template and allowing you to insert your custom code, you can get out of writing plumbing code and instead work on your application's business logic.

Oracle BI Publisher 11g: Working with Multiple Data Sources

Packt
25 Oct 2011
5 min read
(For more resources on Oracle 11g, see here.)

The Data Model Editor's interface deals with all the components and functionality needed to give the data model the structure you need. The main component, however, is the Data Set. In order to create a data model structure in BIP, you can choose from a variety of data set types, such as:

SQL Query
MDX Query
Oracle BI Analysis
View Object
Web Service
LDAP Query
XML file
Microsoft Excel file
Oracle BI Discoverer
HTTP

Taking advantage of this variety requires multiple Data Sources of different types to be defined in BIP. In this article, we will see:

How data sources are configured
How the data is retrieved from different data sets
How data set type characteristics and the links between elements influence the data model structure

Administration

Let's first see how you can verify or configure your data sources. Choose the Administration link found in the upper-right corner of any of the BIP interface pages, as shown in the following screenshot:

The connection to your database can be chosen from the following connection types:

Java Database Connectivity (JDBC)
Java Naming and Directory Interface (JNDI)
Lightweight Directory Access Protocol (LDAP)
Online Analytical Processing (OLAP)

Available Data Sources

To get to your data source, BIP offers two possibilities:

You can use a connection. The available connection types are JDBC, JNDI, LDAP, and OLAP.
You can use a file.

In the following sections, the Data Source types (JDBC, JNDI, OLAP Connections, and File) will be explained in detail.

JDBC Connection

Let's take the first example. To configure a Data Source to use JDBC, from the Administration page choose JDBC Connection from the Data Sources types list, as shown in the following screenshot:

You can see the requested parameters for configuring a JDBC connection in the following screenshot:

Data Source Name: Enter a name of your choice.
Driver Type: Choose a type from the list. The related parameters are:
Database Driver Class: A driver matching your database type.
Connection String: Information containing the name of the computer on which your database server is running, the port, the database name, and so on.
Username: Enter a database username.
Password: Provide the database user's password.

The Use System User option allows you to use the operating system's credentials as your own, for example, when your MS SQL Database Server uses Windows authentication as its only authentication method. When a system administrator is in charge of these configurations, all you have to do is find out which Data Sources are available, and eventually check whether the connection works. Click on the Test Connection button at the bottom of the page to test the connection:

JNDI Connection

A JNDI connection pool is in fact another way to access your JDBC Data Sources. Using a connection pool increases efficiency by maintaining a cache of physical connections that can be reused, allowing multiple clients to share a small number of physical connections. In order to configure a Data Source to use JNDI, from the Administration page, choose JNDI Connection from the Data Sources types list.
The following screen will appear:

As you can see in the preceding screenshot, on the Add Data Source page you must enter the following parameters:

Data Source Name: Enter a name of your choice.
JNDI Name: The JNDI location for the pool set up in your application server, for example, jdbc/BIP10gSource.

Only users having roles included in the Allowed Roles list will be able to create reports using this Data Source.

OLAP Connection

Use the OLAP Connection to connect to OLAP databases. BI Publisher supports the following OLAP types:

Oracle Hyperion Essbase
Microsoft SQL Server 2000 Analysis Services
Microsoft SQL Server 2005 Analysis Services
SAP BW

In order to configure a connection to an OLAP database, from the Administration page, choose OLAP Connection from the Data Sources types list. The following screen will appear:

On the Add Data Source page, the following parameters must be entered:

Data Source Name: Enter a name of your choice.
OLAP Type: Choose a type from the list.
Connection String: Depending on the OLAP database, the connection string format is as follows:

Oracle Hyperion Essbase: [server name]
Microsoft SQL Server 2000 Analysis Services: Data Source=[server];Provider=msolap;Initial Catalog=[catalog]
Microsoft SQL Server 2005 Analysis Services: Data Source=[server];Provider=msolap.3;Initial Catalog=[catalog]
SAP BW: ASHOST=[server] SYSNR=[system number] CLIENT=[client] LANG=[language]

Username and Password: Used for OLAP database authentication.

File

Another example of a data source type is File. In order to gain access to XML or Excel files, you need a File Data Source. Setting up this kind of Data Source requires only one step: enter the path to the directory in which your files reside. You can see in the following screenshot that the demo files Data Source points to the default BIP files directory. The files need to be accessible from the BI Server (not on your local machine):
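To make the JDBC parameters more concrete, here is a hypothetical set of values for an Oracle database. The server name and credentials are illustrative, not from the original article, while the driver class and connection string follow the standard Oracle thin JDBC format:

Data Source Name:      MyOracleDS
Driver Type:           Oracle 11g
Database Driver Class: oracle.jdbc.OracleDriver
Connection String:     jdbc:oracle:thin:@dbhost.example.com:1521:orcl
Username:              bip_reports
Password:              ********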

Intelligent mobile projects with TensorFlow: Build your first Reinforcement Learning model on Raspberry Pi [Tutorial]

Bhagyashree R
05 Sep 2018
13 min read
OpenAI Gym (https://gym.openai.com) is an open source Python toolkit that offers many simulated environments to help you develop, compare, and train reinforcement learning algorithms, so you don't have to buy all the sensors and train your robot in the real environment, which can be costly in both time and money. In this article, we'll show you how to develop and train a reinforcement learning model on Raspberry Pi using TensorFlow in an OpenAI Gym simulated environment called CartPole (https://gym.openai.com/envs/CartPole-v0). This tutorial is an excerpt from a book written by Jeff Tang titled Intelligent Mobile Projects with TensorFlow.

To install OpenAI Gym, run the following commands:

git clone https://github.com/openai/gym.git
cd gym
sudo pip install -e .

You can verify that you have TensorFlow 1.6 and gym installed by running pip list:

pi@raspberrypi:~ $ pip list
gym (0.10.4, /home/pi/gym)
tensorflow (1.6.0)

Or you can start IPython and then import TensorFlow and gym:

pi@raspberrypi:~ $ ipython
Python 2.7.9 (default, Sep 17 2016, 20:26:04)
IPython 5.5.0 -- An enhanced Interactive Python.

In [1]: import tensorflow as tf
In [2]: import gym
In [3]: tf.__version__
Out[3]: '1.6.0'
In [4]: gym.__version__
Out[4]: '0.10.4'

We're now all set to use TensorFlow and gym to build some interesting reinforcement learning models running on Raspberry Pi.

Understanding the CartPole simulated environment

CartPole is an environment that can be used to train a robot to stay in balance. In the CartPole environment, a pole is attached to a cart, which moves horizontally along a track. You can take an action of 1 (accelerating right) or 0 (accelerating left) on the cart. The pole starts upright, and the goal is to prevent it from falling over. A reward of 1 is provided for every time step that the pole remains upright. An episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center.

Let's play with the CartPole environment now. First, create a new environment and find out the possible actions an agent can take in the environment:

env = gym.make("CartPole-v0")
env.action_space
# Discrete(2)
env.action_space.sample()
# 0 or 1

Every observation (state) consists of four values about the cart: its horizontal position, its velocity, its pole's angle, and its angular velocity:

obs = env.reset()
obs
# array([ 0.04052535,  0.00829587, -0.03525301, -0.00400378])

Each step (action) in the environment will result in a new observation, a reward for the action, whether the episode is done (if it is, you can't take any further steps), and some additional information:

obs, reward, done, info = env.step(1)
obs
# array([ 0.04069127,  0.2039052 , -0.03533309, -0.30759772])

Remember that action (or step) 1 means moving right, and 0 means moving left. To see how long an episode can last when you keep moving the cart right, run:

while not done:
    obs, reward, done, info = env.step(1)
    print(obs)

# [ 0.08048328  0.98696604 -0.09655727 -1.54009127]
# [ 0.1002226   1.18310769 -0.12735909 -1.86127705]
# [ 0.12388476  1.37937549 -0.16458463 -2.19063676]
# [ 0.15147227  1.5756628  -0.20839737 -2.52925864]
# [ 0.18298552  1.77178219 -0.25898254 -2.87789912]

Let's now manually go through a series of actions from start to end and print out the observation's first value (the horizontal position) and third value (the pole's angle in degrees from vertical), as these are the two values that determine whether an episode is done.
First, reset the environment and accelerate the cart right a few times:

import numpy as np

obs = env.reset()
obs[0], obs[2]*360/np.pi
# (0.008710582898326602, 1.4858315848689436)

obs, reward, done, info = env.step(1)
obs[0], obs[2]*360/np.pi
# (0.009525842685697472, 1.5936049816642313)

obs, reward, done, info = env.step(1)
obs[0], obs[2]*360/np.pi
# (0.014239775393474322, 1.040038643681757)

obs, reward, done, info = env.step(1)
obs[0], obs[2]*360/np.pi
# (0.0228521194217381, -0.17418034908781568)

You can see that the cart's position value gets bigger and bigger as it's moved right, while the pole's vertical degree gets smaller and smaller; the last step shows a negative degree, meaning the pole is tipping to the left side of the center. All this makes sense, with just a little vivid picture in your mind of your favorite dog pushing a cart with a pole.

Now change the action to accelerate the cart left (0) a few times:

obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (0.03536432554326476, -2.0525933052704954)

obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (0.04397450935915654, -3.261322987287562)

obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (0.04868738508385764, -3.812330822419413)

obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (0.04950617929263011, -3.7134404042580687)

obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (0.04643238384389254, -2.968245724428785)

obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (0.039465670006712444, -1.5760901885345346)

You may be surprised at first to see that the 0 action causes the position (obs[0]) to keep getting bigger for several steps, but remember that the cart is moving at a velocity, and one or several actions in the other direction won't decrease the position value immediately. But if you keep moving the cart to the left, you'll see that the cart's position starts becoming smaller (toward the left). Now continue the 0 action, and you'll see the position get smaller and smaller, with a negative value meaning the cart has entered the left side of the center, while the pole's angle gets bigger and bigger:

obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (0.028603948219811447, 0.46789197320636305)

obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (0.013843572459953138, 3.1726728882727504)

obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (-0.00482029774222077, 6.551160678086707)

obs, reward, done, info = env.step(0)
obs[0], obs[2]*360/np.pi
# (-0.02739315127299434, 10.619948631208114)

For the CartPole environment, the reward value returned in each step call is always 1, and the info is always {}. So that's all there is to know about the CartPole simulated environment. Now that we understand how CartPole works, let's see what kinds of policies we can come up with so that at each state (observation), we can let the policy tell us which action (step) to take in order to keep the pole upright for as long as possible, in other words, to maximize our rewards.

Using neural networks to build a better policy

To follow along, you can download the code files from the book's GitHub repository. Let's first see how to build a random policy using a simple fully connected (dense) neural network, which takes the four values of an observation as input, uses a hidden layer of four neurons, and outputs the probability of action 0, based on which the agent can sample the next action, 0 or 1:
# nn_random_policy.py
import tensorflow as tf
import numpy as np
import gym

env = gym.make("CartPole-v0")
num_inputs = env.observation_space.shape[0]

inputs = tf.placeholder(tf.float32, shape=[None, num_inputs])
hidden = tf.layers.dense(inputs, 4, activation=tf.nn.relu)
outputs = tf.layers.dense(hidden, 1, activation=tf.nn.sigmoid)
action = tf.multinomial(tf.log(tf.concat([outputs, 1-outputs], 1)), 1)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    total_rewards = []
    for _ in range(1000):
        rewards = 0
        obs = env.reset()
        while True:
            a = sess.run(action, feed_dict={inputs: obs.reshape(1, num_inputs)})
            obs, reward, done, info = env.step(a[0][0])
            rewards += reward
            if done:
                break
        total_rewards.append(rewards)
    print(np.mean(total_rewards))

Note that we use the tf.multinomial function to sample an action based on the probability distribution of actions 0 and 1, defined as outputs and 1-outputs, respectively (the sum of the two probabilities is 1). The mean of the total rewards will be around 20-something. This is a neural network generating a random policy, with no training at all.

To train the network, we use tf.nn.sigmoid_cross_entropy_with_logits to define the loss function between the network output and the desired y_target action, defined using the basic simple policy in the previous subsection, so we expect this neural network policy to achieve about the same rewards as the basic non-neural-network policy:

# nn_simple_policy.py
import tensorflow as tf
import numpy as np
import gym

env = gym.make("CartPole-v0")
num_inputs = env.observation_space.shape[0]

inputs = tf.placeholder(tf.float32, shape=[None, num_inputs])
y = tf.placeholder(tf.float32, shape=[None, 1])

hidden = tf.layers.dense(inputs, 4, activation=tf.nn.relu)
logits = tf.layers.dense(hidden, 1)
outputs = tf.nn.sigmoid(logits)
action = tf.multinomial(tf.log(tf.concat([outputs, 1-outputs], 1)), 1)

cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
optimizer = tf.train.AdamOptimizer(0.01)
training_op = optimizer.minimize(cross_entropy)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        obs = env.reset()
        while True:
            y_target = np.array([[1. if obs[2] < 0 else 0.]])
            a, _ = sess.run([action, training_op],
                            feed_dict={inputs: obs.reshape(1, num_inputs), y: y_target})
            obs, reward, done, info = env.step(a[0][0])
            if done:
                break
    print("training done")

We define outputs as a sigmoid function of the logits net output, that is, the probability of action 0, and then use tf.multinomial to sample an action. Note that we use the standard tf.train.AdamOptimizer and its minimize method to train the network. To test and see how good the policy is, run the following code (in the same session):

total_rewards = []
for _ in range(1000):
    rewards = 0
    obs = env.reset()
    while True:
        a = sess.run(action, feed_dict={inputs: obs.reshape(1, num_inputs)})
        obs, reward, done, info = env.step(a[0][0])
        rewards += reward
        if done:
            break
    total_rewards.append(rewards)
print(np.mean(total_rewards))

We're now all set to explore how we can implement a policy gradient method on top of this to make our neural network perform much better, getting rewards several times larger.
The basic idea of a policy gradient is that, in order to train a neural network to generate a better policy when all the agent knows from the environment is the rewards it can get for taking an action from any given state, we can adopt two new mechanisms:

Discounted rewards: Each action's value needs to consider its future action rewards. For example, an action that gets an immediate reward of 1 but ends the episode two actions (steps) later should have a smaller long-term reward than an action that gets an immediate reward of 1 but ends the episode 10 steps later.

Test runs: Test run the current policy and see which actions lead to higher discounted rewards, then update the current policy's gradients (of the loss for the weights) with the discounted rewards, in such a way that an action with higher discounted rewards will, after the network update, have a higher probability of being chosen next time. Repeat such test runs and updates many times to train a neural network for a better policy.

Implementing a policy gradient in TensorFlow

Let's now see how to implement a policy gradient for our CartPole problem in TensorFlow. First, import tensorflow, numpy, and gym, and define a helper method that calculates the normalized and discounted rewards:

import tensorflow as tf
import numpy as np
import gym

def normalized_discounted_rewards(rewards):
    dr = np.zeros(len(rewards))
    dr[-1] = rewards[-1]
    for n in range(2, len(rewards)+1):
        dr[-n] = rewards[-n] + dr[-n+1] * discount_rate
    return (dr - dr.mean()) / dr.std()

Next, create the CartPole gym environment, define the learning_rate and discount_rate hyperparameters, and build the network with four input neurons, four hidden neurons, and one output neuron as before:

env = gym.make("CartPole-v0")

learning_rate = 0.05
discount_rate = 0.95

num_inputs = env.observation_space.shape[0]
inputs = tf.placeholder(tf.float32, shape=[None, num_inputs])
hidden = tf.layers.dense(inputs, 4, activation=tf.nn.relu)
logits = tf.layers.dense(hidden, 1)
outputs = tf.nn.sigmoid(logits)
action = tf.multinomial(tf.log(tf.concat([outputs, 1-outputs], 1)), 1)
prob_action_0 = tf.to_float(1-action)
cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=prob_action_0)
optimizer = tf.train.AdamOptimizer(learning_rate)

To manually fine-tune the gradients to take into consideration the discounted rewards for each action, we first use the compute_gradients method, then update the gradients the way we want, and finally call the apply_gradients method. So let's now compute the gradients of the cross-entropy loss for the network parameters (weights and biases), and set up the gradient placeholders, which will later be fed with values that consider both the computed gradients and the discounted rewards of the actions taken using the current policy during a test run:

gvs = optimizer.compute_gradients(cross_entropy)
gvs = [(g, v) for g, v in gvs if g != None]
gs = [g for g, _ in gvs]

gps = []
gvs_feed = []

for g, v in gvs:
    gp = tf.placeholder(tf.float32, shape=g.get_shape())
    gps.append(gp)
    gvs_feed.append((gp, v))

training_op = optimizer.apply_gradients(gvs_feed)

The gvs returned from optimizer.compute_gradients(cross_entropy) is a list of tuples, where each tuple consists of a gradient (of the cross_entropy for a trainable variable) and the trainable variable.
If you run the script multiple times from IPython, the default graph of the tf object will contain the trainable variables from previous runs, so unless you call tf.reset_default_graph(), you need to use gvs = [(g, v) for g, v in gvs if g != None] to remove those obsolete training variables, which would return None gradients.

Now, play some games and save the rewards and gradient values:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for _ in range(1000):
        rewards, grads = [], []
        obs = env.reset()
        # using current policy to test play a game
        while True:
            a, gs_val = sess.run([action, gs],
                                 feed_dict={inputs: obs.reshape(1, num_inputs)})
            obs, reward, done, info = env.step(a[0][0])
            rewards.append(reward)
            grads.append(gs_val)
            if done:
                break

After the test play of a game, update the gradients with the discounted rewards and train the network (remember that training_op is defined as optimizer.apply_gradients(gvs_feed)):

        # update gradients and do the training
        nd_rewards = normalized_discounted_rewards(rewards)
        gp_val = {}
        for i, gp in enumerate(gps):
            gp_val[gp] = np.mean([grads[k][i] * reward
                                  for k, reward in enumerate(nd_rewards)], axis=0)
        sess.run(training_op, feed_dict=gp_val)

Finally, after 1,000 iterations of test play and updates, we can test the trained model:

    total_rewards = []
    for _ in range(100):
        rewards = 0
        obs = env.reset()
        while True:
            a = sess.run(action, feed_dict={inputs: obs.reshape(1, num_inputs)})
            obs, reward, done, info = env.step(a[0][0])
            rewards += reward
            if done:
                break
        total_rewards.append(rewards)
    print(np.mean(total_rewards))

Note that we now use the trained policy network and sess.run to get the next action with the current observation as input. The output mean of the total rewards will be about 200.

You can also save the trained model after training using tf.train.Saver:

    saver = tf.train.Saver()
    saver.save(sess, "./nnpg.ckpt")

Then you can reload it in a separate test program with:

with tf.Session() as sess:
    saver.restore(sess, "./nnpg.ckpt")

Now that you have a powerful neural-network-based policy model that can help your robot keep its balance, fully tested in a simulated environment, you can deploy it in a real physical environment, after replacing the simulated environment API returns with real environment data, of course; the code to build and train the neural network reinforcement learning model can certainly be easily reused.

If you liked this tutorial and would like to learn more such techniques, pick up the book Intelligent Mobile Projects with TensorFlow, authored by Jeff Tang.
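Before wrapping up, here is a quick, self-contained sanity check of the normalized_discounted_rewards helper used above; the discount_rate value matches the article's, and the printed numbers are approximate:

import numpy as np

discount_rate = 0.95

def normalized_discounted_rewards(rewards):
    dr = np.zeros(len(rewards))
    dr[-1] = rewards[-1]
    for n in range(2, len(rewards)+1):
        dr[-n] = rewards[-n] + dr[-n+1] * discount_rate
    return (dr - dr.mean()) / dr.std()

# A three-step episode with a reward of 1 at every step:
# the raw discounted sums are [1 + 0.95*1.95, 1 + 0.95*1.0, 1.0] = [2.8525, 1.95, 1.0],
# so earlier actions (which kept the episode alive longer) score higher.
print(normalized_discounted_rewards([1.0, 1.0, 1.0]))
# -> approximately [ 1.21  0.02 -1.24 ]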

Inkscape: SVG filter effects

Packt
26 Apr 2011
8 min read
Solid color and gradient vector objects can only get us so far when trying to create realistic shading and other complex effects. SVG filters give us the ability to achieve such effects on vector objects by combining various filter primitives. Inkscape comes with many different preset filters (all listed under the Filters menu) that we can use straight away, although a few tweaks might be necessary to produce the intended result. This article will show us how to make those tweaks on some of the filters, and in the final recipe we will create our very own filter.

Due to the large number of filters available, trying to find the one closest to your needs by trial and error can be time-consuming. To compare all the filter outcomes on the same object, you can open the filters.svg file that comes with your Inkscape installation in the share/examples folder. When testing filters, be sure to use objects having strokes, gradient fills, and transparency as specimens, as some filters produce varying results depending on the object's attributes.

Filter rendering is quite CPU intensive, and once filters stack up they can reduce the canvas update rate, so you might want to disable them temporarily by selecting the Menu | View | Display Mode | No filters mode if you need to concentrate on another part of the drawing. To remove all filters from an object, select it and choose Menu | Filters | Remove Filters.

Blurring

The blur effect is versatile and has many possible applications in the realm of vector graphics. It is frequently used to enhance depth perception in a drawing and to make certain elements stand out. It has its own section in the Inkscape menu (Menu | Filters | Blurs), where you will find several filter presets with descriptive names, such as "Apparition" and "Noisy". These presets combine different effects to produce a particular blur and can be modified using the Filter Editor. This recipe will show us how to use the Gaussian Blur filter and introduce some basic filter-related options.

How to do it...

The following steps will demonstrate how to use blurs:

1. Select the Ellipse tool (F5 or E) and create an ellipse inside the page area. Set its fill to Lime (#00FF00), stroke to Green (#008000), and stroke width to 32.
2. Open the Fill and Stroke dialog (Shift + Ctrl + F) and increase the blur to 7 using the Blur: slider. Notice how the edges of the ellipse (the outside edge and the one between the fill and the stroke) get more and more blurred, and how the object bounding box is now larger.
3. Open the Filter Editor by going to Filters | Filter Editor.... Notice that there is one filter listed in the Filter list. This is the blur filter applied to the ellipse, and under Effect there is one filter primitive, namely Gaussian Blur. Sometimes the filter selection in the Filter list isn't updated automatically; to refresh it, simply deselect the object (Esc) and select it again.
4. Under the Effect parameters tab, make sure that the Link button under Standard Deviation: is pressed and move the top slider to the right until you reach 25. Notice how the blur of the ellipse changes as the slider is moved, and also how the Blur: slider in the Fill and Stroke dialog changes with it (it will end up at 12.5). The object bounding box changes too.
5. Press the Link button to unlink the X from the Y standard deviation slider, and change the second (Y) slider to 0 to blur the object in the X direction only.
6. Select the Filter General Settings tab and change the X box of the Coordinates: to 0.2 and the Y box to 0.25.
7. Set the X box of the Dimensions: to 0.6 and the Y box to 0.5. Notice how the object gets clipped as we change these settings.

How it works...

We can apply the blur filter to the currently selected object by using the slider in the Fill and Stroke dialog (Blend can also be applied to layers). If we want to apply blurring through the Filter Editor, we can use the Simple blur that can be accessed from the menu, under the Filters | ABCs category. The Blur: slider is actually the standard deviation property of the blur, although the scale is doubled; changing one automatically updates the value of the other. If we want to apply a more complex blurring effect, we have to open the Filter Editor and unlink the X and Y standard deviation options.

The size of the filter region is a setting common to all filters and is defined by the Dimensions parameters in the Filter General Settings tab of the Filter Editor. It is obvious from our blur example that the filter region needs to be larger than the original object size to avoid clipping at the original bounding box edges and get the expected result. A predefined, finite region is necessary because the standard deviation function is calculated over an infinite plane, which is both impractical and unnecessary in this context.

A more "natural selection"

Selecting filtered objects using the rubber-band selection is a hit-and-miss affair because the bounding boxes are usually larger compared to the unfiltered state. Another way of selecting can be very helpful in that case. First, empty your selection in case you have anything selected (Esc), then hold Alt and start dragging over the objects you want selected. A red line appears when you start dragging, and all objects it touches are selected. The selection must be empty beforehand; otherwise Alt triggers the movement of the selection. The Shift key can also come in handy to add any objects you missed.

Using the Blur: slider in the Fill and Stroke dialog adjusts the filter region automatically so the object isn't clipped on any side. If we want to change the region manually, we can do so through the Filter Editor. Having only one filter primitive inside the filter structure makes this the simplest filter we'll encounter; more complex filters consist of more than one filter primitive, and those primitives are usually of different types.

There's more...

Many different blur presets are present in the Blurs section, such as Motion blur, horizontal and Motion blur, vertical, which we just learned how to do manually. Using the presets can be a bit quicker, but they always apply the same amount of blurring, so if it doesn't turn out to be the right amount, you will need to adjust it manually. A lot of filters from other categories also use the Gaussian Blur filter primitive as part of their structure, and although most of them can't be labeled as blur effects, some could easily be listed under the Blurs category. Examples are the Feather filter under the ABCs category, Soft colors under the Color category (which produces the well-known Orton effect, used in photo manipulation to create hazy, dreamlike landscapes), Soft focus lens under Image effects, and Fuzzy Glow under Shadows and Glows.

Creating irregular edges using filters

In this recipe we will create a sheet of paper with burnt edges, using the Roughen and Blur content filters.

How to do it...

The following steps will demonstrate how to create burnt paper edges:

1. Select the Rectangle tool (F4 or R) and create a 400x500 px rectangle.
2. Set its fill to #fff7d5ff, stroke to #2b1100, and stroke width to 16.
3. Apply the Roughen filter by going to Filters | ABCs | Roughen. Notice how all the edges (the outside edge and the one between the fill and the stroke) turn irregular.
4. Now all that's left is to blur the edge between the fill and the stroke, which can be done by going to Filters | Blurs | Blur content.
5. Change the stroke width to 8.
6. Open the Filter Editor, select the Turbulence primitive from the Effect list, change the Type: to Fractal Noise, and change the Base Frequency: to 0.025.
7. Select the Displacement Map primitive from the Effect list and change the Scale: slider to 30. We now have our burnt sheet of paper.

How it works...

Applying Roughen and Blur content brings us close to our goal, but some tweaks to the filters are necessary. Since no clipping occurs when applying Roughen or Blur content, we compensated by halving the stroke width in this recipe.

The Turbulence filter primitive is used when our object has a property that is chaotic and random up to a point; it is often used when creating realistic textures. There are two types of Turbulence filter primitives, and it can sometimes be hard to predict which one is better for the case at hand, as with many of the settings; try experimenting with the different options available before settling on one.

As you can guess from the name, the Displacement Map shifts parts of the object from their positions. Although this description is overly simplified, we can deduce why its Scale: slider makes the object edges more or less irregular. The amount of blurring we got using the Blur content filter seems right for our example; otherwise, we would be able to tweak the amount in the Gaussian Blur primitive.

There's more...

Roughen isn't the only filter that can be used to create irregular edges, but it proved itself a good choice for the burnt paper look we wanted to achieve. Most of the other filter presets that make object edges irregular can be found under Distort, Protrusions, and Textures.
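To connect the dialogs back to the underlying markup: an Inkscape document is plain SVG, so the blur built in the first recipe is stored in the file as a filter element. The following hand-written sketch shows roughly what that looks like; the attribute values are illustrative rather than copied from an actual Inkscape save. stdDeviation carries the unlinked X and Y standard deviations, and the filter's x, y, width, and height attributes define the filter region discussed earlier (as fractions of the object's bounding box):

<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">
  <defs>
    <!-- filter region expanded past the bounding box to avoid clipping -->
    <filter id="blurX" x="-0.2" y="-0.25" width="1.4" height="1.5">
      <!-- two values: blur 25 units along X, none along Y (the unlinked case) -->
      <feGaussianBlur stdDeviation="25 0"/>
    </filter>
  </defs>
  <ellipse cx="200" cy="150" rx="120" ry="70"
           fill="#00FF00" stroke="#008000" stroke-width="32"
           filter="url(#blurX)"/>
</svg>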

Using PhpStorm in a Team

Packt
26 Dec 2014
11 min read
In this article by Mukund Chaudhary and Ankur Kumar, authors of the book PhpStorm Cookbook, we will cover the following recipes:

Getting a VCS server
Creating a VCS repository
Connecting PhpStorm to a VCS repository
Storing a PhpStorm project in a VCS repository

(For more resources related to this topic, see here.)

Getting a VCS server

The first action you have to undertake is to decide which VCS you are going to use. There are a number of systems available, such as Git and Subversion (commonly known as SVN): free and open source software that you can download and install on your development server. There is also an older system named Concurrent Versions System (CVS). All of these are meant to provide a code versioning service to you. SVN is newer and supposedly faster than CVS, and in order to keep you informed on the latest matters, this text will concentrate on the features of Subversion only.

Getting ready

So, finally, that moment has arrived when you will start working in a team by getting a VCS system for you and your team. Installing SVN on the development system can be done in two ways: easy and difficult. The difficult way can be skipped without consideration, because it is meant for developers who want to contribute to the Subversion system itself. Since you are dealing with PhpStorm, you should remember the easier way, because you have a lot more to do.

How to do it...

The installation step is very easy. There is the aptitude utility available on Debian-based systems, and there is the Yum utility available on Red Hat-based systems. Perform the following steps:

1. You just need to issue the command apt-get install subversion. The operating system's package manager will do the remaining work for you. In a very short time, after flooding the command-line console with messages, you will have the Subversion system installed.
2. To check whether the installation was successful, issue the command whereis svn. If there is a message, it means that you installed Subversion successfully.

If you do not want to bear the load of installing Subversion on your development system, you can use commercial third-party servers. But that is more of a layman's approach to solving problems, and no PhpStorm cookbook author will recommend that you do that. You are a software engineer; you should not let go easily.

How it works...

When you install the version control system, you actually install a server that provides the version control service to version control clients. The Subversion service listens for incoming connections from remote clients on port number 3690 by default.

There's more...

If you want to install the older companion, CVS, you can do that in a similar way, as shown in the following steps:

1. Download the archive for the CVS server software.
2. Unpack it from the archive using your favorite unpacking software. You can then move it to another convenient location, since you will not need to disturb this folder in the future.
3. Move into the directory; the compilation process starts there. Run # ./configure to create the make targets.
4. Having made the targets, run # make install to complete the installation procedure.

Since it is older software, you might have to compile it from the source code as the only alternative.
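Before moving on to creating a repository, it is worth a quick check that the server pieces are actually in place. The following commands are a generic sketch; exact utility availability varies by distribution:

# confirm the client and server binaries are installed
svn --version
svnserve --version

# after starting the server with "svnserve -d" (covered in the next recipe),
# confirm it is listening on the default port 3690
ss -ltn | grep 3690        # on older systems: netstat -ltn | grep 3690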
Creating a VCS repository

More often than not, a PHP programmer is expected to know some system concepts, because it is often required to change settings for the PHP interpreter. The changes could be in the form of, say, changing the execution time, adding or removing modules, and so on. In order to start working in a team, you are going to get your hands dirty with system actions.

Getting ready

You will have to create a new repository on the development server so that PhpStorm can act as a client and get connected. Here, it is important to note the difference between an SVN client and an SVN server: an SVN client can be a standalone client or an embedded client, such as an IDE, while the SVN server is a single item, a continuously running process on a server of your choice.

How to do it...

You need to be careful while performing this activity, as a single mistake can ruin your efforts. Perform the following steps:

1. There is a command, svnadmin, that you need to know. Using it, you can create a new directory on the server that will contain the code base. Be careful when selecting the directory, as it will appear in your SVN URL for the rest of its life. The command should be executed as:

svnadmin create /path/to/your/repo/

2. Having created a new repository on the server, you need to make certain settings for the server. This is just a normal phenomenon, because every server requires a configuration. The SVN server configuration is located under /path/to/your/repo/conf/ with the name svnserve.conf. You need to add these lines at the bottom of the file:

anon-access = none
auth-access = write
password-db = passwd

3. There has to be a password file to authorize the list of users who are allowed to use the repository. The password file in this case is named passwd (the default filename). The contents of the file are a number of lines, each containing a username and the corresponding password in the form username = password (in a standard Subversion setup, these entries sit under a [users] section header). Since these files are scanned by the server according to a particular algorithm, you don't have the freedom to leave deliberate spaces in the file; error messages will be displayed in those cases.

4. Having made the appropriate settings, you can now make the SVN service run so that an SVN client can access it. Issue the command svnserve -d to do that.

5. It is always good practice to keep checking whether what you do is correct. To validate proper installation, issue the command:

svn ls svn://user@host/path/to/subversion/repo/

The output will be as shown in the following screenshot:

How it works...

The svnadmin command is used to perform admin tasks on the Subversion server. The create option creates a new folder on the server that acts as the repository for access from Subversion clients. The configuration file is created by default at the time of server installation, and the lines added to it are configuration directives that control the behavior of the Subversion server. The settings shown here prevent anonymous access and restrict write operations to certain users, whose access details are listed in a separate file. The command svnserve, again run on the server side, starts the server instance. The -d switch indicates that the server should run as a daemon (system process).
This also means that your server will continue running until you manually stop it or the entire system goes down. Again, you can skip this section if you have opted for a third-party version control service provider.

Connecting PhpStorm to a VCS repository

The real utility of software comes when you use it. So, having installed the version control system, you need to be prepared to use it.

Getting ready

SVN being client-server software, having installed the server, you now need a client. You won't have difficulty searching for a good SVN client: one has been factory-provided to you inside PhpStorm. The PhpStorm SVN client accelerates your development tasks by providing detailed information about the changes made to the code. So, go ahead and connect PhpStorm to the Subversion repository you created.

How to do it...

In order to connect PhpStorm to the Subversion repository, you need to activate the Subversion view, which is available at View | Tool Windows | Svn Repositories. Perform the following steps:

1. Having activated the Subversion view, you now need to add the repository location to PhpStorm. To do that, use the + symbol in the top-left corner of the view you have opened, as shown in the following screenshot:
2. Upon selecting the Add option, PhpStorm asks for the location of the repository. You need to provide the full location of the repository.
3. Once you provide the location, you will be able to see the repository in the same Subversion view in which you pressed the Add button.

Here, you should always keep in mind the correct protocol to use. This depends on the way you installed the Subversion system on the development machine:

If you used the default installation by installing from the installer utility (apt-get or aptitude), you need to specify svn://.
If you have configured SVN to be accessible via SSH, you need to specify svn+ssh://.
If you have explicitly configured SVN to be used with the Apache web server, you need to specify http://.
If you configured SVN with Apache over the secure protocol, you need to specify https://.

Storing a PhpStorm project in a VCS repository

Here comes the actual start of the teamwork. Even if you and your other team members have connected to the repository, what advantage does that serve? What purpose is solved by merely connecting to the version control repository? Correct: the actual thing is the code that you work on. It is the code that earns you your bread.

Getting ready

You should now store a project in the Subversion repository so that the other team members can work on it and add more features to your code. It is time to add a project to version control. You don't need to start a new project from scratch to add to the repository: any project, any work that you have done and wish to have the team work on, can be added to the repository. Since the most relevant project in the current context is the cooking project, you can try adding that. There you go.

How to do it...

In order to add a project to the repository, perform the following steps:

1. Use the menu item provided at VCS | Import into version control | Share project (subversion). PhpStorm will ask you a question, as shown in the following screenshot:
2. Select the correct hierarchy to define the share target, that is, the location where your project will be saved.
3. If you wish to create tags and branches in the code base, select the corresponding checkbox.
4. It is good practice to provide comments on the commits that you make. The reason behind this becomes apparent when you sit down to create a release document; it also makes each change more understandable for the other team members.
5. PhpStorm then asks you which format you want the working copy to be in. This is related to the version of the version control software. You just need to smile, select the latest version number, and proceed, as shown in the following screenshot:
6. Having done that, PhpStorm will ask you to enter your credentials. You need to enter the same credentials that you saved in the configuration file (see the Creating a VCS repository recipe) or the credentials that your service provider gave you. You can ask PhpStorm to save the credentials for you, as shown in the following screenshot:

How it works...

It is worth understanding what is going on behind the curtains. When you do any Subversion-related task in PhpStorm, there is an inbuilt SVN client that executes the commands for you. Thus, when you add a project to version control, the code is given a version number. This makes the version system remember the state of the code base. In other words, when you add the code base to version control, you add a checkpoint that you can revisit at any point in the future, for as long as the code base is under the same version control system. Interesting phenomenon, isn't it?

There's more...

If you installed the version control software yourself and did not configure it to store passwords in encrypted form, PhpStorm will warn you about it, as shown in the following screenshot:

Summary

We got to know about version control systems, the step-by-step process of creating a VCS repository, and connecting PhpStorm to a VCS repository.
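As a closing aside to the How it works... note above: the inbuilt SVN client that PhpStorm drives in the Share project step does roughly what the standalone client would do by hand. A hypothetical command-line counterpart, with illustrative paths and messages, looks like this:

# roughly what "Share project (subversion)" automates:
svn import /path/to/cooking-project \
    svn://user@host/path/to/your/repo/cooking-project \
    -m "Initial import of the cooking project"

# then check out a working copy that is linked to the repository
svn checkout svn://user@host/path/to/your/repo/cooking-project /path/to/workdir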