
How-To Tutorials


How to create multithreaded applications in Qt

Gebin George
02 May 2018
10 min read
In today's tutorial, we will learn how to use QThread and its affiliated classes to create multithreaded applications. We will go through this by creating an example project, which processes and displays the input and output frames from a video source using a separate thread. This helps leave the GUI thread (main thread) free and responsive while more intensive processes are handled by the second thread. As mentioned earlier, we will focus mostly on the use cases common to computer vision and GUI development; however, the same (or a very similar) approach can be applied to any multithreading problem.

Note: This article is an excerpt from the book Computer Vision with OpenCV 3 and Qt5, written by Amin Ahmadi Tazehkandi. The book will help you blend the power of Qt with OpenCV to build cross-platform computer vision applications.

We will use this example project to implement multithreading using two different approaches available in Qt for working with QThread classes: first, subclassing QThread and overriding the run method, and second, using the moveToThread function available in all Qt objects, or, in other words, QObject subclasses.

Subclassing QThread

Let's start by creating an example Qt Widgets application in Qt Creator named MultithreadedCV. To start with, add the OpenCV framework to this project:

win32: {
    include("c:/dev/opencv/opencv.pri")
}
unix: !macx {
    CONFIG += link_pkgconfig
    PKGCONFIG += opencv
}
unix: macx {
    INCLUDEPATH += /usr/local/include
    LIBS += -L"/usr/local/lib" -lopencv_world
}

Then, add two label widgets to your mainwindow.ui file, shown as follows. We will use these labels to display the original and processed video from the default webcam on the computer. Make sure to set the objectName property of the label on the left to inVideo and the one on the right to outVideo. Also, set their alignment/Horizontal property to AlignHCenter.

Now, create a new class called VideoProcessorThread by right-clicking on the project PRO file and selecting Add New from the menu. Then, choose C++ Class and make sure the combo boxes and checkboxes in the new class wizard look like the following screenshot.

After your class is created, you'll have two new files in your project called videoprocessorthread.h and videoprocessorthread.cpp, in which you'll implement a video processor that works in a thread separate from the mainwindow files and the GUI thread. First, make sure that this class inherits QThread by adding the relevant include line and class inheritance, as seen here (just replace QObject with QThread in the header file). Also, make sure you include the OpenCV headers:

#include <QThread>
#include "opencv2/opencv.hpp"

class VideoProcessorThread : public QThread

You need to similarly update the videoprocessorthread.cpp file so that it calls the correct constructor:

VideoProcessorThread::VideoProcessorThread(QObject *parent)
    : QThread(parent)

Now, we need to add some required declarations to the videoprocessorthread.h file.
Add the following line to the private members area of your class:

void run() override;

And then, add the following to the signals section:

void inDisplay(QPixmap pixmap);
void outDisplay(QPixmap pixmap);

And finally, add the following code block to the videoprocessorthread.cpp file:

void VideoProcessorThread::run()
{
    using namespace cv;
    VideoCapture camera(0);
    Mat inFrame, outFrame;
    while(camera.isOpened() && !isInterruptionRequested())
    {
        camera >> inFrame;
        if(inFrame.empty())
            continue;

        bitwise_not(inFrame, outFrame);

        emit inDisplay(
            QPixmap::fromImage(
                QImage(
                    inFrame.data,
                    inFrame.cols,
                    inFrame.rows,
                    inFrame.step,
                    QImage::Format_RGB888)
                .rgbSwapped()));

        emit outDisplay(
            QPixmap::fromImage(
                QImage(
                    outFrame.data,
                    outFrame.cols,
                    outFrame.rows,
                    outFrame.step,
                    QImage::Format_RGB888)
                .rgbSwapped()));
    }
}

The run function is overridden and implemented to do the required video processing task. If you try to do the same inside the mainwindow.cpp code in a loop, you'll notice that your program becomes unresponsive and, eventually, you have to terminate it. However, with this approach, the same code now runs in a separate thread. You just need to make sure you start this thread by calling the start function, not run! Note that the run function is meant to be called internally, so you only need to reimplement it as seen in this example; however, to control the thread and its execution behavior, you need to use the following functions:

start: This can be used to start a thread if it is not already started. This function starts the execution by calling the run function we implemented. You can pass one of the following values to the start function to control the priority of the thread:
QThread::IdlePriority (this is scheduled when no other thread is running)
QThread::LowestPriority
QThread::LowPriority
QThread::NormalPriority
QThread::HighPriority
QThread::HighestPriority
QThread::TimeCriticalPriority (this is scheduled as much as possible)
QThread::InheritPriority (this is the default value, which simply inherits the priority of the parent)

terminate: This function, which should only be used in extreme cases (meaning never, hopefully), forces a thread to terminate.

setTerminationEnabled: This can be used to allow or disallow the terminate function.

wait: This function can be used to block a thread (force waiting) until the thread is finished or the timeout value (in milliseconds) is reached.

requestInterruption and isInterruptionRequested: These functions can be used to set and get the interruption request status. Using these functions is a useful approach to make sure the thread is stopped safely in the middle of a process that can go on forever.

isRunning and isFinished: These functions can be used to request the execution status of the thread.

Apart from the functions we mentioned here, QThread contains other functions useful for dealing with multithreading, such as quit, exit, idealThreadCount, and so on. It is a good idea to check them out for yourself and think about use cases for each one of them. QThread is a powerful class that can help maximize the efficiency of your applications.
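To make these control functions concrete, here is a minimal sketch (our own addition, not code from the book) showing how a QThread subclass such as VideoProcessorThread is typically started with an explicit priority and stopped cooperatively; the five-second timeout is an arbitrary assumption:

// Sketch: starting and stopping a QThread subclass cooperatively.
// Assumes run() periodically checks isInterruptionRequested(), as ours does.
VideoProcessorThread worker;
worker.start(QThread::LowPriority);   // begins execution; run() is called internally

// ... later, when shutting down:
worker.requestInterruption();         // ask the loop inside run() to exit
if (!worker.wait(5000))               // block up to 5 seconds for a clean finish
    worker.terminate();               // last resort only; terminate() is unsafe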
Let's continue with our example. In the run function, we used an OpenCV VideoCapture class to read the video frames (forever) and apply a simple bitwise_not operator to the Mat frame (we can do any other image processing at this point, so bitwise_not is only an example, and a fairly simple one, to explain our point), then converted that to QPixmap via QImage, and then sent the original and modified frames using two signals. Notice that in our loop, which will go on forever, we always check whether the camera is still open and also whether there is an interruption request to this thread.

Now, let's use our thread in MainWindow. Start by including its header file in the mainwindow.h file:

#include "videoprocessorthread.h"

Then, add the following line to the private members' section of MainWindow, in the mainwindow.h file:

VideoProcessorThread processor;

Now, add the following code to the MainWindow constructor, right after the setupUi line:

connect(&processor,
        SIGNAL(inDisplay(QPixmap)),
        ui->inVideo,
        SLOT(setPixmap(QPixmap)));

connect(&processor,
        SIGNAL(outDisplay(QPixmap)),
        ui->outVideo,
        SLOT(setPixmap(QPixmap)));

processor.start();

And then add the following lines to the MainWindow destructor, right before the delete ui; line:

processor.requestInterruption();
processor.wait();

We simply connected the two signals from the VideoProcessorThread class to the two labels we had added to the MainWindow GUI, and then started the thread as soon as the program started. We also request the thread to stop as soon as MainWindow is closed, right before the GUI is deleted. The wait function call makes sure we wait for the thread to clean up and safely finish executing before continuing with the delete instruction. Try running this code to check it for yourself. You should see something similar to the following image as soon as the program starts: the video from your default webcam should start as soon as the program starts, and it will be stopped as soon as you close the program.

Try extending the VideoProcessorThread class by passing a camera index number or a video file path into it. You can instantiate as many VideoProcessorThread classes as you want. You just need to make sure to connect the signals to the correct widgets on the GUI, and this way you can have multiple videos or cameras processed and displayed dynamically at runtime. A possible sketch of the camera-index idea follows.
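A minimal sketch of that extension (our own assumption on top of the example, not code from the book) would store the index in the constructor and open that camera inside run():

// videoprocessorthread.h (sketch): choose which camera to open
class VideoProcessorThread : public QThread
{
    Q_OBJECT

public:
    explicit VideoProcessorThread(int cameraIndex = 0, QObject *parent = nullptr)
        : QThread(parent), cameraIndex(cameraIndex)
    { }

signals:
    void inDisplay(QPixmap pixmap);
    void outDisplay(QPixmap pixmap);

private:
    void run() override;   // use cv::VideoCapture camera(cameraIndex); instead of 0
    int cameraIndex;
};

Each instance could then be constructed with a different index and its signals connected to a different pair of labels.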
Using the moveToThread function

As we mentioned earlier, you can also use the moveToThread function of any QObject subclass to make sure it is running in a separate thread. To see exactly how this works, let's repeat the same example by creating exactly the same GUI, and then creating a new C++ class (same as before), but, this time, naming it VideoProcessor. This time though, after the class is created, you don't need to inherit it from QThread; leave it as a QObject (as it is). Just add the following members to the videoprocessor.h file:

signals:
    void inDisplay(QPixmap pixmap);
    void outDisplay(QPixmap pixmap);

public slots:
    void startVideo();
    void stopVideo();

private:
    bool stopped;

The signals are exactly the same as before. stopped is a flag we'll use to help us stop the video so that it does not go on forever. startVideo and stopVideo are the functions we'll use to start and stop the processing of the video from the default webcam. Now, we can switch to the videoprocessor.cpp file and add the following code blocks. They are quite similar to what we had before, with the obvious difference that we don't need to implement the run function, since this is not a QThread subclass, and we have named our functions as we like:

void VideoProcessor::startVideo()
{
    using namespace cv;
    VideoCapture camera(0);
    Mat inFrame, outFrame;
    stopped = false;
    while(camera.isOpened() && !stopped)
    {
        camera >> inFrame;
        if(inFrame.empty())
            continue;

        bitwise_not(inFrame, outFrame);

        emit inDisplay(
            QPixmap::fromImage(
                QImage(
                    inFrame.data,
                    inFrame.cols,
                    inFrame.rows,
                    inFrame.step,
                    QImage::Format_RGB888)
                .rgbSwapped()));

        emit outDisplay(
            QPixmap::fromImage(
                QImage(
                    outFrame.data,
                    outFrame.cols,
                    outFrame.rows,
                    outFrame.step,
                    QImage::Format_RGB888)
                .rgbSwapped()));
    }
}

void VideoProcessor::stopVideo()
{
    stopped = true;
}

Now we can use it in our MainWindow class. Make sure to add the include file for the VideoProcessor class, and then add the following to the private members' section of MainWindow:

VideoProcessor *processor;

Now, add the following piece of code to the MainWindow constructor in the mainwindow.cpp file:

processor = new VideoProcessor();
processor->moveToThread(new QThread(this));
connect(processor->thread(),
        SIGNAL(started()),
        processor,
        SLOT(startVideo()));
connect(processor->thread(),
        SIGNAL(finished()),
        processor,
        SLOT(deleteLater()));
connect(processor,
        SIGNAL(inDisplay(QPixmap)),
        ui->inVideo,
        SLOT(setPixmap(QPixmap)));
connect(processor,
        SIGNAL(outDisplay(QPixmap)),
        ui->outVideo,
        SLOT(setPixmap(QPixmap)));
processor->thread()->start();

In the preceding code snippet, first, we created an instance of VideoProcessor. Note that we didn't assign any parent in the constructor, and we also made sure to define it as a pointer. This is extremely important when we intend to use the moveToThread function: an object that has a parent cannot be moved into a new thread. The second highly important lesson in this code snippet is that we should not directly call the startVideo function of VideoProcessor; it should only be invoked by connecting a proper signal to it. In this case, we have used its own thread's started signal; however, you can use any other signal that has the same signature. The rest is all about connections. In the MainWindow destructor function, add the following lines:

processor->stopVideo();
processor->thread()->quit();
processor->thread()->wait();

If you found our post useful, do check out the book Computer Vision with OpenCV 3 and Qt5, which will help you understand how to build multithreaded computer vision applications with the help of OpenCV 3 and Qt5.


Seven wrongs don’t make the one right: Solving a problem with Functional Javascript

Guest Contributor
01 May 2018
14 min read
Functional Programming has several advantages for better, more understandable, and maintainable code. Currently, it's popping up in many languages and frameworks, but let's not dwell on the theory right now; instead, let's consider a simple problem and how to solve it in a functional way.

Note: This is a guest post written by Federico Kereki, the author of the book Mastering Functional JavaScript Programming. Federico Kereki is a Uruguayan Systems Engineer, with a Masters Degree in Education, and more than thirty years' experience as a consultant, system developer, university professor, and writer.

Assume the following situation. You have developed an e-commerce site: the user can fill his shopping cart, and at the end, he must click on a BILL ME button, so his credit card will be charged. However, the user shouldn't click twice (or more) or he would be billed several times. The HTML part of your application might have something like this, somewhere:

<button id="billButton" onclick="billTheUser(some, sales, data)">Bill me</button>

And, among the scripts, you'd have something similar to this:

function billTheUser(some, sales, data) {
  window.alert("Billing the user...");
  // actually bill the user
}

(Assigning event handlers directly in HTML, the way I did it, isn't recommended. Rather, in unobtrusive fashion, you should assign the handler through code. So... do as I say, not as I do!)

This is a very barebones explanation of the problem and your web page, but it's enough for our purposes. Let's now think about ways of avoiding repeated clicks on that button. How can you manage to avoid the user clicking more than once? We'll consider several common solutions for this, and then analyze a functional way of solving the problem. OK, how many ways can you think of to solve our problem? Let us take several solutions one by one and analyze their quality.

Solution #1: hope for the best!

How can we solve the problem? The first so-called solution may seem like a joke: do nothing, tell the user not to click twice, and hope for the best! This is a weasel way of avoiding the problem, but I've seen several websites that just warn the user about the risks of clicking more than once, and actually do nothing to prevent the situation... the user got billed twice? We warned him... it's his fault!

<button id="billButton" onclick="billTheUser(some, sales, data)">Bill me</button>
<b>WARNING: PRESS ONLY ONCE, DO NOT PRESS AGAIN!!</b>

OK, so this isn't actually a solution; let's move on to more serious proposals...

Solution #2: use a global flag

The solution most people would probably think of first is using some global variable to record whether the user has already clicked on the button. You'd define a flag named something like clicked, initialized with false. When the user clicks on the button, you check the flag. If clicked was false, you change it to true and execute the function; otherwise, you don't do anything at all.

let clicked = false;
// . . .
function billTheUser(some, sales, data) {
  if (!clicked) {
    clicked = true;
    window.alert("Billing the user...");
    // actually bill the user
  }
}

This obviously works, but it has several problems that must be addressed. You are using a global variable, and you could change its value by accident. Global variables aren't a good idea, in JS or in other languages.
(For more good reasons NOT to use global variables, read http://wiki.c2.com/?GlobalVariablesAreBad.) You must also remember to re-initialize it to false when the user starts buying again. If you don't, the user won't be able to make a second buy, because paying will have become impossible. You will also have difficulties testing this code, because it depends on external things: that is, the clicked variable. So, this isn't a very good solution either... let's keep thinking!

Solution #3: remove the handler

We may go for a lateral kind of solution, and instead of having the function avoid repeated clicks, we might just remove the possibility of clicking altogether:

function billTheUser(some, sales, data) {
  document.getElementById("billButton").onclick = null;
  window.alert("Billing the user...");
  // actually bill the user
}

This solution also has some problems. The code is tightly coupled to the button, so you won't be able to reuse it elsewhere. You must remember to reset the handler; otherwise, the user won't be able to make a second buy. Testing will also be harder, because you'll have to provide some DOM elements.

We can enhance this solution a bit, and avoid coupling the function to the button, by providing the latter's id as an extra argument in the call. (This idea can also be applied to some of the solutions below.) The HTML part would be:

<button id="billButton" onclick="billTheUser('billButton', some, sales, data)">
  Bill me
</button>

(note the extra argument) and the called function would be:

function billTheUser(buttonId, some, sales, data) {
  document.getElementById(buttonId).onclick = null;
  window.alert("Billing the user...");
  // actually bill the user
}

This solution is somewhat better. But, in essence, we are still using a global element: not a variable, but the onclick value. So, despite the enhancement, this isn't a very good solution either. Let's move on.

Solution #4: change the handler

A variant of the previous solution would be not removing the click handler, but rather assigning a new one instead. We are using functions as first-class objects here, when we assign the alreadyBilled() function to the click event:

function alreadyBilled() {
  window.alert("Your billing process is running; don't click, please.");
}

function billTheUser(some, sales, data) {
  document.getElementById("billButton").onclick = alreadyBilled;
  window.alert("Billing the user...");
  // actually bill the user
}

Now here's a good point in this solution: if the user clicks a second time, he'll get a pop-up warning not to do that, but he won't be billed again. (From the point of view of the user experience, it's better.) However, this solution still has the very same objections as the previous one (code coupled to the button, need to reset the handler, harder testing), so we won't consider this to be quite good anyway.

Solution #5: disable the button

A similar idea: instead of removing the event handler, disable the button, so the user won't be able to click:

function billTheUser(some, sales, data) {
  document.getElementById("billButton").setAttribute("disabled", "true");
  window.alert("Billing the user...");
  // actually bill the user
}

This also works, but we still have the same objections as for the previous solutions (coupling the code to the button, need to re-enable the button, harder testing), so we don't like this solution either.

Solution #6: redefine the handler

Another idea: instead of changing anything in the button, let's have the event handler change itself.
The trick is in the second line; by assigning a new value to the billTheUser variable, we are actually dynamically changing what the function does! The first time you call the function, it will do its thing... but it will also change itself out of existence, by giving its name to a new function:

function billTheUser(some, sales, data) {
  billTheUser = function() {};
  window.alert("Billing the user...");
  // actually bill the user
}

There's a special trick in the solution. Functions are global, so the line billTheUser = ... actually changes the function's inner workings; from that point on, billTheUser will be a new (null) function. This solution is still hard to test. Even worse, how would you restore the functionality of billTheUser, setting it back to its original objective?

Solution #7: use a local flag

We can go back to the idea of using a flag, but instead of making it global (which was our main objection), we can use an Immediately Invoked Function Expression (IIFE) that, via a closure, makes clicked local to the function and not visible anywhere else:

var billTheUser = (clicked => {
  return (some, sales, data) => {
    if (!clicked) {
      clicked = true;
      window.alert("Billing the user...");
      // actually bill the user
    }
  };
})(false);

In particular, see how clicked gets its initial false value from the call, at the end. This solution is along the lines of the global variable solution, but using a private, local variable is an enhancement. The only objection we could find is that you'll have to rework every function that needs to be called only once to work in this fashion. (And, as we'll see below, our FP solution is similar in some ways to it.) OK, it's not too hard to do, but don't forget the Don't Repeat Yourself (D.R.Y.) advice!

A functional, higher-order solution

Now that we've gone through several "normal" solutions, let's try to be more general: after all, requiring that some function or other be executed only once isn't that outlandish, and may be required elsewhere! Let's lay down some principles:

The original function (the one that may be called only once) should do that thing, and no other.
We don't want to modify the original function in any way.
We need to have a new function that will call the original one only once.
We want a general solution that we can apply to any number of original functions.

(The first principle listed above is the single responsibility principle (the S in S.O.L.I.D.), which states that every function should be responsible for a single functionality. For more on S.O.L.I.D., check the article by "Uncle Bob" (Robert C. Martin, who wrote the five principles) at http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod.)

Can we do it? Yes; we'll write a higher-order function, which we'll be able to apply to any function, to produce a new function that will work only once. Let's see how! If we don't want to modify the original function, we'll create a higher-order function, which we'll, inspiredly, name once(). This function will receive a function as a parameter and will return a new function, which will work only a single time. (By the way, libraries such as Underscore and Lodash already have a similar function, invoked as _.once(). Ramda also provides R.once(), and most FP libraries include similar functionality, so you wouldn't actually have to program it on your own.)
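For reference, a quick hedged sketch of that library route (assuming Lodash is loaded as _; the once() we build next behaves the same way):

const billOnce = _.once(billTheUser);
billOnce(some, sales, data); // bills the user
billOnce(some, sales, data); // the original function is not invoked again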
Our once() function may seem imposing at first, but as you get accustomed to working in FP fashion, you'll get used to this sort of code and will find it to be quite understandable:

const once = fn => {
  let done = false;
  return (...args) => {
    if (!done) {
      done = true;
      fn(...args);
    }
  };
};

Let's go over some of the finer points of this function:

The first line shows that once() receives a function (fn) as its parameter.
We are defining an internal, private done variable, by taking advantage of a closure, as in Solution #7 above. We opted not to call it clicked, as earlier, because you don't necessarily need to click on a button to call the function; we went for a more general term.
The line return (...args) => ... says that once() will return a function, with some (0, 1, or more) parameters. We are using the modern spread syntax; with older versions of JS you'd have to work with the arguments object; see https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Functions/arguments for more on that. The newer syntax is simpler and shorter!
We assign done = true before calling fn(), just in case that function throws an exception. Of course, if you don't want to disable the function unless it has successfully ended, you could move the assignment just below the fn() call.
After that setting is done, we finally call the original function. Note the use of the spread operator to pass along whatever parameters the original fn() had.

So, how would we use it? We don't even need to store the newly generated function anywhere; we can simply write the onclick handler, as shown as follows:

<button id="billButton" onclick="once(billTheUser)(some, sales, data)">
  Bill me
</button>

Pay close attention to the syntax! When the user clicks on the button, the function that gets called with the (some, sales, data) arguments isn't billTheUser(), but rather the result of having called once() with billTheUser as a parameter. That result is the one that can be called only a single time. We can run a simple test:

const squeak = a => console.log(a + " squeak!!");
squeak("original");        // "original squeak!!"
squeak("original");        // "original squeak!!"
squeak("original");        // "original squeak!!"

const squeakOnce = once(squeak);
squeakOnce("only once");   // "only once squeak!!"
squeakOnce("only once");   // no output
squeakOnce("only once");   // no output

Some varieties

In one of the solutions above, we mentioned that it would be a good idea to do something every time after the first, instead of silently ignoring the user's clicks. We'll write a new higher-order function that takes a second parameter: a function to be called every time from the second call onward:

const onceAndAfter = (f, g) => {
  let done = false;
  return (...args) => {
    if (!done) {
      done = true;
      f(...args);
    } else {
      g(...args);
    }
  };
};

We have ventured further into higher-order functions; onceAndAfter() takes two functions as parameters and produces yet a third one, which includes the other two within. We could make onceAndAfter() more flexible by giving a default value for g, along the lines of const onceAndAfter = (f, g = () => {}) => ..., so if you didn't want to specify the second function, it would still work fine, because it would then call a do-nothing function. We can do a quick-and-dirty test, along the same lines as we did earlier:

const squeak = () => console.log("squeak!!");
const creak = () => console.log("creak!!");
const makeSound = onceAndAfter(squeak, creak);

makeSound(); // "squeak!!"
makeSound(); // "creak!!"
makeSound(); // "creak!!" makeSound(); // "creak!!" So, we got an even more enhanced function! Just to practice with some more varieties and possibilities, can you solve these extra questions? Everything has a limit! As an extension ofonce(), could you write a higherorder function thisManyTimes(fn,n) that would let you call fn() up to n times, but would afterward do nothing? To give an example, once(fn) and thisManyTimes(fn,1) would produce functions that behave in exactly the same way. Alternating functions. In the spirit of ouronceAndAfter()function, could youwrite an alternator() higher-order function that gets two functions as arguments, and on each call, alternatively calls one and another? The expected behavior should be as in the following example: let sayA = () => console.log("A"); let sayB = () => console.log("B"); let alt = alternator(sayA, sayB); alt(); // A alt(); // B alt(); // A alt(); // B alt(); // A alt(); // B To summarize, we've seen a simple problem, based on a real-life situation, and after analyzing several common ways of solving that, we chose a functional thinking solution. We saw how to apply FP to our problem, and we also found a more general higher order way that we could apply to similar problems, with no further code changes. Finally, we produced an even better solution, from the user experience point of view. Functional Programming can be applied all throughout your code, to enhance and simplify your development. This has been just a simple example, using higher order functions, but FP goes beyond that and provides many more tools for better coding. [author title="Author Bio"]The author, Federico Kereki, is currently a Subject Matter Expert at Globant, where he gets to use a good mixture of development frameworks, programming tools, and operating systems, such as JavaScript, Node.JS, React and Redux, SOA, Containers, and PHP, with both Windows and Linux. He has taught several computer science courses at Universidad de la República, Universidad ORT Uruguay, and Universidad de la Empresa, he has also written texts for these courses. Federico has written several articles—on JavaScript, web development, and open source topics—for magazines such as Linux Journal and LinuxPro Magazine in the United States, Linux+ and Mundo Linux in Europe, and for web sites such as Linux and IBM Developer Works. Kereki has given talks on Functional Programming with JavaScript in public conferences (such as JSCONF 2016) and used these techniques to develop Internet systems for businesses in Uruguay and abroad.[/author] Read More Introduction to the Functional Programming What is the difference between functional and object oriented programming? Manipulating functions in functional programming  


Implementing 5 Common Design Patterns in JavaScript (ES8)

Richa Tripathi
01 May 2018
14 min read
In this tutorial, we'll see how common design patterns can be used as blueprints for organizing larger structures.

Defining steps with template functions

A template is a design pattern that details the order in which a given set of operations is to be executed; however, a template does not outline the steps themselves. This pattern is useful when behavior is divided into phases that have some conceptual or side-effect dependency that requires them to be executed in a specific order. Here, we'll see how to use the template function design pattern. We assume you already have a workspace that allows you to create and run ES modules in your browser for all the recipes given below.

How to do it...

Open your command-line application and navigate to your workspace. Create a new folder named 09-01-defining-steps-with-template-functions. Copy or create an index.html file that loads and runs a main function from main.js.

Create a main.js file that defines a new abstract class named Mission:

// main.js
class Mission {
  constructor () {
    if (this.constructor === Mission) {
      throw new Error('Mission is an abstract class, must extend');
    }
  }
}

Add a function named execute that calls three instance methods: determineDestination, determinePayload, and launch:

// main.js
class Mission {
  execute () {
    this.determineDestination();
    this.determinePayload();
    this.launch();
  }
}

Create a LunarRover class that extends the Mission class, and add a constructor that assigns name to an instance property:

// main.js
class LunarRover extends Mission {
  constructor (name) {
    super();
    this.name = name;
  }
}

Implement the three methods called by Mission.execute:

// main.js
class LunarRover extends Mission {
  determineDestination() {
    this.destination = 'Oceanus Procellarum';
  }
  determinePayload() {
    this.payload = 'Rover with camera and mass spectrometer.';
  }
  launch() {
    console.log(`
Destination: ${this.destination}
Payload: ${this.payload}
Launched! Rover will arrive in a week.
`);
  }
}

Create a JovianOrbiter class that also extends the Mission class:

// main.js
class JovianOrbiter extends Mission {
  constructor (name) {
    super();
    this.name = name;
  }
  determineDestination() {
    this.destination = 'Jovian Orbit';
  }
  determinePayload() {
    this.payload = 'Orbiter with descent module.';
  }
  launch() {
    console.log(`
Destination: ${this.destination}
Payload: ${this.payload}
Launched! Orbiter will arrive in 7 years.
`);
  }
}

Create a main function that creates both concrete mission types and executes them:

// main.js
export function main() {
  const jadeRabbit = new LunarRover('Jade Rabbit');
  jadeRabbit.execute();
  const galileo = new JovianOrbiter('Galileo');
  galileo.execute();
}

Start your Python web server and open the following link in your browser: http://localhost:8000/. The output should appear as follows:

How it works...

The Mission abstract class defines the execute method, which calls the other instance methods in a particular order. You'll notice that the methods it calls are not defined by the Mission class. This implementation detail is the responsibility of the extending classes. This use of abstract classes allows child classes to be used by code that takes advantage of the interface defined by the abstract class. In the template function pattern, it is the responsibility of the child classes to define the steps. When they are instantiated, and the execute method is called, those steps are then performed in the specified order. Ideally, we'd be able to ensure that Mission.execute was not overridden by any inheriting classes. Overriding this method works against the pattern and breaks the contract associated with it.
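JavaScript has no built-in way to forbid that, but one possible guard, a hedged sketch of our own and not part of the recipe, is to have the abstract constructor reject subclasses that redefine the template method:

// Sketch: reject subclasses that override execute (not from the recipe)
class Mission {
  constructor () {
    if (this.constructor === Mission) {
      throw new Error('Mission is an abstract class, must extend');
    }
    // Any subclass that redefines execute will fail this check.
    if (this.execute !== Mission.prototype.execute) {
      throw new Error('Mission.execute must not be overridden');
    }
  }
  execute () {
    this.determineDestination();
    this.determinePayload();
    this.launch();
  }
}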
This pattern is useful for organizing data-processing pipelines. The guarantee that these steps will occur in a given order means that, if side effects are eliminated, the instances can be organized more flexibly, and the implementing class can then organize these steps in the best possible way.

Assembling customized instances with builders

The previous recipe shows how to organize the operations of a class. Sometimes, object initialization can also be complicated. In these situations, it can be useful to take advantage of another design pattern: builders. Now, we'll see how to use builders to organize the initialization of more complicated objects.

How to do it...

Open your command-line application and navigate to your workspace. Create a new folder named 09-02-assembling-instances-with-builders.

Create a main.js file that defines a new class named Mission, which takes a name constructor argument and assigns it to an instance property. Also, create a describe method that prints out some details:

// main.js
class Mission {
  constructor (name) {
    this.name = name;
  }
  describe () {
    console.log(`
The ${this.name} mission will be launched by a ${this.rocket.name} rocket,
and deliver a ${this.payload.name} to ${this.destination.name}.
`);
  }
}

Create classes named Destination, Payload, and Rocket, which receive a name property as a constructor parameter and assign it to an instance property:

// main.js
class Destination {
  constructor (name) {
    this.name = name;
  }
}

class Payload {
  constructor (name) {
    this.name = name;
  }
}

class Rocket {
  constructor (name) {
    this.name = name;
  }
}

Create a MissionBuilder class that defines the setMissionName, setDestination, setPayload, and setRocket methods:

// main.js
class MissionBuilder {
  setMissionName (name) {
    this.missionName = name;
    return this;
  }
  setDestination (destination) {
    this.destination = destination;
    return this;
  }
  setPayload (payload) {
    this.payload = payload;
    return this;
  }
  setRocket (rocket) {
    this.rocket = rocket;
    return this;
  }
}

Create a build method that creates a new Mission instance with the appropriate properties:

// main.js
class MissionBuilder {
  build () {
    const mission = new Mission(this.missionName);
    mission.rocket = this.rocket;
    mission.destination = this.destination;
    mission.payload = this.payload;
    return mission;
  }
}

Create a main function that uses MissionBuilder to create a new mission instance:

// main.js
export function main() {
  // build and describe a mission
  new MissionBuilder()
    .setMissionName('Jade Rabbit')
    .setDestination(new Destination('Oceanus Procellarum'))
    .setPayload(new Payload('Lunar Rover'))
    .setRocket(new Rocket('Long March 3B Y-23'))
    .build()
    .describe();
}

Start your Python web server and open the following link in your browser: http://localhost:8000/. Your output should appear as follows:

How it works...

The builder defines methods for assigning all the relevant properties, and defines a build method that ensures that each is called and assigned appropriately. Builders are like template functions, but instead of ensuring that a set of operations is executed in the correct order, they ensure that an instance is properly configured before returning. Because each instance method of MissionBuilder returns the this reference, the methods can be chained. The last line of the main function calls describe on the new Mission instance that is returned from the build method.
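The recipe's build method assumes every setter has been called first; a hedged variation of our own (not part of the recipe) would validate that before constructing the Mission:

// Sketch: optional validation inside MissionBuilder.build()
build () {
  for (const field of ['missionName', 'destination', 'payload', 'rocket']) {
    if (this[field] === undefined) {
      throw new Error(`MissionBuilder: ${field} has not been set`);
    }
  }
  const mission = new Mission(this.missionName);
  mission.rocket = this.rocket;
  mission.destination = this.destination;
  mission.payload = this.payload;
  return mission;
}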
Replicating instances with factories

Like builders, factories are a way of organizing object construction. They differ from builders in how they are organized. Often, the interface of a factory is a single function call. This makes factories easier to use, if less customizable, than builders. Now, we'll see how to use factories to easily replicate instances.

How to do it...

Open your command-line application and navigate to your workspace. Create a new folder named 09-03-replicating-instances-with-factories. Copy or create an index.html that loads and runs a main function from main.js.

Create a main.js file that defines a new class named Mission. Add a constructor that takes a name constructor argument and assigns it to an instance property. Also, define a simple describe method:

// main.js
class Mission {
  constructor (name) {
    this.name = name;
  }
  describe () {
    console.log(`
The ${this.name} mission will be launched by a ${this.rocket.name} rocket,
and deliver a ${this.payload.name} to ${this.destination.name}.
`);
  }
}

Create three classes named Destination, Payload, and Rocket, which take name as a constructor argument and assign it to an instance property:

// main.js
class Destination {
  constructor (name) {
    this.name = name;
  }
}

class Payload {
  constructor (name) {
    this.name = name;
  }
}

class Rocket {
  constructor (name) {
    this.name = name;
  }
}

Create a MarsMissionFactory object with a single create method that takes two arguments: name and rocket. This method should create a new Mission using those arguments:

// main.js
const MarsMissionFactory = {
  create (name, rocket) {
    const mission = new Mission(name);
    mission.destination = new Destination('Martian surface');
    mission.payload = new Payload('Mars rover');
    mission.rocket = rocket;
    return mission;
  }
}

Create a main method that creates and describes two similar missions:

// main.js
export function main() {
  // build and describe two missions
  MarsMissionFactory
    .create('Curiosity', new Rocket('Atlas V'))
    .describe();
  MarsMissionFactory
    .create('Spirit', new Rocket('Delta II'))
    .describe();
}

Start your Python web server and open the following link in your browser: http://localhost:8000/. Your output should appear as follows:

How it works...

The create method takes a subset of the properties needed to create a new mission. The remaining values are provided by the method itself. This allows factories to simplify the process of creating similar instances. In the main function, you can see that two Mars missions have been created, differing only in name and Rocket instance. We've halved the number of values needed to create an instance. This pattern can help reduce instantiation logic. In this recipe, we simplified the creation of different kinds of missions by identifying the common attributes, encapsulating those in the body of the factory function, and using arguments to supply the remaining properties. In this way, commonly used instance shapes can be created without additional boilerplate code.

Processing a structure with the visitor pattern

The patterns we've seen thus far organize the construction of objects and the execution of operations. The next pattern we'll look at is specially made to traverse and perform operations on hierarchical structures. Here, we'll be looking at the visitor pattern.

How to do it...

Open your command-line application and navigate to your workspace.
Copy the 09-02-assembling-instances-with-builders folder to a new 09-04-processing-a-structure-with-the-visitor-pattern directory.

Add a class named MissionInspector to main.js. Create a visit method that calls a corresponding method for each of the following types: Mission, Destination, Rocket, and Payload:

// main.js
/* visitor that inspects missions */
class MissionInspector {
  visit (element) {
    if (element instanceof Mission) {
      this.visitMission(element);
    } else if (element instanceof Destination) {
      this.visitDestination(element);
    } else if (element instanceof Rocket) {
      this.visitRocket(element);
    } else if (element instanceof Payload) {
      this.visitPayload(element);
    }
  }
}

Create a visitMission method that logs out an ok message:

// main.js
class MissionInspector {
  visitMission (mission) {
    console.log('Mission ok');
    mission.describe();
  }
}

Create a visitDestination method that throws an error if the destination is not in an approved list:

// main.js
class MissionInspector {
  visitDestination (destination) {
    const name = destination.name.toLowerCase();
    if (
      name === 'mercury' ||
      name === 'venus' ||
      name === 'earth' ||
      name === 'moon' ||
      name === 'mars'
    ) {
      console.log('Destination: ', name, ' approved');
    } else {
      throw new Error("Destination: '" + name + "' not approved at this time");
    }
  }
}

Create a visitPayload method that throws an error if the payload isn't valid:

// main.js
class MissionInspector {
  visitPayload (payload) {
    const name = payload.name.toLowerCase();
    const payloadExpr = /(orbiter)|(rover)/;
    if (payloadExpr.test(name)) {
      console.log('Payload: ', name, ' approved');
    } else {
      throw new Error("Payload: '" + name + "' not approved at this time");
    }
  }
}

Create a visitRocket method that logs out an ok message:

// main.js
class MissionInspector {
  visitRocket (rocket) {
    console.log('Rocket: ', rocket.name, ' approved');
  }
}

Add an accept method to the Mission class that calls accept on its constituents, then tells the visitor to visit the current instance:

// main.js
class Mission {
  // other mission code ...
  accept (visitor) {
    this.rocket.accept(visitor);
    this.payload.accept(visitor);
    this.destination.accept(visitor);
    visitor.visit(this);
  }
}

Add an accept method to the Destination class that tells the visitor to visit the current instance:

// main.js
class Destination {
  // other destination code ...
  accept (visitor) {
    visitor.visit(this);
  }
}

Add an accept method to the Payload class that tells the visitor to visit the current instance:

// main.js
class Payload {
  // other payload code ...
  accept (visitor) {
    visitor.visit(this);
  }
}

Add an accept method to the Rocket class that tells the visitor to visit the current instance:

// main.js
class Rocket {
  // other rocket code ...
  accept (visitor) {
    visitor.visit(this);
  }
}

Create a main function that creates different instances with the builder, visits them with a MissionInspector instance, and logs out any thrown errors:

// main.js
export function main() {
  // build and describe the missions
  const jadeRabbit = new MissionBuilder()
    .setMissionName('Jade Rabbit')
    .setDestination(new Destination('Moon'))
    .setPayload(new Payload('Lunar Rover'))
    .setRocket(new Rocket('Long March 3B Y-23'))
    .build();

  const curiosity = new MissionBuilder()
    .setMissionName('Curiosity')
    .setDestination(new Destination('Mars'))
    .setPayload(new Payload('Mars Rover'))
    .setRocket(new Rocket('Delta II'))
    .build();

  // expect error from Destination
  const buzz = new MissionBuilder()
    .setMissionName('Buzz Lightyear')
    .setDestination(new Destination('Too Infinity And Beyond'))
    .setPayload(new Payload('Interstellar Orbiter'))
    .setRocket(new Rocket('Self Propelled'))
    .build();

  // expect error from Payload
  const terraformer = new MissionBuilder()
    .setMissionName('Mars Terraformer')
    .setDestination(new Destination('Mars'))
    .setPayload(new Payload('Terraformer'))
    .setRocket(new Rocket('Light Sail'))
    .build();

  const inspector = new MissionInspector();

  [jadeRabbit, curiosity, buzz, terraformer].forEach((mission) => {
    try {
      mission.accept(inspector);
    } catch (e) {
      console.error(e);
    }
  });
}

Start your Python web server and open the following link in your browser: http://localhost:8000/. Your output should appear as follows:

How it works...

The visitor pattern has two components. The visitor processes the subject objects, and the subjects tell other related subjects about the visitor and decide when the current subject should be visited. The accept method is required for each subject to receive a notification that there is a visitor. That method then makes two types of method call: the first is the accept method on its related subjects; the second is the visit method on the visitor. In this way, the visitor traverses a structure by being passed around by the subjects. The visitor methods are used to process different types of node. In some languages, this is handled by language-level polymorphism. In JavaScript, we can use run-time type checks to do this instead. The visitor pattern is a good option for processing hierarchical structures of objects, where the structure is not known ahead of time, but the types of subjects are known.

Using a singleton to manage instances

Sometimes, there are objects that are resource intensive. They may require time, memory, battery power, or network usage that is unavailable or inconvenient. It is often useful to manage the creation and sharing of instances. Here, we'll see how to use singletons to manage instances.

How to do it...

Open your command-line application and navigate to your workspace. Create a new folder named 09-05-singleton-to-manage-instances. Copy or create an index.html that loads and runs a main function from main.js.

Create a main.js file that defines a new class named Rocket. Add a constructor that takes a name constructor argument and assigns it to an instance property:

// main.js
class Rocket {
  constructor (name) {
    this.name = name;
  }
}

Create a RocketManager object that has a rockets property. Add a findOrCreate method that indexes Rocket instances by the name property:

// main.js
const RocketManager = {
  rockets: {},
  findOrCreate (name) {
    const rocket = this.rockets[name] || new Rocket(name);
    this.rockets[name] = rocket;
    return rocket;
  }
}

Create a main function that creates instances with and without the manager.
Compare the instances and see whether they are identical:

// main.js
export function main() {
  const atlas = RocketManager.findOrCreate('Atlas V');
  const atlasCopy = RocketManager.findOrCreate('Atlas V');
  const atlasClone = new Rocket('Atlas V');
  console.log('Copy is the same: ', atlas === atlasCopy);
  console.log('Clone is the same: ', atlas === atlasClone);
}

Start your Python web server and open the following link in your browser: http://localhost:8000/. Your output should appear as follows:

How it works...

The object stores references to the instances, indexed by the string value given as name. This map is created when the module loads, so it is persisted through the life of the program. The singleton is then able to look up the object and return instances previously created by findOrCreate with the same name. Conserving resources and simplifying communication are the primary motivations for using singletons. Creating a single object for multiple uses is more efficient in terms of the space and time needed than creating several. Plus, having single instances for messages to be communicated through makes communication between different parts of a program easier. Singletons may require more sophisticated indexing if they rely on more complicated data.

You read an excerpt from a book written by Ross Harrison, titled ECMAScript Cookbook. This book contains over 70 recipes to help you improve your coding skills and solve practical JavaScript problems.


How to build a sensor application to measure Ambient Light

Gebin George
01 May 2018
16 min read
In today's tutorial, we will look at how to build a sensor application to measure the ambient light.

Preparing our Sensor project

We will create a new Universal Windows Platform application project. This time, we call it Sensor. We can use the Raspberry Pi 3, even though we will only use the light sensor and motion detector (PIR sensor) in this project. We will also add the latest version of a new NuGet package, the Waher.Persistence.FilesLW package. This package will help us with data persistence. It takes our objects and stores them in a local object database. We can later load the objects back into memory and search for them. This is all done by analyzing the metadata available in the class definitions, so there's no need to do any database programming. Go ahead and install the package in your new project.

The Waher.Persistence.Files package contains similar functionality, but it performs data encryption and dynamic code compilation of object serializers as well. These features require .NET Standard v1.5, which is not compatible with the Universal Windows Platform in its current state. That is why we use the Light Weight version of the same library, which only requires .NET Standard 1.3. The Universal Windows Platform supports .NET Standard up to 1.4. For more information, visit https://docs.microsoft.com/en-us/dotnet/articles/standard/library#net-platforms-support.

Initializing the inventory library

The next step is to initialize the libraries we have just included in the project. The persistence library includes an inventory library (Waher.Runtime.Inventory) that helps with dynamic type-related tasks, as well as keeping track of available types and interfaces, and which types have implemented which interfaces in the runtime environment. This functionality is used by the object database defined in the persistence libraries. The object database figures out how to store, load, search for, and create objects, using only their class definitions appended with a minimum of metadata in the form of attributes. So, one of the first things we need to do during startup is to tell the inventory environment which assemblies it, and by extension the persistence library, can use. We do this as follows:

Log.Informational("Starting application.");

Types.Initialize(
    typeof(FilesProvider).GetTypeInfo().Assembly,
    typeof(App).GetTypeInfo().Assembly);

Here, Types is a static class defined in the Waher.Runtime.Inventory namespace. We initialize it by providing an array of assemblies it can use. In our case, we include the assembly of the persistence library, as well as the assembly of our own application.

Initializing the persistence library

We then go on to initialize our persistence library. It is accessed through the static Database class, defined in the Waher.Persistence namespace. Initialization is performed by registering one object database provider. This database provider will then be used for all object database transactions. In our case, we register our local files object database provider, FilesProvider, defined in the Waher.Persistence.Files namespace:

Database.Register(new FilesProvider(
    Windows.Storage.ApplicationData.Current.LocalFolder.Path +
    Path.DirectorySeparatorChar + "Data", "Default", 8192, 1000, 8192,
    Encoding.UTF8, 10000));

The first parameter defines a folder where database files will be stored. In our case, we store database files in the Data subfolder of the application's local data folder. Objects are divided into collections.
Collections are stored in separate files and indexed differently, for performance reasons. Collections are defined using attributes in the class definition. Classes lacking a collection definition are assigned the default collection, which is specified in the second argument.

Objects are then stored in B-tree ordered files. Such files are divided into blocks into which objects are crammed. For performance reasons, the block size, defined in the third argument, should be correlated to the sector size of the underlying storage medium, which is typically a power of two. This minimizes the number of reads and writes necessary. In our example, we've chosen 8,192 bytes as a suitable block size. The fourth argument defines the number of blocks the provider can cache in memory. Caching improves performance, but requires more internal memory. In our case, we're satisfied with a relatively modest cache of 1,000 blocks (about 8 MB). Binary Large Objects (BLOBs), that is, objects that cannot efficiently be stored in a block, are stored separately in BLOB files. These are binary files consisting of doubly linked blocks. The fifth parameter controls the block size of BLOB files. The sixth parameter controls the character encoding to use when serializing strings. The seventh, and last, parameter is the maximum time the provider will wait, in milliseconds, to get access to the underlying database when an operation is to be performed.

Sampling raw sensor data

After the database provider has been successfully registered, the persistence layer is ready to be used. We now continue with the first step in acquiring the sensor data: sampling. Sampling is normally done using a short, regular time interval. Since we use the Arduino, we get values as they change. While such values can be an excellent source for event-based algorithms, they are difficult to use in certain kinds of statistical calculations and error-correction algorithms. To set up the regular sampling of values, we begin by creating a Timer object from the System.Threading namespace, after the successful initialization of the Arduino:

this.sampleTimer = new Timer(this.SampleValues,
    null, 1000 - DateTime.Now.Millisecond, 1000);

This timer will call the SampleValues method every thousand milliseconds, starting the next second. The second parameter allows us to send a state object to the timer callback method. We will not use this, so we let it be null. We then sample the values, as follows:

private async void SampleValues(object State)
{
    try
    {
        ushort A0 = this.arduino.analogRead("A0");
        PinState D8 = this.arduino.digitalRead(8);
        ...
    }
    catch (Exception ex)
    {
        Log.Critical(ex);
    }
}

We define the method as asynchronous at this point, even though we still haven't used any asynchronous calls. We will do so later in this chapter. Since the method does not return a Task object, exceptions are not propagated to the caller. This means that they must be caught inside the method to avoid unhandled exceptions closing the application.

Performing basic error correction

Values we sample may include different types of errors, some of which we can eliminate in the code to various degrees. There are systematic errors and random errors. Systematic errors are most often caused by the way we've constructed our device, how we sample, how the circuit is designed, how the sensors are situated, how they interact with the physical medium and our underlying mathematical model, or how we convert the sampled value into a physical quantity.
Reducing systematic errors requires a deeper analysis that goes beyond the scope of this book. Random errors are errors that are induced stochastically and are often unbiased. They can be induced by a lack of resolution or precision, by background noise, or through random events in the physical world. While background noise and the lack of resolution or precision in our electronics create noise in the measured input, random events in the physical world may create spikes. If something briefly flutters past our light sensor, it might register a short downward spike, even though the ambient light did not change. You'll learn how to correct for both types of random errors.

Canceling noise

Since the digital PIR sensor already has error correction built into it, we will only focus on how to cancel noise from our analog light sensor. Noise can be canceled electronically, using, for instance, low-pass filters. It can also be canceled algorithmically, using a simple averaging calculation over a short window of values. The averaging calculation will increase our resolution, at the cost of a small delay in the output. If we perform the average over 10 values, we effectively gain one power of 10, or one decimal, of resolution in our output value. The value will be delayed 10 seconds, however. This algorithm is therefore only suitable for input signals that vary slowly, or where a quick reaction to changes in the input stimuli is not required. Statistically, the expected average value is the same as the expected value, if the input is a steady signal overlaid with random noise. The implementation is simple. We need the following variables to set up our averaging algorithm:

private const int windowSize = 10;
private int?[] windowA0 = new int?[windowSize];
private int nrA0 = 0;
private int sumA0 = 0;

We use nullable integers (int?) to be able to remove bad values later. In the beginning, all values are null. After sampling the value, we first shift the window one step and add our newly sampled value at the end. We also update our counters and sums. This allows us to quickly calculate the average value of the entire window, without having to loop through it each time:

if (this.windowA0[0].HasValue)
{
    this.sumA0 -= this.windowA0[0].Value;
    this.nrA0--;
}

Array.Copy(this.windowA0, 1, this.windowA0, 0, windowSize - 1);
this.windowA0[windowSize - 1] = A0;
this.sumA0 += A0;
this.nrA0++;

double AvgA0 = ((double)this.sumA0) / this.nrA0;
int? v;

Removing random spikes

We now have a value that is 10 times more accurate than the original, in cases where our ambient light is not expected to vary quickly. This is typically the case if ambient light depends on the sun and weather. Calculating the average over a short window has an added advantage: it allows us to remove bad measurements, or spikes. When a physical quantity changes, it normally changes continuously, slowly, and smoothly. This has the effect that roughly half of the measurements, even when the input value changes, will be on one side of the average value, and the other half on the other side. A single spike, on the other hand, especially in the middle of the window, if sufficiently large, will stand out alone on one side, while the other values remain on the other. We can use this fact to remove bad measurements from our window.
We define our middle position first: private const int spikePos = windowSize / 2; We proceed by calculating the number of values on each side of the average, if our window is sufficiently full: if (this.nrA0 >= windowSize - 2) { int NrLt = 0; int NrGt = 0; foreach (int? Value in this.windowA0) { if (Value.HasValue) { if (Value.Value < AvgA0) NrLt++; else if (Value.Value > AvgA0) NrGt++; } } If we only have one value on one side, and this value happens to be in the middle of the window, we identify it as a spike and remove it from the window. We also make sure to adjust our average value accordingly: if (NrLt == 1 || NrGt == 1) { v = this.windowA0[spikePos]; if (v.HasValue) { if ((NrLt == 1 && v.Value < AvgA0) || (NrGt == 1 && v.Value > AvgA0)) { this.sumA0 -= v.Value; this.nrA0--; this.windowA0[spikePos] = null; AvgA0 = ((double)this.sumA0) / this.nrA0; } } } } Since we remove the spike when it reaches the middle of the window, it might pollute the average of the entire window up to that point. We therefore need to recalculate an average value for the half of the window, where any spikes have been removed. This part of the window is smaller, so the resolution gain is not as big. Instead, the average value will not be polluted by single spikes. But we will still have increased the resolution by a factor of five: int i, n; for (AvgA0 = i = n = 0; i < spikePos; i++) { if ((v = this.windowA0[i]).HasValue) { n++; AvgA0 += v.Value; } } if (n > 0) { AvgA0 /= n; Converting to a physical quantity It is not sufficient for a sensor to have a numerical raw value of the measured quantity. It only tells us something if we know something more about the raw value. We must, therefore, convert it to a known physical unit. We must also provide an estimate of the precision (or error) the value has. A sensor measuring a physical quantity should report a numerical value, its physical unit, and the corresponding precision, or error of the estimate. To avoid creating a complex mathematical model that converts our measured light intensity into a known physical unit, which would go beyond the scope of this book, we convert it to a percentage value. Since we've gained a factor of five of precision using our averaging calculation, we can report two decimals of precision, even though the input value is only 1,024 bits, and only contains one decimal of precision: double Light = (100.0 * AvgA0) / 1024; MainPage.Instance.LightUpdated(Light, 2, "%"); } Illustrating measurement results Following image shows how our measured quantity behaves. The light sensor is placed in broad daylight on a sunny day, so it's saturated. Things move in front of the sensor, creating short dips. The thin blue line is a scaled version of our raw input A0. Since this value is event based, it is being reported more often than once a second. Our red curve is our measured, and corrected, ambient light value, in percent. The dots correspond to our second values. Notice that the first two spikes are removed and don't affect the measurement, which remains close to 100%. Only the larger dips affect the measurement. Also, notice the small delay inherent in our algorithm. It is most noticeable if there are abrupt changes: If we, on the other hand, have a very noisy input, our averaging algorithm helps our measured value to stay more stable. Perhaps the physical quantity goes below some sensor threshold, and input values become uncertain. 
In the following image, we see how the floating average varies less than the noisy input: Calculating basic statistics A sensor normally reports more than the measured momentary value. It also calculates basic statistics on the measured input, such as peak values. It also makes sure to store measured values regularly, to allow its users to view historical measurements. We begin by defining variables to keep track of our peak values: private int? lastMinute = null; private double? minLight = null; private double? maxLight = null; private DateTime minLightAt = DateTime.MinValue; private DateTime maxLightAt = DateTime.MinValue; We then make sure to update these after having calculated a new measurement: DateTime Timestamp = DateTime.Now; if (!this.minLight.HasValue || Light < this.minLight.Value) { this.minLight = Light; this.minLightAt = Timestamp; } if (!this.maxLight.HasValue || Light > this.maxLight.Value) { this.maxLight = Light; this.maxLightAt = Timestamp; } Defining data persistence The last step in this is to store our values regularly. Later, when we present different communication protocols, we will show how to make these values available to users. Since we will use an object database to store our data, we need to create a class that defines what to store. We start with the class definition: [TypeName(TypeNameSerialization.None)] [CollectionName("MinuteValues")] [Index("Timestamp")] public class LastMinute { [ObjectId] public string ObjectId = null; } The class is decorated with a couple of attributes from the Waher.Persistence.Attributes namespace. The CollectionName attribute defines the collection in which objects of this class will be stored. The TypeName attribute defines if we want the type name to be stored with the data. This is useful, if you mix different types of classes in the same collection. We plan not to, so we choose not to store type names. This saves some space. The Index attribute defines an index. This makes it possible to do quick searches. Later, we will want to search historical records based on their timestamps, so we add an index on the Timestamp field. We also define an Object ID field. This is a special field that is like a primary key in object databases. We need it to be able to delete objects later. You can add any number of indices and any number of fields in each index. Placing a hyphen (-) before the field name makes the engine use descending sort order for that field. Next, we define some member fields. If you want, you can use properties as well, if you provide both getters and setters for the properties you wish to persist. By providing default values, and decorating the fields (or properties) with the corresponding default value, you can optimize storage somewhat. Only members with values different from the declared default values will then be persisted, to save space: [DefaultValueDateTimeMinValue] public DateTime Timestamp = DateTime.MinValue; [DefaultValue(0)] public double Light = 0; [DefaultValue(PinState.LOW)] public PinState Motion= PinState.LOW; [DefaultValueNull] public double? MinLight = null; [DefaultValueDateTimeMinValue] public DateTime MinLightAt = DateTime.MinValue; [DefaultValueNull] public double? MaxLight = null; [DefaultValueDateTimeMinValue] public DateTime MaxLightAt = DateTime.MinValue; Storing measured data We are now ready to store our measured data. We use the lastMinute field defined earlier to know when we pass into a new minute. 
We use that opportunity to store the most recent value, together with the basic statistics we've calculated: if (!this.lastMinute.HasValue) this.lastMinute = Timestamp.Minute; else if (this.lastMinute.Value != Timestamp.Minute) { this.lastMinute = Timestamp.Minute; We begin by creating an instance of the LastMinute class defined earlier: LastMinute Rec = new LastMinute() { Timestamp = Timestamp, Light = Light, Motion= D8, MinLight = this.minLight, MinLightAt = this.minLightAt, MaxLight = this.maxLight, MaxLightAt = this.maxLightAt }; Storing this object is very easy. The call is asynchronous and can be executed in parallel, if desired. We choose to wait for it to complete, since we will be making database requests after the operation has completed: await Database.Insert(Rec); We then clear our variables used for calculating peak values, to make sure peak values are calculated within the next period: this.minLight = null; this.minLightAt = DateTime.MinValue; this.maxLight = null; this.maxLightAt = DateTime.MinValue; } Removing old data We cannot continue storing new values without also having a plan for removing old ones. Doing so is easy. We choose to delete all records older than 100 minutes. This is done by first performing a search, and then deleting objects that are found in this search. The search is defined by using filters from the Waher.Persistence.Filters namespace: foreach (LastMinute Rec2 in await Database.Find<LastMinute>( new FilterFieldLesserThan("Timestamp", Timestamp.AddMinutes(-100)))) { await Database.Delete(Rec2); } You can now execute the application, and monitor how the MinuteValues collection is being filled. We created a simple sensor app for the Raspberry Pi using C#. You read an excerpt from the book, Mastering Internet of Things, written by Peter Waher. This book will help you augment your IoT skills with the help of engaging and enlightening tutorials designed for Raspberry Pi 3. 25 Datasets for Deep Learning in IoT How IoT is going to change tech teams How to run and configure an IoT Gateway

Implementing Automation Process with Salesforce CRM

Richa Tripathi
01 May 2018
4 min read
A CRM system must help its users to be as productive as possible to justify its investment; therefore, if there are any aspects that can be made more efficient, it is usually worth considering. The Salesforce CRM application aims to be as efficient as possible out of the box; however, there are often organization-specific business processes and rules that need to be implemented, and this is where the power of the Salesforce platform becomes truly apparent. [box type="note" align="" class="" width=""]This article is an excerpt from a book written by Paul Goodey, titled Salesforce CRM Admin Cookbook - Second Edition. This book will enable you to instantly extend and unleash the power of Salesforce CRM and its Lightning Experience framework.[/box] In this post, we have provided recipes to create business processes and automate data manipulation that can be used to satisfy an organization's unique requirements for business rules and logic. Deriving year and month values from an Opportunity close date using a formula To simplify the format of dates for presentation and reporting, we can automatically derive the year and month from a date field that contains month, day, and year. In this recipe, we will display a derived year and month text value for the opportunity close date on the opportunity record detail and edit pages calculated from the standard date field called CloseDate. How to do it… Carry out the following steps to create a formula field to derive year and month values from the opportunity close date for opportunity records: Click on the Setup gear icon in the top right-hand corner of the main Home page, as shown in the following screenshot: 2. Click on Setup, as shown in the following screenshot: 3. Navigate to the Opportunity customization setup page as follows: Objects and Fields | Object Manager | Opportunity | Fields & Relationships. Locate the Fields & Relationships section on the right of the page. 4. Click on New. We will be presented with the Step 1. Choose the field type page. 5. Select the Formula option. 6. Click on Next. We will be presented with the Step 2. Choose output type page. 7. Enter CloseDate YEAR MONTH in the Field Label textbox. 8. Click on the Field Name. When clicking out of the Field Label textbox the Field Name is automatically filled     with the value Close_Date_Year_Month. 9. Set the Formula Return Type as Text. 10. Click on Next. We will be presented with the Step 3. Enter formula page. 11. Paste or enter the following code in the formula editor box: TEXT(YEAR(CloseDate)) & " " & CASE( MONTH(CloseDate), 1, "January", 2, "February", 3, "March", 4, "April", 5, "May", 6, "June", 7, "July", 8, "August", 9, "September", 10, "October", 11, "November", 12, "December", "Error!") The formula field is to be set according to the following screenshot: Optionally, enter details in the Description field. 12. Optionally, enter details in the Help Text field. 13. In the Blank Field Handling section, select the option Treat blank fields as blanks. 14. Click on Next. We will be presented with the Step 4. Establish field-level security page. 15. Select the profiles to which you want to grant read access to this field via field- level security. The field will be hidden from all profiles if you do not add it to field-level security. 16. Click on Next. We will be presented with the Step 5. Add to page layouts page. 17. Select the page layouts that should include this field. The field will be added as the last field in the first two column section of these page layouts. 
The field will not appear on any pages if you do not select a layout. 18. Finally, click on Save. How it works… The Opportunity record formula field Close Date Year Month is automatically derived showing the year and the month name and appears on both the opportunity detail and edit pages. You can see what this looks like when the Close Date for an opportunity record is 12/31/2020, resulting in the automatic year and month of 2020 December, as shown in the following screenshot:   To summarize, we learned about automating tasks like how to derive year and month values from an Opportunity close date using a formula in the Salesforce CRM. If you enjoyed this post, check out the book Salesforce CRM Admin Cookbook - Second Edition to discover hidden features and hacks that extend standard configuration to provide enhanced functionality and customization in Salesforce CRM. Salesforce Spring 18 – New features to be excited about in this release! Learning the Salesforce Analytics Query Language (SAQL) How to create and prepare your first dataset in Salesforce Einstein

Using Amazon Simple Notification Service (SNS) to create an SNS topic

Vijin Boricha
30 Apr 2018
5 min read
Simple Notification Service is a managed web service that you, as an end user, can leverage to send messages to various subscribing endpoints. SNS works in a publisher–subscriber or producer and consumer model, where producers create and send messages to a particular topic, which is in turn consumed by one or more subscribers over a supported set of protocols. At the time of writing this book, SNS supports HTTP, HTTPS, email, push notifications in the form of SMS, as well as AWS Lambda and Amazon SQS, as the preferred modes of subscribers. In today's tutorial, we will learn about Amazon Simple Notification Service (SNS) and how to create your very own simple SNS topics: SNS is a really simple and yet extremely useful service that you can use for a variety of purposes, the most common being pushing notifications or system alerts to cloud administrators whenever a particular event occurs. We have been using SNS throughout this book for this same purpose; however, there are many more features and use cases that SNS can be leveraged for. For example, you can use SNS to send out promotional emails or SMS to a large group of targeted audiences or even use it as a mobile push notification service where the messages are pushed directly to your Android or IOS applications. With this in mind, let's quickly go ahead and create a simple SNS topic of our own: To do so, first log in to your AWS Management Console and, from the Filter option, filter out SNS service. Alternatively, you can also access the SNS dashboard by selecting https://console.aws.amazon.com/sns. If this is your first time with SNS, simply select the Get Started option to begin. Here, at the SNS dashboard, you can start off by selecting the Create topic option, as shown in the following screenshot: Once selected, you will be prompted to provide a suitable Topic name and its corresponding Display name. Topics form the core functionality for SNS. You can use topics to send messages to a particular type of subscribing consumer. Remember, a single topic can be subscribed by more than one consumer. Once you have typed in the required fields, select the Create topic option to complete the process. That's it! Simple, isn't it? Having created your topic, you can now go ahead and associate it with one or more subscribers. To do so, first we need to create one or more subscriptions. Select the Create subscription option provided under the newly created topic, as shown in the following screenshot: Here, in the Create subscription dialog box, select a suitable Protocol that will subscribe to the newly created topic. In this case, I've selected Email as the Protocol. Next, provide a valid email address in the subsequent Endpoint field. The Endpoint field will vary based on the selected protocol. Once completed, click on the Create subscription button to complete the process. With the subscription created, you will now have to validate the subscription. This can be performed by launching your email application and selecting the Confirm subscription link in the mail that you would have received. Once the subscription is confirmed, you will be redirected to a confirmation page where you can view the subscribed topic's name as well as the subscription ID, as shown in the following screenshot: You can use the same process to create and assign multiple subscribers to the same topic. For example, select the Create subscription option, as performed earlier, and from the Protocol drop-down list, select SMS as the new protocol. 
Next, provide a valid phone number in the subsequent Endpoint field. The number can be prefixed by your country code, as shown in the following screenshot. Once completed, click on the Create subscription button to complete the process: With the subscriptions created successfully, you can now test the two by publishing a message to your topic. To do so, select the Publish to topic option from your topics page. Once a message is published here, SNS will attempt to deliver that message to each of its subscribing endpoints; in this case, to the email address as well as the phone number. Type in a suitable Subject name followed by the actual message that you wish to send. Note that if your character count exceeds 160 for an SMS, SNS will automatically send another SMS with the remainder of the character count. You can optionally switch the Message format between Raw and JSON to match your requirements. Once completed, select Publish Message. Check your email application once more for the published message. You should receive an mail, as shown in the following screenshot: Similarly, you can create and associate one or more such subscriptions to each of the topics that you create. We learned about creating simple Amazon SNS topics. You read an excerpt from the book AWS Administration - The Definitive Guide - Second Edition written by Yohan Wadia.  This book will help you build a highly secure, fault-tolerant, and scalable Cloud environment. Getting to know Amazon Web Services AWS IoT Analytics: The easiest way to run analytics on IoT data How to set up a Deep Learning System on Amazon Web Services    
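As a closing note on the console walkthrough above: the same create-topic, subscribe, and publish flow can also be scripted. The following is a minimal sketch using the boto3 Python SDK; it assumes AWS credentials and a default region are already configured, and the topic name and e-mail address are placeholders:

import boto3

sns = boto3.client("sns")

# Create the topic (idempotent: returns the existing ARN if it already exists).
topic_arn = sns.create_topic(Name="demo-topic")["TopicArn"]

# Add an e-mail subscriber; the address must confirm the subscription
# before it starts receiving messages.
sns.subscribe(TopicArn=topic_arn, Protocol="email",
              Endpoint="someone@example.com")

# Publish a test message to all confirmed subscribers of the topic.
sns.publish(TopicArn=topic_arn, Subject="Test",
            Message="Hello from Amazon SNS")

Exactly as in the console, the e-mail endpoint stays in a pending state until the recipient clicks the confirmation link.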

8 built-in Angular Pipes in Angular 4 that you should know

Packt Editorial Staff
30 Apr 2018
13 min read
Angular is a mature technology with introduction to new way to build applications. Think of Angular Pipes as modernized version of filters comprising functions or helps used to format the values within the template. Pipes in Angular are basically extension of what filters were in Angular v1. There are many useful built-in Pipes we can use easily in our templates. In today’s tutorial we will learn about Built-in Pipes as well as create our own custom user-defined pipe. Angular Pipes - overview Pipes allows us to format the values within the view of the templates before it's displayed. For example, in most modern applications, we want to display terms, such as today, tomorrow, and so on and not system date formats such as April 13 2017 08:00. Let's look  more real-world scenarios. You want the hint text in the application to always be lowercase? No problem; define and use lowercasePipe. In weather app, if you want to show month name as MAR or APR instead of full month name, use DatePipe. Cool, right? You get the point. Pipes helps you add your business rules, so you can transform the data before it's actually displayed in the templates. A good way to relate Angular Pipes is similar to Angular 1.x filters. Pipes do a lot more than just filtering. We have used Angular Router to define Route Path, so we have all the Pipes functionalities in one page. You can create in same or different apps. Feel free to use your creativity. In Angular 1.x, we had filters--Pipes are replacement of filters. Defining a Pipe The pipe operator is defined with a pipe symbol (|) followed by the name of the pipe: {{ appvalue | pipename }} The following is an example of a simple lowercase pipe: {{"Sridhar  Rao"  |  lowercase}} In the preceding code, we are transforming the text to lowercase using the lowercase pipe. Now, let's write an example component using the lowercase pipe example: @Component({ selector: 'demo-pipe', template: ` Author name is {{authorName | lowercase}} ` }) export class DemoPipeComponent { authorName = 'Sridhar Rao'; } Let's analyze the preceding code in detail: We are defining a DemoPipeComponent component class We are creating string variable authorName and assigning the value 'Sridhar Rao'. In the template view, we display authorName, but before we print it in the UI we transform it using the lowercase pipe Run the preceding code, and you should see the screenshot shown as follows as an output: Well done! In the preceding example, we have used a Built-in Pipe. In next sections, you will learn more about the Built-in Pipes and also create a few custom Pipes. Note that the pipe operator only works in your templates and not inside controllers. Built-in Pipes Angular Pipes are modernized version of Angular 1.x filters. Angular comes with a lot of predefined Built-in Pipes. We can use them directly in our views and transform the data on the fly. The following is the list of all the Pipes that Angular has built-in support for: DatePipe DecimalPipe CurrencyPipe LowercasePipe and UppercasePipe JSON Pipe SlicePipe async Pipe In the following sections, let's implement and learn more about the various pipes and see them in action. DatePipe DatePipe, as the name itself suggest, allows us to format or transform the values that are date related. DatePipe can also be used to transform values in different formats based on parameters passed at runtime. 
The general syntax is shown in the following code snippet: {{today | date}} // prints today's date and time {{ today | date:'MM-dd-yyyy' }} //prints only Month days and year {{ today | date:'medium' }} {{ today | date:'shortTime' }} // prints short format Let's analyze the preceding code snippets in detail: As explained in the preceding section, the general syntax is variable followed with a (|) pipe operator followed by name of the pipe operator We use the date pipe to transform the today variable Also, in the preceding example, you will note that we are passing few parameters to the pipe operator. We will cover passing parameters to the pipe in the following section Now, let's create a complete example of the date pipe component. The following is the code snippet for implementing the DatePipe component: import { Component } from '@angular/core'; @Component({ template: ` <h5>Built-In DatePipe</h5> <ol> <li> <strong>DatePipe example expression</strong> <p>Today is {{today | date}} <p>{{ today | date:'MM-dd-yyyy' }} <p>{{ today | date:'medium' }} <p>{{ today | date:'shortTime' }} </li> </ol> `, })   Let's analyze the preceding code snippet in detail: We are creating a PipeComponent component class. We define a today variable. In the view, we are transforming the value of variable into various expressions based on different parameters. Run the application, and we should see the output as shown in the following screenshot: We learned the date pipe in this section. In the following sections, we will continue to learn and implement other Built-in Pipes and also create some custom user-defined pipes. DecimalPipe In this section, you will learn about yet another Built-in Pipe--DecimalPipe. DecimalPipe allows us to format a number according to locale rules. DecimalPipe can also be used to transform a number in different formats. The general syntax is shown as follows: appExpression  |  number  [:digitInfo] In the preceding code snippet, we use the number pipe, and optionally, we can pass the parameters. Let's look at how to create a DatePipe implementing decimal points. The following is an example code of the same: import { Component } from '@angular/core'; @Component({ template: ` state_tax (.5-5): {{state_tax | number:'.5-5'}} state_tax (2.10-10): {{state_tax | number:'2.3-3'}} `, }) export class PipeComponent { state_tax: number = 5.1445; } Let's analyze the preceding code snippet in detail: We defie a component class--PipeComponent. We define a variable--state_tax. We then transform state_tax in the view. The first pipe operator tells the expression to print the decimals up to five decimal places. The second pipe operator tells the expression to print the value to three decimal places. The output of the preceding pipe component example is given as follows: Undoubtedly, number pipe is one of the most useful and used pipe across various applications. We can transform the number values specially dealing with decimals and floating points. CurrencyPipe Applications that intent to cater to multi-national geographies, we need to show country- specific codes and their respective currency values. That's where CurrencyPipe comes to our rescue. The CurrencyPipe operator is used to append the country codes or currency symbol in front of the number values. 
Take a look at the code snippet implementing the CurrencyPipe operator:

{{ value | currency:'USD' }}
Expenses in INR: {{ expenses | currency:'INR' }}

Let's analyze the preceding code snippet in detail: The first line of code shows the general syntax of writing a currency pipe. The second line shows the currency syntax, and we use it to transform the expenses value and append the Indian currency symbol to it. So now that we know how to use a currency pipe operator, let's put together an example to display multiple currency and country formats. The following is the complete component class, which implements a currency pipe operator:

import { Component } from '@angular/core';

@Component({
  selector: 'currency-pipe',
  template: `
    <h5>Built-In CurrencyPipe</h5>
    <ol>
      <li>
        <p>Salary in USD: {{ salary | currency:'USD':true }}</p>
        <p>Expenses in INR: {{ expenses | currency:'INR':false }}</p>
      </li>
    </ol>
  `
})
export class CurrencyPipeComponent {
  salary: number = 2500;
  expenses: number = 1500;
}

Let's analyze the preceding code in detail: We created a component class, CurrencyPipeComponent, and declared a few variables, namely salary and expenses. In the component template, we transformed the display of the variables by adding the country and currency details. In the first pipe operator, we used currency:'USD', which will append the dollar symbol ($) before the variable. In the second pipe operator, we used currency:'INR':false, which will add the currency code, and false will tell it not to print the symbol. Launch the app, and we should see the output as shown in the following screenshot:

In this section, we learned about and implemented CurrencyPipe. In the following sections, we will keep exploring and learning about other Built-in Pipes and much more.

LowercasePipe and UppercasePipe

The LowercasePipe and UppercasePipe, as the names suggest, help in transforming text into lowercase and uppercase, respectively. Take a look at the following code snippet:

Author is Lowercase {{authorName | lowercase }}
Author in Uppercase is {{authorName | uppercase }}

Let's analyze the preceding code in detail: The first line of code transforms the value of authorName into lowercase using the lowercase pipe. The second line of code transforms the value of authorName into uppercase using the uppercase pipe. Now that we have seen how to define lowercase and uppercase pipes, it's time to create a complete component example, which implements the Pipes to show the author name in both lowercase and uppercase. Take a look at the following code snippet:

import { Component } from '@angular/core';

@Component({
  selector: 'textcase-pipe',
  template: `
    <h5>Built-In LowercasePipe and UppercasePipe</h5>
    <ol>
      <li>
        <strong>LowercasePipe example</strong>
        <p>Author in lowercase is {{authorName | lowercase}}
      </li>
      <li>
        <strong>UpperCasePipe example</strong>
        <p>Author in uppercase is {{authorName | uppercase}}
      </li>
    </ol>
  `
})
export class TextCasePipeComponent {
  authorName = "Sridhar Rao";
}

Let's analyze the preceding code in detail: We create a component class, TextCasePipeComponent, and define a variable authorName. In the component view, we use the lowercase and uppercase pipes.
The first pipe will transform the value of the variable to lowercase text. The second pipe will transform the value of the variable to uppercase text. Run the application, and we should see the output as shown in the following screenshot:

In this section, you learned how to use the lowercase and uppercase pipes to transform values.

JSON Pipe

Similar to the JSON filter in Angular 1.x, we have the JSON Pipe, which helps us transform a value into a JSON-formatted string. In the lowercase or uppercase pipe, we were transforming strings; using the JSON Pipe, we can transform and display an object as a JSON-formatted string. The general syntax is shown in the following code snippet:

<pre>{{ myObj | json }}</pre>

Now, let's use the preceding syntax and create a complete component example, which uses the JSON Pipe:

import { Component } from '@angular/core';

@Component({
  template: `
    <h5>Author Page</h5>
    <pre>{{ authorObj | json }}</pre>
  `
})
export class JSONPipeComponent {
  authorObj: any;
  constructor() {
    this.authorObj = {
      name: 'Sridhar Rao',
      website: 'http://packtpub.com',
      Books: 'Mastering Angular2'
    };
  }
}

Let's analyze the preceding code in detail: We created a component class, JSONPipeComponent, with an authorObj variable, and assigned an object to it in the constructor. In the component template view, we transformed and displayed that object as a JSON string. Run the app, and we should see the output as shown in the following screenshot:

JSON is fast becoming the de facto standard for web applications to integrate between services and client technologies. Hence, the JSON Pipe comes in handy every time we need to transform our values into a JSON structure in the view.

Slice pipe

The Slice Pipe is very similar to the JavaScript array slice function. It extracts a substring from a string, between given start and end positions. The general syntax to define a slice pipe is given as follows:

{{email_id | slice:0:4 }}

In the preceding code snippet, we are slicing the e-mail address to show only the first four characters of the variable value email_id. Now that we know how to use a slice pipe, let's put it together in a component. The following is the complete code snippet implementing the slice pipe:

import { Component } from '@angular/core';

@Component({
  selector: 'slice-pipe',
  template: `
    <h5>Built-In Slice Pipe</h5>
    <ol>
      <li>
        <strong>LowercasePipe example</strong>
        <p> Email Id is {{ emailAddress }}
      </li>
      <li>
        <strong>LowercasePipe example</strong>
        <p>Sliced Email Id is {{emailAddress | slice : 0: 4}}
      </li>
    </ol>
  `
})
export class SlicePipeComponent {
  emailAddress = "[email protected]";
}

Let's analyze the preceding code snippet in detail: We are creating a class, SlicePipeComponent. We defined a string variable emailAddress and assigned it a value, [email protected]. Then, we applied the slice pipe to the {{emailAddress | slice : 0: 4}} expression. We get the substring starting at position 0 and take four characters from the value of emailAddress. Run the app, and we should see the output as shown in the following screenshot:

SlicePipe is certainly a very helpful Built-in Pipe, especially when dealing with strings or substrings.

async Pipe

The async Pipe allows us to directly map a promise or observable into our template view. To understand the async Pipe better, let me throw some light on an Observable first. Observables are Angular-injectable services, which can be used to stream data to multiple sections in the application.
In the following code snippet, we are using async Pipe as a promise to resolve the list of authors being returned: <ul id="author-list"> <li *ngFor="let author of authors | async"> <!-- loop the object here --> </li> </ul> The async pipe now subscribes to the observable (authors) and retrieve the last value. Let's look at examples of how we can use the async pipe as both promise and an observable. Add the following lines of code in our app.component.ts file: getAuthorDetails(): Observable<Author[]> { return this.http.get(this.url).map((res: Response) => res.json()); } getAuthorList(): Promise<Author[]> { return this.http.get(this.url).toPromise().then((res: Response) => res.json()); } Let's analyze the preceding code snippet in detail: We created a method called getAuthorDetails and attached an observable with the same. The method will return the response from the URL--which is a JSON output. In the getAuthorList method, we are binding a promise, which needs to be resolved or rejected in the output returned by the url called through a http request. In this section, we have seen how the async pipe works. You will find it very similar to dealing with services. We can either map a promise or an observable and map the result to the template. To summarize, we demonstrated Angular Pipes by explaining in detail about various built-in Pipes such as DatePipe, DecimalPipe, CurrencyPipe, LowercasePipe and UppercasePipe, JSON Pipe, SlicePipe, and async Pipe. [box type="note" align="" class="" width=""]The above article is an excerpt from the book Expert Angular, written by Sridhar Rao, Rajesh Gunasundaram, and Mathieu Nayrolles. This book will help you learn everything you need to build highly scalable and robust web applications using Angular 4. What are you waiting for, check out the book now to become an expert Angular developer![/box] Get Familiar with Angular Interview - Why switch to Angular for web development Building Components Using Angular    

12 most common MySQL errors you should be aware of

Amey Varangaonkar
30 Apr 2018
9 min read
[box type="note" align="" class="" width=""]The following excerpt is taken from the book MySQL 8 Administrator’s Guide written by Chintan Mehta, Ankit Bhavsar, Subhash Shah and Hetal Oza. This book provides tips and tricks to tackle problems you might encounter while administering a MySQL solution.[/box]

While using MySQL 8, there can be a few scenarios where you would not be able to access or use MySQL properly. These situations can be very annoying, but they are easily fixable. However, before you look for the solution, you must know the problem! Here are some of the common errors you might come across when using MySQL 8.

1. Access denied

MySQL provides a privilege system that authenticates the user who connects from a host, and associates the user with access privileges on a database. The privileges include SELECT, INSERT, UPDATE, and DELETE, and are able to identify anonymous users and grant privileges for MySQL-specific functions, such as LOAD DATA INFILE and administrative operations. The access denied error can have many causes. In many cases, the problem is that the MySQL account the client program uses to connect does not have the required permissions on the server.

2. Lost connection to MySQL server

The lost connection to MySQL server error can occur because of one of the three likely causes explained in this section. One potential reason for the error is that the network connectivity is troublesome. Network conditions should be checked if this is a frequent error. If an error message like "Lost connection to MySQL server" appears while querying the database, it is certain that the error has occurred because of network connection issues.

The connect_timeout system variable defines the number of seconds that the mysqld server waits for a connection packet before timing out. Infrequently, this error may occur when a client is trying for the initial connection to the server and the connect_timeout value is set to a few seconds. In this case, the problem can be resolved by increasing the connect_timeout value based on the distance and connection speed. SHOW GLOBAL STATUS LIKE 'Aborted_connects' can be used to determine if we are experiencing this more frequently. It can certainly be said that increasing the connect_timeout value is the solution if the error message contains reading authorization packet.

It is possible that the problem may be faced because of Binary Large Object (BLOB) values that are larger than max_allowed_packet. This can cause a lost connection to the MySQL server error with clients. If the ER_NET_PACKET_TOO_LARGE error is observed, it confirms that the max_allowed_packet value should be increased.

3. Password fails when entered incorrectly

MySQL clients ask for a password when the client program is invoked with the --password or -p option without the password value. The following is the command:

> mysql -u user_name -p
Enter password:

On a few systems, it may happen that the password works fine when specified in an option file or on the command line, but it does not work when entered interactively on the Command Prompt at the Enter password: prompt. It occurs because the system-provided library used to read passwords limits the password values to a small number of characters (usually eight). It is an issue with the system library and not with MySQL. As a workaround to this, change the MySQL password to a value that is eight or fewer characters, or store the password in the option file.

4.
Host host_name is blocked If the mysqld server receives too many connection requests from the host that is interrupted in the middle, the following error occurs: Host 'host_name' is blocked because of many connection errors. Unblock with 'mysqladmin flush-hosts' The max_connect_errors system variable determines the number of successive interrupted connection requests that are allowed. Once there are max_connect_errors failed requests without a successful connection, mysqld assumes that something is wrong and blocks the host from further connections until the FLUSH HOSTS statement or mysqladmin flush-hosts command is issued. mysqld blocks a host after 100 connection errors as a default. It can be adjusted by setting the max_connect_errors value on the server startup, as follows: > mysqld_safe --max_connect_errors=10000 This value can also be set up at runtime, as follows: mysql> SET GLOBAL max_connect_errors=10000; It should be checked first that there is nothing wrong with TCP/IP connections from the host if the host_name is blocked error is received for a particular host. Increasing the value of the max_connect_errors variable does not help if the network has problems. 5. Too many connections This error indicates that all available connection are in use for other client connections. The max_connections is the system variable that controls the number of connections to the server. The default value for the maximum number of connections is 151. We can set a larger value than 151 for the max_connections system variable to support more connections than 151. The mysqld server process actually allows one more than max_connections (max_connections + 1) value clients to connect. The additional one connection is kept reserved for accounts with CONNECTION_ADMIN or the SUPER privilege. The privilege can be granted to the administrators with access to the PROCESS privilege. With this access, the administrator can connect to the server using the reserved connection. They can execute the SHOW PROCESSLIST command to diagnose the problems even though the maximum number of client connections is exhausted. 6. Out of memory If the mysql does not have enough memory to store the entire request of the query issued by the MySQL client program, the server throws the following error: mysql: Out of memory at line 42, 'malloc.c' mysql: needed 8136 byte (8k), memory in use: 12481367 bytes (12189k) ERROR 2008: MySQL client ran out of memory In order to fix the problem, we must first check if the query is correct. Do we expect the query to return so many rows? If not, we should correct the query and execute it again. If the query is correct and needs no correction, we can connect mysql with the --quick option. Using the --quick option results in the mysql_use_result() C API function for fetching the result set. The function adds more load on the server and less load on the client. 7. Packet too large The communication packet is one of the following: A single SQL statement that the MySQL client sends to the MySQL server A single row that is sent to the MySQL client from the MySQL server A binary log event that is sent from a replication master server to the replication slave A 1 GB packet size is the largest possible packet size that can be transmitted to or from the MySQL 8 server or client. The MySQL server or client issues an ER_NET_PACKET_TOO_LARGE error and closes the connection if it receives a packet bigger than max_allowed_packet bytes. The default max_allowed_packet size is 16 MB for the MySQL client program. 
The following command can be used to set a larger value: > mysql --max_allowed_packet=32M The default value for the MySQL server is 64 MB. It should be noted that there is no harm in setting a larger value for this system variable, as the additional memory is allocated as needed. 8. The table is full The table-full error occurs in one of the following conditions: The disk is full The table has reached the maximum size The actual maximum table size in the MySQL database can be determined by the constraints imposed by the operating system on the file sizes. 9. Can't create/write to file This indicates that MySQL is unable to create a temporary file in the temporary directory for the result set if we get the following error while executing a query: Can't create/write to file 'sqla3fe_0.ism' The possible workaround for the error is to start the mysqld server with the --tmpdir option. The following is the command: > mysqld --tmpdir C:/temp 10. Commands out of sync If the client functions are called in the wrong order, the commands out of sync error is  received. It means that the command cannot be executed in the client code. As an example, if we execute mysql_use_result() and try to execute another query before executing mysql_free_result(), this error may occur. It may also happen if we execute two queries that return a result set without calling the mysql_use_result() or mysql_store_result() functions in between. 11. Ignoring user The following error is received when an account in the user table is found with an invalid password upon the mysqld server startup or when the server reloads the grant tables: Found wrong password for user 'some_user'@'some_host'; ignoring user The account is ignored by the MySQL permission system as a result. To fix the problem, we should assign a new valid password for the account. 12. Table tbl_name doesn't exist The following error indicates that a specified table does not exist in the default database: Table 'tbl_name' doesn't exist Can't find file: 'tbl_name' (errno: 2) In some cases, the user may be referring to the table incorrectly. It is possible because the MySQL server uses directories and files for storing database tables. Depending upon the operating system file management, the database and table names can be case sensitive. For non case-sensitive file systems, such as Windows, the references to a specified table used within a query must use the same letter case. In addition to these, you might come across MySQL 8 server errors such as issue with permission, or client errors like problem with NULL values. To know how to deal with them, you may check out this book MySQL 8 Administrator’s Guide. MySQL 8.0 is generally available with added features Basic Website using Node.js and MySQL database  
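As a small companion to the error catalogue above, client code can distinguish several of these errors programmatically. The following sketch uses the mysql-connector-python driver; the host, user, password, and database values are placeholders, and other errors (such as a lost connection) fall through to the generic branch:

import mysql.connector
from mysql.connector import errorcode

try:
    cnx = mysql.connector.connect(
        host="127.0.0.1",
        user="app_user",
        password="secret",
        database="test",
        connection_timeout=10,  # seconds to wait for the initial connection
    )
except mysql.connector.Error as err:
    if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
        print("Access denied: check the user name and password")
    elif err.errno == errorcode.ER_BAD_DB_ERROR:
        print("The database does not exist")
    else:
        # Lost connections, oversized packets, and so on end up here.
        print(err)
else:
    cnx.close()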

How to run Lambda functions on AWS Greengrass

Vijin Boricha
27 Apr 2018
7 min read
AWS Greengrass is a form of edge computing service that extends the cloud's functionality to your IoT devices by allowing data collection and analysis closer to its point of origin. This is accomplished by executing AWS Lambda functions locally on the IoT device itself, while still using the cloud for management and analytics. Today, we will learn how to leverage AWS Greengrass to run simple lambda functions on an IoT device. How does this help a business? Well to start with, using AWS Greengrass you are now able to respond to locally generated events in near real time. With Greengrass, you can program your IoT devices to locally process and filter data and only transmit the important chunks back to AWS for analysis. This also has a direct impact on the costs as well as the amount of data transmitted back to the cloud. Here are the core components of AWS Greengrass: Greengrass Core (GGC) software: The Greengrass Core software is a packaged module that consists of a runtime to allow executions of Lambda functions, locally. It also contains an internal message broker and a deployment agent that periodically notifies the AWS Greengrass service about the device's configuration, state, available updates, and so on. The software also ensures that the connection between the device and the IoT service is secure with the help of keys and certificates. Greengrass groups: A Greengrass group is a collection of Greengrass Core settings and definitions that are used to manage one or more Greengrass-backed IoT devices. The groups internally comprise a few other components, namely: Greengrass group definition: A collection of information about your Greengrass group Device definition: A collection of IoT devices that are a part of a Greengrass group Greengrass group settings: Contains connection as well as configuration information along with the necessary IAM Roles required for interacting with other AWS services Greengrass Core: The IoT device itself Lambda functions: A list of Lambda functions that can be deployed to the Greengrass Core. Subscriptions: A collection of a message source, a message target and an MQTT topic to transmit the messages. The source or targets can be either the IoT service, a Lambda function or even the IoT device itself. Greengrass Core SDK: Greengrass also provides an SDK which you can use to write and run Lambda functions on Greengrass Core devices. The SDK currently supports Java 8, Python 2.7, and Node.js 6.10. With this key information in mind, let's go ahead and deploy our very own Greengrass Core on an IoT device. Running Lambda functions on AWS Greengrass With the Greengrass Core software up and running on your IoT device, we can now go ahead and run a simple Lambda function on it! For this particular section, we will be leveraging an AWS Lambda blueprint that prints a simple Hello World message: To get started, first, we will need to create our Lambda function. From the AWS Management Console, filter out the Lambda service using the Filter option or alternatively, select this URL: https://console.aws.amazon.com/lambda/home. Ensure that the Lambda function is launched from the same region as that of the AWS Greengrass. In this case, we are using the US-East-1 (N. Virginia) region. On the AWS Lambda console landing page, select the Create function option to get started. Since we are going to be leveraging an existing function blueprint for this use case, select the Blueprints option provided on the Create function page. 
Use the filter to find a blueprint with the name greengrass-hello-world. There are two templates present to date that match this name, one function is based on Python while the other is based on Node.js. For this particular section, select the greengrass-hello-world Python function and click on Configure to proceed. Fill out the required details for the new function, such as a Name followed by a valid Role. For this section, go ahead and select the Create new role from template option. Provide a suitable Role name and finally, from the Policy templates drop-down list, select the AWS IoT Button Permissions role. Once completed, click on Create function to complete the function's creation process. But before you move on to associating this function with your AWS Greengrass, you will also need to create a new version out of this function. Select the Publish new version option from the Actions tab. Provide a suitable Version description text and click on Publish once done. Your function is now ready for AWS Greengrass. Now, head back to the AWS IoT dashboard and select the newly deployed Greengrass group from the Groups option present on the navigation pane. From the Greengrass group page, select the Lambdas option from the navigation pane followed by the Add Lambda option, as shown in the following screenshot: On the Add a Lambda to your Greengrass group, you can choose to either Create a new Lambda function or Use an existing Lambda function as well. Since we have already created our function, select the Use existing function option. In the next page, select your Greengrass Lambda function and click Next to proceed. Finally, select the version of the deployed function and click on Finish once done. To finish things, we will need to create a new subscription between the Lambda function (source) and the AWS IoT service (destination). Select the Subscriptions option from the same Greengrass group page, as shown. Click on Add Subscription to proceed: On the Select your source and target page, select the newly deployed Lambda function as the source, followed by the IoT cloud as the target. Click on Next once done. You can provide an Optional topic filter as well, to filter messages published on the messaging queue. In this case, we have provided a simple hello/world as the filter for this scenario. Click on Finish once done to complete the subscription configuration. With all the pieces in place, it's now time to deploy our Lambda function over to the Greengrass Core. To do so, select the Deployments option and from the Actions drop-down list, select the Deploy option, as shown in the following screenshot: The deployment takes a few seconds to complete. Once done, verify the status of the deployment by viewing the Status column. The Status should show Successfully completed. With the function now deployed, test the setup by using the MQTT client provided by AWS IoT, as done before. Remember to enter the same hello/world topic name in the subscription topic field and click on Publish to topic once done. If all goes well, you should receive a custom Hello World message from the Greengrass Core as depicted in the following screenshot: This was just a high-level view of what you can achieve with Greengrass and Lambda. You can leverage Lambda for performing all kinds of preprocessing on data on your IoT device itself, thus saving a tremendous amount of time, as well as costs. With this, we come to the end of this post. 
Stay tuned for our next post where we will look at ways to effectively monitor IoT devices. We leveraged AWS Greengrass and Lambda to develop a cost-effective and speedy solution. You read an excerpt from the book AWS Administration - The Definitive Guide - Second Edition written by Yohan Wadia.  Whether you are a seasoned system admin or a rookie, this book will help you learn all the skills you need to work with the AWS cloud.  
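For reference, the greengrass-hello-world blueprint deployed above boils down to a very small Python function. The sketch below is illustrative rather than the exact blueprint code; it uses the Greengrass Core SDK to publish on the same hello/world topic configured in the subscription:

import greengrasssdk

# Client for the local Greengrass message broker / AWS IoT data plane.
client = greengrasssdk.client("iot-data")

def function_handler(event, context):
    # Publish a greeting on the topic used by the subscription above.
    client.publish(topic="hello/world",
                   payload="Hello world from a Greengrass Lambda")
    return

Because the subscription's target is the IoT cloud, the Greengrass Core forwards the message to AWS IoT, which is why it shows up in the MQTT test client.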

How to Debug an application using Qt Creator

Gebin George
27 Apr 2018
9 min read
Today, we will learn about debugging an application using Qt Creator. A debugger is a program that can be used to test and debug other programs, in case of a sudden crash during the program execution or an unexpected behavior in the logic of the program. Most of the time (if not always), debuggers are used in the development environment and in conjunction with an IDE. In our case, we will learn how to use a debugger with Qt Creator. It is important to note that debuggers are not part of the Qt Framework, and, just like compilers, they are usually provided by the operating system SDK. Qt Creator automatically detects and uses debuggers if they are present on a system. This can be checked by navigating into the Qt Creator Options page via the main menu Tools and then Options. Make sure to select Build & Run from the list on the left side and then switch to the Debuggers tab from the top. You should be able to see one or more autodetected debuggers on the list.

[box type="info" align="" class="" width=""]Windows Users: You should see something similar to the screenshot after this information box. If not, this means you have not installed any debuggers. You can easily download and install them using the instructions provided here: https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/ Or, you can independently search for the following topic online: Debugging Tools for Windows (WinDbg, KD, CDB, NTSD). Nevertheless, after the debugger is installed (presumably CDB, or the Microsoft Console Debugger, for Microsoft Visual C++ compilers, and GDB for GCC compilers), you can restart Qt Creator and return to this page. You should now have one or more entries similar to the following. Since we have installed a 32-bit version of the Qt and OpenCV Frameworks, choose the entry with x86 in its name to view its path, type, and other properties.
macOS and Linux Users: There shouldn't be any action needed on your part and, depending on the OS, you'll see a GDB, LLDB, or some other debugger in the entries.[/box]

Here's the screenshot of the Build & Run tab on the Options page: Depending on the operating system and the installed debugger, the preceding screenshot might be slightly different. Nevertheless, you'll have a debugger that you need to make sure is correctly set as the debugger for the Qt Kit you are using. So, make a note of the debugger path and name, switch to the Kits tab, and, after selecting the Qt Kit you are using, make sure the debugger for it is correctly set, as you can see in the following screenshot: Don't worry about choosing the wrong debugger, or any other options, since you'll be warned with relevant icons beside the Qt Kit icon selected at the top. The icon seen in the following image on the left side is usually displayed when everything is okay with the Kit, the second one from the left is an indication that something is not right, and the one on the right means a critical error. Move your mouse over the icon when it appears to see more information about the required actions needed to fix the issue:

[box type="info" align="" class="" width=""]Critical issues with Qt Kits can be caused by many different factors such as a missing compiler which will make the kit completely useless until the issue is resolved.
An example of a warning message in a Qt Kit would be a missing debugger, which will not make the kit useless, but you won't be able to use the debugger with it, thus it means less functionality than a completely configured Qt Kit.[/box] After the debugger is correctly set, you can start debugging your applications in one of the following ways, which basically have the same result: ending up in the Debugger view of the Qt Creator: Starting an application in Debugging mode Attaching to a running application (or process) [box type="info" align="" class="" width=""]Note that a debugging process can be started in many ways, such as remotely, by attaching to a process running on a separate machine and so on. However, the preceding methods will suffice for most cases and especially for the ones relevant to the Qt+OpenCV application development and what we learned throughout this book.[/box] Getting started with the debugging mode To start an application in the debugging mode, after opening a Qt project, you can use one of the following methods: Pressing the F5 button Using the Start Debugging button, right below the usual Run button with a similar icon, but with a small bug on it Using the main menu entries in the following order: Debug/Start Debugging/Start Debugging. To attach the debugger to a running application, you can use the main menu entries in the following order: Debug/Start Debugging/Attach to Running Application. This will open up the List of Processes window, from which you can choose your application or any other process you want to debug using its process ID or executable name. You can also use the Filter field (as seen in the following image) to find your application, since, most probably, the list of processes will be quite a long one. After choosing the correct process, make sure to press the Attach to Process button. No matter which one of the preceding methods you use, you will end up in the Qt Creator Debug mode, which is quite similar to the Edit mode, but it also allows you to do the following, among many others: Add, Enable, Disable, and View Breakpoints in the code (a Breakpoint is simply a point or a line in the code that we want the debugger to pause in the process and allow us to do a more detailed analysis of the status of the program) Interrupt running programs and processes to view and examine the code View and examine the function call stack (the call stack is a stack containing the hierarchical list of functions that led to a breakpoint or interrupted state) View and examine the variables Disassemble the source codes (disassembling in this sense means extracting the exact instructions that correspond to the function calls and other C++ codes in our program) You'll notice a performance drop in the application when it is started in debugging mode, which is obviously because of the fact that codes are being monitored and traced by the debugger. Here's a screenshot of the Qt Creator Debug mode, in which all of the capabilities mentioned earlier are visible in a single window and in the Debug mode of the Qt Creator: The area specified with the number 1 in the preceding screenshot in the code editor that you have already used through the book and are quite familiar with. Each line of code has a line number; you can click on their left side to toggle a breakpoint anywhere you want in the code. 
You can also right-click on the line numbers to set, remove, disable, or enable a breakpoint by selecting Set Breakpoint at Line X, Remove Breakpoint X, Disable Breakpoint X, or Enable Breakpoint X, where X in all of the commands mentioned here needs to be replaced by the line number. Apart from the code editor, you can also use the area marked with number 4 in the preceding screenshot to add, delete, edit, and further modify breakpoints in the code. You can also right-click on the toolbar below the code editor that contains the debugger controls to open up the following menu and add or remove panes to display additional debug and analysis information. We will cover the default debugger view, but make sure to check out each one of the following options on your own to familiarize yourself with the debugger even more:

The area specified with number 2 in the preceding screenshot can be used to view the call stack. Whether you interrupt the program by pressing the Interrupt button or choosing Debug/Interrupt from the menu while it is running, set a breakpoint and stop the program at a specific line of code, or a malfunctioning piece of code causes the program to fall into a trap and pause the process (since a crash or exception will be caught by the debugger), you can always view the hierarchy of function calls that led to the interrupted state, or further analyze them by checking area 2 in the preceding Qt Creator screenshot.

Finally, you can use the third area in the previous screenshot to view the local and global variables of the program at the interrupted location in the code. You can see the contents of the variables, whether they are standard data types, such as integers and floats, or structures and classes, and you can further expand and analyze their content to test and analyze any possible issues in your code.

Using a debugger efficiently can mean hours of difference in testing and solving the issues in your code. In terms of practical usage of debuggers, there is really no other way but to use them as much as you can and develop habits of your own, but also make note of good practices and tricks you find along the way, including the ones we just went through. If you are interested, you can also read online about other possible methods of debugging, such as remote debugging, debugging using crash dump files (on Windows), and more. We saw how to practically debug an application using Qt Creator's debugging mode.

[box type="note" align="" class="" width=""]You read an excerpt from the book, Computer Vision with OpenCV 3 and Qt 5 written by Amin Ahmadi Tazehkandi. The book covers development of cross-platform applications using OpenCV 3 and Qt 5.[/box]

3 ways to deploy a QT and OpenCV application
Debugging Your .NET Application
Top 10 MySQL 8 performance benchmarking aspects to know

Amey Varangaonkar
27 Apr 2018
5 min read
[box type="note" align="" class="" width=""]The following excerpt is taken from the book MySQL 8 Administrator’s Guide, co-authored by Chintan Mehta, Ankit Bhavsar, Hetal Oza and Subhash Shah. This book presents an in-depth view of the newly released features of MySQL 8 and how you can leverage them to administer a high-performance MySQL solution.[/box] Following the best practices for the configuration of MySQL helps us design and manage efficient database, and are quite a cherry on top - without which, it might seem a bit incomplete. In addition to configuration, benchmarking helps us validate and find bottlenecks in the database system and address them. In this article, we look at specific areas that will help us understand the best practices for configuration and performance benchmarking. 1. Resource utilization IO activity, CPU, and memory usage is something that you should not miss out. These metrics help us know how the system is performing while doing benchmarking and at the time of scaling. It also helps us derive impacts per transaction. 2. Stretching your benchmarking timelines We may often like to have a quick glance at performance metrics; however, ensuring that MySQL behaves in the same way for a longer duration of testing is also a key element. There is some basic stuff that might impact on performance when you stretch your benchmark timelines, such as memory fragmentation, degradation of IO, impact after data accumulation, cache management, and so on. We don't want our database to get restarted just to clean up junk items, correct? Therefore, it is suggested to run benchmarking for a long duration for stability and performance Validation. 3. Replicating production settings Let's benchmark in a production-replicated environment. Wait! Let's disable database replication in a replica environment until we are done with benchmarking. Gotcha! We have got some good numbers! It often happens that we don't simulate everything completely that we are going to configure in the production environment. It could prove to be costly, as we might unintentionally be benchmarking something in an environment that might have an adverse impact when it's in production. Replicate production settings, data, workload, and so on in your replicated environment while you do benchmarking. 4. Consistency of throughput and latency Throughput and latency go hand in hand. It is important to keep your eyes primarily focused on throughput; however, latency over time might be something to look out for. Performance dips, slowness, or stalls were noticed in InnoDB in its earlier days. It has improved a lot since then, but as there might be other cases depending on your workload, it is always good to keep an eye on throughput along with latency. 5. Sysbench can do more Sysbench is a wonderful tool to simulate your workloads, whether it be thousands of tables, transaction intensive, data in-memory, and so on. It is a splendid tool to simulate and gives you nice representation. 6. Virtualization world I would like to keep this simple; bare metal as compared to virtualization isn't the same. Hence, while doing benchmarking, measure your resources according to your environment. You might be surprised to see the difference in results if you compare both. 7. Concurrency Big data is seated on heavy data workload; high concurrency is important. MySQL 8 is extending its maximum CPU core support in every new release, optimizing concurrency based on your requirements and hardware resources should be taken care of. 8. 
8. Hidden workloads

Do not miss out on factors that run in the background, such as reporting for big data analytics, backups, and on-the-fly operations, while you are benchmarking. The impact of such hidden workloads or obsolete benchmarking workloads can make your days (and nights) miserable.

9. Nerves of your query

Oops! Did we miss the optimizer? Not yet. The optimizer is a powerful tool that will read the nerves of your query and provide recommendations. It's a tool that I use before making changes to a query in production. It's a savior when you have complex queries to be optimized.

These are a few areas that we should look out for. Let's now look at a few benchmarks that we did on MySQL 8 and compare them with the ones on MySQL 5.7.

10. Benchmarks

To start with, let's fetch all the column names from all the InnoDB tables. The following is the query that we executed:

SELECT t.table_schema, t.table_name, c.column_name FROM information_schema.tables t, information_schema.columns c WHERE t.table_schema = c.table_schema AND t.table_name = c.table_name AND t.engine='InnoDB';

The following figure shows how MySQL 8 performed a thousand times faster when running four instances:

Following this, we also performed a benchmark to find static table metadata. The following is the query that we executed:

SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE, ENGINE, ROW_FORMAT FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA LIKE 'chintan%';

The following figure shows how MySQL 8 performed around 30 times faster than MySQL 5.7:

It made us eager to go into a bit more detail. So, we thought of doing one last test to find dynamic table metadata. The following is the query that we executed:

SELECT TABLE_ROWS FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA LIKE 'chintan%';

The following figure shows how MySQL 8 performed around 30 times faster than MySQL 5.7:

MySQL 8.0 brings enormous performance improvements to the table. Scaling from one table to a million tables is a real need for big data requirements, and it is now achievable. We look forward to more benchmarks being officially released once MySQL 8 is available for general use.

If you found this post useful, make sure to check out the book MySQL 8 Administrator’s Guide for more tips and tricks to manage MySQL 8 effectively.

MySQL 8.0 is generally available with added features
New updates to Microsoft Azure services for SQL Server, MySQL, and PostgreSQL
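As promised in the sysbench point above, here is a sketch of what such a workload simulation might look like from the shell. None of this comes from the book: the host, credentials, schema name, and sizes are placeholders, and the option names assume sysbench 1.0 or later, so adjust them to your own environment:

# Create the test tables and fill them with data.
sysbench oltp_read_write \
  --mysql-host=127.0.0.1 --mysql-user=bench --mysql-password=secret \
  --mysql-db=sbtest --tables=10 --table-size=1000000 \
  prepare

# Run a 10-minute read/write workload with 64 client threads,
# reporting intermediate results every 10 seconds.
sysbench oltp_read_write \
  --mysql-host=127.0.0.1 --mysql-user=bench --mysql-password=secret \
  --mysql-db=sbtest --tables=10 --table-size=1000000 \
  --threads=64 --time=600 --report-interval=10 \
  run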

How to dockerize an ASP.NET Core application

Aaron Lazar
27 Apr 2018
5 min read
There are many reasons why you might want to dockerize an ASP.NET Core application, but ultimately it's simply going to make life much easier for you. It's great for isolating components, especially if you're building a microservices architecture or planning to deploy your application to the cloud. So, if you want an easier life (possibly), follow this tutorial to learn how to dockerize an ASP.NET Core application.

Get started: Dockerize an ASP.NET Core application

Create a new ASP.NET Core Web Application in Visual Studio 2017 and click OK: On the next screen, select Web Application (Model-View-Controller) or any type you like, while ensuring that ASP.NET Core 2.0 is selected from the drop-down list. Then check the Enable Docker Support checkbox. This will enable the OS drop-down list. Select Windows here and then click on the OK button:

If you see the following message, you need to switch to Windows containers. This is because you have probably kept the default container setting for Docker as Linux: If you right-click on the Docker icon in the taskbar, you will see that you have an option to enable Windows containers there too. You can switch to Windows containers from the Docker icon in the taskbar by clicking on the Switch to Windows containers option: Switching to Windows containers may take several minutes to complete, depending on your line speed and the hardware configuration of your PC. If, however, you don't click on this option, Visual Studio will ask you to change to Windows containers when selecting the OS platform as Windows. There is a good reason that I am choosing Windows containers as the target OS. This reason will become clear later on in the chapter when working with Docker Hub and automated builds.

After your ASP.NET Core application is created, you will see the following project setup in Solution Explorer: The Docker support that is added to Visual Studio comes not only in the form of the Dockerfile, but also in the form of the Docker configuration information. This information is contained in the global docker-compose.yml file at the solution level:

Clicking on the Dockerfile in Solution Explorer, you will see that it doesn't look complicated at all. Remember, the Dockerfile is the file that creates your image. The image is a read-only template that outlines how to create a Docker container. The Dockerfile, therefore, contains the steps needed to generate the image and run it. The instructions in the Dockerfile create layers in the image. This means that if anything changes in the Dockerfile, only the layers that have changed will be rebuilt when the image is rebuilt. The Dockerfile looks as follows:

FROM microsoft/aspnetcore:2.0-nanoserver-1709 AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/aspnetcore-build:2.0-nanoserver-1709 AS build
WORKDIR /src
COPY *.sln ./
COPY DockerApp/DockerApp.csproj DockerApp/
RUN dotnet restore
COPY . .
WORKDIR /src/DockerApp
RUN dotnet build -c Release -o /app

FROM build AS publish
RUN dotnet publish -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "DockerApp.dll"]

When you have a look at the menu in Visual Studio 2017, you will notice that the Run button has been changed to Docker: Clicking on the Docker button to debug your ASP.NET Core application, you will notice that there are a few things popping up in the Output window. Of particular interest is the IP address at the end.
In my case, it reads Launching http://172.24.12.112 (yours will differ): When the browser is launched, you will see that the ASP.NET Core application is running at the IP address listed previously in the Output window. Your ASP.NET Core application is now running inside of a Windows Docker container: This is great and really easy to get started with. But what do you need to do to Dockerize an ASP.NET Core application that already exists? As it turns out, this isn't as difficult as you may think. How to add Docker support to an existing .NET Core application Imagine that you have an ASP.NET Core application without Docker support. To add Docker support to this existing application, simply add it from the context menu: To add Docker support to an existing ASP.NET Core application, you need to do the following: Right-click on your project in Solution Explorer Click on the Add menu item Click on Docker Support in the fly-out menu: Visual Studio 2017 now asks you what the target OS is going to be. In our case, we are going to target Windows: After clicking on the OK button, Visual Studio 2017 will begin to add the Docker support to your project: It's actually extremely easy to create ASP.NET Core applications that have Docker support baked in, and even easier to add Docker support to existing ASP.NET Core applications. Lastly, if you experience any issues, such as file access issues, ensure that your antivirus software has excluded your Dockerfile from scanning. Also, make sure that you run Visual Studio as Administrator. This tutorial has been taken from C# 7 and .NET Core Blueprints. More Docker tutorials Building Docker images using Dockerfiles How to install Keras on Docker and Cloud ML
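Before moving on, it can help to know roughly what Visual Studio is doing for you behind the scenes. The following is a sketch of the equivalent Docker CLI calls you could run yourself from the folder containing the Dockerfile; the image and container names here are made up for illustration:

# Build the image from the Dockerfile shown earlier.
docker build -t dockerapp .

# Run a container, mapping host port 8080 to port 80 exposed by the image.
docker run -d -p 8080:80 --name dockerapp-test dockerapp

# Confirm the container is running and look at its output.
docker ps
docker logs dockerapp-test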

How to deploy a Node.js application to the web using Heroku

Sunith Shetty
26 Apr 2018
19 min read
Heroku is a tool that helps you manage cloud hosted web applications. It's a really great service. It makes creating, deploying, and updating apps really easy. Now Heroku, like GitHub, does not require a credit card to sign up and there is a free tier, which we'll use. They have paid plans for just about everything, but we can get away with the free tier for everything we'll do in this section. In this tutorial, you'll learn to deploy your live Node.js app to the Web using Heroku. By the end of this tutorial, you'll have the URL that you can share with anybody to view the application from their browser. Installing Heroku command-line tools To kick things off, we'll open up the browser and go to Heroku's website here. Here we can go ahead and sign up for a new account. Take a quick moment to either log in to your existing one or sign up for a new one. Once logged in, it'll show you the dashboard. Now your dashboard will look something like this: Although there might be a greeting telling you to create a new application, which you can ignore. I have a bunch of apps. You might not have these. That is perfectly fine. The next thing we'll do is install the Heroku command-line tools. This will let us create apps, deploy apps, open apps, and do all sorts of really cool stuff from the Terminal, without having to come into the web app. That will save us time and make development a lot easier. We can grab the download by going to toolbelt.heroku.com. Here we're able to grab the installer for whatever operating system, you happen to be running on. So, let's start the download. It's a really small download so it should happen pretty quickly. Once it's done, we can go ahead and run through the process: This is a simple installer where you just click on Install. There is no need to customize anything. You don't have to enter any specific information about your Heroku account. Let's go ahead and complete the installer. This will give us a new command from the Terminal that we can execute. Before we can do that, we do have to log in locally in the Terminal and that's exactly what we'll do next. Log in to Heroku account locally Now we will start off the Terminal. If you already have it running, you might need to restart it in order for your operating system to recognize the new command. You can test that it got installed properly by running the following command: heroku --help When you run this command, you'll see that it's installing the CLI for the first time and then we'll get all the help information. This will tell us what commands we have access to and exactly how they work: Now we will need to log in to the Heroku account locally. This process is pretty simple. In the preceding code output, we have all of the commands available and one of them happens to be login. We can run heroku login just like this to start the process: heroku login I'll run the login command and now we just use the email and password that we had set up before: I'll type in my email and password. Typing for Password is hidden because it's secure. And when I do that you see Logged in as [email protected] shows up and this is fantastic: Now we're logged in and we're able to successfully communicate between our machine's command line and the Heroku servers. This means we can get started creating and deploying applications. 
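Before moving on, it can be worth double-checking that both the CLI installation and the login actually took. Command names can vary slightly between older toolbelt releases and the current Heroku CLI, so treat these as a sketch rather than gospel:

# Print the installed CLI version to confirm the install worked.
heroku --version

# Print the email address of the currently logged-in account.
heroku auth:whoami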
Getting SSH key to Heroku Now before going ahead, we'll use the clear command to clear the Terminal output and get our SSH key on Heroku, kind of like what we did with GitHub, only this time we can do it via the command line. So it's going to be a lot easier. In order to add our local keys to Heroku, we'll run the heroku keys:add command. This will scan our SSH directory and add the key up: heroku keys:add Here you can see it found a key the id_rsa.pub file: Would you like to upload it to Heroku? Type Yes and hit enter: Now we have our key uploaded. That is all it took. Much easier than it was to configure with GitHub. From here, we can use the heroku keys command to print all the keys currently on our account: heroku keys We could always remove them using heroku keys:remove command followed by the email related to that key. In this case, we'll keep the Heroku key that we have. Next up, we can test our connection using SSH with the v flag and [email protected]: ssh -v [email protected] This will communicate with the Heroku servers: As shown, we can see it's asking that same question: The authenticity of the host 'heroku.com' can't be established, Are you sure you want to continue connecting? Type Yes. You will see the following output: Now when you run that command, you'll get a lot of cryptic output. What you're looking for is authentication succeeded and then public key in parentheses. If things did not go well, you'll see the permission denied message with public key in parentheses. In this case, the authentication was successful, which means we are good to go. I'll run clear again, clearing the Terminal output. Setting up in the application code for Heroku Now we can turn our attention towards the application code because before we can deploy to Heroku, we will need to make two changes to the code. These are things that Heroku expects your app to have in place in order to run properly because Heroku does a lot of things automatically, which means you have to have some basic stuff set up for Heroku to work. It's not too complex—some really simple changes, a couple one-liners. Changes in the server.js file First up in the server.js file down at the very bottom of the file, we have the port and our app.listen statically coded inside server.js: app.listen(3000, () => { console.log('Server is up on port 3000'); }); We need to make this port dynamic, which means we want to use a variable. We'll be using an environment variable that Heroku is going to set. Heroku will tell your app which port to use because that port will change as you deploy your app, which means that we'll be using that environment variable so we don't have to swap out our code every time we want to deploy. With environment variables, Heroku can set a variable on the operating system. Your Node app can read that variable and it can use it as the port. Now all machines have environment variables. You can actually view the ones on your machine by running the env command on Linux or macOS or the set command on Windows. What you'll get when you do that is a really long list of key-value pairs, and this is all environment variables are: Here, we have a LOGNAME environment variable set to Andrew. I have a HOME environment variable set to my home directory, all sorts of environment variables throughout my operating system. One of these that Heroku is going to set is called PORT, which means we need to go ahead and grab that port variable and use it in server.js instead of 3000. 
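Before we wire this into server.js in the next step, here is a tiny standalone sketch (not part of the course project; the file name is made up) showing how a Node process reads an environment variable and falls back to a default:

// port-demo.js -- hypothetical demo file, not part of the project.
// Read PORT from the environment, or fall back to 3000 when it's not set.
const port = process.env.PORT || 3000;

console.log(`PORT from the environment: ${process.env.PORT}`);
console.log(`Port the app would use: ${port}`);

// Run it twice to see both branches:
//   node port-demo.js             -> falls back to 3000
//   PORT=5000 node port-demo.js   -> uses 5000 (on Windows: set PORT=5000 first)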
Up at the very top of the server.js file, we'd to make a constant called port, and this will store the port that we'll use for the app: const express = require('express');. const hbs = require('hbs'); const fs = require('fs'); const port Now the first thing we'll do is grab a port from process.env. The process.env is an object that stores all our environment variables as key-value pairs. We're looking for one that Heroku is going to set called PORT: const port = process.env.PORT; This is going to work great for Heroku, but when we run the app locally, the PORT environment variable is not going to exist, so we'll set a default using the OR (||) operator in this statement. If process.env.port does not exist, we'll set port equal to 3000 instead: const port = process.env.PORT || 3000; Now we have an app that's configured to work with Heroku and to still run locally, just like it did before. All we have to do is take the PORT variable and use that in app.listen instead of 3000. As shown, I'm going to reference port and inside our message, I'll swap it out for template strings and now I can replace 3000 with the injected port variable, which will change over time: app.listen(port, () => { console.log(`Server is up on port ${port}`); }); With this in place, we have now fixed the first problem with our app. I'll now run node server.js from the Terminal. node server.js We still get the exact same message: Server is up on port 3000, so your app will still works locally as expected: Changes in the package.json file Next up, we have to specify a script in package.json. Inside package.json, you might have noticed we have a scripts object, and in there we have a test script. This gets set by default for npm: We can create all sorts of scripts inside the scripts object that do whatever we like. A script is nothing more than a command that we run from the Terminal, so we could take this command, node server.js, and turn it into a script instead, and that's exactly what we're going to do. Inside the scripts object, we'll add a new script. The script needs to be called start: This is a very specific, built-in script and we'll set it equal to the command that starts our app. In this case, it will be node server.js: "start": "node server.js" This is necessary because when Heroku tries to start our app, it will not run Node with your file name because it doesn't know what your file name is called. Instead, it will run the start script and the start script will be responsible for doing the proper thing; in this case, booting up that server file. Now we can run our app using that start script from the Terminal by using the following command: npm start When I do that, we get a little output related to npm and then we get Server is up on port 3000. The big difference is that we are now ready for Heroku. We could also run the test script using from the Terminal npm test: npm test Now, we have no tests specified and that is expected: Making a commit in Heroku The next step in the process will be to make the commit and then we can finally start getting it up on the Web. First up, git status. When we run git status, we have something a little new: Instead of new files, we have modified files here as shown in the code output here. We have a modified package.json file and we have a modified server.js file. These are not going to be committed if we were to run a git commit just yet; we still have to use git add. What we'll do is run git add with the dot as the next argument. 
Dot is going to add every single thing showing up and get status to the next commit. Now I only recommend using the syntax of everything you have listed in the Changes not staged for commit header. These are the things you actually want to commit, and in our case, that is indeed what we want. If I run git add and then a rerun git status, we can now see what is going to be committed next, under the Changes to be committed header: Here we have our package.json file and the server.js file. Now we can go ahead and make that commit. I'll run a git commit command with the m flag so we can specify our message, and a good message for this commit would be something like Setup start script and heroku Port: git commit -m 'Setup start script and heroku port' Now we can go ahead and run that command, which will make the commit. Now we can go ahead and push that up to GitHub using the git push command, and we can leave off the origin remote because the origin is the default remote. I'll go ahead and run the following command: git push This will push it up to GitHub, and now we are ready to actually create the app, push our code up, and view it over in the browser: Running the Heroku create command The next step in the process will be to run a command called heroku create from the Terminal. heroku create needs to get executed from inside your application: heroku create Just like we run our Git commands, when I run heroku create, a couple things are going to happen: First up, it's going to make a real new application over in the Heroku web app It's also going to add a new remote to your Git repository Now remember we have an origin remote, which points to our GitHub repository. We'll have a Heroku remote, which points to our Heroku Git repository. When we deploy to the Heroku Git repository, Heroku is going to see that. It will take the changes and it will deploy them to the Web. When we run Heroku create, all of that happens: Now we do still have to push up to this URL in order to actually do the deploying process, and we can do that using git push followed by heroku: git push heroku The brand new remote was just added because we ran heroku create. Now pushing it this time around will go through the normal process. You'll then start seeing some logs. These are logs coming back from Heroku letting you know how your app is deploying. It's going through the entire process, showing you what happens along the way. This will take about 10 seconds and at the very end we have a success message—Verifying deploy... done: It also verified that the app was deployed successfully and that did indeed pass. From here we actually have a URL we can visit (https://sleepy-retreat-32096.herokuapp.com/). We can take it, copy it, and paste it in the browser. What I'll do instead is use the following command: heroku open The heroku open will open up the Heroku app in the default browser. When I run this, it will switch over to Chrome and we get our application showing up just as expected: We can switch between pages and everything works just like it did locally. Now we have a URL and this URL was given to us by Heroku. This is the default way Heroku generates app URLs. If you have your own domain registration company, you can go ahead and configure its DNS to point to this application. This will let you use a custom URL for your Heroku app. You'll have to refer to the specific instructions for your domain registrar in order to do that, but it can indeed be done. 
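For quick reference, the whole first deployment boils down to a handful of commands. This is only a recap sketch of what we ran above; it assumes your local branch is called master (the default at the time of writing), so adjust the branch name if yours differs:

heroku login             # authenticate the CLI with your Heroku account
heroku keys:add          # upload your public SSH key
heroku create            # create the app and add the heroku Git remote
git push heroku master   # deploy by pushing your branch to Heroku
heroku open              # open the deployed app in the default browser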
Now that we have this in place, we have successfully deployed our Node applications live to Heroku, and this is just fantastic. In order to do this, all we had to do is make a commit to change our code and push it up to a new Git remote. It could not be easier to deploy our code. You can also manage your application by going back over to the Heroku dashboard. If you give it a refresh, you should see that brand new URL somewhere on the dashboard. Remember mine was sleepy retreat. Yours is going to be something else. If I click on the sleepy retreat, I can view the app page: Here we can do a lot of configuration. We can manage Activity and Access so we can collaborate with others. We have metrics, we have Resources, all sorts of really cool stuff. With this in place, we are now done with our basic deploying section. In the next section, your challenge will be to go through that process again. You'll add some changes to the Node app. You'll commit them, deploy them, and view them live in the Web. We'll get started by creating the local changes. That means I'll register a new URL right here using app.get. We'll create a new page/projects, which is why I have that as the route for my HTTP get handler. Inside the second argument, we can specify our callback function, which will get called with request and response, and like we do for the other routes above, the root route and our about route, we'll be calling response.render to render our template. Inside the render arguments list, we'll provide two. The first one will be the file name. The file doesn't exist, but we can still go ahead and call render. I'll call it projects.hbs, then we can specify the options we want to pass to the template. In this case, we'll set page title, setting it equal to Projects with a capital P. Excellent! Now with this in place, the server file is all done. There are no more changes there. What I'll do is go ahead and go to the views directory, creating a new file called projects.hbs. In here, we'll be able to configure our template. To kick things off, I'm going to copy the template from the about page. Since it's really similar, I'll copy it. Close about, paste it into projects, and I'm just going to change this text to project page text would go here. Then we can save the file and make our last change. The last thing we want to do is update the header. We now have a brand new projects page that lives at /projects. So we'll want to go ahead and add that to the header links list. Right here, I'll create a new paragraph tag and then I'll make an anchor tag. The text for the link will be Projects with a capital P and the href, which is the URL to visit when that link is clicked. We'll set that equal to /projects, just like we did for about, where we set it equal to /about. Now that we have this in place, all our changes are done and we are ready to test things out locally. I'll fire up the app locally using Node with server.js as the file. To start, we're up on localhost 3000. So over in the browser, I can move to the localhost tab, as opposed to the Heroku app tab, and click on Refresh. Right here we have Home, which goes to home, we have About which goes to about, and we have Projects which does indeed go to /projects, rendering the projects page. Project page text would go here. With this in place we're now done locally. We have the changes, we've tested them, now it's time to go ahead and make that commit. That will happen over inside the Terminal. I'll shut down the server and run Git status. 
This will show me all the changes to my repository as of the last commit. I have two modified files: the server file and the header file, and I have my brand new projects file. All of this looks great. I want to add all of this to the next commit, so I can use a Git add with the . to do just that. Now before I actually make the commit, I do like to test that the proper things got added by running Git status. Right here I can see my changes to be committed are showing up in green. Everything looks great. Next up, we'll run a Git commit to actually make the commit. This is going to save all of the changes into the Git repository. A message for this one would be something like adding a project page. With a commit made, the next thing you needed to do was push it up to GitHub. This will back our code up and let others collaborate on it. I'll use Git push to do just that. Remember we can leave off the origin remote as origin is the default remote, so if you leave off a remote it'll just use that anyway. With our GitHub repository updated, the last thing to do is deploy to Heroku and we do that by pushing up the Git repository, using Git push, to the Heroku remote. When we do this, we get our long list of logs as the Heroku server goes through the process of installing our npm modules, building the app, and actually deploying it. Once it's done, we'll get brought back to the Terminal like we are here, and then we can open up the URL in the\ browser. Now I can copy it from here or run Heroku open. Since I already have a tab open with the URL in place, I'll simply give it a refresh. Now you might have a little delay as you refresh your app. Sometimes starting up the app right after a new app was deployed can take about 10 to 15 seconds. That will only happen as you first visit it. Other times where you click on the Refresh button, it should reload instantly. Now we have the projects page and if I visit it, everything looks awesome. The navbar is working great and the projects page is indeed rendering at /projects. With this in place, we are now done. We've gone through the process of adding a new feature, testing it locally, making a Git commit, pushing it up to GitHub, and deploying it to Heroku. We now have a workflow for building real-world web applications using Node.js. This tutorial has been taken from Learning Node.js Development. More Heroku tutorials Deploy a Game to Heroku Managing Heroku from the command line
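If you want a starting point for the /projects challenge described above, here is a minimal sketch of the new route. It assumes the same Express and hbs setup used throughout this section, and the pageTitle property name is an assumption on my part, so treat it as one possible answer rather than the book's exact code:

// In server.js -- register the new /projects route.
app.get('/projects', (req, res) => {
  // pageTitle is an assumed template variable name; match whatever your
  // other templates (home and about) already use.
  res.render('projects.hbs', {
    pageTitle: 'Projects'
  });
});

The only other pieces are a views/projects.hbs template copied from the about page and a <p><a href="/projects">Projects</a></p> link added to the header partial.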
Creating and deploying an Amazon Redshift cluster

Vijin Boricha
26 Apr 2018
7 min read
Amazon Redshift is one of the database as a service (DBaaS) offerings from AWS that provides a massively scalable data warehouse as a managed service, at significantly lower costs. The data warehouse is based on the open source PostgreSQL database technology. However, not all features offered in PostgreSQL are present in Amazon Redshift. Today, we will learn about Amazon Redshift and perform a few steps to create a fully functioning Amazon Redshift cluster. We will also take a look at some of the essential concepts and terminologies that you ought to keep in mind when working with Amazon Redshift: Clusters: Just like Amazon EMR, Amazon Redshift also relies on the concept of clusters. Clusters here are logical containers containing one or more instances or compute nodes and one leader node that is responsible for the cluster's overall management. Here's a brief look at what each node provides: Leader node: The leader node is a single node present in a cluster that is responsible for orchestrating and executing various database operations, as well as facilitating communication between the database and associate client programs. Compute node: Compute nodes are responsible for executing the code provided by the leader node. Once executed, the compute nodes share the results back to the leader node for aggregation. Amazon Redshift supports two types of compute nodes: dense storage nodes and dense compute nodes. The dense storage nodes provide standard hard disk drives for creating large data warehouses; whereas, the dense compute nodes provide higher performance SSDs. You can start off by using a single node that provides 160 GB of storage and scale up to petabytes by leveraging one or more 16 TB capacity instances as well. Node slices: Each compute node is partitioned into one or more smaller chunks or slices by the leader node, based on the cluster's initial size. Each slice contains a portion of the compute nodes memory, CPU and disk resource, and uses these resources to process certain workloads that are assigned to it. The assignment of workloads is again performed by the leader node. Databases: As mentioned earlier, Amazon Redshift provides a scalable database that you can leverage for a data warehouse, as well as analytical purposes. With each cluster that you spin in Redshift, you can create one or more associated databases with it. The database is based on the open source relational database PostgreSQL (v8.0.2) and thus, can be used in conjunction with other RDBMS tools and functionalities. Applications and clients can communicate with the database using standard PostgreSQL JDBC and ODBC drivers. Here is a representational image of a working data warehouse cluster powered by Amazon Redshift: With this basic information in mind, let's look at some simple and easy to follow steps using which you can set up and get started with your Amazon Redshift cluster. Getting started with Amazon Redshift In this section, we will be looking at a few simple steps to create a fully functioning Amazon Redshift cluster that is up and running in a matter of minutes: First up, we have a few prerequisite steps that need to be completed before we begin with the actual set up of the Redshift cluster. From the AWS Management Console, use the Filter option to filter out IAM. Alternatively, you can also launch the IAM dashboard by selecting this URL: https://console.aws.amazon.com/iam/. Once logged in, we need to create and assign a role that will grant our Redshift cluster read-only access to Amazon S3 buckets. 
This role will come in handy later on in this chapter when we load some sample data on an Amazon S3 bucket and use Amazon Redshift's COPY command to copy the data locally into the Redshift cluster for processing. To create the custom role, select the Role option from the IAM dashboards' navigation pane. On the Roles page, select the Create role option. This will bring up a simple wizard using which we will create and associate the required permissions to our role. Select the Redshift option from under the AWS Service group section and opt for the Redshift - Customizable option provided under the Select your use case field. Click Next to proceed with the set up. On the Attach permissions policies page, filter and select the AmazonS3ReadOnlyAccess permission. Once done, select Next: Review. In the final Review page, type in a suitable name for the role and select the Create Role option to complete the process. Make a note of the role's ARN as we will be requiring this in the later steps. Here is snippet of the role policy for your reference: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:Get*", "s3:List*" ], "Resource": "*" } ] } With the role created, we can now move on to creating the Redshift cluster. To do so, log in to the AWS Management Console and use the Filter option to filter out Amazon Redshift. Alternatively, you can also launch the Redshift dashboard by selecting this URL: https://console.aws.amazon.com/redshift/. Select Launch Cluster to get started with the process. Next, on the CLUSTER DETAILS page, fill in the required information pertaining to your cluster as mentioned in the following list: Cluster identifier: A suitable name for your new Redshift cluster. Note that this name only supports lowercase strings. Database name: A suitable name for your Redshift database. You can always create more databases within a single Redshift cluster at a later stage. By default, a database named dev is created if no value is provided: Database port: The port number on which the database will accept connections. By default, the value is set to 5439, however you can change this value based on your security requirements. Master user name: Provide a suitable username for accessing the database. Master user password: Type in a strong password with at least one uppercase character, one lowercase character and one numeric value. Confirm the password by retyping it in the Confirm password field. Once completed, hit Continue to move on to the next step of the wizard. On the NODE CONFIGURATION page, select the appropriate Node type for your cluster, as well as the Cluster type based on your functional requirements. Since this particular cluster setup is for demonstration purposes, I've opted to select the dc2.large as the Node type and a Single Node deployment with 1 compute node. Click Continue to move on the next page once done. It is important to note here that the cluster that you are about to launch will be live and not running in a sandbox-like environment. As a result, you will incur the standard Amazon Redshift usage fees for the cluster until you delete it. You can read more about Redshift's pricing at: https://aws.amazon.com/redshift/pricing/. In the ADDITIONAL CONFIGURATION page, you can configure add-on settings, such as encryption enablement, selecting the default VPC for your cluster, whether or not the cluster should have direct internet access, as well as any preferences for a particular Availability Zone out of which the cluster should operate. 
Most of these settings do not require any changes at the moment and can be left to their default values. The only changes required on this page is associating the previously created IAM role with the cluster. To do so, from the Available Roles drop-down list, select the custom Redshift role that we created in our prerequisite section. Once completed, click on Continue. Review the settings and changes on the Review page and select the Launch Cluster option when completed. The cluster takes a few minutes to spin up depending on whether or not you have opted for a single instance deployment or multiple instances. Once completed, you should see your cluster listed on the Clusters page, as shown in the following screenshot. Ensure that the status of your cluster is shown as healthy under the DB Health column. You can additionally make a note of the cluster's endpoint as well, for accessing it programmatically: With the cluster all set up, the next thing to do is connect to the same. This Amazon Redshift tutorial has been taken from AWS Administration - The Definitive Guide - Second Edition. Read More Amazon S3 Security access and policies How to run Lambda functions on AWS Greengrass AWS Fargate makes Container infrastructure management a piece of cake
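Once you can connect to the cluster, the COPY command mentioned at the start of this walkthrough is how you would pull sample data in from Amazon S3 using the IAM role we attached. The following is only a sketch: the table definition, bucket path, and role ARN are placeholders, so check the Redshift COPY documentation for the options that match your own file format:

-- Example target table; the structure is purely illustrative.
CREATE TABLE sample_sales (
    sale_id   INTEGER,
    sale_date DATE,
    amount    DECIMAL(10,2)
);

-- Load CSV files from S3 using the read-only role attached to the cluster.
COPY sample_sales
FROM 's3://my-example-bucket/sample-data/'
IAM_ROLE 'arn:aws:iam::123456789012:role/myRedshiftRole'
FORMAT AS CSV
IGNOREHEADER 1;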

How to run and configure an IoT Gateway

Gebin George
26 Apr 2018
9 min read
What is an IoT gateway? An IoT gateway is the protocol or software that's used to connect Internet of Things servers to the cloud. The IoT Gateway can be run as a standalone application, without any modification. There are different encapsulations of the IoT Gateway already prepared. They are built using the same code, but have different properties and are aimed at different operating systems. In today's tutorial, we will explore how to run and configure the IoT gateway. Since all libraries used are based on .NET Standard, they are portable across platforms and operating systems. The encapsulations are then compiled into .NET Core 2 applications. These are the ones being executed. Since both .NET Standard and .NET Core 2 are portable, the gateway can, therefore, be encapsulated for more operating systems than currently supported. Check out this link for a list of operating systems supported by .NET Core 2. Available encapsulations such as installers or app package bundles are listed in the following table. For each one is listed the start project that can be used if you build the project and want to start or debug the application from the development environment: Platform Executable project Windows console Waher.IoTGateway.Console Windows service Waher.IoTGateway.Svc Universal Windows Platform app Waher.IoTGateway.App The IoT Gateway encapsulations can be downloaded from the GitHub project page: All gateways use the library Waher.IoTGateway, which defines the executing environment of the gateway and interacts with all pluggable modules and services. They also use the Waher.IoTGateway.Resources library, which contains resource files common among all encapsulations. The Waher.IoTGateway library is also available as a NuGet: Running the console version The console version of the IoT Gateway (Waher.IoTGateway.Console) is the simplest encapsulation. It can be run from the command line. It requires some basic configuration to run properly. This configuration can be provided manually (see following sections), or by using the installer. The installer asks the user for some basic information and generates the configuration files necessary to execute the application. The console version is the simplest encapsulation, with a minimum of operating system dependencies. It's the easiest to port to other environments. It's also simple to run from the development environment. When run, it outputs any events directly to the terminal window. If sniffers are enabled, the corresponding communication is also output to the terminal window. This provides a simple means to test and debug encrypted communication: Running the gateway as a Windows service The IoT Gateway can also be run as a Windows service (Waher.IoTGateway.Svc). This requires the application be executed on a Windows operating system. The application is a .NET Core 2 console application that has command-line switches allowing it to be registered and executed in the background as a Windows service. Since it supports a command-line interface, it can be used to run the gateway from the console as well. The following table lists recognized command-line switches: Switch Description -? Shows help information. -console Runs the service as a console application. -install Installs the application as a Window Service in the underlying operating system. -displayname Name Sets a custom display name for the Windows service. The default name if omitted is IoT Gateway Service. -description Desc Sets a custom textual description for the Windows service. 
The default description if omitted is Windows Service hosting the Waher IoT Gateway. -immediate If the service should be started immediately. -localsystem Installed service will run using the Local System account. -localservice Installed service will run using the Local Service account (default). -networkservice Installed service will run using the Network Service account. -start Mode Sets the default starting mode of the Windows service. The default is Disabled. Available options are StartOnBoot, StartOnSystemStart, AutoStart, StartOnDemand and Disabled -uninstall Uninstalls the application as a Windows service from the operating system.   Running the gateway as an app It is possible to run the IoT Gateway as a Universal Windows Platform (UWP) app (Waher.IoTGateway.App). This allows it to be run on Windows phones or embedded devices such as the Raspberry Pi running Windows 10 IoT Core (16299 and later). It can also be used as a template for creating custom apps based on the IoT Gateway: Configuring the IoT Gateway All application data files are separated from the executable files. Application data files are files that can be potentially changed by the user. Executable files are files potentially changed by installers. For the Console and Service applications, application data files are stored in the IoT Gateway subfolder to the operating system's Program Data folder. Example: C:ProgramDataIoT Gateway. For the UWP app, a link to the program data folder is provided at the top of the window. The application data folder contains files you might have to configure to get it to work as you want. Configuring the XMPP interface All IoT Gateways connect to the XMPP network. This connection is used to provide a secure and interoperable interface to your gateway and its underlying devices. You can also administer the gateway through this XMPP connection. The XMPP connection is defined in different manners, depending on the encapsulation. The app lets the user configure the connection via a dialog window. The credentials are then persisted in the object database. The Console and Service versions of the IoT Gateway let the user define the connection using an xmpp.config file in the application data folder. The following is an example configuration file: <?xml version='1.0' encoding='utf-8'?> <SimpleXmppConfiguration xmlns='http://waher.se/Schema/SimpleXmppConfiguration.xsd'> <Host>waher.se</Host> <Port>5222</Port> <Account>USERNAME</Account> <Password>PASSWORD</Password> <ThingRegistry>waher.se</ThingRegistry> <Provisioning>waher.se</Provisioning> <Events></Events> <Sniffer>true</Sniffer> <TrustServer>false</TrustServer> <AllowCramMD5>true</AllowCramMD5> <AllowDigestMD5>true</AllowDigestMD5> <AllowPlain>false</AllowPlain> <AllowScramSHA1>true</AllowScramSHA1> <AllowEncryption>true</AllowEncryption> <RequestRosterOnStartup>true</RequestRosterOnStartup> </SimpleXmppConfiguration> The following is a short recapture: Element Type Description Host String Host name of the XMPP broker to use. Port 1-65535 Port number to connect to. Account String Name of XMPP account. Password String Password to use (or password hash). ThingRegistry String Thing registry to use, or empty if not. Provisioning String Provisioning server to use, or empty if not. Events String Event long to use, or empty if not. Sniffer Boolean If network communication is to be sniffed or not. TrustServer Boolean If the XMPP broker is to be trusted. AllowCramMD5 Boolean If the CRAM-MD5 authentication mechanism is allowed. 
AllowDigestMD5 Boolean If the DIGEST-MD5 authentication mechanism is allowed. AllowPlain Boolean If the PLAIN authentication mechanism is allowed. AllowScramSHA1 Boolean If the SCRAM-SHA-1 authentication mechanism is allowed. AllowEncryption Boolean If encryption is allowed. RequestRosterOnStartup Boolean If the roster is required, it should be requested on start up. Securing the password Instead of writing the password in clear text in the configuration file, it is recommended that the password hash is used instead of the authentication mechanism supports hashes. When the installer sets up the gateway, it authenticates the credentials during startup and writes the hash value in the file instead. When the hash value is used, the mechanism used to create the hash must be written as well. In the following example, new-line characters are added for readability: <Password type="SCRAM-SHA-1"> rAeAYLvAa6QoP8QWyTGRLgKO/J4= </Password> Setting basic properties of the gateway The basic properties of the IoT Gateway are defined in the Gateway.config file in the program data folder. For example: <?xml version="1.0" encoding="utf-8" ?> <GatewayConfiguration xmlns="http://waher.se/Schema/GatewayConfiguration.xsd"> <Domain>example.com</Domain> <Certificate configFileName="Certificate.config"/> <XmppClient configFileName="xmpp.config"/> <DefaultPage>/Index.md</DefaultPage> <Database folder="Data" defaultCollectionName="Default" blockSize="8192" blocksInCache="10000" blobBlockSize="8192" timeoutMs="10000" encrypted="true"/> <Ports> <Port protocol="HTTP">80</Port> <Port protocol="HTTP">8080</Port> <Port protocol="HTTP">8081</Port> <Port protocol="HTTP">8082</Port> <Port protocol="HTTPS">443</Port> <Port protocol="HTTPS">8088</Port> <Port protocol="XMPP.C2S">5222</Port> <Port protocol="XMPP.S2S">5269</Port> <Port protocol="SOCKS5">1080</Port> </Ports> <FileFolders> <FileFolder webFolder="/Folder1" folderPath="ServerPath1"/> <FileFolder webFolder="/Folder2" folderPath="ServerPath2"/> <FileFolder webFolder="/Folder3" folderPath="ServerPath3"/> </FileFolders> </GatewayConfiguration> Element Type Description Domain String The name of the domain, if any, pointing to the machine running the IoT Gateway. Certificate String The configuration file name specifying details about the certificate to use. XmppClient String The configuration file name specifying details about the XMPP connection. DefaultPage String Relative URL to the page shown if no web page is specified when browsing the IoT Gateway. Database String How the local object database is configured. Typically, these settings do not need to be changed. All you need to know is that you can persist and search for your objects using the static Database defined in Waher.Persistence. Ports Port Which port numbers to use for different protocols supported by the IoT Gateway. FileFolders FileFolder Contains definitions of virtual web folders. Providing a certificate Different protocols (such as HTTPS) require a certificate to allow callers to validate the domain name claim. Such a certificate can be defined by providing a Certificate.config file in the application data folder and then restarting the gateway. If providing such a file, different from the default file, it will be loaded and processed, and then deleted. The information, together with the certificate, will be moved to the relative safety of the object database. 
For example: <?xml version="1.0" encoding="utf-8" ?> <CertificateConfiguration xmlns="http://waher.se/Schema/CertificateConfiguration.xsd"> <FileName>certificate.pfx</FileName> <Password>testexamplecom</Password> </CertificateConfiguration> Element Type Description FileName String Name of certificate file to import. Password String Password needed to access private part of certificate.   This tutorial was taken from Mastering Internet of Things. Read More IoT Forensics: Security in an always connected world where things talk How IoT is going to change tech teams 5 reasons to choose AWS IoT Core for your next IoT project  
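To tie the command-line switches listed earlier together, here is a hedged example of registering and starting the gateway as a Windows service from an elevated command prompt. The executable name is an assumption based on the encapsulation's project name, so check your actual installation folder; the switches themselves are the ones documented in the table above:

REM Install the gateway as a Windows service that starts automatically,
REM runs under the Local Service account, and starts right away.
Waher.IoTGateway.Svc.exe -install -displayname "IoT Gateway Service" ^
    -description "Windows Service hosting the Waher IoT Gateway" ^
    -localservice -start AutoStart -immediate

REM To remove the service again later:
Waher.IoTGateway.Svc.exe -uninstall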